blender/source/gameengine/VideoTexture/VideoBase.h


VideoTexture module.

The only build system that is known to work for sure is the MSVC project
files. I've done my best to update the other build systems, but I count on
the community to check and fix them.

This is Zdeno Miklas' video texture plugin ported to trunk. The original
plugin API is maintained (it can be found at
http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html) EXCEPT for the
following:

- The module name is changed to VideoTexture (instead of blendVideoTex).
- A new (and only) video source is now available: VideoFFmpeg()

You must pass 1 to 5 arguments when you create it (you can use named
arguments):

   VideoFFmpeg(file)                               : play a video file
   VideoFFmpeg(file, capture, rate, width, height) : start a live video capture

file:
   In the first form, file is a video file name, relative to the startup
   directory. It can also be a URL; FFmpeg will happily stream a video from a
   network source. In the second form, file is empty or is a hint for the
   format of the video capture. On Windows, file is ignored and should be
   empty or not specified. On Linux, FFmpeg supports two types of device:
   Video4Linux and DV1394. The user specifies the type of device with the
   file parameter:
      [<device_type>][:<standard>]
      <device_type> : 'v4l' for Video4Linux, 'dv1394' for DV1394; defaults to 'v4l'
      <standard>    : 'pal', 'secam' or 'ntsc'; defaults to 'ntsc'
   The driver name is constructed automatically from the device type:
      v4l   : /dev/video<capture>
      dv1394: /dev/dv1394/<capture>
   If you have a different driver name, you can specify it explicitly instead
   of the device type. Examples of valid file parameters:
      /dev/v4l/video0:pal
      /dev/ieee1394/1:ntsc
      dv1394:ntsc
      v4l:pal
      :secam

capture:
   Defines the index number of the capture source, starting from 0. The first
   capture device is always 0. The VideoTexture module knows that you want to
   start a live video capture when you set this parameter to a number >= 0;
   a value < 0 indicates video file playback. The default value is -1.

rate:
   The capture frame rate, 25 frames/sec by default.

width, height:
   Width and height of the video capture in pixels; the default value is 0.
   On Windows you must specify these values, and they must match one of the
   modes supported by the capture device. For example, if you have a webcam
   that can capture at 160x120, 320x240 or 640x480, you must specify one of
   these pairs of values or opening the video source will fail. On Linux,
   default values are provided by the Video4Linux driver if you don't specify
   width and height. (A live capture example is sketched after the demo links
   below.)

Simple example
**************

1. Texture definition script:

   import VideoTexture

   contr = GameLogic.getCurrentController()
   obj = contr.getOwner()
   if not hasattr(GameLogic, 'video'):
       matID = VideoTexture.materialID(obj, 'MAVideoMat')
       GameLogic.video = VideoTexture.Texture(obj, matID)
       GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
       # Streaming is also possible:
       #GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
       GameLogic.vidSrc.repeat = -1
       # If the video dimensions are not a power of 2, scaling must be done
       # before sending the texture to the GPU. This is done by default with
       # gluScaleImage(), but you can also use a faster, less precise scaling
       # by setting scale to True. The best approach is to convert the video
       # offline so that its dimensions are right from the start.
       GameLogic.vidSrc.scale = True
       # FFmpeg always delivers the video image upside down, so flipping is
       # enabled automatically:
       #GameLogic.vidSrc.flip = True

   if contr.getSensors()[0].isPositive():
       GameLogic.video.source = GameLogic.vidSrc
       GameLogic.vidSrc.play()

2. Texture refresh script:

   obj = GameLogic.getCurrentController().getOwner()
   if hasattr(GameLogic, 'video'):
       GameLogic.video.refresh(True)

You can download this demo here:
   http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
   http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg
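Live capture example
********************

A sketch only: the device string, capture index, rate and dimensions are
assumptions that you must adapt to your hardware (on Windows, pass an empty
file string and explicit width/height, as described above).

   import VideoTexture

   contr = GameLogic.getCurrentController()
   obj = contr.getOwner()
   if not hasattr(GameLogic, 'video'):
       matID = VideoTexture.materialID(obj, 'MAVideoMat')
       GameLogic.video = VideoTexture.Texture(obj, matID)
       # capture=0 selects the first capture device; 'v4l:pal' requests a
       # Video4Linux device using the PAL standard (see the file parameter
       # description above)
       GameLogic.vidSrc = VideoTexture.VideoFFmpeg('v4l:pal', 0, 25, 320, 240)
       GameLogic.video.source = GameLogic.vidSrc
       GameLogic.vidSrc.play()

The same refresh script as in the simple example keeps the texture updated.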
/* $Id$
-----------------------------------------------------------------------------
This source file is part of VideoTexture library
Copyright (c) 2007 The Zdeno Ash Miklas
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 59 Temple
Place - Suite 330, Boston, MA 02111-1307, USA, or go to
http://www.gnu.org/copyleft/lesser.txt.
-----------------------------------------------------------------------------
*/
#ifndef VIDEOBASE_H
#define VIDEOBASE_H
#include <PyObjectPlus.h>
#include "ImageBase.h"
#include "Exception.h"
// source states
const int SourceError = -1;
const int SourceEmpty = 0;
const int SourceReady = 1;
const int SourcePlaying = 2;
const int SourceStopped = 3;
// video source formats
enum VideoFormat { None, RGB24, YV12 };
/// base class for video source
class VideoBase : public ImageBase
{
public:
/// constructor
VideoBase (void) : ImageBase(true), m_format(None), m_status(SourceEmpty),
m_isFile(false), m_repeat(0), m_frameRate(1.0f)
{
m_orgSize[0] = m_orgSize[1] = 0;
m_range[0] = m_range[1] = 0.0;
}
/// destructor
virtual ~VideoBase (void) {}
/// open video file
virtual void openFile (char * file)
{
m_isFile = true;
m_status = SourceReady;
}
/// open video capture device
virtual void openCam (char * file, short camIdx)
{
m_isFile = false;
m_status = SourceReady;
}
/// play video
virtual bool play (void)
{
if (m_status == SourceReady || m_status == SourceStopped)
{
m_status = SourcePlaying;
return true;
}
return false;
}
/// stop/pause video
virtual bool stop (void)
{
if (m_status == SourcePlaying)
{
m_status = SourceStopped;
return true;
}
return false;
}
/// get video status
int getStatus (void) { return m_status; }
/// get play range
const double * getRange (void) { return m_range; }
/// set play range
virtual void setRange (double start, double stop)
{
if (m_isFile)
{
m_range[0] = start;
m_range[1] = stop;
}
}
/// get video repeat
int getRepeat (void) { return m_repeat; }
/// set video repeat
virtual void setRepeat (int rep)
{ if (m_isFile) m_repeat = rep; }
/// get frame rate
float getFrameRate (void) { return m_frameRate; }
/// set frame rate
virtual void setFrameRate (float rate)
{ if (m_isFile) m_frameRate = rate > 0.0f ? rate : 1.0f; }
protected:
/// video format
VideoFormat m_format;
/// original video size
short m_orgSize[2];
/// video status
int m_status;
/// true if the source is a video file, false if it is a capture device
bool m_isFile;
/// replay range
double m_range[2];
/// repeat count
int m_repeat;
/// frame rate
float m_frameRate;
/// initialize image data
void init (short width, short height);
/// process source data
void process (BYTE * sample);
};
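/* A hedged sketch of the state machine implemented by play()/stop() above,
   as seen from a game script (Python-side names follow the bindings declared
   below; the results mirror the bool returned by VideoBase::play()/stop()):

      vid.play()   # SourceReady or SourceStopped -> SourcePlaying
      vid.play()   # already SourcePlaying: no transition, play() returns false
      vid.stop()   # SourcePlaying -> SourceStopped
      vid.play()   # SourceStopped -> SourcePlaying again
*/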
// Python functions
// cast Image pointer to Video
inline VideoBase * getVideo (PyImage * self)
{ return static_cast<VideoBase*>(self->m_image); }
extern ExceptionID SourceVideoCreation;
// object initialization
template <class T> void Video_init (PyImage * self)
{
// create source video object
if (self->m_image != NULL) delete self->m_image;
HRESULT hRslt = S_OK;
self->m_image = new T(&hRslt);
CHCKHRSLT(hRslt, SourceVideoCreation);
}
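// Note: Video_init is intended to be called from each concrete video type's
// Python init function; if construction fails, the source object reports it
// through hRslt and CHCKHRSLT raises the SourceVideoCreation exception back
// to the script.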
// video functions
void Video_open (VideoBase * self, char * file, short captureID);
PyObject * Video_play (PyImage * self);
PyObject * Video_stop (PyImage * self);
PyObject * Video_refresh (PyImage * self);
PyObject * Video_getStatus (PyImage * self, void * closure);
PyObject * Video_getRange (PyImage * self, void * closure);
int Video_setRange (PyImage * self, PyObject * value, void * closure);
PyObject * Video_getRepeat (PyImage * self, void * closure);
int Video_setRepeat (PyImage * self, PyObject * value, void * closure);
PyObject * Video_getFrameRate (PyImage * self, void * closure);
int Video_setFrameRate (PyImage * self, PyObject * value, void * closure);
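/* A hedged sketch of how these bindings surface in a game script; the
   attribute spellings (status, range, repeat, framerate) are assumptions
   matching the getters/setters above:

      vid = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
      vid.range = [2.0, 8.0]   # Video_setRange: playback range, file sources only
      vid.repeat = 3           # Video_setRepeat: file sources only
      vid.framerate = 30.0     # Video_setFrameRate: file sources only
      vid.play()               # Video_play
      print vid.status         # Video_getStatus: 2 == SourcePlaying
*/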
#endif  // VIDEOBASE_H