BGE: add audio/video synchronization capability to VideoTexture

Add an optional parameter to the VideoTexture.Texture refresh() method
to specify the timestamp (in seconds from the start of the movie) of the
frame to be loaded. This value is passed down to the image source; for a
VideoFFmpeg source it is used instead of the current time to load the
frame from the video file.

When combined with an audio actuator, this can be used to synchronize
the sound and the image: specify the same video file in the sound
actuator and pass the KX_SoundActuator time attribute as the timestamp
to refresh(); the frame corresponding to the current sound position
will then be loaded:

GameLogic.video.refresh(True, soundAct.time)
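For illustration, a fuller controller script along these lines (a sketch only: the 'sound' actuator name is an assumption, the texture and movie names are taken from the VideoTexture documentation below, and the sound actuator is assumed to be started by its own logic bricks with the same file):

import VideoTexture
import GameLogic

contr = GameLogic.getCurrentController()
obj = contr.getOwner()
# sound actuator configured with the same video file (name assumed)
soundAct = contr.getActuator('sound')

# one-time setup of the dynamic texture
if not hasattr(GameLogic, 'video'):
    matID = VideoTexture.materialID(obj, 'IMvideo.png')
    GameLogic.video = VideoTexture.Texture(obj, matID)
    movie = GameLogic.expandPath('//trailer_400p.ogg')
    GameLogic.video.source = VideoTexture.VideoFFmpeg(movie)
    GameLogic.video.source.scale = True
    GameLogic.video.source.play()

# every frame: load the video frame that matches the current sound position
GameLogic.video.refresh(True, soundAct.time)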
Benoit Bolsee 2010-02-07 19:18:00 +00:00
parent 064345ad8c
commit a8a99a628f
12 changed files with 120 additions and 39 deletions

@@ -1,33 +1,94 @@
# $Id$
"""
Documentation for the VideoTexture module.
The VideoTexture module allows you to manipulate textures during the game.
Several texture sources are possible: video files, image files,
video capture, memory buffers, camera render, or a mix of these.
Video and image files can be loaded from the internet using a URL
instead of a file name. In addition, you can apply filters to the images
before sending them to the GPU, allowing video effects: blue screen,
color band, gray, normal map.
VideoTexture uses FFmpeg to load images and videos. All the formats and codecs
that FFmpeg supports are supported by VideoTexture, including but not limited to:
* AVI
* Ogg
* Xvid
* Theora
* dv1394 camera
* video4linux capture card (this includes many webcams)
* videoForWindows capture card (this includes many webcams)
* JPG
The principle is simple: first you identify a texture on an existing object using
the L{materialID} function, then you create a new texture with dynamic content
and swap the two textures in the GPU.
The GE is not aware of the substitution and continues to display the object as always,
except that you are now in control of the texture. At the end, the new texture is
deleted and the old texture restored.
Example:
import VideoTexture
import GameLogic
contr = GameLogic.getCurrentController()
obj = contr.getOwner()
# the creation of the texture must be done once: saving the
# texture object in an attribute of the GameLogic module makes it persistent
if not hasattr(GameLogic, 'video'):
    # identify a static texture by name
    matID = VideoTexture.materialID(obj, 'IMvideo.png')
    # create a dynamic texture that will replace the static texture
    GameLogic.video = VideoTexture.Texture(obj, matID)
    # define a source of image for the texture, here a movie
    movie = GameLogic.expandPath('//trailer_400p.ogg')
    GameLogic.video.source = VideoTexture.VideoFFmpeg(movie)
    GameLogic.video.source.scale = True
    # kick off the movie, but it won't play on its own in the background
    GameLogic.video.source.play()
# you need to call this function every frame to ensure the texture is updated
GameLogic.video.refresh(True)
"""
def getLastError():
"""
Does something
Returns the description of the last error that occurred in a VideoTexture function.
@rtype:
@rtype: string
"""
def imageToArray(image):
"""
Does something
Returns a string corresponding to the current image stored in a texture source object
@param image: Image ID
@type image: integer
@rtype: array
@param image: Image source object.
@type image: object of type L{VideoFFmpeg}, L{ImageFFmpeg}, L{ImageBuff}, L{ImageMix}, L{ImageRender}, L{ImageMirror} or L{ImageViewport}
@rtype: string representing the image, 4 bytes per pixel in RGBA order, line by line, starting from the bottom of the image.
"""
def materialID(material):
def materialID(object,name):
"""
Gets the ID of a material
@param material: the name of the material
@type material: string
@rtype:
Returns a numeric value that can be used in L{Texture} to create a dynamic texture.
The value corresponds to an internal material number that uses the texture identified
by name. name is a string representing a texture name, prefixed with IM if you want to
identify the texture directly. This method works for basic tex face and for materials,
provided the material has a texture channel using that particular texture in the first
position of the texture stack. name can also be prefixed with MA if you want to identify
the texture by material; in that case the material must have a texture channel in the
first position.
If the object has no material that matches name, a runtime error is raised. Use try/except to catch the exception.
Ex: VideoTexture.materialID(obj, 'IMvideo.png')
@param object: the game object that uses the texture you want to make dynamic
@type object: game object
@param name: name of the texture/material you want to make dynamic.
@type name: string
@rtype: integer
"""
def setLogFile():
"""

@@ -71,7 +71,7 @@ bool ImageBase::release (void)
// get image
unsigned int * ImageBase::getImage (unsigned int texId)
unsigned int * ImageBase::getImage (unsigned int texId, double ts)
{
// if image is not available
if (!m_avail)
@@ -82,12 +82,12 @@ unsigned int * ImageBase::getImage (unsigned int texId)
// get images from sources
for (ImageSourceList::iterator it = m_sources.begin(); it != m_sources.end(); ++it)
// get source image
(*it)->getImage();
(*it)->getImage(ts);
// init image
init(m_sources[0]->getSize()[0], m_sources[0]->getSize()[1]);
}
// calculate new image
calcImage(texId);
calcImage(texId, ts);
}
// if image is available, return it, otherwise NULL
return m_avail ? m_image : NULL;
@@ -305,12 +305,12 @@ void ImageSource::setSource (PyImage * source)
// get image from source
unsigned int * ImageSource::getImage (void)
unsigned int * ImageSource::getImage (double ts)
{
// if source is available
if (m_source != NULL)
// get image from source
m_image = m_source->m_image->getImage();
m_image = m_source->m_image->getImage(0, ts);
// otherwise reset buffer
else
m_image = NULL;

@@ -54,7 +54,7 @@ public:
virtual bool release (void);
/// get image
unsigned int * getImage (unsigned int texId = 0);
unsigned int * getImage (unsigned int texId = 0, double timestamp=-1.0);
/// get image size
short * getSize (void) { return m_size; }
/// get image buffer size
@@ -123,7 +123,7 @@ protected:
bool checkSourceSizes (void);
/// calculate image from sources and set its availability
virtual void calcImage (unsigned int texId) {}
virtual void calcImage (unsigned int texId, double ts) {}
/// perform loop detection
bool loopDetect (ImageBase * img);
@@ -269,7 +269,7 @@ public:
void setSource (PyImage * source);
/// get image from source
unsigned int * getImage (void);
unsigned int * getImage (double ts=-1.0);
/// get buffered image
unsigned int * getImageBuf (void) { return m_image; }
/// refresh source

@@ -63,7 +63,7 @@ ExceptionID ImageSizesNotMatch;
ExpDesc ImageSizesNotMatchDesc (ImageSizesNotMatch, "Image sizes of sources are different");
// calculate image from sources and set its availability
void ImageMix::calcImage (unsigned int texId)
void ImageMix::calcImage (unsigned int texId, double ts)
{
// check source sizes
if (!checkSourceSizes()) THRWEXCP(ImageSizesNotMatch, S_OK);

@@ -78,7 +78,7 @@ protected:
virtual ImageSource * newSource (const char * id) { return new ImageSourceMix(id); }
/// calculate image from sources and set its availability
virtual void calcImage (unsigned int texId);
virtual void calcImage (unsigned int texId, double ts);
};

@@ -92,7 +92,7 @@ void ImageRender::setBackground (int red, int green, int blue, int alpha)
// capture image from viewport
void ImageRender::calcImage (unsigned int texId)
void ImageRender::calcImage (unsigned int texId, double ts)
{
if (m_rasterizer->GetDrawingMode() != RAS_IRasterizer::KX_TEXTURED || // no need for texture
m_camera->GetViewport() || // camera must be inactive
@@ -105,7 +105,7 @@ void ImageRender::calcImage (unsigned int texId)
// render the scene from the camera
Render();
// get image from viewport
ImageViewport::calcImage(texId);
ImageViewport::calcImage(texId, ts);
// restore OpenGL state
m_canvas->EndFrame();
}

@@ -90,7 +90,7 @@ protected:
/// render 3d scene to image
virtual void calcImage (unsigned int texId);
virtual void calcImage (unsigned int texId, double ts);
void Render();
void SetupRenderFrame(KX_Scene *scene, KX_Camera* cam);

@@ -105,7 +105,7 @@ void ImageViewport::setPosition (GLint * pos)
// capture image from viewport
void ImageViewport::calcImage (unsigned int texId)
void ImageViewport::calcImage (unsigned int texId, double ts)
{
// if scale was changed
if (m_scaleChange)

@@ -81,7 +81,7 @@ protected:
bool m_texInit;
/// capture image from viewport
virtual void calcImage (unsigned int texId);
virtual void calcImage (unsigned int texId, double ts);
/// get viewport size
GLint * getViewportSize (void) { return m_viewport + 2; }

@@ -279,7 +279,9 @@ PyObject * Texture_refresh (Texture * self, PyObject * args)
{
// get parameter - refresh source
PyObject * param;
if (!PyArg_ParseTuple(args, "O:refresh", &param) || !PyBool_Check(param))
double ts = -1.0;
if (!PyArg_ParseTuple(args, "O|d:refresh", &param, &ts) || !PyBool_Check(param))
{
// report error
PyErr_SetString(PyExc_TypeError, "The value must be a bool");
@@ -315,7 +317,7 @@ PyObject * Texture_refresh (Texture * self, PyObject * args)
}
// get texture
unsigned int * texture = self->m_source->m_image->getImage(self->m_actTex);
unsigned int * texture = self->m_source->m_image->getImage(self->m_actTex, ts);
// if texture is available
if (texture != NULL)
{
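At the Python level, the "O|d" format string above keeps the timestamp optional while the first argument must still be a bool; for example (a sketch, reusing the GameLogic.video texture from the documentation above):

GameLogic.video.refresh(True)         # as before: frame chosen from the engine clock
GameLogic.video.refresh(True, 2.5)    # load the frame at 2.5 s into the movie
GameLogic.video.refresh(1)            # rejected with TypeError: "The value must be a bool"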

@@ -54,7 +54,7 @@ VideoFFmpeg::VideoFFmpeg (HRESULT * hRslt) : VideoBase(),
m_codec(NULL), m_formatCtx(NULL), m_codecCtx(NULL),
m_frame(NULL), m_frameDeinterlaced(NULL), m_frameRGB(NULL), m_imgConvertCtx(NULL),
m_deinterlace(false), m_preseek(0), m_videoStream(-1), m_baseFrameRate(25.0),
m_lastFrame(-1), m_eof(false), m_curPosition(-1), m_startTime(0),
m_lastFrame(-1), m_eof(false), m_externTime(false), m_curPosition(-1), m_startTime(0),
m_captWidth(0), m_captHeight(0), m_captRate(0.f), m_isImage(false),
m_isThreaded(false), m_stopThread(false), m_cacheStarted(false)
{
@@ -723,22 +723,37 @@ void VideoFFmpeg::setFrameRate (float rate)
// image calculation
void VideoFFmpeg::calcImage (unsigned int texId)
void VideoFFmpeg::calcImage (unsigned int texId, double ts)
{
loadFrame();
loadFrame(ts);
}
// load frame from video
void VideoFFmpeg::loadFrame (void)
void VideoFFmpeg::loadFrame (double ts)
{
if (m_status == SourcePlaying)
{
// get actual time
double startTime = PIL_check_seconds_timer();
if (m_lastFrame == -1 && !m_isFile)
m_startTime = startTime;
double actTime = startTime - m_startTime;
double actTime;
if (m_isFile && ts >= 0.0)
{
// allow setting timestamp only when not streaming
actTime = ts;
if (m_eof && actTime * actFrameRate() < m_lastFrame)
{
// user is asking to rewind while the playback is already finished in the cache.
// we must clean the cache otherwise the eof condition will prevent any further reading.
stopCache();
}
}
else
{
if (m_lastFrame == -1 && !m_isFile)
m_startTime = startTime;
actTime = startTime - m_startTime;
}
// if video has ended
if (m_isFile && actTime * m_frameRate >= m_range[1])
{
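The cache handling above is what allows a script to seek backwards once the decoding thread has reached the end of the file; in Python terms nothing special is needed, an earlier timestamp can simply be requested again (a sketch, reusing GameLogic.video from the documentation above):

# frames may be requested out of order: asking for an earlier timestamp rewinds
# the video, and the precached frames are flushed so the end-of-file state does
# not block further decoding
GameLogic.video.refresh(True, 5.0)
GameLogic.video.refresh(True, 1.0)    # rewind to an earlier point in the movie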

@@ -129,6 +129,9 @@ protected:
/// end of file reached
bool m_eof;
/// flag to indicate that time is coming from application
bool m_externTime;
/// current file pointer position in file expressed in frame number
long m_curPosition;
@@ -154,10 +157,10 @@ protected:
STR_String m_imageName;
/// image calculation
virtual void calcImage (unsigned int texId);
virtual void calcImage (unsigned int texId, double ts);
/// load frame from video
void loadFrame (void);
void loadFrame (double ts);
/// set actual position
void setPositions (void);