blender/source/gameengine/VideoTexture/VideoBase.cpp
Benoit Bolsee 40f1c4f343 BGE: Various render improvements.
bge.logic.setRender(flag) to enable/disable render.
    The render pass is enabled by default but it can be disabled with
    bge.logic.setRender(False).
    Once disabled, the render pass is skipped and a new logic frame starts
    immediately. Note that VSync no longer limits the fps when render is off,
    but the 'Use Frame Rate' option in the Render Properties still does.
    To run as many frames as possible, untick that option.
    This function is useful when you don't need the default render, e.g.
    when rendering offscreen to a device other than the monitor.
    Note that without VSync, you must limit the frame rate by other means.
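    A minimal controller sketch of this usage (the bge module exists only inside
    the Blender Game Engine, so the import is guarded to keep the snippet
    loadable elsewhere):

    ```python
    # Sketch: turn off the default render pass from a logic controller.
    # 'bge' is only importable inside the Blender Game Engine.
    try:
        import bge
    except ImportError:
        bge = None  # not running inside the engine

    def disable_default_render():
        """Skip the render pass so logic frames run back to back."""
        if bge is None:
            return False  # engine not available; nothing to do
        bge.logic.setRender(False)
        return True
    ```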

fbo = bge.render.offScreenCreate(width, height[, samples=0][, target=bge.render.RAS_OFS_RENDER_BUFFER])
    Use this method to create an offscreen buffer of the given size, with the given
    MSAA samples, targeting either a render buffer (bge.render.RAS_OFS_RENDER_BUFFER)
    or a texture (bge.render.RAS_OFS_RENDER_TEXTURE). Use the former if you want to
    retrieve the frame buffer on the host and the latter if you want to pass the render
    to another context (textures are proper OGL objects, render buffers aren't).
    The object created by this function can only be used as a parameter of the
    bge.texture.ImageRender() constructor to send the render to the FBO rather
    than to the frame buffer. This is best suited when you want to create a render
    of a specific size, or if you need an image with an alpha channel.
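    A sketch of typical usage (scene and camera come from the running game; the
    guarded import lets the snippet be parsed outside the engine):

    ```python
    # Sketch: create an offscreen target and attach it to an ImageRender.
    try:
        import bge
    except ImportError:
        bge = None  # not running inside the Blender Game Engine

    def make_offscreen_render(scene, camera, width=512, height=512):
        if bge is None:
            return None
        # RAS_OFS_RENDER_BUFFER: pixels will be read back on the host;
        # use RAS_OFS_RENDER_TEXTURE to share the result with another context.
        fbo = bge.render.offScreenCreate(width, height, 0,
                                         bge.render.RAS_OFS_RENDER_BUFFER)
        return bge.texture.ImageRender(scene, camera, fbo)
    ```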

bge.texture.<imagetype>.refresh(buffer=None, format="RGBA", ts=-1.0)
    Without arguments, the refresh method of the image objects is pretty much a no-op; it
    simply invalidates the image so that on the next texture refresh, the image will
    be recalculated.
    It is now possible to pass an optional buffer object to transfer the image (and
    recalculate it if it was invalid) to an external object. The object must implement
    the 'buffer protocol'. The image will be transferred as "RGBA" or "BGRA" pixels
    depending on the format argument (only those 2 formats are supported), and ts is an
    optional timestamp if the image depends on it (e.g. VideoFFmpeg playing a video file).
    With this function you no longer need to link the image object to a Texture
    object to use it: the image object is self-sufficient.
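    The buffer requirements (writable, C-contiguous, 4-byte aligned) are all met
    by a plain bytearray; a sketch (the refresh() call itself needs the engine):

    ```python
    # Sketch: allocate a caller-owned buffer for a direct image transfer.
    # A bytearray of width * height * 4 bytes is writable, C-contiguous
    # and 4-byte aligned, which is exactly what refresh() requires.
    def make_rgba_buffer(width, height):
        return bytearray(width * height * 4)

    # Inside the engine one would then call, for example:
    #   buf = make_rgba_buffer(*image.size)
    #   image.refresh(buf, "RGBA")   # or "BGRA"; other formats are rejected
    ```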

bge.texture.ImageRender(scene, camera, fbo=None)
    Render to buffer is possible by passing a FBO object (see offScreenCreate).

bge.texture.ImageRender.render()
    Allows asynchronous render: call this method to render the scene but without
    extracting the pixels yet. The function returns as soon as the render commands
    have been sent to the GPU. The render will proceed asynchronously in the GPU
    while the host can perform other tasks.
    To complete the render, you can either call refresh() directly or refresh the texture
    to which this object is the source. Asynchronous render is useful to achieve optimal
    performance: call render() on frame N and refresh() on frame N+1 to give as much
    time as possible to the GPU to render the frame while the game engine can perform other tasks.
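    The two-frame pipelining can be sketched as follows (image_render stands for
    a bge.texture.ImageRender instance; the module-level flag is for
    illustration only):

    ```python
    # Sketch: issue the render on one logic frame, collect the pixels on
    # the next, giving the GPU a full frame to finish the work.
    _pending = False

    def pipeline_step(image_render):
        global _pending
        if _pending:
            # frame N+1: the GPU had a whole frame to complete the render
            image_render.refresh()
            _pending = False
        else:
            # frame N: enqueue the render commands and return immediately
            image_render.render()
            _pending = True
    ```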

Support negative scale on camera.
    Camera scale was previously ignored in the BGE.
    It is now injected in the modelview matrix as a vertical or horizontal flip
    of the scene (respectively if scaleY<0 and scaleX<0).
    Note that the actual value of the scale is not used, only the sign.
    This allows flipping the image produced by ImageRender() without any performance
    degradation: the flip is integrated in the render itself.
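    A sketch of flipping via camera scale (assuming camera is a KX_GameObject
    exposing localScale; only the sign matters):

    ```python
    # Sketch: flip the rendered image vertically by making the camera's
    # Y scale negative; the renderer uses only the sign, not the magnitude.
    def flip_camera_vertically(camera):
        sx, sy, sz = camera.localScale
        camera.localScale = [sx, -abs(sy), sz]
    ```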

Optimized image transfer from ImageRender to buffer.
    Previously, images that were transferred to the host were always going through
    intermediate buffers in VideoTexture. It is now possible to transfer ImageRender
    images to an external buffer without intermediate copy (i.e. directly from OGL to the buffer)
    if the attributes of the ImageRender object are set as follows:
       flip=False, alpha=True, scale=False, depth=False, zbuff=False.
       (if you need to flip the image, use camera negative scale)
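    The required attribute combination can be packaged in a small helper (a
    sketch; image_render stands for a bge.texture.ImageRender instance):

    ```python
    # Sketch: configure an ImageRender for the direct OGL-to-buffer path.
    def configure_for_direct_transfer(image_render):
        image_render.flip = False   # flip via camera negative scale instead
        image_render.alpha = True
        image_render.scale = False
        image_render.depth = False
        image_render.zbuff = False
    ```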
2016-06-11 22:05:20 +02:00


/*
 * ***** BEGIN GPL LICENSE BLOCK *****
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 *
 * Copyright (c) 2007 The Zdeno Ash Miklas
 *
 * This source file is part of VideoTexture library
 *
 * Contributor(s):
 *
 * ***** END GPL LICENSE BLOCK *****
 */
/** \file gameengine/VideoTexture/VideoBase.cpp
 *  \ingroup bgevideotex
 */
#if defined WIN32
#define WINDOWS_LEAN_AND_MEAN
#include <windows.h>
#endif
#include "VideoBase.h"
#include "FilterSource.h"
// VideoBase implementation
// initialize image data
void VideoBase::init(short width, short height)
{
	// save original sizes
	m_orgSize[0] = width;
	m_orgSize[1] = height;
	// call base class initialization
	ImageBase::init(width, height);
}
// process video frame
void VideoBase::process (BYTE *sample)
{
	// if scale was changed
	if (m_scaleChange)
		// reset image
		init(m_orgSize[0], m_orgSize[1]);
	// if image is allocated and is able to store new image
	if (m_image != NULL && !m_avail)
	{
		// filters used
		// convert video format to image
		switch (m_format)
		{
			case RGBA32:
			{
				FilterRGBA32 filtRGBA;
				// use filter object for format to convert image
				filterImage(filtRGBA, sample, m_orgSize);
				// finish
				break;
			}
			case RGB24:
			{
				FilterRGB24 filtRGB;
				// use filter object for format to convert image
				filterImage(filtRGB, sample, m_orgSize);
				// finish
				break;
			}
			case YV12:
			{
				// use filter object for format to convert image
				FilterYV12 filtYUV;
				filtYUV.setBuffs(sample, m_orgSize);
				filterImage(filtYUV, sample, m_orgSize);
				// finish
				break;
			}
			case None:
				break; /* assert? */
		}
	}
}
// python functions
// exceptions for video source initialization
ExceptionID SourceVideoEmpty, SourceVideoCreation;
ExpDesc SourceVideoEmptyDesc(SourceVideoEmpty, "Source Video is empty");
ExpDesc SourceVideoCreationDesc(SourceVideoCreation, "SourceVideo object was not created");
// open video source
void Video_open(VideoBase *self, char *file, short captureID)
{
	// if file is empty, throw exception
	if (file == NULL) THRWEXCP(SourceVideoEmpty, S_OK);
	// open video file or capture device
	if (captureID >= 0)
		self->openCam(file, captureID);
	else
		self->openFile(file);
}
// play video
PyObject *Video_play(PyImage *self)
{ if (getVideo(self)->play()) Py_RETURN_TRUE; else Py_RETURN_FALSE; }
// pause video
PyObject *Video_pause(PyImage *self)
{ if (getVideo(self)->pause()) Py_RETURN_TRUE; else Py_RETURN_FALSE; }
// stop video
PyObject *Video_stop(PyImage *self)
{ if (getVideo(self)->stop()) Py_RETURN_TRUE; else Py_RETURN_FALSE; }
// get status
PyObject *Video_getStatus(PyImage *self, void *closure)
{
	return Py_BuildValue("h", getVideo(self)->getStatus());
}
// refresh video
PyObject *Video_refresh(PyImage *self, PyObject *args)
{
	Py_buffer buffer;
	char *mode = NULL;
	unsigned int format;
	double ts = -1.0;

	memset(&buffer, 0, sizeof(buffer));
	if (PyArg_ParseTuple(args, "|s*sd:refresh", &buffer, &mode, &ts)) {
		if (buffer.buf) {
			// a target buffer is provided, verify its format
			if (buffer.readonly) {
				PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be writable");
			}
			else if (!PyBuffer_IsContiguous(&buffer, 'C')) {
				PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be contiguous in memory");
			}
			else if (((intptr_t)buffer.buf & 3) != 0) {
				PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be aligned to 4 bytes boundary");
			}
			else {
				// ready to get the image into our buffer
				try {
					if (mode == NULL || !strcmp(mode, "RGBA"))
						format = GL_RGBA;
					else if (!strcmp(mode, "BGRA"))
						format = GL_BGRA;
					else
						THRWEXCP(InvalidImageMode, S_OK);
					if (!self->m_image->loadImage((unsigned int *)buffer.buf, buffer.len, format, ts)) {
						PyErr_SetString(PyExc_TypeError, "Could not load the buffer, perhaps size is not compatible");
					}
				}
				catch (Exception & exp) {
					exp.report();
				}
			}
			PyBuffer_Release(&buffer);
			if (PyErr_Occurred())
				return NULL;
		}
	}
	else {
		return NULL;
	}
	getVideo(self)->refresh();
	return Video_getStatus(self, NULL);
}
// get range
PyObject *Video_getRange(PyImage *self, void *closure)
{
	return Py_BuildValue("[ff]", getVideo(self)->getRange()[0],
	                     getVideo(self)->getRange()[1]);
}
// set range
int Video_setRange(PyImage *self, PyObject *value, void *closure)
{
	// check validity of parameter
	if (value == NULL || !PySequence_Check(value) || PySequence_Size(value) != 2 ||
	    /* XXX - this is incorrect if the sequence is not a list/tuple! */
	    !PyFloat_Check(PySequence_Fast_GET_ITEM(value, 0)) ||
	    !PyFloat_Check(PySequence_Fast_GET_ITEM(value, 1)))
	{
		PyErr_SetString(PyExc_TypeError, "The value must be a sequence of 2 floats");
		return -1;
	}
	// set range
	getVideo(self)->setRange(PyFloat_AsDouble(PySequence_Fast_GET_ITEM(value, 0)),
	                         PyFloat_AsDouble(PySequence_Fast_GET_ITEM(value, 1)));
	// success
	return 0;
}
// get repeat
PyObject *Video_getRepeat(PyImage *self, void *closure)
{ return Py_BuildValue("h", getVideo(self)->getRepeat()); }
// set repeat
int Video_setRepeat(PyImage *self, PyObject *value, void *closure)
{
	// check validity of parameter
	if (value == NULL || !PyLong_Check(value))
	{
		PyErr_SetString(PyExc_TypeError, "The value must be an int");
		return -1;
	}
	// set repeat
	getVideo(self)->setRepeat(int(PyLong_AsLong(value)));
	// success
	return 0;
}
// get frame rate
PyObject *Video_getFrameRate(PyImage *self, void *closure)
{ return Py_BuildValue("f", double(getVideo(self)->getFrameRate())); }
// set frame rate
int Video_setFrameRate(PyImage *self, PyObject *value, void *closure)
{
	// check validity of parameter
	if (value == NULL || !PyFloat_Check(value))
	{
		PyErr_SetString(PyExc_TypeError, "The value must be a float");
		return -1;
	}
	// set frame rate
	getVideo(self)->setFrameRate(float(PyFloat_AsDouble(value)));
	// success
	return 0;
}