/*
VideoTexture module.
The only build system that is known to work is the MSVC project files. I've done my best to
update the other build systems, but I count on the community to check and fix them.
This is Zdeno Miklas' video texture plugin ported to trunk.
The original plugin API is maintained (it can be found at http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html)
EXCEPT for the following:
The module name is changed to VideoTexture (instead of blendVideoTex).
A new (and only) video source is now available: VideoFFmpeg()
You must pass 1 to 5 arguments when you create it (you can use named arguments):
VideoFFmpeg(file) : play a video file
VideoFFmpeg(file, capture, rate, width, height) : start a live video capture
file:
In the first form, file is a video file name, relative to the startup directory.
It can also be a URL; FFmpeg will happily stream video from a network source.
In the second form, file is empty or is a hint for the format of the video capture.
On Windows, file is ignored and should be empty or not specified.
On Linux, FFmpeg supports two types of devices: Video4Linux and DV1394.
The user specifies the type of device with the file parameter:
[<device_type>][:<standard>]
<device_type> : 'v4l' for Video4Linux, 'dv1394' for DV1394; defaults to 'v4l'
<standard> : 'pal', 'secam' or 'ntsc'; defaults to 'ntsc'
The driver name is constructed automatically from the device type:
v4l : /dev/video<capture>
dv1394: /dev/dv1394/<capture>
If you have a different driver name, you can specify it explicitly
instead of the device type. Examples of valid file parameters:
/dev/v4l/video0:pal
/dev/ieee1394/1:ntsc
dv1394:ntsc
v4l:pal
:secam
capture:
Defines the index number of the capture source, starting from 0. The first capture device is always 0.
The VideoTexture module knows that you want to start a live video capture when you set this parameter to a number >= 0. Setting this parameter to a number < 0 selects video file playback. The default value is -1.
rate:
The capture frame rate, 25 frames/s by default.
width:
height:
Width and height of the video capture in pixels, default value 0.
On Windows you must specify these values, and they must match one of the resolutions supported by the capture device.
For example, if you have a webcam that can capture at 160x120, 320x240 or 640x480,
you must specify one of these pairs, or opening the video source will fail.
On Linux, default values are provided by the Video4Linux driver if you don't specify width and height.
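For instance (a sketch; the device index, rate and resolution are illustrative and must match your hardware):

# play a video file located in the startup directory
video = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
# capture from the first Video4Linux device, PAL standard, 25 fps, 320x240
capture = VideoTexture.VideoFFmpeg('v4l:pal', 0, 25, 320, 240)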
Simple example
**************
1. Texture definition script:
import VideoTexture

contr = GameLogic.getCurrentController()
obj = contr.getOwner()
if not hasattr(GameLogic, 'video'):
    matID = VideoTexture.materialID(obj, 'MAVideoMat')
    GameLogic.video = VideoTexture.Texture(obj, matID)
    GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
    # Streaming is also possible:
    #GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
    GameLogic.vidSrc.repeat = -1
    # If the video dimensions are not a power of 2, scaling must be done before
    # sending the texture to the GPU. This is done by default with gluScaleImage()
    # but you can also use a faster, but less precise, scaling by setting scale
    # to True. Best approach is to convert the video offline and set the dimensions right.
    GameLogic.vidSrc.scale = True
    # FFmpeg always delivers the video image upside down, so flipping is enabled automatically
    #GameLogic.vidSrc.flip = True

if contr.getSensors()[0].isPositive():
    GameLogic.video.source = GameLogic.vidSrc
    GameLogic.vidSrc.play()
2. Texture refresh script:
obj = GameLogic.getCurrentController().getOwner()
if hasattr(GameLogic, 'video'):
    GameLogic.video.refresh(True)
You can download this demo here:
http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg
-----------------------------------------------------------------------------
This source file is part of VideoTexture library

Copyright (c) 2007 The Zdeno Ash Miklas

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 59 Temple
Place - Suite 330, Boston, MA 02111-1307, USA, or go to
http://www.gnu.org/copyleft/lesser.txt.
-----------------------------------------------------------------------------
*/
/** \file gameengine/VideoTexture/ImageBuff.cpp
 *  \ingroup bgevideotex
 */
// implementation
# include "PyObjectPlus.h"
#include <structmember.h>
#include "ImageBuff.h"
VideoTexture: improvements to image data access API.
- Use BGL buffer instead of string for image data.
- Add buffer interface to image source.
- Allow customization of pixel format.
- Add valid property to check if the image data is available.
The image property of all Image source objects will now
return a BGL 'buffer' object. Previously it was returning
a string, which was not working at all with Python 3.1.
The BGL buffer type allows sequence access to bytes and
is directly usable in BGL OpenGL wrapper functions.
The buffer is formatted as a one-dimensional array of bytes,
with 4 bytes per pixel in RGBA order.
BGL buffers will also be accepted in the ImageBuff load()
and plot() functions.
It is possible to customize the pixel format using the
VideoTexture.imageToArray(image, mode) function: the first
argument is an Image source object; the second, optional
argument is a format string using the characters R, G, B,
A, 0 and 1. For example, "BGR" means that each pixel will
be 3 bytes, corresponding to the Blue, Green and Red
channels in that order. Use 0 for a fixed hex 00 value and
1 for hex FF. The default mode is "RGBA".
All Image source objects now support the buffer interface,
which allows creating memoryview objects for direct access
to the image's internal buffer without a memory copy. The
buffer format is a one-dimensional array of bytes with 4
bytes per pixel in RGBA order. The buffer is writable, which
allows custom modifications of the image data.
v = memoryview(source)
A bug in the Python 3.1 buffer API will cause a crash if
the memoryview object cannot be created. Therefore, you
must always check that image data is available before
creating a memoryview object. Use the new valid attribute
for that:
if source.valid:
    v = memoryview(source)
    ...
Note: the BGL buffer object itself does not yet support
the buffer interface.
Note: the valid attribute makes sense only if you use the
image source in conjunction with a texture object, like this:
# refresh texture but keep image data in memory
texture.refresh(False)
if texture.source.valid:
    v = memoryview(texture.source)
    # process image
    ...
# invalidate image for next texture refresh
texture.source.refresh()
Limitation: while memoryview objects exist, the image cannot be
resized. Resizing occurs with ImageViewport objects when the
viewport size is changed, or with ImageFFmpeg when a new image
is loaded, for example. Any attempt to resize will cause a
runtime error. Delete the memoryview objects if you want to
resize an image source object.
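As a worked illustration of imageToArray (a sketch: 'source' stands for any Image source object, e.g. GameLogic.vidSrc from the example scripts above):

# full RGBA copy, 4 bytes per pixel (default mode)
rgba = VideoTexture.imageToArray(source)
# 3 bytes per pixel, in Blue, Green, Red order
bgr = VideoTexture.imageToArray(source, "BGR")
# keep RGB but force the alpha byte to hex FF
rgb1 = VideoTexture.imageToArray(source, "RGB1")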
# include "Exception.h"
# include "ImageBase.h"
# include "FilterSource.h"
BGE: Add plot method to VideoTexture.ImageBuff class.
Synopsis: plot(brush,width,height,x,y,mode)
plot(imgbuff,x,y,mode)
The first form uses a byte array containing the brush shape.
The second form uses another ImageBuff object as a brush.
The ImageBuff object must be initialized before you can call
these methods. Use the load(rgb_buffer,sizex,sizey) method to create
an image buffer of a given size (with the alpha channel set to 255).
The brush is plotted directly into the image buffer. The texture
is updated only when the VideoTexture.Texture parent object is
refreshed: this uploads the image buffer to the GPU.
brush:   Byte array containing the RGBA data to be plotted in the image
         buffer. The data must be contiguous in memory, organized row by
         row starting from the lower left corner of the image. Each pixel
         is 4 bytes representing RGBA data in that order.
width:   Horizontal size in pixels of the image in brush.
height:  Vertical size in pixels of the image in brush.
imgbuff: Another ImageBuff object that is used as a brush. The object
         must have been initialized first with load().
x:       Horizontal position in pixels from the left side of the image
         buffer where the brush will be plotted. The brush is plotted on
         pixel positions x to x+width-1. Clipping is performed if the
         brush falls partially outside the image buffer.
y:       Vertical position in pixels from the bottom side of the image
         buffer where the brush will be plotted.
mode:    Drawing mode. Use one of the following values:
0 : MIX
1 : ADD
2 : SUB
3 : MUL
4 : LIGHTEN
5 : DARKEN
6 : ERASE ALPHA
7 : ADD ALPHA
1000 : COPY RGBA (default)
1001 : COPY RGB
1002 : COPY ALPHA
Modes 0 to 7 are 'blend' modes: the brush pixels are combined
with the image pixels in various ways. Refer to the Blender documentation
to learn more about these modes.
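A minimal usage sketch of both forms (byte strings are used as buffers here for brevity; per the notes above, BGL buffers are also accepted, and the sizes, colors and coordinates are illustrative):

import VideoTexture
img = VideoTexture.ImageBuff()
# 64x64 grey canvas; load() takes RGB data, 3 bytes per pixel, alpha is set to 255
img.load(b'\x80' * (64 * 64 * 3), 64, 64)
# first form: a 16x16 red RGBA byte-array brush plotted at (10, 10), COPY RGBA mode
brush = b'\xff\x00\x00\xff' * (16 * 16)
img.plot(brush, 16, 16, 10, 10, 1000)
# second form: another ImageBuff used as a brush, MIX mode
stamp = VideoTexture.ImageBuff()
stamp.load(b'\x00\xff\x00' * (8 * 8), 8, 8)
img.plot(stamp, 20, 20, 0)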
// use ImBuf API for image manipulation
extern " C " {
# include "IMB_imbuf_types.h"
# include "IMB_imbuf.h"
# include "bgl.h"
};
// default filter
FilterRGB24 defFilter;
// forward declaration
extern PyTypeObject ImageBuffType;
static int ImageBuff_init(PyObject *pySelf, PyObject *args, PyObject *kwds)
{
    short width = -1;
    short height = -1;
    unsigned char color = 0;
    PyObject *py_scale = Py_False;
    ImageBuff *image;

    PyImage *self = reinterpret_cast<PyImage *>(pySelf);
    // create source object
    if (self->m_image != NULL)
        delete self->m_image;
    image = new ImageBuff();
    self->m_image = image;

    if (PyArg_ParseTuple(args, "hh|bO!:ImageBuff", &width, &height, &color, &PyBool_Type, &py_scale))
    {
        // initialize image buffer
        image->setScale(py_scale == Py_True);
        image->clear(width, height, color);
    }
    else
    {
        // check if at least one argument was passed
        if (width != -1 || height != -1)
            // yes, and it didn't match the expected types => it's an error
            return -1;
        // an empty argument list is okay
        PyErr_Clear();
    }
    // initialization succeeded
    return 0;
}
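Inferred from the "hh|bO!:ImageBuff" format string above, the Python-level constructor takes a width and height plus an optional fill color byte and scale flag; a sketch (variable names are illustrative):

import VideoTexture
img = VideoTexture.ImageBuff()                    # no arguments: initialize later with load()
canvas = VideoTexture.ImageBuff(64, 64)           # 64x64, channels cleared to 0, alpha forced to 255
grey = VideoTexture.ImageBuff(64, 64, 128, True)  # grey fill, fast scaling enabled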
ImageBuff::~ImageBuff(void)
{
    if (m_imbuf)
        IMB_freeImBuf(m_imbuf);
}
// load image from buffer
void ImageBuff::load(unsigned char *img, short width, short height)
{
    // loading a new buffer requires resetting the imbuf, if any, because the size may change
    if (m_imbuf)
    {
        IMB_freeImBuf(m_imbuf);
        m_imbuf = NULL;
    }
    // initialize image buffer
    init(width, height);
    // original size
    short orgSize[2] = {width, height};
    // is filter available
    if (m_pyfilter != NULL)
        // use it to process image
        convImage(*(m_pyfilter->m_filter), img, orgSize);
    else
        // otherwise use default filter
        convImage(defFilter, img, orgSize);
    // image is available
    m_avail = true;
}
void ImageBuff::clear(short width, short height, unsigned char color)
{
    unsigned char *p;
    int size;

    // loading a new buffer requires resetting the imbuf, if any, because the size may change
    if (m_imbuf)
    {
        IMB_freeImBuf(m_imbuf);
        m_imbuf = NULL;
    }
    // initialize image buffer
    init(width, height);
    // the width/height may be different due to scaling
    size = (m_size[0] * m_size[1]);
    // initialize memory with color for all channels
    memset(m_image, color, size * 4);
    // then force the alpha channel to opaque
    p = &((unsigned char *)m_image)[3];
    for (; size > 0; size--)
    {
        *p = 0xFF;
        p += 4;
    }
    // image is available
    m_avail = true;
}
// img must point to an array of RGBA data of size width*height
void ImageBuff::plot(unsigned char *img, short width, short height, short x, short y, short mode)
{
    struct ImBuf *tmpbuf;

    if (m_size[0] == 0 || m_size[1] == 0 || width <= 0 || height <= 0)
        return;

    if (!m_imbuf) {
        // allocate most basic imbuf, we will assign the rect buffer on the fly
        m_imbuf = IMB_allocImBuf(m_size[0], m_size[1], 0, 0);
}
    tmpbuf = IMB_allocImBuf(width, height, 0, 0);
    // temporarily assign our buffers to the ImBuf rect pointers; we use the same pixel format
    tmpbuf->rect = (unsigned int *)img;
    m_imbuf->rect = m_image;
    IMB_rectblend(m_imbuf, tmpbuf, x, y, 0, 0, width, height, (IMB_BlendMode)mode);
    // detach our buffers so that IMB_freeImBuf() won't free them
    m_imbuf->rect = NULL;
    tmpbuf->rect = NULL;
    IMB_freeImBuf(tmpbuf);
}
void ImageBuff::plot(ImageBuff *img, short x, short y, short mode)
{
    if (m_size[0] == 0 || m_size[1] == 0 || img->m_size[0] == 0 || img->m_size[1] == 0)
        return;

    if (!m_imbuf) {
        // allocate most basic imbuf, we will assign the rect buffer on the fly
        m_imbuf = IMB_allocImBuf(m_size[0], m_size[1], 0, 0);
}
    if (!img->m_imbuf) {
        // allocate most basic imbuf, we will assign the rect buffer on the fly
        img->m_imbuf = IMB_allocImBuf(img->m_size[0], img->m_size[1], 0, 0);
	}
	// temporarily assign our buffers to the ImBuf rects, we use the same format
	img->m_imbuf->rect = img->m_image;
	m_imbuf->rect = m_image;
	IMB_rectblend(m_imbuf, img->m_imbuf, x, y, 0, 0, img->m_imbuf->x, img->m_imbuf->y, (IMB_BlendMode)mode);
	// detach our buffers again so that IMB_freeImBuf() will not free them
	m_imbuf->rect = NULL;
	img->m_imbuf->rect = NULL;
}
// cast Image pointer to ImageBuff
inline ImageBuff *getImageBuff(PyImage *self)
{ return static_cast<ImageBuff *>(self->m_image); }
// python methods
BGE: Add plot method to VideoTexture.ImageBuff class.
Synopsis: plot(brush,width,height,x,y,mode)
          plot(imgbuff,x,y,mode)
The first form uses a byte array containing the brush shape.
The second form uses another ImageBuff object as a brush.
The ImageBuff object must be initialized before you can call
these methods. Use the load(rgb_buffer,sizex,sizey) method to create
an image buffer of a given size (with the alpha channel set to 255).
The brush is plotted directly in the image buffer. The texture
is updated only when the VideoTexture.Texture parent object is
refreshed: this will download the image buffer to the GPU.
brush:   Byte array containing the RGBA data to be plotted in the image
         buffer. The data must be contiguous in memory, organized row by
         row starting from the lower left corner of the image. Each pixel
         is 4 bytes representing RGBA data in that order.
width:   Horizontal size in pixels of the image in brush.
height:  Vertical size in pixels of the image in brush.
imgbuff: Another ImageBuff object that is used as a brush. The object
         must have been initialized first with load().
x:       Horizontal position in pixels from the left side of the image
         buffer where the brush will be plotted. The brush covers pixel
         positions x to x+width-1. Clipping is performed if the brush
         falls partially outside the image buffer.
y:       Vertical position in pixels from the bottom side of the image
         buffer where the brush will be plotted.
mode:    Drawing mode. Use one of the following values:
         0 : MIX
         1 : ADD
         2 : SUB
         3 : MUL
         4 : LIGHTEN
         5 : DARKEN
         6 : ERASE ALPHA
         7 : ADD ALPHA
         1000 : COPY RGBA (default)
         1001 : COPY RGB
         1002 : COPY ALPHA
Modes 0 to 7 are 'blend' modes: the brush pixels are combined
with the image pixels in various ways. Refer to the Blender
documentation to learn more about these modes.
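Example: a minimal sketch of both forms. The sizes, positions and pixel
values are illustrative only, and creating the ImageBuff with no arguments
is an assumption of the sketch, not part of the synopsis above:

import VideoTexture
imbuf = VideoTexture.ImageBuff()
# 64x64 mid-grey image, 3 bytes per pixel; load() sets alpha to 255
imbuf.load(b'\x80\x80\x80' * (64 * 64), 64, 64)
# first form: plot an 8x8 white RGBA brush at (10,10) with COPY RGBA
brush = b'\xff\xff\xff\xff' * (8 * 8)
imbuf.plot(brush, 8, 8, 10, 10, 1000)
# second form: plot another ImageBuff at (20,20) with ADD blending
stamp = VideoTexture.ImageBuff()
stamp.load(b'\x00\x40\x00' * (16 * 16), 16, 16)
imbuf.plot(stamp, 20, 20, 1)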
static bool testPyBuffer(Py_buffer *buffer, int width, int height, unsigned int pixsize)
{
	if (buffer->itemsize != 1)
	{
		PyErr_SetString(PyExc_ValueError, "Buffer must be an array of bytes");
		return false;
	}
	if (buffer->len != width * height * pixsize)
	{
		PyErr_SetString(PyExc_ValueError, "Buffer hasn't the correct size");
		return false;
	}
	// multiple dimensions are ok as long as there is no hole in the memory
	Py_ssize_t size = buffer->itemsize;
	for (int i = buffer->ndim - 1; i >= 0; i--)
	{
		if (buffer->suboffsets != NULL && buffer->suboffsets[i] >= 0)
		{
			PyErr_SetString(PyExc_ValueError, "Buffer must be of one block");
			return false;
		}
		if (buffer->strides != NULL && buffer->strides[i] != size)
		{
			PyErr_SetString(PyExc_ValueError, "Buffer must be of one block");
			return false;
		}
		if (i > 0)
			size *= buffer->shape[i];
	}
	return true;
}
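In Python terms these checks mean that load() accepts any contiguous byte
buffer of exactly width*height*pixSize bytes (3 bytes per pixel with the
default RGB filter) and raises ValueError otherwise. A sketch, assuming a
freshly created ImageBuff:

import VideoTexture
imbuf = VideoTexture.ImageBuff()
# correct size: 16*16 pixels, 3 bytes per pixel with the default filter
imbuf.load(b'\x00' * (16 * 16 * 3), 16, 16)
# wrong size is rejected with "Buffer hasn't the correct size"
try:
    imbuf.load(b'\x00' * 10, 16, 16)
except ValueError as err:
    print(err)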
VideoTexture: improvements to image data access API.
- Use BGL buffer instead of string for image data.
- Add buffer interface to image source.
- Allow customization of pixel format.
- Add valid property to check if the image data is available.
The image property of all Image source objects will now
return a BGL 'buffer' object. Previously it was returning
a string, which was not working at all with Python 3.1.
The BGL buffer type allows sequence access to bytes and
is directly usable in BGL OpenGL wrapper functions.
The buffer is formatted as a one-dimensional array of bytes
with 4 bytes per pixel in RGBA order.
BGL buffers will also be accepted in the ImageBuff load()
and plot() functions.
It is possible to customize the pixel format by using
the VideoTexture.imageToArray(image, mode) function:
the first argument is an Image source object, the second
optional argument is a format string using the R, G, B,
A, 0 and 1 characters. For example "BGR" means that each
pixel will be 3 bytes, corresponding to the Blue, Green
and Red channels in that order. Use 0 for a fixed hex 00
value, 1 for hex FF. The default mode is "RGBA".
All Image source objects now support the buffer interface,
which allows creating memoryview objects for direct access
to the image internal buffer without a memory copy. The buffer
format is a one-dimensional array of bytes with 4 bytes per
pixel in RGBA order. The buffer is writable, which allows
custom modifications of the image data.
   v = memoryview(source)
A bug in the Python 3.1 buffer API will cause a crash if
the memoryview object cannot be created. Therefore, you
must always check first that image data is available
before creating a memoryview object. Use the new valid
attribute for that:
   if source.valid:
       v = memoryview(source)
       ...
Note: the BGL buffer object itself does not yet support
the buffer interface.
Note: the valid attribute makes sense only if you use the
image source in conjunction with a texture object like this:
   # refresh texture but keep image data in memory
   texture.refresh(False)
   if texture.source.valid:
       v = memoryview(texture.source)
       # process image
       ...
   # invalidate image for next texture refresh
   texture.source.refresh()
Limitation: while memoryview objects exist, the image cannot be
resized. Resizing occurs with ImageViewport objects when the
viewport size is changed, or with ImageFFmpeg when a new image
is reloaded, for example. Any attempt to resize will cause a
runtime error. Delete the memoryview objects if you want to
resize an image source object.
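A short sketch of the format customization and the buffer interface;
ImageViewport is used here only as a convenient image source, and the
write through the memoryview is an illustrative assumption:

import VideoTexture
source = VideoTexture.ImageViewport()
rgba = VideoTexture.imageToArray(source)        # full RGBA as a BGL buffer
bgr = VideoTexture.imageToArray(source, "BGR")  # 3 bytes/pixel, Blue,Green,Red
# direct, copy-free access; always test valid first (Python 3.1 bug)
if source.valid:
    v = memoryview(source)
    v[0] = 255  # red byte of the bottom-left pixel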
static bool testBGLBuffer(Buffer *buffer, int width, int height, unsigned int pixsize)
{
	unsigned int size = BGL_typeSize(buffer->type);
	for (int i = 0; i < buffer->ndimensions; i++)
	{
		size *= buffer->dimensions[i];
	}
	if (size != width * height * pixsize)
	{
		PyErr_SetString(PyExc_ValueError, "Buffer hasn't the correct size");
		return false;
	}
	return true;
}
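The BGL path means image data can be loaded straight from a bgl.Buffer,
for example one filled by glReadPixels. A sketch with illustrative sizes:

import VideoTexture
from bgl import Buffer, GL_BYTE
imbuf = VideoTexture.ImageBuff()
# total size must equal width * height * pixSize (3 with the default filter)
data = Buffer(GL_BYTE, [16 * 16 * 3])
imbuf.load(data, 16, 16)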
// load image
static PyObject *load(PyImage *self, PyObject *args)
{
	// parameters: string image buffer, its size, width, height
	Py_buffer buffer;
	Buffer *bglBuffer;
	short width;
	short height;
	unsigned int pixSize;
	// calc proper buffer size
	// use pixel size from filter
	if (self->m_image->getFilter() != NULL)
		pixSize = self->m_image->getFilter()->m_filter->firstPixelSize();
	else
		pixSize = defFilter.firstPixelSize();
	// parse parameters
	if (!PyArg_ParseTuple(args, "s*hh:load", &buffer, &width, &height))
	{
		PyErr_Clear();
		// check if it is a BGL buffer
		if (!PyArg_ParseTuple(args, "O!hh:load", &BGL_bufferType, &bglBuffer, &width, &height))
		{
			// report error
			return NULL;
		}
		else
		{
			if (testBGLBuffer(bglBuffer, width, height, pixSize))
			{
				try
				{
					// if correct, load image
					getImageBuff(self)->load((unsigned char *)bglBuffer->buf.asvoid, width, height);
				}
				catch (Exception &exp)
				{
					exp.report();
				}
			}
		}
	}
	else
	{
		// check if buffer size is correct
		if (testPyBuffer(&buffer, width, height, pixSize))
		{
			try
			{
				// if correct, load image
				getImageBuff(self)->load((unsigned char *)buffer.buf, width, height);
			}
			catch (Exception &exp)
			{
				exp.report();
			}
BGE: Add plot method to VideoTexture.ImageBuff class.
Synopsis: plot(brush,width,height,x,y,mode)
plot(imgbuff,x,y,mode)
The first form uses a byte array containing the brush shape.
The second form uses another ImageBuff object as a brush.
The ImageBuff object must be initialized before you can call
these methods. Use load(rgb_buffer,sizex,sizey) method to create
an image buffer of given size (with alpha channel set to 255).
The brush is plotted directly in the image buffer. The texture
is updated only when the VideoTexture.Texture parent object is
refreshed: this will download the image buffer to the GPU.
brush: Byte array containing RGBA data to be plotted in image buffer.
The data must be continuous in memory, organized row by row
starting from lower left corner of the image. Each pixel is
4 bytes representing RGBA data in that order.
width: Horizontal size in pixels of image in brush.
height: Vertical size in pixels of the image in brush.
imgbuff:Another ImageBuff object that is used as a brush. The object
must have been initialized first with load().
x: Horizontal position in pixel from left side of the image buffer
where the brush will be plotted. The brush is plotted on pixels
positions x->x+width-1. Clipping is performed if the brush falls
partially outside the image buffer.
y: Vertical position in pixel from bottom side of the image buffer
where the brush will be plotted.
mode: Mode of drawing. Use one of the following value:
0 : MIX
1 : ADD
2 : SUB
3 : MUL
4 : LIGHTEN
5 : DARKEN
6 : ERASE ALPHA
7 : ADD ALPHA
1000 : COPY RGBA (default)
1001 : COPY RGB
1002 : COPY ALPHA
Modes 0 to 7 are 'blend' modes: the brush pixels are combined
with the image pixels in various ways. Refer to the Blender
documentation to learn more about these modes.
2009-12-08 10:02:22 +00:00
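For example, both plot() forms could be exercised like this (a sketch;
buffer contents and sizes are arbitrary, and the byte literals assume
Python 3 bytes objects are accepted wherever a byte array is expected):

import VideoTexture

img = VideoTexture.ImageBuff()
# load() takes RGB data (3 bytes/pixel); the alpha channel is set to 255
img.load(b'\x00\x00\xff' * (64 * 64), 64, 64)      # 64x64 blue canvas

# first form: a raw RGBA byte-array brush (4 bytes/pixel)
brush = b'\xff\x00\x00\xff' * (16 * 16)            # 16x16 opaque red
img.plot(brush, 16, 16, 8, 8, 0)                   # mode 0 = MIX

# second form: another initialized ImageBuff used as a brush
stamp = VideoTexture.ImageBuff()
stamp.load(b'\x00\xff\x00' * (8 * 8), 8, 8)        # 8x8 green
img.plot(stamp, 40, 40, 1000)                      # mode 1000 = COPY RGBA (default)
# the result is visible only after the parent Texture object is refreshed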
	}
	PyBuffer_Release(&buffer);
2008-10-31 22:35:52 +00:00
}
2010-02-21 22:20:00 +00:00
	if (PyErr_Occurred())
		return NULL;
2008-10-31 22:35:52 +00:00
	Py_RETURN_NONE;
}
2009-12-08 10:02:22 +00:00
static PyObject *plot(PyImage *self, PyObject *args)
{
	PyImage *other;
2010-02-21 22:20:00 +00:00
	Buffer *bglBuffer;
2009-12-08 10:02:22 +00:00
	Py_buffer buffer;
	//unsigned char *buff;
	//unsigned int buffSize;
	short width;
	short height;
	short x, y;
	short mode = IMB_BLEND_COPY;

	if (PyArg_ParseTuple(args, "s*hhhh|h:plot", &buffer, &width, &height, &x, &y, &mode))
	{
		// correct decoding, verify that buffer size is correct
2012-03-03 11:45:08 +00:00
		// we need a continuous memory buffer
2009-12-08 10:02:22 +00:00
		if (testPyBuffer(&buffer, width, height, 4))
		{
			getImageBuff(self)->plot((unsigned char *)buffer.buf, width, height, x, y, mode);
		}
		PyBuffer_Release(&buffer);
2010-02-21 22:20:00 +00:00
		if (PyErr_Occurred())
			return NULL;
2009-12-08 10:02:22 +00:00
		Py_RETURN_NONE;
	}
	PyErr_Clear();
	// try the other format
2010-02-21 22:20:00 +00:00
	if (PyArg_ParseTuple(args, "O!hh|h:plot", &ImageBuffType, &other, &x, &y, &mode))
2009-12-08 10:02:22 +00:00
{
2010-02-21 22:20:00 +00:00
		getImageBuff(self)->plot(getImageBuff(other), x, y, mode);
		Py_RETURN_NONE;
	}
	PyErr_Clear();
	// try the last format (BGL buffer)
	if (!PyArg_ParseTuple(args, "O!hhhh|h:plot", &BGL_bufferType, &bglBuffer, &width, &height, &x, &y, &mode))
	{
		PyErr_SetString(PyExc_TypeError, "Expecting ImageBuff or Py buffer or BGL buffer as first argument; width, height next; position x, y and mode as last arguments");
2009-12-08 10:02:22 +00:00
		return NULL;
	}
2010-02-21 22:20:00 +00:00
	if (testBGLBuffer(bglBuffer, width, height, 4))
	{
		getImageBuff(self)->plot((unsigned char *)bglBuffer->buf.asvoid, width, height, x, y, mode);
	}
2010-02-22 09:02:05 +00:00
	if (PyErr_Occurred())
		return NULL;
	Py_RETURN_NONE;
2009-12-08 10:02:22 +00:00
}
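The last branch above accepts a BGL buffer as the brush. From Python
that could look roughly like this (a sketch; GL_BYTE is signed, so
channel values are kept at or below 127 here):

import bgl
import VideoTexture

img = VideoTexture.ImageBuff()
img.load(b'\x40\x40\x40' * (32 * 32), 32, 32)      # 32x32 dark grey canvas

# 4x4 RGBA brush, every channel 127 (alpha is ignored by COPY RGB)
brush = bgl.Buffer(bgl.GL_BYTE, 4 * 4 * 4, [127] * (4 * 4 * 4))
img.plot(brush, 4, 4, 0, 0, 1001)                  # mode 1001 = COPY RGB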
2008-10-31 22:35:52 +00:00
// methods structure
static PyMethodDef imageBuffMethods[] =
{
	{"load", (PyCFunction)load, METH_VARARGS, "Load image from buffer"},
2009-12-08 10:02:22 +00:00
{ " plot " , ( PyCFunction ) plot , METH_VARARGS , " update image buffer " } ,
2008-10-31 22:35:52 +00:00
	{NULL}
};

// attributes structure
static PyGetSetDef imageBuffGetSets[] =
{	// attributes from ImageBase class
2010-02-21 22:20:00 +00:00
{ ( char * ) " valid " , ( getter ) Image_valid , NULL , ( char * ) " bool to tell if an image is available " , NULL } ,
2008-11-04 09:21:27 +00:00
{ ( char * ) " image " , ( getter ) Image_getImage , NULL , ( char * ) " image data " , NULL } ,
{ ( char * ) " size " , ( getter ) Image_getSize , NULL , ( char * ) " image size " , NULL } ,
2012-03-01 12:20:18 +00:00
{ ( char * ) " scale " , ( getter ) Image_getScale , ( setter ) Image_setScale , ( char * ) " fast scale of image (near neighbor) " , NULL } ,
2008-11-04 09:21:27 +00:00
{ ( char * ) " flip " , ( getter ) Image_getFlip , ( setter ) Image_setFlip , ( char * ) " flip image vertically " , NULL } ,
{ ( char * ) " filter " , ( getter ) Image_getFilter , ( setter ) Image_setFilter , ( char * ) " pixel filter " , NULL } ,
2008-10-31 22:35:52 +00:00
{NULL}
};
// define python type
PyTypeObject ImageBuffType =
{
2009-06-08 20:08:19 +00:00
PyVarObject_HEAD_INIT(NULL, 0)
2008-10-31 22:35:52 +00:00
" VideoTexture.ImageBuff " , /*tp_name*/
sizeof ( PyImage ) , /*tp_basicsize*/
0 , /*tp_itemsize*/
( destructor ) Image_dealloc , /*tp_dealloc*/
0 , /*tp_print*/
0 , /*tp_getattr*/
0 , /*tp_setattr*/
0 , /*tp_compare*/
0 , /*tp_repr*/
0 , /*tp_as_number*/
0 , /*tp_as_sequence*/
0 , /*tp_as_mapping*/
0 , /*tp_hash */
0 , /*tp_call*/
0 , /*tp_str*/
0 , /*tp_getattro*/
0 , /*tp_setattro*/
VideoTexture: improvements to image data access API.
- Use BGL buffer instead of string for image data.
- Add buffer interface to image source.
- Allow customization of pixel format.
- Add valid property to check if the image data is available.
The image property of all Image source objects will now
return a BGL 'buffer' object. Previously it returned
a string, which did not work at all with Python 3.1.
The BGL buffer type allows sequence access to the bytes and
is directly usable in the BGL OpenGL wrapper functions.
The buffer is formatted as a one-dimensional array of bytes
with 4 bytes per pixel in RGBA order.
BGL buffers will also be accepted in the ImageBuff load()
and plot() functions.
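For instance, a minimal sketch (the load(buffer, width, height)
signature and the bgl.Buffer byte buffer are assumptions based on
the plugin API, not taken from this log):

    import VideoTexture
    import bgl

    # a 2x2 RGBA image needs 2 * 2 * 4 bytes (4 bytes per pixel)
    w, h = 2, 2
    buf = bgl.Buffer(bgl.GL_BYTE, [w * h * 4])  # byte buffer for the pixel data
    ib = VideoTexture.ImageBuff()
    ib.load(buf, w, h)  # fill the image from the BGL buffer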
It is possible to customize the pixel format by using
the VideoTexture.imageToArray(image, mode) function:
the first argument is an Image source object, the second
optional argument is a format string using the R, G, B,
A, 0 and 1 characters. For example, "BGR" means that each
pixel will be 3 bytes, corresponding to the Blue, Green
and Red channels in that order. Use 0 for a fixed hex 00
value and 1 for hex FF. The default mode is "RGBA".
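A short usage sketch (source stands for any Image source
object, for example a VideoFFmpeg instance):

    # pack the current image as 3 bytes per pixel in Blue, Green, Red order
    bgr = VideoTexture.imageToArray(source, "BGR")
    # "RGB1" gives 4 bytes per pixel with the fourth byte forced to hex FF
    rgba_opaque = VideoTexture.imageToArray(source, "RGB1")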
All Image source objects now support the buffer interface,
which allows creating memoryview objects for direct access
to the image's internal buffer without a memory copy. The buffer
format is a one-dimensional array of bytes with 4 bytes per
pixel in RGBA order. The buffer is writable, which allows
custom modifications of the image data.
v = memoryview(source)
A bug in the Python 3.1 buffer API causes a crash if
the memoryview object cannot be created. Therefore, you
must always check that image data is available
before creating a memoryview object. Use the new valid
attribute for that:
if source.valid:
v = memoryview(source)
...
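Since the buffer is writable, pixels can also be modified in
place through the view; a minimal sketch, assuming the RGBA
byte layout described above:

    if source.valid:
        v = memoryview(source)
        # overwrite the first pixel with opaque red (R, G, B, A bytes)
        v[0:4] = b'\xff\x00\x00\xff'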
Note: the BGL buffer object itself does not yet support
the buffer interface.
Note: the valid attribute makes sense only if you use an
image source in conjunction with a texture object, like this:
# refresh texture but keep image data in memory
texture.refresh(False)
if texture.source.valid:
v = memoryview(texture.source)
# process image
...
# invalidate image for next texture refresh
texture.source.refresh()
Limitation: while memoryview objects exist, the image cannot be
resized. Resizing occurs with ImageViewport objects when the
viewport size changes, or with ImageFFmpeg when a new image
is loaded, for example. Any attempt to resize will cause a
runtime error. Delete the memoryview objects if you want to
resize an image source object.
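A sketch of the workaround (process is a hypothetical user
function standing in for any work on the pixel bytes):

    v = memoryview(texture.source)
    process(v)                # read or write the pixels through the view
    del v                     # drop the last view so the image may be resized
    texture.source.refresh()  # safe even if the next frame changes size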
2010-02-21 22:20:00 +00:00
&imageBufferProcs,          /*tp_as_buffer*/
2008-10-31 22:35:52 +00:00
Py_TPFLAGS_DEFAULT,         /*tp_flags*/
"Image source from image buffer",  /* tp_doc */
0,                          /* tp_traverse */
0,                          /* tp_clear */
0,                          /* tp_richcompare */
0,                          /* tp_weaklistoffset */
0,                          /* tp_iter */
0,                          /* tp_iternext */
imageBuffMethods,           /* tp_methods */
0,                          /* tp_members */
imageBuffGetSets,           /* tp_getset */
0,                          /* tp_base */
0,                          /* tp_dict */
0,                          /* tp_descr_get */
0,                          /* tp_descr_set */
0,                          /* tp_dictoffset */
2010-02-26 22:14:31 +00:00
(initproc)ImageBuff_init,   /* tp_init */
2008-10-31 22:35:52 +00:00
0,                          /* tp_alloc */
Image_allocNew,             /* tp_new */
};