blender/source/gameengine/VideoTexture/ImageRender.cpp

VideoTexture module.

The only build system that is known to work for sure is the MSVC project files. I've done my best to update the other build systems, but I count on the community to check and fix them.

This is Zdeno Miklas's video texture plugin ported to trunk. The original plugin API is maintained (it can be found here: http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html) EXCEPT for the following:

The module name is changed to VideoTexture (instead of blendVideoTex).

A new (and only) video source is now available: VideoFFmpeg(). You must pass 1 to 5 arguments when you create it (you can use named arguments):

  VideoFFmpeg(file)                              : play a video file
  VideoFFmpeg(file, capture, rate, width, height): start a live video capture

file:
  In the first form, file is a video file name, relative to the startup directory.
  It can also be a URL; FFmpeg will happily stream a video from a network source.
  In the second form, file is empty or is a hint for the format of the video capture.
  On Windows, file is ignored and should be empty or not specified.
  On Linux, ffmpeg supports two types of device: VideoForLinux and DV1394. The user
  specifies the type of device with the file parameter:
    [<device_type>][:<standard>]
    <device_type> : 'v4l' for VideoForLinux, 'dv1394' for DV1394; defaults to 'v4l'
    <standard>    : 'pal', 'secam' or 'ntsc'; defaults to 'ntsc'
  The driver name is constructed automatically from the device type:
    v4l   : /dev/video<capture>
    dv1394: /dev/dv1394/<capture>
  If you have a different driver name, you can specify it explicitly instead of the
  device type. Examples of valid file parameters:
    /dev/v4l/video0:pal
    /dev/ieee1394/1:ntsc
    dv1394:ntsc
    v4l:pal
    :secam

capture:
  Defines the index number of the capture source, starting from 0. The first capture
  device is always 0. The VideoTexture module knows that you want to start a live
  video capture when you set this parameter to a number >= 0. Setting this parameter
  < 0 indicates video file playback. Default value is -1.

rate:
  The capture frame rate, by default 25 frames/sec.

width, height:
  Width and height of the video capture in pixels, default value 0. On Windows you
  must specify these values and they must match the capture device's capabilities.
  For example, if you have a webcam that can capture at 160x120, 320x240 or 640x480,
  you must specify one of these pairs of values or opening the video source will fail.
  On Linux, default values are provided by the VideoForLinux driver if you don't
  specify width and height.

Simple example
**************
1. Texture definition script:

import VideoTexture

contr = GameLogic.getCurrentController()
obj = contr.getOwner()
if not hasattr(GameLogic, 'video'):
    matID = VideoTexture.materialID(obj, 'MAVideoMat')
    GameLogic.video = VideoTexture.Texture(obj, matID)
    GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
    # Streaming is also possible:
    #GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
    GameLogic.vidSrc.repeat = -1
    # If the video dimensions are not a power of 2, scaling must be done before
    # sending the texture to the GPU. This is done by default with gluScaleImage()
    # but you can also use a faster, but less precise, scaling by setting scale
    # to True. The best approach is to convert the video offline and set the dimensions right.
    GameLogic.vidSrc.scale = True
    # FFmpeg always delivers the video image upside down, so flipping is enabled automatically
    #GameLogic.vidSrc.flip = True

if contr.getSensors()[0].isPositive():
    GameLogic.video.source = GameLogic.vidSrc
    GameLogic.vidSrc.play()

2. Texture refresh script:

obj = GameLogic.getCurrentController().getOwner()
if hasattr(GameLogic, 'video'):
    GameLogic.video.refresh(True)

You can download this demo here:
http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg

2008-10-31 22:35:52 +00:00
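As a complement to the file-playback example above, here is a minimal sketch of the live-capture form described under capture/rate/width/height. The material name 'MAVideoMat' and the 640x480 size are placeholders: the size must match a mode your camera actually supports (mandatory on Windows; on Linux the driver can fill in defaults).

import VideoTexture

contr = GameLogic.getCurrentController()
obj = contr.getOwner()
if not hasattr(GameLogic, 'video'):
    matID = VideoTexture.materialID(obj, 'MAVideoMat')   # hypothetical material name
    GameLogic.video = VideoTexture.Texture(obj, matID)
    # first capture device (index 0), 25 fps, 640x480; on Linux the file
    # parameter selects the device type and standard, e.g. 'v4l:pal'
    GameLogic.vidSrc = VideoTexture.VideoFFmpeg('v4l:pal', 0, 25, 640, 480)
    GameLogic.video.source = GameLogic.vidSrc
    GameLogic.vidSrc.play()

The texture refresh script is the same as in the example above: call GameLogic.video.refresh(True) on every frame.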
/* $Id$
-----------------------------------------------------------------------------
This source file is part of VideoTexture library
Copyright (c) 2007 The Zdeno Ash Miklas
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 59 Temple
Place - Suite 330, Boston, MA 02111-1307, USA, or go to
http://www.gnu.org/copyleft/lesser.txt.
-----------------------------------------------------------------------------
*/
// implementation
#include <PyObjectPlus.h>
#include <structmember.h>
VideoTexture: new ImageRender class for Render To Texture.

The new class VideoTexture.ImageRender() is available to perform render-to-texture in the GE.

Constructor:
  VideoTexture.ImageRender(scene, cam)
    cam  : camera object that will be used for the render. It must be an inactive camera.
    scene: reference to the scene that will be rendered. The camera must be part of that scene.
  Returns an object that can be used as a source of a VideoTexture.Texture object.

Methods: none

Attributes:
  background:
    4-tuple representing the background color of the rendering as RGBA color components,
    each component being an integer between 0 and 255. Default value = [0, 0, 255, 255]
    (= saturated blue).
    Note: although the alpha component can be specified, it is not supported at the moment;
    the alpha channel of the rendered texture will always be 255. You can however introduce
    an alpha channel by appending a FilterBlueScreen() filter; it will set the alpha to 0
    (transparent) on all pixels that were not rendered.
  capsize:
    2-tuple representing the size of the render area as [x, y] number of pixels.
    Default value = largest rectangle with power-of-2 dimensions that fits in the canvas.
    You may want to reduce the render area to increase performance. For example, a render
    area of [256, 128] is probably sufficient to implement a car's inner mirror. For best
    performance, use power-of-2 dimensions and don't set any filter: this allows direct
    transfer between the GPU frame buffer and texture memory without going through the host.
  alpha:
    Boolean indicating if the render alpha channel should be copied to the texture.
    Default value: False. Experimental, do not use.
  whole:
    Boolean indicating if the entire canvas should be used for the rendering.
    Default value: False.
    Note: there is no reason to set this attribute to True: the rendering will in any case
    be scaled down to the largest rectangle with power-of-2 dimensions before transferring
    to the texture.

Attributes inherited from the ImageBase class:
  image : image binary data, read-only
  size  : [x, y] size of the texture, read-only
  scale : set to True for fast scale-down in case the render area dimensions are not a power of 2
  flip  : set to True for vertical flip
  filter: set a post-processing filter on the render

Notes:
* Aspect ratio
  For consistent results in Blender and Blenderplayer, the same aspect ratio used by Blender
  to draw the camera viewport (Scene(F10)->Format tab->Size X/Size Y) is also used during the
  rendering. You can control the portion of the scene that will be rendered by "looking
  through the camera": the zone inside the outer dotted rectangle will be rendered to the
  texture. In order to reproduce the scene without X/Y distortion, you must apply the texture
  on an object or portion of an object that has the same aspect ratio.
* Order of rendering
  The rendering is performed when you call the refresh() method of the parent Texture object.
  This happens outside the normal frame rendering and will have no effect on it. However, if
  you want to use ImageViewport and ImageRender at the same time, be sure to refresh the
  viewport texture before the render texture, because the latter will destroy the frame
  buffer that is used by the former to update the texture.
* Scene status
  The meshes are not updated during the render to texture: the rendered texture is one frame
  late relative to the rendered frame with regard to mesh deformation.

Example:

contr = GameLogic.getCurrentController()
# object that receives the texture
obj = contr.getOwner()
scene = GameLogic.getCurrentScene()
# camera used for the render
tvcam = scene.getObjectList()['OBtvcam']
# assume obj has some faces UV assigned to tv.png
matID = VideoTexture.materialID(obj, 'IMtv.png')
GameLogic.tv = VideoTexture.Texture(obj, matID)
GameLogic.tv.source = VideoTexture.ImageRender(scene, tvcam)
GameLogic.tv.source.capsize = [256, 256]
# to render the texture, just call GameLogic.tv.refresh(True) on each frame

You can download a demo game (with a video file) here:
http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.zip
For those who have already downloaded the demo, you can just update the blend file:
http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend

2008-11-26 17:47:42 +00:00
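As a follow-up to the background note above, here is a rough sketch (not part of the original commit) of how a FilterBlueScreen() could be appended to the render source to recover an alpha channel: pixels left at the background color are keyed to alpha 0. That FilterBlueScreen() keys on saturated blue by default is an assumption here; set its color explicitly if your build behaves differently. Remember from the capsize remark that adding any filter forces the image through the host instead of the direct GPU transfer path.

# continuing the example above: GameLogic.tv.source is the ImageRender object
GameLogic.tv.source.background = [0, 0, 255, 255]             # keep the default blue key color
GameLogic.tv.source.filter = VideoTexture.FilterBlueScreen()  # assumed to key on blue by default
# refresh as usual; background pixels now come out with alpha = 0
GameLogic.tv.refresh(True)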
#include <BIF_gl.h>
#include "KX_PythonInit.h"
#include "DNA_scene_types.h"
#include "ImageRender.h"
#include "ImageBase.h"
#include "BlendType.h"
#include "Exception.h"
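// Exceptions reported back to Python when ImageRender is given an invalid scene or camera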
ExceptionID SceneInvalid, CameraInvalid;
ExpDesc SceneInvalidDesc (SceneInvalid, "Scene object is invalid");
ExpDesc CameraInvalidDesc (CameraInvalid, "Camera object is invalid");
// constructor
ImageRender::ImageRender (KX_Scene * scene, KX_Camera * camera) :
ImageViewport(),
m_scene(scene),
m_camera(camera)
{
// initialize background colour
setBackground(0, 0, 255, 255);
// retrieve rendering objects
m_engine = KX_GetActiveEngine();
m_rasterizer = m_engine->GetRasterizer();
m_canvas = m_engine->GetCanvas();
m_rendertools = m_engine->GetRenderTools();
}
// destructor
ImageRender::~ImageRender (void)
{
}
// set background color
void ImageRender::setBackground (int red, int green, int blue, int alpha)
{
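// clamp each component to [0, 255] and store it as a normalized float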
m_background[0] = (red < 0) ? 0.f : (red > 255) ? 1.f : float(red)/255.f;
m_background[1] = (green < 0) ? 0.f : (green > 255) ? 1.f : float(green)/255.f;
m_background[2] = (blue < 0) ? 0.f : (blue > 255) ? 1.f : float(blue)/255.f;
m_background[3] = (alpha < 0) ? 0.f : (alpha > 255) ? 1.f : float(alpha)/255.f;
}
// capture image from viewport
void ImageRender::calcImage (unsigned int texId)
{
if (m_rasterizer->GetDrawingMode() != RAS_IRasterizer::KX_TEXTURED || // no need for texture
m_camera->GetViewport() || // camera must be inactive
m_camera == m_scene->GetActiveCamera())
{
// no need to compute the texture in non-textured rendering
m_avail = false;
return;
}
// render the scene from the camera
Render();
// get image from viewport
ImageViewport::calcImage(texId);
// restore OpenGL state
m_canvas->EndFrame();
}
void ImageRender::Render()
{
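// scale factor applied to the lens, clipping range and camera Z offset when the camera is orthographic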
const float ortho = 100.0;
const RAS_IRasterizer::StereoMode stereomode = m_rasterizer->GetStereoMode();
// The screen area that ImageViewport will copy is also the rendering zone
m_canvas->SetViewPort(m_position[0], m_position[1], m_position[0]+m_capSize[0]-1, m_position[1]+m_capSize[1]-1);
m_canvas->ClearColor(m_background[0], m_background[1], m_background[2], m_background[3]);
m_canvas->ClearBuffer(RAS_ICanvas::COLOR_BUFFER|RAS_ICanvas::DEPTH_BUFFER);
m_rasterizer->BeginFrame(RAS_IRasterizer::KX_TEXTURED,m_engine->GetClockTime());
m_rendertools->BeginFrame(m_rasterizer);
m_engine->SetWorldSettings(m_scene->GetWorldInfo());
m_rendertools->SetAuxilaryClientInfo(m_scene);
m_rasterizer->DisplayFog();
// matrix calculation, don't apply any stereo mode
m_rasterizer->SetStereoMode(RAS_IRasterizer::RAS_STEREO_NOSTEREO);
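// use the camera's projection matrix if it is already valid,
// otherwise compute a default frustum from the camera settings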
if (m_camera->hasValidProjectionMatrix())
{
m_rasterizer->SetProjectionMatrix(m_camera->GetProjectionMatrix());
} else
{
RAS_FrameFrustum frustrum;
float lens = m_camera->GetLens();
bool orthographic = !m_camera->GetCameraData()->m_perspective;
float nearfrust = m_camera->GetCameraNear();
float farfrust = m_camera->GetCameraFar();
float aspect_ratio = 1.0f;
Scene *blenderScene = m_scene->GetBlenderScene();
if (orthographic) {
lens *= ortho;
nearfrust = (nearfrust + 1.0)*ortho;
farfrust *= ortho;
}
// compute the aspect ratio from the Blender scene frame settings so that render to texture
// works the same in Blender and in the Blender player
if (blenderScene->r.ysch != 0)
aspect_ratio = float(blenderScene->r.xsch) / float(blenderScene->r.ysch);
RAS_FramingManager::ComputeDefaultFrustum(
nearfrust,
farfrust,
lens,
aspect_ratio,
frustrum);
MT_Matrix4x4 projmat = m_rasterizer->GetFrustumMatrix(
frustrum.x1, frustrum.x2, frustrum.y1, frustrum.y2, frustrum.camnear, frustrum.camfar);
m_camera->SetProjectionMatrix(projmat);
}
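// build the view matrix from the camera's world-to-camera transform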
MT_Transform camtrans(m_camera->GetWorldToCamera());
if (!m_camera->GetCameraData()->m_perspective)
camtrans.getOrigin()[2] *= ortho;
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, m_camera->NodeGetWorldPosition(),
m_camera->GetCameraLocation(), m_camera->GetCameraOrientation());
m_camera->SetModelviewMatrix(viewmat);
// restore the stereo mode now that the matrix is computed
m_rasterizer->SetStereoMode(stereomode);
// do not update the meshes here, we don't want to do it more than once per frame
//m_scene->UpdateMeshTransformations();
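// cull the scene against the render camera and draw the visible meshes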
m_scene->CalculateVisibleMeshes(m_rasterizer,m_camera);
m_scene->RenderBuckets(camtrans, m_rasterizer, m_rendertools);
}
// cast Image pointer to ImageRender
inline ImageRender * getImageRender (PyImage * self)
{ return static_cast<ImageRender*>(self->m_image); }
// python methods
// Blender Scene type
BlendType<KX_Scene> sceneType ("KX_Scene");
// Blender Camera type
BlendType<KX_Camera> cameraType ("KX_Camera");
// object initialization
static int ImageRender_init (PyObject * pySelf, PyObject * args, PyObject * kwds)
{
// parameters - scene object
PyObject * scene;
// camera object
PyObject * camera;
// parameter keywords
static char *kwlist[] = {"sceneObj", "cameraObj", NULL};
// get parameters
if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO", kwlist, &scene, &camera))
return -1;
try
{
// get scene pointer
KX_Scene * scenePtr (NULL);
if (scene != NULL) scenePtr = sceneType.checkType(scene);
// throw exception if scene is not available
if (scenePtr == NULL) THRWEXCP(SceneInvalid, S_OK);
// get camera pointer
KX_Camera * cameraPtr (NULL);
if (camera != NULL) cameraPtr = cameraType.checkType(camera);
// throw exception if camera is not available
if (cameraPtr == NULL) THRWEXCP(CameraInvalid, S_OK);
// get pointer to image structure
PyImage * self = reinterpret_cast<PyImage*>(pySelf);
// create source object
if (self->m_image != NULL) delete self->m_image;
self->m_image = new ImageRender(scenePtr, cameraPtr);
}
catch (Exception & exp)
{
exp.report();
return -1;
}
// initialization succeeded
return 0;
}
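// Usage note (a hedged sketch; the variable names 'scene', 'cam' and 'source' are
// hypothetical): from Python this initializer is reached by constructing the type
// with a scene and a camera belonging to it, using the keywords declared above, e.g.
//   source = VideoTexture.ImageRender(sceneObj=scene, cameraObj=cam)
// Passing anything other than a KX_Scene/KX_Camera pair makes the checks above
// report an exception and the construction fails.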
// get background color
PyObject * getBackground (PyImage * self, void * closure)
{
return Py_BuildValue("[BBBB]",
getImageRender(self)->getBackground(0),
getImageRender(self)->getBackground(1),
getImageRender(self)->getBackground(2),
getImageRender(self)->getBackground(3));
}
// set background color
static int setBackground (PyImage * self, PyObject * value, void * closure)
{
// check validity of parameter
if (value == NULL || !PySequence_Check(value) || PySequence_Length(value) != 4
|| !PyInt_Check(PySequence_Fast_GET_ITEM(value, 0))
|| !PyInt_Check(PySequence_Fast_GET_ITEM(value, 1))
|| !PyInt_Check(PySequence_Fast_GET_ITEM(value, 2))
|| !PyInt_Check(PySequence_Fast_GET_ITEM(value, 3)))
{
PyErr_SetString(PyExc_TypeError, "The value must be a sequence of 4 integers between 0 and 255");
return -1;
}
// set background color
getImageRender(self)->setBackground((unsigned char)(PyInt_AsLong(PySequence_Fast_GET_ITEM(value, 0))),
(unsigned char)(PyInt_AsLong(PySequence_Fast_GET_ITEM(value, 1))),
(unsigned char)(PyInt_AsLong(PySequence_Fast_GET_ITEM(value, 2))),
(unsigned char)(PyInt_AsLong(PySequence_Fast_GET_ITEM(value, 3))));
// success
return 0;
}
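// Usage note (a hedged sketch; 'source' is a hypothetical ImageRender instance):
// the getter/setter pair above exposes the render background colour as a Python
// attribute holding four integers in the 0-255 range (RGBA), e.g.
//   source.background              # e.g. [0, 0, 255, 255]
//   source.background = [255, 255, 255, 255]
// Any value that is not a sequence of exactly 4 integers raises TypeError.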
// methods structure
static PyMethodDef imageRenderMethods[] =
{ // methods from ImageBase class
{"refresh", (PyCFunction)Image_refresh, METH_NOARGS, "Refresh image - invalidate its current content"},
{NULL}
};
// attributes structure
static PyGetSetDef imageRenderGetSets[] =
{
{(char*)"background", (getter)getBackground, (setter)setBackground, (char*)"background color", NULL},
// attributes from ImageViewport
{(char*)"capsize", (getter)ImageViewport_getCaptureSize, (setter)ImageViewport_setCaptureSize, (char*)"size of render area", NULL},
{(char*)"alpha", (getter)ImageViewport_getAlpha, (setter)ImageViewport_setAlpha, (char*)"use alpha in texture", NULL},
{(char*)"whole", (getter)ImageViewport_getWhole, (setter)ImageViewport_setWhole, (char*)"use whole viewport to render", NULL},
// attributes from ImageBase class
{(char*)"image", (getter)Image_getImage, NULL, (char*)"image data", NULL},
{(char*)"size", (getter)Image_getSize, NULL, (char*)"image size", NULL},
{(char*)"scale", (getter)Image_getScale, (setter)Image_setScale, (char*)"fast scale of image (near neighbour)", NULL},
{(char*)"flip", (getter)Image_getFlip, (setter)Image_setFlip, (char*)"flip image vertically", NULL},
{(char*)"filter", (getter)Image_getFilter, (setter)Image_setFilter, (char*)"pixel filter", NULL},
{NULL}
};
// define python type
PyTypeObject ImageRenderType =
{
PyObject_HEAD_INIT(NULL)
0, /*ob_size*/
"VideoTexture.ImageRender", /*tp_name*/
sizeof(PyImage), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)Image_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
0, /*tp_call*/
0, /*tp_str*/
0, /*tp_getattro*/
0, /*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT, /*tp_flags*/
"Image source from render", /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
imageRenderMethods, /* tp_methods */
0, /* tp_members */
imageRenderGetSets, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)ImageRender_init, /* tp_init */
0, /* tp_alloc */
Image_allocNew, /* tp_new */
};
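// End-to-end usage sketch (hedged; the camera name 'OBtvcam', the image 'IMtv.png'
// and the GameLogic attributes are illustrative, not defined by this module):
//   import VideoTexture
//   obj = GameLogic.getCurrentController().getOwner()   # object that receives the texture
//   scene = GameLogic.getCurrentScene()
//   tvcam = scene.getObjectList()['OBtvcam']            # inactive camera used for the render
//   matID = VideoTexture.materialID(obj, 'IMtv.png')
//   GameLogic.tv = VideoTexture.Texture(obj, matID)
//   GameLogic.tv.source = VideoTexture.ImageRender(scene, tvcam)
//   GameLogic.tv.source.capsize = [256, 256]            # smaller render area for performance
//   # call GameLogic.tv.refresh(True) on each frame to perform the render to texture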