Commit Graph

38 Commits

Author SHA1 Message Date
Benoit Bolsee
2d0d06f642 BGE VideoTexture: fix a bug in VideoTexture.materialID() introduced by a recent commit. 2009-04-20 21:20:33 +00:00
Campbell Barton
f5fc4ebdd8 BGE Python API
- More verbose error messages.
- BL_Shader wasn't setting error messages on some errors.
- The FilterNormal depth attribute was checking strictly for a float, which is bad because scripts often assign ints to float attributes.
- Added a check to PyVecTo for a tuple rather than always using a generic Python sequence. On my system this is over 2x faster with an optimized build.
2009-04-19 21:01:12 +00:00
Campbell Barton
8d2cb5bea4 BGE Python API
This changes how the BGE classes and Python work together, which hasn't changed since Blender went open source.
The main difference is that PyObjectPlus, the base class for most game engine classes, no longer inherits from PyObject and cannot be cast to a PyObject.

This has the advantage that the BGE does not have to keep 2 reference counts valid for C++ and Python.

Previously C++ classes would never be freed while Python held a reference; however, this reference could be problematic, e.g. a GameObject that isn't in a scene anymore should not be used by Python, and doing so could even crash Blender in some cases.

Instead PyObjectPlus has a member "PyObject *m_proxy" which is lazily initialized when Python needs it. m_proxy reference counts are managed by Python, though it should never be freed while the C++ class exists, since the C++ class holds a reference to avoid creating and freeing it all the time.
When the C++ class is freed it sets the m_proxy reference to NULL. If Python accesses this variable it will raise a RuntimeError (check the isValid attribute to see if it's valid without raising an error).
- This replaces the m_zombie bool and IsZombie() tests added recently.

Return values to Python that used to be...
 return value->AddRef();
are now
 return value->GetProxy();
or...
 return value->NewProxy(true); // true means Python owns this C++ value, which will be deleted when the PyObject is freed
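
From the script side, a minimal sketch of the new behaviour (isValid comes from this change; the other calls are the standard BGE Python API, and the object names are hypothetical):

  obj = GameLogic.getCurrentController().getOwner()
  if obj.isValid:
      pos = obj.getPosition()   # safe: the underlying C++ object still exists
  else:
      print "owner was freed; any further access would raise RuntimeError"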
2009-04-19 12:46:39 +00:00
Campbell Barton
df8cf26404 Added m_zombie to the base Python class (PyObjectPlus); when this is set, all subclasses will raise an error on access to their members.
Other small changes...
- KX_Camera and KX_Light didn't have get/setitem access in their PyType definition.
- CList.from_id() error checking for a long was comparing -1 against an unsigned value (own fault).
- CValue::SpecialRelease was incrementing an int for no reason.
- renamed m_attrlist to m_attr_dict since it's a PyDict type.
- removed custom getattro/setattro functions for KX_Scene and KX_GameObject, use py_base_getattro, py_base_setattro for all subclasses of PyObjectPlus.
- lowercase windows.h in VideoBase.cpp for cross compiling.
2009-04-17 20:06:06 +00:00
Benoit Bolsee
0b8661ab4d BGE: Occlusion culling and other performance improvements.
Added occlusion culling capability in the BGE. 
More info: http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.49/Game_Engine#BGE_Scenegraph_improvement
MSVC, scons, cmake, Makefile updated.

Other minor performance improvements:
- The rasterizer was computing the OpenGL model matrix of the objects too many times
- DBVT view frustum culling was not properly culling behind the near plane:
  Large objects behind the camera were sent to the GPU
- Removed all references to the mesh split/join feature as it is not yet functional
2009-04-13 20:08:33 +00:00
Peter Schlaile
615c5232c7 == FFMPEG ==
Updated ffmpeg to release version 0.5
updated x264 to today's daily build
thanks to ben2610 for first patches (but you got hddaudio.c wrong :)
2009-03-22 19:19:21 +00:00
Benoit Bolsee
ab8e9ba3dd VideoTexture: reactivate VideoTexture for scons/cmake/makefile compilation systems, fix video streaming, fix camera support in Linux, add multi-thread cache service, fix crash when a VideoFFmpeg object could not be created.
The multi-thread cache service is activated only on multi-core processors.
It consists of loading, decoding and caching the video frames in a
separate thread. The cache size is 5 decoded frames and 30 raw frames.
Note that the opening of a video file/stream/camera is not multi-threaded:
you will still experience a delay at the VideoFFmpeg object creation.
Processing of the video frame (resize, loading to texture) is still done
in the main thread.  Caching is automatically enabled for video file, 
video streaming and video camera. 

Video streaming now works correctly: the video frames are loaded
at the correct rate. Network delays and frequency drifts are automatically
compensated. 
Note: an http video source is always treated as a streaming source,
even though the http protocol allows seeking. For the user this means that
you cannot define a start/stop range and cannot restart the video except
by reopening the source. Pause/play is however possible.

Video camera is now correctly handled on Linux: it will not slow down the BGE.
A video camera is treated as a streaming source.
2009-03-05 15:16:43 +00:00
Campbell Barton
2eb85c01f3 remove warnings for the BGE
- variables that shadow vars declared earlier
- Py_Fatal now prints an error to stderr
- gcc was complaining about the order of initialized vars (for classes)
- const return values for ints and bools didn't do anything.
- braces for ambiguous if statements
2009-02-25 03:26:02 +00:00
Nathan Letwory
c3d74547be SCons:
* giving compileflags, cc_compileflags and cxx_compileflags to BlenderLib() now actually overrides any other setting (so there's no ambiguity when, e.g., conflicting options are specified in REL_CFLAGS et al). These are set after either release or debug flags, but before any *_WARN flags (so those stay maintained).
* add cxx_compileflags for the GE parts on win32-vc to have better performance (see the sketch below).
* NOTE: it would be good if platform maintainers (OSX and Linux) could check and do the same for their systems. Not vital, but probably very, very much welcomed by GE users.
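A rough sketch of what this could look like in a module's SConscript (the library name, sources, includes, defines, libtype and priority values are illustrative placeholders; only the cxx_compileflags keyword comes from this change):

  # inside a module's SConscript
  cxxflags = []
  if env['OURPLATFORM'] == 'win32-vc':
      cxxflags = ['/O2']  # extra optimization for the GE parts on MSVC
  env.BlenderLib('bf_example', sources, incs, defs,
                 libtype=['game'], priority=[25],
                 cxx_compileflags=cxxflags)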
2009-02-15 23:26:00 +00:00
Benoit Bolsee
1ecf855276 VideoTexture.ImageMirror: if the mirror is horizontal in mesh coordinates, take the Y axis as the up direction for the UV map. If the mirror is not perfectly vertical (horizontal), the projection of the Z (Y) axis on the mirror plane is the mirror up direction. 2008-12-11 23:02:33 +00:00
Benoit Bolsee
cd47292745 ImageMirror: add clip attribute to limit clipping distance of mirror rendering 2008-12-09 14:16:10 +00:00
Benoit Bolsee
149d231d69 VideoTexture: new ImageMirror class for easy mirror (and portal) creation
The new class VideoTexture.ImageMirror() is available to perform
automatic mirror rendering.

Constructor:

  VideoTexture.ImageMirror(scene,observer,mirror,material)
    scene:    reference to the scene that will be rendered.
              Both observer and mirror must be part of that scene.
    observer: reference to a game object used as view point for
              mirror rendering: the scene will be rendered through
              the mirror as if the active camera was at the observer 
              location. Usually the observer is the active camera
              but you can use any game object.
    mirror:   reference to the mesh object holding the mirror.
    material: material ID of the mirror texture as returned by 
              VideoTexture.materialID(). The mirror is formed by 
              the polygons mapped to that material.

There are no specific methods or attributes. ImageMirror inherits 
all methods and attributes from ImageRender. You must refresh the
parent VideoTexture.Texture object regularly to update the mirror 
rendering.

Guidelines on how to create a working mirror:
- Use a texture that is specific to the mirror so that the mirror 
  rendering only appears on the mirror.
- The mirror must be planar; the algorithm works well only for planar
  or quasi-planar mirrors. For a spherical mirror, you will get better
  results with ImageRender and a camera at the center of the mirror. 
  ImageMirror automatically computes the mirror orientation and 
  position. The mirror doesn't need to be rectangular; it can be 
  circular or take any form provided it is planar.
- The mirror up direction must be along the Z axis in local mesh
  coordinates. If the mirror is not vertical, ImageMirror will 
  compute the up direction as being the projection of the Z axis
  on the mirror plane.
- UV mapping must be set right to get correct mirror rendering:
  - make a planar projection of the mirror polygons (Unwrap or projection from view)
  - if necessary, rotate the projection so that the UV up direction corresponds to the mesh Z axis
  - scale the projection so that the extreme points touch the border of the texture
  - flip the UV projection horizontally (scale -1 on X axis). This is needed
    because the mirror texture is rendered from the back of the mirror and
    thus is reversed from the view point of the observer. Horizontal flip 
    in the UV map restores the correct orientation.

Besides these simple rules, the mirror rendering is completely automatic. 
In particular, you don't need to allocate a camera for the rendering; 
ImageMirror dynamically creates a camera for that. The reflection is correct
even at large angles. The mirror can be a dynamic and moving object; the 
algorithm always computes the correct camera position based on the observer's 
relative position. You don't have to worry about mirror position in the scene: 
the algorithm automatically computes the camera frustum so that any object 
behind the mirror is not rendered.

Warnings:
- observer and mirror are references to game objects. ImageMirror keeps
  a pointer to them but does not increment the reference count. You must ensure 
  that these game objects are not deleted as long as you refresh() the ImageMirror
  object. You must release the ImageMirror object before you delete the game
  objects. To release the ImageMirror object (normally stored in GameLogic),
  just assign it to None.
- Mirror rendering is automatically skipped when the observer is behind the mirror
  but it is not disabled when the mirror is out of sight of the observer.
  You should only refresh the mirror when you know that the observer is likely to see it.
  For example, no need to refresh a car inner mirror when the player is not in the car.

Example:

  contr = GameLogic.getCurrentController()
  # object holding the mirror
  mirror = contr.getOwner()
  scene = GameLogic.getCurrentScene()
  # observer will be the active camera
  camera = scene.getObjectList()['OBCamera']
  matID = VideoTexture.materialID(mirror, 'IMmirror.png')
  GameLogic.mirror = VideoTexture.Texture(mirror, matID)
  GameLogic.mirror.source = VideoTexture.ImageMirror(scene,camera,mirror,matID)
  # to render the mirror, just call GameLogic.mirror.refresh(True) on each frame.

You can download a demo game (with a video file) here:

  http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.zip

For those who have already downloaded the demo, you can just update the blend file:

  http://home.scarlet.be/~tsi46445/blender/MirrorTextureDemo.blend
2008-12-04 16:07:46 +00:00
Benoit Bolsee
6a51ba54cd VideoTexture: new ImageRender class for Render To Texture
The new class VideoTexture.ImageRender() is available to perform
render to texture in the GE.

Constructor:

  VideoTexture.ImageRender(scene,cam)
    cam  : camera object that will be used for the render.
           It must be an inactive camera.
    scene: reference to the scene that will be rendered.
           The camera must be part of that scene.
  Returns an object that can be used as a source of a VideoTexture.Texture object

Methods: none

Attributes:

  background: 
     4-tuple representing the background color of the rendering
     as RGBA color components, each component being an integer 
     between 0 and 255. 
     Default value = [0,0,255,255] (=saturated blue)
     Note: although the alpha component can be specified, it is not
           supported at the moment; the alpha channel of the rendered
           texture will always be 255. You can however introduce an
           alpha channel by appending a FilterBlueScreen() filter; it
           will set the alpha to 0 (transparent) on all pixels that were
           not rendered.

  capsize:
     2-tuple representing the size of the render area as [x,y] number of pixels.
     Default value = largest rectangle with power of 2 dimensions that fits in the canvas 
     You may want to reduce the render area to increase performance. For example,
     a render area of [256,128] is probably sufficient to implement a car inner mirror.
     For best performance, use power of 2 dimensions and don't set any filter: this
     allows direct transfer between the GPU frame buffer and texture memory
     without going through the host.

  alpha: 
     Boolean indicating if the render alpha channel should be copied to the texture.
     Default value: False
     Experimental, do not use.

  whole:
     Boolean indicating if the entire canvas should be used for the rendering. 
     Default value: False
     Note: There is no reason to set this attribute to True: the rendering will
           in any case be scaled down to the largest rectangle with power of 2
           dimensions before transferring to the texture.

Attributes inherited from the ImageBase class:

  image : image binary data, read-only
  size  : [x,y] size of the texture, read-only
  scale : set to True for fast scale down in case the render area dimensions are not power of 2
  flip  : set to True for vertical flip. 
  filter: set a post-processing filter on the render.

Notes:

* Aspect Ratio
For consistent results in Blender and Blenderplayer, the same aspect ratio used
by Blender to draw the camera viewport (Scene(F10)->Format tab->Size X/Size Y) 
is also used during the rendering. You can control the portion of the scene that
will be rendered by "looking through the camera": the zone inside the outer dotted 
rectangle will be rendered to the texture.
In order to reproduce the scene without X/Y distortion, you must apply the texture
on an object or portion of object that has the same aspect ratio.

* Order of rendering
The rendering is performed when you call the refresh() method of the parent 
Texture object. This happens outside the normal frame rendering and will have no 
effect on it.
However, if you want to use ImageViewport and ImageRender at the same time, be 
sure to refresh the viewport texture before the render texture because the latter
will destroy the frame buffer that is used by the former to update the texture.
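
A minimal sketch of that ordering (the two Texture objects and their sources are hypothetical):

  # refresh the ImageViewport-based texture first...
  GameLogic.hudTex.refresh(True)
  # ...then the ImageRender-based one, whose render overwrites the frame buffer
  # that ImageViewport reads from
  GameLogic.tv.refresh(True)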

* Scene status
The meshes are not updated during the render to texture: the rendered texture
is one frame late to the rendered frame with regards to mesh deformation.

* Example:

  contr = GameLogic.getCurrentController()
  # object that receives the texture
  obj = contr.getOwner()
  scene = GameLogic.getCurrentScene()
  # camera used for the render
  tvcam = scene.getObjectList()['OBtvcam']
  # assume obj has some faces UV assigned to tv.png
  matID = VideoTexture.materialID(obj, 'IMtv.png')
  GameLogic.tv = VideoTexture.Texture(obj, matID)
  GameLogic.tv.source = VideoTexture.ImageRender(scene,tvcam)
  GameLogic.tv.source.capsize = [256,256]
  # to render the texture, just call GameLogic.tv.refresh(True) on each frame.

You can download a demo game (with a video file) here:

  http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.zip

For those who have already downloaded the demo, you can just update the blend file:

  http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
2008-11-26 17:47:42 +00:00
Benoit Bolsee
773824bbea VideoTexture: support VideoTexture in blenderplayer 2008-11-10 22:17:40 +00:00
Benoit Bolsee
2de476c88f VideoTexture: Preserve alpha channel if present in video, images and sequences. Better detection of end of video. 2008-11-09 21:42:30 +00:00
Benoit Bolsee
8b2811d9d5 VideoTexture: VideoTexture.materialID() can now take texture image name.
You can specify an image name (starting with 'IM') instead of a material
name in VideoTexture.materialID(); it returns the material ID matching
this texture.
The advantage of this method is that it works with Blender materials
and UV textures. In the case of a UV texture, it grabs the internal material
corresponding to the faces that are assigned to this texture. In the case
of a Blender material, it grabs the material that has an image texture
matching the name as its first texture channel.
In both cases, the texture id used in VideoTexture.Texture() should be 0.

Ex:

matID = VideoTexture.materialID(obj,'IMvideo.png')
GameLogic.video = VideoTexture.Texture(obj, matID, 0)
2008-11-07 10:54:32 +00:00
Benoit Bolsee
5567f3dded VideoTexture: comment was misplaced after previous commit. 2008-11-06 23:52:47 +00:00
Benoit Bolsee
87538be426 VideoTexture: fix compile error when FFmpeg is disabled. 2008-11-06 16:01:17 +00:00
Benoit Bolsee
ce1625ebc0 VideoTexture: new VideoTexture.ImageFFmpeg to load and reload images.
The FFmpeg library allows to load image files. Although it is possible
to load images using the VideoFFmpeg class, it is not very efficient.
The new class VideoTexture.ImageFFmpeg is dedicated to image management.

Constructor:
-----------
VideoTexture.ImageFFmpeg('image_file_name')
  Opens the file but does not load the texture yet.
  The file name can also be a network address. It can also be a video
  file name; in that case only the first image is loaded.

Methods:
-------
refresh(True)
  Loads the image to the texture. 
  You only need to call it once; the file is automatically closed after
  that and calling refresh() again will have no effect.

reload('new_file_name')
  Reloads the image (if new_file_name is omitted) or loads a new image.
  The file is opened but the texture is not updated yet; you need
  to call refresh() once to load the texture.

Attributes:
----------
status
  returns the image status:
    2 : file opened, texture not loaded
    3 : file closed, texture loaded

image
  returns the image data as a string of RGBA pixels

size
  returns the image size [x,y]

scale
  get/set the scale flag. 
  If the scale flag is False, the image is rescaled to the texture format
  using the gluScaleImage() function, which is slow but gives good quality.
  If the scale flag is True, the image is rescaled using a fast but
  less accurate algorithm.

flip
  get/set Y-flip flag.
  Set to True by default as FFmpeg always provides the image upside down

filter
  get/set filter(s) on the image.

Example:
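A minimal usage sketch (the object, material and file names are hypothetical; the calls follow the API described above and the usual Texture workflow):

  contr = GameLogic.getCurrentController()
  obj = contr.getOwner()
  matID = VideoTexture.materialID(obj, 'MAScreenMat')
  GameLogic.tex = VideoTexture.Texture(obj, matID)
  GameLogic.tex.source = VideoTexture.ImageFFmpeg('photo.png')
  # one refresh is enough: the image is loaded to the texture and the file is closed
  GameLogic.tex.refresh(True)
  # to show another image later, reload it and refresh again
  #GameLogic.tex.source.reload('photo2.png')
  #GameLogic.tex.refresh(True)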
2008-11-05 21:53:22 +00:00
Benoit Bolsee
0ade815aff VideoTexture: fixing a crash when loading an image as a video file - yes it works, provided that you don't set repeat, and there is also no need to refresh all the time. 2008-11-05 17:38:31 +00:00
Benoit Bolsee
8916f84622 VideoTexture: Add support for GLSL. Fix small printout bug in Exception printout 2008-11-05 13:22:10 +00:00
Benoit Bolsee
1886b7bf52 VideoTexture: fix RGB/BGR confusion, make code compatible with big endian CPU, add RGBA source filter. 2008-11-04 12:04:59 +00:00
Benoit Bolsee
6eb3bf53dd VideoTexture: Bug report #17946: add (char*) casting to fix compile error with Python get-set method and module object. 2008-11-04 09:21:27 +00:00
Kent Mein
b7fdf2ab50 Adds GSR's INT64_C fix and removes DOS line endings...
Kent
2008-11-03 23:35:41 +00:00
Benoit Bolsee
17296efd62 VideoTexture: fix compile error with GLint in ImageViewport under osx, part 2 2008-11-02 18:41:24 +00:00
Benoit Bolsee
7d63f5a7fe VideoTexture: fix compile error with GLint in ImageViewport under osx. 2008-11-02 18:31:54 +00:00
Benoit Bolsee
2973bd8ea2 VideoTexture: use PyObjectPlus.h instead of Python.h for compatibility with Python2.3 2008-11-02 18:02:31 +00:00
Martin Poirier
12b2f1c83c Include path for numpy no longer needed. 2008-11-02 14:36:32 +00:00
Benoit Bolsee
68f50e0c6b VideoTexture: remove numpy dependency. 2008-11-01 22:28:27 +00:00
Benoit Bolsee
e72d6c7bfa VideoTexture: fix NULL pointer crash when material name is not found. 2008-11-01 20:18:15 +00:00
Martin Poirier
b24c6ef70d Adding include path for numpy to sconscript. There must be a better way to do this. 2008-11-01 17:44:12 +00:00
Benoit Bolsee
19cdef1f71 VideoTexture: typo in linux code 2008-11-01 17:26:34 +00:00
Benoit Bolsee
e6a2ab319f VideoTexture: AVFormatContext::pb is not a pointer for avformat library older than 52 (linux uses 51) 2008-11-01 17:15:17 +00:00
Martin Poirier
58d0dc21aa Getting video texture closer to compiling under linux 2008-11-01 17:06:36 +00:00
Benoit Bolsee
bd81d96b78 Video Texture: missing newlines at the end of several files. 2008-11-01 15:58:49 +00:00
Benoit Bolsee
54401d36aa BGE Video Texture: fix constant initializer problem with Exception description. Made the line endings uniform. 2008-11-01 12:48:46 +00:00
Benoit Bolsee
48fb0edc78 Fix Cmake for MSVC 32bit 2008-11-01 11:15:13 +00:00
Benoit Bolsee
a8c4eef326 VideoTexture module.
The only compilation system that works for sure is the MSVC project files. I've tried my best to
update the other compilation systems, but I count on the community to check and fix them.
 
This is Zdeno Miklas' video texture plugin ported to trunk. 
The original plugin API is maintained (it can be found here: http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html)
EXCEPT for the following:

The module name is changed to VideoTexture (instead of blendVideoTex).

A new (and only) video source is now available: VideoFFmpeg()
You must pass 1 to 4 arguments when you create it (you can use named arguments):

VideoFFmpeg(file) : play a video file
VideoFFmpeg(file, capture, rate, width, height) : start a live video capture

file:
In the first form, file is a video file name, relative to startup directory.
It can also be a URL; FFmpeg will happily stream a video from a network source.
In the second form, file is empty or is a hint for the format of the video capture.
In Windows, file is ignored and should be empty or not specified.
In Linux, ffmpeg supports two types of device: VideoForLinux and DV1394. 
The user specifies the type of device with the file parameter:
   [<device_type>][:<standard>]
   <device_type> : 'v4l' for VideoForLinux, 'dv1394' for DV1394; defaults to 'v4l'
   <standard>    : 'pal', 'secam' or 'ntsc', defaults to 'ntsc'
The driver name is constructed automatically from the device types:
   v4l   : /dev/video<capture>
   dv1394: /dev/dv1394/<capture>
If you have a different driver name, you can specify it explicitly 
instead of the device type. Examples of valid file parameters:
   /dev/v4l/video0:pal
   /dev/ieee1394/1:ntsc
   dv1394:ntsc
   v4l:pal
   :secam

capture: 
Defines the index number of the capture source, starting from 0. The first capture device is always 0.
The VideoTexture module knows that you want to start a live video capture when you set this parameter to a number >= 0. Setting this parameter to a value < 0 indicates video file playback. The default value is -1.

rate: 
the capture frame rate, by default 25 frames/sec

width: 
height: 
Width and height of the video capture in pixels, default value 0.
In Windows you must specify these values and they must match the capture device's capabilities. 
For example, if you have a webcam that can capture at 160x120, 320x240 or 640x480, 
you must specify one of these pairs of values or the opening of the video source will fail.
In Linux, default values are provided by the VideoForLinux driver if you don't specify width and height.
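
A short sketch of opening a live capture instead of a file (the device string, index, rate and size are illustrative; on Windows the width/height must match the camera's capabilities):

  # capture device 0 at 25 fps, 320x240, VideoForLinux in PAL (Linux form of the file parameter)
  GameLogic.vidSrc = VideoTexture.VideoFFmpeg('v4l:pal', 0, 25, 320, 240)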

Simple example
**************
1. Texture definition script:

import VideoTexture

contr = GameLogic.getCurrentController()
obj = contr.getOwner()
if not hasattr(GameLogic, 'video'):
	matID = VideoTexture.materialID(obj, 'MAVideoMat')
	GameLogic.video = VideoTexture.Texture(obj, matID)
	GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
	# Streaming is also possible:
	#GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
	GameLogic.vidSrc.repeat = -1
	# If the video dimensions are not a power of 2, scaling must be done before
	# sending the texture to the GPU. This is done by default with gluScaleImage()
	# but you can also use a faster, but less precise, scaling by setting scale
	# to True. Best approach is to convert the video offline and set the dimensions right.
	GameLogic.vidSrc.scale = True
	# FFmpeg always delivers the video image upside down, so flipping is enabled automatically
	#GameLogic.vidSrc.flip = True

if contr.getSensors()[0].isPositive():
	GameLogic.video.source = GameLogic.vidSrc
	GameLogic.vidSrc.play()


2. Texture refresh script:

obj = GameLogic.getCurrentController().getOwner()
if hasattr(GameLogic, 'video'):
  GameLogic.video.refresh(True)

You can download this demo here: 
http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg
2008-10-31 22:35:52 +00:00