Commit Graph

17 Commits

Author SHA1 Message Date
Campbell Barton
4a04f72069 remove $Id: tags after discussion on the mailing list: http://markmail.org/message/fp7ozcywxum3ar7n 2011-10-23 17:52:20 +00:00
Peter Schlaile
42121590f4 == FFMPEG ==
Added a central compatibility header file, which enables Blender to compile
against very old ffmpeg versions as well as very new versions using the
*NEW* API. (Old API functions are simulated using macros and inline functions.)

Added a whole lot of additional checks, tested against 6 different versions
down the timeline; hopefully, all is finally well now.
2011-05-27 23:33:40 +00:00
Peter Schlaile
0381c444fd == FFMPEG ==
Fixed and added additional ffmpeg cruft checking. Oh dear.
2011-05-27 07:47:42 +00:00
Peter Schlaile
4b9a63c6d3 == FFMPEG ==
* removed a lot of old cruft code for ancient ffmpeg versions
* made it compile again against latest ffmpeg / libav GIT
  (also shouldn't break distro ffmpegs, since those API changes
  were introduced over a year ago. If it nevertheless breaks,
  please send me an email)
2011-05-26 21:57:02 +00:00
Nathan Letwory
1f4fc992ef doxygen: bge scenegraph and videotexture 2011-02-22 19:30:37 +00:00
Benoit Bolsee
a99d584008 VideoTexture: fix video capture lagging when CPU is busy. This problem was caused by special frame handling that was appropriate for video streaming but not for video capture: drift compensation and no frame skipping. Disabled that for video capture to take into account the realtime nature of video. 2010-03-28 17:50:45 +00:00
Benoit Bolsee
37b9c9fe4d VideoTexture: improvements to image data access API.
- Use BGL buffer instead of string for image data.
- Add buffer interface to image source.
- Allow customization of pixel format.
- Add valid property to check if the image data is available.

The image property of all Image source objects will now
return a BGL 'buffer' object. Previously it was returning
a string, which did not work at all with Python 3.1.
The BGL buffer type allows sequence access to bytes and
is directly usable in BGL OpenGL wrapper functions.
The buffer is formatted as a one-dimensional array of bytes
with 4 bytes per pixel in RGBA order.

BGL buffers will also be accepted in the ImageBuff load()
and plot() functions.

It is possible to customize the pixel format by using
the VideoTexture.imageToArray(image, mode) function:
the first argument is an Image source object, the second
optional argument is a format string using the R, G, B,
A, 0 and 1 characters. For example "BGR" means that each
pixel will be 3 bytes, corresponding to the Blue, Green
and Red channels in that order. Use 0 for a fixed hex 00
value, 1 for hex FF. The default mode is "RGBA".
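
A minimal usage sketch (assuming an existing image source stored
as GameLogic.vidSrc; that name is only illustrative):

# 3 bytes per pixel, in Blue, Green, Red order
bgr = VideoTexture.imageToArray(GameLogic.vidSrc, "BGR")
# default mode "RGBA", 4 bytes per pixel
rgba = VideoTexture.imageToArray(GameLogic.vidSrc)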

All Image source objects now support the buffer interface,
which allows creating memoryview objects for direct access
to the image's internal buffer without a memory copy. The buffer
format is a one-dimensional array of bytes with 4 bytes per
pixel in RGBA order. The buffer is writable, which allows
custom modification of the image data.

v = memoryview(source)

A bug in the Python 3.1 buffer API will cause a crash if
the memoryview object cannot be created. Therefore, you
must always check first that image data is available
before creating a memoryview object. Use the new valid
attribute for that:

if source.valid:
    v = memoryview(source)
    ...	

Note: the BGL buffer object itself does not yet support
the buffer interface.

Note: the valid attribute makes sense only if you use
image source in conjunction with texture object like this:

# refresh texture but keep image data in memory
texture.refresh(False)
if texture.source.valid:
    v = memoryview(texture.source)
    # process image
    ...
    # invalidate image for next texture refresh
    texture.source.refresh()

Limitation: While memoryview objects exist, the image cannot be
resized. Resizing occurs with ImageViewport objects when the
viewport size is changed, or with ImageFFmpeg when a new image
is reloaded, for example. Any attempt to resize will cause a
runtime error. Delete the memoryview objects if you want to
resize an image source object.
2010-02-21 22:20:00 +00:00
Benoit Bolsee
a8a99a628f BGE: add audio/video synchronization capability to VideoTexture
Add an optional parameter to the VideoTexture.Texture refresh() method
to specify the timestamp (in seconds from the start of the movie) of the
frame to be loaded. This value is passed down to the image source and,
for the VideoFFmpeg source, it is used instead of the current time to
load the frame from the video file.

When combined with an audio actuator, it can be used to synchronize
the sound and the image: specify the same video file in the sound
actuator and use the KX_SoundActuator time attribute as the timestamp
passed to refresh(); the frame corresponding to the sound will be loaded:

GameLogic.video.refresh(True, soundAct.time)
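
A slightly fuller sketch of the same setup, assuming the controller is
connected to the sound actuator that plays the audio track of the video
(the actuator lookup is only illustrative):

contr = GameLogic.getCurrentController()
# first connected actuator, assumed to be the KX_SoundActuator
soundAct = contr.getActuators()[0]
# load the video frame that matches the current audio position
GameLogic.video.refresh(True, soundAct.time)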
2010-02-07 19:18:00 +00:00
Nathan Letwory
ca54ea078e * due to the setup of headers in mingw 4.4.0, includes could mess up. Making sure that windows.h isn't included where it shouldn't be (outside of __cplusplus) 2009-10-02 15:51:25 +00:00
Benoit Bolsee
328d3128a5 BGE VideoTexture: VideoFFmpeg was missing a rewind function: rename stop() to pause() and add stop() that will also reset the frame counter. 2009-05-26 18:37:46 +00:00
Peter Schlaile
615c5232c7 == FFMPEG ==
Updated ffmpeg to release version 0.5
updated x264 to today's daily build
thanks to ben2610 for first patches (but you got hddaudio.c wrong :)
2009-03-22 19:19:21 +00:00
Benoit Bolsee
ab8e9ba3dd VideoTexture: reactivate VideoTexture for scons/cmake/makefile compilation systems, fix video streaming, fix camera support in Linux, add multi-thread cache service, fix crash when a VideoFFmpeg object could not be created.
The multi-thread cache service is activated only on multi-core processors.
It consists of loading, decoding and caching the video frames in a
separate thread. The cache size is 5 decoded frames and 30 raw frames.
Note that the opening of a video file/stream/camera is not multi-threaded:
you will still experience a delay at VideoFFmpeg object creation.
Processing of the video frame (resize, loading to texture) is still done
in the main thread. Caching is automatically enabled for video files,
video streaming and video cameras.

Video streaming now works correctly: the video frames are loaded
at the correct rate. Network delays and frequency drifts are automatically
compensated.
Note: an http video source is always treated as a streaming source,
even though the http protocol allows seeking. For the user this means that
a start/stop range cannot be defined and the video cannot be restarted
except by reopening the source. Pause/play is however possible.

Video camera is now correctly handled on Linux: it will not slow down the BGE.
A video camera is treated as a streaming source.
2009-03-05 15:16:43 +00:00
Benoit Bolsee
2de476c88f VideoTexture: Preserve alpha channel if present in video, images and sequences. Better detection of end of video. 2008-11-09 21:42:30 +00:00
Benoit Bolsee
ce1625ebc0 VideoTexture: new VideoTexture.ImageFFmpeg to load and reload images.
The FFmpeg library can load image files. Although it is possible
to load images using the VideoFFmpeg class, it is not very efficient.
The new VideoTexture.ImageFFmpeg class is dedicated to image management.

Constructor:
-----------
VideoTexture.ImageFFmpeg('image_file_name')
  Opens the file but does not load the texture yet.
  The file name can also be a network address. It can also be a video
  file name; in that case only the first image is loaded.

Methods:
-------
refresh(True)
  Loads the image into the texture.
  You only need to call it once; the file is automatically closed after
  that and calling refresh() again will have no effect.

reload('new_file_name')
  Reloads the image (if new_file_name is omitted) or loads a new image.
  The file is opened but the texture is not updated yet; you need
  to call refresh() once to load the texture.

Attributes:
----------
status
  returns the image status:
    2 : file opened, texture not loaded
    3 : file closed, texture loaded

image
  returns the image data as a string of RGBA pixels

size
  returns the image size [x,y]

scale
  get/set the scale flag. 
  If the scale flag is False, the image is rescaled to the texture format
  using the gluScaleImage() function, which is slow but gives good quality.
  If the scale flag is True, the image is rescaled using a fast but
  less accurate algorithm.

flip
  get/set Y-flip flag.
  Set to True by default as FFmpeg always provides the image upside down

filter
  get/set filter(s) on the image.

Example:
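A minimal sketch based on the API described above (the file name
'image.png' and material name 'MAImageMat' are only illustrative):

import VideoTexture

contr = GameLogic.getCurrentController()
obj = contr.getOwner()
if not hasattr(GameLogic, 'video'):
	matID = VideoTexture.materialID(obj, 'MAImageMat')
	GameLogic.video = VideoTexture.Texture(obj, matID)
	# open the image file; the texture is not loaded yet
	GameLogic.video.source = VideoTexture.ImageFFmpeg('image.png')
	# load the image into the texture; the file is closed afterwards
	GameLogic.video.refresh(True)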
2008-11-05 21:53:22 +00:00
Benoit Bolsee
1886b7bf52 VideoTexture: fix RGB/BGR confusion, make code compatible with big endian CPU, add RGBA source filter. 2008-11-04 12:04:59 +00:00
Benoit Bolsee
e6a2ab319f VideoTexture: AVFormatContext::pb is not a pointer for avformat library versions older than 52 (Linux uses 51) 2008-11-01 17:15:17 +00:00
Benoit Bolsee
a8c4eef326 VideoTexture module.
The only compilation system that works for sure is the MSVC project files. I've tried my best to
update the other compilation systems but I count on the community to check and fix them.

This is Zdeno Miklas' video texture plugin ported to trunk.
The original plugin API is maintained (it can be found here: http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html)
EXCEPT for the following:

The module name is changed to VideoTexture (instead of blendVideoTex).

A new (and only) video source is now available: VideoFFmpeg()
You must pass 1 to 4 arguments when you create it (you can use named arguments):

VideoFFmpeg(file) : play a video file
VideoFFmpeg(file, capture, rate, width, height) : start a live video capture

file:
In the first form, file is a video file name, relative to the startup directory.
It can also be a URL; FFmpeg will happily stream a video from a network source.
In the second form, file is empty or is a hint for the format of the video capture.
On Windows, file is ignored and should be empty or not specified.
On Linux, ffmpeg supports two types of device: VideoForLinux and DV1394.
The user specifies the type of device with the file parameter:
   [<device_type>][:<standard>]
   <device_type> : 'v4l' for VideoForLinux, 'dv1394' for DV1394; default to 'v4l'
   <standard>    : 'pal', 'secam' or 'ntsc', default to 'ntsc'
The driver name is constructed automatically from the device type:
   v4l   : /dev/video<capture>
   dv1394: /dev/dv1394/<capture>
If you have a different driver name, you can specify it explicitly
instead of the device type. Examples of valid file parameters:
   /dev/v4l/video0:pal
   /dev/ieee1394/1:ntsc
   dv1394:ntsc
   v4l:pal
   :secam

capture: 
Defines the index number of the capture source, starting from 0. The first capture device is always 0.
The VideoTexture module knows that you want to start a live video capture when you set this parameter to a number >= 0. Setting this parameter to a number < 0 indicates video file playback. The default value is -1.

rate: 
The capture frame rate, 25 frames/sec by default.

width: 
height: 
Width and height of the video capture in pixels, default value 0.
On Windows you must specify these values and they must match the capture device capabilities.
For example, if you have a webcam that can capture at 160x120, 320x240 or 640x480,
you must specify one of these pairs of values or the opening of the video source will fail.
On Linux, default values are provided by the VideoForLinux driver if you don't specify width and height.
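
For illustration, a live capture could be opened like this (the device string,
rate and dimensions are plausible values only, not taken from the original message):

# second form: VideoForLinux device 0, PAL standard, 25 fps, 640x480
GameLogic.vidSrc = VideoTexture.VideoFFmpeg('v4l:pal', 0, 25, 640, 480)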

Simple example
**************
1. Texture definition script:

import VideoTexture

contr = GameLogic.getCurrentController()
obj = contr.getOwner()
if not hasattr(GameLogic, 'video'):
	matID = VideoTexture.materialID(obj, 'MAVideoMat')
	GameLogic.video = VideoTexture.Texture(obj, matID)
	GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
	# Streaming is also possible:
	#GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
	GameLogic.vidSrc.repeat = -1
	# If the video dimensions are not a power of 2, scaling must be done before
	# sending the texture to the GPU. This is done by default with gluScaleImage()
	# but you can also use a faster, but less precise, scaling by setting scale
	# to True. Best approach is to convert the video offline and set the dimensions right.
	GameLogic.vidSrc.scale = True
	# FFmpeg always delivers the video image upside down, so flipping is enabled automatically
	#GameLogic.vidSrc.flip = True

if contr.getSensors()[0].isPositive():
	GameLogic.video.source = GameLogic.vidSrc
	GameLogic.vidSrc.play()


2. Texture refresh script:

obj = GameLogic.getCurrentController().getOwner()
if hasattr(GameLogic, 'video'):
  GameLogic.video.refresh(True)

You can download this demo here: 
http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg
2008-10-31 22:35:52 +00:00