Martin Sell (thanks!) reported that threading via scripts was not working in the game engine with Blender 2.46 and later. My fault: to make pynodes work properly with more than one thread, I disabled Python's "check interval", which prevented threads created via scripts from getting time to run.
Now the check interval is only disabled (set to max int) during rendering. This is still experimental: I added the calls in BPY_do_all_scripts, since it's called in BIF_do_render, but I will probably move the code to its own function after more testing & feedback.
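For illustration only, here is a rough Python-level sketch of the mechanism described above (the real change lives in the C code around BPY_do_all_scripts / BIF_do_render; values and placement are illustrative):

    # Python 2.x-era sketch: stop thread switching for the duration of a render,
    # then restore the old check interval so script-spawned threads run again.
    import sys

    old_interval = sys.getcheckinterval()
    sys.setcheckinterval(sys.maxint)        # "max int": effectively no switching
    try:
        pass                                # ... rendering with pynodes here ...
    finally:
        sys.setcheckinterval(old_interval)  # script threads get time again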
This meant an error in a script could be reported on a different line or in a different script file, which made it quite hard to trace the problem. There were also places where invalid pointers could be used because of this.
The whole game engine pyapi probably needs to have these checks added.
nurbs/curves/text disappears.
This also removes the "vertex arrays" option and enables it always for OpenGL versions >= 1.1 - there's no need to have an option that makes things render faster disabled by default, and it should also work stably now.
* When all the action-channels for a group are hidden (i.e. their related bones are not visible), the group in question is also not drawn. This helps reduce clutter. (slikdigit funboard request)
* When a group has no channels belonging to it, the expand icon/button isn't drawn for that group.
Also checked all other uses of text->lines.first to make sure the assumption isn't made elsewhere.
Added 2 more checks for text->lines.first when converting text buffer to objects.
* Action FrameProp was checking if the string was true, not that it contained any text.
* Added GameObject.getVisible() since there is already a setVisible().
* Added GameObject.getPropertyNames(). Needed in Apricot so Franky can collect and throw items in the level without having the names defined elsewhere or modifying his game logic, which is stored in a separate blend file.
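A rough, hypothetical example of how getPropertyNames() might be used from a game script (the sensor name "collide" and the property "pickup" are invented; controller/sensor calls follow the 2.4x API):

    import GameLogic

    cont = GameLogic.getCurrentController()
    own = cont.getOwner()                  # e.g. Franky
    sensor = cont.getSensor("collide")     # a touch/collision sensor

    if sensor.isPositive():
        item = sensor.getHitObject()
        if item is not None and "pickup" in item.getPropertyNames():
            # react based on the properties the item carries, instead of
            # hard-coding object names in the game-logic blend file
            print(item.getPropertyNames())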
Edited Game engine docs to note that the matrix will need to be transposed if used with Mathutils.Matrix()
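A hedged illustration of that doc note, assuming Mathutils can be imported from a game script (names follow the 2.4x API):

    import GameLogic
    import Mathutils

    own = GameLogic.getCurrentController().getOwner()
    rows = own.getOrientation()                        # 3x3 nested list from the BGE
    mat = Mathutils.Matrix(rows[0], rows[1], rows[2])  # build a Mathutils matrix
    mat.transpose()                                    # transpose as noted in the docs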
Edited "Collision" button since ray-sensor also uses collision.
To take advantage of this feature, you must have a mesh with
relative shape keys and shape Ipo curves with drivers referring
to bones of the mesh's parent armature.
The BGE will automatically detect the dependency between the
shape keys and the armature and execute the Ipo drivers during
the rendering of the armature actions.
This technique is used to make the armature action more natural: the shape keys compensate in places where the armature deformation is ugly, and the drivers make sure that the shape correction is synchronized with the bone position.
Note: this is not compatible with shape actions; Blender does not allow having Shape Ipo Curves and Shape actions at the same time.
This patch introduces two options for the motion actuator:
damping: the number of frames needed to reach the target velocity. It takes into account the startup velocity in the target velocity direction and adds a 1/damping fraction of the target velocity each frame until the full velocity is reached. Works only with linear and angular velocity. It will be extended to the delta and force motion methods in a future release.
clamping: apply the force and torque as long as the target velocity is not reached. If this option is set, the velocity specified in linV or angV is not applied to the object but used as the target velocity. You should also specify a force in the force or torque field: the force will be applied as long as the velocity along the axis of the vector set in linV or angV has not been reached. Works best in low-friction environments.
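For clarity, a minimal 1D sketch of the two rules (illustration only, not the actuator's actual source):

    def damped_velocity(current, target, damping):
        """Add a 1/damping fraction of the target velocity each logic frame."""
        if damping <= 1:
            return target
        new = current + target / float(damping)
        # stop once the full target velocity is reached (no overshoot)
        if (target >= 0 and new > target) or (target < 0 and new < target):
            new = target
        return new

    def clamped_force(velocity_along_axis, target_speed, force):
        """Apply the configured force only while the target velocity is not reached."""
        return force if abs(velocity_along_axis) < abs(target_speed) else 0.0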
* "Nothing indicated" is not helpful, and very annoying with occluded geometry on high-poly meshes; sometimes the selection doesn't work 100% of the time and the menu pops up over what you want to select.
* The "No (correct) camera" error doesn't seem to be needed; it has been there since rev 2.
NAND controller is an inverted AND controller: the output is 1 if any of the inputs is 0.
NOR controller is an inverted OR controller: the output is 0 if any of the inputs is 1.
XOR controller is an exclusive OR: the output is 1 if and only if one input is 1 and all the other inputs are 0.
XNOR controller is an inverted XOR: the output is 0 if and only if one input is 1 and all the other inputs are 0.
The NAND, NOR and XNOR controllers are very useful for creating complementary outputs to start and stop actuators synchronously.
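An equivalent truth-value sketch of the four controllers (illustration only; 'inputs' stands for the list of boolean sensor outputs wired to the controller):

    def NAND(inputs):
        return not all(inputs)                   # 1 if any input is 0

    def NOR(inputs):
        return not any(inputs)                   # 0 if any input is 1

    def XOR(inputs):
        return sum(1 for i in inputs if i) == 1  # exactly one input is 1

    def XNOR(inputs):
        return not XOR(inputs)                   # inverted XOR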
MSVC project files updated.
The Level option is now available on all sensors but is only implemented on the mouse and keyboard sensors. The purpose of that option is to make the sensor react on level rather than on edge (the default). It's only applicable to the state engine system when there is a state transition: the sensor will generate a pulse if the condition is met from the start of the state. Normally, the keyboard sensor generates a pulse only when the key is pressed, not when the key is already pressed. This patch makes it possible to select this behavior.
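A rough sketch of the difference between edge and level triggering (illustration only, not the sensor source; 'pressed' is the raw condition sampled once per logic frame):

    class EdgeSensor:
        def __init__(self):
            self.prev = False
        def evaluate(self, pressed):
            fired = pressed and not self.prev   # pulse only on the 0 -> 1 transition
            self.prev = pressed
            return fired

    class LevelSensor:
        def evaluate(self, pressed):
            # also pulses for a key that was already held down
            # when the state became active
            return pressed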
The second part of the patch corrects the reset method for sensors
with inverted output.
patch from Masaru Nemoto (mnemoto)
Made some modifications to the patch: use reduce() to get the total face verts, speed up the face vcol looping a bit, and don't write vcol alpha since it's used internally by brushes and has no useful meaning.
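For reference, a hypothetical one-liner matching the reduce() note, using the 2.4x Mesh API (the mesh name is invented):

    import Blender

    me = Blender.Mesh.Get("SomeMesh")
    total_face_verts = reduce(lambda n, f: n + len(f.verts), me.faces, 0)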
Snap to edges and vertices without having to go through faces.
This means you can import floor plans and use the edges as snapping guides and other sort of fun things.
The bounding box test still needs padding though.
AGAIN, PLEASE USE TABS - lost quite some time on mixed tab/space adjustments alone.
Other than that, the patch is very useful ;)
---Text from patch submission ---
Using a slightly revised BPy_Armature, this script takes any non-armature object type and creates an Action that keys
the object location (by default, for every frame). If it is an Armature, it goes into each bone and keys the locrot
of the bone. You can now edit the armature, but the motions still rotate the bones. This enables the next step, re-targeting,
which changes bone lengths to fit a mesh. High-level, we are working toward:
1. import mocap (bvh or c3d)
2. bake to make an action library (using this script)
3. re-target and use the actions to drive/deform any character mesh (theeth)
c3d importer for motion capture data
This could do with some improvements, but for now it's acceptable.
- Note, could people please not mix tabs and spaces.
-Text copied from the patch submission-
The c3d_import script in 2.46 was able to import a mocap cloud for some c3d files. I have improved it:
Version History:
0.4: PERIN Released under Blender Artistic Licence
0.5: WICKES used marker names, fixed 2.45 deprecated call
0.6: WICKES creates armature for each subject
0.7: WICKES constrains armature to follow the empties (markers). Verified for shake hands s
0.8: WICKES resolved DEC support issue
see also http://wiki.blender.org/index.php/Tutorials%5CMoCap-Section_3 for how this program gets the mocap data into
Blender and creates an armature, like the BVH script does.
I'd like someone to test and verify, and if accepted, replace the current c3d_import.py
--- See patch url for example files http://projects.blender.org/tracker/index.php?func=detail&aid=14392&group_id=9&atid=127
Added serious deinterlacing to movies opened using ffmpeg.
(Other video decoders to be done)
Rationale: deinterlacing, if done seriously, _has_ to be done in YUV space. Since the internal interface first converts data to RGB, we are pretty much lost (and fall back to IMB_filtery in that case).
This patch introduces a simple state engine system with the logic bricks. This system features full
backward compatibility, multiple active states, multiple state transitions, automatic disabling of
sensor and actuators, full GUI support and selective display of sensors and actuators.
Note: Python API is available but not documented yet. It will be added asap.
State internals
===============
The state system is object based. The current state mask is stored in the object as a 32 bit value;
each bit set in the mask is an active state. The controllers have a state mask too but only one bit
can be set: a controller belongs to a single state. The game engine will only execute controllers
that belong to active states. Sensors and actuators don't have a state mask but are effectively
attached to states via their links to the controllers. Sensors and actuators can be connected to more
than one state. When a controller becomes inactive because of a state change, its links to sensors
and actuators are temporarily broken (until the state becomes active again). If an actuator gets isolated,
i.e. all the links to controllers are broken, it is automatically disabled. If a sensor gets isolated, the game engine will stop calling it to save CPU. It will also reset the sensor's internal state so that it can react as if the game had just started when it gets reconnected to an active controller. For example, an Always sensor in no-pulse mode that is connected to a single state (i.e. connected to one or more
controllers of a single state) will generate a pulse each time the state becomes active. This feature is
not available on all sensors, see the notes below.
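A minimal model of this bookkeeping (illustration only, not engine code): masks are plain 32-bit integers and each controller owns exactly one bit.

    object_state_mask = (1 << 0) | (1 << 2)          # states 1 and 3 are active

    controllers = [
        {"name": "walk",  "state_bit": 1 << 0},
        {"name": "jump",  "state_bit": 1 << 1},      # inactive: its links to
        {"name": "shoot", "state_bit": 1 << 2},      # sensors/actuators are broken
    ]

    # only controllers whose state bit is part of the active mask get executed
    active = [c["name"] for c in controllers if c["state_bit"] & object_state_mask]
    print(active)                                    # ['walk', 'shoot']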
GUI
===
This system is fully configurable through the GUI: the object state mask is visible under the object bar in the controller's column as an array of buttons, just like the 3D view layer mask.
Click on a state bit to only display the controllers of that state. You can select more than one state
with SHIFT-click. The All button sets all the bits so that you can see all the controllers of the object.
The Ini button sets the state mask back to the object's default state. You can change the default state of an object by first selecting the desired state mask and then storing it using the menu under the State button. If you define a default state mask, it will be loaded into the object state mask when you load the blend file or when you run the game under the blenderplayer. However, when you run the game under Blender, the currently selected state mask will be used as the startup state for the object. This allows you to test specific states during game design.
Controllers display the state they belong to with a new button in the controller header. When you add a new controller, it is added by default to the lowest enabled state. You can change the controller's state by clicking on the button and selecting another state. If more than one state is enabled in the object state mask, controllers are grouped by state for more readability.
The new Sta button in the sensor and actuator column header allows you to display only the sensors and
actuators that are linked to visible controllers.
A new state actuator is available to modify the state during the game. It defines a bit mask and the operation to apply to the current object state mask (a small sketch follows the list):
Cpy: the bit mask is copied to the object state mask.
Add: the bits that are set in the bit mask will be turned on in the object state mask.
Sub: the bits that are set in the bit mask will be turned off in the object state mask.
Inv: the bits that are set in the bit mask will be inverted in the object state mask.
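Sketch of the four operations on 32-bit masks, following the descriptions above (illustration only):

    MASK32 = 0xFFFFFFFF

    def cpy(state, bitmask): return bitmask & MASK32            # replace
    def add(state, bitmask): return (state | bitmask) & MASK32  # turn bits on
    def sub(state, bitmask): return state & ~bitmask & MASK32   # turn bits off
    def inv(state, bitmask): return (state ^ bitmask) & MASK32  # invert bits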
Notes
=====
- Although states have no name, a simple convention consists of using the name of the first controller
of the state as the state name. The GUI will support that convention by displaying as a hint the name
of the first controller of the state when you move the mouse over a state bit of the object state mask
or of the state actuator bit mask.
- Each object has a state mask and each object can have a state engine but if several objects are
part of a logical group, it is recommended to put the state engine only in the main object and to
link the controllers of that object to the sensors and actuators of the different objects.
- When loading an old blend file, the state masks of all objects and controllers are initialized to 1 so that all the controllers belong to this single state. This ensures backward compatibility with existing games.
- When the state actuator is activated at the same time as other actuators, these actuators are
guaranteed to execute before being eventually disabled due to the state change. This is useful for
example to send a message or update a property at the time of changing the state.
- Sensors that depend on an underlying resource won't reset fully when they are isolated. By the time they are activated again, they will behave as follows:
* keyboard sensor: keys already pressed won't be detected. The keyboard sensor is only sensitive to new key presses.
* collision sensor: objects already colliding won't be detected. Only new collisions are
detected.
* near and radar sensor: same as collision sensor.