Blender's operators are tools designed for users to access; the fact that Python can call them too is very useful, but operators have limitations that can make them cumbersome to script.
When calling an operator gives an error like this:
>>> bpy.ops.action.clean(threshold=0.001)
RuntimeError: Operator bpy.ops.action.clean.poll() failed, context is incorrect
This raises the question: what is the correct context?
Typically operators check for the active area type, a selection or active object they can operate on, but some operators are more picky about when they run.
Unfortunately, if you're still stuck, the only way to **really** know what's going on is to read the source code for the poll function and see what it's checking.
For python operators it's not so hard to find the source since it's included with Blender and the source file/line is included in the operator reference docs.
Downloading and searching the C code isn't so simple, especially if you're not familiar with the C language, but by searching for the operator name or description you should be able to find the poll function without any knowledge of C.
.. note::

   Blender does have the functionality for poll functions to describe why they fail, but it's currently not used much. If you're interested in helping to improve the API, feel free to add calls to ``CTX_wm_operator_poll_msg_set`` where it's not obvious why poll fails.
>>> bpy.ops.gpencil.draw()
RuntimeError: Operator bpy.ops.gpencil.draw.poll() Failed to find Grease Pencil data to draw into
Certain operators in Blender are only intended for use in a specific context; some operators, for example, are only called from the properties window where they check the current material, modifier or constraint.
Examples of this are:
* :mod:`bpy.ops.texture.slot_move`
* :mod:`bpy.ops.constraint.limitdistance_reset`
* :mod:`bpy.ops.object.modifier_copy`
* :mod:`bpy.ops.buttons.file_browse`
Another possibility is that you are the first person to attempt to use this operator in a script, and some modifications need to be made for it to run in a different context. If the operator should logically be able to run but fails when accessed from a script, it should be reported to the bug tracker.
After changing an object's :class:`bpy.types.Object.location` you may want to access its transformation right away via :class:`bpy.types.Object.matrix_world`, but this doesn't work as you might expect.
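A minimal sketch of the gotcha, assuming the active object is the one being moved; the :class:`bpy.types.Scene.update` call is what this generation of the API uses to flush pending transform changes (newer versions do this through the view layer instead):

.. code-block:: python

   import bpy

   obj = bpy.context.object
   obj.location.x += 1.0

   # matrix_world has not been recalculated yet, so it still holds the old transform.
   print(obj.matrix_world.translation)

   # Force a dependency update so matrix_world reflects the new location.
   bpy.context.scene.update()
   print(obj.matrix_world.translation)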
A related question is whether a script can force Blender to redraw while it runs. The official answer to this is no, or... *"You don't want to do that"*.
To give some background on the topic...
While a script executes, Blender waits for it to finish and is effectively locked until it's done; while in this state Blender won't redraw or respond to user input.
Normally this is not such a problem because scripts distributed with Blender tend not to run for an extended period of time. Nevertheless, scripts *can* take ages to execute and it's nice to see what's going on in the viewport.
Tools that lock Blender in a loop and redraw are highly discouraged since they conflict with Blender's ability to run multiple operators at once and to update different parts of the interface as the tool runs.
So the solution here is to write a **modal** operator, that is, an operator which defines a ``modal()`` function. See the modal operator template in the text editor.
Modal operators execute on user input or set up their own timers to run frequently; they can handle events themselves or pass them through to be handled by the keymap or other modal operators.
Transform, Painting, Fly-Mode and File-Select are examples of modal operators.
Writing modal operators takes more effort than a simple ``for`` loop that happens to redraw, but the result is more flexible and integrates better with Blender's design.
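As a rough illustration only (the class name and idname below are made up for this sketch), a modal operator has this general shape:

.. code-block:: python

   import bpy

   class SimpleModalOperator(bpy.types.Operator):
       """Move the active object with the mouse until cancelled"""
       bl_idname = "object.simple_modal"
       bl_label = "Simple Modal Operator"

       def modal(self, context, event):
           if event.type == 'MOUSEMOVE':
               # React to events while Blender keeps redrawing normally.
               context.object.location.x = event.mouse_x * 0.01
           elif event.type in {'RIGHTMOUSE', 'ESC'}:
               return {'CANCELLED'}
           return {'RUNNING_MODAL'}

       def invoke(self, context, event):
           # Register this operator to receive events until it finishes.
           context.window_manager.modal_handler_add(self)
           return {'RUNNING_MODAL'}

   bpy.utils.register_class(SimpleModalOperator)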
If you insist, yes, it's possible, but scripts that use this hack won't be considered for inclusion in Blender, any issues that arise from using it won't be considered bugs, and it is not guaranteed to work in future releases.
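The hack usually referred to here is abusing the window-manager's redraw-timer operator to force a redraw from inside a running script; it is shown only so you know what to avoid:

.. code-block:: python

   # Forces a redraw mid-script. Strongly discouraged, unsupported.
   bpy.ops.wm.redraw_timer(type='DRAW_WIN_SWAP', iterations=1)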
When working with mesh data you may run into the problem where a script fails to run as expected in edit-mode. This is caused by edit-mode having its own data which is only written back to the mesh when exiting edit-mode.
A common example is that an exporter may access a mesh through ``obj.data`` (a :class:`bpy.types.Mesh`) while the user is in edit-mode, where the mesh data is available but out of sync with the edit mesh.
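One way to guard against this in a script, sketched here with the :mod:`bmesh` module (the active object and the printed counts are just for illustration):

.. code-block:: python

   import bpy
   import bmesh

   obj = bpy.context.object

   if obj.mode == 'EDIT':
       # obj.data is out of sync in edit-mode; read the edit-mesh instead.
       bm = bmesh.from_edit_mesh(obj.data)
       print("edit-mesh verts:", len(bm.verts))
   else:
       print("mesh verts:", len(obj.data.vertices))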
Since meshes support ngons, there are now three data structures relating to faces:

* :class:`bpy.types.MeshPolygon` - the data structure which now stores faces in object mode (access as ``mesh.polygons`` rather than ``mesh.faces``).
* :class:`bpy.types.MeshTessFace` - the result of triangulating (tessellating) polygons, and the main way to access faces in 2.62 or older (access as ``mesh.tessfaces``).
* bmesh faces (:class:`bmesh.types.BMFace`) - the flexible face type used by the :mod:`bmesh` api for creating and editing geometry.
The :mod:`bmesh` api is a completely separate api from :mod:`bpy`; typically you would use one or the other based on the level of editing needed, not simply as a different way to access faces.
Creating
--------

* polygons are the most efficient way to create faces, but the data structure is *very* rigid and inflexible; you must have all your vertices and faces ready and create them all at once. This is further complicated by the fact that each polygon does not store its own verts (as tessfaces do); instead it references an index and size into :class:`bpy.types.Mesh.loops`, which is a fixed array too.
* tessfaces ideally should not be used for creating faces since they are really only a tessellation cache of polygons; however, for scripts upgrading from 2.62 this is by far the most straightforward option. This works by creating tessfaces and, when finished, converting them into polygons by calling :class:`bpy.types.Mesh.update`. The obvious limitation is that ngons can't be created this way.
* bmesh-faces are most likely the easiest way for new scripts to create faces, since faces can be added one at a time and the api has features intended for mesh manipulation. While :class:`bmesh.types.BMesh` uses more memory, this can be managed by only operating on one mesh at a time.
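For example, a minimal sketch of creating a single face with :mod:`bmesh` and writing it into a new mesh (the mesh name and coordinates are arbitrary):

.. code-block:: python

   import bpy
   import bmesh

   bm = bmesh.new()

   # Faces can be added one at a time, unlike polygons.
   verts = [bm.verts.new(co) for co in ((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0))]
   bm.faces.new(verts)

   mesh = bpy.data.meshes.new("BMeshExample")
   bm.to_mesh(mesh)
   bm.free()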
Editing
-------
Editing is where the 3 data types vary most.
* polygons are very limited for editing; changing materials and options like smooth shading works, but for anything else they are too inflexible and are only intended for storage.
* tessfaces should not be used for editing geometry because doing so will cause existing ngons to be tessellated.
* bmesh-faces are by far the best way to manipulate geometry.
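A short sketch of editing existing geometry through :mod:`bmesh` in object mode (the edit itself is arbitrary):

.. code-block:: python

   import bpy
   import bmesh

   mesh = bpy.context.object.data

   bm = bmesh.new()
   bm.from_mesh(mesh)

   # Example manipulation: move every vertex up along Z.
   for vert in bm.verts:
       vert.co.z += 1.0

   bm.to_mesh(mesh)
   bm.free()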
Exporting
---------

* polygons are the most direct and efficient way to export, provided they convert into the output format easily enough.
* tessfaces work well for exporting to formats which don't support ngons; in fact this is the only place where their use is encouraged.
* bmesh-faces can work for exporting too, but may not be necessary if polygons can be used, since bmesh adds some overhead as it is not the native storage format in object mode.
Upgrading Importers from 2.62
-----------------------------
Importers can be upgraded with only minor changes: write face data into ``mesh.tessfaces`` instead of ``mesh.faces``, and once the data is created call :class:`bpy.types.Mesh.update` to convert the tessfaces into polygons.
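A sketch of that upgrade path, assuming the importer already has a flat list of vertex coordinates and face indices (triangles padded to four indices, as the legacy face layout expects):

.. code-block:: python

   import bpy

   # Hypothetical importer output: one quad.
   vert_coords = [0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  1.0, 1.0, 0.0,  0.0, 1.0, 0.0]
   face_indices = [0, 1, 2, 3]

   mesh = bpy.data.meshes.new("Imported")
   mesh.vertices.add(len(vert_coords) // 3)
   mesh.vertices.foreach_set("co", vert_coords)
   mesh.tessfaces.add(len(face_indices) // 4)
   mesh.tessfaces.foreach_set("vertices_raw", face_indices)

   # Converts the tessfaces written above into polygons & loops.
   mesh.update(calc_edges=True)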
Upgrading Exporters from 2.62
-----------------------------
For exporters the most direct way to upgrade is to use tessfaces as with importing; however, it's important to know that tessfaces may **not** exist for a mesh: the array will be empty, as if the mesh had no faces.
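So before reading them, make sure the cache has been calculated. A sketch (depending on the exact version this is done with :class:`bpy.types.Mesh.calc_tessface` or an argument to :class:`bpy.types.Mesh.update`):

.. code-block:: python

   import bpy

   mesh = bpy.context.object.data

   # Tessfaces are only a cache and may be empty until requested.
   if not mesh.tessfaces:
       mesh.calc_tessface()

   for face in mesh.tessfaces:
       print(face.vertices[:])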
Armature Bones in Blender have three distinct data structures that contain them. If you are accessing the bones through one of them, you may not have access to the properties you really need.
``bpy.context.object.data.edit_bones`` contains edit bones; to access them you must set the armature to edit-mode first (edit bones do not exist in object or pose mode). Use these to create new bones, set their head/tail or roll, change their parenting relationships to other bones, etc.
Example using :class:`bpy.types.EditBone` in armature editmode:
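A minimal sketch, assuming the active object is an armature (the bone name and coordinates are arbitrary):

.. code-block:: python

   import bpy

   bpy.ops.object.mode_set(mode='EDIT')

   arm = bpy.context.object.data
   ebone = arm.edit_bones.new("MyBone")
   ebone.head = (0.0, 0.0, 0.0)
   ebone.tail = (0.0, 0.0, 1.0)

   # Leaving edit-mode invalidates 'ebone', so don't keep using it after this.
   bpy.ops.object.mode_set(mode='OBJECT')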
``bpy.context.object.data.bones`` contains bones. These *live* in object mode and have various properties you can change; note that the head and tail properties are read-only.
Example using :class:`bpy.types.Bone` in object or pose mode:
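A minimal sketch, assuming a bone named ``"MyBone"`` already exists on the active armature:

.. code-block:: python

   import bpy

   arm = bpy.context.object.data
   bone = arm.bones["MyBone"]

   # head/tail are read-only here, but other settings can be changed.
   print(bone.head, bone.tail)
   bone.use_deform = False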
``bpy.context.object.pose.bones`` contains pose bones. This is where animation data resides, i.e. animatable transformations are applied to pose bones, as are constraints and IK settings.
Examples using :class:`bpy.types.PoseBone` in object or pose mode:
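A minimal sketch, again assuming a bone named ``"MyBone"``:

.. code-block:: python

   import bpy

   pbone = bpy.context.object.pose.bones["MyBone"]

   # Animatable transforms, constraints and IK settings live on the pose bone.
   pbone.location = (0.0, 1.0, 0.0)
   pbone.keyframe_insert("location", frame=1)
   print([con.name for con in pbone.constraints])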
Notice that the pose is accessed from the object rather than the object data; this is why Blender can have two or more objects sharing the same armature in different poses.
Strictly speaking, PoseBones are not bones; they are just the state of the armature, stored in the :class:`bpy.types.Object` rather than the :class:`bpy.types.Armature`. The real bones are, however, accessible from the pose bones via :class:`bpy.types.PoseBone.bone`.
While writing scripts that deal with armatures you may find you have to switch between modes. When doing so, take care when switching out of edit-mode not to keep references to the edit bones or their head/tail vectors. Further access to these will crash Blender, so it's important that the script clearly separates sections of code which operate in different modes.
This is mainly an issue with edit-mode since pose data can be manipulated without having to be in pose mode; for operator access, however, you may still need to enter pose mode.
Data names may not match the assigned values if they exceed the maximum length, are already in use, or are an empty string.
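For example (whether the suffix appears depends on what already exists in the file):

.. code-block:: python

   import bpy

   # If a mesh named "Cube" already exists, Blender renames the new one,
   # typically to "Cube.001", so don't rely on the name you asked for.
   mesh = bpy.data.meshes.new(name="Cube")
   print(mesh.name)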
It's better practice not to reference objects by name at all; once created, you can store the data in a list, a dictionary, on a class, etc. There is rarely a reason to keep searching for the same data by name.
If you do need to use name references, it's best to use a dictionary to maintain a mapping between the names of the imported assets and the newly created data. This way you don't run the risk of referencing existing data from the blend file, or worse, modifying it.
Blender keeps data names unique (:class:`bpy.types.ID.name`), so you can't name two objects, meshes, scenes, etc. the same thing by accident.
However, when linking in library data from another blend file naming collisions can occur, so it's best to avoid referencing data by name at all.
This can be tricky at times, and even Blender doesn't handle it correctly in some cases (when selecting the modifier object, for example, you can't choose between multiple objects with the same name), but it's still good to try to avoid problems in this area.
If you need to select between local and library data, there is a feature in ``bpy.data`` members to allow for this.
.. code-block:: python

   # typical name lookup, could be local or library.
   obj = bpy.data.objects["my_obj"]

   # library object name lookup using a pair, where the second argument
   # is the library path matching bpy.types.Library.filepath
   obj = bpy.data.objects["my_obj", "//my_lib.blend"]
When using Blender data from linked libraries there is an unfortunate complication: a relative path is relative to the library rather than to the currently open blend file. When a data block may come from an external blend file, pass the library argument from its :class:`bpy.types.ID`.
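A sketch using image paths as the example, since images are a common case of library-linked data with relative file paths:

.. code-block:: python

   import bpy

   for image in bpy.data.images:
       # Passing the library makes "//" paths resolve relative to the
       # library's blend file instead of the currently open one.
       print(bpy.path.abspath(image.filepath, library=image.library))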
Unicode encoding/decoding is a big topic with comprehensive Python documentation. To avoid getting stuck too deep in encoding problems, here are some suggestions:
* Always use UTF-8 encoding or convert to UTF-8 where the input is unknown.
* Avoid manipulating filepaths as strings directly; use ``os.path`` functions instead.
* Use ``os.fsencode()`` / ``os.fsdecode()`` rather than the built-in string decoding functions when operating on paths.
* To print paths or to include them in the user interface use ``repr(path)`` first or ``"%r" % path`` with string formatting.
* **Possibly** use bytes instead of Python strings; when reading some input it's less trouble to read it as binary data, though you will still need to decide how to treat any strings you want to use with Blender. Some importers do this.
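A separate gotcha involves Python's ``threading`` module. A sketch of the kind of script that triggers it (the object name and timing are made up; this is an example of what *not* to do):

.. code-block:: python

   import threading
   import time
   import bpy

   def prod():
       # Touching bpy data from a secondary thread is unsafe,
       # and the thread may outlive the script that started it.
       bpy.data.objects["Cube"].location.x += 0.1
       time.sleep(1.0)

   threading.Thread(target=prod).start()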
Use cases like the one above, which leave the thread running once the script finishes, may seem to work for a while but end up causing random crashes or errors in Blender's own drawing code.
Python threads only provide concurrency and won't speed up your scripts on multi-processor systems; however, the ``subprocess`` and ``multiprocessing`` modules can be used with Blender to make use of multiple CPUs.
Ideally it would be impossible to crash Blender from Python; however, there are some problems with the API where it can be made to crash.
Strictly speaking this is a bug in the API, but fixing it would mean adding memory verification on every access, since most crashes are caused by Python objects referencing Blender's memory directly: whenever that memory is freed, further Python access to it can crash the script. Fixing this would either make scripts run very slowly or require a very different kind of API which doesn't reference the memory directly.
Here are some general hints to avoid running into these problems.
* Be aware of memory limits, especially when working with large lists since Blender can crash simply by running out of memory.
* Many hard-to-fix crashes end up being caused by referencing freed data; when removing data, be sure not to hold any references to it.
* Modules or classes that remain active while Blender is used, should not hold references to data the user may remove, instead, fetch data from the context each time the script is activated.
* Crashes may not happen every time, they may happen more on some configurations/operating-systems.
Undo invalidates :class:`bpy.types.ID` instances (objects, scenes, meshes, etc). This example shows how you can tell that undo changes the memory locations:
>>> hash(bpy.context.object)
-9223372036849950810
>>> hash(bpy.context.object)
-9223372036849950810
# ... move the active object, then undo
>>> hash(bpy.context.object)
-9223372036849951740
As suggested above, simply not holding references to data while Blender is used interactively is the only way to ensure the script doesn't become unstable.
One of the advantages of Blender's library linking system is that undo can skip checking for changes in library data, since it is assumed to be static.
Tools in Blender are not allowed to modify library data.
Python however does not enforce this restriction.
This can be useful in some cases, using a script to adjust material values for example.
But it's also possible to use a script to make library data point to newly created local data, which is not supported, since a call to undo will remove the local data but leave the library referencing it and likely crash.
So it's best to consider modifying library data an advanced usage of the API and only to use it when you know what you're doing.
Switching edit-mode (``bpy.ops.object.mode_set(mode='EDIT')`` / ``bpy.ops.object.mode_set(mode='OBJECT')``) re-allocates an object's data; any references to a mesh's vertices/polygons/UVs, an armature's bones, a curve's points, etc. cannot be accessed after switching edit-mode.
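A related problem is array re-allocation: adding a point to a curve spline re-allocates the array that stores all of its points, so references taken before the addition become invalid. A sketch of the crash:

.. code-block:: python

   import bpy

   bpy.ops.curve.primitive_bezier_curve_add()
   spline = bpy.context.object.data.splines[0]

   point = spline.bezier_points[0]
   spline.bezier_points.add(1)

   # 'point' now refers to freed memory; this can crash Blender.
   point.co = (1.0, 2.0, 3.0)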
This can be avoided by re-assigning the point variables after adding the new one, or by storing indices to the points rather than the points themselves.
The best way is to sidestep the problem altogether: add all the points to the curve at once. This means you don't have to worry about array re-allocation, and it's faster too, since reallocating the entire array for every point added is inefficient.
**Any** data that you remove shouldn't be modified or accessed afterwards; this includes f-curves, drivers, render layers, timeline markers, modifiers and constraints, along with objects, scenes, groups, bones, etc.
Some Python modules will call ``sys.exit()`` themselves when an error occurs. While this is not common behavior, it is something to watch out for, because it may seem as if Blender is crashing since ``sys.exit()`` will quit Blender immediately.
An ugly way of troubleshooting this is to set ``sys.exit = None`` and see which line of Python code is quitting; you could of course replace ``sys.exit`` with your own function, but manipulating Python in this way is bad practice.
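If you do go down that road, a slightly more informative variant is to swap in a function that reports where the call came from; a sketch (the function name is made up):

.. code-block:: python

   import sys
   import traceback

   def report_exit(*args):
       # Print the call stack instead of quitting Blender.
       traceback.print_stack()
       print("sys.exit called with:", args)

   sys.exit = report_exit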