=========================
Documentation: http://wiki.blender.org/index.php/User:Psy-Fi/UV_Tools
Major features include:
*16-bit image support in the viewport
*Subsurf-aware unwrapping
*Smart Stitch (snap/rotate islands, preview, midpoint/endpoint stitching)
*Seams from islands tool (marks seams and sharp edges, depending on settings)
*UV Sculpting (Grab/Pinch/Rotate)
All tools are complete apart from stitching, which is considered stable but has an extra edge mode still under development (it will land in soc-2011-onion-uv-tools).
- use more descriptive names for string variables; too many strings were simply called `str` (noticed while reviewing the name patch).
- pass the __func__ macro to uiBeginBlock(); quite a few hard-coded names were wrong (copy/paste errors).
The rendering device is now set in User Preferences > System, where you can
choose between OpenCL/CUDA and the available devices. Per scene you can then
still choose
to use CPU or GPU rendering.
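For scripting, the same choice can be made through the Python API. A minimal sketch, assuming the 2.6x-era property paths (the preferences property has moved in later versions, so treat its exact location as an assumption):

    import bpy

    # Compute backend choice lives in the user preferences
    # (moved into the Cycles add-on preferences in newer versions).
    bpy.context.user_preferences.system.compute_device_type = 'CUDA'

    # Per scene, still pick CPU or GPU rendering for Cycles.
    bpy.context.scene.cycles.device = 'GPU'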
Load balancing still needs to be improved; for now it just splits the entire
render in two. That will be done in a separate commit.
details:
- setting format options from Python isn't possible anymore since these are not exposed via op->properties; Python should use the image.save() function instead (see the sketch after this list).
- the image save UI now hides the 'Relative' option when 'Copy' is selected, since it has no effect.
- the default image depth is set to 8 or more if the image has no float buffer, otherwise it's set to 32 or less.
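For scripts, the way to save with specific format settings is through the Image data itself rather than the operator's properties. A minimal sketch, with a placeholder image name and output path:

    import bpy

    img = bpy.data.images["untitled"]       # placeholder name, assumption
    img.file_format = 'PNG'                 # format is set on the image datablock
    img.filepath_raw = "//render_out.png"   # placeholder path
    img.save()                              # saves to img.filepath, not via op->properties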
other fixes:
- Image New was adding an image with its filepath set to "untitled"; if this file happened to exist in the current directory, a save on the generated image would overwrite it. The filepath is now initialized to an empty string.
- BKE_ftype_to_imtype was returning an invalid value if ftype==0.
- no need to link to scenes when using the frame from the pydriver; this made linking rigs, for example, quite messy.
- added advantage that we get subframe values (whereas the scene's frame was fixed to a whole number).
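As an illustration of the driver-side usage, a sketch with a made-up object and expression; it only assumes the built-in frame value available to driver expressions:

    import bpy

    obj = bpy.data.objects["Cube"]        # placeholder object, assumption
    fcu = obj.driver_add("location", 2)   # drive the Z location
    fcu.driver.type = 'SCRIPTED'
    # 'frame' gives the current frame including subframe values,
    # without having to add a scene variable or link the scene.
    fcu.driver.expression = "sin(frame / 10.0)"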
- use NULL rather than 0 for pointers
- use static functions where possible
- add each file's own header to its includes, to ensure functions and their declarations don't get out of sync.
effect for a render engine using new shading nodes. In short:
* Images assigned to faces in the UV layer are no longer used; instead, the
  active image texture node is what is edited/painted/drawn (see the sketch
  after this list).
* Textured draw type now shows the active image texture node, with solid
lighting.
* Material draw mode shows GLSL shader of a simplified material node tree,
using solid lighting.
* Textures for modifiers, brushes, etc. are now available from a dropdown in
the texture tab in the properties editor. These do not use new shading nodes
yet.
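To illustrate the first point, painting and textured drawing follow whichever image texture node is marked active in the material's node tree. A minimal sketch, assuming Cycles-style node names and placeholder datablock names:

    import bpy

    mat = bpy.data.materials["Material"]     # placeholder material, assumption
    mat.use_nodes = True
    nodes = mat.node_tree.nodes

    tex = nodes.new('ShaderNodeTexImage')                # image texture node
    tex.image = bpy.data.images.load("//diffuse.png")    # placeholder path

    # The active node is the one texture paint and the textured/material
    # draw modes will edit and display.
    nodes.active = tex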
http://wiki.blender.org/index.php/Dev:2.6/Source/Render/TextureWorkflow
to be used by Cycles. For testing there's a panel in the node editor if you set
debug to 777; I didn't enable it generally because I'm not sure it's very
useful there.
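For reference, one way to set that debug value from Python (assuming the value referred to is bpy.app.debug_value, which can also be passed with the --debug-value command line option):

    import bpy

    bpy.app.debug_value = 777   # exposes the test panel mentioned above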
===========================
Committing the camera tracking integration GSoC project into trunk.
This commit includes:
- Bundled version of the libmv library (with some changes against the official
  repo; it will be re-synced with the libmv repo a bit later)
- New ID datatype called MovieClip, which is optimized for working with movie
  clips (both movie files and image sequences) and for doing camera/motion
  tracking operations.
- New editor called Clip Editor which is currently used for motion/tracking
stuff only, but which can be easily extended to work with masks too.
This editor supports:
* Loading movie files/image sequences
* Building proxies of different sizes for the loaded movie clip; it also
  supports building undistorted proxies to increase playback speed in
  undistorted mode.
* Manual lens distortion calibration using the grid and grease pencil
* Supervised 2D tracking using two different algorithms: KLT and SAD.
* Basic algorithm for feature detection
* Camera motion solving and scene orientation
- New constraints to "link" scene objects with solved motion from the clip
  (see the sketch after this list):
* Follow Track (makes an object follow the 2D motion of the track with the
  given name, or parents the object to the track's reconstructed 3D position)
* Camera Solver (makes the camera move in the same way as the reconstructed
  camera)
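A rough illustration of hooking scene objects to a solved clip through the Python API; the file path and track name are made up, and scene.objects.link() is the 2.6x-era call:

    import bpy

    clip = bpy.data.movieclips.load("//footage.mov")   # placeholder path

    # Make the active camera follow the reconstructed camera motion.
    cam = bpy.context.scene.camera
    solver = cam.constraints.new('CAMERA_SOLVER')
    solver.clip = clip

    # Glue an empty to one of the solved tracks (track name is an assumption).
    empty = bpy.data.objects.new("TrackEmpty", None)
    bpy.context.scene.objects.link(empty)
    follow = empty.constraints.new('FOLLOW_TRACK')
    follow.clip = clip
    follow.track = "Track.001"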
This commit does NOT include changes from the tomato branch:
- New nodes (they'll be committed as a separate patch)
- Automatic image offset guessing for the image input node and image editor
  (more tests and feedback are needed)
- Code cleanup in libmv-capi. It's not a critical cleanup, just one that
  increases readability and understandability of the code. Better to make this
  change once Keir finishes his current patch.
More details about this project can be found on this page:
http://wiki.blender.org/index.php/User:Nazg-gul/GSoC-2011
Further development of small features will be done in trunk; bigger/experimental
features will first be implemented in the tomato branch.
http://wiki.blender.org/index.php/Dev:2.6/Source/Render/RenderEngineAPI
* This adds a Rendered draw type in the 3D view, only available when
the render engine implements the view_draw callback.
* 3D view now stores a pointer to a RenderEngine.
* The view_draw() callback does the OpenGL drawing instead of the standard
  viewport drawing.
* The view_update() callback is called after depsgraph updates (see the sketch
  below).
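A minimal RenderEngine subclass sketch showing where the two callbacks sit; the identifier is made up and the callback signatures match the 2.6x-era API (they gained a depsgraph argument in later versions):

    import bpy

    class SketchRenderEngine(bpy.types.RenderEngine):
        bl_idname = "SKETCH_RENDER"   # made-up identifier
        bl_label = "Sketch Render"

        def view_update(self, context):
            # Called after depsgraph updates; sync scene data here.
            pass

        def view_draw(self, context):
            # Called to draw the 3D view when the Rendered draw type
            # is active, replacing the default OpenGL viewport drawing.
            pass

    bpy.utils.register_class(SketchRenderEngine)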