Light scattering in volumes is done by adding a Volume Scatter node. In many
cases you will want to add together a Volume Absorption and Volume Scatter node
with the same color and density to get the expected results.
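The reason both nodes are wanted together: absorption only removes light, while
scattering also removes it from the straight-line path before redistributing it.
A tiny sketch of the relationship (illustrative only, not the kernel code):

    #include <cmath>

    /* Both closures attenuate light along a ray, so transmittance depends
     * on the summed (extinction) coefficient. */
    float transmittance(float sigma_absorb, float sigma_scatter, float t)
    {
      float sigma_t = sigma_absorb + sigma_scatter; /* total extinction */
      return std::exp(-sigma_t * t);                /* Beer-Lambert */
    }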
This should work with branched path tracing, mixing closures, overlapping
volumes, etc. However, there are still various optimizations needed for sampling.
The main feature still missing from the volume branch is equiangular sampling
for homogeneous volumes.
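As a sketch of what equiangular sampling would add (illustrative code under my
own names, not the eventual implementation): for a point light, it places
samples along the ray with a pdf that cancels the 1/r^2 falloff to the light,
which is where most of the noise in volume direct lighting comes from.

    #include <cmath>

    struct float3 { float x, y, z; };
    static float3 sub(float3 a, float3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float3 madd(float3 a, float3 d, float t) { return {a.x + d.x * t, a.y + d.y * t, a.z + d.z * t}; }
    static float dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static float len(float3 a) { return std::sqrt(dot(a, a)); }

    /* Sample a scatter distance in [t0, t1] along a normalized ray; pdf is
     * proportional to 1 / (squared distance to the light). Assumes the
     * light does not lie exactly on the ray (D > 0). */
    float equiangular_sample(float3 ray_o, float3 ray_d, float3 light_p,
                             float t0, float t1, float xi, float *pdf)
    {
      float delta = dot(sub(light_p, ray_o), ray_d); /* closest point on ray */
      float D = len(sub(light_p, madd(ray_o, ray_d, delta)));
      float theta_a = std::atan2(t0 - delta, D);
      float theta_b = std::atan2(t1 - delta, D);
      float t = D * std::tan(theta_a + xi * (theta_b - theta_a));
      *pdf = D / ((theta_b - theta_a) * (D * D + t * t));
      return delta + t;
    }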
The heterogeneous scattering code was arranged so that we can use a single
stratified random number for distance sampling, which gives less noise than a
fresh pseudo-random number for each step (see the sketch after the settings
list below). For volumes where the color is textured something still seems to
be off; this needs to be investigated.
Volumes can now have textured colors and density. There is a Volume Sampling
panel in the Render properties with these settings:
* Step size: distance between volume shader samples when rendering the volume.
Lower values give more accurate and detailed results but also increased render
time.
* Max steps: maximum number of steps through the volume before giving up, to
protect from extremely long render times with big objects or small step sizes.
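An illustrative sketch of how these two settings and the single stratified
offset fit together (names and structure are mine, not the kernel's):

    #include <algorithm>
    #include <cmath>

    /* Estimate transmittance through [t0, t1]; density(t) stands in for
     * evaluating the (possibly textured) volume shader at distance t. */
    template<typename DensityFn>
    float march_transmittance(DensityFn density, float t0, float t1,
                              float step_size, int max_steps, float xi)
    {
      float optical_depth = 0.0f;
      for (int i = 0; i < max_steps; i++) {
        /* One stratified jitter xi in [0,1) shifts every step by the same
         * offset, rather than drawing a fresh random number per step. */
        float t = t0 + (i + xi) * step_size;
        if (t >= t1)
          break;
        optical_depth += density(t) * std::min(step_size, t1 - t);
      }
      return std::exp(-optical_depth); /* Beer-Lambert */
    }

Lowering step_size increases the number of shader evaluations and thus render
time; max_steps simply caps the loop for very large objects or tiny steps.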
Heterogeneous volumes are much more compute intensive to render than
homogeneous ones, so when you are not using a texture you should enable the
Homogeneous Volume option in the material or world for faster rendering.
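With constant density there is no need to march at all: transmittance is a
closed-form exponential and a scatter distance can be sampled analytically
(sketch with illustrative names):

    #include <cmath>

    float homogeneous_transmittance(float sigma_t, float t)
    {
      return std::exp(-sigma_t * t);
    }

    /* Sample a free-flight distance with pdf sigma_t * exp(-sigma_t * t). */
    float sample_distance(float sigma_t, float xi /* in [0,1) */)
    {
      return -std::log(1.0f - xi) / sigma_t;
    }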
One important missing feature is that Generated texture coordinates are not yet
working in volumes, and they are the default coordinates for nearly all texture
nodes. So until that works, you need to plug in Object texture coordinates or a
world space position.
This is work by "storm", Stuart Broadfoot, Thomas Dinges and myself.
Added a new multi-jittered sampling pattern that can be used instead of Sobol.
So far neither seems consistently better or worse than the other for the same
number of samples, but more testing is needed.
The random number generator itself is slower than Sobol for most sample counts,
except 16, 64, 256, .., which can be computed faster. This can probably be
optimized, but we can do that when/if this actually turns out to be useful.
Paper this implementation is based on:
http://graphics.pixar.com/library/MultiJitteredSampling/
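The paper describes a correlated, hash-based construction; as a rough sketch of
just the underlying multi-jittered pattern (the paper's correlated variant
replaces the explicit shuffles below with permutation hashes so any sample can
be generated directly from its index):

    #include <algorithm>
    #include <random>
    #include <vector>

    struct Sample2D { float x, y; };

    /* Generate N = m*n multi-jittered samples: stratified both on the
     * coarse m x n grid and in the N fine strata of each axis. */
    std::vector<Sample2D> multi_jittered(int m, int n, unsigned seed)
    {
      std::mt19937 rng(seed);
      std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
      std::vector<Sample2D> s(m * n);

      /* Canonical arrangement: one jittered sample per substratum. */
      for (int j = 0; j < n; j++)
        for (int i = 0; i < m; i++) {
          s[j * m + i].x = (i + (j + jitter(rng)) / n) / m;
          s[j * m + i].y = (j + (i + jitter(rng)) / m) / n;
        }

      /* Shuffle x within each column and y within each row; this keeps
       * both stratifications intact. */
      for (int i = 0; i < m; i++)
        for (int j = n - 1; j >= 1; j--) {
          int k = std::uniform_int_distribution<int>(0, j)(rng);
          std::swap(s[j * m + i].x, s[k * m + i].x);
        }
      for (int j = 0; j < n; j++)
        for (int i = m - 1; i >= 1; i--) {
          int k = std::uniform_int_distribution<int>(0, i)(rng);
          std::swap(s[j * m + i].y, s[j * m + k].y);
        }
      return s;
    }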
Also includes some refactoring of the RNG code: fixing a Sobol correlation
issue with the first BSDF and < 16 samples, skipping some unneeded RNG calls,
and using a simpler unit square to unit disk function.
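A common choice for such a mapping is the plain polar one (an assumption; the
notes don't say which function was picked):

    #include <cmath>

    /* Area-preserving square-to-disk mapping. */
    void to_unit_disk(float u1, float u2, float *x, float *y)
    {
      const float two_pi = 6.28318530718f;
      float r = std::sqrt(u1); /* sqrt keeps sample density uniform in area */
      float phi = two_pi * u2;
      *x = r * std::cos(phi);
      *y = r * std::sin(phi);
    }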
Subsurface scattering is not yet working as well as I would like, but it works:
just add a Subsurface Scattering node and you can use it like any other BSDF.
It uses fully raytraced sampling, compatible with progressive rendering
and other more advanced rendering algorithms we might use in the future, and
it needs no extra memory, so it's suitable for complex scenes.
The disadvantage is that it can be quite noisy and slow. Two limitations that
will be solved are that it does not work with bump mapping yet, and that the
falloff function used is a simple cubic function rather than the real BSSRDF
falloff function.
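A hedged sketch of a cubic falloff of the kind described (the normalization is
derived here for a disk of radius R; the actual kernel code may differ):

    #include <cmath>

    /* Weight drops as (R - r)^3 out to the scatter radius R and is zero
     * beyond it; the factor 10 / (pi * R^5) makes it integrate to 1 over
     * the disk of radius R. */
    float cubic_falloff(float r, float R)
    {
      const float pi = 3.14159265359f;
      if (r >= R)
        return 0.0f;
      float f = R - r;
      return f * f * f * 10.0f / (pi * R * R * R * R * R);
    }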
The node has a color input, a scattering radius for each RGB color channel,
and an overall scale factor for the radii.
There is also no GPU support yet; I will test whether I can get that working later.
Node Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#BSSRDF
Implementation notes:
http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/Subsurface_Scattering
The new integrator samples direct and indirect lighting differently. Rather
than picking one light for each point on the path, it now loops over all lights
for direct lighting; for indirect lighting it still picks a random light each time.
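A sketch of that split (illustrative stand-in types and names, not the Cycles
code):

    #include <algorithm>
    #include <vector>

    struct Light { float intensity; };

    /* Placeholder for "estimate this light's contribution here". */
    float sample_light(const Light &light) { return light.intensity; }

    /* Direct lighting: loop over all lights at each shading point. */
    float direct_lighting(const std::vector<Light> &lights)
    {
      float L = 0.0f;
      for (const Light &light : lights)
        L += sample_light(light);
      return L;
    }

    /* Indirect lighting: pick one light at random; multiplying by the
     * number of lights (1/pdf) keeps the estimate unbiased. */
    float indirect_lighting(const std::vector<Light> &lights, float xi)
    {
      int i = std::min((int)(xi * lights.size()), (int)lights.size() - 1);
      return sample_light(lights[i]) * (float)lights.size();
    }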
It gives control over the number of AA samples, and the number of Diffuse, Glossy,
Transmission, AO, Mesh Light, Background and Lamp samples for each AA sample.
This helps tune the tradeoff between render performance and noise, and tends
to give less noise for renders dominated by direct lighting.
This sampling mode only works on the CPU, and still needs proper tile rendering
to show progress (will follow tomorrow or so), because each AA sample can now be
quite slow, making the delay between updates too long.
Most of the changes are related to adding support for motion data throughout
the code. There's some code for actual camera/object motion blur raytracing,
but it's unfinished (it badly slows down the raytracing kernel even when the
option is turned off), so that code is still disabled.
Motion vector export from Blender tries to avoid computing derived meshes
when the mesh does not have a deforming modifier, and it also won't store
motion vectors for every vertex if only the object or camera is moving.
=== BVH build time optimizations ===
* BVH building is now multithreaded. Not all of it: packing and the initial
bounding/splitting are still single threaded, but the recursive splitting is,
and that was the main bottleneck.
* Object splitting now uses binning rather than sorting of all elements, using
code from the Embree raytracer from Intel (a sketch of the binning idea follows
this list).
http://software.intel.com/en-us/articles/embree-photo-realistic-ray-tracing-kernels/
* Other small changes to avoid allocations, pack memory more tightly, avoid
some unnecessary operations, ...
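A rough sketch of Embree-style SAH binning (illustrative, not the code that was
imported): instead of sorting primitives along an axis, bucket their centroids
into a fixed number of bins and sweep once over the bins to find the cheapest
split.

    #include <algorithm>
    #include <cfloat>
    #include <vector>

    const int NUM_BINS = 16;

    struct AABB {
      float bmin[3] = {FLT_MAX, FLT_MAX, FLT_MAX};
      float bmax[3] = {-FLT_MAX, -FLT_MAX, -FLT_MAX};
      void grow(const AABB &b)
      {
        for (int k = 0; k < 3; k++) {
          bmin[k] = std::min(bmin[k], b.bmin[k]);
          bmax[k] = std::max(bmax[k], b.bmax[k]);
        }
      }
      float half_area() const
      {
        float dx = bmax[0] - bmin[0], dy = bmax[1] - bmin[1], dz = bmax[2] - bmin[2];
        return dx * dy + dy * dz + dz * dx;
      }
      float centroid(int axis) const { return 0.5f * (bmin[axis] + bmax[axis]); }
    };

    /* Find the cheapest split along `axis` for primitives whose centroids
     * span [cmin, cmax]: O(n) bucketing plus two O(NUM_BINS) sweeps,
     * instead of an O(n log n) sort. */
    float binned_sah(const std::vector<AABB> &prims, int axis,
                     float cmin, float cmax, int *split_bin)
    {
      AABB bin_bounds[NUM_BINS];
      int bin_count[NUM_BINS] = {0};
      float scale = NUM_BINS / std::max(cmax - cmin, 1e-12f);

      for (const AABB &p : prims) {
        int b = (int)((p.centroid(axis) - cmin) * scale);
        b = std::max(0, std::min(b, NUM_BINS - 1));
        bin_count[b]++;
        bin_bounds[b].grow(p);
      }

      /* Suffix sweep: bounds and counts of bins b..NUM_BINS-1. */
      AABB right[NUM_BINS];
      int right_count[NUM_BINS] = {0};
      AABB acc;
      int cnt = 0;
      for (int b = NUM_BINS - 1; b >= 1; b--) {
        acc.grow(bin_bounds[b]);
        cnt += bin_count[b];
        right[b] = acc;
        right_count[b] = cnt;
      }

      /* Prefix sweep: evaluate the SAH cost at every bin boundary. */
      float best = FLT_MAX;
      AABB left;
      int left_count = 0;
      for (int b = 1; b < NUM_BINS; b++) {
        left.grow(bin_bounds[b - 1]);
        left_count += bin_count[b - 1];
        if (left_count == 0 || right_count[b] == 0)
          continue;
        float cost = left.half_area() * left_count +
                     right[b].half_area() * right_count[b];
        if (cost < best) {
          best = cost;
          *split_bin = b;
        }
      }
      return best;
    }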
These optimizations do not work yet when Spatial Splits are enabled; for that
more work is needed. There are also other optimizations still needed, in
particular for the case of many low poly objects, in the packing step and node
memory allocation.
BVH raytracing time should remain about the same, but BVH build time should be
significantly reduced; tests here show speedups of about 5x to 10x on a dual
core and 5x to 25x on an 8-core machine, depending on the scene.
=== Threads ===
Centralized task scheduler for multithreading, which is basically the
CPU device threading code wrapped into something reusable.
Basic idea is that there is a single TaskScheduler that keeps a pool of threads,
one for each core. Other places in the code can then create a TaskPool that they
can drop Tasks in to be executed by the scheduler, and wait for them to complete
or cancel them early.
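A minimal self-contained sketch of that idea (the names mirror the description
above, not necessarily the real Cycles API): one shared scheduler owns a thread
per core, and each TaskPool tracks only the tasks it pushed, so a caller can
wait on just its own work.

    #include <algorithm>
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <utility>

    class TaskPool;

    struct TaskEntry {
      std::function<void()> run;
      TaskPool *pool;
    };

    class TaskScheduler {
    public:
      static TaskScheduler &get()
      {
        static TaskScheduler sched;
        return sched;
      }
      void push(TaskEntry entry)
      {
        {
          std::lock_guard<std::mutex> lock(mutex);
          queue.push_back(std::move(entry));
        }
        cond.notify_one();
      }

    private:
      TaskScheduler()
      {
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        for (unsigned i = 0; i < n; i++) /* one worker per core */
          std::thread([this] { worker_loop(); }).detach();
      }
      void worker_loop();

      std::deque<TaskEntry> queue;
      std::mutex mutex;
      std::condition_variable cond;
    };

    class TaskPool {
    public:
      void push(std::function<void()> f)
      {
        {
          std::lock_guard<std::mutex> lock(mutex);
          num_pending++;
        }
        TaskScheduler::get().push({std::move(f), this});
      }
      /* Block until every task pushed to this pool has finished. */
      void wait()
      {
        std::unique_lock<std::mutex> lock(mutex);
        done.wait(lock, [this] { return num_pending == 0; });
      }

    private:
      friend class TaskScheduler;
      void task_done()
      {
        std::lock_guard<std::mutex> lock(mutex);
        if (--num_pending == 0)
          done.notify_all();
      }
      int num_pending = 0;
      std::mutex mutex;
      std::condition_variable done;
    };

    void TaskScheduler::worker_loop()
    {
      for (;;) {
        TaskEntry entry;
        {
          std::unique_lock<std::mutex> lock(mutex);
          cond.wait(lock, [this] { return !queue.empty(); });
          entry = std::move(queue.front());
          queue.pop_front();
        }
        entry.run();
        entry.pool->task_done();
      }
    }

A call site then just does: TaskPool pool; pool.push(task); ...; pool.wait();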
=== Normal ===
Added a Normal output to the texture coordinate node. This currently
gives the object space normal, which stays the same under object animation.
In the future this might become a "generated" normal, so it's also stable for
deforming objects, but for now it's already useful for non-deforming objects.
=== Render Layers ===
Per render layer Samples control; leaving it at 0 will use the common scene
setting.
Environment pass will now render environment even if film is set to transparent.
Exclude Layers" added. Scene layers (all object that influence the render,
directly or indirectly) are shared between all render layers. However sometimes
it's useful to leave out some object influence for a particular render layer.
That's what this option allows you to do.
=== Filter Glossy ===
When using a value higher than 0.0, this will blur glossy reflections after
blurry bounces, to reduce noise at the cost of accuracy. 1.0 is a good
starting value to tweak.
Some light paths have a low probability of being found while contributing much
light to the pixel. As a result these light paths will be found in some pixels
and not in others, causing fireflies. An example of such a difficult path might
be a small light that is causing a small specular highlight on a sharp glossy
material, which we are seeing through a rough glossy material. With path tracing
it is difficult to find the specular highlight, but if we increase the roughness
on the material the highlight gets bigger and softer, and so easier to find.
Often this blurring will be hardly noticeable, because we are seeing it through
a blurry material anyway, but there are also cases where it will lead to a loss
of detail in lighting. Still, it makes renders more reliable for now.
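One simple way to express the idea in code (an illustration of the principle
only, not necessarily the exact rule Cycles applies internally):

    #include <algorithm>

    /* prev_roughness is the roughness of the blurry bounce already on the
     * path; filter_glossy is the user setting (0.0 disables the filter). */
    float filtered_roughness(float roughness, float prev_roughness,
                             float filter_glossy)
    {
      if (filter_glossy <= 0.0f)
        return roughness;
      /* Blur sharp closures up toward what the path can resolve anyway. */
      return std::max(roughness, prev_roughness * filter_glossy);
    }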
Also added an integrator "Clamp" option, to clamp very bright samples to a
maximum value. This will reduce accuracy but may help reduce noise and speed up
convergence.
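A sketch of the clamp (illustrative names): limit each sample's contribution
before accumulation, trading energy (bias) for fewer fireflies.

    #include <algorithm>

    struct Color { float r, g, b; };

    Color clamp_sample(Color L, float clamp_value)
    {
      float m = std::max(L.r, std::max(L.g, L.b));
      if (m > clamp_value) {
        float scale = clamp_value / m; /* preserve hue, reduce brightness */
        L.r *= scale;
        L.g *= scale;
        L.b *= scale;
      }
      return L;
    }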
* Add max diffuse/glossy/transmission bounces
* Add separate min/max for transparent depth
* Updated/added some presets that use these options
* Add ray visibility options for objects, to hide them from
camera/diffuse/glossy/transmission/shadow rays
* Add Is Singular Ray output to the Light Path node
Details here:
http://wiki.blender.org/index.php/Dev:2.5/Source/Render/Cycles/LightPaths