This reduces pressure on the stack memory, which can be really handy
on certain operating systems which apply strict limits on the stack.
Reviewers: brecht, juicyfruit, dingto
Reviewed By: brecht, juicyfruit, dingto
Differential Revision: https://developer.blender.org/D1656
There were multiple issues which are now solved:
- It was possible that the ray wouldn't be bounced off the BSSRDF, for example
when the PDF or shader eval is zero. In that case the PathState might have been
left in a pre-bounced state, which would have given incorrect shading
results.
This is solved by having a separate PathState for each of the hits.
- Path radiance summing wasn't happening correctly either: indirect rays
were using the wrong path radiance when more than one hit was recorded.
This now uses a slightly trickier state machine which calculates the path
radiance for just the SSS (both direct and indirect) and then sums it back
into the final radiance (see the sketch after this list).
- The previous commit wasn't totally correct either and was working around a bug
induced by the wrong path state left over from the "un-happened" ray bounce.
There should be no special case needed here: BSSRDFs will be replaced
with diffuse ones due to the PATH_RAY_DIFFUSE_ANCESTOR flag.
- Merged the code paths for "delayed" and "immediate" indirect SSS ray
tracing back together, hopefully making the codebase easier to maintain.
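For illustration, here is a rough sketch of the per-hit state and radiance handling in plain C++; all types and functions used here (PathState, PathRadiance, bssrdf_bounce, shade_hit, subsurface_shade) are simplified stand-ins for the purpose of this sketch and not the actual Cycles kernel code:

    /* Illustrative stand-ins only, not the actual Cycles kernel structures. */
    struct PathState { int flag, bounce; };
    struct PathRadiance { float rgb[3]; };

    static void path_radiance_init(PathRadiance *L) { L->rgb[0] = L->rgb[1] = L->rgb[2] = 0.0f; }
    static void path_radiance_sum(PathRadiance *sum, const PathRadiance *L)
    {
        for(int i = 0; i < 3; i++)
            sum->rgb[i] += L->rgb[i];
    }

    /* Hypothetical shading entry points. */
    static bool bssrdf_bounce(PathState * /*state*/, int /*hit*/) { return true; }
    static void shade_hit(const PathState& /*state*/, int /*hit*/, PathRadiance * /*L*/) {}

    static void subsurface_shade(const PathState& state, int num_hits, PathRadiance *L)
    {
        for(int hit = 0; hit < num_hits; hit++) {
            /* Each hit works on its own copy of the state, so a bounce which
             * "doesn't happen" (zero PDF or zero shader eval) can't leave a
             * half-modified state behind for the next hit. */
            PathState hit_state = state;

            /* Accumulate direct and indirect SSS radiance separately... */
            PathRadiance L_sss;
            path_radiance_init(&L_sss);

            if(!bssrdf_bounce(&hit_state, hit))
                continue;

            shade_hit(hit_state, hit, &L_sss);

            /* ...and only then sum it back into the final path radiance. */
            path_radiance_sum(L, &L_sss);
        }
    }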
Sure, this change brings memory usage back up by about 4-5%, but overall
it's still about a 2x memory reduction for the experimental kernel here.
Thanks Brecht for the review!
This is actually how it was intended to work; I just didn't notice it wasn't
really happening in the main ray loop.
Solves some memory issues reported in T46880.
There are some issues to be solved with the recent optimization we did for
the indirect SSS rays. Those issues will still take a bit of time to be fully
solved, and we need to unblock the Caminandes team now, so let's revert some
of the changes.
CUDA will still use delayed indirect rays since it's an experimental
feature.
For details about what still needs to be done, please refer to T46880.
This wasn't really a complete fix and only worked if there was a single scatter
event recorded. A proper fix requires some more thought to make it correct
without increasing memory use.
This reverts commit bf9e88bfbebaf5c6228363560970fa526e779c8b.
Radiance sum and reset were happening in a different order after 26f1c51.
This is a quick fix to unblock the Caminandes team; perhaps we can avoid having
a separate variable to detect when the radiance is to be summed.
Previously render nodes would always be created with just a VECTOR socket
type, and then those sockets would be set as all of point, vector and
normal to work around the lack of such a subtype distinction in Blender.
This change makes it so the subtype is queried from OSL itself and the
proper subtype is used for the socket.
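A minimal sketch of how the subtype could be picked from the parameter's type, assuming OSL's OSLQuery API and the TypeDesc vector semantics; exact API details may differ between OSL versions, and SocketType here is just an illustrative enum, not the actual integration code:

    #include <OSL/oslquery.h>

    using namespace OSL;

    /* Illustrative socket subtypes on the host side. */
    enum SocketType { SOCK_POINT, SOCK_VECTOR, SOCK_NORMAL };

    /* Pick a socket subtype from the OSL parameter's vector semantics.
     * This is only a sketch of the idea. */
    static SocketType socket_type_for_param(const OSLQuery::Parameter& param)
    {
        switch(param.type.vecsemantics) {
            case TypeDesc::POINT:  return SOCK_POINT;
            case TypeDesc::NORMAL: return SOCK_NORMAL;
            case TypeDesc::VECTOR:
            default:               return SOCK_VECTOR;
        }
    }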
It's still not in use for the official builds because it requires changes
applied recently on the 1.7 branch of OSL:
https://github.com/imageworks/OpenShadingLanguage/commit/f70e58f
This solves the artist confusion reported in T46117.
Reviewers: #cycles, juicyfruit
Reviewed By: #cycles, juicyfruit
Subscribers: juicyfruit
Differential Revision: https://developer.blender.org/D1627
Graph::disconnect() actually modifies the links, so we need to create a copy to
iterate over if the disconnect happens from inside the loop.
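The generic pattern, in a self-contained form (the container and predicate below are just for illustration):

    #include <iostream>
    #include <list>

    int main()
    {
        std::list<int> links = {1, 2, 3, 4};

        /* Iterate over a copy: removing elements from the container we are
         * currently iterating over would invalidate the iterator, which is
         * essentially what calling disconnect() from inside the loop did. */
        const std::list<int> links_copy = links;
        for(int link : links_copy) {
            if(link % 2 == 0)
                links.remove(link);
        }

        for(int link : links)
            std::cout << link << "\n";  /* prints 1 and 3 */
        return 0;
    }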
Question though is whether we can control this somehow...
Reported by BzztPloink in IRC, thanks!
The issue was caused by the possible use of object->derivedFinal from the render
thread. The patch tries to eliminate (or at least minimize) the amount of
access to the derivedFinal of a source object. It's still possible that in
the case of a particle source the derived mesh will still be used unsafely, but
with the patch applied we can easily change the runtime part of the code and
cache the derived mesh at the preparation stage.
Some ideas for the future:
- Check whether cache() was called on the point density node when calling
calc().
- Cache derivedMesh in the runtime part of point density node to avoid
possible remained thread conflicts.
- NULL the runtime part of the node on .blend load
Reviewers: campbellbarton, plasmasolutions
Reviewed By: plasmasolutions
Differential Revision: https://developer.blender.org/D1614
Seems set_intersection() requires passing an explicit comparator if a
non-default one is used for the sets. A bit weird, but I can't really find
another explanation for what's going on here.
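A small self-contained example of the behaviour; the descending comparator just stands in for whatever non-default ordering the sets use:

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <set>

    struct Descending {
        bool operator()(int a, int b) const { return a > b; }
    };

    int main()
    {
        std::set<int, Descending> a = {1, 2, 3};
        std::set<int, Descending> b = {2, 3, 4};
        std::set<int, Descending> result;

        /* Without passing Descending() explicitly, set_intersection() assumes
         * the default operator< ordering, which doesn't match how the sets are
         * actually sorted and silently produces a wrong result. */
        std::set_intersection(a.begin(), a.end(),
                              b.begin(), b.end(),
                              std::inserter(result, result.begin()),
                              Descending());

        for(int v : result)
            std::cout << v << "\n";  /* prints 3 and 2 */
        return 0;
    }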
The issue was caused by a not really optimal graph traversal for gathering node
dependencies, which could have exponential complexity with long tree branches
connected by multiple links between them.
Now we optimize the depth-first traversal and perform an early exit if the node
was already traversed.
Please note that this adds some limitations to the use of the SVM compiler's
find_dependencies() in cases when skip_node is not NULL and one wants to
perform dependency finding sequentially with the same set. This doesn't happen
in the code, but one should be aware of it.
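In schematic form the early exit looks roughly like this; Node here is a simplified stand-in for ShaderNode, and skip_node handling is left out:

    #include <set>
    #include <vector>

    /* Simplified stand-in for ShaderNode; only the input links matter here. */
    struct Node {
        std::vector<Node*> inputs;
    };

    static void find_dependencies(std::set<Node*>& dependencies, Node *node)
    {
        if(node == NULL)
            return;
        /* Early exit: if the node is already in the set, its whole subtree was
         * traversed before, so there is no need to walk it again. This is what
         * keeps the traversal linear instead of exponential on graphs where
         * branches are connected by multiple links. */
        if(dependencies.find(node) != dependencies.end())
            return;
        dependencies.insert(node);
        for(Node *input : node->inputs)
            find_dependencies(dependencies, input);
    }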
The issue was that node dependencies were stored as set<ShaderNode*>, which
is actually so-called "strict weak ordered", meaning the order of nodes in
the set is strictly defined, but based on the ShaderNode pointer. This means
that between different render invocations the order of the original nodes
could be different due to different pointers being allocated for each ShaderNode.
This commit makes it so dependencies and maps used for ShaderNodes are based
on node->id, which has a much more predictable order. It's still possible
to trick the system by doing some crazy edits during viewport render and
cause a difference between the viewport and final render stacks.
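Schematically, the ordering change boils down to something like this (ShaderNodeStub and the typedef name are illustrative only):

    #include <set>

    struct ShaderNodeStub {
        int id;  /* assigned in creation order, stable across render invocations */
    };

    /* Order nodes by their id instead of by pointer value, so that the set's
     * iteration order is the same between viewport and final renders. */
    struct NodeIdComparator {
        bool operator()(const ShaderNodeStub *a, const ShaderNodeStub *b) const
        {
            return a->id < b->id;
        }
    };

    typedef std::set<const ShaderNodeStub*, NodeIdComparator> ShaderNodeSet;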
Reviewers: brecht
Reviewed By: brecht
Subscribers: LazyDodo
Differential Revision: https://developer.blender.org/D1630
This gives much lower stack usage on the GPU and reduces kernel memory size to
around 448MB on a GTX 560 Ti (compared to 652MB with the previous commit and
946MB with the official release). There's also a barely measurable speedup of
around 5%, but this still needs to be confirmed.
At this stage we're only using about 3% more memory for the experimental kernel,
and SSS rendering seems to be faster by 40%; after some further testing we might
consider making SSS and CMJ official features and removing the experimental
precompiled kernels.
The idea is to delay shooting indirect rays for the SSS sampling and
trace them after the main integration loop has finished.
This reduces GPU stack usage even further and brings it down to around
652MB (compared to 722MB before the change and 946MB with the previous
stable release).
This also solves the speed regression that happened in the previous commit;
a simple SSS scene (SSS Suzanne on the floor) now renders in 0:50
(compared to 1:16 with the previous commit and 1:03 with the official release).
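Schematically the delaying looks like this; all types and helpers below are illustrative stand-ins rather than the actual kernel code, and the real kernel records intersection and shading data rather than bare rays:

    #include <vector>

    /* Illustrative stand-ins only. */
    struct Ray { float origin[3], direction[3]; };
    struct Radiance { float rgb[3]; };

    static bool scatter_produces_indirect(const Ray& /*ray*/, Ray * /*indirect*/) { return false; }
    static bool next_bounce(Ray& /*ray*/) { return false; }
    static void trace_indirect(const Ray& /*ray*/, Radiance& /*L*/) {}

    static void integrate(Ray ray, Radiance& L)
    {
        /* The real kernel uses a small fixed-size array rather than a vector. */
        std::vector<Ray> delayed;

        /* Main integration loop: instead of recursing into the SSS indirect ray
         * right away, which keeps the whole loop state alive on the stack, just
         * remember it for later. */
        do {
            Ray indirect;
            if(scatter_produces_indirect(ray, &indirect))
                delayed.push_back(indirect);
        } while(next_bounce(ray));

        /* Only after the main loop has finished are the delayed rays traced, so
         * their stack usage doesn't add on top of the main loop's. */
        for(const Ray& indirect : delayed)
            trace_indirect(indirect, L);
    }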
This commit introduces an SSS-oriented intersection structure which replaces
the old logic of having separate arrays for just intersections and shader data,
and encapsulates all the data needed for SSS evaluation.
This gives a huge stack memory saving on the GPU. In my own experiments it gave
a 25% memory usage reduction on a GTX 560 Ti (722MB vs. 946MB).
Unfortunately, this also gave a performance loss of around 20%, which only
happens on the GPU. This is perhaps due to a different memory access pattern.
Hopefully it will be solved in the future.
Famous saying: what is won in memory is lost in time (which is also valid the
other way around).
Input array length is implicitly set at link time, based on the geometry
shader's layout. Specifying the wrong value here is an error; specifying
no value is the same as getting it right. (inspired by a recent codegen
change)
This is sort of an extension of the existing Use Environment option which now
allows disabling AO on a per render layer basis.
Useful in cases like disabling AO for the background, which might otherwise
make it look too flat.
Reviewers: juicyfruit, dingto, brecht
Reviewed By: brecht
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D1633
GLSL 130, 140, 150 with extensions as needed.
Similar logic to my recent gpu_extensions changes.
Partially fixes T46706. Matcaps now work with OpenSubdiv, as do basic
materials. Anything with UV coordinates is still broken.
Performance is roughly the same because it's using the same COLAMD ordering
and supernodal LU factorization algorithms. Solve results also appear to be
identical.
Unfortunately there's no easy way to show a message box here, so just
print a warning on stderr and exit. If we don't call exit() here we get
crashes in other Blender systems (Python, OpenSubdiv) and it can get
tricky to track the initialization state here, so just using exit()
should do the trick for now.
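In other words, something along these lines (the function name and message are hypothetical):

    #include <cstdio>
    #include <cstdlib>

    /* No UI is available this early, so report the failure on stderr and bail
     * out before other subsystems (Python, OpenSubdiv, ...) start up on top of
     * a broken state. */
    static void abort_on_init_failure(const char *message)
    {
        fprintf(stderr, "Error: %s\n", message);
        exit(EXIT_FAILURE);
    }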