Supports both smoke/fire and point density textures now.
Reduces the number of textures available for sm_20 and sm_21, but you
have to compromise somewhere on such limited hardware.
Currently limited to linear interpolation only, and decoupled ray
marching is not supported yet. Those can be considered further
improvements.
A quick example:
https://developer.blender.org/F282934
The code is minimal and can fully be considered a fix for the missing
support of 3D textures with CUDA.
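For reference, a minimal sketch of uploading a volume and binding it as
a 3D texture with linear filtering through the CUDA runtime API; it uses
texture objects for brevity and illustrative names, and does not mirror
the actual Cycles device code:

  #include <cuda_runtime.h>

  /* Upload a w*h*d float volume and create a 3D texture with linear
   * interpolation (sketch only). */
  static cudaTextureObject_t upload_volume_sketch(const float *data, int w, int h, int d)
  {
    cudaExtent extent = make_cudaExtent(w, h, d);
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();

    cudaArray_t array;
    cudaMalloc3DArray(&array, &desc, extent);

    cudaMemcpy3DParms copy = {};
    copy.srcPtr = make_cudaPitchedPtr((void *)data, w * sizeof(float), w, h);
    copy.dstArray = array;
    copy.extent = extent;
    copy.kind = cudaMemcpyHostToDevice;
    cudaMemcpy3D(&copy);

    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = array;

    cudaTextureDesc tex = {};
    tex.addressMode[0] = cudaAddressModeClamp;
    tex.addressMode[1] = cudaAddressModeClamp;
    tex.addressMode[2] = cudaAddressModeClamp;
    tex.filterMode = cudaFilterModeLinear; /* only linear for now */
    tex.readMode = cudaReadModeElementType;
    tex.normalizedCoords = 1;

    cudaTextureObject_t tex_obj = 0;
    cudaCreateTextureObject(&tex_obj, &res, &tex, NULL);
    return tex_obj;
  }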
Reviewers: lukasstockner97, brecht, juicyfruit, dingto
Reviewed By: brecht, juicyfruit, dingto
Subscribers: mib2berlin
Differential Revision: https://developer.blender.org/D1806
The issue was caused by static vectors allocating some internal
data using a rebound element allocator, which was causing access
to non-initialized statistics objects and was failing a lot when
switching Blender to fully guarded allocation.
Additionally, we were not able to free that internal memory before
Blender exits, which was causing false-positive memory leak prints.
Now we're not using GuardedAllocator for those proxy containers.
Ideally this should be done as a GuardedAllocator::rebind, but
that didn't work for vector<bool>: it seems some internal parts
convert bool to char32_t, which either means we can't use
GuardedAllocator for those vectors, or the compiler gets confused
when we try to explicitly allow GuardedAllocator for
rebind<char32_t>.
With the current approach we should be fine for the release.
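For illustration, a minimal sketch of an STL allocator exposing the
rebind member that containers use internally; the names are made up for
the example and this is not the actual GuardedAllocator:

  #include <cstddef>
  #include <cstdlib>
  #include <vector>

  template<typename T> class SketchAllocator {
  public:
    typedef T value_type;

    SketchAllocator() {}
    template<typename U> SketchAllocator(const SketchAllocator<U> &) {}

    T *allocate(size_t n) { return static_cast<T *>(malloc(n * sizeof(T))); }
    void deallocate(T *p, size_t /*n*/) { free(p); }

    /* vector<bool> and other container internals rebind the element
     * allocator to a different type, which is where the trouble with
     * the statistics tracking described above comes from. */
    template<typename U> struct rebind {
      typedef SketchAllocator<U> other;
    };
  };

  template<typename T, typename U>
  bool operator==(const SketchAllocator<T> &, const SketchAllocator<U> &) { return true; }
  template<typename T, typename U>
  bool operator!=(const SketchAllocator<T> &, const SketchAllocator<U> &) { return false; }

  /* Usage: std::vector<int, SketchAllocator<int> > v; v.push_back(42); */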
Was caused by a safety measure making sure we have a NULL
terminator in the buffer when doing mbs<->wcs conversion, but it
turns out this simply confuses std::string and it can no longer
report a proper .size(). Let's assume the behavior of string
allocation is the same across std implementations, so we can avoid
allocating that extra NULL terminator.
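A minimal sketch of the idea, with an illustrative helper name rather
than the actual Cycles conversion code: size the output to exactly the
converted length and let the string manage its own terminator.

  #include <cstdlib>
  #include <string>

  static std::wstring mbs_to_wcs_sketch(const std::string &in)
  {
    /* Query the converted length (excluding the terminator)... */
    size_t len = std::mbstowcs(NULL, in.c_str(), 0);
    if (len == (size_t)-1) {
      return std::wstring();
    }
    /* ...and size the output to exactly that, no extra NULL slot. */
    std::wstring out(len, L'\0');
    if (len != 0) {
      std::mbstowcs(&out[0], in.c_str(), len);
    }
    return out;
  }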
There are several fixes in here which will hopefully make the shader
work correctly without too much magic.
First of all, this commit brings BURLEY_TRUNCATE down from 30 to 16,
which reduces noise a lot. It's still higher than Brecht's original
truncate, but it reduces the PDF value at the cutoff distance by an
order of magnitude (now it's 0.008387, previously it was 0.063521
for an albedo of 0.8 and a radius of 1.0). This should converge to a
proper result faster and avoid artifacts.
This kind of reverts the fix for T47356, but after additional thinking
I came to the conclusion that Burley is not about being totally smooth,
it is about giving less waxy results, which it is still doing in that
file.
Second of all, this commit fixes burley_eval() to use the normalized
diffusion reflectance. This matches the way we calculate the CDF and
solves numeric instability close to 0, making the PDF profile look
closer to the other SSS profiles:
https://developer.blender.org/F282355
https://developer.blender.org/F282356
https://developer.blender.org/F282357
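For reference, a sketch of the normalized profile in question; d stands
for the scaled radius and the function name is illustrative, it is not
the actual bssrdf_burley_eval() code:

  #include <cmath>

  /* pdf(r) = (exp(-r/d) + exp(-r/(3*d))) / (8*pi*d*r) integrates to 1
   * over the disk and matches the CDF used for importance sampling:
   * cdf(r) = 1 - 0.25*exp(-r/d) - 0.75*exp(-r/(3*d)). */
  static float burley_pdf_sketch(float d, float r)
  {
    float exp_r_3d = std::exp(-r / (3.0f * d));
    float exp_r_d = exp_r_3d * exp_r_3d * exp_r_3d;
    return (exp_r_d + exp_r_3d) / (8.0f * 3.14159265f * d * r);
  }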
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1792
C++ requires specific alignment of allocations, which was not an
issue when using GCC but uncovered an issue when using Clang on OSX.
Perhaps some versions of Clang might show errors on other platforms
as well.
We don't have vector re-allocation happening multiple times from inside
a loop anymore, so we can safely switch to a memory guarded allocator for
vectors and keep track of the memory usage at various stages of rendering.
Additionally, when building from inside the Blender repository, Cycles
will use Blender's guarded allocator, so actual memory usage will be
displayed in the Space Info header.
There are a couple of tricky aspects to the patch:
- TaskScheduler::exit() now explicitly frees the memory used by
`threads` (see the sketch after this list). This is needed because
`threads` is a static member whose destructor isn't getting called on
Blender's exit, which caused a memory leak print.
This shouldn't give any measurable speed issues, reallocation of that
vector is only one of a fewzillion other allocations happening during
synchronization.
- Use regular guarded malloc (not the aligned one). No idea why it was
made aligned in the first place, perhaps for some corner case tests or
so. The vector was never expected to be aligned anyway. Let's see if
we'll have actual bugs with this.
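A sketch of the explicit freeing mentioned in the first point, with an
illustrative worker type; clear() alone would keep the capacity
allocated, swapping with an empty temporary releases it:

  #include <vector>

  struct SketchWorker;

  static std::vector<SketchWorker *> threads;

  static void task_scheduler_exit_sketch()
  {
    /* Workers are joined and deleted before this point (omitted). */
    std::vector<SketchWorker *>().swap(threads);
  }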
Reviewers: dingto, lukasstockner97, juicyfruit, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1774
Basically the idea is to make the code robust against enum options
being extended in the future, by falling back to a known safe default
setting when RNA is set to something unknown.
While this approach solves issues similar to T47377, it wouldn't
really help when/if any of the RNA values ever gets deprecated and
removed. There'll be no simple solution to that apart from defining
an explicit mapping from RNA values to Cycles ones.
Another part which isn't so great is that we now have to have some
enum guards and give explicit values to the enum items, but we can
live with that.
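A minimal sketch of the fallback pattern, with illustrative enum and
function names rather than the actual Cycles ones:

  enum SketchFilterType {
    FILTER_BOX = 0,
    FILTER_GAUSSIAN = 1,

    FILTER_NUM_TYPES, /* enum guard */
  };

  static SketchFilterType filter_type_from_rna(int rna_value)
  {
    /* Anything outside the known range falls back to a safe default. */
    if (rna_value < 0 || rna_value >= FILTER_NUM_TYPES) {
      return FILTER_GAUSSIAN;
    }
    return static_cast<SketchFilterType>(rna_value);
  }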
Reviewers: dingto, juicyfruit, lukasstockner97, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1785
This happens when the properties panel is pinned to the material tab.
Patch by Ralf Hölzemer (aka cheleb), thanks!
Differential Revision: https://developer.blender.org/D1776
Really annoying bug: the code was not forward compatible at all and
resulted in a crash. It is really good to keep at least one release of
forward compatibility, so possible regressions can be verified easily.
The idea now is to use a new property name for the pixel filter type,
but keep the old property around for a couple of releases, so we have
at least some forward compatibility.
I don't like this situation at all, but it seems to be the least of
the evils we can choose.
Thanks Brecht for the review!
After the clamping commit we need to bump the BURLEY_TRUNCATE
constant a bit, otherwise the mean free path does not really
match the disk radius needed for importance sampling.
Now pass_filter is modified to have exactly the flags for the light components
that need to be baked, based on the shader type. This simplifies the logic.
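A rough sketch of the idea with made-up flag names, not the actual
Cycles pass filter bits:

  /* Illustrative baking flags. */
  enum {
    BAKE_FILTER_DIRECT = (1 << 0),
    BAKE_FILTER_INDIRECT = (1 << 1),
    BAKE_FILTER_COLOR = (1 << 2),
    BAKE_FILTER_DIFFUSE = (1 << 3),
    BAKE_FILTER_GLOSSY = (1 << 4),
  };

  /* For a diffuse bake, keep only the components that make sense. */
  static int pass_filter_for_diffuse_sketch(int pass_filter)
  {
    return pass_filter & (BAKE_FILTER_DIFFUSE | BAKE_FILTER_DIRECT |
                          BAKE_FILTER_INDIRECT | BAKE_FILTER_COLOR);
  }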
There were two issues:
1. Memory leak: std::vector::erase does not call delete on the
pointer (which is actually a good idea),
2. After MIS was disabled in a viewport render there was
no way to bring MIS back.
Now instead of removing the light from the scene data we
kind of tag it to be ignored. A possible cleanup would be
to add Light::is_enabled and use that instead of passing
weird and wonderful function arguments.
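A sketch of both points with illustrative types: erasing a pointer from
a vector does not free the object, and keeping the light around with a
tag lets MIS come back when it gets re-enabled:

  #include <algorithm>
  #include <vector>

  struct SketchLight {
    bool is_enabled;
  };

  /* Keep the light in the scene data, just tag it to be skipped. */
  static void disable_light_sketch(SketchLight *light)
  {
    light->is_enabled = false;
  }

  /* If a light really gets removed, delete it explicitly: erase() only
   * drops the pointer from the container. */
  static void remove_light_sketch(std::vector<SketchLight *> &lights,
                                  SketchLight *light)
  {
    lights.erase(std::remove(lights.begin(), lights.end(), light), lights.end());
    delete light;
  }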
The title says it all actually: the idea is to make Cycles
require Boost only via 3rd party dependencies like OIIO
and OSL.
So now there are only a few places which still use Boost:
- Foreach, function bindings and threading primitives.
Those we can easily get rid of with a C++11 bump (which seems
inevitable sooner or later if we want to use a newer
LLVM for OSL); see the sketch after this list.
- Networking devices.
There's no quick solution for those currently, but there
are some patches around which improve serialization.
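For illustration, the C++11 equivalents that could replace the remaining
Boost usages from the first point; this is a sketch, not code from the
patch:

  #include <functional>
  #include <thread>
  #include <vector>

  static void worker(int /*id*/) {}

  int main()
  {
    std::vector<int> values;
    values.push_back(1);
    values.push_back(2);

    /* Range-based for instead of BOOST_FOREACH. */
    for (int v : values) {
      (void)v;
    }

    /* std::function/std::bind instead of boost::function/boost::bind. */
    std::function<void()> task = std::bind(worker, 0);
    task();

    /* std::thread instead of boost::thread. */
    std::thread t(worker, 1);
    t.join();

    return 0;
  }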
Reviewers: juicyfruit, mont29, campbellbarton, brecht, dingto
Reviewed By: brecht, dingto
Differential Revision: https://developer.blender.org/D1764
This is an initial move to have unit tests that at least cover
utility functions, which could then be extended further to
cover areas such as shader optimization.
Currently this only lays out the initial "infrastructure" and
adds the tests needed for the no-Boost patch.
Note: This patch starts to use the "<dir>/<header>.h" notation
for include statements, which I got used to in other
projects. Something that would be cool to use globally
in the code eventually.
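For illustration, a sketch of what such a utility test could look like
with Google Test; the helper and test names are made up, not the actual
tests from this commit:

  #include <string>
  #include <vector>

  #include "gtest/gtest.h"

  static std::vector<std::string> string_split_sketch(const std::string &s, char sep)
  {
    std::vector<std::string> result;
    std::string token;
    for (size_t i = 0; i < s.size(); i++) {
      if (s[i] == sep) {
        result.push_back(token);
        token.clear();
      }
      else {
        token += s[i];
      }
    }
    result.push_back(token);
    return result;
  }

  TEST(util_string_sketch, split_simple)
  {
    std::vector<std::string> tokens = string_split_sketch("a,b,c", ',');
    EXPECT_EQ(tokens.size(), (size_t)3);
    EXPECT_EQ(tokens[0], "a");
  }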
Reviewers: dingto, juicyfruit, lukasstockner97, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1770
We've upgraded to Boost 1.60 and MSVC 2013 since the workaround
was originally committed. After checking with the current compiler
and libraries, the original bug is no longer happening.
This will make string comparison much faster on Windows, solving
synchronization bottlenecks with a fewzillion objects.
Thanks Martin Felke (aka scorpion81) for the tests!
This changes the following default values:
- World settings:
- Use MIS: On
- MIS Samples: 1
- MIS Resolution: 1024
Enabling World MIS by default won't make simple backgrounds (flat
background color) slower, see the previous commit. This gets disabled
internally if World MIS is not actually needed.
When World MIS is enabled by the user, we now check if we actually
need it. In the case of a simple node setup (no procedurals, no
HDRs...) we auto-disable MIS internally to save render time.
This change is important for upcoming default changes.
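A sketch of the shape of such a check, with illustrative fields rather
than the actual node-graph inspection:

  /* Illustrative summary of a world shader setup; the real check walks
   * the node graph. */
  struct SketchWorldGraph {
    bool has_procedural_textures;
    bool has_image_textures;
  };

  static bool world_needs_mis_sketch(const SketchWorldGraph &graph,
                                     bool user_enabled_mis)
  {
    if (!user_enabled_mis) {
      return false;
    }
    /* A flat background color is sampled fine without MIS. */
    return graph.has_procedural_textures || graph.has_image_textures;
  }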
The value was too high, causing a bad Newton iteration step.
Now the value is not so good either, but it stays within 9
iterations, and such high iteration counts only happen for
approximately 1% of input values.
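For reference, a sketch of the kind of Newton inversion of the Burley
CDF involved here; the initial guess below is purely illustrative, not
the tuned constant from this commit:

  #include <cmath>

  /* Solve cdf(x) = 1 - 0.25*exp(-x) - 0.75*exp(-x/3) = u for x. */
  static float burley_root_find_sketch(float u)
  {
    float x = 0.25f; /* illustrative initial guess */
    for (int i = 0; i < 10; i++) {
      float exp_x_3 = std::exp(-x / 3.0f);
      float exp_x = exp_x_3 * exp_x_3 * exp_x_3;
      float f = 1.0f - 0.25f * exp_x - 0.75f * exp_x_3 - u;
      float f_deriv = 0.25f * exp_x + 0.25f * exp_x_3;
      x -= f / f_deriv;
      if (x < 0.0f) {
        x = 0.0f;
      }
    }
    return x;
  }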
The idea is simply to pre-compute the fitting and parameterization
in the bssrdf_setup() function and re-use the values in both
sample() and eval().
The only trick is where to store the pre-calculated values, and
the answer is: inside ShaderClosure->custom{1,2,3}. There's no
memory bump here because we simply re-use padding fields for the
pre-calculated values. A similar trick can be done for other
BSDFs.
Seems to give a nice speedup of up to 7% here on my desktop with a
Core i7 CPU, SSE4.1 kernel.
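A sketch of re-using the padding fields; the struct is trimmed to the
relevant members and the fitting itself is replaced by a placeholder, so
this is illustrative rather than the actual code:

  /* Trimmed, illustrative version of the closure storage. */
  struct SketchShaderClosure {
    float data0;   /* radius */
    float custom1; /* pre-computed scaled radius d */
    float custom2; /* pre-computed 1/d */
    float custom3; /* pre-computed channel albedo */
  };

  static void bssrdf_burley_setup_sketch(SketchShaderClosure *sc, float albedo)
  {
    /* Do the fitting once here (placeholder math)... */
    float d = sc->data0 * 0.25f;
    /* ...and store the results so sample() and eval() just read them. */
    sc->custom1 = d;
    sc->custom2 = 1.0f / d;
    sc->custom3 = albedo;
  }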