This commit broke compatibility with newer files: due to the rename
of Speed to Vector, the links got lost.
This reverts commit 0e46da76b70a42bab2268942cba0e0d3e4ba47e8.
This makes it possible to have an animated / procedurally generated mesh
that starts empty and obtains data in later frames.
Fixes the export of an empty mesh with an Ocean Modifier, as described in
issue T51351.
This allows you to put any kind of animation data on the mesh, and its
shape will be exported on each timekey. Note that this timekey is unrelated
to the animation data (so we don't export on each keyframe, for example).
A practical example is the addition of an animated custom property to
trigger the export of animated mesh data. The mesh data can then be created
from any source, like Python scripts.
Not only is this useful in itself, it also provides a workaround for one
of the two issues described in T51351.
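A minimal sketch of the resulting export decision, with hypothetical
names (the actual exporter logic differs in detail):

    // Any animation data on the object means its mesh gets a sample
    // written at every export timekey, regardless of where (or
    // whether) keyframes exist.
    bool is_animated = object_has_animation_data(ob);  // hypothetical
    for (double frame : export_frames) {
        if (is_animated || frame == export_frames.front()) {
            write_mesh_sample(ob, frame);  // hypothetical writer
        }
    }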
This is written in a custom metadata key, so it isn't shown by utilities
like abcecho or abcls. However, it's still something that's useful to
have available.
Houdini writes vertex data in a different format than Blender does;
Houdini uses "face-varying scope", which means that the vertex colours
are indexed by an ever-increasing counter over all vertices of all
faces, instead of by the vertex index.
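A minimal sketch of the two indexing schemes (names hypothetical):

    int face_varying_index = 0;
    for (int f = 0; f < num_faces; f++) {
        for (int c = 0; c < face_vert_count[f]; c++) {
            int v = face_verts[f][c];
            // Vertex scope: one colour per vertex, indexed by the
            // vertex index.
            col_vertex = colours[v];
            // Face-varying scope (Houdini): one colour per face
            // corner, indexed by an ever-increasing counter.
            col_face_varying = colours[face_varying_index++];
        }
    }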
I've also merged the read_custom_data_mcols() and read_mcols() functions,
because the latter was only called from the former, and the changes in this
commit would add yet more function parameters to pass.
A big chunk of code was copied between the if and else bodies. By using
a boolean to store whether the c3f_ptr or c4f_ptr should be used, the
in-loop condition is kept as simple as possible.
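Roughly, the deduplicated loop looks like this (a sketch, not the
actual code):

    const bool use_c3f = (c3f_ptr != NULL);
    for (int i = 0; i < num_loops; i++) {
        float col[4];
        if (use_c3f) {
            const Imath::C3f &c = (*c3f_ptr)[i];
            col[0] = c.x; col[1] = c.y; col[2] = c.z; col[3] = 1.0f;
        }
        else {
            const Imath::C4f &c = (*c4f_ptr)[i];
            col[0] = c.r; col[1] = c.g; col[2] = c.b; col[3] = c.a;
        }
        // Everything below this point is shared instead of being
        // copied into both branches.
        assign_loop_color(mcols, i, col);  // hypothetical helper
    }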
Note: the angle in the bug report isn't really reflex - using the vertex
normal for this test isn't always right, but usually is. At any rate,
we shouldn't try to put the vertex on the edge in between if there is
a reflex angle.
This was two-fold.
1) The export used viewport settings to obtain the particle cache, rather
than render settings.
2) The child hair writer tried to obtain UV-coordinates from the parent
hair, without checking whether those were available in the first place
(see the sketch below).
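The missing check amounts to something like this (a hypothetical
sketch):

    // Guard the lookup instead of assuming parent UVs exist.
    if (parent_uvs != NULL && num_parent_uvs > 0) {
        uv = parent_uvs[parent_index];
    }
    else {
        uv = make_float2(0.0f, 0.0f);  // fall back to a default UV
    }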
Since we already have a rather advanced PovRay exporter, it makes sense
to also nicely display the generated 'code'.
Patch by Maurice Raybaud (@mauriceraybaud), thanks!
Cleanup (mostly styling) by @mont29.
The Brightness/Contrast node was changing color but did not modify alpha
or ensure colors are premultiplied on the output. This was giving
artifacts later on unless alpha was manually converted.
The compositor is supposed to work in premultiplied alpha (except for
some real corner cases), so it makes sense to ensure premultiplied
alpha after the brightness/contrast node.
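Ensuring this boils down to premultiplying the result (a sketch,
assuming the node math produces straight RGB):

    // Premultiply RGB by alpha after the brightness/contrast math;
    // alpha itself passes through unchanged.
    void ensure_premultiplied(float color[4])
    {
        color[0] *= color[3];
        color[1] *= color[3];
        color[2] *= color[3];
    }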
This is now done as an option enabled by default, so we:
(a) Keep compatibility with old files.
(b) Have correct behavior for newly created files.
Later on we can get rid of this option.
We were looping over all vgroups in the destination mesh and doing a
string comparison, for every vgroup of every vertex of the merged mesh!
Crazy!
Now we simply create a temporary mapping of vgroup indices, which
seriously simplifies things (and gives a significant speedup when
merging huge meshes with lots of vgroups; here, a quick stupid test went
from 120ms in vgroup merging to less than 5ms, 25 times quicker!).
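The idea, roughly (helper names hypothetical):

    // Build the vgroup index mapping once, by name.
    int *vgroup_map = (int *)malloc(sizeof(int) * src_num_vgroups);
    for (int i = 0; i < src_num_vgroups; i++) {
        vgroup_map[i] = find_vgroup_index(dst_ob, src_vgroup_names[i]);
    }

    // Per-vertex remapping is then a plain array lookup instead of
    // a string comparison per vgroup per vertex.
    for (int v = 0; v < num_verts; v++) {
        for (int w = 0; w < dverts[v].totweight; w++) {
            MDeformWeight *dw = &dverts[v].dw[w];
            dw->def_nr = vgroup_map[dw->def_nr];
        }
    }
    free(vgroup_map);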
The root of the issue here was that two stupid modifiers could create
named vgroup CD layers (the vgroup editing ones... shame on me :") ).
Fixed that, and added some versioning code to also fix 'corrupted' blend
files created by those so far.
Re-loading the module seems to invalidate memory pointers, which gives
an error on the next kernel call.
Not sure how to move a memory pointer from one CUDA module to another,
so for now simply disable kernel re-loading for CUDA devices. Not ideal,
but better than a failing render.
The feature-selective option for CUDA is not an official feature anyway.
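The workaround is essentially (a sketch, not the actual code):

    // Re-loading the module invalidates memory pointers and breaks
    // the next kernel call, so keep the already-loaded kernels.
    if (device_is_cuda) {
        return;
    }
    reload_kernels(requested_features);  // hypothetical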
`screen_findedge()` is not expected to return NULL in that case, but
checking against that does not hurt (we do it at all its other call
sites anyway); better than crashing.
For some reason GCC-6 successfully compiles a test program with
-Wno-implicit-fallthrough passed via the command line: it just
silently ignores unknown arguments which start with -Wno-.
The issue is, if some other warning happens in the code, then
GCC will also complain about the unknown -Wno- argument which is
not supported by the current GCC version.
This caused misleading warning prints about an unknown
command-line argument whenever any other warning happened in code
from extern/.
Volume shaders without anything connected to the surface output are treated
as if they had a transparent BSDF as the surface shader in Cycles, so the
denoiser should skip feature pass writing for them just as it does with an
actual transparent BSDF.
If the central pixel is an outlier, the denoiser is supposed to predict its
value from the surrounding pixels. However, in some cases the confidence
interval test would reject every single surrounding pixel, which leaves the
model fitting with no data to work with.
- Some arguments were inappropriately tagged as unused
  using the (void)foo semantic.
  Only use that semantic in tricky cases, when something
  needs to be ignored in release builds or something is
  dependent on tricky ifndef policy.
  For the rest of the cases just use the void foo(int /*bar*/)
  semantic, which ensures the variable is not used. This solves
  confusion and code running out of sync with later
  development (see the sketch after this list).
- Used the proper unused semantic for some arguments.
- Added braces to make the code easier to follow; tricky
  indentation with ifdef, uh.
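The two semantics side by side (a sketch):

    #include <cassert>

    // Preferred for plain unused parameters: omit the name, so the
    // variable cannot be used by accident.
    void foo(int /*bar*/)
    {
    }

    // Reserve (void)x for tricky cases, e.g. a value only read in
    // debug builds:
    void check(int value)
    {
        assert(value >= 0);
        (void)value;  // silence 'unused' warnings in release builds
    }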
For example, when using a radius of 1, only 9 pixels (due to weighting maybe
even less) will be used, but the transform code may still decide to use a
5-dimensional (or even higher) fit.
This causes severe overfitting and therefore weird pixel values.
To avoid this, this commit limits the number of dimensions to a third of
the number of pixels. For a radius of 3 or more, this doesn't change
anything, but for 1 and 2 it can prevent fireflies and/or negative
values from being produced.
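The clamp itself is tiny (a sketch with hypothetical names):

    // A radius-r window has (2r+1)^2 pixels, so radius 1 gives 9
    // pixels and therefore a cap of 3 dimensions.
    int num_pixels = (2 * radius + 1) * (2 * radius + 1);
    rank = std::min(rank, num_pixels / 3);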
Once again, numerical instabilities were causing the Cholesky
decomposition to fail.
However, further increasing the diagonal correction just because of a few
pixels in very specific scenes and settings seems unjustified.
Therefore, this commit simply falls back to the basic NLM-filtered pixel
if the more advanced model fails.
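In pseudocode-ish form (hypothetical names):

    // Fall back to the plain NLM-filtered value when the fit fails.
    if (!cholesky_decompose(design_matrix, rank)) {
        out_color = nlm_filtered_color;
    }
    else {
        out_color = reconstruct_from_fit(design_matrix, rhs);
    }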
I wouldn't mind switching fully to Google style, but I am against
mixing two different styles in the same project. So just stick to the
brace on a new line after a function definition.
There were the following issues with ccl_restrict_ptr:
- We already had ccl_restrict for all platforms.
- It was secretly adding a `const` qualifier to the declaration,
  which is quite weird since a non-const pointer can also be
  declared as restricted.
- We never use foo_ptr or FooPtr type definitions in Blender,
  so not sure why we should introduce such a thing here.
- It is absolutely wrong from a semantic point of view to hide
  the pointer inside the restrict macro -- const is part of the
  type, not part of the hint to the compiler that some pointer
  is never aliased (see the sketch below).
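A sketch of the preferred style; the actual definition of ccl_restrict
varies per platform:

    // ccl_restrict is just the compiler-specific qualifier, e.g.:
    #define ccl_restrict __restrict__

    // const stays an explicit part of the type, written at the
    // declaration instead of being hidden inside a macro:
    void filter_pixels(const float *ccl_restrict input,
                       float *ccl_restrict output);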
The denoise commit introduced kernel_write_result(), which saves light
passes, so there is no need to call both kernel_write_result() and
kernel_write_light_passes() from the split kernel.
Weirdly enough, kernel_write_result() does not take care of debug
passes.
The issue comes from some weird semi-finished canvas feature, which was
remapping coordinates without applying any differential to the sampling
ellipse (in fact, there is no ellipse; the sampling thing is always a
single pixel).
The whole thing is just weak in the compositor; for now, just bring the
behavior back to how it was prior to the optimization (multithreading)
commit.