This removed line is no longer necessary since
3b3b1bb1a75399cf25af2db65ed7fff57538dd5f,
because the run-time data is no longer written to the
.blend file at all.
There was a second randomization loop, after the first, which ignored
the operator settings. It may have been accidentally left in from when
the operator was being developed.
Removing this second loop makes the operator work as expected.
Pull Request: https://projects.blender.org/blender/blender/pulls/116835
A crash is observed when the `dst_curve` counts are the same as in the
original curve. The `type_count` array in the runtime struct of `dst_curve`
is not updated after copying curve attributes, hence the crash/assert.
In the other case (when the source and destination curve counts differ),
`type_count` is updated when `dst.finish()` is called in `gather_attributes()`.
Pull Request: https://projects.blender.org/blender/blender/pulls/116839
This modifies most shaders using the GBuffer to use
`ClosureUndetermined`. This in turn reduces the amount
of duplicated data in the `GBufferReader` structure.
The most important part is the use of fixed array
indices to access `GBufferReader.closures[]`.
This avoids the struct being moved from local registers
to device memory, removing a huge performance penalty.
Pull Request: https://projects.blender.org/blender/blender/pulls/116772
This refactors the whole pipeline to be closure agnostic.
This means we can have only one raytracing step that covers
any closure type. In practice, it means that if there are
3 objects having 1 closure each, each with a different
closure type, we can run the raytracing once.
This creates a huge overhead during the tile classification
stage. This is fixed by #116772 and will be merged separately.
Pull Request: https://projects.blender.org/blender/blender/pulls/116670
The same is done for other geometry types. This allows us to use C++ types in
the run-time data more easily and avoids dumping runtime data into .blend files.
Pull Request: https://projects.blender.org/blender/blender/pulls/116840
Now that OIIO has proper `valid_file` APIs for the formats we care
about, and which take MemReaders, we can remove the code added to TIFF,
PSD, and PNG as part of 5cc8fea7e99.
Additionally, this change eliminates the recent console spew on startup
where the TIFF loader is asked to load non-TIFF files (it is based on
the ordering of the filetype array)[1]. We now make a `valid_file` check
during open to address this.
[1] `: Not a TIFF or MDI file, bad magic number 12150 (0x2f76).`
Pull Request: https://projects.blender.org/blender/blender/pulls/116826
Added drag-and-drop support to file handlers, part of #111242.
If file handlers are registered with an import operator, they can now be
invoked with drag-and-drop path data.
Import operators must declare either a `filepath` StringProperty or both
a `directory` StringProperty and a `files` CollectionProperty, depending
on whether they support single or multiple files respectively.
Multiple file handlers can be valid for handling a dropped path. When
this happens, a menu is shown so the user can choose which exact handler
to use for the file.
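
For illustration, a minimal sketch of such a registration (the operator and
handler names, the `.myfmt` extension, and the poll logic are hypothetical):

```python
import bpy


class IMPORT_OT_my_format(bpy.types.Operator):
    """Hypothetical single-file importer."""
    bl_idname = "import_scene.my_format"
    bl_label = "Import My Format"

    # Single-file import: declaring `filepath` lets drag & drop fill it in.
    filepath: bpy.props.StringProperty(subtype='FILE_PATH', options={'SKIP_SAVE'})

    def execute(self, context):
        print("Importing", self.filepath)
        return {'FINISHED'}


class MYFORMAT_FH_import(bpy.types.FileHandler):
    bl_idname = "MYFORMAT_FH_import"
    bl_label = "My Format"
    bl_import_operator = "import_scene.my_format"
    bl_file_extensions = ".myfmt"

    @classmethod
    def poll_drop(cls, context):
        # Only accept drops into the 3D viewport (hypothetical restriction).
        return context.area is not None and context.area.type == 'VIEW_3D'


bpy.utils.register_class(IMPORT_OT_my_format)
bpy.utils.register_class(MYFORMAT_FH_import)
```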
Pull Request: https://projects.blender.org/blender/blender/pulls/116047
- Improve their look, so they feel less like they are from the year
1998 (more details and images in the PR).
- Some of the scopes got slightly faster in the process, others
stayed at the same performance (details below).
- Remove VSE scopes related data from SpaceSeq DNA, and move it into
runtime data instead.
The current handling had a fairly bad issue: multiple calls to
`set_tests_properties` setting envvars for the same test.
This does not work: only the last call is effective, all previous
ones have no effect.
This has been addressed by moving all 'set envvar for test' logic into a
single CMake function, `blender_test_set_envvars`.
This function takes optional extra envvars if needed, and defines a set
of default ones (currently, `PATH` from `PLATFORM_ENV_INSTALL` if
defined, and the 'nuke' `exitcode=0` `LSAN_OPTIONS` if relevant).
NOTE: The way `blender_test_set_envvars` handles extra envvars passed to
it as parameters is fairly basic and unsafe, in that there is no check
whether the same envvar is defined more than once. For now this is
considered an acceptable limitation.
NOTE: Although this commit _should_ be a non-functional change, the
unification of the handling of all envvars makes it hard to ensure there
are no side effects.
The `PATH` envvar e.g. was set to either `PLATFORM_ENV_INSTALL` if defined,
or a copy of that variable's definition, but only on Windows. So technically,
the behavior for this envvar is changed.
Overall the transition to C++ in the draw module is awkwardly half
complete, but moving more code to a C++ namespace makes cleaning up
this code in other ways much easier, and the next C++ cleanup steps
are clear anyway.
This commit adds a new helper to define the expected properties when a
target needs to use the unity build feature.
The new helper does what was already done for existing cases, and in
addition adds the target to the Ninja 'heavy' pooljobs if relevant.
Pull Request: https://projects.blender.org/blender/blender/pulls/116791
Using pooljobs with default settings should never have any significant impact
on the build speed, and it makes full debug builds with sanitizers
safe on (almost) all machines.
A quick test showed no significant difference in Release build time with or
without Ninja pooljobs (on Linux, with a 16-core, 64GB machine).
Instead of moving bone collections by absolute index, they can now be
moved by manipulating `.child_number`. This is the relative index of the
bone collection within the list of its siblings.
This replaces the much more cumbersome `collections.move_to_parent()`
function. Since that function is no longer necessary (it has been
replaced by assignment to `.parent` and `.child_number`), it is removed
from RNA. Note that this function was never part of even a beta build of
Blender.
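
A quick sketch of the resulting API (the armature and collection names are
hypothetical, and `collections_all` as the accessor is an assumption):

```python
import bpy

arm = bpy.data.armatures["Armature"]      # hypothetical armature
bcoll = arm.collections_all["Fingers"]    # `collections_all` accessor assumed

# Reparent, then make it the first child of the new parent.
bcoll.parent = arm.collections_all["Hand"]
bcoll.child_number = 0
```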
The `expect_bcolls()` function now no longer calls the `EXPECT_EQ` macro,
but returns a `testing::AssertionResult` instead. The function call does
need to be wrapped in an `EXPECT_TRUE()` call now, but that also means that
any failure message points directly to the call site.
Texture filtering does not work in the GPU compositor. That's because
filtering is set after textures are bound. It works in the viewport
compositor because the textures come pre-filtered from the DRW texture
pool.
Add an RNA getter for `bone_collection.is_visible_effectively`. This
value could already be derived from other flags, but now that logic sits
in C++ and no longer needs to be duplicated in Python.
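
Hypothetical usage from Python:

```python
# `bcoll` is any bone collection; the getter already accounts for
# ancestor visibility, so no manual flag-walking is needed.
if bcoll.is_visible_effectively:
    print(bcoll.name, "is visible, and so are all its ancestors")
```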
The ability to adjust the "Backwire Opacity" was mistakenly removed in
version 2.93 (b365cc017a).
As this issue went unnoticed by most users, it appears reasonable to
completely remove this setting from the code.
By making this change, there is no longer a need to define a default
value for `View3DOverlay::backwire_opacity`.
Pull Request: https://projects.blender.org/blender/blender/pulls/116799
This PR changes two things:
- Move setup/cleanup code into `setUp`/`tearDown`.
- Change `_fcurve_paths_match` to raise an error instead of returning a bool.
  This makes it easier to see what the actual error is.
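
A sketch of the second change (the function body is illustrative, not the
actual test code):

```python
def _fcurve_paths_match(fcurves, expected_paths):
    actual = sorted(fcurve.data_path for fcurve in fcurves)
    expected = sorted(expected_paths)
    if actual != expected:
        # Raising makes the mismatch itself part of the failure output,
        # instead of an opaque boolean assertion.
        raise AssertionError(f"FCurve paths {actual} do not match {expected}")
```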
Pull Request: https://projects.blender.org/blender/blender/pulls/116816
The GPU implementation of the morphological feather erosion operator is
different from the CPU implementation. This is because the CPU does
erosion by doing a dilation sandwiched between two inversions. So
this patch fixes the difference by following the CPU implementation.
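
The identity in question, as a numpy sketch (illustrative, not the
compositor code):

```python
import numpy as np
from scipy import ndimage

def erode_via_dilation(image, size):
    # Erosion == invert, dilate, invert again (for values in [0, 1]
    # and a flat, symmetric structuring element).
    return 1.0 - ndimage.grey_dilation(1.0 - image, size=size)
```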
Update the regular jobs amount computation to follow the same logic as
for the heavy ones. The main difference is that it uses a '2Gb of RAM
per job' base value.
This change is mainly targeted at machines with a relatively low
RAM/cores ratio, since even regular compile jobs can end up using quite
a lot of RAM if many are running in parallel; previous defaults would
likely not work well on machines with e.g. 16Gb of RAM and 16 cores.
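
For illustration, the heuristics roughly amount to the following (Python
sketch; the exact CMake logic and the core-count cap are assumptions):

```python
import os

total_ram_gb = 16                  # hypothetical machine
num_cores = os.cpu_count() or 16

# Regular jobs: roughly one job per 2Gb of RAM, capped by the core count.
regular_jobs = min(num_cores, max(1, total_ram_gb // 2))
# Heavy jobs: roughly one job per 8Gb of RAM (see the commit below).
heavy_jobs = max(1, total_ram_gb // 8)

print(f"regular jobs: {regular_jobs}, heavy pool depth: {heavy_jobs}")
```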
Also fix a typo in previous commit (6493d0233c), sorry about that.
This commit simplifies and makes more generic the computation of the
maximum number of parallel heavy build jobs. Essentially, it allows 1
heavy job per 8Gb of RAM.
It also systematically sets the amount of heavy jobs, since we are going
to get more of these in the future (like the 'unity build' units). The
previous heuristic had some loose ends (e.g. for a 40Gb RAM, 16-thread
machine, it would not set any limit on heavy jobs, yet said machine
would likely not be able to run 16 heavy jobs of 3.5+Gb each in parallel...).
This is some initial step towards a better handling of 'sanitizer' builds
on the Blender buildbot.
Add unit tests for the user preference option "Insert Needed".
Basic tests for objects and bones check that autokeying in
combination with "Insert Needed" only:
* keys all location channels on the first key
* keys only the modified channel on the second key
It is supposed to add only keyframes for values that have been affected
by the used transform operation.
E.g. translating an object will only add keys on the translation channels.
The behavior of keying all property array channels first, and then
only adding keys on values that have actually changed, may change
in the future. Ideally it would only key actual changes to begin
with, but there is no way to do that right now.
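
A rough sketch of what the object test checks (setup, operator calls, and
property paths are illustrative, not the actual test code):

```python
import bpy

scene = bpy.context.scene
scene.tool_settings.use_keyframe_insert_auto = True
bpy.context.preferences.edit.use_keyframe_insert_needed = True

obj = bpy.context.object
bpy.ops.transform.translate(value=(1.0, 0.0, 0.0))
fcurves = obj.animation_data.action.fcurves

# First key: all three location channels are created.
assert len([f for f in fcurves if f.data_path == "location"]) == 3

scene.frame_set(scene.frame_current + 10)
bpy.ops.transform.translate(value=(1.0, 0.0, 0.0))

# Second key: only the modified channel (X) gains a keyframe.
for f in fcurves:
    if f.data_path == "location":
        expected = 2 if f.array_index == 0 else 1
        assert len(f.keyframe_points) == expected
```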
Pull Request: https://projects.blender.org/blender/blender/pulls/116419
- Move code to C++ namespace for blenkernel
- Remove unnecessary prefixes based on namespace change
- Remove use of `RawVector` for function-scoped static variable
- Use `StringRef` instead of char pointer
- Use safer `STRNCPY` instead of `strcpy` in tests
- Give span instead of vector to users of API
Pull Request: https://projects.blender.org/blender/blender/pulls/116808