Although the contour filter was recently divided into two filters, flying
edges and marching cubes, the marching cubes version still had many
conditions and was the file that took the longest to compile on Frontier.
To help speed up parallel compiles and prevent a single run of a
compiler from being overwhelmed, the compilation of all the marching
cubes conditions has been split up using instantiations.
This resolves a compiler ambiguity I hit during development.
In my case, I created an `ArrayHandleDecorator` with one of the arrays being
an `ArrayHandleTransform`. The ambiguity occurs in the function
`DecoratorStorageTraits<...>::BuffersToArray`, where an `ArrayHandleTransform`
is constructed using buffers (`std::vector<vtkm::cont::internal::Buffer>`).
This constructor is not defined for `ArrayHandleTransform`, but it is defined
for its superclass (`vtkm::cont::ArrayHandle`). `ArrayHandleTransform` does
have a non-explicit constructor that takes the array to be transformed and
the transform functor as parameters, where the latter has a default value.
This gives the compiler two ambiguous options:
1. Create a "to-be-transformed" `ArrayHandle` instance using the buffers,
then call the `ArrayHandleTransform` constructor with this array and the
defaulted functor parameter.
2. Create the superclass instance using the buffers and call the
`ArrayHandleTransform` constructor that takes the superclass.
In this situation, option 2 is the correct choice.
The ambiguity is resolved by marking the constructors that take
buffers as explicit. These constructors are also added to the
derived classes via the `VTK_M_ARRAY_HANDLE_SUBCLASS_IMPL` macro.
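The following minimal sketch reproduces the ambiguity with stand-in types
(these are illustrative, not the real VTK-m classes):

```cpp
#include <vector>

struct Buffer {};
using Buffers = std::vector<Buffer>;

struct TransformFunctor {};

// Stand-in for the array to be transformed.
struct InputArray
{
  InputArray(const Buffers&) {} // non-explicit: enables option 1
};

// Stand-in for the superclass vtkm::cont::ArrayHandle.
struct Superclass
{
  Superclass() = default;
  Superclass(const Buffers&) {} // non-explicit: enables option 2
};

// Stand-in for ArrayHandleTransform.
struct Transform : Superclass
{
  // Constructor taking the to-be-transformed array, functor defaulted.
  Transform(const InputArray&, TransformFunctor = TransformFunctor{}) {}

  // Constructor taking the superclass.
  Transform(const Superclass& super) : Superclass(super) {}
};

int main()
{
  Buffers buffers;
  // Both constructors are viable via exactly one user-defined conversion
  // from Buffers, so overload resolution is ambiguous. Marking the Buffers
  // constructors explicit (and giving the subclass its own explicit
  // Buffers constructor) resolves it in favor of option 2.
  Transform transform(buffers); // error: ambiguous
  (void)transform;
}
```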
There are occasions when you need a worklet to operate on 2D or 3D
indices. Most worklets operate on 1D indices, which requires recomputing
the 3D index in each worklet instance. A workaround is to use a worklet
that does a 3D scheduling and pulls the working index from it.
The problem was that there was no easy way to get this 3D index. To
provide this option, a feature was added to the `BoundaryState` class
that can be provided by `WorkletPointNeighborhood`.
Thus, to get a 3D index in a worklet, use the
`WorkletPointNeighborhood`, add `Boundary` as an argument to the
`ExecutionSignature`, and then call `GetCenterIndex` on the
`BoundaryState` object passed to the worklet operator.
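A minimal sketch of this usage (the type of the returned index is assumed
to be `vtkm::Id3` here):

```cpp
#include <vtkm/worklet/WorkletPointNeighborhood.h>

struct Get3DIndex : vtkm::worklet::WorkletPointNeighborhood
{
  using ControlSignature = void(CellSetIn, FieldOut);
  using ExecutionSignature = void(Boundary, _2);

  VTKM_EXEC void operator()(const vtkm::exec::BoundaryState& boundary,
                            vtkm::Id3& index3D) const
  {
    // The center index is the 3D index of the point this instance visits.
    index3D = boundary.GetCenterIndex();
  }
};
```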
The file `ArrayRangeCompute.cxx` was taking a long time to compile with
some device compilers. This is because it precompiles the range computation
for many types of array structures. It thus compiled the same operation
many times over.
The new implementation compiles just as many cases. However, the
compilation is split into many different translation units using the
instantiations feature of VTK-m's configuration. Although this rarely
reduces the overall CPU time spent during compiling, it prevents parallel
compiles from waiting for this one build to complete. It also avoids
potential issues with compilers running out of resources as they try to
build a monolithic file.
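The underlying C++ mechanism is explicit instantiation split across
translation units; the sketch below uses an illustrative function name
rather than VTK-m's actual instantiation macros:

```cpp
#include <vtkm/Range.h>
#include <vtkm/cont/ArrayHandle.h>

// In the header: declare the template and suppress implicit instantiation
// of the cases that are compiled elsewhere.
template <typename T>
vtkm::cont::ArrayHandle<vtkm::Range> ComputeRange(
  const vtkm::cont::ArrayHandle<T>& array);

extern template vtkm::cont::ArrayHandle<vtkm::Range> ComputeRange(
  const vtkm::cont::ArrayHandle<vtkm::Float32>& array);
extern template vtkm::cont::ArrayHandle<vtkm::Range> ComputeRange(
  const vtkm::cont::ArrayHandle<vtkm::Float64>& array);

// Instantiation1.cxx: this case is compiled here, in its own translation
// unit, so parallel builds can work on the cases concurrently.
template vtkm::cont::ArrayHandle<vtkm::Range> ComputeRange(
  const vtkm::cont::ArrayHandle<vtkm::Float32>& array);

// Instantiation2.cxx: another case, another translation unit.
template vtkm::cont::ArrayHandle<vtkm::Range> ComputeRange(
  const vtkm::cont::ArrayHandle<vtkm::Float64>& array);
```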
Previously, the `ComputeMoments` filter only operated on a finite set of
array types as its input field. This included a prescribed list of `Vec`
sizes for the input. The filter has been updated to use more generic
interfaces to the field's array (and float fallback) to enable the
computation of moments on any type of scalar field.
The structured connectivity classes are templated on two tags that
determine which two incident topological elements are being accessed. Back
in the day, these were called the "from" elements and "to" elements, as
taken from VTK filter names like `PointDataToCellData`. However, these
names were found to be very confusing, and after much debate they have
been renamed to the visit element type and the incident element type.
Meaning that a worklet is "visiting" elements of a particular type (such
as visiting each cell) and can access "incident" elements of a
particular type (such as the points incident on the cell).
I found a few methods that convert between flat and logical indices using
the old, confusing from/to convention. This change updates them to the new
convention.
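For example, under the new convention the template arguments of a
structured connectivity read naturally (a short sketch using the tag
names):

```cpp
#include <vtkm/TopologyElementTag.h>
#include <vtkm/exec/ConnectivityStructured.h>

// This connectivity is used when visiting cells and accessing the points
// incident on each cell.
using VisitCellsWithPointsConn = vtkm::exec::ConnectivityStructured<
  vtkm::TopologyElementTagCell,   // visit element type
  vtkm::TopologyElementTagPoint,  // incident element type
  3>;                             // dimension
```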
This commit adds the flag `VTKm_ENABLE_GPU_MPI`, which, when enabled,
makes VTK-m use GPU-aware MPI.
- This will only work with GPUs and an MPI implementation that supports
  GPU-aware MPI calls.
- Enabling `VTKm_ENABLE_GPU_MPI` without MPI/GPU support might result in
  errors when running VTK-m with DIY/MPI.
- Only the following tests can run with this feature enabled:
  - UnitTestSerializationDataSet
  - UnitTestSerializationArrayHandle
The internal array copy has an optimization to use the device the array
exists on to do the copy. However, if that device is disabled the copy
would fail. This problem has been fixed.
The test for particle advection filters was one large test that tested
three versions --- advection, streamlines, and pathlines --- with each
tested for a variety of conditions including asynchronous communication,
number of blocks, ghost cells, etc. This was causing the test to take a
while and sometimes time out. (It would also sometimes seg fault, which I
hope is related.) To attempt to fix this problem, this test has been
broken into pieces so that each piece takes a shorter amount of time.
Because these tests share most of their implementation (which is why
they were grouped together in the first place), the common code is placed
in a source file of shared implementation. To support this, I also added
a way to mark a source file passed to `vtkm_unit_tests` as one that does
not contain its own test. Normally you would just compile all of the
tests together, select each with command line arguments, and use a
duplicate `add_test` for each argument. But that is not how
`vtkm_unit_tests` works, and it would be too hard to make that change.
The `FilterField` class provides convenience functions for subclasses to
determine the `ArrayHandle` type for scalar and vector fields. However, you
needed to know the specific size of vectors. For filters that support an
input field of any type, a new form, `CastAndCallVariableVecField`, has
been added. This calls the underlying functor with an
`ArrayHandleRecombineVec` of the appropriate component type.
The `CastAndCallVariableVecField` method also reduces the number of
instances created by having a float fallback for any component type that
does not satisfy the field types.
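A sketch of how a subclass might use this method (the worklet and output
array here are hypothetical placeholders, not VTK-m code):

```cpp
// Inside a FilterField subclass's DoExecute (sketch only; `MyWorklet`
// and `outArray` are hypothetical).
vtkm::cont::Field field = this->GetFieldFromDataSet(input);

auto resolve = [&](const auto& recombineVec) {
  // `recombineVec` is an ArrayHandleRecombineVec whose component type
  // matches the field; the vec size is a runtime quantity.
  this->Invoke(MyWorklet{}, recombineVec, outArray);
};
this->CastAndCallVariableVecField(field, resolve);
```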
The legacy VTK file reader previously only supported a specific set of Vec
lengths (i.e., 1, 2, 3, 4, 6, and 9). This is because a basic array
handle has to have the vec length compiled in. However, the new
`ArrayHandleRuntimeVec` feature is capable of reading in any vec length
and can be leveraged to read arbitrarily sized vectors in field arrays.
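A sketch of the underlying capability, assuming the
`make_ArrayHandleRuntimeVec` helper and this argument order:

```cpp
#include <vtkm/cont/ArrayHandle.h>
#include <vtkm/cont/ArrayHandleRuntimeVec.h>
#include <vtkm/cont/UnknownArrayHandle.h>

vtkm::cont::UnknownArrayHandle ReadFieldArray(vtkm::Id numValues)
{
  // Group a flat component array into Vecs whose size is chosen at
  // runtime (7 here, a length the old reader could not support).
  vtkm::cont::ArrayHandle<vtkm::Float32> components;
  components.Allocate(7 * numValues); // e.g., filled while parsing a file
  return vtkm::cont::make_ArrayHandleRuntimeVec(7, components);
}
```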
Originally, the metadata structure used by the `ArrayHandleRuntimeVec`
storage was a nested class of its `Storage`. But the `Storage` is
templated on the component type, and the metadata object is the same
regardless. In addition to the typical minor issue of having the
compiler create several identical classes, this caused problems when
pulling arrays from equivalent but technically different C types (for
example, `long` is the same as either `int` or `long long`).
There was an error in `operator-=` for `IteratorFromArrayPortal` that went
unnoticed. The operator is fixed, and regression tests for the operators
have been added.
`UnknownArrayHandle` is supposed to treat `ArrayHandleRuntimeVec` the
same as `ArrayHandleBasic`. However, the `NewInstance` methods were
failing because they need custom handling of the vec size.
In addition to using uniform coordinates, the `ContourFlyingEdges` filter
can now process any type of coordinate system, making the filter use
Flying Edges in more cases.
To compile the contour filter more efficiently, we split it into two
separate translation units corresponding to the new filters
`ContourFlyingEdges` and `ContourMarchingCells`. The API of the `Contour`
filter is left totally unchanged; it tries to use Flying Edges if the
dataset is structured and its coordinates are uniform.
All three contour filters inherit from the `AbstractContour` class, which
provides utility methods used in the implementations.
ArrayHandle::StorageType should be public
StorageType is public in `vtkm::cont::ArrayHandle`, but some subclasses
re-declare it as private. Having it public provides a convenient way to
get the storage type of an `ArrayHandle`, which is otherwise quite verbose.
Addresses: #314
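A small illustration of the convenience:

```cpp
#include <vtkm/cont/ArrayHandleCounting.h>

void StorageTypeExample()
{
  vtkm::cont::ArrayHandleCounting<vtkm::Id> counting(0, 1, 10);
  // The storage type can now be named directly from the handle's type,
  // which is much less verbose than spelling out the storage tag.
  using CountingStorage = decltype(counting)::StorageType;
  (void)counting;
}
```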
Tetrahedralize and Triangulate filters check if input is already triangulated
Previously, tetrahedralize/triangulate would blindly convert all the cells
to tetrahedra/triangles, even when they already were. Now the dataset is
returned directly if its `CellSet` is a `CellSetSingleType` of
tetras/triangles, and no further processing is done in the worklets for
`CellSetExplicit` when all shapes are tetras or triangles.
Add a regression test for passing a variant as an argument to a worklet.
Variants are tricky objects, and the compiler might make some strange
assumptions during optimization.
One case that popped up in particular was when a variant contained two
objects with the same `sizeof` but the first object had padding. When the
variant contained the second object, the value under the padded part of
the first object was not set.
I've been seeing errors in a nightly build that compiles for CUDA Pascal
using GCC5. The issue is that one of the `ArrayHandleMultiplexer` tests
is failing to copy an implicit array correctly. I think the problem is
that in this test the first and second type of the `Variant` are the same
size, but the first type has some padding in the middle whereas the
second type does not. When using this second type, the values in the
same position of the padding of the first type don't seem to be
initialized properly in the kernel invocation.
My nonexhaustive experiments show that things work OK as long as the
first type is large enough and has no padding. Enforce this by adding an
internal entry to the union that is completely full.
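The hazard can be sketched with two illustrative types (not the ones from
the failing test):

```cpp
#include <cstdint>

struct PaddedType
{
  std::int32_t a; // 4 bytes
  // 4 bytes of padding inserted here by the compiler:
  std::int64_t b; // 8-byte alignment forces the gap
};

struct DenseType
{
  std::int64_t x;
  std::int64_t y;
};

static_assert(sizeof(PaddedType) == sizeof(DenseType), "same size");

// In a union-backed variant, copying a DenseType value through the
// union's first member (PaddedType) can skip the bytes under PaddedType's
// padding, corrupting the DenseType. The fix adds an internal union entry
// with no padding that covers the full size of the variant.
```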
This new filter, designed for bi-variate analysis, builds the continuous
scatterplot of a 3D tetrahedralized mesh for two given scalar point
fields. The continuous scatterplot is an extension of the discrete
scatterplot for continuous bi-variate analysis.
While adding checks to the size of implicit arrays, it was discovered
that some of the Mask and ScatterPermutation arrays were constructed
with the wrong sizes during dispatch. In particular, these arrays were
supposed to be sized on the output, but were instead sized on the input.
They went unnoticed because they were using implicit arrays that worked
even if the array access went out of bounds.
The clip filter used to copy the input points and point fields as is,
regardless of whether they were actually part of the output. With this
change, we track which input points are actually part of the output and
copy only those values.
Addresses: #112
The `HierarchicalAugmenter` used in the distributed contour tree filter
attempts to save memory by releasing buffers used to send and receive
data after the DIY enqueues and dequeues are posted. This works as long
as the DIY serialization process copies the data in the arrays. However,
changes are coming where the data is sent directly from the buffers.
These changes cause a deadlock because the enqueue places a read lock
that is held until the data is sent, while the release tries to get a
write lock.
The solution is simply to "forget" the array rather than explicitly
delete it. In this case, the array will automatically be deleted once
everyone is done with it.
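A simplified sketch of the difference (illustrative, not the contour tree
code itself):

```cpp
#include <vtkm/cont/ArrayHandle.h>

void ReleaseAfterEnqueue(vtkm::cont::ArrayHandle<vtkm::Id>& sendBuffer)
{
  // A DIY enqueue of `sendBuffer` holds a read lock until the data is
  // sent.

  // Explicitly deleting deadlocks: it waits for a write lock.
  // sendBuffer.ReleaseResources();

  // "Forgetting" works: drop this reference and let the array be deleted
  // automatically once everyone (including DIY) is done with it.
  sendBuffer = vtkm::cont::ArrayHandle<vtkm::Id>{};
}
```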
While going through the VTK-m code to identify where a cast-and-call was
happening against VTKM_DEFAULT_TYPE_LIST, I ran into a subtle case in
`ParticleDensityBase` that was calling a worklet with an
`UnknownArrayHandle`. This works OK, but was probably compiling for
unnecessary types (for example, vectors). Changed the field resolution
to be more intentional.
MR !2969 was meant to update the clip filters such that their field
mapping works on any field type and preserves the value type. Although
this was done for `ClipWithField`, it was not fully implemented for
`ClipWithImplicitFunction`. These changes update
`ClipWithImplicitFunction` to match its sibling.
The `VecTraits` class allows templated functions, methods, and classes to
treat type arguments uniformly as `Vec` types or to otherwise differentiate
between scalar and vector types. This only works for types that `VecTraits`
is defined for.
The `VecTraits` templated class now has a default implementation that will
be used for any type that does not have a `VecTraits` specialization. This
removes many surprise compiler errors when using a template that, unknown
to you, has `VecTraits` in its implementation.
One potential issue is that if `VecTraits` gets defined for a new type, the
behavior of `VecTraits` could change for that type in backward-incompatible
ways. If `VecTraits` is used in a purely generic way, this should not be an
issue. However, if assumptions were made about the components and length,
this could cause problems.
Fixes #589
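To illustrate the kind of generic code `VecTraits` supports, here is a
small sketch using its documented interface:

```cpp
#include <vtkm/VecTraits.h>

// Works for scalars and Vecs alike, and with the new default
// implementation also for types without a VecTraits specialization.
template <typename T>
typename vtkm::VecTraits<T>::ComponentType SumComponents(const T& value)
{
  using Traits = vtkm::VecTraits<T>;
  typename Traits::ComponentType sum(0);
  for (vtkm::IdComponent i = 0; i < Traits::GetNumberOfComponents(value);
       ++i)
  {
    sum += Traits::GetComponent(value, i);
  }
  return sum;
}
```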
Some of the data sets that are included from VTK-m are derived from the
VisIt Tutorial Data (https://www.visitusers.org/index.php?title=Tutorial_Data).
These are covered by the VisIt license, as communicated by Eric Brugger.
Although the license for these data is compatible with VTK-m's license,
we should still attribute the source of the data and make clear the
copyrights. The data are moved into the third_party directory, and
readmes are added to document everything.
The noise.vtk and noise.bov files have been renamed example.vtk and
example_temp.bov to match the name of the file in the VisIt tutorial
data archive. The ucd3d.vtk file, which is similar to the curv3d.silo
data but altered, has been removed. It was not used for any tests. It
was referenced in a couple of example programs, but the reference is
easily changed.
Some of the test data sets are derived from data sets that are commonly
distributed to test visualization algorithms and are featured in
numerous papers. However, I am unable to track down the original source,
let alone identify what license, if any, they were released under. To
avoid any complications with data ownership, remove these data sets and
replace them with in-house data sets that we explicitly own.
The precompiled `ArrayRangeCompute` function was not following proper fast
paths for special arrays. For example, when computing the range of an
`ArrayHandleUniformPointCoordinates`, the ranges should be taken from the
origin and spacing of the special array. However, the precompiled version
was calling the generic range computation, which was doing an unnecessary
reduction over the entire array. These fast paths have been fixed.
These mistakes in the code were caused by quirks in how templated method
overloading works. To prevent this mistake from happening again in the
precompiled `ArrayRangeCompute` function and elsewhere, all templated forms
of `ArrayRangeCompute` have been deprecated. Most callers can use the
non-templated `ArrayRangeCompute` with no issues. For those that need the
templated version, `ArrayRangeComputeTemplate` replaces the old templated
`ArrayRangeCompute`. There is exactly one templated declaration of
`ArrayRangeComputeTemplate`, which uses a class, `ArrayRangeComputeImpl`,
with partial specialization to ensure the correct form is used.
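A sketch of the resulting usage (the header providing the templated form
is assumed here):

```cpp
#include <vtkm/cont/ArrayHandleUniformPointCoordinates.h>
#include <vtkm/cont/ArrayRangeCompute.h>

void RangeExample()
{
  vtkm::cont::ArrayHandleUniformPointCoordinates coords(
    vtkm::Id3(16, 16, 16));

  // Precompiled form; now follows the origin/spacing fast path. The
  // result has one Range entry per component (3 here).
  vtkm::cont::ArrayHandle<vtkm::Range> ranges =
    vtkm::cont::ArrayRangeCompute(coords);

  // Templated form for cases that need compile-time resolution.
  auto ranges2 = vtkm::cont::ArrayRangeComputeTemplate(coords);
  (void)ranges;
  (void)ranges2;
}
```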
`vtkm::cont::DataSet` is a dynamic object that can hold cell sets and
fields of many different types, none of which are known until runtime. This
causes a problem with serialization, which has to know what type to compile
the serialization for, particularly when unserializing the type at the
receiving end. The original implementation "solved" the problem by creating
a secondary wrapper object that was templated on types of field arrays and
cell sets that might be serialized. This is not a great solution as it
punts the problem to algorithm developers.
This problem has been completely solved for fields, as it is now possible
to serialize most types of arrays without knowing their type. You still
need to iterate over every possible `CellSet` type, but there are not that
many `CellSet`s that are practically encountered. Thus, there is now a
direct implementation of `Serialization` for `DataSet` that covers all the
data types you are likely to encounter.
The old `SerializableDataSet` has been deprecated. In the unlikely event an
algorithm needs to transfer a non-standard type of `CellSet` (such as a
permuted cell set), it can use the replacement `DataSetWithCellSetTypes`,
which just specifies the cell set types.
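A sketch of a direct serialization round trip, assuming the usual
`vtkmdiy` entry points and that the serialization declarations are pulled
in by these headers:

```cpp
#include <vtkm/cont/DataSet.h>
#include <vtkm/thirdparty/diy/diy.h>

void RoundTrip(const vtkm::cont::DataSet& in, vtkm::cont::DataSet& out)
{
  vtkmdiy::MemoryBuffer buffer;
  vtkmdiy::save(buffer, in); // direct serialization of the data set
  buffer.reset();            // rewind before reading back
  vtkmdiy::load(buffer, out);
}
```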