On reflection, the `WarpScalar` filter is, somewhat surprisingly, a superset of
the `WarpVector` features: `WarpScalar` has the ability to displace in the
direction of the mesh normals. VTK draws a distinction between normals and
vectors, but in VTK-m it is simply a matter of selecting the correct field. As
such, it makes little sense to maintain two separate implementations of the
same operation. The filters have been combined, and the interface names have
been generalized (e.g., "normal" or "vector" becomes "direction").
In addition to consolidating the implementation, the `Warp` filter has been
updated to use the modern features of VTK-m's
filter base classes. In particular, when the `Warp` filters were originally
implemented, the filter base classes did not support more than one active
scalar field, so filters like `Warp` had to manage multiple fields
themselves. The `FilterField` base class now allows specifying multiple,
indexed active fields, and the updated implementation uses this to manage
the input vectors and scalars.
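
A minimal usage sketch under a few assumptions: the consolidated filter is
assumed to live at `vtkm::filter::field_transform::Warp` (check your VTK-m
version), the field names are placeholders, and the mapping of active-field
indices to roles shown here is illustrative rather than authoritative.

```cpp
#include <vtkm/cont/DataSet.h>
#include <vtkm/filter/field_transform/Warp.h>

vtkm::cont::DataSet WarpByFields(const vtkm::cont::DataSet& input)
{
  vtkm::filter::field_transform::Warp warp;
  // Indexed active fields come from the FilterField base class.
  warp.SetActiveField(0, "directions"); // warp directions (index assumed)
  warp.SetActiveField(1, "scale");      // scale factors (index assumed)
  return warp.Execute(input);
}
```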
The `Warp` filters have also been updated to directly support constant
vectors and scalars, which are common inputs for `WarpScalar` and
`WarpVector`, respectively. Previously, supplying a constant value required
adding a field containing an `ArrayHandleConstant`. That approach is still
supported, but directly selecting a constant vector or scalar is now much
easier.
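
For reference, the older workaround looks something like the following
sketch (the field name is a placeholder):

```cpp
#include <vtkm/cont/ArrayHandleConstant.h>
#include <vtkm/cont/DataSet.h>

// Attach a constant per-point scale field built from an ArrayHandleConstant.
void AddConstantScale(vtkm::cont::DataSet& dataSet, vtkm::FloatDefault scale)
{
  vtkm::cont::ArrayHandleConstant<vtkm::FloatDefault> constantScale(
    scale, dataSet.GetNumberOfPoints());
  dataSet.AddPointField("scale", constantScale);
}
```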
Internally, the implementation now uses tricks with extracting array
components to support many different array types (including
`ArrayHandleConstant`). This allows it to simultaneously interact with
coordinates, directions, and scalars without creating too many template
instances.
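
As a rough sketch of the component-extraction idea (assuming the field data
is held in an `UnknownArrayHandle`):

```cpp
#include <vtkm/cont/ArrayHandleStride.h>
#include <vtkm/cont/UnknownArrayHandle.h>

// Extracting a component yields an ArrayHandleStride, so the same template
// instantiation works whether the underlying storage is basic, SOA,
// constant, or something else entirely.
vtkm::cont::ArrayHandleStride<vtkm::FloatDefault> GetFirstComponent(
  const vtkm::cont::UnknownArrayHandle& array)
{
  return array.ExtractComponent<vtkm::FloatDefault>(0, vtkm::CopyFlag::On);
}
```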
In order to compile the contour filter more efficiently, we split it into two separate translation units corresponding to the new filters `ContourFlyingEdges` and `ContourMarchingCells`. The API of the `Contour` filter is left completely unchanged; it tries to use flying edges if the dataset is structured and uniform.
All three contour filters inherit from the `AbstractContour` class, which provides utility methods shared by the implementations.
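
Existing code therefore keeps working as before; a minimal sketch (the field
name and iso value are placeholders):

```cpp
#include <vtkm/cont/DataSet.h>
#include <vtkm/filter/contour/Contour.h>

vtkm::cont::DataSet ExtractIsosurface(const vtkm::cont::DataSet& input)
{
  // The general Contour filter still dispatches to an implementation itself.
  vtkm::filter::contour::Contour contour;
  contour.SetActiveField("pointvar");
  contour.SetIsoValue(0.5);
  return contour.Execute(input);
}
```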
One of the checks for `BenchmarkFilters` is to test the number of
components in field arrays. It did this by using a `CastAndCall` to get
the base `ArrayHandle` and check the type. This is unnecessarily
complicated and fragile in the case where the base array type cannot be
resolved by `CastAndCall`. Instead, just query the `UnknownArrayHandle`
for the number of components.
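
The simpler check looks roughly like this (the helper name is hypothetical):

```cpp
#include <vtkm/cont/Field.h>

vtkm::IdComponent NumFieldComponents(const vtkm::cont::Field& field)
{
  // No CastAndCall needed; the UnknownArrayHandle reports this directly.
  return field.GetData().GetNumberOfComponentsFlat();
}
```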
This should help catch regressions that only affect a single filter. It
also allows us to set up a partitioned test that has many small blocks, and
it fixes an issue we were seeing on the dashboard where the contour
benchmark was apparently running out of resources.
Added a name option that allows the same benchmark executable to be used
in multiple benchmark tests, which allows the benchmarks to be separated.
Also added an option to pass customized arguments to the benchmark
executable to override the default values.
Originally, most of the sources used constructor parameters to set their
various options. Although convenient, it was difficult to keep track of
what each parameter meant. To make the code clearer, source parameters are
now set with accessor functions (e.g., `SetPointDimensions`). Although this
makes the code more verbose, it helps prevent mistakes and makes the code
more resilient to future changes.
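
A small sketch of the accessor style, using the `Tangle` source as an
example (`SetPointDimensions` is named above; the exact signature is
assumed):

```cpp
#include <vtkm/cont/DataSet.h>
#include <vtkm/source/Tangle.h>

vtkm::cont::DataSet MakeTangleData()
{
  vtkm::source::Tangle tangle;
  tangle.SetPointDimensions(vtkm::Id3(64, 64, 64)); // instead of a constructor argument
  return tangle.Execute();
}
```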
Several revisions ago, the ability to use virtual methods in the
execution environment was deprecated. This functionality has now been
completely removed for the VTK-m 2.0 release.
`ExecutionWholeArray` is an archaic class in VTK-m that is a thin wrapper
around an array portal. In the early days of VTK-m, this class was used to
transfer whole arrays to the execution environment. However, now the
supported method is to use `WholeArray*` tags in the `ControlSignature` of
a worklet.
Nevertheless, the `WholeArray*` tags caused the array portal transferred to
the worklet to be wrapped inside of an `ExecutionWholeArray` class. This
is unnecessary and can cause confusion about the types of data being used.
Most code is unaffected by this change. Some code that had to work around
the portal being wrapped in another class used the `GetPortal` method,
which is no longer needed (for obvious reasons). One extra feature that
`ExecutionWholeArray` provided was a subscript operator (somewhat
incorrectly), so any use of `[..]` to index the array portal has to be
changed to use the `Get` method.
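
A sketch of what the change looks like inside a worklet that uses a
`WholeArrayIn` tag (the worklet and its names are illustrative):

```cpp
#include <vtkm/worklet/WorkletMapField.h>

struct LookupValue : vtkm::worklet::WorkletMapField
{
  using ControlSignature = void(FieldIn index, WholeArrayIn data, FieldOut value);
  using ExecutionSignature = void(_1, _2, _3);

  template <typename PortalType, typename T>
  VTKM_EXEC void operator()(vtkm::Id index, const PortalType& data, T& value) const
  {
    // The parameter is now the raw array portal, so use Get rather than the
    // subscript operator that ExecutionWholeArray used to provide.
    value = data.Get(index); // was: data[index]
  }
};
```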
When the configure modules were first created, there were some hacks
around filters that had not yet moved to the new filter mechanism. These
can (and should) be removed.
The previous version of BenchmarkInSitu added arguments to its argv list
by using `strdup`. However, this method will leak memory, which is not
great. Replace this with a safer mechanism that will properly delete
memory at the end of the program (and satisfy any memory analyzers).
This also fixes a deprecation warning about `strdup` from the Microsoft
compiler.
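
One safe pattern along these lines (not necessarily the exact mechanism
adopted) is to own the argument strings in `std::string` objects and hand
out pointers into their buffers:

```cpp
#include <string>
#include <vector>

void BuildArgv(std::vector<std::string>& args)
{
  // The strings stay alive in 'args', so nothing is strdup'ed and everything
  // is released automatically when the vectors go out of scope.
  std::vector<char*> argv;
  for (std::string& arg : args)
  {
    argv.push_back(arg.data()); // non-const data() requires C++17
  }
  int argc = static_cast<int>(argv.size());
  // ... pass argc and argv.data() to the benchmark initialization ...
  (void)argc;
}
```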
This mechanism sets up CMake variables that allow a user to select which
modules/libraries to create. Dependencies will be tracked down to ensure
that all of a module's dependencies are also enabled.
The modules are also arranged into groups.
Groups allow you to set the enable flag for a group of modules at once.
Thus, if you have several modules that are likely to be used together,
you can create a group for them.
This can be handy for converting user-friendly CMake options (such as
`VTKm_ENABLE_RENDERING`) into the set of modules they enable by pointing
the option at the appropriate group.
The previous version had a hard-coded loop for updating the dataset. This change relinquishes control to Google Benchmark. Running the benchmark from the command line iterates each sub-benchmark a different number of times depending on how long each individual test takes. The number of iterations can be increased (but not controlled exactly) by increasing the minimum time for each test. Alternatively, the "repetitions" argument lets you control exactly how many times the benchmarks are run.
See the README for the InSitu benchmark, which thoroughly documents the arguments.
There were several tests that were disabled for HIP because they either
took too long to compile or were failing. We are getting closer to
making everything work, so re-enable this part of the build.
The static datasets were wreaking havoc on the order of destructors at exit and causing Kokkos to throw an exception. This change dynamically allocates a dataset, copies the read/sourced data into it, and explicitly frees the data.
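
One way to realize the pattern described above (the names here are
hypothetical):

```cpp
#include <memory>
#include <vtkm/cont/DataSet.h>

// Hold the benchmark input in a dynamically allocated DataSet instead of a
// static object so its lifetime can be ended explicitly.
std::unique_ptr<vtkm::cont::DataSet> gInputDataSet;

void SetInputDataSet(const vtkm::cont::DataSet& data)
{
  gInputDataSet = std::make_unique<vtkm::cont::DataSet>(data); // copy in
}

void FreeInputDataSet()
{
  gInputDataSet.reset(); // release before Kokkos shuts down at exit
}
```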
The enumerations in `vtkm::cont::Field::Association` were renamed in the
previous commit. The old names still exist, but are deprecated. Change
the rest of the code to use the new names.
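
In client code the rename looks like this sketch (the old spelling in the
comment is from memory; the deprecation warnings show the exact mapping):

```cpp
#include <vtkm/cont/Field.h>

bool IsPointField(const vtkm::cont::Field& field)
{
  // was: field.GetAssociation() == vtkm::cont::Field::Association::POINTS
  return field.GetAssociation() == vtkm::cont::Field::Association::Points;
}
```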