Previously, Clip was updated to remove unused points from the input.
This requires re-mapping point ids after compaction. That re-mapping was
missed for the points used to interpolate centroid points.
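As a rough sketch of the kind of re-mapping involved (the function and variable names here are illustrative, not the actual Clip implementation):

```cpp
#include <cstddef>
#include <vector>

#include <vtkm/Types.h>

// Illustrative only: after unused points are compacted away, every stored
// point id must be translated through the old-to-new index map. The ids used
// to interpolate centroid points were previously skipped in this step.
void RemapCentroidPointIds(const std::vector<vtkm::Id>& oldToNew,
                           std::vector<vtkm::Id>& centroidPointIds)
{
  for (vtkm::Id& pointId : centroidPointIds)
  {
    pointId = oldToNew[static_cast<std::size_t>(pointId)];
  }
}
```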
The flying edges algorithm (used when contouring uniform structured cell
sets) was not interpolating cell fields correctly. There was an indexing
issue where a shortcut in the stepping was not incrementing the cell index.
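A generic sketch of the failure pattern (not the actual flying edges code; names are hypothetical):

```cpp
#include <vector>

#include <vtkm/Types.h>

// Illustrative only: when stepping row by row through a structured grid, the
// "nothing to do in this row" shortcut must still advance the running cell
// index by the width of the row, or the cell fields for every later row are
// read from the wrong cells.
void StepRows(const std::vector<bool>& rowHasCrossings, vtkm::Id cellsPerRow)
{
  vtkm::Id cellIndex = 0;
  for (bool hasCrossings : rowHasCrossings)
  {
    if (!hasCrossings)
    {
      cellIndex += cellsPerRow; // the increment the shortcut was missing
      continue;
    }
    for (vtkm::Id i = 0; i < cellsPerRow; ++i)
    {
      // ... generate output and interpolate cell fields using cellIndex ...
      ++cellIndex;
    }
  }
}
```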
There was a bug in `CleanGrid` when removing degenerate polygons: it did not
detect the case where a polygon's first and last points are the same. This
has been fixed.
There was also an error with function overloading that was causing 0D and
3D cells to enter the wrong computation for degenerate cells. This has also
been fixed.
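As a sketch of the corrected check (illustrative names, not the actual `CleanGrid` worklet):

```cpp
#include <cstddef>
#include <vector>

#include <vtkm/Types.h>

// Illustrative only: a polygon is degenerate if fewer than 3 distinct points
// remain after collapsing repeated neighbors. The repeated-point test must
// wrap around so that a polygon whose first and last points are the same is
// also detected.
vtkm::IdComponent CountDistinctPolygonPoints(const std::vector<vtkm::Id>& pointIds)
{
  vtkm::IdComponent numDistinct = 0;
  const std::size_t numPoints = pointIds.size();
  for (std::size_t i = 0; i < numPoints; ++i)
  {
    // Compare against the previous point, wrapping so that index 0 is
    // compared against the last point (the case that was previously missed).
    const std::size_t prev = (i == 0) ? (numPoints - 1) : (i - 1);
    if (pointIds[i] != pointIds[prev])
    {
      ++numDistinct;
    }
  }
  return numDistinct; // degenerate if this is less than 3
}
```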
Fixes #796
The flow filters were restructured around a new flow `Analysis` class, along
with accompanying documentation, linking fixes, and test dependency updates
(merge request !3087).
Previously, the MIR filter checked the dimensionality of the cells in its
input data set to make sure they conformed to the algorithm. The only real
reason this was necessary is that the `MeshQuality` filter can compute the
size of cells only as either an area or a volume, and it has to know which
one to compute. However, the `CellMeasures` filter can compute the sizes of all
types of cells simultaneously (as well as more cell types). By using this
filter, the MIR filter can skip the cell type checks and support more mesh
types.
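A hedged usage sketch of the idea follows; the namespace, header, and setter shown are assumptions and may differ between VTK-m versions.

```cpp
// Assumed header and namespace; check the VTK-m version you are using.
#include <vtkm/cont/DataSet.h>
#include <vtkm/filter/mesh_info/CellMeasures.h>

vtkm::cont::DataSet ComputeCellSizes(const vtkm::cont::DataSet& input)
{
  // CellMeasures reports arc length, area, or volume per cell based on each
  // cell's own shape, so no up-front dimensionality check of the whole cell
  // set is needed.
  vtkm::filter::mesh_info::CellMeasures cellSizes;
  cellSizes.SetMeasureToAll(); // assumed convenience setter for all measures
  return cellSizes.Execute(input);
}
```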
Added an XGC field (`XGCField`) along with updates to generalize it. Also
added `WarpXStreamlines` and `Streamsurface`, as well as changes to support
XGC Poincare and finalize the XGC analysis.
The `GhostCellRemove` filter had some methods inconsistent with the naming
convention elsewhere in VTK-m. The class itself was also in need of some
updated documentation. Both of these issues have been fixed.
Additionally, there were some conditions that could lead to unexpected
behavior. For example, if the filter was asked to remove only ghost cells
and a cell was both a ghost cell and blank, it would not be removed. This
has been updated to be more consistent with expectations.
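A hedged sketch of the corrected predicate (the flag constants are illustrative, not the exact VTK-m cell classification enumerators):

```cpp
#include <vtkm/Types.h>

// Illustrative flag values only; VTK-m stores per-cell classification as a
// bit field, but the exact enumerator names are not reproduced here.
constexpr vtkm::UInt8 CELL_GHOST = 1 << 0;
constexpr vtkm::UInt8 CELL_BLANKED = 1 << 1;

// When asked to remove ghost cells, a cell that is both a ghost and blanked
// should still be removed, so test the ghost bit directly rather than
// requiring the whole flag byte to equal the ghost value.
inline bool ShouldRemoveAsGhost(vtkm::UInt8 cellFlags)
{
  return (cellFlags & CELL_GHOST) != 0;
}
```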
The original intent of calling the `Statistics` filter on a
`PartitionedDataSet` was that the resulting `PartitionedDataSet` would
have partitions matching the input, giving the statistics of each partition
(as well as the overall statistics in global fields). There is a comment
in the code that says as much.
However, the partitioned version of `Execute` discarded these statistics.
This has been corrected so that the partitions' statistics are retained, and
the behavior is now covered by a test.
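A hedged sketch of the expected behavior; the header, namespace, and field name used here are assumptions.

```cpp
#include <cassert>

#include <vtkm/cont/PartitionedDataSet.h>
#include <vtkm/filter/density_estimate/Statistics.h> // assumed header location

void SummarizePartitions(const vtkm::cont::PartitionedDataSet& input)
{
  vtkm::filter::density_estimate::Statistics statistics;
  statistics.SetActiveField("pressure"); // hypothetical field name

  vtkm::cont::PartitionedDataSet result = statistics.Execute(input);

  // With the fix, the output keeps one partition per input partition (each
  // carrying that partition's statistics) in addition to the overall
  // statistics stored as global fields.
  assert(result.GetNumberOfPartitions() == input.GetNumberOfPartitions());
}
```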
The CompositeVectors filter does not run any worklet of its
own. It uses precompiled array manipulation and copies for
its implementation.
It should not matter whether a device compiler is used (the file is quick to
compile), but for some reason the nvcc compiler was choking on an `auto`
declaration. Rather than track down why nvcc was failing, the device compiler
is simply no longer used for this file.
The `ExternalFaces` filter uses hash codes to find duplicate (i.e.
internal) faces. The issue with hash codes is that you have to deal with
unique entries that have identical hashes. The worklet to count how many
unique, unmatched faces were associated with each hash code was correct.
However, the code to then grab the ith unique face in a hash was wrong.
This has been fixed.
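A generic sketch of the selection that was wrong (the general technique, not the actual worklet code):

```cpp
#include <cstddef>
#include <vector>

#include <vtkm/Types.h>

// Illustrative only: given the faces that share a hash code, return the i-th
// face that has no matching duplicate. An internal face appears twice in the
// bucket, an external face only once; counting the unmatched faces was
// correct, but picking out the i-th one was not.
vtkm::Id FindIthUnmatchedFace(const std::vector<vtkm::Id>& facesWithHash,
                              const std::vector<bool>& faceHasDuplicate,
                              vtkm::Id i)
{
  vtkm::Id unmatchedSeen = 0;
  for (vtkm::Id face : facesWithHash)
  {
    if (!faceHasDuplicate[static_cast<std::size_t>(face)])
    {
      if (unmatchedSeen == i)
      {
        return face;
      }
      ++unmatchedSeen;
    }
  }
  return -1; // fewer than i + 1 unmatched faces in this hash bucket
}
```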
Fixes #789
This filter takes a `DataSet` and returns a point cloud representation that
has a vertex cell associated with each point in it. This is useful for
filling in a `CellSet` for data that has points but no cells. It is also
useful for operations in which you want to throw away the cell geometry and
operate on the data as a collection of disparate points.
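A sketch of the cell set such a filter produces, one vertex cell per point (the construction below is illustrative; the filter's own implementation may differ):

```cpp
#include <vtkm/CellShape.h>
#include <vtkm/cont/ArrayCopy.h>
#include <vtkm/cont/ArrayHandle.h>
#include <vtkm/cont/ArrayHandleIndex.h>
#include <vtkm/cont/CellSetSingleType.h>

// Build a cell set containing one CELL_SHAPE_VERTEX cell per point. This is
// the kind of cell set that can stand in for data that has points but no
// usable cells.
vtkm::cont::CellSetSingleType<> MakeVertexCells(vtkm::Id numPoints)
{
  // The connectivity is simply 0, 1, 2, ..., numPoints - 1: each vertex cell
  // references exactly one point.
  vtkm::cont::ArrayHandle<vtkm::Id> connectivity;
  vtkm::cont::ArrayCopy(vtkm::cont::ArrayHandleIndex(numPoints), connectivity);

  vtkm::cont::CellSetSingleType<> cellSet;
  cellSet.Fill(numPoints, vtkm::CELL_SHAPE_VERTEX, 1, connectivity);
  return cellSet;
}
```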
Although the contour filter was recently divided into 2 filters, flying
edges and marching cubes, the marching cubes version still had many
conditions and was the file that took the longest to compile on Frontier.
To help speed up parallel compiles and prevent a single run of a
compiler from being overwhelmed, the compilation of all the marching
cubes conditions has been split up using instantiations.
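The underlying C++ technique is explicit template instantiation: the header declares the instantiations `extern` so that including translation units do not compile them, and each instantiation is defined in its own source file so that every compiler invocation only handles a small slice of the conditions. A hedged sketch with hypothetical file and function names:

```cpp
// RunMarchingCells.h (hypothetical) -- declare the template and tell every
// including translation unit that the instantiations are provided elsewhere.
#include <vtkm/cont/CellSetExplicit.h>
#include <vtkm/cont/CellSetStructured.h>

template <typename CellSetType>
void RunMarchingCells(const CellSetType& cells, const float* isoValues, int numIsoValues);

extern template void RunMarchingCells(
  const vtkm::cont::CellSetStructured<3>&, const float*, int);
extern template void RunMarchingCells(
  const vtkm::cont::CellSetExplicit<>&, const float*, int);

// RunMarchingCellsStructured3D.cxx (hypothetical) -- include the template
// *definition* and provide exactly one explicit instantiation, so this
// translation unit compiles only its slice of the marching cubes conditions.
#include "RunMarchingCellsImpl.h"
template void RunMarchingCells(
  const vtkm::cont::CellSetStructured<3>&, const float*, int);

// RunMarchingCellsExplicit.cxx (hypothetical)
#include "RunMarchingCellsImpl.h"
template void RunMarchingCells(
  const vtkm::cont::CellSetExplicit<>&, const float*, int);
```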