c1c43d19d4

1. The default linker on some Linux distributions implicitly enables the `as-needed` linking flag. This means that your shared library or executable will only link to libraries from which it actually uses symbols. So if you explicitly link to pthread but don't use any of its symbols, you won't get a `DT_NEEDED` entry for pthread.

2. The NVIDIA libGL (driver version 352) uses pthread but doesn't have a `DT_NEEDED` entry for the library, so running `ldd` or `readelf` on it won't show any reference to pthread. (As an aside, this is odd, since the Mesa version does explicitly link to pthread.) But if you run the following command:

       strings /usr/lib/nvidia-352/libGL.so.1 | grep pthread | less

   you will see output like:

       pthread_create
       pthread_self
       pthread_equal
       pthread_key_crea
       ...
       libpthread.so.0
       libpthread.so
       pthread_create

   This is very strong evidence that the library uses pthread.

3. So what does this all mean? It means that systems that use the `as-needed` linking flag, use the NVIDIA driver, and don't themselves use pthread will generate binaries that crash on launch. The only ways to work around this issue are:

   A: Pass `no-as-needed` to the linker, potentially causing over-linking and slower link times.
   B: Call a function from pthread, making the linker realize that pthread is needed.

We went with method B.
# VTK-m
One of the biggest recent changes in high-performance computing is the increasing use of accelerators. Accelerators contain processing cores that independently are inferior to a core in a typical CPU, but these cores are replicated and grouped such that their aggregate execution provides a very high computation rate at a much lower power. Current and future CPU processors also require much more explicit parallelism. Each successive version of the hardware packs more cores into each processor, and technologies like hyperthreading and vector operations require even more parallel processing to leverage each core’s full potential.
VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures. VTK-m supports the fine-grained concurrency for data analysis and visualization algorithms required to drive extreme scale computing by providing abstract models for data and execution that can be applied to a variety of algorithms across many different processor architectures.
## Getting VTK-m
The VTK-m repository is located at https://gitlab.kitware.com/vtk/vtk-m
VTK-m dependencies are:
- CMake 3.0
- Boost 1.52.0 or greater
- Cuda Toolkit 6+ or Thrust 1.7+
```shell
git clone https://gitlab.kitware.com/vtk/vtk-m.git vtkm
mkdir vtkm-build
cd vtkm-build
cmake-gui ../vtkm
```
A detailed walk-through of installing and building VTK-m can be found on our Contributing page.