Headless on HPC cluster

Hello,

Great program; I'm just recently starting out with it.

Has anyone successfully installed/used/built ShapeWorks on an HPC cluster?
I've used it on my Mac and set up parallel grooming/optimization Python scripts for a multi-label spine segmentation. It works great, just slowly, since I'm processing each label separately due to spine curvature position issues. Now I want to run these on an HPC cluster to speed up processing in parallel.
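For context, my per-label parallelism looks roughly like the sketch below. `process_label()` is a hypothetical stand-in for my actual per-label script; only `groom.run()` and `optimize.run()` are real ShapeWorks Python API calls, and everything else here is placeholder structure.

```python
# Sketch of parallel per-label processing. process_label() is a
# hypothetical wrapper; in my real scripts it grooms and optimizes
# one vertebra's segmentation via the ShapeWorks Python API, e.g.:
#   groom.run()
#   optimize.run()
from concurrent.futures import ProcessPoolExecutor

def process_label(label: int) -> str:
    # Placeholder body: the real version builds per-label Groom and
    # Optimize objects for this label and runs them.
    return f"label {label} done"

if __name__ == "__main__":
    labels = [1, 2, 3, 4, 5]  # e.g. one entry per vertebra label
    # Each label is independent, so they can run in separate processes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_label, labels))
    print(results)
```

On a laptop the `max_workers` cap matters; on a cluster node you can raise it to match the allocated cores.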

However, I'm struggling to get it built and running.
The binary Linux install fails with a GLIBC error (below).
Building from source has so far hit version/config issues in various packages that I'm working to resolve; in particular, since it's a non-GUI system, the VTK/Qt packages need to be compiled headless.
Is there a Docker image/container out there that would work? I couldn't seem to locate one.
Any help would be appreciated.
thanks!

basic system info:
Linux ln02 4.18.0-425.19.2.el8_7.x86_64
NAME="Rocky Linux"
VERSION="8.7 (Green Obsidian)"
ldd (GNU libc) 2.28
gcc (GCC) 9.3.0

For the binary Linux install:
Error message: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by […..]/ShapeWorks-v6.6.1-linux/bin/shapeworks_py.cpython-39-x86_64-linux-gnu.so)

fwiw, for anyone aiming to do the same, I was able to successfully:

  • create a docker container (ubuntu:22.04)
  • install shapeworks w/ the conda env
  • save the docker image (.tar) and convert to an apptainer (.sif) file
  • run this headless on the HPC cluster, using the Python API to call groom.run() & optimize.run()
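The container steps above can be sketched roughly as follows. This is just an outline under my setup's assumptions: the image name, tag, and script name are placeholders, and the Dockerfile itself installs ShapeWorks into a conda env per the standard install instructions.

```shell
# Build an ubuntu:22.04-based image with ShapeWorks in a conda env
# (Dockerfile contents follow the standard ShapeWorks conda install steps).
docker build -t shapeworks-hpc .

# Export the image to a tarball for transfer to the cluster.
docker save -o shapeworks-hpc.tar shapeworks-hpc

# On the cluster, convert the tarball to an Apptainer image.
apptainer build shapeworks.sif docker-archive://shapeworks-hpc.tar

# Run a grooming/optimization script headless inside the container
# (run_pipeline.py is a placeholder for your own script).
apptainer exec shapeworks.sif python run_pipeline.py
```

Each `apptainer exec` can then be launched as a separate job/task, which is where the parallel speedup comes from.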

I get about a 10-fold overall speed improvement, depending on the number of parallel processes, so it was well worth the up-front effort for my situation. Let me know if details are needed.