Low CPU Usage After Loading Large Numbers of Meshes


I followed the tutorial in ShapeWorks\Examples\Python\hip_multiple_domain.py and loaded a large, customized mesh dataset. Everything was fine while grooming and loading those meshes, but CPU usage gradually decreased before reaching the optimization step, and by the time optimization started, only one core was active. I then tried running `shapeworks optimize` directly from the command line, but still only one core is used.

I’m wondering if anyone has encountered this kind of problem before. I would appreciate any suggestions for this scenario…

ShapeWorks uses TBB (Threading Building Blocks) to parallelize many steps of grooming and optimization. However, not all steps are fully parallelized. Is the optimization progressing, or is it stuck?

Also, I am confused by your process display: ShapeWorks is using 180 GB of RAM?

How many particles are you using?

Thanks for the reply! The optimization process is "stuck": most of the time only one core (or a few cores) is running. Although the estimated time remaining shown in the screenshot is around 425 (minutes? hours?), it barely changes even after running for a whole day.

Regarding memory, I am using 1024 particles each for the hemipelvis and the whole femur. I think the 180 GB of RAM is due to the large meshes loaded into memory: approximately 3500 meshes (hemipelvis and femur), each represented by around 12000 points.
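For scale, here is a quick back-of-envelope sketch of what the raw vertex coordinates alone would occupy, using the counts from the post (the per-point byte size is an assumption; ShapeWorks also stores normals, connectivity, and other per-mesh structures on top of this):

```python
# Rough estimate of raw vertex data only (assumed sizes, not actual
# ShapeWorks internals).
num_meshes = 3500          # hemipelvis + femur meshes, from the post
points_per_mesh = 12_000   # approximate vertex count per mesh
bytes_per_point = 3 * 8    # x, y, z stored as 64-bit doubles (assumption)

raw_bytes = num_meshes * points_per_mesh * bytes_per_point
print(f"raw vertex data: {raw_bytes / 1e9:.2f} GB")  # ~1 GB
```

The raw coordinates account for only about 1 GB, so the bulk of the 180 GB presumably comes from the other per-mesh data structures rather than the points themselves.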

Unless you have more than 180 GB of RAM, it is certainly memory thrashing, in which case the CPU is barely being used at all. How much RAM does the system have?

You could try remeshing the meshes to be smaller. The meshes themselves typically don't take up much space, but there are other data structures allocated for each one.

The RAM is 1.5 TB (indeed).

But thanks for the suggestion. I will remesh the meshes again to make them smaller.

Ok, then it’s not swapping memory to disk.

The correspondence phase runs in parallel, but the initialization is usually not too time consuming. Still, it has to compute the update vectors for every particle of every shape, and this happens serially. I have to say I'm still a bit perplexed.

I would perhaps try 1/4 of the dataset, then 1/2, and see how it runs.
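One simple way to build those subsets while still spanning the whole cohort is to take every k-th subject (the file names below are hypothetical placeholders; with multiple domains, subset by subject so each kept subject contributes both its hemipelvis and femur):

```python
# Keep every k-th subject so a subset still covers the full cohort.
def subset(shape_files, fraction):
    step = round(1 / fraction)
    return shape_files[::step]

# Hypothetical per-subject file list (~1750 subjects, 2 domains each).
subjects = [f"subject_{i:04d}" for i in range(1750)]
quarter = subset(subjects, 0.25)
half = subset(subjects, 0.5)
print(len(quarter), len(half))  # 438 875
```

If the quarter-size run keeps all cores busy but the full run does not, that points at a memory or data-size issue rather than the optimizer itself.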
