Difficulties Tuning Particle Optimization Parameters


I am running into some problems with the particle optimization. As recommended on GitHub, I am using a small subset of geometries (N=9) to explore the optimization's performance and quickly iterate on parameter changes. However, I haven't been able to fix a problem where particles cross each other on different geometries, producing a sort of spiky response in the optimized PCA samples and the resultant shape model. I am mostly using default optimization settings for now, but I have enabled Normals and Geodesics to try to improve the correspondence. I am using 2048 particles, based on previous work published by the Lenz lab. All models were remeshed and aligned using ICP.

The image below shows (I believe) a good distribution of particles across the surface, as well as the current optimization parameters.

However, you can see the problem in the Samples panel and the resultant isosurface. One of the samples always looks “best”, and the others show varying amounts of particle crossing, per below:
(image in reply due to new user constraints)

I have both increased and decreased Relative Weighting and Regularization, but I haven’t noticed a clear improvement - in some cases the problem has actually worsened. Do you have any recommendations for parameter changes to address this? I’m sure this issue is common; I’m just not yet familiar with how the software responds. Any help would be appreciated!

Additionally, I’m running most of these optimizations from the command line and had a few questions about that:

  1. Is it safe to rename the optimized particle folder to save various “versions” of the optimized shape, switching between them by renaming one at a time back to the original name <project_name>_particles?
  2. Working from the command line makes it easy to build a wrapper, such as a DOE or a tuning sweep of the optimization parameters. Is there a recommended method you have seen used to numerically assess the performance of the particle optimization?
  3. If I run multiple optimizations simultaneously on different projects from separate Conda prompts, the processes shouldn’t interfere with each other, correct?
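For reference, the kind of DOE wrapper I have in mind just enumerates parameter combinations and runs the optimizer once per case. A minimal sketch of the enumeration step (the parameter names here are illustrative placeholders, not necessarily the exact ShapeWorks project keys):

```python
from itertools import product

# Hypothetical parameter grid for a DOE-style sweep; values are
# stand-ins for whatever ranges are worth exploring.
grid = {
    "relative_weighting": [1, 3, 10],
    "initial_relative_weighting": [0.05, 0.1],
    "optimization_iterations": [1000, 2000],
}

def doe_cases(grid):
    """Enumerate every combination of the parameter grid as a dict."""
    keys = list(grid)
    return [dict(zip(keys, values))
            for values in product(*(grid[k] for k in keys))]

cases = doe_cases(grid)
print(len(cases))  # 3 * 2 * 2 = 12 combinations
```

Each case dict would then be written into a copy of the project file before launching the optimization for that run.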


The first thing I would try is to see how a lower-particle-count model looks. If you are not getting good initializations at lower counts, then it is nearly impossible for the model to fix itself later at higher particle counts.
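Since initialization proceeds by particle splitting (roughly doubling the count each split), a 2048-particle model passes through every power of two on the way up, so those are natural counts to check. A trivial sketch of that ladder:

```python
def split_schedule(target=2048, start=1):
    """Particle counts visited when doubling from `start` up to `target`."""
    counts = [start]
    while counts[-1] < target:
        counts.append(counts[-1] * 2)
    return counts

print(split_schedule())
# [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]
```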

It can also be helpful to watch the optimization in Studio, to potentially see when things go wrong.

Yes, that’s fine - renaming the particle folder to keep versions won’t cause problems.
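The rename bookkeeping can be scripted so the active folder is always stashed before a saved version is restored. A small sketch using stand-in folder names (substitute your actual project name for `demo`):

```python
import os
import tempfile

ACTIVE = "demo_particles"  # stand-in for <project_name>_particles

def switch_version(root, stash_as, restore_from):
    """Stash the currently active particle folder under a new name,
    then rename a previously saved version back to the active name."""
    active = os.path.join(root, ACTIVE)
    if os.path.isdir(active):
        os.rename(active, os.path.join(root, stash_as))
    os.rename(os.path.join(root, restore_from), active)

# Demo in a temp directory with stand-in folders
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, ACTIVE))               # current optimization
os.mkdir(os.path.join(root, "demo_particles_v1"))  # previously saved run

switch_version(root, stash_as="demo_particles_v2",
               restore_from="demo_particles_v1")
print(sorted(os.listdir(root)))  # ['demo_particles', 'demo_particles_v2']
```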

You can have a look at the ShapeEvaluation methods that compute Compactness, Specificity, and Generalization. However, none of these metrics specifically target poor correspondence.
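As a rough illustration of what compactness measures (the cumulative fraction of shape-space variance captured by the leading PCA modes), here is a toy numpy sketch on random stand-in data; it is not the actual ShapeEvaluation implementation, which operates on the optimized particle files:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in particle data: 9 shapes, each flattened to one row.
# Real data would be 2048 particles * 3 coordinates per shape.
X = rng.normal(size=(9, 30))

def compactness(X):
    """Cumulative explained-variance ratio of the shape-space PCA modes."""
    Xc = X - X.mean(axis=0)
    # Singular values of the centered data give the PCA eigenvalues.
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return np.cumsum(var) / var.sum()

c = compactness(X)
# c is non-decreasing and reaches 1.0 at the last mode.
```

A tighter model concentrates variance in the first few modes, so comparing these curves across parameter settings is one cheap numerical signal, even though, as noted, it does not directly detect crossed particles.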

Yeah, that’s fine - separate optimizations on separate projects won’t interact.