Initialization vs. optimization steps

I’ve been using ShapeWorks quite a bit over the last couple of weeks. One thing I’ve noticed in my data, and it seems in other posts here as well, is the difficulty of setting up the optimization so that it produces a consistent and smooth mesh after completion.

I use multiscale (=8 particles), normals (=4.0 strength), and procrustes, which obviously helps correspondence throughout the optimization. However, I’ve also found that too large an iterations_per_split value lets the particles drift further out of correspondence at each split, and that drift doesn’t seem to be correctable by the optimization iterations afterwards. The compromise I’ve settled on is a quite small iterations_per_split (<=20, sometimes even =1). The particles don’t capture the full object in the early stages, but with a large number of particles they eventually cover enough of it.
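To make the trade-off concrete, here’s a rough back-of-the-envelope sketch (plain Python, not ShapeWorks code; the target particle count is hypothetical) of how iterations_per_split scales the total initialization budget, since particle counts double at each split:

```python
import math

# Hypothetical numbers, just to illustrate the scaling.
target_particles = 256        # final particle count (hypothetical)
iterations_per_split = 20     # the small value I settled on

# Reaching N particles from 1 by doubling takes log2(N) splits.
splits = int(math.log2(target_particles))
init_budget = splits * iterations_per_split  # total initialization iterations

print(splits, init_budget)  # 8 160
```

So even a small iterations_per_split accumulates a meaningful initialization budget across all the splits, which is why the particles still end up covering the object.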

As an observation, allowing particles to move a lot during the initial stages is good for capturing the full object, but allowing too much drift later on is detrimental and leads to poor correspondence (topology) in the final shape.

I’m curious whether, when using multiscale, it would make sense to limit iterations_per_split (to =1) after reaching multiscale_particles. At least intuitively, this seems like it could balance initial object coverage with maintaining good particle correspondence.
I looked at the code, but I don’t have enough C++ depth to make this a quick test on my end. Happy to share data if it’s helpful.
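In case it helps clarify the proposal, here’s the schedule I have in mind, sketched in Python (the function name and the example numbers are mine, not ShapeWorks’):

```python
# Hypothetical sketch of the proposed schedule (not ShapeWorks code):
# use the full iterations_per_split while the particle count is below
# multiscale_particles, then clamp it to 1 for all later splits.
def iterations_for_split(num_particles, multiscale_particles,
                         iterations_per_split):
    if num_particles < multiscale_particles:
        return iterations_per_split  # coarse stages: let particles move freely
    return 1  # fine stages: minimize drift once correspondence matters

# Example with multiscale_particles=8 and iterations_per_split=20.
schedule = [(n, iterations_for_split(n, 8, 20))
            for n in (1, 2, 4, 8, 16, 32)]
print(schedule)  # [(1, 20), (2, 20), (4, 20), (8, 1), (16, 1), (32, 1)]
```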