I am running into some problems with the particle optimization. As recommended on GitHub, I am using a small subset of geometries (N=9) to explore the performance of the optimization and quickly iterate on parameter changes. However, I haven't been able to fix a problem where particles cross each other on different geometries, producing a sort of spiky response in the optimized PCA samples and the resulting shape model. I am currently using mostly default optimization settings, but I have enabled Normals and Geodesics to try to improve the correspondence. I am using 2048 particles, based on previous work published by the Lenz lab. All models were remeshed and aligned using ICP.
The image below shows what I believe is a good distribution of particles across the surface, as well as the current optimization parameters.
However, you can see the problem in the Samples panel and the resulting isosurface. One of the samples always looks “best”, and the others show varying amounts of particle crossing, per below:
(image in reply due to new user constraints)
I have increased and decreased both Relative Weighting and Regularization, but I haven’t noticed a clear improvement; in some cases the problem has actually worsened. Do you have any recommendations for parameter changes to address this? I’m sure this issue is common and I’m just not yet familiar with how the software responds. Any help would be appreciated!
Additionally, I’m running most of these optimizations from the command line and had a few questions about that:
Is it safe to rename the optimized particles folder to save various “versions” of the optimized shape model, switching between them by renaming one at a time back to the original name <project_name>_particles?
Working from the command line makes it easy to build a wrapper around the optimization, such as a DOE or a parameter-tuning sweep (rough sketch at the end of this post). Is there a recommended method to numerically assess the performance of the particle optimization that you have seen used previously?
Running multiple optimizations simultaneously on different projects from separate Conda prompts shouldn’t cause any issues with the software, correct?
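For context, the kind of wrapper I have in mind is roughly this (a simplified sketch; the project and folder names are placeholders, and I’m assuming the `shapeworks optimize --name <project>` invocation from the docs is the intended usage):

```python
import shutil
import subprocess
from pathlib import Path

PROJECT = "my_project.xlsx"                  # hypothetical project spreadsheet
PARTICLE_DIR = Path("my_project_particles")  # default <project_name>_particles output folder

def run_and_archive(tag: str) -> Path:
    """Run the optimizer from the command line, then copy the output
    particles folder to a versioned name so the next run can't overwrite it."""
    subprocess.run(["shapeworks", "optimize", "--name", PROJECT], check=True)
    archive = Path(f"my_project_particles_{tag}")
    shutil.copytree(PARTICLE_DIR, archive)
    return archive

# One call per design point; I would edit the optimization parameters in the
# project spreadsheet (or keep one spreadsheet per design point) between calls.
run_and_archive("rw10_irw0p01")
```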
The first thing I would try is to see how a lower particle count model looks. If you are not getting good initializations/models at lower counts, then it is nearly impossible for the model to fix itself later at higher particle counts.
It can also be helpful to watch the optimization in Studio, to potentially see when things go wrong.
Yes, that’s fine.
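If you want to script the swapping, something like this (an untested sketch with hypothetical folder names) would restore a saved version back to the name the project expects:

```python
import shutil
from pathlib import Path

def activate_version(project_name: str, version_tag: str) -> None:
    """Copy a saved particles folder back to the <project_name>_particles
    name that the project file expects (hypothetical naming scheme)."""
    active = Path(f"{project_name}_particles")
    saved = Path(f"{project_name}_particles_{version_tag}")

    if active.exists():
        # Park the currently active folder out of the way first
        # (this sketch assumes the backup name isn't already taken).
        active.rename(f"{project_name}_particles_backup")

    shutil.copytree(saved, active)

# e.g. make the high-relative-weighting run the active one:
activate_version("my_project", "rw10")
```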
You can have a look at the ShapeEvaluation methods that compute Compactness, Specificity and Generalization. However, none of these metrics are specifically looking at poor correspondence.
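If it helps, these are exposed in the Python bindings. A rough sketch, assuming the ParticleSystem/ShapeEvaluation calls as used in the documented use cases (please check the signatures against your installed version, and the particle file path is a placeholder):

```python
import glob
import shapeworks as sw

# World-space particle files from an optimization run (hypothetical location).
particle_files = sorted(glob.glob("my_project_particles/*_world.particles"))
particle_system = sw.ParticleSystem(particle_files)

n_modes = 8  # number of PCA modes to evaluate

compactness    = sw.ShapeEvaluation.ComputeCompactness(particleSystem=particle_system, nModes=n_modes)
generalization = sw.ShapeEvaluation.ComputeGeneralization(particleSystem=particle_system, nModes=n_modes)
specificity    = sw.ShapeEvaluation.ComputeSpecificity(particleSystem=particle_system, nModes=n_modes)

print(f"Compactness:    {compactness:.4f}")
print(f"Generalization: {generalization:.4f}")
print(f"Specificity:    {specificity:.4f}")
```

They won’t flag crossing particles directly, but comparing them across otherwise similar runs can still be a useful sanity check.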
Did you have any luck resolving your correspondence issues? I am having a very similar issue in multi-domain mode, investigating the subtalar joint. I have activated geodesic distance and surface normals as part of the optimization and have kept the number of particles at 256 per domain.
Hi,
thanks for the response. Yes, I achieve good correspondence at a lower particle count of 128. When increased to 256 particles per domain, however, there is significant particle crossing/spikes in the PCA modes. So far the only thing that has helped is increasing the relative weighting to 10.
Ok, most likely one or more of the particle splits are by chance getting pushed in opposite directions after splitting and then getting stuck in that configuration. We are actively working on some solutions for this issue. One thing that often helps in cases like this is to scale your data up in size a bit (as much as 2-3x), as this can give particles more “room” to move around each other. I realize this may not be trivial to do.
For the parameters, you can try increasing the initial relative weighting as this is the value used during the initialization/splits.
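If your inputs are meshes, the scaling workaround can be scripted with the Python bindings. A minimal sketch, assuming `Mesh.scale` takes a per-axis factor and the input folder/extension are placeholders:

```python
import glob
import shapeworks as sw

SCALE = 2.5  # somewhere in the suggested 2-3x range

for mesh_file in glob.glob("groomed/*.vtk"):             # hypothetical groomed-mesh folder
    mesh = sw.Mesh(mesh_file)
    mesh.scale([SCALE, SCALE, SCALE])                     # uniform scale-up before optimization
    mesh.write(mesh_file.replace(".vtk", "_scaled.vtk"))
```

Any dimensional results from the downstream analysis would then need to be scaled back accordingly (e.g., lengths divided by the same factor).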
Okay, thanks for your response :). The bone shapes are from a paediatric cohort, so perhaps this is the issue. I could rescale as you say and then scale back down for any post-SSM analysis that considers dimensions. However, is there much value in scaling up to support a higher particle count versus using a lower particle count at the original, smaller size?
For increasing the initial relative weighting, I assume this would still be on the order of 0.01 to ensure a uniform distribution?
The particle count needed for your task depends on the anatomy, the fidelity of the original data (e.g., you don’t want to model noise), the level of detail you are interested in, etc.
If you are only looking for rough shape variations, then a low particle count is fine. However, if there are perhaps very small bumps that you need to model, then you will have to make sure the number of particles is fine enough to model those bumps.
The scaling approach I’ve mentioned is a workaround that we’ve had some success with until we have a new strategy to deal with overall scale issues in the optimizer. Whether it’s needed depends on the task.
For initial relative weighting, you can experiment with different values, as low as 0.01 or even 0.005. It will just depend.