Failed to warp back to original mesh using mean mesh

Dear ShapeWorks Support Team,

I am currently using ShapeWorks to build a statistical shape model for a set of 146 right ventricle surface meshes. My goal is to compute a mean shape (template) and then use that mean template to establish consistent topology (correspondence) for all meshes.

The steps I followed are:

I ran the grooming step (including rigid alignment) and the correspondence optimization in ShapeWorks to compute the mean shape of the 146 meshes.
Then, I used this mean template to warp back to each of the original 146 meshes (I believe this step reconstructs each original mesh by deforming the mean using the correspondences obtained during optimization?).
However, I encountered a problem in the second step. When I warp the mean template back to reconstruct the individual meshes, I observe:

Large losses (high reconstruction errors).
The reconstructed meshes appear broken (visually corrupted).
I am unsure why this happens. The optimization seemed to run without errors, and I expected the warping to produce meshes close to the original ones with the same topology.

Could you please help me understand:

Why might the warping result in large errors and broken meshes?
Are there common pitfalls or parameters that I might have set incorrectly?
How can I debug or improve the process?
Thank you for your assistance.

import os
import shapeworks as sw
import numpy as np

# File paths
reference_mesh_path = r"F:\Project_2\3D_DATA\Roberto_SSM\Split_By_Parts\RV\Average_5000_mesh.vtk"
reference_particle_path = r"F:\Project_2\3D_DATA\Roberto_SSM\Split_By_Parts\RV\Average_5000_particles.particles"
output_dir = r"F:\Project_2\3D_DATA\Roberto_SSM\Split_By_Parts\RV\RV_WarpOut"

particles_dir = r"F:/Project_2/3D_DATA/Roberto_SSM/Split_By_Parts/RV/RV_146_particles"
mesh_dir = r"F:/Project_2/3D_DATA/Roberto_SSM/Split_By_Parts/RV/groomed"

# Find matching files
particles_files = [f for f in os.listdir(particles_dir) if f.endswith("_RV_rigid_groomed_local.particles")]
cases = []
for pf in particles_files:
    case_prefix = pf.replace("_RV_rigid_groomed_local.particles", "")
    mesh_file = f"{case_prefix}_RV_rigid_groomed.vtk"
    mesh_path = os.path.join(mesh_dir, mesh_file)
    if os.path.exists(mesh_path):
        cases.append((os.path.join(particles_dir, pf), mesh_path))

print(f"Found {len(cases)} valid particle-mesh pairs")

# Load reference data
reference_mesh = sw.Mesh(reference_mesh_path)
print(f"Loaded reference mesh: {reference_mesh_path}, Points: {reference_mesh.numPoints()}, Faces: {reference_mesh.numFaces()}")

ref_particles = sw.ParticleSystem([reference_particle_path]).ShapeAsPointSet(0)
print(f"Loaded reference particles: {reference_particle_path}, Particle count: {ref_particles.shape}")

# Create warper object
warper = sw.MeshWarper()
warper.generateWarp(reference_mesh, ref_particles)

# Test warping with reference particles
try:
    test_mesh = warper.buildMesh(ref_particles)
    if test_mesh.numPoints() == 0 or test_mesh.numFaces() == 0:
        print("Test buildMesh failed after generateWarp")
    else:
        print("Warp initialization successful")
except Exception as e:
    print(f"Exception during test buildMesh: {e}")

# Process all cases
results_info = []

for particle_file, orig_mesh_file in cases:
    target_particles = sw.ParticleSystem([particle_file]).ShapeAsPointSet(0)
    case_id = os.path.basename(particle_file).replace(".particles", "")
    print(f"\nProcessing: {case_id}")
    print(f"Target particle count: {target_particles.shape}")

    # Particle count consistency check
    if ref_particles.shape != target_particles.shape:
        print(f"Particle count mismatch! Reference: {ref_particles.shape}, Target: {target_particles.shape}")
        results_info.append((particle_file, None, None, None, None, "Particle count mismatch"))
        continue

    # Build warped mesh
    warped_mesh = warper.buildMesh(target_particles)
    print(f"Warped mesh - Points: {warped_mesh.numPoints()}, Faces: {warped_mesh.numFaces()}")

    if warped_mesh.numPoints() == 0 or warped_mesh.numFaces() == 0:
        print("Warped mesh is empty")
        results_info.append((particle_file, 0, 0, None, None, "Warp failed"))
        continue

    # Save warped mesh
    output_path = os.path.join(output_dir, f"{case_id}_warped.vtk")
    warped_mesh.write(output_path)
    print(f"Saved: {output_path}")

    # Load original mesh and calculate errors
    orig_mesh = sw.Mesh(orig_mesh_file)
    if orig_mesh.numPoints() == 0:
        print(f"Original mesh is empty: {orig_mesh_file}")
        results_info.append((particle_file, warped_mesh.numPoints(), warped_mesh.numFaces(), None, None, "Original mesh empty"))
        continue

    # Calculate point-to-cell distances
    dists_and_indexes = warped_mesh.distance(target=orig_mesh, method=sw.Mesh.DistanceMethod.PointToCell)
    distances = np.array(dists_and_indexes[0])
    avg_err = np.mean(distances)
    max_err = np.max(distances)
    print(f"Reconstruction error - Avg: {avg_err:.4f} mm, Max: {max_err:.4f} mm")

    results_info.append((particle_file, warped_mesh.numPoints(), warped_mesh.numFaces(), avg_err, max_err, "Success"))

# Topology consistency check
print("\nTopology & Error Summary")
if not results_info:
    print("No valid warped meshes generated")
else:
    base_verts, base_faces = None, None
    consistent_topology = True
    for item in results_info:
        pf, verts, faces, avg_err, max_err, status = item
        case_id = os.path.basename(pf)
        print(f"{case_id}: Points={verts}, Faces={faces}, AvgErr={avg_err:.4f}, MaxErr={max_err:.4f}, Status={status}")
        if status == "Success":
            if base_verts is None:
                base_verts, base_faces = verts, faces
            elif verts != base_verts or faces != base_faces:
                consistent_topology = False
    if consistent_topology:
        print("All successful warps have consistent topology")
    else:
        print("Inconsistent topology detected in warped meshes")

# Error statistics
failures = [item for item in results_info if item[5] != "Success"]
if failures:
    print("\nFailure summary:")
    for item in failures:
        pf, _, _, _, _, status = item
        case_id = os.path.basename(pf)
        print(f"{case_id}: {status}")
else:
    print("All cases warped successfully")


Mesh Grooming
Fill Holes: enabled
Smooth: disabled
Remesh: enabled
Percentage: enabled
Number of Vertices: 100.0%
Adaptivity: 1.00

Alignment
Reflect: disabled
Alignment: enabled
Method: Center
Reference: auto
Random Subset Size: auto

Other
Skip Grooming: disabled

Optimization Parameters
Number of Particles: 6000
Initial Relative Weighting: 0.05
Relative Weighting: 1
Starting Regularization: 1000
Ending Regularization: 10
Iterations per Split: 1000
Optimization Iterations: 5000
Geodesic Distance: disabled
Geodesic Remesh Percent: 100 (inactive)
Normals: disabled
Normals Strength: 10 (inactive)
Geodesic Distance from Landmarks: disabled
Landmark Distance Weight: 1 (inactive)
Procrustes: disabled
Procrustes Scaling: disabled
Procrustes Rotation/Translation: enabled (grayed out / fixed)
Procrustes Interval: 10 (inactive)
Multiscale Mode: disabled
Multiscale Start: 32 (inactive)
Use Initial Landmarks: disabled
Narrow Band: 4
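
In case it helps, here is roughly how I understand these optimization settings would map to a Python parameter dictionary, following the pattern in the ShapeWorks Python examples. The key names and the sw.Project / sw.Parameters / sw.Variant calls are my best guess from those examples, and the spreadsheet name is just a placeholder, so this may need adjusting for a given ShapeWorks version:

import shapeworks as sw

# Assumed parameter keys, copied from the ShapeWorks Python use-case examples;
# exact names may differ between versions.
parameter_dictionary = {
    "number_of_particles": 6000,
    "initial_relative_weighting": 0.05,
    "relative_weighting": 1,
    "starting_regularization": 1000,
    "ending_regularization": 10,
    "iterations_per_split": 1000,
    "optimization_iterations": 5000,
    "use_normals": 0,
    "normals_strength": 10,
    "procrustes_interval": 0,   # Procrustes disabled
    "procrustes_scaling": 0,
}

project = sw.Project()
project.load("RV_project.xlsx")            # placeholder project spreadsheet
params = sw.Parameters()
for key, value in parameter_dictionary.items():
    params.set(key, sw.Variant([value]))
project.set_parameters("optimize", params)
project.save("RV_project.xlsx")
# The optimization can then be re-run from Studio, or via: shapeworks optimize --name RV_project.xlsx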


Best regards,
Lucy

The first thing I would do is to make sure the shape model has good correspondence. If the correspondence is not good, the reconstructions will be poor. Open the optimized project in ShapeWorks Studio and examine the PCA modes in the Analysis tab. With good correspondence, you should see particles stay in the same relative position across the mode animations.
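If it is easier to check this outside of Studio, a rough numpy sketch for inspecting the PCA modes directly from the local particle files looks like the following. It assumes the .particles files are plain-text x y z coordinates (so np.loadtxt can read them) and uses a placeholder path:

import glob
import numpy as np

# Load all local particle files into an (n_shapes, n_particles*3) matrix.
files = sorted(glob.glob("RV_146_particles/*_local.particles"))  # placeholder path
shapes = np.stack([np.loadtxt(f).reshape(-1) for f in files])

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD; rows of Vt are the modes of variation.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
variance = S**2 / (len(files) - 1)
print("Fraction of variance in the first 5 modes:", np.round(variance[:5] / variance.sum(), 3))

# Walk mode 1 from -2 to +2 standard deviations. With good correspondence the
# particle set should deform smoothly, without particles swapping places or collapsing.
for sd in (-2, -1, 0, 1, 2):
    mode_shape = (mean_shape + sd * np.sqrt(variance[0]) * Vt[0]).reshape(-1, 3)
    print(f"mode 1 at {sd:+d} SD, first particle:", mode_shape[0])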


I have tried to reconstruct the epicardium of the left ventricle, but the surface of the mean mesh appears very stretched and not smooth. Additionally, when I adjust the standard deviation of the PCA, the particle animations change constantly and appear unstable. I am not sure if there is anything I can do to improve the reconstruction quality.

Yes, this is extremely poor correspondence. I would recommend starting at a much lower particle count, perhaps 128 particles, and making sure the correspondence is good there first. You may need more iterations per split. My guess is that the particles are not spreading well until the final optimization, and by that point the model is not able to recover.

Hi Amorris,

Thank you for your suggestions.

Just to confirm, should I first try using fewer particles (e.g., 128) with more iterations, and make sure they can spread well and establish good correspondence at the low particle count before increasing the number of particles?

Also, in this lower-resolution setup, would it be recommended to adjust the initial_relative_weighting or relative_weighting? Should I increase or decrease these weights to help improve particle spreading and shape adaptation?

Lastly, I wonder: if I use the same weights and iteration settings, but still fail to get reasonable correspondence at a lower particle count, does that strongly imply that using more particles will likely perform even worse?

Looking forward to your advice. Thanks again!

Also, I have a follow-up question regarding my specific goal.

In my case, I’m trying to find a good average shape (mean shape) that can then be used to reconstruct all of my 146 input shapes, with minimal reconstruction error. To improve this, would it be reasonable to manually exclude some outlier shapes (e.g., remove 10 samples with very large deviation), or to split the dataset and try optimizing on smaller batches first (e.g., 50 shapes at a time)?

Would that be an acceptable approach, or could it negatively impact the final model?


One more question: if, during visualization in the GUI, I notice that the particles (e.g., 6000 in total) are heavily clustered in only a small region of the mesh and do not spread to cover the entire surface, does that mean the final optimization is very likely to fail or produce poor correspondence? Is there any way to improve this by changing the parameters?

You can set the initial relative weighting to a smaller value, e.g., 0.01 instead of 0.05. This will allow the particles to spread further. Besides that, you can run many more iterations per split, say 5,000 or more.
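
If you want to quantify the clustering rather than judge it visually, a rough check is to measure how far each groomed-mesh vertex is from its nearest particle; well-spread particles give small, uniform distances, while clustering leaves large gaps. A minimal sketch, assuming the .particles file is plain-text x y z, that mesh.points() returns the vertex coordinates (worth double-checking in your version), and with placeholder paths:

import numpy as np
import shapeworks as sw

# Placeholder paths; substitute one of your groomed meshes and its local particles.
mesh = sw.Mesh("groomed/case01_RV_rigid_groomed.vtk")
particles = np.loadtxt("RV_146_particles/case01_RV_rigid_groomed_local.particles")
verts = np.asarray(mesh.points())

# Distance from every mesh vertex to its nearest particle, computed in chunks
# to keep memory modest with thousands of vertices and particles.
nearest = np.empty(len(verts))
for i in range(0, len(verts), 500):
    d = np.linalg.norm(verts[i:i + 500, None, :] - particles[None, :, :], axis=2)
    nearest[i:i + 500] = d.min(axis=1)

print(f"mean vertex-to-nearest-particle distance: {nearest.mean():.3f}")
print(f"max  vertex-to-nearest-particle distance: {nearest.max():.3f}")
# A max value far larger than the mean suggests whole regions of the surface
# have no nearby particles (i.e., the clustering you are seeing in the GUI).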


Maybe a higher particle count leads to a higher chance of poor correspondence. But doesn't having more particles also mean a larger area of the target mesh gets covered?
I wonder: if I use the same weights and iteration settings but still fail to get reasonable correspondence at a lower particle count, does that strongly imply that using more particles will likely perform even worse?

It’s a multiscale process. If lower scales are in poor correspondence, there is little chance that higher scales will be in correspondence. I am recommending that you confirm that lower numbers of particles are working well first before moving on to large numbers of particles.
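
To make that comparison concrete, you could sweep the particle count and measure the warp-back error at each level with the same MeshWarper calls you already use. A rough sketch, assuming you have already run an optimization for each count and saved the outputs under a hypothetical folder layout (one folder per count with that run's mean mesh, mean particles, and per-case local particles):

import os
import glob
import numpy as np
import shapeworks as sw

groomed_dir = "groomed"   # placeholder; the same groomed meshes are used for every run

for n in [128, 256, 512, 1024]:
    # Hypothetical file names for each run's mean shape.
    mean_mesh = sw.Mesh(f"runs/{n}/mean_mesh.vtk")
    mean_pts = sw.ParticleSystem([f"runs/{n}/mean.particles"]).ShapeAsPointSet(0)

    warper = sw.MeshWarper()
    warper.generateWarp(mean_mesh, mean_pts)

    errs = []
    for pf in sorted(glob.glob(f"runs/{n}/*_local.particles")):
        case = os.path.basename(pf).replace("_local.particles", "")
        orig = sw.Mesh(os.path.join(groomed_dir, case + ".vtk"))
        warped = warper.buildMesh(sw.ParticleSystem([pf]).ShapeAsPointSet(0))
        d = warped.distance(target=orig, method=sw.Mesh.DistanceMethod.PointToCell)[0]
        errs.append(np.mean(np.array(d)))

    print(f"{n:5d} particles: mean warp-back error {np.mean(errs):.4f}, worst case {np.max(errs):.4f}")

Only move to a higher count once the errors (and, more importantly, the PCA mode animations) look reasonable at the lower one.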

I tried 128 particles and it does not show the spikes that occur at higher particle counts, but when I warp back to the original mesh, the reconstruction error is still not small because much of the surface detail is lost. After that, I tried 256/512/1024/2048 particles, and I think 1024 gives the smallest reconstruction error, but some outlier cases still show many spikes. I am not sure which parameter I should fine-tune to help the 1024-particle model reduce the spikes and the small areas where the particle distribution is unreasonable.

The first photo is the 128-particle result; I think none of the cases have spikes.


The second is the 1024-particle result; I found some spikes on an outlier case.

I'd be concerned that even the 128-particle model is not in good correspondence. When you animate the PCA modes, do the particles stay in correspondence throughout the mode, or are they crossing positions?

Is there a way you could share this data with me so that I can experiment as well? amorris@sci.utah.edu

Thank you for your previous response. I believe I have achieved a fair to good result for the right ventricle across the 146 samples, although three of them still exhibit surface spikes. I experimented with using fewer particles, which helped reduce the severity of these spikes compared to previous attempts.

However, I am now facing difficulties with the left ventricular myocardium. I used 2048 particles, but I noticed that it is quite challenging to allocate particles inside the cavity. To mitigate surface spikes on the warped-back meshes, I also scaled up the mesh size by a factor of 8.

May I ask: Is increasing the number of particles the only way to ensure particle allocation inside the myocardial cavity, or are there alternative strategies that could help improve internal particle distribution?