Ipek::Particle::Correspondence


Running Correspondence without attributes

Pre-Processing

  • Command syntax

ParticleCorrespondencePreprocessing --parameterFileName pre.params --voxelSize 1 --smoothing 0.3 --outputBinaryBool (Ipek's command is preferable; ShapeWorksGroom smooths the distance transforms too much)

  • Inputs: vtk files

Format of the input file, pre.params

NUMBER_OF_SHAPES=4
NUMBER_OF_ATTRIBUTES=0
TEMPLATE_INDEX=1
WEIGHTS= 
shape01.vtk 
shape02.vtk 
shape03.vtk 
shape04.vtk 
  • Outputs: distance maps (advice: also output binary volumes with --outputBinaryBool, to detect possible problems in the scan conversion)

How to visualize?
  • Distance maps: ImageViewer ./subject_000_DistanceMap.mha
  • Binaries: Slicer, or itksnap ./subject_000_Binary.mha

Running correspondence

  • Command syntax

Correspondence correspondence.params

  • Format of the input file, correspondence.params
(inputs        
"subject_000_DistanceMap.mha"
"subject_001_DistanceMap.mha"
"subject_002_DistanceMap.mha"
"subject_003_DistanceMap.mha"
)
// OPTIONALLY we could specify a set of point files to initialize
// the optimization like this:
//(point_files "BussOriginalScaled.lpts"  "BussRegMirrorScaled.lpts" )
//(point_files "001V2_Mandible.lpts"  "001V6_Mandible.lpts" "001V8_Mandible.lpts" )
(number_of_particles 256) // If point files are not specified, then
                          // the application will initialize particles
                          // by splitting until each shape has this 
                          // total number of particles.
(iterations_per_split 200) // Iterations between splitting during
                          // initialization phase. 
(starting_regularization 10.0) // Starting regularization for the
                                // entropy-based correspondence
                                // optimization.
(ending_regularization 0.1) // Final regularization for the entropy-
                            // based correspondence.
(optimization_iterations 200) // Number of iterations for the entropy-
                              // based correspondence.
(checkpointing_interval 20) // Number of iterations between checkpoints
                            // (iterations at which results are saved)
(output_points_prefix "surg") // Prefix for the output files.


  1. Not using init points: add a point, then split all (watch the glyph size)
  2. Using init points, and how to get them (see below)
  • Parameters
    • Correspondence: minimum entropy
    • Surface sampling: adaptive (the adaptivity strength increases the sampling in high-curvature areas)
    • Relative grad scaling: an error factor; the larger the error accepted in the correspondence optimization, the worse the final results. You may want to tune it so that the two terms of the equation reach roughly the same value.
    • Initial min variance: step once, check the 4th mode of variation, and use that order of magnitude as the initial min variance. If you do not hold it, the min variance will decrease while iterating; you may want to try holding it to see how it behaves.

Running the optimization: press RUN and see how it goes, then split all and keep going.

Advice: SAVE INTERMEDIATE STEPS (save lpts). In case the optimization crashes, it is good to have checkpoints.
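As a minimal sketch of that advice, the saved .lpts files can be copied into a timestamped backup directory between runs (the directory name and layout here are assumptions, not part of the tools):

```shell
# Hypothetical backup step: copy any saved .lpts checkpoints into a
# timestamped directory so a crashed optimization does not lose them.
ts=$(date +%Y%m%d_%H%M%S)
mkdir -p "lpts_backup/$ts"
cp -p ./*.lpts "lpts_backup/$ts/" 2>/dev/null || echo "no .lpts files found"
```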

Post-processing

  • Command syntax

ParticleCorrespondencePostProcessingTPS postprocess.params

  • Format of the input file, postprocess.params
NUMBER_OF_SHAPES=4
checkpoint.0.lpts shape01.vtk
checkpoint.1.lpts shape02.vtk
checkpoint.2.lpts shape03.vtk
checkpoint.3.lpts shape04.vtk
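For many shapes, the per-shape lines of postprocess.params can be generated rather than typed. A sketch, assuming the checkpoint and shape naming shown above:

```shell
# Hypothetical sketch: emit one "checkpoint | mesh" line per shape,
# following the naming convention in the example above.
for i in 0 1 2 3; do
  n=$(printf '%02d' $((i + 1)))
  echo "checkpoint.${i}.lpts shape${n}.vtk"
done > postprocess_lines.txt
```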

Running Correspondence with attributes

Computing your attributes

  • In a 1,1,1 spacing basis
  • Formatted in a KWMeshVisu fashion
  • The x, y and z values will also be added to the optimization as attributes; use: xyzgrabber VTK 0 attx, xyzgrabber VTK 1 atty, xyzgrabber VTK 2 attz
  • Calculate weights as well, using MeshMath
    • For non x, y, z attributes

MeshMath sample00_att -variance sample01_att sample02_att .... sampleN_att

Take the resulting mean variance and invert it (1/meanvar); that is the weight to add in the pre.params file.
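A minimal sketch of the inversion step; the mean variance value here is a made-up placeholder standing in for the number MeshMath prints:

```shell
# Hypothetical mean variance, copied by hand from the MeshMath output.
meanvar=0.25
# The weight for pre.params is the inverse of the mean variance.
weight=$(awk -v v="$meanvar" 'BEGIN { printf "%g", 1 / v }')
echo "WEIGHTS entry: $weight"
```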

    • For x, y, z attributes

MeshMath sample00_attx -variance sample01_attx sample02_attx .... sampleN_attx

Average the mean variances of x, y and z, then invert that average (1/AVGxyz(meanvar)); that is the weight to add in the pre.params file.
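The same sketch for the x, y, z case; the three variance values are made-up placeholders for the numbers MeshMath prints for each coordinate:

```shell
# Hypothetical mean variances for the x, y and z attributes.
varx=0.2
vary=0.25
varz=0.3
# Average the three mean variances, then invert the average.
weight=$(awk -v x="$varx" -v y="$vary" -v z="$varz" \
  'BEGIN { printf "%g", 1 / ((x + y + z) / 3) }')
echo "WEIGHTS entry for x,y,z: $weight"
```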

Pre-Processing

  • Command syntax

ParticleCorrespondencePreprocessing --parameterFileName pre.params --voxelSize 1 --smoothing 0.3 --outputBinaryBool (Ipek's command is preferable; ShapeWorksGroom smooths the distance transforms too much)

  • Inputs: vtk files, KWM attfiles, weights.

Format of the Input file, pre.params

NUMBER_OF_SHAPES=4
NUMBER_OF_ATTRIBUTES=4
TEMPLATE_INDEX=1
WEIGHTS= weight1 weight2 weight3 weight4
shape01.vtk shape01_dirz.txt shape01_x.txt shape01_y.txt shape01_z.txt
shape02.vtk shape02_dirz.txt shape02_x.txt shape02_y.txt shape02_z.txt
shape03.vtk shape03_dirz.txt shape03_x.txt shape03_y.txt shape03_z.txt
shape04.vtk shape04_dirz.txt shape04_x.txt shape04_y.txt shape04_z.txt
  • Outputs: distance maps and attribute distance maps (advice: also output binary volumes with --outputBinaryBool, to detect possible problems in the scan conversion)
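With many shapes, the per-shape lines of pre.params are tedious to type. A sketch of generating them, assuming the file-naming convention shown above:

```shell
# Hypothetical sketch: emit one per-shape line of pre.params for 4 shapes,
# following the "shapeNN.vtk shapeNN_dirz.txt ..." convention above.
for i in 01 02 03 04; do
  echo "shape${i}.vtk shape${i}_dirz.txt shape${i}_x.txt shape${i}_y.txt shape${i}_z.txt"
done > shape_lines.txt
```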

Running correspondence

  • Command syntax

Correspondence correspondence.params

The syntax of correspondence.params changes slightly: weights and attributes must be indicated.

  • Format of the input file, correspondence.params
(inputs        
"subject_000_DistanceMap.mha"
"subject_001_DistanceMap.mha"
"subject_002_DistanceMap.mha"
"subject_003_DistanceMap.mha"
)
// Attribute files
(attributes_per_domain 4)
(attribute_files
"subject_000_Attribute_00.mha" "subject_000_Attribute_01.mha" "subject_000_Attribute_02.mha" "subject_000_Attribute_03.mha"
"subject_001_Attribute_00.mha" "subject_001_Attribute_01.mha" "subject_001_Attribute_02.mha" "subject_001_Attribute_03.mha"
"subject_002_Attribute_00.mha" "subject_002_Attribute_01.mha" "subject_002_Attribute_02.mha" "subject_002_Attribute_03.mha"
"subject_003_Attribute_00.mha" "subject_003_Attribute_01.mha" "subject_003_Attribute_02.mha" "subject_003_Attribute_03.mha"
)
(attribute_scales 1.998792 0.013913379 0.013913379 0.013913379 )
// OPTIONALLY we could specify a set of point files to initialize
// the optimization like this:
//(point_files "BussOriginalScaled.lpts"  "BussRegMirrorScaled.lpts" )
//(point_files "001V2_Mandible.lpts"  "001V6_Mandible.lpts" "001V8_Mandible.lpts" )
(number_of_particles 256) // If point files are not specified, then
                          // the application will initialize particles
                          // by splitting until each shape has this 
                          // total number of particles.
(iterations_per_split 200) // Iterations between splitting during
                          // initialization phase. 
(starting_regularization 10.0) // Starting regularization for the
                                // entropy-based correspondence
                                // optimization.
(ending_regularization 0.1) // Final regularization for the entropy-
                            // based correspondence.
(optimization_iterations 200) // Number of iterations for the entropy-
                              // based correspondence.
(checkpointing_interval 20) // Number of iterations between checkpoints
                            // (iterations at which results are saved)
(output_points_prefix "surg") // Prefix for the output files.
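The long attribute_files entry is error-prone to write by hand. A sketch of generating it for 4 subjects with 4 attributes each, assuming the subject_SSS_Attribute_AA.mha naming convention used in the example:

```shell
# Hypothetical sketch: generate the attribute_files entry, one line per
# subject, following the naming convention in the example above.
{
  echo '(attribute_files'
  for s in 000 001 002 003; do
    for a in 00 01 02 03; do
      printf '"subject_%s_Attribute_%s.mha" ' "$s" "$a"
    done
    printf '\n'
  done
  echo ')'
} > attribute_files_entry.txt
```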


  1. Not using init points: add a point, then split all (watch the glyph size)
  2. Using init points, and how to get them (see below)
  • Parameters
    • Correspondence: general entropy
    • Surface sampling: adaptive (the adaptivity strength increases the sampling in high-curvature areas)
    • Relative grad scaling: an error factor; the larger the error accepted in the correspondence optimization, the worse the final results. You may want to tune it so that the two terms of the equation reach roughly the same value.
    • Initial min variance: step once, check the 4th mode of variation, and use that order of magnitude as the initial min variance. If you do not hold it, the min variance will decrease while iterating; you may want to try holding it to see how it behaves.

Running the optimization: press RUN and see how it goes, then split all and keep going.

Advice: SAVE INTERMEDIATE STEPS (save lpts). In case the optimization crashes, it is good to have checkpoints.

Post-processing

  • Command syntax

ParticleCorrespondencePostProcessingTPS postprocess.params

  • Format of the input file, postprocess.params
NUMBER_OF_SHAPES=4
# corresponding particle list | projection target surface mesh | projected surface mesh output | TPS warped surface mesh output (optional for --saveTPS)
checkpoint.0.lpts shape01.vtk shape-projected-01.vtk [shape-tpsWarped-01.vtk]
checkpoint.1.lpts shape02.vtk shape-projected-02.vtk [shape-tpsWarped-02.vtk]
checkpoint.2.lpts shape03.vtk shape-projected-03.vtk [shape-tpsWarped-03.vtk]
checkpoint.3.lpts shape04.vtk shape-projected-04.vtk [shape-tpsWarped-04.vtk]
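The four-column lines above follow a regular pattern and can be generated. A sketch, assuming the naming convention shown and that the optional TPS-warped output is wanted:

```shell
# Hypothetical sketch: emit one postprocess.params line per shape, with the
# optional TPS-warped output column included (used with --saveTPS).
for i in 0 1 2 3; do
  n=$(printf '%02d' $((i + 1)))
  echo "checkpoint.${i}.lpts shape${n}.vtk shape-projected-${n}.vtk shape-tpsWarped-${n}.vtk"
done > postprocess_lines.txt
```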

Useful things to know

Generating models from binary volumes: ModelMaker in Slicer 3

  1. Load the volume (check the label map checkbox)
  2. Go to Surface Models and Model Maker
  3. Create a new model hierarchy, select the volume, apply, and done
  4. Save and check only the model just created
  • Advice: close the scene every time you start creating a new model

Inspecting binary volume properties

ImageStat binary_volume -info (binary_volume can be *.gipl, *.gipl.gz, *.mha, *.mhd, etc.)

This command will print the spacing, origin, dimensions, and the maximum and minimum grey-scale values.

Initializing particles

  1. MinMaxMeshReader, implemented by Clement. The command reads the limit points in x, y and z from a meta file.

MinMaxMeshReader <InputMesh.meta> <AttributeFile.txt> <PointsFile.lpts>

  • InputMesh.meta: the input mesh
  • AttributeFile.txt: outputs information that is not needed for particle initialization
  • PointsFile.lpts: outputs the limit points as a particle file that can be loaded by the correspondence program
  • Command syntax

/Autism2/IBIS2/people/vachet/Software/MinMaxMeshReader/MinMaxMeshReader_linux64/MinMaxMeshReader

  1. More soon!

Reconstruction Quality Control

Using MeshValmet

  • Command syntax

MeshValmet

Load the IV-translated models from your original and reconstructed meshes.