- Research article
- Open Access
- Open Peer Review
Building generic anatomical models using virtual model cutting and iterative registration
© Xiao et al; licensee BioMed Central Ltd. 2010
- Received: 5 August 2009
- Accepted: 8 February 2010
- Published: 8 February 2010
Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms.
The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting a sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models.
After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step.
Our method is flexible and easy to use, so that anyone can use image stacks to create models and retrieve a sub-region from them with ease. The Java-based implementation allows our method to be used on various visualization systems, including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of interest quickly and accurately.
- Image Registration
- Image Stack
- Deformable Image Registration
- Left Mandible
- CAVE Automated Virtual Environment
Spatial information of biological structures has been used to analyze their functions and to relate their shape changes to various genetic parameters [1–4]. In particular, using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research [1, 2, 4–10]. In order to be suitable for statistical analysis, a generic 3D model must be a single averaged model representing all individual 3D models in the same population of a study [5, 11]. An averaged 3D model is a commonly used form of a generic 3D model, and its creation captures information that can be exploited in statistical analyses of real populations. By comparing averaged models and the dispersion around them, anatomical differences can be quantified across groups that differ in some underlying causal or explanatory factors, such as genetics, gender, and drug treatment. The comparisons can be made between 'static' morphological states, where the subjects for comparison are at the same developmental state, or between 'dynamic' states, where comparisons are made across various stages of the subjects' growth. Therefore, a technique for creating 3D generic models with high throughput is needed to collect and manage large numbers of subjects quickly and efficiently. Such a technique will enable researchers to discover a wide range of traits of interest in both natural and clinical settings. Generic 3D models can also be used in automatic segmentation, medical education, virtual crash testing, therapy planning, and customizing replacement body parts [11, 12]. Hence, in medical and biological studies, 3D generic models built for a range of populations are in high demand.
In order to create valid 3D generic models from 2D image stacks, particular attention must be paid to two essential steps: image segmentation and image registration. Image registration is the process of finding a 3D transformation that maps an anatomical region of one subject onto the same region of another. This process is essential in clinical and research applications because researchers often need to compare the same anatomical region scanned using different modalities or at different time points. Image segmentation is needed when we try to retrieve the spatial information of certain biological structures after applying in vivo imaging technologies such as MRI. This step is generally indispensable because 3D image stacks generated from in vivo scanners usually contain a large amount of superfluous information that is irrelevant to immediate diagnostic or therapeutic needs.
With the tremendous advancements in medical imaging technologies such as CT, PET, MRI, and fMRI, we are now able to capture images of biological structures and their functions more clearly than ever before. Additionally, advanced technologies from other fields, such as computer vision, computer graphics, image processing, and artificial intelligence, have been used to analyze 2D medical images of various modalities. However, due to the complexity of biological structures and the way their shape information overlaps in medical images, it is still an exceptionally difficult task to quickly and accurately create 3D generic models for a population of a study.
Due to the difficulties of automating the segmentation task, enhanced manual segmentation software is still widely used. Various image processing algorithms have been developed to minimize user interaction and increase segmentation accuracy. However, the current enhanced manual segmentation approaches are still quite laborious; they often require a well-trained user to interact with every 2D image slice. Accurate 3D reconstruction of a region, structure, or tissue of interest therefore entails specifically tailored solutions that combine and integrate different 3D segmentation algorithms, which may still necessitate manual segmentation on each 2D image slice. To redress such persistent drawbacks, we have developed a generalized virtual dissection-based method for creating generic models. In comparison to our previous virtual dissection technique, the method now allows user-defined curves for indicating cutting surfaces and employs enhanced iterative registration to better handle shape variations. In addition, the resulting software is now publicly available. We show that the creation of an averaged model that captures spatial information exploitable in statistical analyses of organ shape is facilitated by coupling our generalized segmentation method with existing automatic image registration algorithms.
Overview of the method
The method pipeline contains the following major steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) cutting each model to generate a sub-model of the user's interest; (iv) making image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks from the previous step; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. All the algorithms are implemented in Java and C++ using functionalities from the open source toolkits VTK (Visualization Toolkit), ITK (Insight Segmentation and Registration Toolkit), and ImageJ. Both volumetric data and a surface model for the generic 3D model are created at the final step.
3D model reconstruction
Sub-model of interest creation
Our reconstructed 3D model is a representation of the whole mouse skull. In order to retrieve the sub-model, our custom-developed cutting tools are used to cut the 3D skull model until the desired separation of the sub-model is achieved.
Creating corresponding 2D image portions of the sub-model
Iterative image registration
Rigid 3D image registration. In order to align the entire set of sub-models into the same space automatically, an intensity-based rigid 3D registration algorithm, which uses a mean-square metric, a linear interpolator, a versor rigid 3D transform, and a versor rigid 3D transform optimizer from ITK, is used to register the images.
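As an illustration of the mean-square metric named above, here is a minimal Python sketch (ours, not the ITK implementation used in the package): the metric averages the squared intensity differences over corresponding pixels, and lower values indicate better alignment.

```python
def mean_square_metric(fixed, moving):
    """Mean of squared intensity differences over corresponding pixels.

    `fixed` and `moving` are equal-length flat lists of pixel intensities;
    a perfectly aligned pair of identical images yields 0.
    """
    if len(fixed) != len(moving):
        raise ValueError("images must have the same number of pixels")
    return sum((f - m) ** 2 for f, m in zip(fixed, moving)) / len(fixed)
```

The registration optimizer adjusts the transform parameters so as to drive this value down.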
Affine 3D image registration. Due to the variations among individual sub-models, rigid 3D image registration leaves local misalignments, and an averaged model based only on rigid registration might not be a representative average. Therefore, affine 3D image registration is also available in our package to further align the models. An intensity-based affine 3D registration algorithm, which uses a mean-square metric, a linear interpolator, an affine transform, and a regular-step gradient descent optimizer from ITK, is applied for affine registration.
Non-rigid (deformable) image registration. The global affine transformation from the previous step might leave some remaining local shape variations. Therefore, in order to sharpen the blurry average images, a non-rigid image registration can also be used after step 2. An intensity-based deformable 3D registration algorithm, which uses a mean-square metric, a linear interpolator, a B-spline-based transform, and an LBFGS (limited-memory Broyden-Fletcher-Goldfarb-Shanno) optimizer from ITK, is applied for further deformable image registration.
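In all three variants, the optimizer searches for transform parameters that minimize the metric. The following toy Python sketch (ours; it stands in for the metric-plus-optimizer loop, not for ITK's optimizers) finds the best integer shift of a 1D "moving" intensity profile against a "fixed" one:

```python
def register_shift(fixed, moving, max_shift=10):
    """Toy 1D intensity-based registration: exhaustively search for the
    integer shift of `moving` that minimizes the mean-square metric
    against `fixed` (the real package optimizes full 3D transforms)."""
    def metric(shift):
        pairs = [(fixed[i], moving[i - shift])
                 for i in range(len(fixed))
                 if 0 <= i - shift < len(moving)]
        if not pairs:  # no overlap at this shift
            return float("inf")
        return sum((f - m) ** 2 for f, m in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=metric)
```

Rigid, affine, and B-spline registration generalize this idea to rotation, scaling/shearing, and dense local deformation, respectively, with gradient-based optimizers replacing the exhaustive search.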
We randomly pick a subject from the female group as a reference and register every image stack to this reference stack using 3D rigid registration. After each registration step, the images are binarized such that pixels with intensity 255 belong to the model and pixels with intensity 0 belong to the background. Then we average the corresponding pixel intensities from all the stacks to create the averaged image stack. The same registration process is applied to the male group.
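The binarize-then-average step can be sketched as follows (illustrative Python with nested lists standing in for image stacks; the actual package operates on ITK images):

```python
def binarize(stack, threshold=128):
    """Binarize a registered stack: model pixels -> 255, background -> 0.
    `stack` is a list of slices, each slice a flat list of intensities;
    the threshold of 128 is an illustrative assumption."""
    return [[255 if p >= threshold else 0 for p in image] for image in stack]

def average_stacks(stacks):
    """Average corresponding pixel intensities across binarized stacks."""
    n_stacks = len(stacks)
    n_slices = len(stacks[0])
    n_pixels = len(stacks[0][0])
    return [[sum(st[s][p] for st in stacks) / n_stacks
             for p in range(n_pixels)]
            for s in range(n_slices)]
```

Pixels on which all subjects agree keep intensity 255 (or 0) in the average, while pixels where the subjects disagree take intermediate values, which is what produces the blur that later registration rounds sharpen.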
Averaged models are created from the previous step by using the global median of the pixel intensities as the threshold value for binarizing the averaged image stack. Affine-transformation-based image registration is then applied to all the images that have been processed by the rigid transformation in the previous step, and new averaged image stacks are created in the same way.
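The median-threshold re-binarization can be sketched like this (illustrative Python; following the text, the median is taken over all pixel intensities of the averaged stack):

```python
from statistics import median

def median_threshold(averaged_stack):
    """Global median of the averaged pixel intensities, used as the
    threshold for re-binarizing the averaged image stack."""
    pixels = [p for image in averaged_stack for p in image]
    return median(pixels)

def rebinarize(averaged_stack, threshold):
    """Turn the blurry averaged stack back into a binary model."""
    return [[255 if p >= threshold else 0 for p in image]
            for image in averaged_stack]
```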
The previous step is repeated, but this time B-spline-based deformable image registration is applied to all the images that have been processed by the affine transformation in the previous step.
The previous step can be repeated on the images processed by the deformable transformation in order to achieve more accurate registrations.
Intensity-based image averaging
The marching cubes algorithm is applied to the averaged image stacks, using the global median of the averaged image intensities as the isosurface threshold, to extract the generic left mandible model that represents the average shape of all the left mandibles across the subjects of the same population.
Generic model building
We have developed a generalized virtual dissection-based method for the creation of generic models from 2D image stacks of a group of individuals. To illustrate our generic model creation technique, whole-body scans of eight female mice and eight male mice are used to create averaged 3D models of the left mandible. For each subject, the left mandible 3D model is created using our cutting tools, and the corresponding 2D image stack that contains only the information of the left mandible is also generated.
Validation of the iterative registration
Comparison of image registration accuracy
[Table 1: No. of pixels with intensity 255 / No. of pixels with non-zero intensity after registration, reported for versor-based 3D rigid registration, affine-transformation-based 3D registration, and B-spline deformable-transformation-based 3D registration, with each of the subjects F2–F9 and M2–M9 used in turn as the reference; the numerical values are not preserved in this copy.]
Dice index to evaluate the similarities between two averaged models created from different initial references
The Dice index is used to evaluate the similarities between averaged models built starting from different reference subjects, after the additional registration procedure to facilitate direct comparison. As shown in Table 2, the similarity measures range from 0.97 to 0.98 among the different averaged models. We believe that the remaining 0.02 to 0.03 difference is due to the systematic error introduced by the registration process. For the female mice group, the mean Dice index is 0.976464, the standard deviation is 0.001489, and the coefficient of variation is 0.001524. For the male mice group, the mean Dice index is 0.9789, the standard deviation is 0.000698, and the coefficient of variation is 0.000713. Therefore, we can see that, in this case, starting from a different reference subject does not noticeably affect the averaged models.
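For two binary models A and B, the Dice index is 2|A ∩ B| / (|A| + |B|), equal to 1 for identical models and 0 for disjoint ones. A minimal sketch (ours; each model is represented as a set of foreground voxel coordinates):

```python
def dice_index(model_a, model_b):
    """Dice similarity between two binary volumes given as sets of
    foreground voxel coordinates: 2|A ∩ B| / (|A| + |B|)."""
    intersection = len(model_a & model_b)
    return 2.0 * intersection / (len(model_a) + len(model_b))
```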
Root mean square error (RMSE) between models
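For corresponding points on two surface models, the RMSE is the root of the mean squared Euclidean distance between point pairs. A minimal sketch (ours; it assumes point correspondence has already been established by registration):

```python
from math import sqrt

def rmse(points_a, points_b):
    """Root mean square of Euclidean distances between corresponding
    3D points on two surface models."""
    if len(points_a) != len(points_b):
        raise ValueError("models must have the same number of points")
    squared = [sum((a - b) ** 2 for a, b in zip(pa, pb))
               for pa, pb in zip(points_a, points_b)]
    return sqrt(sum(squared) / len(squared))
```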
Many studies have considered complicated organs such as the brain [4, 9, 10, 21, 22]. Inside the brain, different sub-regions need to be considered during the registration process; if one uniform intensity value is used to represent the whole organ, homogeneous tissue mapping might not be possible. In our study, however, we consider organs with homogeneous intensities and structures. Therefore, we can use a single intensity value to represent the model and use it for registration and model averaging, which reduces the registration time and increases the registration accuracy.
Flexible module-based implementation
Our method is composed of five modules: 3D model reconstruction, sub-model of interest creation, production of 2D image stacks corresponding to the sub-models, image registration, and generic 3D model creation. Each module in this framework has various algorithms that can be applied according to the requirements of a specific scientific study.
For 3D model reconstruction from 2D image stacks, the marching cubes algorithm is the most popular one. Moreover, other reconstruction algorithms have been developed to improve the quality of the contour geometry [23, 24]. Therefore, depending on the application requirements, different reconstruction algorithms can be used in our method to create polygonal models. Our cutting tools can be used to process polygonal models created from any reconstruction algorithm.
Efficiency of the cutting approach
In order to automatically or semi-automatically create generic 3D models, different approaches have been proposed. However, those generic model building tools either need perfect individual models or require costly human-computer interactions to retrieve 3D models. In one study, a brain atlas of the honeybee was constructed in which the brain structures, such as neuropils and neurons, were manually segmented and labeled. Even with sophisticated algorithms to help users trace regions slice by slice quickly and accurately, manually processing thousands of images is still very labor intensive. Therefore, we focused on processing more slices with fewer human-computer interactions. Using a plane to separate a 3D polygon mesh has previously been used to refine a model created from CT or MRI image stacks. Our approach can use not only a plane but also a box, a sphere, or even a user-defined curve to cut 3D models, and more cutting algorithms can be added to quickly remove the portions that are of no interest to the user. With the cutting information, the corresponding 2D image stacks are updated automatically. Our approach can thus be used to create the desired models very quickly and to register the images automatically, which significantly shortens the generic model building time.
Processing time for model making
[Table: processing time for model making. Each stack consists of 500 images of 1024 × 1024 pixels; the table reports the average time to create a sub-model from a stack and the average number of cuts performed; the numerical values are not preserved in this copy.]
Since image registration is an essential step towards creating generic models, numerous techniques have been developed to register corresponding 2D image stacks or 3D models. For some applications, averaged models created from the rigid registration step alone satisfy the requirements. For example, one approach applies an intensity-based rigid image registration algorithm to create a generalized shape image (GSI), which represents the average values of the corresponding pixel intensities across all the image stacks. Even though this method leaves some shape variations, and poorly registered images create local differences from averaged images produced with the gold standard (e.g., landmark-based Procrustes averaging), it can still be used as a screening tool for initial shape analysis. Another approach uses iterative averaging: all the original images are registered to the same reference to create an average, and the original images are then iteratively re-registered to the new average. Affine and non-rigid image registrations are applied in the honeybee brain atlas creation. A subsequent affine registration step removes more misaligned shape differences than rigid registration alone and creates a sharper averaged image, but relative shape differences might still remain. Nevertheless, compared with automatic deformable registration, affine registration requires fewer parameters and the computation time is relatively short. Therefore, depending on the requirements of the application, deformable registration can be used repeatedly to further remove the misalignments and create still sharper averaged images.
If the user wants to create an averaged surface model that is closer to the gold-standard Procrustes averaged model, a method for jointly registering and averaging 3D surface models can be used. In that approach, anatomical structures are modeled using a quadrangular mesh. The contour in each image slice is detected and then re-sampled using the same number of points. A permutation of points on each contour is then performed to guarantee that every point in each model corresponds to the same anatomical region as the point with the same index in all other models. The points are finally averaged to create the generic model. The points are indexed by two integer coordinates, one of which represents the ordering of the initial image stacks. However, in order to use this approach, attention must be paid to the alignment in the direction of slice ordering, since the method assumes that the anatomical structures along this direction are aligned automatically by the scanner. Therefore, rigid, affine, or deformable registrations should still be used first to ensure that the anatomical structures along this direction are aligned. Subsequently, the multiple 3D anatomical surface model averaging algorithm can be used to create an averaged surface model. Our package does not provide the quadrangular mesh building algorithm, but our registration programs can still be used to align the anatomical structures along the slice ordering direction.
Information on shape variation
The rigid, affine, and non-rigid registration algorithms that we employ allow us to align all the subjects virtually and create the averaged models. Besides the final averaged 3D models, all the transformations applied during the registration step are also available for visualizing shape changes and for numerical morphometric analyses such as global and local shape comparisons, strain tensor analysis, and analysis of modes of variation [3, 6, 25]. The transformations are all available through ITK.
X' = V ⋆ (X - C) + C + T

In the above equation, V is a versor (a unit quaternion), X is a point in 3D space, C is a vector that represents the center of the rigid transformation, and T is the translation vector. The application of the versor onto the vector (X - C), denoted ⋆, is different from the regular vector product. However, in ITK, we can convert the versor product into the Euclidean matrix format. The 3D rotation matrix and the translation vector can be calculated from the versor product and saved for further analysis.
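The conversion from the versor product to the Euclidean matrix format can be sketched as follows (a standard unit-quaternion-to-rotation-matrix conversion in Python, written by us; not the authors' ITK-based code):

```python
def versor_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ]

def rigid_transform(q, center, translation, point):
    """Euclidean form of the versor rigid transform: X' = R(X - C) + C + T."""
    r = versor_to_matrix(q)
    d = [p - c for p, c in zip(point, center)]
    rd = [sum(r[i][j] * d[j] for j in range(3)) for i in range(3)]
    return [rd[i] + center[i] + translation[i] for i in range(3)]
```

For example, the versor (cos 45°, 0, 0, sin 45°) encodes a 90° rotation about the z-axis and maps (1, 0, 0) to (0, 1, 0).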
X' = A(X - C) + C + T

where X is a vector representing a point in 3D space, A is a 3 × 3 matrix representing the affine transformation, C is a vector representing the transformation center, and T is a vector representing the 3D translation. X' is the new position of X after the affine transformation. The affine registration from ITK that we utilized consists of rotation, scaling, shearing, and translation in three dimensions. There are (3 + 1) × 3 parameters in this transformation: the first 3 × 3 parameters define A, and the last 3 parameters define the translation along each dimension. The center of the transformation is calculated automatically by the program and is also available.
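The affine mapping X' = A(X - C) + C + T can be sketched directly (illustrative Python written by us; ITK applies the same parameterization internally):

```python
def affine_transform(a, center, translation, point):
    """Apply X' = A(X - C) + C + T, where the 3x3 matrix `a` encodes
    rotation, scaling, and shearing, for (3 + 1) x 3 = 12 parameters."""
    d = [p - c for p, c in zip(point, center)]
    ad = [sum(a[i][j] * d[j] for j in range(3)) for i in range(3)]
    return [ad[i] + center[i] + translation[i] for i in range(3)]
```

With A = 2·I, center (1, 1, 1), and zero translation, the point (2, 2, 2) is scaled about the center to (3, 3, 3).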
B-spline-based non-rigid transformation [3, 6, 9, 13] generates a dense deformation field in which a deformation vector is assigned to every point in the 3D space. The deformation field is available and can be saved in the form of a vector image from ITK. The deformation vectors can be used to further analyze local shape variations.
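Applying a dense deformation field amounts to adding the stored deformation vector to each point; a minimal sketch (ours, with a Python dict standing in for ITK's vector image):

```python
def apply_deformation(field, voxel):
    """Displace a voxel by the deformation vector stored for it in a
    dense deformation field; voxels without an entry are left in place."""
    dx, dy, dz = field.get(voxel, (0.0, 0.0, 0.0))
    x, y, z = voxel
    return (x + dx, y + dy, z + dz)
```

Summaries of these vectors (e.g., their magnitudes per region) are what feed the local shape variation analyses mentioned above.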
Applicability of the method
Using our cutting tools to build models from 2D image stacks allows beginners in medical fields to learn anatomy intuitively and to enjoy the process of separating biological structures from a virtual body model before dealing with real subjects. Quickly and accurately creating various 3D averaged models can satisfy the demand for large numbers of models in virtual crash testing, therapy planning, and customizing replacement body parts. Large-scale morphological studies that require quantification of anatomical features can be very tedious and are therefore often restricted to a few important measurements. Our method facilitates morphological studies by allowing anatomical structures to be measured and compared rapidly and in greater detail. These tools help put morphological analysis on a similar footing to genetic and molecular studies, where large amounts of data and measurements can be handled relatively quickly.
The issue of homology, i.e., identifying corresponding anatomical structures across subjects, is also addressed by our method. If we compute an average and make quantitative comparisons, we want to compare the same anatomical region. This requires the two models being compared to first be registered correctly with each other, such that if one area of interest is picked in one model, it refers to the same region in the other model. The iterative registration employed in our approach can, to a large extent, reduce such misalignments. The method we developed leverages the functionalities and technologies of existing toolkits, and the resulting software package allows biologists to build their generic models more quickly and accurately.
As our virtual dissection tools are implemented in Java, they can run both on regular display systems and on the state-of-the-art CAVE Automated Virtual Environment, a 3D stereo-based four-wall display system installed at the University of Calgary to provide users with an immersive virtual environment. One advantage of using this virtual reality system as a platform for our cutting tools is that users can treat real-world objects and virtual-world objects in much the same way, which is not possible in a desktop computing environment or even in a single-wall stereo display environment. For example, users can move around in the display environment and view virtual objects from the "inside", such that detailed operations can be easily understood. By harnessing the power of the CAVE and our cutting tools, users gain more flexibility, including a wide variety of viewing perspectives and a high degree of freedom in setting the locations and orientations of the cutting tools. This is a definite advantage over ordinary desktop computing environments, where objects need to be frequently rotated in order to perceive their 3D structure.
We have developed a new technique that uses virtual model cutting and iterative image registration to create generic models from 2D image stacks of a group of individuals. Our system allows biologists to build generic 3D models quickly and accurately. However, particularly complicated morphological structures, such as the highly branched and convoluted designs that typify vascular or nervous networks, still pose a challenge to our method of generic model creation. It is difficult to use the current manual virtual dissection tools to extract such sub-models from initial, unprocessed scans. More convenient and intuitive manual virtual dissection methods will be developed in our future research. Producing deformable models based on the current tools will also be an area of further development; such deformable averaged models can then be used to automatically segment anatomical structures. More advanced automated segmentation algorithms that utilize generic models will be studied to enable higher-throughput analyses of anatomical structures in both medical and more general biological contexts. Quantification of 3D shape variations will also be studied based on our generic model building technique.
The implementation of our method is available for free download at http://www.visualgenomics.ca/~mxiao/research.html. The current version of the software has been tested on Unix Solaris 10 and on Windows XP with .NET Framework 3.5. In order to run the program from our jar files, at least Java 1.6 needs to be installed. ImageJ, as well as shared (dynamically linked) libraries of VTK and ITK, should also be installed.
Detailed installation and the user's guide are also available on the project website. VTK, ITK and ImageJ are all open source and freely available software toolkits.
This work has been supported by Genome Canada through Genome Alberta; Alberta Science and Research Authority; Western Economic Diversification; the Governments of Canada and of Alberta through the Western Economic Partnership Agreement; the iCORE/Sun Microsystems Industrial Research Chair program; the Alberta Network for Proteomics Innovation; and the Canada Foundation for Innovation. We thank Dr. Fred Bookstein for his comments on this project. We thank Wei Liu for scanning the mice. We thank Megan Smith for her comments and her help with editing the paper. We also thank the reviewers for their comments.
- Thompson PM, Mega MS, Narr KL, Sowell ER, Blanton RE, Toga AW: Brain image analysis and atlas construction. Handbook of Medical Imaging: Medical Image Processing and Analysis. Edited by: Sonka M, Fitzpatrick JM. 2000, SPIE Press, 2: 1063-1119.
- Small CG: The Statistical Theory of Shape. 1996, New York: Springer.
- Olafsdottir H, Darvann TA, Hermann NV, Oubel E, Ersboll BK, Frangi AF, Larsen P, Perlyn CA, Morriss-Kay GM, Kreiborg S: Computational mouse atlases and their application to automatic assessment of craniofacial dysmorphology caused by the Crouzon mutation Fgfr2C342Y. Journal of Anatomy. 2007, 211: 37-52. 10.1111/j.1469-7580.2007.00751.x.
- Barratt DC, Chan CSK, Edwards PJ, Penney GP, Slomczykowski M, Carter TJ, Hawkes DJ: Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging. Medical Image Analysis. 2008, 12: 358-374. 10.1016/j.media.2007.12.006.
- Maschino E, Maurin Y, Andrey P: Joint registration and averaging of multiple 3D anatomical surface models. Computer Vision and Image Understanding. 2006, 1: 16-30. 10.1016/j.cviu.2005.06.004.
- Brandt R, Rohlfing T, Rybak J, Krofczik S, Maye A, Westerhoff M, Hege HC, Menzel R: Three-dimensional average-shape atlas of the honeybee brain and its applications. The Journal of Comparative Neurology. 2005, 492: 1-19. 10.1002/cne.20644.
- Avants B, Gee JC: Shape averaging with diffeomorphic flows for atlas creation. Proceedings of the IEEE International Symposium on Biomedical Imaging, 1: April 2004. 2004, Arlington, VA, 595-598.
- Argall BD, Saad ZS, Beauchamp MS: Simplified intersubject averaging on the cortical surface using SUMA. Human Brain Mapping. 2006, 27: 14-27. 10.1002/hbm.20158.
- Rueckert D, Frangi AF, Schnabel JA: Automatic construction of 3D statistical deformation models using non-rigid registration. Lecture Notes in Computer Science: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2001. Edited by: Niessen WJ, Viergever MA. 2001, Berlin Heidelberg: Springer, 2208: 77-84.
- Rajamani KT, Styner MA, Talib H, Zheng G, Nolte LP, Ballester MAG: Statistical deformable bone models for robust 3D surface extrapolation from sparse data. Medical Image Analysis. 2007, 11: 99-109. 10.1016/j.media.2006.05.001.
- Schmutz B, Reynolds KJ, Slavotinek JP: Development and validation of a generic 3D model of the distal femur. Computer Methods in Biomechanics and Biomedical Engineering. 2006, 5: 305-312.
- Zachow S, Zilske M, Hege HC: 3D reconstruction of individual anatomy from medical image data: segmentation and geometry processing. Proceedings of the CADFEM Users Meeting. 2007, Dresden, Germany.
- Yoo T, Ed: Insight into Images. 2004, AK Peters.
- Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G: User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006, 3: 1116-1128. 10.1016/j.neuroimage.2006.01.015.
- Chen T, Metaxas D: A hybrid framework for 3D medical image segmentation. Medical Image Analysis. 2005, 6: 547-565. 10.1016/j.media.2005.04.004.
- Xiao M, Soh J, Meruvia-Pastor O, Osborn D, Lam N, Hallgrímsson B, Sensen CW: An efficient virtual dissection tool to create generic models for anatomical atlases. Studies in Health Technology and Informatics. 2009, 142: 426-428.
- Schroeder W, Martin K, Lorensen B: The Visualization Toolkit. 2006, Prentice-Hall.
- Rasband WS: ImageJ. U.S. National Institutes of Health, Bethesda, Maryland, USA. 1997, [http://rsb.info.nih.gov/ij/].
- Kristensen E, Parsons TE, Hallgrímsson B, Boyd SK: A novel 3D image-based morphological method for phenotypic analysis. IEEE Transactions on Biomedical Engineering. 2008, 12: 2826-2831. 10.1109/TBME.2008.923106.
- Dice LR: Measures of the amount of ecologic association between species. Ecology. 1945, 26: 297-302. 10.2307/1932409.
- Guimond A, Meunier J, Thirion JP: Average brain models: a convergence study. Computer Vision and Image Understanding. 2000, 2: 192-210. 10.1006/cviu.1999.0815.
- Guimond A, Meunier J, Thirion JP: Automatic computation of average brain models. Lecture Notes in Computer Science: Medical Image Computing and Computer-Assisted Intervention - MICCAI'98. 1998, Berlin Heidelberg: Springer, 1496: 631-640.
- Schaefer S, Warren J: Dual marching cubes: primal contouring of dual grids. Proceedings of the 12th Pacific Conference on Computer Graphics and Applications: October 2004. 2004, Seoul, Korea, 70-76.
- Schaefer S, Ju T, Warren J: Manifold dual contouring. IEEE Transactions on Visualization and Computer Graphics. 2007, 3: 610-619. 10.1109/TVCG.2007.1012.
- Bayly PV, Black EE, Pedersen RC, Leister EP, Genin GM: In vivo imaging of rapid deformation and strain in an animal model of traumatic brain injury. Journal of Biomechanics. 2006, 6: 1086-1095. 10.1016/j.jbiomech.2005.02.014.
- Sensen CW: Using CAVE® technology for functional genomics studies. Diabetes Technology & Therapeutics. 2002, 4: 867-871.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2342/10/5/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.