Research article
First published online September 14, 2014

Investigation of Nonuniform Dose Voxel Geometry in Monte Carlo Calculations


Abstract

The purpose of this work is to investigate the efficacy of using multi-resolution nonuniform dose voxel geometry in Monte Carlo (MC) simulations. An in-house MC code based on the dose planning method (DPM) MC code was developed in C++ to accommodate the nonuniform dose voxel geometry package, since general-purpose MC codes are coupled to their own geometry packages. In this package, the entire calculation volume is first divided into a coarse mesh, and each coarse voxel is then subdivided into nonuniform voxels of variable size based on density differences. We call this approach multi-resolution subdivision (MRS). It generates larger voxels in regions with small density gradients and smaller voxels in regions with large density gradients. To account for the large dose gradients caused by the beam penumbra, the nonuniform voxels can be further split using ray tracing starting from the beam edges. The accuracy of the implementation of the algorithm was verified by comparison with the data published by Rogers and Mohan. The discrepancy was found to be 1% to 2%, with a maximum of 3% at the interfaces. Two clinical cases were used to investigate the efficacy of nonuniform voxel geometry in the MC code. Applying our MRS approach, we started with an initial voxel size of 5 × 5 × 3 mm³, which was further divided into smaller voxels; the smallest voxel size was 1.25 × 1.25 × 3 mm³. We found that the simulation time per history for the nonuniform voxels is about 30% to 40% faster than for the uniform fine voxels (1.25 × 1.25 × 3 mm³) while maintaining similar accuracy.


Introduction

The past decade has seen gradually increasing use of the Monte Carlo (MC) method in dose calculation for clinical radiotherapy. A number of commercial treatment planning systems (TPSs) have started to offer MC dose engines as an option.1-4 To increase the calculation speed, numerous variance reduction techniques, such as history repetition, Russian roulette, and photon splitting, are employed.5 In most MC dose engines for treatment planning, the particles are tracked and the energy is scored in uniform geometric voxels that are resampled from the original computed tomography (CT) image data at decreased spatial resolution. The original CT pixel resolution of 512 × 512 is not usually used in MC simulations, mainly because the calculations become very slow: an extremely large number of particles is required to reduce the statistical uncertainty in small voxels to an acceptable level.
It is well known that the voxel size affects the speed of MC calculations. Kawrakow6 demonstrated that the central processing unit (CPU) time spent on voxel-to-voxel boundary crossing and energy loss deposition with the MC code VMC++ is proportional to the number of voxel boundaries crossed by the particle track, and thus to the inverse of the voxel size. To obtain better dose resolution, a small voxel size should be used; however, small voxels result in longer calculation times. A large voxel size increases the calculation speed, since the number of voxel boundary crossings is reduced and fewer particles are required to achieve a given statistical uncertainty. The choice of voxel size in MC simulations is therefore a trade-off between precision and speed. The voxel size (eg, 5 or 2 mm) is chosen at the beginning of the simulation to uniformly discretize the dose volume.
Several techniques have been investigated for increasing the speed of MC simulations based on the geometrical voxel representation. One approach is to use a scoring grid with different voxel sizes.7-9 The scoring grid is used for energy deposition only; the CT scan data are kept in a separate "geometry grid" for particle transport. The voxels in the scoring grid are either groups of original CT voxels or separate superimposed spherical voxels, and the size of the scoring voxels and the spacing between them are predefined by the user. Another approach that allows nonuniform voxel sizes is to use an octree as a compression algorithm to represent the CT data.10,11 An octree is a data structure in which each node has 8 subnodes; it is built by recursively subdividing a cubic volume into octants symmetrically. This technique has been used in the general-purpose MC code GEANT412 to reduce memory consumption as well as to decrease calculation time. Other approaches include the region-oriented approach13 and the non-voxelated approach.14,15 The region-oriented approach segments the CT image volume into regions of homogeneous composition and density and was implemented in the GEANT4 toolkit12; the calculation time was reported to decrease by up to a factor of 15. The non-voxelated approach isolates regions of uniform density and composition from the scoring grid to minimize the number of boundary crossings; it was implemented in DOSXYZnrc,16 with a reported speedup factor of up to 1.2 for CT phantoms. A further technique for increasing the calculation speed is the macro MC method, which applies a precalculated particle path library, generated in a well-defined simple local geometry, to a patient's heterogeneous global geometry; large steps are taken through the global geometry to accelerate the calculations.17
In this work, we postulate that by creating multi-resolution nonuniform voxels, with smaller voxels in large density gradient regions and larger voxels in smooth regions, we can increase the simulation speed while maintaining similar accuracy. To illustrate the potential improvement in calculation time and efficiency, we devised a volume decomposition technique that discretizes a CT image volume into multi-resolution nonuniform voxels. The algorithm employs an adaptive mesh refinement (AMR) technique,18 which starts with a uniform coarse mesh; each coarse voxel is then recursively split into 2 subvoxels whenever the density variation within it exceeds a preset limit. The AMR technique is widely used in hydrodynamics to resolve the details of shock waves by refining the grid spacing in those regions. In hydrodynamics simulations, the mesh changes dynamically over time as the resolution is refined. In our work, we use static mesh refinement, in the sense that the nonuniform mesh is formed beforehand and stays the same during the calculation. All neighboring information for a voxel is stored in memory as a tree-like data structure; it is therefore relatively quick to locate the voxel containing a given particle location, as required by the tracking algorithm.

Materials and Methods

Algorithm to Generate Nonuniform Voxels

The multi-resolution subdivision (MRS) procedure to generate nonuniform voxels of variable size from a three-dimensional (3D) CT image volume is carried out as follows:
1. The 3D CT image volume is converted to a mass density volume pixel by pixel, without changing the resolution, using a CT number-to-density calibration curve. The result is a 3D density volume with dimensions 512 × 512 × number of slices.
2. The density volume is then discretized into a mesh with coarse resolution. Generally, we keep the slice thickness and use a spacing of 5 mm in both the X and Y directions. The density assigned to a coarse voxel is the average of the densities of the CT voxels whose centers lie inside it; partial voxels within the coarse voxel are neglected. We also calculate average densities for each half of the coarse voxel in the X, Y, and Z directions.
3. The density variation ΔDx,y,z in the X, Y, and Z directions within a coarse voxel is calculated as the difference between the average densities of the 2 halves of the voxel in that direction. The largest density difference and the associated direction are stored for each coarse voxel.
4. If the density variation (ΔDx, ΔDy, or ΔDz) in the coarse voxel is greater than the predefined split criterion (eg, 0.5 g/cm³), the voxel is divided in half; otherwise, it remains intact. The average densities stored in step 2 are assigned to the 2 subvoxels.
5. All voxels, including newly created subvoxels, are examined repeatedly for possible splitting until the density variation within each voxel is less than the criterion. Finally, a material index is assigned to each voxel according to the density range-to-material table.
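The recursive splitting in steps 2 to 5 can be sketched as follows. This is a minimal illustration, not the GMC implementation: the function name, the toy density block, and the use of NumPy are ours; the 0.5 g/cm³ threshold follows the example criterion in step 4.

```python
import numpy as np

def split_voxel(density, threshold=0.5):
    """Recursively bisect a block of CT-resolution density cells (one coarse
    voxel) along the axis with the largest half-to-half average-density
    difference, until every leaf varies by less than `threshold` (g/cm^3).
    Returns a list of (shape, mean_density) leaves."""
    best_axis, best_diff, best_halves = -1, -1.0, None
    for axis in range(3):
        n = density.shape[axis]
        if n < 2:
            continue  # a single-cell-thick axis cannot be split further
        lo, hi = np.split(density, [n // 2], axis=axis)
        diff = abs(lo.mean() - hi.mean())
        if diff > best_diff:
            best_axis, best_diff, best_halves = axis, diff, (lo, hi)
    if best_halves is None or best_diff <= threshold:
        return [(density.shape, float(density.mean()))]  # homogeneous leaf
    lo, hi = best_halves
    return split_voxel(lo, threshold) + split_voxel(hi, threshold)

# A coarse voxel that is half water (1.0) and half bone-like (1.8 g/cm^3):
block = np.ones((4, 4, 2))
block[2:, :, :] = 1.8
leaves = split_voxel(block, threshold=0.5)  # two homogeneous subvoxels
```

Because the density jump exceeds the threshold only along the first axis, a single split produces two homogeneous leaves; a uniform block would remain a single voxel.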
Figure 1 illustrates the MRS procedure. It shows a split coarse voxel that is divided in half along the X direction (green cut plane) during the first pass and whose right subvoxel is further divided in half along the Y direction (red cut plane) during the second pass. To refine a coarse voxel of 5 × 5 × 3 mm³ down to 1.25 × 1.25 × 3 mm³ voxels, 4 passes are needed, since each pass splits along only 1 direction. The generated voxels are stored in a file with an unstructured format that includes information such as voxel indices, voxel neighbor connectivity, density, and material.
Figure 1. Illustration of a coarse voxel which is split into 3 subvoxels. The initial voxel (black cube) is split by the green plane into 2 subvoxels and the right subvoxel is further divided by the red plane.
The MRS steps above are based on the hypothesis that large dose gradients occur at heterogeneous regions such as tissue-bone interfaces. This is a reasonable assumption, since in model-based dose methods the energy deposition is scaled by the relative electron density. The other cause of large dose gradients is the beam penumbra. To take it into account, we apply a 5 mm margin to the beam edges and carry out ray tracing from the margin area. Voxels that are crossed by the rays and have not been divided previously are further subdivided into smaller voxels.

Monte Carlo Tracking Algorithm

To couple with the proposed nonuniform geometry, we developed an MC code in C++, called GMC, based on the dose planning method (DPM) MC code,19 a fast MC program for patient dose calculations. As in DPM, Compton scattering, photoelectric ionization, and pair production were considered for photon transport. These 3 processes were simulated in an analog fashion; that is, every interaction was modeled individually until the energy of the particle fell below a cutoff energy or the particle escaped the simulation volume. For other details of the physics implemented, readers may refer to DPM.19 We emphasize only the boundary crossing algorithm here.
As implemented in DPM, electrons stop at voxel boundaries so that the material change in the next voxel can be taken into account. The distance to the voxel boundary is calculated along with the distance to the next hard interaction, and the boundary crossing or the hard interaction event is selected based on the shorter distance. In a uniform mesh, a voxel is identified by 3 indices (I, J, and K), and the index of the next voxel after a boundary crossing is simply adjusted by ±1 according to the direction of the particle. This is not possible in the nonuniform voxel mesh, since a voxel can no longer be indexed by I, J, and K; the tracking subroutines therefore have to be modified accordingly.
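The competition between the sampled interaction distance and the boundary distance can be sketched as follows. This is an illustrative fragment, not GMC code: it assumes an exponentially distributed free path with attenuation coefficient μ, and the function and parameter names are ours.

```python
import math
import random

def next_event(mu, dist_to_boundary, u=random.random):
    """Compete the sampled distance to the next hard interaction
    (exponential free path, attenuation coefficient `mu` in 1/cm)
    against the distance to the voxel boundary; the shorter distance
    decides which event terminates the current step."""
    s_int = -math.log(1.0 - u()) / mu   # inverse-CDF sampling
    if s_int < dist_to_boundary:
        return "interaction", s_int
    return "boundary", dist_to_boundary

# With u = 0.5 the sampled free path is ln(2)/mu ~ 0.990 cm for mu = 0.7:
event, s = next_event(0.7, 2.0, u=lambda: 0.5)    # interaction comes first
event2, s2 = next_event(0.7, 0.5, u=lambda: 0.5)  # boundary comes first
```

In a fine mesh the boundary branch is taken far more often, which is exactly the overhead the nonuniform voxels reduce.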
The calculation geometry in our work consists of a number of nonuniform voxels; it is a hybrid mesh with variable voxel sizes. The data structure used in the hybrid mesh conceptually consists of a number of root nodes, each representing a binary tree. Information such as the parent, the neighbor voxels, and the connectivity of the 2 child voxels is stored in each tree node for quick retrieval. A tag in each tree node indicates whether the voxel is split. To find the index of the next voxel after a boundary crossing, the tag is checked first. If the voxel is not split, the next voxel is this leaf node; otherwise, the child voxels are searched further until the undivided subvoxel is found. The block diagram in Figure 2 illustrates this process.
Figure 2. A block diagram for tracking particles on nonuniform voxels. A tag of a voxel indicates whether the voxel is a leaf node or not.
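The leaf lookup described above (check the tag; if the voxel is split, descend to the child containing the particle position) can be sketched as a small binary-tree structure. The class and method names are illustrative, not taken from GMC.

```python
class VoxelNode:
    """Binary-tree voxel: either a leaf (undivided) or split into two
    children along `split_axis` at coordinate `plane`."""
    def __init__(self, bounds, density=0.0):
        self.bounds = bounds          # ((x0,x1),(y0,y1),(z0,z1))
        self.density = density
        self.split_axis = None
        self.plane = None
        self.children = None          # (low_child, high_child) when split

    def split(self, axis):
        a0, a1 = self.bounds[axis]
        mid = 0.5 * (a0 + a1)
        def sub(lohi):
            b = list(self.bounds)
            b[axis] = (a0, mid) if lohi == 0 else (mid, a1)
            return VoxelNode(tuple(b), self.density)
        self.split_axis, self.plane = axis, mid
        self.children = (sub(0), sub(1))

def find_leaf(node, point):
    """Descend until the undivided voxel containing `point` is found
    (the tag check in Figure 2 corresponds to `children is None`)."""
    while node.children is not None:           # node is split
        low, high = node.children
        node = low if point[node.split_axis] < node.plane else high
    return node

root = VoxelNode(((0, 5), (0, 5), (0, 3)))   # one 5 x 5 x 3 mm coarse voxel
root.split(0)                  # first pass: split along X
root.children[1].split(1)      # second pass: split right half along Y
leaf = find_leaf(root, (4.0, 1.0, 1.5))
# leaf.bounds -> ((2.5, 5), (0, 2.5), (0, 3))
```

Because the coarse voxels are refined only a few times, the descent is short; the cost per boundary crossing is a handful of comparisons rather than one array-index computation.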

Statistics and Dose Evaluation

The efficiency (∊) of an MC algorithm is defined20 as ∊ = (σ²T)⁻¹, where σ² is the variance and T is the time required to obtain this variance. The efficiency expresses how fast an MC algorithm can reach a desired level of statistical accuracy. This metric was introduced to compare the performance of MC packages with different variance reduction techniques, for which the number of simulated histories alone does not reflect the true performance. For the calculation of σ², the average uncertainty proposed by Rogers and Mohan20 is generally employed:

σ² = (1/Nvox) Σ (ΔDi/Di)²,

where Di is the dose in voxel i, ΔDi is its statistical uncertainty, and the average is taken over the Nvox voxels with a dose greater than 50% of the maximum. Since σ² is proportional to 1/N, where N is the total number of simulated histories, and T is proportional to N, the efficiency is independent of N but depends on the simulation time per history.
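As an illustration, this efficiency metric can be computed from per-voxel doses and statistical uncertainties as follows. The numbers are toy values; the 50% of maximum dose cutoff follows the definition in the text.

```python
import numpy as np

def mc_efficiency(dose, sigma_dose, time_s):
    """Average relative variance over voxels with D > 50% of D_max,
    and the efficiency eps = 1 / (sigma^2 * T)."""
    mask = dose > 0.5 * dose.max()
    rel_var = ((sigma_dose[mask] / dose[mask]) ** 2).mean()
    return rel_var, 1.0 / (rel_var * time_s)

dose = np.array([10.0, 8.0, 6.0, 2.0])    # last voxel falls below 50% cut
sigma = np.array([0.10, 0.08, 0.09, 0.5])
var, eff = mc_efficiency(dose, sigma, time_s=120.0)
```

Doubling the number of histories halves the relative variance but doubles the run time, so ∊ stays constant, which is why the metric isolates algorithmic speedups such as the voxel geometry change studied here.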
For dosimetric evaluation, the γ index21 is widely used to compare dose distributions. The γ index is defined for each reference dose point rr as

γ(rr) = min{Γ(re, rr)} over all re, with Γ(re, rr) = √(|re − rr|²/Δd² + [De(re) − Dr(rr)]²/ΔD²),

where re refers to a dose point in the evaluation distribution and Γ is simply the Euclidean distance in the renormalized dose-distance space. In our work, we used Δd = 3 mm as the acceptance criterion for the distance to agreement and ΔD = 3% as the acceptance criterion for the dose difference.22 Since all dose distributions are evaluated with the fine grid (1.25 mm) dose distribution as the reference, the acceptance criterion Δd = 3 mm is reasonable. A comparison is considered acceptable when the average γ index satisfies γavg ≤ 1, and the passing rate is defined as the ratio of the number of voxels with γ < 1 to the total number of voxels. The dose distribution from the fine uniform mesh was used as the reference, and the evaluation distribution in the γ index calculations consists of the dose points (at the voxel centers) of the unstructured nonuniform mesh. All calculations were performed on a personal computer with a 3.3 GHz CPU and 3 GB of memory.
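A brute-force one-dimensional version of this γ evaluation might look as follows. This is a sketch for illustration only; clinical γ tools use optimized search and dose interpolation, and the test profile below is invented.

```python
import numpy as np

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """Brute-force 1D gamma: for each reference point, the minimum Gamma
    over all evaluation points, with Gamma the Euclidean distance in
    renormalized dose-distance space (`dd` is the dose criterion relative
    to the reference maximum; `dta` is the distance criterion in mm)."""
    d_max = ref_dose.max()
    gammas = []
    for r, dr in zip(ref_pos, ref_dose):
        dist2 = ((eval_pos - r) / dta) ** 2
        dose2 = ((eval_dose - dr) / (dd * d_max)) ** 2
        gammas.append(np.sqrt((dist2 + dose2).min()))
    return np.array(gammas)

# Reference: a decaying depth-dose-like profile; evaluation: +1% dose offset
ref_pos = np.arange(0.0, 10.0, 1.0)           # positions in mm
ref_dose = np.exp(-0.1 * ref_pos) * 100.0
gam = gamma_index(ref_pos, ref_dose, ref_pos, ref_dose * 1.01)
pass_rate = (gam < 1.0).mean()   # fraction of points passing 3%/3 mm
```

A uniform 1% dose offset stays well inside the 3% criterion, so every point passes; a 5% offset near the maximum would start to fail.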

Accuracy and Performance of the Algorithm Implementation

First, we illustrate the accuracy of the GMC code using the published International Conference on the Use of Computers in Radiation Therapy (ICCR) simulations as an example.20 The slab phantom measures 30.5 × 39.5 × 30 cm³. For the photon test case, the ICCR phantom consists of water from 0 to 3 cm, aluminum from 3 to 5 cm, lung (ρ = 0.26 g/cm³) from 5 to 12 cm, and water from 12 to 30 cm. For the electron test case, the phantom consists of water from 0 to 2 cm, aluminum from 2 to 3 cm, lung from 3 to 6 cm, and water from 6 to 30 cm. The beam size is 1.5 × 1.5 cm² at 100 cm source to surface distance. The statistical precision was set to 0.3%. The phantom was discretized into 5 × 5 mm² voxels in the X and Y directions and 2 mm in the Z direction.
To illustrate the potential gain of using nonuniform voxels, we performed a timing study on a simple setup: a single 20 MeV electron beam with a 10 × 10 cm² field size originating from a point source. The 30 × 30 × 30 cm³ water phantom was discretized with voxel sizes of 1, 1.5, 2.5, 3.5, and 5 mm. A total of 10⁶ particles were simulated. This study was used to demonstrate the dependence of calculation time on voxel size.
Tracking particles through a nonuniform mesh is more complicated than tracking through a uniform mesh. With a uniform mesh, the neighbor voxel is accessed directly through a simple array index computation. With a nonuniform mesh, the tree-like data structure must be traversed to obtain neighbor and child information, which involves extra steps and computations. To characterize the overhead of the nonuniform mesh representation, several additional runs were performed using the same beam setup as in the timing study, except that the water phantom was represented as nonuniform voxels organized in the tree structure.

Clinical Cases

We use 2 clinical intensity-modulated radiation therapy (IMRT) cases (lung and head and neck [H&N]) to demonstrate that the dose distribution calculated with the MRS approach maintains accuracy similar to that calculated on the fine uniform mesh while the computational time is reduced. The plans were created with the Pinnacle TPS, and the DICOM images, treatment field settings, and fluence maps were exported from the TPS. Seven fields at different gantry angles were used for each plan. The cutoff energies for absorption were 50 keV for photons and 200 keV for electrons.
We used the Metropolis sampling algorithm23 to sample the source particles from the field fluence maps. The norm error |pf − pf*|₂ was recorded in the simulations, where pf and pf* are the probability vectors for the simulated fluence map and the TPS fluence map, respectively. A total of 10⁹ particles were simulated to ensure that the simulated fluence map faithfully matched the TPS fluence map, with a norm error of less than 10⁻³. A realistic clinical source model is under development; for this work, a simple point source was assumed, that is, the multi-leaf collimator and the detailed source model were not considered. Therefore, the calculated dose is not comparable with the TPS dose.
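As an illustration of this sampling strategy, a random-walk Metropolis sampler over a small synthetic fluence map, with the norm error computed against the target distribution, might look like this. The Gaussian map, the proposal, and all names are illustrative; this is not the clinical implementation, and far fewer samples are drawn than the 10⁹ used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def metropolis_fluence(fluence, n_samples, burn_in=1000):
    """Random-walk Metropolis over the pixels of a 2D fluence map:
    propose a neighboring pixel (periodic edges keep the proposal
    symmetric) and accept with probability min(1, f_new / f_old)."""
    ny, nx = fluence.shape
    pos = (ny // 2, nx // 2)
    counts = np.zeros_like(fluence, dtype=float)
    for i in range(n_samples + burn_in):
        step = rng.integers(-1, 2, size=2)          # move -1/0/+1 per axis
        cand = ((pos[0] + step[0]) % ny, (pos[1] + step[1]) % nx)
        if fluence[cand] >= fluence[pos] or rng.random() < fluence[cand] / fluence[pos]:
            pos = cand
        if i >= burn_in:
            counts[pos] += 1
    return counts / n_samples

# Hypothetical smooth fluence map (not a TPS export)
y, x = np.mgrid[0:20, 0:20]
fluence = np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 50.0) + 0.01
p_star = fluence / fluence.sum()                 # target probabilities pf*
p_sim = metropolis_fluence(fluence, n_samples=200_000)
norm_error = np.sqrt(((p_sim - p_star) ** 2).sum())   # |pf - pf*|_2
```

The norm error shrinks as the chain lengthens, which is the convergence check the text describes for matching the TPS fluence map.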


Results

Accuracy and Performance of the Algorithm Implementation

The discrepancy of the percentage depth dose between GMC and the ICCR benchmarks for both photon and electron test cases is shown in Figure 3. We can see that the discrepancy is generally less than 1%, with a maximum of 3% at the interfaces.
Figure 3. Percentage depth dose differences for the International Conference on the Use of Computers in Radiation Therapy (ICCR) photon and electron test cases. Solid lines indicate photon test case, and dotted lines indicate electron test case.
Figure 4 shows the dependence of calculation time on voxel size. The calculation time scales linearly with the inverse of the voxel size Δx, consistent with the findings of Kawrakow.6 Kawrakow suggested that the CPU time T for an MC code using algorithms that do not truncate steps at boundaries is

T = N (T0 + α/Δx),

where N is the number of particle histories and T0 is the average time per history spent on the simulation of geometry-independent quantities. The term α/Δx represents the CPU time per history needed for voxel-to-voxel boundary crossing and energy loss deposition. We found that T0 = 37 and α = 47 (unit: μs/history) fit the line very well in our simulations.
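The fit of T0 and α can be reproduced with a simple least-squares regression of the per-history time against 1/Δx. The timing data below are synthetic, generated from the fitted values quoted above, purely to show the regression mechanics.

```python
import numpy as np

# Voxel sizes from the timing study (mm) and synthetic per-history times
# generated from the fitted model t = T0 + alpha/dx (T0 = 37, alpha = 47)
dx = np.array([1.0, 1.5, 2.5, 3.5, 5.0])
t = 37.0 + 47.0 / dx          # microseconds per history (illustrative)

# Least-squares fit of t against 1/dx recovers T0 (intercept), alpha (slope)
A = np.vstack([np.ones_like(dx), 1.0 / dx]).T
(T0, alpha), *_ = np.linalg.lstsq(A, t, rcond=None)
```

With real timing measurements the fit residuals indicate how well the linear boundary-crossing model holds across voxel sizes.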
Figure 4. Comparison of calculation time per history versus the reciprocal of the voxel size using uniform and nonuniform mesh.

Clinical Cases

The planning CT volume for the study cases was discretized into a high-resolution uniform mesh with a voxel size of 1.25 × 1.25 × 3 mm³ (the slice thickness was 3 mm) and into several nonuniform voxel meshes. The dose distribution calculated on the fine uniform mesh was used as the reference for comparison and for the γ index evaluations. For the nonuniform meshes, the initial voxel size was set to 5 × 5 × 3 mm³ and the minimum allowed voxel size to 1 × 1 × 3 mm³, so that no split occurred in the Z direction. Several nonuniform meshes were created by applying different split density gradient thresholds (DGTs) to the initial 5 × 5 × 3 mm³ voxels. Figure 5A and B shows 1 slice of the generated nonuniform voxels for the H&N case and the lung case, respectively. Note that the nonuniform mesh depicts the patient structure, showing the boundary of the patient and the interfaces of soft tissue and bone. A cuboid region covering the planning target volume (PTV) was defined as an input for calculating the maximum uncertainty of all voxels in the region. With a DGT of 0.1, the PTV volume for the H&N case was discretized into 1020 voxels of 5 mm, 20 voxels of 2.5 mm, and 5 voxels of 1.25 mm in the cuboid region. For the lung case, the PTV volume was discretized into 17 520 voxels of 5 mm and 237 voxels of 2.5 mm; no 1.25 mm voxels were generated, since the lung PTV is quite homogeneous and no smaller voxels are necessary.
Figure 5. Illustration of the nonuniform adaptive meshes generated from the computer tomography (CT) volumes for (A) the head and neck (H&N) patient case and (B) the lung patient case. Note that more small voxels are populated at great density gradient regions.
The dose distributions for the H&N and lung IMRT plans are shown in Figure 6A and B, respectively. The isodose lines are overlaid for the calculations on the uniform mesh (yellow lines) and the nonuniform mesh (red lines). The number of generated nonuniform voxels, execution time per history, efficiency gain, and average γ index for various DGTs are presented in Table 1. As the DGT increases, the number of generated voxels decreases as expected; it is approximately an order of magnitude smaller than the number of uniform voxels. To evaluate the efficiency gain, we ran the simulations to a statistical uncertainty of 0.5%. With nonuniform voxels, 5 to 7 times fewer particles were needed to achieve this statistical uncertainty than with the uniform mesh. The dose distributions agree well between the uniform and nonuniform meshes (voxel-averaged γ < 0.5), and the passing rate with an acceptance criterion of 3%/3 mm is greater than 95% for all cases. As pointed out by Kawrakow,6 the goal of an MC simulation for a radiation treatment plan is to accurately and efficiently calculate a dose distribution to a prescribed statistical uncertainty. Using the nonuniform voxels, the number of particles needed to reach the prescribed statistical uncertainty in the region above 50% of the maximum dose was reduced by a factor of approximately 6 compared with the uniform voxels; thus, an efficiency improvement of about 6 was obtained for these test cases. The other quantity for measuring the speedup is the simulation time per history: the nonuniform voxel approach is about 30% to 40% faster than the uniform fine voxels (1.25 × 1.25 × 3 mm³).
Figure 6. Dose calculation results for (A) the 7-beam H&N IMRT plan and (B) the 7-beam lung IMRT plan. The yellow isodose lines were calculated on uniform meshes, while the red isodose lines were calculated on nonuniform adaptive meshes. IMRT indicates intensity-modulated radiation therapy; H&N, head and neck.
Table 1. Calculation Time and Efficiency Using Nonuniform Voxels for 2 Clinical IMRT Cases.a
| Case | Mesh | DGT | # Voxels | γavg | PR (%) | T (µs/history) | Eff |
|------|------|-----|----------|------|--------|----------------|-----|
| H&N  | A    |     | 2.03 × 10⁷ |    |        | 24.8 | 1.1 |
|      | B    | 0.1 | 1.87 × 10⁶ | 0.44 | 96.1 | 18.4 | 5.8 (5.3) |
|      |      | 0.3 | 1.62 × 10⁶ | 0.45 | 95.9 | 16.6 | 6.2 (5.6) |
|      |      | 0.5 | 1.53 × 10⁶ | 0.49 | 95.4 | 16.1 | 6.5 (5.8) |
|      |      | 0.7 | 1.48 × 10⁶ | 0.48 | 95.3 | 15.1 | 6.9 (6.2) |
|      |      | 1.0 | 1.44 × 10⁶ | 0.48 | 95.1 | 15.1 | 6.9 (6.2) |
| Lung | A    |     | 2.69 × 10⁷ |    |        | 29.2 | 0.6 |
|      | B    | 0.1 | 2.23 × 10⁶ | 0.45 | 99.9 | 21.5 | 3.1 (5.2) |
|      |      | 0.3 | 1.73 × 10⁶ | 0.45 | 99.9 | 18.7 | 3.5 (5.8) |
|      |      | 0.5 | 1.60 × 10⁶ | 0.33 | 100  | 17.3 | 3.9 (6.4) |
|      |      | 0.7 | 1.54 × 10⁶ | 0.45 | 99.8 | 16.9 | 4.0 (6.6) |
|      |      | 1.0 | 1.48 × 10⁶ | 0.46 | 99.8 | 16.7 | 4.1 (6.9) |
Abbreviations: DGT, density gradient threshold; Eff, efficiency; H&N, head and neck; IMRT, intensity-modulated radiation therapy; A, uniform mesh with a voxel size of 1.25 × 1.25 × 3 mm³; B, nonuniform mesh generated with different DGTs used for voxel splitting; γavg, the γ index averaged over all voxels with a dose greater than 50% of the maximum; PR, the passing rate, that is, the fraction of voxels with a γ index less than 1; T (µs/history), the calculation time in microseconds per history.
aThe number in the parentheses is the efficiency gain comparing with the calculation on the uniform mesh.
The dose volume histograms (DVHs) for the 2 cases are shown in Figure 7A and B. For the H&N case, more small voxels were generated because of the interfaces between air cavities and bone. The DVHs for the PTV, brain stem, spinal cord, and normal brain calculated on the nonuniform grid are almost identical to those calculated on the uniform voxels (1.25 and 5 mm). However, for small organs such as the optic chiasm and optic nerves, the DVHs differ slightly, and those calculated on the nonuniform grid are closer to the DVHs calculated on the fine grid (1.25 mm), owing to the small voxels generated there. For the lung case, the DVHs for all organs are very similar.
Figure 7. Dose volume histograms (DVHs) for (A) the head and neck (H&N) and (B) lung intensity-modulated radiation therapy (IMRT) cases. Solid lines are the results calculated on the fine grid (1.25 mm). Heavy dashed lines are the result calculated on the nonuniform voxels and light dashed lines were calculated on the coarse grid (5 mm).

Discussion and Conclusion

One difficulty in trying to increase the speed of MC calculations is the significant portion of simulation time spent tracking charged particles across heterogeneity boundaries. For accurate dose calculations, boundary crossing must be computed for electron transport, since electrons move in small steps and deposit energy via the continuous slowing down approximation, and the cross-section data must be updated when the material type changes. The distance to the boundary has to be calculated at each step of the electron trajectory to determine the maximum permissible energy loss for that step.24,25 There are 2 kinds of approaches to handling boundary crossing. One approach is to force all particles to stop at the boundary, where the direction of the particles is changed (GEANT4, EGS4/EGSnrc, and PENELOPE). As an example, 2 boundary crossing algorithms are implemented in EGSnrc, namely EXACT and PRESTA-I. The deflection occurs within a user-specified distance (the skin depth for the boundary crossing algorithm), and then either single elastic scattering (EXACT) or a multiple scattering event (PRESTA-I) is applied to calculate the changed direction. Sarrut and Guigues13 demonstrated that the computation time with GEANT4 can be decreased by a factor of 15 using their segmented volume approach, which merges voxels of the same material into large volume segments so that the number of boundary crossings is reduced.
The other approach to handling boundary crossings does not truncate steps at boundaries. This method is usually adopted in fast MC codes (eg, VMC++ and DPM). The CPU time for this type of MC simulation consists of 2 components: the time spent calculating geometry-independent quantities and the time spent on voxel-to-voxel boundary crossing.6 The speed of an MC simulation reaches its theoretical maximum when the entire time is spent on the geometry-independent part. For example, VMC++,6 using the simultaneous transport of particle sets technique, reaches 90% of the maximum possible speed for 1 mm voxels. The DPM19 MC code spends 41% of its time on voxel-to-voxel boundary crossing and energy loss deposition, and PMC,26 using a precalculated data technique, spends 56% of its time in ray tracing. Although a patient's CT is represented by millions of small voxels, many large homogeneous regions of the same tissue type exist, within which boundary crossing considerations are unnecessary. A natural approach to reducing the number of boundary crossings is to create nonuniform voxels of various sizes, with larger voxels in smooth density regions and smaller voxels in regions with larger density gradients. The rationale is that a high dose gradient is often associated with a high density gradient, which is commonly used to differentiate the material. Note that in a fast dose algorithm that does not truncate steps at boundaries, the operations to calculate the distance to the voxel boundary are still required; therefore, as the number of boundaries encountered is reduced, the computing time spent on boundary crossings is also reduced.
In our approach, the particles traverse, and deposit energy on, a unified nonuniform grid. One may suggest using a fine grid for particle transport and scoring and, after the simulation, rebinning the dose scoring grid into a uniform or nonuniform coarse grid to obtain better statistical precision. However, the geometrical transport per history on a fine grid is still slower than transport on a nonuniform grid.
Although the overhead of handling nonuniform voxels was about 10% more than that of handling uniform voxels, the reduction in the number of boundary crossing calculations still makes the simulation per history faster than on the fine grid (1.25 × 1.25 × 3 mm³). In addition, as a result of the nonuniform voxel decomposition, the average voxel is larger. Larger voxels help to reduce the statistical uncertainty, since more particles deposit energy in each voxel. For the target region, large voxels may be generated if the PTV is fairly homogeneous, and therefore fewer particles may be needed to reach the prescribed statistical uncertainty than with the fine grid, without compromising much accuracy for the surrounding organs, as demonstrated by the DVHs. For heterogeneous areas such as tissue-bone interfaces, small voxels are generated to maintain high spatial resolution where large dose gradients may occur. By using nonuniform voxels, with large voxels in smooth density regions and small voxels in heterogeneous regions, the number of voxels can be reduced by an order of magnitude, and efficiency is improved because of the resulting larger average voxel size. For dose algorithms such as collapsed-cone convolution and superposition convolution,27,28 a regular uniform mesh may be crucial, as these algorithms exploit the regularity of the mesh to increase calculation speed. For MC simulation, however, the regularity of the mesh is not important because of the randomness of the particle trajectories. In addition to common variance reduction techniques, geometrical division into nonuniform voxels can be a useful technique to further accelerate MC simulations.


Keywords: Monte Carlo, multi-resolution subdivision, adaptive mesh refinement

Abbreviations: MC, Monte Carlo; MRS, multi-resolution subdivision; AMR, adaptive mesh refinement; PC, personal computer; SSD, source to surface distance; IMRT, intensity-modulated radiation therapy; H&N, head and neck; DVH, dose volume histogram; TPS, treatment planning system; PTV, planning target volume; DGT, density gradient threshold; PDD, percentage depth dose; MLC, multi-leaf collimator; CSDA, continuous slowing down approximation; ICCR, International Conference on the Use of Computers in Radiation Therapy; CT, computerized tomography

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.


Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.


References

1. Agostinelli S. Monaco TPS, Elekta CMS, Inc.
2. Cygler JE, Daskalov GM, Chan GH, Ding GX. Evaluation of the first commercial Monte Carlo dose calculation engine for electron beam treatment planning. Med Phys. 2004;31 (1):142–153.
4. Chetty IJ, Curran B, Cygler JE, et al. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning. Med Phys. 2007;34 (12):4818–4853.
5. Kawrakow I, Fippel M. Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC. Phys Med Biol. 2000;45 (8):2163–2183.
6. Kawrakow I. VMC++, electron and photon Monte Carlo calculations optimized for radiation treatment planning. In: Kling A, Barao F, Nakagawa M, Tavora L, Vaz P, eds. Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Application: Proceedings for the Monte Carlo 2000 Meeting Lisbon. Berlin: Springer Berlin Heidelberg; 2000:229–236.
7. Hartmann Siantar CL, Walling RS, Daly TP, et al. Description and dosimetric verification of the PEREGRINE Monte Carlo dose calculation system for photon beams incident on a water phantom. Med Phys. 2001;28 (7):1322–1337.
8. Reynaert N, De Smedt B, Coghe M, et al. MCDE: a new Monte Carlo dose engine for IMRT treatment planning with an efficient scoring grid. Phys Med Biol. 2004;49 (14):N235–N241.
9. Afsharpour H, Landry G, D'Amours M, et al. ALGEBRA: ALgorithm for the heterogeneous dosimetry based on GEANT4 for BRAchytherapy. Phys Med Biol. 2012;57 (11):3273–3280. doi:10.1088/0031-9155/57/11/3273.
10. Hubert-Tremblay V, Archambault L, Tubic D, Roy R, Beaulieu L. Octree indexing of DICOM images for voxel number reduction and improvement of Monte Carlo simulation computing efficiency. Med Phys. 2006;33 (8):2819–2831.
11. Badal A, Kyprianou I, Banh DP, Badano A, Sempau J. penMesh—Monte Carlo radiation transport simulation in a triangle mesh geometry. IEEE Trans Med Imaging. 2009;28(12):1894–1901. doi:10.1109/TMI.2009.2021615.
12. Agostinelli S, Allison J, Amako K, et al. Geant4—simulation toolkit. Nucl Instrum Methods A. 2003;506:250–303. doi:10.1016/S0168-9002(03)01368-8.
13. Sarrut D, Guigues L. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations. Med Phys. 2008;35 (4):1452–1463. doi:10.1118/1.2884854.
14. Babcock K, Cranmer-Sargison G, Sidhu N. An enhanced HOWFARLESS option for DOSXYZnrc simulations of slab geometries. Med Phys. 2008;35 (9):4106–4111. doi:10.1118/1.2968094.
15. Babcock K, Cranmer-Sargison G, Sidhu N. Increasing the speed of DOSXYZnrc Monte Carlo simulations through the introduction of nonvoxelated geometries. Med Phys. 2008;35 (2):633–644.
16. Walters BRB, Rogers DWO. DOSXYZnrc Users Manual. NRC Report PIRS 794. Ionizing Radiation Standards. Ottawa, Ontario: NRC; 2005.
17. Neuenschwander H, Born EJ. A macro Monte Carlo method for electron beam dose calculations. Phys Med Biol. 1992;37 (1):107–125. doi:10.1088/0031-9155/37/1/007.
18. Berger MJ, Colella P. Local adaptive mesh refinement for shock hydrodynamics. J Comput Phys (Elsevier). 1989;82 (1):64–84. doi:10.1016/0021-9991(89)90035-1.
19. Sempau J, Wilderman SJ, Bielajew AF. DPM, a fast, accurate Monte Carlo code optimized for photon and electron radiotherapy treatment planning dose calculations. Phys Med Biol. 2000;45 (8):2263–2291. doi:10.1088/0031-9155/45/8/315.
20. Rogers DWO, Mohan R. In: Bortfeld T, Schlegel W, eds. Proceedings of the 13th ICCR The Use of Computers in Radiation Therapy: XIIIth International Conference. Heidelberg: Springer-Verlag; 2000:120–122.
21. Low DA, Harms WB, Mutic S, Purdy JA. A technique for the quantitative evaluation of dose distributions. Med Phys. 1998;25 (5):656–661. doi:10.1118/1.598248.
22. Yuan J, Chen W. A gamma dose distribution evaluation technique using the k-d tree for nearest neighbor searching. Med Phys. 2010;37 (9):4868–4873. doi:10.1118/1.3480964.
23. Metropolis N, Rosenbluth A, Rosenbluth M, Teller A, Teller E. Equation of state calculations by fast computing machines. J Chem Phys. 1953;21:1087–1092. doi:10.1063/1.1699114.
24. Bielajew AF, Rogers DWO. PRESTA—the parameter reduced electron-step transport algorithm for electron Monte-Carlo transport. Nucl Instr Methods Phys Res B. 1987;18:165–181. doi:10.1016/S0168-583X(86)80027.
25. Bielajew AF, Rogers DWO. Electron step-size artefacts and PRESTA. In: Jenkins TM, Nelson WR, Rindi A, Nahum AE, Rogers DWO, eds. Monte Carlo Transport of Electrons and Photons. New York: Plenum; 1988:115–137.
26. Jabbari K, Keall P, Seuntjens J. Considerations and limitations of fast Monte Carlo electron transport in radiation therapy based on precalculated data. Med Phys. 2009;36 (2):530–540.
27. Ahnesjo A. Collapsed cone convolution of radiant energy for photon dose calculation in heterogenous media. Med Phys. 1989;16 (4):577–592. doi:10.1118/1.596360.
28. Chen Q, Chen M, Lu W. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU. Med Phys. 2011;38 (3):1150–1161. doi:10.1118/1.3551996.


Keywords: Monte Carlo, nonuniform voxel, radiation therapy

Rights and permissions

© The Author(s) 2014.


Manuscript received: January 14, 2014
Revision received: May 15, 2014
Manuscript accepted: May 20, 2014
Published online: September 14, 2014
Issue published: August 2015
PubMed: 25223321



Jiankui Yuan, PhD
University Hospitals, Case Medical Center, Cleveland, OH, USA
Quan Chen, PhD
Department of Radiation Oncology, University of Virginia, Charlottesville, VA, USA
James Brindle, PhD
University Hospitals, Case Medical Center, Cleveland, OH, USA
Yiran Zheng, PhD
University Hospitals, Case Medical Center, Cleveland, OH, USA
Simon Lo, MD
University Hospitals, Case Medical Center, Cleveland, OH, USA
Jason Sohn, PhD
University Hospitals, Case Medical Center, Cleveland, OH, USA
Barry Wessels, PhD
University Hospitals, Case Medical Center, Cleveland, OH, USA


Jiankui Yuan, PhD, University Hospitals, Case Medical Center, Cleveland, OH 44106, USA. Email: [email protected]

This article was published in Technology in Cancer Research & Treatment.



