1 Introduction

Patient-specific computational hemodynamics appeared in the late 1990s with the work of a few researchers bridging image processing and computational fluid dynamics (CFD) [17, 19, 22, 30] and it is today pursued by an increasing number of groups around the world. The spread is motivated both by the growing interest in the study of hemodynamic variables and how they relate to disease processes, and by the improvements in the available computing resources, which now allow simulations to be run anywhere from personal computers to parallel clusters.

Although the amount of work in this field has been growing steadily over the last few years, studies in which computational hemodynamic simulations are performed on large patient populations are still very limited. The ability to perform large-scale studies in a controlled way is a key requirement for the future of computational hemodynamics. From an evidence-based medicine perspective, it is only through the evaluation of the ability of some hemodynamic variables to predict disease or influence treatment in large populations that those hemodynamic variables, or some surrogate thereof, will become players in clinical research and clinical practice [18]. Without this necessary step, the usefulness of computational hemodynamics, although sought in principle, will remain unproven.

The likely cause of the paucity of studies on large patient populations is that the process from image acquisition to simulation results is still based on several operator-dependent steps, involving image segmentation, model preparation, mesh generation and simulation. The lack of integrated workflows, with a few exceptions [10], makes the use of image-based computational hemodynamics for large-scale studies non-trivial and encumbered by time-consuming operator-dependent tasks, which constitute a potential source of error whose influence is difficult to control and quantify. In this context, the availability of shared tools is in our opinion of primary importance. Open-source software has proven itself over the last few years to be a powerful resource in several fields of computing. In the scientific world, open-source efforts have often led to widespread adoption, standardization of methods, continuous quality checking, open code review, reproducibility of results and benchmarking. For this reason, we believe that the introduction of shared tools in the field of patient-specific computational hemodynamics will lead to better reproducibility and comparability among studies, ultimately increasing the chance of generating results that contribute to clinical evidence.

To take a concrete step in the direction outlined above, we here present a framework aimed at providing a working solution for high-throughput computational hemodynamics. The techniques presented here are only a part of a larger effort that our groups have been undertaking in the past years towards the construction of shared and robust open source tools for geometric and hemodynamic vascular modeling (the Vascular Modeling Toolkit [2]). In this work, we will focus on the process that leads from imaging datasets to simulations. We will first present the general approach and the software tools on which our framework is based, and we will then proceed to describe the key components of the framework in some detail.

2 General approach and tools

Image-based modeling for computational hemodynamics generally requires three fundamental steps, namely image segmentation, model pre-processing and mesh generation. Our effort consists in the development of an integrated framework that allows these operations to be performed in a streamlined fashion. Design decisions were driven by robustness and repeatability on the one hand, and efficiency on the other. In more detail, we aimed to satisfy the following criteria:

  • minimizing operator intensive tasks, while allowing the framework to retain the flexibility required for the application in diverse contexts;

  • favoring high-level decisions, e.g., the specification of vascular segments through selection of fiducials at their endpoints followed by automated segmentation, over operator dependence, e.g., manual outlining or manual threshold-based segmentation criteria;

  • reducing the number of free parameters to be specified at each step, so that the operator is faced with a few decisions that are easy to document.

From a software standpoint, the techniques presented in this paper are integrated within the Vascular Modeling Toolkit. The toolkit has been developed using a modular approach, by building specialized components capable of interacting with each other to create complex processing pipelines. From an implementation standpoint, we followed a two-layer approach, in which computation-intensive algorithms are implemented in the C++ programming language, and algorithms are dynamically assembled into processing pipelines using the Python programming language. Both layers are implemented using an object-oriented approach, to the advantage of modularity and maintainability.

The Vascular Modeling Toolkit relies on two major open-source efforts in the fields of scientific visualization and medical image analysis. The Visualization Toolkit (VTK) [23] is an object-oriented toolkit for the analysis and visualization of 3D data implemented in C++, whose classes are also accessible from three higher-level languages, namely Tcl, Python and Java. VTK provides data structures for images (vtkImageData), polygonal surfaces (vtkPolyData) and volumetric meshes (vtkUnstructuredGrid), among several others. From an algorithmic standpoint, VTK provides basic image processing tools, several algorithms for surface processing, such as smoothing, decimation, clipping and connectivity filters, algorithms acting on volumetric meshes, as well as advanced visualization resources. The Insight Toolkit (ITK) [15] is a C++ toolkit for image processing, segmentation and registration, although it is growing beyond these boundaries. It is developed using a generic programming paradigm, with heavy use of C++ template constructs, which allows high modularity and code reuse with minimal impact on performance. Thanks to this approach, most algorithms work unmodified for any spatial dimension and for multiple data types. Over the last few years, ITK has become a standard resource in medical image analysis. State-of-the-art algorithms are contributed by the community and follow strict quality-checking rules based on open code review and continuous testing.

The approach followed in the design of the Vascular Modeling Toolkit is to adopt the data structures provided by VTK, and to implement C++ algorithms in classes derived from the base VTK algorithm classes. This way, new algorithms can coexist with native VTK algorithms in a processing pipeline. Using the automated wrapping tools available in VTK, all algorithms are made available to the high-level layer implemented in the Python programming language. The same is true for ITK processing pipelines, which are directly integrated within VTK-derived classes and made available to the Python layer. The Python layer is in turn composed of classes that also act as individual command line scripts. These scripts, each implementing a specific functionality, can be piped at the command line to form complete processing pipelines. This modular architecture allows users and developers to take advantage of the Vascular Modeling Toolkit at several levels, from C++ algorithms to Python code to command line pipelines, and addresses the challenges posed by image-based modeling, which often requires operations to be assembled in a custom fashion depending on the imaging data and modeling problem at hand.
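As an illustration of the Python layer, the short sketch below assembles a simple pipeline programmatically (read an image, extract an isosurface, write the result). The script and attribute names follow the vmtkscripts module as distributed with the Vascular Modeling Toolkit, but should be treated as assumptions here, since the exact interface may differ between releases; file names are placeholders.

    # Illustrative pipeline assembled from vmtkscripts classes (names assumed).
    from vmtk import vmtkscripts

    reader = vmtkscripts.vmtkImageReader()
    reader.InputFileName = 'image.vti'          # placeholder input volume
    reader.Execute()

    marching_cubes = vmtkscripts.vmtkMarchingCubes()
    marching_cubes.Image = reader.Image         # pass the image down the pipeline
    marching_cubes.Level = 0.0                  # extract the zero level set
    marching_cubes.Execute()

    writer = vmtkscripts.vmtkSurfaceWriter()
    writer.Surface = marching_cubes.Surface
    writer.OutputFileName = 'surface.vtp'       # placeholder output surface
    writer.Execute()

The same functionality is exposed at the command line, where the individual scripts can be piped into one another as described above.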

In the following sections, we will explore in detail our approach to image-based computational hemodynamics. Unless stated otherwise, the techniques have been implemented as components of the Vascular Modeling Toolkit.

3 Image segmentation

Magnetic resonance (MR), three dimensional ultrasound (3D-US), computed tomography (CT) and rotational angiography (RA) scanners today available in clinical settings are capable of providing information about the anatomy of blood vessels with sub-millimetric resolutions. Such information is represented as a 3D array of grayscale intensities, which we will refer to as \(I({\mathbf x}),\quad {\mathbf x}\in{\mathbb {R}}^{3}\) being the spatial coordinate, and it is usually provided by clinical scanners in DICOM format, which we directly use for importing data. Following this phase, image segmentation is performed, in order to extract the geometry of the vascular segments of interest which will later define the computational domain.

Image segmentation is a key aspect of computational hemodynamics, as it has been suggested that local geometry has a key influence on hemodynamic variables [31]. Rather than aiming at the automatic identification of entire vascular networks contained in the image [13], we leave the identification of the individual vascular segments of interest for a simulation to the operator, through the selection of a few seed points on the image, and we then concentrate on the objective, operator-independent determination of the location of the lumen boundary. This is justified by the fact that, in a typical computational hemodynamics problem, the need is to simulate flow in a subset of the vessels contained in the image, whose boundary has to be located in a reproducible way. In fact, more automated segmentation methods must later rely on model editing to prepare geometries for mesh generation [10, 13]. Our solution is to concentrate the operator's effort on the interpretation of the image content, and to automate all downstream operations as much as possible.

The segmentation approach adopted in our framework is based on implicit deformable models [9, 24], which have the desirable property of being flexible with respect to the topology of the object to be segmented. The following main components have to be defined in order to specify the segmentation strategy:

  1. the partial differential equation (PDE) describing the evolution of the implicit deformable model;

  2. the initialization strategy, which defines the initial conditions for the PDE;

  3. any regularization strategies to adopt in the case of noisy images;

  4. the choice of the feature image, i.e., the input to the forcing terms of the PDE.

A description of how each component has been implemented in our framework follows.

3.1 Implicit deformable models

We refer to implicit deformable models as deformable surfaces \(S(t):{{\mathbb{R}}}^{2}\times{{\mathbb{R}}}^{+}\to{{\mathbb{R}}}^{3}\) described by means of their 3D embedding \(\phi({\mathbf x},t):{{\mathbb{R}}}^{3}\times{{\mathbb{R}}}^{+}\to{{\mathbb{R}}},\) a scalar function defined such that S(t) is its zero-level isosurface, i.e., S(t) = {x|ϕ(x, t) = 0}. The technique used to track the temporal evolution of the embedding under image- and shape-based forces is known as the level set method. Following this method, the evolution of ϕ(x, t) can be described by a PDE of the kind

$$ \phi_{t}=-w_{1}G({\mathbf x})|\nabla\phi|+w_{2}2H({\mathbf x})|\nabla\phi|-w_{3}\nabla P({\mathbf x})\cdot\nabla\phi $$
(1)

where the first term on the right-hand side represents surface inflation with a position-dependent speed given by G(x), which can depend on image features or on shape constraints, the second term represents a smoothness constraint on the surface, \(H({\mathbf x})=\nabla\cdot\frac{\nabla\phi} {\left|\nabla\phi\right|}\) being the mean curvature of the zero-level surface, and the last term represents advection of the surface by means of the vector field ∇P(x). The weights \(w_{1}\), \(w_{2}\) and \(w_{3}\) regulate the influence of each term on surface evolution [24]. Equation 1 is solved using finite differences on the image grid, as briefly described in Sect. 3.5.

Choosing \(P({\mathbf x})=\left|\nabla I\right|\) in Eq. 1 results in the advection of the surface towards the ridges of the image gradient magnitude, as shown in Fig. 1. This choice relies on the underlying assumption that the interface between two tissue types corresponds to the location of maximum change in image intensity. This is a reasonable assumption for acquisition modalities such as CTA, RA, contrast-enhanced and steady-state MRA (e.g., TrueFISP), in which the intensity of blood is given by the presence of a contrast agent or by the T2/T1 weighting of blood. On the contrary, such an advection field may not yield satisfactory results for images acquired using time-of-flight (TOF) or phase-contrast (PC) MRA sequences, in which the signal from blood is linked to its velocity. In this latter case, the decrease in velocity in the vicinity of the vessel wall may lead to a decrease in image intensity which is not linked to the presence of an interface between blood and wall at that location. This effect affects PC-MRA images more severely than TOF-MRA images, to the point that dedicated advection fields should be developed for the former, although this is not the subject of the present work.

Fig. 1 Segmentation of an abdominal aorta MRA dataset. Top row, left to right: interactive initialization of a branch through the specification of two seed points, one within the suprarenal aorta and the other within the right renal artery; complete initialization; result of segmentation after level set evolution. Bottom row, left to right: maximum intensity projection of the dataset; a section of the final level set function ϕ with its zero level set; final model superimposed on a section of |∇I| (darker intensity associated with higher values)

Relying on spatial variation in image intensity, rather than on absolute intensity as in manual or automated thresholding approaches, confers greater robustness to the segmentation method, since the location of the lumen boundary is made independent of changes in the intensity of the lumen due to artifacts, contrast bolus dynamics or partial volume effects. An additional disadvantage of intensity-based approaches is that, since the true lumen boundary corresponds to regions of steep changes in intensity, slight variations in the definition of a threshold lead to relatively large displacements of the location of the resulting lumen boundary. On the contrary, seeking the location corresponding to the ridges of the gradient magnitude of image intensity is an unambiguously defined criterion, although the problem is then shifted to capturing the ridges corresponding to the lumen boundary among all feature ridges contained in the image, which will be the subject of the next section.

In [5, 6], we proposed the use of Eq. 1 to segment vascular segments for computational hemodynamics. In particular, we described a three-stage procedure composed of centerline-based initialization, inflation (\(w_{1} = 1\), \(w_{2} = w_{3} = 0\)) towards the vessel edges and advection (\(w_{1} = 0\), \(w_{2} \geq 0\), \(w_{3} = 1\)) towards the gradient magnitude ridges. Although essentially robust, this approach suffered from operator dependence induced by the inflation step, which was meant to bring the surface from the centerline towards the boundary of the vessel, within the capture radius of the advection term. In this work, we overcome this problem by skipping the inflation step thanks to a better initialization strategy, with which we directly generate an initial surface located within the capture radius of the advection term, as shown in Fig. 1 and described in the next section. The first term in Eq. 1 is therefore unused in the present method, to the advantage of reproducibility.
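As an indication of how such an evolution can be set up in practice, the following sketch uses ITK's GeodesicActiveContourLevelSetImageFilter, a filter of the same family as the one the toolkit builds upon, whose propagation, curvature and advection scalings play the roles of w1, w2 and w3. Note that this stock filter derives its advection field internally from the feature image, so the sketch is an approximation of Eq. 1 rather than the toolkit's customized filter; file names, parameter values and the availability of the 3D instantiation in the Python wrapping are assumptions.

    # Hedged sketch of a level set evolution with separate propagation (w1),
    # curvature (w2) and advection (w3) weights, using ITK's stock filter.
    import itk

    ImageType = itk.Image[itk.F, 3]

    initial = itk.imread('initial_levelset.mha', itk.F)   # e.g., colliding fronts output
    feature = itk.imread('feature_image.mha', itk.F)      # feature image of Sect. 3.4

    level_sets = itk.GeodesicActiveContourLevelSetImageFilter[
        ImageType, ImageType, itk.F].New()
    level_sets.SetInput(initial)
    level_sets.SetFeatureImage(feature)
    level_sets.SetPropagationScaling(0.0)   # w1: inflation disabled, as in the text
    level_sets.SetCurvatureScaling(0.1)     # w2: mild regularization (illustrative value)
    level_sets.SetAdvectionScaling(1.0)     # w3: attraction towards gradient ridges
    level_sets.SetNumberOfIterations(1000)
    level_sets.SetMaximumRMSError(1e-5)
    level_sets.Update()

    itk.imwrite(level_sets.GetOutput(), 'segmented_levelset.mha')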

3.2 Colliding fronts initialization

The generation of an initial level set function \(\phi_{0} = \phi({\mathbf x}, t_{0})\) plays an important role in our framework. As stated above, our strategy is to let the operator interactively select the segments of interest and use deformable models to reproducibly identify the sub-voxel position of the lumen boundary. From this perspective, the goal of initialization is to produce an initial surface for the selected branches already within the capture radius of the advection term in Eq. 1, with minimal input and at interactive speeds. Avoiding the inflation step during the evolution of the level set equation, while retaining the ability to segment selected vascular segments, confers robustness and flexibility to the modeling chain.

We here present a novel technique for level set initialization that satisfies the above requirements, referred to as colliding fronts. The technique is based on the propagation of two independent wavefronts from two seeds, \(s_{1}\) and \(s_{2}\), interactively placed at the two ends of a vascular branch. The wavefronts travel with speed proportional to the local image intensity, following the so-called eikonal equation [25]

$$ |\nabla T|=I^{-1} $$
(2)

where T is the arrival time of the wavefront at each point of the domain and the right-hand side is the slowness of the wave at each point, as shown in Fig. 2. Equation 2 is customarily solved on the imaging grid by means of the fast marching method, an efficient method for the numerical approximation of the eikonal equation based on upwind finite differences [25]. Once the arrival times \(T_{1}\) and \(T_{2}\) have been computed from \(s_{1}\) and \(s_{2}\), the initial level set function can be defined as

$$ \phi_{0}=\nabla T_{1}\cdot\nabla T_{2} $$
(3)

This function is negative where the two fronts are directed towards each other, which identifies the initial level set as the region between the two seed points. Any side branches are automatically excluded from the initialization, since the two fronts enter side branches traveling in the same direction from the parent vessel, so that the function in Eq. 3 becomes positive, as shown in Fig. 2. In order to increase robustness, the initial volume is defined as the region in which \(\phi_{0}\) in Eq. 3 is negative and which is at the same time connected to the seeds, i.e., for which there is a connected path between \(s_{1}\) and \(s_{2}\) along which \(\phi_{0}\) is always negative.
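A minimal numerical sketch of the colliding fronts construction is given below, assuming a NumPy intensity array and the third-party scikit-fmm package as the fast marching solver; it is not the toolkit's implementation, and the seed handling and threshold logic are simplified.

    # Colliding fronts initialization (Eqs. 2-3), illustrative implementation.
    import numpy as np
    import skfmm  # fast marching solver for the eikonal equation

    def colliding_fronts(image, seed1, seed2, spacing=(1.0, 1.0, 1.0)):
        """Return phi0 = grad(T1) . grad(T2), negative between the two seeds."""
        speed = np.clip(image.astype(float), 1e-6, None)  # wave speed ~ intensity

        def travel_time(seed):
            phi = np.ones_like(speed)
            phi[seed] = -1.0                  # front starts at the seed voxel
            return skfmm.travel_time(phi, speed, dx=spacing)

        T1, T2 = travel_time(seed1), travel_time(seed2)
        g1 = np.gradient(T1, *spacing)
        g2 = np.gradient(T2, *spacing)
        return sum(a * b for a, b in zip(g1, g2))  # Eq. 3

In practice the negative region would additionally be restricted to the connected component containing the seeds, as discussed above.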

Fig. 2 Colliding fronts initialization along the abdominal aorta dataset. Top left: interactive selection of two seed points, one within the suprarenal aorta, one at the iliac bifurcation. Top right: contour lines of the solution \(T_{1}\) of the eikonal equation from the suprarenal seed, and vectors showing the solution gradient \(\nabla T_{1}\), color-coded according to the magnitude of the superior-inferior component (red for vectors pointing up, blue for vectors pointing down). Bottom left: initialized abdominal aorta, showing the selective initialization of the branch and the absence of side branches. Bottom right: close-up of the two vector fields \(\nabla T_{1}\) and \(\nabla T_{2}\), showing counter-directed vectors within the lumen and co-directed vectors at the lumen boundary and within the renal artery; the black line is the zero level set of \(\nabla T_{1} \cdot \nabla T_{2}\), i.e., the trace of the initialized surface

For high contrast between vessels and background, Eq. 3 already produces an initial surface located in close proximity to the boundary of the lumen. In fact, since the wavefronts propagate with speed proportional to image intensity, the lumen boundary is characterized by a steep decrease in speed, which makes both wavefronts align parallel to the direction of the boundary, thus leading \(\phi_{0}\) to be positive, as shown in Fig. 2. When the signal-to-background ratio is not high, the slow-down of the fronts at the lumen boundary might not be sufficient for them to assume a co-directed orientation. In this case, the selection of a threshold to prevent the wavefronts from propagating outside a prescribed intensity range is required. It has to be noted that, provided the threshold is specified in the range between the vessel intensity and the background, the final position of the surface after the level set evolution will not be affected by the choice of the threshold.

The use of wave propagation through the fast marching method opens up another, simpler possibility for interactive initialization, to be used in the case of objects of non-tubular shape or for initializing entire vascular networks without branch selection. The approach is to select a group of seeds and a group of targets on the image, and to solve Eq. 2 from the seed points. The initial level set function is then \(\phi_{0} = T - t_{\rm target}\), where T is the solution of the eikonal equation and \(t_{\rm target}\) is the lowest eikonal solution among the targets. In other words, \(\phi_{0}\) is the wavefront at the time when the first target has been reached. In particular for the initialization of large vascular segments, this approach necessitates the specification of a threshold to constrain the propagation of the wavefront.
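Under the same assumptions as the colliding fronts sketch above, this seeds-to-targets variant can be written as follows; seed and target voxel indices are placeholders supplied by the operator.

    # Fast marching initialization from seeds to targets: phi0 = T - t_target.
    import numpy as np
    import skfmm

    def fast_marching_initialization(image, seeds, targets, spacing=(1.0, 1.0, 1.0)):
        speed = np.clip(image.astype(float), 1e-6, None)
        phi = np.ones_like(speed)
        for s in seeds:
            phi[s] = -1.0                      # all seeds start the front at t = 0
        T = skfmm.travel_time(phi, speed, dx=spacing)
        t_target = min(T[t] for t in targets)  # arrival time at the first target reached
        return T - t_target                    # negative inside the region already reached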

3.3 Regularization strategies

Although for datasets with good signal-to-background and signal-to-noise ratio the above initialization strategy and sole level set advection (\(w_{1} = w_{2} = 0\), \(w_{3} = 1\)) are sufficient, for datasets affected by noise or artifacts the introduction of regularization in the segmentation process is required to produce a smooth segmentation. Clearly, regularization depends on the choice of at least one smoothing parameter, which potentially introduces operator-dependence.

In order to decrease the influence of noise in the segmentation process, the image can be smoothed using a Gaussian or other smoothing filter. Alternatively, derivatives in the computation of the feature image can be performed by convolution of the image with a Gaussian derivative kernel, in lieu of finite differences. This is equivalent to computing derivatives using finite differences on a subsampled version of the image after convolution with a Gaussian kernel. These two approaches can effectively suppress the influence of noise, but can also lead to corrupted segmentations, for instance by merging nearby vessels or erasing small vessels. This is due to the fact that a Gaussian filter suppresses image features below the scale corresponding to its standard deviation.
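As a concrete example of the second option, the gradient magnitude can be computed with Gaussian derivative kernels; the sketch below uses SciPy purely for illustration, with the smoothing scale expressed in physical units and converted per axis to honor anisotropic voxel spacing (an illustrative choice, not the toolkit's interface).

    # Regularized feature image: gradient magnitude via Gaussian derivatives.
    from scipy import ndimage

    def gaussian_feature_image(image, sigma_mm, spacing):
        sigma_vox = [sigma_mm / h for h in spacing]   # physical units -> voxels
        return ndimage.gaussian_gradient_magnitude(image.astype(float), sigma_vox)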

An alternative strategy for regularization is the use of the second term in Eq. 1 (\(w_{2} > 0\)), which penalizes high-curvature features in the resulting segmentation. This can be a very effective strategy when the advection term is strong, i.e., when the image contrast-to-background ratio is high. On the contrary, it can lead to severe shrinkage of the surface where the curvature term is stronger than the advection term. It is in fact known from differential geometry that any genus zero surface tends to a sphere if allowed to deform under mean curvature flow.

In order to obtain surface regularization while retaining vessel diameter, Lorigo et al. [21] have proposed to replace the mean curvature H appearing in the second term of Eq. 1 with the smallest principal curvature, computed as the smallest eigenvalue of the curvature tensor. In this case, regularization only results in the penalization of surface features with high minimal curvature. Considering that a vessel is a tubular object with low smallest principal curvature (zero for a straight vessel) and high largest principal curvature (the inverse of the radius of the vessel), the curvature term in this case acts by regularizing a vessel in the longitudinal direction, without shrinkage of the vessel diameter. Note that this method can fail to provide effective regularization or can generate unexpected artifacts in case of non-tubular shapes.

As a last regularization strategy, we briefly mention surface smoothing at a post-segmentation stage, e.g., through the use of a Taubin type non-shrinking smoothing filter [29] on the final triangulated surface. Since the filter is non-shrinking and preserves topology, it can be safely employed, although it can affect the shape of surfaces at a bifurcation apex or similar saddle-shaped features.

As usual, the perfect regularization strategy does not exist. The choice has to take into account the type of image, geometry and quantities of interest for the simulation. An important point to address in the execution of a large-scale study is the adoption of a consistent set of regularization parameters to use throughout the study, and the evaluation of sensitivity of results to the chosen parameters.

3.4 Feature images

As stated above, the computation of the feature image |∇I| to be used as the advection field is critical to the present approach. The advection term expresses a vector field pushing the deformable surface from the two sides of each feature ridge. The way the feature image is computed determines the location where the deformable surface converges. In the previous section, we have already mentioned the use of convolution with Gaussian derivatives as a way of computing regularized derivatives as compared to finite differences. We also pointed out how regularization can lead to the loss of small vessels. In the remainder of this section we will address the problem of segmenting vessels with a diameter of a few pixels.

When the size of a vessel is comparable to the width of the point-spread function of the acquisition system, the shape of the intensity profile tends to be similar to the point-spread function itself. This means that the intensity profile of a small vessel tends to lose the plateau within the lumen and to assume a hat-like shape, with a single maximum within the lumen. This effect leads to underestimation of the diameter of small vessels, as the gradient magnitude is non-zero within the lumen. As a further problem, the availability of only a few grid points across the lumen tends to degrade the quality of the approximation of partial derivatives computed using finite differences on the imaging grid. Since level sets are not topologically constrained, severe underestimation can lead to the disappearance of entire tracts of vasculature.

The usefulness of image interpolation in improving the results of segmentation in these cases is usually moderate, since, for low resolutions, the effects of the point spread function of the acquisition system cannot be recovered by pure interpolation. In general, an increase in sampling density can lead to a better approximation of partial derivatives and potentially avoid level set collapse in some cases, but it cannot compensate for the effect of the point spread function on the signal. Similarly, prescribing anisotropic resolutions during acquisition, aimed at achieving higher in-plane resolution at the expense of a lower resolution in the orthogonal direction, is seldom a solution in these cases. The effect of the point spread function in the coarse direction can severely distort the shape of a vessel, and the apparent gain in in-plane detail actually derives from partial-volume averaging in the out-of-plane direction, which can lead to the suppression of vessels smaller than the slice thickness and further corrupt the shape of the remaining vessels.

In order to cope with the effect of low resolution and avoid the collapse of vascular tracts, we here propose to approximate the gradient magnitude using an upwind finite difference scheme during the computation of the feature image [4]. Under the non-limiting assumption that vessels are brighter than the surrounding tissue, we compute the image gradient magnitude \(\left|\nabla I\right|\approx\Vert D_{x}I,D_{y}I,D_{z}I\Vert\) using the following approximation for the partial derivatives

$$ D_{x}I=\max\left(-\frac{I_{i+1}-I_{i}} {h_{x}},\frac{I_{i}-I_{i-1}}{h_{x}}\right) $$
(4)

where i is a voxel index in the x direction and h x is the voxel spacing in the x direction (similarly for y and z). More intuitively, Eq. 4 expresses the computation of derivatives from a voxel only towards neighboring voxels having a lower intensity (if available), with the effect of reducing underestimation and avoiding level set collapse in cases where the vessel is represented by less than a few pixels across, as is the case for the renal arteries in Fig. 1. On the other hand, when voxel spacing is small compared to vessel diameter, the upwind approximation is close to the one provided by centered finite differences.
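A minimal NumPy sketch of the upwind approximation of Eq. 4 is given below; the wrap-around boundary handling of np.roll is an illustrative simplification.

    # Upwind gradient magnitude (Eq. 4), assuming vessels brighter than background.
    import numpy as np

    def upwind_gradient_magnitude(image, spacing=(1.0, 1.0, 1.0)):
        grad_sq = np.zeros_like(image, dtype=float)
        for axis, h in enumerate(spacing):
            forward = (np.roll(image, -1, axis) - image) / h   # (I[i+1] - I[i]) / h
            backward = (image - np.roll(image, 1, axis)) / h   # (I[i] - I[i-1]) / h
            d = np.maximum(-forward, backward)                 # derivative towards darker neighbors
            grad_sq += d ** 2
        return np.sqrt(grad_sq)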

Although the above technique is effective in compensating for the underestimation obtained at low resolutions, the accuracy of a CFD analysis on a model obtained from a low resolution scan is a question that should be addressed on a problem-specific basis. Certainly, the lower the resolution, the coarser the level of detail that a CFD analysis is able to assess, but a more specific characterization of this relationship depends on the domain of interest and on the problem being addressed. For instance, a straight tube requires less geometric information than a complex saccular aneurysm for a CFD analysis to be considered accurate, and, similarly, accurate estimation of a pressure drop along a vascular segment requires less geometric detail than accurate estimation of a wall shear stress map. Validation studies on physical or virtual phantoms on one hand, and patient studies based on repeated and multi-modality scans on the other, have to be encouraged for specific flow domains and quantities of interest.

3.5 Implementation notes

In our framework, we adopted the level set implementation provided by the Insight Toolkit. In particular, we make use of the sparse field level set approach [32], by virtue of which only a minimal number of voxel layers around the zero level set is tracked during the evolution. This way, computing the solution on the whole imaging grid at each time step is avoided, and the computing time depends on the actual extent of the zero level set surface. The implementation, originally limited to isotropic voxel spacings (\(h_{x} = h_{y} = h_{z}\)), was extended by us to handle imaging data with anisotropic voxels, and the code was contributed back to the Insight Toolkit. Similarly, we contributed the implementation of the colliding fronts initialization algorithm.

Once segmentation has been performed, the zero level set surface can be extracted using the Marching Cubes algorithm, as provided by the Visualization Toolkit. In the following, we will assume that a triangulated surface of the segmented vascular network is available.

4 Centerline computation

In order to streamline the process from segmented surfaces to simulation, a robust geometric characterization methodology must be in place, so that operations can be performed robustly and with minimal or no user intervention. In our framework, geometric analysis is based on centerlines, which in turn rely on the concept of the medial axis and its discrete counterpart, the Voronoi diagram [11]. The medial axis is defined as the locus of centers of spheres maximally inscribed inside an object, a sphere being maximally inscribed if it is not contained in any other inscribed sphere. The envelope of all maximal inscribed spheres is the boundary surface of the object itself, therefore the (oriented) boundary surface and its medial axis are dual representations of the same shape [1]. In 3D, the medial axis is a non-manifold surface, i.e., not everywhere locally homeomorphic to a 2D disk. A possible approximation of the medial axis for a discrete surface is the so-called embedded Voronoi diagram, which is defined as the portion of the Voronoi diagram lying inside the surface. The Voronoi diagram is defined by the boundaries of the regions of \({{\mathbb{R}}}^{3}\) closest to each surface point. As such, each point on the embedded Voronoi diagram is associated with a sphere touching the surface at at least four points and not inscribed in any other sphere, the discrete equivalent of a maximal inscribed sphere. Like the medial axis, the embedded Voronoi diagram is a non-manifold surface.

We define centerlines as minimal action paths on top of the Voronoi diagram, where the action field is the inverse of the radius of the maximal inscribed spheres [6]. This method links two locations on the Voronoi diagram with a path that minimizes the line integral of the action along the path, which results in centerlines that maximize their minimal distance to the boundary surface. Operatively, centerlines are computed by first solving the eikonal equation from a seed point on the Voronoi diagram domain

$$ |\nabla T|=R({\mathbf u})^{-1} $$
(5)

where R(u) is the maximal inscribed sphere radius field defined on the Voronoi diagram, u is its parametric space, and T is the solution representing the arrival times of a wavefront traveling on the Voronoi diagram with speed equal to R(u), as shown in Fig. 3. Once T has been computed, a centerline is generated by following the path of steepest descent of T on the Voronoi diagram from each target point, that is, by solving the following trajectory equation

$$ \frac{\hbox{d}{\mathbf c}}{\hbox{d}s}=-\nabla T $$
(6)

where s is the centerline parametric coordinate, taken as its curvilinear abscissa. Being defined on the Voronoi diagram, each centerline point is associated with the corresponding maximal inscribed sphere radius. The envelope of the maximal inscribed spheres along the centerlines produces a canal surface, or tube, which is inscribed inside the vessel and tangent to it at discrete locations, as shown in Fig. 3. Since they are associated with the deeper Voronoi structures, centerlines defined in this way are not sensitive to small displacements of surface points, which are instead known to affect the peripheral portions of the Voronoi diagram. Furthermore, the centerline computation problem is well posed even in regions where the vascular domain does not resemble a cylinder, such as at bifurcations or inside aneurysms, which confers robustness to the algorithm.
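In the toolkit, this computation is exposed by the centerline extraction script; the sketch below shows a typical non-interactive invocation from Python, where the attribute names follow the vmtkscripts interface and should be treated as assumptions, and the seed coordinates are placeholders.

    # Hedged sketch of centerline computation with the toolkit's Python layer.
    from vmtk import vmtkscripts

    reader = vmtkscripts.vmtkSurfaceReader()
    reader.InputFileName = 'surface.vtp'            # placeholder segmented surface
    reader.Execute()

    centerlines = vmtkscripts.vmtkCenterlines()
    centerlines.Surface = reader.Surface
    centerlines.SeedSelectorName = 'pointlist'      # non-interactive seed selection
    centerlines.SourcePoints = [0.0, 0.0, 0.0]      # inlet seed (placeholder coordinates)
    centerlines.TargetPoints = [10.0, 0.0, 0.0,
                                10.0, 5.0, 0.0]     # outlet seeds (placeholder coordinates)
    centerlines.Execute()

    # each centerline point carries the corresponding maximal inscribed sphere
    # radius (array name assumed to be the toolkit's default)
    radii = centerlines.Centerlines.GetPointData().GetArray('MaximumInscribedSphereRadius')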

Fig. 3 Left: embedded Voronoi diagram for the abdominal aorta dataset, grayscale-coded with the solution T of the eikonal equation seeded at the top-most section; centerlines traced from the downstream sections are shown superimposed on the Voronoi diagram. Center: resulting centerlines. Right: envelope of the maximal inscribed spheres along the computed centerlines

5 Model pre-processing

The polygonal surface resulting from segmentation is in general not directly usable for generating a computational mesh. Typically, the inlet and outlet sections of the model are not flat and no boundary indicators are available for the prescription of boundary conditions. Furthermore, in case fully developed Dirichlet boundary conditions have to be imposed at inlet or outlet boundaries, circular sections need to be generated in order for Poiseuille or Womersley velocity profiles to be specified analytically. Last, it is desirable to impose boundary conditions a few diameters upstream or downstream of the region of interest, so that the flow can develop before it enters the computational domain of interest. These problems are overcome through the automated connection of cylindrical flow extensions to all inlet and outlet sections of the model. Such sections can be generated by interactively cutting the original surface or by using centerlines to define the cutting planes in a more automated fashion. In any case, the direction of the flow extensions must be robust with respect to the actual orientation of the cutting planes, and must rather reflect the orientation of the vessel as it approaches the boundary sections. For this purpose, we employ the orientation of the centerlines within the last diameter before the boundary sections to define the normal to the flow extensions.

The shape of the flow extensions must provide a smooth transition between the model boundary sections, in general non-circular, and circular sections of equivalent area, in order to avoid artifactual flow disturbances at the transition between the flow extensions and the flow domain. We achieve this by generating a template cylindrical flow extension and then warping its first half by means of a thin plate spline transformation, computed in such a way as to bring the circular section of the cylinder onto the model boundary section. The thin plate spline algorithm [8] aims at finding the deformation induced by the transformation between two sets of corresponding landmarks that minimizes bending energy. This is achieved through radial basis function interpolation with a kernel of the form \(R^{2}\log R\), corresponding to the minimization of the bending energy of a steel plate subject to boundary constraints [8]. In our framework, we use the implementation of the thin plate spline algorithm provided within the VTK library. Results of the automated application of flow extensions are shown in Figs. 4 and 5.
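As a sketch of the warping step, VTK's thin plate spline transform can be applied to a template extension given matched landmark sets on the cylinder rim and on the model boundary profile; the function below is illustrative and not the toolkit's flow-extension code, and the landmark construction is left to the caller.

    # Warp a template flow extension onto a model boundary section with a
    # thin plate spline transform (kernel R^2 log R, as in [8]).
    import vtk

    def warp_extension(extension_polydata, source_points, target_points):
        """source_points/target_points: vtkPoints with matching ordering."""
        tps = vtk.vtkThinPlateSplineTransform()
        tps.SetBasisToR2LogR()
        tps.SetSourceLandmarks(source_points)
        tps.SetTargetLandmarks(target_points)

        warp = vtk.vtkTransformPolyDataFilter()
        warp.SetInputData(extension_polydata)
        warp.SetTransform(tps)
        warp.Update()
        return warp.GetOutput()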

Fig. 4 Left: model after the automated addition of flow extensions, color-coded with the distance to centerlines. Center: result of surface remeshing using distance to centerlines as sizing function and \(R_{\rm el} = 0.4\). Right: cutout of the volume mesh, showing element size grading that closely follows surface element grading and the adaptive 2-element prismatic boundary layer

Fig. 5 Image-based model and mesh of a cerebral aneurysm located in the distal internal carotid artery, acquired from RA. Left: sizing function based on the distance to centerlines \(D_{c}\). Center: sizing function based on the centerline maximal inscribed sphere radius \(R_{c}\). Right: mesh resulting from the use of \(R_{c}\) as the sizing function, demonstrating the capability of the technique to handle non-tubular shapes

By adjusting the strategy with which landmarks are prescribed for the thin plate spline algorithm, this methodology could easily be extended to generate flow extensions with a curved axis, needed when a vessel presents a strong curvature near its end and the flow domain of interest is close to the boundary section. However, it is desirable that a sufficiently long segment of the upstream geometry be available for segmentation, in order to reproduce the flow patterns entering the domain of interest in a patient-specific way.

Fig. 6 Left: patient-specific model of a carotid bifurcation used as input for the RP system (see Appendix). Right: picture of the physical phantom

Fig. 7 Models obtained from CT, MR axial and MR coronal acquisitions of the physical RP phantom (see Appendix). No smoothing strategies have been adopted during segmentation. Signed distance to the STL model used as input for the RP system is displayed; positive values indicate that the segmented model is located outside the STL model

6 Mesh generation

6.1 Surface remeshing

Once the shape of the model has been finalized, a computational mesh must be generated for the numerical solution of the partial differential equations describing blood flow. Our approach for automated mesh generation is to generate a high quality triangulated surface mesh and to then fill the volume using a combination of tetrahedral and prismatic elements.

For the present work, we adopted a mesh improvement approach to surface remeshing, in which we assume that the input surface mesh is a good polygonal approximation of the continuous geometry. In fact, this may not be true for a surface generated from level sets in cases where geometric features are on the order of a voxel in size, since those features end up being linearly interpolated with large triangles. In this case, surface mesh density can first be increased using a smooth subdivision scheme. In particular, we employ Loop's subdivision scheme [20], provided by VTK, which yields a C2 continuous surface as mesh density tends to infinity, under assumptions of regularity of the starting mesh. Taking the input surface mesh as the reference geometry has the advantage of avoiding the conversion of the input triangulation to a CAD-like parametric representation, which could in principle introduce a regularization of the input geometry that is not directly controllable by the user.

Our algorithm for surface remeshing belongs to the class of explicit remeshing algorithms, in which an initial triangulation \({{\fancyscript{T}}}_{0}\) is progressively modified in order for the surface mesh to conform to imposed quality and sizing criteria [14, 28]. Mesh improvement is performed through three basic topological operations, namely edge collapse, edge split and edge flip, acting on surface triangles, and one geometric operation, point relocation, acting on point position. Target triangle area is locally defined as a sizing function A(u) on the nodes of \({{\fancyscript{T}}}_{0}\), where u is a point in the parameterization of \({{\fancyscript{T}}}_{0}\). The four atomic operations are iteratively applied to the mesh \({{\fancyscript{T}}}\) until the following criteria are met:

  • the Frobenius aspect ratio of each triangle, defined as \(f=\frac{e_{1}^{2}+e_{2}^{2}+e_{3}^{2}}{4\sqrt{3}A}\), where \(e_{i}\) is the length of triangle edge i and A is the triangle area, is as low as possible and lower than a threshold (by default set to 1.2; a quick numerical check of this quantity is given after this list);

  • local triangle area A(u) conforms to the sizing function specified on \({{\fancyscript{T}}}_{0}\);

  • mesh vertices have a valence (i.e., the number of vertices connected to a vertex through a triangle edge) as close as possible to 6.
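The short check below illustrates the first criterion: the Frobenius aspect ratio equals 1 for an equilateral triangle and grows without bound as the triangle degenerates. The helper is purely illustrative.

    # Frobenius aspect ratio f = (e1^2 + e2^2 + e3^2) / (4 sqrt(3) A).
    import math

    def frobenius_aspect_ratio(p0, p1, p2):
        e1, e2, e3 = math.dist(p0, p1), math.dist(p1, p2), math.dist(p2, p0)
        s = 0.5 * (e1 + e2 + e3)
        area = math.sqrt(max(s * (s - e1) * (s - e2) * (s - e3), 0.0))  # Heron's formula
        return (e1 ** 2 + e2 ** 2 + e3 ** 2) / (4.0 * math.sqrt(3.0) * area)

    # equilateral triangle -> 1.0; a thin sliver gives a much larger value
    print(frobenius_aspect_ratio((0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3.0) / 2.0)))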

Whenever a new point is inserted in \({{\fancyscript{T}}}\) or a point relocation is performed, the point is projected back onto \({{\fancyscript{T}}}_{0}\) by means of a fast octree locator. The same locator is used whenever a value of A(u) defined on \({{\fancyscript{T}}}_{0}\) is needed for a triangle on \({{\fancyscript{T}}}\). The sizing function A(u) can be automatically defined on the basis of the geometry of \({{\fancyscript{T}}}_{0}\), for instance by relating it to the local surface curvature. This solution has the advantage of refining the mesh around small-scale features, but suffers from sensitivity to the presence of small-scale noise on the surface, which potentially leads to artifactual over-refinement. We instead choose to define A(u) on the basis of local vessel size, with the rationale that a vessel should have the same number of elements around, or across, the lumen, irrespective of its local diameter.

We adopt the distance of surface points to centerlines, \(D_{c}({\mathbf u})\), as a robust surrogate of local vessel size. For each surface point, we compute its distance to the centerline point that yields the minimum tube function value among all centerline points. This tube-based approach is superior to simply computing the distance to the closest point on the centerlines, since the latter approach would tend to underestimate local size where the size of vessels changes abruptly, such as in the case of stenoses or T-shaped bifurcations. Finding the centerline point that yields the minimal tube function can be performed efficiently by considering that a tube \({\mathbf c}_{T}\) is a line in a 4-dimensional space, \({\mathbf c}_{T}(s) = (x(s), y(s), z(s), ir(s))\), where s is the line parameter, x, y and z the spatial coordinates of the centerline, r its radius and i the imaginary unit [3]. The tube function evaluated at a point \({\mathbf p} = (x_{p}, y_{p}, z_{p}, 0)\) is then

$$ f({\mathbf p})=\underset{s\in[0,L]}{\min}\left\{\left\Vert {\mathbf c}_{T}(s)-{\mathbf p}\right\Vert \right\} $$
(7)

For a piecewise linear representation, Eq. 7 has to be evaluated for every linear segment \([s_{i}, s_{i+1}]\) to find \(\tau \in [0,1]\) such that the tube function is minimal at \(s = s_{i} + \tau(s_{i+1} - s_{i})\). This is done analytically by extending the expression for the closest point to a line in Euclidean geometry to the 4-dimensional geometry of the tube function

$$ \tau=\frac{\left({\mathbf p}-{\mathbf c}_{T}(s_{i})\right)\cdot\left({\mathbf c}_{T}(s_{i+1})-{\mathbf c}_{T}(s_{i})\right)}{\left\Vert {\mathbf c}_{T}(s_{i+1})-{\mathbf c}_{T}(s_{i})\right\Vert ^{2}} $$
(8)

Once the distance to centerlines has been defined at every node on \({{\fancyscript{T}}}_{0},\) it can be manipulated and used as a sizing function for the surface remeshing algorithm. In our framework, we compute the local target area field as

$$ A({\mathbf u})=\frac{\sqrt{3}}{4}\left(R_{\rm el}\,D_{c}({\mathbf u})\right)^{2} $$
(9)

where \(R_{\rm el}\) is a constant factor expressing the ratio between the nominal edge length and the distance to the centerlines, as shown in Fig. 4. In practice, \(R_{\rm el}\) is the only free parameter in the surface remeshing procedure, and it quantifies the length of the nominal triangle edge relative to the vessel size. Since it is a relative quantity, it can be chosen once and for all in a population study.

The above approach is robust, but suffers from the presence of structures that are not well represented by centerlines, such as saccular aneurysms, on whose surface \(D_{c}({\mathbf u})\) could assume artifactually high values. A simple workaround is to set an upper threshold on the admissible values of \(D_{c}({\mathbf u})\). However, this introduces an absolute mesh sizing parameter, which may imply model-specific decisions and impede automatic meshing in a population study. An alternative, more robust approach is to replace \(D_{c}({\mathbf u})\) with \(R_{c}({\mathbf u})\), i.e., the radius of the maximal inscribed sphere at the centerline point with the minimal tube function. In other words, after finding the point with the minimal tube value, we take its inscribed sphere radius instead of computing the distance to it. The definition of A(u) is then the same as in Eq. 9 with \(D_{c}({\mathbf u})\) replaced by \(R_{c}({\mathbf u})\). This way, the target area does not increase in regions that deviate from a tubular shape, such as the saccular aneurysm shown in Fig. 5, but instead depends on the size of the tube where those regions originate.
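An illustrative NumPy implementation of the tube-based sizing of Eqs. 7-9 is given below: for a surface point p, the centerline polyline is scanned segment by segment, the minimizing parameter τ of Eq. 8 is evaluated with the imaginary radius coordinate expanded explicitly, and the sizing is derived from either the distance to the minimizing point (\(D_{c}\)) or its inscribed sphere radius (\(R_{c}\)). Function and variable names are assumptions, not the toolkit's API.

    # Tube-based sizing function (Eqs. 7-9), illustrative implementation.
    import numpy as np

    def tube_based_target_area(p, points, radii, r_el=0.4, use_radius=False):
        """points: (N, 3) centerline vertices; radii: (N,) inscribed sphere radii."""
        p = np.asarray(p, dtype=float)
        best_tube, best_dc, best_rc = np.inf, 0.0, 0.0
        for c0, c1, r0, r1 in zip(points[:-1], points[1:], radii[:-1], radii[1:]):
            d, dr = c1 - c0, r1 - r0
            den = np.dot(d, d) - dr ** 2               # squared 4D segment length
            tau = 0.0 if den <= 0.0 else (np.dot(p - c0, d) + r0 * dr) / den  # Eq. 8
            tau = min(max(tau, 0.0), 1.0)              # stay on the segment
            c, r = c0 + tau * d, r0 + tau * dr
            dist2 = np.dot(p - c, p - c)
            tube = dist2 - r ** 2                      # squared tube value (negative inside the tube)
            if tube < best_tube:
                best_tube, best_dc, best_rc = tube, np.sqrt(dist2), r
        size = best_rc if use_radius else best_dc
        return np.sqrt(3.0) / 4.0 * (r_el * size) ** 2  # Eq. 9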

6.2 Volume meshing

Volume mesh generation is performed on the basis of the surface mesh produced in the previous step: the surface mesh is left unchanged and the interior is filled with volumetric elements. The mesh is composed of two parts, the tetrahedral interior and the prismatic or tetrahedral boundary layer. We will now present them in this order.

Tetrahedral mesh generation in the Vascular Modeling Toolkit is performed by incorporating the functionality offered by the Tetgen mesh generator [27], a tool for constrained Delaunay and high-quality tetrahedralization of piecewise linear complexes. The Tetgen code was directly embedded within a VTK class, which provides seamless integration with the rest of the framework and achieves complete automation. We refer to [26] for an in-depth description of the theory and algorithms behind the tool. For our purposes, it is sufficient to mention that Tetgen is capable of generating meshes conforming to a boundary triangulation and satisfying the Delaunay criterion (i.e., no vertex falls inside the circumsphere of any tetrahedron). In addition, mesh quality is ensured through the specification of a bound on the target ratio of the radius of the element circumsphere to the length of the shortest element edge.

Sizing of the tetrahedral mesh can be provided through the specification of a target tetrahedron edge size at a set of points in space. In our case, we express the target edge length at each node of the input surface mesh, setting it proportional to the local triangle edge length on the surface mesh, as shown in Fig. 4. The proportionality constant can be chosen once, close to 1. In practice, dealing with tubular geometries, and in general with concave bounding surfaces with non-negligible curvature everywhere, it is advisable to keep the proportionality constant lower than one (e.g., 0.8), in order to penalize the generation of elongated tetrahedra at the boundary surface.

Since the behavior of the solution close to the wall is of great interest for hemodynamics, as it is directly linked to wall shear stress and derived quantities, it is in general desirable that the mesh has a high element density near the wall, and that elements are aligned with the local orientation of the boundary surface in that region. In practice, this is realized by generating one or more layers of prismatic elements having the boundary triangles as their base and protruding into the domain along the surface normal direction (for solvers not supporting prismatic element types, the boundary layer can then be tetrahedralized). The thickness of the boundary layer is expressed as a fraction of the local edge length, which is in turn dependent on the local vessel size, consistently with the fact that the size of the physical shear layer is expected to scale with the vessel size, as shown in Fig. 4.
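In the toolkit, surface remeshing, boundary layer generation and tetrahedralization are orchestrated by a single mesh generation script; the sketch below shows how such a run might look from Python. The attribute names mirror the vmtkmeshgenerator options as we recall them and should be treated as assumptions, as should the parameter values and file names.

    # Hedged sketch of geometry-driven volume meshing with a boundary layer.
    from vmtk import vmtkscripts

    reader = vmtkscripts.vmtkSurfaceReader()
    reader.InputFileName = 'surface_with_extensions.vtp'   # placeholder
    reader.Execute()

    mesher = vmtkscripts.vmtkMeshGenerator()
    mesher.Surface = reader.Surface
    mesher.TargetEdgeLengthFactor = 0.4        # R_el relative to local vessel size
    mesher.BoundaryLayer = 1                   # grow prismatic elements at the wall
    mesher.BoundaryLayerThicknessFactor = 0.5  # fraction of the local edge length
    mesher.NumberOfSubLayers = 2               # e.g., the 2-element layer of Fig. 4
    mesher.Execute()

    writer = vmtkscripts.vmtkMeshWriter()
    writer.Mesh = mesher.Mesh
    writer.OutputFileName = 'mesh.vtu'         # placeholder
    writer.Execute()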

The concrete advantage of generating such a boundary layer depends on the specific problem at hand and on the density of the internal tetrahedral mesh. When using an adaptive mesh refinement strategy as presented in the next section, it is expected that resolving the boundary layer in the direction normal to the surface will limit the number of elements tagged for refinement within the shear layer, therefore increasing efficiency. A systematic evaluation of this effect is beyond the scope of this paper and will be the subject of future work.

As detailed in the next section, our solution strategy is based on P2-P1 finite elements. For this task we adopt isoparametric quadratic finite elements, namely 10-noded quadratic tetrahedra and 18-noded quadratic prisms. The conversion of the elements from linear to quadratic is performed by introducing additional nodes using a regular subdivision scheme. Once subdivided, the midside nodes of boundary triangles are projected onto the initial surface \({{\fancyscript{T}}}_{0}\) using the octree locator, so as to obtain a proper high-order representation of the surface.

7 CFD simulation

The methodology for automated mesh generation described in the previous section has one potential limitation: the computational mesh is generated on the basis of the geometry, and not on the basis of the expected flow features. As an example, in the case of a stenotic vessel, the mesh would be dense at the stenosis throat, but the post-stenotic sections would likely be under-resolved with respect to the high-speed jet emanating from the stenosis. This would potentially limit the application of the present workflow to large-scale simulations, as each case should in theory be individually evaluated for potential problems linked to geometry-driven mesh generation. In order to overcome this limitation, we stress the importance of adopting an adaptive mesh refinement approach for the numerical approximation of the differential problem, in this case the incompressible Navier–Stokes equations. The automated geometry-driven mesh generation approach described in the previous section is then complemented by a flow feature-driven mesh refinement strategy, therefore providing a robust approach to high-throughput computational hemodynamics. A solver should therefore fulfill the following requirements:

  • to provide a simple and robust solution strategy, so as to reduce the number of parameters to be chosen for the simulation and the need for monitoring during computation;

  • to allow adaptive mesh refinement and coarsening at any timestep, so that the mesh can adapt to the flow features arising throughout the cardiac cycle;

  • to exploit parallel computation, both on clusters and on now-standard multi-core workstations.

While it is beyond the scope of this paper to present a CFD solver, we briefly summarize the features of the solver currently under development in our group. Based on a second-order velocity correction scheme in rotational form, the implementation relies on the libMesh library [16], which provides a finite element infrastructure featuring parallel adaptive non-conformal mesh refinement and coarsening. Non-conformal mesh refinement, i.e., the possibility of refining an element without refining its neighbors thanks to the correct handling of hanging nodes, allows element aspect ratios to remain unchanged throughout the simulation and makes it efficient to perform refinement and coarsening frequently during the cardiac cycle. Also, we argue that the adoption of prismatic boundary layers and adaptive mesh refinement overcomes the theoretical presence of a numerical boundary layer close to no-slip boundaries, whose thickness is a function of the local element size. As mentioned above, the solver will be presented in a separate publication and will be made freely available as open source software.

8 Conclusions

In this work, we presented the building blocks of our framework for high-throughput computational hemodynamics, which is already freely available within the open-source Vascular Modeling Toolkit. We hope that the advent of these and other tools will enable studies to be performed on a large scale, lift part of the effort required to generate patient-specific models, and shift the focus to the underlying biological problems.

Although our framework potentially allows patient-specific hemodynamic variables to be evaluated in large-scale studies, new approaches for post-processing of hemodynamic simulations need to be developed for the identification of geometric and hemodynamic criteria to be employed in clinical research [3]. This is, in our opinion, one of the pressing issues in this field, and as such will constitute a focus of our future work.