Journal of Computational Surgery

Computing, Robotics, and Imaging for the Surgical Platform

Open Access

Real-time in silico experiments on gene regulatory networks and surgery simulation on handheld devices

  • Icíar Alfaro1,
  • David González1,
  • Felipe Bordeu2,
  • Adrien Leygue2,
  • Amine Ammar3,
  • Elías Cueto1, 2 and
  • Francisco Chinesta2, 4
Journal of Computational Surgery 2014, 1:1

DOI: 10.1186/2194-3990-1-1

Received: 13 November 2012

Accepted: 1 March 2013

Published: 10 January 2014

Abstract

Simulation of all the phenomena taking place in a surgical procedure is a formidable task that involves, when possible at all, the use of supercomputing facilities over long periods of time. However, decision making in the operating room calls for fast methods that provide an accurate response in real time. To this end, Model Order Reduction (MOR) techniques have recently emerged in the field of Computational Surgery to help alleviate this burden. In this paper, we review the basics of classical MOR and explain how a technique recently developed by the authors, coined Proper Generalized Decomposition, could make real-time feedback available on simple devices like smartphones or tablets. Examples are given of the performance of the technique for problems at different scales of the surgical procedure, from gene regulatory networks to macroscopic soft tissue deformation and cutting.

Keywords

Model Order Reduction; Gene regulatory networks; Surgery simulation

Background

Some 15 years ago, Satava [1] proposed a taxonomy of virtual anatomy consisting of five different generations. The first generation is composed of systems that accurately represent the geometry of the organs at a macroscopic level. The second generation adds an accurate description of the physical dynamics of the body. While it is still hard, more than a decade later, to find a real-time surgical simulator that incorporates accurate, state-of-the-art models for soft tissues at a continuum level, the taxonomy includes three further generations. From the third to the fifth, these virtual descriptions of the patient should include, respectively, accurate descriptions of physiology, microscopic anatomy (at a neurovascular level, for instance), and, finally, biochemical systems.

While many successful models exist for all these different levels of description (see for instance [2–8] among many others), they have not yet been fully incorporated into virtual reality simulators due to the impressive computing requirements they involve.

Difficulties at a macroscopic level

These difficulties are of a very different nature. If we consider, for instance, systems devoted to training future surgeons in gestures such as cutting and suturing, the main difficulty comes from the highly non-linear constitutive equations of soft tissues, very often modeled as (possibly visco-) hyperelastic media [9]. These non-linear equations must be solved under real-time constraints that reach 1 kHz of feedback response for haptic devices, or 25 Hz if only visual feedback is needed. Currently, very few surgical training simulators at this level incorporate accurate models for tissue deformation. Among these, we can cite the works by Ourselin and Taylor [10], based on the use of explicit finite elements implemented on hardware (Graphics Processing Units, GPUs). In general, however, explicit algorithms lack robustness over very long simulation times.

Recently, growing attention has been paid to Model Order Reduction (MOR) techniques in this framework. MOR comprises a variety of techniques known under different names (Proper Orthogonal Decomposition, POD; Principal Component Analysis, PCA; the Karhunen-Loève transform; among others) that are ubiquitous in almost every branch of applied sciences and engineering. After the pioneering works of Karhunen, Loève, and Lorenz [11–13], MOR techniques have been applied and re-discovered under different frameworks many times [14–16].

In essence, POD-based model order reduction is based (with the notable exceptions of [15, 17]) on an a posteriori statistical treatment of existing solutions to complex problems, used to construct an efficient basis (i.e., one with very few degrees of freedom) for simulating problems slightly different from the original ones. While standard finite element techniques employ a basis of local, piece-wise polynomials to approximate the solution of a given problem (a very efficient choice when no information on the form of that solution is at hand), POD-based techniques employ global bases, specific to each particular problem. Such a basis is determined from the correlation matrix of the results obtained by solving problems similar to the one at hand (the so-called snapshots of the system). These snapshots could be obtained, for instance, by simulating different points of contact between surgical tool and organ, as in [18, 19]. From them, a basis is extracted with which to simulate, in a Galerkin framework for instance, situations different from the original ones (a new point of contact not considered initially, say). In [20], POD is employed to augment the range of stability of explicit finite element methods, another known property of the technique.
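As an aside, the snapshot-based construction described above can be sketched in a few lines. The following Python fragment is purely illustrative (synthetic data, not the authors' solver): it extracts a POD basis from a snapshot matrix via the singular value decomposition and checks how well a new, unseen solution is captured by the reduced basis.

```python
import numpy as np

# Illustrative POD sketch: build a reduced basis from snapshots of a
# hypothetical system and check its reconstruction error.
rng = np.random.default_rng(0)

# Fake snapshot matrix: 200 degrees of freedom, 30 snapshots that actually
# live on a 3-dimensional subspace plus small noise.
modes_true = rng.standard_normal((200, 3))
amplitudes = rng.standard_normal((3, 30))
snapshots = modes_true @ amplitudes + 1e-6 * rng.standard_normal((200, 30))

# POD: singular value decomposition of the snapshot matrix; the leading
# left singular vectors form the reduced (global) basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
n_modes = int(np.sum(s / s[0] > 1e-4))  # truncate by relative singular value
basis = U[:, :n_modes]                  # reduced basis, one column per mode

# Project a new "solution" onto the basis and measure the relative error.
u_new = modes_true @ rng.standard_normal(3)
u_rec = basis @ (basis.T @ u_new)
error = np.linalg.norm(u_new - u_rec) / np.linalg.norm(u_new)
```

In a Galerkin setting, the columns of `basis` would replace the finite element shape functions, reducing the system to `n_modes` unknowns.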

This approach presents, however, some major drawbacks. When dealing with highly non-linear tissues (which is very often the case), the resulting equations must be linearized. Employing standard Newton-Raphson schemes to solve them requires re-computing the tangent stiffness matrix of the system, a very time-consuming operation that eliminates many of the advantages of POD and renders the method useless. Several techniques have been proposed to overcome this difficulty: POD with interpolation [18], the so-called Empirical Interpolation Method [21], or its discrete counterpart [22]. A different approach has been followed in some of our previous works [23, 24], in which a Taylor series expansion is applied to the variables of interest (here, the displacement field) in order to obtain a sequence of problems, one for each order of the expansion, but all sharing the same tangent stiffness matrix.

Beating the curse of dimensionality

A completely different source of complexity arises in problems whose solution is defined in spaces with a high number of dimensions. For instance, in [25] and references therein, the very interesting problem of modeling and simulating vein grafts is studied, which combines the need for simulation not only at a macroscopic level but also at the level of a gene regulatory network. It is hypothesized that blood shearing forces modulate a specific gene regulatory network that determines the adaptive response of the vein wall.

Simulating the behavior of gene regulatory networks is a formidable task for several reasons. At this level of description, only a few molecules (maybe dozens or hundreds) of each species involved in the regulation process are present, and this eliminates the possibility of considering the process as deterministic, as is done very often in most chemical applications. In this situation, the continuum approach itself is questionable, as justified clearly in the excellent review by Turner et al. [26] and references therein. Here, the concept of concentration of the species does not make sense [6, 27]. On the contrary, under some weak hypotheses (well-stirred mixture, fixed volume and temperature), the system can be considered Markovian and can consequently be modeled by the so-called Chemical Master Equation (CME) [28], which is in fact no more than a set of ordinary differential equations stating the conservation of the probability density function P in time:
$$\frac{\partial P(\mathbf{z},t\,|\,\mathbf{z}_0,t_0)}{\partial t} = \sum_j \left[ a_j(\mathbf{z}-\mathbf{v}_j)\, P(\mathbf{z}-\mathbf{v}_j,t\,|\,\mathbf{z}_0,t_0) - a_j(\mathbf{z})\, P(\mathbf{z},t\,|\,\mathbf{z}_0,t_0) \right],$$
(1)

where P(z,t|z0,t0) represents the probability of being at a state in which there is a number of molecules of each species, stored in the vector z, at time t, having started from a state z0 at time t0. The propensity a_j represents the probability of reaction j occurring, while v_j represents the change in the number of molecules of each species if reaction j takes place. This change is given, of course, by the stoichiometry of the reaction at hand.

What is challenging about this set of equations, however, is that it is defined in a state space possessing as many dimensions as there are different species involved in the regulatory network. Under this framework, if we consider N different species, each present in up to n copies, the number of different possible states of the system is n^N. This number can take the astronomical value of 10^{6,000} if we consider some types of proteins, for instance [28]. This phenomenon is known as the curse of dimensionality in many branches of science. For instance, Nobel prize winner R. B. Laughlin said, when talking about this problem [29], that 'No computer existing, or that will ever exist, can break this barrier because it is a catastrophe of dimension'.

To overcome this difficulty, most authors employ Monte Carlo-like algorithms (the so-called stochastic simulation algorithm, SSA [28, 30, 31]). But Monte Carlo techniques require as many individual realizations of the problem as possible, which compromises their simple application in inverse identification, leading to excessively time-consuming simulations together with great variance in the results.
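For reference, a minimal version of the SSA mentioned above fits in a few lines. The fragment below is a generic sketch for a hypothetical single-species birth-death process (all parameter values are made up, not taken from the paper's examples); it also illustrates why many realizations are needed to estimate the distribution.

```python
import numpy as np

# Minimal Gillespie SSA sketch for a single birth-death species:
# production at constant rate k_prod, degradation at rate k_deg * n.
def gillespie_birth_death(k_prod, k_deg, n0, t_end, rng):
    t, n = 0.0, n0
    while True:
        a1 = k_prod          # propensity of the production reaction
        a2 = k_deg * n       # propensity of the degradation reaction
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)   # waiting time to the next reaction
        if t > t_end:
            return n
        if rng.random() * a0 < a1:       # choose which reaction fires
            n += 1
        else:
            n -= 1

rng = np.random.default_rng(1)
# Monte Carlo: many realizations are needed to estimate the distribution,
# which is exactly the cost the text refers to.
samples = [gillespie_birth_death(10.0, 1.0, 0, 20.0, rng) for _ in range(1000)]
mean_n = float(np.mean(samples))  # stationary mean approaches k_prod / k_deg
```

Estimating the full probability distribution, rather than just the mean, requires far more realizations still, and the variance of the estimate decays only as the inverse square root of their number.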

Methods

Proper generalized decomposition at a glance

Dealing with the curse of dimensionality in a very different context, the authors presented in a previous work a technique now known under the name of Proper Generalized Decomposition (PGD) [32, 33]. Essentially, to avoid the complexity of the problem growing exponentially with the number of state space dimensions, the method approximates the variable of interest, say u, as a finite sum of separable functions:
$$u(x_1, x_2, \ldots, x_D, t) \approx \sum_{i=1}^{N} F_1^i(x_1) \cdot F_2^i(x_2) \cdots F_D^i(x_D) \cdot T^i(t).$$
(2)

This particular choice is motivated by the method itself, which is conceived as a greedy algorithm that computes one term of the sum at a time and, within each term, one factor at a time, by means of a fixed-point, alternating-directions algorithm. This leads to a sequence of one-dimensional (low-dimensional, in general) problems, one for each function F_j^i, that can be solved using any standard technique (finite elements, finite volumes, finite differences, collocation, …).

If M nodes are used to discretize each coordinate, the total number of PGD unknowns is N × M × D instead of the M^D degrees of freedom involved in standard mesh-based discretizations. Moreover, all numerical experiments carried out to date with the PGD show that the number of terms N required to obtain an accurate solution is not a function of the problem dimension D, but rather depends on the regularity of the exact solution. The PGD thus avoids the exponential complexity with respect to the problem dimension.
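The complexity counts in the previous paragraph can be made concrete with a toy calculation (the numbers below are illustrative, not taken from the paper's examples):

```python
# Unknown counts for a separated (PGD) representation versus a full
# tensor-product grid, with the quantities quoted in the text:
# M nodes per coordinate, D dimensions, N separated terms.
def pgd_unknowns(N, M, D):
    return N * M * D          # one 1D function (M nodal values) per term and dimension

def full_grid_unknowns(M, D):
    return M ** D             # tensor-product mesh: exponential in D

# Example: a modest 10-dimensional problem with 100 nodes per axis and
# 50 separated modes (illustrative numbers).
pgd = pgd_unknowns(50, 100, 10)      # 50 * 100 * 10 = 50,000 unknowns
full = full_grid_unknowns(100, 10)   # 100**10 = 1e20 unknowns
```

Even for this modest setting, the separated representation stores fifty thousand values where a mesh-based discretization would need 10^20, which no computer can hold.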

The PGD technique can thus be seen both as a MOR technique, if we keep the number N of modes to a minimum, and as an efficient weapon against the curse of dimensionality, since it proceeds by solving a sequence of one-dimensional problems of negligible computational cost. Note that by letting N grow, we eventually arrive at a solution of the same accuracy as the finite element one, once the number of terms in the basis, N, equals the number of nodes of the finite element mesh. In many applications studied to date (see [34, 35] and references therein), N is found to be as small as a few tens for the usual symmetric differential operators, and the approximation converges towards the solution associated with the complete tensor product of the approximation bases considered in each spatial dimension (see [36] for a formal proof in the case of elliptic problems).

This was also the main motivation of the so-called radial loading approximation within the LArge Time INcrement (LATIN) method by Ladevèze [37], which can be seen as a particular case of PGD approximation in which a space-time separated representation is employed to solve non-linear structural mechanics problems.

On the other hand, PGD methods can also be seen as an efficient tool for high-dimensional problems. This twofold character of the method makes it especially appealing for the numerical solution of the type of problems mentioned in the 'Background' section.

Parametric problems as a tool for real-time simulation: an off-line/on-line strategy

As mentioned before, PGD can be seen both as a model order reduction technique and as an efficient solver for high-dimensional problems. But what actually interests us in the field of computational surgery is its ability to solve parametric problems in an unprecedented way. Indeed, in [38], a strategy was developed that casts parametric problems in the form of high-dimensional ones, for which PGD is especially efficient. In words, if we seek the solution of a parametric problem u(x,t,p1,…,p_m), the approach we follow is to treat the parameters as additional, non-physical coordinates of the solution:
$$u(\mathbf{x}, t, p_1, \ldots, p_m) \approx \sum_{i=1}^{N} X_i(\mathbf{x}) \cdot T_i(t) \cdot P_1^i(p_1) \cdots P_m^i(p_m)$$
(3)

and therefore to look for a solution as a finite sum of separable functions of space, (possibly) time, and the parameters p1,…,p_m. This possibility opens the door to a strategy based on two steps. First, a general, multi-dimensional solution to the parametric problem is computed off-line. This phase may require supercomputing facilities, but the solution is computed once and for all and can be efficiently stored in the form of a set of separated functions, as stated before. This phase thus leads to a sort of meta-model or response surface for the problem. It is worth noting that this response surface is obtained without the need for any prior computer experiments: it is computed on the fly and stored for life. It provides the solution to the problem for any combination of parameter values taken from within prescribed intervals. This response surface is efficiently stored as a file of nodal values of all the separated functions involved, which are simply multiplied together in real time.

After this meta-model is obtained, a second phase of the method is executed on-line. In this phase, the meta-model is evaluated, not solved for, at very efficient feedback rates. Figure 1 sketches the basics of the developed method. This approach has been reported to provide feedback rates on the order of kilohertz running on a simple laptop. These results are analyzed in depth in the following section.
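The on-line evaluation step can be sketched as follows. In this illustrative Python fragment (synthetic modes, not the actual meta-model), a separated solution u(x, p) = Σ X_i(x)·P_i(p) is stored as arrays of nodal values; particularizing it for a given parameter value reduces to one-dimensional interpolations and a weighted sum of spatial modes, which is what makes kilohertz feedback rates plausible.

```python
import numpy as np

# On-line phase sketch: the separated functions are stored as nodal values;
# evaluating the meta-model at a parameter value is a handful of 1D
# interpolations and one matrix-vector product. All values are synthetic.
rng = np.random.default_rng(2)

n_terms, n_nodes_x, n_nodes_p = 20, 500, 50
X = rng.standard_normal((n_terms, n_nodes_x))   # spatial modes X_i(x), nodal values
P = rng.standard_normal((n_terms, n_nodes_p))   # parameter modes P_i(p), nodal values
p_grid = np.linspace(0.0, 1.0, n_nodes_p)

def evaluate(p_value):
    """Particularize u(x, p) = sum_i X_i(x) * P_i(p) at a given parameter value."""
    # Interpolate each 1D parameter mode at p_value, then weight the spatial modes.
    weights = np.array([np.interp(p_value, p_grid, P[i]) for i in range(n_terms)])
    return weights @ X    # full field over all spatial nodes

u = evaluate(0.3)
```

No system of equations is solved here; the cost is linear in the number of terms and spatial nodes, independent of how expensive the off-line phase was.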
https://static-content.springer.com/image/art%3A10.1186%2F2194-3990-1-1/MediaObjects/40244_2012_Article_1_Fig1_HTML.jpg
Figure 1

Off-line/on-line strategy. The method we analyze here is based upon a combination of the 'off-line’ solution of a general enough parametric model and the 'on-line’ particularization of such a general solution in a particular context at real-time feedback rates. Photo credits: http://es.wikipedia.org/wiki/Archivo:UPM-CeSViMa-SupercomputadorMagerit.jpg.

Results and discussion

In order to show how the proposed methodology works, we consider two distinct examples at two different scales. First, we show how the technique works for the simulation of gene regulatory networks, even in the absence of knowledge about some parameters of the reactions. Second, we analyze how the multi-dimensional methodology proposed so far can be efficiently applied to the simulation of macro-scale problems such as liver palpation.

A PGD approach to gene regulatory network simulation

The PGD approach to the problem of efficiently simulating gene regulatory networks begins by assuming that the probability of being at a particular state z at time t can be approximated as a finite sum of separable functions, i.e.
$$P^N(\mathbf{z}, t) = \sum_{j=1}^{N} F_1^j(z_1) \cdot F_2^j(z_2) \cdots F_D^j(z_D) \cdot F_t^j(t),$$
(4)

where, as mentioned before, the variables z_i represent the number of molecules of species i present at a given time instant. This particular choice of the form of the basis functions allows for an important reduction in the number of degrees of freedom of the problem: N × nnod × (D + 1) instead of nnod^D, where D is the number of dimensions of the state space and nnod the number of degrees of freedom of each one-dimensional grid established along each dimension. For this to be useful, one has to assume that the probability is negligible outside some interval and therefore substitute a subdomain [0,…,m − 1]^D for the infinite domain, m being the chosen limit on the number of molecules of any species in the simulation. A similar assumption underlies other methods in the literature, such as the Finite State Projection algorithm, for instance [28].

Another important point to highlight is the presence of a function depending solely on time, F_t^j(t). This means that the algorithm is not incremental: instead, it solves for the whole time history of the chemical species at each iteration of the method. If one assumes that n terms of the sum given by Equation (4) are already known,
$$P^{n+1}(\mathbf{z}, t) = P^n(\mathbf{z}, t) + F_1^{n+1}(z_1) \cdot F_2^{n+1}(z_2) \cdots F_D^{n+1}(z_D) \cdot F_t^{n+1}(t),$$
(5)

and looks for the (n + 1)-th term, substituting Equation (5) into the CME, Equation (1), gives a non-linear problem in F_1^{n+1}, …, F_D^{n+1}, F_t^{n+1} that is solved by means of a fixed-point, alternating-directions algorithm; see [39].

To show how this technique works, consider one of the simplest and most studied examples of gene regulatory networks, that of the λ-phage. When a bacteriophage λ infects a cell, it either stays dormant or reproduces until the cell dies. The resulting behavior depends crucially on two competing proteins that mutually inhibit each other; see a schematic representation in Figure 2. The so-called toggle switch is composed of a two-gene co-repressive network. For this case, the governing CME has the form [7]
$$\frac{\partial P}{\partial t} = A P,$$
(6)
with A = A_1 + A_2, two operators, one for each reaction in the system. Following [7], the first operator takes the form

$$A_1 P(z_1, z_2) = \frac{\alpha\beta}{\beta + \gamma z_2}\, P(z_1 - 1, z_2) + \delta (z_1 + 1)\, P(z_1 + 1, z_2) - \left(\frac{\alpha\beta}{\beta + \gamma z_2} + \delta z_1\right) P(z_1, z_2),$$

and A_2 is equivalent, with z_1 and z_2 interchanged. We computed the solution for δ = 0.05, α = 1.0, γ = 1.0, and β = 0.4.
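As an illustration of Equation (6), the truncated toggle-switch operator can be assembled directly on a finite state space. The sketch below is our own illustrative code (not the PGD solver): it uses the parameter values quoted above and a hypothetical truncation limit m = 30, builds A = A_1 + A_2 as a dense matrix, and integrates dP/dt = AP explicitly, checking that total probability is conserved. A full-tensor representation like this is exactly what becomes infeasible as the number of species grows.

```python
import numpy as np

# Truncated toggle-switch CME, Eq. (6): assemble A = A_1 + A_2 on the finite
# state space [0, m-1]^2. Parameters as quoted in the text; m is illustrative.
alpha, beta, gamma, delta = 1.0, 0.4, 1.0, 0.05
m = 30                      # truncation: at most m-1 molecules per species
n_states = m * m

def idx(z1, z2):
    return z1 * m + z2      # flatten the 2D state (z1, z2) to one index

A = np.zeros((n_states, n_states))
for z1 in range(m):
    for z2 in range(m):
        i = idx(z1, z2)
        prod1 = alpha * beta / (beta + gamma * z2)   # production of species 1
        prod2 = alpha * beta / (beta + gamma * z1)   # production of species 2
        # Production reactions (kept inside the truncated box).
        if z1 + 1 < m:
            A[idx(z1 + 1, z2), i] += prod1
            A[i, i] -= prod1
        if z2 + 1 < m:
            A[idx(z1, z2 + 1), i] += prod2
            A[i, i] -= prod2
        # Degradation reactions.
        if z1 > 0:
            A[idx(z1 - 1, z2), i] += delta * z1
            A[i, i] -= delta * z1
        if z2 > 0:
            A[idx(z1, z2 - 1), i] += delta * z2
            A[i, i] -= delta * z2

# Explicit Euler integration of dP/dt = A P from a point initial condition
# (the non-physiological start around z1 = z2 = 15 used in the text).
P = np.zeros(n_states)
P[idx(15, 15)] = 1.0
dt = 0.1
for _ in range(1000):       # integrate up to t = 100 s
    P = P + dt * (A @ P)

total = P.sum()             # probability mass, conserved by construction
```

Note that even this tiny two-species network already needs m² = 900 states in full-tensor form; with N species the count grows as m^N, which is the curse of dimensionality the PGD approach circumvents.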
https://static-content.springer.com/image/art%3A10.1186%2F2194-3990-1-1/MediaObjects/40244_2012_Article_1_Fig2_HTML.jpg
Figure 2

Schematic mechanism of the toggle switch. Schematic mechanism of the toggle switch [6]. The constitutive P_L promoter drives the expression of the lacI gene, which produces the lac repressor tetramer. The lac repressor tetramer binds the lac operator sites adjacent to the Ptrc-2 promoter, thereby blocking transcription of cI. The constitutive Ptrc-2 promoter drives the expression of the cI gene, which produces the λ-repressor dimer. The λ-repressor dimer cooperatively binds to the operator sites native to the P_L promoter, which prevents transcription of lacI.

The simulation started from a non-physiological state in which both proteins showed a very high probability around z1 = z2 = 15. Despite this initial state, after t = 100 s (Figure 3a), states with average values of both proteins, as well as states combining a low level of one protein with a high level of the other, are quite likely, and this remains the case for the stationary distribution as well [7] (Figure 3b).
https://static-content.springer.com/image/art%3A10.1186%2F2194-3990-1-1/MediaObjects/40244_2012_Article_1_Fig3_HTML.jpg
Figure 3

Simulated behavior of λ -phage toggle switch. (a) Marginal probability distribution function at t = 100 s. Axes denote the number of protein 1 and 2. (b) Solution at steady state (t ≈ 300 s) by separation of variables. Axes denote the number of protein 1 (abscissa) and protein 2 (ordinate).

But what should be highlighted about this technique is not only its ability to solve gene regulatory network simulations in a reasonable amount of time, something that extends easily to problems with some 20 different species involved (see [39], for instance). Very often, there is an important lack of experimental data concerning the constants of the reactions (propensities), or we simply want to adopt the meta-modeling strategy introduced before. In that case, it is very convenient to set up the problem in parametric form and convert it into a multi-dimensional one. Considering parameters as new state space dimensions opens the door to designing in silico experiments in which one readily (in real time) observes the behavior of the system under different conditions. The transient solution for a particular value of a propensity can then be computed by restricting the general solution to that particular value of the extra coordinate. Even if the dimensionality of the problem increases beyond that demanded by the CME itself, this does not constitute a major difficulty for PGD techniques, which have easily solved problems in dimension 100 and more [35].

To illustrate this feature, and for ease of exposition, we have simulated a cascade of only two terms. The operator related to a cascade is of the form A = A_1 + A_2, with A_1 of the same form as in the previous example, while A_2 takes the form

$$A_2 P(\mathbf{z}) = \frac{\beta z_1}{\beta z_1 + \gamma}\, P(\mathbf{z} - \mathbf{e}_2) + \delta (z_2 + 1)\, P(\mathbf{z} + \mathbf{e}_2) - \left(\frac{\beta z_1}{\beta z_1 + \gamma} + \delta z_2\right) P(\mathbf{z}),$$
(7)

where e_2 is the second standard basis vector of R^2. We took the parameter δ as an unknown. Note that the solution (obtained in a single execution of the program), see Figure 4, provides results for different values of δ that reproduce those in the literature [7]. These examples run (off-line phase) in some minutes on a laptop, while they can be evaluated (on-line, parametric phase) at kilohertz rates with no special hardware requirements.
https://static-content.springer.com/image/art%3A10.1186%2F2194-3990-1-1/MediaObjects/40244_2012_Article_1_Fig4_HTML.jpg
Figure 4

Solution of the cascade problem with unknown parameter δ. Solution for the cascade problem with unknown propensities. Probability distribution function (top row) and marginal probability distribution function of each species (bottom row) at time t = 0 s, t = 30 s, and t = 600 s (approximately steady state). The left column presents the results for a value δ = 0.01, the central one for δ = 0.025, and the right one for δ = 0.045. Note that all the results are obtained in one execution of the program. The four-dimensional hypercube containing the solution space, whose dimensions are the concentrations of the two proteins, the value of δ, and time, is then cut by the hyperplanes defined by the different values of δ and time to give these plots.

These examples show how an efficient simulation of gene regulatory networks could be incorporated into surgical simulators, even within the operating room. In the next section, we show how a similar parametric, multi-dimensional strategy can be developed for macroscopic descriptions of surgery.

Simulation of liver palpation

As a representative example of the performance of the proposed technique at a macroscopic, continuum level, we have chosen a classical example of liver palpation during hepatic endoscopic resections [3]. For a detailed description of how resection could be simulated under MOR settings, we refer the interested reader to our former work [19]. For the sake of simplicity, we focus on the simulation of the interaction of surgical tools and organ, without the presence of cuts.

The problem of determining the response of an organ to the load transmitted by contact with a surgical tool can be formulated as determining the displacement at any point of the model, u(x,y,z), for any load position s and for any force vector orientation and magnitude, t, thus rendering a problem defined in the physical space (R^3) plus a six-dimensional state space (R^6). Following the previous developments, we propose an iterative scheme that finds the (n + 1)-th term of the separated approximation in the form:
$$u_j^{n+1}(\mathbf{x}, \mathbf{s}) = \sum_{k=1}^{n} X_j^k(\mathbf{x}) \cdot Y_j^k(\mathbf{s}) + R_j(\mathbf{x}) \cdot S_j(\mathbf{s}) = u_j^n(\mathbf{x}, \mathbf{s}) + R_j(\mathbf{x}) \cdot S_j(\mathbf{s}),$$
(8)

where we have assumed, for simplicity of exposition, that the load is unitary and directed along the z axis (thus, no dependence on t is considered here). The term u_j refers to the j-th component of the displacement vector, j = 1,2,3, and R(x) and S(s) are the sought functions that improve the approximation. Again, this iterative scheme is solved by introducing the approximation given by Equation (8) into the weak form of the problem. This renders a non-linear problem in R and S that, in our implementation, is solved using a fixed-point, alternating-directions algorithm. In each direction of the fixed-point algorithm, we again face a non-linear problem, due to the non-linear constitutive equations of the liver tissue. In [40, 41], two distinct approaches have been pursued, namely an explicit one and a combination of PGD and asymptotic expansions of the variables of interest. The interested reader is referred to these references for more details on the implementation.
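The greedy, fixed-point, alternating-directions enrichment has the following generic structure. The toy Python sketch below is a linear, purely algebraic stand-in (not the weak-form solver of [40, 41]): it greedily approximates a sample field u(x, s) by rank-one terms R(x)·S(s), alternating a least-squares update of each factor while the other is frozen, in the spirit of Equation (8).

```python
import numpy as np

# Generic sketch of greedy enrichment with a fixed-point, alternating-
# directions step: approximate a sampled field u(x, s) (a matrix U) as a
# sum of separable products R(x) * S(s), one pair at a time.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 80)
s = np.linspace(0, 1, 60)
# A smooth synthetic "displacement field" over space x and load position s.
U = np.outer(np.sin(np.pi * x), np.cos(np.pi * s)) + 0.5 * np.outer(x**2, s)

def enrich(residual, n_fixed_point=50):
    """One greedy term: alternate updating R (S frozen) and S (R frozen)."""
    S = rng.standard_normal(residual.shape[1])
    for _ in range(n_fixed_point):
        R = residual @ S / (S @ S)        # least-squares R for the current S
        S = residual.T @ R / (R @ R)      # least-squares S for the current R
    return R, S

approx = np.zeros_like(U)
for _ in range(4):                        # enrich with a few terms
    R, S = enrich(U - approx)
    approx += np.outer(R, S)

rel_error = np.linalg.norm(U - approx) / np.linalg.norm(U)
```

In the actual method, each alternating step is a boundary value problem in one block of coordinates rather than a least-squares fit, but the enrichment loop and the fixed-point alternation follow this same pattern.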

Although the literature on the mechanical properties of the liver is not very detailed, we have assumed a Kirchhoff-Saint Venant material with a Young's modulus of 160 kPa and a Poisson's ratio of 0.48, thus nearly incompressible [2]. The model solution comprised a total of N = 167 functional pairs X_j^k(x) · Y_j^k(s) (see Equation (8)). The third component (j = 3) of the first six modes X_3^k(x) is depicted in Figure 5. The same is done in Figure 6 for the functions Y, although in this case they are defined only on the boundary of the domain.
https://static-content.springer.com/image/art%3A10.1186%2F2194-3990-1-1/MediaObjects/40244_2012_Article_1_Fig5_HTML.jpg
Figure 5

Spatial modes of the liver solution. First six functions X_3^k(x), k = 1,…,6, for the simulation of the liver.

https://static-content.springer.com/image/art%3A10.1186%2F2194-3990-1-1/MediaObjects/40244_2012_Article_1_Fig6_HTML.jpg
Figure 6

Load-dependent modes of the liver solution. First six functions Y_3^k(s), k = 1,…,6, for the simulation of the liver. Note that, in this case, the functions Y^k(s) are defined on the boundary of the liver only.

Performance of the technique

Both problems introduced above can be solved off-line on standard computing facilities in reasonable amounts of time. In our case, the solution to the problem of liver palpation, for instance, was computed on a workstation equipped with two Nehalem cores at 2.33 GHz and 24 GB of RAM (64-bit). The simulations took some 20 h to complete.

The solution provided by the method agrees well with reference FE solutions obtained employing full Newton-Raphson iterative schemes. Notably, the computed solution can be stored in such a compact form that the on-line evaluation of the parametric solution (meta-model) is possible on handheld devices such as smartphones and tablets. For instance, an application has been developed for Android devices (we call it iPGD; it is freely downloadable from [42]) that runs the model on a Motorola Xoom tablet under Android 3.0 without problems (only the surface of the model is rendered, given the limitations of the Android OS); see Figure 7. The 25-Hz feedback rate necessary for continuous visual perception is achieved without problems.
https://static-content.springer.com/image/art%3A10.1186%2F2194-3990-1-1/MediaObjects/40244_2012_Article_1_Fig7_HTML.jpg
Figure 7

Appearance of the proposed method running in a tablet under Android. An example of the implementation of the iPGD application for the liver problem in a Motorola Xoom tablet.

For more sophisticated requirements, such as those dictated by haptic peripherals, a simple laptop (in our case, a MacBook Pro running Mac OS X 10.7.4, equipped with 4 GB of RAM and an Intel Core i7 processor at 2.66 GHz) is enough to achieve this performance. Performances even higher than 500 Hz have been reported for some implementations [43] (Additional file 1).

Additional file 1: Movie: performance of the proposed technique for the liver problem. This AVI file shows the performance of the proposed method in a test implementation of the technique. In it, the load position is controlled with the mouse, whereas the load orientation is controlled by a Wii joystick. (AVI 2 MB)

Conclusions

In this paper, we have reviewed a new methodology for Model Order Reduction in the context of computational surgery, proposed by the authors in a series of previous papers. This new methodology, coined Proper Generalized Decomposition, improves existing techniques in various ways. Firstly, it enables the incorporation of state-of-the-art constitutive models for soft tissues into systems requiring real-time performance (even reaching feedback rates on the order of 1 kHz). This performance is achieved through an off-line/on-line strategy in which a multi-dimensional response surface, or meta-model, is computed without the need for prior computer experiments. This meta-model is then evaluated, or particularized, at real-time rates very efficiently. This is possible due to the special form of the approximation of the solution, as a finite sum of separable functions. The meta-model is thus stored in memory as a file containing a series of vectors, with great memory savings.

Another fundamental aspect of this method is that it solves high-dimensional problems very efficiently. Gene regulatory networks modeled in a stochastic differential equation framework are a paradigm of such high-dimensional problems. Parameters, such as unknown properties of the system, can also be advantageously considered as new state space dimensions of the problem, rendering an even higher-dimensional problem that is still solved efficiently by the proposed technique.

The result is an appealing technique that allows state-of-the-art models for multiscale computational surgery to be solved at unprecedented feedback rates. After this academic validation, our current research effort is directed towards the clinical validation of the approach.

Authors’ information

IA and DG are associate professors at I3A. AL and FB are research associates at Ecole Centrale de Nantes. AA is currently a professor at ENSAM Angers, France. EC is a professor at I3A. FC is currently a professor at Ecole Centrale de Nantes.

Declarations

Acknowledgements

This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grant number CICYT-DPI2011-27778-C02-01. This support is gratefully acknowledged. Prof. Chinesta is also supported by the Institut Universitaire de France.

Authors’ Affiliations

(1)
Aragón Institute of Engineering Research, Universidad de Zaragoza
(2)
EADS Corporate International Chair, Ecole Centrale de Nantes
(3)
Arts et Métiers ParisTech, ENSAM Angers
(4)
Institut Universitaire de France

References

  1. Satava RM: Medical virtual reality: the current status of the future. In Healthcare in the Information Age. Edited by: Sieburg H, Weghorst S, Morgan K. Lansdale, PA: IOS Press; 1996.
  2. Delingette H, Ayache N: Soft tissue modeling for surgery simulation. In Computational Models for the Human Body, Handbook of Numerical Analysis (Ph. Ciarlet, Ed.). Edited by: Ayache N. Amsterdam: Elsevier; 2004:453–550.
  3. Delingette H, Ayache N: Hepatic surgery simulation. Commun ACM 2005, 48:31–36. [http://doi.acm.org/10.1145/1042091.1042116]
  4. Cotin S, Delingette H, Ayache N: Real-time elastic deformations of soft tissues for surgery simulation. In IEEE Transactions on Visualization and Computer Graphics. Edited by: Hagen H. IEEE Computer Society; 1999:62–73. [http://citeseer.ist.psu.edu/cotin98realtime.html]
  5. Bro-Nielsen M, Cotin S: Real-time volumetric deformable models for surgery simulation using finite elements and condensation. Comput Graphics Forum 1996, 15(3):57–66. 10.1111/1467-8659.1530057
  6. Hasty J, McMillen D, Isaacs F, Collins JJ: Computational studies of gene regulatory networks: in numero molecular biology. Nat Rev Genet 2001, 2:268–279.
  7. Hegland M, Burden C, Santoso L, MacNamara S, Booth H: A solver for the stochastic master equation applied to gene regulatory networks. J Comput Appl Math 2007, 205:708–724. 10.1016/j.cam.2006.02.053
  8. Sasai M, Wolynes PG: Stochastic gene expression as a many-body problem. Proc Nat Acad Sci 2003, 100(5):2374–2379. 10.1073/pnas.2627987100
  9. Holzapfel GA, Gasser TC: A new constitutive framework for arterial wall mechanics and a comparative study of material models. J Elasticity 2000, 61:1–48. 10.1023/A:1010835316564
  10. Taylor Z, Comas O, Cheng M, Passenger J, Hawkes D, Atkinson D, Ourselin S: On modelling of anisotropic viscoelasticity for soft tissue simulation: numerical solution and GPU execution. Med Image Anal 2009, 13(2):234–244. [Includes Special Section on Functional Imaging and Modelling of the Heart] [http://www.sciencedirect.com/science/article/B6W6Y-4TPF4P9-1/2/d51b8636b70ee79508c7f0472dcdb71a] 10.1016/j.media.2008.10.001
  9. Holzapfel GA, Gasser TC: A new constitutive framework for arterial wall mechanics and a comparative study of material models. J Elasticity 2000, 61: 1–48. 10.1023/A:1010835316564MATHMathSciNetView ArticleGoogle Scholar
  10. Taylor Z, Comas O, Cheng M, Passenger J, Hawkes D, Atkinson D, Ourselin S: On modelling of anisotropic viscoelasticity for soft tissue simulation: Numerical solution and GPU execution. Med Image Anal 2009, 13(2):234–244. . [Includes Special Section on Functional Imaging and Modelling of the Heart] [http://www.sciencedirect.com/science/article/B6W6Y-4TPF4P9-1/2/d51b8636b70ee79508c7f0472dcdb71a]. [Includes Special Section on Functional Imaging and Modelling of the Heart] 10.1016/j.media.2008.10.001View ArticleGoogle Scholar
  11. Karhunen K: Uber lineare methoden in der wahrscheinlichkeitsrechnung. Ann Acad Sci Fennicae, ser Al Math Phys 1946,. 37 37Google Scholar
  12. Loève MM: Probability theory. Princeton, NJ: Van Nostrand; 1963.MATHGoogle Scholar
  13. Lorenz EN: Empirical Orthogonal Functions and Statistical Weather Prediction. Cambridge, MA: MIT, Departement of Meteorology, Scientific Report Number 1, Statistical Forecasting Project; 1956.Google Scholar
  14. Park HM, Cho DH: The use of the Karhunen-Loève decomposition for the modeling of distributed parameter systems. Chem Engineer Sci 1996, 51: 81–98. 10.1016/0009-2509(95)00230-8View ArticleGoogle Scholar
  15. Ryckelynck D: A priori Hyperreduction method: an adaptive approach. J Comput Phys 2005, 202: 346–366. 10.1016/j.jcp.2004.07.015MATHView ArticleGoogle Scholar
  16. Tenenbaum JB, deSilva V, Langford JC: A global framework for nonlinear dimensionality reduction. Science 2000, 290: 2319–2323. 10.1126/science.290.5500.2319View ArticleGoogle Scholar
  17. Ryckelynck D, Chinesta F, Cueto E, Ammar A: On the a priori Model Reduction: Overview and recent developments. Arch Comput Methods Engineer 2006, 12(1):91–128.MathSciNetView ArticleGoogle Scholar
  18. Niroomandi S, Alfaro I, Cueto E, Chinesta F: Real-time deformable models of non-linear tissues by model reduction techniques. Comput Methods and Programs in Biomed 2008, 91(3):223–231. [http://www.sciencedirect.com/science/article/B6T5J-4SNPPVY-2/2/8a417e7f1371768b4c928d1f12fc7a0f] 10.1016/j.cmpb.2008.04.008View ArticleGoogle Scholar
  19. Niroomandi S, Alfaro I, Gonzalez D, Cueto E, Chinesta F: Real-time simulation of surgery by reduced-order modeling and X-FEM techniques. Int J Numeric Methods in Biomed Engineer 2012, 28(5):574–588. [http://dx.doi.org/10.1002/cnm.1491] 10.1002/cnm.1491MATHMathSciNetView ArticleGoogle Scholar
  20. Taylor Z, Crozier S, Ourselin S: A reduced order explicit dynamic finite element algorithm for surgical simulation. Med Imaging, IEEE Trans 2011, 30(9):1713–1721.View ArticleGoogle Scholar
  21. Barrault M, Maday Y, Nguyen N Patera: An 'empirical interpolation’ method: application to efficient reduced-basis discretization of partial differential equations. COMPTES RENDUS MATHEMATIQUE 2004, 339(9):667–672. 10.1016/j.crma.2004.08.006MATHMathSciNetView ArticleGoogle Scholar
  22. Chaturantabut S, Sorensen DC: Nonlinear model reduction via discrete empirical interpolation. SIAM J Sci Comput 2010, 32: 2737–2764. [http://dx.doi.org/10.1137/090766498] 10.1137/090766498MATHMathSciNetView ArticleGoogle Scholar
  23. Niroomandi S, Alfaro I, Cueto E, Chinesta F: Model order reduction for hyperelastic materials. Int J Numeric Methods in Engineering 2010, 81(9):1180–1206. [http://dx.doi.org/10.1002/nme.2733]MATHMathSciNetGoogle Scholar
  24. Niroomandi S, Alfaro I, Cueto E, Chinesta F: Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models. Comput Methods and Programs in Biomedicine 2012, 105: 1–12. [http://www.sciencedirect.com/science/article/B6T5J-50VGHDD-1/2/1201566766c0d280af9195bf07bfaf91] 10.1016/j.cmpb.2010.06.012View ArticleGoogle Scholar
  25. Garbey M, Bass B, Berceli S: Multiscale mechanobiology modeling for surgery assessment. Acta Mechanica Sinica 2012, 28: 1186–1202. . [10.1007/s10409–012–0133–4] [http://dx.doi.org/10.1007/s10409-012-0133-4]. [10.1007/s10409-012-0133-4] 10.1007/s10409-012-0133-4MATHView ArticleGoogle Scholar
  26. Turner TE, Schnell S, Burrage K: Stochastic approaches for modelling in vivo reactions. Comput Biol Chemis 2004, 28: 165–178. 10.1016/j.compbiolchem.2004.05.001MATHView ArticleGoogle Scholar
  27. Sreenath SN, Cho KH, Wellstead P: Modelling the dynamics of signalling pathways. Essays in Biochemis 2008, 45: 1–28. 10.1042/BSE0450001View ArticleGoogle Scholar
  28. Munsky B, Khammash M: The finite state projection algorithm for the solution of the chemical master equation. J Chem Phys 2006, 124(4):044104. [http://link.aip.org/link/?JCP/124/044104/1] 10.1063/1.2145882View ArticleGoogle Scholar
  29. Laughlin RB, Pines D: The theory of everything. Proc Nat Acad Sci 2000, 97: 28–31. [http://www.pnas.org/content/97/1/28.abstract] 10.1073/pnas.97.1.28MathSciNetView ArticleGoogle Scholar
  30. Gillespie DT: Exact stochastic simulation of coupled chemical reactions. J Phys Chemis 1977, 81(25):2340–2361. [http://pubs.acs.org/doi/abs/10.1021/j100540a008] 10.1021/j100540a008View ArticleGoogle Scholar
  31. Gillespie DT: Approximate accelerated stochastic simulation of chemically reacting systems. J Chem Phys 2001, 115: 1716–1733. 10.1063/1.1378322View ArticleGoogle Scholar
  32. Ammar A, Mokdad B, Chinesta F, Keunings R: A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modeling of complex fluids. J Non-Newtonian Fluid Mech 2006, 139: 153–176. 10.1016/j.jnnfm.2006.07.007MATHView ArticleGoogle Scholar
  33. Ammar A, Mokdad B, Chinesta F: A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modeling of complex fluids. Part II: transient simulation using space-time separated representations. J Non-Newtonian Fluid Mech 2007, 144: 98–121. 10.1016/j.jnnfm.2007.03.009MATHView ArticleGoogle Scholar
  34. Chinesta F, Ammar A, Cueto E: Recent advances in the use of the Proper Generalized Decomposition for solving multidimensional models. Archives of Computational Methods in Engineering 2010, 17(4):327–350. 10.1007/s11831-010-9049-yMATHMathSciNetView ArticleGoogle Scholar
  35. Chinesta F, Ladeveze P, Cueto E: A short review on model order reduction based on proper generalized decomposition. Archives of Comput Methods in Engineering 2011, 18: 395–404. 10.1007/s11831-011-9064-7View ArticleGoogle Scholar
  36. Le Bris C, Lelièvre T, Maday Y: Results and questions on a nonlinear approximation approach for solving high-dimensional partial differential equations. Constructive Approximation 2009, 30: 621–651. . [10.1007/s00365–009–9071–1] [http://dx.doi.org/10.1007/s00365-009-9071-1]. [10.1007/s00365-009-9071-1] 10.1007/s00365-009-9071-1MATHMathSciNetView ArticleGoogle Scholar
  37. Ladeveze P: Nonlinear Computational Structural Mechanics. New York: Springer; 1999.MATHView ArticleGoogle Scholar
  38. Pruliere E, Chinesta F, Ammar A: On the deterministic solution of multidimensional parametric models using the Proper Generalized Decomposition. Math Comput Simulation 2010, 81(4):791–810. 10.1016/j.matcom.2010.07.015MATHMathSciNetView ArticleGoogle Scholar
  39. Ammar A, Cueto E, Chinesta F: Reduction of the chemical master equation for gene regulatory networks using proper generalized decompositions. Int J Numerical Methods in Biomedical Engineering 2012, 28(9):960–973. 10.1002/cnm.2476MathSciNetView ArticleGoogle Scholar
  40. Niroomandi S, Gonzalez D, Alfaro I, Bordeu F, Leygue A, Cueto E, Chinesta F: Real-time simulation of biological soft tissues: a PGD approach. Int J Numerical Methods in BiomedEngineer 2013, 29(5):586–600. 10.1002/cnm.2544MathSciNetView ArticleGoogle Scholar
  41. Niroomandi S, Alfaro I, Cueto E, Chinesta F: Model order reduction in hyperelasticity: a Proper Generalized Decomposition approach. Int J Numerical Methods in Engineering 2013, 96(3):129–149.MathSciNetGoogle Scholar
  42. Bordeu F, Leygue A, Alfaro I, González D, Modesto D, Cueto E, Huerta A, Chinesta F: iPGD, an interactive PGD application for Android . 2012.http://centrale-nantes-composites.comGoogle Scholar
  43. Bordeu F, Chinesta F, Leygue A, Cueto E, Niroomandi S: Reduction de modele par PGD applique la simulation en temps reel de solide deformables. In 10e Colloque National en Calcul des Structures Edited by: Bonnet M, Cornuault C, Pagano S. Published on-line at http://hal.archives-ouvertes.fr2011 Published on-line at

Copyright

© Alfaro et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.