Transfer function approach to understanding periodic forcing of signal transduction networks

Signal transduction networks are responsible for transferring biochemical signals from the extracellular to the intracellular environment. Understanding the dynamics of these networks helps us understand the biological processes they control. Signals are often delivered in pulses and oscillations; understanding the dynamics of these networks under pulsatile and periodic stimuli is therefore useful. One tool for doing so is the transfer function. This tutorial outlines the basic theory behind the transfer function approach and walks through examples of simple signal transduction networks.


Introduction
Signal transduction networks are systems of biomolecules that deliver signals from one point in space to another, such as from the cell surface to the nucleus. This process facilitates biochemical changes in the cell that dictate phenotypic changes, such as cell growth, death, differentiation and migration. Different input patterns on the same network can yield different phenotypic outcomes. As such, there has been much focus on understanding the dynamics of networks in response to different inputs - in particular, pulses and oscillations, which occur physiologically for some hormones. Studying networks under periodic forcing not only mimics these physiological patterns but also provides a general approach to understanding system dynamics in any network. In the literature, this is often called the frequency-domain approach (as opposed to the time-domain approach). For a linear system under periodic forcing, the output is at the same frequency as the input but is attenuated in amplitude and shifted in phase. A compact mathematical representation of the ensuing dynamics is called the transfer function. The transfer function method from conventional control theory is a popular and highly useful tool for studying this frequency domain. In this tutorial we espouse the use of a transfer function approach to understanding signal transduction networks. The paper is organised as follows. In sections 1.1 and 1.2 we give experimental and theoretical motivations for why studying biochemical networks in cells under periodic forcing is useful. In section 2 we introduce the reader to the mathematical background. In section 3, we illustrate the mathematical approaches covered in section 2 and apply them to some simple network motifs, yielding quantitative results. Section 4 covers a qualitative approach to gleaning the essential features of the transfer function given a network architecture.
In section 5, we discuss some of the limitations of the transfer function approach and give an example where the transfer function fails. In section 6 we conclude with some final remarks.

Experimental motivations
With periodic forcing experiments, questions such as whether signal transduction networks can be entrained by external periodic forcing can be investigated [1].
Especially with advancements in technologies like microfluidics and optogenetics, performing these periodic forcing experiments has become increasingly viable [2,3].
Studying system dynamics in the frequency domain framework also presents a variety of advantages over its time domain counterpart.
The first area concerns signal processing and biological computing. Frequency domain methods allow extra information to be encapsulated into signals. Non-oscillatory signals lack the 'degrees of freedom' that sinusoidal ones have, such as periods and modulations, into which information can be encoded [4]. The periodic repetition of signals also allows for removal of unwanted information like transient effects and noise. Related to this information encoding is the ability to measure the information carrying capacity or bandwidth [5]. Bandwidth is obtained by measuring pathway activity in response to inputs at varying frequencies. It is a property often neglected in the time domain. Considering signal transduction networks as machinery with bandwidth opens doors towards a computational picture of biology. Just as two networks can differ in information carrying capacity, they can also have different response activities to the same signal. The window of optimal response activity is called the dynamic range. Bandwidth and dynamic range are closely related quantities, and understanding one helps with understanding the other. This is especially seen in signal transduction pathways that consist of multiple components operating on different timescales [6]. With these specific windows of information capacity and dynamic response, we can see how using the right frequency of input signals in complex biological systems allows us to observe special functionalities of biological systems and probe for the existence of system components [3, 7-13].
The second area concerns biological engineering and medicine. Understanding dynamic ranges also presents avenues into controlling and engineering biological systems. This is especially useful in medicine and drug development - or 'frequency domain medicine'. In embryonic development, initial vertebrae formation is controlled by oscillating levels of gene expression that travel between the two ends of the embryo along a tissue called the presomitic mesoderm. There is a specific frequency, which depends on the species, required at this stage for successful vertebrae formation. Frequencies too high or low cause abnormalities in vertebral development. This frequency is determined by the activity of the Notch, Wnt and fibroblast growth factor (FGF) pathways. Controlling these pathways allows us to control embryonic development [14][15][16]. Another example is the extracellular-regulated-kinase (ERK)/mitogen-activated protein kinase (MAPK) pathway, which is responsible for cell proliferation. ERK pulses can be frequency modulated, and different modulations yield different proliferation rates. Knowing when and how to control frequency modulation in the ERK/MAPK pathway can allow us to control cell growth [7].

Theoretical motivations
Describing input signals in terms of periodically varying functions means that, via Fourier analysis, any input signal can be decomposed into sinusoidal components. The transfer function approach enables a compact description of network dynamics. As we shall show below, the transfer function approach simplifies the solution of differential equations into the solution of algebraic equations. Transfer functions are modular and can be combined following simple mathematical rules to represent large signal transduction networks.

Mathematical background
The transfer function approach makes one very basic assumption - that our input-output system can be described by a set of dynamical equations in continuous time. Given this assumption, a sequence of logical consequences follows and gives rise to our transfer function.
Dynamical equations in continuous time take the form of differential equations. Differential equations can be linearised, and linear differential equations in time have an equivalent form in frequency. These frequency-domain equations give us our transfer function.
In this section, we will walk through this logic together-covering the mathematical background along the way. We draw upon the formalisms presented in [17][18][19].

Dynamical equations
Physical processes can often be modelled using equations that evolve in time. We call these dynamical equations. For continuous time, dynamical equations take the form of differential equations.
To illustrate this, consider a system S that evolves in continuous time t. In the case at hand, S is our signal transduction network. Suppose that S can be fully described by a set of variables {x_1(t), ..., x_n(t)}. That is, at any point in time t = t', the state of S is defined by the set of values {x_1(t'), ..., x_n(t')}. For signal transduction networks, each x_i(t) might represent the quantity of a different biochemical species present in the network. These x_i(t) terms are called state variables. The column vector of all state variables forms the state vector x(t) = [x_1(t), ..., x_n(t)]^T. Let us further suppose that we can interact with the system, such as by pipetting species in and out. That is, S accepts inputs from and returns outputs to the external environment. Denote the set of inputs as {u_1(t), ..., u_m(t)} and outputs as {y_1(t), ..., y_o(t)}. The respective input and output vectors are u(t) = [u_1(t), ..., u_m(t)]^T and y(t) = [y_1(t), ..., y_o(t)]^T. Since the system is fully described by its state variables, the time evolution of S is equivalent to the time evolution of {x_i(t)}.
If each x_i(t) can be described by an ordinary differential equation relating its first time derivative to a function of the state and input variables, then we have all the dynamical information about the system required to describe its time evolution packaged into a set of n equations. That is, the time evolution of S is captured in the set of ordinary differential equations

dx_i/dt = F_i(x_1, ..., x_n, u_1, ..., u_m), i = 1, ..., n. (8)

This can be expressed in vector form as

dx/dt = F(x, u), (9)

where F = [F_1, ..., F_n]^T. Similarly, if each output variable can be expressed in terms of the state and input variables, then the dynamical information for the outputs is packaged into the following set of o equations

y_k = G_k(x_1, ..., x_n, u_1, ..., u_m), k = 1, ..., o, (11)

or alternatively in vector form

y = G(x, u), (12)

where G = [G_1, ..., G_o]^T. Together equations (8) and (11), or equivalently (9) and (12), make up what is known as the state space representation of a dynamical system. They are also individually referred to as the state equation and the state observation equation, respectively. With the right choice of elements in sets {x_i(t)} and {y_k(t)}, one can always describe a continuous time dynamical system by equations (8) and (11), or equivalently (9) and (12). As a simple example, consider a second-order dynamical equation of the form d^2 z/dt^2 + a dz/dt + b z = u_1 with output equation y_1 = z. With the choice of state variables x_1 = z and x_2 = dz/dt and, consequently, output variable y_1 = x_1, we can express the original equations in forms (8) and (11) as dx_1/dt = x_2, dx_2/dt = -b x_1 - a x_2 + u_1 and y_1 = x_1.
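The construction above can be sketched numerically. The following minimal example takes a hypothetical second-order system d^2 z/dt^2 + a dz/dt + b z = u_1 (the parameter values are arbitrary), rewrites it with state variables x_1 = z and x_2 = dz/dt, and integrates the resulting state equation with forward Euler:

```python
# Sketch: a second-order ODE written in state-space form and integrated
# with forward Euler. The system and its parameters are illustrative.

def state_derivative(x, u1, a=3.0, b=2.0):
    """F(x, u): returns dx/dt for state vector x = [x1, x2]."""
    x1, x2 = x
    return [x2, u1 - a * x2 - b * x1]

def simulate(u1, t_end=20.0, dt=1e-3):
    """Forward-Euler integration from rest; returns the final state."""
    x = [0.0, 0.0]
    for _ in range(int(t_end / dt)):
        dx = state_derivative(x, u1)
        x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
    return x
```

For a constant input u_1, the state settles to the equilibrium z = u_1/b, which previews the notion of equilibria discussed next.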

Equilibria
A system may have one or more special combinations of values of state and input variables such that its state remains stationary in time. That is, there may exist specific sets of values {x_ie} and {u_je} which, once assumed by the system, prevent {x_i(t)} from changing at future times. Such special sets of values, when substituted into equation (8), reduce each row to zero:

F_i(x_e, u_e) = 0, i = 1, ..., n, (20)

telling us that the rate of change of each state variable is zero. Each of these special sets of values defines a special coordinate in parameter space. These special coordinates are termed equilibrium coordinates, or simply equilibria.

Linearised dynamical equations
We wish to arrive at the transfer function from our state space formulation. However, there is a problem: equations (8) and (11) can be nonlinear. Nonlinear equations do not (strictly) have transfer functions; only linear equations do. A nonlinear equation may, however, have local regions where it behaves approximately linearly. In these regions, an approximate transfer function can be obtained. In this sense, the closest equivalent to the transfer function of a nonlinear equation is the transfer function of its local linear equations.
To obtain these local linear equations, we must linearise the original nonlinear forms about their equilibria. Linearisation involves approximating a function with its first order Taylor polynomial about a point of interest. The first order Taylor approximation of a function f(ξ) about coordinate ξ = ξ' is:

f(ξ) ≈ f(ξ') + (df/dξ)|_ξ' (ξ - ξ'). (22a)

Equivalently, we can recentre the function by defining ∆f(ξ) = f(ξ) - f(ξ') and ∆ξ = ξ - ξ' and say:

∆f(ξ) ≈ (df/dξ)|_ξ' ∆ξ. (22b)

At coordinates ξ' where f(ξ') = 0, the two approximations coincide:

f(ξ) ≈ (df/dξ)|_ξ' ∆ξ. (22c)

That is, for these special coordinates, the recentred approximation ∆f(ξ) is equal to the original approximation f(ξ).
If the function we are trying to linearise is one of the dynamical equations in equation (8), that is f(ξ) = F_i(x, u), then these special ξ' coordinates are just the equilibria that we mentioned in section 2.2. We can see this more explicitly if we Taylor approximate F_i about an arbitrary equilibrium coordinate (x_e, u_e):

F_i(x, u) ≈ F_i(x_e, u_e) + Σ_c (∂F_i/∂x_c)|_(x_e, u_e) ∆x_c + Σ_c (∂F_i/∂u_c)|_(x_e, u_e) ∆u_c. (24)

But, as we have established in equation (20), F_i(x_e, u_e) = 0. Therefore, equation (24) reduces to the form of equation (22c). Approximating the dynamical equations in (8) and (11) by their first order Taylor polynomials about an arbitrary equilibrium coordinate (x_e, u_e) gives

d∆x_i/dt ≈ Σ_c (∂F_i/∂x_c) ∆x_c + Σ_c (∂F_i/∂u_c) ∆u_c

and

∆y_k ≈ Σ_c (∂G_k/∂x_c) ∆x_c + Σ_c (∂G_k/∂u_c) ∆u_c.

The corresponding matrix forms of these two linearised systems of equations are

d∆x/dt = A(t)∆x + B(t)∆u (29)

and

∆y = C(t)∆x + D(t)∆u, (30)

where A, B, C and D are derivative matrices defined by their entries at row r and column c:

A_rc = (∂F_r/∂x_c)|_(x_e, u_e), B_rc = (∂F_r/∂u_c)|_(x_e, u_e), C_rc = (∂G_r/∂x_c)|_(x_e, u_e), D_rc = (∂G_r/∂u_c)|_(x_e, u_e).

So, given a system near equilibrium, we can approximate its dynamics from equations (9) and (12) with the simpler versions in equations (29) and (30).
If the entries in these derivative matrices are time independent, then equations (29) and (30) reduce to

d∆x/dt = A∆x + B∆u (32)

and

∆y = C∆x + D∆u. (33)

One might imagine the time-dependent matrices A(t), B(t), C(t) and D(t) representing time varying rate constants of a signal transduction network, and their time-independent counterparts A, B, C and D representing fixed rate constants.
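In practice, the derivative matrices need not be computed by hand. The sketch below estimates A and B by central finite differences about an equilibrium; the one-state system F used here (saturating, Hill-type production with linear degradation) is an illustrative assumption, not a specific pathway model:

```python
# Sketch: numerical linearisation. A and B are estimated by central
# finite differences of F about an equilibrium (x_e, u_e).

def F(x, u):
    """dx/dt for a hypothetical one-state system: saturating production
    u^2/(1 + u^2) with linear degradation at rate 0.5."""
    return [u[0] ** 2 / (1.0 + u[0] ** 2) - 0.5 * x[0]]

def jacobian(f, x, u, wrt="x", h=1e-6):
    """Central-difference Jacobian of f with respect to x or u."""
    base = x if wrt == "x" else u
    n, m = len(f(x, u)), len(base)
    J = [[0.0] * m for _ in range(n)]
    for c in range(m):
        hi, lo = list(base), list(base)
        hi[c] += h
        lo[c] -= h
        f_hi = f(hi, u) if wrt == "x" else f(x, hi)
        f_lo = f(lo, u) if wrt == "x" else f(x, lo)
        for r in range(n):
            J[r][c] = (f_hi[r] - f_lo[r]) / (2 * h)
    return J
```

At the equilibrium (x_e, u_e) = (1, 1) of this example, the estimates recover A = [-0.5] and B = [0.5], matching the analytic partial derivatives.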

The transfer function
The transfer function is an equivalent representation to the state space representation in equations (32) and (33). It is to the frequency domain what the state space representation is to the time domain. The transfer function describes how a system amplitude modulates and phase shifts incoming sinusoids as a function of frequency.
In terms of cell signalling networks, transfer functions tell us how sinusoidal biochemical inputs are amplified and delayed in the process of turning into outputs (see figure 1).
For each pair of inputs and outputs in a system, there is an associated transfer function. That transfer function describes the contribution of that input to its associated output. For a system of one input and one output, a single transfer function exists, see figure 2.
For a system of one input and two outputs, two transfer functions exist: one for the contribution of input 1 to output 1 (T 11 ), another for that of input 1 to output 2 (T 21 ), as depicted in figure 3.
When multiple transfer functions exist, they are often written in matrix form, with output indices along the rows and input indices along the columns. The matrix for figure 3 would be

T = [T_11; T_21]. (34)

For a system of two inputs and one output, two transfer functions also exist, as shown in figure 4. However, this time, one describes the contribution of input 1 to output 1 (T_11) and the other that of input 2 to output 1 (T_12). The associated transfer function matrix now looks like

T = [T_11  T_12]. (35)

Following the same logic, if we have a two-input, two-output system (as in figure 5), there are now four transfer functions, with associated transfer function matrix

T = [T_11  T_12; T_21  T_22]. (36)

For an arbitrary system with m inputs and n outputs, there are a total of m × n transfer functions, described by the n × m matrix

T = [T_11 ... T_1m; ... ; T_n1 ... T_nm]. (37)

Laplace transform
State space representations can be converted into transfer functions. To do this, we will need to employ a mathematical tool called the Laplace transform. For a piecewise continuous function f(t) defined for all non-negative real values of t, the Laplace transform L{f(t)}(s) is defined as:

L{f(t)}(s) = ∫_0^∞ f(t) e^(-st) dt

for all complex numbers s such that the integral exists, and by analytic continuation where it does not. If t represents time, as is often the case, then s represents complex temporal frequency. L{f(t)}(s) is often shortened to F(s).
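The definition can be checked numerically. The sketch below approximates the Laplace integral with the trapezoidal rule on a truncated interval (real s only, for simplicity) and recovers the textbook transform L{e^(-at)} = 1/(s + a):

```python
# Sketch: numerical Laplace transform via the trapezoidal rule on [0, t_max].
# The truncation assumes f(t) e^(-s t) has decayed to ~0 by t_max.
import math

def laplace_numeric(f, s, t_max=40.0, dt=1e-3):
    """Trapezoidal approximation of the integral of f(t) * exp(-s*t)."""
    n = int(t_max / dt)
    total = 0.5 * (f(0.0) + f(t_max) * math.exp(-s * t_max))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt
```

For f(t) = e^(-t) and s = 2, the result is close to 1/(s + 1) = 1/3. For complex s, the same code works with cmath.exp in place of math.exp.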

From state space to transfer function
Given a state and state observation equation pair of form (32) and (33), we can obtain the corresponding transfer function matrix T(s) as

T(s) = C(sI - A)^(-1) B + D, (39)

where I is the identity matrix and s is the complex frequency. The derivation employs the Laplace transform and some algebra as follows. Laplace transforming equations (32) and (33), with zero initial conditions, gives

s∆X(s) = A∆X(s) + B∆U(s) (40a)

and

∆Y(s) = C∆X(s) + D∆U(s). (40b)

Rearranging equation (40a) gives

∆X(s) = (sI - A)^(-1) B∆U(s). (41)

Substituting this into equation (40b) gives

∆Y(s) = [C(sI - A)^(-1) B + D]∆U(s). (42)

The transfer function T(s) is formally defined as the ratio of the Laplace transformed output to the Laplace transformed input - that is, ∆Y(s)/∆U(s). Rearranging equation (42) into this form gives the expression we wanted in equation (39). This matrix has the structure of that in equation (37).
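Equation (39) is straightforward to evaluate at any given complex frequency. The sketch below does this for single-input single-output systems using plain nested lists and a small Gaussian-elimination solve; the matrices in the usage note are illustrative:

```python
# Sketch: evaluating T(s) = C (sI - A)^(-1) B + D at one complex frequency s,
# for a single-input single-output system. Matrices are nested lists.

def transfer_function(A, B, C, D, s):
    """A is n x n, B is n x 1, C is 1 x n, D is scalar; returns scalar T(s)."""
    n = len(A)
    # Build M = sI - A and right-hand side v = B, then solve M x = v.
    M = [[(s if r == c else 0) - A[r][c] for c in range(n)] for r in range(n)]
    v = [B[r][0] for r in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    x = [0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (v[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return sum(C[0][c] * x[c] for c in range(n)) + D
```

For example, A = [-2], B = [1], C = [1], D = 0 gives T(s) = 1/(s + 2), so T(0) evaluates to 0.5.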
Equation (39) is often the most systematic way to compute a transfer function from a given state space model. However, it may not be the most efficient or physically meaningful way to obtain a transfer function from a set of dynamical equations that is not yet in state space form. Although we have established that all systems of ordinary differential equations can be expressed in the state space forms (32) and (33), with an appropriate choice of state and output variables, the process of doing so sometimes requires tricky changes of variables that have no obvious physical meaning in the original system of interest. This may detract from physical insight and turn the calculation into a purely computational exercise with no contextual insight into the biological system. Such is especially the case if derivatives of inputs are present in the dynamical equations, for example a du_1/dt term.
In these cases, it may be more fruitful to first combine the equations (without employing changes of state variables) such that only terms containing the input, output and their derivatives remain. Then performing a Laplace transform and transposing the resulting equation to obtain the ratio of the Laplace transformed output to the Laplace transformed input will return the transfer function. Equivalently, one can start by Laplace transforming the individual differential equations and then combining them to get this transfer function ratio. The preferred order of these steps will depend on the system of equations at hand. We will see an example of this later in section 3.4. It is important to note that we have not done any new mathematics here. All these steps of manual algebraic manipulation and Laplace transformation were already built into the machinery of equation (39), except that this machinery requires a very specific structure of input (i.e. the matrices in equations (32) and (33)) that is not convenient for certain dynamical equations.

Angular frequency-response functions
Let s = σ + iω, where σ and ω are real numbers. In the special case of σ = 0, that is, for purely imaginary complex frequencies, we obtain the function T(s = iω). This is commonly referred to as the (angular) frequency-response function and provides two useful pieces of information: the magnitude frequency-response function M and the phase frequency-response function ϕ. Functions M and ϕ are defined as follows:

M(ω) = |T(iω)| = √(ℜ[T(iω)]² + ℑ[T(iω)]²) (44a)

ϕ(ω) = arg T(iω) = arctan(ℑ[T(iω)]/ℜ[T(iω)]), (44b)

where ℜ and ℑ denote real and imaginary components. The plots of M and ϕ against ω are commonly referred to as the Bode magnitude plot and Bode phase plot, respectively.

Modularity of transfer functions
Transfer functions are modular. We can obtain new transfer functions from combinations of smaller transfer functions arranged in special architectures. These architectures are called block diagrams and the rules for combining them are called block diagram algebra. Most conventional control theory texts have an extensive list of block diagram algebra rules. Here, we highlight three rules that we believe will serve the reader well in constructing most signal transduction networks. The first is the series rule, which combines two transfer functions in series via a product (see figure 6 for a schematic representation):

T = T_1 T_2. (45)

The second is the parallel rule, which combines two transfer functions in parallel (figure 7) via a sum:

T = T_1 + T_2. (46)

The last is the closed loop rule (figure 8), which combines two transfer functions in a closed loop via a ratio:

T = T_1/(1 + T_1 T_2). (47)
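These three rules are simple to express in code. Representing each transfer function as a callable s → T(s), the series, parallel and closed loop combinations (the closed loop written here assuming negative feedback, as in equation (47)) become higher-order functions:

```python
# Sketch: the three block-diagram rules as combinators on transfer
# functions represented as callables s -> T(s).

def series(T1, T2):
    """Series rule: T = T1 * T2."""
    return lambda s: T1(s) * T2(s)

def parallel(T1, T2):
    """Parallel rule: T = T1 + T2."""
    return lambda s: T1(s) + T2(s)

def closed_loop(T1, T2):
    """Closed loop rule (negative feedback of T2 around T1)."""
    return lambda s: T1(s) / (1 + T1(s) * T2(s))
```

As a check, a first-order block T_1(s) = 1/(s + 1) under unity negative feedback reduces to 1/(s + 2), as expected from equation (47).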

Quantitative examples
To get a better grasp of the ideas in section 2, let us apply them to some simple examples.

Simple regulation (linear)
Consider the following network (figure 9) in which an input u_1 produces an output x_1 at constant rate k_1, while x_1 degrades at constant rate k_2. In the literature this is sometimes referred to as simple regulation under mass action. Suppose the dynamics of this process can be described by the differential equation

dx_1/dt = k_1 u_1 - k_2 x_1 (48)

and output equation

y_1 = x_1. (49)

Figure 9. Schematic of simple regulation under mass action.
Since this system is already linear, no linearisation is necessary. In our earlier notation, we can say

F_1(x_1, u_1) = k_1 u_1 - k_2 x_1 and G_1(x_1, u_1) = x_1.

The corresponding derivative matrices are:

A = [-k_2], B = [k_1], C = [1], D = [0].

Employing equation (39), we obtain the transfer function

T(s) = k_1/(s + k_2). (53)

This has real and imaginary components

T(iω) = k_1 k_2/(k_2² + ω²) - i k_1 ω/(k_2² + ω²),

and so, from equations (44a) and (44b), the corresponding frequency response functions are

M(ω) = k_1/√(k_2² + ω²) and ϕ(ω) = -arctan(ω/k_2).

Plotting M and ϕ for specific parameter values will give us a better sense of how the network amplitude modulates and phase shifts inputs into outputs.
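The closed forms for M and ϕ can be double-checked by evaluating T(iω) = k_1/(iω + k_2) directly with Python's cmath; the parameter values below are arbitrary:

```python
# Sketch: frequency response of simple regulation, T(s) = k1/(s + k2),
# evaluated on the imaginary axis. Parameter values are illustrative.
import cmath
import math

def M(omega, k1=2.0, k2=0.5):
    """Magnitude frequency-response |T(i*omega)|."""
    return abs(k1 / (1j * omega + k2))

def phi(omega, k1=2.0, k2=0.5):
    """Phase frequency-response arg T(i*omega)."""
    return cmath.phase(k1 / (1j * omega + k2))
```

At the cutoff ω = k_2 this gives M = k_1/(k_2 √2) and ϕ = -π/4, agreeing with the closed forms above.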
We can observe two frequency windows of dominant behaviour: one below ω = k_2 and the other above it. In the former, M is approximately constant at its maximum value and ϕ is close to zero. This means inputs are passed with maximal amplitude and relatively little phase shift. In the latter, M rolls off and ϕ tends towards -π/2; here, signals are significantly dampened and phase shifted. This behaviour is indicative of a low pass filter with cutoff frequency ω = k_2.
We can also see how k_1 and k_2 affect the frequency response of the system, as shown in figure 10. For Bode magnitude plots, k_1 translates curves vertically while k_2 translates curves horizontally. For Bode phase plots, only k_2 transforms the curves, translating them horizontally. This means the production rate sets the gain and the degradation rate sets the cut-off frequency.

Simple regulation (nonlinear)
Let us consider the same network (figure 9) except, this time, the production rate is nonlinear. Suppose the state and state observation equations are now

dx_1/dt = k_a f(u_1) - k_2 x_1 and y_1 = x_1,

where f is a nonlinear function of the input. Again, in our earlier notation, we can write

F_1(x_1, u_1) = k_a f(u_1) - k_2 x_1 and G_1(x_1, u_1) = x_1.

If we consider the system about some equilibrium (x_1e, u_1e), then the corresponding derivative matrices are:

A = [-k_2], B = [k_a f'(u_1e)], C = [1], D = [0].

Realising that the entries in A and B are just constants, we can rename them, defining

k_1 = k_a f'(u_1e). (61)

But this simply returns the same case as our first example in equation (52), giving us the same transfer function form and Bode plots as previously in equation (53). Here, we can see that taking the transfer function of nonlinear simple regulation returns that of linear simple regulation. There is one key difference, however: k_1 cannot be changed independently; rather, it must be varied via k_a through equation (61). This may restrict the values that k_1 can assume, depending on the values that k_a can take.

Reversible reaction
Consider a slightly more complex architecture (figure 11) in which an input u_1 produces a species x_1, which in turn produces another species x_2, which in turn reproduces x_1 again. Let us further allow x_1 and x_2 to degrade. This is an externally driven reversible reaction with degradation under mass action. Suppose the dynamics of this process can be described by the set of linear differential equations

dx_1/dt = k_1 u_1 - (k_2 + c_1) x_1 + k_3 x_2,
dx_2/dt = k_2 x_1 - (k_3 + c_2) x_2,

where k_2 and k_3 are the forward and backward conversion rates and c_1 and c_2 are the degradation rates, and output equation

y_1 = x_1.

Our F and G expressions are therefore

F_1 = k_1 u_1 - (k_2 + c_1) x_1 + k_3 x_2, F_2 = k_2 x_1 - (k_3 + c_2) x_2 and G_1 = x_1.

The corresponding derivative matrices, evaluated at equilibrium (x_1e, x_2e, u_1e), are:

A = [-(k_2 + c_1), k_3; k_2, -(k_3 + c_2)], B = [k_1; 0], C = [1, 0], D = [0].

Figure 11. Reversible reaction between two degrading species driven by an external input.
Employing equation (39), we can obtain the transfer function

T(s) = k_1 (s + k_3 + c_2) / [(s + k_2 + c_1)(s + k_3 + c_2) - k_2 k_3].

This is the transfer function from input u_1 to output x_1. Following the same method as before (see also section 2.7), we obtain the Bode plots shown in figures 12-16. Since we have a handful of parameters, let us set each one to unity and take turns varying each to see their effect on the Bode plots.
By varying each parameter from a nominal value, in this case (k_1, k_2, k_3, c_1, c_2) = (1, 1, 1, 1, 1), we can see how sensitive the frequency response functions are to changes in each parameter. Figure 12, in which the Bode magnitude curves are significantly translated while the Bode phase plots remain invariant, provides a clear example of how the transfer function gives insight into sensitive or robust parameters that can be manipulated to control a system.
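For exploring this parameter dependence numerically, the transfer function can be coded directly. The sketch below assumes the mass-action reading dx_1/dt = k_1 u_1 - (k_2 + c_1)x_1 + k_3 x_2 and dx_2/dt = k_2 x_1 - (k_3 + c_2)x_2 (the rate labels are our assumptions), for which equation (39) gives the closed form used in the code:

```python
# Sketch: reversible-reaction transfer function from input u1 to output x1,
# assuming mass-action rates k1 (production), k2/k3 (conversion) and
# c1/c2 (degradation). All defaults are the nominal unit values.

def T(s, k1=1.0, k2=1.0, k3=1.0, c1=1.0, c2=1.0):
    """T(s) = k1 (s + k3 + c2) / [(s + k2 + c1)(s + k3 + c2) - k2 k3]."""
    num = k1 * (s + k3 + c2)
    den = (s + k2 + c1) * (s + k3 + c2) - k2 * k3
    return num / den
```

With unit parameters, the zero-frequency gain is T(0) = 2/3, and |T(iω)| decreases with ω, consistent with low-pass behaviour.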

Negative feedback loop
Consider the following set of dynamical equations with input u_1 and output x_2. This is a special model of a negative feedback loop called the Goodwin model [20]; a schematic of this model is shown in figure 17. As we mentioned at the end of section 2.6, these dynamical equations include the derivative of the input. We will therefore take the manual route of computing the transfer function.
Let us start by linearising our equations about an equilibrium point (x_1e, x_2e, u_1e), where ∆u_1 = u_1 - u_1e, ∆x_1 = x_1 - x_1e and ∆x_2 = x_2 - x_2e. We wish to obtain an equation for the Laplace transform of our output, ∆X_2(s), in terms of the Laplace transform of our input, ∆U_1(s). This requires us to eliminate ∆X_1 and its derivative. We begin by combining the first two equations: differentiating equation (72b) and then substituting in equation (72a). Recall that the Laplace transform turns differential equations into algebraic equations; this greatly simplifies the next steps. Transforming both equations, rearranging equation (76a) for ∆X_1 and then substituting into equation (76b) gives the equation for ∆X_2(s) in terms of ∆U_1(s) that we wanted, and hence the transfer function. A Bode plot of this transfer function for the Goodwin model is shown in figure 18. As expected of negative feedback, bandpass filtering behaviour is observed.

Qualitative examples 4.1. Ras-Erk MAPK cascade
The example in figure 19 aims to highlight the qualitative predictive power of the transfer function approach. The Ras-Erk MAPK cascade is a low-pass filtering system [21]. It consists of four biochemical species, starting with Ras and ending with Erk (see left hand panel of figure 19). In the experiments described in [21], Ras is activated optogenetically by an external light source. This allows Ras and, hence, the Ras-Erk cascade to be externally driven. By measuring the gain of Erk (output) with respect to light-activated SOS (input) at different input frequencies, the modulation frequency response shown in the right hand panel of figure 19 was observed.
Suppose we do not know anything about the rate constants of this system. Let us try to see if we can deduce a plausible functional form of the transfer function corresponding to this network architecture which can recreate the amplitude modulation plot.
Qualitatively inspecting the cascade architecture, we can identify sequences of simple regulation-like steps (see figure 20).
Replacing each node-to-node step in the chain with a transfer function block gives the right-most cascade of transfer functions (see figure 20). Employing the series rule in equation (45), we obtain the following transfer function:

T(s) = k_1 k_3 k_5 / [(s + k_2)(s + k_4)(s + k_6)].

Choosing an arbitrary set of parameters, k_1 = k_2 = k_3 = k_4 = k_5 = k_6 = 1, we obtain the plot in figure 21, which qualitatively matches the blue experimental low pass trend in figure 19. In addition, this gives us a possible phase response curve (figure 21, lower panel).
Again, we stress that this example serves to illustrate qualitative predictions and not reproduce the quantitative result.
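The qualitative low-pass claim is easy to reproduce numerically. The sketch below multiplies first-order blocks according to the series rule; the unit gains and rates are illustrative choices, not fitted values:

```python
# Sketch: a cascade of three simple-regulation blocks in series, as in the
# Ras-Erk example. Gains and rates are illustrative unit parameters.
import math

def T_cascade(s, gains=(1.0, 1.0, 1.0), rates=(1.0, 1.0, 1.0)):
    """Series rule: the cascade transfer function is the product of blocks."""
    out = 1.0 + 0j
    for k, r in zip(gains, rates):
        out *= k / (s + r)  # one simple-regulation block per step
    return out
```

With unit parameters, |T(0)| = 1 and |T(iω)| falls monotonically with ω (for instance, |T(i)| = (1/√2)³), reproducing the low-pass trend.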

p53-mdm2-ATM feedback loop
This example aims to highlight again the qualitative predictive power of the transfer function approach. The p53-mdm2-ATM feedback loop is involved in cell response to DNA damage. By subjecting cells to noisy DNA-damaging radiation (input) and working out the power spectra of the consequent p53 expression levels (output), the magnitude frequency-response curve for p53-mdm2-ATM was observed [22].
Again, suppose we do not know anything about the rate constants of this system. Let us see if we can deduce a plausible functional form for the transfer function of this network architecture that can recreate the amplitude modulation plot in the solid red curve on the right of figure 22.
By inspection of the network depicted in figure 22, we can see that each branch of the network resembles a simple regulation system. We can further identify two negative feedback loops: the outer loop between ATM and p53 in figure 23(a), and the inner loop between p53 and mdm2 in figure 23(b). Each loop resembles an interaction between a forward simple regulation step and a backward simple regulation step. The two loops share a common species, p53. If we suppose the output of p53 is shared by both ATM and mdm2 as inputs, then we can further suppose that the transfer function for both of these is the same, as illustrated in green. Flipping the block diagram in figure 23(b) horizontally so that its inputs and outputs align with those of the feedback branch in the block diagram in figure 23(a), we can nest them together into the double closed loop structure shown in figure 23(c).
Applying the closed loop rule in equation (47) to the innermost loop in figure 23(c) collapses it into a single equivalent block. The block diagram is then reduced to the single loop depicted in figure 24.
Applying the rule again to this remaining loop gives the transfer function of the entire system. Choosing an arbitrary set of values for the numerators, k_1 = k_3 = k_5 = 1, and denominators, k_2 = k_4 = k_6 = 1/2, we obtain the Bode magnitude plot in figure 25, which qualitatively matches the form of the solid red curve in figure 22.

Limitations
Our discussion up until now has focussed on a class of dynamical systems known as linear time invariant systems. Systems that can be described by the state space equations (32) and (33) are linear time invariant. These systems form the least complex of the four classes of dynamical systems (see figure 26).
The transfer function method strictly only applies to linear time invariant systems. As we move into the more complex systems, the method breaks down. Let us discuss some shortcomings to the method.

Superposition
Linear time invariant systems obey superposition. That is, the output of a sum of inputs is equivalent to the sum of the individual outputs. Recall from harmonic analysis that periodic waveforms can be constructed from sums of sinusoidal waves. Since the transfer function tells us how sinusoidal inputs are converted into outputs, it is a tool to predict outputs due to arbitrary periodic inputs in linear time invariant systems (see figure 27).
For linear time variant, non-linear time invariant and non-linear time variant systems, superposition does not apply. Therefore, the transfer function cannot be used to predict outputs to arbitrary periodic waveforms in these more complex systems.

Linearisation windows
Recall that, to obtain the local transfer function of a nonlinear function, linearisation was necessary. Often, a linear approximation will only be valid within a limited window about the linearisation point. Therefore, the validity of the associated transfer function is also limited to this window. Let us illustrate this with an example of a saturating system, a common phenomenon in biology (see figures 28-30).
Suppose our system receives a sinusoidal input x about a mean x_0. Taking the first order Taylor polynomial about x_0 gives

y(x) ≈ y(x_0) + (dy/dx)|_x0 (x - x_0).

This is a reasonable approximation within the window -δ < ∆x < δ. Changing variables

y(x) → y(x) - y(x_0) = ∆y, (85a)
x → x - x_0 = ∆x, (85b)

gives the recentred curve

∆y = (dy/dx)|_x0 ∆x = k∆x, k ∈ ℝ.

The associated transfer function is simply T(s) = k, with frequency responses

M(ω) = k, (88a)
ϕ(ω) = 0. (88b)

Equations (88a) and (88b) are independent of input amplitude and, so, tell us that the modulation is constant and the phase shift is zero for all input amplitudes. The former cannot be the case, as inputs larger than the -δ < ∆x < δ window should be clipped due to the saturation.
One approach that addresses this shortcoming is the describing function. The describing function can be thought of as a generalised transfer function for nonlinear systems. The describing function N(M, ω) is defined as the ratio of the fundamental component of the output n(ωt) to its sinusoidal input m(ωt) = M sin(ωt) [23], where A_1 and B_1 denote the Fourier coefficients of the fundamental component of the output. For the saturating system, the corresponding frequency responses are

|N(M, ω)| = √(A_1² + B_1²)/M, (94a)
∠(N(M, ω)) = 0. (94b)

We can immediately see more information in these frequency response expressions compared to those of the transfer function case in equations (88a) and (88b). Plotting |N(M, ω)| against the ratio of saturation to input amplitude δ/M returns the curve in figure 31. For input amplitudes much greater than the saturation threshold, that is, when δ/M is small, the modulation tends towards zero. This reflects the clipping of large inputs. As the input amplitude approaches the saturation threshold, that is, as δ/M tends towards 1, the modulation approaches k, the gain of the linear region. This is expected, as no clipping occurs in this regime. The latter result is what we previously obtained from the transfer function method in equation (88a).
The phase frequency response ∠(N(M, ω)) in equation (94b) is identical to that of the transfer function in equation (88b), except now we can say it also applies to signals with amplitudes greater than the linear window.
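The describing function of a saturating element can be computed numerically as the ratio of the output's fundamental Fourier coefficient to the input amplitude. In the sketch below the gain k and threshold δ are illustrative; for M ≤ δ the element is linear and N = k, while for M > δ the modulation falls below k:

```python
# Sketch: numerical describing function of a saturating element.
# The element passes y = k*x for |x| <= delta and clips at k*delta beyond.
import math

def saturate(x, k=1.0, delta=1.0):
    return max(-k * delta, min(k * delta, k * x))

def describing_function(M, k=1.0, delta=1.0, n=100000):
    """N(M) = B1/M, with B1 the fundamental sine coefficient of the output
    (midpoint rule over one period; A1 = 0 for this memoryless odd element)."""
    b1 = 0.0
    for i in range(n):
        theta = 2 * math.pi * (i + 0.5) / n
        b1 += saturate(M * math.sin(theta), k, delta) * math.sin(theta)
    b1 *= 2.0 / n  # equals (1/pi) * integral over [0, 2*pi]
    return b1 / M
```

The numerical result matches the standard analytic formula N(M) = (2k/π)[arcsin(δ/M) + (δ/M)√(1 - (δ/M)²)] for M > δ.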

Conclusion
The transfer function provides a powerful predictive tool for understanding the dynamics of signal transduction networks. Here, we have outlined the theory behind the approach and applied it to some simple examples, illustrating how it can be used to study signal transduction networks of various architectures, both quantitatively and qualitatively.

Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.