11.1: State Spaces - Biology

Closely related to “phase spaces” are “state spaces”. While phase spaces are typically used with continuous systems, described by differential equations, state spaces are used with discrete-time systems, described by difference equations. Here the natural system is approximated by jumps from one state to the next, as described in Chapter 7, rather than by smooth transitions. While the two kinds of spaces are similar, they differ in important ways.

Inspired by the complexities of ecology, and triggered in part by Robert May’s bombshell paper of 1976, an army of mathematicians worked during the last quarter of the twentieth century to understand these complexities, focusing on discrete-time systems and state spaces. One endlessly interesting state space is the delayed logistic equation (Aronson et al. 1982), an outgrowth of the discrete-time logistic equation described in Chapter 7.

For a biological interpretation of the delayed logistic equation, let’s examine the example of live grassland biomass coupled with last year’s leaf litter. Biomass next year (Nt+1) is positively related to biomass this year (Nt), but negatively related to biomass from the previous year (Nt−1). The more biomass in the previous year, the more litter this year and the greater the inhibitory shading of next year’s growth. The simplest approximation here is that all biomass is converted to litter, a fixed portion of the litter decays each year, and inhibition from litter is linear. This is not perfectly realistic, but it has the essential properties for an example. Field data and models have recorded this kind of inhibition (Tilman and Wedin, Nature 1991).

Program (PageIndex{1}). A program to compute successive points in the state space of the delayed logistic equation.

The basic equation has N1 as live biomass and N2 as accumulated leaf litter. N1 and N2 are thus not two different species, but two different age classes of a single species.

N1(t+1) = r N1(t) (1 − N2(t))
N2(t+1) = N1(t)
The above is a common way to write difference equations, but subtracting Ni from each side, dividing by Ni, and making p = 0 for simplicity gives the standard form we have been using:

ΔN1/N1 = r1 + s1,1 N1 + s1,2 N2
ΔN2/N2 = r2 + s2,1 N1 + s2,2 N2
Notice something new. One of the coefficients, s2,1, is not a constant at all, but is the reciprocal of a dynamical variable. You will see this kind of thing again at the end of the predator–prey chapter, and in fact it is quite a normal result when blending functions (Chapter 18) to achieve a general Kolmogorov form. So the delayed logistic equation is as follows:

ΔN1/N1 = (r − 1) − r N2
ΔN2/N2 = −1 + (1/N2) N1

where r1 = r − 1, r2 = −1, s1,2 = −r, and s2,1 = 1/N2. Notice also that ri with a subscript is different from r without a subscript.

For small values of r, biomass and litter head to an equilibrium, as in the spiraling path of Figure (PageIndex{2}). Here the system starts at the plus sign, at time t = 0, with living biomass N1 = 0.5 and litter biomass N2 = 0.1. The next year, at time t = 1, living biomass increases to N1 = 0.85 and litter to N2 = 0.5. The third year, t = 2, living biomass is inhibited slightly to N1 = 0.81 and litter builds up to N2 = 0.85. Next, under a heavy litter layer, biomass drops sharply to N1 = 0.22, and so forth about the cycle. The equilibrium is called an “attractor” because populations are pulled into it.
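Program 1 itself did not survive extraction, but the iteration is short. Here is a minimal Python sketch; the growth rate r = 1.9 is an assumption, chosen because it approximately reproduces the trajectory just described:

```python
# Iterate the delayed logistic map in state space.
# NOTE: r = 1.9 is an assumed value, not given in the text; it roughly
# reproduces the trajectory described for Figure 2.
def delayed_logistic(r, n1, n2, steps):
    """Return the state-space trajectory [(N1, N2), ...]."""
    traj = [(n1, n2)]
    for _ in range(steps):
        # new growth is inhibited by litter; this year's biomass becomes next year's litter
        n1, n2 = r * n1 * (1.0 - n2), n1
        traj.append((n1, n2))
    return traj

traj = delayed_logistic(r=1.9, n1=0.5, n2=0.1, steps=3)
# t=1: N1 ≈ 0.855, N2 = 0.5;  t=2: N1 ≈ 0.812, N2 ≈ 0.855;  t=3: N1 ≈ 0.224
```

Successive points of this trajectory, plotted as (N1, N2) pairs, trace out the spiraling path of the figure.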

For larger values of r, the equilibrium loses its stability and the two biomass values, new growth and old litter, permanently oscillate around the state space, as in the spiraling path of Figure (PageIndex{3}). The innermost path is an attractor called a “limit cycle.” Populations starting outside of it spiral inward, and populations starting inside of it spiral outward— except for populations balanced precariously exactly at the unstable equilibrium point itself.

For still larger values of r, the system moves in and out of chaos in a way that itself seems chaotic. By r = 2.15 in Figure (PageIndex{4}), the limit cycle is becoming slightly misshapen in its lower left. By r = 2.27 it has become wholly so, and something very strange has happened. A bulge has appeared between 0 and about 0.5 on the vertical axis, and that bulge has become entangled with the entire limit cycle, folded back on itself over and over again. What happens is shown by magnifying Region 1, inside the red square.

Figure (PageIndex{5}) shows the red square of Figure (PageIndex{4}) magnified 50 diameters. The tilted U-shaped curve is the first entanglement of the bulge, and the main part of the limit cycle is revealed to be not a curve, but two or perhaps more parallel curves. Successive images of that bulge, progressively elongated in one direction and compressed in the other, show this limit cycle to be infinitely complex. It is, in fact, not even a one-dimensional curve, but a “fractal,” this one being greater than one-dimensional but less than two-dimensional!

Figure (PageIndex{6}) magnifies the red square of Figure (PageIndex{5}), an additional 40 diameters, for a total of 2000 diameters. The upper line looks single, but the lower fatter line from Figure (PageIndex{5}) is resolved into two lines, or maybe more. In fact, every one of these lines, magnified sufficiently, becomes multiple lines, revealing finer detail all the way to infinity! From place to place, pairs of lines fold together in U-shapes, forming endlessly deeper images of the original bulge. In the mathematical literature, this strange kind of attractor is, in fact, called a “strange attractor.”

Such strange population dynamics, which do occur in nature with their infinitely complex patterns, cannot arise in phase spaces of dynamical systems for one or two species flowing in continuous time, but they can arise for three or more species in continuous time. And as covered in Chapter 7, they can arise for even a single species approximated in discrete time.

What we have illustrated in this chapter is perhaps the simplest ecological system with a strange attractor that can be visualized in a two-dimensional state space.

Physiology of Body Fluids

Bruce M. Koeppen MD, PhD, Bruce A. Stanton PhD, in Renal Physiology (Fifth Edition), 2013

Osmolarity and Osmolality

Osmolarity and osmolality are frequently confused and incorrectly interchanged. Osmolarity refers to the number of solute particles per 1 L of solvent, whereas osmolality is the number of solute particles in 1 kg of solvent. For dilute solutions, the difference between osmolarity and osmolality is insignificant. Measurements of osmolarity are temperature dependent because the volume of solvent varies with temperature (i.e., the volume is larger at higher temperatures). In contrast, osmolality, which is based on the mass of the solvent, is temperature independent. For this reason, osmolality is the preferred term for biologic systems and is used throughout this and subsequent chapters. Osmolality has the units of Osm/kg H2O. Because of the dilute nature of physiologic solutions and because water is the solvent, osmolalities are expressed as milliosmoles per kilogram of water (mOsm/kg H2O).

Table 1-1 shows the relationships among molecular weight, equivalence, and osmoles for a number of physiologically significant solutes.
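The bookkeeping behind such a table can be sketched in a few lines. The particle counts below are standard complete-dissociation assumptions (ideal behavior), not values quoted from Table 1-1:

```python
# Ideal osmole bookkeeping: one osmole per mole of dissolved particles.
# Particle counts assume complete dissociation (an idealization).
particles_per_formula_unit = {"glucose": 1, "NaCl": 2, "CaCl2": 3}

def osmolality(mmol_per_kg, solute):
    """mOsm/kg H2O contributed by an ideally dissociating solute."""
    return mmol_per_kg * particles_per_formula_unit[solute]

# e.g., 145 mmol/kg NaCl contributes 290 mOsm/kg H2O
sodium_chloride = osmolality(145, "NaCl")
```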


In this general formulation, all matrices are allowed to be time-variant (i.e., their elements can depend on time); however, in the common LTI case, matrices will be time-invariant. The time variable t can be continuous (e.g., t ∈ ℝ) or discrete (e.g., t ∈ ℤ). In the latter case, the time variable k is usually used instead of t. Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model representation can assume the following forms:

System type | State-space model
Continuous time-invariant | ẋ(t) = A x(t) + B u(t);  y(t) = C x(t) + D u(t)
Continuous time-variant | ẋ(t) = A(t) x(t) + B(t) u(t);  y(t) = C(t) x(t) + D(t) u(t)
Explicit discrete time-invariant | x(k+1) = A x(k) + B u(k);  y(k) = C x(k) + D u(k)
Explicit discrete time-variant | x(k+1) = A(k) x(k) + B(k) u(k);  y(k) = C(k) x(k) + D(k) u(k)
Laplace domain of continuous time-invariant | s X(s) − x(0) = A X(s) + B U(s);  Y(s) = C X(s) + D U(s)
Z-domain of discrete time-invariant | z X(z) − z x(0) = A X(z) + B U(z);  Y(z) = C X(z) + D U(z)
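The explicit discrete time-invariant equations in the table can be iterated directly. A minimal sketch with illustrative matrices (chosen arbitrarily for the example, not taken from the text):

```python
# Simulate x(k+1) = A x(k) + B u(k),  y(k) = C x(k) + D u(k)
# for a 2-state, single-input, single-output example.
def simulate(A, B, C, D, x0, u_seq):
    """Return the output sequence y(0), y(1), ... for the given inputs."""
    x = list(x0)
    ys = []
    for u in u_seq:
        y = sum(C[j] * x[j] for j in range(len(x))) + D * u
        ys.append(y)
        x = [sum(A[i][j] * x[j] for j in range(len(x))) + B[i] * u
             for i in range(len(x))]
    return ys

A = [[0.0, 1.0], [-0.5, 1.0]]   # illustrative system matrix
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0
ys = simulate(A, B, C, D, x0=[0.0, 0.0], u_seq=[1.0, 0.0, 0.0, 0.0])
```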

Example: continuous-time LTI case

Stability and natural response characteristics of a continuous-time LTI system (i.e., linear, with matrices that are constant with respect to time) can be studied from the eigenvalues of the matrix A. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form, as a ratio of products of first-order zero and pole factors.

The denominator of the transfer function is equal to the characteristic polynomial, found by taking the determinant of sI − A:

λ(s) = det(sI − A).

The roots of this polynomial (the eigenvalues) are the system transfer function's poles (i.e., the singularities where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability.
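For a 2×2 system matrix this can be checked without computing the eigenvalues explicitly: both roots of the characteristic polynomial have negative real part exactly when trace(A) < 0 and det(A) > 0 (the Routh–Hurwitz condition for second-order systems). A sketch, with illustrative matrices:

```python
# Routh-Hurwitz test for a 2x2 continuous-time system matrix A:
# asymptotically stable iff trace(A) < 0 and det(A) > 0.
def is_asymptotically_stable_2x2(A):
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return tr < 0 and det > 0

stable = is_asymptotically_stable_2x2([[0.0, 1.0], [-2.0, -3.0]])   # poles at -1, -2
unstable = is_asymptotically_stable_2x2([[0.0, 1.0], [2.0, -1.0]])  # one pole in the right half-plane
```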

The system may still be input–output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are removable).

Controllability

The state controllability condition implies that it is possible, by admissible inputs, to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if

rank [B  AB  A^2 B  …  A^(n-1) B] = n,

where rank is the number of linearly independent rows in a matrix, and where n is the number of state variables.
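For a two-state model the rank test reduces to a nonzero determinant of the 2×2 controllability matrix [B AB]. A sketch with illustrative matrices:

```python
# Controllability of a 2-state, single-input LTI model:
# controllable iff det([B  AB]) != 0 (i.e., rank [B AB] = 2).
def is_controllable_2x2(A, B):
    ab = [A[0][0] * B[0] + A[0][1] * B[1],
          A[1][0] * B[0] + A[1][1] * B[1]]
    det = B[0] * ab[1] - B[1] * ab[0]
    return det != 0

ok = is_controllable_2x2([[0.0, 1.0], [-2.0, -3.0]], [0.0, 1.0])  # companion form: controllable
```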

Observability

Observability is a measure for how well internal states of a system can be inferred by knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., as controllability provides that an input is available that brings any initial state to any desired final state, observability provides that knowing an output trajectory provides enough information to predict the initial state of the system).

A continuous time-invariant linear state-space model is observable if and only if

rank [C; CA; CA^2; …; CA^(n-1)] = n.
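The duality with controllability is visible in the code: the observability test for a two-state, single-output model stacks C and CA instead of B and AB. A sketch with illustrative matrices:

```python
# Observability of a 2-state, single-output LTI model:
# observable iff det([C; CA]) != 0 (i.e., rank [C; CA] = 2).
def is_observable_2x2(A, C):
    ca = [C[0] * A[0][0] + C[1] * A[1][0],
          C[0] * A[0][1] + C[1] * A[1][1]]
    det = C[0] * ca[1] - C[1] * ca[0]
    return det != 0

ok = is_observable_2x2([[0.0, 1.0], [-2.0, -3.0]], [1.0, 0.0])
```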

Transfer function

The "transfer function" of a continuous time-invariant linear state-space model can be derived in the following way:

Assuming zero initial conditions x(0) = 0 and a single-input single-output (SISO) system, the transfer function is defined as the ratio of output and input, G(s) = Y(s)/U(s). For a multiple-input multiple-output (MIMO) system, however, this ratio is not defined. Therefore, assuming zero initial conditions, the transfer function matrix is derived from Y(s) = G(s) U(s) using the method of equating the coefficients, which yields

G(s) = C(sI − A)^(−1) B + D.
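The formula G(s) = C(sI − A)^(−1)B + D can be evaluated numerically. A sketch for a two-state SISO model, checked against G(s) = 1/(s² + 3s + 2), whose controllable canonical realization supplies the matrices (all values illustrative):

```python
# Evaluate G(s) = C (sI - A)^(-1) B + D at a given frequency s
# for a 2-state SISO model, inverting the 2x2 matrix directly.
def transfer_function_2x2(A, B, C, D, s):
    m = [[s - A[0][0], -A[0][1]],
         [-A[1][0], s - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    x = [inv[0][0] * B[0] + inv[0][1] * B[1],
         inv[1][0] * B[0] + inv[1][1] * B[1]]
    return C[0] * x[0] + C[1] * x[1] + D

# Controllable canonical realization of G(s) = 1/(s^2 + 3s + 2)
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
g1 = transfer_function_2x2(A, B, C, 0.0, 1.0)   # G(1) = 1/(1 + 3 + 2) = 1/6
```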

Canonical realizations

Any given transfer function which is strictly proper can easily be transferred into state-space by the following approach (this example is for a 4-dimensional, single-input, single-output system):

Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:

G(s) = (b3 s^3 + b2 s^2 + b1 s + b0) / (s^4 + a3 s^3 + a2 s^2 + a1 s + a0).

The coefficients can now be inserted directly into the state-space model by the following approach:

This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).
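A sketch of this construction for the fourth-order case. Note that conventions differ (some texts place the −a coefficients in the top row of the companion matrix); the bottom-row convention is used here, and the example coefficients are illustrative:

```python
# Build the controllable canonical form for a strictly proper 4th-order
# SISO transfer function
#   G(s) = (b3 s^3 + b2 s^2 + b1 s + b0) / (s^4 + a3 s^3 + a2 s^2 + a1 s + a0)
def controllable_canonical(a, b):
    """a = [a0, a1, a2, a3], b = [b0, b1, b2, b3]."""
    n = len(a)
    # integrator chain: ones on the superdiagonal
    A = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n - 1)]
    A.append([-ai for ai in a])          # last row carries -a0 ... -a3
    B = [0.0] * (n - 1) + [1.0]          # control enters the last integrator
    C = list(b)                          # output taps the chain with the b's
    D = 0.0                              # strictly proper: no feedthrough
    return A, B, C, D

# Example (illustrative): G(s) = 1 / (s^4 + 3s + 2)
A, B, C, D = controllable_canonical([2.0, 3.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0])
```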

The transfer function coefficients can also be used to construct another type of canonical form

This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).

Proper transfer functions

Transfer functions which are only proper (and not strictly proper) can also be realised quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant.

The strictly proper transfer function can then be transformed into a canonical state-space realization using techniques shown above. The state-space realization of the constant is trivially y(t) = G(∞) u(t). Together we then get a state-space realization with matrices A, B and C determined by the strictly proper part, and matrix D determined by the constant.

Here is an example to clear things up a bit:

which yields the following controllable realization

Notice how the output also depends directly on the input. This is due to the G(∞) constant in the transfer function.

Feedback

A common method for feedback is to multiply the output by a matrix K and set this as the input to the system: u(t) = K y(t). Since the values of K are unrestricted, the values can easily be negated for negative feedback. The presence of a negative sign (the common notation) is merely notational, and its absence has no impact on the end results.

The advantage of this is that the eigenvalues of A can be controlled by setting K appropriately through eigendecomposition of A + B K (I − D K)^(−1) C. This assumes that the closed-loop system is controllable or that the unstable eigenvalues of A can be made stable through appropriate choice of K.

Example

For a strictly proper system, D equals zero. Another fairly common situation is when all states are outputs, i.e. y = x, which yields C = I, the identity matrix. This would then result in the simpler equation

ẋ(t) = (A + B K) x(t).

This reduces the necessary eigendecomposition to just A + B K .
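For a companion-form pair, choosing K reduces to matching polynomial coefficients: the last row of A + BK carries the negated coefficients of the closed-loop characteristic polynomial. A sketch with illustrative numbers (open-loop polynomial s² + s − 2, which is unstable; target s² + 5s + 6, poles at −2 and −3):

```python
# Pole placement by coefficient matching for a 2-state companion-form
# pair A = [[0, 1], [-a0, -a1]], B = [0, 1], with state feedback u = K x.
# The closed-loop last row becomes [-a0 + K[0], -a1 + K[1]], so choosing
# K = [a0 - d0, a1 - d1] yields target polynomial s^2 + d1 s + d0.
def place_companion(a0, a1, d0, d1):
    K = [a0 - d0, a1 - d1]
    closed_loop_last_row = [-a0 + K[0], -a1 + K[1]]
    return K, closed_loop_last_row

# open loop s^2 + 1s - 2 (unstable), target s^2 + 5s + 6 (poles -2, -3)
K, row = place_companion(a0=-2.0, a1=1.0, d0=6.0, d1=5.0)
```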

Author summary

Identification of qualitatively different regimes in models of biomolecular switches is essential for understanding dynamics of complex biological processes, including symmetry breaking in cells and cell networks. We demonstrate how topological methods, symbolic computation, and numerical simulations can be combined for systematic mapping of symmetry-broken states in a mathematical model of oocyte specification in Drosophila, a leading experimental system of animal oogenesis. Our algorithmic framework reveals global connectedness of parameter domains corresponding to robust oocyte specification and enables systematic navigation through multidimensional parameter spaces in a large class of biomolecular switches.

Citation: Diegmiller R, Zhang L, Gameiro M, Barr J, Imran Alsous J, Schedl P, et al. (2021) Mapping parameter spaces of biological switches. PLoS Comput Biol 17(2): e1008711.

Editor: Stacey Finley, University of Southern California, UNITED STATES

Received: September 2, 2020; Accepted: January 15, 2021; Published: February 8, 2021

Copyright: © 2021 Diegmiller et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All codes used in this work have been made available via Github at

Funding: The work of R.D., J.I.A., and S.Y.S. was partially supported by funding from the National Institutes of Health under award R01 GM134204-01. R.D. was also supported by funding from the National Institutes of Health under award F31 HD098835. The work of L.Z. and K.M. was partially supported by the National Science Foundation under awards DMS-1839294 and HDR TRIPODS award CCF-1934924, a DARPA contract HR0011-16-2-0033, and National Institutes of Health award R01 GM126555-01. The work of J.B. and P.S. was partially supported by funding from the National Institutes of Health under award R35 GM12975. The work of M.G. was partially supported by FAPESP grant 2019/06249-7 and by CNPq grant 309073/2019-7. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.


Feasibility of the Specification

The utility of a particular abstraction is a function of the feasibility of the specification. In our reintroduction problem, the number of states in the original MDP was 72, which required four transition matrices with 5184 entries. The number of states in the abstract MDP was 18, which required three transition matrices with 324 entries. In the landscape restoration problem, the number of states in the original MDP was 1 048 576, which would require four transition matrices with 1 099 511 627 776 entries. The number of states in the abstract MDP was 1024, which required three transition matrices with 1 048 576 entries.
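These counts follow directly from the fact that each action-conditioned transition matrix over |S| states has |S|² entries (72 and 18 states for the reintroduction problem; 2^20 and 2^10 states for landscape restoration). A quick sketch reproducing the numbers:

```python
# One transition matrix per action has |S| x |S| entries.
def matrix_entries(n_states):
    return n_states * n_states

original_reintro = matrix_entries(72)          # 5 184 entries per matrix
abstract_reintro = matrix_entries(18)          # 324 entries per matrix
original_landscape = matrix_entries(2 ** 20)   # 1 099 511 627 776 entries per matrix
abstract_landscape = matrix_entries(2 ** 10)   # 1 048 576 entries per matrix
```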

Quality of the Approximation

The utility of an abstraction is also a function of the quality of the approximation. Recall that we are trying to capture the most important aspects of the original MDP, find an optimal policy over this reduced space and use this as an approximate solution to the original problem. The extent to which the abstract policy agrees with the optimal policy provides a measure of the quality of the approximation. Accordingly, we can measure quality in three ways. First, we can think of the number of errors as the number of states at which the abstract action differs from the optimal action. Second, we can calculate the value lost by using the abstract policy as a solution in the original problem. Here, loss occurs when a suboptimal decision is carried out in the original state space. Third, we can compare the values for the abstract and optimal policies, describing the deviation between the two as another form of error.

Species Reintroduction

The a priori error bound on the value lost per stage was 10 (or 10% of the possible range of optimal values). This is a worst-case bound, and we found that the actual maximum value lost was 5·90. Using the abstract policy as an approximate solution to the original problem, the average value lost per stage was 0·61, or 2·33%. The average error in the value of a state was 4·84 (SE = 1·22), with a maximum error of 9·50. The number of optimal actions chosen by the abstract policy per stage was 62 (of 72) or more than 86% (Fig. 1). Most importantly, the abstract policy was never wrong when the optimal action involved Capture or Release. As we would expect given the method of construction, the disagreement between the abstract and optimal policies occurred when Captive Infected was True and Wild Infected was False, and there were only two feasible actions: Isolate and Do Nothing. In these states, the optimal action in the original MDP was to Isolate; in the abstract MDP, the Isolate action was removed and the decision-maker was forced to Do Nothing.

We suspected that the success (or quality) of this abstraction was due, at least in part, to the way we assigned reward to each state (Table 2). Specifically, we had defined sub-goals (Source Population Size, Target Population Size) whose contributions to the value function were larger than that of the other subgoal (Wild Infected). To push the abstraction, or find conditions where the quality of the approximation deteriorated, we tried two alternative reward definitions. In the first alternative (Uniform Reward, Table 2), we eliminated the preference for the first two sub-goals, making Wild Infected (nearly) as important as the other two. In the second alternative (Preferred Reward, Table 2), we valued Wild Infected more than either of the other two subgoals. A comparison of these two alternatives against the ‘nominal’ reward assignment is shown in Table 3. Importantly, the quality of the abstract policy dropped as the discrepancy in preference for Source Population Size, Target Population Size vs. Wild Infected was reduced, and then eliminated. The a priori error bound and actual maximum value lost grew to 25 and 14·76, respectively, for the Uniform Reward alternative, and the number of errors in action choice grew to 16. Under the Preferred Reward alternative, the a priori and actual maximum value lost grew to 40 and 23·62, respectively, and the number of errors in action choice was 20 (Table 3).

Abstract policy vs. optimal policy | Nominal reward | Uniform reward | Preferred reward
Number of errors in action choice | 10 (13·89%) | 16 (22·22%) | 20 (27·78%)
A priori error bound (max. value lost per stage)^a | 10·00 (10·00%) | 25·00 (25·00%) | 40·00 (40·00%)
Actual max. value lost per stage^a | 5·90 (5·90%) | 14·76 (14·76%) | 23·62 (23·62%)
Actual average value lost per stage^b | 0·61 (2·33%) | 1·85 (4·23%) | 3·81 (8·04%)
Average error in value of state^b | 4·84 (9·93%) | 11·10 (22·21%) | 16·67 (43·85%)
Standard deviation | 1·21 | 3·52 | 6·33

a Percentages expressed as a function of maximum value.
b Percentages expressed as the average of the deviance (average value lost or average error) divided by the true optimal value for each state.

Landscape Restoration

In this much larger problem, the a priori error bound on the value lost per stage was 30 (or 13·9% of the possible range of optimal values). As a point of comparison, we calculated the same error bound for other possible abstractions (retaining different numbers of state variables), which is shown in Table 4. We found that as more state variables were included in the abstraction, the better the performance guarantees on the resulting policy. This result illustrates that solution quality is a function of reward span, not of problem size. The abstraction mechanism ensures that, all else being equal, solution quality does not degrade with increases in the number of states. Like before, the a priori error bound represents a worst-case bound, but because the size of the state space kept us from finding the optimal solution, we have no true values to compare against. It is also impossible in this short paper to display the decision space for the entire problem; for simplicity, we only present results from a sample of states where the abstract policy is implemented in the original state space (Fig. 2).

No. of state variables in abstract MDP A priori error bound (%) # States % Reduction in state space
5 105 (48·837) 32 99·997
6 90 (41·860) 64 99·994
7 75 (34·883) 128 99·987
8 60 (27·907) 256 99·975
9 45 (20·930) 512 99·951
10 30 (13·953) 1024 99·902
11 25 (11·627) 2048 99·804
12 20 (9·302) 4096 99·609
13 15 (6·976) 8192 99·219
14 10 (4·651) 16 384 98·438
15 5 (2·325) 32 768 96·875
16 4 (1·860) 65 536 93·750
17 3 (1·395) 131 072 87·500
18 2 (0·930) 262 144 75·000
19 1 (0·465) 524 288 50·000
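The percentage columns of Table 4 can be recomputed directly. The optimal-value range of 215 used below is an inference from the stated pair "30 (13·953)", not a number given in the text:

```python
# Recompute Table 4's percentage columns.
# NOTE: the optimal-value range of 215 is inferred from the stated
# pair (bound = 30, 13.953%); it is not quoted in the text.
VALUE_RANGE = 215.0
TOTAL_STATES = 2 ** 20          # original landscape-restoration state space

def bound_percent(bound):
    """A priori error bound as a percentage of the optimal-value range."""
    return 100.0 * bound / VALUE_RANGE

def reduction_percent(abstract_states):
    """Percentage reduction in state-space size from the original MDP."""
    return 100.0 * (1.0 - abstract_states / TOTAL_STATES)

p10 = bound_percent(30)          # ~13.953, matching the 10-variable row
r10 = reduction_percent(1024)    # ~99.902, matching the 10-variable row
```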


Viewed from space, Earth offers no clues about the diversity of life forms that reside there. Scientists believe that the first forms of life on Earth were microorganisms that existed for billions of years in the ocean before plants and animals appeared. The mammals, birds, and flowers so familiar to us are all relatively recent, originating 130 to 250 million years ago. The earliest representatives of the genus Homo, to which we belong, have inhabited this planet for only the last 2.5 million years, and only in the last 300,000 years have humans started looking like we do today.


Want to cite, share, or modify this book? This book is licensed under the Creative Commons Attribution License 4.0 and you must attribute OpenStax.

    If you are redistributing all or part of this book in a print format, then you must include on every physical page the following attribution:

  • Use the information below to generate a citation.
    • Authors: Mary Ann Clark, Matthew Douglas, Jung Choi
    • Publisher/website: OpenStax
    • Book title: Biology 2e
    • Publication date: Mar 28, 2018
    • Location: Houston, Texas
    • Book URL:
    • Section URL:

    © Jan 7, 2021 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License 4.0 license. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

    A representative from the National Football League's Marketing Division randomly selects people on a random street in Kansas City, Missouri until he finds a person who attended the last home football game. Let (p), the probability that he succeeds in finding such a person, equal 0.20. And, let (X) denote the number of people he selects until he finds his first success. What is the probability mass function of (X)?


    Assume Bernoulli trials: that is, (1) there are two possible outcomes, (2) the trials are independent, and (3) (p), the probability of success, remains the same from trial to trial. Let (X) denote the number of trials until the first success. Then, the probability mass function of (X) is:

    (f(x)=(1-p)^{x-1}p)

    for (x=1, 2, ldots). In this case, we say that (X) follows a geometric distribution.

    Note that there are (theoretically) an infinite number of geometric distributions. Any specific geometric distribution depends on the value of the parameter (p).
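The NFL example can be answered numerically with this pmf; with p = 0.20, the probability the first success occurs on trial x is 0.8^(x−1) × 0.2:

```python
# Geometric pmf: probability that the first success occurs on trial x,
# with success probability p on each independent Bernoulli trial.
def geometric_pmf(p, x):
    return (1.0 - p) ** (x - 1) * p

p_first = geometric_pmf(0.20, 1)   # 0.2: the first person attended the game
p_third = geometric_pmf(0.20, 3)   # 0.8 * 0.8 * 0.2 = 0.128
```

Summing the pmf over x = 1, 2, ... approaches 1, as it must for a probability distribution.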

    Parents play with their daughters and sons differently. For example, fathers generally roughhouse more with their sons than with their daughters.

    Socialization into gender roles begins in infancy, as almost from the moment of birth parents begin to socialize their children as boys or girls without even knowing it (Begley, 2009; Eliot, 2009). Many studies document this process (Lindsey, 2011). Parents commonly describe their infant daughters as pretty, soft, and delicate and their infant sons as strong, active, and alert, even though neutral observers find no such gender differences among infants when they do not know the infants’ sex. From infancy on, parents play with and otherwise interact with their daughters and sons differently. They play more roughly with their sons—for example, by throwing them up in the air or by gently wrestling with them—and more quietly with their daughters. When their infant or toddler daughters cry, they warmly comfort them, but they tend to let their sons cry longer and to comfort them less. They give their girls dolls to play with and their boys “action figures” and toy guns. While these gender differences in socialization are probably smaller now than a generation ago, they certainly continue to exist. Go into a large toy store and you will see pink aisles of dolls and cooking sets and blue aisles of action figures, toy guns, and related items.

    The Cannon-Bard and James-Lange Theories of Emotion

    Recall for a moment a situation in which you have experienced an intense emotional response. Perhaps you woke up in the middle of the night in a panic because you heard a noise that made you think that someone had broken into your house or apartment. Or maybe you were calmly cruising down a street in your neighbourhood when another car suddenly pulled out in front of you, forcing you to slam on your brakes to avoid an accident. I’m sure that you remember that your emotional reaction was in large part physical. Perhaps you remember being flushed, your heart pounding, feeling sick to your stomach, or having trouble breathing. You were experiencing the physiological part of emotion — arousal — and I’m sure you have had similar feelings in other situations, perhaps when you were in love, angry, embarrassed, frustrated, or very sad.

    If you think back to a strong emotional experience, you might wonder about the order of the events that occurred. Certainly you experienced arousal, but did the arousal come before, after, or along with the experience of the emotion? Psychologists have proposed three different theories of emotion, which differ in terms of the hypothesized role of arousal in emotion (Figure 11.4, “Three Theories of Emotion”).

    Figure 11.4 Three Theories of Emotion. The Cannon-Bard theory proposes that emotions and arousal occur at the same time. The James-Lange theory proposes the emotion is the result of arousal. Schachter and Singer’s two-factor model proposes that arousal and cognition combine to create emotion.

    If your experiences are like mine, as you reflected on the arousal that you have experienced in strong emotional situations, you probably thought something like, “I was afraid and my heart started beating like crazy.” At least some psychologists agree with this interpretation. According to the theory of emotion proposed by Walter Cannon and Philip Bard, the experience of the emotion (in this case, “I’m afraid”) occurs alongside the experience of the arousal (“my heart is beating fast”). According to the Cannon-Bard theory of emotion, the experience of an emotion is accompanied by physiological arousal. Thus, according to this model of emotion, as we become aware of danger, our heart rate also increases.

    Although the idea that the experience of an emotion occurs alongside the accompanying arousal seems intuitive to our everyday experiences, the psychologists William James and Carl Lange had another idea about the role of arousal. According to the James-Lange theory of emotion, our experience of an emotion is the result of the arousal that we experience. This approach proposes that the arousal and the emotion are not independent, but rather that the emotion depends on the arousal. The fear does not occur along with the racing heart but occurs because of the racing heart. As William James put it, “We feel sorry because we cry, angry because we strike, afraid because we tremble” (James, 1884, p. 190). A fundamental aspect of the James-Lange theory is that different patterns of arousal may create different emotional experiences.

    There is research evidence to support each of these theories. The operation of the fast emotional pathway (Figure 11.4, “Slow and Fast Emotional Pathways”) supports the idea that arousal and emotions occur together. The emotional circuits in the limbic system are activated when an emotional stimulus is experienced, and these circuits quickly create corresponding physical reactions (LeDoux, 2000). The process happens so quickly that it may feel to us as if emotion is simultaneous with our physical arousal.

    On the other hand, and as predicted by the James-Lange theory, our experiences of emotion are weaker without arousal. Patients who have spinal injuries that reduce their experience of arousal also report decreases in emotional responses (Hohmann, 1966). There is also at least some support for the idea that different emotions are produced by different patterns of arousal. People who view fearful faces show more amygdala activation than those who watch angry or joyful faces (Whalen et al., 2001; Witvliet & Vrana, 1995); we experience a red face and flushing when we are embarrassed but not when we experience other emotions (Leary, Britt, Cutlip, & Templeton, 1992); and different hormones are released when we experience compassion than when we experience other emotions (Oatley, Keltner, & Jenkins, 2006).

    Physical Properties of Liquids

    In a gas, the distance between molecules, whether monatomic or polyatomic, is very large compared with the size of the molecules; thus gases have a low density and are highly compressible. In contrast, the molecules in liquids are very close together, with essentially no empty space between them. As in gases, however, the molecules in liquids are in constant motion, and their kinetic energy (and hence their speed) depends on their temperature. We begin our discussion by examining some of the characteristic properties of liquids to see how each is consistent with a modified kinetic molecular description.

    The properties of liquids can be explained using a modified version of the kinetic molecular theory of gases described previously. This model explains the higher density, greater order, and lower compressibility of liquids versus gases; the thermal expansion of liquids; why they diffuse; and why they adopt the shape (but not the volume) of their containers. A kinetic molecular description of liquids must take into account both the nonzero volumes of particles and the presence of strong intermolecular attractive forces. Solids and liquids have particles that are fairly close to one another, and are thus called "condensed phases" to distinguish them from gases.


