3.1. Nonlinear Models for Neural Membranes
The neural phenomena thus far considered have been treated solely in terms of potential, current, and equivalent networks or topologies of lumped or distributed resistances and capacitances. Certain difficulties have been avoided by restricting consideration to those phenomena that can be described by sets of linear equations. This constraint must now be set aside as we examine nonlinear phenomena in terms of the chemical basis of neural activity, both for axon potentials and dendritic potentials.
The basis for this examination is the recognition that neural currents are carried by moving ions, not electrons. A net current is heterogeneous, because there are many different ions dissolved in and between nerve cells, all moving in response to electrical and other energy gradients. The main seat of the forces that move and oppose the movement of ions lies in or on the nerve cell membrane. This is where the energy transformations take place as is manifested by the establishment of electrical, chemical, and thermal gradients in the surround. The basis for neural events is the movement of ions across membranes. The basis for description must be a model for membrane function. Such a model is embodied in the ionic hypothesis of nerve activity (Katz, 1966).
3.1.1. THE IONIC HYPOTHESIS
The molecular structure of the membrane is not yet fully understood, nor are the chemical processes by which ions are moved across it. In the absence of such knowledge the main source of information about the membrane is the measurement of concentration and electrical gradients across it, and the densities of ionic currents j in response to the forces implied by those gradients. The ratio of current density j to force f driving each ion species y defines a specific conductance (conductance/unit area of membrane) for that ion in respect to that force
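In symbols, with jy the current density and fy the driving force for species y, the defining relation presumably reads

```latex
g_y = \frac{j_y}{f_y}, \qquad \text{or equivalently} \qquad j_y = g_y f_y .
```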
This is, of course, the ionic equivalent to Ohm's law and reduces the description of membrane function to sets of coupling coefficients between current density and force. The coefficients themselves are then interpreted in terms of the existence of certain types of molecular events in membrane, e.g., diffusion of ions through aqueous pores in a lipid barrier, transport of ions across membranes by carrier molecules, exchange of ions by selective adsorption to fixed charges in the membrane, etc., but in the description of common neuronal events such exploratory proposals are speculative overlay that might confuse understanding rather than aid it.
As an example, suppose that a membrane is given that separates the aqueous compartments comprising an idealized neuron and its medium. For each ion there is a current density jy across the membrane in response to multiple driving forces such as its own concentration gradient, the electrical and thermal gradients across the membrane, the concentration gradient for the solute (osmotic pressure), the hydrostatic pressure, the forces exerted by other ions in the medium, etc. Each force acts on the ion to induce movement so that the net flux density is expressed by the sum of several forces, each with its appropriate coefficient:
For r ions and nr forces the current of the membrane can be expressed as a set of equations at any instant in time
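A reconstruction consistent with this description, treating the conductances as coupling coefficients between each current density and each force:

```latex
j_y = \sum_{k=1}^{n} g_{yk} f_k , \qquad y = 1, 2, \ldots, r . \tag{3}
```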
The coefficients represent the functional properties of the membrane, and subsidiary statements about them embody any assumptions and additional evidence available for the analysis.
The concentration forces fc and the electrical forces fe acting across the membrane are much more prominent in their actions on ion movements than are other known forces. Thus in Eq. (3), n is set equal to 2. In the Bernstein model for the membrane the conductances for potassium, gcK+ and geK+, are assumed to be much larger than those for any other ion, so that all others are set equal to zero. Thus the membrane function is described by
In respect to the chemical force the conductance is the same as the mobility µy. In respect to the electrical force it is the product of mobility and concentration, because the ease of flow depends on the number of ions available:
The electrical force acting on each ion is
where dv/dx is the gradient of potential in the membrane, Zy is the valence, and F is the Faraday constant (96,500 C/equivalent). The chemical force is
where dc/dx is the concentration gradient, T is the absolute temperature (degrees Kelvin), and R is the universal gas constant (8.31 J/mole °K). From Eqs. (4)–(7),
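Assembling these definitions for the Bernstein model (a reconstruction; the signs follow the convention that each force drives the ion down its own gradient, and the conductances are µy for the chemical force and µy cy for the electrical force, as stated above):

```latex
f_{e} = -Z_y F \frac{dv}{dx}, \qquad
f_{c} = -RT \frac{dc}{dx}, \qquad
j_{K^+} = -\mu_{K^+} RT \frac{dc}{dx} - \mu_{K^+}\, c\, Z F \frac{dv}{dx} . \tag{8}
```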
In the steady state it is assumed that the inward and outward currents in response to these two forces are equal and opposite, so that the net flux is zero
From Eqs. (8) and (9),
By integration across the membrane,
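Setting the two currents equal and opposite and integrating from outside to inside gives, in reconstructed form,

```latex
RT \frac{dc}{dx} = -c Z F \frac{dv}{dx} \tag{10}
```

```latex
v_m = \frac{RT}{F} \ln \frac{[\mathrm{K}^+]_o}{[\mathrm{K}^+]_i} \tag{11}
```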
where vm is the membrane potential in volts inside with respect to zero outside, [K+]o the K+ concentration outside, and [K+]i the concentration inside the membrane. The typical neuron has an internal negative potential that manifests an inward force on K+ ions. The internal K+ concentration (e.g., 110 milliequivalents/liter) is higher than the external K+ concentration (4.3 milliequivalents/liter), so the chemical force is outwardly directed. For zero flow these forces must be equal and opposite [Eq. (10)]. Equation (11), the Nernst equation, predicts that at body temperature (37°C or 310°K) the inward electrical force required to give zero current (at the given typical concentration difference) is –87 mV. Further, the transmembrane potential is predicted to change with the logarithm of the ionic concentration ratio (Fig. 3.1).
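The arithmetic can be checked directly. The short sketch below (Python, illustrative only) evaluates Eq. (11) with the constants and concentrations quoted above:

```python
import math

# Nernst equation, Eq. (11): v_m = (RT/F) ln([K+]_o / [K+]_i)
R = 8.31      # universal gas constant, J/(mole K), as given in the text
F = 96500.0   # Faraday constant, C/equivalent
T = 310.0     # body temperature, 37 C expressed in kelvins

def nernst_mv(c_out, c_in):
    """Equilibrium potential in millivolts for a univalent cation."""
    return 1000.0 * R * T / F * math.log(c_out / c_in)

# Text values: [K+]_o = 4.3, [K+]_i = 110 milliequivalents/liter
v_k = nernst_mv(4.3, 110.0)
print(round(v_k))   # -87, the value quoted in the text
```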
FIG. 3.1. Relation between external potassium concentration [K+]o and transmembrane potential difference (negative inward) of the sartorius muscle using Eq. (11) (Adrian, 1956).
There is substantial evidence that this prediction holds for a variety of types of membranes over a rather wide range of external concentrations, but that this range does not include the range of normal extracellular concentration for potassium (Fig. 3.1). In this range vm is consistently less negative than predicted from the inner and outer concentrations.
This deviation from the expected relation implies that the membrane is permeable to other ions besides potassium. The use of radioactive tracers over the past three decades has demonstrated that all ions in inner and outer solutions of nerve membranes are in continuous flow across the membrane. Hence the assumption that all µy other than µK+ are zero is not valid.
The Bernstein model breaks down in another way. The model is based on the additional postulate that during the active state of nerve, there is an increase in conductance to all ions. This leads to the prediction that membrane potential should approach zero during the action potential. In fact, it is reversed in polarity at the crest, rendering this postulate untenable. Despite these shortcomings the Bernstein model (in modified form) persists as the basis for interpretation of neural electrical activity. In brief, the membrane is selectively permeable to some ions more than to others, and the essential processes on which nerve activity is based occur through changes in membrane conductances.
3.1.2. METABOLIC FORCES
As potassium is the major intracellular cation, so is sodium the dominant extracellular cation. At rest there is continuous inward and outward flow of sodium so that µNa+ ≠ 0. The Na+ concentration difference (using typical values for squid nerve) is 145 milliequivalents/liter outside less 14 milliequivalents/liter inside, so that there is a strong tendency for Na+ to diffuse into the cell. The internal negative potential implies an electrostatic force acting inwardly. The sum of these two forces cannot be zero, so that
In the steady state, however, it is certain that
From Eqs. (3), which hold also for Na+, it must be concluded that the limitation to two forces, n = 2, is not valid. It is generally accepted that energy derived ultimately from oxidative metabolism in the neuron supplies the requisite force, and that Na+ is forced outwardly across the membrane as fast as it diffuses inwardly along the concentration and electrical gradients. Thus in the steady state
where ƒmNa+ represents the sum of unknown forces and gmNa+ the unknown transport processes.
There is evidence that the outward movement of Na+ is partially coupled with the inward movement of K+, for when the external K+ is removed entirely, the rate of outward movement of Na+ falls to one–third its normal value. It is likely that potassium is also affected by forces other than those of concentration and electrical gradients, some of which are derived from metabolic energy. A similar situation occurs for chloride and most other (if not all) ions. It has become the custom to represent these unknown forces on each ion by a single mechanism known as the membrane "pump" or battery for that ion species. Thus the equations for the membrane can be written as
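A form consistent with Eq. (4) and the pump terms just introduced would be, for each ion species y,

```latex
j_y = \mu_y f_{cy} + \mu_y c_y f_{ey} + g_{my} f_{my}, \qquad y = 1, \ldots, r . \tag{15}
```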
For purposes of describing transient neural activity in terms of ionic currents it is permissible to assume that the µy and their associated forces are independent of the gmy and ƒmy, because the rate constants of the µy are far faster than those of the gmy. Furthermore, the active, transactional, information–bearing events (including action potentials and PSPs) are attributed to changes in the µy whereas the restorative, nutritive, anabolic functions are attributed to the gmy. This separation is based on demonstrations that metabolic inhibitors that block oxidative metabolism (fluoride, cyanide, etc.) or that depolarize the nerve membrane (e.g., excess external potassium) do not prevent the formation of the action potential, provided membrane potential is restored to normal by anodal polarization (using an external battery). On the other hand, local anesthetics (cocaine, procaine, etc.) or sodium–deficient solutions applied externally to the membrane prevent conduction without significant effect on membrane potential or resting ion exchanges.
Obviously there is a link between the active states and the restorative processes, as implied by Eq. (15), because on the average over periods of quiescence or normal activity, the total ionic transport currents in response to metabolic forces must be equal but opposite to those in response to known forces. The nature of this link is also unknown, so that the separation of transactional and restorative events is imposed as much by lack of knowledge as by convenience in describing nerve function.
It is widely believed that the fluctuations in membrane potential following the action potential (depolarizing and hyperpolarizing afterpotentials) are manifestations of these metabolic forces, but proof for this is not yet at hand. One of the main reasons for believing this is that the afterpotentials and the metabolic forces are much slower to develop and decay than the transactional potentials and forces. Afterpotentials are usually identified with axons, but they are also found to follow dendritic events, when they are looked for. The reason is simple: The transactional events are the result of brief ion–specific changes in membrane conductance, and the energy is supplied by ion concentrations previously accumulated at metabolic cost. Following each transactional event, the concentrations are restored to their initial values. This is done by moving ions in the reverse of their directions of flow during the transactional event. In the appropriate geometric conditions, the restorative ionic flows are accompanied by detectable fields of potential, the afterpotentials of axons and dendrites (see Figs. 2.19 and 2.22).
3.1.3. THE CONCEPT OF EQUILIBRIUM POTENTIAL
The ionic hypothesis states that the basis for neural action currents is the occurrence of conductance changes in the membrane specific to certain ions or groups of ions. The currents can be measured in equivalents per second or in amperes. The conductance changes are expressed as variable conductances measured in mhos. The electrical energy is measured in volts. The energy of concentration must also be expressed in volts, in order that the sum of forces be calculated. This is done by use of the Nernst equation [Eq. (11)], which expresses a concentration difference for a given ion as an equivalent potential difference or equilibrium potential for that ion. For each ion the net driving force is the difference between vm and the equilibrium potential vy for that ion. There is reason to doubt the quantitative validity of this step, because the Nernst equation applies to reversible systems in which a true equilibrium has been achieved and in which there is zero current. Across the membrane there is at best a steady state for sets of irreversible reactions, including the steady outflow of heat and hydrogen ions. Further, the concept is applied also to active states in which net ion currents do flow. Like the core conductor and the membrane capacitance, it is a useful concept that can be misused but not readily replaced.
The relationships between transmembrane potential and the conductances and concentration gradients for the major ions are expressed by means of the so–called "constant field equation," which will now be derived. Let a neuron have an outside a and an inside b separated by a single differentially permeable membrane of thickness d. In each compartment there are differing concentrations of cations cy+ and anions cy–. The total current across the membrane J is the sum of the independent ionic currents jy. It is assumed that the conductances across the membrane are constant for each ion but not necessarily equal in the two directions. Then using the superscript + to denote the direction from outside to inside (a to b),
From Eq. (8)
where Zy = 1, which restricts consideration to univalent ions.
It is assumed that the ionic currents for the differing species are independent (equivalent to Dalton's law for a mixture of gases), and that the µ are constant over time and thickness of the membrane. A solution to this equation is obtained by assuming that ∂v/∂x is constant over the thickness of the membrane d. This is justified mainly on the basis that there are presumed to be both bound and unbound charges in the membrane, which will distribute themselves so as to minimize local deviations from a uniform gradient. Let vm be the transmembrane potential, so that
When the electric gradient is linearized by this assumption, Eqs. (17) become ordinary first–order linear differential equations. The solutions are
where ß = Fvm/RT. The net membrane current is the sum of cations cy+ moving from a to b and of anions cy– moving from b to a:
From Eqs. (19) and (20),
In the steady state, J = 0. Equation (21) is then solved for vm,
This equation states that the membrane potential in the steady state is determined by the concentration ratios of all the ion species, which are weighted by the relative mobilities of the ions across the membrane.
Consideration is commonly restricted to the concentrations of three ions. These are [Na+], [K+], and [Cl–]. The mobilities across the membrane often are expressed as specific membrane permeabilities, pNa+, pK+, and pCl–. Using the subscript e = a to denote outside concentration and i = b to denote inside concentration, we obtain
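In its standard form the result is

```latex
v_m = \frac{RT}{F}\,\ln
\frac{p_{K^+}[\mathrm{K}^+]_e + p_{Na^+}[\mathrm{Na}^+]_e + p_{Cl^-}[\mathrm{Cl}^-]_i}
     {p_{K^+}[\mathrm{K}^+]_i + p_{Na^+}[\mathrm{Na}^+]_i + p_{Cl^-}[\mathrm{Cl}^-]_e} . \tag{23}
```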
This is known as the Goldman (1943) constant field equation. If any one of these ionic permeabilities should become much larger than all of the others, then Eq. (23) reduces to the Nernst equation. The transmembrane potential vm approaches the equilibrium potential for that ion species.
This is the conceptual basis for the ionic hypothesis. At rest in the normal neuron pK+ is very much larger than pNa+, but it does not occlude the presence of pCl–. In the presence of excess external potassium pK+ increases and becomes dominant, so that the membrane potential can be predicted by the Nernst equation from the concentration ratio for K+. During the nerve action potential pNa+ rises 1000–fold to become dominant, and vm approaches the equilibrium potential for Na+ (Fig. 3.2). Membrane potential is therefore dependent on selective permeabilities to particular ions in the presence of concentration gradients determined by metabolic processes. The ionic analysis of neural events requires the measurement of these permeabilities (usually as conductances) and of their functional determinants.
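This behavior can be sketched numerically from Eq. (23). The concentrations and the resting permeability ratio pK+ : pNa+ : pCl– = 1 : 0.04 : 0.45 below are round squid–axon figures of the kind reported by Hodgkin & Katz; they are assumed here for illustration and are not this text's data.

```python
import math

R, F = 8.31, 96500.0
T = 291.0  # about 18 C, a typical squid-axon experimental temperature

def goldman_mv(p_k, p_na, p_cl, k_o, k_i, na_o, na_i, cl_o, cl_i):
    """Constant field equation, Eq. (23), in millivolts.
    Cl- (valence -1) enters with inside and outside concentrations swapped."""
    num = p_k * k_o + p_na * na_o + p_cl * cl_i
    den = p_k * k_i + p_na * na_i + p_cl * cl_o
    return 1000.0 * R * T / F * math.log(num / den)

# Illustrative concentrations (meq/liter): K 20/400, Na 440/50, Cl 560/52
rest = goldman_mv(1.0, 0.04, 0.45, 20, 400, 440, 50, 560, 52)
# If p_Na+ alone dominates, Eq. (23) collapses to the Na+ Nernst potential:
active = goldman_mv(0.0, 1.0, 0.0, 20, 400, 440, 50, 560, 52)
print(round(rest), round(active))   # roughly -59 and +54
```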
FIG. 3.2. Relation between external sodium concentration [Na+]o and transmembrane potential difference (negative inward) in the resting and active states (Nastuk & Hodgkin, 1950).
3.1.4. THE SODIUM PERMEABILITY MODEL
The system for measurement devised by Hodgkin & Huxley (1952) is not derived from the constant field equation but requires use of Eqs. (15) and the concept of the equilibrium potential as an energy source expressed in volts. The ionic mobilities are replaced by equivalent specific ionic conductances expressed in mhos per centimeter squared, and the current densities are expressed in amperes per centimeter squared. Across a given area of membrane, the total current density j is the sum of a capacitative and a resistive (ionic) fraction
Let us restrict consideration to Na+ and K+. Each ion moves along its path in accordance with the sum of two forces. The electrical force is proportional to the transmembrane potential vm in volts. The concentration force is given by the Nernst equation, also in volts, vy. The specific conductance for each ion is defined by the ratio (Section 2.2.1),
and is measured in mhos per centimeter squared. The conductance defined in this way (see footnote in Section 2.3.4) differs from the concepts of mobility or permeability in that it combines the properties of the membrane, the properties of the ion, and the concentration of the ion in the membrane into a single term. There is no allowance (or need) for separating these attributes in the Hodgkin–Huxley system. It should be recalled that mobility is defined as the ratio of current to voltage (a conductance) in an ionic solution, divided by the concentration of the solution. In the membrane the intramembrane concentration cannot be defined (the membrane is not a homogeneous solvent) or measured. In squid nerve it is found empirically that the gy are instantaneous linear functions of vm, which reinforces the usefulness of Eq. (25). In nodes of Ranvier the definition for conductance is more conveniently made using the constant field equation (Hodgkin, 1964).
Substituting Eq. (25) into (24) after solving for jy,
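In reconstructed form, restricting the ionic term to Na+ and K+:

```latex
j = C_m \frac{dv_m}{dt} + \sum_y j_y \tag{24}
```

```latex
g_y = \frac{j_y}{v_m - v_y} \tag{25}
```

```latex
j = C_m \frac{dv_m}{dt} + g_{Na^+}(v_m - v_{Na^+}) + g_{K^+}(v_m - v_{K^+}) \tag{26}
```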
The conductances have been calculated as functions of time and of vm by fixing vm at each of a series of values from resting vm and measuring jm.
During short periods of current flow the inner and outer concentrations do not change significantly, so that vNa+ and vK+ remain constant. Then vm is fixed by means of a high–gain negative–feedback amplifier (voltage clamp) with a very short time constant. Two long wire electrodes are inserted the full length of a giant axon (to give uniform current density over the axon), one for measuring vm and the other for delivering whatever current is required to maintain vm constant. Within the first 30 µsec of changing vm from its resting value to some new level there is a very brief current charging the membrane capacitance, the time course of which is dependent mainly on the characteristics of the amplifier. Thereafter the current changes in direct proportion to the membrane conductance.
The circuit diagram for the lumped membrane is shown in Fig. 3.3a. Three variable resistors are in parallel with a fixed capacitance Cm and are each in series with an ionic "battery." The voltage is maintained constant by the current source Im(t), and the output is the current required to do this, as shown by curve A in Fig. 3.3b.
FIG. 3.3. (a) Equivalent circuit for an element of membrane to represent specific ionic currents. (b) Separation of Na+ and K+ currents by use of the voltage clamp on axon in external solutions with and without Na+, respectively, A and B. Na+ current is C = A – B (Hodgkin, 1964).
Sudden depolarization of the membrane to zero volts requires a very brief outward current pulse to discharge the membrane capacitance. Thereafter, for a period of about 1 msec an inward current is required in order to maintain depolarization, followed by a sustained outward current (Fig. 3.3b, curve A). This ionic current is separated into its two components (RL is ignored) by repeating the experiment using an external fluid in which the Na+ is replaced by choline. The current in the second case (curve B) is that carried by K+ ions. The Na+ current is obtained by subtracting the second curve from the first (Fig. 3.3b, curve C).
The specific conductances are directly proportional to these currents:
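Presumably Eq. (27) takes the form

```latex
g_{Na^+} = \frac{j_{Na^+}}{v_m - v_{Na^+}}, \qquad
g_{K^+} = \frac{j_{K^+}}{v_m - v_{K^+}} , \tag{27}
```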
because vNa+, vK+, and vm are all fixed. There is a rapid early increase in gNa+ followed by a return to baseline. The gK+ increases more slowly but remains elevated for the duration of the depolarization. Repolarization terminates both increases in conductance, gNa+ returning more rapidly than gK+. However, gNa+ is self–limited in time even when depolarization is maintained.
The interpretation is that upon depolarization the large increase in gNa+ causes the membrane potential calculated by Eq. (23) to shift toward the equilibrium potential for Na+. Since this is approximately +40 mV, an inward current is required to neutralize membrane potential with a superimposed IR drop. At the termination of the increase in gNa+, the sustained increase in gK+ causes vm to shift toward the equilibrium potential for K+ (–80 mV). To neutralize the inwardly negative potential set up by the K+ concentration difference, a strong outward current is required.
By setting membrane potential to differing levels it is found that both gNa+ and gK+ are dependent on membrane potential (Fig. 3.4). This is the key property of the Hodgkin–Huxley system as it accounts for the major nonlinear property of the nerve, its threshold. Following depolarization of a region of membrane by an outward current both gNa+ and gK+ increase. The former causes Na+ ions to move inwardly with still further depolarization; the latter causes K+ ions to move outwardly leading to repolarization. The resting gNa+ is far lower than resting gK+ and must be greatly increased before it becomes dominant, but it increases far more rapidly following a step change in vm than does gK+, so that a regenerative increase in gNa+ can take place well before the restorative effect of the increase of gK+ occurs.
A major unanswered question concerns the nature of the mechanism that limits the duration of the increase in gNa+. This process is called sodium inactivation on the basis of the macroscopic model proposed by Hodgkin & Huxley (1952). The transport of a sodium ion is assumed to require the simultaneous occurrence of three events (e.g., the arrival of a carrier molecule at the surface of the membrane, the occurrence of the first steps of a three–stage chemical reaction, the opening of a "pore" in a three–layer barrier, etc.), each of probability m ranging from 0 to 1.0, and the absence of a fourth event of probability h. The total probability that transport will occur is then m³h.
FIG. 3.4. (a) Time and (b) voltage dependencies of gNa+ and gK+ (Hodgkin, 1964).
Therefore gNa+ = ḡNa+ m³h, where ḡNa+ is the maximum value for gNa+. The probabilities are given by
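In their standard form, Eqs. (28) and (29) are

```latex
\frac{dm}{dt} = \alpha_m (1 - m) - \beta_m m \tag{28}
```

```latex
\frac{dh}{dt} = \alpha_h (1 - h) - \beta_h h \tag{29}
```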
where the α and ß are rate constants dependent only on membrane potential at given temperature and external calcium concentration. Depolarization increases αm and ßh and decreases αh and ßm.
The transport of K+ requires the simultaneous occurrence of four events, each of probability n, so that gK+ = ḡK+ n⁴, where ḡK+ is the maximum value of gK+. The probability is given by
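In its standard form, Eq. (30) is

```latex
\frac{dn}{dt} = \alpha_n (1 - n) - \beta_n n \tag{30}
```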
in which αn increases and ßn decreases with depolarization.
At fixed voltages the α and ß are constant, so that Eqs. (28)–(30) are easily solved. The experimental data are fitted with these results to evaluate the dependence of the rate constants and the probabilities m, h, and n on membrane potential.
The expression for membrane current density (omitting a minor term for leakage conductance) is now
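In its standard form, Eq. (31) reads

```latex
j_m = C_m \frac{dv_m}{dt}
    + \bar{g}_{K^+}\, n^4 \,(v_m - v_{K^+})
    + \bar{g}_{Na^+}\, m^3 h \,(v_m - v_{Na^+}) \tag{31}
```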
Equations (28)–(31) constitute a system of nonlinear equations in which the coefficients are dependent on only one parameter, the transmembrane voltage. This simplification, in itself a remarkable achievement, has provided a means for quantitative evaluation of the action potential and of the local circuit theory of propagation.
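As an illustration of how this system behaves, the sketch below integrates the space-clamped form of Eq. (31) by forward Euler. The rate-constant formulas and parameter values are the published squid-axon fits of Hodgkin & Huxley (1952) in the depolarization-positive convention, restated here from the standard literature for illustration; they are not data from this text.

```python
import math

# Space-clamped Eq. (31), forward-Euler integration. v is the depolarization
# from rest in mV (rest = 0). A small leakage conductance is included so that
# v = 0 is a true resting equilibrium.
C_M = 1.0                            # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3    # maximal conductances, mmho/cm^2
V_NA, V_K, V_L = 115.0, -12.0, 10.6  # ionic equilibrium potentials re rest, mV

def vtrap(x, y):
    """x / (exp(x/y) - 1), with its limiting value y at x = 0 (avoids 0/0)."""
    return y if abs(x) < 1e-7 else x / (math.exp(x / y) - 1.0)

def rates(v):
    """The six voltage-dependent rate constants (1/msec)."""
    return (0.1 * vtrap(25.0 - v, 10.0),                 # alpha_m
            4.0 * math.exp(-v / 18.0),                   # beta_m
            0.07 * math.exp(-v / 20.0),                  # alpha_h
            1.0 / (math.exp((30.0 - v) / 10.0) + 1.0),   # beta_h
            0.01 * vtrap(10.0 - v, 10.0),                # alpha_n
            0.125 * math.exp(-v / 80.0))                 # beta_n

def peak_mv(stim, stim_ms=1.0, t_end=20.0, dt=0.01):
    """Peak depolarization for a current step (uA/cm^2) applied from t = 1 ms."""
    v = 0.0
    am, bm, ah, bh, an, bn = rates(v)
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)  # resting values
    peak, t = v, 0.0
    while t < t_end:
        am, bm, ah, bh, an, bn = rates(v)
        i_ion = (G_NA * m**3 * h * (v - V_NA) + G_K * n**4 * (v - V_K)
                 + G_L * (v - V_L))
        i_stim = stim if 1.0 <= t < 1.0 + stim_ms else 0.0
        v += dt * (i_stim - i_ion) / C_M      # Eq. (31) solved for dv/dt
        m += dt * (am * (1.0 - m) - bm * m)   # Eq. (28)
        h += dt * (ah * (1.0 - h) - bh * h)   # Eq. (29)
        n += dt * (an * (1.0 - n) - bn * n)   # Eq. (30)
        peak = max(peak, v)
        t += dt
    return peak

print(peak_mv(20.0))  # suprathreshold: a full spike of roughly +100 mV
print(peak_mv(1.0))   # subthreshold: only a small passive deflection
```

The two calls demonstrate the threshold property discussed in Section 3.2.1: the same equations produce either a full regenerative spike or a small graded response, depending only on the stimulus.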
Some common misconceptions may be avoided by emphasizing the fact that in some axons the transmembrane potential during the action potential is determined entirely by the change in transmembrane sodium conductance [Eq. (23)]. Depolarization is due to an increase in gNa+ and not to the turning off of the sodium pump or to an increase in internal sodium concentration. Repolarization is due to the decrease in gNa+ and not to the delayed potassium conductance increase or to potassium outflow. The absence of action potentials in most dendrites is due to the lack of the voltage–dependent sodium conductance in the membrane, but there are other conductances that depend on transmitter substances and not on membrane voltage. In all cases the membrane system at rest consists of a dynamic balance of many forces including several important concentration gradients and a set of ionic currents adding to zero for each ion. A conductance change in an active state represents an unbalancing of the forces leading to a new set of ionic currents. The membrane then becomes a current generator with high internal impedance, provided the conductance change is in part of the neural membrane and not over the entire neural surface. The energy source for the electromotive forces (emf) is the preexisting concentration gradients, which are built from metabolic energy. The current is the sole basis for transmission within the neuron in the active states being considered here.
3.2. Nonlinear Models for Neurons and Parts of Neurons
3.2.1. ACTION POTENTIALS IN AXONS
According to the Hodgkin–Huxley model local depolarization of the axonal membrane by a sufficient amount (the threshold) and for a minimal time leads to a voltage–dependent regenerative increase in gNa+ (Fig. 3.4a). According to Eq. (23) the membrane potential in that region shifts toward vNa+, the sodium equilibrium potential. Differential polarization of the membrane establishes a longitudinal flow of current that must be in the form of a closed loop crossing the membrane twice. The inward current in the active segment must equal the outward current in the passive segment. The resistive drop across the passive membrane is subtracted from the resting membrane potential. The reduction in electrical field strength across the passive membrane by itself is sufficient to increase gNa+ there, causing local membrane potential to approach vNa+. Thus the process repeats itself for the length of the fiber. The increase in gNa+ is self–limited, which leads to the return of membrane potential toward rest. The delayed increase in gK+ (Fig. 3.4b) causes hyperpolarization as membrane potential shifts toward vK+. During these changes in gNa+ and gK+ there is some net entry of Na+ ions and loss of K+ ions across the membrane, which are subsequently exchanged at a slow rate by metabolic processes, long after repolarization has been effected. The amounts exchanged for each impulse are about one–millionth part of the total Na+ and K+ ions that are present in the axoplasm.
There are four main lines of evidence in support of this model. First, Eq. (31) has been used to calculate the forms of the transmembrane potential and impedance changes during the propagated action potential (Fig. 3.5).
FIG. 3.5. (a) Predicted and (b) observed action potentials (Hodgkin, 1964).
From Section 2.3.3 the transmembrane current density jm is proportional to the second derivative of transmembrane potential along the axon. If a is the radius of the fiber and ri is the specific resistance of the axoplasm, then
If the velocity of propagation θ is constant, then x = –θt and
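In reconstructed form, Eqs. (32) and (33) read

```latex
j_m = \frac{a}{2 r_i} \frac{\partial^2 v_m}{\partial x^2} \tag{32}
```

```latex
j_m = \frac{a}{2 r_i \theta^2} \frac{d^2 v_m}{dt^2} \tag{33}
```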
Equation (33) combined with Eq. (31) gives an ordinary second–order nonlinear differential equation in potential with respect to time:
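Equating Eq. (33) with the right side of Eq. (31) gives, in reconstructed form,

```latex
\frac{a}{2 r_i \theta^2} \frac{d^2 v_m}{dt^2}
  = C_m \frac{dv_m}{dt}
  + \bar{g}_{K^+}\, n^4 \,(v_m - v_{K^+})
  + \bar{g}_{Na^+}\, m^3 h \,(v_m - v_{Na^+}) \tag{34}
```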
FIG. 3.6. Theoretical solutions for propagated action potential and conductances (Hodgkin, 1964).
FIG. 3.7. Action potential (curve A) and membrane conductance (curve B) measured with an alternating current bridge (Cole & Curtis, 1939).
For a specified initial condition of vm, Eq. (34) can be solved for vm as a function of time by numerical or analog techniques. The value for θ for which vm returns to zero as t goes to infinity must be found by iterative guesswork. Recorded and computed action potentials are shown in Fig. 3.5 (Hodgkin & Huxley, 1952). The theoretical curves for gNa+ and gK+ are shown in Fig. 3.6. Their sum gives a curve for transmembrane conductance (Fig. 3.7) that parallels that measured during the action potential with an alternating current bridge (Cole & Curtis, 1939).
A second line of evidence is comparison of predicted and measured Na+ gain and K+ loss from an active fiber. The instantaneous value for Na+ and K+ current densities can be calculated using the data in Fig. 3.6 and Eq. (27). The area enclosed by the curves for jNa+ and jK+ gives the total ionic charge exchanged per unit area of membrane, which on conversion using F (Faraday's constant) gives qNa+ = 4.33 pmole/cm2 and qK+ = 4.26 pmole/cm2 surface of membrane. These values agree well with measurements of Na+ gain (3–4 pmole/cm2) and K+ loss (3–4 pmole/cm2) per impulse in squid axon using tracers or flame photometry. The amount of charge required to change the potential on membrane capacitance (1 µF/cm2) by 100 mV is 10⁻⁷ C/cm2, which is equivalent to 1 pmole/cm2 of Na+, so that the observed and calculated exchanges are well in excess of the minimum required.
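The capacitance bookkeeping in this paragraph is easy to verify (Python, illustrative):

```python
F = 96500.0    # Faraday constant, C per mole of univalent ion
c_m = 1e-6     # membrane capacitance, F/cm^2 (1 uF/cm^2)
dv = 0.100     # potential change, V (100 mV)

q = c_m * dv               # charge moved: 1e-7 C/cm^2, as stated in the text
pmole = q / F * 1e12       # the same charge as pmole/cm^2 of a univalent ion
print(f"{q:.0e} C/cm^2 = {pmole:.2f} pmole/cm^2")  # about 1 pmole/cm^2
```

The measured exchanges of 3–4 pmole/cm2 are thus three to four times the minimum charge transfer required to account for the voltage swing.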
A third line of evidence is the relation between spike amplitude and the concentration gradient for Na+. External concentration can be varied by replacing part or all of the Na+ with choline in the bathing solution. Internal concentration in giant fibers is varied by injecting artificial solutions with micropipettes or by removing the axoplasm by squeezing (like toothpaste from a tube) and replacing it with known solutions (Hodgkin, 1964). With some exceptions the peak amplitude of the action potential is found to vary in accordance with the Nernst equation (Fig. 3.2). Also, when the membrane is clamped at the calculated value for vNa+ the initial inward current disappears, and if vm is made more positive than vNa+, the initial current is outward.
The fourth line of evidence stems from the study of saltatory conduction. Extracellular recording has shown that inward current occurs only at nodes of Ranvier in myelinated fibers, and that such blocking agents as cold, ultraviolet radiation, and chemical inhibitors act only at the nodes. This implies that a cylinder of membrane perhaps 0.03 µm long and 10 µm in diameter can induce a regenerative permeability change in the same axon at a distance of 1 to 3 internodal lengths (up to 6 mm or 6000 µm). There is no other possibility in the neuron for action at such relatively immense distances than an electrical field of force. The point is crucial, because the greater part of contemporary neurophysiology is based on the concept of the loop current as the sole basis for intraneuronal transmission at high speed.
It should now be clear why conduction velocity is proportional to axon diameter. The membrane capacitance must be discharged in order to change membrane potential, and the time constant τ1 for this is given by the product of longitudinal (mainly internal) resistance R1 and membrane capacitance Cm (the initial outward current of the action potential is mainly capacitative). With increasing diameter R1 decreases with the square and Cm in linear proportion, so that τ1 decreases in proportion to diameter. The conductance changes are so rapid that τ1 is the limiting constant and determines θ. (Note τ1 ≠ τm).
Large fibers have low R1 and high action currents, with relatively large energy losses per action potential. In myelinated fibers τ1 is reduced by the successive layers of membrane–like material; the capacitances of these layers are in series, so that the net capacitance is the reciprocal of the sum of the reciprocals, i.e., it is far less than the capacitance of a single layer. The action current and energy per action potential are much reduced. In the internodal segments the total voltage drop is distributed across the multiple layers, whereas in the node it lies mainly across the axon membrane.
The Hodgkin–Huxley equations explain the axon properties of accommodation and adaptation (see Section 1.2.5). They can also serve to account for the linear relationship (Dodge, 1972) between the amplitude of a steady depolarizing transmembrane current and the rate of firing of a neuron (see Section 2.4.4). † However, acceptance of the ionic hypothesis has not been universal, mainly because of the simplifying assumptions required to use the Nernst equation, which limit the scope of the theory: e.g., the omission of thermal and hydrostatic gradients (particularly transients), the assumption of equilibrium when at best the steady state can be attained, the still unsatisfactory treatment of the role of divalent ions, and the failure to extend the theory to the analysis of afterpotentials. It is likely that major advances in understanding of the molecular structure of the membrane will be required before the ionic hypothesis is superseded. Meanwhile the concepts of equilibrium potential, of the control of membrane potential by specific ionic conductances, and of the voltage and time dependencies of the selective sodium and potassium conductances together constitute a model that suffices to organize most of the available data on the properties of axon membrane.
† However, whether this or some other ionic mechanism accounts for the relationship is still unknown.
3.2.2. THRESHOLD UNCERTAINTY IN AXONS
Threshold is defined with reference to an amplitude of stimulation below which an axon does not fire, and at or above which it does. In the vicinity of threshold the value of the threshold amplitude is uncertain, because on repeated trials at some fixed stimulus level firing occurs on some trials and not on others. Let us suppose that a range of stimulus intensity is broken into a set of levels, and that for each level N trials are made. The response of the axon is measured in the usual way, and it is observed that at each level of intensity there are n responses. The results can <Page 139> be expressed as the fraction n/N of successful firings. For stimuli well below threshold there are no responses, and the relative frequency is zero. Well above threshold the axon responds every time and the relative frequency is 1.0. Somewhere in between, the axon responds on average on half of the trials, giving a relative frequency of .50. This level of intensity is taken as threshold.
Provided the trials are made sufficiently far apart (e.g., 1.0 sec), the occurrence or not of a response on a preceding trial does not affect the tendency for a response to occur on the next trial. By appropriate statistical testing it can be shown that the outcomes of successive trials are independent of each other. There is, furthermore, no way to predict before a given trial whether a response will occur or not on that trial. We have, therefore, a collection of random events, the relative frequency of which depends on stimulus intensity and ranges between zero and unity.
The results can be presented in tabular or graphic form giving a probability distribution. This increases from 0 to 1 monotonically over a specified range of stimulus intensity. Empirically it is found that the probability distribution for axon firings can be closely fitted with a cumulative Gaussian or normal distribution function, i.e., the integral of the normal density function. Let P(v) be the probability of firing in response to a particular input v. Then

P(v) = [1/(σ√2π)] ∫ from –∞ to v of exp[–(u – v0)²/2σ²] du,
where v0 is the intensity at which P(v) = .50 and σ is the standard deviation. Values can be taken from a table for the area enclosed by the normal density function for specified values of the independent variable.
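In place of a printed table, the cumulative Gaussian can be evaluated with the error function. The following is a minimal sketch; the function name and the numerical values of v0 and σ are illustrative, not taken from the text:

```python
import math

def firing_probability(v, v0, sigma):
    """Cumulative Gaussian fit to the threshold curve: the probability of
    firing at stimulus intensity v, with the 50% point at v0 and
    standard deviation sigma."""
    return 0.5 * (1.0 + math.erf((v - v0) / (sigma * math.sqrt(2.0))))

# At v = v0 the axon fires on half of the trials.
print(firing_probability(100.0, 100.0, 5.0))   # 0.5
```

One standard deviation above v0 the probability is about .84, matching the tabulated area under the normal density.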
The source of this variability lies in the axon and not in the stimulus source or electrodes, for when two parallel axons are excited by the same stimulus the probability of occurrence of response is independent for each axon. Either the threshold vthr, the resting transmembrane potential vr, the penetrating fraction of stimulus current density jm, or the membrane specific resistance rm, or some combination of these, is undergoing spontaneous fluctuation. From measurements of vr using intracellular electrodes in large axons it is known that vr is very stable (unlike vr in the somata of central neurons undergoing continuous synaptic bombardment). Both vthr and jm depend on the levels of the ionic conductances, which also determine rm, so that the internal state of the membrane is presumed to undergo random fluctuations. The Hodgkin–Huxley system is based on the concept of randomly occurring events, the probabilities of which are dependent on vm, so that the unpredictable behavior of the axon near threshold is intuitively expected. This behavior, however, is not explicitly represented in their <Page 140> equations by a parameter equivalent to the standard deviation of vthr, σ, or to the coefficient of variation or "relative spread" [defined by Ten Hoopen & Verveen (1963) as σ/vthr], as in Fig. 3.8.
Uncertainty is not a prominent feature of isolated axonal transmission, because the currents generated by active membrane cause changes in vm at an adjacent sensitive membrane far outside the range of uncertainty of vthr or vr. It introduces, however, a statistical characteristic of nerve function that is exceedingly important for the function of neurons in their neural masses.
3.2.3. POSTSYNAPTIC POTENTIALS IN DENDRITES
As described in Section 2.4.2, the transmembrane potential of a neuron vm is briefly changed by a monosynaptic impulse input on a presynaptic set of axons. If the axons are excitatory, vm increases rapidly to a maximal deviation or crest and then returns more slowly to the rest level, with a small but long lasting overshoot. If the axons are inhibitory, vm decreases and then decays to rest in much the same way. These dendritic impulse responses, called, respectively, excitatory and inhibitory postsynaptic potentials (EPSPs and IPSPs), are basically different from the output of axons. They are nonpropagated; apparent delays within the neuron are due to the cable properties of dendrites. They are "graded" rather than all–or–none; their amplitudes are roughly proportional to the input amplitude, and there is no threshold. They have no refractory periods and can be added.
FIG. 3.8. Relation between probability of response and stimulus intensity in percentage of threshold (Ten Hoopen & Verveen, 1963).
The mechanism of PSPs is the formation of loop current by the release of emf at synaptic sites in the dendritic membrane, which results in the flow <Page 141> of ions across the passive part of the dendritic membrane and the initial segment of the axon. There are two classes of mechanism for the release of emf. In electrical synapses, the loop currents generated by presynaptic axons pass across the postsynaptic membrane through special channels. The resulting changes in vm alter gNa+ and gK+. We are not concerned with this class. In chemical synapses the presynaptic axon terminals release a specialized transmitter substance that crosses to the postsynaptic membrane by diffusion, attaches to it, and alters its state.
In terms of the ionic hypothesis the alteration is described as a change in transmembrane conductance gPSP that is selective for certain ions. In accordance with Eq. (23) the level of vm in the subsynaptic dendrite changes toward the equilibrium potential for those ions, vPSP. The difference between the local level of vm and the resting level of vm elsewhere in the neuron causes current to flow. The current density jm is given by

jm = gPSP(vPSP – vm).   (36)
The increase in conductance occurs mainly during the phase of the PSP before its crest. The rate of change toward the equilibrium potential is rapid because the conductance is raised. After the crest, the rate of return toward the resting level of vm is determined mainly by the passive membrane conductance, which is relatively low.
The applicability of Eq. (23) has been tested in two ways. First, the intracellular ionic concentrations have been changed by electrophoretic injection of various ions through a microelectrode (Eccles, 1957). This changes the value for vPSP in Eq. (36). Second, the resting level of vm has been changed by passing a step current across the membrane with a microelectrode prior to the onset of the testing impulse. Examples of results of the second procedure are shown in Figs. 3.9 and 3.10. If vm is biased beyond the equilibrium potential for the PSP, vm < vPSP, the PSP should invert. The EPSP vanishes as vm approaches the junctional potential between the intracellular and extracellular ionic solutions, which is not far from 0 mV (Fig. 3.9). It is inferred that the conductance increase is common to all ions, but most significantly to Na+, K+, and Cl–, which are the predominantly available ions. The IPSP vanishes at different values of vm, which are usually more negative than the resting vm (Fig. 3.10). From this evidence and from the effects of ionic electrophoresis, it is inferred that vIPSP is determined by vK+ or vCl–, or by both. That is, the conductance increase is selective for K+ or Cl– or both.
The sources of energy for PSPs (as in the action potential) are the transmembrane concentration gradients of the relevant ions. The energy is dissipated as heat and the ionic concentrations are restored by energy from metabolic sources. The replacement gives rise to afterpotentials <Page 142> (see Fig. 2.19, Fig. 2.22, and Section 3.1.2). For example, during an EPSP positive charge moves inwardly across the active membrane, chiefly because Na+ ions move in and Cl– ions move out. Positive charge moves outwardly across the passive membrane, chiefly because K+ ions move out and Cl– ions move in. These movements are local, because there is not sufficient time for Na+ ions to move longitudinally in the dendrites. During the replacement phase, the metabolically derived emf are distributed over the previously active and passive membranes, but the net flow of charge is in the reverse direction. When the neuron is in the steady state, the total exchange of charge is zero. For this reason, the areas under the curves of the PSP and its afterpotential should be equal (see Section 2.5.2 and Fig. 2.22).
FIG. 3.9. (a) EPSPs in a motorneuron at various levels of transmembrane potential set by passing current across the membrane with an intracellular electrode. (b) Maximal rate of rise of EPSP against pre–set level of transmembrane potential. (c) EPSP peak amplitude, also against preset level of transmembrane potential. (d) Relations between applied current and recorded steady potential with electrode inside (•) and outside (O) the motorneuron (Coombs et al., 1955).
The synaptic conductance changes can be detected by use of the voltage clamp technique and by transmembrane current steps, but precise measurements of the time courses and spatial distributions are prevented by the complexities of dendritic geometry. Despite this limitation the conductance <Page 143> changes, and the PSPs as well, are inferred to be nonpropagating because, unlike gNa+ and gK+ in axon membrane, the synaptic conductances gEPSP and gIPSP are independent of vm. Transmission delays are attributed to the cable properties of dendrites (see Fig. 2.16). Similarly, the amplitudes of PSPs are proportional to input, because there is no vm-dependent regenerative increase in conductance. They are additive because there is no inactivation process, as for gNa+; however, proportionality and additivity are limited to a narrow range of output near a median resting level of vm. This is because the PSP amplitudes are determined by the synaptic current jm, which is given by Eq. (36). For each increment in vm as a response to a previous input, the available driving force vPSP – vm for another response is reduced. Moreover, there is evidence that the synaptic conductance increase in response to one input can change the membrane time constant for the response to a second input. That is, if the amplitude of the first input is large enough, the synaptic conductance channel established by the first input may shunt the passive membrane resistance, across which the current of the second response flows. This type of nonlinearity is assumed not to occur at the amplitudes occurring in the EEG and evoked activity described in this book.
FIG. 3.10. (a) Direct (orthodromic) and (b) antidromic IPSPs in a motorneuron at various levels of transmembrane potential set by passing polarizing current across the membrane with an intracellular microelectrode. (c) Direct IPSP peak amplitude against preset level of transmembrane potential. (d) Same for antidromic IPSPs. Arrows show resting transmembrane potentials (Coombs et al., 1955).
3.2.4. AMPLITUDE–DEPENDENT INPUT OUTPUT RELATIONS
In general, we know that the transmembrane potential at the soma of a neuron vs(t) increases with the total rate of pulses on excitatory afferent axons, less the total rate of pulses of inhibitory afferent axons pd(t). The pulse rate on the efferent axon pa(t) increases with vs(t). It is very difficult, however, to construct input–output functions for vs = Gd(pd, t) and pa = Gs(vs, t) because in most instances the functions are nonlinear and time varying, and because pd(t) cannot be precisely measured (Section 2.4.4).
A nonlinearity in pulse to wave conversion, vs = Gd(pd, t), due to the ionic mechanism of PSPs is described in Section 3.2.3. There is another nonlinearity in the relation between the crest amplitude of the presynaptic action potential vpre and the crest amplitude of the PSP, vpost. The function vpost = Gd(vpre, t) is exponential in pairs of neurons large enough to permit simultaneous intracellular recording from the pre– and postsynaptic sides of a synapse, there being an e-fold (e ≈ 2.718, the base of the natural logarithms) increase in vpost for each 5-mV increase in vpre (Fig. 3.11). Indirect observations suggest that the same or a similar function holds for many synapses and may be part of the mechanism for presynaptic inhibition (Section 1.2.5).
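The e-fold relation can be written as a one-line model. In this sketch only the 5-mV e-fold constant comes from the text; the scale constant k is a hypothetical placeholder:

```python
import math

def epsp_amplitude(v_pre_mv, k=1.0):
    """Exponential pre/post relation: vpost grows e-fold for each
    5-mV increase in vpre. k is a hypothetical scale constant."""
    return k * math.exp(v_pre_mv / 5.0)

# One 5-mV step in vpre multiplies vpost by a factor of e.
ratio = epsp_amplitude(65.0) / epsp_amplitude(60.0)
print(round(ratio, 3))   # 2.718
```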
FIG. 3.11. Relation between amplitude of presynaptic action potential (varied by preliminary electrical polarization) and the size of the postsynaptic potential (Hagiwara & Tasaki, 1958).
Time variance is seen when the input pulse rate pd(t) is changed from a low steady rate to a high steady rate and then back again to the low rate. During the high rate of input, successive PSPs may increase in crest amplitude (facilitation), decrease (defacilitation), or wax and wane (recruitment). If the rate is so high that successive PSPs fuse into sustained depolarization or <Page 145> hyperpolarization, the input is called tetanic. Following several seconds of tetanic input the successive PSPs on stimulation at a low rate are commonly facilitated or defacilitated for several minutes. These changes are called posttetanic potentiation and depression (see Section 1.2.5). Depression may result from accumulation of K+ ions outside the membranes of the tetanized presynaptic axons, which decreases their resting vm; if vm is displaced beyond the threshold transmembrane potential, the axons undergo what is called cathodal block and cannot conduct action potentials. Alternatively, depression may result from depletion of transmitter substance in the axon terminals. Potentiation may result from fusion of hyperpolarizing afterpotentials attributable to elevation of gK+, which results in increased resting vm, increased amplitude of presynaptic action potentials vpre, and an increase in vpost; or it may be due to increased availability of transmitter substance.
The nonlinearities in wave to pulse conversion, pa = Gs(vs, t) have been described in terms of thresholds and refractory periods (Sections 1.2.5 and 2.1.3). Time variance has been described in terms of accommodation and adaptation (Section 1.2.5) and threshold variation (Section 3.2.2). Linear, time–variant approximations are described in Section 2.4.4. The location of the trigger zone at which an impulse is initiated may vary. Large dendritic trees may have multiple trigger zones, so that the relation between vs(t) and pa(t) may depend on which segment of the dendritic tree receives input.
It is commonly assumed that the soma, the large proximal dendrites, and the initial segment of the axon are isopotential, so that the measured v’s(t) is equal to vs(t) at the trigger zone, but there is evidence that the assumption is not generally valid. Granit, Kellerth & Williams (1964) present four criteria by which to measure the amplitude of a sustained inhibitory input to a motorneuron. The inhibitory input is induced by sustained stretch of a muscle antagonistic to the muscle innervated by the motorneuron under observation with an intracellular microelectrode. The four criteria are: (a) reduction in magnitude of a monosynaptic EPSP in response to single–shock stimulation of an afferent nerve; (b) reduction in tonic firing rate evoked by sustained cathodal current delivered through an intracellular electrode; (c) an increase in "synaptic noise," i.e., rapid random fluctuations in background membrane potential, indicative of mixed E and I input leading to an overall increase in gm without change in average vm; and (d) sustained hyperpolarization of the neuron. Of these criteria, (b) proves to be most sensitive, but its magnitude is not well correlated with the magnitudes of the others.
The description in this section includes only the more common types of amplitude–dependent nonlinearity and time variance that hold for most neurons. The task of describing the properties of networks of 10 to 100 or <Page 146> more neurons appears to be formidable. The difficulties for the most part evaporate when we consider the properties of masses of neurons, but we would be unwise to proceed without some understanding of a wide range of neural input–output relations.
3.3. Nonlinear Models for Neural Masses
Events in neural masses occur in the wave mode ov and in the pulse mode op. We are chiefly concerned with waves as a time–invariant function of pulses, which is pulse to wave conversion ov(t) = G1[op(t)], and with pulses as a function of waves, which is wave to pulse conversion op(t) = G2[ov(t)]. These nonlinear functions are to be used to define the amplitude–dependent forward gain of a neural mass.
The data on which the analysis rests consist of digitized trains of pulses from single neurons and unit clusters in an interactive mass p'(t), which are treated as representative of the states of neuron subsets in the mass, and digitized segments of EEG waves v'(t), which are treated as representative of the dendritic currents of the neurons in the mass. We will not seek to justify these representations at present and will defer the problems involved to later chapters. The nonlinear functions to be described are designated v'(t) = G1[p'(t)] and p'(t) = G2 [v'(t)] or simply v = G(p) and p = G(v).
In the study of amplitude–dependent input–output relations of single neurons, the most common background state is the zero equilibrium state (Section 1.3.5). Interactive masses cannot exist unless they have background activity. When their background or "spontaneous" activity is suppressed, the masses are reduced to the KO level. Therefore, we must begin the study of amplitude–dependent input–output relations for masses by describing the background active states in the wave and pulse modes. Description is restricted to neural masses in the olfactory system, because the required data have been obtained only for these masses.
3.3.1. BACKGROUND ACTIVITY IN THE WAVE MODE
Within interactive masses intracellular recordings of potential from single neurons commonly show irregular fluctuations in the baseline between action potentials, which are referred to as "synaptic noise." This activity results from repeated bombardment of the neuron by excitatory and inhibitory impulses, and is abolished by deep anesthesia or isolation of the local region of recording from other parts of the nervous system, particularly from afferent tracts. Recognizable patterns of voltage with time commonly recur as wave forms embedded in the continuing signal that resemble EPSPs and IPSPs, but for the most part future values of potential cannot be reliably predicted from past measurements.
Extracellular recordings display the same type of fluctuations in baseline, constituting what is referred to most generally as the "electroencephalogram" or EEG. The most prominent, though not the only source of this activity, is the passage of synaptic currents from innumerable neurons across the fixed resistance of the extracellular medium. The EEG is an indispensable source of information about the properties of neuron populations, though the extraction of relevant information is a difficult task. Recurrent patterns of waveforms can be observed (Fig. 3.12) such as spikes, spindles, slow wave complexes, and slow baseline shifts, but irregularity is an outstanding feature of the EEG.
Both synaptic noise and EEG can be treated as random time series. With the application of appropriate behavioral controls on the animals from which the recordings are made, the neural generators can be regarded as stationary, in the sense that the statistical properties of the signals remain constant over intervals of measurement lasting from several seconds to several minutes. Repeated measurement of voltage at a regular rate over a desired interval yields an ensemble of values of a random variable as a function of time. Numerous statistical procedures can be used to extract useful information about the processes generating these random signals, of which two are particularly relevant in the present context.
FIG. 3.12. EEG waves recorded from the prepyriform cortex of a waking cat with implanted electrodes. A and C are surface points about 6 mm apart. B and D are points deeper than A and C by 1.5 mm. G is a reference point over the frontal sinus.
The first of these procedures is the amplitude histogram. The range of variation is divided into J equal intervals of voltage ∆v, each assigned the <Page 148> bin number m = 1, . . . , J. Each value of the random variable v'(t) lies within the range of one of the bins for v(m), and for each such correspondence one count is added to the bin. The amplitude histogram is normalized to an empirical probability density function by dividing the count n(m) in each bin by the total number in the set N, and by multiplying this ratio by the ratio of the standard deviation of the sample σ to the bin width ∆v:

f̂(m) = [n(m)/N](σ/∆v).
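The normalization just described can be sketched in code. The synthetic Gaussian sample stands in for an EEG record, and the function name is illustrative:

```python
import numpy as np

def amplitude_density(v, J):
    """Empirical probability density by the normalization in the text:
    (count per bin / N) scaled by (sample std / bin width)."""
    counts, edges = np.histogram(v, bins=J)
    dv = edges[1] - edges[0]
    return (counts / len(v)) * (v.std() / dv), edges

rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, 50_000)      # synthetic "EEG" samples
f, edges = amplitude_density(v, J=60)
# For Gaussian data the peak, in units of v/sigma, is near 1/sqrt(2*pi).
print(round(f.max(), 1))   # 0.4
```

Because of the σ/∆v factor the density is expressed in standard-deviation units, so records with different gains can be compared directly.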
Fig. 3.13. Histograms of prepyriform EEG amplitudes. (a) Normal density, characteristic of most recording periods; N = 21,200. (b) Platykurtic distribution with skew to left, seen during prolonged sinusoidal oscillation; N = 256,000. (c) Rayleigh distribution of the envelope of the EEG (successive upward and downward crests); N = 7811. (d) Leptokurtic distribution, seen during barbiturate spikes under moderately deep anesthesia; N = 134,400. The curves in parts (a), (b), and (d) are from the normal density equation. The Rayleigh curve is given by f(v) = (v/σ²) exp(–v²/2σ²), v ≥ 0. This distribution is predicted for the output of a narrow bandpass filter with white noise input.
The probability density function of the EEG resembles a Gaussian probability density function (Fig. 3.13), though it often deviates slightly, but significantly, in being too sharply peaked (leptokurtosis) or in being skewed. The second approach is based on autocorrelation and is aimed at disclosure of sinusoidal periodicities buried in the random background activity. The underlying assumption is that the system is linear, or that it responds within a linear range of function to some random input. The autocorrelation function is defined as the normalized ensemble average of the <Page 149> product of the signal with itself displaced in time,

a(τ) = E[v′(t)v′(t + τ)]/σ².
It is a form of convolution (Section 2.3.1). For use with sampled data, this is converted to a summation, using Eq. (90) in Chapter 2:

â(τk) = [1/(N – k)σ²] Σi=1N–k v′(ti)v′(ti + τk),  τk = k∆t.
Examples are shown in Fig. 3.14. The resultant autocorrelation function â(τ) is converted by the Fourier transform to an empirical power spectrum (Fig. 3.15):

Ŝ(f) = 2 ∫0∞ â(τ) cos(2πfτ) dτ.
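The two steps — lagged-product autocorrelation, then Fourier transform to a power spectrum — can be sketched numerically. The 40-Hz component, sampling rate, and function names below are assumptions for illustration, not values from the text:

```python
import numpy as np

def autocorrelation(v, max_lag):
    """Normalized autocorrelation a(k) = <v(t)v(t+k)> / var(v)."""
    v = v - v.mean()
    var = np.dot(v, v) / len(v)
    return np.array([np.dot(v[:len(v) - k], v[k:]) / ((len(v) - k) * var)
                     for k in range(max_lag)])

def power_spectrum(a, dt):
    """Fourier (cosine) transform of the autocorrelation, via an
    even extension of a(tau)."""
    freqs = np.fft.rfftfreq(2 * len(a), d=dt)
    spec = np.fft.rfft(np.concatenate([a, a[::-1]])).real
    return freqs, spec

# Synthetic "EEG": a 40-Hz sinusoid buried in noise, sampled at 1 kHz.
dt = 1e-3
t = np.arange(20_000) * dt
rng = np.random.default_rng(1)
v = np.sin(2 * np.pi * 40.0 * t) + rng.normal(0.0, 1.0, t.size)

a = autocorrelation(v, max_lag=500)
f, S = power_spectrum(a, dt)
print(f[np.argmax(S)])   # spectral peak recovers the buried periodicity
```

The oscillation invisible in the raw trace appears as a damped cosine in â(τ) and as a sharp peak near 40 Hz in the spectrum.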
The computational steps, restrictions, and precautions have been described in many previous works (see Matousek, 1973) and need not be recounted here. The end result is a set of estimates of the spectral distribution of energy in the currents generated by single neurons or by neural masses.
FIG. 3.14. Autocovariances from prepyriform EEGs of two cats (Freeman, 1962d).
FIG. 3.15. Power spectra computed from autocovariances in Fig. 3.14 for two cats (adapted from Boudreau & Freeman, 1963).
3.3.2. BACKGROUND ACTIVITY IN THE PULSE MODE
Except when isolated or depressed by drugs single neurons in interactive masses characteristically discharge repeatedly in the absence of deliberate and controlled stimulation. Although this activity is commonly called "spontaneous" or "maintained" to distinguish it from "induced" or "evoked" activity, it will be referred to here as the "background state" or "reference level," partly because it has determinants that may vary with time and partly because the causal factors may be subject to change by deliberate intervention.
Occasional neurons are found to fire at regular intervals, so that the <Page 151> pulse train is adequately described by its frequency. More commonly, the intervals are not regular, nor can the future time of occurrence of a pulse be predicted reliably from knowledge of the times of occurrence of past pulses, as it can be for a regularly firing neuron.
If the pulse train manifests what is known as a "Poisson process," the time of occurrence of each pulse is completely independent of all others, and the train can be completely described by the mean firing rate, n pulses/sec, assuming that n remains constant in time, i.e., that the random process is stationary. Two assumptions are required. First, it is assumed that in an arbitrarily short interval of time having the duration ∆τ the probability of occurrence of a single pulse P(1 ∩ ∆τ) is uniform throughout the duration and is proportional to the product of the mean firing rate and the duration

P(1 ∩ ∆τ) = n∆τ.   (41)
For sufficiently short intervals the probability that more than one pulse will occur in ∆τ becomes negligibly small. Either one pulse will occur, P(1 ∩ ∆τ), or no pulse will occur, P(0 ∩ ∆τ), and the sum of these two probabilities is unity

P(0 ∩ ∆τ) = 1 – P(1 ∩ ∆τ) = 1 – n∆τ.   (42)
The second assumption is that the occurrence of a pulse in any interval τ preceding ∆τ does not alter the probability of occurrence of a pulse in ∆τ. Therefore the joint probability that no pulse will occur in τ and τ + ∆τ, P(0 ∩ τ + ∆τ), is equal to the product of the probabilities that no pulse will occur in either interval

P(0 ∩ τ + ∆τ) = P(0 ∩ τ)P(0 ∩ ∆τ).   (43)
Substitution of Eqs. (41) and (42) into (43) yields

P(0 ∩ τ + ∆τ) = P(0 ∩ τ)(1 – n∆τ).   (44)
Rearrangement of terms gives

[P(0 ∩ τ + ∆τ) – P(0 ∩ τ)]/∆τ = –nP(0 ∩ τ).   (45)
As ∆τ is allowed to become vanishingly small, Eq. (45) becomes a simple first–order differential equation

dP(0 ∩ τ)/dτ = –nP(0 ∩ τ).   (46)
The solution is

P(0 ∩ τ) = e^(–nτ).   (47)
In the limiting case where ∆τ approaches zero, P(1 ∩ ∆τ) also approaches zero
The probability that no pulse will occur diminishes from one exponentially in time following the occurrence of the last preceding pulse.
One test of whether a neural pulse train conforms to this property of a random pulse train is done by forming a histogram of the intervals between successive pulses. The time axis of the pulse train t is divided into a set of equal segments ∆t = ∆τ, which are at least an order of magnitude shorter than the mean pulse interval 1/n. The intervals between a set of N + 1 pulses are measured in units of ∆τ, and the N intervals are assigned to the appropriate bins on the time axis of the histogram τ, adding one count for each interval. The empirical distribution is normalized by dividing the counts f̃(τ) by the total number of intervals N and multiplying by the ratio of the standard deviation of the distribution of intervals σ, which is equal to the mean pulse interval 1/n, to the segment ∆τ ≠ 0:
f̂(τ) = [f̃(τ)/N](σ/∆τ),

and P̂(1 ∩ τ) = P(1 ∩ τ) + ε(1, τ), where ε is an error term, P and P̂ are the predicted and observed probabilities, and P(1 ∩ τ) = nτe^(–nτ) (Parzen, 1960).
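The exponential law for a Poisson train can be checked by simulation. This sketch draws exponentially distributed intervals at an assumed rate of n = 20 pulses/sec and forms the interval histogram as described above:

```python
import numpy as np

# Monte Carlo sketch of a Poisson pulse train: intervals between pulses
# follow the exponential law, and the empirical interval density is
# compared with the predicted exponential decay.
rng = np.random.default_rng(2)
n = 20.0                                    # mean firing rate, pulses/sec
intervals = rng.exponential(1.0 / n, size=100_000)

dtau = 0.001                                # bin width << mean interval 1/n
edges = np.arange(0.0, 0.25, dtau)
counts, _ = np.histogram(intervals, bins=edges)
density = counts / (len(intervals) * dtau)  # empirical interval density

tau = edges[:-1] + dtau / 2.0
predicted = n * np.exp(-n * tau)            # exponential decay from last pulse
print(round(intervals.mean() * n, 1))       # mean interval x rate, about 1.0
```

The empirical density tracks n·e^(–nτ) closely, which is the form against which real interval histograms are compared in the next paragraph.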
Interval histograms derived from most "spontaneously" active neurons in the central nervous system resemble the form predicted for a Poisson process, but they differ in two crucial ways, reflecting violation of the two assumptions underlying the description of a Poisson process. First, the probability of the occurrence of one pulse following another is not uniform, owing to the presence of depolarizing and/or hyperpolarizing afterpotentials associated with delayed conductance changes, absolute and relative refractory periods, or "recovery" processes involving metabolic forces. Typically the normalized interval histogram is zero for some milliseconds after τ = 0 and then rises to a maximum with increasing τ (Fig. 3.16a).
Second, the interval histograms of many interactive neurons show more than one maximum. This reflects the fact that most neurons are embedded in feedback loops in neural masses and the occurrence of a pulse at some time t is not independent of the occurrence of a pulse at some earlier or later time. For example, a mass action that imposes an inhibitory event back onto the neuron that generated a pulse decreases the probability of firing at an interval corresponding to the loop delay. The failure to fire results in subsequent disinhibition, so that the probability of firing increases at t equal to twice the loop delay. In the interval histogram a peak occurs at τ equal to twice the loop delay.
These periodicities may be revealed by application of the technique of <Page 153> autocorrelation to neural pulse trains. Eq. (89) in Chapter 2 is written as
For convenience this may be transformed (Gerstein & Kiang, 1960) to a series of sums as
FIG. 3.16. (a)–(c) Interval histograms fitted with the equation
(d)–(f) Expectation densities fitted with the equations
The values of the constants have no importance here. (a, d) Single bulbar unit. (b, e) Multiple bulbar units. (c, f) Single prepyriform cortical unit. N = 80,000 pulses for each frame.
This corresponds to translating the origin to each successive pulse in turn and adding the pulse trains, one unit for the occurrence of a pulse in each successive bin. The resulting set of values is an expectation density function that yields two kinds of information. The empirical curves from neural pulse <Page 154> trains are ê(τm) = 0 for τm = 0 and rise to some level corresponding to the mean probability of firing. This rising curve, generally corresponding to an exponential, represents the recovery process of the neuron following a discharge, e.g., it corresponds to a relative refractory period, a hyperpolarizing afterpotential, or to inhibitory feedback (Fig. 3.16d). If after this initial recovery period the pulse–generating process is random, the probability of firing is uniform and the expectation density is nonoscillatory. The Fourier transform can be used to show the presence or absence of spectral peaks.
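The origin-translation procedure can be sketched directly. The simulated neuron below is a renewal process with a 5-ms dead time followed by an exponential interval — an assumption chosen to mimic a refractory period, not data from the text — so the expectation density is zero at short lags and then rises, as in Fig. 3.16d:

```python
import numpy as np

def expectation_density(pulse_times, bin_width, max_lag):
    """Translate the origin to each pulse in turn and histogram the
    times of all later pulses within max_lag."""
    lags = []
    for i, t0 in enumerate(pulse_times):
        for t1 in pulse_times[i + 1:]:
            if t1 - t0 >= max_lag:
                break
            lags.append(t1 - t0)
    edges = np.arange(0.0, max_lag + bin_width, bin_width)
    counts, _ = np.histogram(lags, bins=edges)
    return edges[:-1], counts

rng = np.random.default_rng(3)
dead_time = 0.005                    # 5-ms refractory gap (assumed)
intervals = dead_time + rng.exponential(0.02, size=5_000)
times = np.cumsum(intervals)

tau, e = expectation_density(times, bin_width=0.001, max_lag=0.05)
print(int(e[:5].sum()))              # no pulse pairs closer than 5 ms -> 0
```

Oscillatory feedback would appear as peaks and troughs superimposed on this rising curve; their period could then be confirmed by the Fourier transform.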
The interval histogram and expectation density function are useful for limited purposes. If the frequency of firing f̂(τ) is zero for the first few milliseconds in both functions, the pulse train is most likely to have been generated by a single neuron. If f̂(τ) is not zero in that interval, the pulse train must have been generated by two or more neurons. If the decay phase of the interval histogram is exponential, as predicted for a Poisson process, and if the expectation density is constant following an asymptotic rise in ê(τ) from zero, the pulse train can be treated as random. If these conditions are not met, the pulse train is not random. Some degree of order and predictability must exist, although these two techniques are not usually optimal for describing the order.
3.3.3. RELATIONS OF WAVES AND PULSES
The EEG of an interactive neural mass and the pulse trains of the neurons in the mass are manifestations of the active state of the mass. When appropriate averages are taken of measurements on these modes of activity, the averaged measurements are estimates of two of its state variables, namely the active states in the pulse mode or in the wave mode for one or another KI set in the neural mass. The pulse and wave functions may be generated by the same KI set or by different KI sets.
We proceed now to construct a function that relates activity in the pulse mode to activity in the wave mode. We know that the neurons in the KI sets, comprising the interactive mass, transform waves to pulses and pulses to waves, and that the output pulses of each neuron are input pulses for many other neurons. It is not necessary at the outset to know which KI sets generate the observed activities, nor do we need to know what the topology of connections is or what the level of complexity is. We must know, however, that the EEG being recorded is generated by only one KI set, and that it is not a mixture of potential differences generated by two or more KI sets in the mass. The pulse train must be generated by one neuron, or by a small number of neurons that are close together and are members of the same KI set. The momentary values of the EEG, which are v’(t), and the instantaneous rate (reciprocals of the intervals) of the pulse <Page 155> trains p’(t) vary, but their statistical properties may be constant. These are the means and standard deviations of the measured properties, such as amplitude, rate, etc., provided that the mass is in a stationary state.
The location of the recording site for the EEG must be chosen carefully, so that the values for v’(t) can be treated (Sections 4.3.3 and 4.4.3) as proportional to the mean for transmembrane potential vs(t) in some subset of a KI set. The pulse train must be generated by neurons in close spatial proximity to the subset of neurons generating the EEG. This is because the activity density functions are in general not uniform across neural masses. The neuron or small subset of neurons generating the pulse train must belong to a subset that is a part of the mass and not a subset of an afferent KO or KI mass.
If these conditions are met, we construct a function relating pulse values to wave values in the following way. We collect an adequate sample of from 10⁴ to 10⁵ pairs of measurements on simultaneous observations of v’(t) and p’(t) at intervals of time of 1 msec (the approximate duration of each pulse). The pulse measure is 0 or 1 depending on whether a pulse is absent or present. The wave measure is in microvolts. Let v denote the digitized amplitude and let the range be divided into W intervals of amplitude ∆v.
An amplitude histogram is constructed in the standard way. The set of wave–pulse pairs is examined seriatim. For each occurrence of a value of amplitude in a given interval, a count of one is added to the designated interval. A second histogram is constructed concurrently that has one interval corresponding to each amplitude interval. If a pulse occurs in the same wave–pulse pair, a count of one is added to the second histogram. If there is no pulse, a count of zero is added.
When the total number of pairs N has been examined and classified, the number of amplitude occurrences in each interval nv(v), is divided by the total number of pairs. This gives the probability density for amplitude (v):
The number of occurrences of a pulse in each amplitude interval np(p, v), is divided by the total number of pairs to give the joint pulse–amplitude probability density
The pulse probability density is then divided by the amplitude probability density to give the pulse probability conditional on amplitude
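The three steps — amplitude histogram, joint pulse–amplitude histogram, and their quotient — can be sketched as follows (a minimal illustration; the truncation at ±3σ follows the text, while the function name and bin count are invented for the example):

```python
import numpy as np

def conditional_pulse_probability(v, p, nbins=60, limit=None):
    """Pulse probability conditional on wave amplitude.
    v: EEG amplitude per 1-msec sample; p: 0/1 pulse indicator."""
    v = np.asarray(v, dtype=float)
    p = np.asarray(p, dtype=float)
    if limit is None:
        limit = 3.0 * v.std()                 # truncate at +/- 3 sigma
    edges = np.linspace(-limit, limit, nbins + 1)
    idx = np.digitize(v, edges) - 1
    ok = (idx >= 0) & (idx < nbins)
    n_v = np.bincount(idx[ok], minlength=nbins)               # amplitude counts
    n_pv = np.bincount(idx[ok], weights=p[ok], minlength=nbins)  # pulse counts
    N = ok.sum()
    pd_v = n_v / N          # amplitude probability density
    pd_pv = n_pv / N        # joint pulse-amplitude probability density
    with np.errstate(invalid="ignore", divide="ignore"):
        cond = np.where(n_v > 0, n_pv / n_v, np.nan)  # = pd_pv / pd_v
    return edges, pd_v, pd_pv, cond
```

The conditional probability is the ratio of the joint density to the amplitude density, computed bin by bin; empty amplitude bins yield no estimate.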
This function (p|v) is not useful because it describes the conditional pulse <Page 156> probability only for simultaneous occurrences. From Chapter 2 we know that there are time delays in transmission in neural masses. The active state in the wave mode leads or lags the active state in the pulse mode by a time lag T determined by the linear properties of the masses. We describe the time dependency in the following way. For each occurrence of a value for amplitude in a wave–pulse pair v'(t), we ask whether a pulse p'(t + T) occurred at the same time T = 0 as above, and then we ask whether a pulse occurred in any of the preceding k pairs, T = –k (msec), and whether a pulse occurred in any of the following k pairs, T = +k (msec). A two–dimensional histogram is constructed for np(T, v). At the conclusion of the classification of observations we divide the number of pulse occurrences for each time interval and amplitude interval by the total number of pairs
The pulse probability density in time and amplitude is divided by the amplitude probability density
This yields the pulse probability conditional on time and amplitude (Fig. 3.17). For T = 0, Eq. (56) predicts the same result as Eq. (54).
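The time-shifted classification can be sketched by repeating the same tally at every lag T from –k to +k, pairing each amplitude v(t) with the pulse indicator p(t + T). This is a schematic reconstruction (the function name and default bin count are assumptions); the table is normalized by the mean pulse probability over the whole record, as described in the text.

```python
import numpy as np

def lagged_conditional_probability(v, p, k=25, nbins=20, limit=None):
    """Two-dimensional table: for each amplitude bin, the pulse
    probability at lags T = -k..+k msec, normalized by the mean
    pulse probability P0 of the entire record."""
    v = np.asarray(v, dtype=float)
    p = np.asarray(p, dtype=float)
    if limit is None:
        limit = 3.0 * v.std()
    edges = np.linspace(-limit, limit, nbins + 1)
    idx = np.digitize(v, edges) - 1
    P0 = p.mean()
    table = np.full((2 * k + 1, nbins), np.nan)
    for j, T in enumerate(range(-k, k + 1)):
        # pair v(t) with p(t + T); trim the ends so indices stay valid
        if T >= 0:
            vi, pi = idx[:len(p) - T], p[T:]
        else:
            vi, pi = idx[-T:], p[:len(p) + T]
        ok = (vi >= 0) & (vi < nbins)
        n_v = np.bincount(vi[ok], minlength=nbins)
        n_pv = np.bincount(vi[ok], weights=pi[ok], minlength=nbins)
        table[j] = np.where(n_v > 0, (n_pv / n_v) / P0, np.nan)
    return edges, table
```

At T = 0 this reduces to the simultaneous conditional probability of the preceding paragraph.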
FIG. 3.17. (a) Experimental normalized conditional probability density (NCPD) for a single mitral–tufted cell in the olfactory bulb, m(p|T ∩ v ∩ ω). (b) Theoretical NCPD from Eqs. (69), Pm(p|T ∩ v ∩ ω).
The limits on a table of conditional probability are set as follows. The mean and standard deviation σ are calculated for v’, and the table is truncated at ± 3σ. This is because the number of occurrences of values of amplitude outside these limits is too few to give reliable estimates for (p|T ∩ v). The limits on time are set at k = ±25 (msec) as a compromise between the range of time needed to define (p|T ∩ v) and the cost of computation. For convenience, (p|T ∩ v) is divided by the mean pulse probability for the entire set of pairs Po to give the normalized conditional pulse probability:
where w is the number of amplitude intervals from –3σ to +3σ. For most EEG recordings, including those described here, the mean for v′ is zero and lies at the center of the amplitude range, v = 0; T = 0 is, of course, at the center of the lag time range. An illustration of a table of the experimental normalized conditional pulse probability density n(p|T ∩ v) is shown in Fig. 3.17a. For <Page 158> assistance in visualization, a table of predicted probability density Pn(p|T ∩ v) is shown in Fig. 3.17b. (The function is derived in Section 3.3.4.) The values of (p|T ∩ 0) and the values for (p|0 ∩ v) often tend to the value of 1.0. The values for (p|T ∩ +3σ) and (p|T ∩ –3σ) vary between zero and a maximum well in excess of 1.0. For any time T = T+ at which (p|T ∩ +3σ) is maximal, (p|T ∩ –3σ) is minimal, and for any time T = T–, the reverse holds.
FIG. 3.18. (a)–(c) Pulse probability sigmoid curves, predicted Pµ(v) and experimental µ(v). (d)–(f) Pulse probability waves, predicted Pµ(T) and experimental µ(T). Reproduced with permission from W. J. Freeman, Linear analysis of the dynamics of neural masses, Annual Review of Biophysics and Bioengineering, 1, 232. Copyright 1972 by Annual Reviews Inc. All rights reserved.
The principal information to be found in the table is the time–dependence of the functions n(p|T ∩ v) for v < 0 and v > 0, and the amplitude dependence of the functions n(p|T+ ∩ v) and n(p|T– ∩ v). These functions are obscured by random variations in n(p|T ∩ v). An estimate for n(p|T ∩ v) is obtained by averaging across the upper third of the range for v > 0.
Similarly for v < 0,
These are usually oscillatory functions of T, which often tend to be sinusoidal (plotting symbols in Fig. 3.18d–f). They have the frequency of <Page 159> the concomitantly recorded EEG (see Figs. 3.12–3.15). They are approximately mirror images. The difference of the two functions divided by two,
is called an experimental pulse probability wave. It is fitted with a sinusoidal basis function (Section 2.1.1) to give the predicted pulse probability wave P(T)
where ~p is modulation amplitude, ω is frequency in radians per second, φ is phase in radians, α is decay rate in reciprocal seconds, and ε(T) is the least squares error. The value for φ is the difference between the phase of P(T) and the phase of the autocorrelation function â(τ) (Fig. 3.14) of the EEG, v'(t), which is zero. Representative functions for P(T) are shown in Fig. 3.18d–f.
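Reading off these parameter definitions, the fitted basis function is presumably a decaying sinusoid of the form (a reconstruction consistent with the parameters listed, not a transcription of the numbered equation; |T| is assumed here because the lag runs over both negative and positive values):

$$P(T) = \tilde{p}\, e^{-\alpha |T|} \sin(\omega T + \varphi) + \varepsilon(T)$$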
The pulse probability conditional on EEG amplitude is called the pulse probability sigmoid curve and is estimated by averaging n(p|T ∩ v) over a selected number of lag times Tk, k = 6–12, near which (T+,v) is maximal:
Similarly, for times when Pn(T–, v) is nearly minimal,
Examples are shown by the plotting symbols in Figs. 3.18a–c. The mean sigmoid curve is the average of the two functions, after reversal of the amplitude domain of –(v) about v = 0:
This is the desired function that relates probability values in the pulse mode p(t) to amplitude values in the wave mode v(t) for the neural mass.
3.3.4. WAVE TO PULSE CONVERSION IN THE KI SET
The next step is the derivation of theoretical functions for p(v) and v(p) for wave to pulse conversion and pulse to wave conversion in the mass, which can be evaluated by fitting the functions to the empirical conditional pulse probability.
We assume the existence of a stationary background state in the KI set in which the values for wave activity are randomly distributed with zero mean and unit standard deviation, and the pulse trains of neurons in the set have constant mean rate and randomly distributed interpulse intervals, <Page 160> except that there is an exponential rise in pulse probability from zero to the mean probability P0 following each occurrence of a pulse. Because for each neuron the intervals vary at random above a minimal value but with constant mean and variance, the set is time invariant, and the phenomena of adaptation and accommodation need not be considered. We assume that at any time there is a value for pulse density pd on input axons, a value for pulse density pa on output axons, and a value for the wave density v at each point in the set (see Section 1.3.2).
From studies of variation of threshold in neurons (Section 3.1.7) and across neurons in KO sets (Rall, 1955; Rall & Hunt, 1956), we know that the thresholds in the KI set are distributed with respect to wave amplitude v. If an increment of change ∆v occurs in the direction of inhibition v < 0, the number of neurons below threshold must increase, so that pa decreases by ∆pa. But pa cannot decrease indefinitely, because when all the neurons are below threshold, pa is zero and cannot be negative. Moreover, the decrement ∆pa must decrease as pa diminishes. Because we do not know a priori the nature of the distribution of thresholds, we assume as an approximation that the ratio ∆pa/∆v is proportional to pa times a constant γi. In the limit as ∆v → 0,
If an increment of change ∆v occurs in the direction of excitation v > 0, then the density of pulse output pa must increase by an increment ∆pa. As pa increases, neurons in the set are more likely to be in a relative refractory period, or undergoing a hyperpolarizing afterpotential based on a delayed increase in gK+ such that they are less likely to fire. We assume that a limiting value pmax exists in the KI set as a whole for pa, and that the increment ∆pa for an excitatory increment ∆v is equal to the difference pmax – pa times a constant γe. In the limit as ∆v → 0,
The limiting pulse density pmax for the set is not the same as the maximal rate for any one neuron. Rather it represents an average over both periods of high firing rate and the subsequent recovery periods that neurons require before again firing at a high rate.
The solutions to the differential equations are
The value for pmax is determined from the condition that pa has a continuous derivative with respect to v at v = 0, and p0 is the mean pulse rate.
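From the verbal description above, the pair of rate equations and their solutions can be reconstructed as follows (a sketch consistent with the asymptotes quoted in the next paragraph, with symbols as in the text; the original equation numbers are not reproduced):

$$\frac{dp_a}{dv} = \gamma_i\, p_a \quad (v \le 0), \qquad \frac{dp_a}{dv} = \gamma_e\,(p_{\max} - p_a) \quad (v \ge 0),$$

with solutions, for background rate $p_0$ at $v = 0$,

$$p_a = p_0\, e^{\gamma_i v} \quad (v \le 0), \qquad p_a = p_{\max} - (p_{\max} - p_0)\, e^{-\gamma_e v} \quad (v \ge 0).$$

Continuity of the slope at $v = 0$ gives $\gamma_i p_0 = \gamma_e (p_{\max} - p_0)$, i.e. $p_{\max} = p_0(1 + \gamma_i/\gamma_e)$.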
Equation (67) predicts that as v ––> ±∞ , pa asymptotically approaches pa = 0 and pa = p0(1 + γi/γe). The pattern of a sigmoid curve with horizontal asymptotes is commonly observed (Fig. 3.18a and d) for (v). The value for the ratio is found empirically to be γi/γe = 2. Therefore, γe is set equal to γ, and
Equations (68) are used to infer the function for wave to pulse conversion in the KI mass, because the limits apply to values for pulse density and not wave density.
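As a numerical check, the piecewise sigmoid implied by this derivation can be evaluated directly. This is an illustrative sketch (the function name and unit parameters are assumptions), with γi/γe = 2 as quoted above, so that the upper asymptote is 3p0.

```python
import numpy as np

def wave_to_pulse(v, p0=1.0, gamma_e=1.0, ratio=2.0):
    """Piecewise wave-to-pulse sigmoid reconstructed from the text:
    exponential rise for v <= 0, saturating exponential for v >= 0,
    with ratio = gamma_i / gamma_e (empirically about 2)."""
    v = np.asarray(v, dtype=float)
    gamma_i = ratio * gamma_e
    p_max = p0 * (1.0 + ratio)      # from slope continuity at v = 0
    return np.where(
        v <= 0.0,
        p0 * np.exp(gamma_i * v),
        p_max - (p_max - p0) * np.exp(-gamma_e * v),
    )
```

The curve passes through p0 at v = 0, approaches zero for strong inhibition, approaches p0(1 + γi/γe) = 3p0 for strong excitation, and has a continuous slope at the origin.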
The results are listed in Table 3.1 for fitting Eqs. (68) and Eqs. (60) to the pulse probability gradients and waves from mitral–tufted (KIM) cell pulse trains of anesthetized cats and rabbits (see Section 4.3.1), with N the number of pulse–wave samples at 1000/sec; σ the standard deviation of EEG amplitude; px the total number of pulses divided by N/1000; p0, ω, φ, and ~pm the values obtained following nonlinear regression of Eqs. (60) onto the pulse probability waves; and γ is obtained by fitting Eq. (68) to the pulse probability sigmoid curves. EEG frequency is the average peak frequency from the power spectra of selected short segments (5 sec) of the wave train. Additional values in Table 3.2 are from pulse trains of Type A neurons in the prepyriform cortex of anesthetized cats and rabbits (see Section 4.4.1).
A curve from Eqs. (68) is fitted to +(v) in parts (a) and (b) of Fig. 3.18. The surface shown in Fig. 3.17b is generated by Eqs. (68) and (60), modified to fit P+(T) as follows:
In most sets of data, the power spectrum of the EEG displays two prominent peaks, one at the respiratory rate near ω = 6 rad/sec (1 Hz) and <Page 162> the other at a characteristic frequency near ω = 250 rad/sec (40 Hz) (Fig. 3.15a). The pulse probability wave (T) oscillates at corresponding frequencies. Measurement requires a set of two basis functions,
Alternatively, activities at the two frequencies are separated by filtering prior to construction of a table (p|T ∩ v) for each frequency band. The function P(T) varies with the frequency ω or frequency range ∆ω over which the conditional pulse probability is determined. Therefore another dimension ω is introduced, (p|T ∩ v ∩ ω) and P(p|T ∩ v ∩ ω). Figure 3.17a <Page 163> displays P+(T), for a frequency range fixed by filters with half–amplitude frequencies near 60 rad/sec (10 Hz) and 500 rad/sec (125 Hz). The evidence available at present indicates that (v) is not dependent on frequency.
3.3.5. PULSE TO WAVE CONVERSION IN THE KI SET
The chief nonlinearities in the conversion of input pulse density pd to wave density v occur in the presynaptic terminals (Fig. 3.11) and in the ionic mechanism of the dendritic PSPs. The presynaptic nonlinearity depends on pulse amplitude, which in turn depends either on synaptic mechanisms for presynaptic inhibition (Sections 1.2.5 and 3.2.4) or on after potentials or other changes established by preceding pulses. We assume that the input pulse trains are random occurrences at an invariant mean rate, so that accommodation and adaptation need not be considered (Section 1.2.5).
We assume that the spatial density of synaptic activity is sufficiently low during background activity of neural masses that the conductance change for each PSP does not affect any other PSP. The remaining nonlinearity is imposed by the nature of the ionic emf of the PSPs. According to the ionic hypothesis (Sections 3.1.1–3.1.3) the emf for each PSP asymptotically approaches zero as the wave amplitude approaches the equilibrium potential for the PSP. We can represent the equilibrium potential for the EPSP by e, which is the potential difference between vEPSP and v = v0 = 0, and the <Page 164> equilibrium potential for the IPSP by i, which is the potential difference vIPSP – v0.
We infer that for an increment ∆pd in excitatory input pulse density pd > p0 there is an increment in wave density ∆v. From Section 3.2.3 and Eq. (36) we infer that
where e > 0 is the average equilibrium potential for the EPSP of neurons in the KI set, and ζe is a constant. In the limit as ∆pd → 0,
where i < 0 is the average equilibrium potential for IPSPs of neurons in the KI set, and ζi is a constant. From the condition that the derivatives are continuous at v = 0, ζe e = ζi i. A constant r < 0 is defined such that e = ri, ζi = ζ, and ζe = ζ/r. Therefore, for ζ > 0 and r < 0,
Equations (74) are solved for p as a function of v because v is the independent variable in experimental determination of pulse probability sigmoid curves for both wave to pulse and pulse to wave conversions. The solutions are
Equations (75) imply that as v ––> i, p ––> ∞ , and that as v ––> +3σ, p approaches or equals zero, but not asymptotically.
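The differential forms stated verbally above, and the logarithmic inversion implied by these limits, can be sketched as follows (a reconstruction consistent with the stated limits, not a transcription of the numbered Eqs. (74)–(75); only the inhibitory branch is inverted here):

$$\frac{dv}{dp_d} = \zeta_e\,(e - v) \quad (\text{excitatory input}), \qquad \frac{dv}{dp_d} = \zeta_i\,(i - v) \quad (\text{inhibitory input}),$$

and inverting the inhibitory branch with $\zeta_i = \zeta$,

$$p_d = \frac{1}{\zeta}\,\ln\frac{i}{i - v},$$

which diverges as $v \to i$, in agreement with the vertical asymptote noted above.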
A common form of +(v) or –(v) is that in which it rises to a high value as v, respectively, approaches +3σ or –3σ (Fig. 3.18c and f). Curves from Eqs. (75), P(v), have been fitted to (v), computed from the pulse trains of type B neurons in the prepyriform cortex (see Section 4.4.1). The average value for i is –0.6σ, and the average value for r is –5.7 (Table 3.3). The vertical asymptote implies that Eqs. (74) represent the function for pulse to wave conversion, because the limit applies to values for wave density and not pulse density. The experimentally derived parameter, r = –5.7, is consistent with intracellular measurements of vEPSP and vIPSP as in Figs. 3.9 and 3.10. For example, if vEPSP is equal to a junctional potential of –4 mV, and <Page 165> the resting potential is –70 mV, the expected value for vIPSP is –81 mV (Fig. 3.10).
3.3.6. THE FORWARD GAIN OF THE KI SET
The transfer function for a KI set has been reduced (Section 1.3.4) to a single feedback loop between two KO subsets in the set: a transmitting subset and a receiving subset. The transfer function for each KO subset, which is identical to the transfer function of the other, consists of four serial parts. The afferent path is linear F1(s) and transmits in the pulse mode to the nonlinear pulse to wave conversion mechanism G1(p). The dendrites operate linearly in the wave mode F2(s) and transmit to the nonlinear wave to pulse conversion mechanism G2(v) which determines the output. In serial order,
The analysis of this system containing two nonlinear elements is simplified by piecewise linear approximation. Both nonlinear elements have bilateral saturation. For any bounded input domain, the nonlinear function between input and output is approximated by a straight line segment over the <Page 166> domain. The nonlinear gain is replaced by a fixed gain coefficient given by the slope of the line segment. The dependence of the slope on the input domain is now derived.
If the pulse to wave mechanism does not itself saturate but does transmit over a sufficient range to saturate the wave to pulse mechanism, the output is determined by G2(v) (Fig. 3.19b), and G1(p) can be replaced by a linear function g1(po). In this case the order of elements can be changed:
If the pulse to wave mechanism does not transmit over a sufficient range to saturate the wave to pulse mechanism, but itself undergoes saturation, the output is determined by G1(p) (Fig. 3.19a), and G2(v) is replaced by a linear function † g2(vo):
† With regard to notation, the sigmoid input–output curves are designated G(v) and G(p). The derivatives of the curves are designated g(v) and g(p). The fixed number that is the slope of a straight line replacing G(v) or G(p) is designated by a subscripted coefficient such as g1, g2, ge or gi. The slope is given by g(v) or g(p) for a fixed value of v or p such as v = ve* as in ge = g(ve*). Both ge and gi are forward gain coefficients of the KO transmitting subsets in KIe and KIi sets, and by assumption the square root of the feedback gain Ke or Ki is equal to the forward gain, as in Ke.5 = ge and Ki.5 = gi. More generally, G(v) and G(p) can be viewed as operators that give the rules for converting one time function v(t) to another time function p(t) or vice versa, whereas g(v) and g(p) can be viewed as functions that give the rules for determining a fixed number such as ge from a real number such as ve*. The nonlinear operators should perhaps more precisely be written as G(p) = G[p(t)] and G(v) = G[v(t)], but this is avoided in order to simplify the notation.
By redefinition of the loop transfer function between the two KO subsets (see Section 2.4.1),
The transfer function of the KO subset consists of a linear and a nonlinear element in series in one of three alternative forms.
FIG. 3.19. (a, b) Input–output curves for (a) pulse to wave conversion and for (b) wave to pulse conversion. (c, d) Gradients of input–output curves, from which the forward gains are derived.
The amplitude–dependent nonlinear gains are determined by the ratios of output to input, and are designated by g(p) and g(v):
where dv/dpd is given by Eqs. (74), and dpa/dv is derived from Eqs. (68). For v in dimensionless units of γ,
For v and i in units of ζ/γ (microvolts per pulses per second),
By definition at v0 = 0,
If there is no saturation of the wave to pulse mechanism,
This is shown graphically in Fig. 3.19c. If there is no saturation of the pulse to wave mechanism,
This is shown in Fig. 3.19d.
Equations (85) and (87) imply that there are three sets of determinants of forward gain. First, the two empirical rate constants, γ and ζ, denote the lumped properties determining the sensitivities respectively of the trigger zones and of the synaptic mechanisms in the dendrites. In the following chapters these are treated as invariants.
Second, the forward gain depends on the background state, which is specified by the mean pulse rate po and the relative mean level of hyperpolarization i or depolarization e = r i. By assumption the synaptic equilibrium potentials, vIPSP and vEPSP, are fixed by the electrochemical ionic concentration gradients across the membrane, and i = vIPSP – vo and e = vEPSP – vo change as the result of changes in the background level of polarization vo. Intuitively we can expect po and vo to increase or decrease together (Fig. 3.20a), but the form of the covariance is not known. A set of input–output curves for G2(v) is shown in Fig. 3.20a for three sets of values of po and vo. The same three curves are shown in Fig. 3.20b, with two differences. The values on the ordinate are normalized by dividing p by po, and the values on the abscissa are scaled to give equal width of display for three ranges of wave amplitude v. The steepest sigmoid curve has the widest range of v.
Because of the way in which the experimental data are acquired and processed (Section 3.3.3), vo = 0 in each determination of p(T, v), and the range on the abscissa is fixed at ± 3σ (Fig. 3.20b). Further, the data are normalized with respect to po [Eq. (57)]. If vo and po increase, then from Eq. (85) there is an increase in go, and the change in the form of the graphic display consists in an increase in the steepness of the sigmoid curve (Fig. 3.20c). However, an increase in σ for the EEG also increases the steepness. That is, in the form of Fig. 3.20c, the input–output curves are comparable to the experimental curves for pulse probability conditional on EEG amplitude (Fig. 3.21), in which the conditional probability is normalized by po, and the range of EEG amplitude is ±3σ (standard deviations). Examples are shown for three conditions of recording mitral–tufted units from the olfactory bulb in an anesthetized cat. In part (a) the data are from the condition breathing through the mouth as well as the nose. For part (b) breathing is only through the nose, the variance of EEG amplitude is greater, and the value for po is increased. For part (c), EEG variance and po are diminished by a small supplemental dose of pentobarbital. The upper three frames show <Page 169> conditional pulse probability at a lag time when the pulse probability is highest for high EEG amplitudes, and the lower three frames from the same tables show the conditional pulse probability at a lag time when the pulse probability is lowest for high EEG amplitudes (Section 3.3.3). In the following chapters, po and vo are treated as constant determinants of fixed go over sets of AEPs within stable states and as variable (varying go) between stable states (Section 1.3.5) of KO, KI, and KII sets.
FIG. 3.20. Stages in the procedure of normalization by which the forward gain is derived for wave to pulse conversion. (a) Representation of increasing gain (slope of curve for wave to pulse conversion) with increase in both mean pulse rate po and mean wave amplitude vo. (b) Effect of normalization of vo. (c) Effect of normalization of po. (d) Replacement of the sigmoid function by a linear function over an operating range given by v+ to v– . The slope of the line segments defines a point of tangency on each limb of the sigmoid curve, and those two points define the effective operating amplitudes ve* and vi* (see Section 6.1.1).
Third, the forward gain depends on the instantaneous wave amplitude v. In piecewise linear approximation we treat each AEP or time segment of the EEG as having a certain amplitude range v+ and v–, and we infer that the sigmoid input–output curve can be approximated by a straight <Page 170> line over that range, as in Fig. 3.20d, which has a certain slope. With fixed po and vo the slope is different for differing values of v+ and v– in differing AEPs and EEG segments. We can express the range for each event as a fixed value of v over the duration of the event. The fixed value is designated as ve* for a KOe or KIe and vi* for a KOi or KIi set. The numerical values cannot be obtained directly. Instead, a value of g(v) is found by techniques to be described, and Eq. (87) is solved for ve* or vi*. In terms of Fig. 3.20 the value of g(v*) specifies the slope of a straight line, which is tangent to one of the sigmoid curves at a point, and the value for v at that point is the value for ve* or vi*. That is, in piecewise linear approximation of the dynamics of a KO set within a KI or KII set (see Section 1.3.4), the nonlinear input–output curve is replaced by a straight line segment that is tangent to the curve at the effective value for the wave amplitude of the KO set over the duration of the approximation. The slope of the line is designated ge or gi.
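The tangent-point construction can be illustrated numerically: the slope of the reconstructed wave-to-pulse sigmoid is computed on each limb, and then inverted to recover the effective operating amplitude v* from a given gain value. Function names and unit parameters here are illustrative assumptions, not Freeman's notation.

```python
import numpy as np

def gain(v, p0=1.0, gamma_e=1.0, ratio=2.0):
    """Slope g(v) = dp_a/dv of the reconstructed wave-to-pulse sigmoid
    (exponential limb for v <= 0, saturating limb for v >= 0)."""
    v = np.asarray(v, dtype=float)
    gamma_i = ratio * gamma_e
    p_max = p0 * (1.0 + ratio)
    return np.where(
        v <= 0.0,
        gamma_i * p0 * np.exp(gamma_i * v),
        gamma_e * (p_max - p0) * np.exp(-gamma_e * v),
    )

def effective_amplitude(g_target, p0=1.0, gamma_e=1.0, ratio=2.0, excitatory=True):
    """Invert g(v) = g_target on one limb to recover the tangent point v*."""
    gamma_i = ratio * gamma_e
    p_max = p0 * (1.0 + ratio)
    if excitatory:                       # v* >= 0 limb
        return -np.log(g_target / (gamma_e * (p_max - p0))) / gamma_e
    return np.log(g_target / (gamma_i * p0)) / gamma_i   # v* <= 0 limb
```

Because each limb of the gain curve is monotone, a measured value of g(v*) picks out a unique tangent point on that limb, which is the sense in which Eq. (87) is solved for ve* or vi* in the text.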
FIG. 3.21. Examples of pulse probability sigmoid curves P+(v) and P–(v) from a bulbar neuron in three conditions: (a) oral breathing, (b) nasal breathing, both under light anesthesia and (c) under moderate anesthesia. Curves are from Eqs. (68). Each set of triangles shows +(v) above and –(v) below (Section 3.3.3). The upper three curves should be compared with the three curves in Fig. 3.20c.
The forward gain without saturation go can be estimated numerically by means of Eq. (85). For this purpose, it is necessary to identify the K–sets <Page 171> in the mass and to determine which sets generate the wave and pulse trains. This is discussed further in Section 6.2.4, after the required topology has been described.
The requirement for a sigmoid nonlinear input–output curve of distributed neural sets is now well recognized (Wilson & Cowan, 1972; Grossberg, 1973; Zetterberg, 1973). The unique features of the curves derived from electrophysiological data are the following. The monotonically increasing curves have a single inflexion, which implies that there is a single maximum in the gain as a function of amplitude. (If the saturation at the lower end were attributed only to thresholds of the neurons, then the distribution of thresholds would be unimodal). The maximal forward gain and the maximal rate of change in gain with change in wave amplitude both occur near the central operating point. Both curves are asymmetric; the curve for pulse to wave conversion in the ratio of 6 to 1, and the curve for wave to pulse conversion in the ratio of 2 to 1. The asymptotes for the wave to pulse conversion at p = 0 and p = 3po are also in the ratio of 2 to 1. This states that the theoretical maximum for pulse density in the population is directly coupled with the mean pulse density po so that both change together when the population changes from one stable level to another. The physiological basis for the relations between the curvatures and the asymptotes is obscure and deserves detailed exploration. These features are critical determinants of the interactive properties of KII sets which are described in Chapter 6. In particular, the near–threshold impulse responses of KI and KII sets are often exquisitely sensitive to changes in input amplitude, which reflects the maximal dependence of gain on amplitude near v = 0.