The concept of Fisher information I is introduced. Smoothness properties of I, and its relation to entropy, disorder, and uncertainty are explored. Information I is generalized to N-component problems, and is expressed both in direct and Fourier spaces. Applications to ISAR radar imaging and to the derivation of physical laws are discussed.
Many properties of the fundamental particles follow from a simple, information-based model of measurement. The model postulates that the level of Fisher information I that is acquired in the four-position measurement of a hadron achieves the level J that is intrinsic to the hadron four-position. Requiring the equivalent condition (I - J)^m = 0 for different integer values of m
Living systems represent a local exception, albeit transient, to the second law of thermodynamics, which requires entropy or disorder to increase with time. Cells maintain a stable ordered state by generating a steep transmembrane entropy gradient in an open thermodynamic system far from equilibrium, through a variety of entropy exchange mechanisms. Information storage in DNA and translation of that information into proteins are central to the maintenance of thermodynamic stability, through the increased order that results from synthesis of specific macromolecules from monomeric precursors while heat and other reaction products are exported into the environment. While the genome is the most obvious and well-defined source of cellular information, it is not necessarily the only cellular information system. In fact, information theory demonstrates that any cellular structure described by a nonrandom density distribution function may store and transmit information. Thus, lipids and polysaccharides, which are both highly structured and non-randomly distributed, increase cellular order and potentially contain abundant information, as do polynucleotides and polypeptides. Interestingly, there is no known mechanism that allows information stored in the genome to determine the highly regulated structure and distribution of lipids and polysaccharides in the cellular membrane, suggesting these macromolecules may store and transmit information not contained in the genome. Furthermore, transmembrane gradients of H(+), Na(+), K(+), Ca(2+), and Cl(-) concentrations and the consequent transmembrane electrical potential represent significant displacements from randomness and, therefore, rich potential sources of information. Thus, information theory suggests the genome-protein system may be only one component of a larger ensemble of cellular structures encoding and transmitting the information necessary to maintain living structures in an isoentropic steady state.
All spontaneous emergence of quantum particles from false vacuums can occur via the usual energy-based Lagrangians or, as we show, via a variational principle of minimum loss of Fisher information. By this principle, all material existence in the multiverse, including its life forms, is a physical manifestation of Fisher information. The information principle serially formed our universe, and all others, in the multiverse. The resulting expansionary (Big bang) eras of time t and/or space-time x_i, i = x,y,z,t (c=1) for the universes are found to obey probability densities p(t) and p(x_i) of the usual exponential forms. The existence of the multiverse allows preservation of invariant values of the 26 physical constants via their relay from one universe to another by successive Lorentzian wormholes. At each relay the emerging constants are represented by the intensities of an input hologram. The information principle was previously used to derive nearly all textbook physics and much cell biology, e.g. the Hodgkin-Huxley (H-H) equations governing ions emerging into biological cells. The equations we derive governing p(t) and p(x_i) for universes coincide with the H-H equations governing ions entering these biological cells. Thus, the information concept holds over a vast range of scale sizes.

2. BACKGROUND
Note: By "particle" we mean the usual wave-like entity consisting of energy-mass. We propose that all statistical, scientific (physical, chemical, biological, medical) effects are tangible expressions of maximized information [1],[2]; in particular, of Fisher information [3],[4]. These may be viewed, alternatively, as accomplishing minimization of its loss [5],[6] after transmission of certain particles over a channel. Also, in cases where the emergent medium is granular, the Fisher information goes over into a Kullback-Leibler divergence value [1], Eq. (A4) or (A6) (see Appendix A). In turn, this minimized loss is shown to represent that in corresponding Shannon information values [5],[6] (see below Eq. (3)). How does "minimum loss of information" occur?

2.1 Criterion of minimum information loss
A general information channel consists, by definition, of a signal, e.g. a light beam, which is transmitted over a medium (called a "channel"), e.g. "the air," from its source (say, a time-modulated light bulb) to a receiver (say, your eye). The ideal "information" sought at the receiver is in the true time-dependent intensity profile at the bulb. The transmission channel is, by definition, closed to outside signal inputs, so that, en route to the receiver, no new information (here, light) may enter it. Then whatever can enter must be, by definition, purely noise. This can only reduce the source level of information. Hence, any change in the information carried from source to receiver must be a loss. Minimizing this loss then amounts to achieving maximum gain of the information over the channel. Moreover, an effect whose observations convey maximum Fisher information I can, in particular, be most accurately measured. This is by the Cramér-Rao [1],[2],[3] result

e_min^2 = 1/I   (1)

for the minimum mean-squared error e_min^2 attainable in measuring a mean, or signal, value. Thus, the larger the information I is, the smaller is the mean-squared error.

2.2 Thesis of J.A. Wheeler
This has further consequences: Man is an integral part of nature, and any humanly-observed phenomenon is, at least, affected by man's chosen method of observation (a well-confirmed thesis of J.A. Wheeler [7]). An oft-quoted example is the two possible types of output, either visible interference fringes on a receiving screen or visually observed photon slit positions, when conducting the famous optical double-slit interference experiment [8]. In fact, all such Wheeler-type experiments could be observed by suitably automated measuring devices and so are not necessarily limited to human, or even to live, observation. Apes, dolphins, or artificial devices such as motion-sensing cameras can fill this role as well.

2.3 Dual role of intelligence
Next, consider a different benefit of quantifying an unknown scientific effect by maximizing its level of Fisher information I [1],[2]. By Eq. (1), the data from the effect are also maximally accurate. In summary, in a universe such as ours, maximum intelligence is built into both (1) the structure of its physics and (2) the ability of that structure to be known. This also lends itself to conveying into our universe extremely accurate values of the 26 fundamental physical constants, as were needed for it to evolve properly from the Big bang onward (see below). Also, in such a universe, creatures that can deduce properties (1) and (2), and use them for evolutionary advantage, are favored. An example of such use is the current development of "learning machines." Suggestions of combining a learning machine with the human brain are being considered for, potentially, engineering a combined living-digital being of increased mental faculty.

2.4 Accuracy problem
Could nature per se, in even the seemingly most complex form of multiple universes (a "multiverse"), have emerged out of this Fisher information-based property? A clue is that Fisher information is a local measure of complexity (in time and space), so that maximizing it allows, in turn, each universe of a multiverse to arise. We will show below how this could have happened. A related question is how our universe could have evolved with the required extreme accuracy of the 26 universal constants that enable life to exist in it. For example, Stephen Hawking has noted that: "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life." [9]

2.5 Sensitivity of the known physical laws to accuracy of their universal constants
Thus suppose, e.g., that the strong nuclear force coupling constant were 2% stronger than it is, while the other constants were left unchanged (discussed by physicist Paul Davies [10]).
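As a quick sanity check on the Cramér-Rao relation quoted as Eq. (1) above, the following minimal Python sketch estimates the mean of a Gaussian signal by Monte Carlo and compares the empirical mean-squared error with 1/I. The noise level, sample size, and signal value are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Monte Carlo check of e_min^2 = 1/I for the simplest case of
# estimating the mean of a Gaussian signal. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

sigma = 2.0          # assumed noise standard deviation
n_obs = 50           # observations per experiment
n_trials = 20000     # Monte Carlo repetitions
true_mean = 1.3      # the "signal" value being measured

# Fisher information per observation for a Gaussian location parameter is 1/sigma^2,
# so I for n_obs independent observations is n_obs / sigma^2.
fisher_info = n_obs / sigma**2

samples = rng.normal(true_mean, sigma, size=(n_trials, n_obs))
estimates = samples.mean(axis=1)            # the efficient estimator here
mse = np.mean((estimates - true_mean)**2)   # empirical mean-squared error

print(f"empirical e^2       : {mse:.5f}")
print(f"Cramer-Rao bound 1/I: {1.0 / fisher_info:.5f}")
```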
The authors show how the formal framework of quantum mechanics arises in economics generally, and in financial economics in particular, as a natural consequence of an information-theoretic approach to these fields. Specifically, extremizing Fisher information, subject to the constraint that the associated probability density reproduces observed economic phenomena, results in Schrödinger-like equations. The authors illustrate the utility of this approach with examples from financial economics. As an emergent consequence of information theory, the use of quantum formalism in economics can be seen as a powerful approach for understanding complex dynamics grounded in the probabilistic view of economics advocated by Knight and Keynes.
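To make the phrase "Schrödinger-like equations" concrete, here is a minimal, hedged sketch of the kind of eigenvalue problem that extremizing Fisher information under a constraint produces: -psi'' + V(x) psi = E psi, with probability density p = psi^2. The quadratic "potential" V below stands in for a generic constraint and is not one of the authors' financial examples.

```python
# Minimal sketch, assuming a generic quadratic constraint term V(x) = x^2:
# solve -psi'' + V psi = E psi by finite differences and read off p = psi^2.
import numpy as np
from scipy.linalg import eigh_tridiagonal

x = np.linspace(-6.0, 6.0, 1200)
dx = x[1] - x[0]
V = x**2                                   # hypothetical constraint term, for illustration

# Finite-difference discretization of -d^2/dx^2 + V with Dirichlet boundaries.
diag = 2.0 / dx**2 + V
offdiag = np.full(len(x) - 1, -1.0 / dx**2)
energies, states = eigh_tridiagonal(diag, offdiag, select='i', select_range=(0, 0))

psi0 = states[:, 0]
p = psi0**2
p /= np.sum(p) * dx                        # normalized probability density

print("ground-state eigenvalue:", energies[0])   # ~1 for V = x^2 in these units
print("density integrates to  :", np.sum(p) * dx)
```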
Presented in this paper is a new matrix formulation of both the classical electromagnetic Maxwell equations and the relativistic quantum mechanical Dirac equation. These new matrix representations will be referred to as the Maxwell spacetime matrix equation and the Dirac spacetime matrix equation. Both are Lorentz invariant. Key to these new matrix formulations is an 8-by-8 matrix operator referred to here as the spacetime matrix operator. As it turns out, the Dirac spacetime matrix equation is equivalent to four new vector equations, which are similar in form to the four Maxwell vector equations. These new equations will be referred to as the Dirac spacetime vector equations, and they can be solved as readily as a set of Maxwell vector equations. Based on these two new matrix approaches, two computer programs, written in Matlab, have been developed and tested for determining the reflection and transmission characteristics of multilayer optical thin-film structures and multilayer quantum well-and-barrier structures. A listing of these programs may be found in the supplemental material associated with this article. Numerical results obtained using these programs are presented in the results section of this article.
A wide class of physical problems requires the estimation of probability laws. Diffraction patterns and quantum mechanical probability laws on position are examples. Minimum Fisher information is one approach to estimating such laws; maximum entropy is another. In this paper, we show that the minimum Fisher information approach may be derived from a prior principle of maximum Cramér-Rao bound (MCRB). The MCRB-Fisher approach is then applied to some fundamental physical problems, including diffraction theory and quantum mechanics. It is found to give the correct physical solutions, that is, the Helmholtz and Schrödinger wave equations respectively. By comparison, the maximum entropy approach gives incorrect solutions to these problems, except in the special (thermodynamic) case of a harmonic oscillator potential.
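For a numerical feel for the Fisher information functional I[p] = integral of p'(x)^2/p(x) dx that the minimum-Fisher-information principle works with, the hedged sketch below evaluates it for a Gaussian and a Laplace density of equal variance; the Gaussian attains the smaller value 1/sigma^2, consistent with its being the minimizer under a variance constraint. The grid and the two trial densities are illustrative choices, not taken from the paper.

```python
# Hedged numerical illustration of the Fisher information functional
# I[p] = integral of p'(x)^2 / p(x) dx, evaluated for two densities of equal variance.
import numpy as np

x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]
sigma = 1.5

def fisher_info(p):
    dp = np.gradient(p, dx)                       # numerical derivative of the density
    return np.sum(dp**2 / np.maximum(p, 1e-300)) * dx

gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

b = sigma / np.sqrt(2.0)                          # Laplace scale giving variance sigma^2
laplace = np.exp(-np.abs(x) / b) / (2 * b)

print("Gaussian I:", fisher_info(gauss), " analytic:", 1 / sigma**2)
print("Laplace  I:", fisher_info(laplace), " analytic:", 2 / sigma**2)
```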
Mammalian cell function requires timely and accurate transmission of information from the cell membrane (CM) to the nucleus (N). These pathways have been intensively investigated and many critical components and interactions have been identified. However, the physical forces that control movement of these proteins have received scant attention. Thus, transduction pathways are typically presented schematically, with little regard to spatial constraints that might affect the underlying dynamics necessary for protein-protein interactions and molecular movement from the CM to the N. We propose that messenger protein localization and movement are highly regulated and governed by Coulomb interactions between: 1. a recently discovered, radially directed E-field from the nuclear membrane (NM) into the CM, and 2. the net protein charge determined by its isoelectric point, phosphorylation state, and the cytosolic pH. These interactions, which are widely applied in electrophoresis, provide a previously unknown mechanism for localization of messenger proteins within the cytoplasm as well as rapid shuttling between the CM and N. Here we show these dynamics optimize the speed, accuracy, and efficiency of transduction pathways, even allowing measurement of the location and timing of ligand binding at the CM, previously unknown components of intracellular information flow that are, nevertheless, likely necessary for detecting spatial gradients and temporal fluctuations in ligand concentrations within the environment. The model has been applied to the RAF-MEK-ERK pathway and the scaffolding protein KSR1 using computer simulations and in-vitro experiments. The computer simulations predicted distinct distributions of phosphorylated and unphosphorylated components of this transduction pathway, which were experimentally confirmed in normal breast epithelial cells (HMEC).
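The following order-of-magnitude sketch illustrates the kind of Coulomb-driven transport the abstract invokes, using the standard Stokes-drag drift velocity v = qE/(6 pi eta r) for a charged protein in a radial E-field. Every numerical value is a hypothetical, illustrative assumption, not a parameter from the paper or its simulations.

```python
# Order-of-magnitude sketch of electrophoretic drift of a charged messenger protein.
# All numbers below are hypothetical, illustrative values.
import math

e_charge = 1.602e-19      # elementary charge, C
q = 10 * e_charge         # assumed net protein charge of ~10 e
E_field = 1.0e5           # assumed intracellular field strength, V/m
eta = 1.0e-3              # viscosity of water, Pa*s (cytosol is effectively higher)
radius = 2.5e-9           # assumed hydrodynamic radius of the protein, m

drag = 6.0 * math.pi * eta * radius
v_drift = q * E_field / drag              # Stokes-drag drift velocity, m/s

distance = 10e-6          # assumed cell-membrane-to-nucleus distance, m
transit_time = distance / v_drift

print(f"drift velocity: {v_drift * 1e6:.1f} um/s")
print(f"transit time  : {transit_time:.4f} s over {distance * 1e6:.0f} um")
```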
Probability, Statistical Optics, and Data Testing, 2001
The aim of a least-squares approach is to estimate the parameters that define a known, or hypothetical, physical law. The estimated parameters are those that make the law fit a given set of data points in the least-squares sense. Probably the reader has carried through a least-squares curve fit more than once during his career. But there is actually a great deal more to this problem than accomplishing the fit.
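As a concrete instance of the least-squares estimation described above, the short sketch below fits the parameters of a hypothetical linear law y = a*x + b to synthetic noisy data; both the law and the data are illustrative assumptions, not taken from the text.

```python
# Minimal sketch: least-squares estimation of (a, b) for a hypothetical law y = a*x + b.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 25)
a_true, b_true = 2.0, -1.0
y = a_true * x + b_true + rng.normal(0.0, 0.5, size=x.size)   # noisy observations

# Design matrix for the linear model; lstsq solves the normal equations stably.
A = np.column_stack([x, np.ones_like(x)])
(a_hat, b_hat), residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}")
```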
Processing of Images and Data from Optical Sensors, 1981
The most important advance in restoring images during the past decade was probably the realization that positivity-enforced solutions are a real advance over unconstrained solutions. Surprisingly, the positivity constraint both reduces spurious oscillation and enhances resolution simultaneously. This will be shown by a simple graphical argument. The earliest workers in this exotic field realized that by enforcing positivity they were inducing higher-frequency oscillations into their outputs than are even present in the image data. More importantly, these oscillations were real and not artifacts. That is, super-resolution was being produced in real images for the first time. Some examples of these will be shown. The earliest methods for enforcing positivity were ad hoc, e.g., arbitrarily representing the restoration as the square of a function. Later, positivity was given a firm theoretical basis through the route of "maximum entropy," a concept which originated in the estimation of probability densities. A review of such methods will be given. Of late, positivity has also aided in producing real solutions to the "missing phase problem" of Labeyrie interferometry.
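The ad hoc "square of a function" device mentioned above can be illustrated directly: write the restored object as o = z^2 and fit z, so the restoration can never go negative. The sketch below does this for a simple 1-D deblurring problem; the blur kernel, step size, and iteration count are arbitrary illustrative choices, not from the text.

```python
# Sketch of positivity by squared representation: restore o = z**2 >= 0 by gradient
# descent on the data misfit. Illustrative 1-D problem with an assumed Gaussian blur.
import numpy as np

rng = np.random.default_rng(2)
n = 128

truth = np.zeros(n)
truth[40], truth[44], truth[90] = 1.0, 0.7, 0.5          # sparse bright points

h = np.exp(-0.5 * (np.arange(-6, 7) / 2.0)**2)
h /= h.sum()                                             # symmetric Gaussian blur kernel
data = np.convolve(truth, h, mode="same") + rng.normal(0, 0.005, n)

z = np.full(n, 0.1)                                      # restoration is o = z**2, never negative
step = 0.05
for _ in range(5000):
    o = z**2
    resid = np.convolve(o, h, mode="same") - data
    # gradient of ||conv(z**2, h) - data||**2 w.r.t. z (up to boundary effects);
    # correlation equals convolution here because h is symmetric
    grad = 2.0 * np.convolve(resid, h, mode="same") * 2.0 * z
    z -= step * grad

restored = z**2
print("minimum of restored image:", restored.min())       # nonnegative by construction
print("three largest pixels near:", sorted(np.argsort(restored)[-3:]))  # approx. source locations
```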
We show that a famous die experiment used by E. T. Jaynes as intuitive justification of the need for maximum entropy (ME) estimation admits, in fact, of solutions by classical, Bayesian estimation. The Bayesian answers are the maximum a posteriori (MAP) and posterior mean solutions to the problem. These depart radically from the ME solution, and are also much more probable answers.
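For context, the maximum-entropy side of Jaynes' die problem can be reproduced in a few lines: given only a long-run mean of 4.5, ME assigns p_k proportional to exp(lambda*k), with lambda fixed by the mean constraint. The Bayesian MAP and posterior-mean answers that the paper contrasts with this are not reproduced here.

```python
# Hedged sketch: the maximum-entropy solution of Jaynes' die problem (mean = 4.5).
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)
target_mean = 4.5

def mean_gap(lam):
    w = np.exp(lam * faces)
    return np.sum(faces * w) / np.sum(w) - target_mean

lam = brentq(mean_gap, -5.0, 5.0)        # mean is monotone in lambda, so the root is bracketed
p = np.exp(lam * faces)
p /= p.sum()

print("lambda =", round(lam, 4))
print("ME probabilities:", np.round(p, 4))   # skewed toward the high faces
print("check mean      :", np.round(np.sum(faces * p), 4))
```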
Presented in this paper is a new matrix representation of classical electromagnetic theory. The basis of this representation is a space-time, eight-by-eight differential matrix operator. This matrix operator is initially formulated from the differential form of the Maxwell field equations for vacuum. The resulting matrix formulation of Maxwell's equations allows simple and direct derivation of: the electromagnetic wave and charge continuity equations; the Lorentz conditions and definition of the electromagnetic potentials; the Lorentz and Coulomb gauges; the electromagnetic potential wave equations; and Poynting's conservation of energy theorem. A four-dimensional Fourier transform of the matrix equations casts them into an eight-dimensional transfer theorem. The transfer function has an inverse, and this allows the equations to be inverted. This expresses the fields directly in terms of the charge and current source distributions.
The use and analysis of the signatures of light reflected by scattering sites in a moving medium have been actively pursued for the determination of flow velocities [1-4]. Here we address the problem of the associated data reduction, in particular when the number of scatterers is appreciable (of the order of 100 or more) and the particles are relatively large, of the order of tens of microns; this is called particle image velocimetry (PIV). Particle image velocimetry is of considerable interest, especially for flows where there is only limited transverse motion, i.e., where particles in the flow field generally stay within well-defined planes. In many situations of interest these conditions are met. Spatial particle distributions within flows are of some importance since they influence many processes of interest, including heat transfer, velocity distributions of the carrier gas near bounding surfaces, and other properties of both single- and double-phase flow. In addition, since excellent time resolution in the data acquisition is now possible with the use of pulsed lasers, the simultaneous determination of the position and subsequent infer
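The core data-reduction step in PIV, estimating the displacement of a particle field between two exposures from the peak of their cross-correlation, can be sketched as follows. The image size, particle count, and imposed shift are synthetic, illustrative values, not the paper's data.

```python
# Minimal PIV-style sketch: recover a uniform displacement between two synthetic
# particle images from the peak of their FFT-based circular cross-correlation.
import numpy as np

rng = np.random.default_rng(3)
n = 128
shift = (5, -3)                                  # true (row, col) displacement, pixels

frame1 = np.zeros((n, n))
rows = rng.integers(10, n - 10, 120)
cols = rng.integers(10, n - 10, 120)
frame1[rows, cols] = 1.0                         # ~120 point-like particles
frame2 = np.roll(frame1, shift, axis=(0, 1))     # same particles, uniformly displaced

# Cross-correlation via the FFT; its peak location gives the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
est = tuple(p if p <= n // 2 else p - n for p in peak)   # unwrap circular indices

print("true shift     :", shift)
print("estimated shift:", est)
```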
In this paper, a method of restoring longitudinal images is developed. By using the transfer function for longitudinal objects, and inverse filtering, a longitudinal image may be restored. The Fourier theory and sampling theorems for transverse images cannot be used directly in the longitudinal case; a modification and reasonable approximation are introduced. We have numerically established a necessary relationship between the just-resolved longitudinal separation (after inverse filtering), the noise level, and the taking conditions of object distance and lens diameter. An empirical formula is also found that fits the computed results well. This formula may be of use for designing optical systems which are to image longitudinal details, such as in robotics or microscopy.
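As a minimal illustration of the restoration-by-inverse-filtering step that the method builds on, the sketch below blurs a 1-D object containing two closely spaced details and restores it with a regularized inverse filter. The kernel width, noise level, and regularizing constant are illustrative assumptions, not the paper's longitudinal transfer function or taking conditions.

```python
# Sketch of regularized inverse filtering in 1-D, assuming a Gaussian blur kernel.
import numpy as np

rng = np.random.default_rng(4)
n = 256
x = np.arange(n)

obj = np.zeros(n)
obj[100], obj[110] = 1.0, 0.8                   # two closely spaced details

psf = np.exp(-0.5 * ((x - n // 2) / 4.0)**2)
psf /= psf.sum()
psf = np.fft.ifftshift(psf)                     # center the kernel at index 0

H = np.fft.fft(psf)
image = np.fft.ifft(np.fft.fft(obj) * H).real + rng.normal(0, 1e-3, n)

eps = 1e-3                                      # regularization against division by ~0
restored = np.fft.ifft(np.fft.fft(image) * H.conj() / (np.abs(H)**2 + eps)).real

# The dip between the two details, washed out by the blur, reappears after filtering.
print("blurred  at [100, 105, 110]:", np.round(image[[100, 105, 110]], 3))
print("restored at [100, 105, 110]:", np.round(restored[[100, 105, 110]], 3))
```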
The findings in this report are not to be construed as an official Department of the Army position, unless so designated by other authorized documents. The use of trade names or manufacturers' names in this report does not constitute endorsement of any commercial product.