Papers by Eduardo Mizraji

JOURNAL OF GENIUS AND EMINENCE, 2023
In this essay we propose, based on the ideas of L. Rapkine and J. Monod on the physical reasons for aesthetic appreciation, that the interest of creative scientists in Jorge Luis Borges' works is produced by an apparently contradictory effect: on the one hand, the serenity that these texts induce in the mind of an innovative person and, on the other, the modification of the cognitive balance caused by the complexity of these texts, which operates as a creative force. We illustrate this idea with the specific case of the story "Blue Tigers". We show that this text by Borges, in a reader who is sensitive to it, injects information that puts the mind in a state far from cognitive equilibrium. When it comes to solving a scientific problem, the search for new ways to solve it is enhanced by this condition in which the cognitive balance is cancelled. That is the condition in which the creation of new ideas seems possible.
Neurocomputing, 2016
We present a neural network model that can execute some of the procedures used in the information sciences literature. In particular we offer a simplified notion of topic and how to implement it using neural networks that use the Kronecker tensor product. We show that the topic detecting mechanism is related to Naive Bayes statistical classifiers, and that it is able to disambiguate the meaning of polysemous words. We evaluate our network in a text categorization task, resulting in performance levels comparable to Naive Bayes classifiers, as expected. Hence, we propose a simple scalable neural model capable of dealing with machine learning tasks, while retaining biological plausibility and probabilistic transparency.
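A minimal sketch of the kind of topic detector the abstract describes, under several simplifying assumptions: words and topics are one-hot vectors (so the Kronecker construction collapses to plain outer products), priors are uniform, and the vocabulary and counts are invented for illustration. It shows the Naive Bayes connection: the memory's column responses are word likelihoods, and multiplying them over a bag of words disambiguates "bank".

```python
import numpy as np

vocab = ["bank", "river", "money", "water"]      # invented toy vocabulary
topics = ["finance", "nature"]
W = np.eye(len(vocab))                           # one-hot word vectors
T = np.eye(len(topics))                          # one-hot topic vectors

# Toy word-topic counts (rows: topics, cols: words), standing in for corpus statistics.
counts = np.array([[5.0, 0.0, 8.0, 0.0],         # finance
                   [1.0, 7.0, 0.0, 6.0]])        # nature

# Associative memory: sum of outer products topic * word^T, weighted by
# P(word | topic) -- mirroring the Naive Bayes link mentioned in the abstract.
likelihood = counts / counts.sum(axis=1, keepdims=True)
M = sum(np.outer(T[k], W[i]) * likelihood[k, i]
        for k in range(len(topics)) for i in range(len(vocab)))

def detect_topic(words):
    """Accumulate evidence over a bag of words (product of per-word likelihoods)."""
    score = np.ones(len(topics))
    for w in words:
        score *= M @ W[vocab.index(w)]
    return topics[int(np.argmax(score))]

print(detect_topic(["bank", "money"]))   # -> finance
print(detect_topic(["bank", "river"]))   # -> nature: context disambiguates "bank"
```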

Biophysical Reviews
Explaining the foundation of cognitive abilities in the processing of information by neural systems has been a goal of biophysics since McCulloch and Pitts' pioneering work within the biophysics school of Chicago in the 1940s and the interdisciplinary cybernetics meetings of the 1950s, inseparable from the birth of computing and artificial intelligence. Since then, neural network models have traveled a long path in both the biophysical and the computational disciplines. The biological, neurocomputational aspect reached its representational maturity with the Distributed Associative Memory models developed in the early 1970s. In this framework, the inclusion of signal-signal multiplication within neural network models was presented as a necessity to provide matrix associative memories with adaptive, context-sensitive associations, while greatly enhancing their computational capabilities. In this review, we show that several of the most successful neural network models use a ...
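A minimal sketch of the signal-signal multiplication idea the review refers to: a matrix associative memory stores associations against the Kronecker product of a stimulus and a context, so the same stimulus retrieves different outputs under different contexts. All vectors are random unit vectors invented for the demonstration; near-orthogonality in high dimension keeps crosstalk small.

```python
import numpy as np
rng = np.random.default_rng(0)

def unit(n):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

n = 50
f = unit(n)                      # shared key stimulus
c1, c2 = unit(n), unit(n)        # two contexts
g1, g2 = unit(n), unit(n)        # two distinct target outputs

# Context-dependent memory: each association binds stimulus (x) context to an output.
M = np.outer(g1, np.kron(f, c1)) + np.outer(g2, np.kron(f, c2))

out1 = M @ np.kron(f, c1)        # ~ g1
out2 = M @ np.kron(f, c2)        # ~ g2
print(np.dot(out1, g1), np.dot(out1, g2))   # close to 1 and close to 0
print(np.dot(out2, g2), np.dot(out2, g1))   # close to 1 and close to 0
```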

Speech and Computer, 2018
Tensor contexts enlarge the performance and computational power of many neural models of language by generating a double filtering of incoming data. Applied to the linguistic domain, their implementation enables a very efficient disambiguation of polysemous and homonymous words. For the neurocomputational modeling of language, the simultaneous tensor contextualization of inputs and outputs inserts into the models strategic passwords that route words towards key natural targets, thus allowing for the creation of meaningful phrases. In this work, we present the formal properties of these models and describe possible ways to use contexts to represent plausible neural organizations of sequences of words. We include an illustration of how these contexts generate topographic or thematic organization of data. Finally, we show that double contextualization opens promising ways to explore the neural coding of episodes, one of the most challenging problems of neural computation.
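A small sketch of the double contextualization the abstract mentions, with invented random patterns: both the incoming word and the produced word carry a tensor context, so the stored association filters its input and tags its output with a "password" that can route it to a downstream target.

```python
import numpy as np
rng = np.random.default_rng(1)

def unit(n):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

n = 40
word_in, word_out = unit(n), unit(n)      # incoming and produced word patterns
ctx_in, ctx_out = unit(n), unit(n)        # input filter and output "password"

# Double contextualization: bind the contextualized input (word_in (x) ctx_in)
# to the contextualized output (word_out (x) ctx_out).
M = np.outer(np.kron(word_out, ctx_out), np.kron(word_in, ctx_in))

response = M @ np.kron(word_in, ctx_in)   # = word_out (x) ctx_out
# Projecting out the known output word recovers the routing tag ctx_out:
tag = np.kron(word_out, np.eye(n)) @ response
print(np.dot(tag, ctx_out))               # ~ 1: the output context survives
```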

Cognitive Science, 2011
Encoding word-order and semantic information using modular neural networks.
Álvaro Cabana, Juan Valle-Lisboa, Eduardo Mizraji (Sección Biofísica, Facultad de Ciencias, Universidad de la República).
Abstract: Vector space models have been successfully used for lexical semantic representation. Some of these models rely on distributional properties of words in large corpora, and have been contrasted with human performance on semantic similarity and priming in lexical decision tasks. Neural network models of lexical representation have been classically of reduced size due to computational limitations. Recently, associative memory models have been related to semantic space models such as LSA and Bayesian classifiers. Our goal is to build lexical representations that include semantic and word-order information by using context-dependent neur...
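The abstract is truncated, so the following is only a guess at the general scheme, with an invented toy lexicon: one common way to add word-order information to a distributed representation is to bind each word to a position context via the Kronecker product and sum the bindings, after which the word at a given position can be decoded by correlation.

```python
import numpy as np
rng = np.random.default_rng(2)

def unit(n):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

n = 64
words = {w: unit(n) for w in ["dogs", "chase", "cats"]}  # invented toy lexicon
pos = [unit(n) for _ in range(3)]                        # one context per position

# A sentence is the sum of word-position bindings (Kronecker products).
sentence = sum(np.kron(words[w], pos[i])
               for i, w in enumerate(["dogs", "chase", "cats"]))

# Decoding: correlate the sentence with each candidate binding at position 2.
for w, v in words.items():
    print(w, round(float(np.dot(sentence, np.kron(v, pos[2]))), 2))
# "cats" scores ~1 while the others stay near 0, so word order is recoverable.
```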
Revue d'épidémiologie et de santé publique, Jan 15, 1978
Martini's model makes it possible to study various epidemiological situations arising in the study of infectious diseases: endemicity, recurrence, presence of a reservoir of the pathogenic agent, and active immunization. A numerical application to epidemiological data concerning measles in Uruguay is presented.
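For illustration only, a generic SIR-type system with waning immunity and vaccination, which can reproduce the endemic and recurrent regimes the abstract lists; this is a stand-in for the class of models discussed, not a reconstruction of Martini's own equations, and all rate constants are invented.

```python
# Euler integration of an illustrative SIR system with loss of immunity
# (delta) and active immunization (v). Parameters are invented.
beta, gamma, delta, v = 0.3, 0.1, 0.01, 0.005
s, i, r = 0.99, 0.01, 0.0
dt, steps = 0.1, 5000

for _ in range(steps):
    ds = (-beta * s * i + delta * r - v * s) * dt
    di = (beta * s * i - gamma * i) * dt
    dr = (gamma * i - delta * r + v * s) * dt
    s, i, r = s + ds, i + di, r + dr

print(f"endemic levels: S={s:.3f}, I={i:.3f}, R={r:.3f}")
```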
Journal of Theoretical Biology, 1984
Kinetic models for the mode of action of processive and non-processive DNA-helicases are detailed. Fluxes at the steady state are analyzed, and the random walk of the enzymes on the DNA is studied in connection with the rate constants of the chemical reactions involved in the transformation of substrate to products. Finally, the constants of the kinetic model for the processive helicase are related to the parameters of an analogous viscoelastic model.
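A toy version of the random-walk picture mentioned in the abstract: the enzyme steps forward or backward along the DNA lattice with probabilities set by two rate constants. The rates are invented, and the mapping of the paper's full kinetic cycle onto two effective rates is a simplification.

```python
import numpy as np
rng = np.random.default_rng(3)

# Biased random walk of a helicase along DNA; k_forward and k_backward
# stand in for the kinetic constants of the stepping cycle (invented values).
k_forward, k_backward = 2.0, 0.5
p_forward = k_forward / (k_forward + k_backward)

def walk(n_steps):
    steps = rng.choice([1, -1], size=n_steps, p=[p_forward, 1 - p_forward])
    return np.cumsum(steps)

traj = walk(10_000)
# Mean drift per step tends to (k_f - k_b) / (k_f + k_b) = 0.6 here.
print("mean drift per step:", traj[-1] / len(traj))
```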
The Journal of Chemical Physics, 1984

Cortex, 2014
Numerous cortical disorders affect language. We explore the connection between the observed language behavior and the underlying substrates by adopting a neurocomputational approach. To represent the observed trajectories of the discourse in patients with disorganized speech and in healthy participants, we design a graphical representation for the discourse as a trajectory that allows us to visualize and measure the degree of order in the discourse as a function of the disorder of the trajectories. Our work assumes that many of the properties of language production and comprehension can be understood in terms of the dynamics of modular networks of neural associative memories. Based upon this assumption, we connect three theoretical and empirical domains: (1) neural models of language processing and production, (2) statistical methods used in the construction of functional brain images, and (3) corpus linguistic tools, such as Latent Semantic Analysis (henceforth LSA), that are used to discover the topic organization of language. We show how the neurocomputational models intertwine with LSA and the mathematical basis of functional neuroimaging. Within this framework we describe the properties of a context-dependent neural model, based on matrix associative memories, that performs goal-oriented linguistic behavior. We link these matrix associative memory models with the mathematics that underlie functional neuroimaging techniques and present the "functional brain images" emerging from the model. This provides us with a completely "transparent box" with which to analyze the implication of some statistical images. Finally, we use these models to explore the possibility that functional synaptic disconnection can lead to an increase in connectivity between the representations of concepts that could explain some of the alterations in discourse displayed by patients with schizophrenia.
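One simple way to quantify the "degree of order" of a discourse trajectory, not the paper's own measure: embed successive utterances as vectors (e.g. LSA vectors) and average the angular jump between consecutive points. The embeddings below are synthetic and invented purely to show the contrast between a coherent and an erratic trajectory.

```python
import numpy as np

def discourse_disorder(embeddings):
    """Average angular jump between consecutive utterance vectors.

    `embeddings` is an (utterances x dims) array, e.g. LSA vectors of
    successive sentences; larger values mean a more erratic trajectory.
    """
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos = np.sum(E[:-1] * E[1:], axis=1)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

rng = np.random.default_rng(4)
coherent = np.cumsum(rng.standard_normal((20, 50)) * 0.1, axis=0) + 1.0
erratic = rng.standard_normal((20, 50))
print(discourse_disorder(coherent), "<", discourse_disorder(erratic))
```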

Biosystems, 1999
Context-dependent associative memories are models that allow the retrieval of different vectorial responses given the same vectorial stimulus, depending on the context presented to the memory. The contextualization is obtained by taking the Kronecker product of two vectorial entries to the associative memory: the key stimulus and the context. These memories are able to display a wide variety of behaviors that range from all the basic operations of the logical calculus (including fuzzy logics) to the selective extraction of features from complex vectorial patterns. In the present contribution, we show that a context-dependent memory matrix stores a large number of possible virtual associative memories that awaken in the presence of a context. We show how the vectorial context allows a memory matrix to be represented in terms of its singular-value decomposition. We describe a neural interpretation of the model in which the Kronecker product is performed on the same neurons that sustain the memory. We explore, with numerical experiments, the reliability of chains of contextualized associations. In some cases, random disconnection produces the emergence of oscillatory behaviors of the system. Our results show that associative chains retain their performance for relatively large dimensions. Finally, we analyze the properties of some modules of context-dependent autoassociative memories inserted in recursive nets: perceptual auto-organization in the presence of ambiguous inputs (e.g. the disambiguation of the Necker cube figure), the construction of intersection filters, and feature extraction capabilities.
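The "basic operations of the logical calculus" claim can be made concrete with the standard vector-logic construction: with orthonormal truth vectors s ("true") and n ("false"), NOT is a matrix and AND/OR are matrices acting on Kronecker products of truth vectors. A direct check:

```python
import numpy as np

s = np.array([1.0, 0.0])   # true
n = np.array([0.0, 1.0])   # false

# Monadic NOT and dyadic AND/OR as sums of outer products over Kronecker
# products of truth vectors, as in context-dependent memory models.
NOT = np.outer(n, s) + np.outer(s, n)
AND = (np.outer(s, np.kron(s, s)) + np.outer(n, np.kron(s, n))
       + np.outer(n, np.kron(n, s)) + np.outer(n, np.kron(n, n)))
OR  = (np.outer(s, np.kron(s, s)) + np.outer(s, np.kron(s, n))
       + np.outer(s, np.kron(n, s)) + np.outer(n, np.kron(n, n)))

for p in (s, n):
    for q in (s, n):
        print(AND @ np.kron(p, q), OR @ np.kron(p, q))   # full truth tables
print(NOT @ s, NOT @ n)                                   # n, s
```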
Scientific Reports, Jan 19, 2023
2017 Computing Conference, 2017
This work relates the theory of Mental Spaces with neural models that sustain associations between patterns. The theory of context-dependent matrix associative memories is used to establish a neural counterpart for the connectors between mental spaces. Two applications of these neural models, concerning mental space builders for linguistic topics and prepositions respectively, are described. Finally, it is shown that this relation between mental spaces and matrix memories establishes a link with LSA that may help to develop a physiological approach to semantic networks.
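A speculative sketch of what a "connector" could look like in this formalism, with an invented example: a pattern in one mental space is linked to its counterpart in another space only when the space-building context is present.

```python
import numpy as np
rng = np.random.default_rng(6)

def unit(n):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

n = 32
# Invented example: an actor in the "reality" space and his counterpart
# (the character) in the "drama" space, linked by a theatre context.
actor, character = unit(n), unit(n)
ctx_theatre = unit(n)

# The connector is a context-dependent association across the two spaces.
M = np.outer(character, np.kron(actor, ctx_theatre))

print(np.dot(M @ np.kron(actor, ctx_theatre), character))  # ~1: connector engaged
print(np.linalg.norm(M @ np.kron(actor, unit(n))))         # small: wrong context
```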
Biosystems, 1994
... This is easy to see taking into account that an arbitrary order can always be written as P(h ⊗ p), P being a permutation matrix (see Barnett, S., 1990, Matrices, Clarendon Press, Oxford). ...
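The quoted claim, that any reordering of a Kronecker product is a permutation of it, can be checked numerically with the standard commutation matrix; the vectors h and p below are invented test data.

```python
import numpy as np
rng = np.random.default_rng(5)

def commutation_matrix(m, n):
    """Permutation matrix K with K @ np.kron(h, p) == np.kron(p, h)
    for h of length m and p of length n."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[j * m + i, i * n + j] = 1.0
    return K

h, p = rng.standard_normal(3), rng.standard_normal(4)
K = commutation_matrix(3, 4)
print(np.allclose(K @ np.kron(h, p), np.kron(p, h)))   # True
```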

ArXiv, 2021
The square root of Not is a logical operator of importance in quantum computing theory and of interest as a mathematical object in its own right. In physics, it is a complex square matrix of dimension 2; in the present work it is a complex square matrix of arbitrary dimension. The introduction of linear algebra into logical theory has been enhanced in recent decades by research in the fields of neural networks and quantum computing. Here we briefly describe the representation of logical operations through matrices and show how general expressions for the two square roots of the Not operator are obtained. Then, we explore two topics. First, we study an extension to a non-quantum domain of a short form of Deutsch's algorithm. Then, we assume that a root of Not is a matrix extension of the imaginary unit i, and under this idea we obtain fully matrix versions of the Euler expansions and of the representations of circular functions by complex exponentials.
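In the dimension-2 case the two complex-conjugate square roots are explicit and easy to verify; this check illustrates the object the abstract builds on, not the paper's arbitrary-dimension generalization.

```python
import numpy as np

NOT = np.array([[0.0, 1.0],
                [1.0, 0.0]])

# One of the two complex-conjugate square roots of NOT in dimension 2.
ROOT = 0.5 * np.array([[1 + 1j, 1 - 1j],
                       [1 - 1j, 1 + 1j]])

print(np.allclose(ROOT @ ROOT, NOT))                  # True: applying it twice negates
print(np.allclose(ROOT.conj() @ ROOT.conj(), NOT))    # True: the conjugate root
print(np.allclose(ROOT @ ROOT.conj(), np.eye(2)))     # the two roots are mutually inverse
```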

ArXiv, 2020
In this work we investigate the representation of counterfactual conditionals using vector logic, a matrix-vector formalism for logical functions and truth values. With this formalism, we can describe counterfactuals as complex matrix operators that arise from preprocessing the implication matrix with one of the square roots of the negation, a complex matrix. This mathematical approach puts in evidence the virtual character of counterfactuals: this representation of a counterfactual proposition produces a valuation that is the superposition of the two opposite truth values weighted, respectively, by two complex conjugate coefficients. This result shows that the procedure produces an uncertain evaluation projected on the complex domain. After this basic representation, the judgment of the plausibility of a given counterfactual allows us to shift the decision towards an acceptance or a refusal represented by the real vectors 'true' or 'false'.
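An illustrative fragment of the construction: the paper preprocesses the implication matrix, but the superposition effect can already be seen by applying a square root of NOT to a single truth vector, which yields 'true' and 'false' components weighted by complex conjugate coefficients.

```python
import numpy as np

s = np.array([1.0, 0.0])     # true
n = np.array([0.0, 1.0])     # false
ROOT = 0.5 * np.array([[1 + 1j, 1 - 1j],
                       [1 - 1j, 1 + 1j]])   # a square root of NOT

valuation = ROOT @ s
print(valuation)                                    # 0.5*(1+1j)*s + 0.5*(1-1j)*n
# The coefficients on 'true' and 'false' are complex conjugates:
print(np.dot(valuation, s), np.dot(valuation, n))   # 0.5+0.5j and 0.5-0.5j
```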

Cognitive Science, 2013
Language as a Window into the Brain and its Pathologies.
Peter Garrard, St George's, University of London & St George's Stroke and Dementia Research Centre, UK.
Brita Elvevåg, Department of Clinical Medicine, University of Tromsø & University Hospital of North Norway, Tromsø, Norway.
Peter beim Graben, Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, Germany.
Eduardo Mizraji, Sección Biofísica, Facultad de Ciencias, Universidad de la República, Uruguay.
Juan C. Valle Lisboa, Facultad de Ciencias, Universidad de la República, Uruguay.
Keywords: Language; Psychiatry; Schizophrenia; Model; Neural Networks.
... notoriously affect the way lexical items are selected and used by a writer, even before the symptoms of the disease are apparent. Several measures of language comprehension and production have been used to assess the presence and cours...
THEORIA. An International Journal for Theory, History and Foundations of Science, 2016
Natural languages can express some logical propositions that humans are able to understand. We illustrate this fact with a famous text that Conan Doyle attributed to Holmes: "It is an old maxim of mine that when you have excluded the impossible, whatever remains, however improbable, must be the truth". This is a subtle logical statement usually felt to be evidently true. The problem we are trying to solve is the cognitive reason for such a feeling. We postulate here that we accept Holmes' maxim as true because our adult brains are equipped with neural modules that naturally perform modal logical computations.
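One possible formalization of the maxim's modal content, written here for illustration and not necessarily in the paper's own notation: if a set of alternatives is exhaustive and all but one are impossible, the remaining one is necessary.

```latex
\Box\,(p_1 \lor p_2 \lor \dots \lor p_k)
\;\land\; \neg\Diamond p_2 \;\land\; \dots \;\land\; \neg\Diamond p_k
\;\rightarrow\; \Box\, p_1
```

This schema is valid in the minimal normal modal logic K: in every accessible world the disjunction holds while each $p_i$ ($i \geq 2$) fails, so $p_1$ holds there, however improbable it may be.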