Merging the Transportation and Communications Revolutions: Abstracts for the ITS America Seventh Annual Meeting and Exposition, ITS America, 1997
The ability to perform Automatic Vehicle Location (AVL) is a requirement of a variety of applications in Intelligent Transportation Systems such as vehicle guidance, dynamic route optimization, scheduling, bus stop annunciation, security, and fleet tracking. The authors describe a new approach to AVL in which a simple sensor matches visual data against a stored database to determine position, in combination with standard dead reckoning. Their system is completely self-contained in that it requires no infrastructure external to the vehicle, such as beacons or satellites. It can provide better accuracy than the satellite-based Global Positioning System (GPS), while functioning well in the dense urban environments where GPS is prone to failure. In cases where the vehicle is constrained to a local area (buses, trains, police cars, ...) it could serve as an inexpensive stand-alone AVL system. For free-ranging vehicles (intercity trucks, private autos, ...) it could serve as a supplement to GPS for travel in urban areas.
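A minimal sketch of the flavor of system this abstract describes, assuming a stored map of (route position, 1-D visual signature) pairs and a simple blend of the visual fix with dead reckoning. The function names, the correlation score, and every parameter here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def best_match(signature, database, estimate, search_window=200.0):
    """Return the stored position whose 1-D signature best matches the observed
    one, restricted to database entries near the dead-reckoned estimate.
    `database` is a list of (route_position_m, signature_array) pairs."""
    candidates = [(pos, sig) for pos, sig in database
                  if abs(pos - estimate) <= search_window]
    if not candidates:
        return None

    def score(sig):
        # Normalized cross-correlation as a simple similarity measure.
        a = (signature - signature.mean()) / (signature.std() + 1e-9)
        b = (sig - sig.mean()) / (sig.std() + 1e-9)
        return float(np.dot(a, b)) / len(a)

    return max(candidates, key=lambda c: score(c[1]))[0]

def update_position(estimate, odometer_delta, signature=None, database=None,
                    blend=0.7):
    """Dead-reckon forward, then blend in a visual fix if one is available."""
    estimate += odometer_delta                  # dead reckoning step
    if signature is not None and database:
        fix = best_match(signature, database, estimate)
        if fix is not None:
            estimate = blend * fix + (1.0 - blend) * estimate
    return estimate
```

One plausible reading of the approach is that restricting the search to a window around the dead-reckoned estimate is what keeps a low-bandwidth 1-D signature discriminative enough to yield a useful fix.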
Proceedings of Conference on Intelligent Transportation Systems, 1997
The ability to perform automatic vehicle location (AVL) is a requirement for a variety of applications in intelligent transportation systems such as vehicle guidance, dynamic route optimization, scheduling, bus stop annunciation, security, and fleet tracking. We describe a new approach to AVL in which data from a simple low-bandwidth sensor producing 1-dimensional visual data is matched against a stored database to determine position, in combination with standard dead reckoning. Our system is completely self-contained in that it requires no infrastructure external to the vehicle, such as beacons or satellites.
Wavelet decomposition of models in an over-parameterized Earth and L1-norm minimization in wavelet space is a promising strategy for dealing with the very heterogeneous data coverage in the Earth without sacrificing detail in the solution where it is resolved (see Loris et al., abstract this session). However, L1-norm minimizations are nonlinear and pose problems of convergence speed when applied to large data sets. In an effort to speed up computations we investigate the application of the nullspace shuttle (Deal and Nolet, GJI 1996). The nullspace shuttle is a filter that adds components from the nullspace to the minimum-norm solution so as to have the model satisfy additional conditions not imposed by the data. In our case, the nullspace shuttle projects the model onto a truncated basis of wavelets. The convergence of this strategy is unproven, in contrast to algorithms using Landweber iteration or one of its variants, but initial computations using a very large database give reason for optimism. We invert 430,554 P delay times measured by cross-correlation in different frequency windows. The data are dominated by observations with USArray, leading to a major discrepancy between the resolution beneath North America and that in the rest of the world. This is a subset of the data set inverted by Sigloch et al. (Nature Geosci., 2008), excluding only a small number of ISC delays at short distance and all amplitude data. The model is a cubed-Earth model with 3,637,248 voxels spanning mantle and crust, with a resolution everywhere better than 70 km, to which 1912 event corrections are added. In each iteration we determine the optimal solution by a least-squares inversion with minimal damping, after which we regularize the model in wavelet space. We then compute the residual data vector (after an intermediate scaling step) and solve for a model correction until a satisfactory chi-square fit for the truncated model is obtained. We present our final results on convergence as well as a comparison with more classical smoothed solutions.
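The iteration sketched below loosely follows the steps named in the abstract -- a lightly damped least-squares solve, a truncation of the model in wavelet space, and a re-solve against the residual data until the chi-square fit is acceptable -- on a toy 1-D problem. It is not the nullspace shuttle itself; the wavelet choice, truncation rule, and all sizes are assumptions.

```python
import numpy as np
import pywt  # PyWavelets, used here only for a 1-D toy of the wavelet-space truncation

def truncate_in_wavelet_space(m, keep_fraction=0.1, wavelet="db4"):
    """Keep only the largest wavelet coefficients of the model vector m."""
    coeffs = pywt.wavedec(m, wavelet)
    flat = np.concatenate(coeffs)
    cutoff = np.quantile(np.abs(flat), 1.0 - keep_fraction)
    coeffs = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]
    return pywt.waverec(coeffs, wavelet)[: len(m)]

def iterate_truncated_inversion(G, d, sigma, n_iter=20, damping=1e-3,
                                target_chi2=1.0):
    """Toy loop: damped least-squares correction, wavelet truncation,
    re-solve against the residual until reduced chi-square <= target."""
    n = G.shape[1]
    m = np.zeros(n)
    for _ in range(n_iter):
        r = d - G @ m                                   # residual data vector
        A = G.T @ G + damping * np.eye(n)               # lightly damped normal equations
        m = m + np.linalg.solve(A, G.T @ r)
        m = truncate_in_wavelet_space(m)                # regularize in wavelet space
        chi2 = np.mean(((d - G @ m) / sigma) ** 2)      # reduced chi-square fit
        if chi2 <= target_chi2:
            break
    return m
```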
We examine a two-person game we call Will-Testing in which the strategy space for both players is a real number. It has no equilibrium. When an infinitely large set of players plays this in all possible pairings, there is an equilibrium for the distribution of strategies which requires all players to use different strategies. We conjecture this solution could underlie some phenomena observed in animals.
Proceedings of the National Academy of Sciences, 2010
We report on human-subject experiments on the problems of coloring (a social differentiation task) and consensus (a social agreement task) in a networked setting. Both tasks can be viewed as coordination games, and despite their cognitive similarity, we find that within a parameterized family of social networks, network structure elicits opposing behavioral effects in the two problems, with increased long-distance connectivity making consensus easier for subjects and coloring harder. We investigate the influence that subjects have on their network neighbors and the collective outcome, and find that it varies considerably, beyond what can be explained by network position alone. We also find strong correlations between influence and other features of individual subject behavior. In contrast to much of the recent research in network science, which often emphasizes network topology out of the context of any specific problem and places primacy on network position, our findings highlight ...
Proceedings of the 13th ACM Conference on Electronic Commerce - EC '12, 2012
We report on an extensive series of behavioral experiments in which 36 human subjects collectively build a communication network over which they must solve a competitive coordination task for monetary compensation. There is a cost for creating network links, thus creating a tension between link expenditures and collective and individual incentives. Our most striking finding is the poor performance of the subjects, especially compared to our long series of prior experiments. We demonstrate that the subjects built difficult networks for the coordination task, and compare the structural properties of the built networks to standard generative models of social networks. We also provide extensive analysis of the individual and collective behavior of the subjects, including free riding and factors influencing edge purchasing decisions.
We report on a series of behavioral experiments in social networks in which human subjects continuously choose to play either a dominant role (called a King) or a submissive one (called a Pawn). Kings receive a higher payoff rate, but only if all their network neighbors are Pawns, and thus the maximum social welfare states correspond to maximum independent sets. We document that fairness is of vital importance in driving interactions between players. First, we find that payoff disparities between network neighbors give rise to conflict, and the specifics depend on the network topology. However, allowing Kings to offer "tips" or side payments to their neighbors substantially reduces conflict, and consistently increases social welfare. Finally, we observe that tip reductions lead to increased conflict. We describe these and a broad set of related findings.
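As a concrete illustration of the payoff structure described above, the sketch below scores one round of the King/Pawn game on an arbitrary graph and checks whether the chosen Kings form an independent set. The specific payoff rates, the zero payoff for a blocked King, and the tip bookkeeping are assumptions; only the rule that a King earns the higher rate when every neighbor is a Pawn comes from the abstract.

```python
import networkx as nx

KING_RATE, PAWN_RATE = 2.0, 1.0   # illustrative rates, not the experiment's values

def round_payoffs(G, roles, tips=None):
    """roles: node -> 'K' or 'P'; tips: (giver, receiver) -> side payment amount."""
    tips = tips or {}
    pay = {}
    for v in G.nodes:
        if roles[v] == "K":
            neighbors_all_pawns = all(roles[u] == "P" for u in G.neighbors(v))
            pay[v] = KING_RATE if neighbors_all_pawns else 0.0
        else:
            pay[v] = PAWN_RATE
    for (giver, receiver), amount in tips.items():
        pay[giver] -= amount          # King pays the tip
        pay[receiver] += amount       # neighbor receives it
    return pay

def kings_form_independent_set(G, roles):
    """Maximum social welfare corresponds to the Kings forming a maximum
    independent set; this checks the independence condition."""
    kings = {v for v, r in roles.items() if r == "K"}
    return not any(u in kings and v in kings for u, v in G.edges)
```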
A generic NP-complete graph problem is described. The calculation of a certain predicate on the graph is shown to be both necessary and sufficient to solve the problem, and hence the calculation must be embedded in every algorithm solving NP problems. This observation gives rise to a metric on the difficulty of solving an instance of the problem. There appears to be an interesting phase transition in this metric when the graphs are generated at random in a "2-dimensional" extension. The metric is sensitive to two parameters governing the way graphs are generated: p, the density of edges in the graph, and K, related to the number of points in the graph. The metric seems to be finite in part of the (p,K)-space and infinite in the rest. If true, this phenomenon would demonstrate that NP-complete problems are truly monolithic and can easily exhibit strong intrinsic coupling of their variables throughout the entire instance.
The complexity of learning in shallow 1-Dimensional neural networks has been shown elsewhere to be linear in the size of the network. However, when the network has a huge number of units (as cortex has) even linear time might be unacceptable. Furthermore, the algorithm that was given to achieve this time was based on a single serial processor and was biologically implausible. In this work we consider the more natural parallel model of processing and demonstrate an expected-time complexity that is constant (i.e. independent of the size of the network). This holds even when internode communication channels are short and local, thus adhering to more biological and VLSI constraints.
In a multi-layered neural network, any one of the hidden layers can be viewed as computing a distributed representation of the input. Several "encoder" experiments have shown that when the representation space is small it can be fully used. But computing with such a representation requires completely dependable nodes. In the case where the hidden nodes are noisy and unreliable, we find that error-correcting schemes emerge simply by using noisy units during training; random errors injected during backpropagation result in spreading representations apart. Average and minimum distances increase with misfire probability, as predicted by coding-theoretic considerations. Furthermore, the effect of this noise is to protect the machine against permanent node failure, thereby potentially extending the useful lifetime of the machine.
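A toy version of the mechanism, assuming an 8-input encoder task with sigmoid hidden units that misfire (flip their activation) with some probability during training; the architecture, misfire model, and hyperparameters are all illustrative. The final lines measure how far apart the binarized hidden codes end up, which is the quantity the abstract says the noise spreads.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_in, n_hid = 8, 5                    # encoder with more hidden units than strictly needed
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_in))
X = np.eye(n_in)                      # classic one-hot "encoder" task
p_misfire, lr = 0.1, 0.5

for epoch in range(5000):
    h = sigmoid(X @ W1)
    # Inject misfires: each hidden unit's activation is flipped with probability p.
    flip = rng.random(h.shape) < p_misfire
    h_noisy = np.where(flip, 1.0 - h, h)
    y = sigmoid(h_noisy @ W2)
    # Backpropagation through the noisy forward pass (squared error loss).
    dy = (y - X) * y * (1 - y)
    dh = (dy @ W2.T) * h_noisy * (1 - h_noisy)
    W2 -= lr * h_noisy.T @ dy
    W1 -= lr * X.T @ dh

codes = sigmoid(X @ W1) > 0.5         # binarized hidden representations
dists = [np.sum(a != b) for i, a in enumerate(codes) for b in codes[i + 1:]]
print("min / mean Hamming distance between codes:",
      min(dists), sum(dists) / len(dists))
```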
Finite-frequency tomographic methods find their origin in the recognition that seismic waves are sensitive to the Earth's structure not only on but also in a neighborhood of the ray connecting source and receiver. The sensitivity kernels are therefore nonzero within some positive distance from this ray. Real-life tomographic applications often need to employ a coarse model parameterization to reduce the number of model parameters and make the inversion practical from a computational point of view. This coarse parameterization, however, substantially reduces the benefit in resolution of finite-frequency tomography when compared to classical tomographic methods; standard coarse parameterization effectively turns the finite-frequency sensitivity kernels into 'fat' rays. To overcome this we are developing global-scale finite-frequency tomography in the wavelet domain, where the sparseness of both the sensitivity kernel and the model can be exploited in carrying out the inversion. We work on the cubed sphere to allow us to use wavelet transforms in Cartesian coordinates. This cubed sphere is built through a one-to-one mapping of Cartesian coordinates on each face of the cube to the corresponding "faces of the sphere". At the edges of each of the faces of the cube, the mapping is singular; this induces artificial singularities in the model and kernel, which in the wavelet domain would show up as large coefficients. We avoid these artificially large wavelet coefficients by using domain-adapted wavelets based on the construction of wavelets on the interval. The inversion is based on an l1-norm minimization procedure. We will present some preliminary examples.
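For the cubed-sphere ingredient mentioned above, a common construction maps Cartesian coordinates on each cube face onto the sphere gnomonically; the sketch below shows that mapping. The authors' exact parameterization may differ, so this is illustrative only.

```python
import numpy as np

# Rotations taking the reference face (+z) to each of the six cube faces.
FACES = {
    "+z": np.eye(3),
    "-z": np.diag([1.0, -1.0, -1.0]),
    "+x": np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]], float),
    "-x": np.array([[0, 0, -1], [0, 1, 0], [1, 0, 0]], float),
    "+y": np.array([[1, 0, 0], [0, 0, 1], [0, -1, 0]], float),
    "-y": np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], float),
}

def face_to_sphere(xi, eta, face="+z"):
    """Map face coordinates (xi, eta) in [-1, 1]^2 to a unit-sphere point
    via gnomonic projection of the cube face onto the sphere."""
    p = np.array([xi, eta, 1.0])
    p /= np.linalg.norm(p)            # project the cube-face point onto the sphere
    return FACES[face] @ p
```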
Proceedings of a Workshop on Computational Learning Theory and Natural Learning Systems: Constraints and Prospects, Aug 25, 1994
We study the problem of when to stop learning a class of feedforward networks -- networks with linear output neurons and fixed input weights -- when they are trained with a gradient descent algorithm on a finite number of examples. Under general regularity conditions, it is shown that there are in general three distinct phases in the generalization performance in the learning process, and ...
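A small illustration of the setting, assuming fixed random input weights feeding tanh features, a linear output trained by gradient descent on mean squared error, and a held-out set used to track generalization and pick a stopping point. Data, sizes, and the stopping rule are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_val, d, n_hidden = 40, 200, 10, 30

W_in = rng.normal(size=(d, n_hidden))          # fixed input weights
def features(X):
    return np.tanh(X @ W_in)                   # fixed nonlinear features

w_true = rng.normal(size=d)
def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.5 * rng.normal(size=n)  # noisy targets
    return X, y

Xtr, ytr = make_data(n_train)
Xva, yva = make_data(n_val)
Htr, Hva = features(Xtr), features(Xva)

w, lr = np.zeros(n_hidden), 0.01
best_val, best_w = np.inf, w.copy()
for step in range(20000):
    grad = Htr.T @ (Htr @ w - ytr) / n_train   # gradient of training MSE
    w -= lr * grad
    val = np.mean((Hva @ w - yva) ** 2)        # generalization proxy
    if val < best_val:                          # remember the best stopping point
        best_val, best_w = val, w.copy()
print("validation MSE at best stopping point:", best_val)
```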
The complexity of learning in shallow 1-Dimensional neural networks has been shown elsewhere to be linear in the size of the network. However, when the network has a huge number of units (as cortex has) even linear time might be unacceptable. Furthermore, the algorithm that was given to achieve this time was based on a single serial processor and was biologically implausible. In this work we consider the more natural parallel model of processing and demonstrate an expected-time complexity that is constant (i.e. independent of the size of the network). This holds even when inter-node communication channels are short and local, thus adhering to more biological and VLSI constraints.
Advances in Neural Information Processing Systems 5 (NIPS Conference), 1992
In a multi-layered neural network, any one of the hidden layers can be viewed as computing a distributed representation of the input. Several "encoder" experiments have shown that when the representation space is small it can be fully used. But computing with such a representation requires completely dependable nodes. In the case where the hidden nodes are noisy and unreliable, we find that error-correcting schemes emerge simply by using noisy units during training; random errors injected during backpropagation result in spreading representations apart. Average and minimum distances increase with misfire probability, as predicted by coding-theoretic considerations. Furthermore, the effect of this noise is to protect the machine against permanent node failure, thereby potentially extending the useful lifetime of the machine.
Global seismic wavespeed models are routinely parameterized in terms of spherical harmonics, networks of tetrahedral nodes, rectangular voxels, or spherical splines. Up to now, Earth model parametrizations by wavelets on the three-dimensional ball remain uncommon. Here we propose such a procedure with the following three goals in mind: (1) The multiresolution character of a wavelet basis allows for the models ...