2011
In this paper we propose a framework, called mixtures of truncated basis functions (MoTBFs), for representing general hybrid Bayesian networks. The proposed framework generalizes both the mixture of truncated exponentials (MTEs) framework and the mixture of polynomials (MoPs) framework.
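As an illustration of the framework described above: a one-piece MoTBF potential is a linear combination of basis functions, and choosing a polynomial or an exponential basis recovers the MoP and MTE special cases. A minimal Python sketch (function and variable names are illustrative, not taken from the paper):

```python
import math

def motbf_density(x, coeffs, basis):
    """Evaluate a one-piece MoTBF potential: a linear combination
    of basis functions psi_k evaluated at the point x."""
    return sum(c * psi(x) for c, psi in zip(coeffs, basis))

# Polynomial basis {1, x, x^2} -> a mixture-of-polynomials (MoP) piece.
poly_basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
# Exponential basis {1, e^x, e^-x} -> a mixture-of-truncated-exponentials (MTE) piece.
exp_basis = [lambda x: 1.0, math.exp, lambda x: math.exp(-x)]

# Example: the uniform density on [0, 1] is exactly representable
# in either basis with coefficients (1, 0, 0).
print(motbf_density(0.3, [1.0, 0.0, 0.0], poly_basis))  # 1.0
```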
International Journal of Approximate Reasoning, 2006
In this paper we study the problem of exact inference in hybrid Bayesian networks using mixtures of truncated basis functions (MoTBFs). We propose a structure for handling probability potentials called Sum-Product factorized potentials, and show how these potentials facilitate efficient inference based on i) properties of the MoTBFs and ii) ideas similar to the ones underlying Lazy propagation (postponing operations and keeping factorized representations of the potentials). We report on preliminary experiments demonstrating the efficiency of the proposed method in comparison with existing algorithms.
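The "postponing operations and keeping factorized representations" idea mentioned above can be sketched in a few lines: a potential is stored as a list of factors, and multiplication is deferred until the potential is actually evaluated. This is a toy illustration of the principle, not the paper's Sum-Product factorized potentials:

```python
def lazy_product(factors):
    """Represent a potential as a list of factors and postpone their
    multiplication until evaluation time, in the spirit of Lazy
    propagation's factorized representations."""
    def potential(x):
        result = 1.0
        for f in factors:
            result *= f(x)  # combination happens only here, on demand
        return result
    return potential

# Two factors kept separate until the potential is evaluated.
phi = lazy_product([lambda x: 2.0, lambda x: x + 1.0])
print(phi(2.0))  # 6.0
```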
Test, 2006
The MTE (mixture of truncated exponentials) model makes it possible to handle Bayesian networks containing discrete and continuous variables simultaneously. One of the features of this model is that standard propagation algorithms can be applied. In this paper, we study the problem of estimating these models from data. We propose an iterative algorithm based on least squares approximation. The performance of the algorithm is tested with both artificial and real data.
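A simplified sketch of the least-squares idea: once the exponents of an MTE piece are fixed, its linear coefficients can be fitted to target density values by ordinary least squares. The paper's algorithm iterates and also adjusts the exponential terms; this sketch (with hypothetical names) keeps them fixed:

```python
import numpy as np

def fit_mte_coefficients(xs, target, exponents):
    """Fit the linear coefficients of an MTE piece
        f(x) = sum_k c_k * exp(b_k * x)
    to target density values by ordinary least squares, for a
    fixed set of exponents b_k."""
    design = np.exp(np.outer(xs, exponents))      # shape (n_points, n_terms)
    coeffs, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coeffs

# Toy example: recover f(x) = 2 - e^{-x} on [0, 1] exactly.
xs = np.linspace(0.0, 1.0, 50)
target = 2.0 - np.exp(-xs)
c = fit_mte_coefficients(xs, target, exponents=[0.0, -1.0])
print(np.round(c, 6))  # approximately [ 2. -1.]
```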
International Journal of Approximate Reasoning, 2011
Statistics and Computing, 2006
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization for approximating probability density functions (PDFs). This paper presents MTE potentials that approximate standard PDFs, and applications of these potentials for solving inference problems in hybrid Bayesian networks.
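The abstract concerns MTE approximations of standard PDFs. As a simpler stand-in, in the style of the sibling mixture-of-polynomials framework, here is a single polynomial piece approximating the standard normal density on a bounded interval (a sketch via a Taylor expansion, not the coefficients from the paper):

```python
import math

def mop_normal_piece(x, terms=5):
    """Polynomial (MoP-style) approximation of the standard normal
    density on a bounded interval, via the Taylor series of
    exp(-x^2/2): sum_k (-x^2/2)^k / k!."""
    z = -x * x / 2.0
    series = sum(z ** k / math.factorial(k) for k in range(terms))
    return series / math.sqrt(2.0 * math.pi)

# Compare with the exact density at a few points in [-1, 1].
exact = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
err = max(abs(mop_normal_piece(x) - exact(x)) for x in [-1.0, -0.5, 0.0, 0.5, 1.0])
print(err < 1e-3)  # True: five terms already suffice on [-1, 1]
```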
International Journal of Approximate Reasoning, 2006
In this paper we introduce an algorithm for learning hybrid Bayesian networks from data. The result of the algorithm is a network where the conditional distribution for each variable is a mixture of truncated exponentials (MTE), so that no restrictions on the network topology are imposed. The structure of the network is obtained by searching over the space of candidate networks using optimisation methods. The conditional densities are estimated by means of Gaussian kernel densities that afterwards are approximated by MTEs, so that the resulting network is appropriate for using standard algorithms for probabilistic reasoning. The behaviour of the proposed algorithm is tested using a set of real-world and artificially generated databases.
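The learning pipeline described above first estimates conditional densities with Gaussian kernels and only afterwards approximates them by MTEs. The kernel-density step can be sketched as follows (illustrative code, not taken from the paper):

```python
import math

def gaussian_kde(data, h):
    """Gaussian kernel density estimator with bandwidth h: the
    intermediate density estimate that, per the abstract, is
    subsequently approximated by an MTE."""
    n = len(data)
    norm = n * h * math.sqrt(2.0 * math.pi)
    def density(x):
        return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / norm
    return density

f = gaussian_kde([0.0], h=1.0)
print(round(f(0.0), 4))  # 0.3989, the standard normal peak
```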
2012
In this paper we study the problem of inference in hybrid Bayesian networks containing deterministic conditionals. The difficulties in handling deterministic conditionals for continuous variables can make inference intractable even for small networks. We describe the use of re-approximations to reduce the complexity of the potentials that arise in the intermediate steps of the inference process. We show how the idea behind re-approximations can be applied to the frameworks of mixtures of polynomials and mixtures of truncated exponentials. Finally, we illustrate our approach by solving a small stochastic PERT network modeled as a hybrid Bayesian network.
2011
In this paper we discuss some practical issues that arise in solving hybrid Bayesian networks that include deterministic conditionals for continuous variables. We show how exact inference can become intractable even for small networks, due to the difficulty in handling deterministic conditionals (for continuous variables). We propose some strategies for carrying out the inference task using mixtures of polynomials and mixtures of truncated exponentials. Mixtures of polynomials can be defined on hypercubes or hyper-rhombuses. We compare these two methods. A key strategy is to re-approximate large potentials with potentials consisting of fewer pieces and lower degrees/number of terms. We discuss several methods for re-approximating potentials. We illustrate our methods in a practical application consisting of solving a stochastic PERT network.
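The "fewer pieces" re-approximation strategy mentioned above can be illustrated on a piecewise-constant potential by merging runs of adjacent pieces into their width-weighted average. This is a crude stand-in for the papers' refitting of lower-degree MoPs/MTEs; all names are illustrative:

```python
def reapproximate(pieces, factor=2):
    """Re-approximate a piecewise-constant potential by merging each
    run of `factor` adjacent pieces into one piece carrying their
    width-weighted average value.  Each piece is (left, right, value)."""
    merged = []
    for i in range(0, len(pieces), factor):
        run = pieces[i:i + factor]
        left, right = run[0][0], run[-1][1]
        avg = sum((r - l) * v for l, r, v in run) / (right - left)
        merged.append((left, right, avg))
    return merged

fine = [(0, 1, 0.1), (1, 2, 0.3), (2, 3, 0.4), (3, 4, 0.2)]
print(reapproximate(fine))  # two pieces, covering [0, 2] and [2, 4]
```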
International Journal of Intelligent Systems, 2014
Cooper & Moral (eds.), 1998
We describe a heuristic method for learning mixtures of Bayesian networks (MBNs) from possibly incomplete data. The class of models considered consists of mixtures in which each component is a Bayesian network encoding a conditional Gaussian distribution over a fixed set of ...
2012
Bayesian networks are graphical models widely used for probabilistic inference and reasoning under uncertainty. In many applications the presence of both discrete and continuous variables in the model is inevitable, which has led to a large body of research on hybrid Bayesian networks in recent years. Nevertheless, one of the challenges in inference in hybrid BNs is that different types of variables have different forms of conditional probability density function. In this paper, we propose an approach to constructing a Unified Conditional Probability Density function (UCPD) that can represent the probability distribution of both types of variables. No restriction is imposed on the topology of the network, so the unified CPD is constructed for all pairs of nodes. We use mixtures of Gaussians in constructing the UCPD, and we use the Kullback-Leibler divergence to measure the accuracy of our estimations.
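The Kullback-Leibler divergence used above as an accuracy measure has a closed form between two univariate Gaussians (for Gaussian mixtures there is no closed form, and one typically resorts to sampling or bounds). A minimal sketch of the closed-form case:

```python
import math

def kl_gaussians(mu1, s1, mu2, s2):
    """Closed-form Kullback-Leibler divergence KL(p || q) between two
    univariate Gaussians p = N(mu1, s1^2) and q = N(mu2, s2^2):
        ln(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2 s2^2) - 1/2."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * s2 ** 2) - 0.5

print(kl_gaussians(0.0, 1.0, 0.0, 1.0))  # 0.0: identical distributions
print(kl_gaussians(0.0, 1.0, 1.0, 1.0))  # 0.5: unit mean shift
```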