Questions tagged [posterior]
In Bayesian statistics, the term 'posterior' refers to the probability distribution of a parameter conditioned on the observed data.
1,031 questions
1 vote · 0 answers · 18 views
How to correctly aggregate confidence intervals from a spline into intervals for a Cox model?
Currently I'm estimating HR splines using mgcv::gam::coxph. My spline is nonlinear and has one turn, so I'd like to divide the spline into two sections at the turn and get conditional estimates for ...
0 votes · 0 answers · 13 views
Handling Skewed Importance Sampling Weights for High-Dimensional Log-Likelihoods
I am performing importance sampling (IS) for a Bayesian inference problem with the following setup:
1. Data and Model
My data has $D = 1300$ dimensions.
The log-likelihood, $\log p(x \...
0 votes · 0 answers · 9 views
Extracting individual-level posterior class membership probabilities in multilevel LCA
I am conducting a multilevel latent class analysis using the R package multilevLCA.
I have fitted the model in multiple steps (i.e., determining the optimal number of classes as well as clusters). I now ...
0 votes · 0 answers · 13 views
Best practice to compute contrasts from Bayesian posterior with repeats
I have samples ($i$) from posterior distributions of a parameter ($\theta$) for two different conditions $a$ and $b$.
Samples of the posterior difference between the two conditions $\theta_{\delta i}$ ...
0 votes · 0 answers · 23 views
Rescaling a lognormal prior for a Bayesian fit
I am fitting a non-linear regression using Bayesian methods and I have a question that I thought was simple but I have now confused myself thinking about it.
Let us say we are fitting a non-linear ...
3 votes · 0 answers · 33 views
Gibbs sampler for hierarchical model
I am studying MCMC methods and I have the following model
$$
y_i \mid \lambda_i \overset{\text{iid}}{\sim} \text{Poisson}(\lambda_i) \\
\lambda_i \mid \alpha, \beta \overset{\text{iid}}{\sim} \text{...
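A minimal sketch, assuming the truncated line continues with a conjugate $\lambda_i \mid \alpha, \beta \overset{\text{iid}}{\sim} \text{Gamma}(\alpha, \beta)$ prior (rate parameterization): conjugacy then gives the Gibbs full conditional
$$\lambda_i \mid y_i, \alpha, \beta \sim \text{Gamma}(\alpha + y_i,\ \beta + 1),$$
since the Poisson likelihood contributes a factor $\lambda_i^{y_i} e^{-\lambda_i}$ to the Gamma kernel.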
2 votes · 1 answer · 45 views
Posterior estimation using VAE
Using normalizing flows, we can model a model's posterior $p(\theta|D)$ by feeding Gaussian noise $z$ to the NF (parametrized with $\phi$), using the output of the NF, $\theta$, as model parameters, and ...
0 votes · 0 answers · 44 views
MCMC samplers give noticeably different distributions: which is better?
Background
We have built two MCMC samplers to sample the posterior of a hierarchical Bayesian model with 4-20+ parameters. The two samplers use different approaches. For most datasets, they give ...
0 votes · 0 answers · 19 views
Finding the posterior of M/N given the posterior of M (Gamma distribution)
I'm trying to solve a question about the posterior distribution under scalar multiplication/division of a Gamma. I will give some context to the question here:
$Y_1,..., Y_n$ are modelled as IID Pois($\mu$) random ...
0 votes · 1 answer · 38 views
Impact on the posterior of changing a parameter's support
Let's say I'm doing Bayesian inference and I have a parameter $\alpha \in (0, 1)$. I'm doing MCMC with Metropolis Algorithm and for this I consider the transformation $\psi = \log(\alpha/(1 - \alpha))$...
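Whatever the rest of the setup, the change of variables itself is standard: writing $\alpha(\psi) = e^\psi / (1 + e^\psi)$, the target density on the $\psi$ scale picks up a Jacobian factor,
$$p(\psi) = p\big(\alpha(\psi)\big) \left|\frac{d\alpha}{d\psi}\right|, \qquad \frac{d\alpha}{d\psi} = \frac{e^\psi}{(1 + e^\psi)^2} = \alpha(1 - \alpha),$$
so a Metropolis sampler on $\psi$ must include the $\alpha(1-\alpha)$ term in its target.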
1 vote · 1 answer · 40 views
Derive the posterior multivariate normal distribution
I have a question when I was reading the book Latent Variable Models and Factor Analysis: A Unified Approach by Bartholomew, Knott and Moustaki. Here it is:
Suppose that $\mathbf{x}=(x_1, x_2, ..., ...
0 votes · 1 answer · 53 views
Estimating the Posterior Probability of Playing Coin 1
Suppose someone has two coins with probability $p_1$ and $p_2$ of getting heads with $p_2\le p_1$. The person is throwing the coins and may switch from playing coin 1 to coin 2 with probability $\...
0 votes · 0 answers · 32 views
Using inducing points for exact Gaussian process inference
I'm a bit muddled about inference for Gaussian processes using inducing points, in particular about the conditions under which this is exact inference rather than an approximation.
For a Gaussian process $f\...
1 vote · 0 answers · 17 views
How to interpret model fit in posterior predictive checks when two models both capture the observation within their 1-sigma intervals?
I have two models aimed at explaining a single observed measurement $x_{obs}$:
Simple Model with 26 parameters $f_1(\theta)$.
Complex Model with 31 parameters $f_2(\theta)$.
Both models are assumed ...
1 vote · 1 answer · 78 views
Which proposal function should be used in this particular case of the Metropolis-Hastings algorithm?
As part of my research, I would like to apply the Metropolis-Hastings in order to sample from some posterior distribution. More precisely, the data comes from a multivariate normal distribution in the ...
1 vote · 1 answer · 32 views
Propagating measurement uncertainty with posterior predictions as data in another model
I'm working on a modeling approach that incorporates estimates of measurement uncertainty, trying to use brms in R. I'm working from the example in chapter 14 of ...
0 votes · 0 answers · 44 views
Bayesian: Formally comparing prior and posterior distributions
Consider Bayesian inference in the regression model:
$$ \begin{align*} y &= \beta_0+\beta_1x_1+\beta_2x_2 + \varepsilon \\ \varepsilon &\sim
\mathcal{N}(0,\sigma^2) \end{align*}$$
Suppose we ...
5 votes · 1 answer · 166 views
Uniform prior and Poisson likelihood: what posterior distribution will be produced?
If I have a uniform prior over a fixed, specified, and finite range, and a Poisson likelihood, what posterior will be produced?
The likelihood has this form $$P(\pmb{X}|
\pmb{\...
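As a hedged sketch of the standard result: with a flat prior on a finite interval $[a, b]$ and an i.i.d. Poisson likelihood, the posterior is the likelihood renormalized over that interval,
$$p(\lambda \mid \pmb{X}) \propto \lambda^{\sum_i x_i} e^{-n\lambda}\, \mathbf{1}\{a \le \lambda \le b\},$$
i.e. a $\text{Gamma}\big(\sum_i x_i + 1,\ n\big)$ kernel truncated to the prior's support.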
0 votes · 0 answers · 72 views
How to use posterior predictions from one model (standard curve) as an outcome variable in another model?
I'm working with a dataset concerning the concentration of a large number of different chemical compounds in a mixture, using a Bayesian approach (with brms in R). The instrument that measures the ...
3 votes · 1 answer · 230 views
Basic question about deriving MAP estimator
Say we have a random process $X(t, u)$ parametrized by $t$ and $u$ that generates data $x$. We also have a prior on $u$, $p(u)$.
Am I correct in stating that the expression to find the maximum a ...
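For reference, the standard MAP expression under a prior $p(u)$ is
$$\hat{u}_{\text{MAP}} = \arg\max_u\ p(x \mid u)\, p(u) = \arg\max_u \big[\log p(x \mid u) + \log p(u)\big],$$
where the evidence $p(x)$ is dropped because it does not depend on $u$.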
1 vote · 0 answers · 40 views
How can predictive distributions be considered expectations?
I guess that the prior and posterior predictive distributions can be considered expectations of $p(y|\theta)$ (in the case of the prior predictive distribution) and $p(\widetilde{y}|\theta)$ (in the case of ...
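The identities in question are
$$p(y) = \int p(y \mid \theta)\, p(\theta)\, d\theta = \mathbb{E}_{p(\theta)}\big[p(y \mid \theta)\big], \qquad p(\widetilde{y} \mid y) = \mathbb{E}_{p(\theta \mid y)}\big[p(\widetilde{y} \mid \theta)\big],$$
i.e. the prior predictive is an expectation of the likelihood under the prior, and the posterior predictive is an expectation under the posterior.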
0 votes · 0 answers · 18 views
Two-step Gibbs sampling vs. block Gibbs sampling
While reading Bayesian-related technical articles, I see algorithms such as two-step Gibbs sampling and block Gibbs sampling ...
1 vote · 1 answer · 49 views
Known variance in conjugate normal
$$\text{Posterior mean} = \frac{1}{\frac{1}{\sigma_{0}^{2}} + \frac{n}{\sigma^{2}}}\left( \frac{\mu_{0}}{\sigma_{0}^{2}} + \frac{\sum_{i=1}^{n} x_i}{\sigma^2} \right)$$
Using this updating equation with known ...
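A minimal numerical sketch of this update, with hypothetical data and hyperparameters (the names below are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=50)  # hypothetical observations
mu0, sigma0 = 0.0, 10.0                      # prior mean and prior sd
sigma = 1.5                                  # known likelihood sd

# Posterior precision is the sum of the prior and data precisions.
post_prec = 1 / sigma0**2 + len(x) / sigma**2
# Precision-weighted combination of the prior mean and the data sum,
# exactly the updating equation quoted above.
post_mean = (mu0 / sigma0**2 + x.sum() / sigma**2) / post_prec
post_sd = np.sqrt(1 / post_prec)
print(post_mean, post_sd)
```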
1 vote · 0 answers · 19 views
Bayes factor for hypothesis
I am studying Bayesian hypothesis testing and I want to calculate the Bayes factor for
\begin{align*}
H_0: \lambda = 1 \quad \text{vs.} \quad H_1: \lambda > 2
\end{align*}
with $p(\...
0 votes · 0 answers · 30 views
How to obtain the likelihood $P(B \mid R)$ given the prior $P(R)$ and the posterior $P(R \mid B)$
I am working on a topic related to multiple-choice response. I would like to measure the efficiency of the information source (or a student’s information search) and I believe Bayesian statistics is ...
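Rearranging Bayes' theorem gives the likelihood from the named quantities, provided the marginal $P(B)$ is also available (or can be recovered by summing $P(R \mid B)P(B)$ over the alternatives):
$$P(B \mid R) = \frac{P(R \mid B)\, P(B)}{P(R)}.$$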
2 votes · 0 answers · 41 views
Likelihood from posterior [closed]
This question is strange and perhaps silly but it would be very useful for my research. Is there any method to find the likelihood given a prior distribution and its corresponding posterior ...
2 votes · 1 answer · 53 views
How to choose what to integrate out or what to condition on for marginal distributions?
I am trying to work out the Bayesian posteriors of $\theta$, $\tau$ and the $\varepsilon$ in the following model:
$$y(t) = \phi(t,\tau)\theta+v(t),$$
where $\{v(t)\}$ is an iid sequence of random ...
0 votes · 1 answer · 90 views
Normal approximation for posterior distribution
I am reading the example 4.3.3 of "The Bayesian Choice" by Christian P. Robert and I was wondering if it is possible to obtain a normal approximation in this case to estimate the posterior. ...
0 votes · 1 answer · 53 views
Interaction: posterior comparison in brms; difference between as_draws() and posterior_predict(). Is it correct to interpret posterior_predict() instead?
I have an interaction effect in my model, and I want to extract the posterior of each of my parameters in order to compare them and make inferences about them. I couldn't simply use the as_draws() ...
1 vote · 0 answers · 39 views
How to prove the posterior probability for the multivariate case (i.e., dimension $d\ge 2$)?
Suppose there are $k$ groups, $\pi_1, \pi_2, \cdots, \pi_k$, with the probability density function for group $\pi_i$ being $f_i(\boldsymbol{x})$, where $\boldsymbol x\in R^d$. The prior probability ...
1 vote · 0 answers · 23 views
How to leverage separable functions in MCMC sampling? [closed]
I'm considering the posterior of a parametric model via the Bayesian approach. More specifically, I have a parametric model $u(p_1,p_2, p_3) = u_1(p_1) \times u_2(p_2) \times u_3(p_3)$ and I want to ...
0 votes · 0 answers · 29 views
Overcoming posterior correlation for a model with random effects (for a Gibbs sampler)
I am trying to infer parameters for a model of case numbers of different infectious diseases in different locations over time. The model is
$$
\log \left(1 + y_{ijt}\right)\sim\mathsf{Normal}\left(\mu ...
5 votes · 1 answer · 59 views
Can we get probabilistic predictions evaluable by proper scoring rules from Bayesian inference without evaluating the marginal likelihood?
Let's say we have a vector of inputs, $X=[x_0,\dots, x_{n-1}]$, and a vector of outputs, $Y=[y_0, \dots, y_{n-1}]$.
We would like to predict the distribution of a new output, $\hat{y}$, given a new ...
1 vote · 0 answers · 33 views
Mutual Information decay
Consider $m$ channels indexed by $i$ with $1 \leq i \leq m$. The input alphabets are from the same finite set $\mathcal{X}$. Let $\pi$ denote a probability distribution on $\mathcal{X}$. Define the ...
1 vote · 1 answer · 77 views
Confidence interval over point estimate given regression parameters
In Bayesian analysis, the posterior distribution is often sampled when its PDF is intractable (which is often the case). If the samples are of length $n$, then every index in $\mathrm{range}(1,n)$ corresponds to a valid ...
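For the sampled-posterior case described here, a minimal sketch (with hypothetical draws) of turning samples into a point estimate and an equal-tailed interval:

```python
import numpy as np

# Hypothetical posterior draws standing in for the sampler output.
samples = np.random.default_rng(1).normal(0.8, 0.1, size=4000)

# Point estimate plus a 95% equal-tailed interval read off the
# empirical quantiles of the sampled posterior.
point = samples.mean()
lo, hi = np.percentile(samples, [2.5, 97.5])
print(point, (lo, hi))
```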
0 votes · 0 answers · 30 views
Computing Bayesian model averaged posteriors
The Bayesian model averaged posterior predictive distribution for new data $\tilde{y}$ given training data $y$, across a set of $M$ models $\mathcal{D} = \{D_{1}, ..., D_{M}\}$, is defined as:
\begin{...
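For orientation, the standard identity in the question's notation is
$$p(\tilde{y} \mid y) = \sum_{m=1}^{M} p(\tilde{y} \mid y, D_m)\, p(D_m \mid y),$$
a mixture of the per-model predictive distributions weighted by the posterior model probabilities.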
2 votes · 1 answer · 46 views
How to decompose the conditional posterior probability? [closed]
I am learning Bayesian inference. A problem I encounter a lot is that, when I need to calculate or simplify the posterior probability, I don't know where to begin given what I have.
For ...
1 vote · 0 answers · 31 views
How to derive conditional posterior predictive distribution from definition of posterior predictive distribution in bayesian regression?
In my situation, I have a set of data points:
$$ z_{0:n} = \{ (x_0, y_0), \dots, (x_{n-1}, y_{n-1}) \} $$
I am trying to figure out how to derive the fully expanded form for the conditional posterior ...
0 votes · 0 answers · 47 views
MCMC for correlated posterior
I'm sampling a (seemingly) highly correlated posterior distribution using MCMC (the DREAM algorithm).
My setting is that I have 7 parameters, where x1/x3 and x2/x4 are highly correlated, ...
0 votes · 0 answers · 14 views
Turning a list of costs into a categorical probability mass distribution
Background
Given a noisy dataset $D$, I have to solve a classification problem where the possible answers are $i\in\{1,\dots,N\}$. So far I can get pretty decent results with an algorithm that, based ...
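One common way to do this (an assumption about the intent, not necessarily the asker's algorithm) is a temperature-scaled softmax over the negated costs:

```python
import numpy as np
from scipy.special import softmax

# Hypothetical per-class costs; lower cost = more plausible class.
costs = np.array([3.2, 0.7, 1.9, 5.0])

# Negate so that small costs get large probabilities; the temperature
# controls how sharply the mass concentrates on the cheapest class.
temperature = 1.0
pmf = softmax(-costs / temperature)
print(pmf, pmf.sum())  # nonnegative weights summing to 1
```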
1 vote · 1 answer · 133 views
Simple example of Log-Sum-Exp trick for continuous case
I am trying to confirm my understanding of how to apply the Log-Sum-Exp trick to recover a posterior distribution from a log-posterior distribution. I want to consider a simple example from a model I ...
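A minimal sketch of the trick on a parameter grid (the Beta-like log-posterior below is a made-up stand-in for the asker's model):

```python
import numpy as np
from scipy.special import logsumexp

# Hypothetical unnormalized log-posterior evaluated on a grid.
theta = np.linspace(0.01, 0.99, 99)
log_post = 5 * np.log(theta) + 3 * np.log1p(-theta)  # Beta-like kernel

# Subtracting logsumexp before exponentiating avoids overflow/underflow;
# the result is the normalized posterior over the grid points.
post = np.exp(log_post - logsumexp(log_post))
print(post.sum())  # 1.0
```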
0 votes · 0 answers · 38 views
Recovering normalized posterior distribution from log-posterior
For a Bayesian estimation problem that I am working on, I update the log-posterior (many times, based on data) instead of the posterior itself using Bayes' rule. I find the following (rather ...
0 votes · 0 answers · 32 views
Understanding distribution of distributions
I was studying Bayesian plausibility but was stuck on some basic concepts. Bayesian plausibility condition is an equation that says $$\int_{\Delta(\Omega)}pd\mu(p) = P$$ Where $P$ is the prior ...
1 vote · 2 answers · 112 views
Interpretation of posterior predictive distributions
QUESTION UPDATE in light of the comments I have received so far.
The data, the example, and the results below are fictitious, as I am interested in the correct interpretation of these results.
Suppose I ...
0 votes · 0 answers · 70 views
Posterior distribution of the shape & rate parameters in a Poisson-Gamma mixture
Currently I'm struggling with the following question.
Suppose $x_i\ (i=1,2,\dots,n)$ follows a Poisson distribution:
$$p(x_i|\theta) = \frac{\theta^{x_i}e^{-\theta}}{x_i!}, \quad x_i\in\mathbb N,\...
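A hedged sketch, assuming the truncated setup continues with a conjugate $\theta \sim \text{Gamma}(a, b)$ prior (rate parameterization): the posterior is then
$$\theta \mid x_{1:n} \sim \text{Gamma}\!\left(a + \sum_{i=1}^n x_i,\ b + n\right).$$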
0 votes · 0 answers · 64 views
Predictive Posterior Distribution of a Bivariate Normal Distribution with Unknown Mean and Variance
I am working on a homework assignment, have been stuck on it for a while, and would like to seek some clarification.
Given a likelihood function $y_i|\theta,\sigma^2 \sim N(\theta,\sigma^2)$ for ...
2 votes · 1 answer · 69 views
MCMC seems very sensitive to the evidence
I am currently starting to study Bayesian ML, and specifically MCMC, in order to compute the posterior:
$$
P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)}
$$
Now, I see how the acceptance ratio makes sense ...
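The key point is that the evidence cancels: for a symmetric proposal, the Metropolis acceptance probability is
$$a(\theta' \mid \theta) = \min\left(1,\ \frac{P(D \mid \theta')\, P(\theta')}{P(D \mid \theta)\, P(\theta)}\right),$$
so $P(D)$ never has to be evaluated.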
4 votes · 1 answer · 79 views
How should you determine the probability returned by a flat uniform prior function?
I am currently doing an analysis that involves fitting a model to a 1D graph.
Following the example on the emcee documentation, I started with Maximum likelihood estimation and am now looking at using ...
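In the style of the emcee documentation's fitting example, a flat prior is usually coded as a log-prior that is constant inside the bounds and $-\infty$ outside (the parameter names and bounds below are hypothetical placeholders):

```python
import numpy as np

def log_prior(theta):
    """Flat (uniform) log-prior; hypothetical bounds for two parameters."""
    m, b = theta
    if -5.0 < m < 5.0 and 0.0 < b < 10.0:
        return 0.0      # any constant works: it cancels in the acceptance ratio
    return -np.inf      # zero prior probability outside the bounds

print(log_prior([1.0, 2.0]), log_prior([9.0, 2.0]))  # 0.0 -inf
```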
1 vote · 0 answers · 41 views
Using bootstrap for accurate posterior in Variational Bayes
A common well-known issue in Variational Bayes is the variance underestimation of the posterior. Some methods using "sandwich" variance have already been proposed but provide frequentist ...
0 votes · 0 answers · 9 views
Choosing Distortion Measures for Decision Rules with Logarithmic Posteriors
I've been delving into Bayesian decision theory and specifically looking at scenarios where we work with the logarithm of the posterior distribution (log-posterior). My understanding is that in such ...