Consider $z_1,\ldots,z_n$ as a sample of observations of $Z$, and let $y_1,\ldots,y_n$ be the missing data,
where $Z\sim N(\mu,\sigma^2+1)$ and $Y\sim N(0,1)$.
i) Find the expressions for $\mu_{p+1}$ and $\sigma^2_{p+1}$ given the current values $\mu_p$ and $\sigma^2_p$.
ii) Implement the EM algorithm in R to estimate $\mu$ and $\sigma^2$. Use as data a sample generated with `rnorm`.
First I found that $Y\mid Z=z\sim N(\frac{z-\mu}{\sigma^2+1},\frac{\sigma^2}{\sigma^2+1})$. Then I found the likelihood
$$L\propto \left(\frac{\sigma^2+1}{\sigma^2}\right)^{\frac{n}{2}}e^{-\frac{1}{2}\frac{\sigma^2+1}{\sigma^2}\sum_i\left(y_i-\frac{z_i-\mu}{\sigma^2+1}\right)^2}$$ $$\log L=\frac{n}{2}\log(\sigma^2+1)-\frac{n}{2}\log(\sigma^2)-\frac{1}{2}\frac{\sigma^2+1}{\sigma^2}\sum_i \left(y_i-\frac{z_i-\mu}{\sigma^2+1}\right)^2$$
So taking the derivative $$\frac{\partial}{\partial\mu}\log L=-\frac{\sigma^2+1}{\sigma^2}\frac{1}{\sigma^2+1}\sum_i \left(y_i-\frac{z_i-\mu}{\sigma^2+1}\right)=-\frac{1}{\sigma^2}\left(\sum_i y_i-\frac{n\bar{z}}{\sigma^2+1}+\frac{n\mu}{\sigma^2+1}\right)$$ and setting this to zero I get $$\hat{\mu}=-(\sigma^2+1)\overline{y}+\bar{z}$$
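As a sanity check on the conditional distribution used above: assuming the model implicit in the problem is $Z = X + Y$ with $X\sim N(\mu,\sigma^2)$ independent of $Y\sim N(0,1)$ (this additive structure is my reading of the setup, not stated explicitly), $(Y,Z)$ is bivariate normal with $\operatorname{Cov}(Y,Z)=\operatorname{Var}(Y)=1$, so

$$E[Y\mid Z=z]=0+\frac{\operatorname{Cov}(Y,Z)}{\operatorname{Var}(Z)}(z-\mu)=\frac{z-\mu}{\sigma^2+1},\qquad \operatorname{Var}(Y\mid Z=z)=1-\frac{1}{\sigma^2+1}=\frac{\sigma^2}{\sigma^2+1},$$

which agrees with the distribution $Y\mid Z=z\sim N\!\left(\frac{z-\mu}{\sigma^2+1},\frac{\sigma^2}{\sigma^2+1}\right)$ above.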
Will this be my $\mu_{p+1}$? I am studying from this book by Casella, which presents:
The EM Algorithm
1. Compute $Q(\theta\mid\hat{\theta}_{(m)},x)=E_{\hat{\theta}_{(m)}}[\log L^c(\theta\mid x,z)]$, where the expectation is with respect to $k(z\mid\hat{\theta}_{(m)},x)=\frac{f(x,z\mid\theta)}{g(x\mid\theta)}$.
2. Maximize $Q(\theta\mid\hat{\theta}_{(m)},x)$ in $\theta$ and take $\hat{\theta}_{(m+1)}=\operatorname{argmax}_\theta Q(\theta\mid \hat{\theta}_{(m)},x)$.
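Here is a sketch of how these two steps specialize to this model, assuming (my assumption, inferred from the form of the conditional above) that the complete data are $(z_i,y_i)$ with $Z=X+Y$, $X\sim N(\mu,\sigma^2)$ independent of $Y\sim N(0,1)$. The complete-data log-likelihood is

$$\log L^c(\mu,\sigma^2\mid z,y)=c-\frac{n}{2}\log\sigma^2-\frac{1}{2\sigma^2}\sum_{i=1}^n(z_i-y_i-\mu)^2-\frac{1}{2}\sum_{i=1}^n y_i^2.$$

Taking the expectation under $k(y\mid z,\mu_p,\sigma^2_p)$, with $\hat{y}_i=\frac{z_i-\mu_p}{\sigma_p^2+1}$ and $v_p=\frac{\sigma_p^2}{\sigma_p^2+1}$, gives

$$Q(\theta\mid\theta_p)=c'-\frac{n}{2}\log\sigma^2-\frac{1}{2\sigma^2}\sum_{i=1}^n\left[(z_i-\hat{y}_i-\mu)^2+v_p\right],$$

and maximizing in $\mu$ and $\sigma^2$ yields closed-form updates

$$\mu_{p+1}=\frac{1}{n}\sum_{i=1}^n(z_i-\hat{y}_i)=\frac{\sigma_p^2\,\bar{z}+\mu_p}{\sigma_p^2+1},\qquad \sigma^2_{p+1}=\frac{1}{n}\sum_{i=1}^n(z_i-\hat{y}_i-\mu_{p+1})^2+v_p,$$

which do depend on the previous iterate $(\mu_p,\sigma_p^2)$, and the M step requires no simulation.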
EDIT: I think I may have made a mistake, because if $\hat{\mu}$ is $\mu_{p+1}$, it does not depend on the values from the previous iteration.
EDIT2: I think there is no way to carry out the M step without simulation, and I am confused by this topic. Would this be a case of a mixture of distributions, since $Y$ and $Z$ have different distributions?
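For part ii), here is a minimal R sketch, assuming (as above, my assumption) the complete-data model $Z=X+Y$ with $X\sim N(\mu,\sigma^2)$ independent of $Y\sim N(0,1)$, for which the M step has a closed form. The function name `em_normal` and the starting values are my own choices:

```r
set.seed(42)
mu_true <- 2; sigma2_true <- 3                 # hypothetical true values
z <- rnorm(5000, mean = mu_true, sd = sqrt(sigma2_true + 1))

em_normal <- function(z, mu = 0, sigma2 = 1, tol = 1e-10, maxit = 10000) {
  for (it in seq_len(maxit)) {
    yhat <- (z - mu) / (sigma2 + 1)            # E-step: E[Y_i | z_i, theta_p]
    v    <- sigma2 / (sigma2 + 1)              #         Var(Y_i | z_i, theta_p)
    mu_new     <- mean(z - yhat)               # M-step: closed-form updates
    sigma2_new <- mean((z - yhat - mu_new)^2) + v
    done   <- abs(mu_new - mu) + abs(sigma2_new - sigma2) < tol
    mu     <- mu_new
    sigma2 <- sigma2_new
    if (done) break
  }
  c(mu = mu, sigma2 = sigma2)
}

em_normal(z)
```

At its fixed point the iteration recovers the MLE under this model, $\hat{\mu}=\bar{z}$ and $\hat{\sigma}^2=\frac{1}{n}\sum_i(z_i-\bar{z})^2-1$ (when the latter is positive), so the estimates should be close to the hypothetical true values for a large sample.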