PixelCNN++ constructs a model distribution $p(x)$ over images $x\in\mathbb{R}^{n\times n}$ as a product of conditional distributions over pixels
$$p(x)=p(x_1,...,x_{n^2})=\prod_{i=1}^{n^2} p(x_i| x_1,...,x_{i-1})$$
In particular, we choose to model $p(x_i| x_1,...,x_{i-1})$ as a logistic distribution $L(\mu_i, s_i)$, where $(\mu_i, s_i) = neural\_net_\theta(x_1,...,x_{i-1})$ for network parameters $\theta$ (with $s_i > 0$). The authors claim that $p(x)$ is a probability density function, i.e.
$$\int_{\mathbb{R}^{n\times n}} p(x) \; dx= \int_{\mathbb{R}^{n\times n}} \prod_{i=1}^{n^2} L(x_i;\, \mu_i, s_i) \; dx=1, \qquad (\mu_i, s_i) = neural\_net_\theta(x_1,...,x_{i-1})$$
Q1. How can we guarantee that this integral is 1 for any neural network?
Q2. Does the claim still hold if $neural\_net_\theta$ is replaced by an arbitrary continuous function of $x_1,...,x_{i-1}$?
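For intuition, here is a numerical sanity check I tried on a two-pixel toy version of the model: `toy_net` is a hypothetical stand-in for $neural\_net_\theta$ (an arbitrary continuous map from $x_1$ to $(\mu_2, s_2)$ with $s_2 > 0$), and I numerically integrate the joint density $L(x_1; 0, 1)\,L(x_2; \mu_2(x_1), s_2(x_1))$ over $\mathbb{R}^2$:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import logistic

def toy_net(x1):
    """Hypothetical stand-in for neural_net_theta: any continuous map
    from x1 to (mu2, s2), as long as the scale s2 stays positive."""
    return np.tanh(x1), 0.5 + 0.1 * np.cos(x1)

def joint_density(x2, x1):
    # p(x1, x2) = L(x1; mu1=0, s1=1) * L(x2; mu2(x1), s2(x1))
    mu2, s2 = toy_net(x1)
    return logistic.pdf(x1, loc=0.0, scale=1.0) * logistic.pdf(x2, loc=mu2, scale=s2)

# Integrate the joint density over all of R^2.
total, err = dblquad(joint_density, -np.inf, np.inf, -np.inf, np.inf)
print(total)  # numerically close to 1
```

The result comes out as $\approx 1$ regardless of how `toy_net` is chosen, which suggests the normalization does not depend on the specific network.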