
I'm looking for neural networks $n_\theta(x)$ which integrate to a constant, $\int_{[0,1]^d} n_\theta(x)\; dx = c \in \mathbb{R}$. Notably, the constant should be the same for any weights $\theta$ for which $n_\theta(x)$ is finite.

Example. Normalizing flows use neural networks $n(x)$ to represent a probability density function $p_n(x)$ which satisfies $\int_{[0,1]^d} p_n(x)\; dx = 1$. This is also true for autoregressive models like PixelCNN.
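For intuition, here is a minimal 1-D sketch of why such densities integrate to 1 for any weights (a toy monotone transform I'm using purely for illustration, not an actual flow architecture): the density is the derivative of a monotone map $f_\theta:[0,1]\to[0,1]$ with $f_\theta(0)=0$ and $f_\theta(1)=1$, so its integral is $f_\theta(1)-f_\theta(0)=1$ no matter what $\theta$ is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "flow": f_theta(x) = sum_k w_k * x^{a_k}, a monotone bijection of
# [0,1] onto itself for any weights, since the w_k form a softmax (positive,
# summing to 1) and the exponents a_k are >= 1. The induced density is
# p(x) = f_theta'(x), so its integral is f_theta(1) - f_theta(0) = 1
# regardless of theta.
theta_w = rng.normal(size=5)                   # unconstrained weights
theta_a = rng.normal(size=5)
w = np.exp(theta_w) / np.exp(theta_w).sum()    # softmax -> positive, sums to 1
a = 1.0 + np.exp(theta_a)                      # exponents >= 1

def density(x):
    # p(x) = f'(x) = sum_k w_k * a_k * x^{a_k - 1}
    return (w * a * x[:, None] ** (a - 1.0)).sum(axis=1)

# Monte Carlo estimate of the integral over [0, 1]
x = rng.uniform(size=1_000_000)
print(density(x).mean())                       # ~1.0 for any theta_w, theta_a
```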

It would also be interesting if the integral were something one can compute using a formula, especially if that entails fewer architectural constraints than normalizing flows and PixelCNN.


Clarification 1. While the example uses neural networks that represent probability density functions, I do not require that; it was just the only way I knew how to make the integral tractable.

Clarification 2. I'm not considering a particular architecture. I want to find an architectural constraint that guarantees the integral is tractable. The "weaker" the architectural constraint, the better.

Bonus example. Additive coupling layers are volume preserving, so $\int_{x\in D} n(x)\; dx = \int_{x\in D} x\; dx$.
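A quick numerical sanity check of the volume-preservation property (a sketch assuming a NICE-style additive coupling layer $y = (x_1,\, x_2 + m(x_1))$ with an arbitrary MLP $m$, not any specific published model): the Jacobian is block lower-triangular with ones on the diagonal, so $\det J = 1$ for any weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Additive coupling layer: split x into (x1, x2) and map
#   y1 = x1,   y2 = x2 + m(x1)
# where m is an arbitrary small MLP. The Jacobian is block lower-triangular
# with ones on the diagonal, so det J = 1 for any weights -- the layer is
# volume preserving. We verify this with a finite-difference Jacobian.
d, d1 = 4, 2                       # total dimension, size of the first block
W1 = rng.normal(size=(8, d1))      # arbitrary MLP weights (the "theta")
b1 = rng.normal(size=8)
W2 = rng.normal(size=(d - d1, 8))
b2 = rng.normal(size=d - d1)

def m(x1):
    return W2 @ np.tanh(W1 @ x1 + b1) + b2

def coupling(x):
    x1, x2 = x[:d1], x[d1:]
    return np.concatenate([x1, x2 + m(x1)])

# Finite-difference Jacobian at a random point in [0,1]^d
x0 = rng.uniform(size=d)
eps = 1e-6
J = np.stack([(coupling(x0 + eps * e) - coupling(x0 - eps * e)) / (2 * eps)
              for e in np.eye(d)], axis=1)
print(np.linalg.det(J))            # ~1.0 regardless of the weights
```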

  • I would like to see an example of such an integration as well. Something you should clarify in order to make this tractable is the exact neural network you are considering. We need the details on what inputs and what layers are used, i.e., it is difficult to integrate a function without being told which function we're integrating.
    – Galen
    Commented Jul 28, 2021 at 17:49
  • I do not consider a particular architecture like a ResNet or a Transformer. I want to find architectures that are as expressive as possible while simultaneously admitting a tractable integral.
    Commented Jul 28, 2021 at 17:57
  • Won't this always integrate to a constant? You have a bounded function on bounded support.
    – Dave
    Commented Jul 28, 2021 at 18:04
  • I like that goal of choosing a model with weak assumptions, but I suspect it is too broad for stats.SE. To improve your chances of a high-quality answer, I advise that you pick something like a specific choice of multilayer perceptron.
    – Galen
    Commented Jul 28, 2021 at 18:04
  • @Dave If the network has fixed weights and we assume it does not output infinity, yes. That said, I want the constant to be the same for any weights. I updated the question to clarify this.
    Commented Jul 28, 2021 at 18:13
