I plan to use Google TPUs for scientific numerical simulation (finite element analysis).

That said, do TPUs support float32 for computation? If the precision is too low, they're unsuitable for me.

There is a note about this here, but it says XLA converts it automatically.

2 Answers

Yes. It is not the default, but it is available ("Run a calculation on a Cloud TPU VM using JAX" at Google Cloud).

You can set precision=jax.lax.Precision.HIGHEST on particular operations, such as matrix multiplication.

It "uses even more [Matrix Multiply Units] passes to achieve full float32 precision".

E.g.:

import jax
jax.numpy.matmul(a, b, precision=jax.lax.Precision.HIGHEST)
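
For a quick end-to-end check, here is a minimal sketch (my own, untested on your setup; the tiny 1x1 matrices are just placeholders) that probes whether the extra passes actually kick in. It exploits the fact that 2**-12 is exactly representable in float32 but rounds away in bfloat16:

import jax
import jax.numpy as jnp

# 2**-12 fits in float32's 24-bit significand but not in
# bfloat16's 8-bit significand, so it is lost if the MXU
# rounds the operands down to bfloat16.
eps = 2.0 ** -12
a = jnp.array([[1.0 + eps]], dtype=jnp.float32)
b = jnp.array([[1.0]], dtype=jnp.float32)

default = jnp.matmul(a, b)
highest = jnp.matmul(a, b, precision=jax.lax.Precision.HIGHEST)

print(default - 1.0)  # may print 0.0 on a TPU (bfloat16 passes)
print(highest - 1.0)  # should print ~2.44e-04 (full float32)

If I remember correctly, there is also a global knob, jax.config.update("jax_default_matmul_precision", "highest"), so you don't have to annotate every call; check the JAX docs for the exact accepted values.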

If you are able to run C code on that platform, try checking whether sizeof(float) * CHAR_BIT == 32.

  • TPU mathematical operations go through a custom compiler, not standard C data types: "Code that runs on TPUs must be compiled by the accelerator linear algebra (XLA) compiler. XLA is a just-in-time compiler that takes the graph emitted by an ML framework application and compiles the linear algebra, loss, and gradient components of the graph into TPU machine code." (cloud.google.com/tpu/docs/intro-to-tpu#xla_compiler). Commented Sep 15 at 16:46
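
Building on that comment: the closest JAX-side analogue of the sizeof check is inspecting the dtype on the device, but this only tells you the storage format, not how the compiled matmul computes. A minimal sketch (my own, not from the docs):

import jax.numpy as jnp

x = jnp.ones((2, 2), dtype=jnp.float32)
print(x.dtype)                  # float32
print(jnp.finfo(x.dtype).bits)  # 32 -- storage width only

# This confirms the arrays are stored as 32-bit floats, but the
# XLA-compiled matmul may still round operands to bfloat16
# internally unless Precision.HIGHEST is requested (see the
# other answer).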
