Machine Learning
The tools discussed in the previous sections can be used to implement a number of different machine-learning approaches to approximating the Calabi-Yau metric. Methods for one particular approach, built on Flax, are collected in the cyjax.ml submodule. This includes functions for initializing and working with the Cholesky decomposition of the Hermitian matrix, as well as a batched sampler class.
- Construct Hermitian matrix from Cholesky decomposition.
- Construct Hermitian matrix from Cholesky decomposition parameters.
- Initialize parametrization to yield identity for Hermitian matrix.
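To make this concrete, the following is a minimal JAX sketch of the underlying parametrization: an unconstrained real parameter vector is mapped to a lower-triangular Cholesky factor \(L\) with positive diagonal, so that \(H = L L^\dagger\) is Hermitian and positive definite by construction. All function names here are illustrative and do not reproduce the exact cyjax.ml signatures.

```python
import jax.numpy as jnp

def cholesky_from_params_sketch(diag_params, offdiag_params, size):
    """Map unconstrained real parameters to a lower-triangular factor L.

    diag_params:    (size,) real; exponentiated so the diagonal stays positive.
    offdiag_params: (size*(size-1)//2, 2) real; real and imaginary parts of
                    the strictly lower-triangular entries.
    """
    rows, cols = jnp.tril_indices(size, k=-1)
    lower = offdiag_params[:, 0] + 1j * offdiag_params[:, 1]
    L = jnp.zeros((size, size), dtype=jnp.complex64)
    L = L.at[rows, cols].set(lower)
    return L + jnp.diag(jnp.exp(diag_params).astype(jnp.complex64))

def hermitian_from_params_sketch(diag_params, offdiag_params, size):
    # H = L L^dagger is Hermitian and positive definite by construction.
    L = cholesky_from_params_sketch(diag_params, offdiag_params, size)
    return L @ L.conj().T

def identity_init_sketch(size):
    # All-zero parameters give L = 1 and hence H = 1.
    return jnp.zeros(size), jnp.zeros((size * (size - 1) // 2, 2))

# Usage: the zero initialization yields the identity matrix for H.
size = 5
diag0, off0 = identity_init_sketch(size)
H = hermitian_from_params_sketch(diag0, off0, size)
assert jnp.allclose(H, jnp.eye(size))
```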
One can use multiple different losses which effectively measure the Ricci-flatness of the approximated metric. In particular, we use here the so-called \(\sigma\) accuracy and a Monge-Ampère loss, which rely on the property that the Ricci-flat metric \(g\) gives rise to a volume form which must be proportional to the one given by the holomorphic top form (Headrick & Nassar, 2013). If \(\Omega\) is the holomorphic top form, we can define the ratio \(\eta = \frac{\det g}{\Omega \wedge \bar{\Omega}}\). The \(\sigma\) accuracy measures the deviation from \(\eta\) being constant as the integral

\[
\sigma = \frac{1}{\operatorname{Vol}_\Omega} \int \left| \frac{\eta}{\bar{\eta}} - 1 \right| d\mathrm{vol}_\Omega \,,
\qquad
\bar{\eta} = \frac{1}{\operatorname{Vol}_\Omega} \int \eta \, d\mathrm{vol}_\Omega \,,
\qquad
\operatorname{Vol}_\Omega = \int d\mathrm{vol}_\Omega \,,
\]

where \(d\mathrm{vol}_\Omega \propto \Omega \wedge \bar{\Omega}\) is the volume form defined by the holomorphic top form.
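As an illustration, \(\sigma\) can be estimated by Monte Carlo from point samples. The sketch below assumes arrays of point-wise metric determinants, values of \(\Omega \wedge \bar{\Omega}\) (as a density in a common coordinate patch), and the sampling weights \(w(z)\) discussed below; the function name and argument layout are hypothetical.

```python
import jax.numpy as jnp

def sigma_accuracy_sketch(det_g, omega_sq, weights):
    """Monte Carlo estimate of the sigma accuracy.

    det_g:    (N,) determinants of the approximated metric at sampled points.
    omega_sq: (N,) point-wise values of Omega ^ bar(Omega).
    weights:  (N,) Monte Carlo weights, such that weighted means approximate
              integrals with respect to dvol_Omega.
    """
    eta = det_g / omega_sq
    # For the exact Ricci-flat metric, eta is constant and sigma vanishes.
    eta_mean = jnp.sum(weights * eta) / jnp.sum(weights)
    return jnp.sum(weights * jnp.abs(eta / eta_mean - 1)) / jnp.sum(weights)
```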
For training, we use the related variance-like Monge-Ampère loss

\[
\mathcal{L}_{\mathrm{MA}} = \frac{1}{\operatorname{Vol}_\Omega} \int \left( \frac{\eta}{\bar{\eta}} - 1 \right)^2 d\mathrm{vol}_\Omega
\approx \frac{\sum_i w(z_i) \left( \eta(z_i)/\bar{\eta} - 1 \right)^2}{\sum_i w(z_i)} \,,
\]

which approximates the integral with respect to the volume form \(d\mathrm{vol}_\Omega\) using Monte Carlo weights \(w(z)\). The latter “undo” the bias introduced by the sampling scheme used to sample points \(z\) on the manifold, as discussed in a previous section.
- Compute variance-based eta loss.
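A minimal sketch of such a variance-based eta loss in plain JAX, assuming the same per-point arrays as above; it mirrors the description here and is not necessarily the exact signature used in cyjax.ml. Since it is a smooth function of the metric parameters, it can be differentiated with jax.grad for gradient-based training.

```python
import jax.numpy as jnp

def variance_eta_loss_sketch(eta, weights):
    """Variance-like Monge-Ampere loss from sampled eta values.

    Weighted squared deviation of eta from its weighted mean; this
    vanishes exactly when eta is constant, i.e. for the Ricci-flat metric.
    """
    norm = jnp.sum(weights)
    eta_mean = jnp.sum(weights * eta) / norm
    return jnp.sum(weights * (eta / eta_mean - 1) ** 2) / norm
```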
Lastly, there is a configurable MLP-like network for learning the moduli dependence of the \(H\) matrix.
- Dense network for learning moduli dependence of the H matrix.
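To illustrate what such a network can look like, here is a small Flax sketch that maps the complex moduli to the unconstrained Cholesky parameters from the sketch above; the zero-initialized output layer makes the network start from the identity \(H\). The class name, layer widths, and activation are illustrative choices, not the cyjax implementation.

```python
import flax.linen as nn
import jax.numpy as jnp

class ModuliHNetSketch(nn.Module):
    """Dense network mapping complex moduli to Cholesky parameters for H."""
    h_size: int                 # size of the Hermitian matrix H
    features: tuple = (64, 64)  # hidden layer widths

    @nn.compact
    def __call__(self, moduli):
        # Feed real and imaginary parts of the complex moduli separately.
        x = jnp.concatenate([moduli.real, moduli.imag], axis=-1)
        for width in self.features:
            x = nn.gelu(nn.Dense(width)(x))
        n_offdiag = self.h_size * (self.h_size - 1) // 2
        # Zero-initialized output layer: the network initially returns
        # all-zero Cholesky parameters, i.e. H starts as the identity.
        out = nn.Dense(self.h_size + 2 * n_offdiag,
                       kernel_init=nn.initializers.zeros,
                       bias_init=nn.initializers.zeros)(x)
        diag = out[..., :self.h_size]
        offdiag = out[..., self.h_size:]
        return diag, offdiag.reshape(*offdiag.shape[:-1], n_offdiag, 2)

# Usage (hypothetical moduli batch psi of shape (batch, n_moduli)):
# params = ModuliHNetSketch(h_size=5).init(jax.random.PRNGKey(0), psi)
```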
A schematic overview of the MLP-like network is given below.