Toronto Probability
by Lucas Benigni (Université de Montréal)
The Neural Tangent Kernel (NTK) plays a key role in understanding the behavior of wide neural networks, yet its spectral properties in high-dimensional regimes remain subtle. I will present recent work describing the limiting eigenvalue distribution of the NTK for a two-layer network when the data dimension and the number of parameters grow at comparable scales. The limit is deterministic and can be expressed using tools from free probability. No prior knowledge of neural networks will be assumed.
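For readers unfamiliar with the object, here is a minimal sketch of one standard two-layer setup and its NTK; the talk's exact parameterization, activation, and scaling may differ.

% Two-layer network with width m, inputs x in R^d (standard 1/sqrt(m) scaling assumed):
\[
  f(x;\theta) = \frac{1}{\sqrt{m}} \sum_{j=1}^{m} a_j \, \sigma(\langle w_j, x \rangle),
  \qquad \theta = (a_1, \dots, a_m, w_1, \dots, w_m).
\]
% The NTK is the Gram kernel of the parameter gradients:
\[
  K(x, x') = \big\langle \nabla_\theta f(x;\theta), \, \nabla_\theta f(x';\theta) \big\rangle.
\]
% "Comparable scales" refers to a proportional regime in which the sample size n,
% the data dimension d, and the width m all grow together, e.g. d/n and m/n
% converging to positive constants; the eigenvalues of the n x n kernel matrix
% (K(x_i, x_j))_{i,j} are the spectrum in question.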