Graduate Student
by Tara Stojimirovic (University of Toronto)
Location: Koffler House 113 (569 Spadina Cres, Toronto, ON M5S 1C7)
Many modern problems in data analysis involve very high-dimensional vectors. Performing computations with such vectors is costly, so being able to replace them with lower-dimensional representatives that still encode key properties of the originals is very valuable. The Johnson-Lindenstrauss lemma tells us that, in this scenario, we can do something random that will work out in our favour with high probability. In particular, we can construct a random projection in a simple way so that, with high probability, the pairwise distances between the resulting lower-dimensional vectors are distorted only by a factor of 1 ± ε; moreover, for n points the target dimension only needs to grow like ε⁻² log n, independently of the original dimension. In this talk, we discuss this lemma and one of its proofs, which uses a Gaussian concentration inequality.
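As a concrete illustration (not part of the talk itself), here is a minimal numpy sketch of the construction the abstract describes: project with a matrix of i.i.d. Gaussian entries scaled by 1/√k, then compare pairwise distances before and after. The dimensions n, d, k, the random data, and the helper pairwise_sq_dists are all illustrative choices, not anything fixed by the lemma.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_sq_dists(A):
    # ||a_i - a_j||^2 = ||a_i||^2 + ||a_j||^2 - 2 <a_i, a_j>
    sq = (A ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * A @ A.T

n, d, k = 100, 10_000, 1_000     # number of points, original dim, target dim
X = rng.standard_normal((n, d))  # toy data: n points in R^d

# Random projection with i.i.d. Gaussian entries, scaled by 1/sqrt(k)
# so that squared lengths are preserved in expectation.
G = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ G                        # projected points in R^k

orig = pairwise_sq_dists(X)
proj = pairwise_sq_dists(Y)
iu = np.triu_indices(n, k=1)     # distinct pairs only
ratio = proj[iu] / orig[iu]
print(f"squared-distance ratios in [{ratio.min():.3f}, {ratio.max():.3f}]")
```

With these (arbitrary) parameters the printed ratios should cluster near 1, matching the lemma's prediction that the distortion ε scales roughly like √(log n / k).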