Blog
The Johnson-Lindenstrauss lemma for the brave
December 23, 2020 - 3484 words - 18 mins
If you are interested in dimensionality reduction, chances are that you have
come across the Johnson-Lindenstrauss lemma. I learned about it while studying
the Linformer paper, which contains a
result on dimensionality reduction for the
Transformer. Essentially, they prove that
self-attention is lo…
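For reference, a standard statement of the lemma (this is the textbook form, not an excerpt from the post):

```latex
% Johnson–Lindenstrauss lemma (standard form).
% Given $0 < \varepsilon < 1$ and $n$ points $x_1, \dots, x_n \in \mathbb{R}^d$,
% there is a linear map $f\colon \mathbb{R}^d \to \mathbb{R}^k$ with
% $k = O(\varepsilon^{-2} \log n)$ such that, for all $i, j$,
\[
  (1 - \varepsilon)\,\lVert x_i - x_j \rVert^2
  \;\le\; \lVert f(x_i) - f(x_j) \rVert^2
  \;\le\; (1 + \varepsilon)\,\lVert x_i - x_j \rVert^2 .
\]
```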
Slide talk: Demystifying GPT-3
November 09, 2020 - 803 words - 5 mins
The transformer
For another meeting of our reinforcement/machine learning reading group, I gave a
talk on the underlying model of GPT-2 and GPT-3, the ‘Transformer’.
There are two main concepts I wanted to explain: positional encoding and attention.
During the talk, I found that two things were most…
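As context for that summary, here is a minimal sketch of scaled dot-product attention, the core Transformer operation mentioned above. This is my own illustration of the standard formula softmax(QKᵀ/√d)V, not code from the slides:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity of queries and keys
    # Row-wise softmax, shifted by the row max for numerical stability.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted average of the values

# Toy usage: 4 tokens, model dimension 8; self-attention sets Q = K = V = X.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (4, 8)
```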
Online talk: Effective Kan fibrations in simplicial sets
July 06, 2020 - 77 words - 1 min
Effective Kan fibrations in simplicial sets
As part of the Workshop on Homotopy Type Theory and Univalent Foundations
(HoTT/UF), I have contributed a talk on my
recent work with Benno van den Berg.
The talk is available on YouTube.
An abstract can be found here and
the full paper is now also availab…
Slide talk: Off-policy methods with approximation
March 08, 2020 - 109 words - 1 min
Slides for chapter 11 of Sutton and Barto
For my fortnightly reading group on reinforcement learning, I prepared a talk
on chapter 11 of the book by Sutton and
Barto.
This chapter is on off-policy methods with approximation. The main content consists of
some negative results for stochastic semi-grad…