Hello! I am a research engineer on the Google Brain team.
The last few years of Machine Learning research have shown that rich data combined with simple statistical learning algorithms and powerful computational resources can outperform hand-engineered systems in computer vision, translation, and speech recognition problems.
My research asks whether that principle of big data and simple algorithms can yield similarly unprecedented capabilities in robotics, just as it did for computer vision, translation, and speech. Specifically, I focus on robotic manipulation and self-supervised robotic learning.
My other interests include neuroscience, biomimicry, computer graphics, and financial markets.
- 11/07/19 Robinhood, Leverage, and Lemonade
- 07/06/19 Normalizing Flows in 100 Lines of JAX
- 07/05/19 Tips for Training Likelihood Models
- 05/26/19 Lessons from AI Research Projects: The First 3 Years
- 05/12/19 Fun with Snapchat's Gender Swapping Filter
- 03/10/19 What I Cannot Control, I Do Not Understand
- 02/25/19 Why I Love Notebooks
- 02/21/19 Meta-Learning in 50 Lines of JAX
- 02/05/19 Thoughts on the BagNet Paper
- 12/28/18 Uncertainty: a Tutorial (Chinese translation available)
- 12/11/18 Grasp2Vec: Learning Object Representations from Self-Supervised Grasping
- 11/30/18 Eric's Machine Learning Meme Collection
- 08/08/18 Dijkstra's in Disguise
- 06/21/18 Bots and Thoughts from ICRA2018
- 04/01/18 Aesthetically Pleasing Learning Rates
- 02/23/18 Teacup: A Short Story (Chinese translation available)
- 01/23/18 Doing a Concurrent Masters at Brown
- 01/17/18 Normalizing Flows Tutorial, Part 2: Modern Normalizing Flows
- 01/17/18 Normalizing Flows Tutorial, Part 1: Distributions and Determinants (Chinese translation available)
- 12/25/17 Gamma Correction
- 11/20/17 Expressivity, Trainability, and Generalization in Machine Learning (Spanish and Chinese translations available)
- 10/12/17 Strong AI Ideas in Crystal Nights (Greg Egan, 2009)
- 01/02/17 Summary of NIPS 2016
- 11/08/16 Tutorial: Categorical Variational Autoencoders using Gumbel-Softmax
- 09/06/16 Riemann Summation and Physics Simulation are Statistically Biased
- 09/05/16 Monte Carlo Variance Reduction Techniques in Julia
- 08/07/16 A Beginner's Guide to Variational Methods: Mean-Field Approximation
- 07/25/16 Why Randomness is Important for Deep Learning
- 07/16/16 What Product Breakthroughs will Recent Advances in Deep Learning Enable?
- 07/11/16 How to Get an Internship
- 05/14/16 Adversarial Exploration Policies for Robust Model Learning
- 02/27/16 Understanding and Implementing DeepMind's DRAW Model
- 12/29/15 Generative Adversarial Nets in TensorFlow: Part I
- 08/17/15 My Internship Experiences at Pixar, Google, and Two Sigma
- 01/14/14 Reverse-Engineering Apps: a Step-by-Step Beginner's Guide