Abhinav Moudgil

PhD Student, Mila.

I am a CS PhD student at Mila and Concordia University advised by Prof. Eugene Belilovsky. My research is supported by FRQNT and Frederick Lowy Scholars fellowships.

Before starting my PhD, I was a Visiting Scholar with Prof. Devi Parikh and Prof. Dhruv Batra at Georgia Tech, where I worked on building multi-modal embodied agents that can navigate photo-realistic environments using visual and language cues.

I completed my Bachelor's and Master's in Electronics and Communication Engineering (ECE) in 2019 at IIIT Hyderabad, where I pursued research on long-term visual object tracking with Prof. Vineet Gandhi. My Master's thesis is available here.

During my Master's, I also spent two wonderful semesters at UC San Diego and Stanford University in 2018. At UC San Diego, I worked with Prof. Sicun Gao on sample-efficient reinforcement learning algorithms for Atari games. At Stanford, I collaborated with Prof. Noah Goodman on recognizing humor in text.

Apple MLR
Summer 2024

Mila
2021 - Present

Georgia Tech
2020 - 2021

Stanford
Fall 2018

UC San Diego
Summer 2018

GSOC, CERN
Summer 2016

IIIT Hyderabad
2013 - 2019


News

Apr 2024 Interning at Apple MLR Barcelona x Cambridge!
Dec 2023 Preprint out on arXiv: Can We Learn Communication-Efficient Optimizers?
Sep 2023 Got married!
Aug 2023 Reviewed for the TPAMI journal.
Apr 2023 Won the FRQNT fellowship. Thanks, Gouvernement du Québec!
Jul 2022 Received Outstanding Reviewer award at ICML 2022!
May 2022 Our work on scaling DTP got accepted to ICML 2022!
Jan 2022 Preprint out on arXiv: Towards Scaling Difference Target Propagation by Learning Backprop Targets
Sep 2021 SOAT accepted to NeurIPS 2021!
Aug 2021 Awarded the Frederick Lowy Scholars Fellowship. Thanks to Concordia University for this generous three-year support!

Publications

(* denotes equal contribution)

Can We Learn Communication-Efficient Optimizers?

Charles-Étienne Joseph*, Benjamin Thérien*, Abhinav Moudgil, Boris Knyazev, Eugene Belilovsky

arXiv, 2023


Learning to Optimize with Recurrent Hierarchical Transformers

Abhinav Moudgil, Boris Knyazev, Guillaume Lajoie, Eugene Belilovsky

Frontiers4LCD Workshop, ICML 2023


Towards Scaling Difference Target Propagation by Learning Backprop Targets

Maxence Ernoult, Fabrice Normandin*, Abhinav Moudgil*, Sean Spinney, Eugene Belilovsky, Irina Rish, Blake Richards, Yoshua Bengio

ICML 2022


SOAT: A Scene- and Object-Aware Transformer for Vision-and-Language Navigation

Abhinav Moudgil, Arjun Majumdar, Harsh Agrawal, Stefan Lee, Dhruv Batra

NeurIPS 2021


Contrast and Classify: Alternate Training for Robust VQA

Yash Kant, Abhinav Moudgil, Dhruv Batra, Devi Parikh, Harsh Agrawal

ICCV 2021, NeurIPS Self-Supervised Learning Workshop 2020


Exploring 3Rs of Long-term Tracking: Re-detection, Recovery and Reliability

Shyamgopal Karthik, Abhinav Moudgil, Vineet Gandhi

WACV 2020


Long-Term Visual Object Tracking Benchmark

Abhinav Moudgil, Vineet Gandhi

ACCV 2018 (Oral Presentation)


Open Source Projects



distributed-dtp

Implements a custom distributed scheme for our DTP algorithm (ICML 2022) in PyTorch, parallelizing feedback-weight training across GPUs.



pygoturn

Fast PyTorch implementation of the visual tracker GOTURN (Held et al., ECCV 2016), which tracks an object through a video at 100 FPS using a deep siamese convolutional network.



mosse-tracker

MATLAB implementation of the MOSSE tracker (Bolme et al., CVPR 2010), which forms the basis of correlation filter-based object tracking algorithms.
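The repository itself is in MATLAB; as an illustrative sketch only (class and function names here are my own, not the repo's), the core of MOSSE fits in a few lines of NumPy: train a filter in the Fourier domain against a Gaussian target response, then keep its numerator and denominator as running averages.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    # Desired correlation output: a 2D Gaussian peaked at the patch centre.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

class MosseFilter:
    """Minimal single-channel MOSSE filter (Bolme et al., CVPR 2010)."""

    def __init__(self, patch, sigma=2.0, lr=0.125, eps=1e-5):
        self.lr, self.eps = lr, eps
        self.G = np.fft.fft2(gaussian_response(patch.shape, sigma))
        F = np.fft.fft2(patch)
        # Closed-form filter H* = A / B, accumulated in the Fourier domain.
        self.A = self.G * np.conj(F)
        self.B = F * np.conj(F)

    def respond(self, patch):
        # Correlation response; its argmax gives the target's new position.
        F = np.fft.fft2(patch)
        return np.real(np.fft.ifft2(F * self.A / (self.B + self.eps)))

    def update(self, patch):
        # Running-average update of numerator and denominator (learning rate lr).
        F = np.fft.fft2(patch)
        self.A = self.lr * self.G * np.conj(F) + (1 - self.lr) * self.A
        self.B = self.lr * F * np.conj(F) + (1 - self.lr) * self.B
```

On the training patch, `respond` peaks at the patch centre; a circularly shifted patch moves the peak by the same offset, which is exactly how the tracker localizes the target from frame to frame.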



pun-model

Python implementation reproducing the results of the paper “A computational model of linguistic humor in puns” (Kao et al., CogSci 2015). It employs a probabilistic model to compute a funniness rating for a given sentence.
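As a hypothetical illustration (the function name and example numbers are mine, not taken from the repo), one ingredient of the Kao et al. model is ambiguity: the entropy of a sentence's posterior distribution over latent meanings. A pun that supports two meanings with comparable probability is highly ambiguous, which the model correlates with funniness.

```python
import math

def ambiguity(meaning_probs):
    """Entropy (in bits) of a distribution over latent meanings.

    In the Kao et al. (CogSci 2015) model, funnier puns tend to support
    two meanings with comparable probability, i.e. higher entropy.
    """
    return -sum(p * math.log2(p) for p in meaning_probs if p > 0)

# Two equally likely meanings are maximally ambiguous: ambiguity([0.5, 0.5]) == 1.0 bit,
# while an unambiguous sentence scores 0.
```

The full model also weighs distinctiveness, i.e. how strongly different words in the sentence support each meaning; this sketch covers only the entropy component.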



short-jokes-dataset

Collection of Python scripts for building the Short Jokes dataset, containing 231,657 jokes scraped from websites such as Reddit and Twitter.



ai-bots

Implementations of algorithms such as deep Q-learning, policy gradients, simulated annealing, and hill climbing in TensorFlow / PyTorch, tested on OpenAI Gym environments.
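The repo implements the deep variants in TensorFlow / PyTorch; as a self-contained sketch of the underlying idea (the toy MDP and all names here are mine, not the repo's), here is tabular Q-learning with epsilon-greedy exploration on a tiny chain environment, in the spirit of Gym's step interface.

```python
import numpy as np

# Toy deterministic chain MDP: states 0..4; action 1 moves right, action 0 moves left.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    # Returns (next_state, reward, done), mirroring Gym's env.step contract.
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learning(episodes=300, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration.
            if rng.random() < epsilon:
                a = int(rng.integers(N_ACTIONS))
            else:
                a = int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            # Q-learning target: bootstrap from the greedy next-state value.
            target = r if done else r + gamma * np.max(Q[s2])
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```

After training, the greedy policy `np.argmax(Q, axis=1)` moves right in every non-terminal state. Deep Q-learning replaces the table `Q` with a neural network, but the target computation is the same update rule.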