Senior Researcher,
Microsoft Research


Upcoming Talks:
  • June 26th, 2024: Google Research
  • May 7–11, 2024: Presenting two papers at ICLR 2024
  • Jan 30th, 2024: Microsoft Research Forum [Link]
  • Jan 25th, 2024: Microsoft Research Podcast [Link]

Dipendra Misra

I am a machine learning researcher specializing in natural language understanding, interactive learning (e.g., reinforcement learning), and representation learning. My main research agenda is to develop generalizable agents that can interact with the world using actions and natural language, and that can solve a range of tasks using reward, natural language feedback, or other types of feedback.

News: Our LASER paper was accepted at ICLR 2024, trended on GitHub (Python), and was featured in The Verge!

A recurring theme in my research is developing interactive learning algorithms and representation learning methods using feedback or data that occurs naturally in the real world, such as video data, user edits, and language feedback. My current areas of research focus are below.

  • (Models) Foundation Models for Decision Making: I am focusing on developing foundation models that can take actions for different agents, in different domains, and for different tasks, e.g., an agent that can interact with an OS or a bot that can play a game against a human. I am particularly interested in using naturally available data such as videos to build these models. My most relevant recent work is our ICLR 2024 (Spotlight) paper on learning the right representations from videos. Other relevant work includes my series of papers at EMNLP 2017, EMNLP 2018, CoRL 2018, and CVPR 2019 on developing instruction-following agents that can solve a variety of tasks specified in natural language in embodied settings.

  • (Algorithms) Learning Algorithms for LLMs and Foundation Models: I am interested in developing practical and efficient algorithms for training agents. In particular, my recent focus has been on fine-tuning LLMs using better imitation learning algorithms (arXiv 2023), such as our DR-PO algorithm (arXiv 2024). My other past work includes developing RL methods that are provably sample-efficient and computationally efficient: the Homer algorithm (ICML 2020), the RichID algorithm (NeurIPS 2020), the FactoRL algorithm (ICLR 2021), the PPE algorithm (ICLR 2022 Oral), and AC-State (TMLR 2023).

  • (Feedback) Developing Agents that Learn from Language Feedback: Once an agent foundation model has been deployed, it may need post-training or adaptation to a given setup. I have developed approaches that fine-tune models using language feedback, which is easy and natural for non-expert humans to provide. E.g.,

    • Our recent paper (arXiv 2024) studies aligning LLMs using user edit feedback that is naturally generated in writing assistant applications.

    • It is more expensive to collect a trajectory for a given language instruction than to label an instruction for a given trajectory. Our ICML 2024 and ICML 2021 papers train agents using these hindsight language instructions.

    • Humans often use free-form language feedback to guide each other. Our recent LLF-Bench paper (arXiv 2023) introduces a benchmark for evaluating learning in LLM agents using language feedback.
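The hindsight-instruction idea above can be sketched as a relabeling step: roll out the agent, have an annotator describe what each trajectory actually accomplished, and treat that description as the instruction for supervised training. A minimal, hypothetical Python sketch (the `Trajectory` class and the `describe` annotator stand-in are illustrative, not taken from the papers):

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    actions: list[str]   # actions the agent actually took
    outcome: str         # what the rollout ended up achieving

def describe(traj: Trajectory) -> str:
    """Stand-in for a human (or model) annotator labeling a trajectory in hindsight."""
    return f"go and {traj.outcome}"

def hindsight_relabel(rollouts: list[Trajectory]) -> list[tuple[str, list[str]]]:
    # The cheap direction: label an instruction for a given trajectory,
    # rather than collecting a new trajectory for a given instruction.
    return [(describe(t), t.actions) for t in rollouts]

rollouts = [Trajectory(actions=["left", "pick"], outcome="pick up the red block")]
data = hindsight_relabel(rollouts)
# Each (instruction, actions) pair can now be used for supervised imitation.
```

The design point is that annotation cost sits entirely in `describe`, which only has to summarize an observed trajectory rather than demonstrate a new one.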

Beyond my main agenda, I am also interested in a diverse range of topics including language and vision problems, semantic parsing, statistical learning theory, and computational social science.

Bio: I am a Senior Researcher at Microsoft Research, New York. I received my PhD in computer science from Cornell University (2019) and my bachelor's in computer science from the Indian Institute of Technology Kanpur (2013).

Quick Links:   MSR Reinforcement Learning,   Intrepid Code Base,   CIFF Code Base,   Math for AI,   My Blog,   RL Formulas

Publications



New Preprints

Aligning LLM Agents by Learning Latent Preference from User Edits
Ge Gao*, Alexey Taymanov*, Eduardo Salinas, Paul Mineiro, and Dipendra Misra (* equal first authorship)
[arXiv 2024] [Code]

Dataset Reset Policy Optimization for RLHF
Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Kianté Brantley, Dipendra Misra, Jason D. Lee, Wen Sun
[arXiv 2024] [Code]

Policy Improvement using Language Feedback Models
Victor Zhong, Dipendra Misra, Xingdi Yuan, Marc-Alexandre Côté
[arXiv 2024]

LLF-Bench: Benchmark for Interactive Learning from Language Feedback
Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, Adith Swaminathan (alphabetic ordering)
[arXiv 2023] [Code] [Website]

Learning to Generate Better Than Your LLM
Jonathan D. Chang, Kiante Brantley, Rajkumar Ramamurthy, Dipendra Misra, Wen Sun
[arXiv 2023] [Preliminary Version accepted at NeurIPS 2023 Workshop]



Conference and Journal Papers

Provable Interactive Learning with Hindsight Instruction Feedback
Dipendra Misra*, Aldo Pacchiano*, Robert E. Schapire* (* equal contribution)
In Proceedings of the International Conference on Machine Learning (ICML), 2024.
[arXiv Version] [Code]

Towards Principled Representation Learning from Videos for Reinforcement Learning
Dipendra Misra*, Akanksha Saran*, Tengyang Xie, Alex Lamb, and John Langford (* equal contribution)
In Proceedings of the 12th International Conference on Learning Representations (ICLR), 2024.
[ICLR 2024] [ICLR Spotlight] [Code]

The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
Pratyusha Sharma, Jordan T. Ash* and Dipendra Misra* (* equal advising)
[This paper presents a surprising discovery: a low-rank approximation of selected weight matrices of an LLM can boost the LLM's QA performance, at times by 20–30 percentage points.]
In Proceedings of the 12th International Conference on Learning Representations (ICLR), 2024.
[arXiv 2023] [ICLR 2024] [Code] [Website]
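The intervention studied in LASER is easy to state: replace a chosen weight matrix with a truncated-SVD (low-rank) approximation of itself; which matrix to target, and at what rank, is what the paper investigates. A minimal NumPy sketch of that rank-reduction step on a toy matrix (an illustration of the operation, not the actual model surgery):

```python
import numpy as np

def low_rank_approx(W: np.ndarray, rank: int) -> np.ndarray:
    """Return the best rank-`rank` approximation of W via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep only the top `rank` singular directions.
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

# Toy example: a 64x64 "weight matrix" reduced to rank 8.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_hat = low_rank_approx(W, rank=8)
```

By the Eckart–Young theorem this truncation is the best rank-8 approximation of `W` in Frobenius norm; in an LLM, `W_hat` would simply be swapped in for the original matrix.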

Survival Instinct in Offline Reinforcement Learning
Anqi Li, Dipendra Misra, Andrey Kolobov and Ching-An Cheng
In Conference on Neural Information Processing Systems (NeurIPS), 2023
[arXiv 2023] [NeurIPS Spotlight] [Preliminary Version accepted at ICML Workshop]

Agent-Controller Representations: Principled Offline RL with Rich Exogenous Information
Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Didolkar, Dipendra Misra, Xin Li, Harm van Seijen, Remi Tachet des Combes, John Langford
In Proceedings of the International Conference on Machine Learning (ICML), 2023.
[ICML 2023 Version] [Preliminary version accepted at NeurIPS 2022 workshop]

Guaranteed Discovery of Controllable Latent States with Multi-Step Inverse Models
Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Didolkar, Dipendra Misra, Dylan Foster, Lekan Molu, Rajan Chari, Akshay Krishnamurthy, and John Langford
In Proceedings of the Transactions on Machine Learning Research (TMLR), 2023.
[TMLR 2023 Version] [arXiv 2022] [Website]

Provable Safe Reinforcement Learning with Binary Feedback
Andrew Bennett, Dipendra Misra, and Nathan Kallus
In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
[AISTATS 2023 Version] [arXiv 2022] [Code]

Provably Sample-Efficient RL with Side Information about Latent Dynamics
Yao Liu, Dipendra Misra, Miro Dudík, and Robert Schapire
In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
[NeurIPS 2022 version] [arXiv 2022]

Sample-Efficient RL in the Presence of Exogenous Information
Yonathan Efroni, Dylan Foster, Dipendra Misra, Akshay Krishnamurthy and John Langford
In Proceedings of the 35th Conference on Learning Theory (COLT), 2022.
[COLT Version] [arXiv 2022]

Understanding Contrastive Learning Requires Incorporating Inductive Biases
Nikunj Saunshi, Jordan Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham Kakade, Akshay Krishnamurthy
In Proceedings of the 39th International Conference on Machine Learning (ICML), 2022.
[ICML Version] [arXiv 2022]

Provable RL with Exogenous Distractors via Multistep Inverse Dynamics
Yonathan Efroni, Dipendra Misra, Akshay Krishnamurthy, Alekh Agarwal, and John Langford
In Proceedings of the 10th International Conference on Learning Representations (ICLR), 2022.
[ICLR 2022] [arXiv 2021] [Code] [Oral Presentation]

Investigating the Role of Negatives in Contrastive Representation Learning
Jordan Ash, Surbhi Goel, Akshay Krishnamurthy, and Dipendra Misra (alphabetic ordering)
The 25th International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
[arXiv 2021] [Code to come soon]

Interactive Learning from Activity Description
Khanh Nguyen, Dipendra Misra, Robert Schapire, Miro Dudík, Patrick Shafto
In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021.
[Paper] [Version at EML workshop, ICLR 2021] [Code]

Provable Rich Observation Reinforcement Learning with Combinatorial Latent States
Dipendra Misra, Qinghua Liu, Chi Jin, John Langford
In Proceedings of the 9th International Conference on Learning Representations (ICLR), 2021.
[Paper] [Code] [RL Theory Seminar]

Learning the Linear Quadratic Regulator from Nonlinear Observations
Zakaria Mhammedi, Dylan J. Foster, Max Simchowitz, Dipendra Misra, Wen Sun, Akshay Krishnamurthy, Alexander Rakhlin, John Langford
In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS), 2020.
[arXiv Version] [NeurIPS Version] [Code]

Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning
Dipendra Misra, Mikael Henaff, Akshay Krishnamurthy, and John Langford
In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
[arXiv Version] [ICML Version] [Code]

Early Fusion for Goal Directed Robotic Vision
Aaron Walsman, Yonatan Bisk, Saadia Gabriel, Dipendra Misra, Yoav Artzi, Yejin Choi, Dieter Fox
In International Conference on Intelligent Robots and Systems (IROS), 2019.
[Paper]    [Robocup Best paper nomination]

Touchdown: Natural Language Navigation and Spatial Reasoning in Visual Street Environments
Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, Yoav Artzi
In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[Paper] [Dataset and SDR Code] [Navigation Code]

Mapping Navigation Instructions to Continuous Control Actions with Position Visitation Prediction
Valts Blukis, Dipendra Misra, Ross A. Knepper, and Yoav Artzi
In Proceedings of the Conference on Robot Learning (CoRL), 2018.
[Paper] [Code] [Demo Video]

Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Dipendra Misra, Ming-Wei Chang, Xiaodong He and Wen-tau Yih
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
[Paper] [Code]

Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction
Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
[Paper] [Code, Data and Simulators]

Lipschitz Continuity in Model-based Reinforcement Learning
Kavosh Asadi*, Dipendra Misra*, Michael L. Littman (* equal contribution)
In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
[Paper] [Code]

Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
Dipendra Misra, John Langford and Yoav Artzi
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
[Paper] [Code] [Arxiv Preprint]

Neural Shift-Reduce CCG Semantic Parsing
Dipendra Misra and Yoav Artzi
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016.
[Paper] [Supplementary] [Code]

Tell Me Dave: Context-Sensitive Grounding of Natural Language to Manipulation Instructions
Dipendra K. Misra, Jaeyong Sung, Kevin K. Lee, Ashutosh Saxena
In The International Journal of Robotics Research (IJRR), 2015.
[Paper]
(Note: the domain tellmedave DOT com no longer belongs to my coauthors or me.
Also, the link tellmedave DOT cs DOT cornell DOT edu is no longer active.)

Environment-driven lexicon induction for high-level instructions
Dipendra K. Misra, Kejia Tao, Percy Liang, Ashutosh Saxena
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2015.
[Paper] [Supplementary] [Code] [Data] [Simulator] [Bibtex]

Tell Me Dave: Context-Sensitive Grounding of Natural Language to Manipulation Instructions
Dipendra K. Misra, Jaeyong Sung, Kevin K. Lee, Ashutosh Saxena
In Proceedings of the Robotics: Science and systems (RSS), 2015.
[Paper]
(Note: the domain tellmedave DOT com no longer belongs to my coauthors or me.
Also, the link tellmedave DOT cs DOT cornell DOT edu is no longer active.)




Workshop Papers

Towards Data-Driven Offline Simulations for Online Reinforcement Learning
Shengpu Tang, Felipe Vieira Frujeri, Dipendra Misra, Alex Lamb, John Langford, Paul Mineiro, Sebastian Kochman
[arXiv 2022] (Accepted at the NeurIPS 2022 "3rd Offline RL Workshop: Offline RL as a 'Launchpad'")

Have you tried Neural Topic Models? Comparative Analysis of Neural and Non-Neural Topic Models with Application to COVID-19 Twitter Data
Andrew Bennett, Dipendra Misra, and Nga Than (alphabetic ordering)
Data Science for Social Good (DSSG) workshop at Conference on Knowledge Discovery and Data Mining (KDD) 2021
[arXiv 2021] [Code]

Towards a Simple Approach to Multi-step Model-based Reinforcement Learning
Kavosh Asadi, Evan Carter, Dipendra Misra, Michael Littman
Deep Reinforcement Learning Workshop at the Conference on Neural Information Processing Systems (NeurIPS), 2018.
[Paper]

The Third Workshop on Representation Learning for NLP (Rep4NLP)
Isabelle Augenstein, Kris Cao, He He, Felix Hill, Spandana Gella, Jamie Kiros, Hongyuan Mei and Dipendra Misra
Workshop at the Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
[Workshop Proceedings]

Equivalence Between Wasserstein and Value-Aware Model-based Reinforcement Learning
Kavosh Asadi, Evan Carter, Dipendra Misra and Michael L. Littman
Workshop on Prediction and Generative Modeling in Reinforcement Learning (PGMRL) at the International Conference on Machine Learning (ICML), 2018.
[ArXiv Preprint]

Reinforcement Learning for Mapping Instructions to Actions with Reward Learning
Dipendra Misra and Yoav Artzi
Symposium on Natural Communication for Human-Robot Collaboration at AAAI Fall Symposium Series, 2017.
[Paper] [Code]


Old Preprints

CHALET: Cornell House Agent Learning Environment
Claudia Yan, Dipendra Misra, Andrew Bennett, Aaron Walsman, Yonatan Bisk and Yoav Artzi
arXiv report, 2018.
[Paper] [Code]

Combating the Compounding-Error Problem with a Multi-step Model
Kavosh Asadi, Dipendra Misra, Seungchan Kim, Michael L. Littman
arXiv, 2019.
[Paper]

Robo Brain: Large-Scale Knowledge Engine for Robots
Ashutosh Saxena, Ashesh Jain, Ozan Sener, Aditya Jami, Dipendra K. Misra, Hema S Koppula
[Paper]

Selected Media Articles

Selected articles in popular media about my research.

  • Microsoft LASERs away LLM inaccuracies by Emilia David, in The Verge, Jan 31, 2024.

  • Two-step training helps robots interpret human language by Melanie Lefkowitz, in Cornell Chronicle, November 12, 2018.

  • Tell Me Dave Lets You Train A Robot To Respond To Complex Commands by John Biggs, in TechCrunch, June 23, 2014.

  • New robot learns from plain speech, not computer code by Julia Rosen, in Los Angeles Times, June 26, 2014.

  • Teaching old robots new tricks: Machines swap knowledge about how to complete a task despite being hundreds of miles apart by Victoria Woollaston, in the Daily Mail, Oct 30, 2015.

  • Robots Can Now Teach Each Other New Tricks by Will Knight, in MIT Technology Review, Oct 27, 2015.

  • This robot can make you ice cream, video by Jason Aldag, in The Washington Post, June 24, 2014.

  • Robots Are Smart – But Can They Understand Us? by Randy Rieland, in Smithsonian Magazine, July 8, 2014.

  • Robot Responds to Natural Language Instructions, Brings You Fancy Ice Cream by Evan Ackerman, in IEEE Spectrum, June 26, 2014.

  • Robots learn from (even bad) human language by Bill Steele, in Cornell Chronicle, June 24, 2014.

Posts

  • Academia and Compute-Intensive AI Research    [Post]

  • PAC with Hoeffding-Bernstein    [Post]

  • Growing Bifurcation of AI Scholarship     [Post]

  • Dynkin’s π-λ Theorem and CDF     [Part 1]     [Part 2]

  • Are Synthetic Datasets in AI Useful?     [Post]

  • Are we doing NLP the right way?     [Post]

  • Writing and Proof Reading Research Code     [Post]

  • Mathematical Analysis of Policy Gradient Methods     [Post]

  • Tutorial on Markov Decision Process Theory and Reinforcement Learning.     [Slides Part 1]     [Slides Part 2]     [Post]