I am a research scientist at FAIR, Meta. I graduated with a PhD in Computer Science from UT Austin, advised by Dr. Kristen Grauman. My PhD research was at the intersection of video understanding and embodied AI.

Before coming to UT, I was an intern at MALL Lab, IISc, working with Dr. Partha Talukdar. I completed my B.E. in Computer Science at BITS Goa.

Contact: tushar.nagarajan@utexas.edu | tusharn@meta.com
CV: Link

Publications

VITED: Video Temporal Evidence Distillation
Yujie Lu, Yale Song, Lorenzo Torresani, William Wang, Tushar Nagarajan
CVPR 2025
[paper]
BIMBA: Selective-Scan Compression for Long-Range Video Question Answering
Md Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang, Gedas Bertasius, Lorenzo Torresani
CVPR 2025
[paper]
Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Instructional Videos
Sagnik Majumder, Tushar Nagarajan, Ziad Al-Halah, Reina Pradhan, Kristen Grauman
CVPR 2025
[paper]
ExpertAF: Expert Actionable Feedback from Video
Kumar Ashutosh, Tushar Nagarajan, Georgios Pavlakos, Kris Kitani, Kristen Grauman
CVPR 2025
[paper]
VEDIT: Latent Prediction Architecture For Procedural Video Representation Learning
Han Lin, Tushar Nagarajan, Nicolas Ballas, Mido Assran, Mojtaba Komeili, Mohit Bansal, Koustuv Sinha
ICLR 2025
[paper]
User-in-the-loop Evaluation of Multimodal LLMs for Activity Assistance
Mrinal Verghese, Brian Chen, Hamid Eghbalzadeh, Tushar Nagarajan, Ruta Desai
WACV 2025 (Oral)
[paper]
AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model
Seungwhan Moon*, Andrea Madotto*, Zhaojiang Lin*, Tushar Nagarajan*, Matt Smith, Shashank Jain, Chun-Fu Yeh, Prakash Murugesan, Peyman Heidari, Yue Liu, Kavya Srinet, Babak Damavandi, Anuj Kumar
EMNLP 2024 (Industry Track) (* equal contribution)
[paper]
AMEGO: Active Memory from long EGOcentric videos
Gabriele Goletto, Tushar Nagarajan, Giuseppe Averta, Dima Damen
ECCV 2024
[paper] [project]
Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos
Md Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang, Fu-Jen Chu, Kris Kitani, Gedas Bertasius, Xitong Yang
ECCV 2024 (Oral)
[paper] [project]
Step Differences in Instructional Video
Tushar Nagarajan, Lorenzo Torresani
CVPR 2024
[paper] [data/code]
Detours for Navigating Instructional Videos
Kumar Ashutosh, Zihui Xue, Tushar Nagarajan, Kristen Grauman
CVPR 2024 (Highlight)
[paper] [project]
Video ReCap: Recursive Captioning of Hour-Long Videos
Md Mohaiminul Islam, Ngan Ho, Xitong Yang, Tushar Nagarajan, Lorenzo Torresani, Gedas Bertasius
CVPR 2024
[paper] [code]
Ego-Exo4D: Understanding Skilled Human Activity from First-and Third-Person Perspectives
Kristen Grauman, Andrew Westbury, Lorenzo Torresani, Kris Kitani, Jitendra Malik, Tushar Nagarajan*, et al.
CVPR 2024 (Oral)
[paper] [project]
HT-Step: Aligning Instructional Articles with How-To Videos
Triantafyllos Afouras, Effrosyni Mavroudi, Tushar Nagarajan, Huiyu Wang, Lorenzo Torresani
NeurIPS 2023
[paper] [data/code]
Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities
Yale Song, Eugene Byrne, Tushar Nagarajan, Huiyu Wang, Miguel Martin, Lorenzo Torresani
NeurIPS 2023
[paper] [code]
EgoDistill: Egocentric Head Motion Distillation for Efficient Video Understanding
Shuhan Tan, Tushar Nagarajan, Kristen Grauman
NeurIPS 2023
[paper] [project]
EgoEnv: Human-centric environment representations from egocentric video
Tushar Nagarajan, Santhosh K. Ramakrishnan, Ruta Desai, James Hillis, Kristen Grauman
NeurIPS 2023 (Oral)
[paper] [project] [code]
Ego4D: Around the World in 3,000 Hours of Egocentric Video
Kristen Grauman, Andrew Westbury, Tushar Nagarajan*, et al.
CVPR 2022 (Oral); TPAMI 2023 (Invited article: Best Papers of CVPR)
[paper] [project]
Environment Predictive Coding for Visual Navigation
Santhosh K. Ramakrishnan, Tushar Nagarajan, Ziad Al-Halah, Kristen Grauman
ICLR 2022
[paper] [project] [code]
Shaping embodied agent behavior with activity-context priors from egocentric video
Tushar Nagarajan, Kristen Grauman
NeurIPS 2021 (Spotlight)
[paper] [project] [talk]
Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos
Yanghao Li, Tushar Nagarajan, Bo Xiong, Kristen Grauman
CVPR 2021
[paper] [code]
Differentiable Causal Discovery Under Unmeasured Confounding
Rohit Bhattacharya, Tushar Nagarajan, Daniel Malinsky, Ilya Shpitser
AISTATS 2021
[paper] [code]
Learning Affordance Landscapes for Interaction Exploration in 3D Environments
Tushar Nagarajan, Kristen Grauman
NeurIPS 2020 (Spotlight)
[paper] [project] [talk] [code]
Ego-Topo: Environment Affordances from Egocentric Video
Tushar Nagarajan, Yanghao Li, Christoph Feichtenhofer, Kristen Grauman
CVPR 2020 (Oral)
[paper] [project] [talk] [code]
Grounded Human-Object Interaction Hotspots from Video
Tushar Nagarajan, Christoph Feichtenhofer, Kristen Grauman
ICCV 2019
[paper] [project] [code]
Attributes as Operators: Factorizing Unseen Attribute-Object Compositions
Tushar Nagarajan, Kristen Grauman
ECCV 2018
[paper] [code]
BlockDrop: Dynamic Inference Paths in Residual Networks
Zuxuan Wu*, Tushar Nagarajan*, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
CVPR 2018 (Spotlight) (* equal contribution)
[paper] [code] [talk]
CANDiS: Coupled & Attention-Driven Neural Distant Supervision
Tushar Nagarajan, Sharmistha Jat, Partha Talukdar
ACL 2017 (Workshop)
[paper]
Computational antimicrobial peptide design and evaluation against multidrug-resistant clinical isolates of bacteria
Deepesh Nagarajan, Tushar Nagarajan, Natasha Roy, Omkar Kulkarni, Sathyabaarathi Ravichandran, Madhulika Mishra, Dipshikha Chakravortty, Nagasuma Chandra
JBC 2018
[paper] [code]