Sudharshan Suresh

I'm a research scientist at Boston Dynamics, where I work on machine learning for the Atlas humanoid robot.

I earned my Ph.D. in Robotics from CMU, advised by Michael Kaess. I was also a part-time researcher at FAIR (Meta), where I worked on the manipulation and tactile sensing team. My thesis enabled robots to learn from interaction using vision and touch.

CV  /  Scholar  /  Github  /  LinkedIn  /  Twitter  /  Short bio

We're hiring, reach out!


Updates

[Nov '24]   NeuralFeels is published in Science Robotics and featured on the cover of the November issue (spotlight article).
[Oct '24]   My work on Atlas was featured in Boston Dynamics' autonomous demo (IEEE, TechCrunch, Verge).
[Mar '24]   I've moved to Greater Boston to work with the Atlas team at Boston Dynamics (hello).
[Feb '24]   I've defended my Ph.D.! Here are my talk and thesis.
[Dec '23]   The pre-print for NeuralFeels is out, read it here.
[Aug '23]   Our work RotateIt, led by Haozhi, was accepted to CoRL 2023.
 


[Apr '23]   Spending the summer as a research scientist intern at FAIR Menlo Park, working on visuo-tactile manipulation!
[Dec '22]   MidasTouch was showcased at CoRL 2022 with a live demo.
[Oct '22]   Successfully passed my Ph.D. thesis proposal!
[Sep '22]   MidasTouch was accepted to CoRL 2022 as an oral.
[Aug '22]   We've extended iSDF for neural mapping with the Franka robot; code here.
[May '22]   Organized the Debates on the Future of Robotics Research workshop at ICRA '22.
[Apr '22]   Spending the summer at FAIR Pittsburgh, working on pose tracking from touch.
[Jan '22]   ShapeMap 3-D was accepted to ICRA 2022, with an open-source implementation.
[Aug '21]   Presented our work on perception for planar pushing at the Tartan SLAM series; video here.
[May '21]   Tactile SLAM was a finalist for the ICRA 2021 best paper award in service robotics!
 

 

Research

NeuralFeels with neural fields: Visuo-tactile perception for in-hand manipulation
 
Sudharshan Suresh, Haozhi Qi, Tingfan Wu, Taosha Fan, Luis Pineda, Mike Lambeta, Jitendra Malik, Mrinal Kalakrishnan, Roberto Calandra, Michael Kaess, Joe Ortiz, and Mustafa Mukadam
 
Science Robotics, Nov 2024 (cover)
 
 
paper / website / code / data and models / twitter / presentation
 
Neural perception with vision and touch yields robust tracking
and reconstruction for in-hand manipulation.

General In-Hand Object Rotation with Vision and Touch
 
Haozhi Qi, Brent Yi, Sudharshan Suresh, Mike Lambeta, Yi Ma, Roberto Calandra, and Jitendra Malik
 
Proc. Conf. on Robot Learning, CoRL, Nov 2023
 
 
paper / website / presentation
 
A visuotactile transformer gives us general dexterity
for multi-axis object rotation in the wild.

MidasTouch: Monte-Carlo inference over distributions across sliding touch
 
[Oral: 6% acceptance rate]
 
Sudharshan Suresh, Zilin Si, Stuart Anderson, Michael Kaess, and Mustafa Mukadam
 
Proc. Conf. on Robot Learning, CoRL, Dec 2022
 
paper / website / code / presentation
 
Where's Waldo? but for robot touch: tracking a robot finger
on an object from geometry captured by touch.

ShapeMap 3-D: Efficient shape mapping through dense touch and vision
 
Sudharshan Suresh, Zilin Si, Joshua Mangelson, Wenzhen Yuan, and Michael Kaess
 
IEEE Intl. Conf. on Robotics and Automation, ICRA, May 2022
 
paper / website / code / presentation
 
Online reconstruction of 3D objects from dense touch
and vision via Gaussian processes.

Tactile SLAM: Real-time inference of shape and pose from planar pushing
 
[ICRA best paper award in service robotics finalist]
 
Sudharshan Suresh, Maria Bauza, Peter Yu, Joshua Mangelson, Alberto Rodriguez, and Michael Kaess
 
IEEE Intl. Conf. on Robotics and Automation, ICRA, May 2021
 
paper / website / presentation
 
Full SLAM from force/torque sensing for planar pushing:
combining a factor graph with an implicit surface.

Active SLAM using 3D submap saliency for underwater volumetric exploration
 
Sudharshan Suresh, Paloma Sodhi, Joshua Mangelson, David Wettergreen, and Michael Kaess
 
IEEE Intl. Conf. on Robotics and Automation, ICRA, May 2020
 
paper / presentation
 
Balancing volumetric exploration and pose uncertainty
in 3D underwater SLAM via SONAR submap saliency.

ARAS: Ambiguity-aware robust active SLAM using multi-hypothesis estimates
 
Ming Hsiao, Joshua Mangelson, Sudharshan Suresh, Christian Debrunner, and Michael Kaess
 
IEEE Intl. Conf. on Intelligent Robots and Systems, IROS, Oct 2020
 
paper
 
Active SLAM with multi-hypothesis state estimates
for robust indoor mapping with handheld sensors.
 
Through-water stereo SLAM with refraction correction for AUV localization
 
Sudharshan Suresh, Eric Westman, and Michael Kaess
 
IEEE Robotics and Automation Letters (RA-L), presented at ICRA 2019, Jan 2019
 
paper / presentation
 
Dealing with refraction in underwater visual SLAM,
inspired by multimedia photogrammetry.
 
Localized imaging and mapping for underwater fuel storage basins
 
Jerry Hsiung, Andrew Tallaksen, Lawrence Papincak, Sudharshan Suresh, Heather Jones, Red Whittaker, and Michael Kaess
 
Proceedings of the Symposium on Waste Management, Phoenix, Arizona, Mar 2018
 
paper / slides / video
 
We build an underwater platform comprising stereo cameras,
an IMU, standard and structured lighting, and depth sensing.
 
Camera-Only Kinematics for Small Lunar Rovers
 
Sudharshan Suresh, Eugene Fang, and Red Whittaker
 
Robotics Institute Summer Scholars Working Paper Journal, Nov 2016
Annual Meeting of the Lunar Exploration Analysis Group, Nov 2016
 
paper / video / poster
 
Tracking a lunar rover's kinematic state through self-perception
with a downward-facing fisheye lens.

Object category understanding via eye fixations on freehand sketches
 
Ravi Kiran Sarvadevabhatla, Sudharshan Suresh, and R. Venkatesh Babu
 
IEEE Transactions on Image Processing (TIP), May 2017
 
paper / website / dataset
 
Understanding object categories in free-hand sketches
through human gaze fixations and visual saliency.
 


Last updated: Dec 2024

Imitation is the highest form of flattery