Roberto Calandra is a Postdoctoral Scholar at UC Berkeley in the Berkeley Artificial Intelligence Research Laboratory (BAIR) working with Sergey Levine.
Previously, he received a Ph.D. from TU Darmstadt (Germany) under the supervision of Jan Peters and Marc Deisenroth, an M.Sc. in Machine Learning and Data Mining from Aalto University (Finland), and a B.Sc. in Computer Science from the Università degli studi di Palermo (Italy).
General research interests
My scientific interests lie at the intersection of Machine Learning and Robotics, in what is known as Robot Learning.
Some of the research topics I am currently pursuing include: Deep Reinforcement Learning, Bayesian Optimization, Dynamics Modeling, and Tactile Sensing.
Invited talks

- 11 Jan 2018: Università di Palermo
- 22 Jan 2018: EPFL
- 23 Jan 2018: ETH
- 24 Jan 2018: Max Planck Institute (Tuebingen)
- 25 Jan 2018: University of Freiburg
- 26 Jan 2018: TU Darmstadt
- 21 Feb 2018: Stanford
News

- 07 September 2018: Our paper Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models [Website] has been accepted at NIPS 2018 as a spotlight presentation (~4% acceptance rate). Congratulations to Kurtland on his first paper!
- 01 August 2018: Three journal papers published. Congratulations to all the authors! More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch with Andrew, Dinesh, Wenzhen, Justin, Jitendra, Ted, and Sergey; Control of Musculoskeletal Systems using Learned Dynamics Models with Dieter, Bernhard, and Jan; and Bayesian Multi-Objective Optimisation with Mixed Analytical and Black-Box Functions: Application to Tissue Engineering with Simon, Mohammad, Liesbet, Marc, and Ruth.
- 30 May 2018: New pre-print available on arXiv about model-based RL: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models [Website]
- 28 May 2018: New pre-print available on arXiv about learning to grasp with tactile sensing: More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch [Website]
- 06 Apr 2018: I am co-organizing the RSS Workshop on Multi-Modal Perception and Control together with Filipe Veiga, Aude Billard and Jan Peters. The submission deadline will be in May 2018!
- 22 Feb 2018: I am co-organizing the FAIM Workshop on Prediction and Generative Modeling in Reinforcement Learning together with Matteo Pirotta, Sergey Levine, Martin Riedmiller and Alessandro Lazaric. The submission deadline is 01 June 2018!
- 26 Jan 2018: Learning Flexible and Reusable Locomotion Primitives for a Microrobot accepted at RAL+ICRA. Congratulations to Brian and Grant!
- 20 Jan 2018: I participated in the Dagstuhl seminar on Personalized Multiobjective Optimization.
- 03 Jan 2018: The talks from the RSS17 Workshop on Tactile Sensing for Manipulation: Hardware, Modeling, and Learning are now available online.
- 30 Nov 2017: Three papers accepted to the NIPS workshop on Acting and Interacting in the Real World which will take place on Dec. 8th at NIPS: More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch, Learning Flexible and Reusable Locomotion Primitives for a Microrobot, and On the Importance of Uncertainty for Control with Deep Dynamics Models
- 20 Nov 2017: Invited talk today at Facebook on “Model-based Policy Search and Beyond”
- 16 Oct 2017: The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? is now available on arXiv.
- 04 Oct 2017: Invited talk today at the University of Southern California on “Learning to Grasp from Vision and Touch”.
- Sep 2017: I am organizing the NIPS Workshop on Meta-learning (MetaLearn) together with Frank Hutter, Hugo Larochelle, and Sergey Levine. The submission deadline is 01 November 2017!
- Sep 2017: Two papers accepted at CoRL 2017: The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?, and MBMF: Model-Based Priors for Model-Free Reinforcement Learning.
- Aug 2017: I am editing the JMLR Special issue on Bayesian optimization together with Roman Garnett, Javier González, Frank Hutter, and Bobak Shahriari. The deadline for submissions is 31 March 2018!
- Jul 2017: Our paper Goal-Driven Dynamics Learning via Bayesian Optimization has been accepted at CDC.
- Apr 2017: Today I gave a talk about “Goal-Driven Dynamics Learning for Model-Based RL” at the DALI 2017 Workshop on Data-Efficient Reinforcement Learning. [Slides][Video]