Research Areas
Appearance-based loop closure in SLAM
One of the most significant requirements for long-term visual SLAM (Simultaneous Localization and Mapping) is robust place recognition. When, after an exploratory period, areas that have not been observed for a long time are robustly recognized upon revisit, loop closures provide the correct data association to obtain consistent maps. The same methods used for loop detection can be used to relocalize the robot after tracking is lost, caused, for example, by sudden motions, severe occlusions or motion blur. By building a database from the images collected online by the robot, the most similar image can be retrieved when a new one is acquired. If the two are similar enough, a loop closure is detected. By using binary descriptors and efficient data structures, we can achieve real-time operation on large sets of thousands of images.
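As a rough sketch of this idea (not the actual implementation behind the publications below), the following Python fragment matches the ORB binary descriptors of a new frame against a database of past frames using Hamming distance. A real-time system replaces the linear scan with a vocabulary tree and inverted index; all names and thresholds here are illustrative.

import cv2

orb = cv2.ORB_create(nfeatures=500)        # binary (256-bit) descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # Hamming distance for binary codes
database = []                              # per-frame descriptor sets

def similarity(query, candidate, ratio=0.75):
    """Fraction of query descriptors with a distinctive match (Lowe ratio test)."""
    if query is None or candidate is None or len(candidate) < 2:
        return 0.0
    matches = matcher.knnMatch(query, candidate, k=2)
    good = [m for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good) / max(len(query), 1)

def process_frame(gray, min_score=0.3, min_gap=30):
    """Query the database for a loop-closure candidate, then store the new frame."""
    _, desc = orb.detectAndCompute(gray, None)
    best_idx, best_score = None, 0.0
    # Skip the most recent frames, which are trivially similar to the query.
    for i, past in enumerate(database[:-min_gap]):
        s = similarity(desc, past)
        if s > best_score:
            best_idx, best_score = i, s
    database.append(desc)
    return best_idx if best_score >= min_score else None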
Related publications: MorrisonCS16, MohanICRA15, MorrisonDARS14, MorrisonRSS14, GalvezTRO12, CadenaGalvezTRO12, GalvezIROS11, MajdikIROS11, PiniesJFR10, CadenaGalvezIROS10
Object detection and recognition for SLAM
A robot that moves and operates in an environment needs to acquire information about it in real time. This information can be obtained from visual SLAM. The computed maps provide rich geometric information, useful for reliable camera localization, but poor for describing the observed scene. We can augment these maps with objects to allow robots to interact with the scene. To include objects in SLAM maps, they must be recognized in the images acquired by the robot and their rigid-body 3D transformations computed. By accumulating observations of objects over time, we can compute more robust object poses and keep the objects continuously localized in the scene.
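A minimal sketch of one standard way to compute such a transformation, assuming a hypothetical object model that stores binary descriptors together with their 3D coordinates in the object frame (feature matching followed by PnP with RANSAC; all names and thresholds are illustrative):

import cv2
import numpy as np

def locate_object(model_desc, model_pts3d, frame_gray, K, dist=None):
    """model_desc: Nx32 ORB descriptors; model_pts3d: Nx3 object-frame points;
    K: 3x3 camera intrinsics. Returns (R, t) of the object in the camera frame."""
    orb = cv2.ORB_create(1000)
    kps, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(model_desc, desc)
    if len(matches) < 6:
        return None
    obj = np.float32([model_pts3d[m.queryIdx] for m in matches])
    img = np.float32([kps[m.trainIdx].pt for m in matches])
    # Robustly estimate the rigid-body transform from 3D-2D correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, dist)
    if not ok or inliers is None or len(inliers) < 6:
        return None
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation from axis-angle
    return R, tvec

Each successful detection yields one pose observation; filtering these observations over time gives the more robust object poses mentioned above.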
Related publications: GalvezRAS16, RiazueloTASE15, RoboEarthRAM11, CiveraIROS11, SjoGalvezCIT09, GalvezICRA08
Projects
Development of Large-Scale Dense Scene Capture and Tracking Instrument (2014-, National Science Foundation - Major Research Instrumentation Program)
This project, which develops a large-scale, dense 3D measurement instrument for capturing dynamic environments, integrates range-and-color sensing devices, such as depth cameras (RGB-D sensors), by designing and developing key technical methodologies to fuse the data received from remote networked sensors. The instrument will collectively cover a large space at a sampling resolution of at least 1 cm, with submillimeter resolution in localized regions. These data are then fused into a single underlying representation. The work involves developing a system that possesses both large-scale and real-time dense capture capabilities.
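As an illustration of the fusion step (a simplified stand-in, not the project's actual pipeline), the sketch below transforms depth images from several calibrated sensors into a common world frame and merges them at the 1 cm target resolution; all function names are assumptions.

import numpy as np

def backproject(depth, K):
    """Depth image (meters) -> Nx3 points in the sensor frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.column_stack([x, y, z])[valid]

def fuse(frames, voxel=0.01):
    """frames: list of (depth, K, T_world_sensor 4x4). Returns a fused Nx3 cloud."""
    clouds = []
    for depth, K, T in frames:
        pts = backproject(depth, K)
        pts_h = np.column_stack([pts, np.ones(len(pts))])
        clouds.append((T @ pts_h.T).T[:, :3])      # into the world frame
    merged = np.vstack(clouds)
    # Deduplicate at the target resolution: keep one point per 1 cm voxel.
    keys = np.floor(merged / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[idx]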
RoboEarth (2010-2014, Seventh Framework Programme)

At its core, RoboEarth is a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment.
Bringing a new meaning to the phrase "experience is the best teacher", the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behavior, and ultimately, for more subtle and sophisticated human-machine interaction.
Links: [BBC News Technology: The robots using cloud technology to learn new skills]
[The Birth of RoboEarth]
[RoboEarth - A World Wide Web for Robots]
[RoboEarth Final Demonstrator]
Rawseeds (2008-2009, Sixth Framework Programme)

The aim of the RAWSEEDS project is to stimulate and support progress in autonomous robotics by providing a comprehensive, high-quality benchmarking toolkit. RAWSEEDS will also perform all the actions needed for a rapid and thorough dissemination of its results through the academic and industrial domains (e.g., the setup of a website, documentation and support actions, workshops, competitions, and publications).
SLAM6DOF: Portable Simultaneous Localization and Mapping Systems for Large and Complex Environments (2008-2009, CICYT)
The fundamental goal of the SLAM6DOF project is the development of prototypes of portable systems for the acquisition of sensory information that can be used to compute the trajectory of a sensor moving freely in 3D with six degrees of freedom, and to build models of such environments. These prototypes include sensors such as one or more cameras, a panoramic camera, or a laser scanner. The systems also incorporate proprioceptive devices, such as gyroscopes or accelerometers, that provide information about their displacement in the environment. Prototypes may be carried by people walking, bicycling, paragliding, or swimming. We develop new methods for performing SLAM and computing egomotion when the sensor moves with six degrees of freedom in large and complex environments. Applications include people tracking in rescue operations; panoramic, aerial and underwater mosaicing; and virtual and augmented reality.
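For illustration, a very simplified sketch of how such proprioceptive readings can be integrated into a 6-DOF pose estimate (pure dead reckoning, which drifts quickly and in practice only complements vision- or laser-based SLAM; all symbols are illustrative):

import numpy as np

def skew(w):
    """Cross-product matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def integrate(R, p, v, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One Euler step: body-frame gyro (rad/s) and accelerometer (m/s^2)
    readings update orientation R, position p, and velocity v in the world frame."""
    R = R @ (np.eye(3) + skew(gyro) * dt)   # first-order rotation update
    a_world = R @ accel + g                 # rotate to world frame, remove gravity
    p = p + v * dt + 0.5 * a_world * dt**2
    v = v + a_world * dt
    return R, p, v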
Cognitive Systems for Cognitive Assistants (2007-2008, Sixth Framework Programme)

The main goal of the Cognitive Systems for Cognitive Assistants project was to advance the science of cognitive systems through a multi-disciplinary investigation of the requirements, design options and trade-offs for human-like, autonomous, integrated, physical (e.g., robot) systems, including requirements for architectures, forms of representation, perceptual mechanisms, learning, planning, reasoning and motivation, and action and communication.