Official Team Partners: Halodi, Haption, ETH Zürich – RSL, Sensiks, TNO, University of Twente
General
The ANA Avatar XPRIZE is a four-year competition to develop a robotic system that transports a human's presence to a remote location in real time, leading to a more connected world. In this competition, the i-Botics team works to create synergy in multimodal telepresence, transporting the operator's social and functional self to any fit-for-purpose avatar through a compelling combination of state-of-the-art social, visual, haptic, audio and olfactory technologies.
Our vision is that distance should not be a barrier to experiencing social connectedness or to applying one's skills and knowledge to make this world a better and safer place. Our mission is to develop a system that enables the user to feel present at, and interact with, a remote environment and the people in it as if physically there. We are determined to create societal impact (e.g., in challenges related to aging, health care, safety and security) and business impact (i.e., strengthening the business propositions of our partners) during and after the ANA Avatar XPRIZE competition.
To achieve this mission, we strive for full avatar ownership, meaning that the operator can slip into the avatar's skin as if it were their own body.
The Avatar systems
We believe that impact is best made with fit-for-purpose avatars, so we use two different robotic systems. The first is EVE, a social avatar designed to optimize interaction with humans while remaining capable of functional tasks. EVE is a mobile humanoid robot with 23 degrees of freedom, a payload of 8 kg per arm, and force and impedance control in all joints.
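To give a feel for what joint-level impedance control means in practice, the sketch below implements a textbook spring-damper control law in Python. All gains and joint values are invented for the example; this is a minimal illustration of the general technique, not Halodi's actual EVE controller.

    import numpy as np

    # Joint-space impedance control: each joint is pulled toward a desired
    # posture like a spring-damper, so the arm yields compliantly when a
    # human pushes against it. Gains below are assumed for illustration.
    K = np.diag([50.0, 50.0, 30.0])   # joint stiffness [Nm/rad]
    D = np.diag([5.0, 5.0, 3.0])      # joint damping [Nm*s/rad]

    def impedance_torque(q, dq, q_des, dq_des=None):
        """Spring-damper torque driving the joints toward q_des."""
        if dq_des is None:
            dq_des = np.zeros_like(dq)
        return K @ (q_des - q) + D @ (dq_des - dq)

    # Example: a 3-joint arm slightly away from its target posture.
    q = np.array([0.10, -0.20, 0.05])      # measured joint angles [rad]
    dq = np.array([0.00, 0.05, 0.00])      # measured velocities [rad/s]
    q_des = np.zeros(3)                    # desired joint angles [rad]
    print(impedance_torque(q, dq, q_des))  # commanded joint torques [Nm]

Unlike stiff position control, a low-stiffness law like this lets the arm absorb unexpected contact, which is what makes it suitable for interaction with humans.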
Alongside EVE, we use ANYmal, a quadrupedal robot with dynamic motion capabilities that is optimized for rough-terrain scenarios. It carries a robotic arm for manipulating its environment and a variety of sensors for effective remote operation.
Our unique universal control pod
The i-Botics universal control pod transports the operator's actions and senses to any fit-for-purpose avatar. It enables the operator to control the avatar's movement and its output devices (voice, posture, arm and hand movements, and facial expression). It blocks perceptual input from the operator's local environment and instead provides rich multisensory cues from the robotic setup (including force feedback in the arms and fingers, and auditory and visual feedback) and from the remote environment (temperature, airflow and smell).
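As a rough illustration of the bilateral loop behind the pod's force feedback, the sketch below forwards the operator's hand pose to the avatar each cycle and renders the measured contact force back on the haptic interface. The class interfaces, gain and limit are hypothetical placeholders for this example, not the actual i-Botics or Haption API.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Pose:
        x: float
        y: float
        z: float  # operator hand position [m]

    class AvatarStub:
        """Stand-in for the remote robot (hypothetical interface)."""
        def command_hand_pose(self, pose: Pose) -> None:
            self._pose = pose
        def measured_wrist_force(self) -> Tuple[float, float, float]:
            return (1.0, 0.0, -4.0)  # pretend contact force [N]

    class HapticStub:
        """Stand-in for the control pod's haptic arm (hypothetical)."""
        def render_force(self, f: Tuple[float, float, float]) -> None:
            print(f"rendering force {f} N on the operator's arm")

    FORCE_SCALE = 0.5    # attenuate remote forces (assumed gain)
    FORCE_LIMIT = 20.0   # safety clamp on rendered force [N] (assumed)

    def teleop_step(pose, avatar, haptic):
        """One cycle: motion to the avatar, scaled force to the operator."""
        avatar.command_hand_pose(pose)
        f = avatar.measured_wrist_force()
        f_out = tuple(max(-FORCE_LIMIT, min(FORCE_LIMIT, FORCE_SCALE * fi))
                      for fi in f)
        haptic.render_force(f_out)

    teleop_step(Pose(0.3, 0.1, 0.9), AvatarStub(), HapticStub())

Scaling and clamping the fed-back force is a common safety measure in such loops: it keeps the operator comfortable and the coupled system stable even when the avatar makes hard contact.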
The Team
Our team combines universities, an applied research organization and cutting-edge high-tech industry. The University of Twente and ETH Zürich provide the state-of-the-art knowledge base, TNO integrates fundamental knowledge and industry interests into working demonstrators, and our industry partners Sensiks, Halodi and Haption bring the technology to high readiness levels. This combination of fundamental and applied knowledge, market know-how and integrative skills is what makes the team strong and able to realize our vision.
The team is led by TNO. Around 40 team members contribute actively to i-Botics Avatar. They have different backgrounds and nationalities, but together they cover the breadth of expertise the competition demands.
The vision
Beyond the competition we have two goals as a consortium: to create an ecosystem that continues the development, implementation and acceptance of avatar technology, and to develop and launch new products and services. ETH Zürich, TNO and the University of Twente aim to extend their current ecosystem of knowledge and industrial partners with links to the European Horizon 2020 Digital Innovation Hubs on robotics for health care, inspection and maintenance. Our three industrial partners (Halodi, Haption, Sensiks) are eager to improve their existing products and launch new services and products, possibly through the creation of new spinoffs.
The project falls within the i-Botics application area of remote operation and avatar-mediated interaction.