- Cassie had an 80 per cent success rate climbing stairs at Oregon State University
- The bipedal robot uses body awareness, or 'proprioception,' to navigate steps
- That can be helpful for delivery robots under poor lighting conditions
- Cassie has previously walked through fire and ridden a Segway scooter
Engineers in the US have devised a robot that can easily climb staircases in the dark.
'Cassie' ascended the steps at Oregon State University with an 80 per cent success rate, all without cameras or other external sensors.
The bipedal robot was trained to use 'proprioception,' or body awareness, to navigate uneven surfaces.
Researchers say that's important if fog, dim lighting, or other factors limit a robot's visual acumen.
Cassie was developed by the Dynamic Robotics Lab at Oregon State University in 2017.
It has no 'head,' but its hips have three degrees of freedom, 'allowing it to move its legs forward and backward, side-to-side, and also rotate them at the same time,' according to the Institute of Electrical and Electronics Engineers.
In addition, Cassie's powered ankles allow it to stand in place without constantly having to move its feet and shift its weight.
It's classified as a 'dynamic walker,' with a smoother, more human-like gait than your typical thudding automaton, making it more adept at traversing complex terrain (well, complex from a robot's point of view).
Cassie has already withstood other extreme gauntlets, walking through fire and successfully riding a Segway.
Researchers at Oregon State were interested in training Cassie to climb stairs 'blind,' that is, with no computer vision or other external sensing.
Robots have long been able to conquer staircases using cameras and computer vision but certain conditions, like dim lighting, aren't always ideal for visual input.
They wanted Cassie to go up and down the steps using only its 'proprioception,' or body awareness—the same way you or I might creep down to the basement at night.
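To make the idea concrete: 'blind' here means the controller's inputs are limited to body-internal signals. Below is a minimal sketch of what such an observation vector might contain; the attribute names and dimensions are illustrative assumptions, not Cassie's actual interface. The point is what's absent: no camera, lidar, or depth data anywhere.

```python
import numpy as np
from types import SimpleNamespace

def proprioceptive_observation(robot):
    """Build a 'blind' observation from body-internal signals only.

    Every attribute name here is a hypothetical placeholder for
    whatever a real robot interface exposes; no visual input
    appears anywhere in the vector.
    """
    return np.concatenate([
        robot.joint_positions,   # motor angles (rad)
        robot.joint_velocities,  # motor speeds (rad/s)
        robot.imu_orientation,   # pelvis orientation quaternion
        robot.imu_angular_vel,   # pelvis angular velocity (rad/s)
    ])

# Dummy values standing in for a real robot's sensor readout:
robot = SimpleNamespace(
    joint_positions=np.zeros(10),
    joint_velocities=np.zeros(10),
    imu_orientation=np.array([1.0, 0.0, 0.0, 0.0]),
    imu_angular_vel=np.zeros(3),
)
print(proprioceptive_observation(robot).shape)  # (27,)
```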
Before Cassie could be put to the test on an actual flight of stairs, though, engineers first trained it virtually using a technique called 'sim-to-real reinforcement learning,' the researchers explained in a paper posted on the open-access platform arXiv.
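In broad strokes, sim-to-real reinforcement learning means training a control policy over thousands of randomized trials inside a physics simulator, then transferring the learned behaviour to hardware. The toy sketch below shows the shape of that loop; the simulator, policy, and numbers are all stand-in assumptions, not the paper's actual setup.

```python
import random

class StairSim:
    """Toy stand-in for a physics simulator of a staircase; real
    training would use a full rigid-body simulator."""
    def __init__(self, stair_height):
        self.stair_height = stair_height
        self.steps_left = 10

    def reset(self):
        self.steps_left = 10
        return self.stair_height  # stands in for proprioceptive feedback

    def step(self, action):
        # A step 'succeeds' if the commanded foot lift clears the stair.
        self.steps_left -= 1
        reward = 1.0 if action > self.stair_height else -1.0
        return self.stair_height, reward, self.steps_left == 0

class Policy:
    """Toy policy: one scalar foot-lift height nudged by reward.
    A real controller would be a neural network."""
    def __init__(self):
        self.lift = 0.05  # metres

    def act(self, obs):
        return self.lift

    def update(self, reward):
        if reward < 0:
            self.lift += 0.002    # tripped: lift the foot higher next time
        else:
            self.lift -= 0.0001   # cleared it: relax slightly for efficiency

# Domain randomization: every episode gets a different stair height,
# so the policy cannot memorize one exact world. Real setups also
# randomize friction, motor strength, sensor noise, and more.
policy = Policy()
for _ in range(5000):
    env = StairSim(stair_height=random.uniform(0.05, 0.20))
    obs, done = env.reset(), False
    while not done:
        obs, reward, done = env.step(policy.act(obs))
        policy.update(reward)

# After training entirely in simulation, the learned policy is
# transferred to the physical robot: that is the 'sim-to-real' step.
print(f"learned foot lift: {policy.lift:.3f} m")
```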