“Can we build Baymax? Part VIII: Let’s talk about Safe, Commercially Viable Humanoids” will be held at Humanoids 2023, in Austin, Texas, on 12 Dec. 2023.

This video features ten lightning talks.

0:00 Video Title
0:05 Cornelia Bauer, Dominik Bauer
3:07 Joseph Byrnes
6:09 Grzegorz Ficht
9:11 Yunho Han
12:15 Ernesto Hernandez Hinojosa
15:17 Donghyeon Kim
18:03 Cornelius Klas
21:05 Niranjan Kumar
24:06 Daegyu Lim
27:08 Young-Woo Sim
30:10 Invited Speakers

For more information, please visit: https://baymax.org/

*List of Speakers:
– 1X Technologies – Bernt Øivind Børnich (CEO and Co-founder)
– Agility Robotics – Jonathan Hurst (Co-Founder and Chief Robot Officer)
– Apptronik – Nick Paine (CTO)
– IHMC – Robert Griffin (Research Scientist, Technical Advisor of Boardwalk Robotics)
– Figure AI – Jerry Pratt (CTO)
– Fourier Intelligence – Zen Koh (Global CEO)
– PAL Robotics – Francesco Ferro (CEO)
– Sanctuary AI – Jeremy Fishel (Principal Researcher)
– Toyota’s Frontier Research Center – Taro Takahashi (Project Manager, Technical Adviser of TRI)
– Unitree – Tony Yang (North America Sales Director)

*Organizers
– Christopher Atkeson (Carnegie Mellon University, USA)
– Katsu Yamane (Bosch Research, USA)
– Joohyung Kim (University of Illinois Urbana-Champaign, USA)
– Jinoh Lee (German Aerospace Center, Germany)
– Alexander Alspach (Toyota Research Institute, USA)

Hi, I'm Dominik, and today I'm going to talk about how we can design soft robot hands from demonstrations. Unfortunately, humanoid robot hardware isn't really available to buy off the shelf: you can't go out and buy a humanoid arm, hand, or leg. So what do we end up doing? We build our own custom robot hardware. The problem with that is there also isn't really an established design process for building humanoid robots yet, and this is especially true for dexterous and soft robot hands. In our experience, we often end up building a lot of prototypes to figure out what works and what doesn't. The picture here on the left shows just a few of the hands our lab has built over the years, and of course every prototype and every iteration can be very time-consuming and costly. So what if we could change this?

What if, instead, we start with a set of important tasks and then automatically generate a hand design from a set of demonstrations? How can we do that? We start with motion-capture trajectories of the task demonstrations and an all-spherical-joint hand model that has 18 joints and 54 degrees of freedom. We then adjust the link lengths and joint locations of the model through optimization so that it can match the trajectories. Now we have a hand model that can reproduce the task demonstrations, but it still has 54 degrees of freedom, which is not very practical. We can reduce that number without sacrificing capabilities by finding the dominant axis of motion for each joint, which allows us to reduce the degrees of freedom in each joint from three to one. We end up with 18 degrees of freedom, which is a little more practical.
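The talk doesn't say how the dominant axis is extracted; as a minimal sketch, assuming each joint's recorded angular velocities are collapsed to their principal direction with an SVD, it might look like this (the function and data here are illustrative, not the authors' code):

```python
import numpy as np

def dominant_axis(omega):
    """Dominant rotation axis of one spherical joint.

    omega: (T, 3) array of the joint's angular-velocity samples across
    the demonstrations. If the joint is well approximated by a single
    revolute axis, the samples lie along one line through the origin,
    and the first right singular vector recovers that direction.
    """
    _, _, vt = np.linalg.svd(omega, full_matrices=False)
    return vt[0]

# Toy demonstration: motion mostly about one axis, plus sensor noise.
rng = np.random.default_rng(0)
true_axis = np.array([0.05, 0.10, 1.0]) / np.linalg.norm([0.05, 0.10, 1.0])
omega = np.outer(np.sin(np.linspace(0, 6, 200)), true_axis)
omega += 0.02 * rng.standard_normal(omega.shape)
print(dominant_axis(omega))  # approximately +/- true_axis
```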

Based on the resulting hand model, we can then use a library of link templates to automatically generate a fully printable CAD model. We then print the hand design in a single print, and after a few hours and removing some support material, we have a functioning soft robot hand. To conclude, I have shown an end-to-end process that allows us to create soft hand designs from human demonstrations. Due to time, I had to leave out a lot of details, but I want to point out that we have extensive experience with evaluating and controlling these types of soft robot hands. These hands can be easily controlled using teleoperation or open-loop control, and they're surprisingly strong and very durable: they last thousands of actuations without breaking. Thanks for listening, and please come see me at the poster session if you want to learn more details. Thanks!

Hi, I'm Joey Byrnes, and I develop the low-level control and electronics of Tello, a dynamic biped created at UIUC's RoboDesign Lab. Tello is designed for research in dynamic telelocomotion: a human pilot takes steps in our human-machine interface, or HMI for short, and the dynamic data of the human is transmitted to the robot controller. Tello's controller tracks a scaled version of the human's divergent component of motion and sends force feedback to the human such that the robot and human remain dynamically synchronized. This allows us to leverage the human pilot's decisions and subconscious reactions to plan motions and stabilize Tello. Tello's controller is a single-board computer running a Linux kernel built with the PREEMPT_RT patch; this gives us the flexibility of Linux with the benefits of a real-time OS for deterministic program execution. Tello's control algorithm updates at 1 kHz and is spread across seven isolated CPUs.

Tello makes use of high-performance motor drivers and sensors with high update rates to keep up with its agile, dynamic motions. I've designed a variety of custom electronics for Tello, including load cell amplifiers for reading the ground reaction forces, encoders for measuring the joint positions and velocities, and a power-management PCB for voltage regulation, battery switching, safety features, and more. Tello's control software is written in C++ and, as mentioned, is spread across seven isolated CPUs. CPU 0 is used for running the essential kernel tasks that Linux itself requires, while CPU 1 runs Tello's main control thread; this includes the PD control for the legs, the balancing algorithm, and the telelocomotion framework, and finally sends commands to Tello's actuators. All data received by Tello's controller is handled asynchronously in separate threads.

CPU 2, for example, is used for receiving CAN messages from the motor controllers, joint encoders, and load cell amplifiers, and CPU 3 is used for receiving UDP messages and UART data from the IMU. One source of UDP data comes from our custom user interface for running our experiments, and another source comes from the human data captured by the HMI. CPU 4 is used for our state estimation: when data is available from our lab's motion capture system, this thread receives mocap data giving Tello's center-of-mass position in the world frame, and when no mocap data is available, we use Ross Hartley's contact-aided invariant EKF to estimate Tello's world pose from its kinematics and inertial measurements. CPU 5 is reserved for data logging; about 175 values are logged every millisecond, and the logged data can be played back later on CPU 6 as if it were coming from the HMI in real time, which allows us to tune gains and test new features without needing a pilot in the loop. Finally, CPU 7 is used for a prediction algorithm that monitors the human pilot's foot motion and predicts how long and how high their step will be, so that the robot can track a similar trajectory in real time.
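Tello's stack is C++, but the core pattern described here, a fixed-rate control loop pinned to an isolated core under PREEMPT_RT, can be sketched in a few lines. A purely hypothetical, Linux-only Python illustration (requires root privileges, and assumes CPU 1 was isolated at boot):

```python
import os
import time

# Pin this process to CPU 1 and request real-time FIFO scheduling, as one
# would for a deterministic 1 kHz loop on a PREEMPT_RT kernel.
os.sched_setaffinity(0, {1})
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

PERIOD = 0.001                            # 1 kHz control period
next_tick = time.monotonic()
for _ in range(5000):                     # 5 s of control for this demo
    # ... read sensors, run balancing/PD control, command actuators ...
    next_tick += PERIOD
    time.sleep(max(0.0, next_tick - time.monotonic()))
```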

Here is a block diagram of the software running in Tello's main control thread on CPU 1. The high-level controller computes the required ground reaction forces for balancing and the reference trajectories for Tello's feet during their stance and swing phases, while the low-level controller handles tracking those trajectories, converting all of the forces and torques down to the motor space, and passing all of the state feedback back to the high-level controller.

Hello, my name is Grzegorz Ficht. I'm with team NimbRo at the University of Bonn, and this is "Improving Humanoid Dynamics with Hardware and Software." First, let's take a look at the open-source NimbRo-OP2 robots that I've designed and built.

Although they are reasonably tall and lightweight for their size, they follow the usual design scheme with actuators placed at the joints. It's the research platform that our team NimbRo uses for RoboCup, where we have won the tournament five times back to back, as well as other awards. As you can see, the performance is quite robust, allowing the robot to autonomously score goals even in tough conditions. However, with the underpowered actuators and actuator-joint collocation, it is approaching its limits, and more dynamic movements are beyond its capabilities.

If we look at modern control approaches, a common point is to reduce the reflected inertia, which increases the control bandwidth and durability, but also allows simplifying the dynamics to where real-time control of highly dynamic motions becomes tractable. One way to improve the dynamics is to design limbs with low reflected inertia, like this 3D-printed leg prototype that I designed in 2020, which weighed only 2 kilograms; just two quasi-direct-drive actuators were sufficient for a planar leg. The foot position was constrained with a parallelogram and made compliant by using carbon-fiber plates acting as springs. Here you can see the leg jumping simply by using impedance control and active compliance. Even after adding an extra 8 kilograms of weight, the single leg was still capable of jumping, which demonstrated that a running humanoid could be produced from this design. Still, the limb dynamics are too significant to be ignored, so it's best to include them in the design of the controls.

Using a simple triangle approximation, I found a one-to-one mapping between a limb's kinematics and its mass positioning. I then combined the masses of each limb and the torso into a five-mass representation of a humanoid, where the relative movement of each mass shapes the overall inertia. This not only defines the system's CoM, but also its angular counterpart in the form of the orientation of the principal axes of inertia. Desired dynamical effects are then achieved by manipulating the CoM position and the inertia's size and orientation, and from these values my whole-body inverse kinematics approach produces the necessary joint trajectories.

I've combined this approach with a set of regulators into a framework that allows simultaneous linear and angular momentum regulation for disturbance rejection. Other benefits of modeling the limb dynamics include a refined centroidal state estimator, where the ZMP and CMP can be obtained without access to foot sensors, as well as a feedforward controller where the feedforward terms are applied on the centroidal level, which abstracts away the joint structure. The complete framework displayed impressive disturbance-rejection capabilities during this year's RoboCup, where our robot took first place in the push recovery challenge. Thank you for your attention; these are my references.

Hello everyone, my name is Yunho Han, and I am a researcher in the PhD course at Korea University in the Republic of Korea. I am interested in the field of bipedal robots and humanoid robots, and I'm currently studying locomotion planning and control for bipedal walking.

Recently I have been focusing on developing an optimization-based walking control framework; furthermore, I'm considering the concept of an optimized leg design for efficient leg movement. Since 2022, I have redesigned the humanoid LO3 to change from a joint-position-control based system to a joint-current-control based system, and I have conducted various tests to confirm torque-control based operation and performance. One of the experiments I conducted was maintaining balance on a changing slope. I then introduced whole-body control based on the body dynamics model, and I also made attempts to integrate MPC and whole-body control. However, the biggest challenge was a significant difference between simulation and reality.

While conducting research on torque-control based walking, for me the most significant factor in the difference between simulation and reality was the friction of the joints. LO3 uses harmonic-drive gears with a high reduction ratio of over 100:1; consequently, the high reduction ratio of the gears causes high friction in each joint, leading to a difference between the torque command and the actual joint torque output. This problem can degrade the torque accuracy and response speed of the joints. Most groups try to solve this problem by attaching torque sensors at each joint.

However, LO3 does not have sensors for sensing the joint torques, so I attempt to address this problem through joint friction compensation. I applied friction compensation based on a physical model, and I am currently looking for other methods to compensate for friction. If anyone has interest in or ideas on this issue, I would appreciate them. Thank you for listening.
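The physical friction model isn't specified in the talk; a common choice for such a feedforward is a Coulomb-plus-viscous term, sketched here with invented coefficients:

```python
import numpy as np

def friction_feedforward(qdot, tau_c=0.8, b=0.15, eps=0.05):
    """Hypothetical Coulomb + viscous friction feedforward for one joint.

    qdot:  joint velocity [rad/s]
    tau_c: Coulomb friction level [Nm] (identified per joint)
    b:     viscous coefficient [Nm*s/rad]
    eps:   velocity scale that smooths sign() near zero to avoid chatter
    """
    return tau_c * np.tanh(qdot / eps) + b * qdot

# The compensated command adds the friction estimate to the desired torque.
tau_desired = 12.0          # Nm, e.g. from a whole-body controller
qdot = 0.6                  # rad/s, measured joint velocity
tau_command = tau_desired + friction_feedforward(qdot)
print(tau_command)
```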

Hello, I'm Ernesto, and I'm a recent PhD graduate from the University of Illinois Chicago. I will be presenting a small portion of my research on model-adaptive control for real-time parameter tuning.

One approach to controlling bipedal robots like Digit, shown here, involves modeling the physical behavior of the robot's center of mass by fitting an analytical model to the center-of-mass kinematics. In the simplest case, a first-order regression function can be used to map the states of the robot at a specific instance of the gait cycle to the predicted state after an arbitrary time has elapsed; the linear inverted pendulum is one such model commonly used for this control. However, the physics of the robot may change in real-world scenarios, such as when the robot is carrying a load or walking on uneven terrain. The plots here depict the velocity (top) and position (bottom) of the robot's center of mass during walking using a model-based controller: the predicted position and velocity exhibit modeling errors that lead to a tracking error in the velocity.

It is therefore important for such analytical models to have adaptive parameters that can be tuned in real time to enhance control. One approach to fixing the tracking error is to add a feedback term to the tracking controller such that the tracking error converges to zero; however, that tackles the tracking problem without actually improving the modeling error. One type of model adaptation that can be utilized is the model integral feedback controller, in which an error-feedback term is added to the model such that the model errors in position and velocity slowly converge to zero; the speed of convergence is determined by the gain matrix Ki. The plot here shows the velocity and position data during a forward-walking simulation where the model used for control is incorrect: when the time reaches the dotted line, the model adaptation is turned on, and the modeling error as well as the tracking error converge to zero.

Another approach is to use the recursive least squares (RLS) algorithm to find the augmented parameter coefficients, using the error covariance matrix and gains as model feedback terms.
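For readers unfamiliar with RLS, here is a generic forgetting-factor update (a sketch, not the speaker's implementation; the regressor and data are invented):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step.
    theta: (n,) current parameter estimate
    P:     (n, n) parameter covariance
    phi:   (n,) regressor built from the robot state
    y:     scalar observed output (e.g. next-step CoM velocity)
    lam:   forgetting factor so old data decays
    """
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + k * (y - phi @ theta)      # correct with prediction error
    P = (P - np.outer(k, phi) @ P) / lam       # covariance update
    return theta, P

# Toy usage: identify y = 2*x1 - 0.5*x2 from noisy samples.
rng = np.random.default_rng(1)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(500):
    phi = rng.standard_normal(2)
    y = phi @ np.array([2.0, -0.5]) + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # approximately [2.0, -0.5]
```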

Here the plots again show that when the adaptation is turned on, the modeling errors and the tracking error approach zero, and the bottom plot shows the values of the augmented parameter coefficients. Here's a simulation where the height, step width, and damping of the robot are changed during forward walking. The top video shows a trial with no adaptation, where you can see the robot eventually loses its stability and falls; the bottom video shows a trial with the adaptation controller turned on, achieving greater stability and better tracking, and you can see how the coefficients adapt. Finally, here is a hardware trial with varying robot height, where the tracking error decreases as the adaptation runs.

Hi everyone, I'm Donghyeon Kim from Seoul National University, and I'm delighted to present torque-based reinforcement learning on bipedal robots at this workshop. Recently, reinforcement learning on legged robots has garnered significant attention, aiming to overcome the limitations of model-based approaches.

As illustrated in the figure below, most studies in this domain have focused on learning target poses, employing low-level PD controllers with specific gains. However, a drawback of this position-based method is its dependence on PD gains, which vary with tasks and robot hardware, so gain tuning becomes necessary for each task and robot configuration. Moreover, the position-based approach lacks compliance, posing challenges due to the existence of the reality gap: the reality gap leads to mismatched contact timing, and when unexpected contacts occur, the PD controller generates large torques, resulting in limited compliance.

Our approach, on the other hand, involves directly learning torque commands, offering several advantages. One notable benefit is the elimination of the need for gain tuning for specific robots and tasks. Additionally, the torque-based method exhibits enhanced compliance, making it more robust in bridging the reality gap. This video demonstrates the compliance of our torque-based method, showcasing its effectiveness in handling uncertainties. Lastly, contrary to conventional knowledge in model-based torque control, which adopts high control rates, our findings indicate that the control rate can be lowered to about 62 Hz.
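To make the contrast concrete, here is a hypothetical sketch of the two action interfaces (the gains, limits, and numbers are invented, not the paper's values):

```python
import numpy as np

def pd_torque(q_des, q, qdot, kp=60.0, kd=2.0):
    """Position-based RL: the policy outputs a target pose q_des, and a
    low-level PD loop converts it to torque. kp/kd must be re-tuned per
    task and per robot."""
    return kp * (q_des - q) - kd * qdot

def torque_action(policy_output, tau_limit=80.0):
    """Torque-based RL: the policy output *is* the joint torque; only a
    safety clamp is applied, so no gain tuning is needed."""
    return np.clip(policy_output, -tau_limit, tau_limit)

q, qdot = 0.1, 0.0
print(pd_torque(q_des=0.3, q=q, qdot=qdot))  # PD pathway
print(torque_action(14.2))                    # direct torque pathway
```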

As future work, we are aiming for RL-based real-time motion retargeting on physical robots, particularly focusing on locomotion. Thank you for your attention, and if you have any questions, please don't hesitate to reach out.

Welcome to my presentation: "Can We Build Baymax? On the Actuation and Kinematics for Humanoid Robots with Human-Like Motion." I'm Cornelius Klas, a research scientist and mechanical engineer for humanoid robot design at the High Performance Humanoid Technologies Lab at the Karlsruhe Institute of Technology.

At our institute, we have built several humanoid robots, actuation units, and hands in recent years. One of our latest robots is the ARMAR-DE humanoid robot, with an omnidirectional mobile base, two 8-DoF arms with an exoskeletal structure, and sensor-actuator-controller units in all arm joints; it also includes height adjustment and a humanoid head. I dream of soon having humanoid robots with human capabilities, and my research revolves around the question: why is it challenging to achieve human performance while electric motors can have a higher power density than human muscles? The three identified challenges are: the exact requirements of human motions in terms of torque, speed, and acceleration are not known; with actuators placed at the joints, there are large inertias across the kinematic chain; and the actuator torque-velocity profile and internal inertia do not fit the requirements.

The following three topics address these challenges: a robot design tool to derive the requirements from human motion; a novel joint mechanism and new kinematics; and a linear drive with torque-velocity profiles similar to those of human muscles. A key element in the design tool for humanoid robots is the derivation of normalized actuation requirements from human motion. The inputs are different humanoid robot kinematics and sets of human motions. Both the human motion and the robot kinematics are first normalized; then the motion is transferred to the robot, and with this motion the joint torques are calculated. The result is a set of normalized actuator requirements.
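The exact normalization scheme isn't given in the talk; one plausible sketch, assuming dimensionless dynamic-similarity scaling (torque by m*g*l, angular velocity by sqrt(g/l), acceleration by g/l), is:

```python
import numpy as np

g = 9.81  # gravity [m/s^2]

def normalize_requirements(tau, omega, alpha, mass, length):
    """Hypothetical normalization making requirements comparable across
    scales. tau, omega, alpha: per-joint peak torque [Nm], velocity
    [rad/s], acceleration [rad/s^2]; mass [kg] and length [m] describe
    the (human or robot) model the motion was measured on."""
    return {
        "torque":       tau / (mass * g * length),
        "velocity":     omega / np.sqrt(g / length),
        "acceleration": alpha / (g / length),
    }

# Compare a human arm motion with a half-scale robot executing it.
print(normalize_requirements(40.0, 6.0, 30.0, mass=4.0, length=0.7))
print(normalize_requirements(12.0, 6.5, 33.0, mass=1.8, length=0.35))
```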

Here, the requirements are analyzed for four different sets of motions. As a result, the normalized velocity, acceleration, and torque of all arm joints of the different robot kinematics, as well as the complete arm actuator power requirements, can be compared. Thank you for your attention.

Hello, my name is Niranjan, and I'm a PhD student at Georgia Tech. My research focuses on designing learning frameworks to build data-driven controllers for legged robots. More specifically, I'm interested in building frameworks that let us create diverse, robust, and expressive policies for humanoid robots. To accomplish this goal, we need an architecture that scales to new tasks without extensive human engineering. In this spirit, my recent line of work uses human motion trajectories to train robot control policies. My approach allows a user to simply tell the robot what they want it to do; a neural network control policy is then trained with the user input to control the robot and perform the task. I accomplish this by combining a large language model, a text-to-motion model, and a motion imitation approach to convert user language commands into robot control policies. My approach also enables the user to interactively correct the robot's motion so that it aligns more closely with the user's intention. Using this approach, a diverse set of skills can be generated without requiring any complex reward engineering or model-based controller design.

But in cases where we do not have a dataset of human motion to guide the robot, we have to either manually engineer controllers using traditional model-based approaches or use reinforcement learning to train the robot to maximize a reward. This doesn't scale well, because as the task complexity increases, the extent of reward engineering required to get the policy to work becomes impractical. In my earlier work, I proposed a solution to this problem: building new policies that solve new tasks as combinations of a library of skills, which can either be learned or designed using model-based approaches. For example, we can train a robot to turn by perturbing a walking policy; we can then teach the robot to reach a target by perturbing a mixture of walking and turning policies. Using this approach, we build a hierarchy of policies, with every level in the hierarchy dependent on skills learned at the previous level. CCRL enables us to train a robot to perform complex interactive navigation and object manipulation tasks that transfer reliably to the real world, thanks to the hierarchical and modular nature of our framework.
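As an illustrative sketch of composing a new policy from a frozen skill library (this is not the CCRL code; all names and numbers are invented):

```python
import numpy as np

class ComposedPolicy:
    """A new skill as a weighted mixture of frozen library skills plus a
    small learned residual action."""
    def __init__(self, skills, n_act):
        self.skills = skills                          # callables: obs -> action
        self.w = np.ones(len(skills)) / len(skills)   # mixture weights (learned)
        self.residual = np.zeros(n_act)               # residual action (learned)

    def act(self, obs):
        base = sum(w * skill(obs) for w, skill in zip(self.w, self.skills))
        return base + self.residual

# Hypothetical frozen skills, echoing the walking/turning example above.
walk = lambda obs: np.array([0.5, 0.5])
turn = lambda obs: np.array([0.3, -0.3])
reach = ComposedPolicy([walk, turn], n_act=2)
print(reach.act(obs=None))  # mixture of the two base skills
```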

One common drawback with learning-based robot control methods is the sim-to-real gap, where policies trained in simulation do not transfer well to the real world. In my prior research, I looked at this problem from a parameter-discovery perspective, using Bayesian optimization to tune the simulation parameters so that the robot performs well in the testing environment. In addition to legged robots, I also have experience in robot manipulation, particularly in interactive perception with manipulator robots. In the work shown in the video, I built a framework that lets a robot interact with a cluttered scene and discover hidden objects; in another related work, I developed a framework that lets a robot discover the physical properties of an object by interacting with it. I'm on the job market for full-time positions, so please reach out to me if you have any potential opportunities.

Hello, I am Daegyu Lim, a PhD candidate at Seoul National University. I will present my research, titled "Proprioceptive External Joint Torque Estimation for Humanoids via Uncertainty Torque Learning." The surge in developing general-purpose humanoid robots demands that they safely share a workspace with humans.

Safely share workpace with humans but a major challenge is ensuring these robots can detect and respond to unexpected collagens to prevent force and ensure human safety to resolve this challenge I propose mov net a no sensor external joint torque estimation method it combines a model based disturbance Observer with dim learning to accurately

Here is a demonstration of external joint torque estimation while the robot walks randomly on uneven terrain. The blue dotted line shows the measured external joint torque from force-torque sensors; the red line represents the estimate from the proposed method, MOB-Net, compared to the green line for the end-to-end learning method and two gray lines for model-based algorithms. MOB-Net and the end-to-end method outperform the model-based algorithms on in-distribution data. Additionally, in the collision test on an out-of-distribution scenario, MOB-Net shows a low estimation error, demonstrating its robustness to unseen data, while the end-to-end method shows much larger errors than in the in-distribution test. MOB-Net's accurate and robust estimates allow for various applications: first, contact wrench feedback control for stable walking without force-torque sensors; second, collision detection and reaction for humanoid robots; and third, an intuitive hand-guiding method for moving the robot. In summary, MOB-Net contributes to accurate external torque estimation for humanoids using only proprioceptive sensors; it can also decrease the cost of humanoids and enhance the safety of humanoid robots. For more details on my work and other projects, please visit my personal page. Thank you.

Hi, my name is Young-Woo Sim, and today I'm very excited to share our development of an interactive and practical design tool for creating the actuation systems of humanoids. Designing an actuation system for a humanoid is tough for many reasons. When we design a humanoid from scratch, we start with basic parameters such as height and weight.

Based on these parameters, we come up with motions such as walking or carrying boxes. A designer then needs to find motors and linkages to bring those motions into the real world. A major issue here is that design optimizers typically run slowly, because the design space is huge and not smooth at all. Practicality is another important matter: for example, a designer needs to check whether a candidate design is physically feasible. In this process, designers rely a lot on expert knowledge to reduce engineering work.

So what we propose is a better design tool that is interactive, so that we can embed expert knowledge and gain new intuition, and also fast, so that we can do more iterations and try different ideas. Here's a quick overview of the design tool we are proposing. First, a designer needs to provide as much information and knowledge as possible about the target design; that is to say, the designer needs to create a robot model, desired motions, and design libraries with components they find useful, followed by goals and custom design rules that are relevant to the design target. Next, numerical programming figures out the best design for you, and this process is then iterated a few times.
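A hypothetical sketch of the kind of inner selection loop such a tool might run, honoring hard torque and speed limits across all tasks (the motor names and numbers are invented):

```python
MOTOR_LIBRARY = [
    # (name, mass [kg], peak torque [Nm], max speed [rad/s])
    ("M-small", 0.35, 18.0, 40.0),
    ("M-mid",   0.60, 32.0, 30.0),
    ("M-large", 1.10, 60.0, 22.0),
]

# Per-task peak joint requirements, e.g. from walking and box-carrying.
TASKS = {"walking": (25.0, 12.0), "carrying": (30.0, 6.0)}

def select_motor(tasks, library):
    """Lightest motor satisfying *hard* torque/speed limits on every task."""
    feasible = [m for m in library
                if all(m[2] >= tau and m[3] >= vel
                       for tau, vel in tasks.values())]
    if not feasible:
        return None                              # relax the design envelope
    return min(feasible, key=lambda m: m[1])     # lightest feasible motor

print(select_motor(TASKS, MOTOR_LIBRARY))  # -> ('M-mid', 0.6, 32.0, 30.0)
```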

To make this process faster, we have employed a special technique; for those who are curious, please come to the poster session. We ran design studies using this method, and you'll see the robot doing weightlifting and fast walking. What separates this tool from other software is that it makes it easy to reason about a single robot doing multiple tasks; it guarantees a degree of safety in the design by respecting hard system limitations, while other methods usually rely on soft constraints; and finally, it is useful in a wide range of design stages. To summarize, I'm proposing a design tool that's not only fast but also interactive, for more practical results. Briefly put, this design tool converts rough design envelopes into detailed designs, and I believe this will bridge the gap between controls engineers and hardware engineers, especially in industry settings. For those who are questioning whether this idea is actually useful, please wait a little bit: we're developing a new humanoid named DASH using this method. Thanks for listening!
