1. Legged locomotion
RBCCPS has a custom-built four-legged robot (quadruped) called “Stoch”. The main goal is to explore deep reinforcement learning (D-RL) algorithms to achieve a wide variety of walking behaviours. The quadruped must be able to plan and navigate over a diverse set of environments: flat ground, undulating and slippery surfaces, and muddy terrain. See the video below of our most recent walking result! Gaits were generated via D-RL based training and then implemented via kinematic motion primitives.
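As a rough illustration of the kinematic motion primitive idea (a minimal sketch, not our actual pipeline — the dimensions and the PCA-style extraction here are illustrative assumptions), the example below extracts dominant primitives from joint-angle trajectories and reconstructs a gait from them:

```python
import numpy as np

def extract_motion_primitives(joint_traj, n_primitives=4):
    """PCA over joint-angle trajectories: each principal direction
    acts as one kinematic motion primitive."""
    mean = joint_traj.mean(axis=0)
    centered = joint_traj - mean
    # SVD of the centered data gives principal directions in joint space
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_primitives], s

def reconstruct_gait(joint_traj, mean, primitives):
    """Project the gait onto the primitives and reconstruct it."""
    weights = (joint_traj - mean) @ primitives.T
    return mean + weights @ primitives

# Toy gait: two sinusoidal signals mixed into 8 hypothetical joints
t = np.linspace(0, 2 * np.pi, 200)
base = np.stack([np.sin(t), np.cos(t)], axis=1)          # (200, 2)
mixing = np.random.default_rng(0).standard_normal((2, 8))
gait = base @ mixing                                      # (200, 8)

mean, prims, _ = extract_motion_primitives(gait, n_primitives=2)
recon = reconstruct_gait(gait, mean, prims)
print(np.allclose(gait, recon, atol=1e-6))  # rank-2 gait → near-exact
```

Two primitives suffice here only because the toy gait is rank-2 by construction; real joint data would need more components to capture most of the variance.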
2. Learning to control
The field of “learning to control” has been receiving a lot of attention of late. Traditional control methods involve a rigorous formal analysis of stability, convergence rates, and stability margins for a wide variety of nonlinear dynamical systems. Unfortunately, these methods often become computationally expensive as system complexity increases. Learning-based methods, on the other hand, have the advantage of using prior data to obtain seemingly simple control laws that realize complex behaviors in all types of systems. Our objective, therefore, is to explore control methodologies that take the best of both approaches: control laws that not only guarantee sufficient performance and stability margins, but are also fast and simple to implement.
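One common way to combine the two viewpoints is to keep a hand-designed stabilizing law and learn only a residual correction from data. The sketch below is a toy illustration under that assumption (the PD gains, dynamics, and the least-squares “learning” step are all stand-ins, not a method from our work):

```python
import numpy as np

def pd_control(q, qd, q_des, kp=10.0, kd=2.0):
    """Baseline PD law: this part keeps the formal stability margins."""
    return kp * (q_des - q) - kd * qd

# Pretend prior data: states and the corrective torque that improved
# tracking on top of the PD baseline (synthetic, for illustration).
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))               # (q, qd) samples
true_w = np.array([0.5, -0.3])                  # unknown residual model
y = X @ true_w + 0.01 * rng.standard_normal(100)

w, *_ = np.linalg.lstsq(X, y, rcond=None)       # "learning" step

def combined_control(q, qd, q_des):
    """Stabilizing baseline plus learned residual correction."""
    residual = np.array([q, qd]) @ w
    return pd_control(q, qd, q_des) + residual

print(combined_control(0.1, 0.0, 0.0))
```

The design choice being illustrated: the learned term only refines performance, while the baseline law remains responsible for stability.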
3. Safety in robotic systems
My interest in real-time safety-critical control grew gradually as I worked more and more with industrial robot arms. As robots and robotic technology become increasingly common around us, safety will be of critical importance: we need guaranteed margins of safety in order to create a human-friendly environment. My current focus is on introducing robustness into safety-critical control via the notion of input-to-state safety, motivated mainly by my PhD work on input-to-state stability. Follow the link below for more details:
S. Kolathaya and A. D. Ames, “Input-to-State Safety with Control Barrier Functions,” IEEE Control Systems Letters, vol. 3, no. 1, pp. 108–113, 2018. .pdf .bib
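To give a feel for barrier-function-based safety filtering, here is a minimal sketch on a 1-D single integrator x' = u with safe set {x ≥ 0}, i.e. h(x) = x. This is the standard min-norm CBF construction in its simplest closed form, not the specific controllers or the input-to-state safety analysis from the paper:

```python
# Safe set {x >= 0} with barrier h(x) = x; the CBF condition
# h' >= -alpha * h reduces, for x' = u, to u >= -alpha * x.

def safety_filter(x, u_nominal, alpha=1.0):
    """Minimally modify u_nominal so the CBF condition holds."""
    return max(u_nominal, -alpha * x)

# Nominal controller drives toward x = -1 (unsafe); the filter
# overrides it just enough to keep the state in the safe set.
x, dt = 0.5, 0.01
for _ in range(1000):
    u_nom = -2.0 * (x - (-1.0))        # targets the unsafe point x = -1
    x += dt * safety_filter(x, u_nom)
print(x >= 0)  # True: forward invariance of the safe set
```

The filter is inactive whenever the nominal input already satisfies the barrier condition, so safe behavior is left untouched.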
1. DURUS Humanoid
The objective here was to realize human-like walking behavior in the humanoid robot DURUS. DURUS is a 23-DOF robot with 15 actuators.
2. DURUS-2D
The goal of this project was to realize dynamically stable running with the planar bipedal robot DURUS-2D, an 11-DOF robot with 4 actuators. The running realized on the robot alternates between a stance phase and a flight phase, and input-to-state stabilizing controllers were used to overcome phase-based disturbances.
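The alternating stance/flight structure is what makes running a hybrid system. As a hedged illustration (a 1-D vertical hopper standing in for DURUS-2D's full dynamics; all parameters are assumptions), the sketch below switches dynamics via a guard on leg compression:

```python
# 1-D vertical spring-leg hopper: stance when the body height is
# below the rest leg length l0, ballistic flight otherwise.
g, k, m, l0, dt = 9.81, 2000.0, 10.0, 0.5, 1e-4

def step(y, yd):
    """One Euler step; the phase is decided by a guard condition."""
    if y < l0:                        # stance: spring leg in contact
        ydd = k * (l0 - y) / m - g
        phase = "stance"
    else:                             # flight: ballistic motion
        ydd = -g
        phase = "flight"
    return y + dt * yd, yd + dt * ydd, phase

y, yd = 0.6, 0.0                      # dropped from above leg length
phases = set()
for _ in range(20000):                # simulate 2 seconds
    y, yd, phase = step(y, yd)
    phases.add(phase)
print(sorted(phases))                 # both phases are visited
```

On the real robot, the controller additionally has to reject the disturbance introduced at each phase transition, which is where the input-to-state stabilizing design comes in.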
3. AMBER Series
AMBER is a 5-DOF robot with 4 actuators. It used a simple linear control adaptation to track trajectories in each joint, with the end result being stable walking on a treadmill.
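To sketch what per-joint trajectory tracking with a simple adaptation term can look like (a toy single joint with assumed gains and dynamics, not AMBER's actual controller):

```python
import numpy as np

# One joint tracking q_des(t) = sin(t) with a PD law plus a
# gradient-style adaptive feedforward gain theta.
dt = 1e-3
q, qd, theta = 0.0, 0.0, 0.0
for i in range(5000):                   # simulate 5 seconds
    t = i * dt
    q_des, qd_des = np.sin(t), np.cos(t)
    e = q_des - q
    u = 50.0 * e + 10.0 * (qd_des - qd) + theta * qd_des
    theta += dt * 5.0 * e * qd_des      # simple linear adaptation
    qdd = u - 2.0 * qd                  # unit-inertia joint, damping 2
    q, qd = q + dt * qd, qd + dt * qdd
print(abs(np.sin(5.0) - q) < 0.1)       # tracking error stays small
```

The appeal of this kind of scheme is its simplicity: each joint runs an independent low-order linear law, with adaptation compensating for unmodeled effects.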
AMBER2 is a 7-DOF robot with 6 actuators. This robot is unique due to the presence of feet, which allow for multi-contact locomotion with distinct heel and toe behaviors. Various motion primitives, such as flat-footed walking, multi-contact walking, standing, lunges, heel-lifts, toe-lifts, swinging, and knee bends, are demonstrated in the form of dancing below:
4. Loop finder
This was my undergraduate project, done in collaboration with Ravi Lanka, in which the robot scans a set of loops and locates the smallest one.