Motion planning and control for underactuated biped on non-static terrain
A linear inverted pendulum (LIP) based planner is implemented on an underactuated biped to enable the robot to walk stably on non-static terrain. The terrain moves both forward-backward and laterally.
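The core of a LIP-based planner is the closed-form pendulum dynamics plus a footstep rule such as the capture point. A minimal sketch of those two pieces (the CoM height `Z_C` and the use of the instantaneous capture point are illustrative assumptions, not values from this project):

```python
import math

G = 9.81   # gravity (m/s^2)
Z_C = 0.8  # assumed constant CoM height of the LIP (m)

def lip_step(x, xdot, p, dt):
    """Advance the LIP state one time step.

    LIP dynamics: x_ddot = (g / z_c) * (x - p), where p is the
    stance-foot (pivot) position. The closed-form solution uses
    the time constant tau = sqrt(z_c / g).
    """
    tau = math.sqrt(Z_C / G)
    c, s = math.cosh(dt / tau), math.sinh(dt / tau)
    x_new = p + (x - p) * c + tau * xdot * s
    xdot_new = (x - p) * s / tau + xdot * c
    return x_new, xdot_new

def capture_point(x, xdot):
    """Where to place the next foot to bring the CoM to rest."""
    return x + xdot * math.sqrt(Z_C / G)
```

On moving terrain the stance foot travels with the ground, so `p` would be updated each step by the measured terrain displacement before calling `lip_step`.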
|
A linear inverted pendulum (LIP) based planner is implemented on an underactuated biped to enable the robot to walk stably on non-static terrain (lateral motion only). The robot carries a 10 kg box whose weight is unknown to the planner.
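One reason a LIP planner can tolerate an unknown payload is that the total mass cancels out of the LIP equation of motion, so the planned CoM trajectory is mass-independent; only the required ground-reaction forces scale with mass. A sketch of that observation (the 40 kg robot mass is an illustrative assumption):

```python
G, Z_C = 9.81, 0.8  # gravity (m/s^2), assumed CoM height (m)

def com_accel(x, p):
    # From m * x_ddot = (m * g / z_c) * (x - p): the mass m cancels,
    # so CoM acceleration depends only on geometry, not on the payload.
    return (G / Z_C) * (x - p)

def horizontal_grf(mass, x, p):
    # The horizontal ground-reaction force DOES scale with total mass,
    # which is what the low-level controller must supply.
    return mass * com_accel(x, p)
```

So the same footstep plan remains valid with the box; the 10 kg payload shows up only in the joint torques needed to realize it.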
|
Reinforcement learning
This project implements a Deep Deterministic Policy Gradient (DDPG) network to control 20 agents (robotic manipulators) to reach their goal positions. A reward of +0.1 is provided for each step in which an agent's end-effector is at its goal position (green sphere). Thus, the goal of each agent is to maintain its position at the target location for as many time steps as possible.
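DDPG combines a deterministic actor, a Q-value critic, a replay buffer, and slowly tracking target networks. A toy one-dimensional sketch with linear actor/critic (all hyperparameters and the linear function approximators are illustrative simplifications, not the project's networks):

```python
import random
import numpy as np

class DDPGAgent:
    """Toy 1-D DDPG: linear actor mu(s) = wa*s, linear critic Q(s,a) = wc.[s,a]."""

    def __init__(self, gamma=0.99, tau=0.01, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.wa = rng.normal(scale=0.1)                  # actor weight
        self.wc = rng.normal(scale=0.1, size=2)          # critic weights [w_s, w_a]
        self.wa_t, self.wc_t = self.wa, self.wc.copy()   # target networks
        self.buffer = []                                 # replay buffer
        self.gamma, self.tau, self.lr = gamma, tau, lr

    def act(self, s, noise=0.0):
        # Deterministic policy plus optional exploration noise.
        return self.wa * s + noise

    def q(self, s, a, w):
        return w[0] * s + w[1] * a

    def step(self, s, a, r, s2):
        self.buffer.append((s, a, r, s2))
        if len(self.buffer) >= 32:
            self.learn(random.sample(self.buffer, 32))

    def learn(self, batch):
        for s, a, r, s2 in batch:
            # Critic: TD target computed with the *target* actor and critic.
            a2 = self.wa_t * s2
            y = r + self.gamma * self.q(s2, a2, self.wc_t)
            td = y - self.q(s, a, self.wc)
            self.wc += self.lr * td * np.array([s, a])   # grad of Q w.r.t. wc
            # Actor: deterministic policy gradient, dQ/da * dmu/dwa.
            self.wa += self.lr * self.wc[1] * s
        # Soft-update target networks toward the learned ones.
        self.wa_t += self.tau * (self.wa - self.wa_t)
        self.wc_t = self.wc_t + self.tau * (self.wc - self.wc_t)
```

With 20 parallel manipulators, all agents would push transitions into one shared replay buffer, which is the usual way this environment accelerates learning.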
|
In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets the ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
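The per-step reward structure described above can be summarized as a simple event-to-reward mapping (a hedged reconstruction of the environment's logic, with the event names chosen here for illustration):

```python
def tennis_reward(event):
    """Per-agent reward for one step, following the rules stated above:
    +0.1 for hitting the ball over the net, -0.01 for letting it hit
    the ground or knocking it out of bounds, 0 otherwise."""
    rewards = {
        "hit_over_net": +0.1,
        "ball_grounded": -0.01,
        "out_of_bounds": -0.01,
    }
    return rewards.get(event, 0.0)

def rally_return(events):
    """Undiscounted return one agent accumulates over a rally."""
    return sum(tennis_reward(e) for e in events)
```

Because both agents are rewarded for keeping the rally going, the task is cooperative in effect even though the setting looks adversarial.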
|
LIDAR implementation
Point-cloud clustering implementation. In this work, a K-D tree is implemented to separate the scene into distinct objects. The sensor used is a lidar.
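A minimal sketch of K-D-tree-based Euclidean clustering: build a 3-D tree over the points, then grow each cluster by repeated radius searches (a generic reconstruction of the technique, not this project's exact code; points are tuples, and `tolerance` is the clustering distance threshold):

```python
def build_kdtree(points, depth=0):
    """Recursively build a 3-D K-D tree, alternating the split axis by depth."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def radius_search(node, target, radius, found):
    """Collect every point within `radius` of `target`, pruning subtrees
    whose splitting plane is farther than `radius` along the split axis."""
    if node is None:
        return
    p, axis = node["point"], node["axis"]
    if sum((a - b) ** 2 for a, b in zip(p, target)) <= radius ** 2:
        found.append(p)
    if target[axis] - radius <= p[axis]:
        radius_search(node["left"], target, radius, found)
    if target[axis] + radius >= p[axis]:
        radius_search(node["right"], target, radius, found)

def euclidean_cluster(points, tolerance):
    """Group points so that any two points closer than `tolerance`
    end up in the same cluster (flood fill over radius queries)."""
    tree = build_kdtree(points)
    seen, clusters = set(), []
    for p in points:
        if p in seen:
            continue
        cluster, stack = [], [p]
        seen.add(p)
        while stack:
            q = stack.pop()
            cluster.append(q)
            neighbors = []
            radius_search(tree, q, tolerance, neighbors)
            for n in neighbors:
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
        clusters.append(cluster)
    return clusters
```

The K-D tree turns each neighbor query from O(n) into roughly O(log n), which is what makes clustering full lidar scans tractable.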
|
An unscented Kalman filter (UKF) is implemented to estimate the states of multiple cars on a highway using noisy lidar and radar measurements. The red spheres above the cars are the estimated vehicle positions.
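The heart of a UKF is the unscented transform: sample 2n+1 sigma points from the state distribution, push them through the (nonlinear) model, and recover the mean and covariance from weighted sums. A sketch with a single spreading parameter `lam` (the weighting scheme shown is a common simplified variant; the value `lam=3.0` is illustrative):

```python
import numpy as np

def sigma_points(mean, cov, lam=3.0):
    """Generate 2n+1 sigma points: the mean, plus the mean offset by
    +/- the columns of the matrix square root of (n + lam) * cov."""
    n = len(mean)
    pts = [mean]
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    for i in range(n):
        pts.append(mean + sqrt_cov[:, i])
        pts.append(mean - sqrt_cov[:, i])
    return np.array(pts)

def unscented_transform(pts, lam=3.0):
    """Recover mean and covariance from (propagated) sigma points
    using weights w0 = lam/(n+lam), wi = 1/(2(n+lam))."""
    n = pts.shape[1]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w[0] = lam / (n + lam)
    mean = w @ pts
    diff = pts - mean
    cov = (w[:, None] * diff).T @ diff
    return mean, cov
```

In the full filter this transform runs twice per cycle: once through the process model (predict) and once through the lidar or radar measurement model (update).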
|