ND2 - project - Deep RL manipulator
sensor input -> DQN -> actions
integrates with robots and simulators
PyTorch Python API -> passes memory objects between the user's application and Torch without extra copies
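The zero-copy idea can be illustrated with a plain-Python analogy: a `memoryview` exposes an existing buffer to another consumer without duplicating it, similar in spirit to how the plugin hands image memory to Torch (the real project presumably uses shared GPU/CPU memory; this is only a sketch of the concept):

```python
# Analogy only: a memoryview shares the underlying buffer,
# so "the other side" sees writes without any copying.
frame = bytearray(8)          # pretend this is a camera frame buffer
shared = memoryview(frame)    # zero-copy view onto the same memory

shared[0] = 255               # the consumer writes through the view...
print(frame[0])               # ...and the original buffer reflects it: 255
```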
create DQN agent
create input_state memory
initialize state
loop:
    assign input_state based on previous state
    DQN agent returns action
    based on input_state, recalculate current state
    compute reward based on previous & current state
    send reward to DQN agent; update DQN memory
    exit if game over
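The loop above can be sketched end to end. Everything here is a hypothetical stand-in: the 1-D toy world and the random-policy `StubAgent` replace the real DQN, but the control flow matches the steps in the notes:

```python
import random

class StubAgent:
    """Hypothetical stand-in for the DQN agent: random policy, no learning."""
    def __init__(self, num_actions):
        self.num_actions = num_actions
    def next_action(self, state):
        return random.randrange(self.num_actions)
    def add_reward(self, reward, end_of_episode):
        pass  # a real agent would store the transition and train here

# toy 1-D world: try to reach position 0 starting from position 5
agent = StubAgent(num_actions=2)          # create DQN agent
state = 5                                 # initialize state
input_state = state                       # create input_state memory
for step in range(100):                   # loop
    input_state = state                   # assign input_state based on previous state
    action = agent.next_action(input_state)  # DQN agent returns action
    prev_state = state
    state += 1 if action == 0 else -1     # recalculate current state
    reward = abs(prev_state) - abs(state) # reward from previous & current state
    done = (state == 0)
    agent.add_reward(reward, done)        # send reward; update DQN memory
    if done:
        break                             # exit if game over
```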
ENUM
ARM Plugin
arm world = gazebo-arm.world
ArmPlugin.cpp = Gazebo plugin source = creates the DQN agent and trains it
libgazeboArmPlugin.so = Gazebo plugin shared object file = integrates the simulation environment with the RL agent
ArmPlugin::Load()
creates and initializes nodes that subscribe to two topics: the camera and the contact sensor
class_instance is the object on which the callback is invoked when a new message is received
ArmPlugin::onCameraMsg()
callback function for the camera subscriber
It takes the message from the camera topic, extracts the image, and saves it. This is then passed to the DQN.
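A minimal sketch of what "extracts the image and saves it" might look like, assuming a raw interleaved-RGB byte buffer (the real callback lives in ArmPlugin.cpp and is C++; the function name and parameters here are hypothetical):

```python
def on_camera_msg(data, width, height, channels=3):
    """Hypothetical camera callback body: validate the raw frame and
    snapshot it so the DQN can consume it on the next update."""
    expected = width * height * channels
    if len(data) != expected:
        raise ValueError(f"bad frame size: {len(data)} != {expected}")
    # copy the frame out; the live buffer may be reused by the transport
    return bytes(data)

frame = on_camera_msg(bytearray(64 * 64 * 3), width=64, height=64)
print(len(frame))  # 12288 bytes for a 64x64 RGB frame
```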
ArmPlugin::onCollisionMsg()
callback function for the object's contact sensor (my_contact).
Furthermore, this callback function can also be used to define a reward function based on whether there has been a collision or not.
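One way such a collision-based reward could look (a sketch only: the collision names and the +1/-1 rule are assumptions, not taken from these notes):

```python
# Hypothetical collision names for the target object and the gripper
COLLISION_ITEM = "tube::tube_link::tube_collision"
COLLISION_POINT = "arm::gripperbase::gripper_link"

def reward_from_contacts(contacts):
    """Assumed reward rule: +1 if the gripper touched the target object,
    -1 for any other collision (e.g. the arm hitting the ground),
    0 when there was no contact at all."""
    for c1, c2 in contacts:
        if {c1, c2} == {COLLISION_ITEM, COLLISION_POINT}:
            return 1.0
    return -1.0 if contacts else 0.0
```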
ArmPlugin::createAgent()
create and initialize the agent.
ArmPlugin::updateAgent()
every camera frame received -> DQN agent selects an action -> the updateAgent() method decides how to carry out that action.
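One common scheme for carrying out a discrete action on an arm (assumed here, not confirmed by these notes) maps action index 2k to "increase joint k" and 2k+1 to "decrease joint k":

```python
def apply_action(joints, action, delta=0.05):
    """Map a discrete DQN action onto joint positions:
    even actions nudge a joint up, odd actions nudge it down.
    The action encoding and delta step size are assumptions."""
    idx, sign = divmod(action, 2)   # joint index, direction bit
    if idx >= len(joints):
        raise ValueError("action out of range")
    joints = list(joints)           # leave the caller's list untouched
    joints[idx] += delta if sign == 0 else -delta
    return joints

print(apply_action([0.0, 0.0, 0.0], action=3))  # [0.0, -0.05, 0.0]
```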
ArmPlugin::OnUpdate()
issues rewards (end-of-episode or interim) and trains the DQN.
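Interim rewards in reaching tasks are often shaped by the change in distance to the goal between updates (a sketch under that assumption; the scaling factor alpha is made up):

```python
def interim_reward(prev_dist, curr_dist, alpha=10.0):
    """Reward proportional to progress toward the goal:
    positive when the gripper moved closer, negative when it moved away.
    alpha is an arbitrary scaling constant for illustration."""
    return alpha * (prev_dist - curr_dist)

print(interim_reward(0.50, 0.45))  # ~0.5: moved 5 cm closer
```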