Robot 3: Learning CAD and Teleop
So far, we haven’t worked with the real robot. The best way to start is a safe teleop interface. Conveniently, this is also the best way to debug our arm URDF and to fix the problems we ran into last time (self-collisions and missing dynamical properties).
The Intermediate Problem
At the end of the last article I had a working sim of the robot. Or so I thought. I started transferring the code from the last part into a Drake LeafSystem that could run the real robot. (Really, I was transferring code to do inverse dynamics on the real hardware, but let’s pretend I was just doing motion commands for it.)
During this whole process, I saw that the model had 5 joints, but the real robot had 6. My first instinct was that the sim just didn’t have a joint set up for the gripper, so I kept going, but I never actually looked at the model. It was only when I tried to line up motor rotation directions between the sim and the robot that I realized the simulated robot was missing an entire limb. Imagine you woke up with no forearm - that’s what my simulation was.
This happened because the URDF in the low-cost robot repo was created by Antonio before the extra limb was added. Armed with this knowledge, and the belief that an accurate simulation would let me continue with the manipulation course, I set out to construct the robot in CAD. So here we are - the CAD diaries.
Fusion360
The Low-Cost Robot repo comes with a .step and a .f3z file. The latter is a Fusion360 file that comes prebuilt with all the joints, so I started by downloading Fusion and playing around with it. The details are fairly hazy (this was 3 weeks ago now), but I ran into a few issues:
- The default joint connector points did not work for a couple of joints since their center of mass was offset from the joint connection position.
- To create sketches or rename parts in the .f3z file you have to click into the underlying part on the left panel - you can’t edit it in the main assembly directly.
- fusion2urdf required a very specific structure of parts, and didn’t let you define “composite assemblies”. I tried messing around with the Python scripts, but I couldn’t get a basic URDF to work with my limited CAD experience. I believe this is how Antonio created the original URDF file, though, so it definitely seems possible.
The last reason made Fusion360 unworkable for me - I had to find another option. Having spent a lot of (wasted) time learning Fusion360 without having a path to actually create a robot, I decided to find a tool that would have a workable end-to-end flow. Learning number 1: always try a simple version of a complicated problem (shitty CAD to a working URDF file) instead of wasting time optimizing one part of the toolchain (building the perfect CAD).
OnShape
I emailed Antonio and he helpfully pointed me to onshape-to-robot, a package that lets you convert an OnShape file to a URDF. I tried this with a single link, and it worked well! There were a few tweaks needed to make it work in Drake - namely adding <transmission /> tags so that Drake treats the joints as actuated (there’s a quick sanity-check sketch after the tips list below). Some tips for working with onshape:
- Similar to Fusion360, you need to assign materials in the original part, not in the completed assembly. You also have to delete surfaces to get onshape to render things properly.
- Manually convert the STL files to OBJ files for Drake (a short conversion script is sketched after this list).
- When importing files import them as a single part studio instead of a single part.
- Assembly names need to be unique.
- You should set mergeSTLs to true in your onshape-to-robot config so you don’t get spaghetti STLs.
- Duplicate connectors/motors for use in each assembly separately. If you reuse the same motor in multiple places, it doesn’t render correctly. This is probably easily fixable, but it was easier to just duplicate.
- The PyBullet simulator doesn’t always match up with what happens in Drake, so just test in Drake.
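On the STL-to-OBJ point above: I won’t pretend this is the exact script I ran, but something along these lines (using the trimesh package, with a made-up meshes/ directory) does the conversion:

```python
# Sketch: batch-convert the exported STL meshes to OBJ so Drake can use them.
from pathlib import Path

import trimesh

for stl_path in Path("meshes").glob("*.stl"):
    mesh = trimesh.load(str(stl_path))              # read the STL
    mesh.export(str(stl_path.with_suffix(".obj")))  # write an OBJ next to it
```

After converting, the `<mesh filename="...">` entries in the URDF need to point at the .obj files instead of the .stl ones.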
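And on the <transmission /> tags: here’s a minimal sketch (the file name is hypothetical, and the parser call differs slightly between pydrake versions) of how to check that the converted URDF actually ends up with actuated joints in Drake:

```python
# Sketch: load the converted URDF into Drake and confirm the joints are actuated.
from pydrake.multibody.plant import MultibodyPlant
from pydrake.multibody.parsing import Parser

plant = MultibodyPlant(time_step=0.001)
Parser(plant).AddModels("low_cost_robot.urdf")  # hypothetical file name
plant.Finalize()

print(f"positions: {plant.num_positions()}, actuators: {plant.num_actuators()}")
# Without <transmission /> tags the URDF parses fine, but num_actuators() is 0,
# so there is nothing for position control or inverse dynamics to command.
```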
Here’s my finished CAD file, and you can find the configs for onshape-to-robot here. There’s a lot more I could say about CAD, but the best thing is probably to play with OnShape yourself. If I have anything to hate about this tool, it’s that the pan/zoom is really weird compared to Fusion360 - I find myself constantly having to reset the orientation and start over.
The 6-link Jacobian has some major instability issues compared to the 5-link one. I was really scratching my head over this, until I realized it might actually be OK: the pseudoinverse controller targets a constant gripper velocity, which isn’t a realistic mode of operation. I spent some time tuning the PID controller and changing the time step, but once I accepted that some instability was fine, I moved on for now.
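For context, the core of the pseudoinverse controller is just mapping a desired gripper spatial velocity to joint velocities through the Jacobian. Here’s a stripped-down numpy sketch (the names are mine, not from my controller); the optional damping term turns it into damped least squares, which is one standard way to keep the solution bounded near singularities:

```python
import numpy as np

def joint_velocities_from_gripper_velocity(J, V_desired, damping=0.0):
    """Map a desired gripper spatial velocity V_desired (6,) to joint
    velocities (n,) through the Jacobian J (6 x n).

    damping = 0 is the plain pseudoinverse; damping > 0 gives the
    damped least-squares solution, which stays bounded near singularities.
    """
    JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, V_desired)
```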
Let’s Play a Game of Operation
Now, we’re ready to teleop this robot! The most important thing when we start is safety. We want the arm to move slowly and safely. Here’s the snippet of code I used to make sure any movement commands sent to the robot stayed within a very strict range:
```python
# Inside the method that computes the command to send
# (measured_position, desired_position, etc. come from earlier in the method).
num_movable_joints = 3
should_move = False
joint_safety_diffs = [
    # (max_diff, min_diff) per joint, in the same units as the motor positions
    (100, 25),
    (100, 25),
    (200, 100),
]
for i in range(num_movable_joints):
    abs_diff = abs(measured_position[i] - desired_position[i])
    max_diff, min_diff = joint_safety_diffs[i]
    if abs_diff > max_diff:
        # Refuse to move: the commanded jump is too large to be safe.
        print("Desired position too far from current position, skipping")
        print(f"Current pos {measured_position}, desired pos as float {desired_position_float}, desired_position {desired_position}")
        outputs.SetFromVector(measured_position)  # hold the measured position instead
        return
    elif abs_diff > min_diff:
        # Only command a new position if there is somewhere new to move to.
        should_move = True

if should_move:
    print(f"Current pos {measured_position}, desired pos as float {desired_position_float}, desired_position {desired_position}")
    # You can't reuse the measured position list for the rest of the joints:
    # it's an unstable system and they will droop!!
    commanded_position = list(desired_position[:num_movable_joints]) + list(self._home_position[num_movable_joints:])
    print(commanded_position)
    self._robot.set_goal_pos(commanded_position)  # gripper is always open for now
```
One fascinating lesson in unstable systems I learnt: you cannot feed the measured position of the motors back to them as the goal position. Small errors in that loop cause the arm to slowly droop. Something I couldn’t figure out was why repeatedly sending a motor the same goal position also caused it to seemingly turn off (hence the minimum joint diffs in the snippet above).
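To make the droop concrete, here’s a toy numerical illustration (the numbers are made up): if every cycle you command the motor to its own measured position, and the motor always settles a little below whatever it’s commanded, the error accumulates instead of cancelling:

```python
# Toy model of the droop: echoing the measured position back as the goal
# lets a small per-cycle settling error accumulate.
position = 2048      # arbitrary starting position, same units as the motor readings
settle_bias = -2     # made up: the motor settles slightly below each command under load

for _ in range(100):
    command = position                  # naive: feed the measured position back as the goal
    position = command + settle_bias    # the motor ends up a bit below the command
print(position)  # 1848 - after 100 cycles the joint has visibly drooped
```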
I set all this up and hooked it to a teleop interface that Drake provides (in a previous commit here), but ran into a few problems:
- The elbow motor (third from the base) is too weak. It doesn’t reach the commanded goal position exactly, but it will move in that direction if there is a large enough difference between the current and goal positions. This causes the safety checks to trigger constantly and eventually renders the arm immovable.
- When the elbow motor is unable to move to a position, the simulated robot continues to move to the requested position since they both take in the same “desired” position value.
- The low-pass filter that the teleop is hooked up to causes really jerky motions. The motor speed is also too high.
We can change this so that the simulated robot gets a “desired” position that maps to the real robot’s “measured” position. This way the simulated robot is slightly laggy relative to the real robot, but always ends up reflecting reality.
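In Drake terms that’s just a small pass-through system sitting between the hardware interface and the simulated plant. A rough sketch (port names and joint count are mine), not my actual code:

```python
from pydrake.systems.framework import BasicVector, LeafSystem


class MeasuredToDesired(LeafSystem):
    """Republish the real robot's measured positions as the sim's desired
    positions, so the sim lags slightly but always tracks reality."""

    def __init__(self, num_joints=6):
        super().__init__()
        self.DeclareVectorInputPort("measured_position", BasicVector(num_joints))
        self.DeclareVectorOutputPort(
            "desired_position", BasicVector(num_joints), self._copy_measured)

    def _copy_measured(self, context, output):
        output.SetFromVector(self.get_input_port(0).Eval(context))
```

Wire the driver’s measured-position output into this block, and send its output to the simulated plant instead of the raw teleop command.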
This solves the sim problem, but it still doesn’t solve the elbow motor stalling the entire system, or the jerkiness of the motion. That, though, is a problem for next time.