  • Stars: 233
  • Rank: 171,253 (Top 4%)
  • Language: Python
  • License: MIT License
  • Created: over 3 years ago
  • Updated: 9 months ago

Repository Details

Robot kinematics implemented in pytorch

PyTorch Robot Kinematics

  • Parallel and differentiable forward kinematics (FK) and Jacobian calculation
  • Load robot description from URDF, SDF, and MJCF formats
  • SDF queries batched across configurations and points via pytorch-volumetric

Installation

pip install pytorch-kinematics

For development, clone the repository somewhere, then run pip3 install -e . to install it in editable mode.
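
For example, a development install might look like this (the GitHub URL below is an assumption; use wherever you cloned the repository from):

git clone https://github.com/UM-ARM-Lab/pytorch_kinematics.git
cd pytorch_kinematics
pip3 install -e .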

Usage

See tests for code samples; some are also shown here.

Reference


If you use this package in your research, consider citing:

@software{Zhong_PyTorch_Kinematics_2023,
  author = {Zhong, Sheng and Power, Thomas and Gupta, Ashwin},
  doi = {10.5281/zenodo.7700588},
  month = {3},
  title = {{PyTorch Kinematics}},
  version = {v0.5.4},
  year = {2023}
}

Forward Kinematics (FK)

import math
import pytorch_kinematics as pk

# load robot description from URDF and specify end effector link
chain = pk.build_serial_chain_from_urdf(open("kuka_iiwa.urdf").read(), "lbr_iiwa_link_7")
# prints out the (nested) tree of links
print(chain)
# prints out list of joint names
print(chain.get_joint_parameter_names())

# specify joint values (can do so in many forms)
th = [0.0, -math.pi / 4.0, 0.0, math.pi / 2.0, 0.0, math.pi / 4.0, 0.0]
# do forward kinematics and get transform objects; end_only=False gives a dictionary of transforms for all links
ret = chain.forward_kinematics(th, end_only=False)
# look up the transform for a specific link
tg = ret['lbr_iiwa_link_7']
# get transform matrix (1,4,4), then convert to separate position and unit quaternion
m = tg.get_matrix()
pos = m[:, :3, 3]
rot = pk.matrix_to_quaternion(m[:, :3, :3])

We can parallelize FK by passing in 2D joint values, and also use CUDA if available

import torch
import pytorch_kinematics as pk

d = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float64

chain = pk.build_serial_chain_from_urdf(open("kuka_iiwa.urdf").read(), "lbr_iiwa_link_7")
chain = chain.to(dtype=dtype, device=d)

N = 1000
th_batch = torch.rand(N, len(chain.get_joint_parameter_names()), dtype=dtype, device=d)

# order of magnitudes faster when doing FK in parallel
# elapsed 0.008678913116455078s for N=1000 when parallel
# (N,4,4) transform matrix; only the one for the end effector is returned since end_only=True by default
tg_batch = chain.forward_kinematics(th_batch)

# elapsed 8.44686508178711s for N=1000 when serial
for i in range(N):
    tg = chain.forward_kinematics(th_batch[i])

We can compute gradients through the FK

import torch
import math
import pytorch_kinematics as pk

chain = pk.build_serial_chain_from_urdf(open("kuka_iiwa.urdf").read(), "lbr_iiwa_link_7")

# require gradient through the input joint values
th = torch.tensor([0.0, -math.pi / 4.0, 0.0, math.pi / 2.0, 0.0, math.pi / 4.0, 0.0], requires_grad=True)
tg = chain.forward_kinematics(th)
m = tg.get_matrix()
pos = m[:, :3, 3]
pos.norm().backward()
# now th.grad is populated

We can load SDF and MJCF descriptions too, and pass in joint values via a dictionary (unspecified joints get th=0) for non-serial chains

import math
import torch
import pytorch_kinematics as pk

chain = pk.build_chain_from_sdf(open("simple_arm.sdf").read())
ret = chain.forward_kinematics({'arm_elbow_pan_joint': math.pi / 2.0, 'arm_wrist_lift_joint': -0.5})
# recall that we specify joint values and get link transforms
tg = ret['arm_wrist_roll']

# can also do this in parallel
N = 100
ret = chain.forward_kinematics({'arm_elbow_pan_joint': torch.rand(N, 1), 'arm_wrist_lift_joint': torch.rand(N, 1)})
# (N, 4, 4) transform object
tg = ret['arm_wrist_roll']

# building the robot from a MJCF file
chain = pk.build_chain_from_mjcf(open("ant.xml").read())
print(chain)
print(chain.get_joint_parameter_names())
th = {'hip_1': 1.0, 'ankle_1': 1}
ret = chain.forward_kinematics(th)

chain = pk.build_chain_from_mjcf(open("humanoid.xml").read())
print(chain)
print(chain.get_joint_parameter_names())
th = {'left_knee': 0.0, 'right_knee': 0.0}
ret = chain.forward_kinematics(th)

Jacobian calculation

The Jacobian (in the kinematics context) is the matrix describing how the end effector changes with respect to joint value changes: dx = J dq, where dx is the twist, or stacked linear and angular velocity.

For SerialChain we provide a differentiable and parallelizable method for computing the Jacobian with respect to the base frame.

import math
import torch
import pytorch_kinematics as pk

# can convert Chain to SerialChain by choosing end effector frame
chain = pk.build_chain_from_sdf(open("simple_arm.sdf").read())
# print(chain) to see the available links for use as end effector
# note that any link can be chosen; it doesn't have to be a link with no children
chain = pk.SerialChain(chain, "arm_wrist_roll_frame")

chain = pk.build_serial_chain_from_urdf(open("kuka_iiwa.urdf").read(), "lbr_iiwa_link_7")
th = torch.tensor([0.0, -math.pi / 4.0, 0.0, math.pi / 2.0, 0.0, math.pi / 4.0, 0.0])
# (1,6,7) tensor, with 7 corresponding to the DOF of the robot
J = chain.jacobian(th)

# get Jacobian in parallel and use CUDA if available
N = 1000
d = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float64

chain = chain.to(dtype=dtype, device=d)
# Jacobian calculation is differentiable
th = torch.rand(N, 7, dtype=dtype, device=d, requires_grad=True)
# (N,6,7)
J = chain.jacobian(th)

# can get Jacobian at a point offset from the end effector (location is specified in EE link frame)
# by default location is at the origin of the EE frame
loc = torch.rand(N, 3, dtype=dtype, device=d)
J = chain.jacobian(th, locations=loc)
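
Since FK is differentiable, the linear-velocity rows of the Jacobian can be sanity-checked against autograd on the FK position. This is just a sketch; it assumes, per the twist convention above, that the first three rows are the linear part:

import math
import torch
import pytorch_kinematics as pk

chain = pk.build_serial_chain_from_urdf(open("kuka_iiwa.urdf").read(), "lbr_iiwa_link_7")
th = torch.tensor([0.0, -math.pi / 4.0, 0.0, math.pi / 2.0, 0.0, math.pi / 4.0, 0.0])

def ee_pos(q):
    # end effector position from FK, shape (3,)
    return chain.forward_kinematics(q).get_matrix()[0, :3, 3]

# (3,7) Jacobian of the EE position obtained by autograd through the FK
J_auto = torch.autograd.functional.jacobian(ee_pos, th)
# first 3 rows of the kinematic Jacobian are the linear-velocity part
J_lin = chain.jacobian(th)[0, :3, :]
print(torch.allclose(J_auto, J_lin, atol=1e-5))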

The Jacobian can be used to do inverse kinematics; see the IK survey for an overview of approaches. Note that IK may be better performed through other means, but doing it through the Jacobian can give an end-to-end differentiable method.
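
As a rough illustration only (this is not an API provided by the package), a damped least-squares step on the position error using chain.jacobian might look like:

import torch
import pytorch_kinematics as pk

chain = pk.build_serial_chain_from_urdf(open("kuka_iiwa.urdf").read(), "lbr_iiwa_link_7")

goal_pos = torch.tensor([0.2, 0.2, 0.6])  # hypothetical target EE position in the base frame
q = torch.zeros(1, 7)                     # initial joint values
damping = 1e-2

for _ in range(100):
    m = chain.forward_kinematics(q).get_matrix()
    err = goal_pos - m[0, :3, 3]          # position error (orientation is ignored here)
    if err.norm() < 1e-4:
        break
    J = chain.jacobian(q)[0, :3, :]       # (3,7) linear-velocity rows of the Jacobian
    # damped least-squares update: dq = J^T (J J^T + lambda I)^-1 err
    dq = J.T @ torch.linalg.solve(J @ J.T + damping * torch.eye(3), err)
    q = q + dq.unsqueeze(0)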

SDF Queries

See pytorch-volumetric for the latest details; some instructions are reproduced here:

For many applications such as collision checking, it is useful to have the SDF of a multi-link robot in certain configurations. First, we create the robot model (loaded from URDF, SDF, MJCF, ...) with pytorch kinematics. For this example, we use the KUKA 7 DOF arm model from pybullet_data:

import math
import os
# numpy and math are used by the query snippets later in this section
import numpy as np
import torch
import pybullet_data
import pytorch_kinematics as pk
import pytorch_volumetric as pv

urdf = "kuka_iiwa/model.urdf"
search_path = pybullet_data.getDataPath()
full_urdf = os.path.join(search_path, urdf)
chain = pk.build_serial_chain_from_urdf(open(full_urdf).read(), "lbr_iiwa_link_7")
d = "cuda" if torch.cuda.is_available() else "cpu"

chain = chain.to(device=d)
# paths to the link meshes are specified with their relative path inside the URDF
# we need to give them the path prefix as we need their absolute path to load
s = pv.RobotSDF(chain, path_prefix=os.path.join(search_path, "kuka_iiwa"))

By default, each link will have a MeshSDF. To instead use CachedSDF for faster queries

s = pv.RobotSDF(chain, path_prefix=os.path.join(search_path, "kuka_iiwa"),
                link_sdf_cls=pv.cache_link_sdf_factory(resolution=0.02, padding=1.0, device=d))

When the y=0.02 slice of the SDF is visualized:

[SDF slice image]

with the corresponding surface points:

[wireframe and solid surface point images]

Queries on this SDF depend on the joint configuration (all zero by default). Queries are batched across configurations and query points. For example, we have a batch of joint configurations to query

th = torch.tensor([0.0, -math.pi / 4.0, 0.0, math.pi / 2.0, 0.0, math.pi / 4.0, 0.0], device=d)
N = 200
th_perturbation = torch.randn(N - 1, 7, device=d) * 0.1
# N x 7 joint values
th = torch.cat((th.view(1, -1), th_perturbation + th))

And also a batch of points to query (same points for each configuration):

y = 0.02
query_range = np.array([
    [-1, 0.5],
    [y, y],
    [-0.2, 0.8],
])
# M x 3 points
coords, pts = pv.get_coordinates_and_points_in_grid(0.01, query_range, device=s.device)

We set the batch of joint configurations and query:

s.set_joint_configuration(th)
# N x M SDF value
# N x M x 3 SDF gradient
sdf_val, sdf_grad = s(pts)
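
As a hypothetical downstream use (not part of the package), the batched values can be reduced to a per-configuration proximity check:

# minimum signed distance over all query points, for each configuration (N,)
min_dist = sdf_val.min(dim=1).values
# configurations whose closest query point lies within some clearance of the robot surface
clearance = 0.02
too_close = min_dist < clearance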

Credits

  • pytorch_kinematics/transforms is extracted from pytorch3d with minor extensions. This was done instead of including pytorch3d as a dependency because it is hard to install and most of its code is unrelated. An important difference is that we use left hand multiplied transforms as is convention in robotics (T * pt) instead of their right hand multiplied transforms.
  • pytorch_kinematics/urdf_parser_py and pytorch_kinematics/mjcf_parser are extracted from kinpy, as is the FK logic. This repository ports the logic to pytorch, parallelizes it, and provides some extensions.

More Repositories

1. pytorch_mppi (Python, 369 stars): Model Predictive Path Integral (MPPI) with approximate dynamics implemented in pytorch
2. sdf_tools (C++, 187 stars): Builds 2D signed distance fields from images and 3D signed distance fields from pointclouds or Octomap, provides a lightweight signed distance field library, message types for signed distance fields, and tools to compress signed distance fields for transport
3. Chamfer-Distance-API (C++, 58 stars): A python class that calculates chamfer distance between point clouds using tensorflow
4. cdcpd (C++, 29 stars): Library for Constrained Deformable Coherent Point Drift
5. pytorch_volumetric (Python, 23 stars): Volumetric structures such as voxels and SDFs implemented in pytorch
6. vr_teleop (CMake, 19 stars): [ROS 1] Teleop control of dual arm robots using virtual reality devices
7. Efficient-Eng-2-LTL (Python, 16 stars): The associated repo for the paper "Data-Efficient Learning of Natural Language to Linear Temporal Logic Translators for Robot Task Specification"
8. chsel (Python, 11 stars): Constrained pose Hypothesis Set Elimination official code for pose estimation
9. arc_utilities (C++, 10 stars): C++ and Python utilities. ARC -> ARM
10. husky_stereo_nav (Python, 7 stars): ROS package for navigation of a Clearpath Husky using visual SLAM with a stereo camera
11. kuka_iiwa_interface (C++, 7 stars): Developmental controllers for the IIWA 7 robot, including a Catkinized version of the Kuka FRI SDK
12. pytorch_collision_checker (Python, 6 stars): PyTorch Collision Checker
13. probabilistic_shape_completion (Python, 6 stars): Code for Probabilistic Shape Completion
14. arm_robots (Python, 6 stars): Packages for controlling our robots at a high level through simple-to-use ROS and Python interfaces
15. arm_pytorch_utilities (Python, 6 stars): Utilities for working with pytorch
16. tampc (Python, 5 stars): Trap aware model predictive control
17. fast_kinematic_simulator (C++, 5 stars): Fast kinematic simulator used for motion planning and control with uncertainty projects
18. merge_pointclouds (C++, 5 stars)
19. unity_victor_teleop (C#, 5 stars): Unity project for teleoperation of ARM-lab's "Victor" robot using an HTC Vive
20. visual_servoing (Python, 5 stars): Position-based visual servoing library with PnP pose estimation
21. robot_calibration (Python, 5 stars): A ROS package that publishes the position of a table in the Kinect camera frame
22. uncertainty_planning_examples (C++, 4 stars): SE(2), SE(3), Baxter, and UR5 examples for using uncertainty_planning_core
23. vr_ros2_bridge (C#, 4 stars): Unity project and ROS 2 interface definitions for using the HTC Vive
24. gpu_voxel_planning (C++, 3 stars): Brad's library for PhD research on CHS, BTP, and other voxel-based planning approaches using ARMLab's Victor robot
25. mab_ms (C++, 3 stars): Bandit-based Model Selection for Deformable Object Manipulation
26. CudaSift (Cuda, 3 stars)
27. Traversability-Based-Contact-Space-Planner (C++, 3 stars): Traversability-based contact planner code release
28. multihypothesis_segmentation_tracking (Jupyter Notebook, 3 stars): Public release of code for the Manipulating Piles of Stuff voxel pipeline
29. stucco (Python, 3 stars): Soft Tracking Using Contacts for Cluttered Objects
30. deformable_manipulation_interface (C++, 3 stars): ROS interface between a robot/simulator and planners for deformable object manipulation
31. mjregrasping_cpp (C++, 2 stars)
32. arm_segmentation (Python, 2 stars)
33. pytorch_rrt (Python, 2 stars): Kinodynamic RRT implemented in pytorch
34. arm_video_recorder (Python, 2 stars): ROS node to record from a connected video device (webcam) and save to a file
35. uncertainty_planning_core (C++, 2 stars): Framework for motion planning with uncertainty
36. plant_motion_planning (C++, 2 stars): Code related to the motion planning around plants project
37. mjregrasping (Python, 2 stars): Code accompanying "The Grasp Loop Signature: A Topological Representation for Manipulation Planning with Ropes and Cables"
38. unreliable-deform-manipulation (Jupyter Notebook, 2 stars): Code accompanying the paper "Learning Where to Trust Unreliable Models in an Unstructured World for Deformable Object Manipulation"
39. gpu_profiling (Python, 1 star): GPU profiling tools for the lab
40. val_pbvs_pybullet (Python, 1 star): PyBullet simulation for visual servoing of the dual arm robot Val
41. SiamMask (Python, 1 star)
42. deform_control (C++, 1 star): Project for deformable object control
43. kinect2_calibration_files (CMake, 1 star): Calibration files for use with the iai_kinect2 ROS package
44. smmap (C++, 1 star): Simultaneous Modeling Manipulation And Planning
45. unity_victor_teleop_headless (C#, 1 star)
46. pr2_reachability_map_visualization (Python, 1 star)
47. focused-adaptation-dynamics (Jupyter Notebook, 1 star): Code accompanying "Focused Adaptation of Dynamics Models for Deformable Object Manipulation"
48. mps_shape_completion (Python, 1 star): Neural network completion of partial shapes in a voxel grid
49. pytorch_seed (Python, 1 star): RNG seeding and context management for pytorch
50. data-augmentation-for-manipulation (Jupyter Notebook, 1 star): Code accompanying the RSS paper "Data Augmentation for Manipulation"
51. arm_clouds (Python, 1 star): Simple module for convenient NumPy and PyTorch point cloud utilities
52. lightweight_vicon_bridge (C++, 1 star): Bridge between the Vicon DataStream SDK and ROS, designed specifically for reduced latency
53. husky_ark_armlab (Python, 1 star): A Python ROS package providing additional functionality for the Clearpath Autonomous Research Kit, developed for the University of Michigan's ARM Lab