• Stars: 356
• Rank: 119,446 (Top 3%)
• Language: Swift
• License: MIT License
• Created: 9 months ago
• Updated: 8 months ago


Repository Details

VisionOS app + Python library to stream head / wrist / finger tracking data from Vision Pro to any robot.

VisionProTeleop


Wanna use your new Apple Vision Pro to control your robot? Wanna record how you navigate and manipulate the world to train your robot? This visionOS app and Python library stream your head + wrist + hand tracking results via gRPC over a WiFi network, so any robot connected to the same network can subscribe and use them.

For a more detailed explanation, check out this short paper.

How to Use

If you use this repository in your work, consider citing:

@software{park2024avp,
    title={Using Apple Vision Pro to Train and Control Robots},
    author={Park, Younghyo and Agrawal, Pulkit},
    year={2024},
    url = {https://github.com/Improbable-AI/VisionProTeleop},
}

Step 1. Install the app on Vision Pro

This app is now officially on the visionOS App Store! You can search for Tracking Streamer on the App Store and install it.

If you want to play around with the app, you can also build and install it yourself. To learn how to do that, take a look at this documentation. This requires (a) an Apple Developer Account, (b) a Vision Pro Developer Strap, and (c) a Mac with Xcode installed.

Step 2. Run the app on Vision Pro

After installation, open the app on Vision Pro and click Start. That's it! Vision Pro is now streaming the tracking data over your WiFi network.

Tip: Note the IP address before you click Start; you need to specify this IP address to subscribe to the data. Once you click Start, the app immediately enters pass-through mode. Press the digital crown to stop streaming.

Step 3. Receive the stream from anywhere

The following Python package lets you receive the data stream on any device connected to the same WiFi network. First, install the package:

pip install avp_stream

Then, add this code snippet to any project you are developing:

from avp_stream import VisionProStreamer

avp_ip = "10.31.181.201"   # Vision Pro's IP address (example)
s = VisionProStreamer(ip = avp_ip, record = True)

while True:
    r = s.latest   # most recent tracking result as a dictionary
    print(r['head'], r['right_wrist'], r['right_fingers'])
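
The poses are streamed as 4x4 homogeneous transforms (see Available Data below). If your robot stack expects a position plus quaternion instead, you can split the matrix yourself. The sketch below is a minimal example assuming numpy and scipy are installed; pose_to_pos_quat is a hypothetical helper written here for illustration, not part of avp_stream.

import numpy as np
from scipy.spatial.transform import Rotation
from avp_stream import VisionProStreamer

def pose_to_pos_quat(T):
    # Split a (1, 4, 4) or (4, 4) homogeneous transform into a
    # position vector and an (x, y, z, w) quaternion.
    T = np.asarray(T).reshape(4, 4)
    position = T[:3, 3]
    quaternion = Rotation.from_matrix(T[:3, :3]).as_quat()
    return position, quaternion

s = VisionProStreamer(ip = "10.31.181.201", record = True)   # same example IP as above
r = s.latest
head_pos, head_quat = pose_to_pos_quat(r['head'])
print(head_pos, head_quat)

In a real controller you would call such a conversion once per loop iteration, exactly like the print statement in the snippet above.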

Available Data

r = s.latest

r is a dictionary containing the following data streamed from AVP:

r['head']: np.ndarray
  # shape (1,4,4) / measured from ground frame
r['right_wrist']: np.ndarray
  # shape (1,4,4) / measured from ground frame
r['left_wrist']: np.ndarray
  # shape (1,4,4) / measured from ground frame
r['right_fingers']: np.ndarray
  # shape (25,4,4) / measured from right wrist frame
r['left_fingers']: np.ndarray
  # shape (25,4,4) / measured from left wrist frame
r['right_pinch_distance']: float
  # distance between right index tip and thumb tip
r['left_pinch_distance']: float
  # distance between left index tip and thumb tip
r['right_wrist_roll']: float
  # rotation angle of your right wrist around your arm axis
r['left_wrist_roll']: float
  # rotation angle of your left wrist around your arm axis
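
Because the finger transforms are expressed relative to the corresponding wrist while the wrists themselves are expressed in the ground frame, chaining the two gives finger-joint poses in the ground frame. The sketch below reuses the dictionary r from the snippet above and shows that composition, plus a simple mapping from pinch distance to a normalized gripper command. fingers_in_ground_frame, pinch_to_gripper, and the 0.10 m open-pinch value are illustrative assumptions (pinch distance is assumed to be in meters), not part of avp_stream.

import numpy as np

def fingers_in_ground_frame(wrist, fingers):
    # wrist: (1, 4, 4) wrist pose in the ground frame
    # fingers: (25, 4, 4) joint poses relative to that wrist
    wrist = np.asarray(wrist).reshape(4, 4)
    return np.einsum('ij,njk->nik', wrist, np.asarray(fingers))   # (25, 4, 4) in the ground frame

def pinch_to_gripper(pinch_distance, open_dist = 0.10):
    # Map pinch distance to a gripper command in [0, 1]
    # (0 = fully closed, 1 = fully open); open_dist is an assumed calibration value.
    return float(np.clip(pinch_distance / open_dist, 0.0, 1.0))

right_fingers_ground = fingers_in_ground_frame(r['right_wrist'], r['right_fingers'])
gripper_cmd = pinch_to_gripper(r['right_pinch_distance'])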

Axis Convention

Refer to the image below to see how the axes are defined for your head, wrist, and fingers.

Hand Skeleton used in VisionOS

Refer to the image above to see the order in which the joints are represented in each hand's skeleton.

Acknowledgements

We acknowledge support from Hyundai Motor Company and ARO MURI grant number W911NF-23-1-0277.

More Repositories

1. walk-these-ways: Sim-to-real RL training and deployment tools for the Unitree Go1 robot. (Python, 540 stars)
2. rapid-locomotion-rl: Code for Rapid Locomotion via Reinforcement Learning. (Python, 162 stars)
3. dribblebot: Code release accompanying DribbleBot: Dynamic Legged Manipulation in the Wild. (Python, 91 stars)
4. dexenv: Code for Visual Dexterity: In-Hand Reorientation of Novel and Complex Object Shapes (Science Robotics). (Python, 77 stars)
5. eipo: Official codebase for Redeeming Intrinsic Rewards via Constrained Policy Optimization. (Python, 75 stars)
6. airobot: A Python library for robot learning, an extension to PyRobot. (Python, 74 stars)
7. pql: Parallel Q-Learning: Scaling Off-policy Reinforcement Learning under Massively Parallel Simulation. (Python, 50 stars)
8. curiosity_redteam: Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizXgXU). (Jupyter Notebook, 43 stars)
9. Stubborn: (Python, 36 stars)
10. human-guided-exploration: Official codebase for Human Guided Exploration (HuGE). (Python, 21 stars)
11. dw-offline-rl: Official implementation of the NeurIPS'23 paper "Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets". (Python, 16 stars)
12. harness-offline-rl: Official implementation of "Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Reweighting". (Python, 14 stars)
13. curiosity_baselines: An open-source reinforcement learning codebase with a variety of intrinsic exploration methods implemented in PyTorch. (Python, 10 stars)
14. learning-compliance: (4 stars)
15. monkey-job-runner: Monkey Job Runner. (Python, 2 stars)
16. ter: TER codebase for release. (Python, 1 star)
17. bvn: Official implementation of Bilinear Value Networks. (Python, 1 star)