
A Python library that enables the use of Jetson's GPIOs

Jetson.GPIO - Linux for Tegra

Jetson TX1, TX2, AGX Xavier, and Nano development boards contain a 40 pin GPIO header, similar to the 40 pin header in the Raspberry Pi. These GPIOs can be controlled for digital input and output using the Python library provided in the Jetson GPIO Library package. The library has the same API as the RPi.GPIO library for Raspberry Pi in order to provide an easy way to move applications running on the Raspberry Pi to the Jetson board.

This document walks through what is contained in the Jetson GPIO library package, how to configure the system and run the provided sample applications, and the library API.

Package Components

In addition to this document, the Jetson GPIO library package contains the following:

  1. The lib/python/ subdirectory contains the Python modules that implement all library functionality. The gpio.py module is the main component that will be imported into an application and provides the needed APIs. The gpio_event.py and gpio_pin_data.py modules are used by the gpio.py module and must not be imported directly into an application.

  2. The samples/ subdirectory contains sample applications to help in getting familiar with the library API and getting started on an application. The simple_input.py and simple_out.py applications show how to read from and write to a GPIO pin respectively, while button_led.py, button_event.py and button_interrupt.py show how a button press may be used to blink an LED using busy-waiting, a blocking wait and interrupt callbacks respectively.

Installation

Using pip

The easiest way to install this library is using pip:

sudo pip install Jetson.GPIO

Manual download

You may clone this git repository, or download and decompress an archive copy of it. The library files may be placed anywhere on your system. You can then either use the library directly from that directory by manually setting PYTHONPATH (see the example below), or install it using setup.py:

sudo python3 setup.py install
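For example, to use the library in place without installing it, point PYTHONPATH at the lib/python/ subdirectory of wherever you placed the source (the path below is a placeholder):

export PYTHONPATH=/path/to/jetson-gpio/lib/python:$PYTHONPATH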

Setting User Permissions

In order to use the Jetson GPIO Library, the correct user permissions/groups must be set first.

Create a new gpio user group. Then add your user to the newly created group.

sudo groupadd -f -r gpio
sudo usermod -a -G gpio your_user_name
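Note that the new group membership only takes effect at your next login. Afterwards, you can confirm it with:

groups your_user_name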

Install custom udev rules by copying the 99-gpio.rules file into the rules.d directory.

If you have downloaded the source to Jetson.GPIO:

sudo cp lib/python/Jetson/GPIO/99-gpio.rules /etc/udev/rules.d/

If you installed Jetson.GPIO from a package, e.g. using pip into a virtual environment:

sudo cp venv/lib/pythonNN/site-packages/Jetson/GPIO/99-gpio.rules /etc/udev/rules.d/

For the new rule to take effect, you either need to reboot or reload the udev rules by running:

sudo udevadm control --reload-rules && sudo udevadm trigger

Running the sample scripts

With the permissions set as needed, the sample applications provided in the samples/ directory can be used. The following describes the operation of each application:

  1. simple_input.py: This application uses the BCM pin numbering mode. It reads the value at pin 12 of the 40 pin header and prints it to the screen.

  2. simple_out.py: This application uses the BCM pin numbering mode from Raspberry Pi and outputs alternating high and low values at BCM pin 18 (or board pin 12 on the header) every 2 seconds.

  3. button_led.py: This application uses the BOARD pin numbering. It requires a button connected to pin 18 and GND, a pull-up resistor connecting pin 18 to 3V3, and an LED with a current-limiting resistor connected to pin 12. The application reads the button state and keeps the LED on for 1 second every time the button is pressed.

  4. button_event.py: This application uses the BOARD pin numbering. It requires a button connected to pin 18 and GND, a pull-up resistor connecting the button to 3V3, and an LED with a current-limiting resistor connected to pin 12. The application performs the same function as button_led.py, but performs a blocking wait for the button press event instead of continuously checking the value of the pin, in order to reduce CPU usage.

  5. button_interrupt.py: This application uses the BOARD pin numbering. It requires a button connected to pin 18 and GND, a pull-up resistor connecting the button to 3V3, an LED with a current-limiting resistor connected to pin 12, and a second LED with a current-limiting resistor connected to pin 13. The application slowly blinks the first LED continuously, and rapidly blinks the second LED five times only when the button is pressed.

To run these sample applications if Jetson.GPIO is added to the PYTHONPATH:

python3 <name_of_application_to_run>

Alternatively, if Jetson.GPIO is not added to the PYTHONPATH, the run_sample.sh script can be used to run these sample applications. This can be done with the following command when in the samples/ directory:

./run_sample.sh <name_of_application_to_run>

The usage of the script can also be viewed by using:

./run_sample.sh -h
./run_sample.sh --help

Complete library API

The Jetson GPIO library provides all public APIs provided by the RPi.GPIO library. The following discusses the use of each API:

1. Importing the library

To import the Jetson.GPIO module use:

import Jetson.GPIO as GPIO

This way, you can refer to the module as GPIO throughout the rest of the application. The module can also be imported using the name RPi.GPIO instead of Jetson.GPIO for existing code using the RPi library.
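Since the two libraries share an API, a defensive import such as the following keeps a script portable between a Jetson and a Raspberry Pi (a sketch only; on a Jetson, Jetson.GPIO can also be imported under the RPi.GPIO name as noted above, so this is optional):

try:
    import Jetson.GPIO as GPIO   # running on a Jetson board
except ImportError:
    import RPi.GPIO as GPIO      # fall back to the Raspberry Pi library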

2. Pin numbering

The Jetson GPIO library provides four ways of numbering the I/O pins. The first two correspond to the modes provided by the RPi.GPIO library, i.e. BOARD and BCM, which refer to the pin numbers of the 40 pin GPIO header and the Broadcom SoC GPIO numbers respectively. The remaining two modes, CVM and TEGRA_SOC, use strings instead of numbers; these correspond to signal names on the CVM/CVB connector and the Tegra SoC respectively.

To specify which mode you are using (mandatory), use the following function call:

GPIO.setmode(GPIO.BOARD)
# or
GPIO.setmode(GPIO.BCM)
# or
GPIO.setmode(GPIO.CVM)
# or
GPIO.setmode(GPIO.TEGRA_SOC)

To check which mode has been set, you can call:

mode = GPIO.getmode()

The mode must be one of GPIO.BOARD, GPIO.BCM, GPIO.CVM, GPIO.TEGRA_SOC or None.
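As a quick sanity check (a minimal sketch):

import Jetson.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
assert GPIO.getmode() == GPIO.BOARD  # getmode() returns None if no mode has been set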

3. Warnings

It is possible that the GPIO you are trying to use is already in use outside the current application. In that case, the Jetson GPIO library will warn you if the GPIO being used is configured to anything other than the default direction (input). It will also warn you if you try to clean up before any mode or channels have been set up. To disable warnings, call:

GPIO.setwarnings(False)

4. Set up a channel

The GPIO channel must be set up before use as input or output. To configure the channel as input, call:

# (where channel is based on the pin numbering mode discussed above)
GPIO.setup(channel, GPIO.IN)

To set up a channel as output, call:

GPIO.setup(channel, GPIO.OUT)

It is also possible to specify an initial value for the output channel:

GPIO.setup(channel, GPIO.OUT, initial=GPIO.HIGH)

When setting up a channel as output, it is also possible to set up more than one channel at once:

# add as many channels as needed; a tuple such as (18, 12, 13) also works
channels = [18, 12, 13]
GPIO.setup(channels, GPIO.OUT)

5. Input

To read the value of a channel, use:

GPIO.input(channel)

This will return either GPIO.LOW or GPIO.HIGH.
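For example (a sketch assuming BOARD mode and board pin 18 already set up as an input, as in the samples):

value = GPIO.input(18)
print("HIGH" if value == GPIO.HIGH else "LOW")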

6. Output

To set the value of a pin configured as output, use:

GPIO.output(channel, state)

where state can be GPIO.LOW or GPIO.HIGH.

You can also output to a list or tuple of channels:

channels = [18, 12, 13] # or use tuples
GPIO.output(channels, GPIO.HIGH) # or GPIO.LOW
# set first channel to LOW and rest to HIGH
GPIO.output(channels, (GPIO.LOW, GPIO.HIGH, GPIO.HIGH))

7. Clean up

At the end of the program, it is good to clean up the channels so that all pins are returned to their default state. To clean up all channels used, call:

GPIO.cleanup()

If you don't want to clean up all channels, it is also possible to clean up individual channels, or a list or tuple of channels:

GPIO.cleanup(chan1) # cleanup only chan1
GPIO.cleanup([chan1, chan2]) # cleanup only chan1 and chan2
GPIO.cleanup((chan1, chan2))  # does the same operation as previous statement
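A common pattern (a sketch, not something the library mandates) is to run the cleanup in a finally block so that pins are released even if the program exits with an exception:

import Jetson.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(12, GPIO.OUT)
try:
    GPIO.output(12, GPIO.HIGH)
    # ... application logic ...
finally:
    GPIO.cleanup()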

8. Jetson Board Information and library version

To get information about the Jetson module, use/read:

GPIO.JETSON_INFO

This provides a Python dictionary with the following keys: P1_REVISION, RAM, REVISION, TYPE, MANUFACTURER and PROCESSOR. All values in the dictionary are strings with the exception of P1_REVISION which is an integer.

To get information about the library version, use/read:

GPIO.VERSION

This provides a string with the X.Y.Z version format.
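For instance, a short diagnostic dump (a sketch) could look like:

import Jetson.GPIO as GPIO

print("Library version:", GPIO.VERSION)
for key, value in GPIO.JETSON_INFO.items():
    print(key, "=", value)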

9. Interrupts

Aside from busy-polling, the library provides three additional ways of monitoring an input event:

The wait_for_edge() function

This function blocks the calling thread until the provided edge(s) is detected. The function can be called as follows:

GPIO.wait_for_edge(channel, GPIO.RISING)

The second parameter specifies the edge to be detected and can be GPIO.RISING, GPIO.FALLING or GPIO.BOTH. If you want to limit the wait to a specified amount of time, a timeout can optionally be set:

# timeout is in milliseconds
GPIO.wait_for_edge(channel, GPIO.RISING, timeout=500)

The function returns the channel for which the edge was detected or None if a timeout occurred.
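Putting this together (a sketch; channel is assumed to be a channel previously set up as an input):

# wait up to 500 ms for a rising edge
result = GPIO.wait_for_edge(channel, GPIO.RISING, timeout=500)
if result is None:
    print("timed out waiting for an edge")
else:
    print("edge detected on channel", result)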

The event_detected() function

This function can be used to periodically check if an event occurred since the last call. The function can be set up and called as follows:

# set rising edge detection on the channel
GPIO.add_event_detect(channel, GPIO.RISING)
run_other_code()
if GPIO.event_detected(channel):
    do_something()

As before, you can detect events for GPIO.RISING, GPIO.FALLING or GPIO.BOTH.

A callback function run when an edge is detected

This feature runs a second thread for callback functions. Hence, the callback function can run concurrently with your main program in response to an edge. This feature can be used as follows:

# define callback function
def callback_fn(channel):
    print("Callback called from channel %s" % channel)

# add rising edge detection
GPIO.add_event_detect(channel, GPIO.RISING, callback=callback_fn)

More than one callback can also be added if required as follows:

def callback_one(channel):
    print("First Callback")

def callback_two(channel):
    print("Second Callback")

GPIO.add_event_detect(channel, GPIO.RISING)
GPIO.add_event_callback(channel, callback_one)
GPIO.add_event_callback(channel, callback_two)

The two callbacks in this case are run sequentially, not concurrently, since there is only one thread running all callback functions.

In order to prevent multiple calls to the callback functions by collapsing multiple events into a single one, a debounce time can optionally be set:

# bouncetime set in milliseconds
GPIO.add_event_detect(channel, GPIO.RISING, callback=callback_fn,
                      bouncetime=200)

If the edge detection is no longer required, it can be removed as follows:

GPIO.remove_event_detect(channel)

10. Check function of GPIO channels

This feature allows you to check the function of the provided GPIO channel:

GPIO.gpio_function(channel)

The function returns either GPIO.IN or GPIO.OUT.
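For example (a sketch):

direction = GPIO.gpio_function(channel)
if direction == GPIO.IN:
    print("channel is currently an input")
else:
    print("channel is currently an output")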

11. PWM

See samples/simple_pwm.py for details on how to use PWM channels.

The Jetson.GPIO library supports PWM only on pins with attached hardware PWM controllers. Unlike the RPi.GPIO library, the Jetson.GPIO library does not implement software-emulated PWM. The Jetson Nano supports 2 PWM channels, and the Jetson AGX Xavier supports 3 PWM channels. Jetson TX1 and TX2 do not support any PWM channels.

The system pinmux must be configured to connect the hardware PWM controller(s) to the relevant pins. If the pinmux is not configured, PWM signals will not reach the pins! The Jetson.GPIO library does not dynamically modify the pinmux configuration to achieve this. Read the L4T documentation for details on how to configure the pinmux.
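As a rough illustration of the RPi.GPIO-style PWM API (a sketch; board pin 32 is assumed to be one of the PWM-capable pins on a Jetson Nano, and the pinmux is assumed to be configured as described above; see samples/simple_pwm.py for the authoritative example):

import time
import Jetson.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(32, GPIO.OUT, initial=GPIO.HIGH)
p = GPIO.PWM(32, 50)        # 50 Hz PWM on board pin 32
p.start(25)                 # start at 25 % duty cycle
time.sleep(2)
p.ChangeDutyCycle(75)       # change the duty cycle on the fly
time.sleep(2)
p.stop()
GPIO.cleanup()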

Using the Jetson GPIO library from a docker container

The following describes how to use the Jetson GPIO library from a docker container.

Building a docker image

samples/docker/Dockerfile is a sample Dockerfile for the Jetson GPIO library. The following command will build a docker image named testimg from it.

sudo docker image build -f samples/docker/Dockerfile -t testimg .

Running the container

Basic options

You should map /dev into the container to access the GPIO pins, so you need to add the following option to the docker container run command:

-v /dev:/dev \

If you also want to use the GPU from the container, you need to add these options as well:

--runtime=nvidia --gpus all

Running the container in privileged mode

By default, the library determines the Jetson model by checking /proc/device-tree/compatible and /proc/device-tree/chosen. These paths can only be mapped into the container in privileged mode.

The following example will run /bin/bash from the container in privileged mode.

sudo docker container run -it --rm \
--runtime=nvidia --gpus all \
--privileged \
-v /proc/device-tree/compatible:/proc/device-tree/compatible \
-v /proc/device-tree/chosen:/proc/device-tree/chosen \
-v /dev:/dev \
testimg /bin/bash

Running the container in non-privileged mode

If you don't want to run the container in privileged mode, you can directly provide your Jetson model name to the library through the environment variable JETSON_MODEL_NAME:

# e.g. -e JETSON_MODEL_NAME=JETSON_NANO
-e JETSON_MODEL_NAME=[PUT_YOUR_JETSON_MODEL_NAME_HERE]

You can get the proper value for this variable by running samples/jetson_model.py on the host or in privileged mode.

# run on the host or in privileged mode
sudo python3 samples/jetson_model.py

The following example will run /bin/bash from the container in non-privileged mode.

sudo docker container run -it --rm \
--runtime=nvidia --gpus all \
-v /dev:/dev \
-e JETSON_MODEL_NAME=[PUT_YOUR_JETSON_MODEL_NAME_HERE] \
testimg /bin/bash

Obtaining L4T Documentation

The L4T documentation is available from NVIDIA's developer website.

Within the documentation, relevant topics may be found by searching for, e.g.:

  • Hardware Setup.
  • Configuring the 40-Pin Expansion Header.
  • Jetson-IO.
  • Platform Adaptation and Bring-Up.
  • Pinmux Changes.
