VisionScript
VisionScript is an abstract programming language for doing common computer vision tasks, fast.
VisionScript is built in Python, offering a simple syntax for running object detection, classification, and segmentation models. Read the documentation.
Get Started 🚀
First, install VisionScript:
pip install visionscript
You can then run VisionScript using:
visionscript
This will open a VisionScript REPL in which you can type commands.
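For example, you could type the following statements into the REPL one line at a time (these are the same statements used in the quickstart below):
Load["./photo.jpg"]
Detect["person"]
Say[]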
Run a File 📁
To run a VisionScript file, use:
visionscript ./your_file.vic
Use VisionScript in a Notebook 📓
VisionScript offers an interactive web notebook through which you can run VisionScript code.
To use the notebook, run:
visionscript --notebook
This will open a notebook in your browser. Notebooks are ephemeral: you will need to copy your code to a file to save it.
Quickstart 🚀
Find people in an image using object detection
Load["./photo.jpg"]
Detect["person"]
Say[]
Find people in all images in a folder using object detection
In["./images"]
Detect["person"]
Say[]
Replace people in a photo with an emoji
Load["./abbey.jpg"]
Size[]
Say[]
Detect["person"]
Replace["emoji.png"]
Save["./abbey2.jpg"]
Classify an image
Load["./photo.jpg"]
Classify["apple", "banana"]
Installation 👷
To install VisionScript, clone this repository and run pip install -r requirements.txt.
Then, make a file ending in .vic in which to write your VisionScript code.
When you have written your code, run:
visionscript ./your_file.vic
Run in debug mode
Running in debug mode shows the full Abstract Syntax Tree (AST) of your code.
visionscript ./your_file.vic --showtree=True
Debug mode is useful for debugging code while adding new features to the VisionScript language.
Inspiration 🌟
The inspiration behind this project was to build a simple way of performing one-off computer vision tasks.
Consider a scenario where you want to run zero-shot classification on a folder of images. With VisionScript, you can do this in three lines of code:
In["./images"]
Classify["cat", "dog"]
Say[]
VisionScript is not meant to be a full programming language for all vision tasks, but rather an abstract way of doing common tasks.
VisionScript is ideal if you are new to concepts like "classify" and "segment" and want to explore what they do to an image.
Syntax
The syntax is inspired by both Python and the Wolfram Language. VisionScript is an interpreted language, run line-by-line like Python. Statements use the format:
Statement[argument1, argument2, ...]
This is the same format as the Wolfram Language.
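For example, the following statements from the quickstart above take two arguments, one argument, and no arguments, respectively:
Classify["apple", "banana"]
Load["./photo.jpg"]
Say[]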
Lexical Inference and Memory
A (I think!) unique feature of VisionScript compared to other languages is lexical inference.
You don't need to declare variables to store images, etc. Rather, you can let VisionScript do the work. Consider this example:
Load["./photo.jpg"]
Size[]
Say[]
Here, Size[] and Say[] do not take any arguments. Instead, they operate on the last result (in this case, the image loaded by Load[]). The Wolfram Language has a feature for referencing the last output using %. VisionScript uses the same concept, but with a twist: you don't write a reference at all. Size[] and Say[] don't accept any arguments; they use the result of the previous statement automatically.
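A trimmed version of the emoji example from the quickstart shows the same idea over a longer pipeline: each statement acts on the result of the one before it, so Detect["person"] runs on the image returned by Load[], Replace["emoji.png"] acts on those detections, and Save[] writes out the modified image.
Load["./abbey.jpg"]
Detect["person"]
Replace["emoji.png"]
Save["./abbey2.jpg"]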
Developer Setup 🛠
If you want to add new features or fix bugs in the VisionScript language, you will need to set up a developer environment.
To do so, clone the language repository:
git clone https://github.com/capjamesg/VisionScript
Then, install the required dependencies and VisionScript:
pip install -r requirements.txt
pip install -e .
Now, you can run VisionScript using:
visionscript
Supported Models 📚
VisionScript provides abstract wrappers around:
- CLIP by OpenAI (Classification)
- Ultralytics YOLOv8 (Object Detection Training, Segmentation Training)
- FastSAM by CASIA-IVA-Lab (Segmentation)
- GroundedSAM (Object Detection, Segmentation)
- BLIP (Caption Generation)
- ViT (Classification Training)
License 📝
This project is licensed under an MIT license.