EcoSnap

[Demo video: ecosnap.mp4]

Recycle your plastic better with Artificial Intelligence ♻️

EcoSnap tells you how and where to recycle your items from a simple picture, with advice tailored to your location. We built this product in a week for Ben's Bites AI Hackathon.

👉 Try it now - it's free with no sign in needed

Deploy with Vercel

You can support this project (and many others) through GitHub Sponsors! ❤️

Made by Alyssa X & Leo. Read more about how we built this here.


Table of contents

  • Features
  • Installation
  • The AI Model
  • Credit

Features

📸 Snap or upload a picture of a plastic code
📱 Install the PWA on your phone for easy access
🔍 Search for a specific item to know how to dispose of it
♻️ Learn how to recycle effectively using AI
🥤 Keep track of how many plastic items you've recycled
🌍 Change your location for specific advice
...and much more to come - all for free & no sign in needed!

Installation

You can deploy to Vercel directly by clicking here.

Important: Make sure to set the NEXT_PUBLIC_MODEL_URL environment variable in the .env file to an absolute URL where you host model.json (and make sure the other sharded .bin weight files are hosted alongside the JSON).
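For example, a hypothetical .env entry might look like this (the URL is only a placeholder for wherever you host the converted model files):

```
# .env — placeholder URL; point this at wherever model.json and its .bin shards live
NEXT_PUBLIC_MODEL_URL=https://example.com/models/model.json
```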

The AI Model

Data

The model was trained on image examples of the 7 different resin codes; the data for this can be found in ml/seven_plastics. It is a combination of the following Kaggle dataset and images collected by the authors and contributors.

Training

The final model was trained using TensorFlow's EfficientNet implementation, with the pre-trained weights frozen for transfer learning so the model could learn the resin codes faster! Training was done in Python on a GPU-powered machine to speed things up. You can find the training script in ml/train.py and try it for yourself; there you will see that different meta-architectures and parameters were experimented with before arriving at the final model.
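Once trained, the Keras model has to be converted before the browser can load it (see the Prediction section below). A typical invocation of the TensorFlow.js converter looks roughly like this; the input and output paths here are only placeholders:

```bash
# Convert a saved Keras model into a TF.js layers model:
# this produces model.json plus the sharded .bin weight files mentioned above
tensorflowjs_converter --input_format=keras path/to/model.h5 path/to/web_model/
```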

Prediction

To predict the plastic resin code in real time, the model had to be integrated with the front-end app; to do this, we converted it to a format compatible with TensorFlow.js. We use a Web Worker to prevent the main thread from being blocked while the prediction runs in the client.
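As a rough sketch of that setup (not EcoSnap's actual code; the file name and message shape are assumptions), the main thread can hand raw pixels to a worker and wait for the predicted code:

```ts
// Main thread — hypothetical wiring; predict.worker.ts is an assumed file name
const worker = new Worker(new URL("./predict.worker.ts", import.meta.url));

function classify(image: ImageData): Promise<number> {
  return new Promise((resolve) => {
    worker.onmessage = (event: MessageEvent<{ code: number }>) => resolve(event.data.code);
    worker.postMessage({ image }); // structured clone; the heavy tensor work stays off the main thread
  });
}
```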

The app passes the image tensor to the model, which then gives a probability for each of the plastic resin codes; the one with the highest probability gets shown to the user, along with bespoke advice!
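Inside the worker, that flow might look roughly like the sketch below. The input size, normalization, and loader (loadLayersModel vs. loadGraphModel) depend on how the model was actually exported, so treat those details as assumptions:

```ts
// predict.worker.ts — hypothetical sketch of the worker side
import * as tf from "@tensorflow/tfjs";

// NEXT_PUBLIC_MODEL_URL is inlined at build time by Next.js
const modelPromise = tf.loadLayersModel(process.env.NEXT_PUBLIC_MODEL_URL as string);

self.onmessage = async (event: MessageEvent<{ image: ImageData }>) => {
  const model = await modelPromise;
  const scores = tf.tidy(() => {
    const input = tf.browser
      .fromPixels(event.data.image)   // HxWx3 tensor from raw pixels
      .resizeBilinear([224, 224])     // assumed model input size
      .toFloat()
      .div(255)                       // assumed [0, 1] scaling
      .expandDims(0);                 // add the batch dimension
    return (model.predict(input) as tf.Tensor).squeeze();
  });
  const probabilities = Array.from(await scores.data()); // one score per resin code (1–7)
  scores.dispose();
  const code = probabilities.indexOf(Math.max(...probabilities)) + 1;
  self.postMessage({ code });
};
```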

Feedback

Training a model for a specific task is hard, and the model will sometimes get things wrong. So when it does, we give the user an opportunity to tell us what the right code was! This helps in several ways:

  1. The user gets the information they need on how to recycle their item
  2. We can see how the model is performing in production
  3. We get new data (if the user lets us) to train the model with and improve it for everyone

While we implemented the front end for the feedback loop, we ended up not connecting it to a backend, as that added complexity and cost, and we wanted the app to be very lightweight and to run entirely on the client. We'd also have to communicate clearly to the user how exactly their images would be used, and set up either an opt-in or opt-out system, which felt a bit cumbersome.

Credit

Libraries used

Feel free to reach out to us at [email protected], or to Alyssa or Leo directly, if you have any questions or feedback! Hope you find this useful 💜