• Stars: 11,854
• Rank: 2,630 (Top 0.06%)
• Language: Python
• License: Apache License 2.0
• Created: almost 7 years ago
• Updated: 3 months ago

Repository Details

pix2code

Generating Code from a Graphical User Interface Screenshot

Abstract

Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e. iOS, Android and web-based technologies).
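
As a rough illustration of the end-to-end idea, the model can be thought of as a CNN that encodes the screenshot, an LSTM that encodes the DSL tokens generated so far, and a decoder that predicts the next token. The Keras sketch below is only illustrative (layer sizes, vocabulary size, and context length are made-up assumptions), not the reference pix2code architecture:

# Minimal sketch (assumptions: Keras/TensorFlow 2.x, vocabulary size 20,
# context length 48, 256x256 RGB input) of an image-conditioned language
# model in the spirit of pix2code; NOT the reference implementation.
from tensorflow.keras import layers, models

VOCAB_SIZE, CONTEXT_LEN = 20, 48              # illustrative values

# CNN encoder: turns the GUI screenshot into a fixed-size feature vector
image_in = layers.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
image_feat = layers.Dense(256, activation="relu")(x)

# Language model: encodes the DSL tokens generated so far
tokens_in = layers.Input(shape=(CONTEXT_LEN, VOCAB_SIZE))
token_feat = layers.LSTM(128)(tokens_in)

# Decoder: predicts the next DSL token from image features + token context
merged = layers.concatenate([image_feat, token_feat])
next_token = layers.Dense(VOCAB_SIZE, activation="softmax")(merged)

model = models.Model([image_in, tokens_in], next_token)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")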

Citation

@article{beltramelli2017pix2code,
  title={pix2code: Generating Code from a Graphical User Interface Screenshot},
  author={Beltramelli, Tony},
  journal={arXiv preprint arXiv:1705.07962},
  year={2017}
}

Disclaimer

The following software is shared for educational purposes only. The author and their affiliated institution are not responsible in any manner whatsoever for any damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of the use or inability to use this software.

The project pix2code is a research project demonstrating an application of deep neural networks to generate code from visual inputs. The current implementation is not, in any way, intended or able to generate code in a real-world context. We cannot emphasize enough that this project is experimental and shared for educational purposes only. Both the source code and the datasets are provided to foster future research in machine intelligence and are not designed for end users.

Setup

Prerequisites

  • Python 2 or 3
  • pip

Install dependencies

pip install -r requirements.txt

Usage

Prepare the data:

# reassemble and unzip the data
cd datasets
zip -F pix2code_datasets.zip --out datasets.zip
unzip datasets.zip

cd ../model

# split into training and evaluation sets while ensuring that no training example ends up in the evaluation set
# usage: build_datasets.py <input path> <distribution (default: 6)>
./build_datasets.py ../datasets/ios/all_data
./build_datasets.py ../datasets/android/all_data
./build_datasets.py ../datasets/web/all_data

# transform images in the training set to numpy arrays (pixel values normalized and pictures resized); the resulting files are smaller if you need to upload the set to train your model in the cloud (see the conversion sketch after this block)
# usage: convert_imgs_to_arrays.py <input path> <output path>
./convert_imgs_to_arrays.py ../datasets/ios/training_set ../datasets/ios/training_features
./convert_imgs_to_arrays.py ../datasets/android/training_set ../datasets/android/training_features
./convert_imgs_to_arrays.py ../datasets/web/training_set ../datasets/web/training_features
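
Conceptually, convert_imgs_to_arrays.py boils down to resizing each screenshot and normalizing its pixel values before saving it as a numpy array. The sketch below illustrates that idea only; the 256x256 size, the compressed .npz output, and the "features" key are assumptions, not the script's exact behavior.

# Minimal sketch of screenshot-to-array conversion (assumes OpenCV and numpy);
# the fixed 256x256 size and .npz output format are illustrative choices.
import os
import numpy as np
import cv2

def img_to_array(path, size=256):
    img = cv2.imread(path)                      # load as BGR uint8
    img = cv2.resize(img, (size, size))         # resize to a fixed shape
    return img.astype(np.float32) / 255.0       # normalize pixel values to [0, 1]

def convert_dir(input_dir, output_dir):
    os.makedirs(output_dir, exist_ok=True)
    for name in os.listdir(input_dir):
        if name.endswith(".png"):
            arr = img_to_array(os.path.join(input_dir, name))
            np.savez_compressed(os.path.join(output_dir, name[:-4]), features=arr)

# example: convert_dir("../datasets/web/training_set", "../datasets/web/training_features")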

Train the model:

mkdir bin
cd model

# provide input path to training data and output path to save trained model and metadata
# usage: train.py <input path> <output path> <is memory intensive (default: 0)> <pretrained weights (optional)>
./train.py ../datasets/web/training_set ../bin

# train on images pre-processed as arrays
./train.py ../datasets/web/training_features ../bin

# train with a generator to avoid having to fit all the data in memory (RECOMMENDED; see the sketch after this block)
./train.py ../datasets/web/training_features ../bin 1

# train on top of pretrained weights
./train.py ../datasets/web/training_features ../bin 1 ../bin/pix2code.h5
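
The memory-friendly mode (third argument set to 1) amounts to feeding the model from a Python generator that loads and yields small batches on the fly instead of materializing the whole dataset in memory. The sketch below only illustrates that pattern; the hypothetical encode_gui helper, the file pairing, and the batch size are assumptions, not the repository's actual code.

# Sketch of generator-based training so the full dataset never sits in memory.
# encode_gui(name) is a hypothetical helper yielding (token_context, next_token)
# pairs for one GUI; the .npz layout follows the conversion sketch above.
import os
import numpy as np

def batch_generator(features_dir, encode_gui, batch_size=64):
    names = [n[:-4] for n in os.listdir(features_dir) if n.endswith(".npz")]
    while True:                                   # loop forever, Keras-style
        np.random.shuffle(names)
        for start in range(0, len(names), batch_size):
            images, contexts, targets = [], [], []
            for name in names[start:start + batch_size]:
                arr = np.load(os.path.join(features_dir, name + ".npz"))["features"]
                for context, target in encode_gui(name):
                    images.append(arr)            # same screenshot for every token
                    contexts.append(context)
                    targets.append(target)
            yield [np.array(images), np.array(contexts)], np.array(targets)

# example: model.fit(batch_generator("../datasets/web/training_features", encode_gui),
#                    steps_per_epoch=500, epochs=10)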

Generate code for batch of GUIs:

mkdir code
cd model

# generate DSL code (.gui file), the default search method is greedy
# usage: generate.py <trained weights path> <trained model name> <input image> <output path> <search method (default: greedy)>
./generate.py ../bin pix2code ../gui_screenshots ../code

# equivalent to command above
./generate.py ../bin pix2code ../gui_screenshots ../code greedy

# generate DSL code with beam search and a beam width of size 3
./generate.py ../bin pix2code ../gui_screenshots ../code 3

Generate code for a single GUI image:

mkdir code
cd model

# generate DSL code (.gui file), the default search method is greedy
# usage: sample.py <trained weights path> <trained model name> <input image> <output path> <search method (default: greedy)>
./sample.py ../bin pix2code ../test_gui.png ../code

# equivalent to command above
./sample.py ../bin pix2code ../test_gui.png ../code greedy

# generate DSL code with beam search and a beam width of size 3 (see the decoding sketch after this block)
./sample.py ../bin pix2code ../test_gui.png ../code 3
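
The search-method argument controls how the next DSL token is chosen at each step: greedy decoding always takes the single most probable token, while beam search keeps the k most probable partial sequences and returns the best complete one. The toy sketch below illustrates the difference; predict_next(seq), the <START>/<END> tokens, and the maximum length are hypothetical stand-ins for the trained model, not the repository's sampler.

# Toy greedy vs. beam-search decoding over DSL tokens; predict_next(seq) is a
# hypothetical function returning {token: probability} for the next position.
import heapq
import math

def greedy_decode(predict_next, max_len=150):
    seq = ["<START>"]
    while seq[-1] != "<END>" and len(seq) < max_len:
        probs = predict_next(seq)
        seq.append(max(probs, key=probs.get))     # always take the top token
    return seq

def beam_decode(predict_next, beam_width=3, max_len=150):
    beams = [(0.0, ["<START>"])]                  # (negative log-prob, sequence)
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == "<END>":
                candidates.append((score, seq))   # finished beams carry over
                continue
            for token, p in predict_next(seq).items():
                candidates.append((score - math.log(p), seq + [token]))
        beams = heapq.nsmallest(beam_width, candidates, key=lambda c: c[0])
        if all(seq[-1] == "<END>" for _, seq in beams):
            break
    return min(beams, key=lambda c: c[0])[1]      # best-scoring sequence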

Compile generated code to target language:

cd compiler

# compile .gui file to Android XML UI
./android-compiler.py <input file path>.gui

# compile .gui file to iOS Storyboard
./ios-compiler.py <input file path>.gui

# compile .gui file to HTML/CSS (Bootstrap style); see the toy mapping sketch after this block
./web-compiler.py <input file path>.gui
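
At heart, each compiler walks the token stream of a .gui file and maps every DSL token to a snippet in the target language, using the braces to open and close containers. The sketch below is a toy illustration for HTML output; the token names and HTML templates are simplified assumptions and do not reproduce the actual mapping shipped in compiler/.

# Toy DSL-to-HTML mapping; the real compilers use their own mapping files and
# handle nesting more carefully. Token names and templates here are illustrative.
TOKEN_TO_HTML = {
    "header":     '<div class="header">',
    "row":        '<div class="row">',
    "btn-active": '<button class="btn btn-primary">Button</button>',
    "btn-green":  '<button class="btn btn-success">Button</button>',
    "{":          "",            # container already opened by the previous token
    "}":          "</div>",      # close the current container
}

def compile_gui(gui_path):
    with open(gui_path) as f:
        tokens = f.read().replace(",", " ").split()
    return "\n".join(TOKEN_TO_HTML.get(tok, f"<!-- unknown token: {tok} -->")
                     for tok in tokens)

# example: print(compile_gui("../code/sample.gui"))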

FAQ

Will pix2code support other target platforms/languages?

No, pix2code is only a research project and will stay in the state described in the paper for consistency reasons. This project is really just a toy example, but you are of course more than welcome to fork the repo and experiment with other target platforms/languages yourself.

Will I be able to use pix2code for my own frontend projects?

No, pix2code is experimental and won't work for your specific use cases.

How is the model performance measured?

The accuracy/error reported in the paper is measured at the DSL level by comparing each generated token with each expected token. Any difference in length between the generated token sequence and the expected token sequence is also counted as an error.
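
For example, the DSL-level error described above can be computed roughly as follows (a sketch of the metric, not the repository's exact evaluation code):

# Sketch of the DSL-level error described above: compare tokens position by
# position and count any length mismatch as additional errors.
def token_error(expected, generated):
    errors = sum(1 for e, g in zip(expected, generated) if e != g)
    errors += abs(len(expected) - len(generated))   # length difference counts too
    return errors / max(len(expected), 1)

# token_error(["header", "{", "btn-active", "}"], ["header", "{", "btn-green", "}"])
# -> 0.25  (1 wrong token out of 4)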

How long does it take to train the model?

On an Nvidia Tesla K80 GPU, it takes a little less than 5 hours to optimize the 109 * 10^6 parameters for one dataset, so expect around 15 hours if you want to train the model for all three target platforms.

I am a front-end developer, will I soon lose my job?

(I have genuinely been asked this question multiple times)

TL;DR: AI will not replace front-end developers anytime soon.

Even assuming a mature version of pix2code able to generate GUI code with 100% accuracy for every platform and language in the universe, front-enders will still be needed to implement the logic, the interactive parts, the advanced graphics and animations, and all the features users love. The product we are building at Uizard Technologies is intended to bridge the gap between UI/UX designers and front-end developers, not to replace either of them. We want to rethink the traditional workflow that too often results in more frustration than innovation. We want designers to be as creative as possible to better serve end users, and developers to dedicate their time to programming the core functionality and forget about repetitive tasks such as UI implementation. We believe in a future where AI collaborates with humans instead of replacing them.

More Repositories

 1. Deep-Spying: Spying using Smartwatch and Deep Learning (Python, 186 stars)
 2. Deep-Lyrics: Lyrics Generator aka Character-level Language Modeling with Multi-layer LSTM Recurrent Neural Network (Python, 143 stars)
 3. Air-Kinect-Gesture-Lib: Air Kinect Gesture Library (ActionScript, 52 stars)
 4. Cocos2D-Mask-Shader: Mask sprites with OpenGL ES 2.0 shader in Cocos2D (Objective-C, 27 stars)
 5. Supervised-End-to-end-Weight-sharing-for-StarCraft-II: StarCraft 2 AI Workshop (Python, 21 stars)
 6. Android-NFC-P2P-Communication: Android P2P communication over NFC (Java, 14 stars)
 7. Graphics-And-Vision: Computer graphics and computer vision (eye tracking, projective geometry, stereo vision) (Python, 12 stars)
 8. ExportSQLite: A plugin for MySQLWorkbench to export SQLite files (Lua, 8 stars)
 9. Cocos2D-Chipmunk-Scaffold: A scaffold project for the Cocos2D iPhone Framework and the Chipmunk physics engine (Objective-C, 6 stars)
10. Custom-Simple-Captcha: Simple and flexible captcha system against dumb automated spambots (HTML, 5 stars)
11. The-Web-Copter-Experiment: Hardware-accelerated 3D graphics web experiment - Chrome Experiment (JavaScript, 4 stars)
12. Connected-Mind-Neural-Network: Neuroevolution project implementing Evolutionary Algorithm and Genetic Algorithm (Java, 4 stars)
13. Information-Retrieval-System: Information retrieval system, search engine, document classification, machine learning (Scala, 4 stars)
14. Arduino-Remote-Controlled-Glass-Drum: Remote-controlled Servo motor and LED through Web Server using AJAX asynchronous request (Arduino, 3 stars)
15. Android-Wear-Permissions-Bug: Permissions granted without being explicitly defined in the manifest file (Java, 3 stars)
16. BlackOutGobelins: Student project, an iOS game that brings the user's Facebook data to life (Objective-C, 2 stars)
17. TMXResolutionTool: Command line tool to convert .tmx tile map files and images into different resolutions, with Apple Retina display notation support (Objective-C, 2 stars)
18. Ubiquitous-Media-Sharing-Surface: Exchanging images between smartphones on a shared surface (Java, 1 star)
19. BlorkTheShmurph-GameCore: AIR game, a 35h student project at Les Gobelins school (ActionScript, 1 star)
20. Taco-DSL: The Taco domain specific language to generate surveys (Java, 1 star)