emilwallner / Screenshot-to-code
A neural network that transforms a design mock-up into a static website.
Repository Overview (README excerpt)
**A detailed tutorial covering the code in this repository:** Turning design mockups into code with deep learning.

**Plug:** 👉 Check out my 60-page guide, No ML Degree, on how to land a machine learning job without a degree.

The neural network is built in three iterations: a Hello World version, then the main neural network layers, and finally a version trained to generalize. The models are based on Tony Beltramelli's pix2code, and inspired by Airbnb's sketching interfaces and Harvard's im2markup.

**Note:** only the Bootstrap version can generalize to new design mock-ups. It uses 16 domain-specific tokens that are translated into HTML/CSS, and achieves 97% accuracy. The best model uses a GRU instead of an LSTM, and this version can be trained on a few GPUs. The raw HTML version has the potential to generalize, but it is still unproven and requires a significant amount of GPU power to train. The current model is also trained on a small, homogeneous dataset, so it is hard to tell how well it handles more complex layouts.

Dataset: https://github.com/tonybeltramelli/pix2code/tree/master/datasets

A quick overview of the process:

1) Give a design image to the trained neural network
2) The neural network converts the image into HTML markup
3) Render the output

Installation

FloydHub

Click this button to open a Workspace on FloydHub, where you will find the same environment and dataset used for the *Bootstrap version*. You can also find the trained models for testing.

Local

Go to the desired notebook (files ending in `.ipynb`). To run a model, open the notebook and click Cell > Run all in the menu. The final version, the Bootstrap version, comes prepared with a small dataset for test runs. If you want to try it with all the data, download the dataset here: https://www.floydhub.com/emilwallner/datasets/imagetocode, and specify the correct .
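The Bootstrap version's 16 domain-specific tokens are compiled into HTML/CSS after generation. As a rough illustration of that last step, here is a minimal sketch of a token-to-markup compiler; the token names and HTML snippets below are hypothetical placeholders, not the repository's actual vocabulary:

```python
# Hypothetical sketch: compiling a small DSL token stream into HTML.
# Token names and snippets are illustrative only, not the actual
# vocabulary used in Screenshot-to-code.

TOKEN_TO_HTML = {
    "row": '<div class="row">{}</div>',
    "single": '<div class="col-lg-12">{}</div>',
    "btn-active": '<button class="btn btn-primary">Text</button>',
    "text": "<p>Lorem ipsum</p>",
}

def compile_tokens(tokens):
    """Recursively render a nested token tree into an HTML string."""
    html = []
    for tok in tokens:
        if isinstance(tok, tuple):       # (container_token, children)
            name, children = tok
            html.append(TOKEN_TO_HTML[name].format(compile_tokens(children)))
        else:                            # leaf token
            html.append(TOKEN_TO_HTML[tok])
    return "".join(html)

page = compile_tokens([("row", [("single", ["btn-active", "text"])])])
```

Because each container token wraps its children, a flat token sequence with nesting markers is enough to reconstruct a full Bootstrap grid.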
Folder structure

• Hello World
• HTML
• Bootstrap

Model weights

• Bootstrap (the pre-trained model uses GRUs instead of LSTMs)
• HTML

Acknowledgments

• Thanks to IBM for donating computing power through their PowerAI platform
• The code is largely influenced by Tony Beltramelli's pix2code paper. Code | Paper
• The structure and some of the functions are from Jason Brownlee's excellent tutorial
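Whether the decoder uses a GRU or an LSTM, inference works the same way: the model predicts one markup token at a time, feeding each prediction back in until an end token appears. A framework-free sketch of that greedy loop, with a stubbed `predict_next` standing in for the real CNN-plus-RNN model (all names and the canned sequence here are hypothetical):

```python
# Hypothetical sketch of greedy decoding. A real model would score the
# whole vocabulary given image features and the tokens emitted so far;
# `predict_next` is a stub that walks through a canned sequence.

def predict_next(image_features, tokens):
    canned = ["<START>", "header", "row", "btn-active", "<END>"]
    return canned[min(len(tokens), len(canned) - 1)]

def generate_markup(image_features, max_len=150):
    """Greedily emit tokens until <END> or a length cap is reached."""
    tokens = []
    while len(tokens) < max_len:
        tok = predict_next(image_features, tokens)
        tokens.append(tok)
        if tok == "<END>":
            break
    return tokens

seq = generate_markup(image_features=None)
```

The `max_len` cap matters in practice: without an explicit end token the decoder would otherwise loop forever on degenerate inputs.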