# graviraja / MLOps-Basics
## Repository Overview (README excerpt)
> There is nothing magic about magic. The magician merely understands something simple which doesn’t appear to be simple or natural to the untrained audience. Once you learn how to hold a card while making your hand look empty, you only need practice before you, too, can “do magic.” – Jeffrey Friedl, *Mastering Regular Expressions*

**Note: Please raise an issue for any suggestions, corrections, and feedback.**

The goal of the series is to understand the basics of MLOps: model building, monitoring, configurations, testing, packaging, deployment, CI/CD, etc.

## Week 0: Project Setup

Refer to the Blog Post here.

The project implemented is a simple classification problem. The following tech stack is used:

- Huggingface Datasets
- Huggingface Transformers
- Pytorch Lightning

## Week 1: Model Monitoring - Weights and Biases

Refer to the Blog Post here.

Tracking all the experiments, like tweaking hyper-parameters and trying different models to test their performance, and seeing the connection between the model and the input data will help in developing a better model. The following tech stack is used:

- Weights and Biases
- torchmetrics

References:

- Tutorial on Pytorch Lightning + Weights & Biases
- WandB Documentation

## Week 2: Configurations - Hydra

Refer to the Blog Post here.

Configuration management is necessary for managing complex software systems. Lack of configuration management can cause serious problems with reliability, uptime, and the ability to scale a system.
The following tech stack is used:

- Hydra

References:

- Hydra Documentation
- Simone Tutorial on Hydra

## Week 3: Data Version Control - DVC

Refer to the Blog Post here.

Classical code version control systems are not designed to handle large files, which makes cloning and storing the history impractical; yet large files are very common in machine learning. The following tech stack is used:

- DVC

References:

- DVC Documentation
- DVC Tutorial on versioning data

## Week 4: Model Packaging - ONNX

Refer to the Blog Post here.

Why do we need model packaging? Models can be built using any of the machine learning frameworks available out there (sklearn, tensorflow, pytorch, etc.). We might want to deploy the model in different environments (mobile, web, Raspberry Pi) or run it in a different framework (trained in pytorch, inference in tensorflow). A common file format that lets AI developers use models with a variety of frameworks, tools, runtimes, and compilers helps a lot. This is achieved by the community project ONNX. The following tech stack is used:

- ONNX
- ONNXRuntime

References:

- Abhishek Thakur tutorial on onnx model conversion
- Pytorch Lightning documentation on onnx conversion
- Huggingface Blog on ONNXRuntime
- Piotr Blog on onnx conversion

## Week 5: Model Packaging - Docker

Refer to the Blog Post here.

Why do we need packaging? We might have to share our application with others, and when they try to run it, most of the time it doesn't run due to dependency or OS-related issues; hence the famous quote across engineers, "it works on my machine". So for others to run the application, they have to set up the same environment as it was run on the host side, which means a lot of manual configuration and installation of components.
The solution to these limitations is a technology called Containers. By containerizing/packaging the application, we can run it on any cloud platform to get the advantages of managed services, autoscaling, reliability, and many more. The most prominent tool for packaging applications is Docker 🛳

References:

- Analytics Vidhya blog

## Week 6: CI/CD - GitHub Actions

Refer to the Blog Post here.

CI/CD is a coding philosophy and set of practices with which you can continuously build, test, and deploy iterative code changes. This iterative process helps reduce the chance that you develop new code based on a buggy or failed previous version. With this method, you strive for less human intervention, or even no intervention at all, from the development of new code until its deployment.

In this post, I will be going through the following topics:

- Basics of GitHub Actions
- First GitHub Action
- Creating Google Service Account
- Giving access to Service account
- Configuring DVC to use Google Service account
- Configuring Github Action

References:

- Configuring service account
- Github actions

## Week 7: Container Registry - AWS ECR

Refer to the Blog Post here.

A container registry is a place to store container images. A container image is a file comprised of multiple layers which can execute applications in a single instance. Hosting all the images in one stored location allows users to commit, identify, and pull images when needed.

Amazon Simple Storage Service (S3) is storage for the internet. It is designed for large-capacity, low-cost storage provision across multiple geographical regions.

## Week 8: Serverless Deployment - AWS Lambda

Refer to the Blog Post here.

A serverless architecture is a way to build and run applications and services without having to manage infrastructure.
The application still runs on servers, but all the server management is done by a third-party service (AWS). We no longer have to provision, scale, and maintain servers to run the applications. By using a serverless architecture, developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. In this week, I…
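The serverless deployment described above centers on a Lambda handler function. The sketch below is hypothetical: the event shape assumes invocation through API Gateway, and the model call is stubbed out where the real project would load and run the packaged classifier.

```python
# Hypothetical sketch of an AWS Lambda handler serving a classifier
# behind API Gateway. The model inference is stubbed out; the real
# handler would load the packaged (e.g. ONNX) model and run it.
import json

def predict(text: str) -> dict:
    # Stand-in for model loading + inference; fixed output for illustration.
    return {"label": "positive", "score": 0.98}

def lambda_handler(event, context):
    # API Gateway delivers the request payload as a JSON string in "body"
    body = json.loads(event.get("body") or "{}")
    text = body.get("sentence", "")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(predict(text)),
    }
```

Locally the handler can be exercised with a fake event, e.g. `lambda_handler({"body": json.dumps({"sentence": "great movie"})}, None)`, which is also how unit tests for Lambda functions are typically written.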