Dataset Viewer
Auto-converted to Parquet

Columns: id (string, 12–21 chars), username (string, 6 values), license (string, 6 values), title (string, 34–98 chars), publication_description (string, 4.41k–109k chars)
0CBAR8U8FakE
3rdson
none
How to Add Memory to RAG Applications and AI Agents
![1705674621330.png](1705674621330.png) A few months ago, I built a RAG application, and after building it, I realized I needed to add memory to it before moving it to production. I went on YouTube and searched for videos but couldn't find anything meaningful: the videos I found weren't about adding persistent memory to a production-ready RAG application; they only talked about adding in-memory storage to a RAG application, which is unsuitable for a full-scale application. It was then that I realized I needed to figure things out myself and write a good article that would guide readers through the thought processes and steps needed to add memory to a RAG application or AI agent. Quick Note: If you are building with Streamlit, you can follow this tutorial to find an easy way to add memory to your Streamlit app. --- Prerequisites: 1. Before jumping into the discussion, I assume you already know what RAG is and why it is needed. If you're unfamiliar with this concept, you can read more about it [here](https://www.datacamp.com/blog/what-is-retrieval-augmented-generation-rag). 2. I also assume you already know how to build RAG applications. If you want to learn how to build RAG applications, you can follow my [previous article](https://app.readytensor.ai/publications/how_to_build_rag_apps_with_pinecone_openai_langchain_and_python_sBFzhbX4GpeQ). 3. For this tutorial, I used MongoDB as my traditional database, LangChain as my LLM framework, and OpenAI's GPT-3.5 Turbo as my LLM. But you can use any technologies of your choice once you understand the workflow. 4. To follow along, `pip install` the libraries below. ``` openai python-dotenv langchain-openai pymongo ``` --- ## Now you are good to go ![1-5e5944a1.png](1-5e5944a1.png) --- # What Is Memory and Why Do RAG Applications and AI Agents Need It? Let's use ChatGPT as an example. When you ask ChatGPT a question like `"Who is the current president of America?"`, it will tell you `"Joe Biden"`, and if you then ask `"How old is he?"`, ChatGPT will tell you `"81"`. Now, here is the question: "How was ChatGPT able to relate the second question to the first question and give you the answer you needed without you being so specific in your question?" The simple answer to this is the presence of memory. Just as human beings can easily relate to past experiences or questions, ChatGPT has been built with memory that helps it recognize when you are asking a question related to a previous one. In my simplest definition, and with regards to RAG and AI agents, adding memory to a RAG application means enabling the AI agent to make inferences from previous questions and give you new answers based on the new question, the previous questions, and the previous answers. Now that you know what memory is, the question is: How can I add memory to my RAG application or AI agent? Here is the concept I came up with. Human beings have memory because they all have a brain that stores information, and they can answer and make decisions based on the information (data) stored in their brains. So to achieve this when building an AI agent or a RAG application, you also need to give the RAG application a brain by including the following: 1. A database (for storing the user's questions, the AI's answers, chat IDs, the user's email, etc.) 2. A function that retrieves the user's previous questions whenever a new question is asked 3. 
A function that uses an LLM to check if the current question is related to the previous ones. If it is related, it will create a new stand-alone question using the present question and the previous questions. This question will then be embedded and sent to the vector database or AI agent, depending on what you are building. But if the present question is not related to the past questions, it will send the question as it is. ## Creating a Database for Storing the User's Questions and AI's Answers Below, I used PyMongo to create a MongoDB database so you can see the kind of fields you will need. ```python from pymongo import MongoClient from datetime import datetime from bson.objectid import ObjectId # Connect to MongoDB (modify the URI to match your setup) client = MongoClient("mongodb://localhost:27017/") db = client["your_database_name"] # The name of your database collection = db["my_ai_application"] # The name of the collection # Sample document to be inserted document = { "_id": ObjectId("66c990f566416e871fdd0b43"), # you can omit this to auto-generate "question": "Who is the President of America?", "email": "nnajivictorious@gmail.com", "response": "The current president of the United States is Joe Biden.", "chatId": "52ded9ebd9ac912c8433b699455eb655", "userId": "6682632b88c6b314ce887716", "isActive": True, "isDeleted": False, "createdAt": datetime(2024, 8, 24, 7, 51, 17, 503000), "updatedAt": datetime(2024, 8, 24, 7, 51, 17, 503000) } # Insert the document into the collection result = collection.insert_one(document) print(f"Inserted document with _id: {result.inserted_id}") ``` In the code above, I created a MongoDB connection using MongoClient and connected to a specified database and collection. I then defined a sample document with fields like `question`, `email`, `response`, `chatId`, and `userId`, along with metadata fields such as `isActive`, `isDeleted`, `createdAt`, and `updatedAt` to track each entry's status and timestamps. The `_id` field is assigned using `ObjectId`, which you can omit to let MongoDB auto-generate it. When `insert_one(document)` is called, the document is inserted into the `my_ai_application` collection, and MongoDB returns a unique `_id` for the document, which is printed to confirm the insertion. Make sure you change your connection credentials and other specific information. Now that you have created the database and understood the kind of fields you need, let's see how to use the database to create memory. ## Creating a Function That Retrieves Users' Previous Questions Whenever a New Question Is Asked Below, we define a function that retrieves the user's last 3 questions from the database using the user's email and the chat ID. ```python from typing import List client = MongoClient("mongodb://localhost:27017/") db = client.your_database_name collection = db.my_ai_application # no need to initialize this connection again if you already did it above def get_last_three_questions(email: str, chat_id: str) -> List[str]: """ Retrieves the last three questions asked by a user in a specific chat session. Args: email (str): The user's email address used to filter results. chat_id (str): The unique identifier for the chat session. Returns: List[str]: A list containing the last three questions asked by the user, ordered from most recent to oldest. 
""" query = {"email": email, "chatId": chat_id} results = collection.find(query).sort("createdAt", -1).limit(3) questions = [result["question"] for result in results] return questions # Call the function past_questions = get_last_three_questions("nnajivictorious@gmail.com", "52ded9ebd9ac912c8433b699455eb655") ``` You can change this to retrieve the last five or even ten questions from the user’s database by setting `.limit(5)` or `.limit(10).` But note: These questions, together with the new question will still be passed into a system prompt later. So, you need to make sure you aren’t exceeding the input token size of your LLM. Now that you have defined a function that retrieves the past questions from the database, you need to create a new function that compares the current question with the previous questions and creates a stand-alone question if needed. But if the new question has nothing to do with the previous questions, it will push the user’s question just as it is. Creating a function that creates a standalone question by comparing the new question with the previous questions Below we are going to create a system prompt called new_question_modifier and now use this system prompt within the function we will define. It is this system prompt that does the comparing for us. Check the code below to understand how it works. ```python from langchain_openai import OpenAI from dotenv import load_dotenv # Load your OpenAI API key from .env file load_dotenv() CHAT_LLM = OpenAI() new_question_modifier = """ Your primary task is to determine if the latest question requires context from the chat history to be understood. IMPORTANT: If the latest question is standalone and can be fully understood without any context from the chat history or is not related to the chat history, you MUST return it completely unchanged. Do not modify standalone questions in any way. Only if the latest question clearly references or depends on the chat history should you reformulate it as a complete, standalone legal question. When reformulating: """ def modify_question_with_memory(new_question: str, past_questions: List[str]) -> str: """ Modifies a new question by incorporating past questions as context. This function takes a new question and a list of past questions, combining them into a single prompt for the language model (LLM) to generate a standalone question with sufficient context. If there are no past questions, the new question is returned as-is. Args: new_question (str): The latest question asked. past_questions (List[str]): A list of past questions for context. Returns: str: A standalone question that includes necessary context from past questions. """ if past_questions: past_questions_text = " ".join(past_questions) # Combine the system prompt with the past questions and the new question system_prompt = f"{new_question_modifier}\nChat history: {past_questions_text}\nLatest question: {new_question}" # Get the standalone question using the LLM standalone_question = CHAT_LLM.invoke(system_prompt) else: standalone_question = new_question return standalone_question modified_question = modify_question_with_memory(new_question="your new question here", past_questions=past_questions) ``` The code above creates a stand-alone question using the previous questions, the new question, and the new_question_modifier which is passed into an LLM (OpenAI) - But what do I really mean by a standalone question? 
A stand-alone question is a question that can be understood by the LLM without prior knowledge of the past conversation. Let me explain with an example. Let's assume your first question is `"Who is the president of America?"`, the LLM answers `"Joe Biden"`, and then you ask `"How old is he?"` The question `"How old is he?"` is not a standalone question, because no one can answer it without knowing whom you are talking about. So what the function above does is: it will look at your new question `"How old is he?"` and compare it with the former question `"Who is the president of America?"`. Then the LLM will ask itself, "Is the recent question related to the past questions?" If the answer is yes, it will modify the new question to something like `"How old is the current president of America?"` or `"How old is Joe Biden?"` and return this new question so that it can be embedded and sent to the vector database for similarity search. But if the answer is no, it will pass your question through just as it is. This modified question is called a `stand-alone question` because anyone can understand it even without knowing the previous conversation. I hope this is clear 😁✌️ Finally, after the function has given you the standalone question, you can send it to your embedding model and from there to your vector store for similarity search. Note: All these steps must be in a single pipeline so that the output of one step becomes the input of the next until the user gets their answer (a minimal pipeline sketch is shown at the end of this article). I believe you understand what I'm saying 🤗 Also, don't forget to try out different system prompts and find what works best for your use case. The system prompt I used here is just an example for you to build on. ## In Conclusion I developed this approach after thorough brainstorming, and while it works effectively for the most part, I'd genuinely appreciate any feedback you have. I'd also be grateful if you could share any alternative approaches you've tried that might improve upon it. See you in the comment section, and thank you so much for reading. HAPPY RAGING 🤗🚀 You can always reach me on [X: 3rdSon__](https://x.com/3rdSon__) [LinkedIn: Victory Nnaji](https://www.linkedin.com/in/3rdson/) [GitHub: 3rd-Son](https://github.com/3rd-Son)--DIVIDER--
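To tie the steps above together, here is a minimal sketch of the full memory pipeline. It reuses the `get_last_three_questions` and `modify_question_with_memory` functions and the MongoDB `collection` defined earlier; the `embed_and_search` and `generate_answer` helpers are hypothetical placeholders for your own vector-store lookup and answer generation, so adapt the fields and calls to your setup.

```python
from datetime import datetime, timezone

def answer_with_memory(question: str, email: str, chat_id: str) -> str:
    """Minimal sketch: retrieve memory, build a standalone question, answer, and persist."""
    # 1. Pull the user's recent questions from MongoDB (memory retrieval)
    past_questions = get_last_three_questions(email, chat_id)

    # 2. Rewrite the new question into a standalone question when needed
    standalone_question = modify_question_with_memory(question, past_questions)

    # 3. Embed the standalone question, run similarity search against your vector
    #    store, and generate the final answer with your LLM.
    #    `embed_and_search` and `generate_answer` are hypothetical helpers here.
    context_chunks = embed_and_search(standalone_question)
    answer = generate_answer(standalone_question, context_chunks)

    # 4. Persist the exchange so it becomes memory for the next turn
    collection.insert_one({
        "question": question,
        "response": answer,
        "email": email,
        "chatId": chat_id,
        "isActive": True,
        "isDeleted": False,
        "createdAt": datetime.now(timezone.utc),
        "updatedAt": datetime.now(timezone.utc),
    })
    return answer
```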
0hkuicWh2tKk
regmi.prakriti24
Hands-On Computer Vision: Build Production-Grade Models in an Hour
:::youtube[Title]{#8em2GBD0H8g} --DIVIDER-- --- --DIVIDER--# Learning Objectives > *In this notebook, we will explore the practical implementations of some primal CV tasks like image classification, image segmentation, and object detection using modern computer vision techniques leveraging some popular pre-trained models.* By the end of this session, you will be able to: 1) Understand the applications of image classification, segmentation, and object detection. <br> 2) Use pre-trained models to perform these tasks with minimal setup. <br> 3) And, visualize the outputs of pre-trained models for test analysis. <br> --DIVIDER--# Prerequisites To ensure participants can fully engage and benefit from this workshop, the following are recommended: 1. **Basic Understanding of Python:** Familiarity with Python programming, including syntax, data structures, and basic libraries like numpy and matplotlib. 2. **Google Account:** You'll need a Google account to access and run the Colab notebook we'll be using during the webinar. 3. **Basic Understanding of Deep Learning:** No advanced expertise needed, but a basic grasp of how CNNs process images would be helpful. All required libraries and dependencies are pre-installed in the Colab environment. --DIVIDER--:::info{title="Webinar Resources"} 📝 To follow along with this webinar: 1. Use our [Google Colab Notebook](https://colab.research.google.com/drive/1oGzv7q9PqnlNMj0i0pu2ZtvEi-GkoG4N) - Sign in with your Google account - Click "Copy to Drive" to create your own editable version - All required libraries are pre-installed in Colab 2. For later reference, check our [GitHub Repository](https://github.com/readytensor/rt-cv-2024-webinar) which is also linked in the **Models** section of this webinar publication. - Contains complete code base - Additional code examples and resources - Extended documentation The presentation slides used in this webinar are also available in the **Resources** section as **"Ready Tensor Computer Vision Webinar.pdf"**. We recommend using the Colab notebook during the code review section for the smoothest experience! ::: Now, let's dive into computer vision!--DIVIDER-- --- --DIVIDER--# Image Classification Image classification is the task of identifying what's in an image by assigning it a label from a set of predefined categories. For example, determining if a photo contains a dog, cat, car, or person. When implementing image classification, you have several approaches: 1. **Build your own models from scratch** - giving you full control but requiring extensive training data and computational resources 2. **Use pre-trained models** - leveraging models already trained on large datasets like ImageNet 3. **Fine-tune pre-trained models on your specific dataset** - combining the best of both worlds For most real-world applications, using pre-trained models (approach #2) is the smart choice. These models have already learned to recognize a wide variety of visual features, allowing you to: - Get started quickly without extensive training data - Save significant time and computing resources - Often achieve better results than training from scratch In this tutorial, we'll use a pre-trained model to classify images. If you're interested in training a model on your own dataset, check out the resources section for a detailed guide on transfer learning and fine-tuning. Let's get started! 
👇--DIVIDER--**Importing the Libraries** ```python import tensorflow as tf import matplotlib.pyplot as plt import glob import os import cv2 import random import json import numpy as np from PIL import Image ``` <br> --DIVIDER--**Accessing the Data** ```python base_dataset_path = os.path.join("WebinarContent", "Datasets") classification_data_samples = "ImageClassification" images = os.path.join(base_dataset_path, classification_data_samples) image_paths = [os.path.join(base_dataset_path, classification_data_samples, x) for x in os.listdir(images) ] # Sort files for consistent ordering image_paths.sort() ```--DIVIDER--**Visualizing Test Sample** We will use the French Bulldog image for prediction. Let's load and display it. ```python plt.figure(figsize=(8,4)) image = plt.imread(image_paths[0]) plt.imshow(image) plt.axis("off") ```--DIVIDER--![FrenchBullDog.jpg](FrenchBullDog.jpg)--DIVIDER--<br> ## Inception V3 for Image Classification InceptionV3, introduced by **Google** in 2015, is a successor to InceptionV1 and V2. It is a convolutional neural network designed for high accuracy in image classification while being computationally efficient. The model uses convolutional, pooling, and inception modules, with inception blocks enabling the network to learn features at multiple scales using filters of varying sizes. Before we move ahead, let's take a look at the images the model has been trained on. --DIVIDER--**Accessing Inception V3 Model Labels**--DIVIDER--```python class_index_file = "WebinarContent/ModelConfigs/imagenet_class_index_file.json" with open(class_index_file, 'r') as f: class_mapping = json.load(f) ``` --DIVIDER--```python class_names = [class_mapping[str(i)][1] for i in range(len(class_mapping))] print(f"Total Classes: {len(class_names)}") ``` ```bash > Total Classes: 1000 ```--DIVIDER--**Visualization of The Classes**--DIVIDER--```python random.shuffle(class_names) num_rows, num_cols = 2, 3 fig, ax = plt.subplots(num_rows, num_cols, figsize=(7, 2.5)) fig.suptitle("Sample ImageNet Classes", fontsize=12) for i, ax in enumerate(ax.flat): if i < len(class_names[:6]): ax.text(0.5, 0.5, class_names[i], ha='center', va='center', fontsize=10) ax.set_xticks([]) ax.set_yticks([]) else: ax.axis('off') ``` --DIVIDER-- ![ImageNet Classes.png](ImageNet%20Classes.png)--DIVIDER--InceptionV3 was trained on the **ImageNet** dataset, a large-scale dataset commonly used for image classification tasks that consists of over **1.2 million labeled images** across **1,000 categories**, ranging from animals and plants to everyday objects and scenes. --DIVIDER--### Loading the Inception V3 Model--DIVIDER--```python from tensorflow.keras.applications import InceptionV3 from tensorflow.keras.applications.inception_v3 import preprocess_input ```--DIVIDER--```python inception_v3_model = InceptionV3(weights='imagenet') ```--DIVIDER--**Model Input Size Check** Knowing the image shape is crucial for preprocessing, model compatibility, resource management (memory), and ensuring the model performs optimally with the given data. 1. **Model Compatibility**: Most models, including InceptionV3, expect input images of a specific shape (e.g., 299x299x3 for InceptionV3). **If the images fed into the model don't match this expected shape, the model will throw an error.** Therefore, knowing the image shape ensures that the images are preprocessed correctly to fit the model's requirements. 2. **Data Preprocessing:** Knowing the expected input shape helps in resizing images properly. 
If an image is too large or too small, resizing it to the required dimensions is necessary for consistent model performance. 3. **Memory and Computational Efficiency:** The shape of the image affects the amount of memory required to store the data. Larger images (higher resolution) require more memory. For instance, images of shape (299, 299, 3) will take up less memory than images of shape (512, 512, 3). --DIVIDER-- ```python print(inception_v3_model.input_shape) ``` ``` > (None, 299, 299, 3) ```--DIVIDER--Here, the input shape **(None, 299, 299, 3)** means the InceptionV3 model expects input images of size 299x299 pixels with 3 color channels (RGB). This shape is consistent with the pre-trained InceptionV3 model, which is designed to work with color images resized to 299x299 pixels.--DIVIDER--### Image Preprocessing--DIVIDER--```python image_paths[0] > Datasets\ImageClassification\FrenchBullDog.jpg ``` --DIVIDER--```python tf_image = tf.io.read_file(image_paths[0]) # reading image decoded_image = tf.image.decode_image(tf_image) # decode the image into a tensor image_resized = tf.image.resize(decoded_image, inception_v3_model.input_shape[1:3]) # resizing the image to match the expected input shape of the model image_batch = tf.expand_dims(image_resized, axis = 0) # add an extra dimension to the image image_batch = preprocess_input(image_batch) # preprocess the image to match the input format ```--DIVIDER--### Prediction With Inception V3--DIVIDER--```python model_prediction = inception_v3_model(image_batch) decoded_model_prediction = tf.keras.applications.imagenet_utils.decode_predictions( preds = model_prediction, top = 1 ) print("Predicted Result: {} with confidence {:5.2f}%".format( decoded_model_prediction[0][0][1],decoded_model_prediction[0][0][2]*100)) plt.imshow(Image.open(image_paths[0])) plt.axis('off') plt.show() ``` ![out3.png](out3.png)--DIVIDER--> Oops! Here the sunglasses overruled :(. But we can always use a model more specialized for our use case! --DIVIDER-- --- --DIVIDER--## Using a Specialized Pre-trained Model When a pre-trained model doesn't deliver satisfactory results for your specific use case, one option is to fine-tune the model or explore other sources that provide fine-tuned models. Fine-tuning allows you to adapt a model trained on large datasets to perform better on your specific data by updating only the last few layers of the model. Here are some sources you can utilize: 1. TensorFlow Hub 2. Hugging Face Model Hub 3. Keras Applications 4. Facebook AI Research 5. 
ReadyTensor's Model Hub Let's try using one of the fine-tuned models for our purpose.--DIVIDER--```python from transformers import AutoImageProcessor, AutoModelForImageClassification ```--DIVIDER--```python image_processor = AutoImageProcessor.from_pretrained("jhoppanne/Dogs-Breed-Image-Classification-V2") model = AutoModelForImageClassification.from_pretrained("jhoppanne/Dogs-Breed-Image-Classification-V2") ```--DIVIDER--This model is a fine-tuned version of `microsoft/resnet-152` on the **Stanford Dogs** dataset, achieving: ``` Loss: 1.0115 Accuracy: 84.08 % ``` Source: [Model Source](https://huggingface.co/jhoppanne/Dogs-Breed-Image-Classification-V2) --DIVIDER--```python image = Image.open("WebinarContent/Datasets/ImageClassification/FrenchBullDog.jpg") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print(f"Predicted class: {model.config.id2label[predicted_class_idx]}.") plt.imshow(image) ``` ![out4.png](out4.png)--DIVIDER-- > There you go! Just what we needed :) --DIVIDER-- --- --DIVIDER--# Object Detection Object Detection is a computer vision task that involves identifying and localizing objects within an image or video. It not only classifies objects but also uses bounding boxes to pinpoint their positions. **Key Components:** <br> 1. Localization: Identifies the object's position with a bounding box. 2. Classification: Labels the object (e.g., "dog," "car"). 3. Confidence Score: The probability that the prediction is correct. **Techniques:** **Two-Stage Models (e.g., Faster R-CNN):** Generate region proposals first and classify them second, offering high accuracy but slower speeds. <br> **One-Stage Models (e.g., YOLO, SSD):** Predict everything in one pass, fast and suitable for real-time applications but may sacrifice some accuracy.--DIVIDER--> Let's try using a pretrained YOLO model from **Ultralytics** for our object detection task --DIVIDER--:::info{title="Info"} **Why Use YOLO from Ultralytics?** Ultralytics **YOLO** models are optimized for fast and accurate inferencing, ideal for real-time tasks like object detection and segmentation. Pre-trained models can be deployed on edge devices and support formats like ONNX and TensorFlow Lite for versatile usage. So, we look forward to leveraging it. :::--DIVIDER--### YOLOv11 for Object Detection--DIVIDER--**Loading Libraries** ```python #!pip install ultralytics ```--DIVIDER--**Loading the YOLO Module** ```python from ultralytics import YOLO ```--DIVIDER--### Loading The Model ```python yolo11_model = YOLO(os.path.join("WebinarContent", "Models", "yolov11m.pt")) ```--DIVIDER--**Visualization of The Classes**--DIVIDER--```python yolo11_classes = yolo_classes = list(yolo11_model.names.values()) random.seed(0) random.shuffle(yolo11_classes) num_rows, num_cols = 3, 6 fig, ax = plt.subplots(num_rows, num_cols, figsize=(8, 2)) fig.suptitle("Sample Yolo11 Classes", fontsize=12) for i, ax in enumerate(ax.flat): if i < len(yolo11_classes[:18]): ax.text(0.5, 0.5, yolo11_classes[i], ha='center', va='center', fontsize=8) ax.set_xticks([]) ax.set_yticks([]) else: ax.axis('off') ```--DIVIDER-- ![sample_yolo_classes.png](sample_yolo_classes.png)--DIVIDER--**Visualizing Test Sample** We will use a road traffic image for object detection. 
Let's visualize it first.--DIVIDER--```python test_image_path = "/content/WebinarContent/Datasets/ObjectDetection/roadTraffic.png" ```--DIVIDER--```python test_image = Image.open(test_image_path) fig, ax = plt.subplots() ax.imshow(test_image) ``` ![out6.png](out6.png)--DIVIDER--### Making Predictions With YOLOv11--DIVIDER--```python results = yolo11_model.predict(test_image_path) ``` image 1/1 WebinarContent\Datasets\ObjectDetection\roadTraffic.png: 352x640 3 persons, 8 cars, 339.6ms Speed: 4.0ms preprocess, 339.6ms inference, 2.0ms postprocess per image at shape (1, 3, 352, 640) --DIVIDER--```python print(f"The number of objects detected in the image is: {len(results[0].boxes)}") ``` The number of objects detected in the image is: 11--DIVIDER--### Visualizing Object Detection Results --DIVIDER--```python prediction_coordinates = [] predictions = [] for box in results[0].boxes: class_id = results[0].names[box.cls[0].item()] predictions.append(class_id) cords = box.xyxy[0].tolist() cords = [round(x) for x in cords] prediction_coordinates.append(cords) conf = round(box.conf[0].item(), 2) ``` --DIVIDER--```python fig, ax = plt.subplots() ax.imshow(test_image) for i, bbox in enumerate(prediction_coordinates): rect = plt.Rectangle((bbox[0], bbox[1]), bbox[2] - bbox[0], bbox[3] - bbox[1], linewidth=2, edgecolor='r', facecolor='none') ax.add_patch(rect) ax.text(bbox[0], bbox[1] - 10, f'{predictions[i]}', color='b', fontsize=6, backgroundcolor='none') plt.show() ``` ![out7.png](out7.png)--DIVIDER--This way, using a pre-trained **YOLOv11** model for **car detection** in traffic management, we can gain several benefits: 1. **Automatic Traffic Analysis**: It can count the number of vehicles, detect traffic jams, and measure the speed of cars, enabling smart traffic lights and dynamic traffic management. 2. **Parking Management**: YOLOv11 can help in detecting available parking spots by identifying parked cars in parking lots, improving the user experience in urban areas. And more. These applications can significantly enhance traffic management, improve road safety, and optimize urban planning.--DIVIDER-- > **Let's take it up a level next!** --DIVIDER-- --- --DIVIDER--# Image Segmentation This is an advanced use case where the model is applied to segment objects in an image, rather than just detecting them. Unlike traditional object detection, segmentation involves classifying each pixel in an image, allowing precise boundaries for objects like cars, people, or buildings. The training of **Object Detection** and **Image Segmentation** models differs mainly in the output and data requirements. Object detection models, like YOLO, produce **bounding boxes** around objects and assign class labels, requiring annotations that specify object locations. Segmentation models, like YOLOv8-Seg, generate **pixel-wise masks**, assigning a class to each pixel in the image, requiring more detailed pixel-level annotations. 
While object detection typically uses simpler loss functions (e.g., bounding box and classification loss) and is less computationally expensive, image segmentation is more resource-intensive, requiring more complex models and loss functions (e.g., Dice loss) to provide precise object boundaries.--DIVIDER--### Loading the YOLOv11-seg Model--DIVIDER--```python segmentation_model = YOLO(os.path.join("WebinarContent", "Models", "yolov11m-seg.pt")) ```--DIVIDER--**Loading The Test Image**--DIVIDER--```python segmentation_test_image_path = os.path.join("WebinarContent", "Datasets", "ImageSegmentation", "beatles.png") img = cv2.cvtColor(cv2.imread(segmentation_test_image_path, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB) plt.imshow(img) ``` ![out8.png](out8.png)--DIVIDER--**Accessing the Model Labels**--DIVIDER--```python yolo_seg_classes = list(segmentation_model.names.values()) classes_ids = [yolo_classes.index(clas) for clas in yolo_seg_classes] ```--DIVIDER--### Inferencing With YOLOv11-seg--DIVIDER--```python conf = 0.5 # setting threshold results = segmentation_model.predict(img, conf=conf) ```--DIVIDER--### Visualizing Segmentation Results--DIVIDER--```python colors = [random.choices(range(256), k=3) for _ in classes_ids] person_class_id = 0 for result in results: for mask, box in zip(result.masks.xy, result.boxes): points = np.int32([mask]) class_id = int(box.cls[0]) if (class_id == person_class_id ): cv2.polylines(img, points, True, (255, 0, 0), 1) color_number = classes_ids.index(int(box.cls[0])) cv2.fillPoly(img, points, colors[color_number]) plt.imshow(img) ``` ![out9.png](out9.png)--DIVIDER--Voila! You did it!--DIVIDER--# Conclusion As we have demonstrated in this hands-on session, building production-grade computer vision systems is now achievable within an hour thanks to pre-trained models like InceptionV3 and YOLO. By leveraging these powerful models, we can quickly implement complex tasks from image classification to segmentation, making advanced computer vision capabilities readily accessible for real-world applications. --DIVIDER--# Exercises Here are some exercises to help you practice and extend what you've learned. They are arranged in increasing order of difficulty: ## 1. Model Comparison (Beginner) Try using ResNet50 instead of InceptionV3 for image classification (a minimal starter sketch is shown below): - Load the pre-trained ResNet50 model - Run inference on the same images - Compare the predictions and confidence scores - Which model performs better for our dog breed images? ## 2. YOLO Performance Analysis (Intermediate) Experiment with different YOLO model sizes: - Try all 5 variants - Measure and compare inference times and GPU memory usage - Analyze the trade-off between speed and accuracy - Which size would you choose for a real-time application? ## 3. Object Tracking (Intermediate) Implement object tracking in a video: - Use YOLO's tracking feature with ByteTrack - Display unique IDs for each detected object - Track objects across frames - Bonus: Add motion trails that fade over time (last 1-2 seconds of movement) ## 4. Video Segmentation with Tracking (Advanced) Combine segmentation and tracking in a video pipeline: - Load and process video files frame by frame - Apply segmentation to each frame - Track segmented objects across frames - Create an output video showing both masks and tracking IDs Tips and starter code for each exercise are available in the GitHub repository. 
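If you want a quick starting point for Exercise 1, here is a minimal sketch that swaps ResNet50 in place of InceptionV3. It assumes the same French Bulldog image path used earlier in this walkthrough; adjust the path and loop over your other test images as needed.

```python
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions

# Load the pre-trained ResNet50 model with ImageNet weights
resnet50_model = ResNet50(weights="imagenet")
print(resnet50_model.input_shape)  # (None, 224, 224, 3) -- smaller input than InceptionV3's 299x299

# Same preprocessing steps as before, but resized to ResNet50's expected input
# size and using ResNet50's own preprocess_input
image_path = "WebinarContent/Datasets/ImageClassification/FrenchBullDog.jpg"  # adjust to your setup
tf_image = tf.io.read_file(image_path)
decoded_image = tf.image.decode_image(tf_image)
image_resized = tf.image.resize(decoded_image, resnet50_model.input_shape[1:3])
image_batch = preprocess_input(tf.expand_dims(image_resized, axis=0))

# Top-1 prediction, for comparison against the InceptionV3 result above
preds = resnet50_model.predict(image_batch)
decoded = decode_predictions(preds, top=1)
print("ResNet50 predicted: {} with confidence {:5.2f}%".format(decoded[0][0][1], decoded[0][0][2] * 100))
```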
Feel free to share your project work in a publication on Ready Tensor!--DIVIDER-- <br> ### Additional Reading Materials - Detailed overview on [AlexNet](https://papers.nips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf), [Inception](https://arxiv.org/pdf/1409.4842), [YOLO](https://arxiv.org/pdf/1506.02640) - References to some popular data hubs [IEEE DataPort](https://ieee-dataport.org/datasets), [Hugging Face Dataset Hub](https://huggingface.co/datasets), [Ready Tensor Dataset Hub](https://app.readytensor.ai/datasets) - Guidelines to getting started with Frameworks : [Tensorflow](https://www.tensorflow.org/api_docs), [Pytorch](https://pytorch.org/docs/stable/index.html), [Ultralytics](https://www.ultralytics.com/)
0llldKKtn8Xb
ready-tensor
cc-by
The Open Source Repository Guide: Best Practices for Sharing Your AI/ML and Data Science Projects
![repo-hero-cropped.jpg](repo-hero-cropped.jpg) <p align="center"><em>Image credit: https://www.pexels.com</em></p> --DIVIDER-- # Abstract This article presents a comprehensive framework for creating and structuring AI/ML project repositories that maximize accessibility, reproducibility, and community benefit. We introduce a three-tiered evaluation system, namely Essential, Professional, and Elite, to help practitioners assess and improve their code repositories at appropriate levels of rigor. The framework encompasses five critical categories: Documentation, Repository Structure, Environment and Dependencies, License and Legal considerations, and Code Quality. Drawing from industry standards and best practices, we provide concrete criteria, common pitfalls, and practical examples that enable AI practitioners, researchers, and students to create repositories that serve as valuable resources for both their creators and the wider community. By implementing these practices, contributors can enhance their professional portfolios while simultaneously advancing open science principles in the AI landscape. --DIVIDER--# Introduction AI and machine learning have advanced dramatically through open collaboration. The field thrives on shared knowledge, with researchers and practitioners expected to contribute their work openly. For many, public repositories serve dual purposes: showcasing personal expertise and advancing collective understanding. Yet most shared repositories fall far short of professional standards that would make them truly valuable to the community. Take a moment to examine two different AI project repositories implementing the same ResNet18 image classification model: **Repository A**: https://github.com/readytensor/rt_img_class_jn_resnet18_exampleA **Repository B**: https://github.com/readytensor/rt_img_class_jn_resnet18_exampleB **What did you notice?** The README for Repository A contains only a brief description of the project. With no additional information related to prerequisites, installation, implementation, and usage, visitors cannot determine how to use it or whether it's trustworthy. Most visitors spend less than 30 seconds on Repository A before moving on. Repository B provides clear organization and proper documentation. Visitors immediately understand what the project does and have enough information to use it effectively. Though both repositories contain the same technical work, one presents it in a way that builds trust and facilitates adoption. **Which repository would you want your name attached to?** The reality is that many AI/ML projects resemble Repository A. This is a missed opportunity to showcase the work effectively and benefit the community. A poorly created repository creates a negative impression that can impact career opportunities, collaboration potential, and project adoption. This article presents a comprehensive framework to help you create repositories that are not just functional but truly valuable — repositories that answer four crucial questions for visitors: 1. **What is this about?** (Clear communication of purpose and capabilities) 2. **Why should I care?** (Value proposition and applications) 3. **Can I trust it?** (Demonstrated professionalism and quality) 4. **Can I use it?** (Clear instructions and appropriate licensing) We organize best practices into five categories with three tiers of implementation (Essential, Professional, and Elite), allowing you to match your effort to project needs and resource constraints. 
Whether you are a student showcasing class projects, a researcher publishing code alongside a paper, or a professional building tools for broader use, these guidelines will help you create repositories that enhance your professional portfolio and contribute meaningfully to the field.--DIVIDER--:::info{title="Info"} **Important Note** 1. While this framework primarily targets AI/ML and data science projects, most concepts apply to software development repositories in general. The principles of good documentation, organization, and reproducibility benefit all code projects regardless of domain. 2. Many criteria in the current framework are specifically designed for Python-based implementations, reflecting its prevalence in AI/ML work. Future iterations will expand to address the unique requirements of other languages such as R, JavaScript, and others. 3. This article focuses on repository structure and sharing practices, not on AI/ML methodology itself. Even the most technically sound AI/ML project may fail to gain community adoption if it cannot be easily understood, trusted, and used by others. We aim to help you effectively share your work, not instruct you on how to conduct that work in the first place. :::--DIVIDER-- # Why Well-Organized Repositories Matter For AI/ML engineers and data scientists, the quality of your code repositories directly impacts your work efficiency, career progression, and contribution to the community in three fundamental ways: ![repo-best-practices-benefits.jpg](repo-best-practices-benefits.jpg) **Time Savings Through Enhanced Usability** Well-structured repositories dramatically improve your own productivity by making your work reusable and maintainable. When you properly document and organize code, you avoid spending hours rediscovering how your own implementations work months later. Data scientists frequently report spending more time understanding and fixing old code than writing new solutions. Clean dependency management prevents environment reconstruction headaches, allowing you to immediately resume work on interesting problems rather than debugging configuration issues. This organization also makes your code extensible—when you want to build on previous work, add features, or adapt models to new datasets, the foundation is solid and understandable. **Career Advancement Through Professional Demonstration** Your repositories serve as concrete evidence of your professional capabilities. Hiring managers and potential collaborators regularly evaluate GitHub profiles when assessing candidates, often placing repository quality on par with technical skills. A well-organized repository demonstrates not just coding ability but also production readiness, attention to detail, and consideration for users - all qualities highly valued in professional settings. Many data scientists find that quality repositories lead to unexpected opportunities: conference invitations, collaboration requests, and interview offers frequently come from people who discovered their well-structured work. In a field where practical implementation matters as much as theoretical knowledge, your repositories form a crucial part of your professional identity. **Community Impact Through Accessible Knowledge** The collective advancement of AI/ML depends on shared implementations and reproducible research. When you create quality repositories, you help others avoid reinventing solutions to common problems, allowing the field to progress more rapidly. 
Consider the frustration you have experienced trying to implement papers with missing details or the hours spent making someone else's code work. Your well-organized repository prevents others from facing these same challenges. Repositories that clearly answer what the project does, why it matters, whether it can be trusted, and how to use it become valuable community resources rather than one-time demonstrations. Every properly structured repository contributes to building a more collaborative, efficient AI ecosystem. Investing time in repository quality is not about perfectionism — it is about practical benefits that directly affect your daily work, career trajectory, and impact on the field. The framework presented in this article provides a structured approach to realizing these benefits in your own projects. --DIVIDER--# Best Practices Framework The AI repository best practices framework provides a structured approach to organizing and documenting code repositories for AI and machine learning projects. It establishes clear standards across five critical categories, with tiered implementation levels to accommodate different project stages and requirements. ## Framework Structure The framework organizes best practices into five main categories: 1. **Documentation**: The written explanations and guides that help users understand and use your project 2. **Repository Structure**: The organization of directories and files within your repository 3. **Environment and Dependencies**: The specification of software requirements and configuration needed to run your code 4. **License and Legal**: The permissions and terms governing the use of your code and associated assets 5. **Code Quality**: The technical standards and practices applied to your codebase Each category contains specific criteria that can be assessed to determine if a repository meets established standards. Rather than presenting these as an all-or-nothing requirement, the framework defines three progressive tiers of implementation: ## Implementation Tiers The best practices framework is structured into three tiers of implementation - Essential, Professional, and Elite. You can select the tier that aligns with your project goals, audience expectations, and available resources. 
![implementation-tiers.jpg](implementation-tiers.jpg) | **Tier** | **Definition** | **Key Characteristics** | **Appropriate For** | |------|-------------|---------------------|-----------------| | **Essential** | Minimum standards for usefulness | • Basic understandability for first-time visitors<br>• Sufficient information for technical users<br>• Basic organizational structure | • Personal projects<br>• Course assignments<br>• Early-stage research code<br>• Proof-of-concept implementations | | **Professional** | Comprehensive documentation and organization | • Detailed guidance for various users<br>• Consistent structure and organization<br>• Complete environment specifications<br>• Established coding standards<br>• Testing frameworks and documentation | • Team projects<br>• Open-source projects with contributors<br>• Published research code<br>• Professional portfolio work<br>• Small production-quality projects | | **Elite** | Best-in-class practices | • Comprehensive project documentation<br>• Meticulous logical structures<br>• Robust dependency management<br>• Complete legal compliance<br>• Advanced quality assurance | • Major open-source projects<br>• Production-level repositories<br>• Research code for broad adoption<br>• Reference implementations | The tiered structure allows for incremental implementation, with each level building on the previous one. This progressive approach makes the framework accessible to projects of different scales and maturity levels. The framework is not prescriptive about specific technologies or tools, focusing instead on the underlying principles of good repository design. This flexibility allows it to be applied across different programming languages, AI/ML frameworks, and project types. Each criterion in the framework is designed to be objectively assessable, making it possible to evaluate repositories systematically. This assessment can be conducted manually or through automated tools that check for the presence of specific files, structural patterns, or documentation elements. In the following sections, we will explore each category in detail, examining specific criteria, providing examples, and offering implementation guidance for each tier. --DIVIDER--## Documentation Documentation is the foundation of a user-friendly repository, serving as the primary interface between your code and its potential users. Well-crafted documentation answers fundamental questions about your project: what it does, why it matters, how to use it, and what to expect from it. Unfortunately, documentation is often treated as an afterthought, creating immediate barriers to adoption. The following chart lists the common pitfalls in documentation. ![documentation-pitfalls.jpg](documentation-pitfalls.jpg) Many repositories suffer from missing or minimal README files, leaving users with no understanding of project purpose or functionality. Others lack clear installation instructions, causing users to encounter confusing errors during setup. Without usage examples, users cannot verify if the implementation meets their needs. Undocumented prerequisites and methodologies further compound these issues, leaving critical information hidden until users encounter mysterious failures. The documentation component of our framework addresses these challenges through a structured approach that scales with project complexity. 
The following chart lists the criteria for Essential, Professional, and Elite documentation tiers, guiding you to create effective documentation that meets user needs at every level. --DIVIDER-- ![Documentation.svg](Documentation.svg) Detailed definitions of each of the criteria are provided in the document titled `Ready Tensor Repository Assessment Framework v1.pdf` available in the **Resources** section of this publication. --DIVIDER--Let's explore the key principles of documentation at each tier. **Essential Documentation** provides the minimum information needed for basic understanding and use. It answers "What is this project?", "Can I use it?", and "How do I use it?" — enabling quick evaluation and adoption with minimal friction. **Professional Documentation** supports serious adoption by providing comprehensive setup instructions, detailed usage guides, and technical specifications. It addresses users who plan to incorporate your work into their projects, answering "How does this work under different conditions?" and "What configuration options exist?" Professional documentation also demonstrates trustworthiness for production environments by incorporating testing procedures, error handling approaches, and other reliability features that signal production readiness. **Elite Documentation** fosters a sustainable ecosystem around your project through contribution guidelines, change tracking, and contact information. It creates pathways for collaboration, answering "How can I contribute?" and "How is this project evolving?" Effective documentation transforms your repository from personal code storage into a valuable community resource, significantly increasing your project's accessibility, adoption, and impact regardless of its scale. --DIVIDER--## Repository Structure A well-organized repository structure provides a solid foundation for your AI/ML project, making it easier for users to navigate, understand, and contribute to your code. Proper structure serves as a visual map of your project's architecture and components, guiding users through your implementation. Poorly organized AI/ML repositories create significant barriers to understanding and use. The following chart illustrates common pitfalls in repository structure. ![repo-structure-pitfalls.jpg](repo-structure-pitfalls.jpg) AI/ML project repositories often exhibit a chaotic root directory filled with dozens of unrelated files, making it difficult to identify entry points or understand the project's organization. Code, configuration, and data files might be randomly mixed together without logical separation. Inconsistent or confusing naming conventions create additional cognitive load for new users trying to understand the codebase. Many repositories also lack clear boundaries between different components, such as model definition, data processing, and evaluation code. To address repository organization challenges, our framework offers systematic guidelines that adapt to project size. The chart below presents Essential, Professional, and Elite structure criteria, designed to help you create intuitive and maintainable organization. ![Repository Structure Criteria.svg](Repository%20Structure%20Criteria.svg) Let's explore the key principles of repository structure at each tier. **Essential Structure** provides the minimum level of organization needed for basic navigation and understanding. 
It establishes a basic modular organization with logical separation of files, consistent and descriptive naming conventions for files and directories, a properly configured .gitignore file, and clearly identifiable entry points. This level focuses on answering "Where do I find what I need?" and "How do I start using this?" **Professional Structure** enhances navigability and maintainability through specific separation of components. It organizes code in dedicated module structures (such as src/ directories with submodules), places data in designated directories, separates configuration from code, and organizes notebooks, tests, documentation, and assets in their own logical locations. Professional repositories maintain appropriate directory density (under 15 files per directory) and reasonable directory depth (no more than 5 levels deep). They also properly isolate environment configuration files and dependency management structures. This level signals that the project is built for serious use and collaboration. **Elite Structure** builds on the Professional tier with the same organizational principles applied at a higher standard of consistency and completeness. The Elite structure maintains all the same criteria as Professional repositories but with greater attention to detail and thoroughness across all components. This comprehensive organization demonstrates adherence to industry best practices, making the project immediately familiar to experienced developers. A thoughtfully designed repository structure communicates professionalism and attention to detail, significantly reducing the barrier to entry for new users while improving maintainability for contributors. It transforms your repository from a personal collection of files into an accessible, professional software project that others can confidently build upon. --DIVIDER--## Environment and Dependencies Proper environment and dependency management is critical for ensuring that AI/ML projects can be reliably reproduced and used by others. This aspect of repository design directly impacts whether users can successfully run your code without frustrating setup issues or unexpected behavior. Many repositories fail to adequately address environment configuration, leading to the infamous "works on my machine" problem. The following chart highlights common pitfalls in environment and dependency management.--DIVIDER-- ![environment-and-dependency-pitfalls.jpg](environment-and-dependency-pitfalls.jpg)--DIVIDER--Dependency management problems appear when repositories fail to specify required libraries clearly, forcing users to guess which packages they need. When dependencies do appear, they often lack version numbers, creating compatibility problems as package APIs evolve. Missing documentation about Python version requirements or hardware dependencies leads to confusing errors when users attempt to run code in unsuitable environments. The environment and dependencies section of our framework provides solutions that grow with project sophistication. Below are the tiered criteria (Essential, Professional, and Elite) that guide reproducible environment setup.--DIVIDER-- ![Environment and Dependencies Criteria.svg](Environment%20and%20Dependencies%20Criteria.svg)--DIVIDER-- Let's explore the key principles of environment and dependency management at each tier. **Essential Environment Management** provides the minimum information needed for basic reproducibility. 
It clearly lists all project dependencies in standard formats such as requirements.txt, setup.py, or pyproject.toml. This level focuses on answering "What packages do I need to install?" allowing users to at least attempt to recreate the necessary environment. **Professional Environment Management** enhances reproducibility and ease of setup by pinning specific dependency versions to ensure consistent behavior across installations. It organizes dependencies into logical groups (core, dev, test) through separate requirement files or configuration options. Professional repositories specify required Python versions and include configuration for virtual environments such as environment.yml (conda), Pipfile (pipenv), or poetry.lock (poetry). This level provides confidence that the project can be reliably set up and run in different environments. **Elite Environment Management** optimizes for complete reproducibility and deployment readiness. It provides exact environment specifications through lockfiles, documents GPU-specific requirements including CUDA versions when applicable, and includes containerization through Dockerfiles or equivalent solutions. This comprehensive approach ensures that users can recreate the exact execution environment regardless of their underlying system, eliminating "it works on my machine" issues entirely. Proper environment and dependency management transforms your repository from a collection of code that runs only in specific conditions into a reliable, reproducible project that users can confidently deploy in their own environments. This attention to reproducibility demonstrates professional rigor and significantly increases the likelihood that others will successfully use and build upon your work. --DIVIDER--## License and Legal Proper licensing and legal documentation is a critical aspect of AI/ML repositories that is frequently overlooked. Without clear licensing, potential users cannot determine whether they can legally use, modify, or build upon your work, regardless of its technical quality. Many repositories either omit licenses entirely or include inappropriate licenses for their content. The following chart highlights common pitfalls in licensing and legal aspects.--DIVIDER-- ![license-legal-pitfalls.jpg](license-legal-pitfalls.jpg)--DIVIDER--Legal issues arise when repositories operate without licenses, creating ambiguity that prevents use by organizations with compliance concerns. Some repositories include licenses that conflict with their dependencies, while others neglect the unique legal aspects of AI/ML work regarding data and model rights. The absence of copyright notices and unclear terms for incorporated datasets or pretrained models further complicates legitimate use. For proper licensing and legal considerations, our framework provides clear benchmarks at varying complexity levels. The following chart presents Essential, Professional, and Elite tier criteria for legal compliance and clarity.--DIVIDER-- ![License and Legal Criteria.svg](License%20and%20Legal%20Criteria.svg)--DIVIDER-- Let's explore the key principles of licensing and legal documentation at each tier. **Essential Legal Documentation** ensures that users can determine basic usage rights. It includes a recognized license file (LICENSE, LICENSE.md, or LICENSE.txt) in the root directory that explicitly states terms of use, modification, and distribution. 
The chosen license must be appropriate for the project's purpose, dependencies, and intended use, avoiding unclear or conflicting terms. This level answers the fundamental question: "Am I legally permitted to use this?" **Professional Legal Documentation** enhances legal clarity by addressing AI/ML-specific concerns. In addition to proper licensing, it includes clear documentation of data usage rights, stating ownership, licensing, compliance requirements, and restrictions for any datasets used or referenced. Similarly, it documents model usage rights, specifying ownership, licensing terms, and redistribution policies for any ML models included or referenced. This level provides confidence that the project can be legally used in professional contexts. **Elite Legal Documentation** establishes a comprehensive legal framework supporting long-term community engagement. It builds on the Professional tier by adding explicit copyright statements in source files and documentation to prevent ambiguity in legal rights and attribution. Elite repositories also include a Code of Conduct that outlines contributor behavior expectations, enforcement mechanisms, and reporting guidelines to foster an inclusive and respectful environment. This level demonstrates commitment to professional standards and community values. Proper licensing and legal documentation transforms your repository from a potentially risky resource into a legally sound project that organizations and individuals can confidently incorporate into their work. This attention to legal concerns removes a significant barrier to adoption and signals professionalism to potential users and contributors.--DIVIDER--## Code Quality Code quality is the foundation of maintainable, reliable AI/ML projects. While functional code can deliver results, high-quality code enables long-term sustainability, collaboration, and trust in your implementation. In AI/ML repositories, functionality frequently takes precedence over quality, resulting in maintainability and reliability issues. The following chart highlights common code quality pitfalls.--DIVIDER-- ![code-quality-pitfalls.jpg](code-quality-pitfalls.jpg)--DIVIDER--Code quality issues manifest in sprawling, monolithic scripts that defy debugging efforts. Excessive function length and high cyclomatic complexity make maintenance difficult. The prevalence of hardcoded values, minimal error handling, and lack of tests results in brittle, unpredictable code. In the AI/ML context, missing random seed settings compromise reproducibility, while poorly documented notebooks obscure the development process.--DIVIDER--Our framework tackles code quality through graduated standards appropriate for different project stages. The chart below details the Essential, Professional, and Elite criteria that promote maintainable, reliable code as projects evolve--DIVIDER-- ![Code Quality Criteria.svg](Code%20Quality%20Criteria.svg)--DIVIDER--Let's explore the key principles of code quality at each tier. **Essential Code Quality** establishes basic maintainability by organizing code into functions and methods rather than monolithic scripts, keeping individual scripts under 500 lines, and implementing basic error handling through try/except blocks. It uses dedicated configuration files to separate parameters from code logic and sets random seeds to ensure reproducibility. For notebooks, it maintains reasonable cell length (under 100 lines) and includes markdown documentation (at least 10% of cells). 
This level provides the minimum quality needed for others to understand and use your code. **Professional Code Quality** significantly enhances maintainability and reliability by implementing comprehensive best practices. Functions are kept under 50 lines, code duplication is limited, and hardcoded constants are minimized. Professional repositories use environment variables for sensitive configurations, implement logging, include tests with framework support, and provide docstrings with parameter and return documentation. They also implement type hints, use style checkers for consistent formatting, control function complexity, and include data validation. For notebooks, they import custom modules and manage output cells properly. This level demonstrates serious software engineering practices. **Elite Code Quality** takes quality to production-grade standards by adding advanced practices such as comprehensive logging configuration, custom exception classes, and test coverage metrics. These repositories represent the highest standard of code quality, suitable for critical production environments and long-term maintenance. High-quality code communicates professionalism and reliability, significantly increasing confidence in your implementation. This attention to quality transforms your repository from working code into trustworthy software that others can confidently build upon, adapt, and maintain over time. --DIVIDER--# Implementation Guide with Examples The following steps outline a practical approach to creating high-quality AI/ML project repositories. For detailed examples of repository structures and README templates at each implementation tier, see **Appendix A: Sample Repository Structures** and **Appendix B: Sample README Structures**. ### Step 1: Select an Appropriate Template Choose a repository structure that matches your project complexity and goals: - **Essential**: For personal projects, educational demonstrations, or proof-of-concepts - **Professional**: For team projects, research code intended for publication, or open-source contributions - **Elite**: For production systems, major open-source projects, or reference implementations Refer to **Appendix A: Sample Repository Structures** for detailed examples at each tier. Customize these templates to fit your specific needs while maintaining the core organizational principles. Remember that even a small project can benefit from good structure. ### Step 2: Choose the Right License Select a license appropriate for your project's content and intended use: - **MIT License**: Permissive license good for most software projects, allowing commercial use - **Apache 2.0**: Similar to MIT but with patent protections - **GPL (v3)**: Strong copyleft license requiring derivative works to be open-sourced - **Creative Commons**: Various options for non-software content like datasets or documentation Consider the licenses of your dependencies, as they may constrain your options. Ensure your license is compatible with the libraries and frameworks you use. ### Step 3: Implement Environment and Dependency Management Choose the appropriate dependency management approach for your project: - **Essential**: `requirements.txt` listing direct dependencies - **Professional**: - Pinned version numbers (`numpy==1.21.0` instead of just `numpy`) - Separated requirements files for different purposes - Virtual environment configuration (conda, venv, etc.) 
- **Elite**: - Lockfiles for exact reproduction (poetry.lock, Pipfile.lock) - Containerization with Docker - Environment variables for configuration Document any non-Python dependencies or system requirements clearly in your README. ### Step 4: Create a Structured README Develop a README that matches your target implementation tier. A well-structured README is critical as it's often the first thing visitors see when discovering your project. When creating your README: - Focus on answering the four key questions: what the project is about, why users should care, whether they can trust it, and how they can use it - Match the detail level to your target tier (Essential, Professional, or Elite) - Include examples and code snippets where appropriate - Consider adding screenshots or diagrams for visual clarity Refer to **Appendix B: Sample README Structures** for detailed templates at each implementation tier, from basic structures covering essential information to comprehensive documents that support serious adoption and community engagement. ### Step 5: Follow Coding Best Practices Adopt established coding standards appropriate for your language: - **Python**: - Follow PEP 8 style guidelines (consistent indentation, naming conventions, etc.) - Use type hints for function signatures - Write docstrings for modules, classes, and functions - Consider using linters and formatters (black, flake8, pylint) - **Markdown**: - Use proper heading hierarchy - Include code blocks with language specification - Use lists, tables, and emphasis consistently - Add alt text to images for accessibility - **General Practices**: - Keep functions small and focused on a single task - Write descriptive variable and function names - Include comments explaining "why" not just "what" - Control script and function length - Set random seeds for reproducibility in AI/ML code For a comprehensive list of relevant tools and references, see the **Additional Resources** section at the end of this article, which includes links to code style guides, repository templates, documentation tools, and dependency management solutions.--DIVIDER--# Tools and Resources The following tools can significantly reduce the effort required to implement best practices in your repositories: ## Documentation Tools - [Sphinx](https://www.sphinx-doc.org/): Python documentation generator - [ReadTheDocs](https://readthedocs.org/): Documentation hosting platform - [Markdown Guide](https://app.readytensor.ai/publications/LX9cbIx7mQs9): Project documentation with Markdown - [Jupyter Book](https://jupyterbook.org/): Create publication-quality books from notebooks - [Docstring Conventions](https://app.readytensor.ai/publications/DM3Ao23CIocT): Guide for Python docstrings ## Repository Structure and Templates - [Cookiecutter](https://github.com/cookiecutter/cookiecutter): Project template tool - [Cookiecutter Data Science](https://github.com/drivendata/cookiecutter-data-science): Template for data science projects - [PyScaffold](https://github.com/pyscaffold/pyscaffold): Project generator for Python packages - [nbdev](https://nbdev.fast.ai/): Create Python packages from Jupyter notebooks ## Dependency Management - [uv](https://github.com/astral-sh/uv): Fast Python package installer and resolver - [Poetry](https://python-poetry.org/): Python packaging and dependency management - [Conda](https://docs.conda.io/): Package and environment management system - [pip-tools](https://github.com/jazzband/pip-tools): Set of tools for managing pip-compiled requirements - 
[Pipenv](https://pipenv.pypa.io/): Python development workflow tool - [Docker](https://www.docker.com/): Containerization platform ## Code Quality and Testing - [Pre-commit](https://pre-commit.com/): Git hook scripts manager - [Black](https://black.readthedocs.io/): Uncompromising Python code formatter - [Flake8](https://flake8.pycqa.org/): Python code linter - [Pylint](https://pylint.org/): Python static code analysis tool - [mypy](https://mypy.readthedocs.io/): Static type checker for Python - [pytest](https://docs.pytest.org/): Python testing framework - [Coverage.py](https://coverage.readthedocs.io/): Code coverage measurement for Python ## License Resources - [License Guide](https://app.readytensor.ai/publications/qWBpwY20fqSz): A primer on licenses for ML projects - [Choose a License](https://choosealicense.com/): Help picking an open source license - [Open Source Initiative](https://opensource.org/licenses): License information and standards - [TL;DR Legal](https://tldrlegal.com/): Software licenses explained in plain English - [Creative Commons](https://creativecommons.org/licenses/): Licenses for non-code assets ## Style Guides and Standards - [PEP 8](https://app.readytensor.ai/publications/pCgumBWFPD90): Style Guide for Python Code - [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html): Comprehensive style guide - [Docstrings Guide](https://app.readytensor.ai/publications/DM3Ao23CIocT): Python Docstrings for Machine Learning code These tools address different aspects of repository quality, offering options for projects of all scales. Select tools that match your project needs and team capabilities rather than adopting everything at once. --DIVIDER--# Conclusion Well-structured repositories are essential for the success of AI/ML projects in the wider community. Our framework addresses five fundamental aspects of repository quality: 1. **Documentation** that communicates purpose, usage, and technical details 2. **Repository Structure** that organizes code logically 3. **Environment and Dependencies** that enable reproducibility 4. **License and Legal** considerations that establish usage rights 5. **Code Quality** standards that ensure maintainability The tiered approach, namely Essential, Professional, and Elite, allows you to match your effort to project needs and resource constraints. By evaluating your repositories against this framework, you can systematically improve their quality and impact. This will not only benefit your work efficiency and career prospects but also contribute to the wider AI/ML community.--DIVIDER--# Appendices--DIVIDER--## Appendix A: Sample Repository Structures This appendix provides example repository structures for AI/ML projects at the Essential, Professional, and Elite levels. These examples are starting points that should be adapted to your specific project requirements, technology stack, and team preferences. 
### A.1 Essential Repository Structure This basic structure is suitable for simple projects, educational demonstrations, or exploratory research work primarily using Jupyter notebooks: ``` project-name/ │ ├── README.md # Essential project information ├── LICENSE # Appropriate license file ├── requirements.txt # Project dependencies ├── .gitignore # Configured for Python/Jupyter │ ├── notebooks/ # Organized notebooks │ ├── 01_data_exploration.ipynb │ ├── 02_preprocessing.ipynb │ └── 03_model_training.ipynb │ ├── data/ # Data directory (often gitignored) │ ├── .gitkeep # Placeholder to track empty directory │ └── README.md # Data acquisition instructions │ └── models/ # Saved model files (often gitignored) └── .gitkeep # Placeholder to track empty directory ``` **Key Characteristics:** - Clear separation of notebooks, data, and models - Sequential naming of notebooks to indicate workflow - Basic documentation with README files - Simple dependency management with requirements.txt ### A.2 Professional Repository Structure This structure is appropriate for more advanced projects, team collaborations, or code intended for wider distribution: ``` project-name/ │ ├── README.md # Comprehensive project documentation ├── LICENSE # Appropriate license file ├── setup.py # Package installation configuration ├── requirements.txt # Core dependencies ├── requirements-dev.txt # Development dependencies ├── pyproject.toml # Python project metadata ├── .gitignore # Configured for project needs │ ├── src/ # Source code package │ └── project_name/ # Main package directory │ ├── __init__.py # Package initialization │ ├── data/ # Data processing modules │ │ ├── __init__.py │ │ ├── loader.py │ │ └── preprocessor.py │ ├── models/ # Model implementation modules │ │ ├── __init__.py │ │ └── model.py │ ├── utils/ # Utility functions │ │ ├── __init__.py │ │ └── helpers.py │ └── config.py # Configuration parameters │ ├── notebooks/ # Jupyter notebooks (if needed) │ ├── exploration.ipynb │ └── evaluation.ipynb │ ├── tests/ # Test modules │ ├── __init__.py │ ├── test_data.py │ └── test_models.py │ ├── docs/ # Documentation files │ ├── usage.md │ ├── api.md │ └── examples.md │ ├── data/ # Data directory (often gitignored) │ └── README.md # Data acquisition instructions │ └── models/ # Saved model outputs (often gitignored) └── README.md # Model usage information ``` **Key Characteristics:** - Proper Python package structure with `src` layout - Modular organization of code with clear separation of concerns - Comprehensive documentation in dedicated directory - Test directory that mirrors package structure - Separated dependency specifications for different purposes ### A.3 Elite Repository Structure This structure demonstrates a comprehensive repository setup suitable for production-level projects, major open-source initiatives, or reference implementations: ``` project-name/ │ ├── README.md # Main documentation with quick start guide ├── LICENSE # Appropriate license file ├── CHANGELOG.md # Version history and changes ├── CONTRIBUTING.md # Contribution guidelines ├── CODE_OF_CONDUCT.md # Community standards ├── setup.py # Package installation ├── pyproject.toml # Python project config (PEP 518) ├── poetry.lock # Locked dependencies (if using Poetry) ├── requirements/ # Dependency specifications │ ├── base.txt # Core requirements │ ├── dev.txt # Development requirements │ ├── test.txt # Testing requirements │ └── docs.txt # Documentation requirements ├── Dockerfile # Container definition ├── docker-compose.yml # 
Multi-container setup ├── .gitignore # Git ignore patterns ├── .pre-commit-config.yaml # Pre-commit hook configuration ├── .github/ # GitHub-specific configurations │ ├── workflows/ # CI/CD workflows │ └── ISSUE_TEMPLATE/ # Issue templates │ ├── src/ # Source code package │ └── project_name/ # Main package │ ├── __init__.py # Package initialization with version │ ├── cli.py # Command-line interface │ ├── config.py # Configuration management │ ├── exceptions.py # Custom exceptions │ ├── logging.py # Logging configuration │ ├── data/ # Data processing │ ├── models/ # Model implementations │ └── utils/ # Utility functions │ ├── scripts/ # Utility scripts │ ├── setup_environment.sh │ └── download_datasets.py │ ├── notebooks/ # Jupyter notebooks (if applicable) │ └── examples/ # Example notebooks │ ├── tests/ # Test suite │ ├── conftest.py # Test configuration │ ├── integration/ # Integration tests │ └── unit/ # Unit tests organized by module │ ├── docs/ # Documentation │ ├── conf.py # Sphinx configuration │ ├── index.rst # Documentation home │ ├── installation.rst # Installation guide │ ├── api/ # API documentation │ ├── examples/ # Example usage │ └── _static/ # Static content for docs │ ├── data/ # Data directory (structure depends on project) │ ├── raw/ # Raw data (often gitignored) │ ├── processed/ # Processed data (often gitignored) │ └── README.md # Data documentation │ └── models/ # Model artifacts ├── trained/ # Trained models (often gitignored) ├── pretrained/ # Pretrained models └── README.md # Model documentation ``` **Key Characteristics:** - Comprehensive community documents (CONTRIBUTING, CODE_OF_CONDUCT) - Advanced dependency management with separated requirements - Containerization for reproducible environments - CI/CD configuration for automated testing and deployment - Extensive documentation with proper structure - Clear separation of all project components ### Adapting These Structures These sample structures serve as templates that should be adapted based on: 1. **Project Size and Complexity**: Smaller projects may not need all components shown in the Professional or Elite examples. Include only what serves your project's needs. 2. **Technology Stack**: While these examples focus on Python-based projects, adjust directory structures for other languages or frameworks accordingly. 3. **Team Conventions**: Align with existing conventions your team has established for consistency across projects. 4. **Project Type**: Different AI/ML applications may require specialized structures: - Time series forecasting projects might need additional data versioning - Computer vision projects might require separate directories for images/videos - NLP projects might benefit from corpus and vocabulary management structures 5. **Deployment Context**: Projects deployed as APIs, web applications, or embedded systems will need additional structure to support their deployment environments. Remember that repository structure should facilitate development and use—not impose unnecessary overhead. Start with the simplest structure that meets your needs and expand as your project grows in complexity. --DIVIDER--## Appendix B: Sample README Structures This appendix provides example README structures for AI/ML projects at the Essential, Professional, and Elite levels. These templates offer a starting point that should be customized to fit your specific project needs and audience. 
### B.1 Essential README Structure This basic structure covers the minimum needed for a useful README: ```markdown # Project Name Brief description of the project. ## Overview Detailed explanation of what the project does and why it's useful. ## Installation Basic installation instructions. ## Usage Simple examples of how to use the project. ## License Information about the project's license. ``` **Key Characteristics:** - Clear project identity with title and description - Basic explanation of purpose and value - Simple instructions for installation and use - License information for legal clarity ### B.2 Professional README Structure This comprehensive structure supports serious adoption: ```markdown # Project Name Brief description of the project. ## Overview Detailed explanation of what the project does and why it's useful. ## Target Audience Who this project is intended for. ## Prerequisites Required knowledge, hardware, and system compatibility. ## Installation Step-by-step installation instructions. ## Environment Setup Environment and dependency information. ## Usage Detailed usage instructions with examples. ## Data Requirements Expected data formats and setup. ## Testing How to run tests for the project. ## Configuration Information on configuration options. ## License Information about the project's license. ## Contributing Guidelines for contributing to the project. ``` **Key Characteristics:** - Comprehensive project description with target audience - Detailed prerequisites and installation steps - Thorough usage documentation with examples - Technical details on data, testing, and configuration - Community engagement through contribution guidelines ### B.3 Elite README Structure This advanced structure creates a complete resource for all users: ```markdown # Project Name Brief description of the project. ## Overview Detailed explanation of what the project does and why it's useful. ## Target Audience Who this project is intended for. ## Prerequisites Required knowledge, hardware, and system compatibility. ## Installation Step-by-step installation instructions. ## Environment Setup Environment and dependency information. ## Usage Detailed usage instructions with examples. ## Data Requirements Expected data formats and setup. ## Testing How to run tests for the project. ## Configuration Information on configuration options. ## Methodology Explanation of the approach and algorithms. ## Performance Benchmarks and performance expectations. ## License Information about the project's license. ## Contributing Guidelines for contributing to the project. ## Changelog Version history and key changes. ## Citation How to cite this project in academic work. ## Contact How to reach the maintainers. ``` **Key Characteristics:** - All elements from Professional README - Technical depth with methodology and performance sections - Project history through changelog - Academic integration with citation information - Maintainer accessibility through contact information ### Customizing README Content These templates provide structure, but effective READMEs require thoughtful content: 1. **Project Description**: Be clear and specific about what your project does. Avoid vague descriptions and technical jargon without explanation. 2. **Examples**: Include concrete, runnable examples that demonstrate key functionality. Code snippets should be complete enough to execute with minimal modification. 3. 
**Visual Elements**: Consider adding diagrams, screenshots, or other visual elements that clarify complex concepts or demonstrate the project in action. 4. **Audience Adaptation**: Adjust technical depth based on your expected audience. Research projects may include more mathematical detail, while application-focused projects should emphasize practical usage. 5. **Maintenance Status**: Clearly indicate the current maintenance status of the project, especially for open-source work. Remember that a README is often the first interaction users have with your project. It should provide enough information for users to quickly determine if the project meets their needs and how to get started using it.
0z4EC8313LzS
ready-tensor
mit
Time Series Step Classification Benchmark
![hero.jpg](hero.jpg)--DIVIDER--# Introduction In the field of time series analysis, step classification plays a critical role in interpreting sequential data by assigning class labels to each time step. This study presents a comprehensive benchmark of 25 machine learning models trained on five distinct datasets aimed at improving time series step classification accuracy. We evaluated each model's performance using four key metrics: accuracy, precision, recall, and F1-score. Our analysis provides insights into the effectiveness of various modeling approaches across different types of time series data, highlighting the strengths and limitations of each model. The results indicate significant variations in model performance, underscoring the importance of tailored model selection based on specific characteristics of the dataset and the classification task. This study not only guides practitioners in choosing appropriate models for time series step classification but also contributes to the ongoing discourse on methodological advancements in time series analysis.--DIVIDER--# Datasets | dataset | # of series | # classes | # features | min series length | max series length | time frequency | source link | | -------------------------- | :----------: | :-------: | :--------: | :---------------: | :---------------: | :------------: | ----------------------------------------------------------------------------------- | | har70plus | 18 | 7 | 6 | 871 | 1536 | OTHER | [link](https://archive.ics.uci.edu/dataset/780/har70) | | hmm_continuous | 500 | 4 | 3 | 50 | 300 | OTHER | synthetic | | multi_frequency_sinusoidal | 100 | 5 | 2 | 109 | 499 | OTHER | synthetic | | occupancy_detection | 1 | 2 | 5 | 20560 | 20560 | SECONDLY | [link](https://archive.ics.uci.edu/dataset/357/occupancy+detection) | | pamap2 | 9 | 12 | 31 | 64 | 2725 | OTHER | [link](https://archive.ics.uci.edu/dataset/231/pamap2+physical+activity+monitoring) | The HAR70 and PAMAP2 datasets are an aggregated version of the datasets from the UCI Machine Learning Repository. Data were mean aggregated to create a dataset with fewer time steps. The datasets repository is available [here](https://github.com/readytensor/rt_datasets_time_step_classification)--DIVIDER--# Models Our benchmarking study on time series step classification evaluates a diverse array of models, which we have categorized into two main types: Machine Learning (ML) models and Neural Network models. Each model is assessed individually to understand its specific performance characteristics and suitability for different types of time series data. ## Machine Learning Models This category includes 17 ML models, each selected for its unique strengths in pattern recognition and handling of sequential dependencies within time series data. These models range from robust ensemble methods to basic regression techniques, providing a comprehensive overview of traditional machine learning approaches in time series classification. Examples of models in this category include: Random Forest, K-Nearest Neighbors and Logistic Regression ## Neural Network Models Comprising 7 models, this category features advanced neural network architectures that excel in capturing intricate patterns and long-range dependencies in data through deep learning techniques. These models are optimized for handling large datasets and complex classification tasks that might be challenging for traditional ML models. 
Examples of models in this category include: LSTM and CNN ## Special Mention Additionally, our study includes the Distance Profile model, which stands apart from the conventional categories. This model employs a technique based on computing the distances between time series data points, providing a unique approach to classification that differs from typical machine learning or neural network methods. For more information on distance profile, checkout the [Distance Profile for Time-Step Classification in Time Series Analysis](https://app.readytensor.ai/publications/distance_profile_for_time-step_classification_in_time_series_analysis_ljGAbBceZbpv) publication.--DIVIDER--# Results Each model, regardless of its category, is evaluated on its own merits across various datasets to pinpoint the most effective approaches for time series step classification. We have averaged the performance metrics for each model across all datasets. This consolidated data is presented in a heat map, where models are listed on the y-axis and the metrics—accuracy, precision, recall, and F1-score—on the x-axis. The values in the table represent the average of each metric for a model across all datasets, providing a clear, visual comparison of how each model performs generally in time series step classification. This method allows us to succinctly demonstrate the overall performance trends and identify which models consistently deliver the best results across various conditions. ![leaderboard.png](leaderboard.png)--DIVIDER--1. Top Performers Boosting algorithms and advanced ensemble methods generally perform exceptionally well in the task of time series step classification. The top performers include: • CatBoost (0.80): Excels in managing complex features and imbalanced datasets, consistently delivering high performance. • LightGBM (0.78): Known for its efficiency and accuracy, especially in large datasets, with strong overfitting prevention. • Hist Gradient Boosting (0.77): A powerful algorithm that builds on the strength of traditional gradient boosting by optimizing performance with histogram-based methods. • XGBoost (0.77): Offers robustness and scalability, making it an ideal choice for handling large datasets and complex tasks. • Stacking (0.77): Combines multiple models to improve prediction accuracy, performing strongly in time series classification. 2. Strong Contenders These models show good F1-scores but are not at the very top. They are reliable and can be considered for use cases where the top performers might be computationally expensive or overfit: • Gradient Boosting (0.75): A solid model that performs well in a variety of conditions. • Extra Trees (0.75) and Random Forest (0.75): These ensemble models provide robust performance, benefiting from their ability to reduce prediction variance. 3. Baseline or Average Performers These models perform moderately well and may serve as baselines or options when computational simplicity is desired: • Bagging (0.74) and SVC (0.74): Both provide reasonable performance, though not as strong as the top models. • CNN, RNN, and LSTM (all 0.73): Neural networks tailored for sequential data, performing moderately well in this context. • Voting (0.73): A basic ensemble method that combines predictions from multiple models, offering solid but average results. • MLP, ANN, and LSTM-CNN (all 0.72): These neural networks exhibit potential but may require additional tuning to excel in time series step classification. 4. 
Below Average Performers These models have lower F1-scores and might need substantial tuning or are inherently less suitable for time series step classification: • Logistic Regression (0.66), Ridge (0.64), and Decision Tree (0.63): These simpler models struggle to capture the complex temporal dependencies in time series data. • Passive Aggressive (0.63) and Distance Profile (0.62): These models perform less effectively, likely due to their sensitivity to noise and outliers in the dataset. • KNN (0.61): Its performance is hindered by high dimensionality and noise, which are common in time series data. • AdaBoost (0.60): Despite being a boosting algorithm, it underperforms, likely due to its sensitivity to noise and imbalanced datasets.--DIVIDER--# Conclusion Our benchmarking study has provided a comprehensive evaluation of 25 different models across five diverse datasets, focusing on the task of time series step classification. The results highlight the general efficacy of boosting algorithms, specifically CatBoost, LightGBM, and XGBoost, in managing the complexities associated with time series data, with the notable exception of AdaBoost, which did not perform as well. The heat-map visualization of average accuracy, precision, recall, and F1-score across all models and datasets has offered a clear and succinct comparison, underscoring the strengths and potential areas for improvement in each model. This analysis not only assists in identifying the most suitable models for specific types of time series classification tasks but also sheds light on the broader applicability of machine learning techniques in this evolving field. As we continue to advance our understanding of time series analysis, it is crucial to consider not just accuracy but also the computational efficiency and practical applicability of models in real-world scenarios. Future studies may explore the integration of more complex neural network architectures or the development of hybrid models that can leverage the strengths of both traditional machine learning and neural networks to further enhance classification performance. In conclusion, this study serves as a valuable resource for researchers and practitioners in selecting the right models for their specific needs, ultimately contributing to more effective and efficient time series analysis and classification.
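For readers who want to compute comparable per-time-step metrics on their own model outputs, the short scikit-learn sketch below shows one way to do it. The placeholder arrays and the macro-averaging choice are illustrative assumptions, not the benchmark's actual evaluation code.

```python
# Illustrative only: compute the four benchmark metrics for time-step classification.
# y_true and y_pred are 1-D arrays of per-time-step labels, concatenated across series.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])  # placeholder ground-truth step labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])  # placeholder model predictions

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    # Macro averaging weights every class equally, which matters for imbalanced activity data.
    "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
    "recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
    "f1": f1_score(y_true, y_pred, average="macro", zero_division=0),
}
print(metrics)
```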
1yiSfLXTffSF
aryan_patil
none
UV: The Next Generation Python Package Manager Built for Speed
![UV.png](UV.png)--DIVIDER--# TL;DR UV is a Rust-built Python package manager that's 10-100x faster than pip/poetry/conda, combining virtual environment creation and dependency management in one tool while maintaining compatibility with existing Python standards.--DIVIDER--# Introduction The evolution of Python has been closely linked to improvements in package management, from manual installations to modern tools like pip and poetry. Yet, as projects become more and more complex, conventional tools struggle to keep up with the demands for speed and efficiency. UV is a modern, high-performance Python package and project manager developed in Rust. It represents a new generation of Python package managers and serves as a replacement for traditional tools like pip and poetry. It combines the functionality of tools like pip, poetry, and virtualenv and streamlines tasks like dependency management, script execution, and project building, offering significant improvements in speed and reliability. It is designed to address common challenges in the Python ecosystem such as lengthy installation times, dependency conflicts, and the complexity of managing environments. UV accomplishes this with an innovative architecture, delivering 10 to 100 times faster performance than conventional package managers. Its key features include support for editable installations, Git and URL-based dependencies, constraint files, custom package indexes, and more. UV's standards-compliant virtual environments integrate smoothly with other tools, eliminating the need for lock-in or extensive customization. It is cross-platform, compatible with Linux, Windows, and macOS, and has undergone rigorous testing against the PyPI index. --DIVIDER--# Key Features - Speed: UV is significantly faster than traditional tools like pip and dramatically reduces the time required to install packages. - Optimization: Saves storage by using a global cache for dependency deduplication. - Flexible Installation Options: Can be installed effortlessly using `curl`, `pip`, or `pipx`, with no need for Python or Rust to be pre-installed. - Cross-Platform Support: Runs on macOS, Linux, and Windows, supporting a wide range of advanced functionalities. - Enhanced Dependency Management: Includes features like version overrides, alternative resolution strategies, and a resolver that tracks conflicts. - Error Messaging: Provides detailed and clear error messages, simplifying conflict resolution for developers. - Consolidated Tooling: Integrates the capabilities of tools like `pip`, `pipx`, `poetry`, and `pyenv` into one solution. - Project and Script Management: Handles Python version management, runs scripts with inline dependency metadata, and facilitates workflows. --DIVIDER--# Installation Installing UV is quick and straightforward. You can use the standalone installers or install it directly from PyPI. Before using UV, it is necessary to add its path to the environment variables. On Linux and macOS, you can update the PATH environment variable by running the following command in the terminal: `export PATH="/path/to/uv:$PATH"` On Windows, to add the directory to the PATH environment variable (for either your user account or the whole system), search for "Environment Variables" in the search bar. Locate the PATH variable under either User Variables or System Variables, click Edit, then select New and input the desired path.
The directory to add is typically `%USERPROFILE%\.local\bin`. With pip: `pip install uv` With pipx: `pipx install uv` With Homebrew: `brew install uv` With Pacman: `pacman -S uv` After the installation, run the `uv` command in the terminal to verify that it has been installed correctly.--DIVIDER--# Creating Virtual Environment Creating a virtual environment with uv is very simple. Use the following command, optionally followed by a name for the environment directory: `uv venv` To activate the virtual environment, run the following commands: - For Linux and macOS: `source .venv/bin/activate` - For Windows: `.venv\Scripts\activate` --DIVIDER--# Installing Packages To install packages for the virtual environment, follow a familiar process as shown below: - `uv pip install flask` use this command to install the Flask framework. - `uv pip install -r requirements.txt` use this command to install all the dependencies listed in the requirements.txt file. - `uv pip install -e .` use this to install the current project in editable mode, allowing changes to be reflected without reinstalling. - `uv pip install "package @ ."` use this to install the current project from the local disk. - `uv pip install "flask[dotenv]"` use this to install Flask along with the additional "dotenv" functionality.--DIVIDER--# Initializing a New Project using UV To initialize a new project with UV, first create a directory for your new project by running the command `mkdir project_name` and then navigate into it using the `cd` command. After creating the project directory, you can initialize the project with uv by running the `uv init` command. This creates the configuration files required for your project, such as `pyproject.toml`. Once the project is initialized, you can install any required dependencies by running `uv pip install -r requirements.txt`. Then set up any necessary project files, depending on the framework you're using. Finally, once your project is set up, you can run it with `uv run`. --DIVIDER--# Managing Dependencies with UV UV simplifies the process of creating virtual environments and installing dependencies with a single command, `uv add`. For example, when the `uv add` command is executed for the first time, UV creates a new virtual environment in the current directory and installs the specified dependencies. For subsequent commands, UV reuses the existing environment and installs or updates the requested packages, making dependency management efficient. Every time you run the `uv add` command, UV also resolves dependencies. Using its modern dependency resolver, UV analyses the entire dependency graph to identify a compatible set of package versions that fulfil all requirements. The resolver accounts for factors such as version constraints, Python version compatibility, and platform-specific requirements to determine the best set of packages to install. After running the `uv add` command, UV updates both the `pyproject.toml` and `uv.lock` files. For example, after adding scikit-learn and XGBoost, both packages appear under the project's dependencies in `pyproject.toml`. To remove a dependency, you can use the `uv remove` command. This uninstalls the specified package along with any dependencies it introduced. This streamlined approach to managing dependencies ensures an efficient and conflict-free environment.--DIVIDER--# Executing Python Scripts with UV After installing the necessary dependencies, you can start writing Python scripts as usual. UV provides different ways to run Python code.
To run a script directly, you can use the `uv run` command followed by your script name instead of the traditional `python script.py` syntax: `$ uv run hello.py` --DIVIDER--# Using command line tools with UV UV simplifies working with Python packages that provide command-line tools, such as `black` for code formatting, `flake8` for linting and `mypy` for type checking. It offers two interfaces for managing these tools. 1. Running tools with `uv tool run`: This interface allows you to execute command-line tools directly. When you run a command like `uv tool run <tool>`, UV creates a temporary virtual environment in its cache, installs the specified tool, and executes it from that cached environment. 2. Using the `uvx` command: Similarly, when you run a command via `uvx`, UV sets up a temporary virtual environment, installs the required tool, and runs it without polluting your project's primary virtual environment. This approach keeps your project's dependencies clean while providing fast execution times, since the tools are managed separately in a cached environment rather than being installed directly into your project's environment. --DIVIDER--# Key Features of UV Tool Interface - Compatible with any Python package that provides command-line tools, such as flake8, mypy, black, or pytest. - Cached environments are automatically removed when UV's cache is cleared. - New cached environments are created on-demand as required. - Ideal for occasional use of development tools without cluttering project dependencies. --DIVIDER--# Lock Files Lock files (`uv.lock`) are an essential part of dependency management in UV. When you run `uv add` commands to install dependencies, UV automatically generates and updates a `uv.lock` file. The file serves several important functions: - It captures the exact versions of all installed dependencies and their sub-dependencies. - It ensures reproducible builds by "locking" dependency versions across different systems and environments. - It minimizes the risk of "dependency hell" by maintaining consistent package versions. - It speeds up installation since UV can use the locked versions instead of solving the dependencies again. The management of the lock file is entirely automated, so manual edits are unnecessary. To ensure consistent environments for all collaborators, the `uv.lock` file should always be included in version control. --DIVIDER--# Difference between Lock Files and requirements.txt Lock files and requirements.txt serve similar purposes in tracking dependencies but differ in their details and use cases. Lock files contain detailed information about exact package versions and their complete dependency tree, ensuring consistent environments during development. requirements.txt files are simpler, typically listing only direct dependencies, making them more suitable for deployment scenarios or for sharing code with users who may not be using UV. These files are often required for compatibility with external tools and services that do not recognize UV's lock file format. While lock files are indispensable for maintaining reliable builds during development, requirements.txt is more appropriate when distributing or deploying in environments where UV-specific features are unavailable. Both formats complement each other in managing dependencies effectively.
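Returning briefly to script execution: as noted in the key features, uv can also run single-file scripts that declare their dependencies inline using the PEP 723 metadata format. The sketch below is illustrative; the script name and the `requests` dependency are assumptions, and support requires a reasonably recent uv version.

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "requests",
# ]
# ///
# fetch_example.py: a single-file script with inline dependency metadata (PEP 723).
import requests

# Query PyPI's JSON API and print the latest published version of uv.
response = requests.get("https://pypi.org/pypi/uv/json", timeout=10)
print(response.json()["info"]["version"])
```

Running `uv run fetch_example.py` creates a temporary environment with the declared dependencies, so no project, virtual environment, or lock file needs to be set up first.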
--DIVIDER--![Blue Gradient Modern Freelancer YouTube Thumbnail .png](Blue%20Gradient%20Modern%20Freelancer%20YouTube%20Thumbnail%20.png) # UV vs PIP PIP, together with virtualenv, has long been the standard way of managing Python packages and creating virtual environments. While this combination is effective, UV provides advantages that make it a compelling alternative. Here are some of them: - Speed: Developed with Rust, UV is much faster than PIP for package installation and dependency resolution, completing tasks in seconds that might take minutes in PIP. - Integrated Environment Management: Unlike virtualenv, which focuses solely on environment creation, and PIP, which handles package installation, UV combines both functionalities into a single tool, simplifying the development workflow. UV maintains full compatibility with PIP's ecosystem while addressing some of its limitations. It supports the same requirements.txt files and package indexes, making the transition to UV simple and effortless. The key differences include: - Performance: UV's parallel downloads and optimized dependency resolver make it 10-100x faster than PIP for larger projects. - Memory Efficiency: During package installation and dependency resolution, UV consumes significantly less memory than PIP. - Enhanced Error Handling: UV provides clearer error messages and better conflict resolution when dependencies clash. - Reproducibility: UV's lockfile mechanism ensures consistent environments across different systems, addressing a limitation of standard requirements.txt files. Although PIP remains a reliable choice, UV's modern design, enhanced performance, and integrated features provide developers with a more efficient and streamlined workflow. Its ability to integrate seamlessly into existing projects without disrupting current processes makes UV an excellent option. --DIVIDER--# UV vs Poetry UV also promises many of the same benefits as Poetry, such as: - Dependency Management: Both tools excel at handling package dependencies and creating virtual environments. - Project Structure: They provide utilities for initializing and organizing Python projects. - Lock Files: Both generate lock files to ensure consistent and reproducible environments across systems. - Package Publishing: They support publishing Python packages to PyPI. - Modern Tooling: Both represent contemporary approaches to Python projects and dependency management. What sets UV apart is its extraordinary speed and minimal resource usage. While Poetry is a major step forward compared to traditional tools, UV pushes the boundaries even further with its Rust-based implementation. Additionally, UV's compatibility with existing Python packaging standards allows it to work seamlessly alongside tools like pip. This offers flexibility that Poetry's more rigid approach doesn't always provide. --DIVIDER--# UV vs Conda Many developers who avoid using PIP and virtualenv often turn to Conda, and for good reasons: - Conda offers a package management solution that handles not only Python packages but also system-level dependencies. - It is effective for managing complex scientific computing environments, supporting libraries like NumPy, SciPy, and TensorFlow. - Conda environments are highly isolated and ensure reproducibility across various operating systems. However, even dedicated Conda users might find compelling reasons to explore UV. 
With its exceptionally fast package installation and dependency resolution, UV significantly reduces the time needed to set up environments compared to Conda's often slower performance. UV's lightweight design translates to lower memory usage and faster startup times. Additionally, UV integrates with existing Python packaging tools and standards, ensuring compatibility with the broader Python ecosystem. For projects that don't require Conda's non-Python package management, UV provides a more streamlined, efficient solution that can significantly improve development workflows. --DIVIDER--# Switching from PIP or Virtualenv to UV ![Blue Gradient Modern Freelancer YouTube Thumbnail (1).png](Blue%20Gradient%20Modern%20Freelancer%20YouTube%20Thumbnail%20%20(1).png) Migrating from PIP and virtualenv to UV is a simple process since UV maintains full compatibility with existing Python packaging standards. If you have an existing project using virtualenv and pip, start by generating a requirements.txt file from your current environment. This can be done with the following command: `$ pip freeze > requirements.txt` Next, create a new UV project in the same directory: `$ uv init .` Then install the dependencies from your requirements.txt file using UV's pip-compatible interface: `$ uv pip install -r requirements.txt` After setting up your UV environment, you can replace the common pip and virtualenv commands with their UV equivalents. Once the migration is complete, you can safely remove the old virtualenv directory and start using UV's virtual environment management. The transition should be smooth, and you can continue to use pip-style commands through UV's pip compatibility layer.--DIVIDER--# Current Limitations While UV offers a fast and efficient solution for Python package management, it does have some limitations. One of the main challenges is its incomplete pip compatibility. Although UV supports a significant portion of the pip interface, it does not yet cover the entire feature set. Some of these limitations are due to intentional design choices, while others are a result of UV being in its early stages of development. For a detailed comparison, you can also refer to the pip compatibility guide. Another limitation concerns platform-specific requirements.txt files. Similar to `pip-compile`, UV generates platform-specific `requirements.txt` files, which can cause issues when trying to transfer them across different platforms or Python environments. This differs from tools like `Poetry` and `PDM`, which create platform-agnostic lock files (e.g., `poetry.lock` or `pdm.lock`). As a result, UV's `requirements.txt` files may not be as portable across different environments as those generated by other tools. --DIVIDER--# Conclusion UV presents a modern advancement in Python package management, offering a fast and efficient alternative to traditional tools like PIP and virtualenv. Its key advantages include 10-100x faster performance, integration with Python packaging standards, built-in virtual environment management, efficient dependency resolution, and a low memory footprint, all of which greatly enhance the development workflow. Whether you are starting a new project or migrating an existing one, UV provides a robust solution that improves efficiency while maintaining compatibility with existing tools. With continuous advancements in the Python ecosystem, UV demonstrates how modern technologies like Rust can enhance the development experience without compromising the simplicity that Python developers appreciate. --DIVIDER--# References 1. 
[uv](https://github.com/astral-sh/uv): Python environment and package manager. 2. [PIP](https://pypi.org/project/pip/): Python package installer. 3. [Conda](https://github.com/conda/conda): Cross-platform, language-agnostic binary package manager. 4. [Poetry](https://python-poetry.org/): Python packaging and dependency manager.
4SAKUg8ciBuV
ready-tensor
cc-by-sa
Image compression with Auto-Encoders
![hero.png](hero.png)--DIVIDER--# Introduction to Auto-Encoders In the field of data compression, traditional methods have long dominated, ranging from lossless techniques such as ZIP file compression to lossy techniques like JPEG image compression and MPEG video compression. These methods are typically rule-based, utilizing predefined algorithms to reduce data redundancy and irrelevance to achieve compression. However, with the advent of advanced machine learning techniques, particularly Auto-Encoders, new avenues for data compression have emerged that offer distinct advantages over traditional methods in certain contexts. Auto-encoders are a class of neural network designed for unsupervised learning of efficient encodings by compressing input data into a condensed representation and then reconstructing the output from this representation. The primary architecture of an auto-encoder consists of two main components: an encoder and a decoder. The encoder compresses the input into a smaller, dense representation in the latent space, and the decoder reconstructs the input data from this compressed representation as closely as possible to its original form. --DIVIDER--![auto-encoder.png](auto-encoder.png) --DIVIDER--# Advantages Over Traditional Compression The flexibility and learning-based approach of Auto-Encoders provide several benefits over traditional compression methods: - **Adaptability**: Unlike traditional methods that rely on fixed algorithms, Auto-Encoders can learn from data, adapting their parameters to optimize for specific types of data or applications. This adaptability makes them particularly useful for complex data types for which traditional compression algorithms may not be optimized, such as high-dimensional data or heterogeneous datasets. - **Feature Learning**: Auto-Encoders are capable of learning to preserve important features in the data while still achieving compression. This is especially beneficial in domains like medical imaging or scientific data analysis, where preserving specific features can be more important than minimizing storage space or transmission bandwidth. - **Lossy Compression with Controlled Degradation**: Auto-Encoders offer lossy compression with adjustable quality. By tuning the network architecture and training parameters, we can balance compression ratio against reconstruction quality. This flexibility allows for fine-grained control over information loss, unlike many traditional methods which often have fixed or limited preset options for quality-compression trade-offs. - **Non-Linear Compression**: Unlike traditional algorithms such as Principal Component Analysis (PCA) or Singular Value Decomposition (SVD) that perform linear transformations, Auto-Encoders can model complex, non-linear relationships in the data. This capability allows for more efficient compression schemes that better capture the underlying data structure. - **Scalability**: Auto-Encoders offer excellent scalability for large datasets. Once trained, they can compress new data points quickly, with encoding time typically scaling linearly with input size. This makes them well-suited for applications involving high-volume data or real-time compression needs. Additionally, Auto-Encoders can be implemented efficiently on GPUs, further enhancing their performance on large-scale tasks. 
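Before moving to the experiments, it may help to see what such an encoder-decoder pair looks like in code. The following is a minimal convolutional autoencoder sketch in Keras for 28x28 grayscale images; the layer sizes and latent dimension are illustrative assumptions rather than the exact architecture used in the notebook referenced in the Resources section.

```python
# Minimal convolutional autoencoder sketch for 28x28 grayscale images (e.g., MNIST).
# Layer sizes and latent_dim are illustrative; adjust latent_dim to change compression.
from tensorflow.keras import layers, models

def build_autoencoder(latent_dim: int = 32):
    # Encoder: convolutions progressively downsample, then a dense layer forms the latent code.
    inputs = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)  # 14x14
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)       # 7x7
    x = layers.Flatten()(x)
    latent = layers.Dense(latent_dim, name="latent")(x)
    encoder = models.Model(inputs, latent, name="encoder")

    # Decoder: mirrors the encoder with transposed convolutions to rebuild the image.
    latent_inputs = layers.Input(shape=(latent_dim,))
    x = layers.Dense(7 * 7 * 32, activation="relu")(latent_inputs)
    x = layers.Reshape((7, 7, 32))(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)        # 14x14
    outputs = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)  # 28x28
    decoder = models.Model(latent_inputs, outputs, name="decoder")

    # Train end-to-end to minimize MSE between input and reconstruction.
    autoencoder = models.Model(inputs, decoder(encoder(inputs)), name="autoencoder")
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder, decoder
```

Varying `latent_dim` relative to the 784 input pixels is what controls the compression percentage explored in the next section.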
--DIVIDER--# Exploring Compression Capabilities of Auto-Encoders In the notebook included in the **Resources** section, an experimental framework is set up to investigate the compression capabilities of Auto-Encoders using the MNIST dataset. MNIST, a common benchmark in machine learning, consists of 60,000 grayscale images in 10 classes of size 28x28, providing a diverse range of handwritten digits for evaluating model performance. # Methodology For the image compression task, we utilize a convolutional autoencoder, leveraging the spatial hierarchy of convolutional layers to efficiently capture the patterns in image data. The autoencoder's architecture includes multiple convolutional layers in the encoder part to compress the image, and corresponding deconvolutional layers in the decoder part to reconstruct the image. The model is trained with the objective of minimizing the mean squared error (MSE) between the original and reconstructed images, promoting fidelity in the reconstructed outputs. # Experimental Setup The notebook details a systematic exploration of different sizes of the latent space, ranging from high-dimensional to low-dimensional representations. The goal is to understand how the dimensionality of the latent space affects both the compression percentage and the quality of the reconstruction. The compression percentage is calculated based on the ratio of the dimensions of the latent space to the original image dimensions, while the reconstruction error is measured using the MSE. We explore 4 scenarios of compression: 50%, 90%, 95% and 99%. --DIVIDER--# Results ## Original vs Reconstructed Images Let's examine a sample of images to visualize how the size reduction in the latent space affects the quality of reconstructed images: ![compressed-images.png](compressed-images.png) As we increase the compression ratio, we observe: 1. Increasing blur in reconstructed images 2. At 99% compression: - Digit "2" starts resembling an "8" - Digit "4" looks like a "9" 3. Most digits remain recognizable until extreme compression. This highlights the trade-off between compression efficiency and image fidelity. --DIVIDER--## Compression ratio vs MSE Loss We now examine the relationship between compression ratio and reconstruction loss (MSE). Specifically, as the latent space is reduced, achieving higher compression percentages, the reconstruction error initially remains low, indicating effective compression. However, a marked increase in reconstruction error is observed as the latent dimension is further reduced beyond a certain threshold . This suggests a boundary in the compression capabilities of the autoencoder, beyond which the loss of information significantly impacts the quality of the reconstructed images. --- ![reconstruction_error.png](reconstruction_error.png) -----DIVIDER--The chart below illustrates the reconstruction error for each digit at 95% and 99% compression rates. --- ![label_error.png](label_error.png) --- Our analysis reveals that the digit "1" shows the lowest reconstruction error, while digit "2" exhibits the highest error at 95% compression, and digit "8" at 99% compression. However, it's crucial to understand that these results don't account for the total amount of information each digit contains, often visualized as the amount of "ink" or number of pixels used to write it. The lower error for digit "1" doesn't necessarily mean it's simpler to represent in latent space. 
Rather, even if all digits were equally complex to encode per unit of information, digits like "2" or "8" would naturally accumulate more total error because they contain more information (more "ink" or active pixels). For a fairer comparison, we would need to normalize the error by the amount of information in each digit. For instance, if we measured error per 100 pixels of "ink", we might find that the relative complexity of representing each digit in the latent space is more similar than the raw error suggests.--DIVIDER--## Comparing Distributions Using t-SNE Below is a scatter plot that visualizes the distribution of original images (blue points) and their reconstructed counterparts (red points) using t-SNE. This visualization allows us to compare the high-dimensional structure of the original and reconstructed data in a 2D space. Key observations: 1. At lower compression ratios, the blue and red points significantly overlap, indicating that the reconstructed images closely match the distribution of the original images. 2. As we increase the compression to 99%, we begin to see some divergence between the original and reconstructed distributions: - The digit "1" shows the most noticeable separation between blue and red points at 99% compression, suggesting that this digit's reconstruction is most affected by extreme compression. - Digits 3, 7, 8, and 9 also exhibit slight divergences at this high compression level, though less pronounced than digit "1". 3. The degree of overlap between blue and red points serves as a visual indicator of reconstruction quality. Greater overlap suggests better preservation of the original data's structure, while separation indicates more significant information loss during compression. --- ![tsne.png](tsne.png) -----DIVIDER--:::info{title="Info"} ## Regarding t-SNE t-SNE (t-distributed Stochastic Neighbor Embedding) is a popular technique for visualizing high-dimensional data in two or three dimensions. It's particularly effective at revealing clusters and patterns in complex datasets. t-SNE works by maintaining the relative distances between points in the original high-dimensional space when projecting them onto a lower-dimensional space. This means that points that are close together in the original data will tend to be close together in the t-SNE visualization, while distant points remain separated. This property makes t-SNE especially useful for exploring the structure of high-dimensional data, such as images or word embeddings, in a more interpretable 2D or 3D format. </br></br> In this tutorial, we're using t-SNE to compare the distributions of original images and their autoencoder reconstructions. By plotting both sets of data points on the same t-SNE chart (using different colors, e.g., blue for originals and red for reconstructions), we can visually assess the quality of the reconstruction. If the autoencoder is performing well, the blue and red points should significantly overlap, indicating that the original and reconstructed data have similar distributions. Conversely, if the points are clearly separated, it suggests that the reconstructions differ significantly from the originals, pointing to potential issues with the autoencoder's performance. </br></br> One might wonder why t-SNE, which can effectively reduce high-dimensional data to two or three dimensions for visualization, isn't directly used for data compression. There are two major limitations that make t-SNE unsuitable for this purpose: 1. 
Computational Complexity: t-SNE has a time complexity of O(n²), where n is the number of data points. This quadratic scaling makes it computationally expensive and impractical for large datasets. 2. Non-Parametric Nature: t-SNE doesn't learn a parametric mapping between the high-dimensional and low-dimensional spaces. This means it can't directly transform new, unseen data points without recomputing the entire embedding. These limitations highlight why we use purpose-built compression techniques, such as Auto-Encoders, which offer better scalability and can efficiently process new data once trained. :::--DIVIDER--# Summary This publication investigated the efficacy of autoencoders as a tool for data compression, with a focus on image data represented by the MNIST dataset. Through systematic experimentation, we explored the impact of varying latent space dimensions on both the compression ratio and the quality of the reconstructed images. The primary findings indicate that autoencoders, leveraging their neural network architecture, can indeed compress data significantly while retaining a considerable amount of original detail, making them superior in certain aspects to traditional compression methods.
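As a companion to the Methodology section above, here is a minimal PyTorch sketch of the kind of convolutional autoencoder used in this experiment. It is illustrative rather than the exact notebook architecture: the layer widths, the latent size of 8 (roughly the 99% compression scenario), and the training hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder for 28x28 MNIST images."""

    def __init__(self, latent_dim: int = 8):  # latent_dim=8 is roughly the 99% compression scenario
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),                      # bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = ConvAutoencoder(latent_dim=8)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch (replace with a real MNIST DataLoader)
images = torch.rand(64, 1, 28, 28)
reconstructions = model(images)
loss = criterion(reconstructions, images)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Sweeping `latent_dim` over values such as 392, 78, 39, and 8 corresponds approximately to the 50%, 90%, 95%, and 99% compression scenarios discussed above.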
57Nhu0gMyonV
ready-tensor
mit
Building CLIP from Scratch: A Tutorial on Multi-Modal Learning
![hero-image.png](hero-image.png)--DIVIDER--# Abstract This work provides a comprehensive implementation of Contrastive Language-Image Pretraining (CLIP) from the ground up. CLIP, introduced by OpenAI, jointly trains image and text encoders using contrastive learning to align visual and textual representations in a shared embedding space. This tutorial details the architectural design, including the transformer-based text encoder and the Vision Transformer (ViT)-style image encoder used in this implementation, as well as the application of contrastive loss for training. The resulting implementation offers a clear, reproducible methodology for understanding and constructing CLIP models, facilitating further exploration of multi-modal learning techniques.--DIVIDER--# Introduction Contrastive Language-Image Pretraining (CLIP) is a pioneering multi-modal model introduced by OpenAI that bridges the gap between visual and textual understanding. By jointly training an image encoder and a text encoder, CLIP learns to align these two modalities in a shared embedding space, enabling it to perform tasks such as zero-shot image classification, image search by textual queries, and retrieval of textual descriptions that match visual content. This alignment is achieved through contrastive learning, where the model is trained to associate corresponding image-text pairs while distinguishing them from unrelated pairs. The key innovation of CLIP lies in its ability to generalize across a wide range of visual and textual inputs without requiring task-specific fine-tuning. This is particularly useful in open-ended scenarios where the model is expected to handle diverse, unseen data. Traditional models often require large labeled datasets and are constrained to specific tasks. In contrast, CLIP can be trained on uncurated, web-scale datasets containing image-text pairs, making it highly flexible and applicable in various domains, from content retrieval to creative generation. The usefulness of CLIP extends beyond its impressive performance on standard vision tasks. It provides a scalable approach to multi-modal learning, where text can be leveraged to guide image understanding in more abstract ways, and vice versa. This makes it a powerful tool for applications in fields like computer vision, natural language processing, and even human-computer interaction, where cross-modal relationships are essential. In this tutorial, the focus will be on implementing CLIP from scratch, offering insights into its architecture and training process. This implementation provides a hands-on exploration of the core principles of multi-modal contrastive learning, highlighting CLIP’s versatility and effectiveness in real-world applications.--DIVIDER--# CLIP Architecture CLIP employs a dual-encoder architecture that processes images and text separately but aligns their representations in a shared embedding space. The model consists of two key components: an **image encoder** and a **text encoder**. These encoders operate independently to produce embeddings for their respective inputs, which are then compared using a contrastive loss function to learn meaningful correspondences between images and their associated textual descriptions. ![clip-overview.png](clip-overview.png)--DIVIDER--## Image Encoder The image encoder in CLIP is responsible for converting images into high-dimensional embeddings that capture meaningful visual features. 
These embeddings are then aligned with text embeddings through a shared space, allowing the model to learn relationships between images and textual descriptions. The image encoder is flexible and can be built using different architectures, with **ResNet** and **Vision Transformers (ViT)** being the most commonly used. Both of these architectures can be employed in CLIP to encode visual information effectively. The choice of image encoder depends on the complexity and scale of the task, as well as the type of image data being used. ResNet tends to work well for standard image recognition tasks, while ViT excels in capturing more abstract relationships within images. ![img-encoder2.png](img-encoder2.png) <h2> Image Encoder Architecture</h2> The image encoder in this implementation is inspired by the Vision Transformer (ViT) architecture, which processes images as sequences of patches, allowing it to capture relationships across different regions of an image efficiently. 1. **Patch Embedding**: The first step in the image encoding process is to divide the input image into small, fixed-size patches (in this case, 16x16 pixels). Each patch is treated as an individual token, similar to words in a text sequence. These patches are then linearly projected into a higher-dimensional space (768 dimensions), effectively converting the image into a series of patch embeddings. This process ensures that the model can process and understand each part of the image separately. 2. **Positional Embedding**: Since transformers are sequence models and do not inherently have any notion of spatial relationships, positional embeddings are added to each patch embedding. These positional embeddings provide information about the relative position of each patch in the original image, ensuring that the model can account for spatial arrangement while processing the image. ```python class ImageEmbeddings(nn.Module): def __init__( self, embed_dim: int = 768, patch_size: int = 16, image_size: int = 224, num_channels: int = 3, ): super(ImageEmbeddings, self).__init__() self.embed_dim = embed_dim self.patch_size = patch_size self.image_size = image_size self.num_channels = num_channels self.patch_embedding = nn.Conv2d( in_channels=self.num_channels, out_channels=self.embed_dim, kernel_size=self.patch_size, stride=self.patch_size, padding="valid", ) self.num_patches = (self.image_size // self.patch_size) ** 2 self.position_embedding = nn.Embedding(self.num_patches, self.embed_dim) self.register_buffer( "position_ids", torch.arange(self.num_patches).expand((1, -1)), persistent=False, ) def forward(self, x: torch.Tensor) -> torch.Tensor: # x: (Batch size, Channels, Height, Width) -> (Batch size, Embed dim, Height, Width) x = self.patch_embedding(x) # x: (Batch size, Embed dim, Height, Width) -> (Batch size, Height * Width, Embed dim) x = x.flatten(2).transpose(1, 2) # Add position embeddings x = x + self.position_embedding(self.position_ids) return x ``` 3. **Self-Attention Mechanism**: Once the image has been converted into a series of patch embeddings with positional information, a multi-head self-attention mechanism is applied. In this context, since each patch can attend to all other patches in the image, no masking is required, unlike in tasks such as language modeling where padding or causal masking may be necessary. The attention mechanism enables the model to weigh the importance of different patches relative to each other, allowing it to focus on significant regions of the image. 
This setup captures both local and global interactions across patches, and the use of multiple heads enables the model to learn various relationships in parallel, enriching the understanding of the image’s structure. ```python class Attention(nn.Module): def __init__( self, embed_dim: int = 768, num_heads: int = 12, qkv_bias: bool = False, attn_drop_rate: float = 0.0, proj_drop_rate: float = 0.0, ): super(Attention, self).__init__() assert ( embed_dim % num_heads == 0 ), "Embedding dimension must be divisible by number of heads" self.num_heads = num_heads head_dim = embed_dim // num_heads self.scale = head_dim**-0.5 self.wq = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) self.wk = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) self.wv = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop_rate) self.wo = nn.Linear(embed_dim, embed_dim) self.proj_drop = nn.Dropout(proj_drop_rate) def forward(self, x: torch.Tensor) -> torch.Tensor: # x: (Batch size, Num patches, Embed dim) batch_size, n_patches, d_model = x.shape q = ( self.wq(x) .reshape(batch_size, n_patches, self.num_heads, d_model // self.num_heads) .transpose(1, 2) ) k = ( self.wk(x) .reshape(batch_size, n_patches, self.num_heads, d_model // self.num_heads) .transpose(1, 2) ) v = ( self.wv(x) .reshape(batch_size, n_patches, self.num_heads, d_model // self.num_heads) .transpose(1, 2) ) attn = (q @ k.transpose(-2, -1)) * self.scale attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = (attn @ v).transpose(1, 2).reshape(batch_size, n_patches, d_model) x = self.wo(x) x = self.proj_drop(x) return x ``` 4. **Feed-Forward Network (MLP)**: After the attention mechanism, the patch embeddings pass through a multi-layer perceptron (MLP). This feed-forward network processes each patch embedding individually, helping the model to further refine the visual features extracted from the image. It consists of two linear layers with a non-linear activation function in between, followed by dropout to prevent overfitting. ```python class MLP(nn.Module): def __init__( self, in_features: int, hidden_features: int, drop_rate: float = 0.0, ): super(MLP, self).__init__() self.fc1 = nn.Linear(in_features, hidden_features) self.act = nn.GELU() self.fc2 = nn.Linear(hidden_features, in_features) self.drop = nn.Dropout(drop_rate) def forward(self, x: torch.Tensor) -> torch.Tensor: # x: (Batch size, Num patches, Embed dim) x = self.fc1(x) x = self.act(x) x = self.drop(x) x = self.fc2(x) x = self.drop(x) return x ``` 5. **Layer Normalization and Residual Connections**: To stabilize training and improve performance, layer normalization is applied before both the attention and MLP layers. Additionally, residual connections are employed, where the input to each block is added to the block’s output, allowing the model to retain information from earlier layers and avoid vanishing gradients. These techniques improve the model’s ability to learn efficiently, even with deep architectures. 
<h2>Image Encoder Layer</h2> ```python class ImageEncoderLayer(nn.Module): def __init__( self, embed_dim: int = 768, num_heads: int = 12, mlp_ratio: int = 4, qkv_bias: bool = False, drop_rate: float = 0.0, attn_drop_rate: float = 0.0, ): super(ImageEncoderLayer, self).__init__() self.norm1 = nn.LayerNorm(embed_dim, eps=1e-6) self.attn = Attention( embed_dim=embed_dim, num_heads=num_heads, qkv_bias=qkv_bias, attn_drop_rate=attn_drop_rate, proj_drop_rate=drop_rate, ) self.norm2 = nn.LayerNorm(embed_dim) self.mlp = MLP( in_features=embed_dim, hidden_features=int(embed_dim * mlp_ratio), drop_rate=drop_rate, ) def forward(self, x: torch.Tensor) -> torch.Tensor: # x: (Batch size, Num patches, Embed dim) residual = x x = self.norm1(x) x = residual + self.attn(x) residual = x x = self.norm2(x) x = residual + self.mlp(x) return x ``` This architecture provides the flexibility to learn both fine-grained details and abstract patterns across images, making it effective for encoding visual information in multi-modal tasks like CLIP. The combination of patch embeddings, attention, and feed-forward networks allows the model to understand and represent images in a way that can be directly compared to text embeddings. --DIVIDER--## Text Encoder The text encoder in CLIP is responsible for converting input text into a fixed-dimensional embedding that can be aligned with image embeddings. CLIP can use various transformer-based models like **BERT** or **GPT** as its text encoder. These models tokenize the input text, turning each word or subword into an embedding vector that captures semantic meaning. To handle word order, **positional encodings** are added to these token embeddings, ensuring the model understands the structure of the sentence. A **multi-head self-attention** mechanism then allows each token to attend to all others in the sequence, capturing both local and global dependencies in the text. Finally, the output is refined through a **feed-forward network**, with **layer normalization** and **residual connections** applied to stabilize training and maintain information across layers. This architecture ensures the model generates high-quality embeddings that represent the meaning of the text, ready to be aligned with the corresponding image embeddings. In this implementation, we chose GPT-2 as our text encoder: ```python configuration = GPT2Config( vocab_size=50257, n_positions=max_seq_length, n_embd=embed_dim, n_layer=num_layers, n_head=num_heads, ) self.text_encoder = GPT2Model(configuration) ```--DIVIDER--## Data Fusion Once the image and text inputs have been encoded separately by their respective encoders, CLIP projects both modalities into a shared embedding space. This process, known as **data fusion**, allows the model to align visual and textual representations so that they can be directly compared. To achieve this, both the image and text embeddings are passed through a **projection layer** that maps them into the same dimensional space. By doing so, the model can compute similarities between images and text, enabling it to link corresponding image-text pairs and differentiate between unrelated ones. This shared space is crucial for tasks like zero-shot image classification and cross-modal retrieval, where the model must understand and relate visual and textual information in a unified way. 
```python self.image_projection = nn.Linear(img_embed_dim, embed_dim) self.text_projection = nn.Linear(embed_dim, embed_dim) ``` --DIVIDER--# Contrastive Loss CLIP’s training process relies on **contrastive learning**, which is designed to align image and text embeddings by maximizing the similarity between matched pairs while minimizing it for mismatched pairs. This is achieved through the use of a **contrastive loss** function, which encourages the model to bring together the embeddings of corresponding images and text in the shared space. During training, the model is given a batch of image-text pairs. For each pair, the model computes similarities between the image embedding and all the text embeddings in the batch, as well as between the text embedding and all the image embeddings. The goal is to maximize the similarity for the correct image-text pair and minimize it for all incorrect pairs. This encourages the model to learn meaningful correspondences between images and descriptions, ensuring that related images and text are positioned closely in the embedding space, while unrelated pairs are pushed apart. ![clip-loss-1.png](clip-loss-1.png) The contrastive loss can be implemented as follows: ```python def clip_loss(image_embeddings, text_embeddings): # Normalize embeddings image_embeddings = F.normalize(image_embeddings, dim=-1) text_embeddings = F.normalize(text_embeddings, dim=-1) # Compute logits by multiplying image and text embeddings (dot product) logits_per_image = image_embeddings @ text_embeddings.T logits_per_text = text_embeddings @ image_embeddings.T # Create targets (diagonal is positive pairs) num_samples = image_embeddings.shape[0] labels = torch.arange(num_samples, device=image_embeddings.device) # Compute cross-entropy loss for image-to-text and text-to-image directions loss_image_to_text = F.cross_entropy(logits_per_image, labels) loss_text_to_image = F.cross_entropy(logits_per_text, labels) # Final loss is the average of both directions loss = (loss_image_to_text + loss_text_to_image) / 2.0 return loss ``` --DIVIDER--# Solving Multiple Choice Questions <h2>Model Training and Evaluation on Image-Based MCQ Task</h2> One of the practical use cases for CLIP-like models is solving multiple-choice questions (MCQs) where the question is an image and the answer options are in text form. This setup highlights CLIP’s ability to bridge visual and textual data, aligning image features with corresponding text descriptions to select the most relevant answer. To train the model for this type of task, we used the [Attila1011/img_caption_EN_AppleFlair_Blip](https://huggingface.co/datasets/Attila1011/img_caption_EN_AppleFlair_Blip) dataset from Hugging Face. This dataset contains pairs of images and corresponding captions, making it ideal for training models that require aligned image-text data, such as CLIP. By learning the associations between diverse visual inputs and their textual descriptions, the model can effectively map images to related text in a shared embedding space, a key component in contrastive learning frameworks. The diverse nature of the images and captions in this dataset allows the model to generalize well across various visual scenes and their textual counterparts. This ensures that the model can capture a wide range of image-text relationships, which is critical for tasks involving open-ended or unseen data, such as solving MCQs where new image-based questions are presented. 
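Conceptually, the inference step for such an image-based MCQ reduces to scoring each candidate answer against the image in the shared embedding space and picking the highest similarity. The sketch below illustrates this idea; the `encode_image`/`encode_text` method names and the tokenizer call are hypothetical stand-ins for this sketch rather than the exact interface of the implementation described here.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def answer_mcq(model, tokenizer, image: torch.Tensor, options: list[str]) -> str:
    """Return the option whose text embedding is closest to the image embedding.

    Assumes `model` exposes `encode_image` / `encode_text` methods that return
    projected embeddings in the shared space (hypothetical names for this sketch).
    """
    image_emb = F.normalize(model.encode_image(image.unsqueeze(0)), dim=-1)   # (1, D)
    tokens = tokenizer(options, padding=True, return_tensors="pt")
    text_emb = F.normalize(model.encode_text(tokens["input_ids"]), dim=-1)    # (N, D)

    similarities = (image_emb @ text_emb.T).squeeze(0)                        # (N,)
    return options[similarities.argmax().item()]

# Hypothetical usage, assuming a trained model, a GPT-2 tokenizer with padding enabled,
# and a preprocessed image tensor:
# best = answer_mcq(clip_model, gpt2_tokenizer, image_tensor,
#                   ["a dog on a beach", "a red car", "a bowl of fruit"])
```
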
After training on this dataset, the model was evaluated on a multiple-choice question (MCQ) dataset where it was tasked with selecting the correct text-based answer for each image. Below, we provide an example visualization, showing the images from the MCQ dataset, the model's answer choices, and its selected answer. ![mcq1.png](mcq1.png)--DIVIDER--# Conclusion In this work, we provided a detailed walkthrough of implementing Contrastive Language-Image Pretraining (CLIP) from scratch, covering both the architectural design and the training process. By leveraging contrastive learning, the model effectively aligns image and text embeddings in a shared space, enabling it to generalize across various multi-modal tasks without the need for task-specific fine-tuning. We demonstrated the versatility of CLIP through its ability to handle both visual and textual information, and further evaluated its performance on a multiple-choice question (MCQ) dataset. This implementation highlights the powerful capabilities of CLIP in multi-modal learning, laying the foundation for future exploration in fields such as computer vision, natural language processing, and cross-modal retrieval.--DIVIDER--# References Radford, Alec, et al. “[Learning Transferable Visual Models From Natural Language Supervision.](https://arxiv.org/pdf/2103.00020)” International Conference on Machine Learning (ICML), 2021. [Attila1011/img_caption_EN_AppleFlair_Blip Dataset](https://huggingface.co/datasets/Attila1011/img_caption_EN_AppleFlair_Blip)
82lYI7TWVtvP
3rdson
cc-by
Core concepts of Agentic AI and AI agents
![AIClips675547-1024x585.png](AIClips675547-1024x585.png) Over the past year, there has been immense hype and discussion around AI, particularly **GenAI**, **Agentic AI**, and **RAG systems**. This buzz has sparked significant shifts across industries, with everyone scrambling to understand: *What exactly are agents? What defines "agentic AI"?* How do we distinguish an AI system as "agentic" versus a non-agentic tool? We’ve seen companies racing to adopt AI, startups pitching "agents-as-a-service," and a flood of new frameworks. But amid the noise, the fundamentals often get lost. That’s exactly why we’re breaking it all down in this article. In this article, we will be **explaining Agentic AI, AI agents, and recent GenAI trends** in the simplest way possible. Here’s what we’ll cover: 1. **Agentic AI** – What makes it revolutionary? 2. **AI Agents** – Core components that define them 3. **LLM Frameworks & Workflows** – The engine behind the magic We’ll also unpack key concepts like: - **Memory & Context Management** (How agents "remember") - **Prompt Engineering** (How to instruct AI Agents) - **Multi-Agent Communication** (When agents team up) - **Real-World Applications** (Where agents *actually* shine today) Stick with us until the end, we’ll make sure you walk away with clarity on how these pieces fit into the bigger AI landscape. ![gen AI pub for all.webp](gen%20AI%20pub%20for%20all.webp) ## **So, What Is Agentic AI?** To understand this, let's start with the basics: **What are AI agents?** **AI agents** are systems powered by AI (typically LLMs) that interact with software, data, *and even hardware* to achieve specific goals. Think of them as proactive problem-solvers: they autonomously complete tasks, make decisions, and adapt to new information with no micromanaging required. It's crucial to note that what makes a system truly "agentic" goes beyond just behavior, the implementation matters too. This is because traditional automated systems using if-else logic can mimic agent-like behavior, but true AI agents are distinguished by how their decisions are made. Instead of following pre-programmed conditional logic, they use LLMs to actively make decisions and determine their course of action. This fundamental difference in implementation (LLM-driven decision making versus traditional programming logic) is what sets genuine AI agents apart from sophisticated automation. Unlike basic chatbots or static AI tools, agents **plan** and **decide** independently (guided by the user's input) until they nail the best result. But how do they achieve this? They achieve this through their brain(LLM). The LLM lets them: 1. **Observe** their environment (e.g., data inputs, user requests). 2. **Orient** themselves to understand how to use their tools. 3. **Decide** on the optimal action. 4. **Execute** that action. In short, they’re *goal-driven, self-directed systems* and Agentic AI is the field focused on building and refining these autonomous agents. --- ## **What’s an Autonomous Agent?** An autonomous agent is an advanced form of AI that can understand and respond to enquiries and then take action **without human intervention**. When given an objective, it can: - Generate tasks for itself, - Complete those tasks. - Move to the next one. ...until the objective is fully achieved. --- ## **Autonomous Agents vs. AI Agents** While all autonomous agents are technically AI agents, **not all AI agents are autonomous**. 
Here’s the breakdown: - **AI agents** include assistive tools like copilots, which rely on human input to complete tasks. - **Autonomous agents** work independently, needing little to no human involvement. Note that both can learn and make decisions based on new information, but **only autonomous agents** can chain multiple tasks in sequence. --- ### **Let’s See an AI Agent in Action** Imagine an AI assistant built into your laptop that seamlessly manages tasks for you. For example: - You ask it, *“Do I have any emails from Abhy?”* The agent **interprets** your request, **decides** to connect to your Gmail API, scans your inbox, and instantly pulls up every email from Abhy. - Or, *“What’s trending in NYT news today?”* The agent **recognises** it needs to search the web, crawls trusted sources (like NYT’s API), and spits out a bullet-point summary of key trends. This system is called an “agent” because at **every step**, it uses its brain (the LLM) to: 1. **Interpret** your goal 2. **Decide** which tools (email, web search, calendar) to use 3. **Execute** actions end-to-end Unlike basic chatbots that wait for step-by-step commands, AI agents **autonomously bridge intent and outcome**. They leverage tools, analyse context, and keep iterating until the job’s done. *Note*: This example is **software-based**, but there are also **hardware-based AI agents** (physical ones), like robots or self-driving cars. These use cameras, mics, and sensors to capture real-world data—then act on it (e.g., a warehouse robot navigating around obstacles). --- ## **Agentic AI** Now that we’ve nailed what agents are, agentic AI becomes straightforward. At its core, **Agentic AI** is the *autonomy engine* for AI systems. It’s the intelligence and methodology that lets agents act independently, the “how” behind their ability to plan, decide, and execute without hand-holding. Think of it as the **framework** (and mindset) for building agents that truly “think for themselves.” --- ## **Core Components of AI Agents** We’ll break down the core components into two categories: ### **1. Architectural Components of AI Agents** These are the foundational building blocks in every AI agent’s design. They include: **1. Large Language Models (LLMs): The Brain** LLMs are the powerhouses behind AI agents, similar to the human brain. They’re responsible for: - Understanding user input - Deciding which tools to use - Generating final answers after processing. **2. Tools Integration: The “Hands”** Tools let AI agents interact with the digital world. In our earlier example, tools included Gmail APIs and web crawlers, which were used to fetch data from external sources. **3. Memory Systems: The “Recall”** Memory allows agents to retain and reuse information across interactions (think personalised context!). Without it, an agent is like a goldfish, forgetting every conversation instantly. ### AI agents can have any of the following memories: 1. **Short-term Memory:** Keeps track of the ongoing conversation, enabling the AI to maintain coherence within a single interaction. 2. **Long-term Memory:** Stores information across multiple interactions, allowing the AI to remember user preferences, past queries, and more. #### Long-term memory can further be split into 3 types - Episodic Memory: Remembers specific past events or interactions, enabling the AI to recall and reference previous exchanges. 
- Semantic Memory: Holds general knowledge and facts that the AI can draw upon to provide informed responses - Procedural Memory: This is defined as anything that has been codified into the AI agent by us. It may include the structure of the system prompt, the tools we provide the agent, etc. ![1_l0oRfSsoJXjaexRmM3FQwg.png](1_l0oRfSsoJXjaexRmM3FQwg.png) --- ### **2. Cognitive Components of AI Agents** These define how agents “think” and act: **1. Perception: The “Senses”** This is the AI agent’s ability to gather and interpret data from its surroundings. Much like human senses, perception allows the agent to ‘see’ and ‘hear’ the world around it. In the AI agent example above, the agent displayed this perception ability by interacting with APIs, databases, or web services to gather relevant information. **2. Reasoning: The “Logic”** Once an AI agent has gathered the relevant data through perception, it needs to make sense of that data. This is where reasoning comes into play. Reasoning involves analysing the collected data, identifying patterns, and drawing conclusions. It’s the process that allows an AI agent to transform raw data into actionable insights. **3. Action: The “Doing”** This is the ability of the AI agent to bring its decisions to life. The ability to take action based on perception and reasoning is what truly makes an AI agent autonomous. Actions can be physical, like a robot moving an object, or digital, such as a software agent sending an email. **4. Feedback & Learning: The “Growth”** One of the most fascinating aspects of AI agents is their ability to learn and improve over time. Learning allows AI agents to adapt to new situations, refine their decision-making processes, and become more efficient at their tasks. ![PHOTO-2025-02-19-16-10-59.jpg](PHOTO-2025-02-19-16-10-59.jpg) --- ## **Multi-Agent Systems (MAS)** Just like the name suggests, a **multi-agent system (MAS)** involves multiple AI agents teaming up to tackle tasks for a user or system. Instead of relying on one “do-it-all” agent, MAS uses a squad of specialised agents working together. Thanks to their flexibility, scalability, and domain expertise, MAS can solve **complex real-world problems**. --- ## **MAS Architectures** ### **1. Centralized Networks** In centralised networks, a central unit contains the global knowledge base, connects the agents, and oversees their information. A strength of this structure is the ease of communication between agents and uniform knowledge. A weakness of this centralisation is the dependence on the central unit; if it fails, the entire system of agents fails. **Example**: Like a conductor in an orchestra, directing every musician. ### **2. Decentralized Networks** In a decentralised network, there is no central agent or unit that controls or oversees the information. The agents share information with their neighbours in a decentralised manner. Some benefits of decentralised networks are robustness and modularity. The failure of one agent does not cause the overall system to fail since there is no central unit. One challenge of decentralised agents is coordinating their behaviour to benefit other cooperating agents. **Example**: A flock of birds adjusting flight paths without a leader. --- ## **Next Up: Building AI Agents** Now that we’ve covered *what* AI agents are and *how* they work, let’s tackle the big question: **How do you actually build one?** (And which frameworks/libraries make it easier?) 
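Before diving into frameworks, it helps to see how compact the core perceive → reason → act loop described above really is. Below is a deliberately minimal, framework-free sketch in Python; the tool functions, the prompt format, and the `llm` callable are hypothetical placeholders rather than any particular library’s API.

```python
from typing import Callable

def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"      # placeholder tool

def read_gmail(query: str) -> str:
    return f"(stub) emails matching: {query}"      # placeholder tool

TOOLS = {"web_search": web_search, "read_gmail": read_gmail}

def run_agent(llm: Callable[[str], str], goal: str, max_steps: int = 5) -> str:
    """`llm` is any callable that takes a prompt string and returns the model's reply."""
    observations = []
    for _ in range(max_steps):
        # Reason: let the LLM pick the next action given the goal and what it has seen so far.
        decision = llm(
            f"Goal: {goal}\n"
            f"Observations so far: {observations}\n"
            f"Available tools: {list(TOOLS)}\n"
            "Reply with 'TOOL <name> <input>' or 'FINAL <answer>'."
        )
        if decision.startswith("FINAL"):
            return decision[len("FINAL"):].strip()                # Act: deliver the answer
        _, tool_name, tool_input = decision.split(" ", 2)         # Decide which tool to call
        result = TOOLS[tool_name](tool_input)                     # Act: execute the tool
        observations.append(                                      # Observe: feed the result back in
            {"tool": tool_name, "input": tool_input, "result": result}
        )
    return "Stopped after reaching the step limit."
```

Real frameworks add the pieces this sketch glosses over: structured tool schemas, retries, memory, and guardrails.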
--- ### **Building AI Agents: Frameworks & Tools** Python dominates the AI/ML world, so familiarity with it unlocks countless SDKs and frameworks. But even non-coders can build agents using **drag-and-drop GUI tools**. Let’s break down the options: --- ## **Code-Based Frameworks** For developers who want granular control over workflows, memory, and multi-agent collaboration: ### **Code-Based Frameworks** 1. **LangGraph** Developed by the team behind LangChain, LangGraph takes things further by letting you design AI workflows as *visual graphs*. Imagine building a customer support system where one agent handles initial queries, another escalates complex issues, and a third schedules follow-ups; all connected like nodes on a flowchart. It’s perfect for multi-step processes that need to "remember" where they are in a task. 🔗 [Docs](https://langchain-ai.github.io/langgraph/) | [GitHub](https://github.com/langchain-ai/langgraph) 2. **Microsoft AutoGen** AutoGen is Microsoft’s answer to collaborative AI. With Microsoft AutoGen, you can have a system where one agent writes code, another reviews it for errors, and a third tests the final script. These agents debate, self-correct, and even use tools like APIs or calculators. It is ideal for coding teams or research projects where multiple perspectives matter. 🔗 [Docs](https://microsoft.github.io/autogen/stable/) | [GitHub](https://github.com/microsoft/autogen) 3. **CrewAI** CrewAI organizes agents into specialized roles, like a startup team. For example, a "Researcher" agent scours the web for data, a "Writer" drafts a report, and an "Editor" polishes it. They pass tasks back and forth, refining their work until it’s ready to ship with no micromanaging required. 🔗 [Docs](https://docs.crewai.com/introduction) | [GitHub](https://github.com/crewAIInc/crewAI) 4. **LlamaIndex** Formerly called GPT Index, LlamaIndex acts like a librarian for your AI agents. If you need your agent to reference a 100-page PDF, a SQL database, and a weather API, LlamaIndex is the framework to go to. It helps it fetch and connect data from all these sources, ensuring responses are informed and accurate. 🔗 [Docs](https://docs.llamaindex.ai/en/stable/) | [GitHub](https://github.com/run-llama/llama_index) 5. **Pydantic AI** Built by the Pydantic team, this framework acts as a data validator for your AI workflows. If your agent interacts with APIs, Pydantic AI checks that inputs and outputs match the expected data format. Like ensuring a date field isn’t accidentally filled with text. No more "garbage in, garbage out" chaos. 🔗 [Docs](https://ai.pydantic.dev/) | [GitHub](https://github.com/pydantic/pydantic-ai) 6. **OpenAI Swarm** OpenAI’s experimental Swarm framework explores how lightweight AI agents can solve tasks collaboratively. One agent gathers data, another analyzes it, and a third acts on it. It’s not ready for production yet but it's worth mentioning. 🔗 [GitHub](https://github.com/openai/swarm) --- ### **Visual (GUI) Frameworks** 1. **Rivet** Rivet is like digital LEGO for AI. You just have to drag and drop nodes to connect ChatGPT to your CRM, add a "send email" action, and voilà, you’ve built an agent that auto-replies to customer inquiries. Perfect for business teams who want automation without coding. 🔗 [Website](https://rivet.ironcladapp.com/) 2. **Vellum** Vellum is the Swiss Army knife for prompt engineers. It allows you to test 10 versions of a prompt side-by-side, see which one gives the best results, and deploy it to your agent; all through a clean interface. 
It’s like A/B testing for AI workflows. 🔗 [Website](https://www.vellum.ai/) 3. **Langflow** Langflow is the drag-and-drop alternative to LangChain. You can just drag a "web search" node into your workflow, link it to a "summarize" node, and watch your agent turn a 10-article search into a crisp summary. It is great for explaining AI logic to your CEO. 🔗 [Website](https://www.langflow.org/) 4. **Flowise AI** Flowise AI is the open-source cousin of Langflow. You can use it to build a chatbot that answers HR questions by just linking your company handbook to an LLM—no coding, just drag, drop, and deploy. 🔗 [Website](https://flowiseai.com/) 5. **Chatbase** Chatbase lets you train a ChatGPT-like assistant on your own data. Upload your FAQ PDFs, tweak the design to match your brand, and embed it on your website. It’s like having a 24/7 customer service rep who actually reads the manual. 🔗 [Website](https://www.chatbase.co/) ### **Factors to consider before choosing a framework** 1. **Use Case** What’s your agent’s job? A coding assistant needs AutoGen’s teamwork, while a document chatbot thrives with Langflow’s simplicity. 2. **Criticality** Are you building a mission-critical system? Opt for battle-tested tools like LangGraph. If you’re just experimenting, experimental frameworks like Swarm are fine. 3. **Team Skills** If you have Python pros on your team, go for code-based frameworks; if you don’t, GUI tools like Rivet or Chatbase will save the day. 4. **Time/Budget** Need it yesterday? No-code tools speed things up. Got resources? Custom code offers long-term flexibility. 5. **Integration** If you need to plug in connectors like the Slack or Jira APIs, check if the framework supports those connectors out of the box. 6. **AIOps** If you plan to scale to thousands of users, prioritize frameworks with built-in monitoring, logging, and auto-scaling. --- ## **LLM Workflows: The “Conductor” Behind the Magic** LLM workflows are essentially a series of interconnected processes that ensure an AI system can understand user intent, maintain context, break down tasks, collaborate among agents, and ultimately deliver actionable results. Think of LLM workflows as the *recipe* your AI agents follow. Just like baking a cake requires mixing, baking, and frosting steps, LLM workflows chain prompts, tools, and logic into a sequence. For example, a customer support agent might: 1. **Analyze** a user’s complaint, 2. **Search** past tickets for similar issues, 3. **Draft** a response, and 4. **Escalate** if it’s urgent. Frameworks like LangGraph or Microsoft AutoGen let you orchestrate these steps like a playlist with fewer coding headaches. --- ## **Context Management: How Agents “Remember”** Context management is the mechanism by which an AI system keeps track of ongoing interactions and relevant data. It ensures that the conversation or task remains coherent over multiple turns. Ever chatted with a bot that forgets your name two messages later? *That’s bad context management*. Modern agents use **memory systems** to retain details across interactions. For instance: - A travel agent remembers your allergy to seafood when booking restaurants. - A project manager agent tracks deadlines from prior chats. Tools like LlamaIndex or LangChain’s memory modules act as the agent’s “sticky notes”, keeping conversations coherent and personalized. 
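As a simplified illustration of what those memory modules do under the hood, the sketch below separates a short-term window of recent turns from a naive long-term lookup. It is a toy example: production systems persist the history in a database and typically use embedding-based retrieval rather than keyword matching.

```python
class ConversationMemory:
    """Toy illustration of short-term (windowed) vs. long-term (full-history) memory."""

    def __init__(self, window: int = 6):
        self.history: list[dict] = []        # in production this would live in a database
        self.window = window                 # how many recent turns count as "short-term"

    def remember(self, role: str, content: str) -> None:
        self.history.append({"role": role, "content": content})

    def short_term(self) -> list[dict]:
        return self.history[-self.window:]   # recent turns fed back into every prompt

    def long_term(self, keyword: str) -> list[dict]:
        # Naive keyword recall; real systems use embeddings / vector search instead.
        return [t for t in self.history if keyword.lower() in t["content"].lower()]

memory = ConversationMemory(window=4)
memory.remember("user", "I'm allergic to seafood.")
memory.remember("assistant", "Noted, I'll avoid seafood restaurants.")
# ...many turns later...
relevant = memory.long_term("seafood")       # surfaces the allergy even after it leaves the window
```
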
--- ### **Prompt Engineering: Talking to AI Like a Pro** Prompt engineering involves crafting and refining the inputs given to an LLM so that it produces the most relevant and accurate outputs. Prompt engineering isn’t just typing questions. It’s **crafting instructions LLMs can’t ignore**. For example: - *Weak prompt*: “Summarize this article.” → Gets a generic response. - *Strong prompt*: “Summarize this article in 3 bullet points for a CEO. Focus on financial risks.” → Gold. Tools like Vellum or PromptFlow help you test and refine prompts like a mad scientist. --- ### **Task Planning & Decomposition: Breaking Down the Impossible** Agents don’t solve “Plan my wedding” in one go. They **chop big tasks into bite-sized steps**: 1. Book venue → 2. Create guest list → 3. Order cake → 4. Send invites. Task planning and decomposition involve breaking down a complex problem or query into smaller, more manageable subtasks. This methodical approach helps AI systems tackle complicated challenges step by step. CrewAI’s role-based agents excel here. --- ### **Multi-Agent Communication: When Bots Team Up** Multi-agent communication refers to the way multiple AI agents interact and share information with one another to collaboratively solve a problem. This is particularly useful in systems designed to handle complex or distributed tasks. Picture a hospital where one agent diagnoses symptoms, another checks drug interactions, and a third books follow-ups. **Multi-agent systems** let specialists collaborate: - **Centralized**: Like a CEO assigning tasks (AutoGen’s manager-worker teams). - **Decentralized**: Like Uber drivers coordinating via an app (OpenAI Swarm’s experimental approach). --- ### **Real-World Applications: Where Agents *Actually* Shine** - **Customer Support**: Chatbots that resolve 80% of queries without humans. - **Healthcare**: Diagnostic agents cross-referencing symptoms with medical journals. - **Finance**: Fraud-detection agents scanning transactions in real time. - **Logistics**: Warehouse robots coordinating deliveries (physical agents + decentralized MAS). Even **creatives** use them—like AI writing teams drafting blog outlines while you sip coffee. --- ## **Wrapping Up: The Future is Agentic** We've covered a lot of ground; from the basics of AI agents to the nitty-gritty of building them. Here's the key takeaway: **Agentic AI isn't just another tech buzzword**. It's a fundamental shift in how we interact with AI systems. Whether you're a developer diving into frameworks like LangGraph and AutoGen, or a business leader exploring no-code tools like Rivet and Chatbase, there's never been a better time to jump into the agentic AI revolution. Remember: The best AI agent isn't necessarily the most complex one. It's the one that **solves real problems** while being reliable, scalable, and (most importantly) actually useful in the real world. *The future of AI isn't just about smarter algorithms, it's about systems that think, plan, and act with purpose*. And that future is already here.
8eAX8A1gfdkJ
ready-tensor
cc-by-sa
Transformer Models for Automated PII Redaction: A Comprehensive Evaluation Across Diverse Datasets
![personal-records_tiny.jpg](personal-records_tiny.jpg) --DIVIDER--# TL;DR We automated PII redaction using transformer models like RoBERTa and DeBERTa, assessing their effectiveness on five datasets. RoBERTa was selected for its balance of performance and efficiency. The study introduced a redaction script combining RoBERTa, regex, and the Faker library to ensure data privacy by replacing real PII with fictitious yet plausible data.--DIVIDER--# Introduction In today’s digital age, the protection of Personally Identifiable Information (PII) has become a crucial concern for organizations and individuals alike. The increasing prevalence of digital records and the growing reliance on data-driven technologies have elevated the risk of exposing sensitive personal data. Unauthorized access to PII can lead to privacy breaches, identity theft, and significant legal and financial repercussions. As the amount of digital data grows, manual methods for PII redaction are no longer feasible, demanding more efficient, automated solutions. In this study, we tackle the challenge of automatically identifying and redacting PII using state-of-the-art transformer models. We trained six different models—ALBERT, DistilBERT, BERT, RoBERTa, T5, and DeBERTa—on five datasets to evaluate their effectiveness in detecting and redacting various types of PII. Our aim is to provide a comprehensive comparison of these models' performance and to present a robust approach to ensuring the security and privacy of personal data in large-scale digital documents.--DIVIDER--# Datasets Our study utilizes five datasets for training and evaluation. The [n2c2 2014 (National NLP Clinical Challenges)](https://portal.dbmi.hms.harvard.edu/projects/n2c2-2014/) dataset specifically focuses on the de-identification of protected health information (PHI) in medical records and is widely recognized in clinical natural language processing research. However, it is important to note that the n2c2 2014 dataset is not publicly available. Access can be requested through the provided link. Key characteristics of the n2c2 dataset: Origin: The dataset was originally created for the n2c2 de-identification challenge, which encourages advancements in automatically removing personal health information from medical records. Entities: The n2c2 dataset covers several types of PHI, including: - PERSON (consolidated from PATIENT and DOCTOR) - LOCATION - DATE - PHONE - EMAIL - Additional entities like AGE, IDNUM, and HOSPITAL The comprehensive nature of the dataset, with its wide variety of PHI types, allows for a detailed evaluation of PHI redaction techniques. We focused on the most critical types of PII—PERSON, LOCATION, PHONE_NUMBER, DATE, and EMAIL—in our redaction models, ensuring a manageable yet impactful scope for model training and validation. --DIVIDER--In addition to the n2c2 dataset, we expanded our study by incorporating four more datasets to enhance the diversity of data and evaluate the performance of our models across different domains. These datasets include a mixture of real-world and synthetic data, focusing on PII detection and redaction in various contexts. [CoNLL-2003](https://huggingface.co/datasets/eriktks/conll2003) The CoNLL-2003 dataset is widely used for Named Entity Recognition (NER) tasks. While primarily focused on identifying entities such as PERSON, LOCATION, ORGANIZATION, and MISC, it serves as a useful benchmark for training models on real-world text, helping improve general PII detection capabilities. 
[PII Masking 300k](https://huggingface.co/datasets/ai4privacy/pii-masking-300k) This dataset contains real and synthetic text samples with a variety of PII types, such as PERSON, LOCATION, PHONE_NUMBER, EMAIL, and more. While the full dataset contains 300,000 samples across multiple languages, we specifically used only the English text, reducing the dataset size to approximately 37,000 samples. This subset allowed us to focus on English PII redaction while maintaining a manageable dataset size. [Synthetic PII Finance Multilingual](https://huggingface.co/datasets/gretelai/synthetic_pii_finance_multilingual) This dataset is designed for multilingual PII redaction tasks in the financial domain, offering a diverse set of texts with financial jargon and various types of PII such as PERSON, DATE, IDNUM, PHONE_NUMBER, and LOCATION. Its multilingual nature allows models to generalize across languages, increasing the scope of PII detection. [Synthetic PII Dataset](https://github.com/microsoft/presidio-research/blob/master/data/synth_dataset_v2.json) (Presidio) The final dataset we used is available through Microsoft’s Presidio tool under its official GitHub repository. This dataset contains synthetic text annotated with various types of PII. It provides a valuable resource for evaluating PII detection and redaction models in a controlled and diverse synthetic environment.--DIVIDER--# Results In this evaluation, we will focus on macro recall as the main metric because, in Named Entity Recognition (NER), recall is often the most important metric. This is due to the fact that missing important entities (false negatives) can have significant consequences in real-world applications, making it crucial to maximize the identification of all relevant entities. :::info{title="Info"} # What is Macro Recall? **Macro recall** calculates the recall for each class independently and then takes the unweighted average across all classes. This means that each class contributes equally to the final recall score, regardless of how many instances are in each class. It is particularly useful in tasks like **Named Entity Recognition (NER)**, where you want to ensure that the model performs well across both majority and minority classes. In NER, failing to detect an entity (false negative) can have a significant impact, which is why macro recall is often a key focus. ::: ## Macro Recall for models/datasets ![macro_recall.svg](macro_recall1.svg) Following the comparison of the overall performance of our models across various datasets, we have chosen to concentrate our evaluation efforts on the CoNLL-2003 dataset. This dataset is recognized as a well-established benchmark in the field, making it ideal for a detailed assessment of our PII redaction capabilities. By focusing on this dataset, we aim to provide a clear and standardized measure of our model's effectiveness and efficiency in handling sensitive information. ## Macro Metrics on CoNLL2003 dataset ![conll2003_metrics.svg](conll2003_metrics.svg) ## Macro Recall per Class ![conll2003_recall_per_class.svg](conll2003_recall_per_class1.svg) In our evaluation, DeBERTa and RoBERTa perform very closely. However, DeBERTa is the largest model, requiring significantly more time and computational resources for training. Given that RoBERTa offers nearly the same level of performance but is smaller in size and more efficient to train, we have chosen RoBERTa for PII redaction. On the other hand, T5 performs the weakest among the models. 
Its architecture, designed for sequence-to-sequence tasks rather than token-level predictions, likely affects its effectiveness in PII redaction. This generation-oriented design makes T5 less suited for the focused, entity-specific nature of PII redaction, resulting in lower overall performance. Additionally, the detailed metrics per label for all models/datasets are available in the **`scores.zip`** file in the resources section. --DIVIDER--# Redaction of Personally Identifiable Information (PII) To provide a universally applicable PII redactor that enhances the privacy and security of any dataset, we developed a script that leverages the RoBERTa model and regular expressions to redact specific types of personally identifiable information (PII). The redaction process involves two key components: the RoBERTa model identifies names, addresses, and dates, while regex patterns focus on detecting and redacting phone numbers, emails, and URLs. Below are the steps involved: Regex Patterns: We crafted precise regex patterns to pinpoint emails, phone numbers, and URLs within the textual data. These patterns are tailored to detect a variety of formats, ensuring comprehensive coverage and robust detection. RoBERTa Model: The RoBERTa model is utilized to identify more complex PII elements such as names, addresses, and dates. This AI-driven approach enhances the accuracy of PII detection beyond the capabilities of regex alone. Faker Library: To replace identified PII, we use the Faker library, which generates realistic yet fictitious data mimicking the original information's structure. This maintains the text’s integrity and utility while ensuring all sensitive details are securely anonymized. Search and Replace Functionality: Our script incorporates a dual mechanism where PII elements identified by either the RoBERTa model or regex patterns are replaced with corresponding fake data from Faker. This ensures that no real PII remains in the text, significantly reducing privacy risks while preserving the document's readability and format. Implementation: The implementation process is straightforward, involving the reading of text data, the application of the RoBERTa and regex identification methods, and the replacement of detected PII with Faker-generated data. This methodical approach ensures that the modified text remains practical for use without compromising on privacy.--DIVIDER--# Usage To use the model on your own data, download the [GitHub repository](https://github.com/readytensor/rt_roberta_pii_redactor) and follow the `usage` section in the readme. # Example You can see the redacted text in yellow and the text used as replacement in green: --DIVIDER-- ![redaction-example.png](redaction-example.png)--DIVIDER--# Summary In this study, we address the challenge of automatically redacting Personally Identifiable Information (PII) using transformer models. Given the growing risk of privacy breaches in the digital era, efficient PII redaction is crucial. We evaluated six models—ALBERT, DistilBERT, BERT, RoBERTa, T5, and DeBERTa—across five datasets, including the widely used n2c2 2014 dataset, to measure their effectiveness in detecting and redacting PII. Our evaluation focused on macro recall due to its importance in ensuring that critical entities are not missed. The CoNLL-2003 dataset served as the primary benchmark for this evaluation, due to its popularity in Named Entity Recognition (NER). 
Additional datasets included PII Masking 300k, Synthetic PII Finance Multilingual, and the Synthetic PII Dataset from Microsoft's Presidio tool. These diverse datasets ensured that the models were evaluated across a range of contexts and data types. We found that DeBERTa and RoBERTa performed very closely, with RoBERTa being chosen for PII redaction due to its similar performance and smaller size, making it more practical for training and deployment. T5 was the weakest performer, likely due to its architecture being optimized for sequence-to-sequence tasks rather than token-level predictions. To enhance privacy, we developed a redaction script using RoBERTa, along with regular expressions and the Faker library. This script replaces detected PII elements such as emails and phone numbers with realistic fake data while maintaining the document's readability and format. The implementation is straightforward and available in the provided GitHub repository.
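To make the redaction workflow described above concrete, here is a simplified sketch of such a pipeline combining a token-classification model, regex patterns, and Faker. It is an illustrative approximation rather than the script from the linked repository: the checkpoint path is a placeholder, the entity labels depend on how the model was fine-tuned, and the regex patterns are intentionally loose.

```python
import re
from faker import Faker
from transformers import pipeline

fake = Faker()
# Placeholder path: point this at your fine-tuned RoBERTa token-classification checkpoint.
ner = pipeline("token-classification", model="path/to/finetuned-roberta", aggregation_strategy="simple")

REGEX_FAKERS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), fake.email),        # emails
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), fake.phone_number),     # phone numbers
    (re.compile(r"https?://\S+"), fake.url),                       # URLs
]
ENTITY_FAKERS = {
    "PERSON": fake.name,
    "LOCATION": lambda: fake.address().replace("\n", ", "),
    "DATE": fake.date,
}

def redact(text: str) -> str:
    # 1) Model-detected entities: replace from the end so character offsets stay valid.
    entities = sorted(ner(text), key=lambda e: e["start"], reverse=True)
    for ent in entities:
        faker_fn = ENTITY_FAKERS.get(ent["entity_group"])
        if faker_fn:
            text = text[: ent["start"]] + faker_fn() + text[ent["end"]:]
    # 2) Pattern-based PII: emails, phone numbers, URLs.
    for pattern, faker_fn in REGEX_FAKERS:
        text = pattern.sub(lambda _match, f=faker_fn: f(), text)
    return text

print(redact("Contact John Smith at john@example.com or +1 212 555 0100."))
```
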
AewIJAspNLZz
mo.abdelhamid
Ranking Fear Emotions Using EEG and Machine Learning
"\n![hero.jpg](hero.jpg)--DIVIDER--# Abstract\n\nThis publication focuses on the classification of f(...TRUNCATED)