
92% Booked
Registration Closed
Professional Certification Program
This hands-on workshop teaches participants how to build, deploy, and optimize a RAG-powered Q&A bot. Attendees will learn to set up the environment, ingest and embed documents, implement vector retrieval, and integrate a language model via FastAPI, with a focus on testing and performance optimization.
The aim is to leave participants with the end-to-end skills: ingesting and embedding documents, retrieving relevant chunks from a vector store, and serving a language model's answers through a FastAPI endpoint.
Ensure the environment is properly configured for the project by following these steps:
Spin up a Python virtual environment (venv) for isolation.
Install the necessary dependencies using pip, as shown in the sketch after these steps.
Create a .env file to store your API key for OpenAI integration.
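The exact dependency list ships with the workshop materials; the packages below are an assumed minimal set (LangChain, OpenAI, FAISS, FastAPI), and the .env handling here uses python-dotenv:

```python
# Assumed setup commands (run in a shell); the workshop's requirements file may differ:
#   python -m venv .venv && source .venv/bin/activate
#   pip install langchain langchain-openai langchain-community faiss-cpu \
#       fastapi uvicorn python-dotenv pypdf tiktoken
import os
from dotenv import load_dotenv

load_dotenv()  # reads OPENAI_API_KEY=... from the .env file in the project root
assert os.getenv("OPENAI_API_KEY"), "Add OPENAI_API_KEY to .env before running the bot"
```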
Implement document ingestion and text embedding with the following process:
Write a script to load sample text/PDFs from the ./docs/ directory.
Chunk the texts into manageable segments (e.g., 500 tokens per chunk).
Use OpenAIEmbeddings to generate embeddings and store them in FAISS for efficient retrieval (see the ingestion sketch after these steps).
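A minimal ingestion sketch for these steps; the import paths assume the current split LangChain packages (older releases expose the same classes under langchain.*), and the files under ./docs/ are whatever samples you provide:

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Load the sample .txt files from ./docs/ (swap in PyPDFLoader via loader_cls for PDFs).
documents = DirectoryLoader("./docs/", glob="**/*.txt", loader_cls=TextLoader).load()

# Split into ~500-token chunks with a small overlap so each chunk keeps local context.
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(documents)

# Embed every chunk with OpenAI and index the vectors in FAISS.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())
index.save_local("faiss_index")  # optional: persist the index between runs
```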
Set up the vector store and implement a retrieval function:
Initialize the FAISS index in your code for storing and searching vectors.
Implement a retrieve(query) function using index.similarity_search (see the sketch after these steps).
Perform quick tests by printing out retrieved chunks for sample queries.
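A retrieval sketch on top of the FAISS index built above; the default k=3 and the sample query are placeholders:

```python
def retrieve(query: str, k: int = 3):
    """Return the k chunks whose embeddings are closest to the query."""
    return index.similarity_search(query, k=k)

# Quick check: print what comes back for a sample query.
for doc in retrieve("How are documents chunked before embedding?"):
    print(doc.page_content[:200], "...")
```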
Define a QA chain that utilizes the retrieved context for answering questions:
Create a prompt template that dynamically injects the retrieved context into the prompt sent to the model.
Integrate the LLMChain (or RetrievalQA from LangChain) to process the query.
Example code (a minimal sketch that reuses the FAISS index built earlier; the RetrievalQA import paths may vary across LangChain versions, and the model name is a placeholder):
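```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Prompt template that injects the retrieved context alongside the user's question.
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Answer the question using only the context below.\n\n"
        "Context:\n{context}\n\n"
        "Question: {question}\nAnswer:"
    ),
)

# RetrievalQA wires the FAISS retriever and the chat model together ("stuff" chain by default).
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o-mini", temperature=0),  # model name is a placeholder
    retriever=index.as_retriever(search_kwargs={"k": 3}),
    chain_type_kwargs={"prompt": prompt},
)

print(qa_chain.invoke({"query": "What are the main topics in these documents?"})["result"])
```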
Test the system by asking 2–3 different questions to check if it retrieves accurate answers.
Expose the solution via an API:
Scaffold a FastAPI app with a /qa POST endpoint.
Call the retrieve function and the QA chain (llm_chain) inside this endpoint for real-time querying, as in the sketch after these steps.
Test the API via curl or Postman to ensure functionality.
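A minimal FastAPI sketch wiring the chain above into a /qa POST endpoint; the request/response field names and the module name (main) are assumptions:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Question(BaseModel):
    query: str

@app.post("/qa")
def answer_question(body: Question):
    """Run the RAG chain for one question and return the generated answer."""
    result = qa_chain.invoke({"query": body.query})
    return {"answer": result["result"]}

# Run:  uvicorn main:app --reload
# Test: curl -X POST http://localhost:8000/qa \
#         -H "Content-Type: application/json" \
#         -d '{"query": "What is retrieval-augmented generation?"}'
```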
Ensure the system is robust and extendable:
Handle cases where no results are found: return a message like “No context found.”
Experiment with different chunk sizes and k-values in retrieval for performance tuning.
Conduct a performance check: measure latency for sample queries and optimize as necessary.
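One way to cover the empty-result case and take a rough latency reading, assuming the retrieve function and qa_chain defined earlier; the sample queries are placeholders:

```python
import time

def answer(query: str) -> str:
    """Answer a query, falling back to a fixed message when retrieval finds nothing."""
    if not retrieve(query):
        return "No context found."
    return qa_chain.invoke({"query": query})["result"]

# Rough latency check over a few sample queries.
for q in ["What is RAG?", "Which embedding model is used?"]:
    start = time.perf_counter()
    answer(q)
    print(f"{q!r} answered in {time.perf_counter() - start:.2f}s")
```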
This workshop is intended for developers, data scientists, and AI enthusiasts with a basic understanding of Python programming and machine learning concepts. Familiarity with APIs, natural language processing (NLP), and working with vector databases like FAISS is beneficial but not required.
2025-06-28
5:00 PM, Indian Standard Time (IST)
Gain practical experience in building a RAG-powered Q&A bot using Python, LangChain, and FAISS.
Learn how to ingest, process, and embed documents for retrieval-based systems.
Understand how to implement vector retrieval functions and integrate them with language models.
Gain hands-on experience in deploying a Q&A bot via FastAPI.
Acquire skills in debugging, performance optimization, and handling edge cases in real-world applications.
Develop the ability to create and deploy intelligent Q&A systems for various use cases.
INR 2499
USD 65
Take your research to the next level!
Achieve excellence and solidify your reputation among the elite!
Digital Twins: Predictive …
AI in Sound Modification
AI, Biopolymers, and Smart …
AI-Powered Drug Discovery with …
PhD in Computational Mechanics from MIT with 15+ years of experience in Industrial AI. Former Lead Data Scientist at Tesla and current advisor to Fortune 500 manufacturing firms.
Instant Access
Not sure if this course is right for you? Schedule a free 15-minute consultation with our academic advisors.