Generative AI Project: Student Q&A Chat System using RAG
Project Overview
This project builds a student question-answering chat system using Retrieval-Augmented Generation (RAG). It embeds the student's question, retrieves the most relevant passages from a vector database, and passes them to an LLM to generate an accurate, grounded answer.
Architecture Flow
1. A student asks a question
2. The question is converted into an embedding
3. A k-NN vector search runs in OpenSearch
4. The top matching passages are retrieved
5. The passages and the question are sent to the LLM for the final answer
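Steps 3 and 4 assume the course material has already been embedded and indexed in OpenSearch. Here is a minimal indexing sketch, not part of the original lesson: the index name (rag-index) and field names (text, embedding) are assumptions chosen to match the query code below, and the 1536 dimension matches the amazon.titan-embed-text-v1 output size. It reuses the get_embedding() helper and OpenSearch client defined in the code further down.

# Sketch: create a k-NN index and store one embedded document.
# Assumes the get_embedding() helper and "client" from the query code below;
# "rag-index", "embedding", and "text" are illustrative names.
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "text": {"type": "text"},
            "embedding": {
                "type": "knn_vector",
                "dimension": 1536  # amazon.titan-embed-text-v1 vector size
            }
        }
    }
}
client.indices.create(index="rag-index", body=index_body)

doc_text = "A Python list is an ordered, mutable collection of items."
client.index(
    index="rag-index",
    body={"text": doc_text, "embedding": get_embedding(doc_text)},
    refresh=True  # make the document searchable immediately
)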
Real-Time Scenario: Student Query Handling
A student asks a question; the system embeds it, retrieves the most relevant stored passage via vector search, and generates an answer with the LLM.
import boto3
import json
from opensearchpy import OpenSearch

# Bedrock runtime client (uses your default AWS region and credentials)
bedrock = boto3.client("bedrock-runtime")

def get_embedding(text):
    # Convert text into an embedding vector with Amazon Titan Embeddings
    body = json.dumps({"inputText": text})
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=body
    )
    result = json.loads(response["body"].read())
    return result["embedding"]

# Connect to the OpenSearch domain over HTTPS (port 443 requires SSL).
# Replace YOUR-ENDPOINT with your domain endpoint and add authentication
# (basic auth or AWS SigV4) as your cluster requires.
client = OpenSearch(
    hosts=[{"host": "YOUR-ENDPOINT", "port": 443}],
    use_ssl=True
)

# Steps 1-2: embed the student's question
query = "What is a Python list?"
query_vector = get_embedding(query)

# Step 3: k-NN vector search against the "embedding" field
search_body = {
    "size": 1,
    "query": {
        "knn": {
            "embedding": {
                "vector": query_vector,
                "k": 1
            }
        }
    }
}

response = client.search(
    index="rag-index",
    body=search_body
)

# Step 4: take the text of the top matching document
context = response["hits"]["hits"][0]["_source"]["text"]

# Step 5: ground the LLM's answer in the retrieved context
prompt = f"Answer based on context: {context}\nQuestion: {query}"

llm_response = bedrock.invoke_model(
    modelId="amazon.titan-text-lite-v1",
    body=json.dumps({"inputText": prompt})
)

# Titan text models return generations under "results", so extract the
# generated text instead of printing the raw response dict
result = json.loads(llm_response["body"].read())
answer = result["results"][0]["outputText"]

print(answer)
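With a single retrieved passage, the prompt can miss relevant detail. One common refinement, sketched below using the same client and query_vector from above, is to retrieve several passages and join them into one context; the value 3 is an arbitrary illustrative choice.

# Sketch: retrieve the top 3 passages and merge them into one context
search_body = {
    "size": 3,
    "query": {"knn": {"embedding": {"vector": query_vector, "k": 3}}}
}
hits = client.search(index="rag-index", body=search_body)["hits"]["hits"]
context = "\n".join(hit["_source"]["text"] for hit in hits)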
  
Final Output
The system returns accurate, context-aware answers to student queries using the RAG architecture.