
AI/ML Engineering Intern — RAG Chatbot

AARA Solutions
Intern · Remote · US · Posted: 2026-05-11 · Until: 2026-07-10
Job Description
About the role

We're looking for a motivated engineering intern to help us build an intelligent RAG (Retrieval-Augmented Generation) chatbot from the ground up. You'll work with real production data, AWS-native AI services, and modern LLM orchestration frameworks: a rare opportunity to ship a full AI system end to end.

What you'll build

- Replicate and sync data from a PostgreSQL RDS instance into a local embedded vector database (e.g. pgvector, ChromaDB, or SQLite-VSS)
- Implement document chunking, embedding generation, and vector indexing pipelines
- Orchestrate end-to-end RAG workflows using LangChain or AWS-native tooling (Bedrock Agents / Knowledge Bases)
- Integrate Amazon Bedrock to call foundation LLMs (Claude, Titan, etc.) and return grounded answers to users
- Build and test a conversational chatbot interface that ties retrieval and generation together

What we're looking for

- Currently pursuing a degree in Computer Science, Software Engineering, or a related field
- Solid Python skills; comfortable writing data pipelines and REST APIs
- Familiarity with SQL and relational databases (PostgreSQL experience is a plus)
- Basic AWS knowledge; IAM, RDS, S3, and Lambda experience is a bonus
- Exposure to LangChain, LlamaIndex, or similar LLM orchestration libraries
- Curiosity about LLMs, embeddings, and vector search; no PhD required, just genuine interest

Nice to have

- Experience with Amazon Bedrock, SageMaker, or any managed LLM service
- Knowledge of embedding models (text-embedding-ada-002, Titan Embeddings, etc.)
- Hands-on experience with pgvector, ChromaDB, FAISS, or Weaviate
- Prior internship or personal project involving GenAI or NLP
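To give a feel for the chunk → embed → index → retrieve loop the internship centers on, here is a minimal, framework-free Python sketch. It is an illustration under stated assumptions, not the team's actual implementation: in the role, embeddings would come from a real model (e.g. Titan Embeddings via Bedrock) and storage from pgvector or ChromaDB; here a toy hashed bag-of-words embedding and an in-memory list stand in for both.

```python
import hashlib
import math

def chunk(text, size=50, overlap=10):
    """Split text into overlapping character chunks (toy chunking strategy)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text, dim=64):
    """Toy deterministic embedding: hashed bag-of-words, L2-normalized.
    Stands in for a real model such as Titan Embeddings or ada-002."""
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    """Cosine similarity; vectors are already unit-length, so a dot product suffices."""
    return sum(x * y for x, y in zip(a, b))

class VectorIndex:
    """In-memory stand-in for a vector database: stores (chunk, vector) rows."""
    def __init__(self):
        self.rows = []

    def add(self, document):
        for piece in chunk(document):
            self.rows.append((piece, embed(piece)))

    def search(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.rows, key=lambda row: cosine(q, row[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# Index two tiny "documents", then retrieve context for a question.
index = VectorIndex()
index.add("PostgreSQL stores relational data in tables.")
index.add("ChromaDB is an embedded vector database for similarity search.")
hits = index.search("vector database")

# In a full RAG pipeline, the retrieved chunks would be stitched into a
# grounded prompt and sent to an LLM (e.g. Claude via Bedrock).
prompt = "Answer using only this context:\n" + "\n".join(hits)
```

The same loop scales up directly: swap `embed` for a Bedrock embeddings call, `VectorIndex` for a pgvector table with an approximate-nearest-neighbor index, and feed `prompt` to the foundation model.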