RAG Patterns FAQ & Answers

Expert answers on common RAG (retrieval-augmented generation) patterns, researched from official documentation.

A

from langchain_text_splitters import RecursiveCharacterTextSplitter; text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=100, length_function=len, is_separator_regex=False); chunks = text_splitter.split_text(document_text). The chunk_size=512 sets maximum chunk length (characters not tokens - use tiktoken for token counting). chunk_overlap=100 creates 100-character overlap between chunks (~20% overlap ratio, recommended 10-20%). Default separators: ['\n\n', '\n', ' ', ''] - tries paragraph breaks first, then newlines, spaces. For token-based chunking: use tiktoken.get_encoding('cl100k_base').encode(text) to count tokens, adjust chunk_size accordingly. Overlap preserves context across chunks - crucial for semantic search. Documents object version: text_splitter.split_documents(docs). Install: pip install langchain-text-splitters>=0.2.0.
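
A minimal sketch of token-based chunking, for cases where chunk budgets are defined in tokens rather than characters. It uses the from_tiktoken_encoder constructor (requires tiktoken installed); the 512/50 values are illustrative, not prescriptive.

    # Token-based chunking: lengths are counted in cl100k_base tokens.
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    token_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
        encoding_name="cl100k_base",  # tokenizer used by recent OpenAI models
        chunk_size=512,               # maximum tokens per chunk
        chunk_overlap=50,             # ~10% token overlap between adjacent chunks
    )
    chunks = token_splitter.split_text(document_text)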

99% confidence
A

For 2-5 sentence documents (~100-250 words), use NO chunking - process entire document as single chunk. June 2025 arXiv study found non-overlapping recursive chunking (RT100-0) optimal default: F1=0.162, F2=0.564, minimal index size. When chunking is necessary: chunk_size=100-200 tokens with 10-20% overlap (20-40 tokens). Overlap beyond 20% causes steep precision loss with no recall gain. For recall-critical applications: RT100-60 (60% overlap) achieves 0.725 recall but 60% larger index, 15% precision drop. Short documents: chunking adds overhead without semantic benefit - embed complete document. For longer documents: 512-1000 token chunks with 10-20% overlap (50-200 tokens). Test on validation set to optimize for your specific use case. Use text-embedding-3-large with 1536 dimensions for production balance.

99% confidence
A

from langchain_text_splitters import RecursiveCharacterTextSplitter; text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100, length_function=len); chunks = text_splitter.split_text(text). The chunk_overlap=100 with chunk_size=1000 creates 10% overlap sliding window. Each chunk shares 100 characters with next chunk, preserving context boundaries. The 10-20% overlap is recommended best practice (2025 research). For 500-char chunks: use chunk_overlap=50-100. Overlap ensures semantic continuity - prevents splitting mid-sentence/mid-concept. Default values in langchain-text-splitters: chunk_size=1000, chunk_overlap=200 (20%). Install: pip install langchain-text-splitters==0.2.*. Alternative: from langchain.text_splitter import CharacterTextSplitter for simpler splitting. Sliding window mechanism: chunk N ends at position X, chunk N+1 starts at position X-overlap. Monitor index size - overlap increases storage proportionally.

99% confidence
A

Use 1536 dimensions for 95% of production use cases - provides excellent quality with 50% storage/cost savings vs 3072. OpenAI testing shows 1536-dim performance nearly matches 3072-dim for semantic search, RAG, and retrieval tasks. Sweet spot for efficiency: 1024 dimensions (4KB vs 12KB per vector) maintains quality while maximizing storage savings. Use 3072 dimensions only for: legal document analysis, academic research, complex semantic tasks requiring maximum precision where the cost is justified. Model comparison: text-embedding-3-large is 6.5x more expensive per token than text-embedding-3-small, with marginal quality improvement for most tasks. API call: client.embeddings.create(input=texts, model='text-embedding-3-large', dimensions=1536). Set the dimensions parameter at embed time - the model default is 3072. Production recommendation (2025): 1536 for general use, 1024 for cost-sensitive deployments, 3072 only when quality is absolutely critical. Vector databases index and search faster with smaller vectors.

99% confidence
A

import asyncio; from openai import AsyncOpenAI; client = AsyncOpenAI(api_key='key'); async def embed_batch(texts, batch_size=100): async def embed_chunk(chunk): return await client.embeddings.create(input=chunk, model='text-embedding-3-large', dimensions=1536); batches = [texts[i:i+batch_size] for i in range(0, len(texts), batch_size)]; results = await asyncio.gather(*[embed_chunk(batch) for batch in batches]); return results; embeddings = asyncio.run(embed_batch(chunks)). Batch size: 100 texts per request optimal (600 takes ~5s). For 1000 chunks: 10 concurrent batches. Use dimensions=1536 for storage efficiency. Alternative: OpenAI Batch API for 50% cost savings (24hr completion). LangChain: from langchain_openai import OpenAIEmbeddings; embeddings = OpenAIEmbeddings(); await embeddings.aembed_documents(texts, chunk_size=100). Monitor rate limits: 3,000 RPM for tier 1. Handle retries with exponential backoff. Production: combine with semaphore for rate limit control: asyncio.Semaphore(10).
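
The inline snippet above compresses several function definitions onto one line; a runnable sketch with a concurrency cap might look like this (batch size and semaphore limit are illustrative):

    # Concurrent batch embedding with a semaphore to stay under rate limits.
    import asyncio
    from openai import AsyncOpenAI

    client = AsyncOpenAI()                 # reads OPENAI_API_KEY from the environment
    semaphore = asyncio.Semaphore(10)      # at most 10 requests in flight

    async def embed_batch(batch):
        async with semaphore:
            response = await client.embeddings.create(
                input=batch, model="text-embedding-3-large", dimensions=1536
            )
            return [item.embedding for item in response.data]

    async def embed_all(texts, batch_size=100):
        batches = [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
        results = await asyncio.gather(*(embed_batch(b) for b in batches))
        return [vec for batch in results for vec in batch]  # flatten, order preserved

    embeddings = asyncio.run(embed_all(chunks))

For production, wrap the create call with retry and exponential backoff (e.g., tenacity).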

99% confidence
A

MMR formula: MMR = λ × relevance(d, query) - (1-λ) × max similarity(d, already_selected). Select top-K candidates from vector search, then iteratively pick the document maximizing the MMR score. Lambda (λ) controls the trade-off: 1=pure relevance, 0=pure diversity, 0.5=balanced (a common default). Many vector stores expose MMR natively; in LangChain: vector_store.as_retriever(search_type='mmr', search_kwargs={'k': 10, 'fetch_k': 50, 'lambda_mult': 0.5}); OpenSearch and several vector databases ship comparable MMR-style queries. A standalone implementation is sketched below. Benefits: reduces redundancy in RAG, improves LLM context diversity, better user engagement. Production: limit reranking depth (top 100 candidates) for performance.
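
A minimal MMR re-ranking sketch over pre-computed embeddings (NumPy only; assumes vectors are L2-normalized so the dot product equals cosine similarity; variable names are illustrative):

    import numpy as np

    def mmr_rerank(query_vec, cand_vecs, lambda_param=0.5, k=10):
        relevance = cand_vecs @ query_vec        # similarity of each candidate to the query
        pairwise = cand_vecs @ cand_vecs.T       # candidate-to-candidate similarity
        selected, remaining = [], list(range(len(cand_vecs)))
        while remaining and len(selected) < k:
            def mmr_score(i):
                diversity_penalty = max((pairwise[i][j] for j in selected), default=0.0)
                return lambda_param * relevance[i] - (1 - lambda_param) * diversity_penalty
            best = max(remaining, key=mmr_score)
            selected.append(best)
            remaining.remove(best)
        return selected                          # candidate indices in selection order

With lambda_param=1.0 this degenerates to plain relevance ranking; lowering it trades relevance for diversity.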

99% confidence
A

from langchain_experimental.text_splitter import SemanticChunker; from langchain_openai import OpenAIEmbeddings; text_splitter = SemanticChunker(embeddings=OpenAIEmbeddings(), breakpoint_threshold_type='percentile', breakpoint_threshold_amount=95); chunks = text_splitter.split_text(text). Algorithm: (1) Generate embeddings for sentences, (2) Calculate cosine distance between adjacent sentences, (3) Split where the distance exceeds the breakpoint threshold. Breakpoint types: 'percentile' (split where the distance exceeds the Nth percentile of all distances; default 95), 'standard_deviation', 'interquartile' (outlier detection). A lower percentile (e.g., 80) produces more, smaller chunks; the default 95 is a reasonable starting point. Benefits: preserves semantic coherence, natural topic boundaries, variable chunk sizes. Trade-offs: slower than fixed-size splitting (requires embedding each sentence), higher API costs, unpredictable chunk sizes. Use for: articles, reports, documentation with clear topics. Avoid for: code files, tables, lists. Install: pip install langchain-experimental. Production: cache sentence embeddings to reduce costs.

99% confidence
A

Use BM25Retriever with EnsembleRetriever for fusion: from langchain.retrievers import BM25Retriever, EnsembleRetriever; from langchain_community.vectorstores import FAISS; bm25 = BM25Retriever.from_documents(docs); bm25.k = 10; vector_store = FAISS.from_documents(docs, embeddings); vector_retriever = vector_store.as_retriever(search_kwargs={'k': 10}); ensemble = EnsembleRetriever(retrievers=[bm25, vector_retriever], weights=[0.5, 0.5]); results = ensemble.invoke(query). Fusion algorithm: Reciprocal Rank Fusion (RRF). Weights: [0.5, 0.5]=balanced, [0.3, 0.7]=favor semantic, [0.7, 0.3]=favor keyword. BM25 strengths: exact keyword matches, abbreviations, names, rare terms. Vector strengths: synonyms, paraphrasing, semantic intent. Combined recall: 15-25% higher than either alone. Production: use Weaviate/Qdrant native hybrid search for performance. Alternative: Elasticsearch with knn + BM25 query. Tune weights on validation set using F1 score.

99% confidence
A

Install cohere: pip install cohere. Retrieve candidates + rerank: import cohere; co = cohere.Client(api_key='key'); vector_results = vector_store.similarity_search(query, k=100); docs = [r.page_content for r in vector_results]; rerank_results = co.rerank(model='rerank-v3.5', query=query, documents=docs, top_n=10); top_docs = [vector_results[r.index] for r in rerank_results.results]. Two-stage approach: (1) fast vector search retrieves 50-100 candidates, (2) slower cross-encoder reranks them and keeps the top 10. Rerank v3.5 features: 4096 token context, 100+ languages, JSON/semi-structured data support. Cost: $2/1K searches (up to 100 docs per search). Alternative: mixedbread mxbai-rerank-large-v1 (open-source, self-hosted). Improvements: 20-30% relevance gain over vector-only retrieval. LangChain integration: from langchain_cohere import CohereRerank; from langchain.retrievers import ContextualCompressionRetriever; compressor = CohereRerank(model='rerank-v3.5', top_n=10); retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=vector_retriever).

99% confidence
A

Install RAGAS: pip install ragas. Evaluate pipeline: from ragas import evaluate; from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall; from datasets import Dataset; data = {'question': questions, 'answer': answers, 'contexts': retrieved_contexts, 'ground_truth': reference_answers}; dataset = Dataset.from_dict(data); result = evaluate(dataset, metrics=[faithfulness, answer_relevancy, context_precision, context_recall]). Metrics: Faithfulness (0-1): answer claims verifiable from context (higher=better), Answer Relevancy (0-1): answer addresses question (higher=better), Context Precision (0-1): relevant docs ranked higher (higher=better), Context Recall (0-1): ground truth covered by context (higher=better). RAGAS uses LLM as judge (GPT-4/Claude). Production targets: faithfulness >0.8, answer_relevancy >0.85, context_recall >0.9. Benefits: reference-free (no ground truth needed for faithfulness/relevancy), LangChain/LlamaIndex integration. Use for: A/B testing chunking strategies, model selection, prompt tuning.

99% confidence
A

Define metadata schema and use LLM to auto-extract filters: from langchain.chains.query_constructor.base import AttributeInfo; from langchain.retrievers.self_query.base import SelfQueryRetriever; metadata_fields = [AttributeInfo(name='department', description='Document department (engineering, sales)', type='string'), AttributeInfo(name='year', description='Document year', type='integer')]; retriever = SelfQueryRetriever.from_llm(llm=ChatOpenAI(), vectorstore=vector_store, document_contents='Technical documentation', metadata_field_info=metadata_fields); results = retriever.invoke('engineering docs from 2024'). LLM extracts: query='docs', filter={'department': 'engineering', 'year': 2024}. Supports: exact match, range (gte/lte), contains, in/not_in operators. Benefits: natural language filtering without manual parsing, reduces irrelevant results by 40-60%, faster retrieval with metadata indexes. Production: validate extracted filters, fallback to unfiltered search on parse errors. Alternative: LlamaIndex Auto-Retrieval for same functionality. Use for: multi-tenant RAG, time-based filtering, source restrictions.

99% confidence
A

from langchain_community.document_loaders import PyPDFLoader; loader = PyPDFLoader('document.pdf'); pages = loader.load(). Returns list of Document objects, one per page, with metadata={'source': 'document.pdf', 'page': 0}. For splitting: docs = loader.load_and_split(text_splitter=RecursiveCharacterTextSplitter(chunk_size=1000)). Alternative loaders: PyMuPDFLoader (faster, better layout), PDFPlumberLoader (tables/structured data), UnstructuredPDFLoader (OCR support). Multi-file: from langchain_community.document_loaders import DirectoryLoader; loader = DirectoryLoader('docs/', glob='**/*.pdf', loader_cls=PyPDFLoader); docs = loader.load(). Dependencies: pip install pypdf (PyPDFLoader), pymupdf (PyMuPDFLoader). Handle errors: try loading with PyMuPDF if PyPDF fails (encoding issues). Production: async loading for large PDFs, extract images separately, preprocess scanned PDFs with OCR. Metadata enrichment: add custom fields like upload_date, author from PDF metadata.

99% confidence
A

from langchain_community.document_loaders import Docx2txtLoader; loader = Docx2txtLoader('document.docx'); docs = loader.load(). Returns single Document with page_content=full text, metadata={'source': 'document.docx'}. Preserves: paragraphs, headings, lists. Loses: formatting, images, tables structure. For better table handling: from langchain_community.document_loaders import UnstructuredWordDocumentLoader; loader = UnstructuredWordDocumentLoader('doc.docx', mode='elements'); docs = loader.load() (extracts tables as separate elements). Dependencies: pip install docx2txt (Docx2txtLoader), unstructured[docx] (UnstructuredWordDocumentLoader). Multi-file: DirectoryLoader('docs/', glob='**/*.docx', loader_cls=Docx2txtLoader). Split after loading: text_splitter.split_documents(docs). Production: validate DOCX is not corrupted (handle exceptions), extract images with python-docx if needed, preserve headings as metadata for section-aware chunking. Use UnstructuredWordDocumentLoader for complex documents, Docx2txtLoader for simple text extraction.

99% confidence
A

Use LLM to generate hypothetical answer, then embed it: from langchain.chains import HypotheticalDocumentEmbedder; from langchain_openai import ChatOpenAI, OpenAIEmbeddings; base_embeddings = OpenAIEmbeddings(); llm = ChatOpenAI(); hyde = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, prompt_key='web_search'); embedded_query = hyde.embed_query('What is quantum computing?'). LLM generates: 'Quantum computing uses qubits and superposition for parallel computation...' (hypothetical doc), then embeds hypothetical answer instead of query. Retrieval: vector_store.similarity_search_by_vector(embedded_query). Benefits: bridges semantic gap between question style and document style, 10-15% recall improvement. Trade-offs: 2x latency (LLM call + embedding), hallucination risk in hypothesis. Prompts: web_search, code_search, conversation. Production: cache hypotheses for common queries, use fast LLM (GPT-3.5), combine with multi-query for robustness. Alternative: manual query expansion with domain-specific prompts.

99% confidence
A

Use LLM to generate query variations: from langchain.retrievers.multi_query import MultiQueryRetriever; from langchain_openai import ChatOpenAI; retriever = MultiQueryRetriever.from_llm(retriever=vector_store.as_retriever(), llm=ChatOpenAI()); results = retriever.invoke('How do I deploy Kubernetes?'). LLM generates 3-5 variations: 'Kubernetes deployment steps', 'Deploy K8s cluster guide', 'Setting up Kubernetes production'. Retrieves docs for each query, deduplicates by document ID, returns union of results. Custom prompt: retriever = MultiQueryRetriever.from_llm(retriever=base, llm=llm, prompt=PromptTemplate(template='Generate 5 search queries for: {question}')). Benefits: captures different phrasings, 15-20% higher recall, handles ambiguous queries. Trade-offs: 3-5x retrieval cost, slower (sequential queries). Production: parallel retrieval with asyncio, limit to 3 queries, cache query variations. Alternative: manual query expansion with synonyms. Use for: complex questions, diverse terminology, exploratory search. Combine with reranking to filter noise.
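
Since the production note above suggests parallel retrieval, here is a sketch using asyncio (assumes the base retriever supports the async ainvoke interface; query_variations stands for the list produced by the LLM expansion step; deduplication here is by page_content):

    import asyncio

    async def retrieve_parallel(base_retriever, queries):
        results = await asyncio.gather(*(base_retriever.ainvoke(q) for q in queries))
        seen, merged = set(), []
        for doc in (d for batch in results for d in batch):
            if doc.page_content not in seen:      # drop duplicate chunks across variations
                seen.add(doc.page_content)
                merged.append(doc)
        return merged                             # rerank downstream to trim noise

    docs = asyncio.run(retrieve_parallel(vector_store.as_retriever(), query_variations))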

99% confidence
A

Use ContextualCompressionRetriever with LLM-based compressor: from langchain.retrievers import ContextualCompressionRetriever; from langchain.retrievers.document_compressors import LLMChainExtractor; compressor = LLMChainExtractor.from_llm(ChatOpenAI(model='gpt-3.5-turbo')); retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=vector_retriever); compressed_docs = retriever.invoke(query). Compressor extracts only relevant sentences from each chunk, discards irrelevant context. Example: 1000-token chunk → 200 tokens relevant to query. Benefits: 3-5x more chunks in context window, reduced prompt costs, focused LLM attention. Alternative: EmbeddingsFilter (removes low-similarity chunks), LLMChainFilter (binary relevant/not filter). Production: use GPT-3.5 for compression (faster/cheaper), cache compressed chunks for repeated queries, monitor compression ratio (target 60-80% reduction). Combine with reranking: retrieve 50 chunks → rerank top 20 → compress to 10. Use for: long documents, limited context windows (4K-8K), cost optimization. Compression adds 200-500ms latency.

99% confidence
A

Include source metadata in prompt + parse citations: from langchain_core.prompts import ChatPromptTemplate; prompt = ChatPromptTemplate.from_template('Answer using these sources:\n{context}\n\nQuestion: {question}\nAnswer with inline citations [1], [2], [3].'); chain = prompt | llm. Format context with numbers: context = '\n'.join([f'[{i+1}] {doc.page_content} (Source: {doc.metadata["source"]})' for i, doc in enumerate(retrieved_docs)]). Parse response for citations: import re; citations = re.findall(r'\[(\d+)\]', answer); cited_docs = [retrieved_docs[int(c)-1] for c in citations]. Return answer + sources: {'answer': answer, 'sources': [{'title': doc.metadata['source'], 'page': doc.metadata.get('page'), 'url': doc.metadata.get('url')} for doc in cited_docs]}. Alternative: RetrievalQAWithSourcesChain, which auto-formats sources. Production: validate that cited numbers exist, deduplicate cited sources, include a snippet preview in each source object. Benefits: transparency, fact-checking, regulatory compliance. Use for: legal docs, academic research, customer support.
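
Pulled together as a runnable sketch (assumes retrieved_docs are LangChain Document objects and an OpenAI key is configured; out-of-range citation numbers are ignored):

    import re
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_template(
        "Answer using these sources:\n{context}\n\n"
        "Question: {question}\nCite sources inline as [1], [2], ..."
    )
    chain = prompt | ChatOpenAI() | StrOutputParser()

    def answer_with_sources(question, docs):
        context = "\n".join(
            f"[{i + 1}] {d.page_content} (Source: {d.metadata.get('source', 'unknown')})"
            for i, d in enumerate(docs)
        )
        answer = chain.invoke({"context": context, "question": question})
        cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer) if 0 < int(n) <= len(docs)}
        return {"answer": answer, "sources": [docs[n - 1].metadata for n in sorted(cited)]}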

99% confidence
A

1. Caching: from langchain_core.globals import set_llm_cache; from langchain_community.cache import RedisSemanticCache; set_llm_cache(RedisSemanticCache(redis_url='redis://localhost', embedding=embeddings, score_threshold=0.95)) - cache semantically similar queries, 60-80% hit rate. 2. Embedding batching: await embeddings.aembed_documents(chunks, chunk_size=100) instead of individual calls - roughly 3x faster, same cost. 3. Smaller embedding models: text-embedding-3-small (1536 dim default) vs 3-large (3072 dim) - 6.5x cheaper per token, ~90% of the quality. 4. Dimension reduction: pass dimensions=1536 instead of the 3072 default - 50% storage savings (1024 saves ~67%). 5. Chunk size tuning: 512 tokens vs 1000 - fewer chunks, less embedding cost. 6. Compression: ContextualCompressionRetriever reduces LLM input tokens by 60-80%. 7. Reranking only top-K: retrieve 100, rerank 20 instead of reranking all. 8. Model selection: GPT-3.5 for simple queries, GPT-4 for complex - roughly 10x cost difference. Monitor: OpenAICallbackHandler tracks token usage per request.
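
For the monitoring point, a small sketch with the callback context manager (chain, query, and context are placeholders for whatever runnable and inputs you already have):

    # Track token usage and estimated cost for OpenAI-backed calls.
    from langchain_community.callbacks import get_openai_callback

    with get_openai_callback() as cb:
        result = chain.invoke({"question": query, "context": context})
    print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens, cb.total_cost)
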
99% confidence
A

Preprocessing pipeline: (1) Clean: Remove headers/footers, page numbers, boilerplate: import re; text = re.sub(r'Page \d+', '', text). (2) Normalize whitespace: text = re.sub(r'\s+', ' ', text). (3) Fix encoding: text.encode('utf-8', errors='ignore').decode('utf-8'). (4) Extract metadata: date, author, section from headers. (5) Segment by structure: split by headings before chunking: sections = re.split(r'\n#{1,3} ', text). (6) Table handling: extract tables to structured JSON and store them separately (render them back to text with tabulate if they must go into prompts). (7) Remove noise: citation markers [1], [2], footnotes, URLs if not needed. (8) Deduplication: hash chunks, remove exact/near duplicates (Jaccard similarity >0.95). Production: use the unstructured library for complex docs: from unstructured.partition.auto import partition; elements = partition('doc.pdf'). Benefits: 20-30% retrieval improvement, cleaner LLM context. Store raw + processed versions for debugging. Use spaCy for sentence segmentation if chunking by sentences.
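
A minimal sketch of steps (1), (2), (7) and exact-duplicate removal (the regexes and hashing scheme are illustrative; near-duplicate detection via Jaccard/MinHash is left out for brevity):

    import hashlib
    import re

    def clean_text(text):
        text = re.sub(r"Page \d+( of \d+)?", "", text)   # strip page-number boilerplate
        text = re.sub(r"\[\d+\]", "", text)              # strip bracketed citation markers
        return re.sub(r"\s+", " ", text).strip()         # normalize whitespace

    def dedupe_chunks(chunks):
        seen, unique = set(), []
        for chunk in chunks:
            digest = hashlib.sha256(chunk.encode("utf-8")).hexdigest()
            if digest not in seen:                       # keep only the first occurrence
                seen.add(digest)
                unique.append(chunk)
        return unique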

99% confidence
A

Use LLM to rewrite vague queries into specific search queries: from langchain_core.prompts import ChatPromptTemplate; rewrite_prompt = ChatPromptTemplate.from_template('Rewrite this question to be more specific for search:\nOriginal: {question}\nRewritten:'); rewriter = rewrite_prompt | llm | StrOutputParser(); rewritten = rewriter.invoke({'question': query}); results = vector_store.similarity_search(rewritten). Patterns: (1) Acronym expansion: 'K8s' → 'Kubernetes', (2) Contextualization: 'how to deploy?' → 'how to deploy Kubernetes cluster in production?', (3) Keyword extraction: verbose question → key terms, (4) Disambiguation: 'Python installation' → 'Python programming language installation on Ubuntu'. LangGraph approach: def rewrite_node(state): if state['retrieval_score'] < 0.5: return {'query': rewrite(state['query'])}; graph.add_conditional_edges('retrieval', should_rewrite, {'rewrite': rewrite_node, 'continue': generate_node}). Iterative rewriting: rewrite up to 3 times until relevance threshold met. Production: cache rewrites, monitor rewrite quality, use fast LLM (GPT-3.5). Benefits: 15-25% retrieval improvement, handles typos/abbreviations.

99% confidence
A

Prompt the LLM to identify and reconcile conflicts: prompt = 'Based on these sources:\n{context}\n\nSources may contradict. If so:\n1. Note the contradiction\n2. Explain different viewpoints\n3. Indicate which is more recent/authoritative\nQuestion: {question}'. Include metadata: timestamps, source authority scores. Preprocessing: flag candidate disagreements before the LLM call by computing pairwise similarity of retrieved chunks (from sentence_transformers import util; similarities = util.cos_sim(embeddings, embeddings)) - low similarity between chunks retrieved for the same query signals divergent content worth surfacing; for genuine contradiction detection, an NLI-style cross-encoder is more reliable than raw embedding similarity. Reranking: prefer recent documents: docs.sort(key=lambda x: x.metadata.get('date', '2000-01-01'), reverse=True). Ensemble voting: retrieve with multiple strategies, take the majority answer. Production: flag low-confidence responses when contradictions are detected, surface conflicting sources to the user, use temporal ordering for evolving info (API changes, best practices). Benefits: transparency, accuracy, user trust. Use for: fast-changing domains (web dev, AI), regulatory docs, news articles.

99% confidence
A

Update strategies: (1) Incremental updates: add new docs only to the vector DB daily/hourly: vector_store.add_documents(new_docs) (embeddings are computed on add). (2) Versioned collections: Qdrant/Pinecone collections named by timestamp, switch after validation: collection_name = f'docs_{datetime.now().strftime("%Y%m%d")}'. (3) Soft deletes: mark outdated docs with metadata={'active': False} and filter on active=True at query time. (4) Reindexing: full rebuild weekly/monthly for consistency: async def reindex(): new_store = FAISS.from_documents(all_docs, embeddings); swap_vector_store(new_store). Monitor: document count, average embedding time, retrieval latency. Validation: smoke tests after updates (query 10 common questions, check recall). Cache invalidation: clear the semantic cache on updates: redis.flushdb(). Blue-green deployment: maintain 2 vector DBs, switch traffic after validation. Rollback: keep the previous version for 24 hours. Automation: GitHub Actions triggers reindex on docs/ changes. Log: update timestamp, doc count delta, errors.
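
A hedged sketch of the incremental path, keyed by content hash so re-runs skip documents already indexed (assumes the vector store accepts explicit ids on add_documents, which most LangChain stores do; indexed_ids is a set the caller persists between runs):

    import hashlib

    def doc_id(doc):
        return hashlib.sha256(doc.page_content.encode("utf-8")).hexdigest()

    def incremental_update(vector_store, new_docs, indexed_ids):
        fresh = [d for d in new_docs if doc_id(d) not in indexed_ids]
        if fresh:
            vector_store.add_documents(fresh, ids=[doc_id(d) for d in fresh])
            indexed_ids.update(doc_id(d) for d in fresh)
        return len(fresh)   # number of documents actually added this run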

99% confidence
A

Security measures: (1) PII detection: use Presidio before indexing: from presidio_analyzer import AnalyzerEngine; analyzer = AnalyzerEngine(); results = analyzer.analyze(text, language='en'); redact PII before embedding. (2) Access control: store allowed users in metadata: doc.metadata['allowed_users'] = ['user1']; filter at query time: results = vector_store.similarity_search(query, filter={'allowed_users': current_user}). (3) Encryption: TLS for vector DB connections, encryption at rest via the vector DB's or disk-level encryption (APIs vary by vendor). (4) Audit logging: log all queries: logger.info(f'User {user_id} queried: {query}'). (5) Data retention: auto-delete old docs: if doc.metadata['created'] < cutoff_date: vector_store.delete(doc.id). (6) Prompt injection defense: validate queries before retrieval, e.g., if contains_injection(query): raise ValueError (contains_injection being your own heuristic or classifier). (7) Output filtering: remove PII from responses: redacted_answer = redactor.redact(answer). Compliance: GDPR (right to deletion), HIPAA (PHI protection), SOC 2 (access logs). Production: use Azure OpenAI (data residency), implement per-user rate limiting, monitor for data leakage.
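
A minimal Presidio redaction sketch for step (1) (pip install presidio-analyzer presidio-anonymizer; the analyzer also needs a spaCy model such as en_core_web_lg):

    from presidio_analyzer import AnalyzerEngine
    from presidio_anonymizer import AnonymizerEngine

    analyzer = AnalyzerEngine()
    anonymizer = AnonymizerEngine()

    def redact_pii(text):
        findings = analyzer.analyze(text=text, language="en")                 # detect PII spans
        return anonymizer.anonymize(text=text, analyzer_results=findings).text

    safe_chunks = [redact_pii(chunk) for chunk in chunks]                     # redact before embedding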

99% confidence
A

Use a map-reduce pattern for parallel synthesis: a map step extracts the relevant points from each retrieved doc independently, and a reduce step synthesizes the per-doc outputs into one answer. LangChain's legacy MapReduceDocumentsChain implements this but expects LLMChain/ReduceDocumentsChain objects; with current LCEL-style code it is simpler to build the two stages as runnables and batch the map step (see the sketch below). Process: (1) Retrieve docs from multiple sources, (2) Map: LLM extracts relevant info from each doc in parallel, (3) Reduce: LLM synthesizes the individual outputs into the final answer. Alternative: refine pattern (sequential refinement over docs). For large doc sets: batch the map step (e.g., 5 docs per call). Production: track which sources contributed to the answer, deduplicate retrieved docs before processing, use a cheaper LLM for the map step (GPT-3.5) and a stronger one for the reduce step (GPT-4). Benefits: comprehensive answers, cross-validation, handles contradictions. Use for: research, comparative analysis, due diligence. Monitor: total tokens (can be high), latency (the map step parallelizes well).
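
A hedged LCEL sketch of the map-reduce flow (prompts and model choices are illustrative, following the cheap-map/strong-reduce suggestion above):

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    map_chain = (
        ChatPromptTemplate.from_template(
            "Extract the key points relevant to '{question}' from:\n{doc}"
        )
        | ChatOpenAI(model="gpt-3.5-turbo")      # cheap model for the map step
        | StrOutputParser()
    )
    reduce_chain = (
        ChatPromptTemplate.from_template(
            "Synthesize these notes into one answer.\nNotes:\n{summaries}\n\nQuestion: {question}"
        )
        | ChatOpenAI(model="gpt-4")              # stronger model for the reduce step
        | StrOutputParser()
    )

    def map_reduce_answer(question, docs):
        notes = map_chain.batch(                 # map step runs in parallel
            [{"question": question, "doc": d.page_content} for d in docs]
        )
        return reduce_chain.invoke({"summaries": "\n\n".join(notes), "question": question})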

99% confidence