Tag: LLM Evaluations
-
What’s Wrong in my RAG Pipeline?
RAG is a technique that grounds LLM responses in a specific knowledge base at inference time, rather than retraining the model. Learn how RAG works and analyze the major failure cases in RAG pipelines.
-
Manage LLM Applications with UpTrain + Langfuse
Learn to use evaluation and observability stats to manage LLM applications. Track your applications' latency, cost, and quality.
-
Decoding Perplexity and its significance in LLMs
Explore how perplexity measures an LLM's language comprehension. Dive into its use cases, calculation, and impact on model performance.
-
Revealing the Hidden Truths: The Negative Impacts of Hallucinations in Large Language Models (LLMs)
Learn about the adverse effects of hallucinations in industries like Education, Fintech, and Sales, along with techniques to detect them.
-
Dealing with Hallucinations in LLMs: A Deep Dive
Learn what hallucinations are, how to detect them, and how to mitigate them via techniques like RAG, prompting, and chain-of-verification.