Technical writer by day, engineer by night, and father everywhere in between, Manny wears many (figurative) hats. He is passionate about intuitive and scalable developer experiences, and likes diving into the deep end as the 0th user.
RAGs to Riches: How Our Content Affects Retrieval Augmented Generation
LLMs have a knowledge problem: their training data is often out of date, and the information we want them to draw on frequently isn't in that data in the first place. Retrieval Augmented Generation (RAG) is an approach that dynamically supplies an LLM with the information we need it to have, right when we need it to have it. Want an LLM to know about your products, procedures, or users? RAG is the solution.
But how do docs affect RAG? A document's content, its formatting, and how it's processed all determine whether the LLM responds with knowledge and (reasonable) confidence or with a response full of hallucinations. Come learn about RAG, how technical communicators can make use of it, how our docs affect RAG performance, and what we can do to make sure our users (and our LLMs) get the content they need when they need it.
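As a rough illustration of the retrieve-then-generate loop the abstract describes (a simplified sketch, not part of the session materials), the core idea fits in a few lines of Python. The sample documents, the keyword-overlap scorer, and the prompt template below are all toy stand-ins: real systems typically use embedding models and vector stores for retrieval and send the assembled prompt to an actual LLM.

```python
# Minimal retrieve-then-generate sketch. Production RAG swaps the
# keyword-overlap scorer for embeddings + a vector store and passes the
# assembled prompt to an LLM; everything here is a simplified stand-in.

# A tiny "doc set" -- in practice these would be chunks of your docs.
DOCS = [
    "To reset your password, open Settings > Account and choose Reset.",
    "Invoices are emailed on the first business day of each month.",
    "API keys can be rotated from the Developer dashboard.",
]

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the doc (toy relevance score)."""
    doc_words = set(doc.lower().split())
    return sum(word in doc_words for word in query.lower().split())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant chunks for the query."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context for the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I reset my password?"))
```

The point for writers: whatever lands in the LLM's context comes straight from the docs, so the quality of the answer is bounded by the quality of the content retrieved.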
In this session, attendees will learn:
- How Retrieval Augmented Generation (RAG) works and why it’s valuable for enhancing LLM applications
- The different kinds of RAG, what differentiates them, and when to use them
- The critical relationship between documentation quality and RAG performance
- Best practices for structuring and formatting technical content to optimize RAG retrieval (see the chunking sketch after this list)
- How to help your developers improve their RAG tooling and own part of how your content is used
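To make the content-structure point above concrete, here is a hedged sketch of one common approach: heading-based chunking, which splits a Markdown doc at its headings so each retrievable chunk carries its own topical context. The splitting rule and chunk format are illustrative assumptions, not a prescription from the session.

```python
# Illustrative heading-based chunking: split a Markdown document at H2/H3
# headings so each chunk stays self-contained for retrieval. The splitting
# rule is an assumption for this sketch, not the only valid strategy.
import re

def chunk_by_headings(markdown: str) -> list[dict]:
    chunks, heading, lines = [], "Introduction", []
    for line in markdown.splitlines():
        match = re.match(r"^(#{2,3})\s+(.*)", line)
        if match:
            if lines:
                chunks.append({"heading": heading, "text": "\n".join(lines).strip()})
            heading, lines = match.group(2), []
        else:
            lines.append(line)
    if lines:
        chunks.append({"heading": heading, "text": "\n".join(lines).strip()})
    return chunks

doc = """## Resetting your password
Open Settings > Account and choose Reset.

## Rotating API keys
Use the Developer dashboard to rotate keys.
"""
for chunk in chunk_by_headings(doc):
    print(chunk["heading"], "->", chunk["text"])
```

Well-structured headings double as retrieval metadata here, which is one reason the way we write and format docs shows up directly in RAG performance.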