You can force an LLM to only output valid answers
YouTube just open-sourced a project called STATIC that solves a problem most people don’t know exists: LLMs can say anything, but sometimes you need them to pick only from a specific list.

The Problem

When an LLM generates text, it picks one token (a word or word fragment) at a time from a vocabulary of roughly 32,000 or more options. That’s great for conversation, but terrible when you need a specific, valid output: a real product ID, a medical code, or a video recommendation from a catalog of millions. ...
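The core trick behind this kind of constrained decoding can be sketched in a few lines: before sampling, mask the scores (logits) of every token that isn’t on your allowed list, so only valid options can ever be picked. This is a minimal toy illustration, not the STATIC codebase; the vocabulary, function name, and SKU values are all hypothetical.

```python
import math

# Toy vocabulary standing in for an LLM's ~32,000-token vocabulary
# (hypothetical example, not the actual STATIC implementation).
vocab = ["cat", "dog", "SKU-123", "SKU-456", "banana"]

def constrained_pick(logits, allowed):
    """Mask every token outside `allowed` to -inf, then softmax + argmax.

    No matter what scores the model assigned, a disallowed token ends up
    with probability 0, so the output is always from the allowed list.
    """
    masked = [score if vocab[i] in allowed else float("-inf")
              for i, score in enumerate(logits)]
    # Softmax over the masked logits (disallowed tokens get probability 0).
    peak = max(masked)
    exps = [math.exp(score - peak) for score in masked]
    total = sum(exps)
    probs = [e / total for e in exps]
    return vocab[max(range(len(probs)), key=probs.__getitem__)]

# The raw model strongly prefers "banana" (logit 3.0) ...
logits = [0.1, 0.2, -1.0, 0.5, 3.0]
# ... but we only accept valid product IDs, so it must pick one.
print(constrained_pick(logits, {"SKU-123", "SKU-456"}))  # -> SKU-456
```

In a real system the same masking is applied at every decoding step (for example via a logits processor hook in the inference library), which is how a model can be forced to spell out a multi-token ID from a catalog of millions.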