AI & Data Innovation
Talk
Session Code: Sess-85
Day 3
11:00 - 11:30 EST
Ever wondered why AI confidently gives wrong answers? Join me to discover how embedding models—the backbone of modern AI systems—fundamentally misunderstand human language. I'll reveal surprising test results where models score "take medication" and "don't take medication" as nearly identical, confuse 5% with 95%, and can't tell the difference between "before" and "after" in critical instructions. These aren't minor glitches—they're design limitations causing real-world failures in healthcare, finance, and legal systems. Learn the patterns behind these failures and practical techniques to make your AI systems more reliable. If you're building with embeddings, you can't afford to miss this.
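The negation failure described above is easy to see in miniature. The sketch below is a toy illustration, not the actual embedding models tested in the talk: it uses a simple bag-of-words vector and cosine similarity to show why representations dominated by surface-token overlap barely register a "don't".

```python
from collections import Counter
import math

def bow_vector(text):
    # Toy bag-of-words "embedding": raw token counts. Real embedding
    # models are learned, but they can inherit a similar bias toward
    # surface overlap between sentences.
    return Counter(text.lower().split())

def cosine(a, b):
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

take = bow_vector("take the medication before meals")
dont = bow_vector("don't take the medication before meals")
unrelated = bow_vector("the weather is pleasant today")

# The negated sentence shares almost every token with the original,
# so the score stays high; the unrelated sentence scores low.
print(round(cosine(take, dont), 2))       # → 0.91
print(round(cosine(take, unrelated), 2))  # → 0.2
```

The two safety-critical opposites score above 0.9 while a genuinely unrelated sentence scores around 0.2, which is the shape of the failure the talk examines in real models.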