Tues Sep 2, 2025 5:29pm PST
Show HN: Papr – Predictive Memory for AI (Ranked #1 on Stanford's Benchmark)
Most AI systems rely on vector search. It finds similar fragments, but not the right context. It can tell you two passages are related, but not how they connect or why they matter together.
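To make the critique concrete, here is a minimal sketch of the vector-search pattern being described (the embedding vectors and function names are illustrative, not Papr's API): documents are ranked purely by embedding similarity, with no notion of how fragments connect.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=2):
    # Rank fragments by similarity alone. Two passages can score
    # high because they share wording, yet say nothing about how
    # they relate or why they matter together for this query.
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 2-d embeddings: docs 0 and 2 point roughly the same way
# as the query, so they win regardless of actual relevance.
docs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(top_k([1.0, 0.0], docs))  # → [0, 2]
```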

So everyone ends up “engineering context” — manually deciding what to stuff into prompts using RAG pipelines, agentic search, or trees of thought. These tricks work for small demos, but not at scale. That’s why MIT found that 95% of AI pilots fail, and why you keep seeing threads about vector search breaking down.

We built a different approach: a retrieval model that predicts the right context for every turn in a conversation. On Stanford’s STaRK benchmark it ranks #1. It’s also fast enough for voice chat, where even 100ms of lag kills the experience.

We also introduced a new metric: retrieval loss. Like language model loss, but for retrieval. Traditional systems get worse as your dataset grows. With Papr, retrieval loss drops as your dataset grows — meaning more knowledge makes your system smarter, not dumber.
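One way to formalize a "retrieval loss" (this is a generic cross-entropy sketch under our own assumptions, not Papr's published definition): treat retrieval as a classification over candidate passages and take the negative log-probability of retrieving the correct one, exactly as language-model loss does for the next token.

```python
import math

def retrieval_loss(scores, correct_idx):
    """Cross-entropy loss for a single retrieval.

    scores: raw relevance scores for each candidate passage.
    correct_idx: index of the ground-truth passage.
    """
    # Softmax over candidates, then negative log-probability of
    # the correct passage -- lower is better, just like LM loss.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return -math.log(exps[correct_idx] / total)

# A retriever that ranks the right passage highest gets low loss;
# one that ranks a distractor first gets penalized heavily.
good = retrieval_loss([5.0, 1.0, 0.5], correct_idx=0)  # ~0.03
bad = retrieval_loss([1.0, 5.0, 0.5], correct_idx=0)   # ~4.03
```

Averaged over queries, this gives a single scalar you can track as the corpus grows; the post's claim is that Papr's curve goes down with dataset size rather than up.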

Our memory APIs are available to try, with a generous free tier. We'd love feedback, questions, and brutal critique. Full details here: https://substack.com/home/post/p-172573217
