Greg Kamradt on X: "Pressure Testing GPT-4-128K With Long Context Recall 128K tokens of context is awesome - but what's performance like? I wanted to find out so I did a “needle in a haystack” analysis Some expected (and unexpected) results Here's what I found: Findings: * GPT-4’s recall… https://t.co/nHMokmfhW5"
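The tweet describes a "needle in a haystack" test: bury one distinctive fact at a controlled depth inside a long filler context, ask the model to recall it, and sweep both depth and context length. A minimal sketch of the haystack construction is below; the function names, filler text, and needle sentence are illustrative, and the actual model call and answer grading are elided.

```python
def build_haystack(filler: str, needle: str, depth: float, target_chars: int) -> str:
    """Repeat `filler` to roughly `target_chars`, then insert `needle`
    at the given fractional depth (0.0 = start, 1.0 = end)."""
    body = (filler * (target_chars // len(filler) + 1))[:target_chars]
    pos = int(len(body) * depth)
    return body[:pos] + " " + needle + " " + body[pos:]


# Illustrative needle sentence, modeled on this style of test.
NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."


def run_trial(depth: float, context_chars: int) -> str:
    """Build one test prompt; in a real harness you would send this
    (plus a retrieval question) to the model and grade its answer."""
    haystack = build_haystack("Lorem ipsum dolor sit amet. ", NEEDLE, depth, context_chars)
    question = "What is the best thing to do in San Francisco?"
    return haystack + "\n\n" + question
```

A full sweep would loop `depth` over, say, 0.0 to 1.0 in steps and `context_chars` up toward the model's limit, recording whether each answer contains the needle fact.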
