
Finally! I’ve been getting the shakes waiting for the next OpenAI release.

16k context with 3.5-turbo is huge. It’ll make all those dime-a-dozen document-driven assistants a lot more useful.

I’m curious to see if people will figure out ways to use functions to get more reliable structured JSON data out of GPT without tons of examples, freeing up a lot more context room to play with.
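The idea would be to replace few-shot examples with a JSON Schema passed via the new function-calling parameters. A minimal sketch of that pattern, assuming the June 2023 `functions`/`function_call` request shape; the schema, function name, and mock payload here are illustrative, not from the thread:

```python
import json

# A JSON Schema describing the structure we want back, in place of
# burning context on few-shot examples. Hypothetical example schema.
extract_fn = {
    "name": "extract_contact",
    "description": "Extract contact details from free-form text.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "email": {"type": "string"},
        },
        "required": ["name", "email"],
    },
}

def parse_function_call(response: dict) -> dict:
    """Pull the structured arguments out of a chat-completion response.

    The arguments arrive as a JSON string and can still be malformed,
    so json.loads may raise; callers should handle that.
    """
    message = response["choices"][0]["message"]
    return json.loads(message["function_call"]["arguments"])

# A mock response in the shape the API returns when the model
# chooses to call the declared function:
mock_response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "function_call": {
                "name": "extract_contact",
                "arguments": '{"name": "Ada", "email": "ada@example.com"}',
            },
        }
    }]
}

print(parse_function_call(mock_response))
```

The real request would pass `functions=[extract_fn]` and `function_call={"name": "extract_contact"}` to the chat-completion endpoint, forcing the model to answer through the schema rather than free text.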



This is awesome. We were finding a lot of frustration with the 4k context being far too short to properly chunk documents.

In a worst case scenario, you have to assume that output is going to be the same length as input. That means useful context is actually half of the total context.

Add in a bit of fixed overhead for chunk overlap (maybe ~500 tokens), and suddenly you're looking at only 1k to 1.5k tokens being reliably available for input. A 16k context bumps that number up to 7.5k available for input. That's massive.
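The budget arithmetic above can be sketched as follows, using the numbers from the comment; the function name and the 500-token overlap default are my own illustration:

```python
def usable_input_tokens(context_window: int, overlap: int = 500) -> int:
    """Worst case: the output is as long as the input, so the input
    gets at most half the window; a fixed chunk-overlap reserve eats
    a further slice of what remains."""
    return context_window // 2 - overlap

print(usable_input_tokens(4_000))   # 1500 — the "1k to 1.5k" worst case
print(usable_input_tokens(16_000))  # 7500 — the 7.5k figure above
```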


Can you provide some examples of what document driven assistants you're referring to?


I'm assuming something like humata.ai



