
There's definitely a lot of room for lawyers and doctors to up their game. People cannot keep up with everything that's published; there's simply too much of it. Doctors read only a fraction of what comes out, and lawyers are expected to stay on top of orders of magnitude more information than is humanly possible.

LLMs allow them to take some shortcuts here. Even something like Perplexity, which can help you dig out relevant source material, is extremely helpful. You still have to cross-check what it digs out.

The mistake people make when evaluating LLMs is confusing knowledge with reasoning. Perplexity is useful because it can use reasoning to screen sources for the relevant knowledge, not because it has perfect recollection of what's in those sources. There's a subtle difference. A model is much better at summarizing, and far less likely to hallucinate, when it bases its answers on the results of a search than when it answers from memory alone, like ChatGPT used to do (they've gotten better at this too).
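
Roughly, the pattern is retrieve-then-summarize. This is just a sketch of the idea, not how Perplexity actually works internally; the search and ask_llm functions below are hypothetical stand-ins:

    # Sketch of grounding an answer in retrieved sources instead of model memory.
    # `search` and `ask_llm` are made-up placeholders, not any real product's API.

    def search(query: str) -> list[dict]:
        """Stand-in for a web/search index lookup; returns candidate sources."""
        return [
            {"url": "https://example.org/guideline", "text": "Relevant excerpt..."},
            {"url": "https://example.org/case-law", "text": "Another excerpt..."},
        ]

    def ask_llm(prompt: str) -> str:
        """Stand-in for a call to some chat-completion endpoint."""
        return "Summary with citations [1][2]"

    def grounded_answer(question: str) -> str:
        # 1. Retrieve candidate sources rather than relying on recall.
        sources = search(question)
        # 2. Ask the model to reason over those sources only and cite them,
        #    which is what makes the output easy to cross-check.
        context = "\n\n".join(
            f"[{i + 1}] {s['url']}\n{s['text']}" for i, s in enumerate(sources)
        )
        prompt = (
            "Answer using ONLY the sources below and cite them by number. "
            "Say 'not found in sources' if they don't cover the question.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )
        return ask_llm(prompt)

    print(grounded_answer("What is the current first-line treatment guideline?"))

The cross-checking part is the point: because the answer cites the retrieved sources, a lawyer or doctor can go verify the excerpt instead of trusting the model's memory.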

For lawyers and medical professionals this means they have the best knowledge easily accessible without having to read and memorize all of it. I know some lawyer types who are really good at Scrabble, remembering trivia, etc. That's a side effect of the type of work they do, which is mostly reading and scanning through massive amounts of text so they can recall enough to know where to look. Doctors have to do similar things with medical texts.


