
This is what the future of "AI" has to look like: fully traceable inference steps that can be inspected and adjusted if needed.
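To make that concrete, here's a rough sketch (Python, all names hypothetical, not any real system's API) of what "inspectable and adjustable" could mean in practice: each inference step records its inputs and output, and a human can override a step, invalidating everything downstream so it gets recomputed from the correction.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        name: str            # e.g. "retrieve", "reason", "answer"
        inputs: dict         # what the model saw at this step
        output: str          # what it produced
        overridden: bool = False

    @dataclass
    class Trace:
        steps: list[Step] = field(default_factory=list)

        def record(self, name: str, inputs: dict, output: str) -> str:
            # Log a step as the pipeline runs, then pass the output along.
            self.steps.append(Step(name, inputs, output))
            return output

        def override(self, index: int, new_output: str) -> None:
            # A human corrects one step; drop everything after it
            # so later steps are re-run from the corrected state.
            self.steps[index].output = new_output
            self.steps[index].overridden = True
            self.steps = self.steps[: index + 1]

The key property is that an override isn't just an annotation on a log: the trace is the pipeline state, so editing a step actually changes what happens next.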

Without this, I don't see how we (the general population) can maintain any control over - or even understanding of - these increasingly large and opaque LLM-based long-inference "AI" systems.

Without transparency, Big Tech, autocrats and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.



You've answered your own question as to why many people will want this approach gone entirely.


I really like answers like yours: they're clever and, in my opinion, probably a bit true as well.

Still, I think there's a lot the public can do, and raising awareness about these issues would be a great start.


I agree transparency is great. But making the response inspectable and adjustable is a huge UI/UX challenge. It's good to see people take a stab at it. I hope there's a lot more iteration in this area, because there's still a long way to go.


If I give you tens of billions of dollars, like, wired to your personal bank account, do you think you could figure it out given a decade or two?


Yes! I think that would do it. But is anyone out there committing tens of billions of dollars to traceable AI?


At the very least, we need to know what training data goes into each AI model. Maybe there needs to be a third-party company that does audits and provides transparency reports, so that even with proprietary models there are some checks and balances.



