I wonder what part of these failed sales is due to GDPR requirements in the enterprise IT industry. I have my own European view, and it seems our governments are treating the matter very seriously. How do you ensure an AI agent won't leak anything? It has already happened that an agent wiped an entire database or cleared a disk and was later very "sorry" about it. Is the risk worth it?
Having worked with this stuff a lot, privacy isn't the biggest problem (though it is a problem). This shit just doesn't work. Wide-eyed investors might be willing to overlook the 20% failure rates, but ordinary people won't, especially when a single mistake can cost you millions of dollars. In most places I've seen AI shoved in - especially Copilot - it takes more time to read and dismiss its crappy suggestions than it does to just do the work without it. But the really insidious case is when you don't realize it is making shit up and you act on it. If you are lucky, you embarrass yourself in front of a customer. If you are unlucky, you unintentionally wipe out the production database. That's a much more overt and immediate concern than leaking some PII.