Hacker News

I'll add my 2c because I'm getting a little annoyed that folks who only have familiarity with Dropbox's consumer offering seem to think they have any idea what they're talking about.

In an enterprise, it can be extremely difficult to ensure your permissions are what you want them to be: that people can share things easily with the right groups, but also that sensitive data is not inadvertently exposed. Dropbox in particular excels at sharing documents with others outside your company, but that is also where there is obviously the most risk.

Currently, I find Dropbox's enterprise permissions management tools pretty difficult to use. There are loads of options and it's too easy to get something "wrong" if you inadvertently miss checking the right checkbox. It's not hard for me to see how AI tools could help improve this situation, and especially to provide additional capabilities in the DLP (data loss prevention) space that would make it easier to detect misconfigured access.



> There are loads of options and it's too easy to get something "wrong" if you inadvertently miss checking the right checkbox.

This seems the exact opposite of what the current crop of AI is good at.


> This seems the exact opposite of what the current crop of AI is good at.

Yeah, the last thing I want when trying to share a file with people outside of my organization is some opaque kafkaesque chatgpt-wannabe model telling me "I'm sorry Dave, I'm afraid I can't do that".


The AI doesn't need to be the end-all-be-all of defining permissions. But it's not hard to imagine a couple of areas where AI could help:

1. Letting users enter desired permission settings in natural language, with the AI recommending checkbox settings and, importantly, explaining those settings.

2. Serving as a monitoring/alerting system for DLP. Most DLP systems already use some sort of machine learning to identify sensitive docs.

3. Easily running "test scenarios" to show to admins who can and can't get access to particular docs.

There is a huge chasm between "AI owns all my permission settings" and "AI can make it easier and more robust for me to understand what my permission settings should be."
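To make point 3 concrete, here's a minimal sketch of what a "test scenario" checker could look like. Everything here is hypothetical and illustrative (the `Share` type, the audience names, and `can_access` are made up for this example, not a real Dropbox API):

```python
# Hypothetical sketch of point 3: let an admin run "test scenarios" showing
# who can and can't open a given doc under its current sharing setting.
# All names and audience levels here are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Share:
    doc: str
    audience: str   # "team", "company", or "anyone_with_link"

def can_access(share: Share, persona: str) -> bool:
    """persona is one of 'team_member', 'employee', or 'external'."""
    # Wider audiences admit everything a narrower audience admits.
    breadth = {"team": 0, "company": 1, "anyone_with_link": 2}
    needed = {"team_member": 0, "employee": 1, "external": 2}
    return breadth[share.audience] >= needed[persona]

share = Share("q3-financials.xlsx", audience="company")
for persona in ("team_member", "employee", "external"):
    print(persona, "->", "can open" if can_access(share, persona) else "no access")
```

The point isn't the toy logic; it's that the answer is computed from the actual settings and shown to the admin before anything is shared, which is exactly the kind of deterministic check an AI front-end could sit on top of.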


There is a reason permissions are not defined in natural language: language is imprecise, and that is why security is so hard to get right.


"I'm 99% certain that you're a poodle, and poodles are not allowed to share that document. Is there anything else I can help you with today?"


Needing strong AI to manage permissions on a file sharing program sounds like something out of Hitchhiker’s Guide to the Galaxy. Like the sentient AI stuck operating an elevator.


Have you used Box? Dropbox offers editor and viewer levels, with some additional options to add a password, set an expiration, or allow/disallow downloads, while Box is over here with 7 different sharing levels. I've managed Box environments and I've managed Dropbox ones, and the data leakage coming out of the Box environments was not only more frequent but much larger in scale, because people kept picking the wrong level of access.


How does AI tooling help with that? "The AI tool said I could grant this user access this way" is not something that would pass compliance.


I wonder what the compliance process would think of the telemetry that is sending stuff back to Dropbox and can't be turned off.

https://news.ycombinator.com/item?id=35724939#35726399


> Dropbox in particular excels at sharing documents with others outside your company, but that is also where there is obviously the most risk.

> Currently, I find Dropbox's enterprise permissions management tools pretty difficult to use.

These two sentences were one after the other. So which way is it?


Dropbox can be the best solution and still hard to use at the same time, if the other solutions are less powerful or more confusing.


You are exactly right. And to address the concerns about errors made by the machine: in my organization we are not looking to have the machine automatically make decisions on things like access. Ideally it would warn us when we've likely done, or are about to do, something wrong. At minimum, an auditing tool.
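A warn-only audit pass like that could be very simple at its core. This is a hypothetical sketch (the labels, the share records, and the `audit` function are all invented for illustration): the tool never changes a permission, it only flags combinations that look risky.

```python
# Hypothetical audit sketch: warn when a likely-sensitive doc is shared
# externally. The tool only reports; it never modifies permissions.
SENSITIVE_LABELS = {"financial", "pii", "legal"}

def audit(shares):
    """shares: list of dicts like {"doc": str, "labels": set, "external": bool}.
    Yields a warning string for each risky share."""
    for s in shares:
        if s["external"] and s["labels"] & SENSITIVE_LABELS:
            yield f'{s["doc"]}: sensitive doc shared outside the org'

warnings = list(audit([
    {"doc": "payroll.csv", "labels": {"pii"}, "external": True},
    {"doc": "logo.png", "labels": set(), "external": True},
]))
print(warnings)   # ['payroll.csv: sensitive doc shared outside the org']
```

The interesting (and ML-assisted) part in a real system would be producing the `labels` set; the warning logic itself can stay deterministic and auditable.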


There are many good use cases for AI; however, permissions is not one of them. The current generation of AI makes things up and would likely give a member of the cleaning staff super admin to allow them to clean the data.


People aren't arguing that AI alone should make all the permissions decisions. It's not a big leap from DLP solutions, which already make copious use of machine learning, to permissions auditing, monitoring, and recommendations.



