And I think your example of the plane is where this technique is applicable. In a cinema, for example, imagine yourself watching a horror movie and hearing the evil demon sneak up behind your back. Shivers. In a concert setting, however, the source of the sound is (usually) static, while I might move my head slightly from time to time and pick up the sound from different angles. If that could be simulated using headphones, that would be pretty cool! But again, I fail to see what purpose it would serve in a normal stereo setting where the sound is produced the same way, from stationary speakers. Then of course there are other factors like the venue layout, design and materials, as well as speaker placement, the mixing of the sound and even the crowd itself, all of which contribute to the sound and the "feeling" of listening to the music.
> I might move my head slightly from time to time and pick up the sound from different angles. If that could be simulated using headphones, that would be pretty cool!
This could surely be done (and probably has been done) using a VR headset, or even just IR lights and cameras à la TrackIR. If latency were low enough, you could process the sound to model the changes caused by head position/orientation.
Of course, but that doesn't equal the sound "moving around"; the violin section will always remain in the same place, for example. The sound moving around is the feeling I get when listening to one of those 8D Audio videos on YouTube.
I have never understood what problem self-driving cars are trying to solve. I can, on the other hand, see that it could be more desirable to have self-flying planes. But to even think that self-flying equals issue-free, non-crashable planes is beyond naive. Personally I believe the best approach is to let humans do what they do best, and let computers do what they are good at, without trying to force one into the other's shoes. (Computers have shoes, don't they?)
They should have been fined more than a measly €20k in my opinion. As a developer I'm deeply ashamed that people are still storing user passwords in plain text. There is no reason behind this behaviour whatsoever, other than pure laziness ...
When the breach was announced, they revealed that they did not store the passwords themselves in plain text, but had a second store that did, so they could prevent users from posting their passwords in chats. [0]
Still stupid, but at least they had good intentions, just bad execution.
Huh, that's actually... a decent-sounding intention.
Is there any way to do that in a secure manner? A hash says nothing about the length of a password (and you certainly don't want to store the length, which would make the attack space much smaller)... so if passwords are anywhere from, say, 8 to 64 characters, then for each chat message you'd need to hash every possible consecutive substring for every possible window size separately. If the hash is even remotely computationally intensive, that could turn into too much work -- especially if it's being done on the server instead of the client (in order not to expose the hash and salt).
Is this just something it's not possible to protect against?
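For a rough sense of the cost, here's a back-of-the-envelope sketch of that brute-force check (purely hypothetical code, assuming a bcrypt-style slow hash; this is not anything the company actually ran):

    import bcrypt  # assumption: a bcrypt-style slow hash is used for passwords

    def message_contains_password(message: str, stored_hash: bytes,
                                  min_len: int = 8, max_len: int = 64) -> bool:
        """Brute-force check: hash every consecutive substring of every
        allowed window size and compare it against the stored password hash."""
        for size in range(min_len, max_len + 1):
            for start in range(len(message) - size + 1):
                window = message[start:start + size].encode()
                if bcrypt.checkpw(window, stored_hash):
                    return True
        # For a ~200-character message this is on the order of 10,000 slow
        # hash comparisons per message, which is exactly the cost problem above.
        return False

So if the password hash is a deliberately slow one, doing this per window really does look impractical on the server side.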
But is storing a plaintext password, even on the client, good practice? E.g. in a browser that uses a cookie with something like a session ID to keep you logged in... is storing a plaintext password in localStorage considered a valid security practice? I would have assumed not, although it's certainly not close to as bad as storing it on the server...
If you store a plaintext password on the client, you'd be one XSS attack away from potentially having a lot of passwords stolen. Best practice is to have the password in plaintext for as little time as possible. (There's some research on not transmitting the password at all, but I don't think anything there is as widely accepted as bcrypt is for password hashing: https://en.wikipedia.org/wiki/Zero-knowledge_password_proof)
You are already one XSS attack away from having your session stolen, your credentials stolen, or any number of other bad things. Passwords on the client are fine.
Anything that makes computation less intensive for you also makes it less intensive for a potential malefactor - it's just an inherent tradeoff.
Rather than scanning for the password being contained somewhere in the message, something more reasonable to try would be to check whether the whole message is the password, since you can just plug it into the normal password hasher and run just one slower hash op.
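A minimal sketch of that idea, assuming bcrypt is the "normal password hasher" (the names here are illustrative, not anyone's actual code):

    import bcrypt  # assumption: bcrypt as the normal password hasher

    def message_is_password(message: str, stored_hash: bytes) -> bool:
        # One slow hash comparison per message instead of thousands:
        # only flag the message if it is, in its entirety, the user's password.
        return bcrypt.checkpw(message.encode(), stored_hash)

It obviously misses a password embedded inside a longer message, but it keeps the cost at one hash per message.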
So you think that someone who's capable of building a system like this has somehow missed the fact that you should store passwords safely? Nah, I don't buy that.
I do. You do an online tutorial, and together with bits of code from Stack Overflow you can tie APIs together, including payment-processing APIs, relatively easily into what you want. You don't read the documentation; you just google until you get the code you want from SO, so any warnings in there are lost. Your boss is on your back about it and you're on your 4th straight 20-hour day, so you just do whatever it takes to get the result your boss requires.
I find this kind of error the most unsettling; it implies the people writing the authentication system don't trust the underlying ORM/database sanitisation layer (if there even is one!) enough, so to 'play it safe' they manually filter out 'suspicious characters'.
It makes you wonder: if there's a team that isn't as rigorous elsewhere (or a team on which pressure has been applied to accidentally leave in some such 'mistakes'), what kind of SQL injection possibilities exist?
I was once doing some SEO work for a client and noticed something similar: any URL that contained an apostrophe would return a completely blank page. I asked my manager if I could spend a little time investigating it as a security vulnerability (which would have been out of scope for the project), and within 45 minutes I had a working SQL injection proof of concept that returned credit card details from their order table.
1. They tried to prevent SQL injection attacks by stopping the page from loading instead of properly escaping data.
2. They failed to actually check whether the parameters contained the forbidden characters they were looking for (they checked the raw URL instead of the parameters after they were parsed, so all it took was URL-encoding an apostrophe; see the sketch after this list).
3. They stored credit card details that they should never have recorded in the first place (including the CVV code) rather than just storing the transaction ID from Authorize.net.
4. They never bothered to archive old order data even though their ecommerce site didn't even have a customer login and they had absolutely no use for old orders after they were complete.
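A minimal sketch of the failure mode in points 1 and 2, plus the standard fix (hypothetical table and column names, not the client's actual code):

    import sqlite3
    from urllib.parse import urlparse, parse_qs

    def naive_filter(url: str) -> bool:
        # The broken approach: scan the raw URL for an apostrophe.
        # "?name=O%27Brien" passes this check, yet decodes to "O'Brien".
        return "'" not in url

    def lookup_order_unsafe(cur: sqlite3.Cursor, url: str):
        # After the filter "passes", the decoded value is spliced into the SQL string.
        name = parse_qs(urlparse(url).query)["name"][0]  # percent-decoding happens here
        return cur.execute(f"SELECT * FROM orders WHERE customer_name = '{name}'")  # injectable

    def lookup_order_safe(cur: sqlite3.Cursor, url: str):
        # The actual fix: let the driver bind the value; no character filtering needed.
        name = parse_qs(urlparse(url).query)["name"][0]
        return cur.execute("SELECT * FROM orders WHERE customer_name = ?", (name,))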
If you spot that kind of incompetence on something inconsequential from a small team, dollars to donuts they're making the same kind of mistakes with far more serious code. And due to the Dunning-Kruger effect, they're probably too incompetent to realize that they shouldn't be touching anything related to, e.g., payment processing or authentication.
"... much of what is done is faux-agile, disregarding agile's values and principles." This. It feels so easy to get caught up in methods and tools, to not make a proper commitment (because it is tough to change the way you work) and to land in a sort of semi-agile state that no one likes.
I love the idea of having your actual users test the software, rather than a dedicated tester. A dedicated, employed tester can never use the product like a normal user would. They have too much knowledge of how the system works and what it can and can't handle. In order to truly observe something, we must remove ourselves from the situation as completely as possible. A normal user, however, doesn't know that it's not possible to achieve task X within the system, and will try; and if enough users try to perform that task, it might be an indication that it should be implemented, or at least looked over to see why they are trying to perform it.

This is just an example, but I find the role of a tester pretty confusing and awkward. As a developer I build and test the code I'm writing. If something doesn't work within that code, I want to know as quickly as possible. So why should I hand over my code to someone else? And also, if a user doesn't find the bug, is it really a bug?
As a developer the simple fact is that if I knew something could happen I would have written the code to support it. Testers are about looking at the code from a different angle. It’s not surprising quality has dropped dramatically since Microsoft embraced the idea that devs should also be testers.
Customers are the ultimate testers. But using them as guinea pigs has a cost to your reputation.
I don't think it's baffling at all. Facebook is the largest social media company, and what do they do to try to stop all of the propaganda, troll factories and hateful things going on there? Naught. They know that they are the largest, and therefore don't have to do anything as long as their position stays the same, which it seems to be doing. Yes, there are people leaving the platform, but where do they go? There is no real contender, therefore Facebook will remain number 1.
On my city's startup Facebook group, I see lots of posts offering fake Facebook & Amazon review services. When I report such posts, the response I get is that the said posts don't violate Facebook's Community Standards guidelines.
I don't think it's completely the same thing though, this propaganda and trolling could arguably be a positive for Facebook by driving engagement. Of course there's a balance to maintain but it's not necessarily all bad. Socially and ethically it might be bad but bean-counting-wise it might not be that terrible. Facebook doesn't really care if people trust what they see on the platform as long as they generate pageviews and get ad impressions.
On the other hand I don't see how fake reviews and bootleg items do anything but hurt Amazon by making people distrust the platform. You don't want to add friction to the buying process by making people triple check that they're not getting duped.
And it's not like it's a new problem either; lack of trust was a huge issue in the early days of e-commerce. I'm sure Amazon doesn't want to return to those days, when you felt like you were swimming in a sea of scams.
The problem is that before, you bought stuff largely from Amazon itself.
Whereas now it's more akin to eBay, which always suffered from scam problems (and responded by giving buyers far more power, which then resulted in buyers scamming sellers instead of the reverse).
I wish it could return to the days when you just bought stuff from Amazon - or that they'd at least make it very easy to search only for their own products.
I have friends who work on teams in Facebook who use ML to detect and filter out “undesirable content”. It’s a difficult problem but they’re a pretty intense team (workaholics) so I wouldn’t say they spend 0 effort.
The surname is wrong in the title; it should be Klint, not Klimt. It's an interesting read, however. Although I'm no art expert by any means, I've always liked abstract paintings, as they have a sort of liberating effect on my mind, where the imagination can in some cases run completely its own course.
And Klimt of course was a very different artist, not particularly abstract unless you count the decorative elements, and very, very male.
Which makes the slip in the title here a bit poignant, as a lot of people think it was gender bias that kept af Klint out of the art canon for so long.
Define "technical". But when it comes to programming I always enjoy Sandi Metz's talks. I can highly recomend this talk she made at the RailsConf in 2014, about taking an ugly beast of code and turning it to something more digestable and beautiful https://www.youtube.com/watch?v=8bZh5LMaSmE
Totally agree, Sandi is awesome; all of her talks are worth watching. I quite like the one where she tells the future: https://youtu.be/JOM5_V5jLAs . I feel that kind of step change will shortly be with us (if we're not already in the middle of it).
Apparently it "ensures artists receive the compensation they are owed, encourages fair industry competition, and protects the intellectual property rights of studios nationwide—among other benefits."[0] However, I find it difficult to find anything about how this is actually implemented. Apart from this: "t changes the procedure by which millions of songs are made available for streaming on these services and limits the liability a service can incur if it adheres to the new process. It funds the creation of a comprehensive database with buy in from all the major publishers and digital service providers." [1] Which to me sounds very vague.
Found this also: "The MMA is a bill to be added to legislation with the goal of establishing a new collecting society, called the Mechanical Licensing Collective (MLC), that would be empowered to provide a blanket license for streaming services to companies, covering mechanical rights in any songs not otherwise covered by a digital company's direct deals with music publishers."[2] So if I interpret this quote correctly, it basically means that there's going to be some sort of organisation chaperoning artists and music makers whose music isn't otherwise licensed, collecting a license fee from streaming services and paying it out to said musicians and music makers?
... "why even call it the "disgusting food"? Obviously anyone who connotates the exhibit with the items inside will be going in with a specific bias." Well, isn't that what they are trying to challenge? The 'whys' of disgusting food. I think it's an appropriate name considering that these items are usually presented as disgusting in the general public. So by giving it the name Disgusting Food Museum they create a mindset for the visitor that is saying 'you're now entering a museum of food you'll find disgusting', and then making the visitor quesiton that mindset; Why do they find Surstromming or Balut disgusting? You can't ask yourself such questions unless you think those dishes are disgusting in the first place. Why do you think chocolate is disgusting? Well, uh, I don't.