The new QUERY method strikes me as a really promising addition. Not being able to send a body with a GET-type request is a gnawing issue I have with HTTP.
Elasticsearch uses (used to use?) the HTTP body to carry parameters for a GET request. IIRC the HTTP specification doesn't (again, didn't use to?) mandate that GET requests have no body.
It still does. I don't think it violates the HTTP/1.1 specification; it's more that it is unspecified. It's just that a lot of HTTP clients simply don't support doing a GET with a body, under the assumption that it is redundant / not needed / forbidden. Of course Elasticsearch allows POST as an alternative.
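To make the mechanics concrete, here is a minimal sketch using only Python's standard library: a toy local server that reads a GET body, and a client that sends one. The `/_search` path and the JSON shape are loose illustrations in the Elasticsearch style, not its real API.

```python
# Sketch: a GET request carrying a body, end to end, with only the
# standard library. The /_search path and JSON shape are invented
# for illustration; this is not Elasticsearch's actual API.
import json
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read the body if the client declared one; many servers simply
        # never look, which is where the interoperability trouble starts.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body or b"{}")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client happily attaches a body to a GET; whether anything along
# the path (client, proxy, server) honors it is the unspecified part.
conn = HTTPConnection("127.0.0.1", server.server_port)
query = json.dumps({"query": {"match": {"title": "http"}}})
conn.request("GET", "/_search", body=query,
             headers={"Content-Type": "application/json"})
resp = conn.getresponse()
payload = resp.read().decode()
print(resp.status, payload)
server.shutdown()
```

Note that the client side "just works" here; in practice it's intermediaries (proxies, load balancers, some server frameworks) that drop or reject the body.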
People used to obsess about HTTP verbs and their meaning a lot more than they do today. At least I don't seem to get dragged into debates on the virtues of PUT vs. POST, or using PATCH for a simple update API, anymore. If you use GraphQL, everything is a POST anyway. Much like SOAP back in the day. It's all just different ways to call stuff on servers. Remote procedure calls have a long history that predates the web.
GET requests with a body (unspecified in HTTP/1.1) reminds me of a similar case I encountered years ago: URL query params in a POST (an HTML form whose action attr contained a query string).
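The two channels really are independent, which is what makes that form case legal if odd. A small sketch (invented parameter names) showing a POST target with a query string alongside a urlencoded body, parsed separately:

```python
# Sketch: a POST can carry parameters in two independent places, the
# URL's query string and the request body. Parameter names are invented.
from urllib.parse import urlsplit, parse_qs

# What a browser would send for:
#   <form action="/search?page=2&lang=en" method="post"> ... </form>
target = "/search?page=2&lang=en"
form_body = "q=http+verbs&sort=date"

url_params = parse_qs(urlsplit(target).query)   # from the query string
body_params = parse_qs(form_body)               # from the POST body

print(url_params)   # {'page': ['2'], 'lang': ['en']}
print(body_params)  # {'q': ['http verbs'], 'sort': ['date']}
```

A server that only reads one of the two channels will silently ignore the other, which is exactly the kind of surprise that case produced.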
I feel that not obsessing about the meanings of HTTP verbs can lead, has led, and will lead to security incidents and interoperability issues between middleware. A specification where everyone gets to pick and choose different behaviors is a nightmare.
Interestingly, though, GET with data exists in the wild, and has for many years.
I maintain an HTTP library class, and a customer encountered an API that required a GET but with data (think query parameters passed as XML).
I implemented that for the customer, and then implemented the reverse in the server class. I'm not going to say it's used a lot, but it makes semantic sense.
Incidentally, the same is true of DELETE, which is another request that typically has no body.
This is the first I've heard of QUERY though, so I look forward to reading up on it.
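For the curious: QUERY comes from the httpbis "safe method with body" draft, a safe, idempotent request whose parameters travel in the body. Mechanically it's just a new verb, so a toy end-to-end sketch needs nothing special; Python's `http.server` dispatches on `do_<METHOD>` and `http.client` accepts arbitrary method tokens. The `/contacts` path, handler, and filter syntax below are made up for illustration.

```python
# Sketch of the proposed QUERY method (httpbis safe-method-w-body draft):
# like GET, but the query expression rides in the request body.
# The /contacts endpoint and filter syntax here are invented.
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class QueryHandler(BaseHTTPRequestHandler):
    # http.server dispatches on the method name, so an unfamiliar verb
    # just needs a matching do_* handler.
    def do_QUERY(self):
        length = int(self.headers.get("Content-Length", 0))
        expr = self.rfile.read(length).decode()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"you queried: {expr}".encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), QueryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("QUERY", "/contacts", body="name = 'Smith'")
resp = conn.getresponse()
answer = resp.read().decode()
print(resp.status, answer)
server.shutdown()
```

The draft's selling point over GET-with-body is that QUERY *defines* the semantics, so caches and intermediaries can participate instead of guessing.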
Nothing in the spec prevents arbitrary data in the body of a GET. But clients and proxies are implemented by lazy people who make excuses about preserving some legacy security feature or other, and continue to ignore the spec.
The spec is actually pretty clear on this - do not specify a body on a GET request.
> A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
Previously it was "SHOULD ignore the payload".
It's nothing to do with laziness or security - people are writing spec conforming software. And indeed every library I've used allows interacting with a body, even on a GET.
> The spec is actually pretty clear on this - do not specify a body on a GET request.
That's not what your quote says.
Not having defined semantics does not mean it is not supported. Just because some implementations fail to support GET with a request body does not mean all implementations should interpret it as a malformed request.
I can roll out a service with endpoints that require GET with request bodies and it would still be valid HTTP.
"out of spec" means that it is out of specification. It is literally not specified. You are doing something that is not specified. It is therefore an action that is out of specification. it is therefore out of spec.
If there were an utter ban, then it would be against the specification and non-compliant, not merely out of specification.
> "out of spec" means that it is out of specification. It is literally not specified.
That's not what it means at all. Being out of spec would mean the spec explicitly states that a request with a body must be rejected. If the spec does not say that, then you are not required to reject a request that carries a body.
No, not defined means it's not within the purview of the spec. Spec doesn't care. You can send one. Maybe it'll work, maybe it won't, maybe it'll crash, maybe it'll be rejected, maybe some proxy along the way will strip it and the server won't even get it, maybe it'll get your client banned forever.
All of these are fine, because spec doesn't care.
> If accepting body in GET is out of spec, then spec is supposed to say, GET cannot send body.
No, then it would be against spec, like HEAD with a response body.
I would guess that people either don't appreciate or don't recognize the sarcasm. Or they recognize it and don't agree that tightening the money supply is unnecessary.
The key problem I see with the current semi-permanent student debt payment holiday is that there are no added stipulations for ensuring a finished degree will come close to a net economic benefit. Otherwise, we're executively making a very expensive fiscal policy decision and allowing tertiary education costs to balloon without anchoring those costs to the expected added value to society.
I can also think of many things a bad employee might do that the employer should probably care about.
But if you hired someone, you've given them a tremendous amount of responsibility and power. If you can't trust them to do their job without monitoring everything they do, then you should just fire them and hire someone more trustworthy.
I would be interested in the reason, because we fought this successfully for a number of years with our clients' IT departments (while we were smaller). They hired us to make the development process better, and one of the things we always found problematic was that when the software developers didn't have normal internet access (but instead tunneled everything through some bottleneck HTTP proxy), the companies lost their good developers, because those developers were fed up with not being able to do their job efficiently, and the quality of the software went down.
A year ago we were bought by a big company to help them do what we had been doing until then, but on a much bigger scale. The first thing we did was get rid of the restricted internet. And now we again help our customers (big car manufacturers) to make their processes better. And even here, every developer has free access to the internet in order to work efficiently.
But as I said in the beginning, I'm really interested in the reasons for this behaviour at those large companies. My suspicion is that it's just easier for the IT department to work against the developers than to help them do their job.
There are IT shops with large dev teams that don't put proxies between their users and the Internet, but in every one of them that I'm aware of, developer laptops are subject to intrusive continuous monitoring. And, even at firms where there are no proxies, VPNs are problematic.
The reason is that large firms are legally obligated to make sure that insiders aren't exfiltrating protected or confidential information.
If it makes people feel better about this, the same countermeasures also help with the case "Adversary pops any laptop in the company via e.g. phishing or malware and then pivots to All The Things." i.e. you don't need to posit non-trust of employees to want to implement continuous monitoring of work equipment.
> If it makes people feel better about this, the same countermeasures also help with the case "Adversary pops any laptop in the company via e.g. phishing or malware and then pivots to All The Things." i.e. you don't need to posit non-trust of employees to want to implement continuous monitoring of work equipment.
Even assuming we don't care about worker privacy and all the stuff, I think we can still do better.
I have no insider information about this (not a Google employee and definitely not associated with the project), but I have read some good things about BeyondCorp.
Regardless of the threat model, "security" has to be practical. The main thing a business should care about is productivity. I've done Subversion checkouts that slowed to a crawl because the malware detection hogged the disk I/O. I've seen an "anti-theft" agent go haywire, sending its heartbeat so often that network access became unusable.
I haven't had to deal with being denied access to Stack Overflow, and frankly I would take the first offer and quit if I ever had to.
That being said, I think I am OK with random crap running on company owned machines as long as it is reasonable and does not hurt performance. Oh and there should be no expectation that I will take them home with me.
This reminds me of another funny story. One place I worked at, we were not allowed to leave our computers at our desk at the end of the day. We either had to put it in a locked cabinet or take it home with us. Nobody believes me when I tell this story but it is true.
That would only make sense if tethering via your phone, USB sticks, cameras, and every other way of making copies weren't allowed on those premises either. But this is (almost) never the case.
> But if you hired someone, you've given them a tremendous amount of responsibility and power. If you can't trust them to do their job without monitoring everything they do, then you should just fire them and hire someone more trustworthy.
Trust but verify, as Reagan used to say. We give even more power to presidents and senators, but imagine what would happen if there weren't plenty of people around to monitor what they did with that power.
To bring up just one thing that doesn't have to do with trustworthiness: If you let an employee onto a 3rd party VPN you open yourself up to a whole new vector of attacks that you can't prevent.
And then there's the problem with allowing unrestricted, unmonitored Internet access with regards to auditing and establishing a timeline of events if you ever need to do so.
You can visit sites that aren't blacklisted on the company's network, which makes it easier to social engineer you. You have less control over what stupid things your employees can do.
You're right, this wouldn't be any more dangerous than being on a coffee shop's wifi but you already don't care about network security if that's how you're working.
"Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions."
Which seems to me a disappointing end to the article. An appeal for a national effort to manage AI.
"Other countries have major AI projects" but what exactly should the US model itself after?
"The United States has not yet systematically explored it's scope" but US publishes the second most research papers on AI (https://www.timeshighereducation.com/data-bites/which-countr...) which I think in general is a bad metric but if you're looking at the output that a system produces, it's the best metric you're going to get.
Looking at the private sector, the leading AI/robotics ETF $BOTZ (https://www.globalxfunds.com/funds/botz/) is made up mostly of American companies. Similarly, look at the sheer number of ML/AI startups in SV.
So I fail to see where the crisis is. If American universities are among the leaders in AI research and American companies are among the leaders in the AI economy, why is there such a tone of urgency in this article? Kissinger's argument leads me to believe that he's advocating for a blanket "AI" initiative but he doesn't have a clear idea of what he wants this initiative to do. Without a clear direction for how he wants the US to be "relating AI to humanistic traditions", whatever he's proposing here is just the marriage of shallow musings about the consequences of AI and some kind of blind belief in federal government initiative.
He didn't say all developers working on AI are inexperienced in politics. He was simply restating his thesis, which was an appeal, to those who hadn't yet considered the issue, for skepticism about the consequences of AI.