Hacker News | NickLamp's comments

The new QUERY method strikes me as a really promising addition. Not being able to send a body with a GET-type request has long been a gnawing issue I have with HTTP.


Elasticsearch used to (still does?) use the HTTP body to carry parameters for a GET request. IIRC the HTTP specification doesn't (or didn't?) mandate that a GET request have no body.

Edit: Example of the Elasticsearch API with a GET body, now deprecated https://www.elastic.co/guide/en/elasticsearch/reference/7.7/...

Edit 2: https://stackoverflow.com/questions/978061/http-get-with-req...

The now obsolete specification https://www.rfc-editor.org/rfc/rfc2616 , obsoleted by https://www.rfc-editor.org/rfc/rfc9110


It still does. I don't think it violates the HTTP/1.1 specification; it's more that the behavior is unspecified. It's just that a lot of HTTP clients simply don't support doing an HTTP GET with a body, under the assumption that it is redundant / not needed / forbidden. Of course Elasticsearch allows POST as an alternative.
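As a sketch of why this works at all (endpoint and payload are invented, in the style of the old Elasticsearch search API): at the framing level a GET body is perfectly well-formed, because Content-Length delimits it exactly as it would for a POST. Only the semantics are left undefined.

```python
import json

# Hypothetical search payload, in the style of the now-deprecated
# Elasticsearch "GET with body" search API.
body = json.dumps({"query": {"match": {"title": "hello"}}}).encode()

# On the wire, the body is framed by Content-Length exactly as for a
# POST; only the semantics of a GET body are left undefined by the spec.
request = (
    b"GET /_search HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Type: application/json\r\n"
    + ("Content-Length: %d\r\n\r\n" % len(body)).encode()
    + body
)

print(request.decode())
```

Whether the server, or any middlebox in between, actually reads that body is another matter entirely.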

People used to obsess about those HTTP verbs and their meaning a lot more than today. At least I don't seem to get dragged into debates on the virtues of PUT vs. POST, or using PATCH for a simple update API. If you use GraphQL, everything is a POST anyway. Much like SOAP back in the day. It's all just different ways to call stuff on servers. Remote procedure calls have a long history that predates all of the web.


GET requests with a body (unspecified in HTTP/1.1) reminds me of a similar case I encountered years ago: URL query params in a POST (an HTML form whose action attr contained a query string).


It's fairly common to have POST requests that have both a body and a query string. Or maybe not "fairly common" but it isn't really rare.
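A quick sketch of the mechanics (URL and form fields are made up): the two channels never collide, because the query string rides in the request target while the form fields ride in the body.

```python
from urllib.parse import parse_qs, urlencode, urlsplit

# Hypothetical HTML form whose action attribute already carries a query
# string; the form fields themselves travel in the POST body.
action = "https://example.com/submit?campaign=spring&ref=home"
form_body = urlencode({"name": "Ada", "email": "ada@example.com"})

# Server side, the two arrive separately: the query string comes from the
# request target, the form fields come from the request body.
query = parse_qs(urlsplit(action).query)
fields = parse_qs(form_body)

print(query)   # {'campaign': ['spring'], 'ref': ['home']}
print(fields)  # {'name': ['Ada'], 'email': ['ada@example.com']}
```

The problems start when a framework merges both into one "params" dict and the two sources disagree on a key.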


I know it's not that rare. But it is problematic.


I feel that not obsessing over the meanings of HTTP verbs can, has, and will lead to security incidents and interoperability issues between middleware. A specification where everyone gets to pick and choose different behaviors is a nightmare.


The obsession with fine-grained distinctions may be gone, but GET/POST is still relevant when talking about caching etc.


Interestingly, though, GET with data exists in the wild, and has for many years.

I maintain an HTTP library class, and a customer encountered an API that required a GET but with data (think query parameters passed as XML).

I implemented that for the customer, and then implemented the reverse in the server class. I'm not going to say it's used a lot, but it makes semantic sense.

Incidentally, the same is true for DELETE, which is another request that typically has no body.

This is the first I've heard of QUERY though, so I look forward to reading up on it.


Just a heads-up that CloudFront will return a 403 when it receives a GET with a body.

It's documented and all but I still find it a peculiar choice. A 400 would have been better and less of a red herring.


It also produces an error on all Apple platforms, as it's banned by URLSession.


Is it that you can but you’re just told not to, even if both client and server agree on the semantics?


Just send a POST request lol


But you're only "supposed" to use POST with requests that add/modify data, or something silly like that.

In practice, QUERY is most useful for where you want a bunch of different verbs for the same endpoint and need a body.


QUERY is cacheable.
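Since QUERY is defined as safe and idempotent, a cache can in principle key responses on the request content as well as the URL. A rough sketch of what such a content-addressed cache key could look like (this scheme is my own assumption for illustration, not the draft's actual algorithm):

```python
import hashlib

def query_cache_key(url: str, body: bytes) -> str:
    # Sketch: key cached QUERY responses on the target URL plus a digest
    # of the request content, so identical bodies share a cache entry.
    # (This scheme is an assumption, not the draft's actual algorithm.)
    return f"QUERY {url} sha256:{hashlib.sha256(body).hexdigest()}"

k1 = query_cache_key("https://example.com/contacts", b'{"name":"Ada"}')
k2 = query_cache_key("https://example.com/contacts", b'{"name":"Ada"}')
print(k1 == k2)  # True: same URL and body -> same cache key
```

You can't do that with POST, where two identical requests may legitimately have different effects.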


... in theory only. Not many HTTP caches support QUERY, and many HTTP middleboxes ban non-GET/POST verbs.


For now. Change has to come from somewhere.


And retry-able.


I do... but there are good reasons not to want to. If you read the linked spec you can learn for yourself:

https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...


In practice I think that’s exactly what many people do.


GraphQL is doomed?


No, this would actually be very useful for GraphQL API's.


Nothing in the spec prevents arbitrary data in the body of a GET. But clients and proxies are implemented by lazy people who make excuses that they are preserving some legacy security feature or something, and continue to ignore the spec.


The spec is actually pretty clear on this - do not specify a body on a GET request.

> A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.

Previously it was "SHOULD ignore the payload".

It's nothing to do with laziness or security - people are writing spec conforming software. And indeed every library I've used allows interacting with a body, even on a GET.


> The spec is actually pretty clear on this - do not specify a body on a GET request.

That's not what your quote says.

Not having defined semantics does not mean it is not supported. Just because some implementations fail to support GET with a request body does not mean all implementations should interpret it as a malformed request.

I can roll out a service with endpoints that require GET with request bodies and it would still be valid HTTP.


> That's not what your quote says.

Yes it does. "No defined semantics" = "out of spec".

> I can roll out a service with endpoints that require GET with request bodies and it would still be valid HTTP.

You're out of the HTTP spec entirely.


How are you interpreting that English?

Not defined means it could be anything. If accepting a body in a GET were out of spec, then the spec would be supposed to say that GET cannot send a body.


"out of spec" means that it is out of specification. It is literally not specified. You are doing something that is not specified. It is therefore an action that is out of specification. it is therefore out of spec.

If there was an utter ban, then it would be against specification and not compliant, not merely out of specification.


> "out of spec" means that it is out of specification. It is literally not specified.

That's not what it means at all. Being out of spec would mean the spec explicitly states that a request with a body should be rejected. If the spec does not state that a request with a body should be rejected, then you are not required to reject a request which packs a body.


> Not defined means, it could be anything.

No, not defined means it's not within the purview of the spec. Spec doesn't care. You can send one. Maybe it'll work, maybe it won't, maybe it'll crash, maybe it'll be rejected, maybe some proxy along the way will strip it and the server won't even get it, maybe it'll get your client banned forever.

All of these are fine, because spec doesn't care.

> If accepting body in GET is out of spec, then spec is supposed to say, GET cannot send body.

No, then it would be against spec, like HEAD with a response body.


You can do whatever the fuck you want, the spec defines what it defines.


Because companies tend to interview less during the holiday season, which means people laid off will have a harder time finding a job.


I would guess that people either don't appreciate or don't recognize the sarcasm. Or recognize it and don't agree that tightening the money supply is unnecessary


The key problem I see with the current semi-permanent student debt payment holiday is that there are no added stipulations for ensuring a finished degree will come close to a net economic benefit. Otherwise, we're executively making a very expensive fiscal policy decision and allowing tertiary education costs to balloon without anchoring those costs to the expected added value to society.


I'd like to sell you a bridge


Lichess is up (and also better)


Children of the faculty


I can think of many things one might do on a VPN that an employer might and probably should care about


I can also think of many things a bad employee might do that the employer should probably care about.

But if you hired someone, you've given them a tremendous amount of responsibility and power. If you can't trust them to do their job without monitoring everything they do, then you should just fire them and hire someone more trustworthy.


There is basically no IT shop at a large company that operates this way, and there's a reason for that.


I would be interested in the reason, because we've been fighting this successfully for a number of years with our clients' IT departments (while we were smaller). They hired us to make the development process better, and one of the things we always found problematic was that when the software developers didn't have normal internet access (but instead tunneled everything through some bottleneck HTTP proxy), the clients both lost the good developers, who were fed up with not being able to do their job efficiently, and saw the quality of their software go down.

A year ago we were bought by a big company to help them do what we had done until then, but on a much bigger scale. The first thing we did was get rid of the restricted internet. And now we again help our customers (big car manufacturers) make their processes better. And even here, every developer has free access to the internet in order to work efficiently.

But as I said in the beginning, I'm really interested in the reasons for this behaviour at those large companies. My suspicion is that it's just easier for the IT department to work against the developers instead of helping them do their job.


There are IT shops with large dev teams that don't put proxies between their users and the Internet, but in every one of them that I'm aware of, developer laptops are subject to intrusive continuous monitoring. And, even at firms where there are no proxies, VPNs are problematic.

The reason is that large firms are legally obligated to make sure that insiders aren't exfiltrating protected or confidential information.


> The reason is that large firms are legally obligated to make sure that insiders aren't exfiltrating protected or confidential information.

If it makes people feel better about this, the same countermeasures also help with the case "Adversary pops any laptop in the company via e.g. phishing or malware and then pivots to All The Things." i.e. you don't need to posit non-trust of employees to want to implement continuous monitoring of work equipment.


> If it makes people feel better about this, the same countermeasures also help with the case "Adversary pops any laptop in the company via e.g. phishing or malware and then pivots to All The Things." i.e. you don't need to posit non-trust of employees to want to implement continuous monitoring of work equipment.

Even assuming we don't care about worker privacy and all that stuff, I think we can still do better.

I have no insider information about this (not a Google employee and definitely not associated with the project) but I read some good things about BeyondCorp

https://news.ycombinator.com/item?id=14596613

Regardless of the threat model, "security" has to be practical. The main thing a business should care about is productivity. I've done Subversion checkouts that slowed to a crawl because the malware detection hogged the disk I/O. I've seen an "anti-theft" agent go haywire, sending a heartbeat too often and making network access unusable.

I haven't had to deal with being denied access to stack overflow and frankly I would take the first offer and quit if I ever had to.

That being said, I think I am OK with random crap running on company owned machines as long as it is reasonable and does not hurt performance. Oh and there should be no expectation that I will take them home with me.

This reminds me of another funny story. One place I worked at, we were not allowed to leave our computers at our desk at the end of the day. We either had to put it in a locked cabinet or take it home with us. Nobody believes me when I tell this story but it is true.


That would only make sense if tethering via your phone, USB sticks, cameras, and every other way of making copies were also disallowed on those premises. But this is (almost) never the case.


In some jurisdictions the monitoring is forbidden due to privacy or labour protection laws.


> But if you hired someone you've given them a tremendous amount of responsibility and power, if you can't trust them to do their job without monitoring everything they do then you should just fire them and hire someone more trustworthy.

Trust but verify, as Reagan used to say. We give even more power to presidents and senators, but imagine what would happen if there were not plenty of people around to monitor what they did with that power.

https://en.wikipedia.org/wiki/Trust,_but_verify


To bring up just one thing that doesn't have to do with trustworthiness: If you let an employee onto a 3rd party VPN you open yourself up to a whole new vector of attacks that you can't prevent.

And then there's the problem with allowing unrestricted, unmonitored Internet access with regards to auditing and establishing a timeline of events if you ever need to do so.


What attacks are opened by a VPN that aren't open to an ISP or the local coffee shop?


I'm not referencing MITM attacks...

You can visit sites that aren't blacklisted on the company's network which makes it easier to social engineer you. You have less control over what stupid things your employees can do.

You're right, this wouldn't be any more dangerous than being on a coffee shop's wifi but you already don't care about network security if that's how you're working.


Towards the end of the article Kissinger states

"Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions."

Which seems to me a disappointing end to the article. An appeal for a national effort to manage AI.

"Other countries have major AI projects" but what exactly should the US model itself after?

"The United States has not yet systematically explored it's scope" but US publishes the second most research papers on AI (https://www.timeshighereducation.com/data-bites/which-countr...) which I think in general is a bad metric but if you're looking at the output that a system produces, it's the best metric you're going to get.

Looking at the private sector, the leading AI/Robotics ETF $BOTZ (https://www.globalxfunds.com/funds/botz/) is made up mostly of American companies. Similarly, look at the sheer number of ML/AI startups in SV.

So I fail to see where the crisis is. If American universities are among the leaders in AI research and American companies are among the leaders in the AI economy, why is there such a tone of urgency in this article? Kissinger's argument leads me to believe that he's advocating for a blanket "AI" initiative but he doesn't have a clear idea of what he wants this initiative to do. Without a clear direction for how he wants the US to be "relating AI to humanistic traditions", whatever he's proposing here is just the marriage of shallow musings about the consequences of AI and some kind of blind belief in federal government initiative.


He didn't say all developers working on AI are inexperienced in politics. He was simply restating his thesis: an appeal to those who hadn't yet considered what he was writing about to be skeptical of the consequences of AI.

