Hacker News | kopecs's comments

Do you not think it a bit too hyperbolic to throw scare quotes around experts and imply the only people who can have opinions on systemic risk are software engineers? I don't think it is unreasonable for people who haven't run or worked for a hosting service to have opinions on the policy aspect or economic impact of hyperscalers.


> I don't think it is unreasonable for people who haven't run or worked for a hosting service to have opinions on the policy aspect or economic impact of hyperscalers.

Yeah, that's completely fair. My angle was more that firstly this doesn't come across as an opinion that needs the expert in question, and secondly this is yet another case of 'Talk is cheap, show me the code', particularly when quotes in the article include "We urgently need diversification in cloud computing."

I feel like the 'We' is doing an awful lot of heavy lifting and there's no mention of the costs of taking on such a task.

Additionally, and awkwardly, it's possible to be both a monopoly in the space but also technically a more stable solution, making the cost for competitors or people willing to use competitors doubly high.

Edit: Realised after the fact I'm GP to your post, assumed it was mine; keeping the words anyway.


I don't think anyone needs to produce any code. I've worked at companies with thousands of employees that don't use any cloud services.

It can be done, and contrary to marketing, it's probably cheaper and more reliable.


What code is needed to make a decision to go with a smaller provider instead of AWS?


No, it's 100% appropriate. Anyone can have opinions on anything, but frankly, most of them have little relevance to reality. Their use of the word "expert" is supposed to mean the person has knowledge or expertise that renders their opinion on a subject substantially more valid and relevant than any regular person's. That clearly is not the case here. If I wanted to know what a random person on the street thought about a subject, I could go ask one myself. The purpose of news organizations was supposed to be to better inform people by getting opinions from actual relevant experts in a subject.

These people don't seem to have much ability to discuss relevant subjects like what the actual reliability of lower-tier hosting providers is, or the value-add to business and iteration speed of having a variety of extra services (SQS, DynamoDB, VPC, RDS, managed K8s, and so on) available.


I don’t think it’s useful at all.

What are they going to say that’s useful for making concrete technical decisions?

They can advise on how to write contracts for dealing with these situations after the fact, I suppose.


Anyone can have an opinion, I never said or implied otherwise. Having an opinion does not make one an expert, hence the scare quotes.

The headline is misleading because when there is news about experts saying something about technology, one would naturally think that they are at least somewhat technical experts. Instead the "expert" is the director of the "Big Tech is Bad Institute" who says that "Big Tech is Bad". And their qualification of being an expert is solely that they are director of the "Big Tech is Bad Institute".


> when there is news about experts saying something about technology, one would naturally think that they are at least somewhat technical experts.

But the experts here are not "saying something about technology". Rather they are saying something about uses of technology. So they don't need to be cloud engineers or know anything about datacenters, at all, really. What would be required (and here you may have a leg to stand on) is expertise in social and economic aspects of (now) critical infrastructure.


And one would hope that the stats being quoted about desktop share were from someone who has been at that research firm in the last 20 years or so. I'm not sure how active he is at all at this point. I have a feeling someone looking for some stats found something old that may or may not have actually had a date on it.

(If I'm wrong mea culpa but I'm pretty sure.)


Assuming you're referring to Thaler v. Perlmutter, Thaler claimed to the copyright office that the image at issue was "autonomously created by a computer algorithm running on a machine". So the question of "if you claim the LLM did it itself" is settled (shocker, cf. Naruto v. Slater, 888 F.3d 418), but that definitely did not settle "_I_ used the LLM to do it".


Tbf, IANAL and was only repeating what journalists wrote back then. Ultimately, I have no deeper knowledge of the laws in question and thus don't have a qualified opinion on the matter.


I think the suggestion is that the government use of that public data could be such as to create a chilling effect. That is, the upload and interaction of the user with the private company is almost irrelevant: it is just part of the antecedent to the government's conduct.

If you believe the government would only use that data for just purposes then you probably wouldn't then believe that there is a 1A issue. But if you think the government would use it to identify persons at a protest and then take adverse actions against them on the basis of their presence alone (which to be clear, seems distinguished from the immediate instance) you would probably think there is a 1A issue.


I don't think it is accurate to say that the data becomes the government's or they have to act as an informant (I think that implies a bit more of an active requirement than responding to a subpoena), but I agree with the gist.


Do you think the 4th amendment enjoins courts from requiring the preservation of records as part of discovery? The court is just requiring OpenAI to maintain records it already maintains and segregate them. Even if one thinks that _is_ a government seizure, which it isn't---See Burdeau v. McDowell, 256 U.S. 465 (1921); cf. Walter v. United States, 447 U.S. 649, 656 (1980) (discussing the "state agency" requirement)---no search or seizure has even occurred. There's no reasonable expectation of privacy in the records you're sending to OpenAI (you know OpenAI has them! See, e.g., Smith v. Maryland, 442 U.S. 735 (1979)) and you don't have any possessory interest in the records. See, e.g., United States v. Jacobsen, 466 U.S. 109 (1984).


This would help explain why entities with a “zero data retention” agreement are “not affected,” then, per OpenAI’s statement at the time? Because records aren’t created for those queries in the first place, so there’s nothing to retain?


AIUI, because if you have a zero data retention agreement you are necessarily not in the class of records at issue (since enterprise customers' records are not affected, again AIUI per plaintiffs' original motion, which might be because they don't think they're relevant for market harm or something).

So I think that this is more so an artefact of the parameters than an outcome of some mechanism of law.


> There's no reasonable expectation of privacy in the records

There is a reasonable expectation that deleted and anonymous chats would not be indefinitely retained.

> The court is just requiring OpenAI to maintain records it already maintains and segregate them.

Incorrect. The court is requiring OpenAI to maintain records it would have not maintained otherwise.

That is the crux of this entire thing.


> The court is requiring OpenAI to maintain records it would have not maintained otherwise.

Not quite. The court is requiring OpenAI to maintain records longer than it would otherwise retain them. It's not making them maintain records that they never would have created in the first place (like if a customer of theirs has a zero-retention agreement in place).

Legal holds are a thing; you're not going to successfully argue against them on 4A grounds. This might seem like an overly broad legal hold, though, but I'm not sure if there are any rules that prevent that sort of thing.


> This might seem like an overly broad legal hold

Exactly


Litigation holds do not violate the 4th amendment.


Well, presumably the claim would be that a factor in their not having taxable income was the fact that they didn't have to amortize their development cost.


Yeah; start-ups will start paying tax much sooner since salaries are the main expense in software development, and only a fraction can be deducted per year. The tax change must make things marginally more difficult for young companies that have some revenue, aren't cash-flow positive, and have a short horizon.
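To make that concrete, here's a rough sketch with made-up numbers. It assumes the post-2022 Section 174 treatment as commonly described: domestic R&E costs (including developer salaries) amortized over 5 years with a mid-year convention, so only 10% is deductible in year one. The revenue and salary figures are purely illustrative.

```python
# Hypothetical start-up: $2.0M revenue, $1.5M in developer salaries
# (its only expense). Numbers are illustrative, not from the article.
revenue = 2_000_000
dev_salaries = 1_500_000

# Old rule: software development salaries expensed immediately.
taxable_old = revenue - dev_salaries  # $500,000 of taxable income

# Post-2022 Section 174 (as I understand it): amortize over 5 years
# with a mid-year convention, so only 10% is deductible in year one.
year_one_deduction = int(dev_salaries * 0.10)  # $150,000
taxable_new = revenue - year_one_deduction     # $1,850,000

print(taxable_old)  # 500000
print(taxable_new)  # 1850000
```

Same cash out the door, but year-one taxable income nearly quadruples, which is why the pain falls hardest on companies with revenue but thin margins.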


It's not marginal. It significantly impacts sub-$10MM companies.


> Meanwhile, software developers spot code fragments seemingly lifted from public repositories on Github and lose their shit. What about the licensing? If you’re a lawyer, I defer. But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.

Seriously? Is this argument, in all earnestness, "No profession has been more contemptuous, therefore we should keep on keeping on"? Should we as an industry not bother to try to improve our ethics? Why don't we all just make munitions for a living and wash our hands of guilt because "the industry was always like this"?

Seems a bit ironic against the backdrop of <https://news.ycombinator.com/user?id=tptacek>:

> All comments Copyright © 2010, 2011, 2012, 2013, 2015, 2018, 2023, 2031 Thomas H. Ptacek, All Rights Reserved.

(although perhaps this is tongue-in-cheek given the last year)


It's a fib sequence: the gaps between the years are Fibonacci numbers.
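A quick check of the claim, using the copyright years quoted upthread:

```python
# Copyright years from the profile quoted above.
years = [2010, 2011, 2012, 2013, 2015, 2018, 2023, 2031]

# Gaps between consecutive years.
gaps = [b - a for a, b in zip(years, years[1:])]
print(gaps)  # [1, 1, 1, 2, 3, 5, 8]

# From the fourth gap onward, each gap is the sum of the two before it,
# i.e. the gaps follow the Fibonacci recurrence.
assert all(gaps[i] == gaps[i - 1] + gaps[i - 2] for i in range(3, len(gaps)))
```

By that pattern the next year in the list would be 2031 + 13 = 2044.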



There already was a verdict (for the Times) but it was thrown out on appeal. This is a re-do.

ETA: I looked up the docket [0] and in fact, this was the second appeal (See ECF 64). There was also an appeal on a prior MTD, hence the extreme delay.

[0]: https://www.courtlistener.com/docket/6081165/palin-v-the-new...


Probably https://free.law/

ETA: which is of course mentioned on the thread root. But RECAP users would be paying, in that case.

