What's sad is that unionizing will accelerate whatever decline in the company is causing the dissatisfaction. Wiser for employees to just jump ship or found a new game studio when this kind of decline happens.
The chances of a company turning around are super low, adding a union makes it harder. Just run.
The alternative, for every company, is to proactively repair the conditions incentivizing the formation of a union. It continues to amaze me that those in charge of making those decisions choose decline over the alternatives.
> The thing holding back unions in the U.S is the unions themselves and the laws around them. Once a union forms, they have entirely too much power.
This is a nice summary of the central issue with unions in the U.S. A rational person can quickly see why people are clamoring for unions in the U.S. and also why American companies are so resistant.
Besides a complete stranglehold on labor markets in a number of industries, where the government is required to use union labor for infrastructure projects and the unions limit the number of laborers to drive up prices? Or how about the Plumbers union that forced the city of Chicago to keep installing lead pipes until the federal government stepped in to stop it. Beyond that, there's the power to promote good workers or make necessary changes across the org. For example, why does Chicago have no driverless trains and a conductor shortage? The unions are preventing both.
I stole a lot of books too, reading them and all. Just integrated them into my worldview, and don't pay a license fee when I use the ideas in new contexts. Sometimes I even quote from them. A lot of them I didn't even pay for, I borrowed them from libraries or friends.
Have you considered that you have some traits that make you eligible to read books and access information freely in the country you live in*? Something about being a conscious human being enjoying human rights, perhaps? An implement that does the same but (A) at scale and (B) without thought or free will or agency, completely at the bidding of its operator, for profit, has no such protections. Instead, the operator carries all responsibility (in this case, Meta).
If a software service had legal protections like that, sure, I could build one that returns you any book you request and say that the service had integrated it into its worldview. Who can check, eh?
* Actually, in some countries you could be in trouble for reading a book and incorporating it into your worldview, to say nothing about quoting it, but let’s set that aside.
>Have you considered that you have some traits that make you eligible to read books and access information freely in the country you live in*? Something about being a conscious human being enjoying human rights, perhaps?
Not a relevant factor when it comes to copyright law. Fair use (the doctrine that's most applicable here) applies regardless of whether you're a student incorporating news articles into your work or Google making thumbnails and displaying them in its search results.
This is not a good analogy. Google does not display the contents to any significant degree (you have to visit the search result). And even then it was/is in legal trouble, in fact (in some countries like Australia* more than others).
Furthermore:
> Examples of fair use in United States copyright law include commentary, search engines, criticism, parody, news reporting, research, and scholarship.
I do not see “automated generation of derivative works of arbitrary nature” in it.
>This is not a good analogy. Google does not display the contents to any significant degree (you have to visit the search result).
The point isn't that AI training is legal because it's like generating thumbnails. That is being argued in the courts right now. The point is that fair use exemptions aren't limited to "being a conscious human being enjoying human rights", as Google generating thumbnails and snippets using computers shows.
> Examples of fair use in United States copyright law include commentary, search engines, criticism, parody, news reporting, research, and scholarship.
> The point is that fair use exemptions isn't limited to "being a conscious human being enjoying human rights"
Sure. However, my point is that this is not fair use*, so other principles need to be applied. Whether legal systems in various countries find that fair use applies here or not, I agree we have yet to see.
* At least in cases where it’s an LLM operated at scale for profit (which I suppose would not hold for Meta’s models if they were truly open, but that’s not the case if they require obtaining a license in some conditions).
>Sure. However, my point is that this is not fair use (at least in cases where it’s an LLM operated for profit), so other principles need to be applied.
This isn't a complete argument. Most of the AI companies' argument relies on the claim that AI models are "transformative". That's a plausible claim, and as Perfect 10 v. Google and Authors Guild, Inc. v. Google, Inc. have shown, being a for-profit company is hardly a disqualification from getting fair use protection.
“Transformative” is always a grey area. If my service just returns you a book you requested, but in upper case, then it was transformed.
But sure, the “transformative” argument is the one that could apply (and I believe even Google used it to argue its case), if it can be shown that an LLM cannot verbatim reproduce a given work (which, incidentally, is something that you, a warm-blooded fleshy human with agency who has the freedom to read books, cannot do, but LLMs have been shown to do).
That said, the relevant laws existed before LLMs, and many are outdated. If the goal is to balance reasonable uses while protecting the original output of authors that ultimately drives innovation and creativity, I am not sure the preexisting laws are continuing to fulfil their function, but that’s my opinion.
>But sure, the “transformative” argument is the one that could apply (and even I believe Google used it to argue its case), if it can be shown that an LLM can not verbatim reproduce a given work.
You have to try pretty hard to get LLMs to reproduce a work verbatim, especially any lengthy passages that aren't famous (and thus re-quoted on the internet a bazillion times). Moreover, just because LLMs can reproduce a work verbatim if you try hard enough doesn't mean they're not transformative. Google search snippets and Google Book Search have been ruled "transformative" by the courts, but if you tried hard enough you could use them to extract the entire work.
>That said, the relevant laws existed before LLMs, and many are outdated. If the goal is to balance reasonable uses while protecting the original output of authors that ultimately drives innovation and creativity, I am not sure the preexisting laws are continuing to fulfil their function, but that’s my opinion.
AFAIK the era of mining the public internet or published works for AI training data is over, or at least coming to an end. Everything that could be mined has already been mined, and besides, the internet is getting increasingly polluted by AI output. Private training data is where it's at now, whether it's sourcing document troves from companies (e.g. emails, documentation, source code, etc.) or paying "AI annotators" to produce training data for you. If the argument is that human authors should get a cut of AI profits because their works were "stolen" to train the models, this is going to be an increasingly losing argument, because it doesn't have a leg to stand on for private training data.
> If the argument is that human authors should get a cut of AI profits because their works were "stolen" to train the models, this is going to be an increasingly losing argument, because it doesn't have a leg to stand on for private training data.
The argument can be made that LLMs could not be created without expropriating the original works of all the authors they were trained on, and that argument would in fact be true and have quite sturdy legs as far as I’m concerned.
It’s not some historical episode from forgotten times: it started less than half a decade ago, and I would be surprised if it’s not still ongoing (your argument about synthetic training data is forward-looking).
>The argument can be made that LLMs could not be created without expropriating the original works of all the authors they were trained on, and that argument would in fact be true and have quite sturdy legs as far as I’m concerned.
That makes as much sense as "American industry was built on the backs of British inventors (back in the day, America was the "China" of IP), so Britain should get perpetual (?) royalties from the US economy".
So we’re back to the human vs. unthinking machine distinction. American inventors were human. We’re going in circles, and this article was hidden on HN anyway.
> I do not see “automated generation of derivative works of arbitrary nature” in it
The “automated” isn’t really key. If you read a book, and learn from it, and are able to use that knowledge in other contexts, should you pay a licensing fee? It doesn’t matter if “you” is a human or machine.
“Automated” is key. You are not an automaton, not a machine, you do not infinitely scale with compute power; but unlike a machine you have free will and agency, and the legal frameworks of developed countries grant you human rights that include freedom. That was, in fact, my entire point.
I just don’t get your angle. My point was that the human is the one who has some freedoms and the one who bears responsibility. If you read a bunch of books in a bookstore without buying them and use your imperfect memory of them to do your job better and get paid more, it is shady, but if you are not shooed away by the store owner you have the freedom to do it. No one can extract the books you already read from your brain, and you did not sign an NDA. But if you set up an industrial-scale book scanner in the same store, the boss will call the police on you, and you cannot point fingers and say the scanner “reads” books and incorporates them into its worldview just like you would. Because the scanner is not human and you are, so you’re the one responsible for operating the scanner.
I see no difference with cryptotokens here: the human has the freedom to do things, and the human is responsible for them if those things are bad. (Just that, unlike with LLMs, theft of property and all that is pretty much always a crime, unlike reading a book in a shop without buying it.)
Maybe not, but even if Meta did buy one copy of every book, I doubt it would stop anyone from making bad analogies to theft. (Not that the analogy on the other side, to a human reading, is any better.)
>You have bought the text so you have the readright, but you do not have the copyright.
You do, however, have the right to make derivative works based on the contents of the book. Reading a physics textbook doesn't mean you can't write a blog post about gravity or whatever, and reading Harry Potter doesn't mean you can't write a series of fantasy books involving a young wizard trying to fight an evil wizard.
Last I looked, machines are considered unable to create copyrightable content, so your attempt to compare that to LLMs might not work in court.
> The application was denied because, based on the applicant’s representations in the application, the examiner found that the work contained no human authorship. After a series of administrative appeals, the Office’s Review Board issued a final determination affirming that the work could not be registered because it was made “without any creative contribution from a human actor.”
>Last I looked, machines are considered unable to create copyrightable content, so your attempt to compare that to LLMs might not work in court.
That just means whatever they produce can't be copyrighted, not that they can't produce derivative works. Courts have upheld the right for Google to produce thumbnails of copyrighted works, even though the procedure for producing thumbnails is done by a computer and the result thus can't be copyrighted.
Sure, which is a thing we've sort of agreed on as a society based on human consumption and creativity. Not that we all agree, and not that this agreement is free of influence from megacorps. But the context in which we've enacted copyright law is based on values related to human consumption and creativity.
Maybe we will end up agreeing that we just want to stick with those same laws for machine consumption and creativity. But maybe we won't since they are quite different things.
What if we could buy the books for one human, make that human read all the books, and then somehow clone the human in a way that they remember the book contents?
I think we should pay authors a fair wage based on some measure of the quality of the content instead of how well the book sells.
There's no reason for Harry Potter, for example, to be 10,000 times more valuable than a book on quantum mechanics only because the former is more popular and the latter is on a more obscure topic.
> I stole a lot of books too, reading them and all. Just integrated them into my worldview, and don't pay a license fee when I use the ideas in new contexts.
That... doesn't make it okay...
> A lot of them I didn't even pay for, I borrowed them from libraries or friends.
This 2nd sentence doesn't fit your first. What is your message?
The "and then fed them into an AI" part of "facebook pirated a bunch of books and then fed them into an AI" part is irrelevant. It would be equally illegal if they pirated them and then sat around reading them. Unless you somehow hope that the entirety of copyright will be overturned by this court case (not a chance) then you should strongly hope that facebook loses, because the alternative is literally "rules for thee but not for me" where corps can pirate whatever they want, but nothing changes for ordinary citizens.
So first there was this ‘corporations are people’ and now we have ‘computers are people’.
So I expect to see either that you are no longer allowed to own computer software, or a return of slavery.
Also, if we find an indecent portrayal of minors in a data centre, I expect that we treat it as a strict liability crime and that the entire data centre, or the corporation that owns it, gets a long prison sentence, just like a human would. However that is supposed to work.
This viewpoint is one widely shared inside the AI community — that AI systems should be able to learn from material just as humans do.
Extrapolated out into some new future a hundred years from now when we have embodied AI humanoids walking alongside us, would it be weird if those humanoids were barred from buying a new book or charged a different rate than the humans they coexist with?
I’m still deciding how I feel about some of this too.
If we are going to afford models like this treatment equivalent to sentient beings in this regard, why not in others? In your extrapolation, these AIs walking among us are the property of giant tech companies…
Valid point. And to some extent a lot of existing licensing models cover this for humans too (are you using this thing for yourself, or are you using this thing on behalf of your company).
There will be a lot to figure out over the coming years.
> This viewpoint is one widely shared inside the AI community — that AI systems should be able to learn from material just as humans do.
I'm not even against this, to a point. The issue is what comes after. The monetization. The enshittification. The derivatives in place of real creativity.
Even if we could all train evenly, those with money will always win out on execution. You can't possibly believe that just "letting things play out" as they have been so far, but with fewer copyright guardrails, is the solution here?
It's a straw man, though: whether or not AI should be allowed to learn from books is irrelevant to the point that Meta stole tens of thousands of books to accomplish this, a fact they've admitted to and one that would be trivially proven even if they hadn't.
They're not being charged, that would be a vast improvement over reality.
At ten dollars per book, that'd just be a few hundred thousand dollars. They spent way more than that training the model, and probably will spend more on legal fees in this case.
But if they had done that, I bet they would have been sued anyway.
Even accepting that, the law should be encouraging creative output by individuals and there is justifiable fear that this will be used to bypass protections designed to reward such behavior.
For a more direct counterexample, I can memorize something and type it back out, but if it is copyrighted the law doesn’t make an exception just because it passed through my head.
If the AI is able to type back out a duplicate of the training data, then I agree that's copyright infringement. If it just learns from the data like a human with normal memory reading a large amount of material, then I don't see it. That's normally the case. There have been experiments where someone managed to make an AI spit out near-copies, but it's not the default situation and seems preventable.
I do agree that we should encourage human creativity. But if AI isn't making copies, and the output of AI isn't awarded copyright (as is currently the case) then I think humans still have sufficient reward.
The courts have already ruled that training on data is similar enough to reading it (sufficiently transformative) to be considered fair use, in the same way that I cannot claim a copyright on your brain because you read this comment.
On the other hand, they torrented books and then open sourced LLM weights. No punishment is too severe for that!
If you still don’t understand, I strongly suggest watching Max Headroom, “Lessons”, which you can get here:
I think he’s saying he works for meta and when the company employees committed mass copyright violation that’s ok because once someone read Winnie the Pooh to him at story hour at the library.
Honestly, if people continue to conflate human development with a megacorp trawling copyrighted material to build a mathematical model, then wrapping it up and charging a subscription for it, then there's really not much you, I, or anyone else can do to avoid the inevitable fallout, and we really deserve everything we get for it.
I agree with you until this part. There comes a time where I don't think I deserve to get my eyes poked out just because other people find that fashionable.
OP is making the spurious argument that technology should have the same ethical entitlements as humans. It's on par with "information wants to be free".
I don't read it as an ethical argument, it's an argument about the purpose of copyright. Copyright is intended to restrict reproduction of a work for the purpose of incentivizing the creation of new works. Copyright is not intended to restrict the transmission of knowledge.
My thoughts as well. I just prefer to remove the nuance on these types of things. If OP wants to draw a line and clearly state "I'm with the corpo robots" that's fine. Just state it plainly so I can proceed accordingly.
It greatly limits your available flying days if you think "hey 30mph crosswinds are super sketchy, I don't want to fly today." So that means delayed trips or delayed returns. Hard to plan around.
I realize this isn't part of the current iteration and requires lots of regulatory hoops... But in future with automation, it would be amazing to know you could fly in clouds or evenings easily with only a basic private pilot's license.
Aviation self driving is so much older and reliable than automotive self driving, it's frustrating that it isn't generally available. It's awesome that you are working to bring it to low cost flying, thanks for working on this and congratulations on the launch!
I'll add that I think the "easy to drive as a boat or car" size of the market is easily 10x the existing private pilot market. (And the easy as a car and the price of a pickup truck size of the market is probably 1000x the current market). So I think you are on to something big.
You're not landing this thing in a 30mph crosswind in something this small, no matter how fancy the control logic. You'd be too skewed to land safely. 30mph is pushing it for some airliners. The aircraft this is built on is only certified for a 15kt crosswind.
The bigger problem is that low level winds that strong are often associated with bad weather...which again doesn't mix well with small planes.
I dearly wish Apple would just publish Dark Sky again. Let the Weather app be whatever super clean design hero you want, just give us back this perfect information dense weather app to use day to day.
There have to be dozens of devs in apple who would love to be on the 1-2 person team it would take to maintain it. (It was a 2 person startup for years, don't come at me with how hard stuff is.) It could even be a reward for good service, "ok you successfully mucked around with weird EU privacy law in the health app for 2 years, instead of a sabbatical for therapy how about you get to work on Dark Sky for a year?"
What good would that be without the information backing it? (The DarkSky API server)
And if that information does still exist in the (public) Apple Weather API, why hasn't anyone (not just some Apple Engineer) just created an app with the views people care about?
An excellent unintended consequence of forcing builders into modular housing might be a much more robust modular housing market. Sort of like Tesla starting with higher-cost Roadsters and Model S cars until there is scale to compete at volume. It would be terrific to see some productivity gain in construction, which has actually declined since 1987. See the Single-Family graph here: https://www.bls.gov/productivity/highlights/construction-lab...
It's worth also noting that despite the drama, Costco is not looking to lose money on these. So the apartments will be nice enough to rent. Nimbys tend to forget the discipline of the market, especially in their talking points.
Clayton, a leading single-family home builder, recently made big environmental news by deciding to convert nearly all of the 42,000 modern manufactured homes it builds annually to be certified ENERGY STAR and Zero Energy Ready Homes (ZERH). These certifications mean that its manufactured homes will be much more efficient, save homeowners money on their utility bills (up to 50%) and provide premium energy-efficient appliances that are often considered unaffordable to the average family.
Clayton’s switch to ZERH also means nearly all of its homes now come with a heat pump water heater (HPWH), which likely represents the single largest procurement of HPWHs in the history of the technology. To put it in context, the entire HPWH market in 2022 shipped 141,000 units, and Clayton alone will increase this total by 30%!
Context is important, this is targeted at journalists. They are usually trying to make a point to casual readers.
For readers with more interest or who are numerate in their day jobs (engineers, finance, or economists), dual axis charts can often be a great choice.
Since we are engineers or founders trying to deal with very complex systems, adding detail and clarity like the Economist or Edward Tufte does is the better way to go.
Author here. Thanks for setting the context: Datawrapper – the data vis tool I write articles like this for – is indeed for people who want to make a point with their charts and maps, often to a broad audience. I agree that people who have learned to read dual axis charts can benefit greatly from them (the same is true for rainbow color maps).
Financial Times journalist John Burn Murdoch changed my mind on dual axes charts – even for casual readers! – a bit over the last six years, too. Here's a dual axis chart he created for the FT: https://x.com/AlexSelbyB/status/1529039107732774913
The next article I write on dual axis charts will probably be a "What to consider when you do use them" one.
At first glance, sure, but without further context or supporting data I'm suspicious:
1. Why just the Daily Mail? Is that the only paper that matters in Britain, or just the one that happens to correlate?
2. I would expect public opinion to lag coverage in the paper if there were a causal relationship. This graph is over too great a period to really see that, but if the creator wants to convince me, they'd show it (e.g. with a lagged cross-correlation; see the sketch after this list).
3. I might expect the lag to differ when coverage is increasing vs. decreasing. Again, if I'm to believe this graph, more context would help.
4. No consideration of other factors that might lead to changes in public concern?
5. No consideration of factors that might lead to *both* an increase in coverage *and* an increase in concern?
I'm sure I could come up with 5 more reasons to doubt this graph if I thought for another 60 seconds...
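On point 2, here is roughly how one could test for that lag with a shifted correlation. This is only a sketch; the CSV file, column names, and monthly granularity are all hypothetical stand-ins for whatever data actually sits behind the chart.

```python
# Rough sketch: does public concern lag newspaper coverage?
# Correlate concern against coverage shifted by k months and
# see where the correlation peaks. All names here are hypothetical.
import pandas as pd

df = pd.read_csv("coverage_vs_concern.csv", parse_dates=["month"])
coverage = df["daily_mail_articles"]      # monthly article counts
concern = df["pct_public_concerned"]      # monthly survey percentage

lags = range(0, 25)  # test lags of 0..24 months
corr_by_lag = {k: concern.corr(coverage.shift(k)) for k in lags}

best_lag = max(corr_by_lag, key=corr_by_lag.get)
print(f"correlation peaks at a lag of {best_lag} months "
      f"(r = {corr_by_lag[best_lag]:.2f})")
```

A peak at a nonzero lag would at least be consistent with coverage leading opinion; a peak at lag 0 (or no clear peak) would leave the graph's implied story unsupported.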
The Economist is a fantastic benchmark when it comes to data visualisation. One thing to note is that they publish a lot of the underlying data and models behind their visualisations on their GitHub. If you know R it's a tremendous resource.
I generally find that a second Y axis creeping in is perhaps an indicator to stop and have a really deep think about what you are trying to achieve. You might try a 3D graph, for example, where x, y1, y2 becomes x, y, z, then spin and explore. However, you have to remember that y1 and y2 are both dependent on x (by definition), so when you move y2 to a separate dimension it is not independent from y1 (or is it?)
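For what it's worth, here is a minimal matplotlib sketch of both options on made-up data (the series and scales are invented, purely to show the twinx-vs-3D mechanics):

```python
# Minimal sketch: the same x, y1, y2 shown as a dual-axis chart
# and as a 3D line you can spin and explore. Data is made up.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older matplotlib)

x = np.linspace(0, 10, 200)
y1 = np.sin(x)             # first dependent series
y2 = 100 * np.exp(x / 5)   # second series on a very different scale

# Option 1: second Y axis via twinx()
fig, ax1 = plt.subplots()
ax1.plot(x, y1, color="tab:blue")
ax1.set_ylabel("y1", color="tab:blue")
ax2 = ax1.twinx()
ax2.plot(x, y2, color="tab:red")
ax2.set_ylabel("y2", color="tab:red")

# Option 2: promote y2 to its own spatial dimension
fig3d = plt.figure()
ax3d = fig3d.add_subplot(projection="3d")
ax3d.plot(x, y1, y2)
ax3d.set_xlabel("x"); ax3d.set_ylabel("y1"); ax3d.set_zlabel("y2")

plt.show()
```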
There are no hard and fast rules when it comes to spin doctoring via graphs, and as the old adage doesn't go: There are liars, damned liars and politicians.
The only one that's improved is the one from Brazil, to be honest. The rest is taste.
Besides, it's ok if the graph takes a bit to digest; otherwise you can just keep printing the same three graphs over and over, merely renaming the axes.
This is a pretty good article and for the most part, should be heeded. It's quite rare for the audience of a chart to exclusively be highly-numerate people (and these people, who are often inundated with data, are not immune from being misled by poorly-conceived charts). It's kind of strange that the top-voted comment points to "better" advice while also directly contradicting the article's main point ("dual axis charts can often be a great choice").
I mean, certainly you have the right to add some color but it comes off like you are saying to ignore the article entirely in favor of your alternatives.
Western water does need to be managed and there are lots of silly water rules that should be changed. But the Mountain West drying up is just a fantasy. For weather, economics, and governance it's probably the region with the brightest future over the next 30 years.
The problem is that average SMS security is higher than email, but email CAN be much more secure. So for mass-market accounts SMS makes a good login confirmation and improves security.
But if you've bothered to have somewhat secure email, it sure would be nice to use that instead, and not worry about the 50,000 retail and support staff at telcos who can grab your SMS account based on a convincing phone call.
So, please, I beg of you, login developers: offer email wherever you use SMS now.
I understand it’s a naive statement, but in order to log in to your email you would end up relying on some other sort of 2FA. And we’re back to square one, relying on SMS, because the UX of other authentication flows has irrecoverable flaws.
Exactly. You could use a trustworthy mail provider with a domain you own (registrar and DNS provider in two other accounts, probably), and then a second mail account for the 2FA for the other three accounts, but then what's the 2FA for the second email account?
Because San Francisco sales tax is 8.63%, so something that costs 1 dollar is really 1.0863. And I would like my 91.37 cents back when I give 2 dollars.
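For what it's worth, the arithmetic at an assumed 8.63% rate (a trivial sketch; a real register rounds to the nearest cent):

```python
# Tiny sketch of the change calculation at an assumed 8.63% sales tax rate.
price = 1.00
tax_rate = 0.0863
total = price * (1 + tax_rate)   # 1.0863
change = 2.00 - total            # 0.9137
print(f"total: ${total:.4f}, change from $2: {change * 100:.2f} cents")
# In practice the register rounds: $1.09 total, $0.91 back.
```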