I suspect interacting with the real physical world and its realities, and realizing one can affect it, would be good no matter what career they'd pick later. Picking a career in software development has been a good choice for bright kids for several decades now. Taking the long-term view, though, the past is full of "good career choices for bright kids" that at some point no longer were.
>But I'm also a musician/artist and so I find some of these conversations odd. The problem with them I see is that they are oversimplified. To get better at drawing I often copy other works. Or I'll play a piece exactly as intended. Then I get more advanced and learn a style of someone I admire and appreciate. Then after that comes my own flair.
>So I ask, what is different between me doing it and a machine?
You are a human. If you practice art as a hobby you can feel pleasure doing it, or you can get informal value out of the practice (there is social value in showing and sharing hobbies and works with friends). One could try to formalize that value and make a profession out of it, earning a livelihood by selling the work.
When all that "machinery" for (learning to) produce artistic works sat inside human skulls and was difficult to train, the benefits accrued to the humans.
When it is a machine that can easily and cheaply automate ... the benefits accrue to the owner of the machine.
Now, I don't personally know if the genie can be put back into the bottle with any legal framework that wouldn't be monstrous in some other way. However, ethically it is quite clear to me there is a possibility the artists / illustrators are going to get a very bad deal out of this, which could be a moral wrong. This would be a reason to think up a legal and conceptual framework that tries to make it not ... as wrong as it could be.
It could be that we end up with human art as a prestige good (which it already is). That wouldn't be nice, because the power-law dynamics of popularity already benefit very few prestige artists. So it could get worse. But could we end up with a Wall-E world where there is no reason for anyone to learn to draw well? When a kid asks "draw me a rabbit", they won't ask any of the humans around; they'll ask the machine. The machine can produce a much prettier rabbit, immediately and tailored to their taste.
> One could try to formalize that value and make a profession out of it, earning a livelihood by selling the work.
I really think it is bad to frame this in terms of profits. In my case I am doing it purely for the pursuit of pleasure. I'd argue that these models allow more people to do so, as they lower the barrier to creating quality work. They can also be used as a great tool for practice, since you can generate ideas and then copy and/or edit them. They are also a great tool for quick iteration as you explore what certain ideas and concepts might look like. But I do not believe they are anywhere near ready to be a replacement for humans, especially since they are highly limited in their creativity (something not discussed).
Also, I want to add that these methods allow for new types of art that didn't exist before. There are artists working on and exploring this path, questioning how these tools can be used to modify things or to create things to be modified.
> When it is a machine that can easily and cheaply automate ... the benefits accrue to the owner of the machine.
In what way? If you are not paying for the system and it is freely handed over, why is it not "you" who is benefiting? I would understand this comment if the benefit were behind a paywall (e.g. DALL-E), but it isn't (e.g. Stable Diffusion).
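To make the "not behind a paywall" point concrete, here is a minimal sketch of generating an image locally with the Hugging Face diffusers library. The model ID, prompt, and file name are illustrative assumptions, not a recommendation; it assumes torch, diffusers, and a CUDA GPU are available.

    # Minimal sketch: local image generation with openly distributed weights.
    # Model ID, prompt, and output path below are illustrative choices only.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # openly released weights
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # No paywall or API key involved: everything runs on your own hardware.
    image = pipe("a watercolor painting of a rabbit").images[0]
    image.save("rabbit.png")

Whoever runs this is the "owner of the machine" in the sense above, which is part of why the free-weights case complicates the ownership argument.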
> Now, I don't personally know if the genie can be put back into the bottle with any legal framework that wouldn't be monstrous in some other way. However, ethically it is quite clear to me there is a possibility the artists / illustrators are going to get a very bad deal out of this, which could be a moral wrong. This would be a reason to think up a legal and conceptual framework that tries to make it not ... as wrong as it could be.
I guess part of the issue I have with this is that it sounds a lot like the arguments made when digital art itself was beginning. How do we differentiate "I hate it because it's new" from "I hate it because it is unethical"? This is honestly not so obvious to me, because one can think the former and say the latter. I am not going to shy away from the fact that transitional periods can be rough, but I'm not convinced this is going to kill artists' livelihoods, especially since a lot of effort is still needed to produce high-quality images.
I think this might be a point where people working on these machines (like me) and people who aren't (maybe you? idk) have different biases. All day I see a ton of crap come out of these models. But if you just paid attention to articles like this, or to Twitter, you'd think they are far more powerful than they are. These selected images are being created by expert artists too, who deeply understand aesthetics and the prompt engineering required to make high-quality work. Maybe we'll get there, and that would make the point moot, but I'd argue we're still pretty far away. I don't think this is going to kill off professional artists by any measure (especially because it exclusively affects the digital media domain and no other form), though it may make the barrier to entry slightly higher (but it also might help artists become more creative, since you can quickly explore ideas).
> The machine can produce a much prettier rabbit, immediately and tailored to their taste.
Actually, I'd argue the opposite. While this may be true for your average person making a drawing, I still have significant doubts that the machines will be able to create better results than professional artists within the next 5-10 years (plenty would bet against me, though, and that's fair). I also think there are issues with the diversity of these images, and that they can't be resolved simply by adding more data (the "scaling" paradigm). I think the models will rather reinforce that certain things look a specific way, especially since they do not understand a lot of basic concepts that we humans take for granted (a fundamental problem in AI: causal understanding). I don't think these issues are insurmountable, but they are a lot harder than many give credit for.
I do want to make it clear that while we disagree, I respect your position and think you bring up some good points. We also have fundamentally different vantage points, which probably makes it a good thing that we're actually discussing this together and not in our respective bubbles.
> But if I train my own neural network inside my skull using some artist's style, that's ok?
How well can the network inside your skull manipulate your limbs to reproduce good-quality work in some artist's style?
Our current frameworks for thinking about "fair use", "copyright", "trademark" and the like were thought into existence during an era when the options for a "network inside the skull" were to laboriously learn the skill of drawing, or to learn to use a machine like a printing press or photocopier that produces exact copies.
The availability of a machine that automates previously hand-made work much more cheaply, or is much more powerful, often requires rethinking those concepts.
If I copy a book by putting ink on paper letter by letter manually, that's OK; think of those monks in monasteries who do that all the time. And Mr Gutenberg's machine just makes that ink-on-paper process more efficient...
>How well can the network inside your skull manipulate your limbs to reproduce good-quality work in some artist's style?
An experienced artist can probably do this in a couple weeks, depending on how complex the style is.
>If I copy a book by putting ink on paper letter by letter manually, that's OK; think of those monks in monasteries who do that all the time.
According to copyright law, no, that's not okay. Copyright does not care about the method of reproduction; it only distinguishes between authorized and unauthorized reproduction. A copyist copying a book by hand without authorization is just as illegal as doing it with a photocopier. Likewise, if you decide to copy a music CD using a hex editor and lots of patience, at the end of the process you will end up with a perfectly illegal copy of the original CD.
So the question stands. Why is studying artwork with eyeballs and a brain and reproducing the style acceptable, but doing the same with software isn't?
On the other hand, extroverted people have a similar advantage in real life. I myself am quite happy for every lesson where I was pushed to practice people-facing skills (presentations, demonstrations, etc.). Even an introverted person can learn to talk about a topic knowledgeably if they know it -- which is often a valuable confidence-building experience to have. Despite the introversion, one can do it!
If the professor or lecturer administering the test is any good, empty rhetoric won't help much. If they are lazy, students can try to give "answers" without revealing what they don't know in written exams, too.
Early on, the Swedish king was elected at the Stones of Mora. The Holy Roman Emperor was nominally elected by prince-electors (who most of the time elected a Habsburg).
And even within a hereditary framework, there are other routes to retirement besides outright abdication. An elderly monarch could for all intents and purposes retire and let the crown prince (or, I suppose, in the current British succession order, crown princess) rule, appointing them as a co-ruler.
Coincidentally, just yesterday there was a big news article in the largest daily newspaper about the problems teachers have with uncooperative parents. In one memorable case, parents called the teacher to inform them that they had agreed with their kid that the kid is exempt from reading books. In another, during a disagreement with a teacher, a kid called their parent, put the parent on speaker, and the parent then proceeded to disparage the teacher in very coarse language in front of the rest of the class.
To piggyback on the OP's question, I for one think the part in parentheses is actually the most important:
>(Data cleaning and management should also be learned)
There are many students and graduates who either didn't want to do research in the first place or didn't get that research grant or position, and who are looking to get employed in the private sector with their degree. Many universities and colleges have now also retooled some of their statistics degrees into dedicated "data science" curricula, producing graduates who either know the basics of ML/DL or have the prerequisite background to learn quickly.
However, in my experience (I am extrapolating from my own past job searches), while "understanding the theory behind the algorithms" still counts for something, it counts for much less than one would think. Familiarity with the software technologies and practical implementation counts for much more. This includes not only "data management", a phrase which makes it sound like the data simply exists somewhere and only needs to be managed (not unlike a Kaggle competition), but the management of the whole data pipeline from generation/collection to analysis and communication of the results, deploying the software that implements it all, and so on. I suppose (I have never been on that end of the interview table) that given any two candidates to interview, it is very difficult to evaluate how deeply one understands the theory of some algorithm compared to the other if they both demonstrate some basic understanding (and what is the practical use of the possible difference in insight, anyway?). Likewise, I assume it is somewhat easier to gauge whether someone can start delivering results or contributing to ongoing work quickly if they have the relevant technical skills and/or domain knowledge.
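As a toy illustration of what the pipeline view means in practice, here is a deliberately simplistic Python sketch; all file names and column names are hypothetical, and the point is only that collection, cleaning, and communication dominate the work, not the algorithm itself.

    # Toy sketch of a data pipeline: collect -> clean -> analyze -> report.
    # File and column names are hypothetical, for illustration only.
    import pandas as pd

    # "Collection": in a real pipeline this might be an API call, a sensor
    # dump, or a database extract rather than a tidy CSV that conveniently
    # already exists.
    raw = pd.read_csv("measurements_raw.csv")

    # "Cleaning": the unglamorous majority of the work.
    clean = (
        raw.drop_duplicates()
           .dropna(subset=["sensor_id", "value"])
           .assign(value=lambda df: pd.to_numeric(df["value"], errors="coerce"))
           .dropna(subset=["value"])
    )

    # "Analysis": often embarrassingly simple compared to the theory one
    # studied, e.g. a group-wise summary.
    summary = clean.groupby("sensor_id")["value"].agg(["mean", "std", "count"])

    # "Communication": produce an artifact an end user can actually consume.
    summary.to_csv("sensor_summary.csv")
    print(summary.head())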
>Here's an illustrative thought experiment: imagine you have a time machine. Now pick a worker at random from some time and place in the past 5 centuries, and carry them forward by 30 years. Will they be able to earn a living?
I don't find Stross' thought experiment very convincing. One doesn't need to imagine time travel. A CS graduate from the 1990s who didn't time-travel directly to the 2020s, but got there the regular way and did nothing to update their skills during those years, would find themselves with the same difficulties in the job market as the time-traveler. (edit: Or worse.) That is why it is a good idea to continuously develop one's skills.
However, on much shorter timescales, say 5 years, one can make a reasonable guess about which kind of degree is more likely than another to result in gainful employment after graduation. A degree doesn't equip one for a job, but a useful one provides enough understanding of some field that one obtains, shall I say, at least a fighting chance to equip oneself for a job related to that field. And having a job often results in better chances to learn more and further equip oneself for one's next job.
Then, in a later part of the blog post, Stross argues that because the arts sector is very profitable to the UK today, it warrants continued government support for arts education. This strikes me as a bit inconsistent with his earlier claim that predicting the future need for skilled jobs from the current state is impossible.
A better argument would be that it is possible the arts are going to be more useful than STEM in the future, and thus it would be unwise to cease awarding arts degrees. That has a certain ring of truth to it. However, I came away with the impression that Stross favors keeping the number of arts degrees at the same level or increasing it; but if we take the "impossibility of prediction" seriously, there is no telling whether the current number of arts degrees awarded -- or a higher or lower number -- will look any better in 30 years either.
I am not sure educational allocation is best done by the government commanding how many artists and engineers are to be trained (or subsidized to be trained, or whatever). If that choice is left to each individual without government planners intervening, the individual at least has a better idea of their personal talents, wishes, and circumstances than either Rishi Sunak or Charlie Stross.
>Britain's descent from the powerhouse of world-changing ideas to one giant housing estate and Tesco superstore is almost complete.
In my limited foreigner's understanding, Britain was "the powerhouse of world-changing ideas" during a period with fuzzy limits that starts maybe around Newton and continues until maybe Turing -- but after WW2, what was left of the powerhouse was certainly eclipsed by the US, and after the 1980s, by Asian countries.
Maybe one can stretch it a bit further after WW2 if one thinks that popular-culture production like the Beatles is a worthwhile substitute. [1]
How was education in Britain organized during that era?
[1] I don't; AFAIK the income distribution in popular-culture production is very winner-takes-all and top-heavy, much worse than the software income distribution often denigrated as favoring the 10X developers. 10X coders may make much more than a marginal software developer (I am imagining a soon-to-graduate CS student, a would-be entry-level dev who has difficulties getting a first interview), but I believe it is easier for the marginal software developer to land a software job that pays the bills than for a marginal would-be musician to land a music job that pays the bills.
That's generally accurate. The heavy hitters, with a few exceptions like Faraday, were of noble ancestry or otherwise privileged. We really need to look to the Americans, to Horace Mann and John Dewey, who understood the wider non-functional ends of education: nation-building, supporting democracy, living a happy life, etc. That fed back into British society mainly post-war, driven by the need to rebuild a devastated society. Today, in our coddled complacency, that seems far away and 'optional'.
The physical world is not going anywhere.