It's good at writing/updating tedious test cases and fixtures when you're directing it more closely. But yes, it's not as great at coming up with what to test in the first place.
Yesterday I wanted to change a white background to transparent on some clip art. I’m still learning Affinity, so I asked Google Gemini Nano Banana PRO 2. The output looked OK at first, but the grey squares were a little off: they didn’t make a perfect grid. I opened it in mspaint and was able to erase the grey squares. It hadn’t changed the white background to transparent at all; it had just drawn an array of grey squares mimicking a transparency checkerboard, good enough only at first glance. I have no idea how these AI tools can make anything of use if left to their own devices.
If AI makes people more productive then labor is cheaper than it was pre-AI, even at pre-AI salaries, because you're getting more done at the same cost.
3. Your profit from this particular employee is z = y - x (where x is what the employee costs you and y is the value they produce).
So far so good. Let's assume z > 0, though of course that's not an easy claim to make for many roles, which are more like investments. But let's assume they're good investments and you're confident z is positive.
Now, if the same employee produces 2y but doesn't receive a raise, z just improved significantly: it more than doubled (2y - x > 2(y - x) whenever x > 0). So effectively, labour just became cheaper relative to the value of its output.
If it were that simple, layoffs would hurt profitability significantly.
Now what if you can't translate improved productivity into additional value? A simple example would be an agency with a fixed contract volume. If increases in y can't be realised, e.g. by finding more business, then the only way for the company to realise the gains is to reduce x, i.e. layoffs. z goes up right away, no business development required.
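The arithmetic in the thread above can be sketched as a toy model (all numbers are made up for illustration; x, y, z are the symbols from the comment, not anything from a real source):

```python
# Toy model of the profit arithmetic: x = what an employee costs,
# y = the value they produce, z = profit from that employee.

def profit(y: float, x: float) -> float:
    return y - x

x = 100_000            # assumed annual cost of the employee
y = 150_000            # assumed annual value produced
z = profit(y, x)       # baseline profit

# Productivity doubles (y -> 2y) with no raise (x unchanged):
z_new = profit(2 * y, x)

# z more than doubles, because the cost x did not scale with output.
# Algebraically: 2y - x > 2(y - x) whenever x > 0.
assert z_new > 2 * z

# The layoff path from the comment: if y is capped (fixed contract
# volume), the only remaining lever is reducing x, which also raises z.
print(z, z_new)
```

The point the toy numbers make concrete: the productivity gain lands entirely in z because x is fixed, which is why it can be realised either by growing y or, failing that, by cutting x.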
I think it's a defensive stance companies are taking. The economy is not great, so they're hitting the brakes on investments, increasing their runway, and shrinking to force the organisation to become more efficient. Once they're ready to invest again, they can always hire again. But I read layoffs-because-AI as "we don't know what to invest in right now, so we'll buy some time to figure that out".
All words in a thesaurus would generally also be in a dictionary? The difference between a thesaurus and a dictionary is what each tells you about a word.
This happened to me once; they just brought out someone (a supervisor?) who asked questions about what addresses I'd lived at, and other similar questions only I would likely know the answers to.
It does take longer than regular screening (most of the time was just spent waiting for the supervisor -- I'm not sure whether they were spending that time collecting some data first), and if that causes you to miss your flight, you miss your flight.
It seems plausible to me that $45 could be about a TSA employee's wage times how much longer this takes. In aggregate, this (in theory) lets them hire additional staff to make sure normal screening doesn't take longer due to existing staff being tied up in extra verifications.
It's not that they'd pay individual employees more, it's that they'd hire more workers to account for the fact that their existing workers are tied up doing extra verification.
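The "$45 ≈ wage × extra time" guess above can be sanity-checked with a back-of-envelope calculation (both figures below are hypothetical assumptions, not actual TSA data):

```python
# Back-of-envelope check of the guess that a $45 fee roughly covers
# the extra staff time an identity verification consumes.
# Both inputs are assumptions chosen for illustration.

hourly_wage = 22.50      # assumed fully loaded hourly cost of a screener
extra_minutes = 120      # assumed extra staff time per no-ID traveler
                         # (waiting plus supervisor questioning)

fee = hourly_wage * (extra_minutes / 60)
print(fee)  # 45.0
```

With those (made-up) inputs the fee works out to two hours of one screener's time, which is the shape of the claim: the fee funds hiring enough extra staff that normal lanes aren't slowed by verifications.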
I wasn't flying 25 years ago but I'm not sure what you mean, or how that's relevant actually. The point is just that it takes them more time to do the "extra screening" if you don't have your ID than the standard screening if you did have your ID.
1. They're not doing screening. The screening comes later. At this stage, they're attempting to identify someone. That has never been the job. The job is to prevent guns, knives, swollen batteries, or anything else that could be a safety threat during air travel.
2. Regardless, the reality is that they do identify travelers. Even so, the job has not changed. If you don't present sufficient identification, they will identify you through other mechanisms. The only thing the new dictate says is that they don't want this document, they want that document.
> That has never been the job. The job is to prevent guns, knives, swollen batteries, or anything else that could be a safety threat during air travel.
A job that, by their own internal testing, they succeed at well under 5% of the time (some of their audits showed that 98% of fake/test guns sent through TSA got past checkpoints).
I’m not going to take bitter advice from someone who either hasn’t used them in a long time, or is terribly bad at using them. Especially as it seems like you hate them so much.
I don’t particularly like them or dislike them, they’re just tools. But saying they never work for bug fixing is just ridiculous. Feels more like you just wanted an excuse to get on your soapbox.
It's not that they can't fix bugs at all, but I find that if I've already attempted to debug something and hit a wall, they're rarely able to help further.
Just focusing on the outputs we can observe, LLMs clearly seem to be able to "think" correctly on some small problems that feel generalized from examples they've been trained on (as opposed to pure regurgitation).
Objecting to this on some kind of philosophical grounds of "being able to generalize from existing patterns isn't the same as thinking" feels like a distinction without a difference. If LLMs were better at solving complex problems I would absolutely describe what they're doing as "thinking". They just aren't, in practice.
> Just focusing on the outputs we can observe, LLMs clearly seem to be able to "think" correctly on some small problems that feel generalized from examples they've been trained on (as opposed to pure regurgitation).
"Seem". "Feel". That's the anthropomorphisation at work again.
These chatbots are called Large Language Models for a reason. Language is mere text, not thought.
If their sellers could get away with calling them Large Thought Models, they would. They can't, because these chatbots do not think.
> "Seem". "Feel". That's the anthropomorphisation at work again.
Those are descriptions of my thoughts. So no, not anthropomorphisation, unless you think I'm a bot.
> These chatbots are called Large Language Models for a reason. Language is mere text, not thought. If their sellers could get away with calling them Large Thought Models, they would. They can't, because these chatbots do not think.
They use the term "thinking" all the time.
----
I'm more than willing to listen to an argument that what LLMs are doing should not be considered thought, but "it doesn't have 'thought' in the name" ain't it.
> Those are descriptions of my thoughts. So no, not anthropomorphisation
The result of anthropomorphisation, then. When we treat a machine as a machine, we have less need to understand it in terms of "seems" and "feels".
> They use the term "thinking" all the time.
I find they don't. E.g. ChatGPT:
Short answer? Not like you do.
Longer, honest version: I don’t think in the human sense—no consciousness, no inner voice, no feelings, no awareness. I don’t wake up with ideas or sit there wondering about stuff. What I do have is the ability to recognize patterns in language and use them to generate responses that look like thinking.
> Even this evidence of woodworking is largely unremarkable .... this find is most notable for its preservation.
This somewhat contradicts the subheading, no?
> The finding, along with the discovery of a 500,000-year-old hammer made of bone, indicates that our human ancestors were making tools even earlier than archaeologists thought.
That subheading is complete nonsense and I can't think of a single charitable reading of that sentence that in any way makes sense. Archaeologists have known that our ancestors have been making tools for over a million years since the Acheulean industry was conclusively dated in the 1850s. It took half a century for archaeologists to figure that out after William Smith invented stratigraphy. Scientists didn't even know what an isotope was yet.
The original paper's abstract is much more specific (ignore the Significance section, which is more editorializing):
> Here, we present the earliest handheld wooden tools, identified from secure contexts at the site of Marathousa 1, Greece, dated to ca. 430 ka (MIS12). [1]
Which is true. Before this the oldest handheld wooden tool with a secure context [2] was a thrusting spear from Germany dated ~400kYA [3]. The oldest evidence of woodworking is at least 1.5 million years old but we just don't have any surviving wooden tools from that period.
[2] This is a very important term of art in archaeology. It means that the artefact was excavated by a qualified team of archaeologists who painstakingly recorded every little detail of the excavation, so that the dating can be validated using several different methods (carbon dating only works up to about 60k years).