
I make (sigh) AI for a living, and arguably have been since before we started calling it AI.

Based on my own first-hand experience, if the first thing a company has to say about a product or feature is that it's powered by AI, that is a strong signal that it isn't actually very useful. If they had found it to be useful for reliably solving one or more real, clearly-identified problems, they would start by talking about that, because that sends a stronger signal to a higher-quality pool of potential customers.

The thing is, companies who have that kind of product are relatively rare, because getting to that point takes work. Lots of it. And it's often quite grueling work. The kind of work that's fundamentally unattractive to the swarms of ambitious, entrepreneurial-minded people looking to get rich starting their own business who drive most attempts at launching new products.



The hype gets pushed down from the C-suite because prospects are always asking "are you doing anything with ${latest}?" and the salesdroid has to answer "of course! we'll be showing a teaser in the next couple of months".

Then it gets pushed up from the bottom by engineers practicing Resume-Driven Development. Everybody perks up when a project using ${latest} gets mentioned in the CTO's office. Wouldn't it look cool to say I was a pioneer in ${latest}?

When it's being pushed from the top and the bottom, it's gonna happen.

Left out of the process is thoughtful/imaginative product design and innovation. Sometimes it happens but it's more of an accident in most cases.


I worked at Google for 9 years, and even up at the director level there was no way to avoid this.

You either contradicted it and got defunded, slowed it down to apply it appropriately and got removed from the project for not appearing ambitious enough, or went full speed on prematurely scaling an application of it and inevitably failed at scale.

I did founding work on the Google assistant and I was caught in this exact conundrum. There was no solution.


When otherwise smart people do seemingly dumb things, you have to ask if there is some rational explanation for the behavior. My take, having experienced everything you describe here, is that from upper management's position, each of these new shiny objects is a bet. Maybe it'll work out (networking, internet, cloud) or it'll go bust (push, blockchain, etc).

If it works out, the company gets to ride a new tech wave while avoiding obsolescence (see Yahoo, DEC, Sun, etc). If it doesn't pan out, the company writes off the investment and moves on to the next shiny thing.

From the leadership perspective, it actually makes sense to jump on the latest shiny thing. From the mid-level manager's perspective, it sucks to be the one who has to go make sense of it.


I spent time at Google and found this to be the case as well. I think the only "cure" for this is good upper management that isn't swayed by flavor-of-the-moment hype, plus a culture of being monomaniacally product-focused.

Places that are intensely product-focused aren't immune from frothy hype, but at least it's forced through the critical filter of "ok but what does this do for our product really", which is a vital part of separating the wheat from the chaff when it comes to new ideas.

My main beef with Google is that the company's culture is intensely not product-focused. The company is defined by its origin story of a groundbreaking technology that happened upon product-market fit, and it's pathologically unable to do the opposite: start with a clear-eyed product vision and work backwards to the constituent technologies.


Maybe a minor tangent, but I really enjoyed playing with Google Assistant when it first came out. Great novelty, especially asking it for jokes.


All aboard the Zeitgeist Express, next stop Adrenalineville!


> if the first thing a company has to say about a product or feature is that it's powered by AI, that is a strong signal that it isn't actually very useful.

Great take-away, and we know this is true because of other examples from the past. Remember when every product had to be made out of Blockchain, and startups led their marketing copy with "Blockchain-powered"? We're doing the same thing with AI.

Generative AI is a developer tool, not a product. Like the programming language you use, the fact that you are using this tool should not be relevant to users. If you have to mention AI to explain what your product does, you're probably doing it wrong. Some of these "AI startup" pitches sound ridiculous. "We use AI to... [X]" is like saying "We use Python to... [X]". Who cares? You're focusing on a detail of the solution before we've even agreed I have a problem.


Corollary: If a product marketed as AI is useful, that's a strong signal it's a logistic regression.



Even when I have a model that isn't logistic regression, there is always a logistic regression stage at the end for probability calibration.

I mean, what good is a prediction that is 50% accurate? If you are classifying documents for a recommendation model, an "up/down" classification is barely useful, but a probability-calibrated classification is golden. With no calibration you have an arXiv paper; with calibration you can build the classifier into a larger system that takes actions under uncertainty.
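
For concreteness, here is roughly what that final calibration stage looks like in scikit-learn. This is a generic sketch on synthetic data, not anyone's production pipeline; Platt scaling ("sigmoid" calibration) is literally a logistic regression fit on the base model's scores.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for real document features and up/down labels.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    base = GradientBoostingClassifier()                          # any scoring model
    clf = CalibratedClassifierCV(base, method="sigmoid", cv=5)   # logistic stage on top
    clf.fit(X_train, y_train)

    p = clf.predict_proba(X_test)[:, 1]   # calibrated P(relevant), not just a label
    recommend = p > 0.8                   # downstream system can act under uncertainty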

The generative paradigm holds progress back. You can ask ChatGPT to do anything and it will do it with 70-90% accuracy in all but the hardest cases. Screwing around with prompts can get you closer to the high end of that range, but if you want to do better than that you've got to define your problem well and go through a lot of the grindy work you had to do with symbolic A.I. and have always had to do with machine learning. (At the very least, you're going to need a large evaluation set to know how well your prompt-based solution works, and to know that it didn't get broken by a software update.)
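
At minimum that means something like the following toy evaluation harness; classify() here is a hypothetical stand-in for whatever prompt-plus-model call you actually make.

    import json

    def evaluate(eval_path, classify):
        """Accuracy of a prompt-based classifier over a held-out, labeled set."""
        correct = total = 0
        with open(eval_path) as f:
            for line in f:                       # one JSON record per line
                ex = json.loads(line)            # e.g. {"text": ..., "label": ...}
                if classify(ex["text"]) == ex["label"]:
                    correct += 1
                total += 1
        return correct / total

    # Re-run this on every prompt tweak and every model/software update; if the
    # score on a few thousand held-out examples drops, the update broke something.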

The image that comes to my mind, almost intrusively, is Mickey Mouse from the movie Fantasia where he shows various sins, laziness most of all

https://www.youtube.com/watch?v=VErKCq1IGIU

So many of these efforts show off terrible quality control. There is a site that has posted about 250 galleries (at a rate of 2 a day) of about 70 pornographic images apiece generated by A.I. At best the model generates highly detailed images, including the stitching on the seams of clothes, clothing with floral prints matching cherry blossom trees in the background, and sometimes crowds of people that really click thematically. Then you notice the girls with two belly buttons, and if you look long enough you'll see some with 7 belly buttons and realize the model doesn't really understand the difference between body parts and skin, so there is a nipple that looks like part of the bra rather than showing through the bra, etc.

Then there are the hideously distorted penises that are too long, too short, disembodied, duplicated, bifurcated, pointing in the wrong direction and would otherwise be nightmare fuel for anyone with castration anxiety.

If the wizard were in charge he'd be cleaning these up; I mean, looking at 150 images a day and culling the worst is less than an hour of work. But no, Mickey Mouse is in charge.

"Chat" in "ChatGPT" is a good indication of what is going on because it is brilliant at chat where it can lean on a conversation partner to provide meaning and guidance and where the ability to apologize for mistakes really seduces people, even if it doesn't change its wrong behavior. The trouble is trying to get it to perform "off the leash" at a task that matters is a matter of pushing a bubble around under a rug, that "chasing an asymptote" situation is itself seductive and one of the worst problems in technology development that entraps the most sophisticated teams, but put it together with unsophisticated people who don't think systematically and a system which already has superhuman powers of seduction (e.g. "chat" as opposed to problem solving) and you are cruising for a bruising.


*linear logistic regression

I mean, a typical LLM is also logistic regression, but it's not linear.
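
To spell out the sense in which that's true: the last step of a typical LLM is a linear map from the final hidden state to vocabulary logits, followed by a softmax, i.e. multinomial logistic regression over features learned by the nonlinear layers underneath. A toy numpy sketch (the shapes are illustrative, not any particular model's):

    import numpy as np

    d_model, vocab = 4096, 32000          # illustrative sizes only
    h = np.random.randn(d_model)          # final hidden state from the nonlinear
                                          # transformer stack (the non-linear part)
    W = np.random.randn(vocab, d_model)   # output projection ("unembedding")

    logits = W @ h                        # linear map: the logistic-regression part
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # softmax -> distribution over the next token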


>>If they had found it to be useful for reliably solving one or more real, clearly-identified problems, they would start by talking about that, because that sends a stronger signal to a higher-quality pool of potential customers.

THIS

The most basic thing about marketing is the table of

| Features | Functions | Benefits |

If $THING_01 is actually useful, any competent marketer or salesperson will talk about the BENEFITS, right up front

(And the good ones will also make info on the Functions and Features readily accessible for the curious or extra-diligent customers, without letting it obscure the benefits.)

The main thing about marketing & selling is touting BENEFITS TO THE CUSTOMER.

"$THING_01 will make you sexier!!"

Not how it makes you sexier.

If they are talking about the features of $THING_01 without also talking about the functions and benefits, they either have no benefits (and maybe even no function), or don't even understand their product.

Either way, do you really want to spend time and/or money on that company?


Double sigh. I am even guilty of building this, because, well, investors need to see AI in our product description. So what do we do? Slap an AI button everywhere and call OpenAI. Never mind that you could do the same thing by calling an existing Python library!


A direct, objective comparison with a well-crafted set of heuristics is the kryptonite of many a deep learning model.


"if the first thing a company has to say about a product or feature is that it's powered by AI, that is a strong signal that it isn't actually very useful."

I have a product team that is totally disconnected from the engineering team. Yeah, we use neural networks. They don't understand or know about neural networks, so they just call everything "AI", and it's very cringe. But it doesn't mean we don't have good products.


More importantly, you would want to keep the secret sauce secret. If I develop something actually near-magical, I'm not going to blast the underlying technology from the rooftops; that's my entire edge.


I've also been around a while and I agree completely, but I think articles such as this amount to more uncritical skepticism than substance.

The cat fairy was a cherry picked example.

Back in the early days of the internet I dealt a lot in retail. Adults would come in and say things like "What would I ever do with the internet?". Today feels a lot like those early days. Make of it what you will.


Back in the early days of the internet the market was flooded with ill-conceived Web-powered products. We don't remember any of them because they didn't last, but in the late 90s they were EVERYWHERE.

Similarly... it's not that I don't think machine learning is useful. I wouldn't have built my career on it if I didn't. But it is no more immune to Sturgeon's Law than the Information Superhighway was.


> Adults would come in and say things like "What would I ever do with the internet?". Today feels a lot like those early days. Make of it what you will.

That makes perfect sense, as online shopping wasn't what sold consumers on the Internet. It was email.

Online shopping came years later and early Amazon wasn't much more than an electronic mail-order catalog.

You wouldn't be able to sell the Internet to somebody on the promise that 'a lot of cool stuff is coming down the pipeline soon'. Same thing with consumer AI currently: lots of potential, no killer app.


Also importantly, there are no points for predicting a broad field, only points for being correct on specific things.

There were a zillion "virtual mall" products in the early days of the internet. Exactly zero of them convinced anyone to buy stuff online. Amazon ended up cracking the formula and made billions doing it.

The investors in the virtual malls lost their money. And people on the sidelines who predicted that we would all shop online are IMO only correct in the most meaningless and facile sense, because they had no specific predictions of what would work and what wouldn't, just a vague gesture at a futuristic buzzword.

It's easy to wave generally in the direction of an abstract concept and say "that's a big deal"; literally anyone can do it (and did, with crypto!). But it's specific predictions and hypotheses that separate those who know WTF they're talking about from LinkedIn thought-leadership pablum.

Likewise "AI is a big deal" in and of itself is not an astute statement or meaningful prediction. What about it? What specifically can be leveraged from this technology that would appeal to users? What specific problems can you solve? What properties of this technology are most useful and what roadblocks remain in the way of mass success?

pg coined the "middlebrow dismissal"; I'd like to suggest a corollary: the middlebrow hype. All hot air, without enough specificity to be worth anything.

"The information superhighway will be huge!" is the 90s equivalent of "AI is the future". Ok. How?


>> The cat fairy was a cherry picked example.

Every example of AI is a cherry picked example.


> if the first thing a company has to say about a product or feature is that it's powered by AI

They actually mean: we couldn't get it to work, so we added a black-box method to make it work, sometimes. And the examples on our website are all cherry-picked.


> and arguably have been since before we started calling it AI.

What was it like working in tech in the 1950s?


> If they had found it to be useful for reliably solving one or more real, clearly-identified problems, they would start by talking about that, because that sends a stronger signal

Respectfully, I don't think we're there yet. You and I are tired of the overused AI label, but for the wide public, as of today, it's still a stronger selling point. A solution to a specific problem can only be sold to people struggling with that particular problem; a product with a flashy page and AI capabilities can be sold to a wide tail of not-overly-tech-savvy enthusiasts. Makes for good bang for the buck, even if only in the short term.


> but for the wide public, as of today, it’s still a stronger selling point.

Is it really? People care that their phone takes great pictures each and every time; I don't think you need to add that the way you do this is by applying various machine learning algorithms.

Where A.I. falls down for me is in the failure cases: simply telling me that more training is required or that the training set was incomplete isn't good enough. You need to be able to tell me exactly why the computer made the mistake, and current A.I. products can't do that. That should be a strong indicator to shy away from A.I.-powered products in many industries.



