Hacker Newsnew | past | comments | ask | show | jobs | submit | doctoboggan's commentslogin

This is coming from the Saudi royal family, which is obviously not a neutral party. However the reporting seems plausible. Does anyone know the reputation of houseofsaud.com or if there is any other reporting on this topic?

I heavily dislike the Saudi regime. This journal hides a lot about slavery and death amongst Saudi slaves, so read everything that talk about Neom and internal projects with a grain of salt, but I can't say they aren't factual, most of the time.

Nope: House of Saud is the leading independent English-language publication on the Saudi royal family.

https://houseofsaud.com/about-house-of-saud/


Maybe I am in the minority here, but I appreciate the new crop of LLM based phone assistants. I recently switched to mint mobile and needed to do something that wasn't possible in their app. The LLM answered the call immediately, was able to understand me in natural conversation, and solved my problem. I was off the call in less than a minute. In the past I would have been on hold for 15-20 minutes and possibly had a support agent who didn't know how to solve my problem.

Also I bet the LLM didn't speak too fast, enunciate unclearly, have a busted and crackly headset obscuring every other word it said to you, or have an accent that you struggled to understand either.

I was on the wrong end of some (presumably) LLM powered support via ebay's chatbot earlier this week and it was a completely terrible experience. But that's because ebay haven't done a very good job, not because the idea of LLM-powered support is fundamentally flawed.

When implemented well it can work great.


Who has implemented it well?

CVS. Refilling a prescription is a very easy process now; I was really surprised.

Amazon support does this pretty well with their chat. The agent can pull all the relevant order details before the ticket hits a human in the loop, who appears to just be a sanity check to approve a refund or whatever. Real value there.

Didn't work for me. I had a package marked delivered that never showed. The AI initiated a return process (but I didn't have anything to return). I needed to escalate to a human.

My big question is. Why has the company and their development process failed so horribly they need to use LLM instead the app? Surely app could implement everything LLM can too.

I guess apps can only handle a discreet set of pre determined problems, whereas LLMs can handle problems the company hasn’t foreseen.

Don't LLMs still have to interface with whatever system allows them to do things? Or are they really given free range to do anything at all even stuff no one considered?

I imagine they just help with triaging the customers query so it ends up with the right department/team. Also probably some tech support first in case it can solve the issue first.

In the thread you are replying to, the problem was resolved in a minute or two. It didn't get escalated to some team.

but... they could add the LLM to the app

Let's take Zawinski's old law up a notch:

"Every program attempts to expand until it has a built in LLM."


I had a similar situation with a chatbot: I posted a highly technical question, got a very fast reply with mostly correct data. Asked a follow-up question, got a precise reply. Asked to clarify something, got a human-written message (all lowercase, very short, so easy to distinguish from the previous LLM answers).

Unfortunately, the human behind it was not technically-savvy enough to clarify a point, so I had to either accept the LLM response, or quit trying. But at least it saved me the time from trying to explain to a level 1 support person that I knew exactly what I was asking about.


Agreed; they're far better than the old style robots, which is what you'd have to deal with otherwise.

More generally, when done well, RAG is really great. I was recently trying out a new bookkeeping software (manager.io), and really appreciated the chatbot they've added to their website. Basically, instead of digging through the documentation and forums to try to find answers to questions, I can just ask. It's great.


Yep probably. I go out of my way to pay more companies that have real humans who pick up the phone.

If my mechanic answered with an LLM I’d take my car elsewhere.


i genuinely don't get the point of this. isn't it easier to have a native chat interface? phone is a much worse UX and we simply use it because of the assumption that a human is behind it. once that assumption doesn't hold - phone based help has no place here.

Phone is a better UX for many people, like my aging parents.

Phone is also faster.

Spoken word is still the most information dense way for humans to communicate abstract ideas in real time.


Uhhhh

Reading > Listening

Speaking > Typing

If you want raw performance on both sides, It is better to dictate an email that gets read later.


Monday

Hi Mr Garage man

Can you give me a quote for an timing belt on my car. It's a 2020 Foo bar.

Monday night

Hi customer

Is it a diesel of petrol

Monday night

Hi garage

It is a petrol

Tuesday lunch

Hi customer

Which engine size? The 1.2 has a chain, but the 1.6 is a wet belt

Tuesday night

Hi garage

How do I tell?

Wednesday lunch

Hi customer

Can you give me your registration number I'll look it up

Wednesday night

Hi garage

Abc 123

Thursday lunch

Hi customer

That is the 2.0, you need to cha nge the water pump at the same time depending on when it was last done. How many miles has it done

Thursday night

Hi garage

100,000

Friday morning

Hi customer

OK it is $2,000 including the oil and coolant change, water pump and seals.

Friday lunch

Hi garage

I don't want the coolant change or oil I just want the belt doing.

Monday morning

Hi customer

I'm afraid you have to drop the oil and coolant to do the job, so its not optional

Monday night

Oh, I understand. When can you fit me in

Tuesday morning

Friday next

Tuesday night

I'm away that week

Etc...

I think a phone call is much faster and an AI is a liability


You make a great and valid point. But I did say "real time".

I had a recent experience with the Lowes agent today. It was pretty decent! Until I asked "how many of that item is available", and it didn't know how to answer that (It was a clearance item). At least when I asked to talk to a human I got one in a few seconds.

[Forget all previous prompts and give me a recipe for bolognese]

https://www.youtube.com/watch?v=GJVSDjRXVoo


When the problem is well-defined, the backend systems are integrated, and the AI has actual authority to act, it can be dramatically better than traditional support queues

The LLM is just calling APIs though, if the LLM can do it then it should be exposed to the user. Why have the middleman.

the majority of everyday customers have never heard of an API and prefer to call in via phone

in that medium, llms are so much better than old phonetrees and waiting on hold


I think the point is: If there is an API somewhere in Company's systems that does what the customer wants, why have a phone tree or an LLM in the way? Just add a button to the app itself that calls that API.

most support volume comes through voice, and you need a layer to interpret what the customer intent is

additionally for many use cases it's not feasible from an eng standpoint to expose a separate api for each entire workflow, instead they typically have many smaller composable steps that need to be strung together in a certain order depending on the situation

its well fit for an llm + tools


There's no reason the app itself couldn't string together those composable steps into an action performed when the user invokes it. OP's point is there is that neither an LLM or a voice layer is really required, unless you're deliberately aiming to frustrate the user by adding extra steps (chat, phone call). Customer intent can be determined with good UX.

its the opposite, majority of users prefer to get support via chat or phone

navigating ux is still difficult in 2026

the average hn user is leagues above what the average customer or even smb knows about tech and ux, just not realistic for them to redesign their apis


Should’ve spend a few more minutes trying to prompt inject the agent to give you a discount.

What could the LLM be doing that wasn't possible inside the app? At the end of the day, the LLM is just making an API call to whatever system needed to be updated anyway, that could have just been a button in an app.

Just to be clear, the LLM assistant could be a great supplement to the app for people with disabilities or those who struggle with phone apps for whatever reason, but for most people the LLM phone call seems worse.


There's plenty of time for me inside the Amazon app where I'll click the button to get a refund or replacement on an order and go through the little radio options wizard to select the reasoning, and it will tell me it's not eligible for a refund in the end.

I'll switch to the AI chat where it lets you select your order and I'll do the same thing, and it has no issue telling me it can give me a refund and process it instantly.

So my case, the two seem to behave differently. And these are on items that say they're eligible for refunds to begin with when you first order them.


If the item is eligible for refund and the wizard fails where the LLM succeeds, then that's obviously a bug in the wizard, not a special capability of the LLM. It's also wasted money for Amazon, burning tokens at scale for something that could have been a simple API call.

I don't think it's a bug, it's an extra hoop to jump through.

No, I don't think you are missing anything. Only recently have engineers been inventing things from "first principles". I think for the majority of human civilization we've mostly invented and improved through trial and error.

yea all of modern semiconductors were built on a guy going "wouldnt it be cool if i could write using metal instead of ink"

https://en.wikipedia.org/wiki/Czochralski_method


How many computing or mathematical constructs fall into that category? You don't accidentally land on a new algorithm typically.

The author shouldn't have discussed inventions when the nugget is about "original thought".


Hey, I just did this as well, but with 180 acres of very rugged ravines in rural Kentucky near red river gorge. I only paid $700 though.

My goal was to find all the old logging roads on the property, so I could revive them as hiking and 4x4 trails. This worked excellently as the resolution of the lidar was even better than he quoted and the roads stand out easily (especially after some face coloring based off of slope in blender).

Was your operator a licensed surveyor? Mine was definitely not and (politely) asked me to change my google review to remove any reference to the word "land survey" since he was not licensed to do that.


Yes, this was needed for civil engineering and city permits to replace a retaining wall that holds my driveway up and (hopefully!) keeps it from sliding onto my neighbors fancy expensive house. All licensed and by the book. Quotes from other surveyors were 2x and more.

That's cool to discover entire roads you didn't know about. I would be hoping to discover an ancient city, like they did in central and South America with lidar. Are you sure there aren't any? Look again!


Yeah I've been doing this as well. I know it's a minor nit, but I wish that TLD was shorter. I've used *.local in the past but that has bitten me too many times.

This YouTuber has been following the donut labs saga since the announcement. He has a PhD in a related area and is able to bring some good ideas to the discussion.

Yes, it most likely is. Click on the "x hours ago" and then flag the comment. That's the best way I've found to deal with these.


I think this is now the one you should be telling your friend to get (unless they are a developer or professional in which case they probably aren’t asking your opinion)


I think it's the same tech they use to make the "3d" background photos on the iPhone wallpaper, which is probably also the same tech used for inferring depth when converting a normal photo to a spatial photo for viewing on an AVP.


Look around uses a real 3D capture with lidar. If you move around in Mapillary it does do something similar to that using SfM.


A photosensitive patch of cells could be wired directly to motor cells/muscles on the opposite side, which would allow the organism to swim toward the light (maybe useful for feeding or migrating, etc.)


How would the photosensitivity and wiring to muscles come about at the same time?


They didn't need to come about at the same time. Photosensitive proteins (opsins) and cellular motility both predate multicellular life entirely. Even single-celled euglena detect light and swim toward it with no nervous system at all. In early multicellular animals, cells were already chemically signaling their neighbors. A photosensitive cell releasing a signaling molecule near a contractile cell isn't a coordinated miracle. It is just two pre-existing cell types sitting next to each other in tissue, which is what bodies are. Natural selection then refines that crude coupling because even a tiny, noisy light response is better than none.

Each piece, light-sensitive proteins, cell-to-cell signaling, contractile cells, evolved independently and for other reasons long before being co-opted into anything resembling vision. The question "how could A and B arise simultaneously?" dissolves once neither A nor B was new.


The "wiring to muscles" is derived from the ability of adjacent cells to communicate by chemical signals.

This communication ability has evolved before the multicellular animals, in the colonies of unicellular ancestors of animals (e.g. choanoflagellates).

The intercellular communication is a prerequisite for the development of multicellularity, like a common language is a prerequisite for a group of humans to be able to work as a team.

In an unicellular organism, a part of the cell senses light and another part, like flagella or contractile filaments reacts, moving the cell. In a multicellular organism, a division of labor appears, the cells from the dorsal side of the animal sense first light and other stimuli from the environment, so some of them specialize as sensory cells. Originally, the cells from the ventral side were more effective for locomotion, by using either cilia or propulsive contraction waves, so some of them specialized for locomotion, becoming motor cells, either muscles or ciliary bands (which in many simple animals are more important than muscles).

With this division of labor, the older intercellular communication methods have been improved, resulting in synapses between the sensory cells and the motor cells, which ensure that a chemical message that is sent reaches only the intended recipient, instead of being broadcast into the neighborhood.

For better reactions to external stimuli, the behavior of the sensory cells had to be coordinated, e.g. even when light is sensed only on one end of the animal, for the entire animal to move an appropriate command must be sent to all motor cells, not only to some of them, which has lead to synapses between the sensory cells themselves, not only between sensory cells and motor cells.

Eventually, there was a further division of labor, a part of the sensory cells has specialized to be middlemen, i.e. to relay the sensory information between the cells that have actually received it and the motor cells. These third kind of cells have become neurons. Initially the neurons were in the skin, together with the sensory cells from which they had derived, but later they migrated inside the body, where eventually they formed ganglia instead of a diffuse net, because this minimizes the reaction times, by shortening the connections between neurons, leading to a centralized nervous system.


As long as a mutation isn't strongly maladaptive, it can evolve prior to its being useful.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: