This is coming from the Saudi royal family, which is obviously not a neutral party. However, the reporting seems plausible. Does anyone know the reputation of houseofsaud.com, or whether there is any other reporting on this topic?
I strongly dislike the Saudi regime. This outlet hides a lot about slavery and deaths among slaves in Saudi Arabia, so read everything it publishes about Neom and internal projects with a grain of salt, but I can't say it isn't factual, most of the time.
Maybe I am in the minority here, but I appreciate the new crop of LLM based phone assistants. I recently switched to mint mobile and needed to do something that wasn't possible in their app. The LLM answered the call immediately, was able to understand me in natural conversation, and solved my problem. I was off the call in less than a minute. In the past I would have been on hold for 15-20 minutes and possibly had a support agent who didn't know how to solve my problem.
Also I bet the LLM didn't speak too fast, enunciate unclearly, have a busted and crackly headset obscuring every other word it said to you, or have an accent that you struggled to understand either.
I was on the wrong end of some (presumably) LLM-powered support via eBay's chatbot earlier this week, and it was a completely terrible experience. But that's because eBay hasn't done a very good job, not because the idea of LLM-powered support is fundamentally flawed.
Amazon support does this pretty well with their chat. The agent can pull all the relevant order details before the ticket hits a human in the loop, who appears to just be a sanity check to approve a refund or whatever. Real value there.
Didn't work for me. I had a package marked delivered that never showed. The AI initiated a return process (but I didn't have anything to return). I needed to escalate to a human.
My big question is: why have the company and its development process failed so badly that they need an LLM instead of the app? Surely the app could implement everything the LLM can.
Don't LLMs still have to interface with whatever system allows them to do things? Or are they really given free rein to do anything at all, even things no one considered?
I imagine they just help with triaging the customer's query so it ends up with the right department/team, plus probably some first-line tech support in case the bot can solve the issue itself.
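A toy sketch of what that triage layer might look like (all names here are illustrative, not anyone's real system; a real deployment would send the prompt to an LLM API and parse the label out of its reply):

    # Toy intent-triage layer (illustrative names, no real LLM call).
    DEPARTMENTS = ["billing", "shipping", "returns", "tech_support"]

    def build_triage_prompt(transcript: str) -> str:
        labels = ", ".join(DEPARTMENTS)
        return (
            "Classify the customer's request into exactly one of: "
            f"{labels}.\nRequest: {transcript}\nLabel:"
        )

    def route(label: str) -> str:
        # Fall back to a human queue if the model answers off-menu.
        return label if label in DEPARTMENTS else "human_agent"

    print(route("returns"))   # -> returns
    print(route("chitchat"))  # -> human_agent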
I had a similar situation with a chatbot: I posted a highly technical question, got a very fast reply with mostly correct data. Asked a follow-up question, got a precise reply. Asked to clarify something, got a human-written message (all lowercase, very short, so easy to distinguish from the previous LLM answers).
Unfortunately, the human behind it was not technically savvy enough to clarify a point, so I had to either accept the LLM response or quit trying. But at least it saved me the time of explaining to a level 1 support person that I knew exactly what I was asking about.
Agreed; they're far better than the old style robots, which is what you'd have to deal with otherwise.
More generally, when done well, RAG is really great. I was recently trying out a new bookkeeping software (manager.io), and really appreciated the chatbot they've added to their website. Basically, instead of digging through the documentation and forums to try to find answers to questions, I can just ask. It's great.
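For the curious, here's a minimal sketch of the retrieval half of a docs chatbot like that, assuming the sentence-transformers library and a small in-memory index (the snippets and model choice are placeholders; the retrieved passages would then be pasted into the LLM's prompt ahead of the user's question):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Placeholder doc snippets; a real system would chunk the docs site and forums.
    docs = [
        "To import bank statements, go to Settings > Import.",
        "Invoices can be emailed to customers from the Sales tab.",
        "Financial reports are generated under the Reports tab.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    def retrieve(question: str, k: int = 2) -> list[str]:
        # With normalized vectors, cosine similarity is just a dot product.
        q = model.encode([question], normalize_embeddings=True)[0]
        top = np.argsort(doc_vecs @ q)[::-1][:k]
        return [docs[i] for i in top]

    print(retrieve("How do I import my bank statement?"))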
I genuinely don't get the point of this. Isn't it easier to have a native chat interface? Phone is a much worse UX, and we only use it because of the assumption that a human is behind it. Once that assumption doesn't hold, phone-based help has no place here.
I had an experience with the Lowe's agent today. It was pretty decent! Until I asked "how many of that item are available", and it didn't know how to answer (it was a clearance item). At least when I asked to talk to a human I got one in a few seconds.
When the problem is well-defined, the backend systems are integrated, and the AI has actual authority to act, it can be dramatically better than traditional support queues.
I think the point is: If there is an API somewhere in Company's systems that does what the customer wants, why have a phone tree or an LLM in the way? Just add a button to the app itself that calls that API.
Most support volume comes through voice, and you need a layer to interpret the customer's intent.
Additionally, for many use cases it's not feasible from an engineering standpoint to expose a separate API for each entire workflow; instead, there are typically many smaller composable steps that need to be strung together in a certain order depending on the situation.
There's no reason the app itself couldn't string together those composable steps into an action performed when the user invokes it. OP's point is that neither an LLM nor a voice layer is really required, unless you're deliberately aiming to frustrate the user by adding extra steps (chat, phone call). Customer intent can be determined with good UX.
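To make that concrete, a hedged sketch (every function and field name is invented for illustration): the same small backend steps an LLM agent would invoke as "tools" can be chained by ordinary app code behind a single button.

    # Hypothetical composable backend steps; names invented for illustration.
    ORDERS = {"A123": {"id": "A123", "days_since_delivery": 5, "refunded": False}}

    def lookup_order(order_id):
        return ORDERS[order_id]

    def check_refund_eligibility(order):
        # The same policy rule a chat agent would apply via a "tool" call.
        return not order["refunded"] and order["days_since_delivery"] <= 30

    def issue_refund(order):
        order["refunded"] = True
        return f"refund issued for {order['id']}"

    def refund_button_handler(order_id):
        # What a plain "Refund this order" button could do directly,
        # stringing the steps together in a fixed order with no LLM in between.
        order = lookup_order(order_id)
        if not check_refund_eligibility(order):
            return "not eligible; escalate to support"
        return issue_refund(order)

    print(refund_button_handler("A123"))  # -> refund issued for A123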
What could the LLM be doing that wasn't possible inside the app? At the end of the day, the LLM is just making an API call to whatever system needed to be updated anyway; that could have just been a button in the app.
Just to be clear, the LLM assistant could be a great supplement to the app for people with disabilities or those who struggle with phone apps for whatever reason, but for most people the LLM phone call seems worse.
There are plenty of times inside the Amazon app where I'll click the button to get a refund or replacement on an order, go through the little radio-options wizard to select the reasoning, and be told at the end that it's not eligible for a refund.
I'll switch to the AI chat where it lets you select your order and I'll do the same thing, and it has no issue telling me it can give me a refund and process it instantly.
So in my case, the two seem to behave differently. And these are items that say they're eligible for refunds to begin with when you first order them.
If the item is eligible for refund and the wizard fails where the LLM succeeds, then that's obviously a bug in the wizard, not a special capability of the LLM. It's also wasted money for Amazon, burning tokens at scale for something that could have been a simple API call.
No, I don't think you are missing anything. Only recently have engineers been inventing things from "first principles". I think for the majority of human civilization we've mostly invented and improved through trial and error.
Hey, I just did this as well, but with 180 acres of very rugged ravines in rural Kentucky near Red River Gorge. I only paid $700 though.
My goal was to find all the old logging roads on the property so I could revive them as hiking and 4x4 trails. This worked excellently: the resolution of the lidar was even better than he quoted, and the roads stand out easily (especially after some face coloring based on slope in Blender).
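For anyone who wants to try the slope-shading trick outside Blender, here's a minimal sketch with numpy on a gridded DEM exported from the lidar (the file name and grid spacing are assumptions):

    import numpy as np
    import matplotlib.pyplot as plt

    # Gridded digital elevation model: each cell holds elevation in meters.
    dem = np.load("dem.npy")   # placeholder file name
    cell_size = 1.0            # meters per grid cell (assumption)

    # Elevation gradients give rise-over-run; convert to slope in degrees.
    dzdy, dzdx = np.gradient(dem, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

    # Old roadbeds show up as long, uniform low-slope ribbons cut into steep terrain.
    plt.imsave("slope.png", slope_deg, cmap="magma")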
Was your operator a licensed surveyor? Mine was definitely not, and he (politely) asked me to change my Google review to remove any reference to the words "land survey", since he was not licensed to do that.
Yes, this was needed for civil engineering and city permits to replace a retaining wall that holds my driveway up and (hopefully!) keeps it from sliding onto my neighbor's fancy, expensive house. All licensed and by the book. Quotes from other surveyors were 2x and more.
That's cool, discovering entire roads you didn't know about. I would be hoping to discover an ancient city, like they did in Central and South America with lidar. Are you sure there aren't any? Look again!
Yeah I've been doing this as well. I know it's a minor nit, but I wish that TLD was shorter. I've used *.local in the past but that has bitten me too many times.
This YouTuber has been following the Donut Labs saga since the announcement. He has a PhD in a related area and is able to bring some good ideas to the discussion.
I think this is now the one you should be telling your friend to get (unless they are a developer or professional, in which case they probably aren't asking your opinion).
I think it's the same tech they use to make the "3d" background photos on the iPhone wallpaper, which is probably also the same tech used for inferring depth when converting a normal photo to a spatial photo for viewing on an AVP.
A photosensitive patch of cells could be wired directly to motor cells/muscles on the opposite side, which would allow the organism to swim toward the light (maybe useful for feeding or migrating, etc.)
They didn't need to come about at the same time. Photosensitive proteins (opsins) and cellular motility both predate multicellular life entirely. Even single-celled euglena detect light and swim toward it with no nervous system at all.
In early multicellular animals, cells were already chemically signaling their neighbors. A photosensitive cell releasing a signaling molecule near a contractile cell isn't a coordinated miracle. It is just two pre-existing cell types sitting next to each other in tissue, which is what bodies are. Natural selection then refines that crude coupling because even a tiny, noisy light response is better than none.
Each piece (light-sensitive proteins, cell-to-cell signaling, contractile cells) evolved independently and for other reasons long before being co-opted into anything resembling vision. The question "how could A and B arise simultaneously?" dissolves once neither A nor B was new.
The "wiring to muscles" is derived from the ability of adjacent cells to communicate by chemical signals.
This communication ability evolved before multicellular animals, in the colonies of the unicellular ancestors of animals (e.g. choanoflagellates).
Intercellular communication is a prerequisite for the development of multicellularity, just as a common language is a prerequisite for a group of humans to be able to work as a team.
In a unicellular organism, one part of the cell senses light and another part, such as flagella or contractile filaments, reacts by moving the cell. In a multicellular organism, a division of labor appears. The cells on the dorsal side of the animal are the first to sense light and other stimuli from the environment, so some of them specialize as sensory cells. The cells on the ventral side were originally more effective for locomotion, using either cilia or propulsive contraction waves, so some of them specialized for locomotion, becoming motor cells: either muscles or ciliary bands (which in many simple animals are more important than muscles).
With this division of labor, the older intercellular communication methods were improved, resulting in synapses between the sensory cells and the motor cells, which ensure that a chemical message reaches only its intended recipient instead of being broadcast into the neighborhood.
For better reactions to external stimuli, the behavior of the sensory cells had to be coordinated: even when light is sensed at only one end of the animal, an appropriate command must be sent to all motor cells, not only some of them, for the entire animal to move. This led to synapses between the sensory cells themselves, not only between sensory cells and motor cells.
Eventually there was a further division of labor: some of the sensory cells specialized as middlemen, relaying sensory information between the cells that actually received it and the motor cells. This third kind of cell became the neuron. Initially the neurons were in the skin, together with the sensory cells from which they derived, but later they migrated inside the body, where they eventually formed ganglia instead of a diffuse net, because shortening the connections between neurons minimizes reaction times, leading to a centralized nervous system.