AGI doesn’t need to be “solved” for humanoid robots to be valuable at scale. The role of teleoperation is often underestimated; in the near term, many humanoids will likely be operated remotely by people halfway across the world, performing deliveries and other tasks cheaply.
We've already pushed them into shittier and shittier jobs; what will it be next? There is zero overlap between the people pushing for these bots and the people who care about workers' rights.
Of course not; I just think the idea of little humanoid robots being controlled remotely, running around doing deliveries and such, would be amusing.
The potential applications of this tech in war are concerning, of course, but we don't have to allow that. And the more I've thought about it over time, the less terrifying I find the prospect of combat robots compared to the FPV drones we've been seeing used in Ukraine, or to being obliterated by an unseen, unheard drone flying kilometers up in the sky before you even know what's happening.
There are plenty of jobs where 24/7 operation would be beneficial.
Like a road paving crew, or a nighttime security guard: you can pay daytime wages to someone in another timezone.
This is a real risk. I know of someone who got phished with a fake number for Apple Support (the fake number was promoted and appeared at the top of the search results). Apparently they do this with banking phone numbers as well.
Spotify needs to embrace a user-centric payment model (where a portion of your subscription fee goes directly to the artists you listen to the most) rather than the current "pooled" pro-rata system, which massively favors large artists.
YouTube, for instance, splits 55% of its YouTube Premium subscription revenue with creators, who get paid based on their share of watch time. Your YouTube Premium fee is distributed to the creators you watch the most.
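To make the difference concrete, here's a minimal sketch in Python (toy numbers, and a hypothetical 55% payout share borrowed from the YouTube figure above; it's not either platform's actual formula) comparing a pooled pro-rata payout with a user-centric one:

```python
# Minimal sketch: pooled pro-rata vs. user-centric payouts.
# Assumptions: toy numbers, a 55% payout share, equal per-stream weighting.

def pro_rata(subscriptions, plays_by_user, payout_share=0.55):
    """Pool all subscription revenue, then split it by each artist's
    share of TOTAL plays across the whole platform."""
    pool = sum(subscriptions.values()) * payout_share
    total_plays = sum(sum(p.values()) for p in plays_by_user.values())
    payouts = {}
    for plays in plays_by_user.values():
        for artist, n in plays.items():
            payouts[artist] = payouts.get(artist, 0.0) + pool * n / total_plays
    return payouts

def user_centric(subscriptions, plays_by_user, payout_share=0.55):
    """Split each subscriber's own fee among only the artists THEY played."""
    payouts = {}
    for user, plays in plays_by_user.items():
        budget = subscriptions[user] * payout_share
        user_total = sum(plays.values())
        for artist, n in plays.items():
            payouts[artist] = payouts.get(artist, 0.0) + budget * n / user_total
    return payouts

# Two subscribers paying $10/month each: a heavy listener of a megastar,
# and a light listener of a small indie act.
subs = {"alice": 10.0, "bob": 10.0}
plays = {
    "alice": {"megastar": 1000},  # streams all day
    "bob": {"indie_act": 10},     # a handful of plays
}

print(pro_rata(subs, plays))      # indie_act gets ~$0.11 of the $11 pool
print(user_centric(subs, plays))  # indie_act gets all $5.50 of bob's share
```

Under pro-rata, the indie act's payout is diluted by the megastar fan's thousand streams; under a user-centric split, it receives the full payout share of its one listener's fee.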
University research grants that contain the word "mRNA" are currently being flagged and frozen, even though mRNA technology has been used in areas like cancer-vaccine research for years. Politicizing a technology is incredibly absurd and will have long-term repercussions for science and medicine.
I know of a professor at one university who had grants frozen after being flagged for "woke" gender discourse. His lab researches... (wait for it)... immunotherapy treatments for breast cancer in women.
I own a Tesla, and here's my take on the biggest software issue:
Normal consumers don't understand the difference between "Autopilot" and "FSD".
FSD will stop at intersections, lights, etc.; Autopilot is basically just fancy cruise control and should generally only be used on highways.
They're activated in the same way (FSD replaces Autopilot if you pay for the upgrade or the $99/month subscription), and again, for "normal" consumers it's not always entirely clear which one they're using.
A friend of mine rented a Tesla recently and was in for a surprise when the vehicle did not automatically stop at intersections on Autopilot. He said the previous one he rented had FSD enabled, and he didn't understand the difference.
IMO Tesla needs to phase out the 2019-era Autopilot entirely and give everyone some version of FSD (even if it's limited), or geofence Autopilot to highways only.
Why is that so, though? Because of false marketing to a degree that is criminal. Elon does have one excuse: Tesla would have gone bankrupt several times over if not for his deliberate, criminal lies. Does he actually care about the company? He has extracted orders of magnitude more value from Tesla than anyone in the history of any company. That is how much he cares about the company surviving.
Criminal. And too dum* to think of anything innovative to save the company.
Withholding safety-relevant features unless you pay a subscription sounds like something from dystopian fiction, not something that should be allowed in the real world.
In my experience, even most Tesla owners don't really seem to understand the difference between Autopilot and FSD.
However, even though Autopilot doesn't obey traffic control devices, it still DOES issue warnings when the driver may need to take over.
Most Tesla owners I've talked with are actually completely unaware of the v12 and v13 improvements to FSD, and generally have the car for reasons other than FSD. So, if anything, Tesla is actually quite behind on marketing FSD to regular folks, even those who already own a Tesla.
LinkedIn is probably the worst culprit. It has always been a wasteland of “corporate/professional slop”, except now the interface deliberately suggests AI-generated responses to posts. I genuinely cannot think of a worse “social network” than that hell hole.
“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”
Let's expound on this some more. There's a parallel between people feeling forced to use online dating (mostly owned by one corporate entity) despite hating it, and feeling forced to use LinkedIn when you're paycheck-unattached, or even just paycheck-curious.
> now the interface deliberately suggests AI-generated responses to posts
This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn is encouraging it. How does that happen? My best guess is that it drives up engagement numbers and lets some disinterested middle managers hit their internal targets.
This feature predates LLMs though, right? Funnily enough, I actually find it hilarious! In my mind, once they introduced it, it immediately became "a list of things NOT to reply with if you want to be polite", and I used it like that. With one exception: if I came across an update from someone who's a really good friend, I would unleash the full power of AI comments on them! We had amazing AI-generated comment threads with friends that looked goofy as hell.
That kind of suggested-reply tech dates back to around 2017, when Google added Smart Reply to Gmail. Not sure when LinkedIn added it, so you might be right, but the tech is much older than most people think.
I believe we will see integrated optical glucose sensors in a popular consumer wearable fairly soon.
I don't think they'll be as accurate as blood-based sensors, but they will be a game-changer for many people (pre-diabetics, people with gestational diabetes, etc.).
The next 2-3 years are going to be incredibly interesting.