It has the A19 Pro, which adds matmul acceleration to its GPU, the equivalent of Nvidia's Tensor Cores. This would make future Macs extremely viable for local LLMs. Currently, Macs have high memory bandwidth and high VRAM capacity but low prompt processing speed: give them a large context and it will take forever before the first token is generated.
If the M5 generation gets this GPU upgrade, which I don't see why not, then the era of viable local LLM inferencing is upon us.
That's the most exciting thing from this Apple event, in my opinion.
PS. I also like the idea of the ultra-thin iPhone Air, the 2x better noise cancellation and live translation of the AirPods Pro 3, the high blood pressure detection of the new Watch, and the bold, sexy orange color of the iPhone 17 Pro. Overall, this is as good a round of incremental updates as Apple's ecosystem has seen in a while.
It is almost strange, since iPhones were only available in ugly drab colors for several generations. And the Pro models in particular were previously never available in a decent color.
If this was true, wouldn't there be a market for a ruggedized version that has the toughness of a case as shipped from the factory? It's a little silly for Apple to shave every possible half-millimeter from the design and then have 99% of people add back the thickness, plus a lot extra, by adding a case. Why not have a factory-ruggedized version that isn't as thick as adding that case but is just as rugged?
Considering the price and resale value of iPhones, I would add a case even if they ruggedized it.
My current (Android) phone is from 2020 and I have bought three cases for it because the previous ones got wear and tear. The phone inside still looks brand new.
But yeah, the trend of ultra-thin phones is silly.
That opens a new business for them: selling $60 cases that are worth $2 in materials, with a wide palette to match people's tastes, which makes them even more appealing to buy.
Ask workers of cell phone stores and you’ll find that figure is way off. Not everyone wants a case. Having a case significantly changes the feeling of the device in hand.
The 15s and 16s both had titanium bodies which (as I recall, at least) don't take on colour as well when they're anodized, so that could be the cause of the drab colourways.
edit: It was only the Pros and up which had titanium bodies. The 17s are all aluminum.
Anodizing titanium creates an oxide layer, the thickness of which varies with the voltage used. The thickness of that oxide layer determines which wavelengths of light it reflects through thin-film interference [0]. In practical terms, your choices of color are pretty limited [1].
I'm not a chemist, but I looked into this years back when I was wondering why everything titanium is offered in the same couple of colors. Personally, I like the plain gray.
I don't get why Apple doesn't do consistent colors. I loved the blue of my iPhone 12 Pro, but I can't even get that anymore... I would have upgraded a few generations back if they had kept consistent colors.
And I have no idea what color my phone was when I got it. It has been inside an OtterBox case since the hour I first had it. For me, the color of a cellphone is about as relevant as the color of a motherboard: it will look cool for, at most, a few minutes before it is forever locked inside a case.
> I don't get why Apple doesn't do consistent colors
The materials used are a factor: the last few iPhones were built with aluminum (iPhone 16), titanium (iPhone 16 Pro), and stainless steel (iPhone 13 Pro).
Not all colors work with all materials; my understanding is that titanium is particularly bad for bright colors. The colors for the iPhone Pro models have been pretty drab, though not this year.
Which is a very powerful feature for anyone who likes security or finding bugs in their code. Or other people's code. Even if you didn't really want to find them.
It certainly took them a while to introduce MTE! Pixel 8 came out in 2023. I wonder how it compares against hardened_malloc with 48-bit address space and 33-bit ASLR in Graphene. Apple's security team has reported that MIE could break all "known" exploit chains, but so does hardened_malloc. Hard to tell right now which one is best (most def MIE) but everything else included in Graphene is probably making the point moot anyway.
In the past few weeks the oximeter feature was enabled by a firmware update on the Series 10. Measurements are done on the watch; results are only reported on the phone.
As of September 9, 2025, hypertension notifications are currently under FDA review and expected to be cleared this month, with availability on Apple Watch Series 9 and later and Apple Watch Ultra 2 and later. The feature is not intended for use by people under 22 years old, those who have been previously diagnosed with hypertension, or pregnant persons.
I believe this is for the fringe case where you have been diagnosed with hypertension, but your Apple Watch doesn't tell you that you're at risk of hypertension, so you may decide not to take your drugs since your watch told you all clear. This could trigger lawsuits if complications set in after you decide not to take your drugs because of the "lack of alarm".
Then you get a new fringe case: you are not yet diagnosed with hypertension, but you are aware that your Apple Watch has that functionality, so you decide you don't need to be diagnosed.
The point of the detection features is to notify people who are not diagnosed with the condition so they go to the doctor and possibly get diagnosed with it.
It's not useful for people already diagnosed because they already know they have it, so the notification is just telling them something they already know.
Going to be interesting comparing the series 10 blood pressure sensing against my Hilo (formerly Aktiia) band on the other wrist. Although without calibration against a cuff, I'm not super convinced the Apple Watch will give reliable information.
The color lineup reminds me of the au MEDIA SKIN phones (au is a Japanese carrier) circa 2007. Maybe it's because I had one back in the day, but I can't help but think they took some influence.
Wow, thanks for sharing the name, these are really good! I don't know why I was surprised to realize that great designers have made fantastic products even in the past...
If you like these, check out the INFOBAR phones from a few years prior. https://platform.theverge.com/wp-content/uploads/sites/2/cho... People like the multi-colored one, but I've always been partial to the green. I believe there's been a few newer homages to these over the years.
I've always been a bit confused about when to run models on the GPU vs. the Neural Engine. As best I can tell, the GPU is simpler to use as a developer, especially when shipping a cross-platform app, but an optimized Neural Engine model can run at lower power.
With the addition of NPU-style matmul units to the GPU, this story gets even more confusing...
In reality you don't have much of a choice. Most of the APIs Apple exposes for running neural nets don't let you pick. Instead, some Apple magic in one of their frameworks decides where it's going to host your network. At least from what I've read, these frameworks will usually distribute your network over all available matmul compute, starting on the Neural Engine (assuming your specific network is compatible) and spilling onto the GPU as needed.
But there isn’t a trivial way to specifically target the neural engine.
You're right, there is no way to specifically target the Neural Engine. You have to use it via CoreML, which abstracts away the execution.
If you use Metal / GPU compute shaders it's going to run exclusively on GPU. Some inference libraries like TensorFlow/LiteRT with backend = .gpu use this.
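For what it's worth, the closest thing to control you get is CoreML's compute-units setting, which constrains which units are eligible rather than forcing one. A minimal Swift sketch, where `MyModel` is a stand-in for whatever class Xcode generates from your .mlpackage:

```swift
import CoreML

// Constrain CoreML to CPU + Neural Engine; other options are .all,
// .cpuAndGPU, and .cpuOnly. This is a constraint, not a guarantee:
// CoreML still decides at runtime where each layer actually executes.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

// "MyModel" stands in for the class generated from your compiled model.
do {
    let model = try MyModel(configuration: config)
    _ = model // run predictions as usual
} catch {
    print("Failed to load model: \(error)")
}
```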
Competing at the high end of the budget PC laptop market would increase market share, but more importantly it'd be an easy recommendation to make when people are looking at lower-priced PC laptops. Rumours were $699.
That seems incredibly prescient for accounts created before even GPT-1. Obviously broad data scraping existed before then, but even amongst this crowd I find it hard to believe that’s the real motivator.
Yes, they do. It's called the Neural Engine, i.e. Apple's NPU. It isn't being used for local LLMs on Macs because it's optimized for power efficiency while running much smaller AI models.
Meanwhile, the GPU is powerful enough for LLMs but has been lacking matrix multiplication acceleration. This changes that.
From a compute perspective, GPUs are mostly about fast vector arithmetic, with which you can implement decently fast matrix multiplication. But starting with NVIDIA's Volta architecture at the end of 2017, GPUs have been gaining dedicated hardware units for matrix multiplication. The main purpose of augmenting GPU architectures with matrix multiplication hardware is for machine learning. They aren't directly useful for 3D graphics rendering, but their inclusion in consumer GPUs has been justified by adding ML-based post-processing and upscaling like NVIDIA's various iterations of DLSS.
I wish they would offer the 17 pro in some lighter colors (like the new sage green for the regular 17). Not everyone wants bold, and the color selection for pro is always so limited. They don't even have white with this generation, just silver.
The first SoC including Neural Engine was the A11 Bionic, used in iPhone 8, 8 Plus and iPhone X, introduced in 2017. Since then, every Apple A-series SoC has included a Neural Engine.
The Neural Engine is its own block, and it is not used for local LLMs on Macs. It's optimized for power efficiency while running small models; it's not good for LARGE language models.
This change is strictly about adding matmul acceleration to each GPU core, and the GPU is what's actually used for LLMs.
The NPU is still there. This adds matmul acceleration directly into each GPU core. It takes roughly 10% more transistors to add these accelerators to the GPU, so it's a significant investment for Apple.
> If the M5 generation gets this GPU upgrade, which I don't see why not, then the era of viable local LLM inferencing is upon us.
I don't think local LLMs will ever be a thing except for very specific use cases.
Servers will always have way more compute power than edge nodes. As server power increases, people will expect more and more from LLMs, and edge-node compute will stay irrelevant since its relative power will stay the same.
Local LLMs would be useful for low-latency local language processing and home control, assuming they ever become fast enough that the 500ms to 1s network latency becomes a dominant factor in having a fluid conversation with a voice assistant. Right now the pauses are unbearable for anything but one-way commands ("Siri, do something!", and 3 seconds later it starts doing the thing; that works, but it wouldn't work if Siri needed to ask follow-up questions). This is even more important if we consider low-latency gaming situations.
Mobile applications are also relevant. An LLM in your car could be used for local intelligence. I'm pretty sure self-driving cars use some amount of local AI already (although obviously not LLMs, and I don't really know how much of their processing is local vs. done on a server somewhere).
If models stop advancing at a fast clip, hardware will eventually become fast and cheap enough that running models locally isn't something we think of as a nonsensical luxury, in the same way that we don't think rendering graphics locally is a luxury even though remote rendering is possible.
Network latency in most situations is not 500ms. The latency from New York to California is under 70ms, and if you add in some transmission time you're still under 200ms. And that's ignoring that an NYC request will probably go only to VA (sub-15ms).
Even over LTE you're looking at under 120ms coast to coast.
> Servers will always have way more compute power than edge nodes
This doesn't seem right to me.
You take all the memory and CPU cycles of all the clients connected to a typical online service, compared to the memory and CPU in the datacenter serving it? The vast majority of compute involved in delivering that experience is on the client. And there's probably vast amounts of untapped compute available on that client - most websites only peg the client CPU by accident because they triggered an infinite loop in an ad bidding war; imagine what they could do if they actually used that compute power on purpose.
But even doing fairly trivial stuff, a typical browser tab is using hundreds of megs of memory and an appreciable percentage of the CPU of the machine it's loaded on, for the duration of the time it's being interacted with. Meanwhile, serving that content out to the browser took milliseconds, and was done at the same time as the server was handling thousands of other requests.
Edge compute scales with the amount of users who are using your service: each of them brings along their own hardware. Server compute has to scale at your expense.
Now, LLMs bring their special needs - large models that need to be loaded into vast fast memory... there are reasons to bring the compute to the model. But it's definitely not trivially the case that there's more compute in servers than clients.
The sum of all edge nodes exceeds the power in the datacenter, but the peak power provided to you from the datacenter significantly exceeds your edge node's capabilities.
A single datacenter machine with state of the art GPUs serving LLM inference can be drawing in the tens of kilowatts, and you borrow a sizable portion for a moment when you run a prompt on the heavier models.
A phone that has to count individual watts, or a laptop that peaks at double-digit sustained draw, isn't remotely comparable, and the gap isn't one or two hardware features.
I adore this machinery. There's a lot of money riding on the idea that interest in AI/ML will result in the value being in owning a bunch of big central metal, like the cloud era produced, but I'm not so sure.
I'm sure the people placing multibillion dollar bets have done their research, but the trends I see are AI getting more efficient and hardware getting more powerful, so as time goes on, it'll be more and more viable to run AI locally.
Even with token consumption increasing as AI abilities increase, there will be a point where AI output is good enough for most people.
Granted, people are very willing to hand over their data and often money to rent a software licence from the big players, but if they're all charging subscription fees where a local LLM costs nothing, that might cause a few sleepless nights for a few execs.
We could potentially see one-time-purchase model checkpoints, where users pay to get a particular version for offline use and future development is gated behind paying again, but certainly the issue of "some level of AI is good enough for most users" might hurt the infinite-growth dreams of VCs.
TTS would be an interesting case study. It hasn't really been in the limelight, so it could serve as a leading indicator for what will happen when attention to text generation inevitably wanes.
I use Read Aloud across a few browser platforms, because sometimes I don't care to read an article I have some passing interest in.
The landscape is a mess:
it's not really bandwidth-efficient to transmit audio, on one count; local frameworks like Piper perform well in a lot of cases; there are paid APIs from the big players; at least one player has incorporated API-powered neural TTS and packaged it into their browser, presumably ad-supported or something; yet another has incorporated it into their OS already (though it defaults to Speak & Spell quality, for god knows why). I'm not willing to pay $0.20 per page though, after experimenting, especially when the free/private solution is good enough.
IMO the benefit of a local LLM on a smartphone isn't necessarily compute power/speed - it's reliability without a reliance on connectivity, it can offer privacy guarantees, and assuming the silicon cost is marginal, could mean you can offer permanent LLM capabilities without needing to offer some sort of cloud subscription.
If the future is AI, then a future where every computation has to pass through one of a handful of multinational corporations with GPU farms... is something to be wary of. Local LLMs are a great idea for smaller tasks.
Sure, that's why local LLMs aren't popular or mass market as of September 2025.
But cloud models will hit diminishing returns, local hardware will get drastically faster, and techniques for running inference efficiently will be worked out further. At some point, local LLMs will have their day.
"Servers are more powerful" isn't a super strong point. Why aren't all PC gamers rendering games on servers if raw power was all that mattered? Why do workstation PCs even exist?
Society is already giving pushback to AI being pushed on them everywhere; see the rise of the word "clanker". We're seeing mental health issues pop up. We're all tired of AI slop content and engagement bait. Even the developers like us discussing it at the bleeding edge go round in circles with the same talking points reflexively. I don't see it as a given that there's public demand for even more AI, "if only it were more powerful on a server".
You make a good point, but you're still not refuting the original argument. The demand for high-power AI still exists, the products that Apple sells today do not even come close to meaningfully replacing that demand. If you own an iPhone, you're probably still using ChatGPT.
Speaking to your PC gaming analogy, there are render farms for graphics - they're just used for CGI and non-realtime use cases. What there isn't a huge demand for is consumer-grade hardware at datacenter prices. Apple found this out the hard way shipping Xserve prematurely.
> Speaking to your PC gaming analogy, there are render farms for graphics - they're just used for CGI and non-realtime use cases. What there isn't a huge demand for is consumer-grade hardware at datacenter prices.
Right, and that's despite the datacenter hardware being far more powerful and for most people cheaper to use per hour than the TCO of owning your own gaming rig. People still want to own their computer and want to eliminate network connectivity and latency being a factor even when it's generally a worse value prop. You don't see any potential parallels here with local vs hosted AI?
Local models on consumer grade hardware far inferior to buildings full of GPUs can already competently do tool calling. They can already generate tok/sec far beyond reading speed. The hardware isn't serving 100s of requests in parallel. Again, it just doesn't seem far fetched to think that the public will sway away from paying for more subscription services for something that can basically run on what they already own. Hosted frontier models won't go away, they _are_ better at most things, but can all of these companies sustain themselves as businesses if they can't keep encroaching into new areas to seek rent? For the average ChatGPT user, local Apple Intelligence and Gemma 3n basically already have the skills and smarts required, they just need more VRAM, and access to RAG'd world knowledge and access to the network to keep up.
> The demand for high-power AI still exists, the products that Apple sells today do not even come close to meaningfully replacing that demand.
Correct, though to me it seems that this comes at the price of narrowing the target audience (i.e. devs and very high-demanding analysis + production work).
For almost everything else people just open a bookmarked ChatGPT / Gemini link and let it flow, no matter how erroneous it might be.
The AI industry has been burning a lot of bridges for the last 1.5 to 2 years; it keeps solidifying the public's impression that it's only out to peddle subscription income as hard as it can without providing more value.
Somebody finally had the right idea some months ago: sub-agents. Took them a while, and it was obvious right from the start that just dumping 50 pages on your favorite LLM is never going to produce impressive results. I mean, sometimes it does but people do a really bad job at quickly detecting when it does not, and are slow to correct course and just burn through tokens and their own patience.
Investors are gonna keep investor-ing, they will of course want the paywall and for there to be no open models at all. But happily the market and even general public perception are pushing back.
I am really curious what will come out of all this. One prediction is local LLMs that secretly transmit to the mothership, so the work of the AI startup is partially offloaded to its users. But I am known to be very cynical, so take this with a spoonful of salt.
I switch between gpt-oss:20b and qwen3:30b. Good for greenfielding projects, setting up bash scripts, simple CRUD APIs using Express, and the occasional error in a React or Vue app.
That's assuming diminishing returns won't hit hard. If a 10x smaller local model is 95% (whatever that means) as good as the remote model, it makes sense to use local models most of the time. It remains to be seen if that will happen, but it's certainly not unthinkable IMO.
It's really task-dependent, text summarization and grammar corrections are fine with local models. I posit any tasks that are 'arms race-y' (image generation, creative text generation) are going to be offloaded to servers, as there's no 'good enough' bar above which they can't improve.
Apple literally mentioned local LLMs in the event video where they announced this phone and others.
Apple's privacy stance is to do as much as possible on the user's device and as little as possible in cloud. They have iCloud for storage to make inter-device synch easy, but even that is painful for them. They hate cloud. This is the direction they've had for some years now. It always makes me smile that so many commentators just can't understand it and insist that they're "so far behind" on AI.
All the recent academic literature suggests that LLM capability is beginning to plateau, and we don't have ideas on what to do next (and no, we can't ask the LLMs).
As you get more capable SLMs or LLMs, and the hardware gets better and better (who _really_ wants to be long on Nvidia or Intel right now? Hmm?), people are going to find that they're "good enough" for a range of tasks, and Apple's customer demographic is going to be happy that it's all happening on the device in their hand and not on a server [waves hands] "somewhere", in the cloud.
It's not difficult to find improvements to LLMs still.
Large issues: tokenizers exist, reasoning models are still next-token-prediction instead of having "internal thoughts", RL post-training destroys model calibration
Small issues: they're all trained to write Python instead of a good language, most of the benchmarks are bad, pretraining doesn't use document metadata (ie they have to learn from each document without being told the URL or that they're written by different people)
I think they will be, but more for hand-off. Local will be great for starting timers, adding things to calendar, moving files around. Basic, local tasks. But it also needs to be intelligent enough to know when to hand off to server-side model.
The Android crowd has been able to run LLMs on-device since llama.cpp first came out. But the magic is in the integration with the OS. As usual there will be hype around Apple, idk, inventing the very concept of LLMs or something. But the truth is neither Apple nor Android did this; it was the wee team that wrote the "Attention Is All You Need" paper, plus the many open-source/hobbyist contributors inventing creative solutions like LoRA and creating natural ecosystems for them.
Couldn’t you apply that same thinking to all compute? Servers will always have more, timesharing means lower cost, people will probably only ever own dumb terminals?
Huh? GeForce NOW is a resounding success by many metrics. Anecdotally, I use it weekly to play multiplayer games and it’s an excellent experience. Google giving up on Stadia as a product says almost nothing about cloud gaming’s viability.
Do you mean Stadia? Stadia worked great. The only perceptible latency I initially had ended up coming from my TV and was fixed by switching it to so-called "gaming mode".
Never could figure out what the heck the value proposition was supposed to be though. Pay full price for a game that you can't even pretend you own? I don't think so. And the game conservation implications were also dire, so I'm not sad it went away in the end.
The crux is how big the L is in the local LLMs. Depending on what it's used for, you can actually get really good performance on topically trained models when leveraged for their specific purpose.
> don't see why not, then the era of viable local LLM inferencing is upon us.
> I don't think local LLMs will ever be a thing except for very specific use cases.
I disagree.
There's a lot of interest in local LLMs in the LLM community. My internet was down for a few days and did I wish I had a local LLM on my laptop!
There's a big push for privacy; people are using LLMs for personal medical issues for example and don't want that going into the cloud.
Is it necessary to talk to a server just to check out a letter I wrote?
Obviously with Apple's release of iOS 26 and macOS 26 and the rest of their operating systems, tens of millions of devices are getting a local LLM with 3rd party apps that can take advantage of them.
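For anyone curious what that looks like in a third-party app, here's a minimal Swift sketch against the new Foundation Models framework; it's based on my reading of Apple's docs, so treat the exact API surface as approximate:

```swift
import FoundationModels

// Ask the on-device model to tighten a paragraph; no network involved.
// Sketch only: this API is new in the "26" OS releases and the exact
// signatures may differ slightly from what's shown here.
func polish(_ draft: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "You are a concise copy editor."
    )
    let response = try await session.respond(
        to: "Tighten this paragraph without changing its meaning:\n\(draft)"
    )
    return response.content
}
```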
I'm running Qwen3 Coder 30B on my Framework laptop to ask questions about Ruby vs. Python syntax, because I can, and because the internet was flaky.
At some point, more doesn't mean I need it. LLMs will certainly get "good enough" and they'll be lower latency, no subscription, and no internet required.
Pretty amazing. As a student I remember downloading offline copies of Wikipedia and Stack Overflow and feeling that I truly had the entire world on my laptop and phone. Local LLMs are arguably even more useful than those archives.
A local LLM may not be good enough for answering questions (though I think that won't be true for much longer) or generating images, but today it should be good enough to infer deep links and app-extension calls or agentic walk-throughs... and that ushers in a new era of controlling the phone by voice command.
Because of the prompt processing speed, small models like Qwen3 Coder 30B A3B are the sweet spot for the Mac platform right now. Which means a 32 or 64GB Mac is all you need to use Cline or your favorite agent locally.
Yeah you can, so long as you're hosting your local LLM through something with an OpenAI-compatible API (which is a given for almost all local servers at this point, including LM Studio).
That said, running agentic workloads on local LLMs will be a short and losing battle against context size if you don't have hardware specifically bought for this purpose. You can get it running and it will work for several autonomous actions but not nearly as long as a hosted frontier model will work.
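"OpenAI-compatible" here just means the agent POSTs to the usual /v1/chat/completions route on your own machine. A rough Swift sketch against LM Studio's default local endpoint; the port and the model name are assumptions, so match them to whatever your server actually reports:

```swift
import Foundation

struct ChatMessage: Codable { let role: String; let content: String }
struct ChatRequest: Codable { let model: String; let messages: [ChatMessage] }

// Send one chat turn to a local OpenAI-compatible server and return the reply.
func askLocalModel(_ prompt: String) async throws -> String {
    // LM Studio commonly serves on port 1234; adjust for your setup.
    var request = URLRequest(url: URL(string: "http://localhost:1234/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        ChatRequest(model: "qwen3-coder-30b", // whichever model your server has loaded
                    messages: [ChatMessage(role: "user", content: prompt)])
    )
    let (data, _) = try await URLSession.shared.data(for: request)

    // Pull the first choice's message content out of the OpenAI-style response.
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let choices = json?["choices"] as? [[String: Any]]
    let message = choices?.first?["message"] as? [String: Any]
    return (message?["content"] as? String) ?? ""
}
```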
Unfortunately, IDE integration like this tends to be very prefill intensive (more math than memory). That puts Apple Silicon at a disadvantage without the feature that we’re talking about. Presumably the upcoming M5 will also have dedicated matmul acceleration in the GPU. This could potentially change everything in favor of local AI, particularly on mobile devices like laptops.
Cline has a new "compact" prompt enabled for their LM Studio integration which greatly alleviates the long system prompt prefill problem, especially for Macs which suffer from low compute (though it disables MCP server usage, presumably the lost part of the prompt is what made that work well).
It seems to work better for me when I tested it and Cline's supposedly adding it to the Ollama integration. I suspect that type of alternate local configuration will proliferate into the adjacent projects like Roo, Kilo, Continue, etc.
Apple adding hardware to speed it up will be even better, the next time I buy a new computer.
I will believe this when I see it. It’s totally possible that those capabilities are locked behind some private API or that there’s some weedsy hardware complication not mentioned that makes them non-viable for what we want to do with them.
They might recommend using CoreML to leverage them, though I imagine they will also be available through Metal.
The whole point of CoreML is that your solution uses whatever hardware is available to you, including enlisting a heterogeneous set of units to conquer a large problem. Software written years ago would use the GPU matmul if deployed to a capable machine.
According to Apple's comparison tool, the Air has 27 hrs of video playback, compared to 30 for the 17 and 39 for the Pro.
Based on that, it doesn't sound like it's that much worse. Of course, if you're trying to maximize battery longevity by not exceeding 80% charge, that might make it not very useful for many people.
Yes, and then don’t even make it compatible with other phones. I’m a big fan of their previous (discontinued) MagSafe battery as that supports reverse charging, charge state display on phone and has the perfect size.
This new battery however is only compatible with the Air as other phones have a bigger camera bump.
And how much better it would be if it had a physical connector, so it's much more efficient. You would have a bigger total charge, and it wouldn't cook both batteries in the process. One can dream, though.
The previous MagSafe battery has the size of the MagSafe wallet and thus fits onto all the iPhones that have MagSafe down to the "Mini" variants. It's the perfect emergency power backup. But Apple discontinued this a while ago.
Selling a thin phone with half a battery where you have to buy the other half and keep it attached to get a proper battery runtime (turning it into a normal-sized phone) can't be the solution Apple intended. At least I hope so. And that battery doesn't fit other iPhones as the camera bump of those other phones is in the way.
Well, swappable batteries have the UX advantage of being able to swap in a full charge.
I don't really understand all the complaining since it's merely a variant of the iPhone for people who prioritize thinness over battery.
For over a decade, HNers have complained that they don't want thinness to be forced on them and that there should be a separate SKU for it. Yet when it finally happens, HNers complain about the trade-off.
Either way, people have options now. If one doesn't like the compromises of the thin phone, they can buy the thick phone. Seems silly to complain about the thinness if you're not the target demographic.
The MagSafe battery has also been a great fix for a 5-year-old built-in battery at 1/3 the cost of a battery replacement, and when I finally replace this phone I'll still have the MagSafe battery for travel/emergencies.
3rd party versions of course, the official one was much more expensive than that.
The camera bump on other models protrudes more towards the centre of the body. And thus the battery wouldn't fit (flush) and the Qi charging wouldn't engage properly.
So running the battery perpendicular to this photo isn't an option?
I'm sorry if I'm not completely familiar with this product: you are to have this battery attached at all times while you're charging, and it just stays in place? (gawd I sound like I'm from a different planet, I apologize -- wireless charging just never has been interesting to me)
What on earth are you talking about? It includes a battery, and there is both a cheaper and a more expensive version that has more battery, plus an add-on battery pack. And you’re complaining about what exactly?
IMO it's underwhelming considering folding phones have been out for many years now and we still don't have a folding iPhone. What are the PMs doing at Apple.
I think folding phones will remain a small niche unless someone figures out how to make a foldable screen that doesn't get permanently scratched by your fingernails.
Pebble had like a week long battery life. Apple’s pitch wasn’t better battery life, it was just “that thing for nerds? This is the same, but for everyone else.” I.e. it came with seamless integration with your phone, rings to close, a more expensive look, and more polished fitness tracking.
The breakthrough that made touchscreen phones work wasn’t an app ecosystem. That came after people were already crazy about iPhones. It was capacitive touchscreens. Basically everything before used resistive touch, which is why those phones usually had styluses. Getting touch, and really multi-touch, working well was the game changer that redefined cell phones.
IMO there's a gap between "charge every day" and "charge once a week" that needs to be crossed.
In other words, if they made the battery last twice as long it'd still be equally as annoying (since your daily routine would be nearly the same, except now you also need to remember if it's a charge day or non-charge day).
To be fair, maybe 3-4 days buys you some convenience. But anyway, charging once a day is a reasonable place to be; to get somewhere meaningfully better would require at minimum a 3x improvement, which probably means a ground-up rework instead of continuous refinement.
A battery band might get you there but I suspect it'd be too clunky. At best Apple may redesign their watch to support a battery band and allow 3rd parties to make them for folks that need weeks of battery life.
For me, it comes down to two things. First, I do not want to have to charge every night since I use my watch as a silent vibrating alarm, and I track my sleep. It seems like Apple has basically overcome this hurdle, now that you can charge while you shower and basically get by.
The other issue is that I don't want to have to bring Yet Another Dongle™ every time I go away for a weekend or short business trip. Most of my trips are ≤ 4 days, so if AWs could reliably go that long (including battery degradation over time) then I'd consider getting one.
Right now, only the AWU even approaches this, and only in low-power mode. If it weren't a thousand dollars, I'd consider it. But between the low-power requirement and the pricing, it's just no contest in my book. I'm getting a new Pebble, which offers a month of battery life at 1/3 of the cost.
I watched the announcement yesterday and was very surprised to hear the watch battery life is still so shocking.
Especially considering how useful sleep data is, then I was surprised to see they're only getting sleep scores now.
My dirt-cheap Huawei watches have had this for years. It's accurate enough (my own perception based on use). And I get a week's battery life too (although I don't have the distracting fancy notifications, perhaps). It does check blood oxygen levels, heart rate, stress, etc.
I truly thought this was a solved problem (looking at headphones battery life, although I might need to check my assumptions here also apply to Airpods).
And that's the fancy-screen, gimmick-edition Garmin watch; the normal MIP-display Garmin watches (even an old, midrange Forerunner 255) will easily get a couple of weeks of battery life, more for the higher-end ones.
OLED is just the wrong screen tech for these devices, never made any sense to me given how little I care about graphics and how little time I spend reading the display.
Yes. My Pebble Steel got over a week of battery in 2015, had physical, tactile buttons that worked even wearing thick winter gloves, and had an always-on-no-matter-what screen that was clearly readable in full sunlight.
Every smartwatch that hasn't met that bar, which is almost all of them ever made, is a joke to me. I'd have ordered a RePebble had I not moved back to analogue dumbwatches instead just before they were announced (and were iOS not actively hostile to competing watch implementations).
And motorcycles get way better gas mileage than cars. But it’s still odd to frame a (totally understandable!) preference for one product category in those terms.
If you are okay with less-smart smartwatches, and okay with no hackability, Garmin should have a few with a black-and-white display and >1 week battery life (even indefinite with sufficient solar).
It depends which camp of Apple Watch (or smartwatch in general) users you are asking.
The camp that sees the smartwatch as an accessory to their smartphone that does fitness tracking and maybe a few other useful things to avoid pulling their phone out constantly - those people want MUCH longer battery life.
The camp that sees the smartwatch as a REPLACEMENT for their smartphone is perfectly fine with the current battery life.
I am closer to the first camp than the second, and I don’t understand why I would need longer battery life. The watch charges very quickly, and there is never a day when I don’t have the chance to charge at some point. I usually do it during my morning shower.
1. People use these GPS watches for Ironman triathlons, ultra running & cycling events etc. They can't and won't charge before the battery is done - and remember the battery with a daily charge will degrade significantly. If it's borderline on release, it'll be inadequate after a year.
2. Just for general convenience, having to take another special cable for every late night or overnight trip is maddening. I always have a phone anyway for any actual interactions.
I find it hard to believe many people are writing texts on their watches, it's just a nice to have gimmick feature that everyone I know has stopped using.
> and remember the battery with a daily charge will degrade significantly. If it's borderline on release, it'll be inadequate after a year.
That has not been my experience though - having used both an Apple Watch and a Pixel Watch for years on end every single day. Absolutely outside my area of expertise, but I would imagine that you can design batteries to have a much longer lifetime (no of recharge cycles) when their capacity is smaller.
That’s not how Li-ion charging works: degradation and lifetime depend (to a first approximation) on full charge cycles. If you charge daily from 80% to 100% or charge every 5 days from 1% to 100%, you accumulate roughly one full cycle over those five days either way, so your battery degradation and lifetime will be about the same.
The new one isn't actually longer. It's just that they changed how they measure it. It assumes 16 awake hours and 8 asleep hours, so the watch lasts 24 hours, but only when you are sleeping and thus not using it for 8 hours.
Why? You can get 8 hours of sleep tracking for a 5 minute charge. You really can't charge your watch for 5 minutes before bed? How about during your bathroom routine?
You are brushing your teeth for like half that alone.
I think a lot of people reach into their pocket and get their phone out if they need "interactivity, ability to respond to notifs, cellular, etc"
But if you want to leave your smartphone at home, but you still want cellular and notifications, I agree the apple watch is the only game in town even if the battery life sucks.
Most of this is because of the always-on screen. If you can live without it and switch back to the motion or button to wake mode, you get 30-50% more usage before the battery runs out, which is not a huge improvement but is a legitimate option.
A side effect is that this makes your watch look less new, and therefore less of a theft target.
And bicycles go much further without needing petrol than cars.
I agree that Apple Watches don't last long enough between charges, but comparing them to a completely different class of device that's technically the same broad category is pointless.
Is this a thing? I've been using a Pixel 9 Pro Fold for one full year now and my inner screen looks pretty flawless. I don't see a single scratch, and I've never used any kind of protector on the inside. This kind of sounds like a "sour grapes" excuse where a really good thing is presumed to suck only because you can't have it. Personally, as someone who isn't really interested in a full tablet, the foldable is really really nice.
I have an OG Pixel Fold and the inner screen is flawless. My iPhone 14 Pro screen is visibly scratched. The Fold replaced tablets and e readers for me.
I don't know what to tell you - I don't want to brag about my eyesight, but it's pretty good - No matter what angle, no matter what phone, the crease is visible. What would I have to gain lying about this? I could say the same thing - Stop trying to copium your purchase?
For those that are not chronically online, a mobile phone from a decade ago has everything they need. If you only have to phone the family, WhatsApp your neighbours, get the map out, use a search engine and do your online banking, then a flagship phone is a bit over the top. If anything, the old phone is preferable since its loss would not be the end of the world.
I have seen a few elderly neighbours rocking Samsung Galaxy S7s with no need to upgrade. Although the S7 isn't quite a decade old, the apps that are actually used (WhatsApp, online banking) will be working with the S7 for many years to come since there is this demographic of active users.
Now, what if we could get these people to upgrade every three years with a feature that the 'elderly neighbour' would want? Eyesight isn't what it used to be in old age, so how about a nice big screen?
You can't deliberately hobble the phone with poor battery life or programme it to go slow in an update because we know that isn't going to win the customer over, but a screen that gets tatty after three years? Sounds good to me.
I still want a phone with expandable storage and a headphone jack. Sony had one, but I don't know if they're still selling them, and I've heard they have their own issues too.
Bluetooth is one extra 'thing' to need; at the moment I know that I can play music through anything that has a line-in, with just a cable. That said, Bluetooth seems to work OK, for devices that support it.
Probably trying to find better screen materials, and addressing reliability issues.
I used Palm devices with resistive touch screens. It was good, but when you go glass, there's no turning back.
I would never buy a phone with a folding screen protected by plastic. I want a dependable slab, not a gimmicky gadget which can die any moment. I got my fill of dying flex cables with Cassiopeia PDAs. Never again.
My girlfriend broke her iPhone screen twice in two weeks; the second time we didn't bother repairing the screen and now she has a broken screen which looks really ugly. I've dropped my Google Pixel 9 Pro Fold countless times and the screen is still intact and flawless. So not sure what you're talking about.
Sure, but I wouldn't buy one even if it was in the same price range as phones I usually buy. For me, it will be useful rarely and cumbersome to use the rest of the time.
I picked up a folding phone a while back just to test it out, and honestly they're still pretty underwhelming.
The screen isn't really big enough or the right shape to feel like a real upgrade for movies, and a lot of apps just aren't built with foldables in mind. Most of the time it just feels like a weirdly shaped, less powerful, less durable tablet.
On top of that you're dealing with a visible crease across the screen, higher prices for something that's actually more fragile, and bulkier hardware with smaller or split batteries. The tech is cool in theory, but in practice it's a lot of compromises without a clear killer use case.
Which phone was that? I bought the Pixel 9 Pro Fold last year and it has basically replaced my iPad Pro. I watch movies, shows, and YouTube videos and read PDFs on it; it's really good.
Things have evolved a ton. I've got an Oppo Find N5. Thinner than iPhone Air when unfolded. Same size as iPhone 16 Pro Max when folded. 16GB Ram, fastest Snapdragon, okay cameras, the screen is magnificent, crease basically invisible in day to day use. Battery larger than any iPhone battery (thanks to Silicon Carbon)
Free with a plan just means you paid for it in installments without them breaking down how much of your monthly payment is going towards the device vs towards the network use. Had you opted for a cheaper device you could have got the same plan for less money. The phone is never actually free, just cleverly marketed to seem free.
I recently got one of these (Galaxy Z Fold 7) and I can't imagine ever going back to a regular phone. The big screen is what finally makes the phone begin to resemble an actual productivity tool.
It was a prepaid plan that was the same price whether I got the phone or not. I guess you could say everyone who didn’t get the phone was subsidizing those who did, but there’s no way to opt for lower pricing if you BYOD. So no, in this case that’s not really true. If it were Verizon where you can pay less if you BYOD then sure but that’s not what I did.
> It’s not a big enough slice for them to want to chase.
Typical strat for them is not to be first with an innovation, but to wait and work out the kinks enough that they can convince people that the tradeoffs are well worth making. Apple wouldn't be chasing that existing slice, they'd be trying to entice a larger share of their customers to upgrade faster.
Yes, in some way everybody is in the 1.5% of something. Apple users will therefore never be 100% happy. Apple is a compromise. But they're also opinionated and very good at telling their users what they should like.
Folding phones are extremely popular in China, where nobody cares about Apple anymore. They are now seen as a status symbol because they are significantly more expensive.
I think they'd rather sell you an iPhone and an iPad Mini rather than one device that does both, just like they'd rather sell you an iPad Air/Pro and a MacBook with basically the same internals, rather than a convertible macOS tablet.
Aside from the obvious mechanical issues, the screen quality compromises, et cetera, folding phones are just dorky. Apple wants their products to be anything but dorky.
The Apple Watch is like the definition of dorky-looking - so much for that theory.
Also, flip phones aren't dorky and have a 2000s vibe - but they don't fit Apple's "you can have any color as long as it's black" approach to design.
In some ways I can't even fault them - fragmenting your device shapes/experiences to chase a niche look is not good business. But this is exactly what's pushing me out of the Apple ecosystem - it's so locked down that if you don't want to fit into their narrow product lines you have no other options. There are no third-party watchmakers using Apple Watch hardware and software. No other phone makers with access to iPhone internals and iOS. Nobody can hack a PC OS onto an iPad or build a 2-in-1 macOS device.
I feel like this is the last generation of Apple tech I'm in on - I just find there are so many devices that are compelling to me personally but don't fit into the walled garden. Plus Google seems light-years ahead on delivering a smart assistant.
I was going to write that the only nerdier thing I can think of is wearing a calculator watch - but even that's like nerd fashion and having a rectangular screen strapped to your wrist is just all about utility.
If you mistake any of these for an apple watch at less than 100m you need glasses.
There's nothing wrong with rectangular watches - a fat, bezel-less screen rectangle around your wrist is not the same thing. The Pro comes closest to a proper watch look, but even that's "Inspector Gadget" territory, not a fashion accessory.
Don't know why you're downvoted. My boyfriend wears the Apple Watch Ultra in public and looks like a complete dork. He's got a pretty big wrist, too!
I left the ecosystem after Catalina, and my experience with macOS at work has horrified me enough to stay well away. Nowadays I'm happily using NixOS on the desktop, laptop and homeserver. My biggest gripe is that I didn't switch sooner, probably could have saved a decent amount of cash eschewing the Apple tax, SaaS fees and macOS migration hamster-wheel.
I'm going to respectfully disagree with the Apple Watch being labeled "dorky". I think they look pretty nice - and I don't own one. I wear a Timex Ironman.
It’s hard to find a source of how many iPhone owners specifically also own a smartwatch, but in the US it seems like 35% might be a decent estimate of smartwatch ownership, so it’d be more in the realm of ~28% of iPhone owners also having an Apple Watch.
True, it did seem a bit unbelievable. Either way, if you look around, Apple Watch is worn by all walks of life and just doesn't have the dorky vibe HNers might insist it has.
They're in the right. Folding phones are great, and I've used one for years, but the technology hasn't reached Apple levels. Get rid of the crease, make the screen less scratchable, and make them waterproof, and then it could go in an iPhone.
I'm never going back to a non-foldable. The ability to have a full-sized phone take up half as much space in my pocket is amazing. It's consistently more comfortable to move around with.
I've had a Google Pixel 9 Pro Fold for a year and I've never noticed it unless I look at the phone from the side. It's interesting that this is the criticism that comes up the most here when it's already a solved issue.
If you hold it up to light to get a reflection, you are telling me there's zero perceptual warping of that reflection around the crease? None? It's as flat and perfect as a single sheet of glass?
iPads are better for watching shows on a stationary bike, since they fit on the bike
iPads are better for reading manga, since you can hold them vertically
and iPads are clearly better for drawing--you can't draw on a laptop.
There are some hybrid laptops that do these things, but they're bad at them. Especially drawing, I've used enough HP convertibles with "stylus support" over the years to know that.
> there’s essentially (literally?) no difference between an iPad and MacBook hardware
Form factor. Touch screen. GPS. Cellular. Circular polarization. These are all literal hardware differences between the iPad and MacBook, and every single one of them makes the iPad suitable for my use case (ForeFlight running on an iPad mounted to the yoke) where a MacBook would not be.
Also, can you give an example of a laptop (or non-Apple tablet) with a circularly polarized LCD? I've never been able to find one, but it's not a spec that's often published…
Are you serious? For anything that needs more screen real estate - reading, browsing, watching or organizing photos/videos, or simply if your eyesight is not as good anymore - it's so much better than a phone. And with a price tag around $350, that is amazing value.
Someone else commented that the iPhone Air being so thin is a result of Apple building a folding phone (they have to be thin). I agree. The iPhone Air basically looked like low-hanging fruit while they're at it. Apple is known to take its time, so that makes sense.
I've had the past three generations of Samsung folding phone (4,5,6).
My use-case is for travel, where I want to read books, and the very occasional time when I want to do some design work outside the office -- draw a diagram that sort of thing. A third rare use case is where a web site is buggy or limited in functionality for mobile browsers. In all these cases the unfolded screen allows me to do the thing I need to do without carrying a second device (tablet, eReader). Another marginal use-case is to show another person a photograph. The fold out screen is much easier to see and I think has better color rendition too.
For these use-cases I find the folding phone very worthwhile.
But...the benefit that trumps all that is that the phone itself is smaller (narrower) than the typical flagship phones these days. It fits in my pocket and my hand reaches across it. I'd never go back to a non-folding phone for this reason alone, even if I never unfolded it. In fact I almost never do unfold it, except when traveling.
FWIW, it wasn't until the Fold 6 that the "cover screen" typing experience was OK. I understand the Fold 7 is a bit wider and so probably better, but I can't justify the expense to upgrade, so I'll sit out until the Fold 8.
They do work well but are fragile. I broke one by gently closing it on a hot day (about 100°F). I saw another break from the kind of short fall that used to break phones before they all got Gorilla Glass.
I guess if you're the sort that is not clumsy and you're in a mild climate, you might get your money's worth.
The vertical fold ones might be better. I had the newest Samsung Flip (horizontal folding) and the screen died twice. Both times from a small rupture on the seam. The tech at the phone place said it happens constantly, and it costs hundreds of dollars to replace out of warranty.
I dunno, I always felt folding phones added unnecessary complexity and moving parts. The slab phone seems closer to a platonic ideal and, from a user/engineering perspective, has fewer compromises.
> IMO it's underwhelming considering folding phones have been out for many years now and we still don't have a folding iPhone. What are the PMs doing at Apple.
They're buying another year of very-high margin phones I guess...
I would never have bought one before but nowadays it could actually be useful. You could have Codex or Claude Code in your pocket, and every ~15min check the work and write a new prompt. Tablets are too big (for me) to constantly carry around for this, and phones annoyingly small for that use.