It's possible to lock pages in Notion (from the ... menu on the page), which prevents editing without unlocking. Most org-level pages in our workspace are locked. People also typically lock project pages after they ship, for example.
This is also likely a performance nightmare. Funny that they mention that "new hardware has enabled us to..." which means that this will perform poorly on old devices.
At a previous company, we were forbidden from using translucency (with a few exceptions) because of the performance cost of blending. There are debugging tools we'd use fairly often to confirm that all layers were opaque.
Unlikely. Frosted glass blur was introduced almost twelve years ago in iOS 7, and was supported all the way down to the iPhone 4. Many apps like control center have used a full screen blur without any performance issues for a long time.
Apple at the time created their own 'approximate gaussian blur' algorithm specifically to enable this, and it ran crazy fast on devices where a simple gaussian blur would barely achieve double digit FPS. Even if this 'liquid glass' effect is heavier to compute, on the hardware we have today it will be a negligible performance concern.
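For anyone curious how that kind of blur gets cheap: the usual tricks are to downsample first and to approximate the Gaussian with a few box blurs (repeated box filters converge to a Gaussian, and a running-sum box blur costs the same no matter how large the radius). Here's a rough numpy sketch of that general idea - not Apple's actual algorithm, which I don't know, just an illustration of why it's so much cheaper than a naive large-kernel Gaussian:

    import numpy as np

    def box_blur(a, radius, axis):
        """O(n) box blur along one axis using a running sum (cost is independent of radius)."""
        a = np.moveaxis(a, axis, -1)
        n = a.shape[-1]
        pad = [(0, 0)] * (a.ndim - 1) + [(radius + 1, radius)]
        csum = np.cumsum(np.pad(a, pad, mode="edge"), axis=-1)
        out = (csum[..., 2 * radius + 1:] - csum[..., :n]) / (2 * radius + 1)
        return np.moveaxis(out, -1, axis)

    def approx_gaussian_blur(img, radius, passes=3, downsample=4):
        """Downsample, run a few separable box blurs (they converge to a Gaussian),
        then upsample. img is a float array of shape (H, W) or (H, W, C)."""
        small = img[::downsample, ::downsample].astype(np.float32)
        r = max(1, radius // downsample)
        for _ in range(passes):
            small = box_blur(box_blur(small, r, axis=0), r, axis=1)
        # nearest-neighbour upsample back to full size; the error hides under the blur
        up = small.repeat(downsample, axis=0).repeat(downsample, axis=1)
        return up[:img.shape[0], :img.shape[1]]

The real implementation is obviously on the GPU, but those two ideas (shrink the image, stack cheap box passes) are what make a full-screen blur feasible at 60 FPS.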
> Unlikely. Frosted glass blur was introduced almost twelve years ago in iOS 7, and was supported all the way down to the iPhone 4. Many apps like control center have used a full screen blur without any performance issues for a long time.
"Without any performance issues"? Entirely false - reviews at the time noted iOS 7 dramatically reduced battery life - all across the board for Apple devices, even for the then latest iPhone 5S and 5c (https://arstechnica.com/gadgets/2013/09/ios-7-thoroughly-rev...).
The abuse of transparency/translucency in the UI was the primary reason - you could go to Accessibility settings and disable animations + transparency/translucency and get notable increases in both runtime speed of the OS UI and battery life.
Indeed, I remember the switch to iOS 7, for me battery life seemed to get slightly worse but there were conflicting opinions at the time. It's fresh in my memory as it was around the same time I binged on all five seasons of Breaking Bad :)
It's also true that iOS 7 made the 4/4S seem much slower, but the frosted glass effect still ran at 60 FPS - that was my point. It was really impressive at the time. Though unless you spent hours sliding Control Center up and down, it's hard to blame the blur effect for the reduced battery life, as it rarely appeared inside apps. Most likely it was the result of increased OS bloat and the proliferation of background services.
You can’t judge battery life and performance off a .0 release when the priority is on delivering features with the minimum number of showstopper bugs. At least wait until the .1.
It has been like this for every Apple release for over 20 years.
Maybe for "Apple", but there's one team that takes performance seriously. The WebKit team has a zero tolerance policy for performance regressions (https://webkit.org/performance/) dating back to the implementation of the Page Load Test in 2002 (Creative Selection, p. 93).
WebKit sounds like the kind of scrappy startup Apple might want to acquire and gain some hard-earned engineering knowledge.
If Apple has been shipping betas for 2 decades that do not meaningfully prepare the release candidate for users, something is horribly wrong. They're either not listening to the feedback they receive or they're not giving themselves enough time; both are firmly within Apple's control.
Well, firstly, this is a developer beta, so the target audience is developers who want to get a head start on getting their app(s) ready. Measuring battery performance of those dev betas is dumb.
Also, they do listen to feedback and do gather it. They won't change the entire design language now, though.
The parent comment wasn't talking about the developer beta, they were talking about the .0 release. They should use the release candidates as an opportunity to dogfood new solutions instead of shipping an MVP to prod.
It’s almost certain to be a fairly cheap thing, at least for a GPU that can sling pixels at the gigabytes per second necessary to get smooth touch scrolling at these screen resolutions.
The demos only show a very limited array of shapes. Precompute the refraction, store the result in a texture, and the gist should be sample(blur(background), sample(refraction, point)). Probably a bit more complicated than this—I’m no magician of the kind that’s needed to devise cheap graphics tricks like this—but the computational effort should be in that ballpark. Compared to on-device language models and such, I wouldn’t be worried.
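To make that concrete, here's the ballpark I have in mind as a CPU-side numpy sketch (the names and the offset-in-pixels convention are made up; the real thing would be a fragment shader sampling textures with bilinear filtering). The precomputed "refraction" texture just stores a per-pixel offset, and the final colour is the already-blurred background sampled through it:

    import numpy as np

    def apply_glass(blurred_bg, offset_map):
        """blurred_bg: (H, W, 3) pre-blurred backdrop.
        offset_map:  (h, w, 2) baked refraction offsets, in pixels, for this widget shape.
        Returns the refracted view: out[y, x] = blurred_bg[y + dy, x + dx]."""
        h, w = offset_map.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        sy = np.clip(ys + offset_map[..., 1].round().astype(int), 0, blurred_bg.shape[0] - 1)
        sx = np.clip(xs + offset_map[..., 0].round().astype(int), 0, blurred_bg.shape[1] - 1)
        return blurred_bg[sy, sx]

Per pixel that's one fetch into the offset map and one fetch into the blurred backdrop, which is cheap compared to simulating the glass every frame.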
(Also, do I need to remind you of the absolute disdain directed by 95/98/Me/2000 users at the “toy” default theme of XP? And it was a bit silly, to be honest. It’s just that major software outfits don’t dare to be silly anymore, and that way lies blandness.)
> It’s just that major software outfits don’t dare to be silly anymore, and that way lies blandness
Great observation! We need some of that silliness back. Everything is all serious and corporate nowadays, even 'fun' stuff like social media or games. Even movies can't be silly anymore.
Not sure about 'serious and corporate': the big corps like to appear cute, folksy, etc., and recently we even saw the new Google Material Design advertised as being judged more "rebellious" by focus groups. Maybe bland and toothless is just the general direction of contemporary culture and style that they follow.
Myself, I can appreciate corporate stuff presenting as corporate. It's more truthful, and feels a little less manipulative.
>It’s almost certain to be a fairly cheap thing, at least for a GPU that can sling pixels at the gigabytes per second
Okay, but what about the battery connected to the GPU? The battery in my iPhone has already degraded below 80% health in the 2.7 years I've had it, so I'd rather not waste its charge on low-contrast glass effects.
> And all of this just to make the whole UI white and generic.
3:30–3:45 in the video is painful. Describing “giving you an entirely new way, to personalise your experience”, while showing… white. White white white. Oh, and light tinted backgrounds to set your white on. I hope the personalisation you wanted was white.
We used to have such customisation, then it kinda went away for a while because it was too hard and limited development, and then dark mode was hailed as a brilliant new invention.
But it is worth remembering that dark mode does actually get you some things; it’s not all bad: the restrictions do have some value.
Full customisation became paradoxically limiting: when you give too much power to the user, the app is essentially operating in a hostile environment. Of course, a lot of it was laziness on app and UI framework developers’ parts, but it really did limit innovation, too.
Dark mode gets you a pair of themes that you can switch between easily, and an expectation that there are only two themes you need to consider, with well-defined characteristics. This is a much more practical target, a vastly easier sell for app and framework developers.
The funny thing with monochrome icons is that in some ways they were actually a better fit for a full-customisation environment, where you had arbitrary background and foreground colours. Once it’s just mundane light and dark themes, you could more safely have full colour in two variants.
Certainly light mode and dark mode does not mean things need to be monochrome.
From what I've seen, the refractions happen in predictable contexts, so I suspect they'll be able to create shaders, etc. that will limit the performance hit.
I would imagine that for a known geometry of glass, you can do the ray tracing once, see where each photon ends up, and then bake that transformation into the UI. If you do this for each edge and curve your UI will produce, you can stitch them together piecewise to form UI elements of different shapes without computing everything again from scratch.
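A minimal sketch of what that one-time bake could look like, assuming a simple 1D thickness profile for a glass edge and a small-angle approximation of refraction (certainly cruder than whatever Apple actually does, but it shows why the per-frame cost can stay tiny):

    import numpy as np

    def bake_edge_offsets(thickness, eta=1.0 - 1.0 / 1.5):
        """One-time 'ray trace' for a known glass cross-section.
        thickness: 1D array, glass thickness across the edge (in pixels).
        Returns per-pixel sampling offsets along that axis, using the small-angle
        approximation: lateral shift ~ surface slope * thickness * (1 - 1/n)."""
        slope = np.gradient(thickness)   # tilt of the top surface
        return eta * slope * thickness   # where the refracted ray lands behind the glass

    # Bake once per edge/corner profile, then reuse it for every widget that shares
    # that border radius, stitching straight edges and corners together piecewise.
    rounded_edge = np.sqrt(np.clip(16.0**2 - (np.arange(64) - 32.0)**2, 0.0, None))
    offsets = bake_edge_offsets(rounded_edge)

At runtime you only ever read the baked offsets, so the expensive part happens exactly once per shape.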
Early iPhone hardware was barely keeping up with rendering the UI even with a total ban on transparency. Even the iPhone 4, which improved the hardware a lot, had the issue that it also increased the number of pixels to be pushed around.
And yes, later iOS releases on early hardware were a huge PITA and a slowdown.
I suspect that their new technique implements the existing fast gaussian blur, and since the patent is about to expire, it was a good time to spice it up.
I suspect as others have mentioned here, they use a "Liquid Glass" shader which samples the backing layer of the UI composition below the target element and applies a lens distortion based on the target element's border radius, all heavily parameterized so as to be used with the rest of the system's Liquid Glass applications like the new icon system.
Surely it's a performance nightmare, because whatever is behind the frosting has to be rendered in full. Without the frosting, the compositor can see that the content is occluded and skip rendering it. Or does macOS not do that?
I don't know how long you've been following Apple, but with previous "high cost on old hardware" features, they just disabled them on older devices.
Apple loves their battery life numbers, they won't purposefully ship a UI feature that meaningfully reduces them. Now bugs that drop framerates and cause hangs, they love shipping those.
Maybe in the past, but my iPhone 13 still has pretty good battery life considering the battery has physically degraded over the years. No update felt like it killed the battery.
Eh, I use an iPhone 11 that's 5.5 years old, with the original battery and to this day the battery life is not noticeably different from when it was new.
It's the first iPhone I bought and has lasted longer than any of the three Android phones I had before it.
Literally impossible for your battery life not to have degraded in 5.5 years; battery tech just degrades. My 14 Pro was noticeably worse in less than a year.
Windows Vista introduced this same concept. Performance was awful unless you had compatible graphics acceleration. 20 years later, I think most devices should be fine, especially Apple devices.
Vista was dogged by issues caused by migrating display drivers from the old XP driver model (XDDM) to WDDM 1.0, something that was only finished by 7 (which dropped XDDM fully and introduced WDDM 1.1) and 8 (which AFAIK mandated WDDM 1.1 only).
Unlike the previous GDI acceleration, DWM.EXE could composite the alpha channel quickly on the GPU, and generally achieved much higher fill rates on the same hardware - if the drivers worked properly.
Yeah, one of the easiest ways to make Windows Vista and 7 perform better was to simply disable all the fancy UI graphics that add nothing. I don't care if my window title bars have a gradient and animated transparency. It's actually a bit distracting and makes the system perform worse, so I just turned it off.
Even on modern devices though which have more computation and graphics power to the point that they aren't going to actually lag or anything while rendering it, why waste cycles and battery animating these useless and distracting things? There's no good justification.
These performance-hungry "improvements" are forcefully introduced to deliberately slow down older devices and force a device refresh across the user base.
I have been using an 8-year-old iPhone just fine, but features like these will, over time, make the experience slower and slower and slower, until I am forced to refresh my iPhone.
I think probably a much bigger problem is app bloat. Devs are usually using very recent if not brand new top end devices to test and develop against which naturally makes several types of performance degradation invisible to them (“works on my machine”). Users on old and/or low end devices on the other hand feel all of those degradations.
If we want to take increasing device lifetimes seriously we need to normalize testing and development against slow/old models. Even if such testing is automated, it’d do wonders for keeping bloat at bay.
More likely it's a result of pressure to ship highly visible "improvements," combined with a lack of ideas that could improve the experience in a meaningful way. What do you do in that situation? Ship an obvious UI update that wouldn't have performed on the last gen hardware.
I haven’t used the new UI, so don’t assume this to be an endorsement of it, but even if you have good ideas about UI improvements and implement them, there still is pressure to make the UI look different because that, at a glance, shows users that they get something new.
And yes, “looking different” doesn’t have to mean “requires faster hardware”, but picking something that requires faster hardware makes it less likely that you will be accused of being a copy-cat of some other product’s UI.
And you base your first sentence on…? Surely not the ol’ “my phone slows down when my battery is failing so that I’ll buy a new phone” canard?
To be clear, these are new features that will likely have a setting to turn off. There’s no conspiracy, nothing “forcefully” added for the purpose of driving upgrades. (Ah, ninja edit): There’s not even a guarantee these features will be supported on an eight year old phone. EDIT: wait a minute...your eight year old phone won't even be supported.
(EDIT: reworded first paragraph to account for the ninja edit.)
When is the last time a company has admitted wrong-doing? No, Apple admitted to slowing down phones when the battery was shot so it wouldn’t just suddenly shut down.
I adamantly believe this was the right call for Apple to make. I frequently switch between Apple and Android phones across different generations. At the time I had an aging flagship Samsung that did NOT do this. My battery indicator would say "18%" and it would last however long that implies...if I didn't do anything remotely CPU-intensive. If I did anything that boosted the CPU, the current draw caused the battery voltage to fall off a cliff and the phone would instantly shut down without warning.
The worst part was that during the boot sequence, the CPU ran at full-throttle for a few moments until the power-management components were loaded. So I couldn't restart it. As long as I didn't open a game or YouTube or a wonky website with super awful javascript, I could continue using the phone for another couple hours. But if the phone turned off, it couldn't be turned back on without charging it more ... even though it had "18%" battery left (as determined by voltage, not taking into account increased internal resistance in the battery as it ages).
I was envious of iPhone users that got a real fix for this (Apple slowing down the phone when the internal voltage got low). I would have greatly preferred that Samsung had done the same for my phone too.
I agree, it was the right call to make -- a temporarily-impaired device is always better than a temporarily-failed device, especially when you're talking about something you may need in an emergency situation.
That said, Apple _significantly_ erred in not over-communicating what they were doing. At that point, the OS would pop warnings to users if the phone had to thermal throttle, and adding a similar notification that led the user to a FAQ page explaining the battery dynamics wouldn't have been technically hard to do.
That was fake, tho. They slowed down old iPhones to make you buy a new one. My iPhone 7 wasn't auto shutting down, battery health was good, but they still made it so slow it was unusable the same week they released the iPhone X.
There is literally a zero percent chance it was anything to do with batteries. This is not a conspiracy theory. It's an objective fact.
They didn't admit bad intent. They admitted to doing something with good intent (the slowing was to stop crashes with near-EOL batteries), but acknowledged that they weren't transparent about it.
I'd much rather us have progress and people with 8 year old phones suffer than ensure that everything continues to run smoothly on any old device for eternity.
I would prefer to be told that my battery is weak so I could make a decision on if I want to replace the battery, replace the phone, live with the phone shutting down randomly when battery is low, or continue with a slower phone. That's just me.
Apple absolutely effed up by not communicating the specifics well, but that’s corporate policy. Apple docs have always been targeted at the non-technical user and therefore inadequate for others.
But this one is true. Apple obviously puts out slowdown updates right as they release a new phone. They made my iPhone 7 unusable the same week they released the iPhone X.
I don’t think your overall take is wrong (it’s about money), but maybe the simplicity of it is.
Reality is that designers, product managers, engineers — they all wanna build cool things, get promoted, make money etc.
You don’t do that by shipping plain designs, no matter how tried and true. The pressure to create something new and interesting is ever present. And look we have these powerful Apple silicon chips that can capably render these neat effects.
So no I don’t think it’s a shadowy conspiracy to come after your iPhone 8. Just the regular pressure of everyday men and women to build new and interesting things that will bring success.
In the late 90s/early 2000s, desktop computing was moving at such a pace that an 8-year-old PC was near unusable. Over time progress slowed, and it's not unusual to have a decade-old desktop now. The problem is thinking that mobile has slowed that much too. Mobile is still progressing quite rapidly, so yeah, an almost decade-old device is going to feel slow.
You have what, an iPhone 6? 1GB of RAM vs 8GB for modern devices, and the first A chip to come out 2 generations after yours has 2% of the power of a current chip, so modern chips are likely close to 100x as powerful as your phone.
Why should we hold back software to support extreme outliers like you?
> Why should we hold back software to support extreme outliers like you?
What are apps and mobile sites doing differently today besides loading up unnecessary animations and user tracking? How has user experience improved for those operating on devices fast enough to make up for developer laziness?
If I want to play games, I will buy the latest iPhone.
If I want a smartphone with a couple of simple, primitive apps that just send JSON and call REST APIs in the cloud, I don't want to be forced to shell out $1500 every couple of years.
Yes, everything has a lifetime, 10 years is a very good run for a complex piece of technology you can carry in your pocket. Send it in for recycling.
So that we can have better features and functionality in our future systems. Backwards compatibility is an anchor. If you want new things then expect to get new platforms to run them on don't expect everyone to limit their possibilities to support you.
The vast majority of things don't get recycled properly.
We are not talking about new features. Of course no one expects to run an LLM on a ten-year-old phone; again, we are talking about fashion. It is change for change's sake. It is not providing value to users; it is so that the designer gets to eat and management and shareholders are kept happy.
There is a difference between actual technical progress and you throwing out your skinny jeans because baggy pants are now in fashion.
Why shouldn't we build phones that last ten years, twenty years, or even more?
Windows 10 keeps telling me I need to buy a new desktop in October. I don't remember when I bought this one, but it runs fine for everything I do. I've been running Linux for ages on my laptops; I'll be upgrading my desktop to Linux too!
Windows 10 is EOL. As a fellow internet user I'm glad Microsoft is taking a harder line these days on people running EOL software. The internet has a history of being swamped by people running EOL versions of Windows full of security issues causing problems for everyone else.
No one is holding back software. You're not running a local LLM or anything useful; you're adding performance cost for merely displaying icons on screen.
No one is holding back software, because they aren't being made to. If we were forced to support decade-plus-old devices, though, software would for sure be held back.
Laggards cost society by running insecure devices that generally impact the rest of the world, on top of complaining that no one continues to support their devices long after their useful life.
> Laggards cost society by running insecure devices that generally impact the rest of the world
Maybe there's also a cost to updating phones as frequently as people do, and inefficient software running across billions of devices.
I wouldn't blame people who make their hardware last longer and call them "laggards". And it's not their responsibility to write security patches for their device, that falls on the manufacturer.
For these people, me included, they don't need the latest hardware features to ray trace a game or run some local LLM. We're just taking some photos, making calls, getting map navigation, messaging, interacting with CRUD apps, and web browsing. None of that requires the latest hardware, and especially Apple hardware from 8 years ago is more than capable of handling it smoothly.
Ask anyone who had to deal with supporting IE back in the day what the cost to the world is for supporting tech laggards. They are an anchor on tech growth and a real issue.
If you're running an insecure device past its support life, it's your responsibility and your fault if it's used to attack others. You are fully to blame for choosing to use something past its serviced life. You cannot expect companies to support old software forever.
Currently replying from my iPhone 16 pro (granted, not old by any means) on the iOS 26 dev beta. MOST things actually feel smoother/snappier than iOS 18. Safari is a joy to use from a performance perspective.
It’s in beta so ofc I’m getting a ton of frame hitches, overheating, etc. but my summarized initial thoughts are “it’ll take some getting used to, but it feels pretty fast”
> MOST things actually feel smoother/snappier than iOS 18
I have a feeling the whole smooth animations thing contributes to this a lot. Obsessing about the reaction time and feeling of how stuff comes on the screen. But yeah iPhone 16 pro is probably a bad performance test case
After using it for a couple more days, battery life hasn't really changed from 18. I'm tempted to say that it's better but I don't want to make any claims before I actually track battery life across a week and compare it to my battery life pre-update.
The overheating is a common occurrence, but it doesn't persist. It seems to be certain things (setting the animated backgrounds in iMessage is a good example), but the moment I'm not doing one of those things the temp feels fine. My battery does drop a percent or two during those cases (which sucks), but my typical use of the phone hasn't yielded any noticeable battery life loss compared to 18.5
There's a difference between something like a transparent background (you can run i3/picom on a potato) and having to composite many little UI elements to render a frame.
I can think of a couple of creative ways to dramatically optimize rendering of these effects. There is probably quite some batching and reordering possible without affecting correctness.
Ceteris paribus your performance is always going to be substantially worse even with tons of fancy tricks. Those also get much harder to implement when you're building a complete UI toolkit that has to support a ton of stuff rather than just writing first-party apps/OS components.
I think that the batching that I have in mind would work especially well with complex layouts. The thing to realize is that even if you have tons of elements on a screen, their visual components aren't actually stacked deeply in most cases and the type and order of applied effects is quite similar for large groups of elements. This allows for pretty effective per-level batching in hierarchies, even if elements don't have the same parents.
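A toy illustration of the kind of per-level batching I mean, with a completely made-up data model: group visual components by (tree depth, effect kind) and issue one pass per group instead of one per element.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Element:
        depth: int       # level in the view hierarchy
        effect: str      # e.g. "opaque", "blur", "glass"
        rect: tuple      # (x, y, w, h)

    def build_batches(elements):
        """Group elements so each (depth, effect) pair becomes one draw/compute pass.
        Painter's order is preserved between levels; within a level, non-overlapping
        elements with the same effect can safely share a single pass."""
        batches = defaultdict(list)
        for e in elements:
            batches[(e.depth, e.effect)].append(e.rect)
        return [(depth, effect, rects)
                for (depth, effect), rects in sorted(batches.items())]

    ui = [Element(0, "opaque", (0, 0, 400, 800)),
          Element(1, "glass", (10, 700, 380, 80)),   # tab bar
          Element(2, "glass", (20, 710, 60, 60)),    # a button on it
          Element(2, "glass", (90, 710, 60, 60))]    # another button
    for depth, effect, rects in build_batches(ui):
        print(f"pass: level={depth} effect={effect} count={len(rects)}")

Four elements, three passes instead of four, and the saving grows with the number of same-effect siblings.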
Right. My point is the response to this is "well if we optimize it more we'll improve performance", but oftentimes if you optimized the existing code you would also improve performance. Your end state is still worse.
Is it really worse if the GPU spends maybe 0.5ms more per frame on these effects? I'd be surprised if a good implementation adds much more to the per frame rendering time.
> At a previous company, we were forbidden from using translucency (with a few exceptions) because of the performance cost of blending.
I imagine this was on mobile devices.
Blending was relatively expensive on GPUs from Imagination Technologies and their derivatives, including all Apple GPUs. This is because these GPUs had relatively weak shader processors and relied instead on dedicated hardware to sort geometry so that the shader processor had to do less work than on a traditional GPU.
Other GPU vendors rely more on beefier shader processors and less on sorting geometry (e.g. Hierarchical-Z). This turned out to be a better approach in the long term, especially once game engines started relying on deferred shading anyway, which is in essence a software-based approach that sorts geometry first before computing the final pixel colors.
Interestingly, in iOS 18, suppressing transparency (there's a setting for it) makes performance worse, not better. The UI lags significantly more with transparency disabled. I expect it will be the same with iOS 26: there will be a setting to reduce the transparency (which I find highly distracting), but it will make performance actually worse…
Thanks for this insight. It's very counterintuitive; normally transparency is additional work for a GPU.
I had "Reduce Transparency" check-box in settings turned on because I distaste semit-transparent interfaces. Was not noticing performance problems except one application - Ogranics Maps which were unusably slow after switching to another app and returning to maps so I had to restart it freqently (swipe up). I was thinking that the problem is with Ogranics Maps code.
After seeing this comment re-enabled transparency (iOS default) and Ogranics Maps working fast even if I switch between Organic Maps and other apps!
My phone is always in power save mode. Re-enabling transparency actually made the UI less jerky. It was mostly the keyboard that became unresponsive: I could type 15-20 letters while it froze and it would then "catch up".
Re-enabling transparency improved this a lot, though the keyboard still hangs a bit from time to time. I'm always in power save mode, on an iPhone 12, running iOS 18.
> This is also likely a performance nightmare. Funny that they mention that "new hardware has enabled us to..." which means that this will perform poorly on old devices.
Not sure if it is planned obsolescence but it certainly is an upsell to upgrade.
Translucency being a main feature of Mac OS X is decades old at this point. I remember a magazine article touting it as an advantage over the upcoming release of Windows XP!
> At a previous company, we were forbidden from using translucency (with a few exceptions) because of the performance cost of blending. There are debugging tools we'd use fairly often to confirm that all layers were opaque.
I feel like a few years back, when I still used an Intel MacBook, I noticed an increase in battery life and fewer dropped frames (like during 'Exposé' animations) after disabling transparency in Accessibility settings.
Yes, but I think it's about giving the consumer "more" so that the upgrade train doesn't stall out and stop moving. They need everyone upgrading iPhones every 3 years, and people won't do that for just an abstract "it's faster."
Meh, Vista laptops could run lots of translucency fine (well, as long as they were actually Vista-era laptops and not just XP-era laptops with Vista installed).
You just proved that MSFT released a slow OS to force people to refresh their hardware.
Plus, Vista was released in 2007 and XP SP2 (the most popular version) in 2004, so that's only about a 3-year difference. It's not like hardware progressed that much in 3 years; it's more that the new software got significantly slower.
I don't think upgrading was the reason for Vista performance. MS wasn't in the hardware business back then (and is just a marginal player even today).
They WAY overreached in their goals with Longhorn. When they finally decided to cut back features to something actually attainable, they didn't have enough time to make a high-performance OS.
Windows 7 was a well-loved rebrand of what was essentially just a Windows Vista service pack with improved performance (though it was still too heavy for a lot of the older machines people tried to upgrade to Vista). If they'd cut back on their goals earlier, Windows 7 is likely a lot closer to what would have shipped as Vista.
"Be careful… about this reading you refer to, this reading of many different authors and books of every description. You should be extending your stay among writers whose genius is unquestionable, deriving constant nourishment from them if you wish to gain anything from your reading that will find a lasting place in your mind."
- Seneca, Letters
I was surprised to learn that the temptation to read too many things was also a problem 2,000 years ago. This inspired me to work on a short list of books that I know deeply.
Sucks that the vast majority of those books were lost forever. Early Christianity was a scourge in that regard; think how much culture we lost forever because of those zealots.
I didn't realize Early Christianity had a monopoly on the destruction of books? As far as I know the burning of rival civilizations has been happening for thousands of years.
Which centuries did you have in mind? In what countries were these purgings? Which authors were purged and by which sects of early Christianity?
I ask because "Early Christianity" as a period ends with the First Council of Nicea, and Christians before Nicea were (a) not at all unified and therefore had very different opinions on things like pre-Christian literature and (b) not very politically powerful and therefore unlikely to have had a major impact on book survival.
If you have specific citations for when and where these systematic and catastrophic destructions are purported to have taken place I'd love to hear it, otherwise I'm more inclined to chalk it up to "most texts don't survive from most eras for all kinds of reasons".
My understanding is that the idea that Christians destroyed vast numbers of ancient texts was heavily exaggerated by anti-Christian polemics during the Enlightenment. There were some isolated instances, but even some of those, like the Library of Alexandria, were much less significant than the polemics assumed. The Library of Alexandria had already largely fallen into ruin and disuse by the time Christians are supposed to have gotten to it, and the works left there had already been copied all over the Mediterranean anyway.
The other problem with this line is that it ignores the enormously important role that Christians had in preserving large numbers of texts. Did they selectively preserve texts while allowing others to disappear to time? Probably. But when each copy has to be transcribed by hand, it's to be expected that what survives is the stuff that the people responsible for preserving texts thought was most important.
It would be very unfair to accuse someone of burning books just because they didn't manually copy everything that had ever been written and instead only copied their favorite texts.
I would say that material decay, neglect, economic collapse, war, and changing cultural priorities played a far greater role in the loss of these texts rather than the ones lost to zealous Christians. It's just weird to single that group out when their actions represent a minority in the overall loss of these ancient texts.
During COVID, the block I live on in San Francisco started doing outdoor happy hours every Saturday afternoon. People weren't traveling much then, so we had near 100% attendance of every person on the block for almost a year. I went from knowing none of my neighbors to knowing all of them quite well, and it has surprised me how much it has improved my day-to-day happiness.
Since then, we've hosted a "progressive" Thanksgiving dinner, which moves from house to house on the block for different courses. We shut down the street one day each year and set up bounce houses for the kids. I've made pint glasses with the name of our street engraved in them and given them to my neighbors. It's shown me that there really can be something valuable outside of your immediate family and circle of friends.
One thing that struck me when visiting Tokyo (as an American living in San Francisco), was that it was not uncommon to go to a restaurant or bar on, say the 3rd or 4th floor of a building.
In America and Europe, restaurants and shops are basically all zoned to be on the ground floor, with residential or office units above. This gives the density a different feeling, because commercial/dining space extends upward.
In the US, Chicago is also like this. I've been to a "shopping mall" that had ten stores but was spread among four floors.
Chicago used to have a number of "vertical malls." I think Water Tower Place (7 floors) and The Shops at 900 (7 or 8 floors, IIRC) are the only ones left. Unless you also count smaller places like Block 37 (4 floors).
Some are now shadows of their former selves. Some sit empty (Chicago Place), or in various stages of redevelopment.
THIS! I was just talking to a city council member about my trip to Japan and how this level of density (multiple stores in the same location vertically rather than horizontally) had some interesting effects on walkability, sales tax revenue per square mile, and mixed-use residential.
ADA law in the US financially prohibits this. Once you need an elevator, the costs go through the roof for the building. Elevator design, installation, inspection, and repair are incredibly expensive and eat up a lot of square footage on every floor.
Oh, interesting point. So when I see a map marker on every block in places like that, it doesn't mean you can just walk off the street into them - they might be upstairs?
Convenience stores are almost always ground floor. I can only think of times where there is a mezzanine or similar that they might be on an upper floor. They are always placed to have high foot or vehicle traffic.
I'm curious about the main use-cases for physical notebooks from folks on HN. I love the idea of physical notebooks, but also have years of digital notes that are searchable and that I can access on any device. I feel like I'm in too deep with digital, and like the ability to access it anywhere.
Has anyone made the switch from digital to physical and loved it? What kind of notes are you taking, how did you get it to stick?
I often go to Barnes & Noble to sit and work on my laptop with a coworker. They have nice seats, no shortage of reference material to settle debates, and happen to be in closer proximity to my office than a library.
One cold winter day, as I was typing out a rough design for a major project, I decided it was just too tedious to work that way. My hands were cold, typing hurt, and my fingers couldn’t keep up with my head. I was trying to track all sorts of interdependent services in my head.
I got up, grabbed a notebook and pen from the shelves, and walked to the checkout counter. Coincidentally, both were Moleskine-branded, but to this day, I know nothing about the company. All I know is that it was far less frustrating to scribble crude diagrams on paper than it was to type them up.
Once I got everything down on paper, I still had to type it all. The scribbles were barely legible to me, let alone the other people on my team.
Pen and paper didn’t replace digital; rather, they augmented it.
This is my experience as well. As PG notes in "Hackers and Painters", figuring out the architecture of a program is more like sketching than engineering. Scribbling in a notebook is more freeing than typing or diagramming on a laptop.
I have 14 years of personal journals and 7 years of programming/math/music notes. I can usually find old entries right away because it's much easier to remember where I was when I wrote it, and why. Part of it is the muscle memory of moving a pen, and part of it is because I would have to care enough about a topic to sit down and put ink to paper in the first place.
A good deal of my technical notes are write-only anyway. Slowing down and jotting things once gave me all the understanding I've ever needed. This is less likely to happen with copy/paste.
I think paper exercises your brain, while these fancy programs attempt to replace it.
> I feel like I'm in too deep with digital, and like the ability to access it anywhere.
You can have both.
My wife uses a smart pen that tracks her writing in her notebooks and creates searchable PDFs.
Every couple of months she unloads it via Bluetooth into iCloud and the pages are available everywhere she is.
She recently turned off the pen's built-in OCR after she found that macOS does a much better job of automatically OCRing the pages just by dropping the PDFs into the file system.
I started carrying notebook and pen in my breast pocket when my children were small. I would do a lot of thinking about my work while I was caring for them, and I wanted something to catch my thoughts. Something I could put away instantly when someone needed a push on the swings and pull back out half an hour later, and have it be just as I left it. I still do this occasionally, but these days I can just use a full sized notebook for sketching out ideas.
Nevertheless, I’ve found it incredibly useful to carry a pocket notebook still. Moleskine for a while, but the paper kind of sucks. These days I pick up anything with a sewn binding and hope I get lucky. But anyway, a big reason is social. People react much better when you grab a notebook and start writing than they do if you pull out your phone. One says, “your words are very important to me,” the other says, “I’m ignoring you.”
I use physical notebooks for ephemeral information (i.e. todo lists and ideas). The problem with digital is its convenience: it can grow infinitely without affecting you. There’s nothing to distinguish an old note and a recent one.
A notebook’s pages physically accumulate as they’re written on. It forces me to acknowledge them. If I need to write something new and must skip ten sheets before I find a blank one (I rip out and throw away pages as they’re done), it means there’s a fair amount of unrealised stuff that I haven’t gotten to. Time to reevaluate: read what’s in there and decide what still needs to be in there and what realistically has passed its expiration date of relevance/excitement/importance and should be trimmed.
I have a stack of Moleskine's small flip-up art collection sketch pads and take one with me most places. I have one of their music notebooks, but that was more aspirational to buy, though I've used it. The use cases include sequence diagrams for processes and code, product ideas, song lyrics, character sketches and story fragments, thoughts I wouldn't put into an electronic device, training diagrams, etc. I write and draw to think and reason things through, and the notebooks are essential to that.
Been filling notebooks for years while also keeping pretty meticulous digital notes. Physical is mostly personal or ideas (sometimes for work). Digital is mostly work.
I like to doodle and draw alongside note-taking and there's no substitute for analog there IMO. Plus, being able to write and not be on a device after a long day at work is a relief.
Lack of search can be an issue. But then I sometimes create indexes to things like book notes or stuff I'm learning and that is a pleasure in itself.
Two reasons. 1: you retain more information writing it down on physical paper. 2: I have never needed to search ancient notes, most last a couple weeks at the longest, and thanks to point #1 you know where things are in that notebook better than you’d guess.
Handwriting recognition is still very hit-or-miss. The best results I've had so far are by running the handwriting through the Google Cloud Vision API, and then asking ChatGPT to fix transcription mistakes. The problem with that is that effectively you are asking it to hallucinate.
It's great at producing something that sounds a lot like what I might have written, but I can't trust anything that it says, because it frequently hallucinates numbers, dates, people's names—the exact kind of thing that I take notes to have a good record of.
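For reference, the pipeline I'm describing is roughly the sketch below (Python with the requests library; the endpoints are the public ones for Cloud Vision and the OpenAI chat API, but the model name and prompt are just examples, and the cleanup step is exactly where the hallucinations sneak in):

    import base64
    import requests

    def ocr_handwriting(image_path, vision_key):
        """Send one page image to Google Cloud Vision's dense-text OCR."""
        with open(image_path, "rb") as f:
            content = base64.b64encode(f.read()).decode()
        body = {"requests": [{"image": {"content": content},
                              "features": [{"type": "DOCUMENT_TEXT_DETECTION"}]}]}
        r = requests.post(
            f"https://vision.googleapis.com/v1/images:annotate?key={vision_key}",
            json=body, timeout=60)
        r.raise_for_status()
        return r.json()["responses"][0].get("fullTextAnnotation", {}).get("text", "")

    def cleanup_with_llm(raw_text, openai_key, model="gpt-4o-mini"):
        """Ask a chat model to fix OCR mistakes - which is also where it can quietly
        invent names, dates and numbers that were never on the page."""
        r = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {openai_key}"},
            json={"model": model,
                  "messages": [
                      {"role": "system",
                       "content": "Fix obvious OCR errors. Do not add or change facts."},
                      {"role": "user", "content": raw_text}]},
            timeout=60)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

Even with the "do not change facts" instruction, you still have to diff the output against the scan for anything you actually care about.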
Banks' work assumes that AI exceeding human capabilities is inevitable, and the series explores how people might find meaning in life when ultimately everything can be done better by machines. For example, the protagonist in Player of Games gets enjoyment from playing board games, despite knowing that AI can win in every circumstance.
For all of the apocalyptic AI sci-fi that's out there, Banks' work stands out as a positive outcome for humanity (if you accept that AI acceleration is inevitable).
But I also think Banks is sympathetic to your viewpoint. For example, Horza, the protagonist in the first novel, Consider Phlebas, is notably anti-Culture. Horza sees the Culture as hedonists who are unable to take anything seriously, whose actions are ultimately meaningless without spiritual motivation. I think these were the questions that Banks was trying to raise.
I suppose it's interesting that in the Culture, human intelligence and artificial intelligence are consistently kept separate and distinct, even when it becomes possible to perfectly record a person's consciousness and execute it without a body within a virtual environment.
One could imagine Banks could have described Minds whose consciousness was originally derived from a human's, but extended beyond recognition with processing capabilities far in excess of what our biological brains can do. I guess as a story it's more believable that an AI could be what we'd call moral and good if it's explicitly non-human. Giving any human the kind of power and authority that a Mind has sounds like a recipe for disaster.
Banks did consider this. The Gzilt were a quite powerful race who had no AI. Instead they emulated groups of biological intelligences on faster hardware, in a sort of group mind type machine.
Yes, the problem is that from a narrative perspective a story about post-humans would be neither relatable nor comprehensible.
Personally I think the transhumanist evolution is a much more likely positive outcome than “humans stick around and befriend AIs”, of all the potential positive AGI scenarios.
Some sort of Renunciation (Butlerian Jihad, and/or totalitarian ban on genetic engineering) is the other big one, but it seems you’d need a near miss like Skynet or Dune’s timelines to get everybody to sign up to such a drastic Renunciation, and that is probably quite apocalyptic, so maybe doesn’t count as a “positive outcome”.
I don't see why post-humans can't be relatable even though they'd be very distant from our motivations.
Take Greg Egan's "Glory". I don't think we're told the Amalgam citizens in the story are in some sense human descendants but it seems reasonable to presume so. Our motives aren't quite like theirs, I don't think any living human would make those choices, but I have feelings about them anyway.
I haven’t read that one, will check it out. If we take his “Permutation City”, I think the character Peer is quite unrelatable, and only then because they give some human background. A story consisting only of creatures hacking their own reward functions makes motivations more alien than “not quite like ours” IMO.
I assume post-humans will be smarter and unlock new forms of cognition. For example BCI to connect directly to the Internet or other brains seems plausible. So in the same way that a blind person cannot relate to a sighted person on visual art, or an IQ 75 person is unlikely to be able to relate to an IQ 150 person on the elegance of some complex mathematical theorem, I assume there will be equivalent barriers.
But I think the first point around motivation hacking is the crux for me. I would assume post-humans will fundamentally change their desires (indeed I believe that conditional on there being far more technologically advanced post-humans, they almost certainly _must_ have removed much of the ape-mind, lest it force them into conflict with existential stakes.)
The Meatfucker acts as a vigilante and is unpopular because of the privacy invasions. The Zetetic Elench splintered off. The Culture's morals were tested in the Idiran war. They might not have greed as a driver because it's unnecessary but they do have freedom of choice so they're not exactly saints.
It can right now. This isn't the problem. The problem is the power budget and efficiency curve. "Self-contained power efficient AI with a long lasting power source" is actually several very difficult and entropy averse problems all rolled into one.
It's almost as if all the evolutionary challenges that make humans what we are will also have to be solved for this future to be remotely realizable. In which case, it's just a new form of species competition, between one species with sexual dimorphism and differentiation and one without. I know what I'd bet on.
> the series explores how people might find meaning in life when ultimately everything can be done better by machines.
Your comment reminds me of Nick Land's accelerationism theory, summarized here as follows:
> "The most essential point of Land’s philosophy is the identity of capitalism and artificial intelligence: they are one and the same thing apprehended from different temporal vantage points. What we understand as a market based economy is the chaotic adolescence of a future AI superintelligence," writes the author of the analysis. "According to Land, the true protagonist of history is not humanity but the capitalist system of which humans are just components. Cutting humans out of the techno-economic loop entirely will result in massive productivity gains for the system itself." [1]
Personally, I question whether the future holds any particular difference for the qualitative human experience. It seems to me that once a certain degree of material comfort is attained, coupled with basic freedoms of expression/religion/association/etc., then life is just what life is. Having great power or great wealth or great influence or great artistry is really just the same-old, same-old, over and over again. Capitalism already runs my life, is capitalism run by AIs any different?
Or Robin Hanson, a professional economist and kind of a Nick Land lite, who's published more recently. That's where the carbon robots expanding at 1/3rd the speed of light comes from.
I just want to add that I think you might be missing a component of that optimal-life idea. We often neglect to consider that in order to exercise freedom, one must have time in which to choose freely. I'd argue that a great deal of leisure, if not the complete abolition of work, would be a major prerequisite to reaching that optimal life.
Banks' Culture isn't capitalist in the slightest. It is however, very humanist.
If you want a vision of the future (multiple futures, at that) which differs from the liberal, humanist conception of man's destiny, Baxter's Xeelee sequence is a great contemporary. Baxter's ability to write a compelling human being is (in my opinion) very poor, but when it comes to hypothesizing about the future, he's a far more interesting author. Without spoilers, it's a series that's often outright disturbing. And it certainly is a very strong indictment of the self-centered narcissism of believing that the post-Enlightenment ideology of liberalism is anything but yet another stepping stone in an eternal evolution of human beings. The exceptionally alien circumstances that are detailed undermine the idea of a qualitative human experience entirely.
I think the contemporary focus on economics is itself a facet of modernism that will eventually disappear. Anything remotely involving the domain rarely shows up in Baxter's work. It's really hard to give a shit about it given the monumental scale and metaphysical nature of his writing.
> I think the contemporary focus on economics is itself a facet of modernism that will eventually disappear. Anything remotely involving the domain rarely shows up in Baxter's work. It's really hard to give a shit about it given the monumental scale and metaphysical nature of his writing.
I’m curious to check it out. But in terms of what I’m trying to say, I’m not making a point about economics, I’m making a point about the human experience. I haven’t read these books, but most sci-fi novels on a grand scale involve very large physical structures, for example. A sphere built around a star to collect all its energy, say. But not mentioned is that there’s Joe, making a sandwich, gazing out at the surface of the sphere, wondering what his entertainment options for the weekend might be.
In other words, I’m not persuaded that we are heading for transcendence. Stories from 3,000 years ago still resonate for us because life is just life. For the same reason, life extension doesn’t really seem that appealing either. 45 years in, I’m thinking that another 45 years is about all I could take.
The ending of Ring, particularly having everything contextualized after reading all the way to the end of the Destiny's Children sub-series, remains one of the most strikingly beautiful pieces I've ever seen a Sci-Fi author pull off.
Easily the best "hard" Sci-Fi I've read. Baxter's imaginination and grasp of the domains he writes about is phenomenal.
But OP's and Horza's viewpoints are the same strawman argument. The sci-fi premise is that superhuman AIs coexist with humans, who are essentially ants.
The correct question, then, is: what ought to be the best outcome for humans? And a benevolent coexistence where the Culture actually gives humans lots of space and autonomy (contrary to their misinformed and wrong view that the Culture takes away human autonomy) is indeed the most optimal solution. It is in fact in this setting that humans nevertheless retain their individual humanity instead of taking some transhumanist next step.
Their sailing videos are very inspiring—from what I remember, they sailed from Vancouver to Japan, then down to New Zealand and back to Vancouver over the course of a few years.
Among other things, it compares the Idirans—who have what we consider a more traditional, modern-day culture—with the Culture. For example:
"The war between the Idirans and the Culture is peculiarly asymmetrical, since the Culture is not an empire, or even a “polity” in any traditional sense of the term, it is simply a culture. It has no capital city, or even any “territory” in the conventional sense."
I also love Heath's criticism of Dune (I appreciate the series, but now can't help but notice how often sci-fi series use regressive social structures).
"In fact, modern science fiction writers have had so little to say about the evolution of culture and society that it has become a standard trope of the genre to imagine a technologically advanced future that contains archaic social structures. The most influential example of this is undoubtedly Frank Herbert’s Dune, which imagines an advanced galactic civilization, but where society is dominated by warring “houses,” organized as extended clans, all under the nominal authority of an “emperor.”"
On the spectrum of possibilities for how (or whether) we end up co-existing with AIs, anything like the Culture is definitely towards the positive end. Yes, it can look like the humans in the Culture are being kept like pets, but if any group or individuals want to leave the Culture, they get support and encouragement, which isn't very pet-like?
The Foreign Service Institute training is also full-time, 8+ hours a day, working with dedicated tutors in small groups. Basically, it's your job to learn the language.
This isn't taking a semester of high school language instruction or playing with Duolingo for six months. It's basically doing nothing else during working hours and maybe homework for over half a year in what would have probably been close to a $100K crash course.