This article seems to conflate strong type systems with functional programming, except in point 8. It makes sense why: OCaml and Haskell are functional and were early proponents of these type systems. But languages like Racket don’t have these type systems, and the article doesn’t do anything to explain why they are _also_ better for reliability.
Thank you for saying that. I regularly attend the International Conference on Functional Programming, which grew out of the LISP and Functional Programming conference. Except for the Scheme Workshop, which is the reason I attend, it might as well be called the International Conference on Static Types. Almost all of the benefits of functional programming come from functional programming itself, not from static types, but one would never get that impression from the papers presented there. The types are all that anyone talks about.
I get your point about ICFP drifting into “types, types, types.” I don’t think FP’s benefits come only from static typing: immutability, a pure-ish core with an imperative shell, and explicit effects matter a lot even in dynamic languages.
My angle was narrower: static types + ADTs improve the engineering loop (refactors, code review, test construction) by turning whole classes of mistakes into compiler errors. That’s not “what FP is”; it’s one very effective reliability layer that many FP ecosystems emphasize.
Static types and ADTs are orthogonal to being FP, as Rust clearly shows. But speaking in terms of FP when those are the things that matter to you is just wrong, since even non-FP languages now have ADTs, including mainstream languages like Java, Kotlin, Dart, C#, and more.
Even purity is not exclusive to FP; D and Nim also support separating pure from impure functions. And if you ask me, the reason not many other languages support it is that, in practice, it has been demonstrated again and again that it’s just not nearly as useful as you may think. Effects, as in Unison and Flix, generalize the concept to cover many more things than just purity and may perhaps prove more useful in general-purpose programming, but the jury is still out on this.
I worked through https://htdp.org (which uses untyped Racket), and funnily enough, that's what really got me thinking about type-driven development. The book gets you to think about and manually annotate the types coming in and out of functions. FP just makes it so natural to think about putting functions together and about the "type" of data that comes in and out, even if you're using a dynamically typed language.
You don't need a strong type system or even really ANY compile-time type system for this strategy to work! I use all these techniques in plain JS and I can still get the benefits of correct-by-construction code style just by freezing objects and failing fast.
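For concreteness, here's a minimal sketch of the style I mean (makeUser is a hypothetical example; the type annotations are only for readability, the runtime mechanics are plain JS):

```typescript
// A minimal sketch of "freeze and fail fast" (makeUser is hypothetical).
interface User {
  readonly id: string;
  readonly name: string;
}

function makeUser(id: string, name: string): User {
  // Fail fast: reject bad data at the boundary instead of deep inside the app.
  if (typeof id !== "string" || id.length === 0) {
    throw new Error("makeUser: id must be a non-empty string");
  }
  // Freeze so later mutation attempts throw (in strict mode) instead of
  // silently corrupting state far from the bug.
  return Object.freeze({ id, name });
}

const user = makeUser("u_1", "Ada");
// user.name = "Bob"; // would throw a TypeError at runtime in strict mode
```

Nothing here needs a compiler; the invariants are enforced the moment a bad value shows up.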
I agree. So I write tests. I use architecture to defend against the risk of super-rare code paths where I wouldn't rapidly notice if they broke. I dogfood so I find prod bugs before users do.
None of this seems that new. Even people who write TS code still write tests and still ship bugs and still have to think about good architecture patterns, like the ones in the linked post.
I'm not personally aware of any companies doing this in plain JS aside from my own (I am co-founder/CEO of a two-person startup). I really like working in plain JS. It feels malleable where TS code feels brittle, almost crystalline. Even though I don't have compile-time types there's still only a small handful of different shapes of objects in the core of my software (far fewer than the average TS codebase, I'd wager), and it shouldn't take long at all for people to learn the highly consistent naming conventions that tip you off to what type of data is being handled. The result is that I'd expect that it would only be a handful of days learning the mental model for the codebase before the average person would find it far easier to read the JS code as opposed to TS code, thanks to the lower amount of visual clutter.
I also ship code super fast. When I find bugs I just fix them on the spot. When I find variables named wrong, I just rename them. The result is that I often smash bugfixes, features, and cleanup together and have a messy git history, but on the flip side you'll never find bugs or naming deceptions that I've left sitting for years. If something is wrong and I can reproduce it (usually easy in functional code), the debugger and I are going to get to the bottom of it, and quickly. Always and only forward!
> […] it shouldn't take long at all for people to learn the highly consistent naming conventions that tip you off to what type of data is being handled.
I’ve used languages with an approach like this. The difference in what I’ve used is that you separate the conventional part from the rest of the name with a space (or maybe a colon), then only refer to the value by the non-conventional part for the rest of the scope. Then the language enforces this convention for all of my co-workers! It’s pretty neat.
Sure! Let’s say I want to enforce that a variable only ever holds an integer. Rather than put the conventional prefix and the name together, like this:
var intValue = 3;
…I separate the conventional prefix with a space:
int value = 3;
…so now my co-workers don’t need to remember the convention – it’s enforced by the language.
I wasn't talking about Hungarian notation. I meant more like if you see a variable named `user` or `activeUser` you know that it's going to contain a predictably-shaped data object that describes a user. E.g. it will always have a `user.id` property. I would never call a string-ish ID a user, then. I would call it `activeUserId` or `userId`, or just `id` if the distinction between those was already obvious from context... But that's very different from writing `strUserId`, which I never do: I try to make sure my names always convey semantic distinctions.
Mhm! Exactly! In the system those other languages use, once you see the variable’s declaration:
User activeUser
…you’ll always know that `activeUser` contains a User value – something that might have an `Id` property. And the convention is enforced by the language, so it’s easy to communicate. These semantic distinctions are very useful, I agree.
Haha I knew you'd say that. I'm not pretending there aren't advantages to strict systems of declared types. There are many! But my point is simple to the point of stupidity: there's just more stuff on screen when you have to write `User` twice. In this toy example writing the word twice looks trivial, but in a reasonably complex real example the difference will be far more noticeable.
I should add a few more things: much of how I got here was exposure to Facebook's culture. Move fast and break things. React with prop types. Redux. Immutable.js. I did UI there on internal tools for datacenter operators, and it was a drinking-from-the-firehose experience, with exposure to new programming philosophies, tools, and levels of abstraction and refactoring velocity beyond anything I had previously encountered. Problems which in other companies I had learned to assume would never be resolved would actually consistently get fixed! Well, at that time. This was before the algorithm was fully enshittified and before the disastrous technopolitical developments in the way Facebook and Facebook Messenger interact with each other.
Perhaps the most direct inspiration I took from there, though, was the wonderful "opaque types" feature that Flow supports (https://flow.org/en/docs/types/opaque-types/), which, for reasons known only to Hejlsberg and God, TypeScript has never adopted; thus most people are unfamiliar with that way of thinking.
Yes, I'm wondering whether opaque types would be difficult to implement in TypeScript. They really should be part of TypeScript if at all reasonably possible.
I'm not that familiar with the TS internals. They'd have to add a keyword to the language, which could break stuff. The smart move would be to reserve the `opaque` word a few versions in advance of introducing the feature that gives it a meaning.
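In the meantime, the usual workaround is the well-known "branded type" pattern, sketched here with hypothetical names:

```typescript
// A "branded" type: the common TypeScript stand-in for Flow-style opaque types.
// The brand exists only at the type level; at runtime a UserId is a plain string.
type UserId = string & { readonly __brand: "UserId" };

// The only sanctioned way to create a UserId; everything else must go through it.
function toUserId(raw: string): UserId {
  if (raw.length === 0) throw new Error("empty user id");
  return raw as UserId;
}

function lookupUser(id: UserId): string {
  return `user:${id}`;
}

const id = toUserId("u_123");
lookupUser(id);         // ok
// lookupUser("u_123"); // compile error: a plain string is not a UserId
```

It's weaker than true opaque types (the brand can be forged with a cast anywhere, not just in one module), which is part of why people keep asking for the real feature.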
I've seen it pointed out that the main point of functional programming is immutability, and that the benefits mostly flow from that. I haven't really learned much of any lisp dialect, but my (admittedly fuzzy) general perception is that this is also the preferred way to work in them, so my guess is that's where the benefit in reliability might come from.
Correct. If things are mutable, then in most languages there can be spooky action at a distance: something mutates a field of some other object, directly or indirectly through a chain of calls, and that changes how the thing behaves in other circumstances. This style of programming quickly becomes hard to fully grasp and leads to humans making many mistakes. Avoiding mutation therefore avoids these kinds of faults and mistakes.
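A tiny sketch of that failure mode, and the immutable alternative (hypothetical names):

```typescript
// "Spooky action at a distance": a helper quietly mutates its argument,
// changing behavior somewhere far away (hypothetical example).
const config = { retries: 3 };

function disableRetries(c: { retries: number }): void {
  c.retries = 0; // mutates the caller's object in place
}

disableRetries(config);
// config.retries is now 0, even though this code never assigned to config here.

// The immutable alternative: the original is untouched, changes are explicit.
const config2 = Object.freeze({ retries: 3 });
const noRetries = { ...config2, retries: 0 }; // new object, original unchanged
```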
Agreed, I conflated FP with “typed FP.” My claim is mainly about static types + ADTs/exhaustiveness improving refactors/review/tests. Racket can get FP benefits, but absent static typing you rely more on contracts/tests (or Typed Racket), which is a different reliability tradeoff.
The term "functional programming" is so ill-defined as to be effectively useless in any kind of serious conversation. I'm not aware of any broadly accepted consensus definition. Sometimes people want to use this category to talk about purity and control of side effects and use the term "functional programming" to refer to that. I would advocate the more targeted term "pure functional programming" for that definition. But in general I try to avoid the term altogether, and instead talk about specific language features / capabilities.
> The term "functional programming" is so ill-defined as to be effectively useless in any kind of serious conversation.
This is important. I threw my hands up and gave up during the height of the Haskell craze. You'd see people here saying things like LISP wasn't real FP because it didn't match their Haskell-colored expectations. Meanwhile for decades LISP was *the* canonical example of FP.
Similar to you, now I talk about specific patterns and concepts instead of calling a language functional. Also, as so many of these patterns & concepts have found their way into mainstream languages now, that becomes even more useful.
To add a grain of salt: some of the Lisp world is not functional; a lot of code is straight-up imperative/destructive. But then, yeah, a lot of Lisp culture tended toward applicative idioms and a function-oriented style, even without Haskell's static, explicit, generic type system.
Sure, but that's part of my point in agreeing that definitions of "functional programming" are muddy at best. If one were to go back to say 1990 and poll people to name the first "functional programming" language that comes to mind, I'd wager nearly all of them would say something like LISP or Scheme. It really wasn't until the late aughts/early teens when that started to shift.
Maybe FP should be explained as "rules, not values." In Scheme it's common to negate the function to be applied, or curry some expression, or partially compose/thread rules and logic to get a potential future value that hasn't done anything yet.
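The same idea carries over to any language with first-class functions; a sketch in TypeScript (hypothetical example):

```typescript
// "Rules, not values": build the computation out of smaller rules first;
// no value flows through until the very end.
const not = <T>(pred: (x: T) => boolean) => (x: T) => !pred(x);
const compose = <A, B, C>(f: (b: B) => C, g: (a: A) => B) => (a: A) => f(g(a));

const isEven = (n: number) => n % 2 === 0;
const isOdd = not(isEven);          // negate a rule

const inc = (n: number) => n + 1;
const double = (n: number) => n * 2;
const rule = compose(double, inc);  // a potential future value: nothing ran yet

// Only now does anything happen:
rule(3);  // 8
isOdd(3); // true
```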
I usually define functional programming as "how far away a language is from untyped lambda calculus". By that definition, different languages would fall in different parts of that spectrum.
Yeah, I know Rust isn’t everyone’s favorite, but I’d expect at least some awareness that we’ve seen a lot of reliability improvements from many of these ideas in a language that isn’t focused on FP. I ended up closing the tab when they had the example in TypeScript pretending the fix was result types rather than validation. That idea could be expressed as preferring that style, an argument that it makes oversights less likely, etc., but simply ignoring decades and decades of prior art suggests the author either isn’t very experienced or is mostly motivated by evangelism (e.g. COBOL didn’t suffer from the example problem before the first FP language existed, so a far more interesting discussion would demonstrate awareness of alternatives and explain why this one is better).
Sure, my point was simply that it’s not as simple as the author assumes. This is a common failure mode in FP advocacy and it’s disappointing because it usually means that a more interesting conversation doesn’t happen because most readers disengage.
I get why it reads like FP evangelism, but I don’t think it’s “ignoring decades of prior art.” I’m not claiming these ideas are exclusive to FP. I’m claiming FP ecosystems systematized a bundle of practices (ADT/state machines, exhaustiveness, immutability, explicit effects) that consistently reduce a specific failure mode: invalid state transitions and refactor breakage.
Rust is actually aligned with the point: it delivers major reliability wins via making invalid states harder to represent (enums, ownership/borrowing, pattern matching). That’s not “FP-first,” but it’s very compatible with functional style and the same invariants story.
If the TS example came off as “types instead of validation,” that’s on me to phrase better. The point wasn’t “types eliminate validation”; it’s “types make the shape explicit, so validation becomes harder to forget and easier to review.”
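To make that concrete, this is the kind of sketch I have in mind (hypothetical, simplified names):

```typescript
// A discriminated union as the return type: callers are forced to confront
// the failure case, and the compiler checks that every variant is handled.
type ParseResult =
  | { kind: "ok"; seconds: number }
  | { kind: "error"; message: string };

function parseDuration(input: string): ParseResult {
  const m = /^(\d+)s$/.exec(input);
  return m
    ? { kind: "ok", seconds: Number(m[1]) }
    : { kind: "error", message: `invalid duration: ${input}` };
}

function describe(r: ParseResult): string {
  // The switch must handle every variant; forgetting one is a compile error
  // thanks to the `never` assignment in the default branch.
  switch (r.kind) {
    case "ok": return `${r.seconds}s`;
    case "error": return r.message;
    default: { const _exhaustive: never = r; return _exhaustive; }
  }
}
```

The validation still happens; the type just makes it impossible to silently skip.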
I would keep in mind how much the title communicates your intentions on future posts. The conversation about preventing invalid states has to be somewhat inferred when it could have been explicitly stated, and it would be really useful to compare other approaches. E.g. the classic OOP style many people learned in school also avoids these problems, as would something like modern Python using Pydantic/msgspec, so it’d be useful to discuss differences in practice, especially with a larger scope, so people who don’t already agree with you can see how you came to that position.
For example, using the input parsing scenario, a Java 1.0 tutorial in 1995 would have said that you should create a TimeDuration class which parses the input and throws an exception when given an invalid value like “30s”. If you say that reliability requires FP, how would you respond when they point out that their code also prevents running with an invalid value? That discussion can be far more educational, especially because it might avoid derails around specific issues which are really just restating the given that JavaScript had lots of footgun opportunities for the unwary developer, even compared to some languages their grandmother might have used.
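For reference, that 1995-style class is easy to sketch; here's a TypeScript rendition (TimeDuration is hypothetical, and I'm assuming the scenario above where only whole-minute strings like "30m" are valid, so "30s" is rejected):

```typescript
// The parse-at-the-boundary OOP style: the constructor is private, so a
// TimeDuration can only exist if parsing succeeded (hypothetical example).
class TimeDuration {
  private constructor(readonly minutes: number) {}

  // Accepts only whole-minute strings like "30m"; anything else throws.
  static parse(input: string): TimeDuration {
    const m = /^(\d+)m$/.exec(input);
    if (m === null) throw new Error(`invalid duration: ${input}`);
    return new TimeDuration(Number(m[1]));
  }
}

const d = TimeDuration.parse("30m"); // d.minutes === 30
// TimeDuration.parse("30s");        // throws: invalid duration: 30s
```

This also prevents running with an invalid value, just via exceptions at construction time rather than a result type, which is exactly the comparison worth discussing.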
Once you accept Curry-Howard, untyped FP languages are hard to take seriously as a foundation for reliability. Curry-Howard changes the entire game. FP and strong types were clearly meant for each other.
Untyped FP languages can be productive, flexible, even elegant (I guess) but they are structurally incapable of expressing large classes of correctness claims that typed FP makes routine.
That doesn’t make them useless, just, you know. Inferior.
I was at Microsoft until July of this year until I left for an SF-based company (not AI though).
The two couldn’t be more different with regard to AI tool usage. At Microsoft, they had started penalizing you in perf if you didn’t use the AI tools, which were often subpar and which you had no choice in. At the new place, perf doesn’t care whether you use AI or not, just what you actually deliver. And, shocker, it turns out they actually spend a lot building and getting feedback on internal AI tooling, and so it gets a lot of use!
The Microsoft culture is a sort of toxic “get AI usage by forcing it down engineers’ throats,” vs. the new place’s “make it actually useful and win users” approach. The Microsoft approach builds resentment in the engineering base, but I’m convinced it’s the only way leadership there knows how to drive initiatives.
Microsoft is forcing the company to dogfood their own tools. They do this because they need the feedback so they can improve their tools, and they think these tools are a critical part of their future.
Presumably your new company isn't building AI tools, so they don't care what you use.
Imagine a developer in 1990s Microsoft saying "I want to use Borland C++ because it's better than the Microsoft IDE". Maybe it is, maybe it isn't, but that's not the point.
This is true, but it’s effective only if the dogfooders’ feedback is accepted and acted upon. Which is not the case; I can tell you this from first-hand experience (I currently work at Msft).
Also, unleashing immature tools on tech-savvy people and forcing them to use them is only asking for trouble when you haven’t allocated enough manpower to deal with the fallout (that is, an incessant downpour of improvement feedback). One cannot just force engineers to dogfood tools and then ignore them. This is precisely like the Win-8-era mania, only this time it’s infecting the whole company, not just a single org!
Not only are you discouraged from criticizing half-baked, manager-metric-led implementations, you’re deeply incentivized to openly praise them if you want to be considered for the next well-funded initiative.
People with fiefdoms don’t like criticism. Microsoft pays its dependent vassal companies to use its products; no users actually like or would choose the products (Teams? 365 Copilot? Azure?), and the whole enclosed ecosystem is pretty awful.
Count me as that weird user who would choose Azure any second over AWS. The integration and interface stability they offer are simply better. Teams sucks indeed, but as I don't know any less-sucky alternative I'll have to trust you, and with Copilot I never bothered much, so again I can't tell.
They do actually build internal tooling! The key is that it’s actually good enough that feedback stays limited, targeted, and quickly actionable. Microsoft’s internal tooling was immature enough that the general feedback you’d always have was “this is unusable,” which is something the teams building the tools could probably have figured out themselves before making the whole company spend time beta testing them.
The main point is that the tools need to be of a certain quality/maturity for dogfooding to be effective.
I keep telling the WinUI marketing team that instead of talking about how "great" doing XAML C++ is, they should actually buy a copy of C++ Builder.
Microsoft either doesn't care about feedback or doesn't have the engineering ability to act on it; otherwise Visual Studio and Microsoft Teams wouldn't be such terrible pieces of software despite tens of thousands of Microsoft employees using them daily.
In 2025, C++ Builder is still better than Visual C++, in what concerns doing Windows GUI development in C++, some things never change, and management keeps being blind to them.
Regarding dogfooding, Project Reunion was also a victim of the all-engines-on-AI push; now the damage is done and only the Windows team cares, because their jobs depend on using it.
Look, if you want the people to dig trenches with spoons, you can expect them to do it. But if all you're giving them is spoons, you're going to need to give them a lot of slack on the expected digging schedules.
Being forced to use a shit tool because <some other department somewhere in the company wants your feedback>, while your deadlines haven't been adjusted for all this wasted time, is not acceptable behaviour. It's the kind of authoritarian horseshit that's so often pushed by unproductive parasites onto people who do actual work.
Dogfooding is great and all, but if you're forcing your engineering staff to dogfood something, you should make sure you're in the same industry as your customers. I've always had a bit of respect for MSFT products in the "I'm a company with about 5 reasonable, but not stellar developers" space. Do I want to build an operating system with Visual Basic? No. Do I force C++ on our loading dock foreman who upskilled to a VB4 dev 'cause he knows the problem domain inside and out? Also no. MSFT traditionally attracted "above average" devs who had the support to work on big projects for a (comparatively) long time.
> At the new place, perf doesn’t care if you use AI or not- just what you actually deliver
I work at Google, and I am of the overall opinion that it doesn't matter what you deliver from an engineering perspective. I've seen launches that changed some behavior from opt-in to opt-out get lauded as worth engineering-years of investment. I've seen demos that were 1-2 years ahead of our current product performance get buried under bureaucracy and nitpicking while the public product languishes with nearly no usage. The point being, what you objectively deliver doesn't matter, but what ends up mattering is how the people in your orbit weave the narrative about what you built.
So if "leadership" wants something concretely done, they must mandate it in such a way that cuts through all the spin that each layer of bureaucracy adds before presenting it to the next layer of bureaucracy. And "leadership" isn't a single person, so you might describe leaders as individual vectors in a vector space, and a clear eigenvector in this space of leadership decisions in many companies is the vector of "increase employee usage of AI tools".
Hang around old Microsofties and you'll encounter a phrase: "The Deal." The Deal is this informal agreement: Microsoft doesn't pay amazingly but you're given the time to have work-life balance, you can be relatively assured that upper leadership gives a shit about the ICs, there's space for "... So I was thinking..." to become real "... and that's our next product" discussions and that it's okay to fall so long as you can get back up and keep walking afterwards.
The Deal is dead.
People fired for performance after a bad review their manager didn't give them. The constant slimming of orgs and the relentless gnawing at budgets. I watched a team go from reasonable to gutted because it got the short straw in "unregretted attrition" quotas.
AI is driving this, and I want to see the chat logs between executives and copilot. What sycophantic shit is it producing that is driving them to make horrible decisions?
The Deal died when Microsoft got on the layoff bandwagon in Q1 2023 for no good reason and became very aggressive with perf after that. If Microsoft is just as toxic and unstable as Meta, why not just work at Meta for double the money?
Funnily, Apple also has an unspoken "deal" (pay a bit low but treat really well) and they stuck to it even through the layoff era.
AI is busy quietly convincing every executive who uses it that they have no use for people to work out the details of their ideas anymore. It’s so frustrating to have these drive-by executives come into a space you’re working in, drop a 15-page deep-think report they got from a two-sentence prompt, and call that contributing. Bonus points if the report is from an AI platform your company hasn’t approved, so you as a line employee could get written up for it.
From the outside this is clearly visible in how Project Reunion crashed, C++/WinRT went into maintenance, VC++ lost steam on ISO compliance after boasting about C++20 and C11/17, and .NET focused on Aspire/Blazor and all things AI to the detriment of the rest...
Thankfully I am a technology mercenary, polyglot, and I use whatever the clients need, regardless of my point of view, but it is sad to see the human part behind those decisions being affected.
The only ones I can really think of are the cloud providers themselves- I was at Microsoft, and absolutely everything was in-house (often to our detriment).
> it exposes that it really was never about emissions or combustion or pollution, you either wanted to control people’s freedom of movement
This isn’t the problem. The real problem is that in dense cities, transporting everyone where they want to go via private vehicles just doesn’t work geometrically; see the traffic and parking needs that grow as cities grow when you assume private vehicle use only. You end up needing a more space-efficient way of moving people, namely public transit.
Public transit also doesn't scale. Germany, during the Covid years, introduced a cheap country-wide flat-fee ticket (~50 bucks per month, all you can ride) for public transit. That led to road traffic going down measurably. It also led to trains and busses being packed, with people traveling more and farther on them, leading to higher costs, degraded service, and packed bus stations and rail lines. Building more of those isn't possible geometrically either; cities are already packed. We are currently, at tremendous cost and effort, moving railway lines and stations underground, as with Stuttgart 21, for that reason.
The point is, moving people is inherently bad. Shifting from cars to public transit reduces the badness a little, but you still need infrastructure that scales as O(p * s * f), where p is the number of people, s is the average distance per journey, and f is the frequency of journeys. The scaling doesn't change one bit with public transit; you just get a different constant factor, which is irrelevant for scalability.
So the solution isn't public transit. It's the avoidance of any unnecessary travel, meaning we actually need something like a tax on non-home-office jobs and on stores you personally have to visit (as opposed to shopping online). We need better delivery infrastructure so people don't need to travel. We need close-to-home shopping options. Because moving goods instead of people scales logarithmically: the network of countrywide, regional, and local distribution centers, bigger shops, and smaller shops behaves like a tree.
> you just have a different constant factor which is irrelevant for scalability.
So then why bother with cities at all? High rises are just a different constant factor relative to 40 acre farms after all.
I feel like this is analogous to a case where someone says that an algorithm only differs by a constant factor but it turns out that because of that difference it hits the cache for 99% of use cases and as a result you see better than 100x speedups for all real world workloads.
Cities with cars obviously work up to some size. The sheer number of personal vehicle trips that a single rail line can replace is huge. And just to give you an idea of how large the scale factor here is, consider that you can pack at least 3 rail lines into the width of a typical two lane road, and that large cities commonly have 4 lane arterials.
If the political will existed to reallocate the space it could be done and the viable density would scale accordingly. On the extreme end we have Tokyo as a practical example, and even they are far from saturating all the available space for building rail lines.
> moving goods instead of people scales logarithmically
This is obviously false. There's some average parcel size, and an associated maximum capacity for a delivery van. Thus any given delivery run has a limit on the number of shipments it can service. Obviously that scale factor is significantly larger than the one for passenger rail versus cars, but it's still "just" a constant factor and thus irrelevant for scalability by your own logic.
In fact by your own logic I'm fairly certain that you will find that life as a whole is unscalable. Better nuke the planet I guess.
> So then why bother with cities at all? High rises are just a different constant factor relative to 40 acre farms after all.
Because of the logarithmic scaling of infrastructure (like supermarkets, doctors, hospitals), and because of the square root scaling of people-density per area vs. total travel distance to said infrastructure.
>> moving goods instead of people scales logarithmically
> This is obviously false. There's some average parcel size, and an associated maximum capacity for a delivery van.
You do move bananas by ship from continent to continent. Then you split the load up and move it by smaller ships up-river. Then you split it up and move it by truck to each city's distribution center. Then you split it up and move it by smaller truck to each store. Then people buy it.
Obviously logarithmic and the whole reason for ships and freight trains to exist, otherwise we would all get our tropical fruit in person by airplane.
And since we are on a CS-heavy site, talking about scalability: constant factors are irrelevant. That is what scalability means. It is the extrapolation to big numbers, where those constants no longer matter. Of course there might be a local equilibrium for sufficiently small numbers, but that is always temporary as humanity keeps growing.
I take it you've never examined the theoretical landscape of matrix multiplication algorithms?
The splitting you describe could be carried out just as easily with human transportation as with fruits and vegetables. I still object that it isn't logarithmic in either the space or trip requirement though. It's linear with one of those constant factors that you say are irrelevant. Just as you can never move a human using less than a human sized volume, you can never move a banana using less than a banana sized volume. Hence, linear.
But it can often do so more efficiently than just carrying one container at a time when they're empty - see this company that's solving the "return container" problem by folding up to 4 containers into the space of one for the return trip.
Raise the speed limits such that they're closer to the actual typical speed of traffic, which will reduce the dangers posed to (and by) the "I drive the speed limit" crowd.
To be honest, while I’m not a Tesla fanboy, I don’t think the Cybertruck deserves this hate: as a cyclist I feel worlds safer near a Cybertruck than near, e.g., an F-150 (though it of course weighs a lot), because of the sloped hood design.
Do I prefer to cycle next to Honda Fits? Of course, but the Cybertruck is better than a lot of the other truck alternatives.
The Brown CS curriculum has in the past few years started including “socially responsible computing” material across intro and non-intro level courses.
This is not quite right: a specification is not equivalent to writing software, and the code generator is not just a compiler. In fact, generating implementations from specifications is a pretty active area of research (a simpler problem is generating a configuration that satisfies some specification, "configuration synthesis").
In general, implementations can be vastly more complicated than even a complicated spec (e.g. by having to deal with real-world network failures, etc.), whereas a spec needs only to describe the expected behavior.
In this context, this is actually super useful, since defining the problem (writing a spec) is usually easier than solving the problem (writing an implementation); it's not just translating (compiling), and the engineer is now thinking at a higher level of abstraction (what do I want it to do vs. how do I do it).
Surely a well-written spec would include non-functional requirements like resilience and performance?
However, I agree that's the hard part. I can write a spec for finding the optimal solution to some combinatorial problem where the naive code is trivial (a simple recursive function, for example), but such a function would use near-infinite time and memory.
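A toy version of that gap, using Fibonacci as a hypothetical stand-in problem: the naive recursion is the spec, and the implementation is anything that meets it efficiently.

```typescript
// "Spec": a direct transcription of the definition -- exponential time.
function fibSpec(n: number): number {
  return n < 2 ? n : fibSpec(n - 1) + fibSpec(n - 2);
}

// "Implementation": same input/output behavior, linear time.
function fibImpl(n: number): number {
  let [a, b] = [0, 1];
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return a;
}

// The spec defines *what*; the implementation decides *how*.
// Both agree on every n, but fibSpec(50) is hopeless while fibImpl(50) is instant.
```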
In terms of the ML programme really being a compiler: isn't that in the end true? The ML model is a computer programme taking a spec as input and generating code as output. Sounds like a compiler to me.
I think the point of the AK post is to say the challenge is in the judging of solutions - not the bit in the middle.
So to take the writing-software problem: if we had already sorted the programme-validation problem, there wouldn't be any bugs right now, irrespective of how the code was generated.
The point was specifically that that obvious intuition is wrong, or at best incomplete and simplistic.
You haven't disproved this idea, merely re-stated the default obvious intuition that everyone is expected to have before being presented with this idea.
Their point is correct that defining a spec rigorously enough IS the actual engineering work.
A C or Go program is nothing but a spec which the compiler implements.
There are infinite ways to implement a given C expression in assembly, and doing that is engineering and requires a human to do it, but only once. The compiler doesn't invent how to do it every time the way a human would; the compiler author picked a way, and now the compiler does that every time.
And it gets more complex where there isn't just one way to do things but several and the compiler actually chooses from many methods best fit in different contexts, but all of that logic is also written by some engineer one time.
But now that IS what happens, the compiler does it.
A software engineer no longer writes in assembly, they write in c or go or whatever.
I say I want a function that accepts a couple of arguments and returns the result of a math formula, and it just happens. I have no idea how the machine actually implements it; I just wrote a line of algebra in a particular formal style. It could have come right out of a pure math textbook, and the valid C function definition syntax could just as well be pseudocode describing a pure math idea.
If you tell an AI, or a human programmer for that matter, what you want in a rigorous enough format that all questions are answered, such that it doesn't matter what language the programmer uses or how the programmer implements it, then you, my friend, have written the program, and are the programmer. The AI, or the human who translated that into some other language, was indeed just the compiler.
It doesn't matter that there are multiple ways to implement the idea.
It's true that one programmer writes a very inefficient loop that walks an entire array once for every element in the array, while another comes up with some more sophisticated index or vector or math trick approach, but that's not the definition of anything.
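A concrete instance of that contrast (hypothetical example); both functions meet the same spec, and which approach gets used is an engineering choice made once:

```typescript
// Spec: report whether the array contains any duplicate values.

// Walks the whole array once per element: O(n^2).
function hasDuplicatesNaive(xs: number[]): boolean {
  return xs.some((x, i) => xs.some((y, j) => i !== j && x === y));
}

// A "smarter" equivalent using a Set: O(n).
function hasDuplicatesFast(xs: number[]): boolean {
  return new Set(xs).size !== xs.length;
}
```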
There are both simple and sophisticated compilers. You can already, right now, feed the same C code into different compilers and get results that all work, but one is 100x faster than another, one uses 100x less RAM than another, etc.
If you give a high-level imprecise directive to an AI, you are not programming.
If you give a high-level precise directive to an AI, you are programming.
The language doesn't matter. What matters is what you express.
Other common modes of transport that lack fixed schedule, route, or stops:
- biking
- micromobility (scooter-share, etc.)
- walking
- dial-a-ride transit options
But they have other cons as well. You need good bike lane infrastructure or the confidence to take the entire lane, whereas almost everything is already built around the car, or increasingly being built around the car (in the case of the developing world beginning its nascent highway networks). You have to have fair weather or be able to pack gear like rain pants wherever you are going. You probably make use of the cargo capacity of your car once a week when you buy groceries and goods from stores that tend to size their products around that sort of trip interval. I ride my bike plenty, but honestly when I go to the grocery store three blocks away I usually take the car, because it's easier when I realize: oh crap, I need milk, a gallon of vinegar, paper towels, toilet paper, and olive oil, and that alone will overload the panniers and be nigh impossible to get on the bike, especially the paper products with their awkward bulk. I haven't used my panniers for groceries since I broke three eggs in a carton with them once. I either walk and grab a small handful of things or just take the car most times.