> I wonder from a modding perspective would it be better if all public methods are just the API users can call and they themselves create a way for mods to exist?
It's the way Vintage Story implemented modding. They developed the whole game as engine + mod API + a hooking API for stuff the mod API doesn't cover.
Then most of the gameplay is implemented as mods on top of the engine, using the API and hooking. Those tools are open source, with a central distribution point for mods, so servers can push the required mods (and updates to them) to clients as they join.
Marvellous and elegant design. It makes running a server with client-side mods a breeze, because mods are automatically pushed to the clients.
Though in the end, you can't really open all the interfaces and expect it to be stable without making some huge trade offs. When it works, it's extremely pleasing: some mods for Vintage Story that are made purely with the mod API can work across major game versions. Even the server/client version check is intentionally loose, as mismatched versions can still, for the most part, interact across most mechanics.
In practice, to balance API evolution against stability, not everything in the game is in the API, so you have to use their hooking API, and stuff that is not exposed tends to break much more often. Those mods require manual updates, just like in Minecraft (though not as bad, tbh. In Minecraft nowadays modders tend to support both the Fabric and NeoForge/Forge APIs, targeting each for at least a few major versions. In Vintage Story, you only gotta support one modding API, heh).
You are right! I totally forgot about Vintage Story, I only read about it briefly.
> ... you can't really open all the interfaces and expect it to be stable without making some huge trade offs.
Another game I often play with a huge open interface is Crusader Kings 3, and Paradox games in general. Most of the gameplay is implemented in the engine's own scripting language. But as you said, when the game gets a big update, most mods simply don't work anymore.
If community support dies down, many mods with a lot of work and craft in them don't get updated anymore and rot away as the game gets updates. Quite sad, actually.
That's why I also quite like Star Wars: Empire at War mods. The game doesn't get any updates anymore, so the API is essentially frozen and even old mods still work.
I don't think that's the kind of gotcha you imply it is. Of course radiation is somehow involved in everything; it's what keeps matter together. We also read off the measurements with our eyeballs, and even if you used some sort of braille system, touching things boils down to EM interaction. The primary probe here is gravitational, and any musings beyond that are of little value. I challenge you to even conceive of an experiment where your own objection wouldn't apply.
I've been using Niagara since its first days. Absolutely loving it and it keeps getting better year over year.
The only thing I still miss is being able to open the drawer by swiping up (fastscroll on letters is good enough, but it kinda sucks that it only handles Latin letters there, since like a third of my apps are in Cyrillic and thus stuck in the first category).
The UX on a foldable device is unmatched, especially after it added support for side-to-side widgets and widget stacking.
Another slight annoyance is that in modern Android, Quickstep is no longer a standard part of AOSP and depends on how the OEM implemented it, which means that on many devices (including my Mix Fold 2) you either lose gesture navigation in order to use Niagara (I get around it with Infinite Gestures + an OMS overlay to hide the navbar) or get broken animations when switching between apps and going home.
UPD: it also recently got a client-side implementation of Monet and built-in contextual variable icons, which work even on devices that don't have it as part of AOSP. It works wonders and I can't get enough of how good it looks.
And in general, I wish more launchers offered a fastscroller as an overlay option, so you can get to any app by tapping or flicking onto a letter in a single tap/swipe. Once you've experienced it, it's hard to go back to the app grid.
While it's good to have core technology as part of OS, it also makes it non-updateable separately from the OS itself, which slows the spread of developments.
That's my main gripe with Apple's approach to Swift and SwiftUI. Yes, the tech is getting better every day, but unless you target the latest OS version, you can't use the fresh new stuff until it's on enough devices around you. And that pretty much guarantees that no matter what Apple adds, you still have to wait a year or two until you can safely start using it.
In modern Android, Kotlin (and Compose) are also part of the system, yet apps don't rely on the system libs; each app ships the latest available runtime instead. It takes more space, but it lets developers target the latest available stack no matter what core OS the app runs on.
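To make that concrete, here's roughly what it looks like at the build level: a hypothetical fragment of an app module's build.gradle.kts (the version numbers are only illustrative), where the Kotlin/Compose runtime is an ordinary app-level dependency packaged into the APK rather than something provided by the OS.

dependencies {
    // The Compose runtime/UI ships inside the app, pinned by the app's own build:
    implementation(platform("androidx.compose:compose-bom:2024.02.00")) // illustrative version
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.material3:material3")
    // The Kotlin stdlib is likewise just a dependency (normally added by the Kotlin Gradle plugin):
    implementation("org.jetbrains.kotlin:kotlin-stdlib:2.0.0") // illustrative version
}

So an app built against the newest Compose can still run on old Android versions, within the library's own minSdk limits.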
I'm surprised that Apple didn't opt for a hybrid approach.
They control the OS. They also control Swift. So why not embed the Swift version that the app was built against, and then download that Swift version to the user's device when they download an app that uses it? Then:
- The app runs against the exact Swift version it was built against
- The app size is smaller because you're no longer shipping the Swift libraries with the app
- The impact on the user's device space is minimized since it only downloads each version of Swift once (to be shared by all apps that use that version), and only on demand.
- If a new version of Swift comes out before an OS upgrade, simply add it to the list. It gets downloaded the same as the rest.
They could even add some predictive Swift downloading for the most popular versions of Swift to avoid unnecessary delays downloading it.
I think you’re forgetting about Apple’s frameworks, and the fact that _they_ want to use Swift. Their code runs inside your address space, and I think you’ll need a more complicated scheme to solve those problems.
I guess the problem then is that you end up with one copy of Swift installed for more or less every Swift version that's ever been released, which doesn't seem ideal from a space perspective.
MS does this because they aren’t in the same business as Apple.
Apple is in the device business. They make most of their revenue by selling you a new device.
A 10 year old Mac is basically a brick unless you want to mess with OpenCore Legacy.
The solution to this problem is to target a newer OS like Apple wants you to. Their users are going to buy a new system anyway, they’re the most affluent segment of the PC market.
> Their users are going to buy a new system anyway, they’re the most affluent segment of the PC market.
I thought Apple users were more likely to buy/own used devices than PC users were? I'm not fully caught-up on the statistics, but I'd assume that's still true.
Meanwhile, PC users plug a new disk into their desktop, or replace the hard disk in their laptop (plenty of options are still available where the disk, memory and battery can be exchanged).
Still doing weekly live DJ sets with my MacBook from 2013, editing in Logic, researching music online, listening to Spotify… Except for the battery, there is nothing wrong with this more-than-10-year-old device.
So you’ve already lost feature updates and most security updates; you're 3 major releases behind.
I’ve got a 2012 Mac mini and it’s limping along with the OpenCore Legacy patcher. I got the kernel panics to stop but they came back with a recent update. I’m gonna sell the thing and probably switch those duties to a Linux server.
That should in theory be possible, yeah, though I can't imagine a great way of doing it. Do you want to add a bunch of complexity to the system's dynamic linker to make it understand "base + binary patch" dynamic libraries?
In any case, maybe you can add heaps of complexity to core OS things and save some disk space; but you still need the full patched dynamic library in memory when the process is running, so at the very least you'll end up with bloat from lots of versions of the dynamic libraries loaded in memory when processes with different versions are running...
Maybe you could tackle both of those problems by storing a base version of the dylibs and then having other dylibs which provide replacements only for the symbols that have changed between versions... but this would severely limit the kind of thing you can do without basically having to override all symbols. And automating this process would be hard, since compiler upgrades could cause small codegen changes to a bunch of symbols whose behavior hasn't changed, and you wouldn't want to ship overrides for those.
In the end, while I'm sure there are things you could do to make this work (Apple has some talented engineers), I also understand why they wouldn't.
I think this tight coupling between the language and the platform compromised a very promising language. Swift is one of the few, if not the only, modern languages that, at the same time, has excellent performance (due to AOT compilation and optimization, deterministic garbage collection via ARC, etc.), has modern safety features (an algebraic nil/Optional type as opposed to NULL, bounds checking, etc.), and is relatively easy to learn and become productive in, perhaps on par with Python/JavaScript for the core language.
I don't think there's something else in the "general purpose languages with substantial real-world use" camp that touches on those 3 points quite like Swift does.
On the other hand, non-Apple developers have good reason to avoid Apple, due to the extreme anti-competitive behavior. It's the C# story all over again.
Its performance is unfortunately very far from "near Rust". I don't know if that's a result of one virtual dispatch too many or upfront overhead of its ARC implementation, but Swift almost always underperforms on microbenchmarks, despite expectations.
If there's a deep dive on this, I'd love to read it. Could one of the possible reasons be targeting few-core systems and providing as deterministic memory usage as possible to fit into RAM on iOS devices without putting the burden on the programmer?
I generally feel that for a language like Java/C#, which Swift is, you really need a JIT and a tracing, moving GC to get optimal performance. Apple has pushed the COM-like model of static code generation and non-moving reference-counted GC about as far as it can go at this point (impressively far--the compiler heroics in Swift are incredible), and it still can't quite make it to Java/C#, which end up having a simpler implementation than that of Swift in the end. The fact is that the ability to dynamically observe the behavior of the program and recompile with optimizations on the fly is just too powerful to give up.
Perhaps aggressive PGO could help to close some of the gap. The problem is that PGO requires effort on the part of developers to write comprehensive test cases and it's not clear how to scale that workflow. Large companies can write representative test cases and scale PGO on their performance-sensitive services, but your average iOS app developer won't be willing to do that.
Initially I was kind of disappointed by how AOT evolved on Android versus Windows Phone; then I came to realise Google was actually right.
Whereas Windows Phone would use Windows Store to AOT compile the application, Android would AOT on device.
Thus initially, it felt like using the tiny phones for that would be a bad decision, and it was, as the JIT was reintroduced 2 versions later (Android 7).
However, it turned into a mix and match of all modes: an interpreter hand-written in assembly for quick startup, a JIT with PGO data gathering, an AOT compiler with a feedback loop from the PGO data, and later on, sharing of PGO data across devices via Play Store services.
This mix of JIT/AOT with PGO shared across everyone brings about the best execution profile a given application will ever get, still allows reflection and dynamic loading to be supported, and the AOT compiler toolchain has all the time in the world to compile in the background.
It's most likely just reference counting and the way abstractions work in Swift (dynamic dispatch?). In particular, it still loses to C# even if you use AOT for the latter, especially in multi-threaded scenarios.
HotSpot C2 and .NET Dynamic PGO-optimized compilations first and foremost help to devirtualize heavy abstractions and inline methods that are unprofitable to inline unconditionally under JIT constraints, with C2 probably doing more heavy lifting because JVM defaults to virtual calls and .NET defaults to non-virtual.
With that said, I am not aware of any comprehensive benchmarking suites that would explore in-depth differences between these languages/platforms for writing a sample yet complex application and my feedback stems mostly from microbenchmark-ish workloads e.g. [0][1].
Performance aside, I do want to compliment Swift for being a pleasant language to program in if you have C# and Rust experience.
On the case of Java, it isn't only HotSpot, there are several other options, and in the case of OpenJ9 and Azul, cloud JIT also plays a role for cloud workloads.
I also like Swift, if anything it helped to bring back the pressure that AOT compilation also matters.
The performance difference is a small constant factor, perhaps 1.05x, perhaps 3x depending on the workload. If you are writing kernels, high performance graphics, signal processing, numeric analysis - sure, that's significant.
For the typical application though, it's fast enough: you get the same order-of-magnitude performance as C++ or Rust with an almost Python-like mental load. As the success of other dog-slow languages shows, this is a major selling point.
I can confirm that one big reason why Apple went with reference counting in Swift is because they like it when garbage is freed right away. This lets them get away with smaller heaps than they'd need to get comparable performance with tracing garbage collection. This does slow down execution somewhat; the overhead of updating all those reference counts isn't terrible, but it is significant. It's just a price they consider to be well worth paying.
Partially; you need to rely on the community for GUI stuff (Avalonia and Uno), old Microsoft still pushes VS/Windows as the best experience, and anyone else who wants a VS-like experience has to buy Rider.
Yes, there is VS Code which, besides being Electron-based, Microsoft is quite open will never achieve feature parity with VS.
With regards to UI there is Microsoft’s MAUI, which I personally prefer over Avalonia. I love the single-project approach of MAUI. I think Avalonia also relies on MAUI controls to some extent (I seem to recall a <UseMaui /> project setting in Avalonia projects).
MAUI doesn't count if "supports GNU/Linux" is part of being considered FOSS proper, and on macOS they took the shortcut of using Mac Catalyst instead of macOS UI APIs.
RC is slower than a modern garbage collector; ARC (if the A means it requires an atomic increment/decrement) significantly so.
I’m not saying that it is a bad choice; it is probably a good one in the case of battery-powered machines with small RAM, but I think tracing GCs get a bad rap for no good reason.
Additionally, people comparing ARC to Objective-C's conservative GC in the replies don't seem to understand that (1) refcounting is a form of GC, often times inefficient compared to a mark-and-sweep GC, and (2) conservative GCs are quite limited and Apple's implementation was pretty bad compared to other implementations.
Objective-C objects are basically all a struct objc_class* under the hood, and conservative GCs in general cannot distinguish whether a given word is a pointer. Even worse, for a conservative GC to determine whether a word points into a heap-allocated block, it has to perform a lengthy, expensive scan of the entire heap. It also doesn't help that Apple decided to kickstart the GC if your messages began with "re" (the prefix for "retain" and "release" messages, which were used all the time before ARC came around). So at one point in time, you were able to marginally boost performance of a garbage collected Objective-C application by avoiding messages beginning with "re"!
But you are right about memory usage and battery ... this is why iOS devices require less memory than Android devices for comparable performance (or better performance in some cases).
Indeed, in microbenchmarks the only time you see Swift faster than Java, C#, or Go is when bounds checking is turned off. It is not a very performant language. I do like the syntax and semantics though.
What Apple calls "ARC", the rest of the world simply calls "RC". Unlike in Objective-C, the vast majority of RC implementations do not need the developer to modify refcounts by hand. It was already "automatic", so to speak.
Moreover, ARC itself indeed modifies refcounts atomically. If it didn't, you would not be able to reliably determine the liveness of an object shared between threads. Now ask yourself whether atomically updating dozens of integers is faster than flipping a bit across a table of pointers.
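To make the cost concrete, here's a minimal sketch of the idea in Kotlin using a plain AtomicInteger (purely illustrative, not Swift's actual runtime): every time ownership is shared or dropped you pay an atomic read-modify-write, in exchange for the object being freed deterministically the moment the count hits zero.

import java.util.concurrent.atomic.AtomicInteger

// Illustrative refcounted handle: retain/release each do an atomic RMW.
class RefCounted<T : Any>(resource: T) {
    private var resource: T? = resource
    private val count = AtomicInteger(1)

    fun retain(): RefCounted<T> {
        count.incrementAndGet()   // paid on every new owner, even across threads
        return this
    }

    fun release() {
        if (count.decrementAndGet() == 0) {
            resource = null       // deterministic "free" as soon as the last owner lets go
        }
    }
}

fun main() {
    val handle = RefCounted(ByteArray(1 shl 20))
    handle.retain()    // two owners
    handle.release()   // back to one
    handle.release()   // count hits zero; the buffer becomes collectible immediately
}

A tracing collector skips all of that per-reference bookkeeping and instead pays for periodic heap scans, which is exactly the trade-off being argued about here.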
Basically Apple took Objective-C's GC failure to deal with C semantics, picked COM's approach to smart pointers, and turned it into a marketing message called ARC, aimed at people who had never dealt with this kind of stuff before.
ARC can stand for multiple things and is more of a marketing name here than anything. The relevant garbage collection algorithm is called reference counting - and depending on whether it is single-threaded or has to work across multiple threads, it can have quite a big overhead. Also, ObjC was already ref-counted before, AFAIK.
Yes; everyone who points that out usually has no idea that Objective-C's GC failed due to C's semantics making it quite prone to crashes, and that automating Cocoa's retain/release calls was a much easier and safer approach than trying to make C code work sensibly beyond what a conservative tracing GC will ever be able to offer while dealing with C pointers all over the place.
As others have pointed out, there’s some tradeoffs here. One of them I’m not seeing mentioned is better forward compatibility. As a user, when Apple adds new features like built-in photo OCR, updated UI elements, better navigation patterns, etc., and I update iOS, every SwiftUI app I have gets those features without the developer doing anything.
> every SwiftUI app I have gets those features without the developer doing anything.
I'm not sure that's right. The developer does a ton -- it just feels automatic to you. All your SwiftUI apps are not going to use Metal shaders automatically just because they're now so neatly supported in iOS 17. And every update to Swift/SwiftUI comes with deprecations, and those need to be addressed (sooner rather than later), including adding #available checks to cover all the bases.
The real benefit is that developers can add new functionality or write simpler code than they could before. I could be wrong, but the stuff you get "automatically" is minimal IMHO. It's just that developers start targeting the newest iOS version during the beta so that by the time users upgrade, apps can be ready.
It’s a little bit of column A and a little bit of column B.
Sure, as a dev, there’s plenty to do and test and update, but there’s also heaps I get for free, especially when following Apple’s “best practices.”
A simple example was a year ago when Apple tweaked some UI elements to look sleeker and changed default styles (e.g., lists, pickers, etc.).
My SwiftUI app and other SwiftUI apps compiled against the then-current iOS SDK immediately showed the new UI elements when run on the latest iOS beta. Didn’t even require me to recompile.
Stuff like that is elegant and can only be done when the libraries are included in the OS.
The flip side is, of course, that if you, for some reason, have a particular thing in mind and you don’t take precautions to lock it down in your code, it’ll stop looking that way when changes are made in the next iOS.
But I guess that’s what beta testing is for, and I’ve yet to come across a freebie that I didn’t like, though I’ll concede that it’ll vary from dev to dev.
I think I'd rather be sure my code behaves like it did when I compiled it than have unexpected side effects at each OS release.
If all it takes to get the new features is to recompile & deploy a new version of my code, I'm fine with that.
On the other hand, from a user's perspective, who knows when that recompile will happen in the case of devs who are less attentive, too busy, or only doing this app thing as a side hobby.
Sure, but the only case where I see it being really useful is for unmaintained apps. At which point you can be sure the app is going to break no matter what in the not-too-distant future.
One of the major benefits of software is that you do _not_ need to re-create it if it already exists and solves your problem.
An unmaintained app that solves a specific problem is never going to break by itself “in the not-too-distant future”. Only if its environment changes so much that it cannot be run anymore (including security fixes missing from the app) does this happen.
I recall a firm making good money while using their internally custom-built software for MS DOS with no way to change it whatsoever -- the software firm that wrote it was probably already long out of business, source code was not available -- and that was in 2019 which was already long past the days of MS DOS.
I think it is worthwhile for OSes and other core software infrastructure to support running even unmaintained apps as much as possible, because it reduces the need to rewrite (or overhaul) programs purely due to a lack of maintenance of the existing ones.
Not caring about this is IMHO accepting to waste a huge portion of the advantages that software gives us compared to other technology (software by itself never breaks from physical defects or continuous use, does not stain, etc.).
It’s also an insidious way to make perfectly good hardware obsolete. As Apple upgrades the OS, they slowly drop support for older hardware. This is usually justifiable, that’s just how OSs work for a variety of reasons.
However it also means that if all apps have to target the latest OS just to get some UI features, they will quickly end up dropping support for older hardware as well.
If you want a good example of this, check out the download page for Calibre, a simple app that helps to manage and convert eBooks.
Apple provides longer support windows than most android manufacturers. If you don’t need the latest features then there’s nothing forcing you to throw away a working device.
But there is pressure forcing you to abandon functioning hardware, that’s my whole point. Obviously nobody should expect OS updates forever, but third-party software stops supporting old hardware that it would run fine on, because of the way the runtime tightly couples with the OS.
The tradeoff is that the app runs faster, looks better, works better -- in quite the indirect way.
Now that the developers of the core part don't need to spend time on compatibility -- or just don't want to make the choice of being a runtime dependency -- they can spend time on other things instead.
This seems like a net negative at a glance: on the surface it means the apps are less compatible, so the burden is forced onto the older iterations. In practice, since each iteration has to worry about a lot less, the older iterations are _also_ a lot better.
It is of no surprise to me these Apple or Apple-like systems tend to be better overall, as opposed to the other philosophy of Android.
It leaks into all the levels. In a Java app, it is usual to see a deprecation warning on something that keeps working and is still maintained, and someone pays for that. The negative side is that there's then no reason to get rid of said dependency, either.
My point is that lowering the maintenance cost of _any_ app, or of systems in general, leaves room for improvement in all the other areas, as long as you don't fall behind. If you are allowed to fall behind, you can afford to; if not, the end result is better given enough time.
> It is of no surprise to me these Apple or Apple-like systems tend to be better overall
This might have been true in the past, but it's been getting worse over the last decade.
For instance: the new parts of macOS that are written in Swift seem to be mostly inferior to the parts they replaced (see for instance the new Settings window written in SwiftUI, which UX-wise is a joke compared to the old one, even though the old Settings window wasn't all that great either - case in point: try adding two DNS servers; searching for 'DNS server' only allows adding one item, then the DNS server panel closes and cannot be opened again without repeating the entire search - no idea how this mess made it through QA).
If Swift is so much better than ObjC, then we should start seeing improvements as users, but that doesn't seem to happen, instead things are getting worse in new OS versions. Why is that?
> The new parts in macOS that are written in Swift seem to be mostly inferior to the parts they replaced (see for instance the new settings windows written in SwiftUI, which UX wise is a joke compared to the old one
Swift is a programming language. SwiftUI is a UI framework. The programming language doesn’t dictate UX. The new Settings application doesn’t have worse UX because of its programming language.
> If Swift is so much better than ObjC, then we should start seeing improvements as users, but that doesn't seem to happen, instead things are getting worse in new OS versions. Why is that?
Because Apple are institutionally incapable of writing software at a sustainable pace and things gradually get worse and worse until somebody high up enough at Apple gets fed up and halts development to catch up with all the quality issues. This isn’t anything new to Swift; they took two years off from feature development to release Snow Leopard with “zero new features” because things had gotten too bad, which happened years before Swift. They are just far enough along in the current cycle that these problems are mounting up again.
There were many significant changes to the underlying technologies of Snow Leopard. Moreover, Snow Leopard was not, despite the common misconception, a "bug fix release". Mac OS X 10.6.0 was vastly buggier than Mac OS X 10.5.8.
> Swift is a programming language. SwiftUI is a UI framework. The programming language doesn’t dictate UX. The new Settings application doesn’t have worse UX because of its programming language.
Swift the language strongly informed SwiftUI, which in turn strongly informed the applications written in it. The path of least resistance defines the most likely implementation. If I have to go the extra mile to do something, I probably will not, so worse UX (by some metric) is a direct consequence of that constraint.
The weirdest thing about System Settings is that SwiftUI already supports much more Mac-like idioms. They deliberately chose to use the odd-looking iOS-style switches, bizarre label alignment, and weird unique controls. While also keeping the annoying limitations of the old System Preferences app, such as not being able to resize the window.
I assume that Apple does not have traditional QA that tries to break flows that aren’t the new hotness. The amount of random UX breakage of boring old OS feature is quite large. Or maybe Apple doesn’t have a team empowered to fix the bugs?
To be somewhat fair to Apple, at least they try to keep settings unified. Windows is an utter mess.
That Settings window... whilst I don't really think SwiftUI has anything to do with it... it's just so awful, lifted right out of iOS, where it is also just awful. As an Android user, I don't understand how people put up with that app.
The issue has nothing to do with programming languages or UI toolkits; it's just that before, there were more people with more attention to detail, whereas now the management is so broken that there is no QA and no time to fix things.
A good operating system UI framework should enforce the operating system's UX standards though, and make it hard to create a UI which doesn't conform to the rules.
But yeah, in the end, software quality needs to be tackled on the organizational level.
It's been a gradual process. Look at the Music app and how it has continuously gotten worse and buggier over the years, even without Swift and SwiftUI.
You can't blame SwiftUI for that.
I see the whole thing a bit more holistically. Swift and SwiftUI are both symptoms of the more general malaise, and then contribute back to it.
We had this in hardware, with machines getting worse and worse and Apple getting more and more arrogant about how perfect they were. In hardware, they had their "Come to Jesus" moment, got rid of Jonathan Ive (who had done great things for Apple, but seemed to be getting high on his own fumes), pragmatically fixed what was wrong and did the ARM transition.
With software, they are still high on their own fumes. The software is getting worse and worse at every level, and they keep telling us and apparently themselves how much better it is getting all the time. Completely delusional.
> it also makes it non-updateable separately from the OS itself, which slows the spread of developments.
Hasn't it always been this way with native desktop applications using the OS vendor's UI toolkit? What I hear some folks describe as "the old days" of Windows, Mac, etc.
I don’t know about tech getting better every day. If I look at what Apple itself is able to do with SwiftUI, especially compared to Cocoa, for example with the Settings app or Journal (which I assume is SwiftUI, I don’t know though)… it’s kinda pathetic.
> While it's good to have core technology as part of OS, it also makes it non-updateable separately from the OS itself, which slows the spread of developments.
I think this is a great way to slow down unconditional hype-based adoption of new technologies, imo; it gives time to make training materials. It's probably why Apple doesn't feel the need to make good tutorials, because there's an artificial delay between new features and their actual practical use.
While the philosophy is good, Java makes very questionable syntactic choices when it adopts said features.
The sheer slowness of how Java develops means you have to wait 5 years to get something other languages already have, only to get it in the most aesthetically unpleasant way.
It seems like the language devs add unnecessary verbosity whenever they can.
Java adopted lambdas after many other languages, yet still decided that allowing a last-parameter closure {} to move outside the () was too radical, even though it's a much more visually pleasant choice and quite common in other languages.
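For comparison, this is what that looks like in Kotlin (retry here is a made-up helper purely to show the syntax):

// Hypothetical helper, just to illustrate the trailing-lambda syntax.
fun <T> retry(times: Int, block: () -> T): T {
    var last: Throwable? = null
    repeat(times) {
        try { return block() } catch (t: Throwable) { last = t }
    }
    throw last ?: IllegalStateException("retry gave up")
}

fun main() {
    // Java keeps the lambda inside the argument list: retry(3, () -> load());
    // Kotlin lets the last lambda argument move outside the parentheses:
    val config = retry(3) { "loaded" }
    println(config)
}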
Default methods in interfaces instead of static extension methods. This one is controversial, as extension methods were not really common when Java came up with this, but the end result does not look impressive.
Same with their new sealed classes. They just chose the most verbose version they came up with.
Some inconsistency of choices also kinda baffles me. First they added support for skipping the generic type inside <> during instantiation, only to later introduce local vars that require you to mention the generic type within the same <>. Now you either always spell out the generic type in full, or embrace the inconsistency and skip it when the type is mentioned on the left but write it out when using local vars. Or don't use local vars and keep a consistent code style with increased verbosity.
Java is still a good language, but its syntax feels dated. TBH, I have no idea why anyone would choose Java in 2023 when there's Kotlin, which not only fully covers the same functionality (okay, no pattern matching, and no Loom/Valhalla until Java merges them), but does so while being much more concise and readable.
I'd choose Java over Kotlin in 2023, in fact I'd do so most years. I'll pretty much take conservative language design over developer comfort every day of the week.
But you do not have to choose Java over Kotlin - you can use them both in the same project without any side effects.
This was by design for Kotlin, and it's probably its smartest/best feature. All existing Java code/libs just work in Kotlin. Conversely, most Kotlin code/libs just work in Java, although some care needs to be taken there.
Kotlin is amazing to use and read. For most things, the Kotlin version is easier to read and comprehend than the Java version in my experience.
As Java adds language features, Kotlin either gets them for free or already had them (and now can use native features instead of their own custom features).
This is less and less true as Java evolves. For example, Java streams expect you to use Optionals to represent absence, which means they interoperate awkwardly with Kotlin, which expects you to use nullable types to represent absence. Another example: Kotlin has gone through multiple rounds of different ways of doing async, none of which is compatible with Java's new fiber implementation, so they'll either have to go through yet another incompatible rewrite of how they do async or become more incompatible with future Java libraries.
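To make the first point concrete, a small sketch (the sample values and filtering condition are arbitrary):

fun main() {
    val names = listOf("alpha", "beta")

    // Going through the Java stream API surfaces an Optional, which the Kotlin caller then unwraps...
    val viaStream: String? = names.stream()
        .filter { it.startsWith("a") }
        .findFirst()
        .orElse(null)

    // ...while idiomatic Kotlin models absence directly with a nullable type.
    val viaKotlin: String? = names.firstOrNull { it.startsWith("a") }

    println(viaStream == viaKotlin)
}

Neither side is broken, but every boundary between the two styles needs a little adapter like orElse(null), which is the friction being described.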
Subjectively, a lot of Kotlin features are implemented in a kind of ad-hoc way because they're designed to be as easy to use up-front as possible, at the cost of consistency. So the language has a kind of "technical debt" - it's hard to evolve it in the future. Add in the fact that the language lacks the higher-level abstraction facilities that would let you work around these incompatibilities (e.g. Scala has its own Option that's different from Java Optional - but this isn't a big problem because you can write a generic function that works on both. But you can't write a function Kotlin that works on both Java Optionals and Kotlin nullable types), and I'm really skeptical about its future.
Personally, I find Java's Optional to be kludgy and annoying to use.
An extension value clears this up in my Kotlin code, something like:
val <T> Optional<T>.value: T? get() = orElse(null)
Allows you to do:
repository
.findByFoo(bar)
.value
?.doSomething()
Slightly ugly, but it allows you to gracefully unwrap Java Optionals into something Kotlin understands.
Regarding Project Loom and coroutines - I don't see why Loom won't work in Kotlin codebases as-is. The change would be made in the coroutines library, not in user codebases.
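And plain virtual threads already work from Kotlin today without touching coroutines at all; a minimal sketch (assuming JDK 21+):

fun main() {
    // Loom virtual threads are just a JDK API, so Kotlin code can use them directly.
    val t = Thread.ofVirtual().start {
        println("hello from ${Thread.currentThread()}")
    }
    t.join()
}

The open question is only how kotlinx.coroutines dispatchers end up layered on top of them, which is the library-side change mentioned above.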
> An extension value clears this up in my Kotlin code, something like:
You can convert at a boundary, which is fine if you have a line between Java and Kotlin parts of your codebase. But you can't seamlessly mix Java and Kotlin and use the same functions with both, which is what some Kotlin advocates try to claim.
> Regarding Project Loom and coroutines - I don't see why Loom won't work in Kotlin codebases as-is. The change would be made in the coroutines library, not in user codebases.
I would bet they'll need incompatible changes, because there are subtle differences in the models. (Or they could keep the same API but with subtly different thread safety guarantees - but that would be a whole lot worse, breaking existing code in nondeterministic ways).
Kotlin adds JetBrains only as IDE, additional build plugins, an ecosystem of Kotlin libraries for more idiomatic code, stack traces completely unrelated to Kotlin as the JVM only understands Java, and it needs plenty of boilerplate to emulate coroutines and functions in JVM bytecode.
Other than Android, there are no reasons for additional complexity in development tooling.
Most of your points are not really valid if you understand Kotlin. It's not different than understanding Java really...
> Kotlin adds JetBrains only as IDE
Not true, you can use any IDE you want. Of course IntelliJ is the "blessed" IDE, but really, Kotlin is just a bunch of libs, any IDE will work.
> additional build plugins
I don't see why this would matter. Any non-trivial build is going to use a bunch of plugins.
> an ecosystem of Kotlin libraries for more idiomatic code
Which are optional.
> stack traces completely unrelated to Kotlin as the JVM only understands Java
I don't know what you mean. The stack traces are from bytecode, which Kotlin compiles to just like Java. The stacks are identical...
> needs plenty of boilerplate to emulate coroutines and functions in JVM bytecode
You do not write the boilerplate though. That's the difference. Of course all higher-level languages with higher-order functions are going to suffer from this same issue. It's abstractions all the way down...
You probably already use Kotlin and don't even know it. The popular okHttp library from Square is Kotlin - but you'd never know that if you just used it in your Java project.
> Not true, you can use any IDE you want. Of course IntelliJ is the "blessed" IDE, but really, Kotlin is just a bunch of libs, any IDE will work.
No I can't, because IntelliJ is the only one that actually supports Kotlin.
Otherwise I would just be using Notepad++.
> I don't see why this would matter. Any non-trivial build is going to use a bunch of plugins.
It shows you don't work as a build engineer.
> I don't know what you mean. The stack traces are from bytecode, which Kotlin compiles to just like Java. The stacks are identical...
Try to debug a Kotlin stack trace with free functions and coroutines, and then try to tell the world it looks like what the Java compiler would generate.
> You probably already use Kotlin and don't even know it. The popular okHttp library from Square is Kotlin - but you'd never know that if you just used it in your Java project.
No I don't, because at my level libraries are validated and only internal repos are allowed in the CI/CD pipeline.
Also I no longer do native Android for work, other than mobile Web.
If you do not understand an ecosystem, then you will believe the things you are saying. Nearly everything you claim is simply not true, and demonstrates a lack of understanding more than anything.
You can indeed use notepad++ to write Kotlin. Nothing is stopping you.
Kotlin is not limited to mobile development either.
Kotlin is not limited to mobile development, it only matters on Android thanks to Google's shenanigans, which is a different thing.
And even despite that, they were forced to update Android's Java to keep up with the JVM ecosystem.
> The next thing is also fairly straightforward: we expect Kotlin to drive the sales of IntelliJ IDEA. We’re working on a new language, but we do not plan to replace the entire ecosystem of libraries that have been built for the JVM. So you’re likely to keep using Spring and Hibernate, or other similar frameworks, in your projects built with Kotlin. And while the development tools for Kotlin itself are going to be free and open-source, the support for the enterprise development frameworks and tools will remain part of IntelliJ IDEA Ultimate, the commercial version of the IDE. And of course the framework support will be fully integrated with Kotlin.
Personally I think Kotlin is fine, but when compared to modern Java, I feel like I need to see a real quantum leap before I'd go all-in if I were running a business (to say nothing of the labor pool for Kotlin vs Java).
> to say nothing of the labor pool for Kotlin vs Java
This is the biggest issue, in my opinion. Although, if you have a java developer that really likes the higher order features that are being added, Kotlin might be a very easy switch (after some basic syntax learning).
The nice part is everything you already know about JVM performance still applies within Kotlin. So you're not starting over from square-one like you might assume.
Programmers rarely have consensus on anything language-related, but their preferences are not always evenly distributed. Our decisions re Java aim to cater for the majority of professional teams, which may not necessarily be represented by the majority of commenters on HN. If you don't understand why many more teams prefer a language with choices that appeal to you less over languages with choices that appeal to you more, the answer may be that your preferences are not the same as those of the majority of teams.
This is the most infuriating kind of answer: There are concrete complaints and this comment addresses none of them. Claiming that there are people who prefer Java's syntax to be inconsistent isn't useful without saying why they do.
1. I don't agree the syntax is inconsistent at all. All Java type declarations are terminated with a `}`. Using a different terminator is inconsistent, and that's the thing that requires justification. For the time being, while records are still new, the justification of saving a single character was deemed insufficient. When it comes to inference, I also don't see an inconsistency. Inference infers types or type argument based on the available data. For example, in `var x = new ArrayList<>()` there is simply no information that can allow us to infer a type argument; on the other hand, if a method's return type is `List<String>`, `return new ArrayList<>()` can infer the argument.
2. Even if there were inconsistencies, and certainly when it comes to the concrete complaints, people not only disagree on what's a preferable feature (I, for one, strongly dislike extension methods -- and consider them an anti-feature with a negative overall contribution -- and much prefer default methods) but they also don't pick a language based on one feature or another but based on a gestalt of properties. The languages mentioned by the commenters make different tradeoffs that have significant downsides alongside their upsides (e.g. they don't match the evolution of the platform; they add implicitness that makes it harder for some to read code; they have a lot of features that need to be learned, each of which is pretty underpowered), and it seems that more people prefer the overall tradeoffs Java makes.
BTW, languages also have important meta-features. For example, every language needs to adapt over time, and the question is how it does so. The three languages that have managed to successfully support large programs that can evolve over time, maintained by changing teams -- C, Java, and to a lesser extent C++ -- have shown they take evolution seriously. They maintain compatibility and choose their features carefully (well, C++ maybe less so). Kotlin has lots of features, but it already has more outdated large features than Java because it adds features to address a certain problem and then the platform addresses them in an altogether different way (data classes, async functions), and as a result it's showing its age quicker, too. Java has proven that it evolves well, and many think it evolves better than most other languages.
> Some inconsistency of choices also kinda baffles me. First they added support for skipping the generic type inside <> during instantiation, only to later introduce local vars that require you to mention the generic type within the same <>.
I just came across this as a potential footgun the other day. I had a var list = new ArrayList<>(), then later added a bunch of Foo's into the list, and then when I called an overloaded method with the signatures log(Object obj) and log(List<Foo> fooList), it was calling the Object version. I would think some better type inference should be possible there, but if not, making programmers declare the type(s) at least once is a necessary constraint.
Why not just write var list = new ArrayList<Foo>()? The point of inference inside <> has always been to save you from repeating yourself by copying from the left side to the right, which is a non-issue when you don't have to specify it on the left at all.
You're clearly doing something wrong, as I get about 3 minutes per image on an M1 Mac mini.
But yeah, at this stage most of the guides are early hacks and require individual tweaking, so it's quite expected that people get varying results. I assume in a week or a month the situation will get much better and much more user-friendly.
I wonder whether it counts people who use the internet but don't know it. My grandma uses her phone to chat with me and check the weather, recipes and news. But if you ask her whether she uses the internet, she'd say no and that she has no use for it.
Yeah, you can basically arbitrarily expand or restrict the circle of people. Say I don't use the internet otherwise, but I go to a machine to buy myself a train ticket. The machine is internally connected to the train company's servers, which might be in the cloud, so the signal goes via "the internet" for a short time. Does that count as use of the internet, or is the internet just an implementation detail?
No, you could say, because you don't use it at home and you use an appliance. But then is your grandma using the internet? She just uses a bunch of appliances, too. And someone who uses the internet through an internet cafe, something very common in some Asian countries, wouldn't be "using" it either, as they are not at home.
Basically all modern cars built in the last 5 years send telemetry data back to their manufacturers. Is driving a Mercedes EQS/Tesla usage of the internet?
Up until two weeks ago my car didn't (or at least it didn't send any data anywhere, I know one mechanic hooked up an iPad to it to get some sort of real-time data from the engine once when diagnosing a transmission issue).
Drove a sedan from 2008. Had over 150k miles on it. Still drives fine, next owner might be able to get another 50k miles from it.
Pfft, my ‘88 Hilux is on 1,350,000 km. Drivetrain, engine, all original. The only stuff that gets changed is filters, glow plugs, tyres and oil, and the very occasional thing that goes wrong.
When it doesn’t work I can see what’s wrong by looking at it, and can fix it myself with cheap parts that are available everywhere. It’s cost me €200 in maintenance in the last three years.
Conversely I stupidly bought a 2016 truck last year and just sent it to be scrapped, as it has been absolutely nothing but trouble, has cost me about €8,000 to repeatedly repair, and it then still needed a new engine, computer, and gearbox - I’ve driven it maybe 100km and it just keeps falling apart.
They’re getting seriously good at the planned obsolescence bit.
I drive an ‘89 4Runner which at the time was the same platform as your Hilux. I have the same experience as you. It just keeps running and thrives on neglect. I hope to never replace it. The new stuff is so poorly made.
I used to read the encyclopedia when I was little, before I had the internet. Was I fractionally connected to the internet, as opposed to 100% unfettered Wikipedia access?
I see this headline as 63% of the world now uses the internet in some shape or form. Irrespective of whether they know this or not, any threats to this vast communication network will affect their lives in some shape or form. A large part of the world has started to treat internet as a utility and the disruption of this utility has implications for people irrespective of how it's structured, managed or made available.
The Network is a utility. Historically the previous two iterations of the Network were treated that way, the Universal Postal Union and the Public Switched Telephone Network.
It's true that it's possible to live without access to the Network, people did up until the Treaty of Bern (in 1874) - it is also possible to live without access to electricity, or fresh water and people did that too, they're still utilities.
I read it differently - 63% of people when asked will say that they directly use the internet. Most of the data seems to come from surveys, and some of the data notes seem to mention stuff around households being asked.
I'm not sure a person using an internet-connected ATM would count, or a person who uses the internet in some form at work but does not have a broadband connection at home, etc.
If you count indirect usage, I suspect the figure is much higher - ie do we really believe that c13% of Europe manages to get through a month without using the internet in any form? I’m a bit sceptical on this data as it appears to be poorly explained in terms of methodology (and some is even just forecast through a model).
I was trying to think of other technologies people use but don't know it. If people drive they probably also know that they use the road, but maybe they don't know that they "use" the drainage works that were put in hundreds of years ago. At what point does it become silly to ask the question? "Do you use the water treatment facility on the edge of town?" "Do you use money?"
Banking. (Which is a technology, albeit quite an old one.) If someone doesn't personally have a mortgage or car loan, gets paid in cash, and doesn't use banks at all, does such a person need banks to exist? Does society? Some people I've talked to honestly believe the answer is no.
To be clear, are YOU advocating for the belief that banks don't need to exist?
Because the statement is technically true, but basically meaningless... Cars don't need to exist, modern medicine doesn't need to exist, electric lightbulbs don't need to exist. But they all provide conveniences and functionality that most people seem to appreciate, and would not be willing to part with.
Water's a really good one! Where does your tap water come from? What water district are you in? Or conversely, what's your ISP's ISP (Level3/etc)? Where's your closest Internet Exchange (IX)? How many hops away are your closest friends on the Internet?