At least where I work, writing new Java code is discouraged and you're expected to use Kotlin instead for backend services. Spring Boot, which is the framework we use, supports Kotlin just fine, at the same level as Java. And if you use JetBrains tools, Kotlin tooling is also pretty good (outside JetBrains I will admit it is worse than Java's). Now, even in new Java projects you can still end up using Kotlin, because it is the default language for Gradle build scripts (previously it was Groovy).
So sorry for your loss. Some months ago I was very angry to find out that a person died from a bike injury that could have been easily treated, but he had to wait almost 50 minutes for the ambulance. Not because he was far away, but because he was between two regions and the 112 operators were discussing who should send the ambulance. In fact, they initially sent one and later told the driver to turn back while he was on the highway. He died just because he happened to have the accident near the border of two regions of the same country, each one with its own public health system.
As always with this kind of stuff, there are so many inaccuracies, at least in the parts I know of. Roads are mostly OK, although some of them are more like "suppositions" than real roads we have found. Let's take a look at the area around Valladolid: https://imgur.com/xMW6yiY
- Pintia is almost certainly near the Duero/Douro river, much further to the south and east. It is one of the most explored pre-Roman settlements in the area, and while there is no definitive proof, there are many hints that it's in the place I marked and not where it's shown on the map
- Amallobriga is also, for most historians, located in Tiedra, but the map shows Tordesillas. As you can see on the map, the actual location of Tiedra is also a road intersection. The location in Tiedra is consistent with archaeological evidence and with route books that give the distances from Amallobriga to other cities we know.
- Nobody really knows where Intercatia or Tela are. But note that there's a big road intersection to the south. It is confirmed that there was a settlement there, but we do not know its name; several have been proposed. In any case, Intercatia is hard to place where the map puts it, with no roads going to it. Many archaeologists say it could be at the present-day town of Paredes de Nava.
- I don't think there's any real evidence of a bridge that crosses the Douro/Duero river there. What we know is that there's a medieval bridge closer to Septimanca and that it could have had a Roman origin, but according to the map there's no road there.
I was wondering the same thing about the road crossing the Douro between present-day Vila Nova de Gaia and Porto. Was there a bridge there during Roman times? Interestingly, it would be right where the Luiz I Bridge is now.
After some quick research, there's no evidence of a bridge there, and it seems it would have been difficult to build even for the Romans. But there could have been people with boats in Cale to help cross the river, and that crossing could still be considered part of the road.
I know that not too long ago there was a "bridge", which was a bunch of boats lined up from one bank to the other. Not sure if this counts as a bridge.
Having learned assembly from the book "Computer Organization and Design" by Patterson and Hennessy, it really shows how much RISC-V takes from MIPS. After all, the two ISAs share some of the same people, who learned from MIPS's mistakes (no delay slots!). Basically, if you come from MIPS, as I did, the assembly is very, very similar.
Now that book is also available in a RISC-V edition, which has a very interesting chapter comparing the different RISC ISAs and what they do differently (SH, Alpha, SPARC, PA-RISC, POWER, ARM, ...).
However, I've been exploring AArch64 for some time and I think it has some very interesting ideas too. Maybe not as clean as RISC-V, but with a very pragmatic design and some choices that make me question whether RISC-V was too conservative in its design.
> However, I've been exploring AArch64 for some time and I think it has some very interesting ideas too. Maybe not as clean as RISC-V, but with a very pragmatic design and some choices that make me question whether RISC-V was too conservative in its design.
Not enough people reflect on this, or on the fact that it's remarkably hazy where exactly AArch64 came from and what guided its design.
AArch64 came from AArch32. That's why it keeps things like condition codes, which are a big mistake for large out-of-order implementations. RISC-V sensibly avoids this by having condition-and-branch instructions instead. Otherwise, RISC-V is conservative because it tries to avoid possibly encumbered techniques. But other than that it's remarkably simple and elegant.
> That's why it keeps things like condition codes, which are a big mistake for large out-of-order implementations. RISC-V sensibly avoids this by having condition-and-branch instructions instead.
Respectfully, the statement in question is partially erroneous and, in far greater measure, profoundly misleading. A distortion draped in fragments of truth remains a falsehood nonetheless.
Whilst AArch64 does retain condition flags, it is not simply «AArch32 stretched to 64-bit», and condition codes are not a «big mistake» for large out-of-order (OoO) cores. AArch64 also provides compare-and-branch forms similar to RISC-V's, so the contrast given is a false dichotomy.
Namely:
– «AArch64 came from AArch32» – historically AArch64 was a fresh ARMv8-A ISA design that removed many AArch32 features. It has kept flags, but discarded pervasive per-instruction predication and redesigned much of the encoding and register model;
– «Flags are a big mistake for large OoO» – global flags do create extra dependencies, yet modern cores (x86 and ARM) eliminate most of the cost with techniques such as flag renaming, out-of-order flag generation, and instruction forms that avoid setting flags when unnecessary. The high-IPC x86 and ARM cores that implement these techniques show that flags are not an inherent limiter;
– «RISC-V avoids this by having condition-and-branch» – AArch64 also has condition-and-branch style forms that do not use flags, for example:
1) CBZ/CBNZ xN, label – compare register to zero and branch;
2) TBZ/TBNZ xN, #bit, label – test bit and branch.
Compilers freely choose between these and flag-based sequences, depending on what is already available and the code/data flow. Also, many arithmetic operations do not set flags unless explicitly requested, which reduces false flag dependencies.
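To make the point concrete, here is a minimal C sketch (my own illustrative example, not any particular compiler's output) of a branch that an AArch64 compiler is free to lower either with CBNZ or with a flag-setting CMP followed by B.NE:

    #include <stddef.h>

    /* Counting a linked list: the loop test "p != NULL" is a plain
     * compare-against-zero, so an AArch64 compiler may lower it with
     * CBNZ (no NZCV flags involved) or with CMP + B.NE; which form it
     * picks depends on the surrounding code and register usage. */
    struct node { struct node *next; };

    size_t list_length(const struct node *p)
    {
        size_t n = 0;
        while (p != NULL) {   /* candidate for CBNZ */
            n++;
            p = p->next;
        }
        return n;
    }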
Last but not least, Apple's big cores are among the widest, deepest out-of-order designs in production, with very high IPC and excellent branch handling. Their microarchitectures and toolchains make effective use of:
– Flag-free branches where convenient – CBZ/CBNZ, TBZ/TBNZ (see above);
– Flag-setting only when it is free or beneficial – ADDS/SUBS feeding a conditional branch or CSEL;
– Advanced renaming – including flag renaming – which removes most practical downsides of a global NZCV.
You are, of course, most welcome to offer your contributions – whether in debate or in contestation of the points I have raised – beyond the hollow reverberations of yet another LLM echo chamber.
The information I used to contest the original statement comes from the AArch64 ISA documentation as well as from the infamous «M1 Explainer (070)» publication, namely sections titled «Theory of a modern OoO machine» and «How Do “set flags” Instructions, Like ADDS, Modify the History File?».
Thanks for the link to that article, by the way! I missed a lot of the “ephemeral literature” that was being passed around when M1 was first released and we were collectively trying to understand it.
That will be amazing when it happens, and a year is VERY soon!
Tenstorrent's first "Atlantis" Ascalon dev board is going to have a similar µarch to the Apple M1 but running at a lower clock speed; however, all 8 cores are "performance" cores, so it should be in the N150's ballpark single-core and soundly beat it multi-core.
They are currently saying Q2 2026, which is only 4-7 months from now.
AFAIR, AArch64 was basically designed by Apple for their A-series iPhone processors, and pushed to be the official ARM standard. Those guys really knew what they were doing and it shows.
It's clear that Arm worked with Apple on AArch64, but saying it was basically designed 'by Apple' rather than 'with Apple' is demonstrably unfair to the Arm team, who have decades of experience in ISA design.
If Apple didn't need Arm then they would have probably found a way of going it alone.
Apple helped develop Arm originally and was a (very) early user with Newton. Why would they go it alone when they already had a large amount of history and familiarity available?
I get the same impression w.r.t. RISC-V v. MIPS similarities, just from my (limited) exposure to Nintendo 64 homebrew development. Pretty striking how often I was thinking to myself “huh, that looks exactly like what I was fiddling with in Ares+Godbolt, just without the delay slots”.
Instructions are more easily added than taken away. RISC-V started with a minimum viable set of instructions to efficiently run standard C/C++ code. More instructions are being added over time, but the burden of proof is on someone proposing a new instruction to demonstrate what adding it costs, how much benefit it brings, and in which real-world applications.
> Instructions are more easily added than taken away.
That's not saying much, it's basically impossible to remove an instruction. Just because something is easier than impossible doesn't mean that it's easy.
And sure, from a technical perspective, it's quite easy to add new instructions to RISC-V. Anyone can draft up a spec and implement it in their core.
But if you actually want wide-spread adoption of a new instruction, to the point where compilers can actually emit it by default and expect it to run everywhere, that's really, really hard. First you have to prove that this instruction is worthwhile standardizing, then debate the details and actually agree on a spec. Then you have to repeat the process and argue the extension is worth including in the next RVA profile, which is highly contentious.
Then you have to wait. Not just for the first CPUs to support that profile. You have to wait for every single processor that doesn't support that profile to become irrelevant. It might be over a decade before a compiler can safely switch on that instruction by default.
It's not THAT hard. Heck, I've done it myself. But, as I said, the burden of proof that something new is truly useful quite rightly lies with the proposer.
The ORC.B instruction in Zbb was my idea, never done anywhere before as far as anyone has been able to find. I proposed it in late 2019, it was in the ratified spec in late 2021, and it was implemented in the very popular JH7110 quad-core 1.5 GHz SoC in the VisionFive 2 (and many others later on) that was delivered to pre-order customers in Dec 2022 / Jan 2023.
You might say that's a long time, but that's pretty fast in the microprocessor industry -- just over three years from proposal (by an individual member of RISC-V International) to mass-produced hardware.
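For anyone who hasn't run into ORC.B: here is a small C model of its semantics (just a software sketch of what the instruction computes, not how hardware implements it; the function name is mine):

    #include <stdint.h>

    /* Model of RISC-V Zbb ORC.B on a 64-bit value: each result byte is
     * 0xFF if the corresponding input byte is non-zero, 0x00 otherwise.
     * That is what makes it handy for strlen-style code scanning 8 bytes
     * at a time: orc_b(x) == ~0ULL exactly when no byte of x is zero. */
    static uint64_t orc_b(uint64_t x)
    {
        uint64_t r = 0;
        for (int i = 0; i < 8; i++)
            if ((x >> (8 * i)) & 0xff)
                r |= (uint64_t)0xff << (8 * i);
        return r;
    }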
Compare that to Arm who published the spec for SVE in 2016 and SVE 2 in 2019. The first time you've been able to buy an SBC with SVE was early 2025 with the Radxa Orion O6.
In contrast, the RISC-V Vector extension (RVV) 1.0 was published in late 2021 and was available on the CanMV-K230 development board in November 2023, just two years later, and in a flood of much more powerful octa-core SpacemiT K1/M1 boards (BPI-F3, Milk-V Jupiter, Sipeed LicheePi 3A, Muse Pi, DC-Roma II laptop) starting around six months later.
The question is not so much when the first CPU ships with the instruction, but when the last CPU without it stops being relevant.
It varies from instruction to instruction, but alternative code paths are expensive, and not well supported by compilers, so new instructions tend to go unused (unless you are compiling code with -march=native).
In one way, RISC-V is lucky. It's not currently that widely deployed anywhere, so RVA23 should be picked up as the default target, and anything included in it will have widespread support.
But RVA23 is kind of pulling the door closed behind itself. It will probably become the baseline that all binary distributions target for the next decade, and anything that didn't make it into RVA23 will have a hard time gaining adoption.
I'm confused. You appear to be against adding new instructions, but also against picking a baseline such as RVA23 and sticking with it for a long time.
Every ISA adds new instructions over time. Exactly the same considerations apply to all of them.
Some Linux distros are still built for the original AMD64 spec published in August 2000, while some now require the x86-64-v2 spec defined in 2020 but actually met by CPUs from Nehalem and Jaguar on.
The ARMv8-A ecosystem (other than Apple) seems to have been very reluctant to move past the 8.2 spec published in January 2016, even on the hardware side, and no Linux distro I'm aware of requires anything past the original October 2011 ARMv8.0-A spec.
I'm not against adding new instructions. I love new instructions, even considered trying to push for a few myself.
What I'm against is the idea that it's easy to add instructions, or rather the idea that it's a good idea to start with a minimum subset of instructions and add more later as needed.
It seems like a good idea: save yourself some upfront work, and respond to actual real-world needs rather than trying to predict them all in advance. But IMO it just doesn't work in the real world.
The fact that distros get stuck on the older spec is the exact problem that drives me mad, and it's not even their fault. For example, compilers are forced to generate some absolutely horrid ARMv8.0-A exclusive load/store loops when it comes to atomics, yet there are excellent atomic instructions right there in ARMv8.1-A, which most ARM SoCs support.
But they can't emit them because that code would then fail on the (substantial) minority of SoCs that are stuck on ARMv8.0-A. So those wonderful instructions end up largely unused on ARMv8 android/linux, simply because they arrived 11 years ago instead of 14 years ago.
At least I can use them on my Mac, or any linux code I compile myself.
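As a sketch of what that looks like in practice (the lowerings in the comment are typical compiler behaviour as I understand it, not guaranteed output):

    #include <stdatomic.h>

    /* A trivial atomic counter.  Built for baseline ARMv8.0-A, this
     * fetch_add typically becomes an LDXR/STXR retry loop; built with
     * -march=armv8.1-a (LSE atomics), the same source can become a
     * single LDADD.  Same C code, very different instruction sequences,
     * purely because of which baseline the distro can assume. */
    static _Atomic long counter;

    long bump(void)
    {
        return atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
    }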
-------
There isn't really a solution. Ecosystems getting stuck on an increasingly outdated baseline is a necessary evil. It has happened to every single ecosystem to some extent or another, and it will happen to the various RISC-V ecosystems too.
I just disagree with the implication that the RISC-V approach was the right approach [1]. I think ARMv8.0-A did a much better job, including almost all the instructions you need in the very first version, if only they had included proper atomics.
[1] That is, not the right approach for creating a modern, commercially relevant ISA. RISC-V was originally intended as more of an academic ISA, so focusing on minimalism and "RISCness" was probably the best approach for that field.
It takes a heck of a lot longer if you wait until all the advanced features are ready before you publish anything at all.
I think RISC-V did pretty well to get everything in RVA23 -- which is more equivalent to ARMv9.0-A than to ARMv8.0-A -- out after RV64GC aka RVA20 in the 2nd half of 2019.
We don't know how long Arm was cooking up ARMv8 in secret before they announced it in 2011. Was it five years? Was it 10? More? It would not surprise me at all if it was kicked off when AMD demonstrated that Itanium was not going to be the only 64 bit future by starting to talk about AMD64 in 1999, publishing the spec in 2001, and shipping Opteron in April 2003 and Athlon64 five months later.
It's pretty hard to do that with an open and community-developed specification. By which I mean impossible.
I can't even imagine the mess if everyone knew RISC-V was being developed from 2015 but no official spec was published until late 2024.
I am sure it would not have the momentum that it has now.
> it's basically impossible to remove an instruction.
Of course it isn't. You can replace an instruction with a polyfill. This will generally be a lot slower, but it won't break any code if you implement it correctly.
While I agree with you, the original comment was still valuable for understanding why RISC-V has evolved the way it has and the philosophy behind the extension idea.
Also, it seems at least some of the RISC-V ecosystem is willing to be a little bit more aggressive. With Ubuntu making RVA23 its minimum profile, perhaps we will not be waiting a decade for it to become the default. RVA23 was only ratified a year ago.
For the uninitiated in AArch64, are there specific parts of it you're referring to here? Mostly what I find is that it lets you stitch common instruction combinations together, like shift + add and fancier addressing. Since the whole point of RISC-V was a RISC instruction set, these things are superfluous.
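As a concrete, simplified example of that kind of folding (the instruction sequences in the comment are typical lowerings as I understand them, not guaranteed compiler output):

    #include <stdint.h>

    /* base[i] on AArch64 can usually be a single load with a shifted
     * register offset, e.g. ldr w0, [x0, x1, lsl #2]; base RV64I forms
     * the address with a separate shift and add before the load. */
    int32_t element(const int32_t *base, uint64_t i)
    {
        return base[i];
    }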
My memory is a bit fuzzy but I think Patterson and Hennessy’s “Computer Architecture: A Quantitative Approach” had some bits that were explicitly about RISC-V, and similarities to MIPS. Unfortunately my copy is buried in a box somewhere so I can’t get you any page numbers, but maybe someone else remembers…
Hennessy and Patterson's "Computer Architecture: A Quantitative Approach" has 6 published editions (1990, 1996?, 2003, 2006, 2011, 2019) with the 7th due November 2025. Each edition has a varying set of CPUs as examples for each chapter. For example, the 2nd edition has sections on the MIPS R4000 and the PowerPC 620, while the 3rd edition has sections on the Trimedia TM32, Intel P6, Intel IA-64, Alpha 21264, Sony PS2 Emotion Engine, Sun Wildfire, MIPS R4000, and MIPS R4300. From what I could figure out via web searches, the 6th edition has RISC-V in the appendix, but the 3rd through 5th editions have the MIPS R4000.
Patterson and Hennessy's "Computer Organization and Design: The Hardware/Software Interface" has had 6 editions (1998, 2003, 2005, 2012, 2014, 2020), with various editions also published in ARM-, MIPS-, and RISC-V-specific versions.
> As a mathematician doing a fair amount of numerical analysis, I must know several programming languages, all of which do roughly the same sort of thing.
But Mercury is not a language of the same paradigm as those (imperative, maybe array-oriented). It's a logic programming language, and I'd guess you have probably never used any language in this category. In fact, many features of logic programming languages never made it into mainstream programming languages, or they're hidden behind some uncommon libraries.
For sure, I've never used a language of this paradigm. I'm also bothered by the fact that I don't have a single good reason why I should, and would love to know if somebody has one. The only reason given so far is curiosity.
I guess that is my point; all of the languages I know are of the same paradigm, but I need to know them all for work. So I disagree with the assertion that only languages of a different paradigm from the ones you know are worth learning.
> I guess that is my point; all of the languages I know are of the same paradigm, but I need to know them all for work. So I disagree with the assertion that only languages of a different paradigm from the ones you know are worth learning.
I think you're taking that statement too literally, and way too seriously. Many of the epigrams are a bit tongue in cheek, and that one is too.
> 127. Epigrams scorn detail and make a point: They are a superb high-level documentation.
Don't take them literally and act like they're gospel truths you must live your life by. That's not what Perlis was going for with them. Just like you shouldn't take DRY (don't repeat yourself) literally. You should use judgement.
If you need to learn Fortran to write your numeric code, even though Fortran isn't teaching you anything, you should learn Fortran. You have a job to do. But if you don't need to learn Fortran for work, and it has nothing to offer over the other languages you know, why bother with it? That's the key point of the epigram.
These sanctions are not really effective. I think Cuba is the country that has had them the longest and... nothing changed. Instead, they force them to develop in-house tech, which may be better for them in the long term.
The sanctions on the Mullah regime have prevented financing of more proxies. There is a direct correlation between loosening enforcement and lifting such sanctions and increased support for Hezbollah, Hamas, etc. It's preposterous to claim that they do not work.
You might say that, but the proxies weren't so keen on saving their citizens' lives either. Bashar Assad killed 600,000 of his own citizens. Hamas uses hospitals and UN schools as command centers. Hezbollah has underground tunnels under civilian infrastructure where their leaders hide. The Iranian government assists with and finances bombings of Jewish people all over the world. I wouldn't start pointing fingers so soon.
1. Hamas, an Iranian proxy, attacked Israel on October 7th 2023. Until that time, the death toll in the 100 years of Israeli-Palestinian conflict was about 10k Israelis to 40k Palestinians. It has since become 12k Israelis to 110k Palestinians.
I don't think that proxy fulfilled its intention.
2. Hezbollah, another Iranian proxy, attacked Israel on October 8th in support of Hamas. The ensuing war killed about 4k Lebanese. That proxy, too, seems to have failed in its mission to reduce the death toll.
3. The Houthis, yet another Iranian proxy, attacked Israel in support of Gaza. Since then, Israeli counterattacks have killed several hundred people in Yemen. Not good.
Now these proxies are heavily damaged, deprived of most strategic capabilities and the death toll in the ensuing battles dwarfed all of the past deaths in the Israeli-Arab conflict.
Israel is occupying a piece of land it seized through violence and ethnic cleansing: https://en.m.wikipedia.org/wiki/Nakba . The people of Palestine have a right to self-defense just like anyone else; exercising that right doesn't make them terrorists.
The notion that Israel was created by force is just not true. Jews were always a part of historical Palestine, and most of the land they inhabited was legally bought, by bodies like the JNF, up until the war of 1948. That war, started by the Arabs, came as a result of the League of Nations declaration of the formation of Israel.
It's important to stress that there was never a Palestinian country, nor a cohesive Palestinian nation ahead of the formation of Israel. That historical land was controlled by various empires, and people used to come and go through it quite freely.
Tell that to the Palestinian families whose homes, owned for generations, were stolen by Israel, and whose children are now murdered in the most horrible ways.
I'll tell them that opening the war on Israel and the Jews in 1948 was the wrong move and that they unfortunately paid a hefty price. It was a choice, made at a community level: many Arabs chose to accept Israel and not attack it in 1948 and are now Israeli Arabs (about 23% of the population). Nothing was stolen from them and they are among the most prosperous Arabs in the Middle East.
This is ahistorical propaganda. Nakba was the ethnic cleansing of indigenous Palestinians in favor of invading Europeans. Israel was fully created through terrorism and war crimes, which continue at scale to this day.
No, that's false. "Nakba" refers to the exodus of the Palestinians during 1948. It says exactly that in the Wiki. The exodus came as a result of war, first between the local Arabs and the Jews, and more prominently when Arab armies joined the war against Israel after it was declared a state.
Israel was given statehood by a decision of the League of Nations, the body preceding the UN.
Verify that what exactly is false? It literally says that:
"The term is used to describe the events of the 1948 Palestine war in Mandatory Palestine as well as Israel's ongoing persecution and displacement of Palestinians"
> The Nakba (Arabic: النَّكْبَة, romanized: an-Nakba, lit. 'the catastrophe') is the Israeli ethnic cleansing[14] of Palestinian Arabs through their violent displacement and dispossession of land, property, and belongings, along with the destruction of their society and the suppression of their culture, identity, political rights, and national aspirations.
Sorry, it is not possible for Cuba to develop any sort of functioning in-house tech under a complete economic embargo. The US also "pays" Cuba less than the cost of a NY apartment for its "leased" huge naval base - Guantanamo - the price of which has never changed since the original agreement, made at gunpoint.
This is not true. The sanctions definitely hurt countries like Cuba or Russia. They have a far harder time growing their economy. Cuba is stuck in the last century and often has total blackouts that last for days. Russia needs to beg countries like Iran or North Korea now for imports.
The Russian economy is actually booming, thanks to continued purchases of oil and gas by Europe and other parts of the world, and to the sanctions that block capital outflow from the country.
Mostly the military, from what I hear. The rest of the economy is in shambles [1].
No matter what, once the war in Ukraine is over, by whatever solution, it's going to get nasty. Either Putin (or, more likely, his successor - the dude is old and it's far from certain he will be able to stay in power should the war end badly for Russia) manages to turn the economy around once again, from producing tanks and other instruments of war to a regular economy, or they'll keep it that way... and attack another country with all that firepower.
There's a trickle-down effect from it, same as with all massive government spending. Those factory workers who now have cushy jobs with large paychecks assembling tanks and cruise missiles then go and spend that money on other things.
Yet the ruling elites and the military still enjoy a decent quality of life; it's mostly ordinary people who suffer. In the case of Russia that's okay, since large parts of the population genuinely support the war, but I'm not so sure about Iran and Cuba, where most people are not supportive of their governments anyway.
The point isn't necessarily to make the leadership suffer, but rather to prevent the suffering of everyone who might be threatened by a strong Cuba/Iran/North Korea.
In the case of Cuba, I think the sanctions regime is partly motivated by vindictiveness and partly by making an example for other nations in the Americas of what happens when you evade US hegemony.
If the sanctions weren't effective Russia wouldn't be insisting on the lifting of the sanctions as a part of any Ukrainian deal.
The sanctions significantly slow down Russian development and are increasingly turning it into just a mineral-mining satellite of China. With time, a weakened Russia would simply split, and the large eastern part would go to China. Some middle parts, with Turkic-speaking populations, may even fall into Turkey's orbit. Without the oil- and gas-rich East, the European part of Russia will be just a destitute village on the far margins of civilized Europe, as it was for centuries in the past.
> If the sanctions weren't effective Russia wouldn't be insisting on the lifting of the sanctions as a part of any Ukrainian deal.
If the sanctions were effective Russia wouldn't be offering entirely one-sided deals that it knows nobody is going to accept, because it would be desperate enough to get those sanctions lifted that it would actually have to concede something in a deal.
Sanctions don't prevent the short-term behavior. Russia thinks it is winning in Ukraine - thus the one-sided deals. It isn't about the other side accepting them; for Russia it is about forcing such a deal, and it always includes lifting the sanctions as a key element, because even with its traditional lack of long-term vision it understands that the sanctions are crippling the country. Russia thinks that winning the war and forcing such a one-sided deal is the only way to get the sanctions lifted "cleanly"; any deal with concessions would look less than victorious and would probably leave the sanctions relief incomplete, incremental, and bogged down with conditions.
Answering that specific question: 98-octane petrol jumped 16% this spring and... that's it. Other than that, it's an ordinary curve compared to, say, 5 years ago. And that's mostly the result of actually bombing the refineries.
And yes, if US and EU were serious about helping Ukraine win, this would have already happened back in 2022. Or better yet, back in 2014.
As it is, US & EU sanctions seem to be mostly theater for the benefit of the populations of those countries, so that their politicians can sincerely say that they "support Ukraine".
Depends on the claim - some are, some aren't. The problem obviously exists, but the coverup is good enough to make lots of people think that the war happens on another planet and doesn't affect them (e.g. gasoline export ban).
"In breaking news, Russia is extending the complete ban on all petrol exports through the rest of 2025 for producers and distributors, and is banning diesel exports for distributors.
Fuel shortages have spread to almost all of Russia, with the Ministry of Energy insisting that “all necessary measures are being taken to ensure the timely and uninterrupted supply of all essential fuel.”"
The only reasonable definition of "work" is "stop the thing that motivated the sanction from happening". With that definition, sanctions rarely work (or if they do, not in a very effective way). Russia is still at war with Ukraine. Iran is still developing nuclear weapons. North Korea did develop nuclear weapons.
Let's not kid ourselves. Russia is still killing Ukrainians right now. They're still occupying Ukraine's land right now. Is this what "work" looks like in your dictionary?
> Ask a russian about the price of fuel.
Oh I see. In your dictionary a working solution is not to stop the war or get lands back, but to ensure average Russian people suffer. Never mind then.
Oil is how Russia funds their war-machine. Bombing refineries makes it harder and less sustainable to keep the war going. It's not about making civilians suffer when you literally need to pressure the enemy into stopping the war by blowing up their infrastructure.
There's one recourse before the Constitutional Court driven by Cloudflare and RootedCON, but the thing about the Constitutional Court is that it can be very slow and it's heavily politicized, and I'm not really sure of the government's position on this. Right now, only one leftist Catalan party has said anything against the blocks in the Congress. Also, many mass media outlets are not reporting on these issues because they're also an interested party.
One important focus for Kotlin is mobile app development. However, if you're just a thin layer over Java you can't create iOS apps; that's the whole story behind Kotlin Native/Kotlin Multiplatform: being able to have apps that share code between Android and iOS. On the other hand, I think a good Kotlin Native backend could make it a great language in the Go space. Scala Native and Jank (for Clojure) show that some people want JVM-less versions of those languages. But currently the Native backend is less performant than the JVM one. And now there's GraalVM, which is the Java solution for AOT compilation and which Kotlin could use too, but it was invented after Kotlin Native.
The same thing happened with coroutines and virtual threads, or with Kotlin data classes and Java records. Kotlin was first; Java then implemented something to solve the same issue, but not in the same way as Kotlin.
> I think a good Kotlin Native backend could be a great language in the Go space
With Ktor there are the beginnings of that. It can actually compile to native, but with some limitations. There's also the WebAssembly compiler, which would make sense for things like serverless and edge-computing use cases. Both are a bit neglected from a server-side perspective by JetBrains, but the potential is there. And the whole Kotlin Native path could be a lot easier and more lightweight than dealing with Graal.
The weak spots for Kotlin Native outside the iOS ecosystem are compiler maturity, the library ecosystem, and the lack of support for system libraries (e.g. POSIX, and WASI for wasm).
Those are fixable problems, but it's only partially there currently and I would not recommend it for that reason. However, I think the whole effort with iOS native has moved things forward quite a bit in the last two years. Getting Kotlin to the level of Go in terms of compilers, tools, performance, and libraries on Linux and wasm would be similar in scope, but a few years of focus on that could make a lot of difference. IMHO, Kotlin could be perfect as a systems programming language with some effort.