They said that you can't subnet a /64, not that you can't subnet in IPv6. And while technically you can subnet even a /64, it's not supported by SLAAC, which means that, for example, you can't get an Android phone to work with auto-assigned addresses in a /80 IPv6 network.
At least CPython and CRuby (MRI), the most common implementations of each language, ignore all type hints: they don't use them for anything at compile time or at runtime. So the performance argument is complete nonsense for at least these two languages.
Both Python and Ruby (the languages themselves) only specify the type hint syntax; neither specifies anything about checking the actual types. That exercise is left to third-party type checkers.
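A minimal Python sketch of this (the `total` function is just an illustrative example): CPython stores annotations as metadata on the function object but never enforces or optimizes with them.

```python
from typing import List

def total(prices: List[int]) -> float:
    # CPython keeps these annotations as metadata; it never checks them.
    return sum(prices)

# Passing floats despite the List[int] annotation: no error, no warning.
print(total([1.5, 2.5]))      # prints 4.0
print(total.__annotations__)  # the hints are just a dict of metadata
```

Only an external checker like mypy would flag the mismatched call; the interpreter itself runs it happily.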
The problem is there are a lot of developers who have only coded with static typing and have no idea about the terrible drawbacks of static typing.
They don't understand what static typing does to code verbosity and development times.
Take Turborepo's move from Go's lightweight type system (structural interfaces designed to emulate duck typing) to Rust's heavyweight one (true static typing). The original Go code was about 20,000 lines and was written by 1 developer in 3 months. After the rewrite into the typing style you like so much, the same functionality is about 80,000 lines and took a team of developers 14 months.
Well said. There are many problems you have to deal with when writing code and type annotations only solve one particular kind. And even type annotations can be wrong: when you're dealing with data from external sources, dynamic languages like Python, JavaScript and Ruby will happily parse any valid JSON into a native data structure, even if it might not be what you specified in your type hints. Worse yet, you may not even notice unless you also have runtime type checks.
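To make the JSON point concrete, here's a hedged Python sketch (the `User` TypedDict is a hypothetical example): `json.loads` returns whatever the JSON actually contains, and the annotation is a promise to the type checker, not a runtime check.

```python
import json
from typing import TypedDict

class User(TypedDict):
    id: int
    name: str

# Valid JSON from an external source, but it doesn't match User at all.
raw = '{"id": "not-a-number", "name": 42}'

# json.loads parses it happily; the User annotation is never verified.
user: User = json.loads(raw)
print(user["id"])  # prints not-a-number, even though the hint said int
```

A static checker sees nothing wrong here, because the mismatch only exists in the data at runtime, which is exactly why runtime validation (or libraries that do it for you) still matters at system boundaries.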
The kind of messy code base that results from (large) numbers of (mediocre) developers hastily implementing hacky bug fixes and (incomplete) specifications under time pressure isn't necessarily solved by any technical solution such as type hints.
In my honest opinion, if you can't live without static typing, Ruby just isn't for you.
Adding static typing to a dynamic language mostly gives you the disadvantages of both, without a lot of benefits. It's better to stick to languages that were designed with static types from the start.
I love programming in Ruby, having to worry about type annotations and the additional constraints that come with them would take a lot of the fun out of that.
> Adding static typing to a dynamic language mostly gives you the disadvantages of both, without a lot of benefits.
As an engineer at a firm doing heavy duty data pipelines and internal tooling in a Sorbet-ified codebase, I disagree pretty strongly. While Sorbet type signatures are never going to win a syntax beauty contest, they are more than worth their weight in the way I can rely on them to catch typing and nilability goofs, and often serve as helpful documentation. Meanwhile, the internal code of most functions I write still looks like straight Ruby, fluent and uncluttered.
A good CI story that leans on tapioca was crucial here for us.
> Adding static typing to a dynamic language mostly gives you the disadvantages of both, without a lot of benefits.
Can you elaborate? I don't share this experience, and I'm interested in bringing static typing to a language without static typing, so I'd like to understand. In new Python and JavaScript codebases, optional typing has had clear benefits for refactoring and correctness and low costs for me. Legacy codebases can be different.
I don't have a great code example at hand, unfortunately, but I found that people often tend to write more "nominally" typed code (expecting explicitly named classes) rather than taking advantage of duck typing (interfaces, structural types). The code becomes more rigid and harder to change, and more time is wasted satisfying the type checker, even when the code is otherwise perfectly reasonable and free of bugs.
In other words, I found that the resulting code often looked more like Java but with weaker guarantees about types and much worse performance.
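For what it's worth, optional typing doesn't have to force nominal style. A small Python sketch (class names here are made up for illustration): `typing.Protocol` lets the checker accept anything with the right shape, preserving duck typing.

```python
from typing import Protocol

class Readable(Protocol):
    # Structural ("duck") typing: anything with a matching .read()
    # method qualifies; no inheritance from a named base is required.
    def read(self) -> str: ...

class FileStub:
    def read(self) -> str:
        return "data"

def consume(source: Readable) -> str:
    return source.read()

# FileStub never mentions Readable, yet type checkers accept this call.
print(consume(FileStub()))  # prints data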
Part of it is because Ruby, imo, has a very nice syntax. With type annotations it becomes "ugly" and a lot more verbose; it's no longer English-like. I do agree types have some advantages, but we need to get the DX right.
I've been using Ruby for more than 10 years now, and I only started using an LSP recently. To me it's a nice addition, but I can live without it. Types are just one tool among many, imo. Not trying to sound negative, but types are increasingly treated like the proverbial hammer that makes everything look like a nail.
And it's not limited to Ruby: JavaScript, Python, all similar languages. Not everyone is a fan of types. We won't reach consensus, imo, and that's ok.
> With type annotation, it's becoming "ugly", a lot more verbose. It's no longer English-like.
In our codebase that uses Sorbet I find this is really only true at function boundaries. Within a function it is pretty rare that anything needs to be spelled out with inline annotations to satisfy the compiler.
This is my biggest gripe about Sorbet: because its signatures are wordy and because it can't infer the generic type of a private method, it subtly pushes you away from extracting helper methods that would be 2-5 lines; with a Sorbet annotation, each would easily become 10. So it pushes towards bigger methods, and those are not always readable.
If private methods were allowed to have no typing at all (with a promise of not being used in subclasses, for example), and Sorbet were used mostly on the public surface of classes, it'd be much more tolerable for me.
Even if the hardware is really good, the software should be even better if they want to succeed.
Support for operating systems, compilers, programming languages, etc.
This is why a Raspberry Pi is still so popular even though there are a lot of cheaper alternatives with theoretically better performance. The software support is often just not as good.
If you want your customers to spend supercomputing money, you need to have a way for those customers to explore and learn to leverage your systems without committing a massive spend.
ARM, x86, and CUDA-capable stuff is available off the shelf at Best Buy. This means researchers don't need massive grants or tremendous corporate investment to build proofs of concept, and it means they can develop software in their offices that can later run on bigger iron.
IBM's POWER series is an example of what happens when you don't have this. Minimum spend for the entry-level hardware is orders of magnitude higher than the competition, which means, practically speaking, you're all-in or not at all.
CUDA is also a good example of bringing your product to the users. AMD spent years locking out ROCm behind weird market-segmentation games, and even today if you look at the 'supported' list in the ROCm documentation it only shows a handful of ultra-recent cards. CUDA, meanwhile, happily ran on your ten-year-old laptop, even if it didn't run great.
People need to be able to discover what makes your hardware worth buying.
The implication wasn't to use the Raspberry Pi toolchain, just that toolchains are required and are a critical part of developing for new hardware. The Intel/AMD toolchain they will be competing with is even more mature than the rpi one. And toolchain availability and ease of use make a huge difference whether you are developing for supercomputers or embedded systems. From the article:
"It uses technology called RISC-V, an open computing standard that competes with Arm Ltd and is increasingly being used by chip giants such as Nvidia and Broadcom."
So the fact that rpi tooling is better than the imitators and it has maintained a significant market share lead is relevant. Market share isn't just about performance and price. It's also about ease of use and network effects that come with popularity.
Try MinerU 2.5 with two-step parsing. It gives good results with bounding boxes per block. Not sure if you can get anything more fine-grained, such as word- or character-level boxes.
If I understand the article correctly it's for those cases where you want memory safety (i.e. not using "unsafe") but where the borrow checker is really hard to work with such as a doubly linked list, where nodes can point to each other.
A doubly linked list is not the optimal case for GC. It can be implemented with some unsafe code, and there are approaches that implement it safely with GhostCell (or similar facilities, e.g. QCell) plus some zero-overhead (mostly) "compile time reference counting" to cope with the invariants involved in having multiple references simultaneously "own" the data. See e.g. https://github.com/matthieu-m/ghost-collections for details.
Where GC becomes necessary is the case where even static analysis cannot really mitigate the issue of having multiple, possibly cyclical references to the same data. This is actually quite common in some problem domains, but it's not quite as simple as linked lists.
I just foresee it becoming irrevocably viral: it becomes the "meh, easier" option, then suddenly half your crates depend on it, and you've lost one of the major advantages of the language.
I believe patents play a big role here as well. Anything new must be careful not to (accidentally) violate any active patent, so there might be some tricks that can't currently be used for AV1/AV2.
I think patents are quickly becoming less of a problem. A lot of the foundational encoding techniques have exited patent protection. H.264 and everything before it is patent free now.
It's true you could still accidentally violate a patent but that minefield is clearing out as those patents simply have to become more esoteric in nature.
You can't patent something that's already in use: prior art is a defense to a patent claim or lawsuit.
But that's not my main point. My main point is that we are going down a fitting path with codecs which makes it hard to come up with general patents that someone might stumble over. That makes patents developed by the MPEG group far less likely to apply to AOM. A lot of those more generally applicable patents, like the DCT for example, have expired.
There are numerous patent trolls in this space with active litigation against many of the participants in the consortium that brought us AV1. The EU was also threatening to investigate (likely to protect the royalty revenues of European companies).
Nice, but I did one of the obvious things and looked for Geneva. Near the bottom left it says "Genève-Aéroport". Is that labelling the small portrait-format grey rectangle to the left of the larger landscape-format rectangle that should be labelled Genève-Cornavin?
As someone who spends a lot of time looking at wiring diagrams: not really.
I think the concept of the map is neat (the arrival/departure times let you plan transfers!), but it is way too busy to be practical. It simultaneously has too much detail to give you a high-level overview and not enough detail for trip planning.
For a general idea a map like [0] would be better: it shows the stations and track layouts, so it gives you a general feeling of where to go. Want to go from Utrecht to Zutphen? You'll have to go either over Arnhem or Amersfoort. Want to go from Amsterdam to Groningen, but there are issues in Zwolle? Yeah, you're screwed.
For planning purposes a map like [1] is better: it shows you the actual services being run, with a vague indication of their frequency. It tells you that an issue in Alphen isn't a big issue for your journey from Utrecht to Leiden, as there are six trains an hour going Utrecht-Schiphol-Leiden. Want to go from Utrecht to Amsterdam? Don't bother planning, there are trains every 5 minutes. Want to go from Den Haag to Groningen? A direct connection is possible - but only once an hour, so don't be late or you risk having to transfer in Zwolle!
But honestly? They are more for nerds than practical use. Transit planner apps are far easier to use, will be more accurate, and provide exactly the information which is relevant to your journey.
It's a technical diagram - it's really not supposed to be that comprehensible, but more of a reference. The passenger-facing publications use regular timetable layouts.