My question is probably naive because I lack the knowledge, but how do commercial games/game engines do this then, if it's such rocket science? Something like Fortnite or an aged GTA has done what you've described (downloading assets on demand without any FPS drop) for quite some time now.
The claim isn't that it's impossible, or "rocket science"; it's that it's hard to do right and was made much easier. You're bringing up for comparison a game engine that has been in constant development by experts for over two decades (Unreal Engine) and a game engine in constant development for over a decade (RAGE). Just because someone makes a professional product using tens or hundreds of millions of dollars doesn't mean it was easy.
There's a reason why Epic is able to charge a percentage of sales for their engine, and companies still opt for it. That's because it's hard to do reliably and with good performance and visuals.
Yeah, but just picking one of many requirements in game dev and advocating why language X can do it better than Y ignores all the other checkboxes. Yeah, C++ is nerve-wracking, but Rust can be even more so. IIRC there was a thread about Zig vs. Rust and why Rust was just the wrong tool (for the OP's use case in that instance). IDK, but there is a reason why C++ dominates game dev, and a reason why Rust still struggles with mainstream adoption compared to languages of the same age like Go or TS.
The claim was that Rust made making something parallel easy/easier, and the illustrative example given was someone trying to parallelize something in a game engine. Whether Rust is good for game development or not is irrelevant to the point that was being made.
Even if everyone accepted that Rust was horrible for game development on most metrics, the example given would still carry the exact same weight, because it's really about "parallelizing this would be very hard in C++, but it was very easy because of Rust."
OT: Imagine "experts" from our industry explaining our world to the mainstream: e.g., what's the right programming language or database technology, or why types matter, or don't?
Because software development is a skilled trade, not a science. The consortium discussed in the article is made up of actual scientists who do actual science.
"I am surrounded by bullshitters therefore all fields are equally deep in it" is not a correct view.
Then take data science or deep learning, which are just as sophisticated sciences; the same applies there.
We trust experts from other industries so much that we do not tolerate any other view, without having a clue ourselves what is going on. The reason is simple: we have our views, they are political, and we refer to "experts" to make our views look scientific. Just my opinion, and I bet that a huge number will again disagree with this comment. But, could I be right? Why not admit that I might be right? Because it's a political topic and your opinion is set.
False claim. There are just as many who do, and you know what, it doesn't matter for this discussion, because data science and deep learning are sciences independently of what some people do or don't do. That was the point.
Haha, sure. Read some papers about BERT and its successors, then come back and summarize what you have learned. But your comment shows so clearly that your opinion is set; politely, you should check out data science and deep learning before you again write such shallow dismissals (which is against the guidelines, btw).
Because no two fields are alike? We don't do anything even remotely scientific in this industry, because it's one of the few where, for various reasons, we don't need to.
I've always thought that true engineering is anything that involves human lives. Bridges, pipes, roads, pacemakers, spaceships, rockets. Websites don't kill people; it ain't engineering.
That doesn't sound right. Most applications of electronic engineering are not safety-critical, unless we count requirements such as avoiding improper use of toxic chemicals, but setting the bar that low would mean just about any physical undertaking counts as safety-critical.
Projects like the Linux kernel, or the HotSpot and LLVM compiler systems, presumably count as engineering. Building a new GPU presumably counts as engineering. These systems aren't intended for safety-critical use, though.
Another example: it still counts as aeronautical engineering even if you're building an unmanned drone that only ever flies in a lab.
But do you mean: so-called experts? As in, not experts? Well, then that's just a question of finding the real ones, if they exist.
Or do you dislike how they oversimplify when talking to the public? I do too. All nuance is erased. The trick, then, is to find out what (true, non-politicized) experts say to each other when the public isn't listening.
The problem is that most people don't listen and have turned off their brains. Look at this thread. They'd rather believe some random "experts" and news outlets that call random dudes with random degrees experts, because they themselves have no clue, and have never led, only followed others. What do we expect?
A few weeks ago I was looking for something like this. Anything similar was always sold out or never launched, and I don't expect better availability with this item. And if they're asking $100, which is way too high and would get me better boards, please at least provide two M.2 slots.
If you're going to split a single PCIe lane into two M.2 slots, you might as well save the cash, get a Pi 4, and use two USB 3 thumb drives instead. The performance would be about equal at that point.
The Pi 4 only has a single PCIe lane. Two M.2 slots would require a PCIe switch, which would make things more complex and expensive, as well as limit overall performance if you tried to use both drives simultaneously.
OT, but isn't this decade more about types than tests?
Edit: Since this seems to be an unpopular opinion, why stop here, haha: the more someone stresses tests, the more they signal that they must have missed years of advancements in software engineering. If you do this with peers, OK, but publicly? Even Ruby added types; not the kind of types we hoped for, but still, it's a strong signal. If your language has a mature type system, you don't need half of your tests anymore and might not be into such write-ups. Praising and writing gazillions of tests doesn't make you look smarter, very much the opposite.
I'm not sure how they're mutually exclusive. Also, TDD is more of a _design_ technique than a testing technique. I've worked in languages with extremely expressive type systems (like Scala), and I still practiced TDD.
Where did I say that? I just meant, and my apologies for not being clear, that the focus should be types; of course you still need tests. But not as many as a decade ago, and more importantly, people must drop this dogma that tests are the key to everything. They are not, and the more tests a codebase has, the worse its quality and maintainability.
> I still practiced TDD
You can do this, of course, but my experience differs: once you have an excellent type system, both in terms of language features and toolchain (e.g., editor), you can literally code for days without running the compiler even once. This is pure flow, and very much the opposite of TDD. But the entry barrier is much higher than TDD's. Don't get me wrong, you still need tests, but TDD?? IDK, it feels like trial-and-error coding from 2010. I mean, if we all still used Ruby, then yes, tests and TDD everywhere, but the environment has changed.
> There are not and the more tests a codebase has the worse its quality and maintainability.
I'm fully sold that types are important; personally, I would object to starting any mid- to large-scale project in a dynamically typed language. But this doesn't ring true at all.
When you're writing and refactoring code that uses complex logic, you aren't necessarily able to encode that logic in the type system. Carefully written tests allow you to confidently edit the code without worrying that you might have broken something in the process.
If anything, strong type systems allow you to change the way you write and structure tests (more towards property-based testing as opposed to dumb test cases), but I wouldn't advocate completely doing away with them.
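To make the property-based point concrete, here's a minimal sketch in Python. It uses no external library (a real project would reach for something like hypothesis), and `my_sort` is a made-up stand-in for whatever code is under test:

```python
import random

def my_sort(xs):
    # Stand-in for the implementation under test.
    return sorted(xs)

def check_sort_properties(trials=200):
    # Instead of a handful of fixed "dumb test cases", generate random inputs
    # and check that general properties hold for every one of them.
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        out = my_sort(xs)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(out, out[1:])), "output must be ordered"
        # Property 2: the output is a permutation of the input.
        assert sorted(out) == sorted(xs), "output must be a permutation of the input"

check_sort_properties()
print("ok")
```

The type system guarantees the shape of the data; the properties cover the behavior the types can't express.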
> but I wouldn't advocate completely doing away with them.
didn't say this
> Carefully written tests allow you to confidently edit the code without worrying that you might have broken something in the process.
Yes, true, but again, you get this with typed code, without any tests, for 80% of the code as well. Look, it's about the quantity and what you are going to test. With types you need far fewer unit tests (some, like Ben Awad, say none!), but you still need integration tests. Writing tests like crazy, like it's 2010, defocuses your devs and makes refactoring much more tedious: change one small thing and rewrite twenty unnecessary tests from a too-ambitious test warrior who didn't understand types. This creates a situation where codebases go stale and untouched for years. Nobody likes to refactor test code, because it's an unattached piece of code that complicates things more than it helps; it rarely feels like a true spec and more like some random code, and the next person wonders why their predecessor wrote the test at all. This is so much the past; IDK why people are worshipping it.
Write tests where types no longer help (integration tests!) and where things are crucial; otherwise focus on the core logic. I'd rather have devs who write excellently typed code with just a few integration tests than somebody who goes nuts, heads down the full rabbit hole writing 10x more test code nobody asked for than actual code, and tops it off with blog posts like the OP's. They've missed the boat.
> doing tests like crazy and like it's 2010 defocusses your devs and makes refactoring much more tedious
I'm fully sold on this as well, but:
> it's about the quantity and what you are going to test
I'd say it's more about _how_ you're going to test. What can be covered by the type system should be handled by the type system; that's an absolute no-brainer. Using tests, or even worse, comments or conventions, instead of types is just objectively wrong.
Because you can now be confident that trivial mistakes will be caught by the compiler, you can have actually meaningful tests, like "this property is satisfied", as opposed to "this object has this field set to this string".
So I wouldn't say "write fewer tests", I'd say "since types free you from the burden of testing stupid things, write better tests".
It's a misconception that developers who use dynamically typed programming languages write tests that perform the tasks of a static type system. They do not write tests like this:
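(The original example isn't preserved in this excerpt; as a hypothetical sketch in Python, with made-up names, the kind of "type system as test suite" test being discussed might look like this:)

```python
def total(a, b):
    return a + b

# A runtime re-check of what a static type checker would verify for free:
def test_total_returns_int_for_ints():
    assert isinstance(total(1, 2), int)

def test_total_rejects_mixed_types():
    try:
        total("1", 2)  # str + int raises TypeError in Python
    except TypeError:
        pass  # expected
    else:
        raise AssertionError("expected TypeError")

test_total_returns_int_for_ints()
test_total_rejects_mixed_types()
print("ok")
```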
Having used both dynamic and statically typed languages rather extensively, I always end up recreating some subset of the type system in the test suite for dynamically typed languages. Often to at least test that the functions correctly handle erroneous input. Taking your example, I would definitely have at least one of those assertException kind of tests, and more of them (but automatically generated) if using a property-based test system where I could say something like: any type but string should result in an exception/error result. Now, this wouldn't be exhaustive (again, sans automation), but I'd have at least one test covering this.
The tests that you list later are "happy path" tests. We want to know that those work, but we can't rely on only that sort of test, especially if the type system doesn't work with us to avoid incorrect inputs to the function.
I currently use a dynamically typed PL in my professional capacity (and a statically typed one in my side projects). I write a lot of tests, and zero of them assert the type of the flowing data.
So you have no tests that would trigger an error/exception by giving bad data? I'm not saying that I'd, necessarily, call out the type explicitly, but I would give bad data to trigger the exceptional control flow/guard which can be tantamount to specifying a type. Of course, this also depends on where the function sits. If it's an internal/private function in a module that only my own functions would call I can more safely focus on the happy path. But if it's part of the interface to a module, then I want to make sure that users of the module get proper feedback/responses, whatever the contract is (be it a result type or an exception or a default value). I mean, that's a large part of the value of testing: ensuring that the code matches the specification/contract that you present to users.
Elixir/Ecto has something called schema changesets, which are a very robust way of validating user input. I do test against bad values (not just wrong types: out of range, correct type but unsupported value, etc.), but only really at data ingress, and nowhere else.
Honestly, if a sad path causes a typing error in Elixir, it's not the worst thing. Sentry will catch it, only the user's thread crashes, and you go patch it later.
A type is literally a description of a set of valid values. So when you say you test with bad values, then the answer is: you could use types and would not need these tests anymore.
However, the more interesting question is: is your type system capable of expressing your type, and if so, is it worth the effort and the implications of doing so?
But on a more theoretical level, OP is right: you _can_ save yourself the tests, given a powerful enough type system.
> you could use types and would not need these tests anymore.
No. These are not internal contracts; these are contracts with user input. In a statically typed language, you are still advised to write tests that your marshalling technique produces the expected error (and downstream effects) that you planned for if, say, the user inputs the string "1." for a field that should be marshalled as an integer.
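A sketch of that kind of marshalling test in Python (`parse_quantity` and its error message are made up for illustration, not anyone's actual API):

```python
def parse_quantity(raw: str) -> int:
    """Marshal user input to an integer, mapping parse failures to the
    error the caller has planned for."""
    try:
        return int(raw)  # int() rejects "1." with a ValueError
    except ValueError:
        raise ValueError(f"expected a whole number, got {raw!r}") from None

def test_rejects_trailing_dot():
    # Even with static types everywhere, this boundary behavior still
    # needs a test: the type system can't see what users will type.
    try:
        parse_quantity("1.")
    except ValueError as e:
        assert "whole number" in str(e)
    else:
        raise AssertionError("expected ValueError")

test_rejects_trailing_dot()
print("ok")
```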
> A type is literally a description of a set of valid values.
That is generally not the case. There are, for example, cases where certain floating-point values are invalid inputs (either outside the domain of the function, or singular points), and I don't know of a mainstream PL that lets you define subsets of the reals in its type system.
In Go, or C, C++, or Rust, you could have a situation where only a subset of integers are valid values (perhaps you need "positive, nonzero integers" because you are going to do an integer division at some point), and that is not an expressible set of values in those type systems. Ironically, that is a scenario that IS typable in Elixir and Erlang, which are dynamically typed languages.
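A common workaround in languages that can't express the constraint is a "smart constructor": validate once at the boundary, then let the nominal type carry the guarantee. A Python sketch (the `PositiveInt` name is made up; note that `typing.NewType` is erased at runtime, so the guarantee only holds if all construction goes through the validator and a checker like mypy polices the rest):

```python
from typing import NewType

PositiveInt = NewType("PositiveInt", int)

def positive_int(n: int) -> PositiveInt:
    """Validate once at the boundary; downstream code trusts the type."""
    if n <= 0:
        raise ValueError(f"expected a positive, nonzero integer, got {n}")
    return PositiveInt(n)

def safe_divide(total: int, parts: PositiveInt) -> int:
    # No zero-check needed here: the constructor guarantees parts > 0.
    return total // parts

print(safe_divide(10, positive_int(2)))  # → 5
```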
I think you are referring to concrete/mainstream languages, so what you are writing is correct from a practical perspective, i.e., I would do the same. From a theoretical perspective, however, it is not necessary, even if such a type system might not exist yet.
You can retreat to your corner of theory if you wish, I'll actually build stuff. The real world has scary things like malicious actors that will send payloads designed to break your system through side channels like timing and cosmic rays that can flip bits on your disk and erase the guarantees that you believed you had in your type system.
> But on a more theoretical level, OP is right: you _can_ save the tests with, given a powerful enough typesystem.
Yes and no. The trick is building a type system simultaneously strong enough to encode the properties you want, and weak enough that it's statically decidable.
There will always be properties you can't encode in a (useful) type system.
It's a fine line to walk, really. There's an argument to be made that most of the time we don't actually need Turing-completeness, and we'd be better off using only types to encode computations, but OTOH I don't really want to think about what coinductive types I must define to solve a problem that could be solved with five lines of JavaScript instead.
> There will always be properties you can't encode in a (useful) type system.
If you define useful as "we know that the compiler will finish in finite time", then I agree. And that's indeed a good point! In practice, there will always be runtime tests, at least for as long as any of us and our children live. :)
Let's say you have implemented a function that sorts a list in Haskell, which has a relatively strong type system. How do you make sure that it sorts the list and does not reverse it instead? How do you know that your job is done?
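This is exactly the gap a small test closes: a type like `Ord a => [a] -> [a]` admits both `sort` and `reverse`. A one-line Python illustration (`my_sort` standing in for the implementation under test):

```python
def my_sort(xs):
    # Stand-in for the sort implementation under discussion.
    return sorted(xs)

# An input that is neither already sorted nor reverse-sorted distinguishes
# the two behaviors: reversing [3, 1, 2] gives [2, 1, 3], sorting gives [1, 2, 3].
assert my_sort([3, 1, 2]) == [1, 2, 3]
print("ok")
```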
This is not the same problem, AFAIU. You don't know whether your tests encode the correct specification either; you must run them first to find out whether they pass. With (dependent) types you can have the sorted-list property encoded at compile time, but you still can't be sure whether the specification itself is correct.
Sure, they are validated at compile time because they are propositions as types, but in the end they are still basically test cases: expected output for a given input, and the compiler is the test runner.
I don't know how to encode the full "Game of Life" property as dependent types, I am still an Agda newbie.
Sure, I don't know what the correct term is.
But I personally did not really gain anything from the type system that helped me get the logic of the program right. Whether a Turing-complete type system "runs" my assertions at compile time, or a test runner does it on save, makes no difference to me. Actually, the compile time for this simple one-page program was measured in seconds, while the Go tools compile and test a Game of Life implementation in a fraction of a second.
Idris and F* have much more advanced type systems than TypeScript and Rust. But even if you deem these languages not "mainstream" enough, there is still, e.g., Scala, whose type system beats Rust's by far, and TypeScript's too, minus certain special cases where TypeScript is really nice.
It is great to see, though, that even frontend-mainstream languages like TypeScript are starting to get proper type systems (especially considering that they had to build it on top of JavaScript).
Very excited to see where we go with languages and I agree with you that TDD should now by default mean "type driven development". :)
It should be both, I think. Types take away part of the need of tests, but they're not mutually exclusive. I know I ended up writing a lot more tests when doing Javascript, just to cover type edge cases.
Do you think the much-praised evil-mode feels natural to a vimmer, or does it feel like something close but, yeah, not great, like all vim emulations in all other editors?
Me too, plus some oddities: e.g., C-d and C-u are not remapped to d and u like in Vimium. While not standards-compliant, that's a must, because the default binding is overridden by a browser shortcut. Otherwise it's fast, but there was more that wasn't great.
Moved to Windows after a decade on a Mac; this doesn't make sense. Most Mac keys are not much better, you are just used to them. Rather, install AHK, something no other platform has, and remap keys in a sane way.
Using Control for the most common operations is much more straining for the hand. This is especially noticeable for heavy keyboard users such as software developers. Using your thumb to hit Cmd (or Alt on a Windows keyboard) is much less straining.
I haven't used the original Ctrl keys in a decade; nowadays everyone puts Control on Caps Lock, even on a Mac, haha. Then your left pinky is closer to Ctrl, and faster, than your left thumb is to Command. And if you want your beloved Spotlight, just install Microsoft PowerToys, which has PowerToys Run; it's the same thing, bound to Alt-Space.
For a split second I thought, how great. But the key to being in front of my computer 24/7 with zero issues for decades is, among other things, that I move all the time. Hence I need a keyboard that moves with me, and you can imagine that my feet also change location, and nobody wants to rearrange all the foot pedals every time they move. And in these home-office times it's great to put your feet up now and then.
IDK, I am totally fine with i, I, and A, haha, and the brain is also faster when processing chords from the fingers only. If you blend in foot or even just thumb strokes, you slow down significantly.
Edit: just thinking of piano players; maybe I am wrong, but this thing feels odd.