tom_'s comments | Hacker News

I'm struggling to picture how we get there from here. There's a huge pile of second-hand PCs available, and almost all of them are massively more powerful than necessary if used purely as terminals.

As a data point, here are timings for a Python program I've been working on lately (a prototype for some code I'll ultimately be writing in a lower-level language). It's near enough entirely Python code, with a bit of I/O:

(macOS Ventura, x64)

- System python 3.9.6: 26.80s user 0.27s system 99% cpu 27.285 total

- MacPorts python 3.9.25: 23.83s user 0.32s system 98% cpu 24.396 total

- MacPorts python 3.13.11: 15.17s user 0.28s system 98% cpu 15.675 total

- MacPorts python 3.14.2: 15.31s user 0.32s system 98% cpu 15.893 total

Wish I'd thought to try this test sooner. (I generally haven't bothered with Python upgrades much, on the basis that the best version will be the one that's easiest to install, or, better yet, is there already. I'm quite used to the language and stdlib as they are, and I've just assumed the performance will still be as limited as it always has been...!)
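For anyone who wants to run the same kind of comparison, here's a minimal sketch of a pure-Python, CPU-bound timing test. The workload is made up for illustration - it's not the prototype timed above, just something in the same spirit (pure Python, negligible I/O):

    # Hypothetical CPU-bound workload for comparing interpreter versions;
    # not the prototype mentioned above, just a stand-in with the same shape.
    import sys
    import time

    def work(n=2_000_000):
        total = 0
        for i in range(n):
            total += (i * i) % 97
        return total

    start = time.perf_counter()
    work()
    print(sys.version.split()[0], f"{time.perf_counter() - start:.2f}s")

Run it under each interpreter you have installed (system python3, a MacPorts python3.13, etc.) and compare the printed times.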


I have a benchmark program I use, a solution to day 5 of the 2017 Advent of Code, which is all Python with negligible I/O. It still runs 8.8x faster on PyPy than on Python 3.14:

    $ hyperfine "mise exec [email protected] -- python e.py" "mise exec [email protected] -- python e.py" "mise exec [email protected] -- python e.py" "mise exec [email protected] -- python e.py"
    Benchmark 1: mise exec [email protected] -- python e.py
      Time (mean ± σ):     148.1 ms ±   1.8 ms    [User: 132.3 ms, System: 17.5 ms]
      Range (min … max):   146.7 ms … 154.7 ms    19 runs

    Benchmark 2: mise exec [email protected] -- python e.py
      Time (mean ± σ):      1.933 s ±  0.007 s    [User: 1.913 s, System: 0.023 s]
      Range (min … max):    1.925 s …  1.948 s    10 runs
     
    Benchmark 3: mise exec [email protected] -- python e.py
      Time (mean ± σ):      1.375 s ±  0.011 s    [User: 1.356 s, System: 0.022 s]
      Range (min … max):    1.366 s …  1.403 s    10 runs
     
    Benchmark 4: mise exec [email protected] -- python e.py
      Time (mean ± σ):      1.302 s ±  0.003 s    [User: 1.284 s, System: 0.022 s]
      Range (min … max):    1.298 s …  1.307 s    10 runs
     
    Summary
      mise exec [email protected] -- python e.py ran
        8.79 ± 0.11 times faster than mise exec [email protected] -- python e.py
        9.28 ± 0.13 times faster than mise exec [email protected] -- python e.py
       13.05 ± 0.16 times faster than mise exec [email protected] -- python e.py
https://gist.github.com/llimllib/0eda0b96f345932dc0abc2432ab...
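For context, the day 5 puzzle is a tight loop over a list of jump offsets - the kind of branchy, pure-Python hot loop that PyPy's JIT handles particularly well. A minimal part-one-style sketch (not the code in the linked gist, which may differ in detail) looks roughly like this:

    # Sketch of an AoC 2017 day 5 (part 1) style solution; the linked gist may
    # differ, but the workload is the same kind of pure-Python hot loop.
    def steps_to_exit(offsets):
        jumps = list(offsets)
        i = steps = 0
        while 0 <= i < len(jumps):
            jump = jumps[i]
            jumps[i] += 1   # part 1 rule: increment the offset just used
            i += jump
            steps += 1
        return steps

    print(steps_to_exit([0, 3, 0, 1, -3]))  # the puzzle's worked example: 5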

> [...] and I've just assumed the performance will still be as limited as it always has been...!)

Historically, CPython performance has been so bad that massive speedups were quite possible once someone seriously got into it.


And indeed that has proven the case. But my assumption was that Python had been so obviously designed with performance so very much not in mind that it had ended up in some local minimum from which meaningful escape would be impossible. But I didn't overthink this opinion, and I've always liked Python well enough for small programs anyway, so I don't mind having it proven wrong.

The MIT licence does not require this.

I'm not an expert, but I very much doubt this.

The FSF calls it a "free license" [1], and I don't think they would if it didn't require the source code to be made available.

Source code being available is necessary but not sufficient for Free software; see [2]:

> Freedoms 1 and 3 require source code to be available because studying and modifying software without its source code can range from highly impractical to nearly impossible.

[1] https://www.gnu.org/licenses/license-list.en.html#Expat

[2] https://en.wikipedia.org/wiki/Free_software

EDIT Oh sorry, you mean for the LICENSE to be available. Never mind then.


And you're entirely wrong. MIT just requires attribution, not providing the source code.

That is why companies and corporate programmers LOVE BSD/MIT code: they can freely steal, I mean use, it in their for-profit products without giving anything back but some bit of text hidden in an about box.


You can compile MIT software and distribute the binary while saying “fuck you” to anyone who asks for the source.

You are thinking of copyleft (e.g. GPL)


If that were true, the FSF wouldn't call it a free license.

> If that were true, the FSF wouldn't call it a free license.

It is true; the license lets you do as you please with the source, including closing it off.

Famously, Microsoft has included BSD-licensed tools in Windows since the 90s and has never distributed the sources!

And that is completely legal. If you want to force users to distribute their changes to your open source product when they redistribute it, you need to use the GPL.


You should have linked the MIT License on Wikipedia (or anywhere else) instead of Free Software.

The license is only three paragraphs long. You can see it does not contain text supporting your claim.

https://en.wikipedia.org/wiki/MIT_License


Well, I'm confused.

It's actually very simple:

MIT/BSD licenses are pro-business - any business can take the product, change a few lines and redistribute the result without making their changes available.

GPL is pro-user - anyone who gets the source, makes changes, and then redistributes the result has to make their changed sources available as well.


The FSF has written extensively on why (in their opinion) you should prefer copyleft licenses over non-copyleft licenses, but they don't require a license to be copyleft in order to be considered free. It's worth spending a bit of time on their site to understand their point of view. Just be careful not to drink too much of the Kool-Aid or you'll become one of those annoying people who never shut up about the GPL on forums.

> you should prefer copyleft licenses over non-copyleft licenses,

For most, but not all, software. Stallman did famously argue for libvorbis, which you may know as the Ogg codec used mostly by games and Spotify, to be licensed under BSD instead of the (L)GPL.


True, there are exceptions. Stallman thought strategically. Having a free-but-non-copyleft licensed reference implementation is necessary if you are trying to wrest dominance from an established but proprietary standard.

But I'm willing to bet that he'd have pushed for GPL if he wasn't trying to topple MP3.


Don't listen to spauldo, GP. Drink the delicious Kool Aid that is free software. Bring that joy to everyone else you find.

No, you don't sacrifice refresh rate! The refresh rate is the same: 50 Hz interlaced and 50 Hz non-interlaced both give you approx 270 visible scanlines, refreshed at ~50 Hz. The difference is that in the interlaced case, alternate frames are offset by 0.5 scanlines, the producing device arranging the timing to make this work on the basis that it's producing even rows on one frame and odd rows on the other. The offset means the odd rows are displayed slightly lower than the even ones.

This is a valid assumption for 25 Hz double-height TV or film content. It's generally noisy and grainy, typically with no features that occupy less than 1/~270 of the picture vertically for long enough to be noticeable. Combined with persistence of vision, the whole thing just about hangs together.

This sucks for 50 Hz computer output. (For example, Acorn Electron or BBC Micro.) It's perfect every time, and largely the same every time, and so the interlace just introduces a repeated 25 Hz 0.5-scanline jitter. Best turned off, if the hardware can do that. (Even if it doesn't annoy you, you won't be any more annoyed once it's eliminated.)

This also sucks for 25 Hz double-height computer output. (For example, Amiga 640x512 row mode.) It's perfect every time, and largely the same every time, and so if there are any features that occupy less than 1/~270 of the picture vertically, those fucking things will stick around repeatedly, and produce an annoying 25 Hz flicker, and it'll be extra annoying because the computer output is perfect and sharp. (And if there are no such features - then this is the 50 Hz case, and you're better off without the interlace.)

I decided to stick to the 50 Hz case, as I know the scanline counts - but my recollection is that going past 50 Hz still sucks. I had a PC years ago that would do 85 Hz interlaced. Still terrible.


It's not absurd at all (in my view). A test checks that some obtained result matches the expected result - and if that obtained result is something that got printed out and redirected to a file, and that expected result is something that was produced the same way from a known good run (that was determined to be good by somebody looking at it with their eyes), and the match is performed by comparing the two output files... then there you go.

This is how basically all of the useful tests I've written have ended up working. (Including, yes, tests for an internal programming language.) The language is irrelevant, and the target system is irrelevant. All you need to be able to do is run something and capture its output somehow.

(You're not wrong to note that the first draft basic approach can still be improved. I've had a lot of mileage from adding stuff: producing additional useful output files (image diffs in particular are very helpful), copying input and output files around so they're conveniently accessible when sizing up failures, poking at test runner setup so it scales well with core count, more of the same so that it's easy to re-run a specific problem test in the debugger - and so on. But the basic principle is always the same: does actual output match expected output, yes (success)/no (fail).)
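As a concrete illustration of the basic principle, a minimal version in Python might look like the sketch below ("mytool", its arguments, and the tests/expected/ layout are all hypothetical):

    # Minimal golden-output test: run the program, capture stdout, compare
    # against a file produced by a known-good run that a human has eyeballed.
    import subprocess
    from pathlib import Path

    def run_golden_test(cmd, expected_path):
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        expected = Path(expected_path).read_text()
        if result.stdout != expected:
            # Keep the actual output around so it's easy to diff, and easy to
            # promote to the new expected file once someone has approved it.
            actual_path = Path(str(expected_path) + ".actual")
            actual_path.write_text(result.stdout)
            raise AssertionError(f"output mismatch; see {actual_path}")

    run_golden_test(["mytool", "--render", "scene.txt"],
                    "tests/expected/scene.out")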


Did an LLM write the readme?

For sure it did. Every possible telltale sign is there, but that table and the em dash were the final proof we needed.

A couple of the comments on the article suggest using 64-bit numbers, which is exactly the right solution. 2^64 nanoseconds ≈ 584.55 years, so overflow is implausible for any realistic use case. Even pathological cases will struggle to induce wraparound on a human timescale.

(People will probably moan at the idea of restarting the process periodically rather than fixing the issue properly, but when the period would be something like 50 years I don't think it's actually a problem.)
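The arithmetic is easy to check (a quick sketch, using the Gregorian average year length):

    # How long before an unsigned 64-bit nanosecond counter wraps?
    NS_PER_YEAR = 365.2425 * 24 * 3600 * 1_000_000_000
    print(2**64 / NS_PER_YEAR)   # ~584.55 years, the figure quoted above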


> but when the period would be something like 50 years I don't think it's actually a problem

I think you have that backwards. If something needs to be done every week, it will get done every week. That's not a problem.

If something needs to be done every fifty years, you'll be lucky if it happens once.


My parting shot was slightly tongue in cheek, apologies. Fifty years is a long time. The process, whatever it is, will have been replaced or otherwise become irrelevant long before the period is up. 64 bits will be sufficient.

At some point a random bitflip becomes more likely than the counter overflowing.

I agree with that sentiment in general, but even though I've seen systems in continuous operation for 15 years, I've never seen anything make it to 20. I wouldn't write something with the expectation that it never makes it that far, but in practical terms that's probably about as safe as it gets. Even embedded medical devices expect to get restarted every now and again.

Just as an example, the Voyager computers have been restarted, and they've been running for nearly 50 years.


> using 64-bit numbers, which is exactly the right solution

On a 64-bit platform, sure. When you're working on ring buffers with an 8-bit microcontroller, using 64-bit numbers would be such an overhead that nobody would even think of it.
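The usual trick there is the opposite one: keep the head/tail indices narrow, let them wrap freely, and rely on modular arithmetic so the wraparound is harmless. A rough sketch of the idea (shown in Python for readability; on the device this would just be unsigned 8-bit arithmetic in C):

    # Free-running 8-bit head/tail indices: the subtraction is done modulo 256,
    # so the count stays correct across wraparound as long as fewer than 256
    # items are ever in flight. No 64-bit arithmetic required.
    MASK = 0xFF  # emulate an 8-bit unsigned counter

    def fifo_count(head, tail):
        return (head - tail) & MASK

    print(fifo_count(3, 250))  # head has wrapped past 255: (3 - 250) mod 256 == 9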


https://mastodon.gamedev.place/@draknek/115713018435458495

> The funding for underrepresented creators was a condition of my involvement in this project, so doesn't represent his values so much as mine. He was at least willing to do it though, which I'm not sure he would be today. (https://mastodon.gamedev.place/@draknek/115713018435458495)

> ...his company was the public face of that grant, my involvement in it isn't common knowledge. (https://mastodon.gamedev.place/@draknek/115713113473398888)

Seems like this was all sorted out by early 2019 - and nearly 7 years have passed since! Plenty of time for a person to change from somebody you'd be happy to associate with to somebody you might not.

> Some people have mentioned they couldn't tell from this thread whether these games are used with permission. For clarity, yes, we agreed to this in mid 2016 and signed a contract in late 2018/early 2019. (https://mastodon.gamedev.place/@draknek/115707937686651789)


> The funding for underrepresented creators was a condition of my involvement in this project, so doesn't represent his values so much as mine. He was at least willing to do it though, which I'm not sure he would be today.

Interesting, and thanks for the sources. I was under the impression that it was the same fund as the one announced in 2010 [1], but the date in [2] plus the apparent timeline does align with what you describe.

"I'm not sure he would be today" is a strawman and just Hazelden's own current views of Blow, but I doubt there's going to be a direct quote (or even better, a new grant from Thekla) to back it up. But yes, 7 years is a long time and the political landscape has changed "somewhat".

[1] http://the-witness.net/news/2010/03/announcing-indie-fund/

[2] https://www.gamesindustry.biz/the-witness-studio-offering-us...


That specific quote feels to me like a strange one to complain about, given that it's so obviously his own subjective opinion. Even if you're English, and perhaps inclined to employ this sort of phrasing to state something that you are certain is incontrovertible fact (and will be so to everybody listening), the subjective nature has hardly been downplayed!

The LLM is not a person.

It's got a definite creamy tinge to it, not B&W.

A quality=90 jpeg exported from GIMP is ~1.4 million bytes and not obviously visually different. (Test process: load the original image into one Firefox tab and the quality=90 jpeg into another, hold Ctrl+PgDn to flip between them quickly, and look at the screen with my own eyes to see whether any obvious differences leap out.)

quality=20 (~0.32 million bytes) wasn't obviously different either.

quality=10 (~0.21 million bytes) was noticeably different. And, on second glance, the obviously different areas were actually slightly different in quality=20 too.

I didn't do any more tests. So they could have made the image less than 10% of the size, I guess - but they can probably afford the bandwidth, and the thing needs to end up fully uncompressed at some point anyway, just so that it can be displayed on screen. It's not even like 4 MBytes is a lot of memory nowadays.
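If anyone wants to repeat the experiment, something like this Pillow sketch does the export step (the input file name is made up; flipping between browser tabs is still up to you):

    # Export the source image at a few JPEG quality settings and report sizes.
    import os
    from PIL import Image

    img = Image.open("original.png").convert("RGB")  # hypothetical source file
    for q in (90, 20, 10):
        out = f"test_q{q}.jpg"
        img.save(out, "JPEG", quality=q)
        print(out, os.path.getsize(out), "bytes")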

