Hacker News | bnferguson's comments

Absolutely love this little thing. Picked a couple up back when it was still a Kickstarter and was super surprised at the build quality (shockingly heavy for its size) and how smoothly everything went.

It's not a thing I use every day, but it's sooo much nicer than having to unplug and lug my Proxmox server up from the meter closet anytime there's an issue.


The book "Mathematica: A Secret World of Intuition and Curiosity" has a large thread exploring this among both historical and contemporary mathematicians: how people who seem to have an almost supernatural gift for math are often just able to "see" more clearly. Not in equations or words, but images.

It also discusses developing that ability/discipline, and the difficulty of transcribing what you now intuitively know into notation and equations so other mathematicians can understand.

It's a book that's stuck in my head since I read it; I keep wondering how to apply some of this to other problem spaces.


I think you remember correctly! A lot of the BBS software would just show what the current user was doing as if it were their screen. I believe from there you could start a chat as the sysop and interact with them; otherwise you kinda just watched.

The few times I ran one (I was a sysop on a few, but remotely), I was always a little creeped out that I could see people typing messages, etc. It felt like invading their privacy.


Feels like Zig is starting to fill that role in some ways. Fewer sharp edges and a bit more safety than C, a more modern approach, and it even interops really well with C (it's even possible to mix the two). I know a couple of Rust devs who have said it seems to scratch that C itch while being more modern.

Of course it's still really nice to just have C itself being updated into something that's nicer to work with and easier to write safely, but Zig seems to be a decent other option.


(Self-promotion) In principle one should be able to implement a fairly mature pointer-provenance checker for Zig without changing the language. A basic proof of concept (don't use this; branches and loops have not been implemented yet):

https://www.youtube.com/watch?v=ZY_Z-aGbYm8


How close are Zig's safety guarantees to Rust's? Honest question; I don't follow Zig development. I can't take C seriously because it hasn't even bothered to define provenance until now, but as far as I'm aware, Zig doesn't even try to touch these topics.

Does Zig document the precise mechanics of noalias? Does it provide a mechanism for controllably exposing or not exposing provenance of a pointer? Does it specify the provenance ABA problem in atomics on compare-exchange somehow or is that undefined? Are there any plans to make allocation optimizations sound? (This is still a problem even in Rust land; you can write a program that is guaranteed to exhibit OOM according to the language spec, but LLVM outputs code that doesn't OOM.) Does it at least have a sanitizer like Miri to make sure UB (e.g. data races, type confusion, or aliasing problems) is absent?

If the answer to most of the above is "Zig doesn't care", why do people even consider it better than C?


Safety-wise, Zig is better than C because if you don't do "easily flaggable things"[0] it doesn't have buffer overruns (including protection in the case of sentinel-terminated strings) or null pointer dereferences. Where this lies on the spectrum of "C to Rust" is a matter of judgment, but if I'm not mistaken that covers easily a majority of memory-safety-related CVEs. There's also no UB in debug, test, or release-safe builds. Note: you can opt out of release-safe on a function-by-function basis. IIUC noalias is safety-checked in debug, test, and release-safe.

In a sibling comment, I mentioned a proof of concept I did; if I had the time to complete it properly, it should give you near-Rust-level checking on memory safety, plus automatically flag sites where you need to inspect the code. At the point where you're using Miri, you're already bringing extra tooling into Rust, so in practice Zig + zig-clr could be the equivalent of "what if you moved borrow checking from rustc into Miri".

[0] Type erasure, or using known-dangerous types like C pointers or non-slice many-item pointers.


This is very much a "Draw the rest of the fucking owl" approach to safety.


What percentage of CVEs are null-pointer problems or buffer overflows? That's what percentage of the owl has been drawn. If someone (or I) builds out a proper zig-clr, then we get to, what, 90%? Great. Probably good enough; that's not far off from where Rust is.


Probably >50% of exploits these days target use-after-frees, not buffer overflows. I don’t have hard data though.

As for null pointer problems, while they may result in CVEs, they’re a pretty minor security concern since they generally only result in denial of service.

Edit 2: Here's some data: In an analysis by Google, the "most frequently exploited" vulnerability types for zero-day exploitation were use-after-free, command injection, and XSS [3]. Since command injection and XSS are not memory-unsafety vulnerabilities, that implies that use-after-frees are significantly more frequently exploited than other types of memory unsafety.

Edit: Zig previously had a GeneralPurposeAllocator that prevented use-after-frees of heap allocations by never reusing addresses. But apparently, four months ago [1], GeneralPurposeAllocator was renamed to DebugAllocator and a comment was added saying that the safety features "require the allocator to be quite slow and wasteful". No explicit reasoning was given for this change, but it seems to me like a concession that applications that need high performance generally shouldn't be using this type of allocator. In addition, it appears that use-after-free is not caught for stack allocations [2], or allocations from some other types of allocators.

Note that almost the entire purpose of Rust's borrow checker is to prevent use-after-free. And the rest of its purpose is to prevent other issues that Zig also doesn't protect against: tagged-union type confusion and data races.

[1] https://github.com/ziglang/zig/commit/cd99ab32294a3c22f09615...

[2] https://github.com/ziglang/zig/issues/3180

[3] https://cloud.google.com/blog/topics/threat-intelligence/202...


Yeah, I don't think the GPA is really a great strategy for detecting UAF, but it was a good try. It basically creates a new virtual page for each allocation, so the kernel gets involved and (I think) there is more indirection for any given pointer access. So you can imagine why it wasn't great.

Anyways, I am optimistic that UAF can be prevented by static analysis:

https://www.youtube.com/watch?v=ZY_Z-aGbYm8

Note that since this sort of technique interfaces with the compiler, unless the dependency is in a .so file it will detect UAF in dependencies too, whether or not the dependency chooses to run the static analysis as part of its software quality control.


Fair enough. In some sense you’re writing your own borrow checker. But (you may know this already) be warned: this has been tried many times for C++, with different levels of annotation burden imposed on programmers.

On one side are the many C++ "static analyzers" like Coverity or clang-analyzer, which work with unannotated C++ code. On the other side is the "Safe C++" proposal (safecpp.org), which is supposed to achieve full safety, but at the cost of basically transplanting Rust's type system into C++, requiring all functions to have lifetime annotations, disallowing mutable aliasing, and replacing the entire standard library with a new one that follows those rules. Between those two extremes there have been tools like the C++ Core Guidelines Checker and Clang's lifetimebound attribute, which require some level of annotations, and in turn provide some level of checking.

So far, none of these have been particularly successful in preventing memory safety vulnerabilities. Static analyzers are widely used in industry but only find a fraction of bugs. Safe C++ will probably be too unpopular to make it into the spec. The intermediate solutions have some fundamental issues (see [1], though it’s written by the author of Safe C++ and may be biased), and in practice haven’t really taken off.

But I admit that only the “static analyzer” side of the solution space has been extensively explored. The other projects are just experiments whose lack of adoption may be due to inertia as much as inherent lack of merit.

And Zig may be different… I’m not a Zig programmer, but I have the impression that compared to C++ it encourages fewer allocations and smaller codebases, both of which may make lifetime analysis more tractable. It’s also a much younger language whose audience is necessarily much more open to change.

So we’ll see. Good luck - I’d sure like to see more low-level languages offering memory safety.

[1] https://www.circle-lang.org/draft-profiles.html


One of the key things in Sean's "Safe C++" is that, like Rust, it actually technically works. If we write software in the safe C++ dialect we get safe programs just as if we write ordinary safe (rather than ever invoking "unsafe") Rust we get safe programs. WG21 didn't take Safe C++ and it will most likely now be a minor footnote in history, but it did really work.

"I think this could be possible" isn't an enabling technology. If you write hard SF it's maybe useful to distinguish things which could happen from those which can't, but for practical purposes it only matters if you actually did it. Sean's proposed "Safe C++" did it, Zig, today, did not.

There are other obstacles, like adoption, as we saw for "Safe C++", but they're predicated on having the technology at all; you cannot adopt technologies which don't exist. That's just make-believe, which I think is the path WG21 has already set out on.


> Safe C++ will probably be too unpopular to make it into the spec.

Not just that, but the committee accepted a paper that basically says its design is against C++'s design principles, so it's effectively dead forever.


This was adopted as standing document SD-10 https://isocpp.org/std/standing-documents/sd-10-language-evo...

Here's somebody who was in the room explaining how this was agreed as standing policy for the C++ programming language.

"It was literally the last paper. Seen at the last hour. Of a really long week. Most everyone was elsewhere in other working group meetings assuming no meaningful work was going to happen."


> Good luck

Thanks! I think this could be implemented as a (3rd party?) compiler backend.

And yeah, if it gets done quickly enough (before 1.0?) it could get enough momentum that it gets accepted as "considered to be best practice".

Honestly, though, I think the big hurdle for C/C++ static analysis is that lots of dependencies get shipped around as .so's and once that happens it's sort of a black hole unless 1) the dependency's provider agrees to run the analysis or 2) you can easily shim to annotate what's going on in the library's headers. 2) is a pain in the ass, and begging for 1) can piss off the dependency's owner.


As usual, the remark: much of Zig's safety over C has been present since the late 1970s in languages like Modula-2, Object Pascal, and Ada, but sadly they weren't born with curly brackets, nor did they bring a free OS to the uni party.


Love this write-up (and ones like it; there's so much good stuff in their archives). The stuff GloriousCow is doing with MartyPC is just so impressive.


What impresses me most is that, back then, we thought the PC to be an extremely boring machine with absolutely crappy graphics.

We didn't have hackers like these, it'd seem. Or they were (understandably) distracted by Amigas and Ataris.


I might be wrong, but doesn't he mention at the start that the effect isn't possible on real hardware? Like there's an emulator trick happening? I'm not diminishing the skill involved in this, but if it requires a trick in the emulator, it can't be said that a person could have accomplished this on the hardware at the time.


Video footage of it running on real hardware: https://www.youtube.com/watch?v=BdM5j96tEpE

Back when the demo was released, there were no emulators capable of running it all the way through. People are free to believe what they want, but as we presented it at a demoparty (Evoke 2022), I can tell you that the organizers wouldn't have allowed it in the compo lineup if they didn't see it running on the hardware in person. :)

I am in awe of GloriousCow's crazy achievement in debugging it on MartyPC and making it work to perfect accuracy... that's true next-level skill, and the rest of the group would undoubtedly echo my sentiment!


No. The author is saying that, in order to run the demo on MartyPC, the emulator previously required a small "hack" (in this context, that means a slight inaccuracy in emulation) to get the effect started. The post then covers the process of fixing multiple bugs such that the "hack" is no longer needed to match behavior on hardware, and thus MartyPC's behavior now more accurately models hardware.

Offtopic: in recent years, I've stopped saying "real hardware" unless also using the term "emulated hardware". To me, it's either "hardware" vs "emulator", or "real hardware" vs "emulated hardware".


Not really sure I see the benefit of the linguistic choices there. It makes sense, but I just don't see the point. It doesn't help that you can implement emulators in hardware (e.g. FPGA-based emulation), and that the "emulated hardware" phrase can be misinterpreted as a reference to the real device in question.


I can see the point of trying to make a distinction, but it is muddy.

An emulator can operate on many different levels of trying to match the behavior of the underlying hardware. CPU emulators can emulate at the opcode level (easier) or try to increase accuracy by emulating the CPU pipeline cycle by cycle (harder).

In this particular case, the distinction between an "emulator" and a "hardware emulator" seem apt because the article discusses that the required fixes needed to start tracking the state of individual pins of the hardware chips. This, to me, represents that the emulation needed to "go down another level" and model the physical hardware to a certain degree to gain the needed accuracy.

Having a way to mark that difference is useful.


No, the main thing to take away is that MartyPC is the only emulator that can accurately run this demo. Before that, the demo would only run on real hardware.


The "hack" needed to be there for the emulator to reproduce the effect as it happens on real hardware, before the author managed to fix those last few bugs.

>Historically, many famous emulators have relied on title patches to work around bugs or inaccuracies to get games to play. As emulators have improved in accuracy and research has uncovered more details about how systems work, gradually these title hacks became less necessary.


Quite. I had a great Christmas break a couple of years back going through all the missions. It has some rough edges, but people have also implemented things like RISC-V in it, so it's quite complete. The game portion is enough for fun and exploration.

I got all the way to the final missions where you're writing your own assembly (that you created) to solve various programming puzzles. Only stopped because I got busy with something else and break was over. I def recommend it. Reminded me at the time of Code by Charles Petzold but applied.

(Last I checked, the author was rewriting much of it for performance reasons and to fix up a few gotchas: things that should be possible with circuits but aren't here. No idea if that's still the plan, but that was my understanding.)


That's the nice thing! You don't need to optimise the language and build a JIT as a smaller company; Shopify already did that for you. Just like Google did for JavaScript, which led to JavaScript having any performance at all (which led to Node being a thing).

Also remember that Shopify didn't start out making billions. They started as a small side project on a far, far slower version of Ruby and Rails.

Same with GitHub, same with many others that are either still on Rails or started there.

You can optimise things later once you actually have customers, know the shape of your problem and where the actual pain points are/what needs to be scaled.

To me, I care a ton about performance (it's an area I work in), but there's not a lot of sense in sacrificing development agility for request speed on things that may not matter or be things people will pay for. Especially when you're small.


I've somewhat regularly seen around 1400 show up. Which is just.... wow.

I think of that whenever someone is angry about cookie banners. Like, that was just happening before without me knowing.

Now when I see that, often I just leave as this site is not a place of honor, no highly esteemed article is written here... nothing valued is here.


This question comes up a lot with synths and especially hardware and Eurorack gear.

For me not being focused on finishing anything is what got me back into playing music (and eventually performing live sets).

Previously there was so much pressure to produce end results I couldn't even get started and lost all joy in the process. At some point I got some hardware to just play with and never really looked up how to save anything (was a Drumbrute, Meeblip and I borrowed a Beatstep). Was making odd acid tracks and silly sketches on my kitchen table every night that I'd lose as soon as I turned it all off. It was oddly freeing, and really fun.

A year later I was playing out gigs ranging from 20 mins to 3 hours. And one could even argue that I've still not finished any songs despite hours of recordings and performance as it's mostly improvisational.

Now I'm happily back in the noodling around and having fun stage again and loving it (though about to switch gears again).

Anyway - just figured I'd weigh in as there's often a lot of pressure to finish or record things when just playing around or building weird synths is in and of itself a really really fun (if expensive) hobby and maybe someone needs to hear that. :)

(also perfectly fine if your focus is on complete songs, of course)


This is useful data, thanks.

I'm going through a sort of mini-midlife crisis right now and one of the things I'm thinking about is whether I can make music part of my life again. I used to be in a rock band and it was tons of fun, but I'm in my 40s with kids so the logistics of rehearsing with other people and playing shows at night make that unlikely.

Another path I'm considering is making electronic music. That's mostly what I listen to and I used to tinker with it before I started a band, so I have some experience with synthesizers, beats, etc. It's much more amenable to my life style now because I can do it after the kids go to bed. But also, back then, I had a lot of trouble getting anything done and often ended up feeling disappointed.

It's not enough for me to just noodle with a synth for a few hours. I want something I can share with other people, which implies to me that I need to be able to finish things. So I'm just trying to figure out strategies for that before I drop money on gear only to have it collect dust.


You don't need much money to get started with a MIDI controller and VSTs, assuming you have a good PC, which most people here on HN do.

Even though the hardware people seem like the most outspoken on most forums, I enjoy doing everything in the box now and my hardware mostly just sits under its dust covers.

I have an Ableton Push2 and it's an amazing piece of kit, more like an instrument than a controller. The layout makes much more natural sense to me than a keyboard, I think because I used to play guitar.

I'm the same in that I would like to have something at the end that I can share with other people, but the finishing stuff part I haven't quite worked out.

So far that has required more discipline than I have been able to muster, but I do feel I'm making progress and my workflow is improving.


Yeah, I can definitely afford to sink some money into this. It's more that I'm wired to hate myself if I spend money on something and don't use it. There are few things I despise more than feeling like I'm a poseur.

> Even though the hardware people seem like the most outspoken on most forums,

Good point!


I don't like wasting money either.

Of course it's up to you, but if I were you I would just get a controller and just get started. It's never been easier, cheaper or better. If you want to save money you could just get something secondhand from ebay.

I am invested in and love Ableton but there are lots of good options out there.

Like I say, I don't think hardware is necessary these days. It is fun to have dedicated knobs for everything but if you get a good controller it's much more flexible overall.

I can put 10 instruments into a single instrument rack on a track in Ableton and immediately have 8 macro knobs to control whatever I want about all 10 instruments at once.

You can easily do crazy stuff like create mutant instruments that morph between completely different instruments or samples, and change effect settings according to how hard you play the notes, or what part of the bar you're in. The limit is really only your imagination!

Then if you want, you can just duplicate that whole complex track with a single keypress. It's crazy. You can't do that with hardware!

Getting it to sound musical is the hard part, and that's where hardware shines - because you're limited in options, you can usually just turn it on and get a good sound out if it immediately. But IMO that's not a real reason that hardware is better because of course you could limit yourself to that in software too.

You sound like you can afford it and have the desire. I would just buy something and get started. It's heaps of fun, as long as you don't put too much pressure on yourself!


> if I were you I would just get a controller and just get started.

This is basically my plan. Except that I'm deliberately putting it on hold until I'm done with the book I'm writing because I really don't have the time and definitely don't need the distraction.

> It's heaps of fun, as long as you don't put too much pressure on yourself!

But putting pressure on myself is like my #1 personality trait. :)


I'd definitely agree with the GP's idea of just doing it for the fun of it; if something finished pops out, great, and otherwise it doesn't really matter, you still had fun.

One interesting aspect of the modular thing that hasn't been discussed yet is limitation. With an in-the-box setup with all the plugins it's easy to get lost in the endless choice of what could be done. Each song sounds different because there's no consistency of setup.

With a hardware setup you're usually limited to a small number of devices/modules - this can be very powerful in focussing the mind. Each time you come back to your setup it's the same, but you'll dig a bit deeper to get something else out of it. Eventually you master it and produce the best of what that thing can do.

A lot of great early electronic music came out of limitations. Voodoo Ray was originally going to be called Voodoo Rage, but the sampler had a limited amount of memory left, so he cut off 'Rage' to make it 'Ray'.

It can be argued that the amazing amount of music that comes out of the 'standard band' setup is also a product of limitations.


Yes, this has absolutely been my experience.

Joining a band was a revelation because all I had to do was make my bass part sound good and there were relatively limited ways to do that. On top of that, a bass or guitar just sounds pretty nice right "out of the box". With electronic music, I found it took quite a bit of effort to even get to a single sound that sounded rich and satisfying. It felt a lot more like having to be a luthier when all I wanted to do was play. (At the same time, I didn't want to just use presets either, because I didn't want sounds that were too familiar...)


Depends where you're comparing them to in the US but in general most European tech wages are lower than the US. If comparing to SF perhaps even shockingly lower.

It can be hard to swallow at first, but when you account for lower rent, much cheaper health insurance, and far more vacation days (~20-24 by law - and no one actually expects you to be available) along with safer cities with generally higher quality of life, in my opinion it ends up being a very, very good trade off.

I moved to Amsterdam some years back (and have worked for a Berlin based company) and don't know if I could ever move back to the states.


+1 for Amsterdam (the Hague, Rotterdam and Utrecht). Rent is getting more expensive, but quality of life is very good, healthcare is 60€/month, top universities at 2000€/yr and overall good job prospects


I'd like to add that I've found the Netherlands to be among the most welcoming countries to expats that I've spent meaningful amounts of time in. I've moved away, but I'll always feel like home there.

Also, at least in the cities, everyone will be fluent in English.

If you pick city other than Amsterdam, or are willing to have a 45 min+ commute, rents should still be reasonable.


Where do I get healthcare for so little? I pay €90 per month in insurance and an additional €240 per month through Zvw tax.


You should compare US wages to EU contractors' income. The rights between the two are similar, and the income is too (~€150,000 per year). But everything is cheaper here, so your quality of life will be much higher.


I've heard that before from contractor friends, especially in London. Another option is working for a US company from Europe which I've done twice now. Wages can be quite a bit higher than working for a local company. Often both the US company and the employee feels like they're getting a great deal as you meet in the middle on wages.

Also, the rights point you bring up is pretty key: the amount of protections, safety net, and rights you get as an employee in Europe makes it so that needing to save copious amounts of money in case things go sideways is not a thing.

