braho's comments | Hacker News

Note that they were talking about _assembly_ code paths, and the parent explicitly mentioned that using anything other than assembly would give you more portability.


It's easy to write RISC-V assembly language that works on both 32-bit and 64-bit, using a handful of simple macros.
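
For example, a minimal sketch of the idea (this assumes a .S file that goes through the C preprocessor, so __riscv_xlen is predefined by the toolchain; the LREG/SREG/REGBYTES macro names and the label are only illustrative, several RISC-V firmware projects use a similar trick):

    #if __riscv_xlen == 64
    # define LREG     ld        /* load a register-sized value  */
    # define SREG     sd        /* store a register-sized value */
    # define REGBYTES 8
    #else
    # define LREG     lw
    # define SREG     sw
    # define REGBYTES 4
    #endif

        .text
        .globl  demo_prologue
    demo_prologue:
        addi    sp, sp, -2*REGBYTES   # same source assembles for RV32 and RV64
        SREG    ra, 0*REGBYTES(sp)
        SREG    s0, 1*REGBYTES(sp)
        /* ... function body ... */
        LREG    ra, 0*REGBYTES(sp)
        LREG    s0, 1*REGBYTES(sp)
        addi    sp, sp, 2*REGBYTES
        ret

Building the same .S file with an rv32 or rv64 toolchain picks the right width automatically.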


This is what I told a commenter.

But I don't even want to deal with those macros; I want a nearly zero-effort, straight copy/paste. The preprocessor scares me, because once some devs start to use it, many will want to push its usage to the max everywhere, and in the end your code path is tied to the preprocessor, sometimes with a whole new assembly language defined on top of it. If that layer gets complex, the "exit cost" skyrockets.

For instance, fasmg has an extremely powerful macro preprocessor, because that preprocessor is what is actually used to write the assemblers themselves. Given the tendency of devs to maximize the usage of their SDK tools, some code paths end up with little assembly and a lot of macros! Then you must embrace that new language as a whole before you can do anything real on the project, much like having to absorb the opaque "object-oriented model" of some big C++ projects first.

Personally, I use a "normal" C preprocessor to define a minimal layer for an Intel-syntax assembler, which allows me to assemble with fasmg, gas, and nasm/yasm. And I am careful to assemble with all of those assemblers all the time.

I do the same with C, using cproc/qbe, tinycc, and gcc, and I plan to add simple-cc/qbe. In effect I compile everything only about 1.1 times over: the extra 0.1 covers cproc/qbe plus tinycc, and gcc accounts for the rest.
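
To give an idea of what such a minimal layer can look like, here is a purely hypothetical sketch, not my actual header: the macro names are made up, only the gas and nasm branches are shown, and fasmg would get its own branch.

    /* demo.S -- run through cpp first, then feed the output to the
     * chosen assembler, e.g.:
     *   cpp -P -DASM_GAS  demo.S > out.s   && as   out.s            -o demo.o
     *   cpp -P -DASM_NASM demo.S > out.asm && nasm -f elf64 out.asm -o demo.o
     * (-P keeps cpp's line markers out of the output)
     */
    #if defined(ASM_GAS)
    # define INTEL_MODE     .intel_syntax noprefix
    # define SECTION_TEXT   .text
    # define GLOBAL(name)   .globl name
    #elif defined(ASM_NASM)
    # define INTEL_MODE                     /* nasm is Intel syntax already */
    # define SECTION_TEXT   section .text
    # define GLOBAL(name)   global name
    #endif

    INTEL_MODE
    SECTION_TEXT
    GLOBAL(return_one)
    return_one:
        mov eax, 1
        ret

The point is that the layer stays a handful of #define lines instead of growing into a second assembly language.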


Even though the FPGA fabric might encode the solution more effectively, there are other important differentiators: clock speed and memory bandwidth. GPUs have higher clock speeds and typically better memory bandwidth (the two are related, of course).

With their higher clock speeds, GPUs can readily outperform FPGAs on many problems.


I guess there is some AI algorithm that does the zoom as post-processing. The AI knows what the moon looks like, so it can fill in the blanks and compensate for a (relatively) crappy sensor.

So in the future there will be two kinds of cameras: those that can only show what others have seen before, and those that can truly capture new detail (true as in: without filling it in with estimations).


I second this.

I have been using sequence diagrams a lot for embedded and real-time systems over the last year, and travel time is important there.

Besides, it allows one diagram to be an abstraction of another, without modifying the execution times.


AutoHotkey seems to work for me


Not only is Martin Blais' explanation very helpful, his software, Beancount, is also very well suited to personal finance projects, especially if you have a developer background.

Personally, I combine beancount with fava and find it much better than e.g. GnuCash.


Is beancount the kind of tool that's mostly useful if you put a lot of time into it, like Emacs? Or is it immediately useful with even small investments of time?


I found ledger-cli (which inspired beancount) immediately useful. However, since you mentioned Emacs, I should say that I use ledger-mode in Emacs to add entries. But I do most of my reporting directly on the command line.


I found it easy to get started with the very basics (e.g. recording of simple transactions) and I'm just reading up more over time on how to handle more complex things like splitting expenses with a partner or investments. Thanks to Python I was also able to customise my setup almost right away. There is also an Emacs mode for those who have already made that investment.
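
For anyone wondering what those very basics look like, a minimal ledger is just a few lines (the account names, dates, and amounts below are made up):

    2024-01-01 open Assets:Checking      EUR
    2024-01-01 open Expenses:Groceries   EUR

    ; a simple transaction: the postings must sum to zero
    2024-01-05 * "Supermarket" "Weekly shopping"
      Assets:Checking      -42.50 EUR
      Expenses:Groceries    42.50 EUR

Point fava at that file (fava myledger.beancount) and you get the reports in the browser.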


I found it very easy to get started, but that's because I was already familiar with double-entry bookkeeping. If you're not, it will take some time to familiarize yourself with that concept.


Beancount+Fava is awesome if you're a developer who understands double-entry accounting.

