
Is there even any performance benefit on modern CPUs? I tried to consult some real tables but I'm not experienced enough to be sure I'm reading them correctly. If I'm reading something like [1] properly, it looks like it is not a clear win on modern hardware to use fixed point & integer operations. It would depend on the ratio of addition/subtraction/multiplication to division.

(Obviously one must factor out a lot of local considerations, which modern CPUs are full unto overflowing with; I'm kind of looking at a very, very broad average performance question across code bases doing enough different math operations to average out, not whether one particular loop can theoretically run faster or slower with one or the other.)

[1]: https://stackoverflow.com/questions/2550281/floating-point-v...



It depends a lot on CPU architecture. Floating point units may be tied to each core, or they may be shared, so it may further depend on other concurrent workloads.

There's also SIMD instructions. Modern CPUs have built-in instructions for handling multiple ints or floats as a vector. If you can get your fixed-point values to fit into 8-bit or 16-bit fields instead of 32, then the same sized vector units can handle more values per instruction.


My gut says you also get some benefit from auxiliary choices. With a lot of precomputed constants, you can probably avoid a lot of known multiplications. That said, yeah, if you have to start doing a lot of fixed point multiplications, you could eat up any savings you had.


Address calculations are a place where fixed point can still have an advantage -- it's often less latency and fewer ops to step a fixed-point accumulator and shift it into an address offset on the integer units than to step a floating point accumulator, convert it to integer, then move that over from the FP/vector units to the integer units.



