
> Among other things, this paper is not about an implementation methodology that ensures the GCD algorithm will take exactly the same amount of time regardless of the input.

The introduction to the paper discusses the importance of developing constant-time implementations of Euclid's algorithm to avoid timing attacks, pointing out how algorithms with input-dependent conditional branches are susceptible to, for example, cache-timing attacks.

As far as I understand it, a constant-time algorithm, in the cryptographic sense, is one whose running time is uncorrelated with the secret inputs. If it were correlated, then timing attacks would let an attacker infer the secret information. Presumably, in the absence of a randomized model, the definition is that the algorithm must run in exactly the same amount of time for every input of a given fixed size. (At least one of the cited papers agrees with this definition.)
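To make the distinction concrete, here is a toy comparison (my own illustration, not from the paper, and the function names are made up) of a secret-dependent check versus a constant-time one:

  #include <stddef.h>
  #include <stdint.h>

  /* Leaky: returns early at the first mismatch, so the running time
     reveals how long the matching prefix is. */
  int leaky_eq(const uint8_t *a, const uint8_t *b, size_t n) {
      for (size_t i = 0; i < n; i++)
          if (a[i] != b[i]) return 0;
      return 1;
  }

  /* Constant time (in the cryptographic sense): always touches all n
     bytes and has no secret-dependent branches, so the running time
     depends only on n, not on the contents of a and b. */
  int ct_eq(const uint8_t *a, const uint8_t *b, size_t n) {
      uint8_t diff = 0;
      for (size_t i = 0; i < n; i++)
          diff |= a[i] ^ b[i];
      return diff == 0;
  }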

The contribution of the paper is a constant time divstep algorithm. The quote "All of these algorithms take constant time, i.e., time independent of the input coefficients for any particular (n, c)" seems to have ambiguous quantifiers and should be rewritten. It should be something like "given a fixed number of iterations n and a fixed number of input power series coefficients c, our divstep takes a constant n(c + n) simple operations, and the algorithm is constant time, i.e., the running time is not correlated with the precise inputs." Section 1.1 gives exact cycle counts for various CPUs.

Section 7 discusses how to use bit masking to actually implement it in a constant-time way (in the cryptographic sense).
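As a rough illustration of that masking style, here is my own toy, single-limb sketch of one integer divstep iteration written without secret-dependent branches. It is not the paper's code (which works on full-size integers and power series), and it assumes f is odd (the divstep invariant), arithmetic right shift for signed values, and no overflow:

  #include <stdint.h>

  /* One divstep: if delta > 0 and g is odd, then
     (delta, f, g) <- (1 - delta, g, (g - f)/2);
     otherwise (delta, f, g) <- (1 + delta, f, (g + (g & 1)*f)/2).
     All secret-dependent choices are made with masks, not branches. */
  static void divstep_ct(int64_t *delta, int64_t *f, int64_t *g)
  {
      int64_t d = *delta, a = *f, b = *g;

      /* mask = all-ones iff (delta > 0 and g is odd), else all-zeros */
      int64_t mask = -((int64_t)(d > 0) & (b & 1));

      /* delta <- 1 - delta if mask, else 1 + delta (conditional negation) */
      *delta = 1 + ((d ^ mask) - mask);

      /* f <- g if mask, else f (conditional select) */
      *f = (a & ~mask) | (b & mask);

      /* g <- (g + c*f)/2 with c = -1 when mask is set, c = g & 1 otherwise;
         the numerator is always even, so the shift is an exact division. */
      int64_t c = mask | (~mask & (b & 1));
      *g = (b + c * a) >> 1;
  }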

Anyway, divstepsx is certainly not O(1) time unless n and t are fixed constants.


