
It's pretty frustrating to see the discussion on this submission dominated by people litigating the "constant time" terminology. The authors, Bernstein and Yang, are using constant time in the conventional, complexity theoretic sense of the word. Here is a quote from Section 2, "Organization of this paper":

> We start with the polynomial case. Section 3 defines division steps. Section 5, relying on theorems in Section 4, states our main algorithm to compute c coefficients of the nth iterate of divstep. This takes n(c + n) simple operations. We also explain how “jumps” reduce the cost for large n to (c + n)(log cn)^{2+o(1)} operations. All of these algorithms take constant time, i.e., time independent of the input coefficients for any particular (n, c).

In particular note that last sentence. The asymptotic runtime of the presented algorithm does not depend on the inputs, n and c. This algorithm analysis is confirmed throughout the remainder of the paper, which walks through each stage of the algorithm. Now let's look at a few canonical definitions of "constant time", i.e. O(1).

From Skiena, we have:

Constant functions - f(n) = 1 - Such functions might measure the cost of adding two numbers, printing out the "Star Spangled Banner", or the growth realized by functions such as f(n) = min(n, 100). In the big picture, there is no dependence on the parameter n.

Likewise from Sedgewick & Wayne:

Constant. A program whose running time's order of growth is constant executes a fixed number of operations to finish its job; consequently its running time does not depend on N. Most Java operations take constant time.

I'll update if I find a choice example from Knuth in TAOCP, but I think this suffices. The discussion about whether or not the cryptographic use of the term satisfies the complexity theoretic sense of the term is a red herring; it's a distinction without a difference. Algorithm analysis focuses on asymptotic behavior, which is definitionally given by tail behavior, or rate of growth of a function. Among other things, this paper is not about an implementation methodology that ensures the GCD algorithm will take exactly the same amount of time regardless of the input.

______________________

1. The Algorithm Design Manual, 2nd Edition, § 2.3.1 Dominance Relations, Page 39

2. Algorithms, 4th Edition, § 1.4 Analysis of Algorithms, Page 187



> Among other things, this paper is not about an implementation methodology that ensures the GCD algorithm will take exactly the same amount of time regardless of the input.

The introduction to the paper talks about the importance of developing constant-time implementations of Euclid's algorithm to avoid timing attacks, pointing out how algorithms with conditional jumps are susceptible to, for example, cache timing attacks.

As far as I understand it, a constant time algorithm, in the cryptographic sense, is one whose running time is uncorrelated with the precise secret information. If it were correlated, then timing attacks would let you infer the secret information. Presumably, absent a randomized model, the definition is that the algorithm must run in exactly the same amount of time for every input of a given fixed size. (At least one of the cited papers agrees with this definition.)

The contribution of the paper is a constant time divstep algorithm. The quote "All of these algorithms take constant time, i.e., time independent of the input coefficients for any particular (n, c)" seems to have ambiguous quantifiers and should be rewritten. It should be something like "given a fixed number of iterations n and a fixed number of input power series coefficients c, our divstep takes a constant n(c + n) simple operations, and the algorithm is constant time, i.e., the running time is not correlated with the precise inputs." Section 1.1 gives exact cycle counts for various CPUs.

Section 7 discusses how to use bit masking to actually implement it in a constant time way (in the cryptographic sense).
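
To make the masking idea concrete, here is a minimal branch-free conditional swap in C, the kind of primitive such implementations are built from. This is my own generic sketch of the technique, not code from the paper:

    #include <stdint.h>

    /* Conditionally swap *a and *b when bit == 1, without branching on bit.
       bit must be 0 or 1; the mask is all-zeros or all-ones accordingly,
       so the same instructions execute (and the same memory is touched)
       either way. */
    static void ct_cswap(uint64_t *a, uint64_t *b, uint64_t bit)
    {
        uint64_t mask = (uint64_t)0 - bit;
        uint64_t t = mask & (*a ^ *b);
        *a ^= t;
        *b ^= t;
    }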

Anyway, divstepsx is certainly not O(1) time unless n and t are fixed constants.


I think you misread slightly.

> All of these algorithms take constant time, i.e., time independent of the input coefficients for any particular (n, c).

This means that once you have chosen a particular n and c, the time no longer varies. However, if n and c vary, the running time is definitely allowed to vary also (as the formulas n(c + n) and (c + n)(log cn)^{2+o(1)} clearly do).


No, I didn't misread. You and I are in (apparently violent) agreement. Constant time does not mean that running time cannot vary, in either complexity theory or cryptography. There are misconceptions on both sides here, with regard to what the terminology means in both complexity theory and cryptography.

For precision, I'll start with a good definition[1] for what "constant time" means in cryptography:

> Constant-time implementations are pieces of code that do not leak secret information through timing analysis. This is one of the two main ways to defeat timing attacks: since such attacks exploit differences in execution time that depend on secret elements, make it so that execution time does not depend on secret elements. Or, more precisely, that variations in execution time are not correlated with secret elements: execution time may still vary, but not in a way that can be traced back to any kind of value that you wish to keep secret, in particular (but not only) cryptographic keys.

Secure, constant time cryptographic algorithms need not have unvarying execution time. Now going back to complexity theory, it is also an extraordinarily common misconception that "constant time" means "the algorithm has the same execution time regardless of the size of the input." This is not the case. Big O notation doesn't even care about what always happens; it cares about what happens in the worst case. When we use "constant time" in the O(1) sense of the word, we are not precluding the possibility of an algorithm having variable execution time. Again, for precision, we are simply saying that the execution time (number of operations, etc.) has an asymptotic upper bound which is independent of the input. The execution time may vary with the input, and generally speaking it will.
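
To make that distinction concrete, here is a toy C sketch of my own (not from the paper, and not Pornin's code). The first comparison leaks how many leading bytes of a guess are correct through its running time; the second is constant time in the cryptographic sense even though its running time still grows with the (public) length n, i.e. it is not O(1) in the complexity sense:

    #include <stddef.h>
    #include <stdint.h>

    /* Leaky: returns at the first mismatch, so running time is correlated
       with the number of leading bytes the attacker has guessed correctly. */
    int leaky_compare(const uint8_t *secret, const uint8_t *guess, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (secret[i] != guess[i])
                return 0;
        return 1;
    }

    /* Constant time in the cryptographic sense: always touches all n bytes,
       so the running time depends on the public length n but not on the
       secret contents. */
    int ct_compare(const uint8_t *secret, const uint8_t *guess, size_t n)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= secret[i] ^ guess[i];
        return diff == 0;
    }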

_________________________

1. Thomas Pornin, Why constant time crypto? https://bearssl.org/constanttime.html


> Constant time does not mean that running time cannot vary, in either complexity theory or cryptography.

Again, overloading of the term "constant time" causes pointless misunderstanding and arguments. Your statement here is wrong.

In complexity theory the term "constant time" does indeed mean the running time is bounded even as the input size grows without bound (i.e. goes to infinity), although it can vary within that bound.

In cryptography the term "constant time" is sometimes used to mean a different concept, that the operation actually takes constant non-varying time, so that an attacker can't exploit this as a side channel to figure out the input values.

The paper seems to be using the latter meaning.


> In cryptography the term "constant time" is sometimes used to mean a different concept, that the operation actually takes constant non-varying time, so that an attacker can't exploit this as a side channel to figure out the input values.

Note that I cited Thomas Pornin for my definition of constant time cryptography, who is a cryptographer in theory and implementation. It is emphatically not necessary for software to run with unvarying execution time in order for it to be "constant time" according to the cryptographic sense of the term. This will be a poor hill for you to die on, but I invite you to provide literature supporting your alternative definition.


The definition of “constant time” in computational complexity theory is O(1) time as a function of the input size n, i.e., the time taken for an input of size n is bounded by some constant regardless of n. This is clear even from the two quotes you picked, from Skiena and from Sedgewick: both say that the running time does not depend on the input size n.

What we have here in the cryptography sense is different: the running time n(c+n) clearly does depend on n; it just does not depend on the actual input. That is, it is only constant across inputs of the same size.

I am not sure how you're claiming that n(c+n) is O(1) when it's clearly Θ(n^2) (i.e. O(n^2) and Ω(n^2)).
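
Spelled out, treating c as a fixed parameter and using the thread's notation:

    n(c + n) = cn + n^2
    n^2 <= n(c + n) <= 2n^2   for all n >= c

so n(c + n) grows like n^2, not like a constant.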


> What we have here in the cryptography sense is different: the running time n(c+n) clearly does depend on n; it just does not depend on the actual input.

Can you explain to me what the input is, if not c and n?


Consider the sentence just before “This takes n(c + n) simple operations”, as even quoted by you: “Section 5 […] states our main algorithm to compute c coefficients of the nth iterate of divstep”. So from this it is clear that c and n are the number of coefficients desired and the number of iterations desired; they are not the input to the algorithm (the two numbers or polynomials whose gcd is sought).

I'm really curious about your comments here. To understand them better, could you take a moment to say which of these statements you disagree with:

(1) A function f is O(1) if it is bounded. That is, if there exists a constant C such that for all n, we have f(n) < C.

(2) When we say that an algorithm (or more precisely, its running time) is O(1), we mean that the function “the algorithm's running time as a function of its input size” is O(1). In other words, let's denote the time that the algorithm takes on input I by T(I), and let us denote f(n) = max_{|I|=n} T(I) where |I| denotes the size of input I. Then we mean that f(n) = O(1). In simpler terms: that the worst-case running time does not grow without bound as a function of the input size.

(3) The algorithm in this paper, to compute the GCD of two polynomials or integers, does not have a bounded running time. (In fact, the authors describe it by the term “subquadratic”, and compare it to earlier subquadratic algorithms: “[…] a constant-time algorithm where the number of operations is subquadratic in the input size—asymptotically n(log n)^{2+o(1)}.”) That is, given any constant C, there exist inputs to this algorithm (two sufficiently large polynomials or integers) such that the running time of the algorithm is greater than C.

(4) As (3) shows that the property described in (2) does not hold, the running time of the algorithm here is not O(1).

One can argue separately whether “constant time” should be used to mean

• (the usual computational complexity sense) that the worst-case running time, i.e. the function f(n) is O(1), or

• (the cryptography sense) that T(I) does not give any exploitable information about I beyond what is known, typically |I|.

This is just an unfortunate clash of terminology across the cultures of the two fields, and IMO not an interesting thing to debate — everyone can choose whatever terminology works for them. But your claim that the two definitions are the same is puzzling, as all of (1), (2), (3), (4) seem obvious to me.


Agreeing with you by example: In computational complexity the size of the input varies. In cryptography it typically doesn't: time is identical for all inputs of the same size.

Imagine a magic sorting algorithm that always takes 1 second to check if an input is sorted, and 9 seconds to sort the input if it wasn't sorted. It then returns the sorted value as output.

E.g. sorting "abcdefg" would check (taking 1 second) and then return (taking 0 seconds since it's already sorted). Sorting "gfedcba" would check (taking 1 second) and then sort (taking 9 seconds) and return. Given the complete works of Shakespeare it would check (taking 1 second) and then sort (taking 9 seconds) and return.

It's O(1), yet the time varies based on the input. From a computational complexity terminology standpoint it's constant time; from a cryptographic terminology (and common intuition) standpoint it's clearly not, since it doesn't always take the same time to run.


The confusion is that the word-size of the operands plays a role.

For example, some people would say that an operation like a⨯b takes constant time, or that its time complexity is O(1). However, other people (for example those implementing a bignum library) would say that the operation takes time that depends on the size (number of words) of the operands. In the case of the multiplication operation, see for example [1] for a complexity analysis, to get an idea of what I mean.

[1] https://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strasse...


I would suggest that you take the time to reflect on the fact that if almost everyone disagrees with you, it might point to something that you misunderstand, either about the meaning of what they are saying or about the subject itself.

Their paper shows that the algorithm takes constant time to compute the Cth coefficient of the Nth step for a given C and N, not that the running time is independent of C and N. C and N are not the inputs to the GCD. The running time is constant with respect to the inputs of the GCD; it does depend on C and N.



