Hacker News | rabedik's comments

Great work, y'all


Hi HN, I was one of the engineers who worked on this. It was just a hackathon demo, but we're really excited about improving the way you interact with your computer.


Software Engineer, San Francisco CA

OcuSpec is a venture backed start-up developing motion control technology that is radically more powerful and affordable than anything currently available. We're seeking smart, passionate people interested in challenging problems and changing the way people interact with machines.

Desired Skills/Experience: Software architecture, cross-platform APIs, C/C++, parallel processing (GPU/CPU), computer graphics (OpenGL/DirectX), real-time systems. This is a great opportunity to work on and take ownership of bleeding-edge technology at an early stage. We offer very competitive compensation, great benefits, and an office near Caltrain.

http://www.ocuspec.com/




"For an arbitrary polynomial function [...] determining whether it’s convex is what’s called NP-hard. That means that the most powerful computers in the world couldn’t provide an answer in a reasonable amount of time."

eye twitch

A problem's complexity class only gives an indication of worst-case runtime. There are decent algorithms for a lot of NP-complete problems (airline/package routing, compiler backend optimizations, etc.) that don't require "the most powerful computers in the world". The definition of hardness/completeness in complexity theory can be boiled down to a sentence that's much more accurate than that, something like "at least as hard as every problem in NP".


In math (well, in proofs) an answer must be "the" answer; a "good enough" answer is not good enough.

Those algorithms you mention provide reasonable answers, but they do not guarantee that the answer is the best answer.


Your last statement is incorrect for one of the problems mentioned in the parent comment ("airline/package routing", which I take to be linear programming problems).

Simplex-type algorithms are typically used for these problems. These algorithms have exponential time complexity in the worst case, yet in practice, well-designed simplex implementations run in low-order-polynomial time. They terminate with an exact optimum.

This case, and others, are examples of situations where conventional complexity analysis fails to provide insight into real-world performance.

Last I knew, people were trying to analyze expected time complexity of certain (analysis-friendly) simplex algorithms under distributional constraints on the input structure matrices. But it's important to understand that this is mathematical back-filling to explain what is already known to hold in practice.


I would expect an airline routing problem to be an integer program (because you can't send 2/3 of an aeroplane along one route and 1/3 along another), and integer programming is NP-hard. On the bright side, at all times you have an estimate of the best possible solution so you can terminate early when you're within some percentage (say 1%) of optimal.

Interestingly, they still use the simplex method repeatedly during the branch-and-bound solving process, because it only requires a little work to adjust at each step, whereas the methods with guaranteed polynomial worst case tend to take that polynomial time regardless of how "close" they start to the optimum. So, as you say, conventional complexity analysis suggests that simplex would be an awful lot less useful than it is.
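To make the branch-and-bound idea concrete, here's a toy sketch in Python: a 0/1 knapsack (a tiny integer program) solved by branching, with the fractional relaxation as the upper bound and a `gap` parameter for the "terminate when within some percentage of optimal" trick. This is an illustration of the technique only, not how an airline solver (or its simplex warm-starting) actually works.

```python
def lp_bound(values, weights, capacity, fixed):
    """Upper bound from the fractional relaxation: honor the fixed 0/1
    choices, then pack the remaining items greedily by value density,
    allowing a fractional piece of the last one."""
    total_v = sum(v for v, f in zip(values, fixed) if f)
    total_w = sum(w for w, f in zip(weights, fixed) if f)
    if total_w > capacity:
        return -1.0  # this branch is infeasible
    free = sorted(range(len(fixed), len(values)),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        if total_w + weights[i] <= capacity:
            total_w += weights[i]
            total_v += values[i]
        else:
            total_v += values[i] * (capacity - total_w) / weights[i]
            break
    return total_v

def branch_and_bound(values, weights, capacity, gap=0.0):
    """Maximize total value within the weight budget; prune any branch
    whose relaxation bound can't beat the incumbent by more than `gap`."""
    best = [0.0]
    def recurse(fixed):
        bound = lp_bound(values, weights, capacity, fixed)
        if bound <= best[0] * (1.0 + gap):
            return  # early termination: subtree can't improve enough
        if len(fixed) == len(values):
            best[0] = bound  # no free items left, so the bound is exact
            return
        recurse(fixed + [1])  # branch: take the next item
        recurse(fixed + [0])  # branch: skip it
    recurse([])
    return best[0]
```

Setting `gap=0.01` makes the solver stop exploring any subtree that can't beat the incumbent by more than 1%, which is exactly the early-termination guarantee described above.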


For linear programming there are several well-known polynomial-time algorithms (some interior-point methods: ellipsoid, projective, etc.)--the problem itself is in P, not NP-hard.

For worst-case complexities it's often just a small region of the problem space that makes the bound large--but enumerating or cutting out those cases is tricky. You just don't see them on average because they are few (cf. quicksort: O(n^2) worst case, O(n log n) average).
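The quicksort example is easy to demonstrate: with a naive first-element pivot, already-sorted input is exactly that "small region" where the worst case lives. Counting comparisons makes the gap concrete (a sketch for illustration; production sorts pick pivots more carefully).

```python
import random

def quicksort(xs, counter):
    """Sort xs using the first element as pivot; counter[0] tallies comparisons."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)  # one comparison per remaining element
    left = quicksort([x for x in rest if x < pivot], counter)
    right = quicksort([x for x in rest if x >= pivot], counter)
    return left + [pivot] + right

n = 256
worst_input = list(range(n))       # sorted: every pivot is the minimum
shuffled = list(range(n))
random.shuffle(shuffled)

c_worst, c_avg = [0], [0]
quicksort(worst_input, c_worst)    # n*(n-1)/2 = 32640 comparisons
quicksort(shuffled, c_avg)         # roughly 2*n*ln(n), far fewer
```

On sorted input every partition is maximally lopsided, so the counts hit the full quadratic bound; on a random permutation they stay near n log n.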


Yes, here's a summary (somewhat dated) of this, given in a prize award for one of the key people who showed that worst-case is rare:

http://www.informs.org/Recognize-Excellence/Award-Recipients...

(expand out the "show more" button). Of course, there has been more progress since, but this summary is pretty good.


Well, a lot of them can approximate "the" answer to within an arbitrarily and provably small amount, so it comes down to definitions. For example, there is a polynomial-time approximation scheme for Euclidean TSP that produces a tour of length at most (1 + epsilon) * optimal.
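The exact-vs-approximate contrast is easy to see on a toy Euclidean TSP instance. The sketch below compares a brute-force exact tour against a cheap nearest-neighbor heuristic; it is not the (1 + epsilon) PTAS itself, just an illustration of trading optimality guarantees for speed.

```python
import itertools
import math

points = [(0, 0), (1, 0), (1, 1), (0, 1), (5, 0.5)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: try every tour starting at point 0 (feasible only for tiny n).
exact = min(tour_length((0,) + p)
            for p in itertools.permutations(range(1, len(points))))

# Heuristic: repeatedly visit the nearest unvisited point.
unvisited, order = set(range(1, len(points))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
    order.append(nxt)
    unvisited.remove(nxt)
greedy = tour_length(order)
```

The brute force explores (n-1)! tours; the heuristic is O(n^2) but only approximates the optimum (nearest-neighbor has no constant-factor guarantee in general, unlike the PTAS).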


That still doesn't answer the question: is this formula convex or not? It's a yes-or-no question, not a probabilistic one.

A probabilistic answer may be useful! But that's not the point of this proof.
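For what it's worth, the yes/no question is only hard for general polynomials (degree four and up). For quadratics it's decidable exactly: f(x) = x^T Q x is convex iff the symmetric matrix Q is positive semidefinite, which for 2x2 reduces to a closed-form eigenvalue check. A minimal sketch (the function name and tolerance are my own):

```python
import math

def is_convex_quadratic_2x2(q11, q12, q22):
    """Q = [[q11, q12], [q12, q22]] symmetric; f(x) = x^T Q x is convex
    iff both eigenvalues of Q are >= 0."""
    tr = q11 + q22                      # sum of eigenvalues
    det = q11 * q22 - q12 * q12         # product of eigenvalues
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    smallest = (tr - disc) / 2          # smaller eigenvalue
    return smallest >= -1e-12           # tolerance for float rounding

# x^2 + y^2 is convex; x^2 - y^2 (a saddle) is not
assert is_convex_quadratic_2x2(1, 0, 1)
assert not is_convex_quadratic_2x2(1, 0, -1)
```

So the NP-hardness result is precisely about where this kind of exact, deterministic test stops scaling, not about quadratics like these.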


That's nuttery. Borderline hero-worship. There are loads of epsilon-accurate proofs in math. Similarly, there are loads of imperfect-but-more-efficient algorithms in CS.


It's great that those proofs exist - but this isn't one of them.


Let me take a crack at it:

"... is what's called NP-hard. That means that finding the answer is at least as hard as cracking a very strong cryptographic key, something that is currently considered too computationally expensive to be practical."

Would that be a better explanation, understandable by the article's target audience?

