
No, if I understood right, this thing is a quantum computer solving NP-complete problems in linear time. And the article claims to have a simulation doing the same, but they are not sure whether this is actually only a special case scenario.


It's not a "quantum computer". It's an analog computer instead of a binary computer. It can be simulated with a normal (binary) computer using floating point numbers (remember that the amoeba has a finite precision).

(From time to time there are articles that try to explain why analog computers are better than binary computers. They usually assume infinite precision, which is impossible in a real system.)

The article claims that the time is linear, but the device uses quadratic space. So if you try to simulate the algorithm for a big instance on a sequential computer, you will probably get cubic run time (each of the linearly many steps has to update the quadratically large state).

A better way to understand this is that the amoeba uses a good heuristic to solve the TSP for small instances. There are many heuristics out there, so it would be nice to implement this one and compare it with all the others on some kind of standardized benchmark set. It's much easier to find a good solution in a small system.
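
For comparison, the kind of baseline it would be up against is the classic nearest-neighbour construction. A minimal sketch in C (city coordinates are made up for illustration):

    #include <math.h>
    #include <stdio.h>

    #define N 5

    /* Nearest-neighbour heuristic: start at city 0, always hop to the
       closest unvisited city. O(N^2) time, feasible but not optimal. */

    static const double xs[N] = {0, 3, 6, 7, 1};
    static const double ys[N] = {0, 4, 1, 5, 6};

    static double dist(int a, int b) {
        return hypot(xs[a] - xs[b], ys[a] - ys[b]);
    }

    int main(void) {
        int visited[N] = {0}, tour[N], cur = 0;
        double len = 0.0;
        tour[0] = cur;
        visited[cur] = 1;
        for (int step = 1; step < N; step++) {
            int best = -1;
            for (int j = 0; j < N; j++)
                if (!visited[j] && (best < 0 || dist(cur, j) < dist(cur, best)))
                    best = j;
            len += dist(cur, best);
            visited[best] = 1;
            tour[step] = cur = best;
        }
        len += dist(cur, tour[0]);      /* close the loop */
        printf("tour:");
        for (int i = 0; i < N; i++) printf(" %d", tour[i]);
        printf("\nlength: %.2f\n", len);
        return 0;
    }

On random planar instances this kind of construction typically lands noticeably above the optimum, which is exactly why a standardized comparison would be informative.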

(Under the hood, the amoeba is a quantum system. But if you consider this a quantum computer then your phone is also a quantum computer.)


Yeah, I guess in order to turn this into a simulation one would effectively add all the exponential time back, since that is what it takes to "run" an amoeba on a classical computer. In that article they mention the organism communicating across its body, i.e. reacting to stimuli in all parts in a correlated fashion; that is the bit referring to a quantum system. Of course all physical systems are quantum systems (if I actually had a phone, that one too, although its operating system does not make use of that nature). So there is no question of whether we can simulate the algorithm: it cannot be done without calculating all the parts, and then it is no longer linear time.


> In that article they mention the organism communicating across its body, i.e. reacting to stimuli in all parts in a correlated fashion, that is the bit referring to a quantum system.

The internal communication inside the cell is classical, not quantum (classical, like your phone). I guess it uses some kind of internal hormone, but it may be some signal that propagates along the cell membrane. IANAB.

You can't have good quantum correlation in something as big as a cell (unless you freeze it to ridiculously low temperatures that are not possible for now and would kill the cell anyway, or you have a more ordered system like an extremely pure crystal, or you only need the correlation for a ridiculously small amount of time).

You can have big entangled systems, but they look more like pairs of very clear optical fibers, not like a cell full of water and crap moving around randomly inside.

There are also some interesting "big" quantum effects in molecules like chlorophyll. (I'm not sure they are 100% confirmed yet.) But a chlorophyll molecule is much, much smaller than the cell, and the effect is very short-lived.


It found a solution. Not an optimal solution.


Similar to the way my NN (brain) can solve the TS problem by just guessing the best route. Sure that's <i>a</i> solution. edit:typo


On Hacker News you can use asterisks to produce italics, like in Markdown.


Right, it is not guaranteed to find _the_ best solution possible, but at least it does find _some_ solution to NP-complete problems in linear time, which no classical algorithm is expected to be able to do. Now they claim in that article to have made a simulation of this running on a classical computer with the same characteristics, which frankly should not be possible. So where is the catch?


You can always find a solution to the TSP. Every city is connected to every other city, so you can simply visit the cities in alphabetical order (or in the order they appear in the list). It's (usually) a very bad solution that is much longer than the best one.
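
A minimal sketch of that trivial construction (made-up coordinates and names):

    #include <math.h>
    #include <stdio.h>

    /* Visiting the cities in input order is always a feasible tour:
       O(n) to construct, with no guarantee on quality. */
    static double tour_length(const double *xs, const double *ys, int n) {
        double len = 0.0;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;       /* wrap back to the start */
            len += hypot(xs[i] - xs[j], ys[i] - ys[j]);
        }
        return len;
    }

    int main(void) {
        double xs[] = {0, 3, 6, 7, 1}, ys[] = {0, 4, 1, 5, 6};
        printf("list-order tour: %.2f\n", tour_length(xs, ys, 5));
        return 0;
    }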

There are good heuristics, especially if the distances are not chosen at random but are distances between points in the plane. So in some cases it's possible to find quite good solutions.

The problem is NP-complete only if you ask for the best solution.


You can't say anything about the runtime of the amoeba, because it only solves a finite number of instances and you have no accurate model that would allow you to say anything about the asymptotic behavior.


It's not Intel's issue; it's a design flaw per se, affecting _all_ CPUs whose speculative branch execution has side effects on the processor cache, which is pretty much every processor produced in this millennium.
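
For illustration, Spectre variant 1 ("bounds check bypass") hinges on an innocent-looking pattern like the following; a minimal sketch, with made-up names and sizes:

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 512];   /* probe array: one cache line per byte value */

    uint8_t victim(size_t x) {
        if (x < array1_size) {   /* architectural bounds check */
            /* This load may still execute speculatively when the predictor
               guesses "taken" for an out-of-bounds x, leaving a cache
               footprint that depends on the secret byte array1[x]. */
            return array2[array1[x] * 512];
        }
        return 0;
    }

    int main(void) { return victim(0); }

Train the branch predictor with in-bounds calls, then pass an out-of-bounds x, and the speculative load leaves a cache footprint depending on a byte the code was never architecturally allowed to read.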

That said, there _might_ be a solution to this problem such that speculative branch execution does not have to be removed completely from future architectures; that is something we don't really want to lose, even if losing it would increase safety. In the meantime it makes sense to be able to disable it, just not by default. The only implication is that older systems must be patched, which is every admin's responsibility.


> It's not Intel's issue; it's a design flaw per se, affecting _all_ CPUs whose speculative branch execution has side effects on the processor cache, which is pretty much every processor produced in this millennium.

Just because other CPUs have this flaw doesn't mean this isn't Intel's issue. Regardless of the state of other CPU manufacturers, Intel is producing buggy CPUs.


> which is pretty much every processor produced in this millennium.

Is there a simple table of every mainstream purchasable CPU out there and whether it was affected?


Does that mean we can cover the deserts with these plates that run purely on solar energy and release energy as infrared radiation to combat Global Heating?


Ah, sorry, duplicate question (see below). Well, even if it does not do much, it's a start, and it shouldn't require putting in extra energy from power plants.


Nobody seems to mention cars; they run on burning fossil fuels, and they are a large factor due to the sheer number of cars in use (ever increasing worldwide). Forcing the industry to replace the combustion engine with something that does not consume fossil fuels to create energy (directly, or indirectly, e.g. via electrical power generated by burning coal) might contribute a lot to a better atmosphere (at least in the cities), and could even be good for the economy (as opposed to being a cost). Renewable energy production without burning fossil fuels should be invested in. The only candidate that can cover the consumption for now is nuclear (solar and wind power require too much space and look bad, so there you go with your efforts).


Apart from every article you see on Tesla Motors and Elon Musk...


Yes, but there is the major problem that charging electric car batteries today still requires burning fossil fuels to create the energy in the first place.

Besides, that's controversial: Musk has got both electric cars and space rockets.

Germany, AFAIK, wants predominantly electric cars on its roads within the next decade. Forcing a ban on combustion cars might create an incentive to produce renewable energy due to the demand, so companies can invest in that. Of course, banning combustion cars should go hand in hand with banning coal power plants (by regulating CO2 limits via law/taxes, so that emitting CO2 becomes expensive). Then the demand has to be covered by solar + wind and mainly nuclear energy (the latter not being very popular and the former too space-hungry), a way for clever businesses to come up with solutions (one of them might even be to develop ways to convert the CO2 produced into solids).


Why only 4, 8 or 12 rotors? Can a group of much smaller rotors replace the thrust a single bigger one produces? Would add a lot more redundancy and limit the costs.


Yes. You can sum the rotor areas so they add up to the same as one big rotor.
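
Back-of-the-envelope (assuming hover thrust scales with swept disk area at fixed disk loading, and ignoring interference and Reynolds-number losses): n small rotors of radius r match one big rotor of radius R when the areas are equal,

    n \cdot \pi r^2 = \pi R^2 \quad\Longrightarrow\quad r = \frac{R}{\sqrt{n}},

so e.g. 16 small rotors each need only a quarter of the big rotor's radius.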


I wonder if this was actually a sophisticated trade to profit from shaking up the market: claim a substantial hack to make the market panic, then buy back the "lost" coins at far lower prices. If you control assets of this size you can easily corner the market and profit from the emotional reactions. The lost coins can either not be lost at all, or have been sold off well before in small chunks, netting a profit once it's all over.


Given that you can run the same program on a billion bacteria, the odds of getting the job done should improve considerably.
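
Rough numbers, with a hypothetical per-cell success probability p and cells acting independently: the chance that at least one of N bacteria succeeds is

    P = 1 - (1 - p)^N \approx 1 - e^{-pN},

so even p = 10^{-6} with N = 10^9 cells gives P \approx 1 - e^{-1000}, i.e. essentially certainty.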

Of course, one also has to account for possible other "jobs" that get done due to interference.

However, if your goal is to automate processes rather than to develop cures to "run" in the human body, then this is a very interesting alternative to silicon; the parallel-pipelining potential is enormous.

EDIT: Would it be possible to develop a biological CPU this way? I.e. having "instruction sensors" and a Turing-machine-like DNA robot that can execute externally supplied instructions? Putting that into a bacterium that can clone itself would surely cut down on the cost of computing.


> Would it be possible to develop a biological CPU this way? I.e. having "instruction sensors" and a Turing-machine-like DNA robot that can execute externally supplied instructions?

No, it is not possible (not this way). TL;DR: how do you plan on storing information on the Turing machine's tape? If you're happy doing computation with a relatively high stochastic failure rate things look better, but I wouldn't count on it.


Two remarks:

- There are a lot of exchanges where trading doesn't involve direct transactions on the blockchain (buying and selling BTC there is not limited in quantity or time).

- There are many altcoins claiming to do certain things better, such as proof of work or block size etc.; however, the most convincing argument for using BTC seems to be its current market capitalization. As such, it is understandable that Core wants to introduce as little disruption as possible, i.e. to resist such improvements altogether.

Can anybody please explain why increasing block size would mean more centralization?


Great idea. Google should pick it up to colorize black-and-white movies and historical footage. It could easily be extended by feeding it desaturated movies as training data, and it could maybe even remaster the quality.


Instead of putting them into nature we should put them into our food, to digest all that plastic that will be in our food chain soon.

