
Let's all realize that all of these research branches have been playing around with 10nm and 7nm chips for years now - the fact that IBM cobbled together a working chip isn't surprising. Getting it to production is the vastly more important part.

This press release is equivalent to "Scientists Cure Diabetes in Mice" - a breakthrough that happens about a half dozen times a year but has yet to make it from the lab to the FDA.

The timing of this press release is entirely to boost investor confidence in IBM and GlobalFoundries given Intel's recent announcement of delays at the 10nm process node.

edit:

The Ars article is vastly better than the above link: http://arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industr...



>This press release is equivalent to "Scientists Cure Diabetes in Mice" - a breakthrough that happens about a half dozen times a year but has yet to make it from the lab to the FDA.

Chip manufacturers like Intel and IBM have regularly made good on promises of exponential progress for at least a half century. Comparing them to press release-pushing biomedical researchers is tantamount to a slur.


Nitpick:

> Comparing them [chip manufacturers] to press release-pushing biomedical researchers is tantamount to a slur.

No, it isn't. Slower progress in biomedical research isn't a result of biomedical researchers exhibiting any of the qualities whose unwarranted attribution normally constitutes a slur. It is the result of much greater complexity, lower predictability, higher safety requirements, and weaker human understanding of biological systems compared to semiconductors.


The point is that most semiconductor predictions come true, whereas biomed predictions are much less reliable. Unfortunately, that is a reflection on the latter's practitioners as they are aware of their poor odds yet still publish.


I think this is unfair. Scientists often have a narrow-scope breakthrough in an extremely technical area, and when they're asked to dumb it down for a wider audience, the tech press / university PR team runs wild. Something like curing diabetes is going to take hundreds or thousands of small incremental improvements and breakthroughs, so when they say "Could lead to a cure!!" they're usually correct, but the nuance is often missed.


There is no excuse for publishing anything that does not stand up to replication and a sufficiently high chance that the published prediction will be realised. Hence the OP is correct in pointing out the unfairness of equating comparatively reliable semiconductor process improvement predictions with the relative dartboard that is biotech.

If third parties ("PR") hijack the truth, it is up to the researcher publicly to denounce them.

If, as I suspect, such denunciation is bad for a researcher's funding, then we have a problem in research, if indeed, in such circumstances, it can even be called research (as opposed to, say, "marketing").

Clearly biotech is a younger field than semiconductors, and it should be given a wide berth to make mistakes without prejudice, but that does not exonerate it from explicitly communicating the expected uncertainty of its results.


The issue is less the progress of biomedical researchers and more the discrepancy between the headlines and the actual results.

Most of the blame lies with the scientific press, but the researchers don't seem to mind it all that much either. Misleading or overly optimistic press releases written by university personnel are also the source of much of it.


> Misleading or overly optimistic press releases written by university personnel are also the source of much of it.

Whatever it takes to get those sweet, sweet grant dollars


Color me astonished that a brand new account named POWERfan is vigorously defending IBM in an online discussion forum.


The Ars article you refer to appears to be fairly bullish on IBM's process technology, and in particular the extreme ultraviolet lithography, which has been problematic elsewhere. IBM has a deep history of fundamental research turning into real products: just look at magneto-resistive drive heads as one example. I am much less sceptical than you that this company, which has proven over many decades that it can drive fundamental technology forwards, is only doing this to bamboozle the competition / investors.

Let's not forget that the chip in the Z series mainframes is the fastest commercial piece of silicon ever produced, and the high end Power8 chips handily outrun top-of-the-range 18-core Xeons on a number of benchmarks (though at worse power envelopes). (http://www.anandtech.com/show/9193/the-xeon-e78800-v3-review...).


I worked at IBM Research for a bit (TJ Watson center, Yorktown), when Gerstner was the CEO. They had PhD chemists, physicists, and mathematicians working on all sorts of chip-related things. They had a mini fab in the building. I remember them testing building vibration levels.

Turning technology into something that could be manufactured and sold was definitely on the mind of research. IBM was spending $6 billion a year on research and they were looking for more results out of it.

They knew that a discovery/invention was good, but one that could be brought to market was better. They licensed a lot of their tech to the chip machine manufacturers, if I remember correctly. Plus, back in those days IBM had chip-making facilities.


Another issue: currently the biggest bottleneck, cost-wise, is in lithography, the process we use to draw the transistors onto chips. Because of this, the two latest generations of chips are more expensive (per transistor) than older ones, stalling Moore's law.

And Moore's law probably won't return to life until we learn how to solve that problem, which the current work doesn't help with.
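
To make the mechanism concrete, here's a toy model (all numbers invented for illustration, not real foundry data): if multi-patterning multiplies the number of litho exposures, wafer cost can rise faster than transistor density, so cost per transistor stops falling.

    # Toy sketch; litho_share and the gains below are assumptions, not real data.
    def cost_per_transistor(density_gain, extra_litho_passes,
                            litho_share=0.5, base_wafer_cost=1.0):
        # extra exposures inflate the lithography share of the wafer cost
        wafer_cost = base_wafer_cost * (1 + litho_share * extra_litho_passes)
        return wafer_cost / density_gain

    # full shrink, single patterning: cost per transistor falls (0.50)
    print(cost_per_transistor(density_gain=2.0, extra_litho_passes=0))
    # partial shrink, multi-patterning: cost per transistor rises (1.25)
    print(cost_per_transistor(density_gain=1.6, extra_litho_passes=2))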


This is where 450mm wafers and EUV (extreme ultraviolet lithography) were supposed to come in. EUV relieves the need for double patterning and the tremendous additional costs that entails (and was used to manufacture this 7nm chip).

The CEO of Applied Materials, Gary Dickerson, has stated that the 450mm wafer timeline “has definitely been pushed out from a timing standpoint.” That’s incredibly important, because the economics of 450mm wafers were tied directly to the economics of another struggling technology — EUV. EUV is the follow-up to 193nm lithography that’s used for etching wafers, but it’s a technology that’s spent over a decade mired in technological problems and major ramp-up concerns.

Toasting to the death of Moore's Law: https://www.youtube.com/watch?v=IBrEx-FINEI


And for comparison, scale fans, let's remember that we're talking about making 7nm features on wafers that are nearly a foot and a half wide, using near-as-dammit x-ray wavelengths.

A few teething problems would be expected.
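
For a rough sense of that scale (my own back-of-envelope, using the ~450mm wafer figure above and the 13.5nm EUV wavelength):

    feature = 7e-9     # m, feature size
    wafer   = 0.45     # m, roughly the "foot and a half" wafer
    euv     = 13.5e-9  # m, EUV exposure wavelength

    print(wafer / feature)  # ~64 million feature-widths across one wafer
    print(feature / euv)    # the feature is about half the wavelength used
    # Scaled up so a feature were 1 mm wide, the wafer would be ~64 km across.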

7nm has been struggling for a while, 5nm is likely to be late, and I don't think anyone really knows what happens after that.

Longer term, industrial manufacture is probably going to have to move to something exotic like nano-assembly of individual atoms, with some extra finagling to work around tunnelling effects. (Easier said than done...)


... and why would we invest the money to do that when there is not enough (software-driven) demand for that performance?

The average person uses PCs and mobile devices to browse the web, write documents, order an Uber, and maybe play games. Nothing much is being done on the software front that challenges current systems. Maybe if VR took off or we got home applications for AI like domestic robotics that would change. I could see a domestic robot capable of folding clothes, cleaning up, etc. needing a low power chip that can do what a dual-12-core Xeon can do on smart phone power and thermal profiles. <5nm might be needed to accomplish that.

I'm not sure server and high-end compute demand is sufficient to pay for the R&D that would be required to go far beyond 7nm.

But the good news is that we haven't even scratched the surface of what current systems could theoretically accomplish. Look into the demo scene and prepare to be blown away by what 8-bit CPUs in the 1980s could accomplish with non-crap code running on them. Maybe we need a software Moore's Law to take over for the hardware one -- right now software has more of an erooM's Law.

One thing is clear: if you do software, ball's in your court either way. Either you need to invent killer apps to keep demand high for high performance computers -- things that really need that much power -- or you need to take over for the hardware people and start finding new efficiencies.

Ball's really been in software's court for a while anyway with multi-core... linear performance maxed out (for consumer chips) a while ago.


I agree that most of the demand requires software innovation, but there are other good sources of demand, for example:

1. AI - variety of applications, both for consumer and business markets.

2. Telepresence. If we can get the real feeling of "being there" to telepresence, at a price point that's attractive for the consumer.

3. Simulation. Currently it's a complex process, mostly done by experts. If it can become a tool for regular engineers, and maybe further down the road be combined with some sort of genetic algorithms, there's potential for a huge demand increase.


GPUs are basically as big as possible, and still can't really render at 4K. Now if I want to have a ray-traced Game Engine, fuggetaboutit.


> and EUV (extreme ultraviolet lithography) were supposed to come in

I wonder if they could use electrons instead of light to etch the surface.


Electron-beam lithography is a technique that works, but because it's very slow, it's also very expensive. There are efforts to parallelize e-beam writing, but they face hard challenges. For example, if you want to pattern faster, you just need to shoot more electrons. But if you shoot too many electrons, they repel each other and the pattern blurs. It's a difficult problem to overcome, but people are working on it.
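
A back-of-envelope (assumed, ballpark numbers; real doses and currents vary a lot by resist and tool) shows why a single beam can't keep up:

    dose      = 100e-6   # C/cm^2, assumed resist sensitivity (~100 uC/cm^2)
    current   = 10e-9    # A, assumed single-beam current (~10 nA)
    radius_cm = 15.0     # 300 mm wafer
    area      = 3.14159 * radius_cm**2   # ~707 cm^2

    # assume (pessimistically) the whole wafer area has to be exposed
    seconds = dose * area / current
    print(seconds / 86400)   # on the order of 80 days per wafer

Hence the push for massively parallel beams - and the space-charge blurring problem mentioned above.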


I think we've been wasting far too much processing power in inefficient software for the past few decades, and it's only Moore's Law that let it happen for so long. Now that it's coming to an end, maybe we'll see more emphasis on efficiently optimised software and mindful resource usage.


This great article supports that. http://spectrum.ieee.org/semiconductors/design/the-death-of-...

It has a great graph of engineering effort vs Moore's law, which made it cheaper to just wait for a faster chip than put in the effort.


I think you're incorrect, and remembering performance that never existed based on shortcomings you glossed over at the time, because you have that all-too-human bias of believing that things were better when you were younger.


There is so much low-hanging fruit in software design that is simply there because of legacy design trade-offs and the cost associated with replacing them. We could gain huge performance improvements overnight if we, for example, eliminated reliance on hardware memory protection and context switching from kernel space to user space by using languages that can prove memory safety in software.

Then imagine how much performance you could get out of OS-level VMs that understand processes at the VM level (i.e. can access code in some IR that they can analyze easily, recompile it on the fly, etc.). There is already stuff like this in specialized markets (e.g. kernel-level GC for the JVM), but it's still fairly specific.

Then there are all the shitty legacy abstraction layers in things like filesystems - ZFS is a perfect example of the kind of gains you can get for free if you just rethink the design decisions behind the current stack and see what still applies and what doesn't.

If the benefit of rewriting these systems ever overcomes the cost, we have huge potential areas for performance gains. Modern systems are very far from being performance-efficient; they are efficient with respect to various other factors (development cost, compatibility, etc.).


I wish Linux would just merge a ZFS implementation into the kernel already.

I also wish ZFS would grow an encryption layer (one that isn't based on Sunacle's implementation, since Sunacle doesn't want to share that one, so no one can use it).


I understand what you're saying, but do you really think most of a modern Android (to take the theme to its Javaesque extreme) stack is the most efficient way to accomplish processing?

Compare that to some of the code people ran through 6502-derivatives.

Abstraction may be more efficient in terms of programmer time, and performance efficiency may be high enough so as to be immaterial, but the two shouldn't be conflated.


> do you really think most of a modern Android (to take the theme to its Javaesque extreme) stack is the most efficient way to accomplish processing?

Reminds me of a version of this image[1] which has a discussion superimposed over it that says, "but if he had a big enough pile of ladders he could get over the wall!" and someone responds, "welcome to Android optimization." I think we see something similar with Javascript performance.

[1] http://i.imgur.com/AWG7LqR.jpg


Ha! That's the first non- https://interfacelift.com/ image I've put as my background for a while.

The older I get, the more I start seeing over-complexity in stacks as a security risk as well. I feel like there's a fundamental maximum to the number of levels of abstraction one can keep in one's head "enough" to avoid creating layer interaction bugs. Stack overflow, indeed. :)


We already see this comeback of C++ instead of trying to build everything in managed languages, like writing an OS in C#.


"Coming to an end"? The sky isn't falling yet. Just because they're having some trouble with one process doesn't mean the whole party is over.

There are other materials to make chips out of besides silicon, gallium arsenide and carbon for instance, each of which has different scaling properties.

There are also ways to make chips more dense by stacking wafers instead of trying to shrink features.


It can sometimes be comforting to know that the universe imposes fundamental limits on how efficiently computation can be done. Comforting because IIRC we've still got at least 15 orders of magnitude of improvement available. But while I'm sure you're right that we're going to be able to switch to different materials (or maybe away from transistors entirely) when progress in silicon runs out, we might have to expect an interregnum while other computational substrates are developed to the point where they can provide higher performance.

Stacking is certainly a thing and it's good for memory (see AMD's newest graphics card) but power dissipation provides limits in terms of how much high speed logic you can put under a given area.

http://www.anandtech.com/show/9266/amd-hbm-deep-dive


> This press release is equivalent to "Scientists Cure Diabetes in Mice" - a breakthrough that happens about a half dozen times a year but has yet to make it from the lab to the FDA.

Well, incremental lab improvements of this and that technique do make it into practice all the time. The failure of various lines of biological research is a symptom of some fundamental brokenness or inherent hardness in biological research (biological systems are inherently messy - the ability of biologists to work with uniform, mass-produced mice is actually a hindrance when they try to apply that research to non-uniform humans, etc.). None of this applies to chip manufacture. The increase in quantum effects as one goes down in size may be a barrier at 7nm, but it seems like it would be a barrier to working one-off chips as well as to final production.

Which is to say, the skepticism doesn't seem to have a basis. A working chip is an important and necessary step to getting to mass production - clearly mass production would be their aim.

Your supposedly better link agrees: "While it should be stressed that commercial 7nm chips remain at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it's a working sub-10nm chip (this is pretty significant in itself); it's the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography."



