The Shannon-Nyquist sampling theorem only specifies the sample rate needed to perfectly reconstruct a signal of a given bandwidth.
It says nothing about noise, distortion, or dynamic range. In those areas it is impossible to create a "perfect" DAC, although granted the best DACs are indistinguishable from perfect as far as human perception is concerned.
I'm unfamiliar with the mathematics involved, but this is what Wikipedia says the theorem states:
> If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart
I took this to mean that it applies to any continuous function x(t), amplitude information included. I took a quick read through the proof, and that's correct as far as I can tell.
Does that not mean that "noise, distortion, and dynamic range", since they are all encoded in the continuous function that is air pressure over time, can be perfectly captured and reproduced? All you have to do is throw out the information outside the human hearing range, so that the signal contains no frequencies higher than B hertz, and that is enough for perfect reproduction.
If there exists a transformation f(x(t)), then said transformation can also be captured from the same samples, can it not?
Yes, a perfect recording of the amplitude of a function, sampled at a rate of at least 2B, contains enough information to perfectly reconstruct that function.
Reaching that sample rate is not that hard, but getting a perfect amplitude measurement is. This is where "noise, distortion and dynamic range" come in: they act as disturbances to the amplitude.
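A minimal numpy sketch of both halves of that claim (the test signal, sample rate, and bit depth are all invented for illustration): Whittaker-Shannon interpolation recovers a band-limited signal essentially exactly from ideal samples, while quantizing the same samples to 8 bits leaves an error floor set by the amplitude resolution.

```python
import numpy as np

# Toy setup (all values assumed): a signal band-limited below B, sampled at 2B.
B = 4.0                       # bandwidth in Hz
fs = 2 * B                    # Nyquist rate
T = 1 / fs

def x(t):
    # No frequency content above B, as the theorem requires.
    return 0.7 * np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t)

n = np.arange(-2000, 2001)    # a long but finite window of samples
samples = x(n * T)

def reconstruct(t, s):
    # Whittaker-Shannon interpolation: x(t) = sum_n x[nT] * sinc((t - nT) / T),
    # using the global sample grid n.
    return np.sum(s * np.sinc((t[:, None] - n * T) / T), axis=1)

t = np.linspace(-1.0, 1.0, 1000)
err_ideal = np.max(np.abs(reconstruct(t, samples) - x(t)))

# A real ADC cannot record the amplitude perfectly; model that as 8-bit rounding.
quantized = np.round(samples * 127) / 127
err_quant = np.max(np.abs(reconstruct(t, quantized) - x(t)))

print(f"ideal samples:    max error {err_ideal:.1e}")  # tiny; only truncating the infinite sum limits it
print(f"8-bit quantized:  max error {err_quant:.1e}")  # roughly the quantization step
```

The first number keeps shrinking as the sample window grows; the second does not, no matter how many samples you take, which is exactly the dynamic-range limit being described.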
Yes, now we just need mathematically pure materials...
Like..
Wires without resistance, capacitance and inductance.
Resistors without capacitance and inductance.
Capacitors without resistance and inductance.
Inductors without capacitance or resistance.
While we're at it, semiconductors with perfect linearity and so on and so on..
I'm not going to argue that audiophiles generally achieve much of this, or even that it's especially important for perceived audio quality. But perfectly recording and reproducing anything is still not possible, not in audio, not in video.
I agree. I was not arguing against digital storage, only against the sentiment that "just go digital and all problems are solved". Well, the storage, transport, and editing are solved. The analog problems are not solved, nor solvable, only optimizable to a degree where further optimization becomes irrelevant; microphones, ADCs, DACs, amplifiers, and speakers still have analog components that are inherently imperfect.
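To put a rough number on that kind of imperfection, here's a toy calculation (every component value is invented): even a bare interconnect forms an unintended RC low-pass out of the source impedance and the cable capacitance, so the response is never exactly flat, just flat enough.

```python
import numpy as np

# Assumed values: a 600-ohm source driving 5 m of cable at ~100 pF per metre.
R = 600.0               # source output impedance, ohms
C = 5 * 100e-12         # total cable capacitance, farads

fc = 1 / (2 * np.pi * R * C)   # -3 dB corner of the unintended RC low-pass

for f in (1e3, 20e3, 100e3):
    # First-order low-pass magnitude: |H(f)| = 1 / sqrt(1 + (f / fc)^2)
    mag = 1 / np.sqrt(1 + (f / fc) ** 2)
    print(f"{f/1e3:6.0f} kHz: {20 * np.log10(mag):8.4f} dB")
```

With these made-up numbers the loss at 20 kHz is a few thousandths of a dB, far below audibility, which is the point: not zero, but optimized past relevance.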
415 volts on a 30A breaker, like you might find in an undergraduate electrical engineering lab, is very lethal. I'd hazard a guess that's what the OP is talking about.
IBM had 5 GHz in 2014. [1] Clock speed alone is not a measure of performance.
Besides, most of the work in reaching a certain clock speed or performance target can be credited to the foundry (in this case TSMC, which is world-leading, certainly beating Intel on most metrics at the moment).
Comparing against the over-tuned enthusiast SKU of 2018 isn't fair to either.
Also, 600 W is not impossible to cool; there have been GPUs at that power level for a while now.
The advantage comes from chopping your pipeline stages in half so that each is 10 FO4s long rather than the 16 FO4s most people use. You've generally got 2 FO4s of latching and 2 of clock skew, so IBM was seeing 6 FO4s of useful work per stage compared to 12 with Intel. Or at least the overhead was 4 FO4s per stage in the mid-2000s; I've got no idea what it is in the early 2020s.
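Making that arithmetic explicit, using the 4-FO4 overhead figure quoted above (a sketch, not a statement about any particular shipped core):

```python
# Per-stage budget: 2 FO4s of latching + 2 of clock skew = 4 FO4s of overhead
# in every stage, no matter how deep the pipeline.
OVERHEAD = 4

for name, stage_fo4 in (("deep (IBM-style)", 10), ("typical", 16)):
    useful = stage_fo4 - OVERHEAD
    print(f"{name:16s}: {stage_fo4} FO4 stage -> {useful} FO4 useful "
          f"({useful / stage_fo4:.0%} of the cycle)")

# Halving the useful logic per stage (12 -> 6 FO4s) shrinks the stage from
# 16 to 10 FO4s, so the clock runs 16/10 = 1.6x faster, but each stage now
# does half the work: the pipeline needs about twice as many stages.
print(f"clock speedup: {16 / 10:.1f}x, work per cycle: {6 / 12:.1f}x")
```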
And, if you have enough threads per core, it's relatively simple to switch to another thread when an instruction stalls. Unfortunately, most of our software is designed for machines with a few fast cores.
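As a toy illustration of why more threads hide stalls (the stall probability and latency are arbitrary numbers): a core that issues from the first non-stalled thread each cycle keeps its issue slot busier as threads are added.

```python
import random

def utilization(n_threads, stall_prob=0.3, stall_len=10, cycles=100_000, seed=0):
    """Fraction of cycles the core issues an instruction, switching to the
    first ready thread whenever the current one stalls (e.g. on a cache miss)."""
    rng = random.Random(seed)
    ready_at = [0] * n_threads          # cycle at which each thread may issue again
    issued = 0
    for cycle in range(cycles):
        for t in range(n_threads):      # pick any thread that is not stalled
            if ready_at[t] <= cycle:
                issued += 1
                if rng.random() < stall_prob:
                    ready_at[t] = cycle + stall_len   # thread sits out the stall
                break
    return issued / cycles

for threads in (1, 2, 4, 8):
    print(f"{threads} thread(s): {utilization(threads):.0%} of cycles issue work")
```

With one thread the core idles through most stall cycles; with enough threads there is almost always someone ready to run, which is exactly the trade being described.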
Windows desktop. Fairly high end. Gaming, some side project development work.
Ubuntu laptop - XPS 9560 (don't buy one; they're crippled by thermals and poor sleep support, even in Windows). Mostly used for web browsing.