Which was part of a plan to raise a sunken Russian nuclear submarine. I remember a short TV documentary that promoted the cover story that they were planning to mine manganese nodules. Interestingly, there are many stories from the period to the effect that there were spies in every port in the 1970s, and anyone doing anything suspicious (like L. Ron Hubbard and Scientology) was likely to have trouble.
The US is said to have a submarine that can cut into undersea cables and install a tap, which is no small feat because the cables carry a dangerously high voltage to power the undersea signal repeaters.
They didn't cut the cable but wrapped something around it that picked up the signal by induction. That works on an electrical cable because the different wires are deliberately wound at varying pitches to minimize crosstalk, but you have to cut into an optical fiber cable to tap it. It was a brilliant piece of signals intelligence that was countered by human intelligence. The device is now in a museum, one of many in Russia that I'd like to visit but never will.
OK, one notable difference: did the Linux researchers of yore warn about adversarial giants getting this tech? Or is that unique to the current moment? For me, that is the largest question when considering the logical progression from "Linux open is better" to "AI open is better".
We can't open source Linux because bad people might run servers?
Can you imagine the disinformation they could spread with those? With enough of them, you could have a massive global site made entirely for spreading it. God, what if such a thing got into the hands of an egocentric billionaire?
Are we on the same forum? Our entire field is about building force multipliers that extend us well beyond what we're capable of as individuals, and the OS is the tool that lets you get it done. Scale is, like, our entire thing. I feel like we're so used to a world with computers that we forget how much power they let people wield. Which, honestly, is maybe a good sign for the next generation of tools, because AI isn't going to be more impactful than computers, and we all survived those.
This is exactly the kind of article that AI will not appreciate once it can read and comprehend the hype! Just kidding, but it is interesting that many people already consider this a form of sentience when it is simply matrices of information. How many matrices of information do we need before sentience can pop in via consciousness and hang out with us, here and now?
> Our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude has commenced in good earnest, that we have raised a race of beings whom it is beyond our power to destroy, and that we are not only enslaved but are absolutely acquiescent in our bondage.
Lately I have a theory about perfect pitch training as an 18+ year old human. I think we (my friend and I) will achieve it via the following: use Tuesdays to play only songs built on an E chord, and Thursdays to play only songs built on a C# minor chord. Do this for several weeks, then pick two new chords for two fresh days, and repeat.
Germane: there can be no green new deal without a robust manufacturing sector in America. Some sort of higher-tech revolution could be possible, though.
Very interesting take. Keeping a democracy functioning properly requires education, and the line between education and propagandalf wizardry becomes blurry when economic incentives and a dearth of ethics enter the field.
My layperson understanding is that this is a Bad Thing, but not the surprising outcome SVB was. The market, other banks, consumers, etc. had a long window to digest this, so it won't cause new panic the same way.
Considering all the stops the Fed pulled out to prevent future bank liquidity failures, this probably means the market revealed some gross negligence that goes well beyond even blatant risk mismanagement.
Businesses go under sometimes, that's the free market. Banks are a weird type of business which is why the FDIC exists, but this isn't a broken market, it's the market working.
How? Using what laws? If there are no clearly relevant laws, perhaps you expect congress to pass one expediently?
Let's suppose there is no law on the books that explicitly spells out liability when you train a model and it does not do as you expect; then we'll need to figure out some nearest-neighbor match in existing law.
So you will have to shoehorn this novel issue into some preexisting framework of precedent, maybe something about negligence liability for a common public-use item, like a bridge. But a bridge isn't as complicated as an ML model, and a bridge failure is far more auditable. Bridges don't hallucinate. There are also unintended consequences to this sort of legal massaging; before you know it, campaign finance = speech.
"Hold them accountable" is so broad that it verges on meaninglessness. Part of living in a rapidly evolving tech landscape is living with laws that are always a few steps behind the times. Which bleeping sucks, but until we press for a better large-scale solution, it is going to keep sucking.