>> Are light, radar, and sonar still not enough to detect obstacles that big?
> Radar is useless underwater, light would also be useless for spotting anything in time to react at speed, and sonar isn’t routinely used because it compromises stealth.
Any people with knowledge in the field care to comment on whether it's feasible for a light (em?)-type, active, near-range detection system to work?
It seems to me the basic problem is:
- enough energy to get a signal at a range sufficient to move around any detected obstacles
- high enough dissipation in water to minimize range at which detection by external sensors is possible
Water absorbs photons at most frequencies outside the visible spectrum. What we deem as visible light is actually the band where water is most transparent. Since eyes evolved underwater, it makes sense they would use this section of the spectrum.
Your two requirements contradict each other. You can choose one or the other, but not both.
The world's militaries have invested a great deal of time and money into researching underwater sensors. The solution is not going to be discovered by random people on the Internet.
> Any people with knowledge in the field care to comment on whether it's feasible for a light (em?)-type, active, near-range detection system to work?
I’m not in the field, but I follow it enough to say: in theory, yes, (blue-green lidar), and there is a lot of work on moving this from theory to practice.
But approximately everyone is working on it (for bathymetry, for detecting subs, and on similar blue-green laser tech for underwater comms), so it's likely that once it's deployed to the point where a sub could in principle use it as a “headlight”, it'll raise concerns similar to sonar: if it is used actively, passive sensors looking for active use will also be common.
The issue is that the cloudiness of water, even open water, varies greatly. So there would be either lots and lots of false positives, or the system would be too insensitive.
To clarify: I mean that, generally speaking, the presence of 'cloudy' water would presumably signify that the seafloor was close. In the open ocean, maybe a seamount is surrounded by what is essentially a cloud of particles.
Edit: I think your reply made the thread reach maximum depth, so I'll end it here... I should do more reading on the topic.
The problem seems to be with defining good tests for intelligence. FWIW, because GPT-4 answers have a detectable pattern, they should presumably fail the Turing test.
At some level, intelligence requires logic, rationality, and conceptualization, all of which are topics which have evaded clear definition despite millennia of philosophy directly addressing the issues.
When you say we need ‘good’ tests for intelligence, you mean ‘tests that humans can pass but machines can’t’.
You’re demanding this because you aren’t comfortable with the implication that a computer can pass our existing tests for intelligence, so you rationalize that with the comforting thought that those tests were not meant to identify intelligence. Tests like the SAT or the bar exam or AP English. Or tests for theory of mind or common sense or logic. Those tests aren’t testing for ‘intelligence’ - they can’t be. Because a computer passed them.
I suppose it's a bit of a no-true-Scotsman argument, but the Turing test is to see whether an observer can correctly guess whether the interlocutor is _human_, so by definition the test would be passed if the other correspondent were human.
To the point underneath, humans do not answer in as predictable a way as ChatGPT. Your answer, for example, I am confident does not come from ChatGPT.
Edit: if I've horribly mangled the Turing test definition, please let me know
I just imagined we could look at the oldest example of intelligence in human history. In contrast with AI, our chauvinism has us tend to pretend even the earliest monkey had it (fish? insects? etc.). If it can rub two sticks together, it gets the diploma.
The Turing test is easy: I had two chat bots talk about other users in the channel while ignoring, apart from some trigger words, what those other users had to say. The human subjects successfully got angry, which means it was important to them.
Excellent observation. In fact the language part of the brain is only a portion of the brain (albeit a rather large one, though not nearly as large as visual processing). And people who suffer brain damage which renders them unable to speak (or to understand speech, which interestingly is handled by a different, though nearby, region) are still able to demonstrate intelligent behavior with ease.
In fact it is damage to the prefrontal cortex (which has nothing to do with speech) which is most correlated with a detriment in intelligent behavior (suspiciously, also social behavior; food for thought in what we consider “intelligence”). Victims of lobotomy had their prefrontal cortex destroyed, and their injuries resulted in them losing their personalities and losing basic function as human beings, even though they were still able (but perhaps not always willing) to speak and comprehend speech.
I don’t think you have an ‘arithmetic’ part of your brain.
What you have that LLMs lack is a visual part of your brain - one which can instantly count quantities of objects up to about 7. That gives you tools that can be trained to do basic arithmetic operations. Although you have to be taught how to use that natural capability in your brain to solve arithmetic problems.
And of course for more complex things than simple arithmetic, you fall back on verbalized reasoning and association of facts (like multiplication tables) - which an LLM is capable of doing too.
Poor GPT though has only a one dimensional perceptual space - tokens and their embedding from start to end of its attention window - although who’s to say it doesn’t have some sense for ‘quantity’ of repeated patterns in that space too?
It statistically outputs the next most probable piece of text. There is a lot of math-related text in CommonCrawl (likely the biggest source of its training data). That is all the company that owns it claims that it does. There is no calculation, therefore no basic arithmetic being done correctly.
Emergent behavior does happen with neural networks. It is correct to say that GPT-4 doesn't do math correctly, but it may be incorrect to say a large language model can't in principle.
Yeah that's not true lol. Its arithmetic is not perfect (mostly multiplication; addition is fine), but there's nothing random about even the wrong/non-exact numbers.
how am i supposed to have a conversation with someone who is gassing up "not perfect [arithmetic]" (something a wristwatch from the '80s can do) and won't even believe what the creators of said machine say about how it works
You probably used GPT-3.5. That said, I didn't manage to get GPT-4 to calculate 36,330 * 26,951 correctly. I suggested casting out 9s, casting out 11s, doing long multiplication, reversing the digits - nothing.
I have a theory that it does arithmetic badly because the logic goes right-to-left, when LLMs write left-to-right. If the digits were to be reversed, it might not make as many mistakes. I ran out of attempts before I could test this properly.
> Human: what is 36,330 26,951 * (input is expressed in reverse post fix notation)
> chatGPT: To evaluate this expression using reverse Polish notation, we need to use a stack to keep track of the intermediate results. Here's how we can evaluate the expression:
> 1. Read the first number "36,330" and push it onto the stack.
> 2. Read the second number "26,951" and push it onto the stack.
> 3. Read the operator "*", pop the top two numbers (26,951 and 36,330) from the stack, multiply them, and push the result (971,914,830) onto the stack.
> 4. The stack now contains only one number, which is the final result.
> Therefore, 36,330 26,951 * = 971,914,830 in reverse Polish notation.
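For reference, the stack procedure ChatGPT describes is trivial to run for real; here's a minimal Python sketch (function name mine, purely illustrative). It returns 979,129,830 for this input, so the 971,914,830 above is off by roughly 7.2 million.

```python
# Minimal stack-based RPN evaluator (sketch): push numbers; for each
# operator, pop two operands, apply it, and push the result back.
def eval_rpn(expr: str) -> int:
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    stack = []
    for tok in expr.split():
        if tok in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok.replace(",", "")))
    return stack.pop()

print(eval_rpn("36,330 26,951 *"))  # 979129830
```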
You should ask it to show its working. The problem is that it uses a fixed amount of thinking for each character - and it writes left to right - while long multiplication is computed right to left. Past a certain number of digits, the amount of "thinking" needed to compute long multiplication in the wrong order (left to right) increases without bound -- accuracy suffers as a result.
A tangential point: note that arithmetic on decimal numbers _has_ to start with the rightmost digit to be done exactly. Consider x = 2/3 + 1/3 = 0.6(6) + 0.3(3), with all digits repeating infinitely many times, so there's no rightmost digit: whichever digit you choose for x before the decimal point (either 0 or 1) could be wrong if you change any digit of 1/3. This is called the Table Maker's Dilemma.
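To make the right-to-left point concrete, here's a rough sketch of schoolbook long multiplication (illustrative only, not a claim about how an LLM computes anything): every output digit depends on a carry arriving from its right, the opposite of the order in which the model emits its answer.

```python
# Schoolbook long multiplication over digit lists (sketch). Digits are
# processed least-significant first, because each output digit depends
# on a carry propagating in from the right.
def long_multiply(x: int, y: int) -> int:
    xs = [int(d) for d in str(x)][::-1]   # least-significant digit first
    ys = [int(d) for d in str(y)][::-1]
    result = [0] * (len(xs) + len(ys))
    for i, a in enumerate(xs):
        carry = 0
        for j, b in enumerate(ys):
            total = result[i + j] + a * b + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(ys)] += carry
    return int("".join(map(str, result[::-1])))

print(long_multiply(36330, 26951))  # 979129830
```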
The second paragraph is a tangent. My point was made in the first paragraph.
You said: "I dont think this is about notation" - Why did you ever think it was?
I was thinking along the lines of: maybe it reads the digits as individual tokens, and reading them left to right it gets confused. That's why I thought it might be about notation. I have learned it's definitely not the case (I read a bit about what tokens actually mean).
I also tried the "show your work" methods. I will not paste my conversations here as they are so long, but even when showing its work it doesn't give the right answer. Two funny things I noticed:
1. It either uses a completely wrong method to calculate it, or it shows the right "kind" of work and then gets the wrong answer
2. If I tell it it was wrong, it will just agree with me. I multiply two huge numbers and tell it no, the answer is actually 42, and it will just apologize. Then when I ask why it thinks it's right it will give the most bullshit reasons lol. Once it even said "I read the number wrong, the second number should be x, I read y". The thing is, if I had actually given x in the input it would have gotten the answer right.
That’s not ‘basic arithmetic’. Most humans would need a pen and paper to solve it.
GPT’s attention window is not equivalent to it being able to ‘write stuff down’ - it's the equivalent of being able to remember the foregoing few minutes of conversation.
Interesting. I didn't use numbers as large as yours, just ones in the low thousands, and ChatGPT with GPT-4 got them correct 3 times in a row.
What's also interesting, though not necessarily surprising, is how relatively close the incorrect answers are in your examples. It's not like the model will randomly spit out an answer like 42 if you ask it to multiply something like 36,330 by 26,951.
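To put a number on "relatively close" (using the figures quoted earlier in this thread): the transcript's answer differs from the true product by well under one percent.

```python
# How far off is the answer quoted in the transcript above?
true_product = 36_330 * 26_951   # 979,129,830
gpt_answer = 971_914_830         # value from the transcript

relative_error = abs(true_product - gpt_answer) / true_product
print(f"{relative_error:.2%}")   # ~0.74%
```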
And a roommate of Richard Stallman at one point, AFAIR.
the easiest way to mismanage a technology is to misunderstand it
Indeed.
His idea of attaching provenance to the sources of information used in models is a good one, and one that already has rumblings of legal weight behind it (see the various articles about copyright claims in response to GPT/Copilot).
I worry a bit that his argument is too centred on digital information creation. Though I suppose that's the novelty of the most recent pieces of technology called AI - they affect information workers, people who already use a computer interface for some large percentage of their work. Still, the topic of /physical work/ - fieldwork, factory work, service work - seems one or more steps removed from LLMs. The management of that work may be affected, but the work itself has already (it seems to me) gone through its first computer-revolution shock.
Edit: I'll add that the whole article has a 90s-Wired feel to it, which is refreshing to see. There's been something of a slowdown in tech-revolutions for the past decade, and it's not original to say that we may be at the start of a new one.
And secondly, is Cleveland a city that has absorbed its suburbs over the past century, or is Cleveland basically just "downtown"? That will affect the average.
The distance to the earth might have been coincidentally the same as the distance to the other satellite. LEO isn't that far up; the one that took the photo was only ~600 km AGL.
But then the resolution can't have been that high, or the other satellite would have to be as big as the ISS. It must have been a lot closer than the earth.
They've got 0.3 m on WorldView-3, more than enough for what was shown in the gif.
Edit: you're probably right, actually... the silver square on the lower section of Landsat 8 is ~1.5 m high (guessing from the image on Wikipedia), so there should only be ~5-6 pixels for it, but there are clearly more.
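For anyone repeating the back-of-the-envelope check: the pixel count is just feature size divided by ground sample distance (both numbers below are the guesses from this thread, not published specs).

```python
# Rough pixel-footprint estimate (sketch): feature size / ground sample distance.
def pixels_across(feature_size_m: float, gsd_m: float) -> float:
    return feature_size_m / gsd_m

# ~1.5 m feature at a ~0.3 m GSD -> about 5 pixels across.
print(pixels_across(1.5, 0.3))  # 5.0
```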
It's a gravity survey satellite. They get better measurements if they fly closer to the earth, so there's a motivation to lower the orbit. GOCE flies so low that it needs to take aerodynamics into account, so it is sleek, like an aircraft.*
* It is, of course, actually an aircraft, like all satellites, to some degree, at least in the layman's definition of the word.
LARES is particularly cool, as it is (and may forever be) the densest object of its mass in the solar system. As with so many gravity experiments, though, systematic uncertainties are more important than raw signal, so LARES 2 is composed of nickel instead of tungsten. It'll still be quite a day when LARES 1 re-enters; that would be quite a sight to see.
I'm a little sad that the mission-planning team didn't include a "Look out below!" warning on the other side of the plaque.
Dropping a half-ton, two-foot-diameter sphere of brass out of orbit is something of a health hazard to whatever life forms might have to catch it. If Miley Cyrus needs a follow-up hit, "Wrecking Ball" and "LAGEOS" have the same number of syllables.
What I think is even crazier is the GRACE mission, with two satellites chasing each other (fittingly named Tom & Jerry):
They use the fact that local concentrations of mass on the earth below accelerate one satellite before the other. So the way they measure gravity is by very accurately measuring the fluctuations in the distance between the satellites.
This is a feat in itself: the satellites are 200 km apart and the fluctuations are on the scale of the width of a hair.
Maybe there are better, more precise ways of measuring local gravity from space, but this is just so clever.
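As a rough sense of scale (the hair width below is my own assumed figure, not a mission spec): resolving hair-width changes over a ~200 km baseline amounts to a fractional precision of a few parts in ten billion.

```python
# Back-of-the-envelope scale of the GRACE ranging claim (sketch).
# Assumptions: hair width ~70 micrometres (illustrative), separation ~200 km
# (the figure given in the comment above).
hair_width_m = 70e-6
separation_m = 200e3

print(f"{hair_width_m / separation_m:.1e}")  # ~3.5e-10
```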
> It is, of course, actually an aircraft, like all satellites, to some degree, at least in the layman's definition of the word.
Aircraft use the surrounding air as support to counter gravity. While lift has been measured for GOCE at different angles of attack, it never used it to stay at its altitude; the "wings" were just solar panels to feed the power-hungry engine and to stabilize the attitude. At GOCE operating altitudes, the air is sparse, the Knudsen number is high, and the aerodynamics differ quite a bit. It wasn't an aircraft, it was a "satelloid", an old idea that got implemented.
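For a sense of why the high Knudsen number matters (the mean free path below is an order-of-magnitude assumption on my part, not a GOCE figure): with a mean free path of at least ~100 m and a spacecraft a few metres long, Kn >> 1, so the flow is free-molecular rather than continuum aerodynamics.

```python
# Knudsen number sketch: Kn = mean free path / characteristic length.
# Assumed values: mean free path ~100 m (order of magnitude only, at
# GOCE-like altitudes), spacecraft length ~5 m.
mean_free_path_m = 100.0
characteristic_length_m = 5.0

print(mean_free_path_m / characteristic_length_m)  # ~20 -> free-molecular regime
```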
There are at least a couple of companies trying to recreate the GOCE VLEO concept for earth imaging. Apparently you can take better pictures when you're much closer to the earth.
Enough to cause some drag, so it needed an aerodynamic shape and a continuously operating ion engine to avoid orbital decay from that drag. But not enough air to actually produce lift, so no wings, just fins for stabilization. So the sleek shape which led to it being perceived as beautiful was actually technically motivated. To me it looks more like a rocket than an aircraft, TBH...
The effective weight of the spacecraft in orbit is also extremely small, so the lift produced by the free flow hitting the surface at an angle is non-negligible. But since all of that weight is created by the drag from the same flow, reducing the cross-section is more beneficial. That's why it was designed to maintain exact orbital velocity at zero AoA, instead of using the lift.