
>> Are light, radar, and sonar still not enough to detect obstacles that big?

> Radar is useless underwater, light would also be useless for spotting anything in time to react at speed, and sonar isn’t routinely used because it compromises stealth.

Any people with knowledge in the field care to comment on whether it's feasible for a light (em?)-type, active, near-range detection system to work?

It seems to me the basic problem is:

- enough energy to get a signal at a range sufficient to move around any detected obstacles

- high enough dissipation in water to minimize range at which detection by external sensors is possible


Water absorbs photons at most frequencies outside the visible spectrum. What we deem visible light is actually the band where water is most transparent. Since eyes evolved underwater, it makes sense that they would use this section of the spectrum.

https://commons.wikimedia.org/wiki/File:Absorption_spectrum_...
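
To put rough numbers on that, here's a minimal Beer-Lambert sketch. The absorption coefficients are approximate values for pure water from memory, so treat them as illustrative; the linked chart has the real curve:

    import math

    # Approximate absorption coefficients for pure water, in 1/m (illustrative)
    alpha = {
        "blue-green (~480 nm)": 0.02,
        "red (~700 nm)": 0.6,
        "near-IR (~1000 nm)": 30.0,
    }

    for band, a in alpha.items():
        efold = 1.0 / a              # distance at which intensity falls to ~37%
        t100 = math.exp(-a * 100.0)  # fraction surviving a 100 m one-way path
        print(f"{band}: 1/e length ~{efold:.2f} m, T(100 m) ~ {t100:.1e}")

Blue-green light survives for tens of meters; near-IR is gone within centimeters.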


That makes so much sense and actually unravels a few other mysteries for me. Thanks!


Your two requirements contradict each other. You can choose one or the other, but not both.

The world's militaries have invested a great deal of time and money into researching underwater sensors. The solution is not going to be discovered by random people on the Internet.


> The solution is not going to be discovered by random people on the Internet.

I'm fairly certain someone knows a number of things that have been discovered by random people on the internet.


> Any people with knowledge in the field care to comment on whether it's feasible for a light (em?)-type, active, near-range detection system to work?

I’m not in the field, but I follow it enough to say: in theory, yes, (blue-green lidar), and there is a lot of work on moving this from theory to practice.

But approximately everyone is working on it (for bathymetry, for detecting subs, and on similar blue-green laser tech for underwater comms), so it's likely that once it's deployed in a form a sub could in principle use as a “headlight”, it'll raise similar concerns to sonar: if it is used actively, passive sensors looking for active use will also become common.


Light transmission through the water (its cloudiness) is just way too variable for something like this to work consistently and reliably.


Wouldn't the cloudiness itself be a signal?

Presumably it's essentially clear of suspended particles, most of the time.


The issue is that the cloudiness of water, even open water, varies greatly. So there would be either lots and lots of false positives, or the system would be too insensitive.

Think driving in fog


Have you ever dived? Max visibility is like 30-50 m unless you're in a filtered environment.


A signal of what?


Something solid ahead

To clarify: I mean that, generally speaking, the presence of 'cloudy' water would presumably signify that the seafloor was close. In the open ocean, maybe a sea mount is surrounded by essentially a cloud of particles.

Edit: I think your reply made the thread reach maximum depth, so I'll end it here. I should do more reading on the topic.


The problem I'm highlighting is it also gets cloudy randomly in open ocean, for lots of other reasons.


The issue is to detect these while maintaining stealth.


The problem seems to be with defining good tests for intelligence. FWIW, because GPT4 answers have a detectable pattern, they should presumably fail the Turing test.

At some level, intelligence requires logic, rationality, and conceptualization, all of which are topics which have evaded clear definition despite millennia of philosophy directly addressing the issues.


When you say we need ‘good’ tests for intelligence, you mean ‘tests that humans can pass but machines can’t’.

You’re demanding this because you aren’t comfortable with the implication that a computer can pass our existing tests for intelligence, so you rationalize that with the comforting thought that those tests were not meant to identify intelligence. Tests like the SAT or the bar exam or AP English. Or tests for theory of mind or common sense or logic. Those tests aren’t testing for ‘intelligence’ - they can’t be. Because a computer passed them.

It’s okay. We can make new tests.


Those are a lot of leaps to make about my motivations!


Humans also answer in predictable ways, so if you apply such criteria to a Turing test, humans will fail it too.


I suppose it's a bit of a no-true-Scotsman argument, but the Turing test is to see whether an observer can correctly guess whether the interlocutor is _human_, so by definition the test would pass if the other correspondent was human.

To the point underneath: humans do not answer in as predictable a way as ChatGPT. Your answer, for example, I am confident did not come from ChatGPT.

Edit: if I've horribly mangled the Turing test definition, please let me know


I just imagined we could look at the oldest examples of intelligence in history. In contrast with AI, our chauvinism has us pretend even the earliest monkey had it. Fish? Insects? If it can rub two sticks together, it gets the diploma.

The Turing test is easy. I had two chat bots talk about other users in the channel while ignoring (apart from some trigger words) what those users had to say. The human subjects successfully got angry, which means it was important to them.


Since LLMs can't do arithmetic, one has to think there's more going on, at least in the arithmetic part of the brain, than is going on in LLMs.


Excellent observation. In fact the language part of the brain is only a portion of the brain, albeit a rather large one (though not nearly as large as visual processing). And people who suffer brain damage that renders them unable to speak (or to understand speech, which interestingly is handled by a different, though nearby, region) are still able to demonstrate intelligent behavior with ease.

In fact it is damage to the prefrontal cortex (which has nothing to do with speech) that is most correlated with a detriment in intelligent behavior (suspiciously, also social behavior; food for thought in what we consider “intelligence”). Victims of lobotomy had their prefrontal cortex destroyed, and their injuries resulted in them losing their personalities and losing basic function as human beings, even though they were still able (but perhaps not always willing) to speak and comprehend speech.


I don’t think you have an ‘arithmetic’ part of your brain.

What you have that LLMs lack is a visual part of your brain - one which can instantly count quantities of objects up to about 7. That gives you tools that can be trained to do basic arithmetic operations. Although you have to be taught how to use that natural capability in your brain to solve arithmetic problems.

And of course for more complex things than simple arithmetic, you fall back on verbalized reasoning and association of facts (like multiplication tables) - which an LLM is capable of doing too.

Poor GPT, though, has only a one-dimensional perceptual space: tokens and their embeddings from start to end of its attention window. Although who's to say it doesn't have some sense for ‘quantity’ of repeated patterns in that space too?


Dunno if there's some trickery going on under the hood, but GPT-4 does basic arithmetic correctly.


It statistically outputs the next probable piece of text. There is a lot of math-related text in Common Crawl (likely the biggest source of its training data). That is all the company that owns it claims it does. There is no calculation, therefore no basic arithmetic being done correctly.


If it's only statistics, what is the neural network for?


Implement the probabilistic model of token sequences and fit it to the training data?

You could say the same for a character recognition system.
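
A toy version makes that concrete: a bigram character model is "a probabilistic model of token sequences fit to training data", minus the network (a minimal sketch; an LLM replaces the count table with a learned function that generalizes):

    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat. the cat ran. the rat sat."

    # "Fit to the training data": count how often each character follows each.
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    def next_char(prev):
        # "Statistically output the next probable value": sample continuations
        # in proportion to how often they were seen in training.
        followers = counts[prev]
        return random.choices(list(followers), weights=list(followers.values()))[0]

    text = "t"
    for _ in range(40):
        text += next_char(text[-1])
    print(text)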


Emergent behavior does happen with neural networks. It is correct to say that GPT-4 doesn't do math correctly, but it may be incorrect to say a large language model can't in principle.


Is there any evidence it can? It's not incorrect to say there is no God, for example.


It's extremely easy to test arithmetic on random numbers it'll never have seen in the training set. GPT-4 can do arithmetic just fine.
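
Something like the following, where ask_model is a hypothetical stand-in for whatever API you're testing (a minimal sketch, not a real client):

    import random

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in: wire this up to the chat API you're testing.
        raise NotImplementedError

    trials, correct = 20, 0
    for _ in range(trials):
        a = random.randint(10_000, 99_999)   # 5-digit operands: products this size
        b = random.randint(10_000, 99_999)   # are very unlikely to be memorized
        reply = ask_model(f"What is {a} * {b}? Answer with only the number.")
        correct += int(reply.replace(",", "").strip()) == a * b

    print(f"{correct}/{trials} exact matches")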


And it will give you some random number with maybe the right order of magnitude. It doesn't seem to do arithmetic very well at all, actually.


Yeah, that's not true lol. Its arithmetic is not perfect (mostly multiplication; addition is fine), but there's nothing random about even the wrong/non-exact numbers.


We're talking about an LLM, dude.

How am I supposed to have a conversation with someone who is gassing up "not perfect [arithmetic]" (something a wristwatch from the '80s could do) and won't even believe what the creators of said machine say about how it works?


This is what I got on ChatGPT today. I assume it used GPT4:

        Prompt            ChatGPT           Actual       Match
  397,356 * 930,547   369,685,207,932   369,758,433,732  FALSE
   36,330 *  26,951       979,458,630       979,129,830  FALSE
    8,681 *   9,330        80,911,430        80,993,730  FALSE
      278 *     903           250,734           251,034  FALSE
       82 *      77             6,314             6,314  TRUE

Edit: the number of correct digits (counting from the leftmost) only exceeds 3 on the smallest pair. It drops to two on the 3-digit by 3-digit pair as well.


You probably used GPT-3.5. That said, I didn't manage to get GPT-4 to calculate 36,330 * 26,951 correctly. I suggested casting out 9s, casting out 11s, doing long multiplication, reversing the digits - nothing.

I have a theory that it does arithmetic badly because the logic goes right-to-left while LLMs write left-to-right. If the digits were reversed, it might not make as many mistakes. I ran out of attempts before I could test this properly.


Did you adjust the prompt to ask it how a famous mathematician would answer the question? Or what a calculator would say the answer is?

Sometimes LLMs get math wrong because people got math wrong in the training data, and so they match the error frequency (https://learnprompting.org/docs/basics/roles).


Yes, that's exactly what people are saying here. It's not a criticism of the tool; it's an example of what the tool is and how it functions.


I just had the following conversation on chatGPT

> Human: what is 36,330 26,951 * (input is expressed in reverse postfix notation)

> chatGPT: To evaluate this expression using reverse Polish notation, we need to use a stack to keep track of the intermediate results. Here's how we can evaluate the expression:

1. Read the first number "36,330" and push it onto the stack.

2. Read the second number "26,951" and push it onto the stack.

3. Read the operator "*", pop the top two numbers (26,951 and 36,330) from the stack, multiply them, and push the result (971,914,830) onto the stack.

4. The stack now contains only one number, which is the final result.

Therefore, 36,330 26,951 * = 971,914,830 in reverse Polish notation.

I don't think this is about notation.
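
For reference, the stack procedure it describes is exactly right; it just botches the multiplication itself. A working evaluator is a few lines (a minimal sketch):

    def eval_rpn(tokens):
        stack = []
        ops = {"+": lambda a, b: a + b,
               "-": lambda a, b: a - b,
               "*": lambda a, b: a * b}
        for tok in tokens:
            if tok in ops:
                b, a = stack.pop(), stack.pop()   # operands pop off in reverse order
                stack.append(ops[tok](a, b))
            else:
                stack.append(int(tok.replace(",", "")))
        return stack[0]

    print(eval_rpn(["36,330", "26,951", "*"]))   # 979,129,830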


You should ask it to show its working. The problem is that it uses a fixed amount of thinking for each character - and it writes left to right - while long multiplication is computed right to left. Past a certain number of digits, the amount of "thinking" needed to compute long multiplication in the wrong order (left to right) increases without bound -- accuracy suffers as a result.

A tangential point: Note that multiplication of decimal numbers _has_ to start with the rightmost digit to be done accurately. Consider x = 2/3 + 1/3 = 0.6(6) + 0.3(3), all digits repeating infinitely many times, so there's no rightmost digit: Notice that whichever digit you choose for x before the decimal point (either 0 or 1) could be wrong if you change any digit of 1/3. This is called the Table Maker's Dilemma.

The second paragraph is a tangent. My point was made in the first paragraph.

You said: "I don't think this is about notation". Why did you ever think it was?
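
To make the first paragraph concrete, here's schoolbook long multiplication (a minimal sketch): the carry in the inner loop forces the computation to proceed from the least-significant digit upward, the reverse of the order in which an LLM emits its output.

    def long_multiply(x: int, y: int) -> int:
        xs = [int(d) for d in str(x)][::-1]   # least-significant digit first
        ys = [int(d) for d in str(y)][::-1]
        result = [0] * (len(xs) + len(ys))
        for i, a in enumerate(xs):
            carry = 0
            for j, b in enumerate(ys):
                total = result[i + j] + a * b + carry
                result[i + j] = total % 10    # this digit is now final...
                carry = total // 10           # ...but the carry moves leftward
            result[i + len(ys)] += carry
        return int("".join(map(str, result[::-1])))

    assert long_multiply(36_330, 26_951) == 36_330 * 26_951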


I was thinking along the lines of: maybe it reads the digits as individual tokens, and reading from left to right it gets confused. That's why I thought it might be about notation. I've since learned that's definitely not the case (I read a bit about what tokens actually mean).

I also tried the "show your work" method. I won't paste my conversations here, as they are so long, but even when showing its work it doesn't give the right answer. Two funny things I noticed:

1. It either uses a completely wrong method to calculate, or it shows the right "kind" of work and then gets the wrong answer.

2. If I tell it it was wrong, it will just agree with me. I can multiply two huge numbers, tell it the answer is actually 42, and it will just apologize. Then when I ask why it thought it was right, it gives the most bullshit reasons lol. Once it even said "I read the number wrong, the second number should be x, I read y". The thing is, if I had actually given x in the input, it would have gotten the answer right.

It's all very confusing.


That’s not ‘basic arithmetic’. Most humans would need a pen and paper to solve it.

GPT’s attention window is not equivalent to it being able to ‘write stuff down’ - it’s its equivalent of being able to remember the foregoing few minutes of conversation.


Stop redefining things to support your argument. "The basic operations under arithmetic are addition and subtraction, division and multiplication"


Right, but when you say an LLM can’t do basic arithmetic you’re presumably talking about mental arithmetic, right?


No.


Unless you're using Plus (black icon), you're using GPT-3.5 (green icon).


Interesting. I didn't use numbers as large as yours, just in the low thousands, and ChatGPT with GPT-4 got them correct 3 times in a row.

What's also interesting, though not necessarily surprising, is how relatively close the incorrect answers are in your examples. It's not as if the model will randomly spit out an answer like 42 if you ask it to multiply something like 36,330 by 26,951.


> I assume it used GPT4

You did not. You have to explicitly select it from the dropdown which is only available on paid Plus accounts.


One should think of these things as disembodied politicians.

That has been my best analogy so far.

They'll never say "I don't know", and they'll bullshit you into oblivion while never backtracking.


And a roommate of Richard Stallman at one point, AFAIR.

  the easiest way to mismanage a technology is to misunderstand it
Indeed.

His idea of attaching provenance to the sources of information used in models is a good one, and one that already has rumblings of legal weight behind it (see the various articles about copyright claims in response to GPT/Copilot).

I worry a bit that his argument is too centred on digital information creation. Though I suppose that's the novelty of the most recent pieces of technology called AI: they affect information workers, people who already use a computer interface for some large percentage of their work. Still, the topic of /physical work/ (fieldwork, factory work, service work) seems one or more steps removed from LLMs. The management of that work may be affected, but the work itself has already (it seems to me) gone through its first computer-revolution shock.

Edit: I'll add that the whole article has a 90s-Wired feel to it, which is refreshing to see. There's been something of a slowdown in tech-revolutions for the past decade, and it's not original to say that we may be at the start of a new one.


From the given link:

  As of April 2023, the average rent for a 1-bedroom apartment in Cleveland, OH is $1,136. This is a 7% decrease compared to the previous year.


What's that as a fraction of median income? A brief search suggests it's on the high side: over 40%, at any rate.
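
Back-of-the-envelope, assuming a Cleveland median household income of roughly $34k/year (an assumed figure; check the Census ACS for the real one):

    rent = 1_136             # $/month, from the quote above
    income = 34_000          # $/year, assumed for illustration

    print(f"{rent * 12 / income:.0%}")   # ~40% of gross income on rent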


And secondly, is Cleveland a city that has absorbed its suburbs over the past century, or is Cleveland basically just "downtown"? That will affect the average.


$1,136 for Cleveland ?!


Luckily, this[1] is also reading zero at the moment.

[1] https://hasthelargehadroncolliderdestroyedtheworldyet.com/


The distance to the earth might coincidentally have been about the same as the distance to the other satellite. LEO isn't that far up; the satellite that took the photo was only ~600 km AGL.


But then the resolution can't have been that high, or the other satellite would have appeared as big as the ISS. It must have been a lot closer than the earth.


The range to the imaged satellite is overlaid on the images - it's between 90km and 130km over the sequence.


They've got 0.3m on World View 3, more than enough for what was shown in the gif

Edit: you're probably right, actually.. the silver square on the lower section of landsat 8 is ~1.5m high (guessing from the image on wikipedia), so there should only be ~5-6 pixels for it, but there are clearly more.


The sample distance (SSD) is noted on the GIF, and ranges between 6.5 and 4.6 cm for the images shown here.
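
Those figures are consistent with a WorldView-class imager, since sample distance scales roughly linearly with range for a fixed sensor. A rough small-angle check, assuming WorldView-3's published ~0.31 m panchromatic GSD from ~617 km:

    gsd_nadir_m = 0.31                  # WorldView-3 nadir ground sample distance
    altitude_m = 617_000                # nominal WorldView-3 altitude
    ang_res = gsd_nadir_m / altitude_m  # ~5.0e-7 rad per pixel

    for range_m in (90_000, 130_000):
        print(f"{range_m / 1000:.0f} km -> {ang_res * range_m:.3f} m/pixel")
    # 90 km -> 0.045 m, 130 km -> 0.065 m: right in the 4.6-6.5 cm band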


I'd just like to throw in a plug for what I think is the best-looking satellite:

GOCE: https://en.wikipedia.org/wiki/Gravity_Field_and_Steady-State...

It's a gravity survey satellite. They get better measurements if they fly closer to the earth, so there's a motivation to lower the orbit. GOCE flies so low that it needs to take aerodynamics into account, so it is sleek, like an aircraft.*

* It is, of course, actually an aircraft, like all satellites, to some degree, at least in the layman's definition of the word.


GOCE is a wonderful (and beautiful) experiment. Its scientific contributions are particularly wonderful.

When I think of pretty gravity experiments, there are many, as the form must follow the function and gravity is geometry.

I'm particularly partial to the laser-ranging satellites:

https://en.wikipedia.org/wiki/LAGEOS

https://en.wikipedia.org/wiki/LARES_(satellite)

LARES is particularly cool, as it is (and may forever be) the densest object of its mass in the solar system. As so many gravity experiments go, systematic uncertainties are more-important than raw signal, though, so LARES 2 is composed of nickel instead of tungsten. It'll still be quite a day when LARES 1 re-enters; it'd be quite a sight to see.


LAGEOS is a beautiful object. Wikipedia says its projected lifespan in orbit is 8.4 million years, which somehow makes me very happy today.


The mission plaque, as a message for future inhabitants of Earth, is also cool!


I'm a little sad that the mission-planning team didn't include a "Look out below!" warning on the other side of the plaque.

Dropping a half-ton, two-foot-diameter, sphere of brass out of orbit is something of a health hazard to whatever life forms might have to catch it. If Miley Cyrus needs a follow-up hit, "Wrecking Ball" and "LAGEOS" have the same number of syllables.


When is LARES-1 reentry expected? A 1500 km orbit shouldn't be very long-lived, right?


Very cool. Reminds me of the retroreflectors on GLONASS.


What I think is even crazier is the GRACE mission, with two satellites chasing each other (fittingly named Tom & Jerry).

They use the fact that higher gravity from denser mass on the earth below accelerates one satellite before the other. So the way they measure gravity is by very accurately measuring the fluctuations in the distance between the satellites.

This is a feat in itself: the satellites are about 200 km apart and the fluctuations are on the scale of the width of a hair.

Maybe there are better, more precise ways of measuring local gravity from space, but this is just so clever.
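
For scale (taking "hair" as ~50 µm wide, an assumed figure for illustration):

    separation_m = 200e3     # approximate distance between the two satellites
    fluctuation_m = 50e-6    # ~width of a human hair (assumed)

    print(f"relative precision: {fluctuation_m / separation_m:.1e}")   # 2.5e-10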


> It is, of course, actually an aircraft, like all satellites, to some degree, at least in the layman's definition of the word.

Aircraft use the surrounding air as support to counter the gravity. While lift has been measured for GOCE at different angles of attack, it never used it to stay at its altitude; the "wings" were just solar panels to feed the power-hungry engine and stabilize the attitude. At GOCE operating altitudes, the air is sparse, Knudsen number is high, and the aerodynamics differ quite a bit. It wasn't an aircraft, it was a "satelloid", an old idea that got implemented.


There are at least a couple companies trying to recreate the GOCE V-LEO concept for earth imaging. Apparently you can take better pictures when you're much closer to the earth.

https://eoi.space/


> It is, of course, actually an aircraft, like all satellites, to some degree, at least in the layman's definition of the word.

doesn't the layman definition of "aircraft" require air?


Right. Also, the layman may assume that space is a total vacuum (so absolutely no air), even near the earth.


isn't the air at orbits of most satellites (4x starlink) so thin that it's effectively vacuum?


A pity that the Wikipedia article has artist impressions and models, but no actual photos of the satellite.



I think it's hard to beat Misty for best-looking satellite: https://en.wikipedia.org/wiki/Misty_(satellite_program) - an eavesdropping Paul McCarthy tree in Paris.


Space Buttplug!


> Its sleek, aerodynamic design led it to be dubbed the 'Ferrari of space'

great compliment for Ferrari!


It's more of an aircraft when there's air around, be it only a very small quantity :)

Can't be much at 250km though.


Enough to cause some drag, and because of that it needed to have an aerodynamic shape and a continuously operating ion engine to avoid orbital decay because of that drag. But not enough air to actually produce lift, so no wings, just fins for stabilization. So the sleek shape which led to it being perceived as beautiful was actually technically motivated. To me it looks more like a rocket than an aircraft TBH...


The weight of the spacecraft in orbit is also extremely small, so the lift produced from the free flow hitting the surface at an angle is non-negligible. But as all weight is created by the drag from the same flow, reducing the cross-section is more beneficial. That's why it was designed to match the perfect orbital velocity at zero AoA, instead of using the lift.


I always feel a little dirty when I click on twitter links these days.


It does make me think: isn't there another way to easily share things? But Twitter is the one to beat for network effect.


works for me at the time of writing

edit: not logged in (usually am not)

