Hacker News | roger_'s comments

Thanks, I was looking through the article for exactly that. Does it lock on to a configuration of stars?

Really curious how they did this mechanically.


I'll probably write another article on the star tracker itself, but I can give you a quick summary of the spiral search mechanism. It was electromechanical: a motor turned a resolver, a device with coils that generates the sine and cosine of the shaft angle. These give the X and Y deflections for a circle. The signals then went through potentiometers, also turned by the motor, to produce constantly growing magnitudes, so you get a spiral. But you need to slow the motor as you spiral outward, since each revolution covers a much larger linear region. So the motor also turned a stepping switch that progressively reduced its speed.
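A rough software model of that spiral scan (all constants here are illustrative choices of mine, not values from the actual hardware):

```python
import math

def spiral_scan(dt=0.01, steps=2000, growth=0.05, v=1.0):
    """Toy model of the spiral search: the resolver gives sin/cos of the
    shaft angle, the potentiometers scale them by a growing radius, and
    the stepping switch slows the motor so the linear scan speed stays
    roughly constant as the spiral widens."""
    theta, points = 0.0, []
    for _ in range(steps):
        r = 1.0 + growth * theta           # radius grows with shaft angle
        x = r * math.cos(theta)            # resolver cosine -> X deflection
        y = r * math.sin(theta)            # resolver sine -> Y deflection
        points.append((x, y))
        theta += (v / r) * dt              # slower rotation at larger radius
    return points
```

The `theta += (v / r) * dt` line is the software analog of the stepping switch: angular speed drops as the radius grows, keeping the scan rate over the sky roughly constant.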

Once the system finds a star, a complicated feedback mechanism keeps it locked on. There is a spinning slotted disk in front of the photomultiplier tube. If the star is off center, the output peaks when the slot lines up with the star, so you get an error signal whose phase indicates the direction to the star. This signal is demodulated to produce X and Y signals that shift the aim toward the star.
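A toy numerical sketch of that phase demodulation (the signal shape and geometry here are simplified assumptions of mine, not the actual tracker's optics):

```python
import math

def error_signal(star_angle, star_offset, n=360):
    """Toy demodulation: a slotted disk chops the photomultiplier output,
    which peaks when the slot passes the star's direction. Multiplying by
    sin/cos references at the spin frequency and averaging recovers X and
    Y error terms pointing toward the star."""
    x_err = y_err = 0.0
    for k in range(n):
        phi = 2 * math.pi * k / n                        # slot position now
        # photomultiplier signal: strongest when slot lines up with the star
        s = star_offset * max(0.0, math.cos(phi - star_angle))
        x_err += s * math.cos(phi) / n                   # demodulate vs cosine
        y_err += s * math.sin(phi) / n                   # demodulate vs sine
    return x_err, y_err
```

With the star offset along the X axis (`star_angle=0`), the demodulated X error comes out positive and the Y error near zero, so the servo knows which way to steer.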


I would absolutely love to read something about that - thanks for putting in the work and sharing it.

I have a buddy working on restoring a set of binoculars that were attached to the Target Bearing Transmitter system of a US sub from the 50s. Last I heard he was able to find someone who actually had parts of the original schematics, so he's able to machine some new pieces.

These things are definitely a labor of love.


I would love to read a more detailed article on this device. I came back from reading the article with this exact question.

Am I right in thinking it didn't matter which star it locked onto, and it didn't need to know which star it was? Would it be a problem if it locked onto another celestial body (e.g. Venus)?

No, it needed to lock onto the right star, the one that matched the coordinates. Otherwise, it would be pointing in a random direction. The navigator would check against three different stars to detect an error.

The system could also use planets or even the sun for navigation. A special filter was used with the sun to avoid burning out the photomultiplier tube.


Ah, so it could be used in the daytime. I read the whole article assuming it was only useful at night. (When else would you be flying a bomber and need high accuracy?)

The SR-71’s star tracker was so sensitive it could track stars during the day.

https://theaviationgeekclub.com/the-sr-71-blackbird-astro-na...


being halfway to space probably helped

I can see valid uses of this but I also feel like a probabilistic calculator would be more useful.

e.g. the result for the 1 / [-1, 2] example doesn’t tell you how likely each value is and it clearly won’t be uniformly distributed (assuming the inputs are).
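A quick Monte Carlo sketch makes the point (the sampling setup and bin choices are mine, not from the tool being discussed):

```python
import random

def reciprocal_bin_masses(n=100_000, seed=0):
    """Draw uniformly from the input interval [-1, 2] and look at how the
    reciprocals distribute: two equal-width output bins, [0.5, 1.0) and
    [1.0, 1.5), end up with very different probability mass, so 1/x is
    clearly not uniform even though x is."""
    rng = random.Random(seed)
    out = [1.0 / x for x in (rng.uniform(-1.0, 2.0) for _ in range(n))
           if x != 0.0]
    lo = sum(0.5 <= v < 1.0 for v in out) / len(out)   # x in (1, 2]: mass 1/3
    hi = sum(1.0 <= v < 1.5 for v in out) / len(out)   # x in (2/3, 1]: mass 1/9
    return lo, hi
```

Analytically the first bin gets 1/3 of the mass and the second only 1/9, which is exactly the kind of information a plain interval result throws away.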


DFlash immediately came to my mind.

There are several Mac implementations of it that already show >2x faster Qwen3.5 inference.


Looks great but even $40 for a perpetual license with only two years of updates still seems excessive.

Taskbar, uBar, etc. cost less and have unlimited updates. How is this better?


I've already used these apps and they did not serve my purpose well.

- I forced myself to use uBar, but it has a level of jank that doesn't sit right with me: it's not reliable on a multi-monitor setup, there's no guarantee it'll work after waking from sleep, and maximized windows will sometimes sit behind uBar. boringBar does all of this better and more reliably.

- Taskbar by Lawand is better than uBar, but it has similar problems with multi-monitor support and wake from sleep. Apart from that, its "start menu" app launcher is still in beta, and you have to download a beta version from the developer's Twitter page to actually use it. And obviously it's subjective, but the boringBar UI is a lot better; it integrates nicely with macOS.


Thank you for mentioning Taskbar (https://lawand.io/taskbar/). The multi-monitor bug was fixed in the recent macOS update, as it was a macOS bug and not a Taskbar bug. Also, the start menu update is almost done and will be out soon.

Thank you for mentioning my app (https://lawand.io/taskbar/). It is still free for the foreseeable future, and once the paid version comes out it will be $25 for a lifetime license; I will not offer a subscription option.

Here's my (hopefully) intuitive guide:

1. understand weighted least squares and how you can update an initial estimate (prior mean and variance) with a new measurement and its uncertainty (i.e. inverse-variance-weighted least squares)

2. this works because the true mean hasn't changed between measurements. What if it did?

3. KF uses a model of how the mean changes to predict what it should be now based on the past, including an inflation factor on the uncertainty since predictions aren't perfect

4. after the prediction, it becomes the same problem as (1) except you use the predicted values as the initial estimate

There are some details about the measurement matrix (when your measurement is a linear combination of the true value -- the state) and the Kalman gain, but these all come from the least squares formulation.

Least squares is the key and you can prove it's optimal under certain assumptions (e.g. Bayesian MMSE).
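A minimal 1-D sketch of steps 1-4 (the parameter names and noise values here are illustrative assumptions, not from the comment):

```python
def kf_1d(measurements, x0, p0, q, r, f=1.0):
    """Minimal 1-D Kalman filter following the steps above: predict with
    the motion model (step 3, inflating uncertainty by process noise q),
    then fuse prediction and measurement by inverse-variance weighting
    (step 1). f is the state transition, r the measurement variance."""
    x, p = x0, p0
    for z in measurements:
        # predict: propagate the estimate and inflate its uncertainty
        x, p = f * x, f * f * p + q
        # update: inverse-variance weighted average of prediction and measurement
        k = p / (p + r)            # Kalman gain = prior weight on the innovation
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return x, p
```

With `f=1` and `q=0` this reduces exactly to the static weighted-least-squares update of step 1, which is the point of the guide: the KF is that update plus a prediction step.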


Skimmed this but don't have an intuitive understanding of why this works and how temperature and truncation factor in.


I was expecting something about the morphological erosion operator but this was pretty cool.

Some of the techniques here seem to be motivated by physical processes (e.g. rain). I wonder if that could be taken further to derive the whole process?


Criticize gatekeeping all you want, but I feel it’s safer to recommend a Mac or iPhone to an older, non-technical person than the equivalent Windows / Android machine.

And I’m still able to install any app I want with minimal fuss.


Why not make it the default? I'm all for rethinking legacy decisions.

It helps 99% of the user base and the security risk seems negligible.


Rethinking would imply there was thinking going on. This decision was made on vibes alone.


If anything, the people clinging to this snake oil security theater are the ones running on vibes alone.


Can anyone explain why Mamba models start with a continuous time SSM (and discretize) vs discrete time?

I know the step size isn't fixed, but I'm not sure why that matters. Is that the only reason? There also seems to be a parameterization advantage with the continuous formulation.

