I'll probably write another article on the star tracker itself. But I can give you a quick summary of the spiral search mechanism. It was electromechanical: a motor turned a resolver, a device with coils to generate sine and cosine from the shaft angle. This gives the X and Y deflections for a circle. These signals went through potentiometers that were also turned by the motor to produce constantly growing magnitudes, so you get a spiral. But you need to slow down the motor as you spiral outwards since you're covering a much larger linear region. So the motor also turns a stepping switch that progressively reduces its speed.
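A rough software analogue of that geometry (my own illustration, not the actual hardware; all constants are invented) looks like this: the resolver supplies sin/cos of the shaft angle, the ganged potentiometers scale them by a growing radius, and the stepping switch slows the shaft so the linear sweep speed stays roughly constant as the spiral widens.

```python
import math

def spiral_points(n_points=500, radius_per_turn=1.0, arc_step=0.0628):
    """Return (x, y) deflections for an outward spiral swept at
    roughly constant linear (arc) speed."""
    pts = []
    theta = 0.0
    for _ in range(n_points):
        r = radius_per_turn * theta / (2 * math.pi)  # radius grows with angle
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        # advance the angle less as the radius grows -- the software
        # counterpart of the stepping switch slowing the motor
        theta += arc_step / max(r, 0.05 * radius_per_turn)
    return pts
```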
Once the system finds a star, a complicated feedback mechanism keeps it locked onto the star. There is a spinning slotted disk in front of the photomultiplier tube. If the star is off center, the output peaks when the slot lines up with the star, so the phase of the resulting error signal indicates the direction to the star. This signal is demodulated to produce X and Y signals that adjust the aim to move toward the star.
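The demodulation step can be sketched in software (my own toy model, not the real electronics): multiply the chopped photomultiplier output by sine and cosine references synchronized to the disk and average, which recovers the X and Y pointing errors. The signal model below is invented for illustration.

```python
import math

def demodulate(samples, n_per_rev):
    """Recover (x_err, y_err) from one revolution of chopped signal."""
    x = y = 0.0
    for i, s in enumerate(samples):
        phase = 2 * math.pi * i / n_per_rev
        x += s * math.cos(phase)
        y += s * math.sin(phase)
    n = len(samples)
    return 2 * x / n, 2 * y / n

# Simulated off-center star at bearing phi0: the output peaks when
# the slot passes phi0, modeled here as 1 + cos(phase - phi0).
n = 360
phi0 = math.radians(30)
sig = [1 + math.cos(2 * math.pi * i / n - phi0) for i in range(n)]
ex, ey = demodulate(sig, n)
# (ex, ey) points toward the star: atan2(ey, ex) recovers phi0
```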
I would absolutely love to read something about that - thanks for putting in the work and sharing it.
I have a buddy working on restoring a set of binoculars that were attached to the Target Bearing Transmitter system for a US sub from the 50s. Last I heard he was able to find someone that actually had parts of the original schematics for it so that he’s able to machine some new pieces.
Am I right in thinking it didn't matter which star it locked onto, and it didn't need to know which star it was? Would it be a problem if it locked onto another celestial body (e.g. Venus)?
No, it needed to lock onto the right star, the one that matched the coordinates. Otherwise, it would be pointing in a random direction. The navigator would check against three different stars to detect an error.
The system could also use planets or even the sun for navigation. A special filter was used with the sun to avoid burning out the photomultiplier tube.
Ah, so it could be used in the daytime. I read the whole article assuming it was only useful at night. (When else would you be flying a bomber and need high accuracy?)
I can see valid uses of this but I also feel like a probabilistic calculator would be more useful.
e.g. the result for the 1 / [-1, 2] example doesn’t tell you how likely each value is and it clearly won’t be uniformly distributed (assuming the inputs are).
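A quick Monte Carlo check of this point (purely illustrative, numbers are mine): if x is uniform on [-1, 2], then 1/x is far from uniform, and equal-width output buckets get very different probability mass.

```python
import random

random.seed(0)
samples = []
for _ in range(100_000):
    x = random.uniform(-1.0, 2.0)
    if x != 0.0:
        samples.append(1.0 / x)

# 1/x maps x in [1, 2] -> [0.5, 1] and x in [0.5, 1] -> [1, 2],
# so these two equal-width buckets should hold ~1/3 and ~1/6 of
# the mass respectively -- clearly not uniform.
in_half_to_one = sum(1 for s in samples if 0.5 <= s <= 1.0) / len(samples)
in_one_to_two = sum(1 for s in samples if 1.0 < s <= 2.0) / len(samples)
```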
I've already used these apps and they did not serve my purpose well.
- I forced myself to use uBar, but it has a level of jank that doesn't sit right with me: it is not reliable on a multi-monitor setup, there's no guarantee it'll work after waking from sleep, and maximized windows will sometimes sit behind uBar. boringBar handles all of this better and more reliably.
- Taskbar by Lawand is better than uBar, but it has similar problems with multi-monitor support and wake from sleep. Apart from that, their "start menu" app launcher is still in beta, and you have to download a beta version from the developer's Twitter page to actually use it. And obviously it's a subjective thing, but the boringBar UI is a lot better: it integrates nicely with macOS.
Thank you for mentioning Taskbar (https://lawand.io/taskbar/). The multi-monitor bug was fixed in the recent macOS update, as it was a macOS bug and not a Taskbar bug. Also, the start menu update is almost done and will be out soon.
Thank you for mentioning my app (https://lawand.io/taskbar/). It is still free for the foreseeable future; once the paid version comes out, it will be $25 for a lifetime license, and I will not offer a subscription option.
1. understand weighted least squares and how you can update an initial estimate (prior mean and variance) with a new measurement and its uncertainty (i.e. inverse variance weighted least squares)
2. this works because the true mean hasn't changed between measurements. What if it did?
3. KF uses a model of how the mean changes to predict what it should be now based on the past, including an inflation factor on the uncertainty since predictions aren't perfect
4. after the prediction, it becomes the same problem as (1) except you use the predicted values as the initial estimate
There are some details about the measurement matrix (when your measurement is a linear combination of the true value -- the state) and the Kalman gain, but these all come from the least squares formulation.
Least squares is the key and you can prove it's optimal under certain assumptions (e.g. Bayesian MMSE).
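The four steps above can be sketched in one dimension (a minimal sketch with made-up numbers, not a full Kalman filter): predict with a motion model while inflating the uncertainty, then fuse with a measurement by inverse-variance weighting, where the weight on the measurement is the Kalman gain.

```python
def predict(mean, var, motion, process_var):
    """Step 3: model-based prediction plus uncertainty inflation."""
    return mean + motion, var + process_var

def update(mean, var, z, meas_var):
    """Steps 1 and 4: inverse-variance weighted combination.
    The weight on the measurement is the Kalman gain."""
    gain = var / (var + meas_var)
    new_mean = mean + gain * (z - mean)
    new_var = (1 - gain) * var
    return new_mean, new_var

mean, var = 0.0, 1.0                # prior estimate and its variance
for z in [1.2, 1.9, 3.1]:           # noisy positions, truth moving +1/step
    mean, var = predict(mean, var, motion=1.0, process_var=0.1)
    mean, var = update(mean, var, z, meas_var=0.5)
# var shrinks with each update while mean tracks the motion
```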
I was expecting something about the morphological erosion operator but this was pretty cool.
Some of the techniques here seem to be motivated by physical processes (e.g. rain). I wonder if that could be taken further to derive the whole process?
Criticize gatekeeping all you want, but I feel it’s safer to recommend a Mac or iPhone to an older, non-technical person than the equivalent Windows / Android machine.
And I’m still able to install any app I want with minimal fuss.
Can anyone explain why Mamba models start with a continuous time SSM (and discretize) vs discrete time?
I know the step size isn't fixed, though I'm also not sure why that's important. Is that the only reason? There also seems to be a parameterization advantage with the continuous formulation.
Really curious how they did this mechanically.