Hacker News | frumiousirc's comments

L1 (absolute difference) is useful because minimizing it gives an approximation of minimizing L0 (the nonzero count, aka maximizing sparsity). The reason for the substitution is that L1 has a gradient, so minimization can be fast with conventional gradient-descent methods, while minimizing L0 is a combinatorial problem and solving that is "hard". It is also common to add an L1 term to an L2 term to bias the solution toward sparsity.
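To make the substitution concrete, here is a minimal sketch (the problem sizes and the ISTA solver are my own choices, not from the comment) of proximal gradient descent on 0.5*||Ax-b||^2 + lam*||x||_1; the soft-threshold step drives most coefficients exactly to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a 20-dim vector with only 3 nonzero entries.
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]

A = rng.standard_normal((50, 20))
b = A @ x_true

def ista(A, b, lam, steps=1000):
    """Proximal gradient descent for 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L      # plain gradient step on the L2 term
        # Soft-thresholding: the proximal operator of the L1 term.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

x_hat = ista(A, b, lam=0.5)
print(np.count_nonzero(np.abs(x_hat) > 1e-8))  # only a few coefficients survive
```

A plain gradient step with the same data but no threshold would fill in all 20 entries; the L1 penalty is what produces the exact zeros.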

The article is about comments. But, more generally, I think the issue here is about naming things.

Names capture ideas. Only if we name something can we (or at least I) reason about it. The more clear and descriptive a name for something is, the less cognitive load is required to include the thing in that reasoning.

TFA's example that "weight" is a better variable name than "w" works because "weight" immediately has a meaning, while "w" requires me to carry around the cumbersome "w is weight" whenever I see or think about it.

Function names serve the same purpose as variable names but for operations instead of data.

Of course, with naming, context matters, and defining functions adds lines of code, which adds complexity. As does defining overly verbose variable names: "the_weight_of_the_red_ball" instead of "weight". So some balance that takes the context into account is needed, and perhaps there is some art in finding that balance.

Comments, then, provide a useful intermediate on a spectrum between function-heavy "Uncle Bob" style and function-less "stream of consciousness" style.


The first time you write something, descriptive names are handy, but if you're writing a second or third copy, or trying to combine several back down into one, those names are all baggage and come with a full mental model.

An alternative I've seen work well is names that aren't descriptive on their own, but are unique and memorable, and can be looked up from a dictionary.


I was also wondering about the inherent resolution for the BPM precision claims.

Besides the sample period, the total number of samples matters for frequency resolution (aka BPM precision).

44100 Hz sampling frequency (22.675737 us period) for 216.276 s is 9537772 samples (rounding to nearest integer). This gives frequency samples with a bandsize of 0.0046237213 Hz which is 0.27742328 BPM.

Any claim of a BPM more precise than about 0.3 BPM is "creative interpretation".

And this is a minimum precision. Peaks in real-world spectra have width which further reduces the precision of their location.
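Spelling out that arithmetic (the duration is the one quoted above):

```python
fs = 44100.0             # CD sampling rate, Hz
T = 216.276              # track duration in seconds, as quoted
n = round(fs * T)        # total number of samples
df = fs / n              # DFT bin width, Hz
print(n, df, df * 60)    # 9537772 samples, ~0.0046 Hz, ~0.277 BPM
```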

Edit to add:

https://0x0.st/Pos0.png

This takes my flac rip of the CD and simply uses the full song waveform. This artificially increases frequency precision by a little compared to taking only the time span where beats are occurring.


This is plainly false though. You're saying beats can't be localized to less than one second of precision (regardless of track length, which already smells suspect). Humans can localize a beat to within 50ms.

Yes, I got lost in the numbers and blundered by misinterpreting what frequency resolution means when expressed in BPM rather than Hz.

It is correct to say "0.0046237213 Hz, which is 0.27742328 BPM". My mistake was to interpret 0.27742328 BPM as the limit of frequency resolution in units of BPM. Rather, any measured BPM must be an exact multiple of 0.27742328 BPM.

Thanks for pointing out my mistake!

> (regardless of track length, which already smells suspect)

Frequency resolution being dependent on the number of samples is a very well known property of basic sampling theory and signal analysis.

In fact, one can interpolate the frequency spectrum by zero-padding the time samples. This increases the resolution in an artificial way because it is after all an interpolation. However, a longer song has more natural frequency resolution than a shorter song.
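A small sketch of that interpolation (the tone frequency and rates are invented for illustration): zero-padding a 1 s recording of a 50.3 Hz tone lets the spectral peak land between the original 1 Hz bins:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)              # 1 s of samples -> 1 Hz bins
x = np.sin(2 * np.pi * 50.3 * t)           # a tone between the 50 and 51 Hz bins

# Native resolution: the peak snaps to the nearest 1 Hz bin.
f = np.fft.rfftfreq(len(x), 1 / fs)
peak = f[np.argmax(np.abs(np.fft.rfft(x)))]

# Zero-pad 10x: bins are now 0.1 Hz apart, but this is an
# interpolation of the same spectrum, not new information.
xp = np.concatenate([x, np.zeros(9 * len(x))])
fp = np.fft.rfftfreq(len(xp), 1 / fs)
peak_p = fp[np.argmax(np.abs(np.fft.rfft(xp)))]

print(peak, peak_p)                        # 50.0 vs ~50.3
```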

Note, this frequency resolution is not related to fidelity, which is some messy human-related thing evaluated over a sliding window of shorter duration that I don't pretend to understand.

BTW, the opposite is also possible. You can zero-pad the spectrum as a means of resampling (interpolating) the time domain. This is slower but more spectrally correct than say time-domain linear or cubic interpolation.
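A sketch of that spectral-domain resampling (the sizes are invented): passing a larger `n` to NumPy's `irfft` zero-pads the spectrum, which performs band-limited interpolation of the time samples:

```python
import numpy as np

n, k = 32, 4                                   # 32 samples, upsample 4x
t = np.arange(n)
x = np.cos(2 * np.pi * 3 * t / n)              # a tone sitting exactly on bin 3

# Zero-pad the spectrum (irfft pads when n is larger than the input
# implies) and scale by k to keep the amplitude.
y = np.fft.irfft(np.fft.rfft(x), n=k * n) * k

# The result matches the tone evaluated on the denser grid.
td = np.arange(k * n)
expected = np.cos(2 * np.pi * 3 * td / (k * n))
print(np.max(np.abs(y - expected)))            # ~1e-15
```

Linear or cubic interpolation on the same tone would leave visible error between the original samples; the spectral route reconstructs the band-limited signal essentially exactly.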

These techniques require an FFT and so are somewhat expensive to apply to long signals like an entire song, as I did for the plot. Daft Punk's HBFS takes about 8 seconds on one CPU core with Numpy's FFT.


Well, plants and eyes long predate apes.

Water is most transparent in the middle of the "visible" spectrum (green). It absorbs red and scatters blue. The atmosphere holds a lot of water, as does, of course, the ocean, which was the birthplace of plants and eyeballs.

It would be natural for both plants and eyes to evolve to exploit the fact that there is a green notch in the water transparency curve.

Edit: after scrolling, I find more discussion on this below.


Eyes aren't all equal. Our trichromacy is fairly rare in the world of animals.


Perhaps you are thinking of megahal https://homepage.kranzky.com/megahal/Index.html or if a bit later in the millennium, cobe https://teichman.org/blog/


ah, probably so, looks like there were eggdrop scripts for megahal, thanks!


I'm curious how the cost of performing these CT scans compared to the profit reaped by Haribo while the batteries were selling.


Lumafield sells CT scanners, so these posts serve double duty as advertising for their capabilities. Given how many times their previous posts have been shared I'm sure the ROI is great.


This is basically good marketing content for Lumafield, which sells the CT scanners. The cost to them is almost nothing, just the opportunity cost of doing something else on the tool.


> more examples in that thread

Some supposition: A Fourier amplitude image should show that pattern as peaks at a certain angle/radius location. The exact location may be part of the identification scheme. Running peak finding on the Fourier image and then zeroing out the frequencies in the peak should remove the pattern. Modeling the shape of the peak would allow mimicking the application of a legit SynthID signature.
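I haven't tried it on real SynthID output, but a toy version of the supposition (the image size and the injected pattern are invented) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, (64, 64))            # stand-in for image content
yy, xx = np.mgrid[0:64, 0:64]
pattern = 0.5 * np.cos(2 * np.pi * (5 * xx + 9 * yy) / 64)  # periodic "watermark"
marked = img + pattern

F = np.fft.fft2(marked)
mag = np.abs(F)
mag[0, 0] = 0.0                                  # ignore the DC term
# "Peak finding": locate the strongest bin, then zero it and its
# conjugate-symmetric partner (real images have symmetric spectra).
idx = np.unravel_index(np.argmax(mag), mag.shape)
F[idx] = 0.0
F[(-idx[0]) % 64, (-idx[1]) % 64] = 0.0
cleaned = np.fft.ifft2(F).real

print(np.std(marked - img), np.std(cleaned - img))  # pattern mostly removed
```

A real watermark would presumably spread over several bins, so the peak-finding and zeroing would need to cover a neighborhood rather than a single conjugate pair.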

If anyone tries/tried this already, I'd love to see the results.


> I use wired headphones to study with Anki (AnkiDroid) because I've found most (inexpensive) Bluetooth headphones require a second or two to begin playing.

1-2 seconds is an eon for audio latency, so I suspect something other than the BT headphones themselves is going on. Unless you have particularly bad luck in the headphones you use.

FWIW, I use a variety of cheap and not so cheap BT headphones across multiple devices and apps including AnkiDroid and have not perceived any latency.

If switching to wired removes the latency then it does seem to indicate something in the BT stack of your device. I wonder if you experience the lag when using AnkiDroid + BT on another device.


Thank you. I actually have since switched devices but have not yet tested on the new one. The old device was a flagship phone, the Note 10 Lite, which served me well for four years. I'll test on the S24 Ultra that just replaced it.


The lack of proper indentation (which you noted) in the Python fib() examples was even more apparent. The fact that both AIs you tested failed in the same way is interesting. I've not played with image generation, is this type of failure endemic?


My hunch in that case is that the composition of the image implied left-justified text which overwrote the indentation rule.


