> In saying “every point,” we obviously have to turn a continuous problem into a discrete problem, and the choice of what level of resolution to use is non-trivial. Right now, when I set that resolution to only make a calculation for each two-yard by two-yard square, I can get the map done in a couple of minutes. When I set the resolution at one foot by one foot, the process usually takes over an hour. TL;DR: I can make maps for shots from general areas on a hole pretty quickly, but as I get closer to mapping “every point” on a hole, it gets dramatically more time consuming (halving the grid spacing quadruples the number of squares to compute).
Are you making an assumption that you need to have the same resolution at every point on the course? Maybe there are broad areas of fairway that have similar scores (so lower res is fine) and specific "sharp" areas (you'd probably need finer resolution on the hole-wards side of bunkers vs the side that's farther away from the hole?).
There are lots of techniques in computer graphics for figuring out when you can downsample, and you're going to have lots of opportunities to tune those techniques to the problem of golf course strategy analysis (trees, as you mention, probably cast "shadows" of uncertainty, and those shadows would need better sampling).
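To make that concrete, here's a minimal quadtree-style sketch of variance-driven refinement, which is the sort of downsampling technique you're describing. `expected_strokes` is a hypothetical stand-in for whatever per-point simulation the maps are built from, and `tol`/`min_size` are made-up knobs:

```python
def expected_strokes(x: float, y: float) -> float:
    """Placeholder: run the (expensive) per-point simulation here."""
    raise NotImplementedError

def refine(x0, y0, x1, y1, min_size=1.0, tol=0.05):
    """Subdivide a cell only where the score varies enough to justify
    the extra simulation cost. Coordinates are in feet."""
    corners = [expected_strokes(x, y) for x in (x0, x1) for y in (y0, y1)]
    flat_enough = max(corners) - min(corners) < tol
    too_small = (x1 - x0) <= min_size or (y1 - y0) <= min_size
    if flat_enough or too_small:
        # One representative value covers the whole cell.
        yield (x0, y0, x1, y1, sum(corners) / 4)
        return
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2  # split into four quadrants
    for quad in ((x0, y0, mx, my), (mx, y0, x1, my),
                 (x0, my, mx, y1), (mx, my, x1, y1)):
        yield from refine(*quad, min_size=min_size, tol=tol)

# cells = list(refine(0, 0, 60, 120))  # coarse where flat, fine near trouble
```

(In practice you'd also cache the corner evaluations, since adjacent cells share them.)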
(disclaimer: I golf <1x / year so I don't even know all the words you used in the article)
OP here. You make a very good point. Most of the holes definitely don't need a consistent gradient across every point on the hole. However, some will. The goal is to reveal the underlying nature of the hazards, and this includes contouring. When we get into the really nitty-gritty aspects of contouring's outsized effects on architecturally interesting areas, the main worry I have is that you can't know where to skimp on the resolution until after you've got the result.
I do think, however, that there should be some relationship between resolution and the "stickiness" of the surface (i.e., areas with a higher friction variable, such as heavy rough). In higher-friction areas there will be less movement in rollout. Less rollout means the net effect of the contouring is less significant, which means the resolution is less important, and we can probably save time in these areas.
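A rough sketch of that spacing-vs-friction relationship; the friction values and the linear ramp are made-up placeholders, not calibrated numbers:

```python
# Hypothetical per-surface friction, 0 = frictionless, 1 = maximally sticky.
FRICTION = {"green": 0.1, "fairway": 0.3, "light_rough": 0.6,
            "sand": 0.8, "heavy_rough": 0.9}

def grid_spacing(surface: str, base_ft: float = 1.0,
                 max_ft: float = 6.0) -> float:
    """Finer grid where the ball rolls out a lot (contouring matters),
    coarser where it sticks (contouring is muted)."""
    mu = FRICTION.get(surface, 0.5)
    return base_ft + mu * (max_ft - base_ft)

# grid_spacing("fairway")     -> 2.5 ft squares
# grid_spacing("heavy_rough") -> 5.5 ft squares
```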
I'll really have to think about this. It's a good idea.
> The forced changeover from coal gas to natural gas is largely credited with a reduction of suicide by 40% after it was done.
The mechanism of that reduction could very well be a reduction in depression in the populace, and thus in suicidal ideation, rather than just making the means less handy (or, of course, some combination of the two). Coal gas itself contained a significant fraction of carbon monoxide, and no combustion gas burns perfectly in any case, so UK homes likely had persistent low levels of carbon monoxide nearly all the time, since heating gets used for most of the year.
> Google Talk and Facebook Messenger were XMPP all the way through and worked with vanilla XMPP clients
I remember this; it was great to connect to absolutely every chat platform with bitlbee and pretend that all my chats were just DMs on some IRC server somewhere.
This is a great explanation; Prosody/ejabberd seem to kind of be "everything to everybody" but because they are so general it's hard to know if they're a good fit for any one particular purpose.
Snikket seems to just be a focus or lens on Prosody that answers that question for the mission statement you gave.
It doesn't seem to request location in a modern-enough way in Safari on iOS. It just thinks it can't get the location and suggests that I go open the Maps app.
They said upthread that they had blocked 17.0.0.0/8 ("Apple"), but maybe there are teams inside Apple that are somehow operating services outside of Apple's /8 in the name of Velocity? I kind of doubt it, though, because they don't seem like the kind of company that would allow for that kind of cowboying.
I don't doubt it in the slightest. Every corporate surveillance firm—I mean, third-party CDN in existence ostensibly operates in the name of 'velocity'.
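(As an aside, if anyone wants to check whether a given client address actually falls inside that 17.0.0.0/8 block, it's trivial with Python's standard library:)

```python
import ipaddress

APPLE_BLOCK = ipaddress.ip_network("17.0.0.0/8")

def is_blocked(ip: str) -> bool:
    return ipaddress.ip_address(ip) in APPLE_BLOCK

print(is_blocked("17.253.144.10"))  # True: inside Apple's /8
print(is_blocked("142.250.80.46"))  # False: outside it
```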
The author might not be malicious, but from going through some of the audio packs, they're really not quality-checking PRs. For instance, sc_medic/sounds/WhereDoesItHurt.mp3 sounds like two-and-a-half sounds stuck together ("Critical? You Rang? Please state the nat--", it cuts off right there, and doesn't include the phrase "Where does it hurt?").
I wouldn't use this repo outside of some kind of sandbox.
Plus, the fact that audio/video assets quite often carry RCE zero-days on some of these systems should make anyone immediately suspicious. It isn't hard to generate those assets on your own, in a way you're comfortable with. I would never, ever, ever install this without forking it, swapping in my own assets, and doing my own install, but not everyone is me.
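If you do go the fork-your-own-assets route, one low-effort way to enforce it is to pin a SHA-256 digest for every audio file and verify before each install. A rough sketch (the lockfile name and the `*.mp3` glob are illustrative, not from the actual repo):

```python
import hashlib, json, pathlib, sys

def digest(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(root: str, out: str = "assets.lock.json") -> None:
    """Record a digest for every audio asset in the fork."""
    files = sorted(pathlib.Path(root).rglob("*.mp3"))
    manifest = {str(p): digest(p) for p in files}
    pathlib.Path(out).write_text(json.dumps(manifest, indent=2))

def verify(lockfile: str = "assets.lock.json") -> None:
    """Refuse to proceed if any pinned asset has drifted."""
    manifest = json.loads(pathlib.Path(lockfile).read_text())
    for name, expected in manifest.items():
        if digest(pathlib.Path(name)) != expected:
            sys.exit(f"asset drifted: {name}")
    print("all assets match the pinned digests")
```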
What is "Google Messages"? I can't count the number of articles people have written over time about how many first-party messaging apps Google themselves have put out (and then put down), not to mention what messaging apps get shoveled on by third-party android integrators.
> the main reason a message wouldn't be properly end-to-end encrypted in Google's Messages app is when communicating with an iPhone user, because Apple has dragged their feet on implementing RCS features in iMessage
(or with any other Android user who isn't using a first-party device / isn't using this one app)
> [...] Android's equivalent cloud backup service has been properly end-to-end encrypted by default for many years. Meaning that you don't need to convince the whole world to turn on an optional feature before your backups can be fully protected.
You make it out to seem that it's impossible for Google to read your cloud backups, but the article you link to [0] earlier in your post says that "this passcode-protected key material is encrypted to a Titan security chip *on our datacenter floor*" (emphasis added). So they have your encrypted cloud backup, and the only way to get the key material to decrypt it is to get it from an HSM in their datacenter, every part of which, and all access to which, they control... sounds like it's not really any better than Apple, from what I'm reading here. Granted, that article is from 2018, and I certainly have not been keeping up on Android things.
HSMs are designed to protect encryption keys from everyone, including the manufacturer. Signal trusts them for their encryption features. It's the best security possible for E2EE backups with passcode recovery, and Apple does the same for the subset of data they do real E2EE backups on, like Keychain passwords. Characterizing a secure HSM-based E2EE implementation as "not any better than" just giving up on E2EE for message backups is ridiculous.
The HSMs that Signal and Apple are using are on-device, though. Yes, you still have to trust Signal / Apple not to exfil your key material once the HSM decrypts it, but I submit that that is materially better than having the HSMs hosted in a datacenter.
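For what it's worth, the client-side shape of these designs is roughly the same wherever the HSM sits: a random key encrypts the backup, and only a passcode-wrapped copy of that key ever leaves the device, with the HSM's job being to rate-limit guesses against the wrapped copy. A minimal sketch (PBKDF2 and AES-GCM here are stand-ins for whatever KDF/AEAD the real systems use):

```python
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_backup_key(passcode: str):
    backup_key = AESGCM.generate_key(bit_length=256)  # encrypts the backup data
    salt = os.urandom(16)
    # Passcode-derived key-encryption-key; real designs use a memory-hard KDF.
    kek = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 600_000)
    nonce = os.urandom(12)
    wrapped = AESGCM(kek).encrypt(nonce, backup_key, None)
    # Only (salt, nonce, wrapped) is uploaded; the HSM gates unwrap attempts.
    return backup_key, (salt, nonce, wrapped)
```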
There are multiple examples in the literature of people leading perfectly ordinary lives whilst unknowingly having no more than 5% of the typical amount of brain matter (typically because of hydrocephalus). For example, https://www.science.org/doi/10.1126/science.7434023 from 1980.
The brain is indeed incredibly resilient - some kids with serious epilepsy get an entire hemisphere taken out - but which 5% you're left with matters enormously.