>> Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically, since upper bounds are almost always wrong. Packages declare python<4.0 because they haven't tested on Python 4, not because they'll actually break. The constraint is defensive, not predictive.
>
> Man, it's easy to be fast when you're wrong. But of course it's fast because Rust, not because it just skips the hard parts of dependency constraint solving and hopes people don't notice.
Version bound checking is NP-complete, but becomes tractable if you drop the upper bound constraint. Russ Cox researched version selection in 2016 and described the problem in his "Version SAT" blog post (https://research.swtch.com/version-sat). This research is what informed Go's Minimal Version Selection (https://research.swtch.com/vgo-mvs) for modules.
It appears to me that uv is walking the same path. If most developers don't care about upper bounds and we can avoid expensive algorithms that may never converge, then dropping upper bound support is reasonable. And if uv becomes popular, then it'll be a sign that perhaps Python's ecosystem as a whole will drop package version upper bounds.
Perhaps so, although I'm more algorithmically optimistic. If ignoring upper bounds makes the problem more tractable, you can
1. solve dependency constraints as if upper bounds were absent,
2. check that your solution actually satisfies constraints (O(N), quick and passes almost all the time), and then
3. only if the upper bound constraint check fails, fall back to the slower but complete solver.
This approach would be clever, efficient, and correct. What you don't get to do is just ignore the fucking rules to which another system studiously adheres then claim you're faster than that system.
My groceries are cheaper if I walk out of the store without paying for them too. Who's going to stop me?
While I agree that an optimistic optimization for the upper-bound-pass case makes sense, simply ignoring the bounds isn't correct either.
A common pattern in insurgent software is to violate a specification, demonstrate speedups, and then compare yourself favorably to older software that faithfully implements the spec (however stupid).
I haven't, no. But as far as I can tell from the documentation, it looks more like an alternative to stgit (with a similar lack of history or collaboration support)?
stgit is similar in that it sits with git, but it's not the same workflow. git-spice has branches per feature that base on one-another. It's more git-like than quilt-like.
What you get is git-styled patch-series development. Branches on branches where each branch maintains history of the feature.
A git-spice workflow is compatible with GitHub-style PRs where a PR depends on another PR.
Or pair git-spice with format-patch to produce patches you can share with developers who prefer patch files. Or take patches from someone else, import each patch as a branch, and let git-spice track the stack position.
As far as I understand it, this is a still debated question. One theory is it's about evaporating water: Plausible photomolecular effect leading to water evaporation exceeding the thermal limit (https://www.pnas.org/doi/10.1073/pnas.2312751120).
There are black plants though! And they're studied for the same kind of questions. E.g. The Functional Significance of Black-Pigmented Leaves: Photosynthesis, Photoprotection and Productivity in Ophiopogon planiscapus ‘Nigrescens’ (https://pmc.ncbi.nlm.nih.gov/articles/PMC3691134/)
I believe you're right, that was my conclusion as well. I'm not sure that that will accomplish what they hoped.
To continue my original example, I could, in theory, take this code, ensure that it works with arbitrary independent pseudo-services, create my own such services, under a proprietary licence, and distribute the whole as an aggregate, which is permitted by the GPL.
The author likely seeks to provide commercial licensing for those interested in integrating their pseudo-services as libraries, which would require either that they be GPLd or that the original code be licensed in some other way.
I hope the author achieves the success they hope for without the licensing and legal hell they may have set themselves up for. It can be a great disappointment to have one's work turned into someone else's success by a someone or someones with more legal and licence cunning than one's self.
(Note: that ain't me, I've just seen that exact scenario play out more than a fair few times....)
Yes, people can do that. It's inconvenient and risky, so serious customer prospects will pay to avoid it. This is one of the more common open source commercialization strategies; one of the earlier examples is Sleepycat.
EIRP limits are good at reducing unintentional interference. After all, you probably wouldn't like me pointing a 20-element yagi antenna through your house, denying your ability to use the spectrum in a reasonable manner, just so I could run a point-to-point fixed link.
EIRP limits also minimize regulation. They're a good trade-off compared to operator and installation licensing.
I feel the opposite. Morse code was a major barrier to HF for many different people. Folks who are musically challenged would fail to copy. People with hand disabilities faced major hurdles. Hearing disabilities made it impractical, even though there are plenty of visual-only modes (SSTV, digital TTY modes, etc.).
Removal of morse has allowed far more people to reach the more technical levels of ham radio. And, in my anecdotal experience, dropping the requirement hasn't hurt the popularity of morse code. I'd wager it's more popular than ever because of how many more hams have HF access, compared to limiting access to folks who learned code just to pass a test and then promptly threw away their key.
Morse code is fun! It's not just an emergency mode either. Those mountain-climbing SOTA hams love code because CW radios are so simple that they can cram several bands into an absolutely tiny QRP rig. Collectors practice on their 80-year-old rigs. There's even a club dedicated to making contacts only with straight keys.
Fear not. Code will be with us for decades to come.
And I learned today that NYC has a 50%-off program for low-income transit users ("Fair Fares"). That makes taking a bus or the subway incredibly cheap.
The last federal gas tax increase was 31 years ago (1993). Most states have similarly stalled gas taxation. Both are also facing the issue of electric cars entirely dodging this tax revenue. It's such a large problem that the Highway Trust Fund is projected to be insolvent by 2028.
To be fair, I believe the author is addressing a beginner audience for inexpensive displays commonly found today. These modern small displays have about 7 times more pixels than those older Nokia ones, which means the bandwidth to drive them in monochrome is the same as driving the color Nokia displays.
The advice of "if color, start with SPI" is simple and mostly correct. Good enough for a target audience of engineers just starting with hardware.