Each repo is a history of the ebook, including editorial changes, typo fixes, and the like. Having a single repo containing thousands of ebooks and their histories would be pretty annoying to browse.
Presumably to keep the repo size reasonable. Say I want to make an ad hoc contribution to a book: if step 1 is "download this multi-gigabyte repo", then that's a fairly big hurdle.
I also modified a script I've been using for a few years to patch pylsp so it can now see uv script envs using the "uv sync --dry-run --script <path>" hack.
This sounds like a really useful modification to the LSP for Python. Would you be willing to share more about how you patched it and how you use it in an IDE?
I have a somewhat particular setup where I use conda to manage my envs, and autoenv[0] to ensure the env for a given project is active once I'm in the folder structure. So there's a .env file containing "conda activate <env_name>" in each. I also use Emacs as my sole IDE, but there are quite a few instances where support falls short for modern workflows. I use the pylsp language server, and it's only able to provide completions, etc. for native libraries, since by default it doesn't know how to find the envs containing extra third-party packages.
And so I wrote a patcher[1] that searches the project folder and its parents until it finds an appropriate .env file, and uses it to resolve the path to the project's env. With the latest changes, the patcher can also resolve envs for uv-managed projects: it uses the output of "uv sync --dry-run --script <path>", which contains the path to a standalone script's env, alongside the traditional "source venv_path/bin/activate" pattern.
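For the curious, here's a rough sketch of the resolution logic in Python. To be clear, this is not the actual patcher, just the shape of it: the helper names are mine, and the regex that picks the env path out of the uv output is an assumption, since the exact dry-run output format can vary between uv versions.

```python
import re
import subprocess
from pathlib import Path

def find_env_file(start: Path) -> Path | None:
    # Walk up from the project folder until a .env file is found.
    for folder in (start, *start.parents):
        candidate = folder / ".env"
        if candidate.is_file():
            return candidate
    return None

def resolve_conda_env(env_file: Path) -> str | None:
    # Pull the env name out of a "conda activate <env_name>" line.
    match = re.search(r"conda activate\s+(\S+)", env_file.read_text())
    return match.group(1) if match else None

def resolve_uv_script_env(script: Path) -> str | None:
    # Ask uv about a standalone script's env without actually syncing it.
    result = subprocess.run(
        ["uv", "sync", "--dry-run", "--script", str(script)],
        capture_output=True, text=True,
    )
    # Assumption: the env path appears as the first absolute path in the
    # output; tighten this pattern for your uv version.
    match = re.search(r"(/\S+)", result.stderr + result.stdout)
    return match.group(1) if match else None
```

Once the env path is resolved, it can be handed to pylsp via its jedi environment setting (pylsp.plugins.jedi.environment), which is how the server learns about the third-party packages.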
I would definitely buy one, assuming there was some guarantee on parts availability "for the lifespan of the company up to X decades" or some similarly long time.
I would doubly do so if the design were open-source.
I would triply do so if I could have all my new home appliances this way. I tossed my last fridge and my last washing machine over parts which would have been in the single-digit dollars to manufacture.
If the author is not commercializing this, the design and instructions should be open-sourced (and, ideally, advertised to a Chinese discount company).
Just-in-time compilation of Ruby, allowing you to elide a lot of the overhead of dynamic language features and to execute optimized machine code instead of running in the VM's bytecode interpreter.
For example, unrolling a loop whose iteration count is fixed and small enough. As another example, doing away with some of the dynamic dispatch / method lookup at a call site, or inlining methods - especially handy given Ruby's first-class support for dynamic code generation, execution, and redefinition (monkey patching).
> In particular, YJIT is now able to better handle calls with splats as well as optional parameters, it’s able to compile exception handlers, and it can handle megamorphic call sites and instance variable accesses without falling back to the interpreter.
> We’ve also implemented specialized inlined primitives for certain core method calls such as Integer#!=, String#!=, Kernel#block_given?, Kernel#is_a?, Kernel#instance_of?, Module#===, and more. It also inlines trivial Ruby methods that only return a constant value such as #blank? and specialized #present? from Rails. These can now be used without needing to perform expensive method calls in most cases.
It makes Ruby code faster than the equivalent C code in CRuby, so they are moving toward rewriting a lot of the core Ruby functionality in Ruby itself to take advantage of it. The runtime performance gains make the language much faster.
Same as the benefits of JIT compilers for any dynamic language; makes a lot of things faster without changing your code, by turning hot paths into natively compiled code.
> Throughout the pandemic, the media focused on the idea of the “urban doom loop,” in which remote work would kill downtowns, triggering a downward spiral of reduced services that would cause people to leave cities. What went overlooked has turned out to be the bigger and even more consequential story: the human doom loop, a cycle in which people stop connecting in real life, reducing the quality of in-person activities and the physical realm itself, further discouraging IRL activities, and so on. Nearly five years after the pandemic, it’s not the real estate we need to worry about. It’s us.
> The pattern is clear: The more we go online, the less we show up in person. And the less we show up, the less likely our physical realm will offer experiences that can compete.
Being able to write <Context value={1}> instead of <Context.Provider value={1}> is also a nice, albeit small, QOL change. Feels like they are really honing the API, especially once they kill off useMemo and related hooks with the compiler.
TL;DR: Several new UUID versions have been standardized
UUIDv5 is meant for generating UUIDs from "names" that are drawn from, and unique within, some "namespace" as per Section 6.5.
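Python's stdlib has had this one for ages, which makes the determinism easy to see: same namespace and name in, same UUID out.

```python
import uuid

# UUIDv5 hashes (SHA-1) a namespace UUID together with a name,
# so the same inputs always yield the same UUID.
a = uuid.uuid5(uuid.NAMESPACE_DNS, "python.org")
b = uuid.uuid5(uuid.NAMESPACE_DNS, "python.org")
assert a == b
print(a)  # 886313e1-3b8a-5372-9b90-0c9aee199e5d
```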
UUIDv6 is a field-compatible version of UUIDv1 (Section 5.1), reordered for improved DB locality. It is expected that UUIDv6 will primarily be implemented in contexts where UUIDv1 is used.
UUIDv7 features a time-ordered value field derived from the widely implemented and well-known Unix Epoch timestamp source, the number of milliseconds since midnight 1 Jan 1970 UTC, leap seconds excluded. Generally, UUIDv7 has improved entropy characteristics over UUIDv1 (Section 5.1) or UUIDv6 (Section 5.6).
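The layout is simple enough to sketch by hand in Python. This is a toy illustration of the bit layout, not a production generator; real implementations also guarantee monotonicity for UUIDs generated within the same millisecond.

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    # 48-bit Unix timestamp in milliseconds leads the UUID, so values
    # sort roughly by creation time.
    ts_ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF       # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & (2**62 - 1)  # 62 random bits
    value = (ts_ms & (2**48 - 1)) << 80  # unix_ts_ms
    value |= 0x7 << 76                   # version = 7
    value |= rand_a << 64                # rand_a
    value |= 0b10 << 62                  # variant
    value |= rand_b                      # rand_b
    return uuid.UUID(int=value)
```

Because the most significant 48 bits are the timestamp, lexicographic order approximates insertion order, which is where the DB-locality benefit over UUIDv4 comes from.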
UUIDv8 provides a format for experimental or vendor-specific use cases. The only requirement is that the variant and version bits MUST be set as defined in Sections 4.1 and 4.2. UUIDv8's uniqueness will be implementation specific and MUST NOT be assumed.
The only explicitly defined bits are those of the version and variant fields, leaving 122 bits for implementation-specific UUIDs. To be clear, UUIDv8 is not a replacement for UUIDv4 (Section 5.4) where all 122 extra bits are filled with random data.
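A sketch of what that contract means in practice: stamping the version and variant bits is the only requirement, and the custom_a/custom_b/custom_c split below follows the spec's field layout (the helper itself is mine).

```python
import uuid

def uuid8(custom: int) -> uuid.UUID:
    # Pack 122 implementation-defined bits around the fixed version and
    # variant fields (custom_a: 48 bits, custom_b: 12 bits, custom_c: 62 bits).
    custom &= (1 << 122) - 1
    custom_a = custom >> 74               # top 48 bits
    custom_b = (custom >> 62) & 0xFFF     # next 12 bits
    custom_c = custom & ((1 << 62) - 1)   # low 62 bits
    value = custom_a << 80
    value |= 0x8 << 76                    # version = 8
    value |= custom_b << 64
    value |= 0b10 << 62                   # variant
    value |= custom_c
    return uuid.UUID(int=value)
```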
Background for the changes:
Many things have changed in the time since UUIDs were originally created. Modern applications have a need to create and utilize UUIDs as the primary identifier for a variety of different items in complex computational systems, including but not limited to database keys, file names, machine or system names, and identifiers for event-driven transactions.