you should not ask indie hackers for advice and you should not hang out with them.
If you build a product for marketers, you should hang out with them and ask them for advice, not indie hackers who know nothing about marketing.
If you build a product for bakers, you should hang out with them to understand what they need, not with indie hackers who have never baked anything in their lives.
That sounds logical, but for certain types of products, it is not.
There is no point in talking with indie hackers. It's only useful if you need coding knowledge, which is rarely the case (especially now with AI).
The philosophy of uv is that the venv is ephemeral; creating a new venv should be fast enough that you can do it on demand.
Do you have a standalone script or do you have a project? --script is for standalone scripts. You don’t use it with projects.
If you tell it to run a standalone script, it will construct the venv itself on the fly in $XDG_CACHE_HOME.
If you have a project, then it will look in the .venv/ subdirectory by default and you can change this with the $UV_PROJECT_ENVIRONMENT environment variable. If it doesn’t find an environment where it is expecting to, it will construct one.
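For a concrete picture, here is a minimal standalone script using PEP 723 inline metadata; the file name and dependency are hypothetical, just to illustrate the format:

    # demo.py: a standalone script, not part of any project
    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["requests"]
    # ///
    import requests

    print(requests.get("https://example.com").status_code)

Running `uv run demo.py` (or `uv run --script demo.py` to force script mode) resolves the dependencies and builds the throwaway venv in the cache for you.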
A semicolon connects, whereas an em-dash creates more of a pause and therefore separates. In addition, em-dashes can be used in pairs to create a parenthesis, which semicolons can’t. I think with time you will appreciate the difference.
Dashes surround a sub-clause - something like this - which is like a parenthetical addition to a sentence that could stand alone without it; semi-colons (';') connect a further sentence, or part of one, where a full stop and an additional word could otherwise have been used. They also sometimes separate list items following a colon, especially if the listed items are longer phrases that themselves contain commas which would otherwise be ambiguous.
Em dashes are very similar to semicolons. You use em dashes if your related sentence is in the middle of another sentence, and semicolons if it's at the end.
They're frequently used in skilled and professional grade writing.
So as not to mislead anyone, the parent is mostly incorrect:
Here's an example sentence: Semicolons must have independent clauses—phrases that could form a full sentence on their own—on both sides of them; they are essentially alternatives for periods. Em dashes don't require independent clauses on either side.
In the example sentence above,
* "phrases that could form a full sentence on their own" is not an independent clause but is valid between em dashes. "on both sides of them", after the em dashes, is also not an independent clause. (The em dashes function like commas or parentheses here.)
* The parts before and after the semicolon are independent clauses. You could replace the semicolon with a period and you'd have perfectly valid grammar. I just chose to connect the two sentences a bit more.
I don't know if you can use em dashes as the parent comment describes, connecting three independent clauses:
* My favorite fruit is peaches—they are very sweet—I eat them all summer.
I think the above is wrong; it should be one of the following:
* My favorite fruit is peaches—they are very sweet—and I eat them all summer. (The last section is a dependent clause created by "and", not an independent clause.)
* My favorite fruit is peaches—they are very sweet; I eat them all summer. (On both sides of the semicolon are independent clauses; I could replace the semicolon with a period.)
Maybe there are examples I'm not thinking of? I infer that the rule might be that the punctuation following the em-dashed clause should be the punctuation that would have been used without it, but that's based on very limited evidence.
Many people don't use semicolons (;) in English but many do, and they are certainly part of correct grammar.
Semicolons are generally alternatives to periods, for when you want more connection between the two sentences. Like periods, semicolons must have two full sentences—that is, what could be full sentences—on either side of them; the potential 'full sentences' are properly called independent clauses. (A dependent clause needs the rest of the sentence to form valid grammar; it can't function on its own. For example, in this paragraph's first sentence, "when you want more connection between the two sentences" is a dependent clause. Dependent clauses often follow commas.)
Another use of semicolons is for lists in a paragraph where one of the list items has a comma in it (similar to the parsing problem for CSVs where some records contain commas): I only like wine; beer, but only ales; and orange juice.
Everything can scale if you throw enough servers at it. Of course Shopify scales; they even spent time and money to build a JIT on top of Ruby. As a smaller company, does everyone have the time and money to spend on servers or optimising the language to this extent?
That's the nice thing! You don't need to optimise the language and build a JIT as a smaller company; Shopify already did that for you. Just like Google did for Javascript, which led to Javascript having any performance at all (which led to node being a thing).
Also remember that Shopify didn't start out making billions. They started as a small side project on a far, far slower version of Ruby and Rails.
Same with GitHub, same with many others that are either still on Rails or started there.
You can optimise things later once you actually have customers, know the shape of your problem and where the actual pain points are/what needs to be scaled.
I care a ton about performance (it's an area I work in), but there's not a lot of sense in sacrificing development agility for request speed on things that may not matter or that people won't pay for. Especially when you're small.
Smaller companies have less traffic, need less expensive servers, and have no need to spend money optimising the language. They can focus on that when they make billions of dollars, like Shopify does.
>On Rails, the most heavy page has a P95 duration of 338 ms. There is of course room for improvement but it's plenty snappy.
I guess everyone will have a different opinion on P95 at 338ms. The great thing is that we are getting cheaper CPU core prices and YJIT. As long as this trend continues, the definition of Fast Enough will cover more ground for more people.
There are lots of tricks you can do, such as preloading pages when the user hovers over a link. This makes even a "slow" page load of 400ms feel pretty much instant to a human.
There's a noticeable trend where the general population increasingly favors smartphones over traditional computers for their digital needs. This shift has predominantly affected Windows users, as they represent a significant portion of the casual computing market. As a result, the proportion of dedicated Linux users has become more pronounced.
Keep in mind that standards move slowly, and codec standards more so. The gold standard is still h264/AVC, which dates back to the early 2000s. This is primarily because many appliances (set-top boxes, cameras, phones, TVs) use the absolute cheapest hardware stack they can get their hands on.
Compared to other standards in streaming media, I'd say that AOMedia has found adoption a lot quicker. h265 (HEVC) was all but DOA until, years after its introduction, Apple finally decided to embrace it. It is still by no means ubiquitous, mostly due to patent licensing, which significantly drives up the price of hardware in the single-digit-dollar price range.
Anecdotally, consider that Apple's HTTP Live Streaming protocol (till version 6) relied on MPEG2TS, even though Apple laid the groundwork for ISO/IEC 14496-12 Base Media File Format, aka MP4. The reason was that the chips in the initial iPhones only supported h264 in mpeg2 transport streams, and even mp4 -> mp2 transmuxing was considered too resource-intensive.
> h265 (HEVC) was all but DOA until, years after its introduction, Apple finally decided to embrace it
No? You're talking in terms of PC / phone hardware support only. HEVC was first released on June 7, 2013. The UHD Blu-ray standard was released less than 3 years later, on February 14, 2016 - and it was obvious to everyone in the intervening years that UHD Blu-ray would use HEVC because it needed support for 4K and HDR, both of which HEVC was specifically designed to be compatible with. (Wikipedia says licensing for UHD Blu-ray on the basis of a released spec began in mid 2015.)
>Anecdotally, consider that Apple's HTTP Live Streaming protocol (till version 6) relied on MPEG2TS
This sounds like you might be conflating MPEG2TS with the video encoding, when it is solely the way the video/audio elementary streams are wrapped together into a single container format. The transport stream was designed specifically for an unreliable, streaming type of delivery, versus a steady, consistent source like reading from a dis[c|k]. There is nothing wrong with using a TS stream for HLS that makes it inferior.
> There is nothing wrong with using a TS stream for HLS that makes it inferior.
Not wrong, but a bit surprising. As you mention, transport streams are designed to operate over unreliable connections (like satellite or terrestrial transmission). Reliability is not an issue with HTTP or TCP.
Other than being archaic, some disadvantages are that TS has somewhat more overhead than (f)MP4, and poor ergonomics for random access / on-the-fly repackaging caused by the continuity counter, padding, PAT/PMT resubmission timing, and the PCR clock.
If it were up to me to design a streaming protocol like DASH, HLS, Flash, or SmoothStreaming I'd instantly choose mp4 (or plain elementary streams). I wouldn't even consider TS or PS unless some spec forced me to.
>Reliability is not an issue with HTTP or TCP.
We seem to be confusing what reliable means here. Yes, HTTP/TCP can reliably transmit data, in that if packets are dropped they will be resent, so you can be assured the data will eventually be delivered. However, that doesn't work well for real-time streaming, where data needs to arrive in order and on time. That's why UDP was made.
>If it were up to me to design a streaming protocol like DASH, HLS, Flash, or SmoothStreaming I'd instantly choose mp4 (or plain elementary streams). I wouldn't even consider TS or PS unless some spec forced me to.
Well, it's a good thing we didn't have to wait for you to come around and design a streaming protocol and we've been able to use it for the past ~20 years with the technology that was available at the time. Perfect is the enemy of progress.
What I meant to point out as odd about Roger Pantos and co.'s decision to build HLS on top of Transport Stream containers is that Apple had already laid the foundation for MP4 with QuickTime.
Since HTTP live streaming was never about anything but HTTP, container capabilities like auto-synchronization offered by mpeg2TS were moot. It would therefore seem logical for Apple to build HLS upon what they already had with QuickTime + iso2 fragments. That was more or less the route Adobe/Macromedia had taken with Flash streaming.
Yet, they chose mpeg2TS (initially muxing only AAC and h264). The reason historically seems to have been driven primarily by the capabilities of the iPhone hardware, which supported this out of the box! Separate transport streams for audio and video, WebVTT, and elementary-stream audio were added much later, and fragmented MP4 was introduced only once HEVC was bolted on.
I'm all for favouring what exists over what's perfect; it's just odd that Apple chose to (initially, at least) regress to 90s technology while the rest of the world had already adopted a superior container.
> Compared to other standards in streaming media, I'd say that AOMedia has found adoption a lot quicker. h265 (HEVC) was all but DOA until, years after its introduction, Apple finally decided to embrace it.
Was it all but dead because people thought h264 was good enough, until 2.5K and 4K became more prominent in media consumption? It seems really useful if you're encoding at resolutions higher than 1080p, and it makes me less regretful that I have a bunch of recent hardware that didn't get av1 hardware support :)
> Was it all but dead because people thought h264 was good enough, until 2.5K and 4K became more prominent in media consumption?
(Lack of) a compelling use case certainly played a role. Sure, reducing bandwidth is in itself a noble (and potentially profitable) goal, but why fix it if it ain't broken? I.e., the h264 infrastructure for HD was already there, working fine.
Another factor was that HEVC was full of new patents and the patent pool licensing costs hadn't yet settled.
memcpy is ±99%. The 1% involves bit shifting (NAL unit reshuffling, avc3/avc1 fixups).
So indeed, repackaging mp4 <-> mp2 containers is pretty trivial. Nevertheless, Apple initially chose mpeg2TS because it conveniently allowed them to shove the reassembled media segments straight into the dedicated AV chip.
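To make the "mostly memcpy" point concrete, here is a sketch of such a remux driven from Python; it assumes ffmpeg is on PATH, and the file names are hypothetical:

    import subprocess

    # Repackage an MPEG-TS segment into fragmented MP4 without re-encoding:
    # the h264/AAC payloads are copied as-is, only the container changes.
    subprocess.run(
        ["ffmpeg", "-i", "segment.ts",
         "-c", "copy",  # stream copy, no transcoding
         "-movflags", "frag_keyframe+empty_moov",  # fragmented MP4 output
         "segment.mp4"],
        check=True,
    )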
The H265 patent licensing situation is famously a mess and has been a big barrier to adoption. Except in circles where people worry less about that sort of thing: Warez, China, ...
The licensing shenanigans of H265 were a big motivator for creating AV1, a royalty-free codec.
The person who benefits from a more efficient codec tends to be netflix/youtube (lower bandwidth costs), and they are far far removed from the chipmaker - market forces get very weak at that distance.
People never stopped using VP8. In fact, your screen sharing is probably wasting excessive amounts of CPU every day because there is no hardware support.
AV1 isn't particularly behind schedule compared with previous codec generations. We could and should have moved faster if everything had gone well, but Qualcomm in particular were being awkward about IP issues.
Luckily the effort behind the dav1d software decoder kept up the rollout momentum.
The AV1 ASIC takes up space on the SoC, so effectively it decreases the performance of other parts. This could be why some manufacturers have delayed including support for quite a while. Though Mediatek already had AV1 support three years ago.
Especially when Google and AOM promised a hardware encoder and decoder to be given out for free by 2018, implementation in many SoCs by 2019, and wide availability and AV2 by 2020.
Well, the basic answer is that making an efficient hardware encoder and decoder, within power budget and die space, all while conforming to the standard (because you won't get much chance to correct it), and fitting it into the SoC design cycle, which is and always has been at least three years, is a lot harder than most software engineers at Google and AOM would have thought.
Apple has a tiny sliver of the patents in HEVC, and while we don't have the numbers I feel pretty certain they pay far more into the pool to ship HEVC in their devices than they get out of it. The same is doubly true of Qualcomm who aren't even a part of the pool.
HEVC was finalized in 2013. AV1 was finalized in 2018, and has just finally started getting a robust ecosystem of software and hardware.
It wasn't really all that slow in general, just slow on dedicated streaming hardware.
Basically, it was the push to 4K (and especially HDR) that caused HEVC to roll-out. In 2016 4K Blu-rays started coming out and they were all HEVC 10-bit encoded. It took a couple more years before dedicated streaming devices and lower-end smart TVs bothered to start including HEVC support as standard because at first 4K content was uncommon and the hardware came at a premium.
Now that it's mostly the de-facto standard, we see HEVC support in basically all streaming devices and smart TVs.
AV1 didn't have any sort of resolution change or video standard change to help push it out the way HEVC did, so it's basically rolling out as the parts get cheaper due to pressure from streaming giants like Google and Netflix rather than due to a bottom-up market demand for 4K support.
I didn't know about Blu-Ray being relatively prompt. But I still think HEVC adoption was slow in broadcast TV, which I would have thought was a shoo-in market.
Qualcomm isn't even a part of the HEVC alliance patent pool, so that theory doesn't hold. Indeed, the fact that Qualcomm is currently building AV1 support into their next chip (purportedly) puts them at risk of being sued because while AV1 is open, we all know how patents work and there are almost certainly actionable overlaps with the pool.
Apple probably ships more devices than anyone, and given that the patent pool is huge, as mentioned, odds are overwhelming that it costs them money to support HEVC / HEIC, not the reverse. That theory is also dubious.
Remember when everyone was yammering for VP8 support? Then it was VP9 support? Now it's AV1. Sometimes it takes a while to shake out. By all appearances AV1 is a winner, hence why it's finally getting support.
>Apple probably ships more devices than anyone, and given that the patent pool is huge, as mentioned, odds are overwhelming that it costs them money to support HEVC / HEIC, not the reverse. That theory is also dubious.
This is a nit that doesn't negate your main point: Apple may ship more complete devices than anyone, but Qualcomm holds significantly more of the SoC market share[1], at 29% vs Apple's 19%.
Netflix (and Youtube? I forget) will push an AV1 stream if you have the support. This was even mentioned in Apple's show yesterday. So the egg is already there and the chicken is slowly coming, thankfully.
YouTube was the first to support it. They even went to war with Roku over it: Roku killed the YouTube TV app in retaliation for YouTube's mandate that all next-gen devices support AV1, so YouTube went ahead and embedded it inside the regular YouTube app.
Roku's latest devices do support AV1, so I guess either the price came down, they struck a deal, or Roku just lost to the market pressure after Netflix pushed for AV1 as well.
I think a lot of content creators really want AV1 because of the drastic reduction of file sizes. Streaming companies want it to catch on because of the drastic reduction in bandwidth use.
I thought Google was the main one behind AV1. Couldn't they use their position as one of the world's biggest video platforms to break that chicken-and-egg loop?
They have. They literally threatened to pull their support from devices if they don't implement the codec in hardware. Roku's spat with Google was a big-ish story when that happened.
I don't know how that can be viewed as a good thing.
What I would love to see in a future version of python is being able to do `user["email"]` or `user.email` interchangeably, regardless of the underlying type.
Sometimes both work, sometimes only one of the two, and an error is thrown for the other one. I don't care why, I just want it to work; it's such a basic feature.
Something even crazier would be to have an equivalent of `console.log` in python. It would be an amazing feature, but I think I'm the only one wanting it. I know I can use `print` or a different logger, but it's a lot more complicated to use and the output is a lot less navigable than in javascript. PHP also has `var_dump`, but we don't have any equivalent in python.
> What I would love to see in a future version of python is being able to do `user["email"]` or `user.email` interchangeably, regardless of the underlying type. Sometimes both work, sometimes only one of the two, and an error is thrown for the other one. I don't care why, I just want it to work; it's such a basic feature.
It's an absolutely terrible idea and I'm thankful that there's so little chance it'll ever happen. I don't want random objects to become mappings, nor do I want mapping entries and their attributes to conflict. Javascript is a bad language and this is one of its worst features.
> Something even crazier would be to have an equivalent of `console.log` in python. It would be an amazing feature, but I think I'm the only one wanting it. I know I can use `print` or a different logger, but it's a lot more complicated to use and the output is a lot less navigable than in javascript.
You... can just call `logging.whatever()`, after having called `logging.basicConfig()` to set up your basic config?
I fail to see how that would change anything about navigability. `console.log` is not inherently navigable; it's the browser's console UI which provides some navigability.
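For example, a minimal setup (the logger configuration and the object being dumped are just placeholders):

    import logging

    # one-time setup; DEBUG level so everything is emitted
    logging.basicConfig(level=logging.DEBUG)

    user = {"email": "a@example.com"}
    logging.debug("user=%r", user)  # %r logs the repr() of the object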
In terms of programming-language construction, making `x.y` and `x["y"]` equivalent looks appealing and, admittedly, cute, but there are some problems:
* For new languages: it's not generic enough, since there is no equivalent of `x[t]` if t is of a non-string type. E.g. there is no way to express `x[(1,2,3)]` or `x[3]` or `x[frozenset({1, 2, "foo"})]` this way (see the sketch after this list).
* For existing languages like Python: this would be a breaking change, since things that support `x.y` and things that support `x[t]` are structurally different in Python and are typed differently. One kind is called a "mapping" and the other an "object"; they're completely different things. Hence, you'll get cases where `x["foo"] == 5` but `x.foo == 4`, so this will for sure break some programs. Too much pain for no gain.
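A quick sketch of the non-string-key problem (the keys and values are arbitrary):

    d = {}
    d[(1, 2, 3)] = "tuple key"
    d[3] = "int key"
    d[frozenset({1, 2, "foo"})] = "frozenset key"
    # none of these keys has a possible attribute-style spelling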
I will admit to implementing `__getattr__` and `__setattr__` in such a way that they mimic object properties in dictionaries, for specific cases.
In general, the threshold for doing so should be IMHO fairly high. In my case,
- they are data-heavy classes but not @dataclass classes,
and
- there's enough attribute access that the `["` and `"]` become visually distracting,
and
- there are nested structures, so you can write `x.foo.bar.baz` instead of `x["foo"]["bar"]["baz"]`
This is especially useful, in our case, in a system that intakes a lot of JSON with a LOT of nested dictionaries.
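As a sketch of that pattern, something like the following; the class name is hypothetical, and a production version needs more care (collisions with real dict methods, pickling, and so on):

    class AttrDict(dict):
        """Dict whose string keys can also be read as attributes."""

        def __getattr__(self, name):
            # only called when normal attribute lookup fails
            try:
                return self[name]
            except KeyError:
                raise AttributeError(name) from None

        def __setattr__(self, name, value):
            self[name] = value

    # nested structures: wrap each level so x.foo.bar.baz works
    x = AttrDict(foo=AttrDict(bar=AttrDict(baz=42)))
    assert x.foo.bar.baz == 42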
> Don't you think `x["foo"] == 5` but `x.foo == 4` is a hell of a lot confusing ?
No; having used lots of OO languages before JS and its “objects are dictionaries are arrays and member access and string indexing and integer indexing are all equivalent and can be used interchangeably, except you can't use member access where the key isn't a valid identifier” approach, I find the JS way more confusing and error-prone (though I’ve since had to use JS enough to be proficient in that, too).
Indexing an object as an indexable collection and accessing a member (field, property, or method) of the object are fundamentally different things, and having a collection item with a particular string index isn’t the same thing as an object member with a similar identifier name.
This use of objects as also quasi-associative-arrays is so broken that JS’s actual associative-array type (Map) can’t use indexing notation because of it, and has to use .get() and .set() instead, unlike the associative array types of most other dynamic languages (and several statically-typed OO languages).
The JS way is less bad as a type-specific behavior (e.g., Ruby ActiveSupport’s HashWithIndifferentAccess), though.
> Don't you think `x["foo"] == 5` but `x.foo == 4` is a hell of a lot confusing ?
No. They're different notations; one means `x.__getitem__('foo')` and the other means `x.__getattribute__('foo')`. Why should they be the same? It isn't confusing that `5-4 == 1` but `5+4 == 9`, after all.
If we assume that the dict class was enhanced with your proposed equivalence, would you want `d['items']` to be the function `d.items`? Would that not make 'items' a forbidden key?
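To illustrate the conflict, reusing the hypothetical AttrDict sketch from above: because `__getattr__` only fires when normal lookup fails, the real method wins and the key is effectively shadowed.

    d = AttrDict(items=[1, 2, 3])
    print(d["items"])  # [1, 2, 3]: the stored value
    print(d.items)     # the bound dict method, not the value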
No, there is no confusion here at all (for a Python developer). I would consider it a code smell though as the whole problem is completely avoidable by better naming.
It is error prone. You're simply refusing to see an aberration.
That's how the language works, but it doesn't mean it's intuitive and easy to understand especially for a language known for being easy to use and understand.
It's not how the language "works", it's what the language offers.
When all you have is a hash table, or when all you have is an object, you get to refer to keys and properties uniformly - because they are the same thing. When you have both, you refer to them differently - because they're different. That's it. There are some languages where objects and hash tables use the same syntax for access even though they're different things, but... you probably never used any of them, and certainly none of them is in the Top 20 on TIOBE.
I'll kick the venerable HN guidelines aside for a second and mention this: you're being heavily downvoted, all your comments in this thread are in various shades of gray, and many people offer many different arguments as to why you're wrong. Yet, you're undaunted and continue posting - I don't want to break the guidelines that much, but honestly, it reads like trolling. You don't engage with the arguments, you're just repeating the same thesis over and over again, without citing evidence. Why?
In pandas, for example, that can happen often: `df["count"] == 5` and `df.count == 5` are logically different expressions that will give different answers.
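A tiny self-contained illustration of that footgun:

    import pandas as pd

    df = pd.DataFrame({"count": [5, 7]})
    print(df["count"] == 5)  # element-wise Series of booleans: True, False
    print(df.count == 5)     # False: df.count is the DataFrame.count method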
> PHP also has `var_dump`. But we don't have any equivalent in python.
Maybe pprint is for you:
    from pprint import pprint

    # basic usage; pprint can do more and has extra config options if needed
    pprint({'a': 'b'})
https://docs.python.org/3/library/pprint.html
I think what you are looking for as an equivalent to the JS `console.log`/PHP `var_dump` feature set is available in f-string formats.
There's not one perfect format to rule them all but the "=" "self-documenting" debug format such as `f"{some_obj=}"` probably gives you a good starting point for what you are looking for most of the time. Sometimes you still want the `str()` or `repr()` representation specifically and would want something like `f"{some_obj=!s}"` or `f"{some_obj=!r}"` respectively. In some cases objects you want pretty-printed to a console might have custom `__format__()` representations as well and it might be something like `f"{some_obj=:some-custom-format}"`.
It's obviously all still differently useful than JS having a full object browser embedded in most common consoles today, but there is interesting power to explore in f-string formats if you need quick and dirty logs of object states.
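A quick demonstration of those forms (the object is just a placeholder):

    some_obj = {"a": 1}
    print(f"{some_obj=}")    # some_obj={'a': 1}, the self-documenting form
    print(f"{some_obj=!r}")  # force repr(); same output for a plain dict
    print(f"{some_obj=!s}")  # force str() instead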
If you want to write javascript, use javascript. There are ways to get what you're asking for depending on your use case. types.SimpleNamespace in the standard library provides one approach.
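For instance, a minimal sketch (the attribute name is hypothetical):

    from types import SimpleNamespace

    user = SimpleNamespace(email="a@example.com")
    print(user.email)           # attribute access
    print(vars(user)["email"])  # dict-style access via the instance __dict__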
"There should be one-- and preferably only one --obvious way to do it." (zen of python)
I do agree that python logging is a weak point. It is too easy to do it wrong -- particularly when you are a few modules deep.
It seems so fake to me and so far from the experience I have here in France.