We already have some of the stepping stones for this. But honestly, it's much better for upscaling poor-quality streams; on an already high-quality stream it just gives things a weird feeling.
> My man, Israel had a blockade surrounding Palestine on all sides for years prior.
A blockade that was specifically accounted for in the preceding ceasefire agreement that was in place on Oct 6th.
> David and Goliath
Yet it is David who keeps starting this fight, losing, and then calling Goliath unjust because Goliath's ability to punch back is greater.
> And not ignoring Palestine, which had existed for 12 centuries before the birth of Christ?
Nope, not ignoring. Both groups have a long history in the region, Arabs through colonization centuries ago. Heck, "Palestine" even comes from the Hebrew word for invader (the naming is not connected to the Arabization of Palestine).
The Jewish history in the region became the Palestinian history of the region: the Palestinians are literally the direct descendants of the Israelites of that earlier history. This is per David Ben-Gurion.
That is an elementary understanding of international law.
If, after Oct 7th, Israel had gone and killed a single child in retaliation, that would be unjust. Justification and proportionality are not measured like that.
Justification is established by a valid objective for going to war. Proportionality is measured against the military objectives. The Oct 7th attack clearly justifies the removal of Hamas. The proportionality of doing so depends on the size of Hamas's army (20k-30k), the size of their infrastructure (500 km of tunnels), and their ability to separate their operations and operators from civilians.
That is insanely disingenuous. Rightly calling out a genocide by a country known to commit war crimes and violate human rights, international law, and previous peace deals is not antisemitic.
This is equivalent to you claiming that calling out ethnic cleansing campaigns in Sudan is racist. I hope that makes it clear how ridiculous that sounds.
> one would suspect that the specific characteristics of the human cochlea might be tuned to human speech while still being able to process environmental and animal sounds sufficiently well.
I wonder if these could be used to better master movies and television audio such that the dialogue is easier to hear.
The "AI ecosystem" has its flaws, but this article seems to just describe how it is now and how they want it to be, without a path from here to there.
It's perfectly valid to point something out as a problem. Not every post needs to provide the solution as well. Even raising awareness of the issue is helpful.
Not that it's what should determine the ideal length, but computing power has gone up significantly faster than the number of characters in Unicode (ChatGPT gives me characters ^ 7 = flops).
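For what it's worth, that parenthetical figure is easy to reproduce. A quick back-of-the-envelope sketch, assuming ~150k as a rough count of assigned Unicode characters (my assumption, not from the thread):

```python
# Back-of-the-envelope: plug a ballpark Unicode character count into the
# "characters ^ 7" figure mentioned above. The 150,000 is a rough, assumed
# count of assigned code points in recent Unicode versions.
unicode_chars = 150_000
flops_figure = unicode_chars ** 7

# Format in scientific notation; the result is on the order of 1.7e36.
print(f"{flops_figure:.3e}")
```

So the claim amounts to saying total available compute is somewhere around the seventh power of the character count, which is obviously a very loose heuristic.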
I'm never going back to non-foldable. The ability to have a full-sized phone take up half as much space in my pocket is amazing. It's consistently more comfortable when moving around.
> At this point, it's becoming obvious that it is not profitable to provide model inference, despite Sam Altman recently saying that OpenAI was.
Except the author's own data says it cost them $2B in inference costs to generate $4B in revenue. Yes, training costs push it negative, but this is tech growth 101: take on debt now to grow faster, for larger potential upside in the future.
Training costs keep exploding and several companies are providing frontier models. They'll have to continue shoveling tons of money into training just to stay in place with respect to the competition. So you can't just ignore training costs.
Why not? Training isn't just "data in, data out." The training process is continuously tweaked and adjusted, with many of those adjustments specific to the type of model you are trying to produce.
The US Copyright Office's position is basically this: under US law, copyrightability requires direct human creativity, and an automated training process involves no direct human creativity, so it cannot produce a copyright. Now, we all know there is a lot of creative human effort in selecting what data to use as input, tinkering with hyperparameters, etc., but the Copyright Office's position is that this doesn't legally count: creative human effort in overseeing an automated process doesn't change the fact that the automated process itself doesn't directly involve any human creativity. So the human creativity in model training fails to make the model copyrightable, because it is too indirect.
By contrast, UK copyright law accepts the "mere sweat of the brow" doctrine: the mere fact that you spent money on training is likely sufficient to make its output copyrightable. UK law doesn't impose the same requirement of a direct human creative contribution.