Also, cheaper... X99 + 8x DDR4 + E5-2696 v4 + 4x Tesla P4s running llama.cpp.
Total cost about $500 including case and a 650W PSU, excluding RAM.
Power draw is about 200W typical, 550W at peak (everything slammed, though I've never actually seen that, and I have an AC power meter on the socket).
GLM 4.5 Air (60GB Q3-XL), when properly tuned, runs at 8.5 to 10 tokens/second with a context size of 8K.
Throw in a P100 too and you'll see 11-12.5 t/s (still tuning this one).
Performance doesn't drop off as much with larger model sizes, because internode communication and DDR4-2400 are the limiters, not the GPUs.
I've been using this with 4-channel 96GB RAM, recently upgraded to 128GB.
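If anyone wants to reproduce something similar, a minimal sketch of the setup via the llama-cpp-python bindings looks roughly like this (the filename, offload count, split ratios and thread count are illustrative placeholders, not my exact values):

```python
# Minimal sketch: splitting a big GGUF model across 4x Tesla P4 with llama-cpp-python.
# All numbers here are illustrative; a 60GB model won't fully fit in 4x8GB of VRAM,
# so part of it stays in system RAM and n_gpu_layers has to be tuned per model.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.5-Air-Q3_K_XL.gguf",  # hypothetical filename
    n_gpu_layers=30,                        # partial offload; tune until VRAM is full
    tensor_split=[1, 1, 1, 1],              # spread offloaded tensors across the 4 P4s
    n_ctx=8192,                             # 8K context, matching the numbers above
    n_threads=16,                           # leave headroom for the CPU-resident layers
)

out = llm("Say hello in one short sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```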
> That's kind of irrelevant, this technology is meant to be safer and held to a higher standard
I don't think that is the case. We will judge FSD on whether it causes more or fewer accidents than humans, not necessarily in the same situations. The computer is allowed to make mistakes a human wouldn't, if in return it makes far fewer mistakes in situations where humans would.
Given that >90% of accidents are easily avoidable (speeding, not keeping enough safety distance, drunk/tired driving, distraction due to smartphone usage), I think we will see FSD become safer on average very quickly.
> The computer is allowed to make mistakes a human wouldn't, if in return it makes far fewer mistakes in situations where humans would.
This subverts all of the accumulated experience other road users have about what a car will do. Everyone is used to the potential issues caused by humans; on top of that, other road users will now have to learn the quirks of FSD and keep an eye out for abnormal behaviour.
That's just unrealistic: not only will people have to deal with what other drivers can throw at them (e.g. veering out of lane due to inattention), but they'll also have to be careful around Teslas, which can phantom-brake out of nowhere, fail to avoid debris (shooting it along unpredictable paths), etc.
I don't think we should accept new failure modes on the road for FSD and require everyone else to learn them and stay on alert; it's just a lot more cognitive load...
That's the main advantage self-driving has over humans now.
A self-driving car of today still underperforms a top-of-the-line human driver - but it sure outperforms the "0.1% worst case": the dumbest, most inebriated, sleep-deprived, distracted and reckless driver responsible for the vast majority of severe road accidents.
Statistics show it plain and clear: self-driving cars already get into fewer accidents than humans, and the accidents they do get into are much less severe. Their performance is consistently mediocre. Being unable to drink and drive is a big part of where their safety edge comes from.
The statistics on this are much less clear than Tesla would like us to believe. There are a lot of confounding factors, among them the fact that the autonomous driver can decide to hand things over to a human the moment things get hairy. The subsequent crash then gets credited to human error.
> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.
NHTSA's reporting rules are even more conservative:
> Level 2 ADAS: Entities named in the General Order must report a crash if Level 2 ADAS was in use at any time within 30 seconds of the crash and the crash involved a vulnerable road user being struck or resulted in a fatality, an air bag deployment, or any individual being transported to a hospital for medical treatment.
At highway speeds, "30 seconds" is just shy of an eternity.
Tesla doesn't report crashes that aren't automatically uploaded by the computer. NHTSA has complained about this before. Quoting one of the investigations:
> Gaps in Tesla’s telematic data create uncertainty regarding the actual rate at which vehicles operating with Autopilot engaged are involved in crashes. Tesla is not aware of every crash involving Autopilot even for severe crashes because of gaps in telematic reporting. Tesla receives telematic data from its vehicles, when appropriate cellular connectivity exists and the antenna is not damaged during a crash, that support both crash notification and aggregation of fleet vehicle mileage. Tesla largely receives data for crashes only with pyrotechnic [airbag] deployment, which are a minority of police reported crashes. A review of NHTSA’s 2021 FARS and Crash Report Sampling System (CRSS) finds that only 18 percent of police-reported crashes include airbag deployments.
Even if this were the case, it would still skew the statistics in favor of Tesla: the Autopilot gets to hand off the complicated and dangerous driving conditions to the human, who then has to deal with them. The human, by contrast, cannot do the same - they have to deal with every hard situation as it comes, with no fallback to hand off to.
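To illustrate the attribution problem with deliberately made-up numbers (this is not real data, just a toy model of how crediting hand-off crashes to the human shifts the per-mile rates):

```python
# Toy model of the attribution confounder; every number below is invented
# purely to show the mechanism, not to estimate real crash rates.
autopilot_miles = 1_000_000
human_miles = 1_000_000

crashes_engaged_at_impact = 5   # system still engaged when the crash happened
crashes_after_handoff = 5       # system disengaged seconds before impact
crashes_fully_manual = 10       # ordinary human-driven crashes

# Policy A: hand-off crashes are charged to the automated system.
ap_a = (crashes_engaged_at_impact + crashes_after_handoff) / autopilot_miles
hu_a = crashes_fully_manual / human_miles

# Policy B: hand-off crashes are charged to the human driver.
ap_b = crashes_engaged_at_impact / autopilot_miles
hu_b = (crashes_fully_manual + crashes_after_handoff) / human_miles

print(f"Policy A: autopilot {ap_a:.1e}, human {hu_a:.1e} crashes per mile")
print(f"Policy B: autopilot {ap_b:.1e}, human {hu_b:.1e} crashes per mile")
```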
> That's kind of irrelevant, this technology is meant to be safer and held to a higher standard.
True, but not even relevant to this specific example: the humans clearly saw it and would not have hit it, so we have a very clear example where Tesla is far inferior to humans.
Indeed... You can see the driver reaching for the wheel; presumably he saw it coming and would have hit the brakes. He left the car to do its thing, thinking it knows better than him... maybe.
Personally if the road was empty as here, I'd have steered around it.
This was really the best possible driving conditions - bright day, straight dry road, no other cars around - and still it either failed to see it or chose to run over it rather than steering around it or stopping. Of all the random things that could happen on the road, encountering a bit of debris under ideal driving conditions seems like the sort of thing it should handle better.
And yet Tesla is rolling out robo taxis with issues like this still present.
The irony of this is that Gen-Z have been mollycoddled with praise by their parents and by modern life: we give medals for participation and runner-up prizes for losing. We tell people who have failed at something that they did their best and that's what matters.
We validate their upset feelings if they're insulted by free speech that goes against their beliefs.
This is exactly what is happening with sycophantic LLMs, only to a greater extent, and now it's affecting other generations, not just Gen-Z.
Perhaps it's time to roll back this behaviour in the human population too. No, I'm not talking about reinstating discipline and old Boomer/Gen-X practices; I mean we need to allow more failure and criticism without comfort and positive reinforcement.
Sorry, that was indeed uncalled for. It's just the "kids need to be tough again" narrative I have an issue with, especially coming from conservative Americans right now. We have so much wealth in the Western world; it doesn't have to be survival of the fittest.
I personally feel we should be way more in touch with our emotions especially when it comes to men.
Can somebody please break this down for me so I can see what benefit WB gets from this?
I can mostly only see negatives:
- Winning means taking away the ability for fans to spread viral / subliminal advertising for WB via the art they create.
- Anybody who uses Midjourney commercially to create DC characters etc will be sued, BAU, this might be worth more...
- Midjourney might actually be useful for creatives at WB (mockups etc), so shutting it down isn't in their interest.
- Negative publicity from people who use Midjourney.
The positives
- WB gets a boatload of short term money from Midjourney if they win.
- They exercise copyright enforcement, preventing their characters from becoming a public good (yeah, this one's flaky, but as I said, I couldn't think of many positives).
Until I read this article I didn't properly understand Fourier transforms (I didn't know how image compression bitmaps were derived). Now it's opened up a whole new world - toying with my own compression, and with anything continuous that can be represented as its constituent parts.
I could possibly use it for colour quantisation too, to determine the main and averaged RGB constituents with respect to hue, allowing colour reduction akin to dithering but spreading the error over the wave instead and removing the less frequent elements.
It may not work but it'll be fun trying and learning!
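For anyone curious, the kind of toy experiment I mean looks something like this (a minimal sketch using NumPy's 2D FFT on a synthetic grayscale image; the keep fraction is an arbitrary choice):

```python
# Toy frequency-domain compression: FFT a grayscale image, keep only the
# strongest coefficients, and reconstruct.  Synthetic image, arbitrary keep rate.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # stand-in for a real grayscale bitmap

coeffs = np.fft.fft2(image)           # into the frequency domain
keep = 0.05                           # keep the strongest 5% of coefficients

threshold = np.quantile(np.abs(coeffs), 1 - keep)
compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0)

reconstructed = np.real(np.fft.ifft2(compressed))
error = np.mean(np.abs(image - reconstructed))
print(f"kept {keep:.0%} of coefficients, mean abs error {error:.4f}")
```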
AfterStep looks too much like Stardock's WindowBlinds from around 2000 (see the weird glass effect, font, etc.), but Étoilé seems to nail the aesthetic for me.
I hope this comes back, I'd love to use it on an old netbook I have for accessing my servers remotely.
First, they should move to GitHub or GitLab (or Codeberg) to attract more contributors and make development easier. It could perhaps also be ported to support Wayland and Unicode properly, and some legacy code could be removed to ease maintenance.
So the app you want to run (suppose that's a game) runs in a Docker container? Wouldn't this, just like running the game in a Windows-in-a-VM setup, require an extra GPU? You want your game to be GPU accelerated, but if you pass the GPU through to the Docker container (does that even work?), doesn't your host machine lose the GPU?
I don't think I'd try running anything more than a puzzle game over RDP myself; there are better alternatives for that. As for GPU sharing, there are patches for many consumer GPUs that unlock enterprise features like GPU resource splitting/sharing.
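On the "(does that even work?)" part: with the NVIDIA Container Toolkit installed, a container can use the GPU without the host losing it, since it's ordinary driver-level sharing rather than exclusive passthrough. A rough sketch with the Docker SDK for Python (the image name and options are just examples):

```python
# Minimal sketch: giving a container GPU access via the Docker SDK for Python,
# assuming the NVIDIA Container Toolkit is installed on the host.  The host keeps
# using the GPU as well; this is process-level sharing, not exclusive passthrough.
import docker

client = docker.from_env()
logs = client.containers.run(
    "nvidia/cuda:12.4.1-base-ubuntu22.04",   # example image
    command="nvidia-smi",
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,
)
print(logs.decode())
```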
Of course, not being able to monetise Alexa has always been a problem, but these and the article's issues all come down to poor planning and poor direction from the top of the business.