SillyUsername's comments

Also, cheaper... X99 + 8x DDR4 + 2696V4 + 4x Tesla P4s running on llama.cpp. Total cost about $500 including case and a 650W PSU, excluding RAM. Power draw is about 200W typical and 550W peak (everything slammed, which I've never actually seen; I have an AC monitor on the socket). GLM 4.5 Air (60GB Q3-XL), when properly tuned, runs at 8.5-10 tokens/second with a context size of 8K. Throw in a P100 too and you'll see 11-12.5 t/s (still tuning this one). Performance doesn't drop as much for larger model sizes because inter-GPU communication and the DDR4-2400 are the limiters, not the GPUs. I've been using this with quad-channel 96GB RAM, recently upgraded to 128GB.
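If anyone wants to reproduce something similar, here's roughly what the launch looks like through the llama-cpp-python bindings. The filename, layer count and split ratios below are placeholders for illustration, not my exact tuning values:

    # Minimal sketch with llama-cpp-python (pip install llama-cpp-python, built with CUDA).
    # Model path, n_gpu_layers and tensor_split are illustrative guesses, not exact settings.
    from llama_cpp import Llama

    llm = Llama(
        model_path="GLM-4.5-Air-Q3_K_XL.gguf",  # hypothetical filename for the 60GB Q3-XL quant
        n_ctx=8192,                 # the 8K context mentioned above
        n_gpu_layers=40,            # partial offload; 4x 8GB P4s can't hold the whole model
        tensor_split=[1, 1, 1, 1],  # spread the offloaded weights evenly across the four cards
        n_threads=16,               # feed the rest from the 2696V4 + DDR4
    )

    out = llm("Explain MoE offloading in one paragraph.", max_tokens=128)
    print(out["choices"][0]["text"])

The rest of the layers stay in system RAM, which is why the DDR4 bandwidth ends up being the bottleneck rather than the GPUs.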

> Also, cheaper... X99 + 8x DDR4 + 2696V4 + 4x Tesla P4s running on llama.cpp. Total cost about $500 including case and a 650W PSU, excluding RAM.

Excluding RAM in your pricing is misleading right now.

That's a lot of work and money just to get 10 tokens/sec.


Won't happen. They'll buy the next successful indie game studio, chew through its profits, then tank it, rinse and repeat.


A lot of apologists say that "a human would have hit that".

That's kind of irrelevant; this technology is meant to be safer and held to a higher standard.

Comparing to a human is not a valid excuse...


> That's kind of irrelevant, this technology is meant to be safer and held to a higher standard

I don't think that is the case. We will judge FSD on whether it causes more or fewer accidents than humans, not necessarily in the same situations. The computer is allowed to make mistakes that a human wouldn't, if in return it makes far fewer mistakes in situations where humans would.

Given that >90% of accidents are easily avoidable (speeding, not keeping enough safety distance, drunk/tired driving, distraction due to smartphone usage), I think we will see FSD become safer on average very quickly.


> I don't think that is the case.

It's the standard Tesla set for themselves.

In 2016 Tesla claimed every Tesla car being produced had "the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver": https://web.archive.org/web/20161020091022/https://tesla.com...

Wasn't true then, still isn't true now.


> I think we will see FSD be safer on average very quickly.

This is what Musk has been claiming for almost a decade at this point, and yet here we are.


> The computer is allowed to make mistakes that a human wouldn't, if in reverse the computer makes a lot less mistakes in situations where humans would.

This subverts all the accumulated experience other road users have about what a car will do. Everyone is used to the potential issues caused by humans; on top of that, other road users will have to learn the quirks of FSD and keep an eye out for abnormalities in its behaviour.

That's just unrealistic: not only will people have to deal with what other drivers can throw at them (e.g. veering out of lane due to inattention), but they'll also have to be careful around Teslas, which can phantom brake out of nowhere, fail to avoid debris (sending it on unpredictable paths), etc.

I don't think we should accept new failure modes on the road from FSD, requiring everyone else to learn them and stay on alert; it's just a lot more cognitive load...


That's the main advantage self-driving has over humans now.

A self-driving car of today still underperforms a top-of-the-line human driver - but it sure outperforms the "0.1% worst case": the dumbest, most inebriated, sleep-deprived and distracted reckless driver who is responsible for the vast majority of severe road accidents.

Statistics show it plain and clear: self-driving cars already get into fewer accidents than humans, and the accidents they do get into are much less severe. Their performance is consistently mediocre. Being unable to drink and drive is a big part of where their safety edge comes from.


The statistics on this are much less clear than Tesla would like us to believe. There are a lot of confounding factors, among them the fact that the autonomous driver can decide to hand things over to a human the moment things get hairy. The subsequent crash then gets credited to human error.


That's an often-repeated lie.

Tesla's crash reporting rules:

> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.

NHTSA's reporting rules are even more conservative:

> Level 2 ADAS: Entities named in the General Order must report a crash if Level 2 ADAS was in use at any time within 30 seconds of the crash and the crash involved a vulnerable road user being struck or resulted in a fatality, an air bag deployment, or any individual being transported to a hospital for medical treatment.

At highway speeds, "30 seconds" is just shy of an eternity.


Tesla doesn't report crashes that aren't automatically uploaded by the computer. NHTSA has complained about this before. Quoting one of the investigations:

    Gaps in Tesla’s telematic data create uncertainty regarding the actual rate at which vehicles operating with Autopilot engaged are involved in crashes. Tesla is not aware of every crash involving Autopilot even for severe crashes because of gaps in telematic reporting. Tesla receives telematic data from its vehicles, when appropriate cellular connectivity exists and the antenna is not damaged during a crash, that support both crash notification and aggregation of fleet vehicle mileage. Tesla largely receives data for crashes only with pyrotechnic [airbag] deployment, which are a minority of police reported crashes. A review of NHTSA’s 2021 FARS and Crash Report Sampling System (CRSS) finds that only 18 percent of police-reported crashes include airbag deployments.
From:

https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf


Even if this were the case, it would still skew the statistics in favor of Tesla: Autopilot gets to hand off the complicated and dangerous driving conditions to the human, who then needs to deal with them. The human, conversely, cannot do the same - they have to deal with all hard situations as they come, with no fallback to hand off to.


I don't think the decision should be or will be made based on a single axis.


A human would not have hit that; the two guys saw it coming from a long way off and would have stopped or changed lanes.


> That's kind of irrelevant, this technology is meant to be safer and held to a higher standard.

True, but not even relevant to this specific example. The humans clearly saw it and would not have hit it, so we have a very clear case where Tesla is far inferior to humans.


Indeed... You can see the driver reaching for the wheel; presumably he saw it coming and would have hit the brakes. He left the car to do its thing, thinking it knew better than him... maybe.


> presumably he saw it coming

Not presumably; we know for sure, since they were talking about it for a long time before impact.

The point of the experiment was to let the car drive, so they let it drive and crash, but we know the humans saw it.


Ah, I didn't watch with audio.


Personally, if the road was as empty as it was here, I'd have steered around it.

These were really the best possible driving conditions - bright day, straight dry road, no other cars around - and still it either failed to see the debris, or chose to run over it rather than steering around it or stopping. Of all the random things that could happen on the road, encountering a bit of debris under ideal driving conditions seems like the sort of thing it would handle better.

And yet Tesla is rolling out robo taxis with issues like this still present.


I imagine it might be limited by the number of layers, and you'll get diminishing returns at some point due to network latency.


The irony of this is that Gen-Z have been mollycoddled with praise by their parents and modern life: we give medals for participation and runners-up prizes for losing. We tell people, when they've failed at something, that they did their best and that's what matters. We validate their upset feelings if they're insulted by free speech that goes against their beliefs.

This is exactly what is happening with sycophantic LLMs, to a greater extent, but now it's affecting other generations, not just Gen-Z.

Perhaps it's time to roll back this behaviour in the human population too. No, I'm not talking about reinstating discipline and old Boomer/Gen-X practices; I mean that we need to allow more failure and criticism without comfort and positive reinforcement.


You sound very "old man yelling at cloud". And winner-takes-all is so American.

And no, discrimination against LGBT people etc. under the guise of free speech is not OK.


Well, you're wrong on all counts of the veiled insults.

Also, I never mentioned LGBT; this has nothing to do with it, and it's weird you'd even bring it up.


Sorry, that was indeed uncalled for. It's just the "kids need to be tough again" narrative I have an issue with, especially coming from conservative Americans right now. We have so much wealth in the Western world; it doesn't have to be survival of the fittest.

I personally feel we should be way more in touch with our emotions, especially when it comes to men.


Can somebody please break this down for me so I can see what benefit WB gets from this?

I can only see mostly negatives

- Winning means taking away the ability for fans to spread viral / subliminal advertising for WB via the art they create.

- Anybody who uses Midjourney commercially to create DC characters etc. will be sued, business as usual; this might be worth more...

- Midjourney might actually be useful for creatives at WB (mockups etc.), so shutting it down isn't in their interest.

- Negative publicity from people who use Midjourney.

The positives

- WB gets a boatload of short term money from Midjourney if they win.

- They exercise enforcement of copyright, preventing their characters from becoming a public good (yeah, this one's flaky, but as I said, I couldn't think of many positives).


> Winning means taking away the ability for fans to spread viral / subliminal advertising for WB via the art they create.

Would it? Creating art of copyrighted characters as an individual is already legal. No one can stop me from drawing Superman.

I’m not trying to profit off it, like Midjourney is. Fans can still make their own art.


Amazing.

Until I read this article I didn't properly understand Fourier transforms (I didn't know how compressed image bitmaps were derived); now it's opened up a whole new world: toying with my own compression, and anything continuous that can be represented as its constituent parts.

I could possibly use it for colour quantisation too, to determine the main and averaged RGB constituents with respect to hue, allowing colour reduction akin to dithering, but spreading the error over the wave instead and removing the less frequent elements.

It may not work, but it'll be fun trying and learning!
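Roughly what I mean, as a toy sketch in numpy (the 2% keep-ratio and the synthetic grayscale test image are arbitrary choices of mine, not anything from the article):

    # Toy frequency-domain "compression": FFT the image, drop weak coefficients, invert.
    import numpy as np

    def compress(img: np.ndarray, keep: float = 0.02) -> np.ndarray:
        """Keep only the strongest `keep` fraction of Fourier coefficients."""
        spectrum = np.fft.fft2(img)
        magnitudes = np.abs(spectrum)
        # Magnitude threshold below which coefficients are zeroed out.
        cutoff = np.quantile(magnitudes, 1.0 - keep)
        spectrum[magnitudes < cutoff] = 0
        return np.real(np.fft.ifft2(spectrum))

    # Usage: a synthetic grayscale "image" of smooth waves plus a little noise.
    x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
    img = np.sin(8 * np.pi * x) + np.cos(4 * np.pi * y) + 0.1 * np.random.randn(256, 256)
    approx = compress(img)
    print("reconstruction error:", np.mean((img - approx) ** 2))

Real codecs like JPEG use a blockwise DCT plus quantisation rather than a global FFT, but the "throw away the weak frequencies" idea is the same.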


I miss the days of Sun Solaris' CDE desktop.

AfterStep looks too much like Stardock's WindowBlinds from around 2000 (see the weird glass effect, font, etc.), but Étoilé seems to nail the aesthetic for me.

I hope this comes back; I'd love to use it on an old netbook I have for accessing my servers remotely.


CDE was open sourced a while back: https://sourceforge.net/projects/cdesktopenv/


CDE is included as one of the options in Sparky Linux:

https://sparkylinux.org/cde-common-desktop-environment/

For me, it's a bit broken, though; I can't get the terminal to launch, and without that, you can't do much.

I wrote a comparison of CDE and modern recreation NotSoCDE:

https://www.theregister.com/2022/07/28/battle_of_the_retro_d...

A chap I know led the campaign to open-source it and I wrote about it at the time:

https://www.theregister.com/2012/08/09/cde_goes_opensource/


First, they should move to GitHub or GitLab (or Codeberg) to attract more contributors and make the development process easier. Maybe it could also be ported to support Wayland and Unicode properly, with some legacy code removed to ease maintenance.


It's running RDP to a Winboat docker image hosting the app and rendering the container on the desktop. This includes audio forwarding.


So the app you want to run (suppose it's a game) runs in a Docker container? Wouldn't this, just like running the game in a Windows-in-a-VM setup, require an extra GPU? You want your game to be GPU-accelerated, but if you pass the GPU through to the Docker container (does that even work?), your host machine loses the GPU.


I don't think I'd try running anything more than a puzzle game over RDP myself; there are better alternatives for that. As far as GPU sharing goes, there are patches for many consumer GPUs that unlock enterprise features like GPU resource splitting/sharing.


Amazon also didn't read the room when it fired most of its Alexa staff just as GenAI was taking off.

https://www.cnbc.com/2023/11/17/amazon-cuts-several-hundred-...

Of course, not being able to monetise Alexa has always been a problem, but these and the article's issues all come down to poor planning and poor direction from the top of the business.

