Hacker News | pkroll's comments

Not a shareholder, but on first try, it won't do it because it recognizes Iger's name. And clearly the deal is fresh because it balked at Mickey Mouse too. But it has no trouble with just, "mouse": https://sora.chatgpt.com/p/s_693ae0d25bbc819188f6758fce3f90c...


Ask it to do Steamboat Willie? That's now in the public domain.


Did you test local image-gen software by installing the Python code from a model's GitHub page, which is clearly a LOT for a normal user... or did you look at ComfyUI, which is how most people run local video and image models? There are "just install this" versions, which ease the path for users (though it's still, admittedly, chaos beneath the surface).


Interesting you say that. No, I've tried out Invoke and AUTOMATIC1111/WebUI. I specifically avoided ComfyUI because of my inexperience with all this, and because people described it as a much more advanced system, with manual wiring of the pipeline and so on.


It's likely I'm seeing this from deep inside my ComfyUI bubble. My impression was that AUTOMATIC1111, Forge, and the like were fading, and that ComfyUI was what people ended up on no matter which AI generation framework they started with. But I don't know of any real stats on usage of these programs, so it's entirely possible that AUTOMATIC1111/Forge/InvokeAI have more users than ComfyUI.


Mass shootings in the US run at a little over one a day. School shootings are a subset, and as cudgy says, there have been 13 so far this year.


I think there are actually more school shootings than mass shootings, because "mass" requires ~4 victims and "school" doesn't. There were ~330 school shootings in 2024: 70 people died and about 200 were wounded.

https://www.abc.net.au/news/2024-12-17/us-school-shootings-2...


"NTSC content at 23.976, I'd hope the player would just speed up at that point, but even if not... judder at 120 Hz is better than at 60 Hz."

I'd bet money that when TVs are advertised at 120 FPS, they're really 119.88 FPS, so there's no judder showing 23.976 FPS and the other NTSC (1000/1001) display rates.


Some content truly is exactly 24 FPS or 30 FPS, though, so whichever path the TV takes (i.e. NTSC rate or integer rate), the same problem exists for the other kind of content. I suppose some TVs might have extremely fancy film-mode detection that catches the occasional frame difference, but I doubt mine does :D
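The cadence arithmetic here can be checked exactly with rationals; the rates are the standard NTSC-family ones, the panel behavior is the assumption:

```python
from fractions import Fraction

# NTSC-family rates are integer rates scaled by 1000/1001
film_ntsc = Fraction(24000, 1001)    # "23.976" fps content
panel_ntsc = Fraction(120000, 1001)  # a "120 Hz" panel really running 119.88 Hz
panel_int = Fraction(120)            # a true 120 Hz panel

# Exact integer ratio: every film frame is held for 5 refreshes -> no judder
print(panel_ntsc / film_ntsc)   # 5

# Non-integer ratio: frame hold times must occasionally vary -> judder
print(panel_int / film_ntsc)    # 1001/200, i.e. 5.005
```

So a 119.88 Hz panel divides 23.976 fps content evenly, while a true 120 Hz panel is off by one part in a thousand, exactly the judder the comments describe.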


It's a regular expression, and OS/2 was referred to in many ways, so try something like: "os-2|os\/2|os2" (without the quotes)

I get 136 results from that query.
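As a sketch of what that alternation matches, using Python's `re` (the assumption being that the site's search engine has similar regex semantics, plus case-insensitive matching since the archive mixes cases):

```python
import re

# Alternation covering the common spellings of OS/2
pattern = re.compile(r"os-2|os/2|os2", re.IGNORECASE)

for text in ["IBM OS/2 Warp 4", "an old os2 install", "the OS-2 era", "macOS 26"]:
    print(bool(pattern.search(text)), text)
# True, True, True, False
```

Note that in most regex engines the forward slash needs no escaping; `os\/2` and `os/2` behave the same.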


Well, they sort of do: they keep referring to the 4090 on their GitHub and primary promotional pages (https://wan.video/).

But really, all the various video models want an 80+ GB VRAM card to run comfortably. The contortions the ComfyUI community goes through to get things running at a reasonable speed on current, dinky-sized-VRAM consumer cards are impressive.


As jasonjmcghee says, they're available... but if you go to ollama.com and sort models by "newest," you'll see Mistral (specifically mistral-small3.2 at this writing), because they seem not to sort models by newest update, only by newest "group," or however you'd phrase it. So you need to scroll down to "qwen3" to see that it's been updated.

Slightly frustrating. But good to know.


Yup, that's why I didn't notice! Thanks!


https://arxiv.org/abs/2412.10849

"Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning"


This isn't really applying LLMs to research in novel ways.


I use Markdown Viewer in Chrome; I'd bet there are multiple equivalents in Firefox and Safari. Well, I don't know what Safari's extension universe is like, but it seems likely.


You're not the only one thinking that: https://www.nvidia.com/en-us/project-digits/

128 GB of unified memory. $3K. Throw ollama and ComfyUI on that sucker and things could get interesting. The question is how much slower than a 5090 this is gonna be; the memory bandwidth isn't going to match a 512-bit bus.
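A back-of-envelope comparison: peak memory bandwidth is just bus width times transfer rate. The 5090's 512-bit GDDR7 at 28 GT/s is NVIDIA's published spec; the Digits figures (256-bit LPDDR5X at 8.533 GT/s) are my assumption, unconfirmed at announcement time:

```python
def peak_bw_gb_s(bus_bits: int, gt_per_s: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer * transfers per second."""
    return bus_bits / 8 * gt_per_s

rtx_5090 = peak_bw_gb_s(512, 28.0)   # 1792.0 GB/s
digits = peak_bw_gb_s(256, 8.533)    # ~273 GB/s (assumed spec)
print(rtx_5090, digits, round(rtx_5090 / digits, 1))
```

If those assumptions hold, the 5090 has roughly 6-7x the memory bandwidth, which for memory-bound LLM inference is roughly the speed gap you'd expect.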


It's going to be waaay slower than a 5090. We're looking at something like 60W TDP for the entire system vs 600W for a 5090 GPU.

It's going to be very energy efficient, and it will get plenty of flops, but they won't be able to cheat physics.


AFAIK this uses even slower memory.


And a fraction of the 5090 cores.


I think Digits STARTS AT $3k. We'll see.


It's LPDDR5.


That's actually a good thing: that's how you get a ton of DRAM without it costing a fortune. The M2 Ultra gets GPU-like 800 GB/sec with LPDDR5 by using a very wide bus. From that it follows that if you can design a specialized chip with a ton of memory channels (and a correspondingly wide bus), you can get a respectable 1 TB/sec quite easily with LPDDR5X. In fact, I'm baffled that such devices don't already exist outside Apple's product line. It seems like a rather obvious thing to do, and Apple has the "proof of concept" already. I can think of at least four companies off the top of my head that could do it quite easily, besides Apple.
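The channel arithmetic behind that, as a sketch (an LPDDR5 channel is 16 bits wide; the M2 Ultra's 1024-bit bus is Apple's published figure, the 6.4 GT/s LPDDR5-6400 rate is my assumption):

```python
CHANNEL_BITS = 16  # width of one LPDDR5 channel

def lpddr_bw_gb_s(channels: int, gt_per_s: float) -> float:
    """Peak bandwidth in GB/s across n LPDDR5 channels."""
    return channels * CHANNEL_BITS / 8 * gt_per_s

# M2 Ultra: a 1024-bit bus is 64 channels; at LPDDR5-6400 that's ~819 GB/s,
# the "GPU-like 800 GB/sec" figure
print(lpddr_bw_gb_s(64, 6.4))

# The same 64-channel layout with LPDDR5X-8533 would clear 1 TB/s
print(lpddr_bw_gb_s(64, 8.533))
```

So the "1 TB/sec quite easily" claim comes down to being willing to route 64 memory channels, which is a packaging and board-design problem, not a DRAM-technology one.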

