Obviously this is going to depend on your definition of "decent". My impression so far is that you need between 90GB and 100GB of memory to run medium-sized (31B dense or ~110B MoE) models with some quantization enabled.
I have the same setup, but I tried paperclip ai with it and it seems that either I'm unable to set it up properly or multiple agents struggle with this setup. Especially as it seems that paperclip ai and opencode (used for the connection) are blowing up the context to 20-30k.
Any tips around your setup running this?
I use lmstudio with default settings and prioritization instead of split.
I asked AI for help setting it up. I use 128k context for 31B and 256k context for 26B4A. Ollama worked out of the box for me but I wanted more control with llama.cpp.
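For anyone trying to reproduce the 128k-context setup above with llama.cpp, it maps to a couple of `llama-server` flags. A minimal sketch — the model filename is made up, and `--n-gpu-layers 99` assumes you want all layers offloaded to GPU/unified memory:

```shell
# Serve a quantized model with a 128k context window.
# The GGUF path is hypothetical; point --model at your own file.
llama-server \
  --model ~/models/my-31b-q4_k_m.gguf \
  --ctx-size 131072 \
  --n-gpu-layers 99 \
  --port 8080
```

llama-server exposes an OpenAI-compatible API, so clients like opencode can point at `http://localhost:8080/v1`.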
You sweat because you are working with the CLI. Git is intrinsically "graphical". Use a good GUI client or higher level interface (maybe jj) to manipulate git graphs --- stop worrying about "how" (i.e. wrangling with CLI to achieve what you want) and focus more on "what".
'Stacked PRs' are back on the menu with Claude, because changing something in PR1 isn't a massive time sink to get PR2-5 back in shape, as Claude can usually handle all of that for me.
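The mechanical part Claude automates here is one `git rebase --onto` per dependent branch. A sketch in a throwaway repo, with made-up branch names (`pr1`, `pr2`) standing in for the stack:

```shell
# Demo: amend the bottom of a stack, then restack the branch above it.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > file && git add file && git commit -qm base

git checkout -qb pr1
echo one >> file && git commit -qam pr1

git checkout -qb pr2
echo two >> file && git commit -qam pr2

# Remember where pr2 used to be based, then rewrite pr1.
old_pr1=$(git rev-parse pr1)
git checkout -q pr1
git commit -q --amend -m "pr1 (revised)"

# Replay pr2's commits (old_pr1..pr2) onto the rewritten pr1.
git rebase -q --onto pr1 "$old_pr1" pr2
```

After the rebase, `pr2` sits on the revised `pr1`; with real PRs you would then `git push --force-with-lease` each branch.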
How many developers are using VSCode? How does that number compare with Emacs/Vim?
In many ways, GUI was developed as the natural evolution of TUI. X server, with its client-server architecture, is meant to allow you to interact with remote sessions via "casted" GUI rather than a terminal.
Countless engineers spent many man-hours to develop theories and frameworks for creating GUI for a reason.
>How many developers are using VSCode? How does that number compare with Emacs/Vim?
How many people eat microwave meals? How many eat gourmet Michelin star dishes?
I don't care "how many use VSCode". My argument is that Emacs/Vim have great, well-loved TUIs, and they are used by a huge number of the most respected coders in the industry. Whether a million React jockeys use VSCode doesn't negate this.
>Countless engineers spent many man-hours to develop theories and frameworks for creating GUI for a reason.
Yes, it sells to the masses. Countless food-industry scientists spend many man-hours developing detrimental ultra-processed crap for a reason too.
The analogy mostly makes a point for snobbishness, but otherwise doesn’t really work. Most people would rather eat meals prepped by a Michelin star cook, but they can only afford microwave meals - whereas Emacs/Vim and VSCode are equally accessible to anyone.
I love emacs but would never compare that with a Michelin meal! On the contrary, emacs is the DIY option that lets you experiment with whatever ingredients you please without judging your choices!
> My argument Emacs/Vim have great, well loved TUIs.
They... are not great. They provide the absolute bare minimum of a UI.
A UI, even a terminal one, is more than a couple of boxes with text in them. Unfortunately, genuinely great TUIs more or less died in the 1990s. You can google Turbo Vision for examples.
> How many developers are using VSCode? How does that number compare with Emacs/Vim?
Perhaps I'm in some sort of "TUI bubble", but I'd bet good money that Emacs/Vim users outnumber VSCode users by an order of magnitude. But maybe I'm just surrounded by *nix devs.
I agree except about the TUI coolness factor. There really is a lot that’s appealing about TUIs, I agree on that with the other commenters here. I want a better synthesis than what we have.
I think it's the opposite. Especially considering Codex started out as a web app that offers very little interactivity: you are supposed to drop a request and let it run autonomously in a containerized environment; you can then follow up on it via chat --- no interactive code editing.
Fair, I agree that was true of early Codex, and it matched my perception too. But today two announcements came out, and that's what I'm referring to.
Specifically, the GPT-5.3 post explicitly leans into "interactive collaborator" language and steering mid-execution.
OpenAI post: "Much like a colleague, you can steer and interact with GPT-5.3-Codex while it’s working, without losing context."
OpenAI post: "Instead of waiting for a final output, you can interact in real time—ask questions, discuss approaches, and steer toward the solution"
Claude post: "Claude Opus 4.6 is designed for longer-running, agentic work — planning complex tasks more carefully and executing them with less back-and-forth from the user."
When I tried 5.2 Codex in GitHub Copilot it executed some first steps like searching for the relevant files, then it output the number "2" and stopped the response.
On further prompting it did the next step and terminated early again after printing how it would proceed.
It's most likely just a bug in GitHub Copilot, but it seems weird to me that they add models that clearly don't even work with their agentic harness.
I think those OpenAI announcements are mainly because this hasn’t been the case for them earlier, while it has been part of Claude Code since the beginning.
I don’t think there’s something deeply philosophical in here, especially as Claude Code is pushing stronger for asking more questions recently, introduced functionality to “chat about questions” while they’re asked, etc.
Frankly it seems to me that Codex is playing catch-up with Claude Code while Claude Code just keeps moving further ahead. The thing with Claude Code is it will work longer... if you want it to. It's always had good oversight, and (at least for me) it builds trust slowly until you're wishing it would do more at once. Codex has been getting better, but back in the day it would just do things, say it's done, and leave you sitting there wondering "wtf are you doing?". Claude Code is more the opposite: you can watch as closely as you want, and often you reach a point where you have enough trust and experience with it that you know what it's going to do and don't want to bother.
Greenlanders could vote to be completely independent, yes. That is the situation right now.
However, Trump has done everything to turn Greenlanders away, and nothing to convince them that independence would be good for them, so a vote for independence would likely fail catastrophically right now. Independence is many decades away, as they would really have to build a stronger economy to make it happen, but that is the direction Greenlanders would like to go, at least if you had asked them 2 years ago.
The "independence" of Greenland under Trump would be identical to the "independence" of Venezuela following the US' abduction of its leader & murder of 100 people during the operation. Whatever Greenland's opinion on independence is, what's on offer by Trump would only be worse in every way than what they currently have.
Not in a meaningful way which Greenlanders would submit to. There would be constant unrest and civil disobedience, nothing would function, and bringing in your own people (including the armed forces) to keep things barely working wouldn't be a solution either.
Unfortunately Greenland as a whole has 50,000 people in total, of which 20,000 live in the largest city and the rest are scattered across 19 others.
That's about the size of a small town in the US; the country may be big in territory, but not in population.
It happens all the time. America and the EU are bought and paid for. The funniest part is that they’re being paid for with the very money the buyers plunder with the left hand, only to use the right hand to purchase the treasonous dominant class.
It’s like a sleight-of-hand magic trick pulled on an infant, who is then gleeful at the deception.