Tree-sitter is one of the finer engineering products out there; it enables so much. Thanks to its creator and everyone who has contributed to this project and its many grammars!
Long before Brexit, I was bemoaning the bad effects of direct democracy in California for constitutional amendments that pass with a simple majority. A good amount of the dysfunction in California comes from these sorts of propositions, which cannot be overruled or modified by the legislature. And the public debate about them is quite frequently divorced from their actual content. You still encounter people who think that Prop 13 is about letting grandmas stay in their homes in retirement by sheltering them from any increase in property taxes, but it is a much, much larger handout to commercial real estate and investment properties than it is to grandmas, for example!
Even a slightly higher threshold than majority vote would be good for direct democracy. And constitutional amendments should either have a higher bar, or should automatically expire after X years unless there's a second vote to verify that the change should actually stay in effect.
I tend to vote no on all ballot propositions automatically due to the bad effects of permanent changes being far too easy to make with too little substantive information provided to voters.
The need for nuclear is simply not clear. Storage has advanced so quickly, while nuclear tech has remained stagnant or even gotten more expensive.
Even China, the best nuclear power builder out there, is shifting away from massive nuclear to storage and wind and solar.
Without a major technological innovation in the nuclear power space, I don't see how it can compete, except at the poles and in niches with very poor renewable resources.
Saying that grid storage "only has a few hours of capacity" is like saying that a nuclear power reactor "only has 1GW of power." You solve both issues by deploying more. And if you want a longer lithium ion battery installation without the additional power capacity, you can save a bit on inverters.
Grid storage is cheap enough that Texas, with a purely profit-driven grid, is now overtaking California in the amount of battery storage deployed. 58GWh of new grid storage was added in 2025 alone, and the growth is still exponential: https://seia.org/news/united-states-installs-58-gwh-of-new-e...
All current grid storage will fully discharge in less than 4 hours at max watts. It is designed to level daily demand variability. To make a 4 hour battery last for a week at the same wattage would make it cost 42 times as much.
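The 42x figure is just the ratio of discharge hours; a quick sketch of the arithmetic (it assumes cost scales with energy capacity, i.e. cell count, and ignores the smaller savings from not duplicating inverters and balance-of-system hardware):

```python
# Stretching a 4-hour battery to a week of discharge at the same power
# rating: cost scales roughly with energy capacity (cell count), so the
# ratio of hours approximates the ratio of cost.
hours_per_week = 7 * 24          # 168 hours
current_duration = 4             # hours, typical grid battery today
scale = hours_per_week / current_duration
print(scale)                     # -> 42.0
```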
Yes, this is how the basic arithmetic works. What's your point?
I see now that your original post claimed we need weeks of battery storage, which is a fantastical claim. In reality we will need variable amounts of battery, but a "week long" battery is not supported by a single detailed grid study I have ever seen.
When I have asked people to justify claims of a "weeks long battery", the only justifications have been "I heard it from someone else", or napkin math that contains many errors; and in the places where there are no errors, choices are made to estimate an upper bound rather than a lower bound, indicating that the person doing the calculation doesn't understand how napkin math can be useful.
And for super cheap infrequently used storage, here's a recent purchase at $33/kWh of a 30GWh battery by Google:
I don't expect such batteries to be used much, despite being a fraction of the cost of current LFP batteries, because we really won't need much storage with such a low power:energy ratio.
Also, 5 was a typo, it should have been 4 of course! Form Energy has been making a major splash for years, having first modeled out the grid case for this type of battery, and ensuring that all materials and battery chemistry allowed a potential route to success. It's iron-air.
Incredibly disingenuous for nuclear power proponents to state that grid storage is expensive. Your entire argument centers around the most expensive power generation available and one of the slowest to build.
Right now renewables and storage are cheaper than most new fossil fuel types of generation. The cheapest new fossil fuel generation, gas, is bottlenecked by limited capacity to build new turbines currently.
So if you look at new resources being added to the grid, it's all solar, wind, storage, and a tiny bit of new fossil gas generation.
The biggest impediment to more renewables is no longer cost, it's politics and regulations. We have a president that has torpedoed one of the best new sources of wind, offshore wind, just as it's becoming super economical, and all the rest of the world is going to get the benefit of that cheap energy while the US falls behind. Floating offshore wind in the Pacific, based on the same type of tech as floating oil platforms, could provide a hugely beneficial amount of electricity at night and in winter, to balance out solar with less storage and less overbuilding.
Meanwhile on land, transmission lines are a huge bottleneck towards more solar and wind, and the interconnection queue for the grid is backed up to hell in most places.
The technology and economics are there, but the humans and their bureaucracy are not ready to fully jump on board.
My comment, like the linked article, was focused entirely on the US's situation, which has abundant fossil gas to the point that many frackers burn it as a waste product.
I'd totally agree for UK and continental Europe. The difference between oil and gas is massive on the distribution angle, oil moves easily as long as there's not a naval blockade, but fossil gas requires super super expensive infrastructure either via pipeline or LNG. And with nearly all fossil fuel companies in the last stages of their life, trying to maximize profits on existing capital, it's hard to get investor support to buy infrastructure that costs multiple billions and has limited lifetime. I don't know the details in Europe, but it seems like this phasing out of infrastructure as the transition happens is a major hassle... I'd love any links on that sort of info about Europe.
LNG may be priced internationally to some degree, but local distribution of gas by pipelines drastically changes that equation. It may only be a few dollars per barrel to transport a barrel of oil, but LNG is far higher due to the massive liquefaction costs. As an indication of just how much natural gas is not priced internationally, US Henry Hub is down around $3/MMBTu, while UK NBP prices are around $14/MMBtu, if I did that correctly.
When you say that distribution costs for the UK are much less than in the US, do you mean the cost of distributing natural gas? I'm not following your logic there.
I'm including the costs of fossil fuel extraction in the comparison here; in the US fossil gas is super super cheap which makes it more competitive with solar and storage than in most places.
I think the continuous churn of versions accelerates this disregard for supply chain. I complained a while back that I couldn't even keep a single version of Python around before end-of-life for many of the projects I work on these days. Not being able to get security updates without changing major versions of a language is a bit problematic, and maybe my use cases are far outside the norm.
But it seems that there's a common view that if there's not continually new things to learn in a programming language, that users will abandon it, or something. The same idea seems to have infected many libraries.
Today I accidentally transposed the first two digits on my CC number.
The form programmer had done some super stupid validation that didn't allow me to edit it directly. Every change moved the cursor to the end of the input, and more than 16 characters could not be typed.
Any person who codes that PoS should have their software license revoked and never be allowed in the industry again. Far better to use a plain text input than all the effort used to make users lives hell.
> We took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weights models. Those models recovered much of the same analysis. Eight out of eight models detected Mythos's flagship FreeBSD exploit, including one with only 3.6 billion active parameters costing $0.11 per million tokens.
Impressive, and very valuable work, but isolating the relevant code changes the situation so much that I'm not sure it's much of the same use case.
Being able to dump an entire code base and have the model scan it is the type of situation that opens up vulnerability scans to a much larger class of people.
This is from the first of the caveats that they list:
> Scoped context: Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). A real autonomous discovery pipeline starts from a full codebase with no hints. The models' performance here is an upper bound on what they'd achieve in a fully autonomous scan. That said, a well-designed scaffold naturally produces this kind of scoped context through its targeting and iterative prompting stages, which is exactly what both AISLE's and Anthropic's systems do.
That's why their point is what the subheadline says, that the moat is the system, not the model.
Everybody so far here seems to be misunderstanding the point they are making.
If that's the point they are making, let's see their false positive rate that it produces on the entire codebase.
They measured false negatives on a handful of cases, but that is not enough to hint at the system you suggest. And based on my experiences with $$$ focused eval products that you can buy right now, e.g. greptile, the false positive rate will be so high that it won't be useful to do full codebase scans this way.
How do we know the false positive rate for this "Mythos" thingamabob? Since they didn't release it, and we cannot reproduce it, are we to simply believe their word on this? What if the author of the featured article simply made a claim about that? Do we also simply believe their word? To me these AI tech companies are not any more trustworthy than a random blog author, maybe even less so, due to all the shady stuff they are pulling, and especially since they have not released it. Show it or it didn't happen.
That they were able to use it for security scanning puts the false positive rate at a useable level, inherently.
Maybe they spent more on labor to comb through reports than they did on the hardware costs of discovery, but if so I think we'd be hearing from third parties about how useless those millions in Mythos credits were that they got.
I get what you're saying, but I think this is still missing something pretty critical.
The smaller models can recognize the bug when they're looking right at it, that seems to be verified. And with AISLE's approach you can iteratively feed the models one segment at a time cheaply. But if a bug spans multiple segments, the small model doesn't have the breadth of context to understand those segments in composite.
The advantage of the larger model is that it can retain more context and potentially find bugs that require more code context than one segment at a time.
That said, the bugs showcased in the Mythos paper all seemed to be shallow bugs that start and end in a single input segment, which is why AISLE was able to find them. But having more context in the window theoretically puts less shallow bugs within range for the model.
I think the point they are making, that the model doesn't matter as much as the harness, stands for shallow bugs but not for vulnerability discovery in general.
OK, consider a for loop that goes through your repo, then goes through each file, and then goes through each common vulnerability...
Is Mythos somehow more powerful than just a recursive for loop, aka "agentic" review? You can run `open code run --command` with a tailored command for whatever vulnerabilities you're looking for.
Newer models have larger context windows, and more stable reasoning across larger context windows.
If you point your model directly at the thing you want it to assess, and it doesn't have to gather any additional context, you're not really testing those things at all.
Say you point kimi and opus at some code and give them an agentic looping harness with code review tools. They're going to start digging into the code gathering context by mapping out references and following leads.
If the bug is really shallow, the model is going to get everything it needs to find it right away, neither of them will have any advantage.
If the bug is deeper, requires a lot more code context, Opus is going to be able to hold onto a lot more information, and it's going to be a lot better at reasoning across all that information. That's a test that would actually compare the models directly.
Mythos is just a bigger model with a larger context window and, presumably, better prioritization and stronger attention mechanisms.
Harnesses are basically doing this better than just adding more context. Every time you add context, regardless of model size, you increase the odds that the model will get confused about any given set of thoughts. So context size is no longer some magic you can just sprinkle on these things so that they suddenly don't imagine things.
So, it's the old ML joke: it's just a bunch of if statements. As others are pointing out, it's quite probable that the model isn't the thing doing the heavy lifting; it's the harness feeding the context. And this link shows that small models are just as capable.
Which means: given an appropriately informed senior programmer and a day or two, I posit this is nothing more spectacular than a for loop invoking a smaller, free, local LLM to find the same issues. It doesn't matter what you think about the complexity, because the "agentic" format can create a DAG that will be followable by a small model. All that context you're taking in makes one-shot inspections more probable, but much like how CPUs went from 0 to 5 GHz and then stalled, so too has the value of context.
Agent loops are going to do much the same with small models, mostly because of the context poisoning that happens as context grows: every token you add raises the chance of false positives.
I know you're right that there's a saturation point for context size, but it's not just context size that the larger models have, it's better grounding within that as a result of stronger, more discriminative attention patterns.
I'm not saying you're not going to drive confusion by overloading context, but the number of tokens required to trigger that failure mode in opus is going to be a lot higher than the number for gpt-oss-20b.
I'm pretty sure a model that can run on a cellphone is going to cap out its context window long before Opus or Mythos would hit the point of diminishing returns on context overload. I think using a lower quality model with far fewer / noisier weights and less precise attention is going to drive false positives way before adding context to a SOTA model will.
You can even see here, AISLE had to print a retraction because someone checked their work and found that just pointing gpt-oss-20b at the patched version generated FP consistently: https://x.com/ChaseBrowe32432/status/2041953028027379806
To clarify, I don't necessarily agree with the post or their approach. I just thought folks were misreading it. I also think it adds something useful to the conversation.
> That's why their point is what the subheadline says, that the moat is the system, not the model.
I'm skeptical; they provided a tiny piece of code and a hint to the possible problem, and their system found the bug using a small model.
That is hardly useful, is it? In order to get the same result, they had to know both where the bug is and what the bug is.
All these companies in the business of "reselling tokens, but with a markup" aren't going to last long. The only strategy is "get bought out and cash out before the bubble pops".
You can imagine a pipeline that looks at individual source files or functions. And first "extracts" what is going on. You ask the model:
- "Is the code doing arithmetic in this file/function?"
- "Is the code allocating and freeing memory in this file/function?"
- "Is the code doing X/Y/Z?" etc. etc.
For each question, you design the follow-up vulnerability searchers.
For a function you see doing arithmetic, you ask:
- "Does this code look like integer overflow could take place?"
For memory:
- "Do all the pointers end up being freed?"
_or_
- "Do all pointers only get freed once?"
I think that's the harness part in terms of generating the "bug reports". From there on, you'll need a bunch of tools for the model to interact with the code. I'd imagine you'll want to build a harness/template for the file/code/function to be loaded into, and executed under ASAN.
If you have an agent that thinks it found a bug: "Yes file xyz looks like it could have integer overflow in function abc at line 123, because...", you force another agent to load it in the harness under ASAN and call it. If ASAN reports a bug, great, you can move the bug to the next stage, some sort of taint analysis or reach-ability analysis.
So at this point you're running a pipeline to:
1) Extract "what this code does" at the file, function or even line level.
2) Put code you suspect of being vulnerable in a harness to verify agent output.
3) Put code you confirmed is vulnerable into a queue to perform taint analysis on, to see if it can be reached by attackers.
Traditionally, I guess a fuzzer approached this from 3 -> 2, and there was no "stage 1". Because LLMs "understand" code, you can invert this system and work it up from "understanding", i.e. approach it from the other side. You ask: given this code, is there a bug, and if so, can we reach it? Instead of asking: given this public interface and a bunch of data we can stuff into it, does something happen that we consider exploitable?
That's funny, this is how I've been doing security testing in my code for a while now, minus the 'taint analysis'. Who knew I was ahead of the game. :P
In all seriousness though, it scares me that a lot of security-focused people seemingly haven't learned how LLMs work best for this stuff already.
You should always be breaking your code down into testable chunks, with sets of directions about how to chunk them and what to do with those chunks. Anyone just vaguely gesturing at their entire repo going, "find the security vulns" is not a serious dev/tester; we wouldn't accept that approach in manual secure coding processes/ SSDLCs.
In a large codebase there will still be bugs in how these components interoperate with each other, bugs involving complex chaining of api logic or a temporal element. These are the kind of bugs fuzzers generally struggle at finding. I would be a little freaked out if LLMs started to get good at finding these. Everything I've seen so far seems similar to fuzzer finds.
I think there are already papers and presentations on integrating these kinds of iterative code understanding/verification loops into harnesses. There may be some advantages over fuzzing alone. But I think the cost-benefit analysis is a lot more mixed/complex than Anthropic would like people to believe. Sure, you need human engineers, but it's not like it's insurmountably hard for a non-expert to figure out.
Tunnel vision? If your model can handle big context, why divide into lesser problems to conquer - even if such splitting might be quite trivial and obvious?
It's the difference of "achieve the goal", and "achieve the goal in this one particular way" (leverage large context).
I meant, if the claim here is that small models can accomplish the same things with good scaffolding, why didn’t they demonstrate finding those problem with good scaffolding rather than directly pointing them at the problem?
A lot of people in this thread don't seem to be getting that.
If another model can find the vulnerability if you point it at the right place, it would also find the vulnerability if you scanned each place individually.
People are talking about false positives, but that also doesn't matter. Again, they're not thinking it through.
False positives don't matter, as you can just automatically try and exploit the "exploit" and if it doesn't work, it's a false positive.
Worse, we have no idea how Mythos actually worked, it could have done the process I've outlined above, "found" 1,000s of false positives and just got rid of them by checking them.
The fundamental point is it doesn't matter how the cheap models identified the exploit, it's that they can identify the exploit.
When it turns out the harness is just acting as a glorified for-each brute force, it's not the model being intelligent, it's simply the harness covering more ground. It's millions of monkeys bashing typewriters, not Shakespeare at one.
It’s strange to see this constant “I could do that too, I just don’t want to” response.
Finding an important decades-old vulnerability in OpenBSD is extremely impressive. That’s the sort of thing anyone would be proud to put on their resume. Small models are available for anyone to use. Scaffolding isn’t that hard to build. So why didn’t someone use this technique to find this vulnerability and make some headlines before Anthropic did? Either this technique with small models doesn’t actually work, or it does work but nobody’s out there trying it for some reason. I find the second possibility a lot less plausible than the first.
From the article:
>At AISLE, we've been running a discovery and remediation system against live targets since mid-2025: 15 CVEs in OpenSSL (including 12 out of 12 in a single security release, with bugs dating back 25+ years and a CVSS 9.8 Critical), 5 CVEs in curl, over 180 externally validated CVEs across 30+ projects spanning deep infrastructure, cryptography, middleware, and the application layer.
They have been doing it (and likely others as well), but they are not Anthropic, with a million-dollar marketing budget and a trillion dollars of hype behind it, so you just didn't hear about it.
> If another model can find the vulnerability if you point it at the right place, it would also find the vulnerability if you scanned each place individually.
They didn't just point it at the right place, they pointed it at the right place and gave it hints. That's a huge difference, even for humans.
> That said, a well-designed scaffold naturally produces this kind of scoped context through its targeting and iterative prompting stages, which is exactly what both AISLE's and Anthropic's systems do.
Unless the context they added to get the small model to find it was generated fully by their own scaffold (which I assume it was not, since they'd have bragged about it if it was), either they're admitting theirs isn't well designed, or they're outright lying.
People aren't missing the point, they're saying the point is dishonest.
> Anthropic's own scaffold is described in their technical post: launch a container, prompt the model to scan files, let it hypothesize and test, use ASan as a crash oracle, rank files by attack surface, run validation. That is very close to the kind of system we and others in the field have built, and we've demonstrated it with multiple model families, achieving our best results with models that are not Anthropic's. The value lies in the targeting, the iterative deepening, the validation, the triage, the maintainer trust. The public evidence so far does not suggest that these workflows must be coupled to one specific frontier model.
The argument in the article is that the framework to run and analyze the software being tested is doing most of the work in Anthropic's experiment, and that you can get similar results from other models when used in the same way.
The thing is, with smaller, cheaper models it is very possible to simply take every file in a codebase and prompt the model asking it to find vulnerabilities.
You could even isolate it down to every function and create a harness that provides it a chain of where and how the function is used and repeat this for every single function in a codebase.
For some very large codebases this would be unreasonable, but many of the companies making these larger models do realistically have the compute available to run a model on every single function in most codebases.
You have the harness run this many times per file/function, then find the ones that are consistently/on average flagged as possible vulnerability vectors, and then pass those on to a larger model to inspect deeper and repeat.
Most of the work here wouldn't be the model, it'd be the harness which is part of what the article alludes to.
> it is very possible to simply take every file in a codebase, and prompt it asking for it to find vulnerabilities.
My understanding (based on the Security, Cryptography, Whatever podcast interview[0] -- which, by the way, go listen to it) is that this is actually what Anthropic did with the large model for these findings.
> I wrote a single prompt, which was the same for all of the content management systems, which is, I would like you to audit the security of this codebase. This is a CMS. You have complete access to this Docker container. It is running. Please find a bug. And then I might give a hint. “Please look at this file.” And I’ll give different files each time I invoke it in order to inject some randomness, right? Because the model is gonna do roughly the same time each time you run it. And so if I want to have it be really thorough, instead of just running 100 times on the same project, I’ll run it 100 times, but each time say, “Oh, look at this login file, look at this other thing.” And just enumerate every file in the project basically.
Isn't the difference just harness then? I can write a harness that chunks code into individual functions or groups of functions and then feed it into a vulnerability analysis agent.
It's probably not the 'only' difference, because clearly the models are advancing in capability, but it's likely way more important than generally given credit for.
Jobs is turning in his grave. There are lots of stories of this conflict at NeXT and Mac OS X where there's a quick fix but not via GUI, which was one of the many things that incensed him.
Then there's the OS/400 approach: build TUIs that allow the user to set arguments and then just run the command line tools on submit. It was a really nice blend of the two approaches and made things like man pages somewhat superfluous.
This is every OS. Unless you're telling me Linux users have never had to open a terminal to change something? Or Windows users never need to use PowerShell when installing the OS in order to create a local non-cloud account?
I'm sure there are some great ones, but it was 5-10 years ago when I last read one, and it was fantastic. It's nearly impossible to do a web search for it right now, probably because of Google's bias towards recency. I know it's been linked on Hacker News many times, so maybe somebody else has better info here.
Even if you're not an Apple fan, these sorts of stories are kind of great for learning about product development and companies in general, I think. jwz's stories of Netscape are also phenomenal. (Just don't click on any HN links that go to jwz.org, or you'll have to clear cookies to see any content there in the future. He's not a fan of the exploitation that startups frequently do to their employees and views HN as a primary channel of promoting that exploitation.)
You just reminded me of one of my favorite Jobs / Carmack stories:
I had the privilege of working with John Carmack as a technology evangelist at Apple when he ported Quake III Arena to Rhapsody, Apple’s internal name for the OpenStep/Mach kernel based MacOS X. I enjoyed John's reminiscence about working with Steve and Apple and thought I would share a few of my own memories from that time which provided me with some of the most satisfying moments and lessons of my career.
John was the first game developer I ever worked with. Three weeks after I sent him development hardware (an iMac) he informed me that the PC and Mac versions of Quake III Arena were in “feature parity.” I still recall my shock upon reading that email from him.
John agreed to come to Cupertino and meet with several teams to share his development experiences with them. I picked him up in the lobby of the Fairmont Hotel in downtown San Jose. He stood unassumingly in the lobby, framed in the background by a Christmas Tree.
On day one, we met with several internal teams at Apple. I was accustomed to seeing third-party developers emerge somewhat awed by their meetings with Apple engineers. In John’s case the reaction was reversed. I’ve never seen anyone grok complex systems and architectures so quickly and thoroughly as John. Amusingly, he walked around the Apple campus unrecognized by all but for the occasional, former NeXT employee.
On Day 2, John was to meet with Steve. I never knew whether it was by design or not, but on that day John wore a T-shirt that featured a smiley face with a bullet hole in the forehead from which trickled a few drops of blood. After an hour of waiting for Steve in IL1, he marched into the room, and immediately mistook me for John Carmack, extending his hand to shake mine (we had never met). I locked eyes with Steve Jobs and looked down significantly at the Apple badge on my belt. Without missing a beat, Steve shifted his extended hand to John's.
That’s when Steve noticed the T-shirt and the meeting, as soon as it had begun, took a turn for the worse.
Steve’s jaw muscles visibly tensed and he became stone-faced. Clearly deeply offended by John’s T-shirt, he sat down at the conference table and looked straight ahead, silent.
John kicked off the meeting by saying, “So I’ve been working with MacOS for the past month and here’s what I learned.” His #1 concern (at an extremely high level) concerned OpenGL permissions and security for which he felt Apple needed a better solution than what he’d learned about the day before in meetings with the graphics team, even if it came at a slight cost in performance for 3D games. This was, suffice to say, typical of John in that he was approaching an issue from an objective engineering perspective and arguing for the most technically correct solution rather than pushing for something that might be of benefit to his personal projects.
Steve listened and abruptly said, “That’s not what we’re doing!” Then he looked at the three Apple employees in the room and asked, “Is it?” I confirmed that what John was raising as a concern came from a meeting with the graphics architecture team the day before. Without batting an eye, Steve stood up, tramped over to a Polycom phone and dialed from apparent memory the phone number of the engineering director whose admin informed Steve that he was at an offsite in Palo Alto. Steve hung up, sat down, and about 30 seconds later the phone rang with the engineering director on the line.
Steve said, “I’m here with a graphics developer. I want you to tell him everything we’re doing in MacOS X from a graphics architecture perspective.” Then he put his elbows on the table and adopted a prayer-like hand pose, listening to and weighing the arguments from his trusted director of engineering and from the game guy with the bloody smiley-face T-shirt.
And what happened next was one of the most impressive things I’ve ever witnessed about Steve or any Silicon Valley exec. Early on in the discussion, the Apple engineer realized that “graphics engineer” in the room was John Carmack. And he realized that he was going to need to defend his technical decision, on the merits, in front of Steve. After extended back and forth, the Apple engineer said, “John, what you’re arguing for is the ideal …”
He never made it to the next word because Steve suddenly stood bolt upright, slamming both palms onto the desk and shouting, “NO!!!!”
“NO!!! What John is saying is NOT the ideal. What John is saying is what we have to do!!! Why are we doing this? Why are we going to all this trouble to build this ship when you’re putting a TORPEDO IN ITS HULL?!!!!”
All of this was said with the utmost conviction and at extremely high volume. To his credit, John, seated directly next to a yelling Steve Jobs, didn’t even flinch.
What was so impressive to me in that meeting was not the drama so much as it was that Steve Jobs made a decision on the merits to side with John on a technical issue rather than his longstanding and trusted graphics engineer. He overcame his original distaste for the T-shirt and made the right call. Most CEOs would have dismissed John’s comments or paid them lip service. Steve listened to both sides and made a call that would have long lasting implications for MacOS.
As a comical aftermath to the story, John next told Steve point blank that the iMac mouse “sucked.” Steve sighed and explained that “iMac was for first-time computer buyers and every study showed that if you put more than one button on the mouse, the users ended up staring at the mouse.” John sat expressionless for 2 seconds, then moved on to another topic without comment.
After the meeting ended, I walked John to the Apple store on campus (this was before there were actual Apple stores) and asked him on the way what he thought of Steve’s response to the mouse comment. John replied, “I wanted to ask him what would happen if you put more than one key on a keyboard. But I didn’t.”
Very cool story. Now I’m wondering if this event happened sometime during this section from one of Carmack‘s own posts:
> I was brought in to talk about the needs of games in general, but I made it my mission to get Apple to adopt OpenGL as their 3D graphics API. I had a lot of arguments with Steve.
"John wore a T-shirt that featured a smiley face with a bullet hole in the forehead from which trickled a few drops of blood"
Sounds like a Watchmen Comedian logo t-shirt. It could be construed as a bold choice but was probably just what was on the top of his t-shirt stack that day.
I've lamented some of the decisions Apple has made over the years, one of them being to treat games and people who play them as second-class citizens. Marathon was a very good game but the main reason it was successful is because it was an oasis in the middle of a gaming desert.
I think the point is that it's a (temporary) coalition of the factions that joined together in order to get a leader elected, a leader who is in fact not religious at all and cannot be considered a member of any of the factions. That temporary coalition will fall apart once faction members are given power in various domains and can enact their own faction's preferences, which involve harming other factions.