Are you referring to the author specifically? Or a specific hypocritical person you know? If you're making a general statement about groups of online people you might be falling for the group attribution error[1], where the characteristics of an individual are assumed to be reflective of the whole group.
In any case, two things can be simultaneously true:
1. Writing code is not the bottleneck, as in we can develop features faster than they can be deployed.
2. It's annoying and disruptive to be interrupted when doing work that requires deep focus.
No, because the goomba is the average of two real opinions, while the strawman is a distortion/reduction of an opinion such that it's easy to argue against.
I think the Goomba is distinct. Strawman is disingenuously representing an argument, Goomba is assuming contradictions are coming from the same person, presumably b/c it's coming to the Goomba through the same app.
> Goomba is assuming contradictions are coming from the same person, presumably b/c it's coming to the Goomba through the same app.
It's because it comes from the same political faction. In general, people are open about A when A seems palatable, and open about B when B seems palatable, but they almost never admit to doing that when it's obviously wrong to do so.
That is the rational part of the fallacy: even if these are different sets of people, you can still tell they are biased, since they never appear in the threads where it's obvious they are in the wrong.
For example, let's say in a thread where a white cop shoots a black guy you find a lot of republicans saying "this is just statistics, nothing to see here". Then in another thread where a black cop shoots a white guy, republicans pour in and argue this must be racism and we should investigate! Maybe it isn't the same set of people, but it's still a strong sign of problematic bias that they only choose to speak up in those particular threads and not the others.
Every political side everywhere does this, and that is why people started calling that out.
In general, hypocrisy is a pretty weak argument. It's an annoying personality trait, but consistency is a thing humans often fail at, and humans failing at holding consistent opinions is a failure of those humans, not the claims they're making. It's not quite as weak as the more non-sequitur kind of ad-hominem attack, because it does at least pertain to the argument being made, and kind of resembles a logical contradiction if you squint, but it seldom does a good job addressing the merits of the argument, rather than the arguer. It's a successful political tactic for the same reason ad hominem arguments in general are, of course, especially in the context of representative forms of government, where the person's character or competence is relevant when they're running for an office. Much less so in contexts where the merits of a position are being debated in abstract.
I think it's very silly to make the argument that "groupwise hypocrisy" is not a fallacy in such a conversation. In politics, the reality is that people have to form coalitions with people with whom they don't agree on everything, and non-political groupings are even more nonsensical, often holding people responsible for the opinions of other people who happen to share things like inborn characteristics. It's especially ridiculous to explain this with the idea that people are engaging in some kind of elaborate coordination to argue with you on the internet. Yes, some people, and indeed political parties, engage in that kind of behavior, and if you think you're arguing with something like a botnet, there are larger considerations to make about what you gain as an individual by trying to engage with such a machine at all. If I believe I'm arguing about the merits of an idea with an actual person, and I find myself reaching for something like "your group is collectively hypocritical on this issue" to make my argument, this is cause to reflect on whether I actually have any real arguments for my position, as that one is... well, essentially meaningless.
I think you're trying to invoke what's commonly called a "motte-and-bailey" argument, where people argue for a maximally-defensible position when faced with serious criticism, but act as though they're proving a much less defensible version of their argument, often including a nebula of related ideas, in other contexts. This is something individuals and coordinated factions absolutely do, but again doesn't really support treating any grouping you want to draw of some kind of collective hypocrisy. Even assuming we care about hypocrisy, it seems like this kind of reasoning about nebulous groups that don't explicitly coordinate would allow making that argument about any position in any context, depending on how you draw the boundaries of the group that day. It's well-understood that you can go on the internet and find someone who believes just about any crazy thing you can think of, or find someone who makes the argument for any position poorly.
Is it true that they don't appear in the threads where you feel it's obvious they're in the wrong? Or do they just get upvoted less in those threads so you don't see it when they do appear?
This is exactly it. You see it on HN all the time. You will debate someone. Then deep in the thread, a second person appears with a gotcha. When you point out that the gotcha doesn't fit with the prior argument, they point out that it was a separate person. They knew damn well what they were doing with their little conniving deflection fuck-fuck game. They're acting for the same surrogate argument. The Goomba is real, and the people playing the game are just too cowardly to be two-faced themselves, so they act two-faced through a surrogate and deflect to the surrogate when it's pointed out.
Sometimes there are two groups of people who have different opinions and don't interact, but given the extent to which they take up the same platform and don't seem to see each other, I'm not sure it is really a fallacy even then.
First, it becomes possible for people who have a double standard to hide behind this. One can try to track an individual's stance, but a lot of internet etiquette seems to be based on the idea of not looking up a person's history to see if they are being contradictory. (And while being hypocritical doesn't necessarily invalidate an argument, it can help indicate when someone is arguing in bad faith and it is a waste of time, as they will simply use different axioms to reach otherwise contradictory conclusions, whichever they favor at the moment.)
Second, I think it's still possible to call out a group as being hypocritical, even when there are two subgroups. That one group generally supports A and another generally supports B (assuming A + B together is hypocritical), but each stops voicing support when it would bring them into conflict, indicates a level of acceptance through the change in behavior. This is too hard to measure for any individual (maybe they are tired today, or distracted, or didn't even see it), but as a group, we can still measure the overall direction.
So if a website ends up being very vocally in support of two contradictory positions, I think there is still a valid argument to be made about contradicting opinions, and the goomba fallacy is itself a fallacy.
Edit: Removed example, might be too distracting to bring up an otherwise off topic issue as an example.
I believe in A, I don't take a strong position on B, I am in coalition with people who believe in B and don't take a strong position on A, we both believe in C, D, E, and F, which some other people believe in with differing weights. Browbeating me about position B (or, the most useless kind of Internet banter, complaining about me and my hypocritical position on A+B to your friends who oppose both in a likewise contradictory way, in some venue I've never heard of) is not about making people reevaluate positions, it's about negative factionalism. The only reason it might not fit the familiar categorization of "fallacy" is that you would never use it in rational debate, either in arguing with another person or in reasoning out your own position.
>I believe in A, I don't take a strong position on B
But if A and B are opposed, then there is a question of why a strong position on A can be allowed with a weak position on B, if the reason for the strong position on A would also indicate a strong position against B.
The underlying argument being implied (but rarely stated directly) is to question whether your reason for the strong position on A is really the reason you state, or just the reason that sounds good rather than the real reason for your belief.
In effect, that you don't apply the stated reason to B despite it fitting is the counterargument to why it doesn't actually support A.
If there is an inconsistency in arguments being applied, any formal discussion falls apart and people effectively take up positions simply because they like them, contradictions irrelevant. This generally isn't a good outcome for public discourse.
>Writing code is not the bottleneck, as in we can develop features faster than they can be deployed.
That's an organizational issue due to over-regulation, bureaucracy, too many stakeholders each with their own irrelevant opinion, etc.
Startups or FOSS projects without the above absolutely can't "develop features faster than they can be deployed", and usually have a huge backlog of bugs and features they'd like to have, but never got around to.
I get much better results the more thought I put into crafting my prompt, including using LLMs to help create that prompt. There's definitely a declining rate of return on that time, but thinking about the problem and carefully describing the context can take fairly deep thought. I do think it's in shorter bursts than when doing all of the work myself, but I get that same feeling of 'bah, where was I?' if I get interrupted while creating the prompt for a more complex feature. On the other hand, I spend a lot less time in flow state while debugging - it's way easier to describe a bug to an LLM (often I can just paste in the exception or link to the error log).
I'm using my ADHD hyper focus skills while flagging issues in what the LLM is doing.
Reading 10x more code than before puts me straight into the zone. (In a language that I find interesting: Elixir)
My own process is improving so much that I had only one bug last week, and that was fixed immediately after the error tracker caught it.
But yeah, I feel more tired sooner. So it's one to three hyper focus zones per day, just like before.
The difference is that I enter faster, and now I'm not afraid of leaving the task and resuming later, since I can just ask for a summary of what we did so far.
I'm using two different models from two different providers to cross check the work tho.
I'm very good with bad smells I guess, after years supervising less experienced developers since early days in my long career.
It isn't exactly flow, but when the prompt's result comes back it forces me to think. Flow is about getting into a state where I'm thinking, so this is surprisingly similar. The result is helpful because it gives me a place to focus: do the proposed changes make sense (this is much smaller than the entire code base), and, given this is done, do I know of anything else that was missed?
So you move from a maker’s schedule to a manager’s schedule. Interrupting you does not have any meaningful consequence on your ability to work because at any given moment you are not really working and when your interruption is over the prompt is still just waiting there for you.
What I do is, I'm always responding to output N while the AI is working on prompt N+1. So we are both always responding to each other's question/answer before last.
Many people today just trust whatever shit comes out. Some even brag about it, even famous devs like Yegge.
And requiring review of the result is not a "flow state". Flow state means continuous and uninterrupted focus while actively performing; LLMs block and return with new code or questions after minutes on end. That's the opposite of flow, it's a "let's take a break now, see you in a few minutes" for every interaction.
“Even if you stopped climate change today, New Orleans’s days are still numbered,” he added. “It will be surrounded by open water, and you can’t keep an island situated below sea level afloat. There’s no amount of money that can do that.”
Type 1 is often an island situated below sea level.
For instance https://en.wikipedia.org/wiki/Flevopolder . Island. Surrounded by open water because that's actually a good idea. Below sea level. 400 000 inhabitants. 2 cities, major agriculture, minor airport.
Ever wanted to grab dinner on the sea floor? Visit Almere Center. Though lots of people find it to be a bit boring in person.
Want the same sort of thing in the US? Consider dropping the Jones act. Right now it's illegal to bring the equipment that builds these things into the US.
The Jones Act doesn't prohibit anything about bringing ships into the US to construct things. The closest I can come to a reason for thinking that is that it allows injured sailors to sue for damages. Maybe that equipment leads to a huge number of injuries?
So a crane like this one https://www.youtube.com/watch?v=yvicq-kvVbw ; it picks things up and sets them back down. In US waters? Verboten ("nee meneer, helaas verboden", that is, "no sir, unfortunately forbidden", in this case). Sure, there are workarounds with barges sometimes, but it gets silly.
The Jones Act, and more specifically the Dredge Act even: you're moving stuff inside US territorial waters.
In both cases it's not (or barely) made in the US, and you can't hire the big crews from elsewhere. There's no competition, and this has resulted in no incentive to learn, keep up, or even try.
"""
“New Orleans is not going to disappear in 10 years or anything like that, but policymakers really should’ve thought about a relocation plan a century ago,” said Dixon
"""
People have seen this coming for a long time. Here's a classic article about the channelization of the Mississippi by John McPhee from 1987: https://news.ycombinator.com/item?id=20636254
My main use case is emacsclient and vterm as a terminal multiplexer, in place of something like tmux or screen.
But even locally I use vterm. A terminal is just text, why wouldn't I manipulate it with emacs? At any time you can switch to `copy-mode` and it behaves like a read-only text buffer that you can manipulate as you please.
Pull-based streaming can work with WebRTC. I implemented it for my custom IP camera NVR solution. I just open N streams on the client, and when one is deactivated (typically by scrolling it out of the viewport), the client sends an unsubscribe message over a separate control channel and the server just stops sending video until they resubscribe.
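The core of that control flow is simple enough to sketch. Here's a rough TypeScript illustration; the class name and message shapes are my invention for illustration, not the parent's actual protocol:

```typescript
// Sketch of viewport-driven pull-based streaming control.
// Message shapes and names are hypothetical, not a real protocol.

type ControlMessage =
  | { type: "subscribe"; streamId: string }
  | { type: "unsubscribe"; streamId: string };

class StreamController {
  private active = new Set<string>();

  // Called when a tile scrolls in/out of the viewport
  // (e.g. from an IntersectionObserver callback). Returns the
  // control message to send, or null if nothing changed.
  onVisibilityChange(streamId: string, visible: boolean): ControlMessage | null {
    if (visible && !this.active.has(streamId)) {
      this.active.add(streamId);
      return { type: "subscribe", streamId };
    }
    if (!visible && this.active.has(streamId)) {
      this.active.delete(streamId);
      return { type: "unsubscribe", streamId };
    }
    return null; // no state change, nothing to send
  }
}
```

In a browser you would wire `onVisibilityChange` to an `IntersectionObserver` and serialize the returned message onto the control channel (a data channel or a separate WebSocket); the server side just starts or stops forwarding packets for that stream ID.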
I'm currently switching to a QUIC-based solution for other reasons, mainly that WebRTC is a giant black box which provides very limited control[1], yet requires deep understanding of its implementation[2], and I'm tired[3].
I looked at moq-lite but decided against it for some reason. I think because I have <5 clients and don't need the fanout. The auth strategy is very different than what I currently use too.
[1] Why is firefox now picking that (wrong) ice candidate?
[2] rtp, ice, sdp, etc
[3] WebRTC isn't bad for the video conferencing use case, but anything else is a pain
I wouldn't say I'm done evaluating it, and as a spare-time project, my NVR's needs are pretty simple at present.
But WebCodecs is just really straightforward. It's hard to find anything to complain about.
If you have an IP camera sitting around, you can run a quick WebSocket+WebCodecs example I threw together: <https://github.com/scottlamb/retina> (try `cargo run --package client webcodecs ...`). For one of my cameras, it gives me <160ms glass-to-glass latency, [1] with most of that being the IP camera's encoder. Because WebCodecs doesn't supply a particular jitter buffer implementation, you can just not have one at all if you want to prioritize liveness, and that's what my example does. A welcome change from using MSE.
Skipping the jitter buffer also made me realize with one of my cameras, I had a weird pattern where up to six frames would pile up in the decode queue until a key frame and then start over, which without a jitter buffer is hard to miss at 10 fps. It turns out that even though this camera's H.264 encoder never reorders frames, they hadn't bothered to say that in their VUI bitstream restrictions, so the decoder had to introduce additional latency just in case. I added some logic to "fix" the VUI and now its live stream is more responsive too. So the problem I had wasn't MSE's fault exactly, but MSE made it hard to understand because all the buffering was a black box.
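For anyone curious what "no jitter buffer at all" looks like, the liveness-first path is roughly the following. This is only a sketch under assumptions (codec string, one access unit per WebSocket message, stream starting on a key frame), not the linked example's actual code:

```typescript
// Sketch: WebCodecs with no jitter buffer, prioritizing liveness.
// Assumptions: server sends one H.264 access unit per binary
// WebSocket message and starts on a key frame; codec string is
// a guess for a typical camera profile.
function startLiveDecode(canvas: HTMLCanvasElement, ws: WebSocket): void {
  const ctx = canvas.getContext("2d")!;
  const decoder = new VideoDecoder({
    output: (frame) => {
      // Paint immediately and release the frame: no queueing,
      // so glass-to-glass latency is just decode + paint time.
      ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
      frame.close();
    },
    error: (e) => console.error("decode error", e),
  });
  // optimizeForLatency asks the decoder not to hold frames back.
  decoder.configure({ codec: "avc1.640028", optimizeForLatency: true });

  ws.binaryType = "arraybuffer";
  let first = true;
  ws.onmessage = (ev) => {
    decoder.decode(
      new EncodedVideoChunk({
        type: first ? "key" : "delta",
        timestamp: performance.now() * 1000, // microseconds
        data: ev.data as ArrayBuffer,
      })
    );
    first = false;
  };
}
```

The point of the design is that WebCodecs hands you decoded frames one at a time and leaves buffering policy entirely to you, which is exactly what MSE hides.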
What was the WebRTC bug? Would love to help! I saw at work that Firefox doesn't properly implement [0]; I wanted to go fix it after FFmpeg + WHEP.
If you are still struggling with WebRTC problems would love to help. Pion has a Discord and https://webrtcforthecurious.com helps a bit to understand the underlying stuff, makes it easier to debug.
You can convert any push-based protocol into a pull-based one with a custom protocol to toggle sources on/off. But it's a non-standard solution, and soon enough you have to control the entire stack.
The goal of MoQ is to split WebRTC into 3-4 standard layers for reusability. You can use QUIC for networking, moq-lite/moq-transport for pub/sub, hang/msf for media, etc. Or don't! The composability depends on your use case.
And yeah lemme know if you want some help/advice on your QUIC-based solution. Join the discord and DM @kixelated.
> I could use AI and have a working solution an hour later.
That sounds really cool. You should share what you used.
> The goal was not to become a Bluetooth archaeologist. The goal was to solve the problem.
I'm sympathetic to this view. It seems very pragmatic. After all, the reason we write software is not to move characters around a repo, but to solve problems, right?
But here's my concern. Like a lot of people, I started programming to solve little problems my friends and I had. Stuff like manipulating game map files and scripting FTP servers. That led me to a career that's meant building big systems that people depend on.
If everything bite-sized and self-contained is automated with llms, are people still going to make the jump to be able to build and maintain larger things?
To use your example of the BLE battery monitor, the AI built some automation on top of bluez, a 20+ year-old project representing thousands of hours of labor. If AI can replace 100% of programming, no-big-deal it can maintain bluez going forward, but what if it can't? In that case we've failed to nurture the cognitive skills we need to maintain the world we've built.
It has also led me to a career in software development.
I find myself chatting through architectural problems with ChatGPT as I drive (using voice mode). I've continued to learn that way. I don't bother learning little things that I know won't do much for me, but I still do deep research and prototyping (which I can do 5x faster now) using AI as a supplement. I still provide AI significant guidance on the architecture/language/etc of what I want built, and that has come from my 20+ years in software.
This is the project I was talking about. I prefer using codex day-to-day.
And? I didn't say anything about "incitement"; I said "actual death/violence threats", because I meant actual threats of violence, up to and including death, which are what's tweeted in the most commonly seen examples given on Hacker News (besides the aforementioned "also not upheld" cases that the commenter I was replying to tried to use to justify when Americans get arrested for tweets).
> The classical cultural example is the Luddites, a social movement that failed so utterly
Maybe not the best example? The Luddites were skilled weavers who had their livelihoods destroyed by automation. The government deployed 12,000 troops against the Luddites, executed dozens after show trials, and made machine breaking a capital offense.
I caught that too. The piece is otherwise good imo, but "the luddites were wrong" is wrong. In fact, later in the piece the author essentially agrees – the proposals for UBI and other policies that would support workers (or ex-workers) through any AI-driven transition are an acknowledgement that yes, the new machines will destroy people's livelihoods and that, yes, this is bad, and that yes, the industrialists, the government and the people should care. The luddites were making exactly that case.
> while it’s true that textile experts did suffer from the advent of mechanical weaving, their loss was far outweighed by the gains the rest of the human race received from being able to afford more than two shirts over the average lifespan
I hope the author has enough self awareness to recognize that "this is good for the long term of humanity" is cold comfort when you're begging on the street or the government has murdered you, and that he's closer to being part of the begging class than the "long term of humanity" class (by temporal logistics if not also by economic reality).
> We should hate/destroy this technology because it will cause significant short term harm, in exchange for great long term gains.
Rather
> We should acknowledge that this technology will cause significant short term harm if we don't act to mitigate it. How can we act to do that, while still obtaining the great long term gains from it?
TFR doesn't account for mortality which has also continuously fallen since then. If you're not adjusting for that, then you're looking at meaningless decontextualized numbers. Obviously if people want a certain number of children and the children keep dying then they're going to need to give birth more to get the right number of children. Birthing is not a useful measure on its own because pre-adulthood dead children lead to the same impact on population growth as no children in the first place.
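As a toy illustration of that adjustment (the numbers here are made up, not historical data):

```typescript
// Toy illustration with made-up numbers: the TFR needed to reach a
// target number of *surviving* children rises with child mortality,
// so raw birth counts alone are decontextualized.
function requiredTFR(targetSurvivingChildren: number, childMortality: number): number {
  // Each birth survives to adulthood with probability (1 - mortality),
  // so expected survivors = births * (1 - mortality).
  return targetSurvivingChildren / (1 - childMortality);
}

// With 30% pre-adulthood mortality, two surviving children require a
// TFR of about 2.86; with 1% mortality, only about 2.02.
```

Same target family size, very different birth rates, which is why comparing raw TFR across eras with different mortality tells you little by itself.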
[1] https://en.wikipedia.org/wiki/Group_attribution_error