Subjecting every real contributor to the "AI guardian" would be unfair, and shadow banning is ineffective when you're dealing with a large number of drive-by nuisances rather than a small number of dedicated trolls. Public humiliation is actually a great solution here.
> Subjecting every real contributor to the "AI guardian" would be unfair
Had my first experience with an "AI guardian" when I submitted a PR to fix a niche issue with a library. It ended up suggesting that I do things a different way which would have to involve setting a field on a struct before the struct existed (which is why I didn't do that in the first place!)
Definitely soured me on the library itself and also submitting PRs on github.
I suspect people are doing it to pad their resume with "projects contributed to" rather than to troll the maintainers, so if they're paying any attention they probably do care...
What you say is, of course, the only relevant issue.
I can attest to my own experiences on both sides of this situation: running a small business that is being inundated by job seekers who are sending AI-written letters and resumes, and dealing with larger companies that have excess capacity to throw at work orders but an inability to understand detail, AND, AND!, my own fucking need to survive in this mess, which is forcing me to dismiss certain niceties and adherence to "professional" (ha!) norms.
So while the inundation from people from India (and not only there) is sometimes irritating, I have also wrangled with some of them personally, and under all of that is generally just another human trying to get by as best they can, so....
You could easily guard against bullshit issues, so you can focus on what matters. If the issue is legit, it goes ahead to a human reviewer. If it's a run-of-the-mill, low-quality or irrelevant AI issue, just close it. Or even nicer: for false positives, let the person who opened the issue "argue" with the AI to further explain that it is a legit issue.
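A minimal sketch of what that triage step could look like, assuming the anthropic Python SDK; the prompt wording, model alias, and the "legit"/"slop" labels are illustrative assumptions, not any real project's setup:

```python
# Illustrative triage flow: classify an incoming issue, auto-close obvious
# slop, and pass everything else to a human reviewer. Prompt wording,
# model alias, and labels are assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def triage(title: str, body: str) -> str:
    """Return 'legit' or 'slop' for an incoming issue."""
    msg = client.messages.create(
        model="claude-3-5-haiku-latest",  # example model alias
        max_tokens=5,
        messages=[{
            "role": "user",
            "content": (
                "Answer with exactly one word, legit or slop. "
                "Is the following bug report a genuine, actionable issue?\n\n"
                f"Title: {title}\n\n{body}"
            ),
        }],
    )
    return msg.content[0].text.strip().lower()

verdict = triage("Crash on startup", "Stack trace attached ...")
if verdict == "legit":
    print("forward to a human reviewer")
else:
    # The reporter can reply to "argue" their case, which re-runs the triage.
    print("close politely with an explanation")
```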
I relate, and then realized that's been the basis of spam handling for decades now. It's depressing, and we aren't putting this genie back in the bottle unfortunately.
It's not happening because I'm not using an AI to summarize text. At the moment slop text is also fairly easy to recognise, so I can just ignore it instead.
I can't think of anything scarier than a military planner making life or death decisions with a non-empathetic sycophantic AI. "You're absolutely right!"
If a CEO receives 1,000 resumes per month, will it even matter?
Imagine, as a CEO, you receive emails from juniors wanting to work for your company. You might not even know the role, so why would you waste time checking these CVs/emails that distract you from your goals? They are usually low quality and spammy, and any CEO will quickly learn to ignore them or forward them to HR to blacklist these people. These are the same people that, once they get a job, will email the CEO for a raise.
As a CEO, you hire HR to deal with that noise and only give you the top 3 after HR and others have spent their time filtering. If the CEO does the filtering himself, the whole setup is pointless.
Imagine it for a tech role: the good devs would never email the CEO; the crap and entitled ones will. It's definitely the kind of candidate you want to avoid.
Rocket Internet is often engaged in arbitrage where they bring an existing company’s idea to a different country or context. This is very different from the flood-the-zone astroturfing discussed above. Zalando is its own company employing ~5k engineers. This isn’t a copycat I would claim.
But maybe you’re referring to practices I am not aware of.
I worked for them: what they do is quuuiiiittteee different from this "app-cloning approach":
Rocket Internet copies business models and adapts them to other countries. They bring in their own people if they invest, if required for the startup, but usually the companies are mainly built by the original founders, with support from Rocket Internet on different layers (like legal).
Also, Rocket invests money. Sometimes... a lot!
I worked for them and even interviewed to be founder of one of their spin-offs. GP’s comment has nothing to do with Rocket’s model. Zero.
And that’s coming from someone that despises Rocket for what they did to my workplace, the parent company, all teams I knew, and all colleagues I met from other projects.
App mills don’t respect anything at all, Rocket at least is Lawful Evil.
What’s the value of knowing 7283828*7282828 when you have a computer next to you? What’s the value of knowing something when an AI can do it in seconds? Maybe we need to realize that most knowledge is cheap now and deal with it.
School is about being taught things and being able to use those ideas to solve problems that demand that understanding you’ve learned. It isn’t about arbitrary computation or regurgitating information. It is about learning to think critically.
You can try Docker Compose with Watchtower. Then you just deploy to a branch: dev, prod. On the server side, a counterpart script fetches updates from git; if anything changed, it runs docker compose, which builds your image and puts it live.
It worked well for me for a few years.
Problems: when you have issues, you need to dig into the Portainer logs to see why it failed.
That’s one big problem; I’d prefer something like Jenkins to build it instead.
And if you have more groups of docker compose files, you just add another sh script to do this polling in the main infrastructure git repo, which on a git change will spawn new git watchers.
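A rough sketch of that fetch-and-rebuild loop in Python rather than a shell script; the repo path, branch name, compose file location, and poll interval are all assumptions:

```python
# Poll the deployment branch and rebuild with docker compose when it changes.
# Repo path, branch, and interval are placeholders for illustration.
import subprocess
import time

REPO = "/srv/app"    # assumed checkout location on the server
BRANCH = "prod"      # assumed deployment branch

def rev(ref: str) -> str:
    """Return the commit hash that a ref currently points to."""
    out = subprocess.run(
        ["git", "-C", REPO, "rev-parse", ref],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

while True:
    subprocess.run(["git", "-C", REPO, "fetch", "origin", BRANCH], check=True)
    if rev("HEAD") != rev(f"origin/{BRANCH}"):
        # New commits on the deployment branch: update the checkout and rebuild.
        subprocess.run(["git", "-C", REPO, "reset", "--hard", f"origin/{BRANCH}"], check=True)
        subprocess.run(["docker", "compose", "up", "-d", "--build"], cwd=REPO, check=True)
    time.sleep(60)  # poll interval in seconds
```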
---
I find it really crazy that they think this would be a good idea. I wonder how much false-positive CSS is being added given their approach of “trying to match classes”. So if you use random strings like bg-…, it will add some CSS. I think it’s ridiculous, but it tells you that people who use this can’t be very serious about it, and it won’t work in large projects.
---
> Using multi-cursor editing
> When duplication is localized to a group of elements in a single file, the easiest way to deal with it is to use multi-cursor editing to quickly select and edit the class list for each element at once
Instead of using a var and reusing it, you just use multi-cursors. Bad suggestions again.
---
> If you need to reuse some styles across multiple files, the best strategy is to create a component
But in the list of benefits it says
> Your code is more portable — since both the structure and styling live in the same place, you can easily copy and paste entire chunks of UI around, even between different projects.
---
> Making changes feels safer — adding or removing a utility class to an element only ever affects that element, so you never have to worry about accidentally breaking something on another page that's using the same CSS.
Or call the tool "Read" and it works, according to an issue comment.
But actually the solution is checking out how the official client does it and then doing the same steps, though if people start doing this then Anthropic will probably start making it more difficult to monitor and reverse engineer.
It might not matter, as some people have a lot of expertise in this, but people might still get the message and move away to alternatives.
Then they'd start pinning certs and hiding keys inside the obfuscated binary to make traffic inspection harder?
And if an open source tool started to use those keys, their CI could just detect this automatically and change the keys and the obfuscation method. Probably quite doable with LLMs...
Aren't Anthropic in control of all the legitimate clients? They can download a new version, possibly automatically.
I believe the key issue here is that the product they're selling is an all-you-can-eat API buffet for $200/month. The way they manage this is that they also ship the client for it, so they can more easily predict how many tokens it is actually going to consume (i.e. they can just put their new version of Claude Code into CI with some example scenarios and see that it doesn't blow out their compute quota). If some third-party client is also using the same subscription, it makes it much more difficult to keep the deal affordable for them.
As I understand it, using the per-token API works just fine, and I assume the reason people don't want to use it is that it ends up costing more.
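For reference, a minimal sketch of a per-token call through the official anthropic Python SDK, where you're billed for the input and output tokens reported back; the model name and prompt are just examples:

```python
# Minimal per-token API call using the official anthropic SDK; cost comes
# from the input/output token counts reported in the usage block.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

msg = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model name
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this diff: ..."}],
)

print(msg.content[0].text)
print("tokens billed:", msg.usage.input_tokens, "in /", msg.usage.output_tokens, "out")
```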