In January someone hit us with ~2K malicious backlinks from AWS instances across 15+ regions, cheap TLD spam domains, and even Blogspot. We built a Python script to automate the disavow file generation, then added a UI and Dockerized it. Full technical writeup with the forensic analysis here:
https://dev.to/surcebeats/someone-paid-around-2k-to-destroy-...
The tool parses exports from Ahrefs/SEMrush/Google Search Console, categorizes IPs vs domains, supports whitelisting, tracks new threats across uploads, and generates Google-ready disavow.txt files.
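For a rough idea of the core logic, here's a minimal sketch (hypothetical column and file names, not the tool's actual code):

    import csv
    import ipaddress

    def build_disavow(rows, whitelist=frozenset()):
        # Split bare IPs from domains: disavow files only accept "domain:" and
        # URL entries, so IPs get set aside for manual review instead.
        ips, domains = set(), set()
        for row in rows:
            target = row["referring_domain"].strip().lower()  # hypothetical column name
            if not target or target in whitelist:
                continue
            try:
                ipaddress.ip_address(target)
                ips.add(target)
            except ValueError:
                domains.add(target)
        disavow = "\n".join(f"domain:{d}" for d in sorted(domains))
        return disavow, ips

    with open("ahrefs_backlinks.csv", newline="", encoding="utf-8") as f:
        disavow_txt, flagged_ips = build_disavow(csv.DictReader(f), whitelist={"example.com"})

    with open("disavow.txt", "w", encoding="utf-8") as f:
        f.write(disavow_txt + "\n")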
I'm working on a couple of things at the moment. First, the business: a self-hosted home server OS that simplifies Docker management and provides a unified dashboard for running services at home. The goal is to make self-hosting more accessible without sacrificing flexibility.
And as a hobby I'm also building a procedural universe generation engine that simulates galaxies, solar systems and planets in real time. Everything is generated from a seed, with actual orbital physics, seasonal changes and so on. It's built with a Python/Flask backend as well, but with Three.js for 3D visualization and React instead of Vue 3 like the previous one. Think No Man's Sky vibes, but as an explorable simulation engine.
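The seed-driven part is conceptually just deterministic per-object RNG plus Kepler's third law; a toy sketch (hypothetical structure, nothing like the real engine's code):

    import math
    import random

    def generate_system(seed, star_index):
        # Deterministic per-star RNG: the same seed always yields the same system.
        rng = random.Random(seed * 1_000_003 + star_index)
        star_mass = rng.uniform(0.5, 2.0)             # solar masses
        planets = []
        for i in range(rng.randint(1, 8)):
            a = 0.4 + 0.3 * 2 ** i                    # semi-major axis in AU, Titius-Bode-ish spacing
            period = math.sqrt(a ** 3 / star_mass)    # years, via Kepler's third law
            planets.append({
                "semi_major_axis_au": a,
                "period_years": period,
                "axial_tilt_deg": rng.uniform(0.0, 45.0),  # drives the seasonal changes
            })
        return {"star_mass": star_mass, "planets": planets}

    # Same seed + index -> identical output every time, no state stored anywhere.
    assert generate_system(42, 7) == generate_system(42, 7)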
Benchmarks optimize for fundraising, not users. The gap between "state of the art" and "previous gen" keeps shrinking in real-world use, but investors still write checks based on decimal points in test scores.
We try to make benchmarks for users, but it's like that 20% article: different people want a different 20%, and you just end up adding "features" and playing whack-a-mole with all the different kinds of 20%.
If a single benchmark could be a universal truth, and it were easy to figure out how to build one, everyone would love that... but that's exactly why we're in the state we're in right now.
The problem isn't with the benchmarks (or the models, for that matter); it's that they're being used to prop up the indefensible product marketing claims made by people frantically justifying asking for more dump trucks of thousand-dollar bills to replace the ones they just burned through in a few months.
Absolutely not. This is not a problem with any part of the engineering process. Nearly everything wrong with the AI business lies at the feet of product managers, marketing, the c-suite crowd, etc.
Nice! We considered this exact approach but never shipped it in the end. The geolocation permission is probably unnecessary friction and overkill, imho... Timezone + rough location (country-level from IP) would get you ~95% accuracy without the prompt, and most users will bounce on that permission dialog.
Solid work though, especially the twilight transitions. Loving it!!!
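To illustrate the timezone-only idea: treat the UTC offset as a longitude proxy and pair it with a country-level latitude from the IP lookup. A rough sketch (hypothetical helper, approximations only):

    import math
    from datetime import datetime, timezone

    def is_daytime(utc_now, utc_offset_hours, latitude_deg):
        # Approximate longitude from the UTC offset (15 degrees per hour),
        # then estimate solar elevation; above the horizon counts as daytime.
        day_of_year = utc_now.timetuple().tm_yday
        solar_hours = (utc_now.hour + utc_now.minute / 60.0 + utc_offset_hours) % 24
        declination = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
        hour_angle = math.radians(15.0 * (solar_hours - 12.0))
        lat, dec = math.radians(latitude_deg), math.radians(declination)
        elevation = math.asin(math.sin(lat) * math.sin(dec)
                              + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
        return elevation > 0

    # e.g. UTC+1 and ~40N (country-level guess for Spain)
    print(is_daytime(datetime.now(timezone.utc), 1, 40.0))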
Well, it is in KDE + Firefox. And yeah, the simplistic idea that day = bright and night = dark fails all the time, but the OS already has other settings to deal with those failures, and your site or app should just use the system theme.
I suffered through that back in the day with an Electron desktop app. Not to mention that the notarization and signing integration itself is completely broken: the first time you submit a binary it can take DAYS to process, and setting everything up to work properly with GitHub Actions CI/CD is absurdly time-consuming. It's ridiculous, and if you add this new notarial verification policy on top of that... In the end it's just Apple being Apple.
Google used to proudly say "Don't be evil"... But they just forgot to add "let us take that part".
When tech giants start deciding what technical knowledge is too "dangerous" for users to access, we've crossed into a different kind of territory. Installing an OS on your own hardware is now physical harm? That's some creative interpretation of their policies. The irony is that this kind of censorship just validates why people want to bypass these systems in the first place: nobody wants corporations deciding what they can and can't do with their own machines.
The article is kind of right about legitimate bloat, but "premature optimization is evil" has become an excuse to stop thinking about efficiency entirely. When we choose Electron for a simple app or pull in 200 dependencies for basic tasks, we're not being pragmatic, we're creating complexity debt that often takes more time to debug than writing leaner code would have. But somehow here we are, so...
Thinking is hard, so any product that gives people an excuse to stop doing it will do quite well, even if it creates more inconveniences like framework bloat or dependency rot. This is why shoehorning AI into everything is so wildly successful; it gives people the okay to stop thinking.
Yes. Too many people seem to forget the word "premature." This quote has been grossly misused to justify the most egregious cases of bloat and unoptimized software.
Not sure. Tauri apps run in the browser, and browsers are absolute memory hoarders. At any given time my browser is by far the biggest culprit in abusing available memory. Just look at all the processes it starts; it's insane. I've tried all the popular browsers, and they are all memory hogs.
A big complaint with Electron that Tauri does avoid is that you package a specific browser with your app, ballooning the installer for every Electron app by the size of Chromium. The same goes for bundling NodeJS (or Tauri's equivalent backend), but that isn't quite as weighty, and there the difference is which backend gets bundled, not whether one is there at all.
In either case you end up with a fresh instance of the browser engine (unless things have changed in Tauri since I last looked), distinct from the one serving you generally as an actual browser, so both do have the same memory footprint in that respect. So you are right, that is an issue for both options, but IME people away from development seem more troubled by the package size than by interactive RAM use. Tauri apps are also likely to start faster from cold, since an Electron app is loading a complete new browser for which every last byte used needs to be read from disk; I think the average non-dev user will be more concerned about that than about memory use.
There have been a couple of projects trying to be Electron, complete with NodeJS, but using the user's currently installed default browser like Tauri does, and others that replace the back-end with something lighter-weight, even more like Tauri. Most of them, though, are currently unmaintained, still officially alpha, or otherwise incomplete, unstable, or both. Electron has the advantages of being here, being stable and maintained, and being good enough until it isn't (and once it isn't, those moving off it tend to go for something else completely rather than another system very like it). It is difficult for a newer, similar project to compete with the momentum Electron has when the "escape route" from it is generally to something completely different.
Based on https://v2.tauri.app/concept/architecture/, it seems that Tauri uses native webviews, which allows Tauri apps to be much smaller and less of a memory hog than a tool which uses Electron and runs a whole browser.
On the flip side, what you're saying is also an overused excuse to dismiss web apps and promote something else that's probably a lot worse for everyone.
I've never seen a real world Electron app with a large userbase that actually has that many dependencies or performance issues that would be resolved by writing it as a native app. It's baffling to me how many developers don't realize how much latency is added and memory is used by requiring many concurrent HTTP requests. If you have a counterexample I'd love to see it.
What is often missing from the discussion is the expected lifecycle of the product. Using Electron for a simple app might be a good idea, if it is a proof-of-concept, or an app that will be used sparsely by few people. But if you use it for the built-in calculator in your OS, the trade-offs are suddenly completely different.
A large majority of Electron crap could be turned into a regular website, but then the developers would need to actually target the Web instead of the ChromeOS platform, and that is apparently too hard.
I've recently gone back to more in-depth (but still indie) web dev with Vue.js and Quasar, and honestly I don't even find myself thinking about "targeting the Web" any more - I just write code and it seems to work on pretty much everything (to be fair, I haven't tested Safari).
I'd argue that the insane complexity of fast apps/APIs pushes many devs towards super slow but easy apps/APIs. There needs to be a middle ground, something that's easy to use and fast-enough, rather than trying to squeeze every last bit of perf while completely sacrificing usability.
Java Swing? It was slow in 1999, which means it's fast now. Java is also a much more sensible language than JavaScript. Swing isn't a native GUI, but neither is anything built in JavaScript anyway.
Swing has no place in a sentence about good usability. It may be the best of the worst, but it's not a positive example. Things like HTML or ImGui are better choices, with the former also being much more powerful and the latter being as simple as can be while still being blazing fast.
This really resonates. Sometimes the best reason to switch tech is just to feel that spark of learning again. I build self-hosting platforms and have spent years trying to make them "easy", even getting them to work on Windows/macOS. But honestly, the magic isn't in convenience. It's in that figuring-it-out phase, imho...
When we skip the convenience and jump straight into the sea, we actually learn how the stack works rather than how the convenience wrapper works, and we come out more confident in our ability to do things without needing somebody else's help.
That's why this figuring-it-out phase feels really important, lovely even. Yet most people feel how hard it is and set it aside, because they just want something that just works.
Fortunately for them, with technologies like Docker/Podman, Flatpak, AppImage and the like, I feel it's already easy-ish enough.
Side nitpick, but I hate it when apps ship as Docker/Podman containers when they could also offer a Flatpak. I would love to see some self-hosting apps with a GUI, or maybe even a CLI, distributed via Flatpak, but I've rarely seen CLI apps packaged that way.
Fascinating deep dive into OverlayFS CoW behavior. The 11GB btmp file getting copied 271 times is a perfect storm scenario. Did they consider mounting /var/log outside the image layers? Seems like that would prevent any log file from causing this amplification. Also interested in image-manip... Does it handle metadata differently than docker export/import?
This is less of a deep dive and more an illustration of the worst way to use containers.
Having /var/log set as a persistent volume would have worked, but ultimately they were using "docker commit" to amend/update their images, which is definitely the wrong way to do it.
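Something along these lines with the Python Docker SDK (hypothetical image and volume names), instead of committing running containers back into the image:

    import docker

    client = docker.from_env()
    # A named volume keeps /var/log out of the image's overlay layers, so a
    # growing log file never gets copied up into a new layer on commit/rebuild.
    client.containers.run(
        "myapp:latest",
        detach=True,
        volumes={"myapp-logs": {"bind": "/var/log", "mode": "rw"}},
    )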
Seriously. Honestly this whole thing feels kinda like…using an LLM to write a blog post about debugging weird problems that only exist because the whole platform was built by an LLM in the first place. The multiple top level comments that are clearly written by an LLM are icing on the (layer) cake.