After spending most of my career hacking on these systems, I feel like queues very quickly become a hammer and every entity becomes a nail.
Just because you can keep two systems in complete sync doesn't mean you should. If you ever find yourself with more-or-less identical tables in two services you may have gone too far.
Eventually you find yourself backfilling downstream services due to minor domain or business logic changes, and scaling is a problem again.
I was once told "we cannot promote you because the work you've done checks the boxes for 2 roles above you and does not check the boxes for your next role."
I've heard it's actually beneficial for your ankles long term to get some tilt/pan on them. It reduces your chances of injury by strengthening the twitch muscles in your ankles and legs.
Important to note the point is "trail running", not "alpine running": gravel and dirt vs. steep inclines and big rocks.
Anecdotally, just adjust your pace/length until you're comfortable. I've always done mixed asphalt/dirt-trail running, and there is a notable difference in my knee fatigue when the ratio skews toward one or the other; I'd always prefer nice gravel or dirt over the road.
I enjoy when:
Things are simple.
Things are complicated, but I can learn something useful.
I do not enjoy when:
Things are arbitrarily complicated.
Things are complicated, but I'm just using AI to blindly get something done instead of learning.
Things are arbitrarily complicated, and there is no incentive to improve them because now "everyone can just use AI".
It feels like, instead of all stepping back and saying "we need to simplify things," we've doubled down on abstraction _again_.
I really do not like working with Uncle Bob hardliners.
There have been so many times when I have commented on _why_ I think some Uncle Bob-ism made the code unclear, and the response is always:
> *Sends link to Clean Code*, maybe you don't know about this?
No, I do, and I am allowed to disagree with it. To which they always clutch their pearls: "How do you think you know better than Uncle Bob!?" This is, after all, a Well Established Pattern™.
I don't think I know better than Uncle Bob, but I don't think Uncle Bob works on this codebase nearly as much as you or I.
I knew someone who created a Mandelbrot set viewer that would display over a VGA port; you had a game controller to move around and zoom in. Something like that?
WireGuard itself can be configured to work either way.
Our target market is smaller teams and people with limited IT skills, so we chose not to send all traffic through the VPN. The only traffic going through the VPN is traffic to and from your other devices (in your account). Internet access is still through your default network.
In the Pro version, you can route specific destinations through other peers that also belong to you. An example use case here would be accessing your web banking while on vacation in a distant country: you would route your bank's website through your home connection.
Similarly, our access control only restricts traffic that comes from your devices on the WireGuard network. We do not interfere with the settings of your own personal firewall.
For WireGuard in general, you provide it an AllowedIPs config which is a list of CIDR ranges that should be routed across the link. That could be `0.0.0.0/0` (aka everything), a single subnet, a union of several, or even individual IPs. This config is technically symmetric between the endpoints, though a prototypical implementation of "individual clients enable the VPN to access the internal network" may limit the "client" AllowedIPs to an individual address.
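To make that concrete, here's a minimal wg-quick-style client config sketch, purely illustrative: the keys, hostname, and addresses are placeholders, not anything specific to this product. It shows AllowedIPs limited to a single internal subnet rather than `0.0.0.0/0`:

```ini
# Hypothetical client config, illustration only; keys and addresses are placeholders.
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32            # this device's address inside the tunnel

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route only the internal subnet across the link;
# 0.0.0.0/0 here would instead send all traffic through the tunnel.
AllowedIPs = 10.0.0.0/24
```

On the other end, the matching [Peer] entry for this device would typically set `AllowedIPs = 10.0.0.2/32`, which is the "limit the client to an individual address" case described above.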
Can anyone explain to me (someone not so network-security savvy) whether there are any privacy or security concerns with using a WireGuard provider like this?
As I understand it, with traditional VPNs, you basically have to trust third-party audits to verify the VPN isn't logging all traffic and selling it. Does the WireGuard protocol address these issues? Or is there still the same risk as with a more traditional VPN provider?
This is not providing the same functionality as a "traditional VPN," in the sense that it does not do anything to your traffic going to the wider internet. Popular VPN services give you an encrypted tunnel for all your internet traffic (some use the same protocol, WireGuard), but at the end of the tunnel they decrypt the traffic and send it to whatever website you requested, which is exactly what can cause the privacy issues you describe.
In this case, though, it creates an encrypted tunnel _only between your own devices_. This allows you to connect to all your devices (home desktop, phone, laptop) as if they were on the same network, letting you do fairly sensitive things like remote desktop without having to expose your machine to the public internet or deal with firewall rules in the same way.
Assuming this project is legitimate, the only traffic this service would even touch would be traffic between your own devices, nothing related to public internet requests. And, on top of that, the requests should be encrypted the entire way, inaccessible to any device other than the ones sending and receiving them.
There are many caveats and asterisks I could add, but I think that's a fairly straightforward summary.
To clarify, one of the big advantages of a Mesh VPN is that the traffic does not flow through the VPN provider at all. WireGuard encrypts the traffic from device interface to device interface. The connections are point-to-point and not hub-and-spoke. This is both faster and more secure.
If a direct connection cannot be established due to a very restrictive firewall or a messed-up ISP modem, it will fall back to a relay server. In that case the relay forwards the traffic, but it does not have the keys to read it.
TL;DR WireGuard itself is a relatively small project at roughly 4,000 lines of code. It has been thoroughly audited and is even built into the Linux kernel.