You can make it so employees don’t have ambient access to data, and require multi-party approval for all actions that touch user data. An employee giving away a user password should then be treated as a routine risk that the system is designed to withstand.
I’m not saying that’s how it actually works, or that this process doesn’t have warts, but the ideal of individual employees not having direct access is not novel.
It doesn’t have to be a compile-time constant. An alternative is to prove that, wherever the function is called, the index is always less than the size of the vector (a dynamic constraint). You may be able to assert this by having a separate function on the vector that returns a constrained value (e.g. n < v.len()).
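To make that concrete, here’s a minimal sketch of the idea (written in Go for illustration; all names are made up): one helper establishes the dynamic constraint and hands back a wrapper value, and downstream code accepts only the wrapper rather than a raw index.

```go
package main

import (
	"errors"
	"fmt"
)

// BoundedIndex is a hypothetical wrapper: holding one means the index was
// already checked against the slice's length. In a real design the
// constructor would live in its own package so the field can't be set directly.
type BoundedIndex struct {
	i int
}

// CheckIndex is the single place where the dynamic constraint (n < len(s))
// is established.
func CheckIndex[T any](s []T, n int) (BoundedIndex, error) {
	if n < 0 || n >= len(s) {
		return BoundedIndex{}, errors.New("index out of range")
	}
	return BoundedIndex{i: n}, nil
}

// Get accepts only the constrained value, so it can't be called with an
// unchecked raw index.
func Get[T any](s []T, idx BoundedIndex) T {
	return s[idx.i]
}

func main() {
	v := []string{"a", "b", "c"}
	idx, err := CheckIndex(v, 2)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(Get(v, idx)) // "c"
}
```

The obvious gap in this sketch is that the wrapper isn’t tied to a particular slice, which is where stronger type systems can do better, but even this form pushes the bounds check to one place.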
Management not having to listen to engineers is the structural problem. How do managers know which of the concerns engineers bring up are actually relevant? How do engineers know which concerns have real-world consequences (without facing an incredibly high burden of proof)?
Having regulation or standardisation is a step toward producing a common language to express these problems and have them taken seriously.
Leadership gets a strong signal - ignoring engineers who surface regulated issues has large costs. The company might be sued, and executives are criminally liable (if discovered to have known about the violation).
Engineering gets the authority and the liability to sign off on things - the equivalent of “chartership” in traditional engineering fields, with the same penalties. This gives engineers a strong personal reason to surface things.
It’s possible that this is harder for software engineering in its entirety, but there is definitely low-hanging fruit (password storage and security, etc.).
I think it depends on the scope and level of solution I accept as “good”. I agree that often the thinking for the “next step” is too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even if it’s not immediately feasible, and can spend large amounts of time on this.
And then there’s also all the non-systems stuff - what’s actually feasible, what’s most valuable, etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was an easy domain where “optimality” was actually tractable (with the joy that comes from that), and LLMs are harmful to it, but I don’t think there’s nothing to replace it with.
The team lead manages the overall direction of the team (and is possibly the expert on some portions of it), but for an individual subsystem a senior engineer might be the expert.
For work coming from outside the team, it’s sort of up to your management chain and team lead to prioritise. But for internally driven work (tech debt reduction, reliability/efficiency improvements, etc.), the senior engineer often has a better idea of the priorities for their area of expertise.
Prioritisation between the two is often a bit more collaborative, and as a senior engineer you have to justify why thing X is super critical (not just propose that thing X needs to be done).
I view the goal of managers + lead as more about balancing the various things the team could be doing (especially external work), and the goal of a senior engineer as being an input to that process for the specific systems they know best.
I agree, but I think that input is limited to unopinionated information about the technical impact or user-facing impact of each task.
I don't think it can be said that senior engineers persuade their leaders to take one position or the other, because you can't really argue against a political or financial decision using technical or altruistic arguments, especially when you have no access to the political or financial context in which these decisions are made. In those conversations, "we need to do this for the good of the business" is an unbeatable move.
I guess this is also a matter of organisational policy and how much power individual teams/organisational units have.
I would imagine mature organisations without serious short/medium-term existential risk tied to product features build in some push-back mechanisms to defend against the inherent cost of maintaining the existing business (i.e. prioritising tech debt to avoid outages, etc.).
In general, it is probably a mix of the two - even if there is a mandate from on high, things are typically arranged so that it can only occupy X% of a team’s capacity in normal operation, with at least some amount “protected” for things the team thinks are important. Of course, this is not the case everywhere, and a specific demand might require “all hands on deck”, but to me that seems like a short-sighted decision without an extremely good reason.
In my 30 years in industry, "we need to do this for the good of the business" has come up maybe a dozen times, tops. Things are generally much more open to debate from different perspectives, including things like feasibility. Once in a blue moon you'll get "GDPR is here... this MUST be done". But for 99% of the work there's a reasonable argument for a range of work to get prioritized.
When working as a senior engineer, I've never been given enough business context to confidently say, for example, "this stakeholder isn't important enough to justify such a tight deadline". Doesn't that leave the business side of things as a mysterious black box? You can't do much more than report "meeting that deadline would create ruinous amounts of technical debt", and then pray that your leader has kept some alternatives open.
It’s possible, but I think it’s typically used for ingress (i.e. the same IP announced from multiple destinations, with traffic following BGP to the closest one).
I don’t think I’ve seen a similar case for anycast egress. Naively, it doesn’t seem like it would work well, because a lot of the internet (e.g. non-anycast geographic load balancing) relies on unique sources, and Cloudflare definitely break out egress for their other anycast addresses (e.g. they don’t send outbound DNS requests from 1.1.1.1).
So reading the article, you’re right - it’s technically anycast, but only at the /24 level, to work around BGP limitations. An individual /32 maps to a specific datacenter (so it’s basically unicast). In a hypothetical world where BGP could route /32s, it wouldn’t be anycast.
I wasn’t precise, but what I meant was more akin to a single IP shared by multiple datacenters in different regions (from a BGP perspective), which I don’t think Cloudflare has. This is the general parallel of ingress anycast as well - a single IP that can be routed to multiple destinations (even if, at the BGP level, the entire aggregate is anycast).
It would also not explain the OP, because they are seeing the same source IP from many (presumably) different source locations, whereas with the Cloudflare scheme each location would have a different source IP.
To be clear, they definitely use ingress anycast (i.e. anycast for external traffic coming into Cloudflare). The main question was whether they (meaningfully) used egress anycast (multiple Cloudflare servers in different regions using the same IP to make requests out to the internet).
Since you mentioned DDoS, I’m assuming you are talking about ingress anycast?
It doesn't really matter if they're doing that for this purpose, though. Cloudflare (or any other AS) has no fine-grained control over where your packets to their anycast IPs will actually go. A given server's response packets will only go to one of their PoPs; which one depends on the server's location and network configuration (and could change at any time). Even if multiple of their PoPs tried to fetch from the same server, all but one would be unable to maintain a TCP connection without tunneling shenanigans.
Tunneling shenanigans are fine for ACKs, but they're inefficient, so it's pretty unlikely that they are doing this for ingress object traffic.
For POSIX: I leave Bash as the system shell and then shim into Fish only for interactive terminals. This works surprisingly well, and any POSIX env initialisation is inherited. I very rarely need to do anything complicated enough in the interactive shell for it to matter, and I can start a subshell if needed.
Fish is nicer to script in by far, and you can keep Fish scripts isolated with shebang lines while still running Bash scripts (with their own proper shebang line). The only thing that’s tricky is `source` and its equivalents, but I don’t think I’ve ever needed that in my main shell rather than a throw-away subshell.
I often write multi-line commands in my zsh shell, like while-loops. The nice thing is that I can readily put them in a script if needed.
I guess that somewhat breaks with fish: either you use bash -c '...' from the start, or you adopt the fish syntax, which means you need to convert again when you switch to a (bash) script.
I guess my workflow for this is more fragmented. Either I’m prototyping a script (and editing and testing it directly) or I just need a throwaway loop (in which case fish is nicer).
I also don’t trust myself not to screw up anything more complex than running a single command in Bash without the guard rails of something like shellcheck!
I used to do it this way, but then having to mentally switch from one to the other became too much of a hassle. Once I realized I only had basic needs, zsh with incremental history search and the like was good enough.
I don't care for mile-long prompts displaying everything under the sun, so zsh is plenty fast.
What do you mean by “fixing this” or it being a design flaw?
I agree with the point about sequential allocation, but that can also be solved by something like a linter. How do you achieve compatibility with old clients without allowing something similar to reserved field numbers to deal with version skew ambiguity?
I view an enum more as an abstraction to create subtypes, especially named ones. “Enumerability” is not necessarily required and in some cases is detrimental (if you design software in the way proto wants you to). Whether an enum is “open” or “closed” is a similar decision to something like required vs optional fields enforced by the proto itself (“hard” required being something that was later deprecated).
One option would be to have enums be “closed” and call it a day - but then you can never add new values to a public enum without breaking all downstream software. Sometimes this may be justified, but other times it’s not strictly required (basically it comes down to whether static enumerability is part of the enum’s API contract or not).
IMO the Go way is the most flexible and sane default. Putting aside dedicated keywords etc., the “open by default” design means you can add enum values when necessary. You can still do dynamic closed enums with extra code; static ones are not possible without codegen, though. However, if the default were closed enums, you wouldn’t be able to use one where you wanted an open enum, and would have to set it up the way Go does now anyway.
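As a minimal sketch of what I mean (hypothetical names, just the usual typed-constant pattern): the type is open by default, and a small validity check is all the “extra code” needed to treat it as a dynamically closed enum.

```go
package main

import "fmt"

// Color is an "open" enum in the usual Go style: a named integer type plus
// some predeclared constants. Nothing stops a caller from passing Color(42),
// which is exactly what makes it open.
type Color int

const (
	ColorUnspecified Color = iota
	ColorRed
	ColorGreen
)

// IsValid is the "extra code": callers that want closed-enum behaviour can
// reject (or specially handle) unknown values at runtime.
func (c Color) IsValid() bool {
	switch c {
	case ColorUnspecified, ColorRed, ColorGreen:
		return true
	default:
		return false
	}
}

func main() {
	known := ColorGreen
	unknown := Color(42) // compiles fine - the type is open

	fmt.Println(known.IsValid())   // true
	fmt.Println(unknown.IsValid()) // false: drop it, error out, or pass it through
}
```

Adding a new value later doesn’t break existing callers that simply pass values through; only code that opted into the validity check has to decide what to do with unknowns.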
Not sure what GP had in mind, but I have a few reasons:
Cherry picks are useful for fixing a release or adding changes without having to cut an entirely new one. This is especially true for large monorepos, which may have all sorts of unrelated changes in between. Cherry picks are a much safer way to “patch” a release, especially if the release process itself is long and you want a limited-scope “emergency” one.
Atomic changes - assuming this is related to releases as well, it’s because the release processes for the various systems might not be in sync. If you make a change where a frontend release that uses a new backend feature ships alongside the backend feature itself, you can get version drift issues unless everything happens in lock-step and you have strong regional isolation. Cherry picks are a way to circumvent this, but it’s better not to make these changes “atomic” in the first place.