I don't think it was purely a calculation error; it was probably also an intuitive evaluation. Bobby was trying to complicate the game and create an imbalance where there would still be winning chances if Spassky misplayed, but obviously it was a bad evaluation and it backfired. I think he was in a bit of a mood and got reckless. He had been making a lot of demands leading up to this, and threats to not participate, and probably got frustrated by the drawn endgame and took a big risk. To be fair, I don't think he ever really opened up about his reasoning, but he was asked something along the lines of "were you trying to complicate the game" and he said "something like that". After losing those first two games he demanded that the cameras be removed from the playing hall and started to play really well against Spassky, so possibly a psychological aspect from the cameras was also to blame. Maybe he knew he was throwing the game and that it'd make for an entertaining match... the guy was sort of insane.
I love the concept. I don't think I'd prefer to play chess this way myself, since I've had a lot of practice; I find it visually a little distracting, though I've started getting used to it.
I had a situation where my queen was being attacked by a bishop, and the board showed a "safe" square to move my queen to, but the queen would still have been attacked by the bishop along that diagonal. Not sure how you solve that; maybe when a piece is clicked, recalculate the board as if that piece were no longer there?
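The "recalculate as if the piece is gone" idea can be sketched in a few lines. This is a minimal, self-contained toy (the board representation and function names are hypothetical, not the app's actual code): when computing what a sliding piece attacks, treat the currently selected piece's square as empty, so squares "shadowed" behind it still count as attacked.

```python
# Toy sliding-piece attack calculation on an 8x8 grid of (row, col) tuples.
# All names here are hypothetical illustrations, not the app's real code.

BISHOP_DIRS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def ray_attacks(occupied, frm, dirs, ignore=None):
    """Squares attacked by a slider on `frm`; `ignore` is treated as empty."""
    attacked = set()
    for dr, dc in dirs:
        r, c = frm[0] + dr, frm[1] + dc
        while 0 <= r < 8 and 0 <= c < 8:
            attacked.add((r, c))
            # Stop at the first real blocker, but see "through" the piece
            # the player has picked up.
            if (r, c) in occupied and (r, c) != ignore:
                break
            r, c = r + dr, c + dc
    return attacked

# Bishop on h8 = (7, 7), queen on d4 = (3, 3), same diagonal.
occupied = {(7, 7), (3, 3)}

naive = ray_attacks(occupied, (7, 7), BISHOP_DIRS)
fixed = ray_attacks(occupied, (7, 7), BISHOP_DIRS, ignore=(3, 3))

# c3 = (2, 2) sits behind the queen on the diagonal: the naive calculation
# marks it "safe", while the fixed one correctly flags it as attacked.
```

In the naive version the queen blocks its own escape diagonal, which is exactly the bug described: the squares behind the queen look safe even though moving there keeps her on the bishop's line.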
I wonder if simple forks, skewers, or attack-counting threats could also be highlighted in some way. I suppose at a certain point it just gets too visually busy, and the tactics go way deeper than these surface-level notions and end up being a distraction, but it could be fun to explore an opening or a previous game and see the "obvious" threats you might not have seen while playing.
>I had a situation where my queen was being attacked by a bishop, and the board showed a "safe" space to move my queen, but that queen would have still been attacked by the bishop along that diagonal.
Are you sure? Can you send a screenshot? Any place the opposing bishop attacks would have a dot on it. (The green highlighting when you pick up a piece shows all legal moves, not just safe moves.)
The square to the bottom right of the queen isn't "currently" visible to the bishop, but if the queen moved into that square she would still be captured by the bishop.
Maybe the cloud companies could do something here by always keeping a small subset of machines online and ready to join the cluster, provided the end user accepts some compromise on the configuration. I guess it doesn't solve image pulling. Pre-warming nodes is an annoying problem to solve.
The best solution I've been able to come up with is Spegel (lightweight p2p image caching) + Karpenter (dynamic node autoscaling) + low-priority pods to hold onto some extra nodes. It's not perfect though.
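The "low-priority pods holding extra nodes" trick is the standard cluster-overprovisioning pattern. A rough sketch of the two pieces involved (all names, replica counts, and resource sizes are placeholders you'd tune for your cluster):

```yaml
# A negative-priority class: pods using it are preempted first whenever a
# real workload needs the room, so they only ever occupy spare capacity.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10
globalDefault: false
---
# Placeholder pods sized roughly like your real workloads. Karpenter
# provisions nodes to fit them; when real pods arrive, the scheduler evicts
# these and the warm nodes are reused without a cold-start wait.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "3"
              memory: 8Gi
```

The `pause` image does nothing, which is the point: the pods exist only to make the autoscaler keep capacity around.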
I think it gets hard when an emergent chain of complex trust relationships needs to be built and understood. Things like IAM Identity Center, workload identity, IRSA on EKS, service principals vs roles for accessing other services from a service, resource policies vs principal-level policies and when to use each. Not necessarily intuitive all the time. I don't think it's THAT hard, and I understand why some of these things were built this way, but it's a huge, complicated ecosystem of services and I understand why it can get confusing for some. You've gotta be disciplined about it.
It's not THAT hard if it's the thing you specialize in.
Problem is, that kind of thing is the backend of the backend. An accessory of an accessory. No business cares about it. Ever. And yes, running wildcard policies can hit you hard if not attended to properly; my point is I don't want to learn a complex ACL system made for enterprises for my small startup, which is actually going to be fine with wildcard policies basically forever.
Coming from the world of audio software, I've always wondered why Adobe seems to have such a stranglehold on visual work and nothing really catches up to Photoshop or Illustrator. In audio there are several big DAWs (digital audio workstations) that I would classify as popular and competent enough for serious work, each of which has artists or producers who have built successful careers around it. Yes, there are endless wars about which is better, but they can more or less do the same things, and most experienced people say: choose one, learn it, decide what works for you. I feel like with Photoshop it's always "oh, it's missing critical features x, y, and z compared to Photoshop, so it's a dealbreaker". The closest analogy I can think of is Pro Tools being the de-facto standard in many pro recording studios, but most hobbyists don't use Pro Tools and agree that it's popular in pro studios mostly due to tradition.
I'm surprised there aren't at least a handful of Adobe competitors that have carved out a niche and become significantly popular because they made some key workflows faster, more intuitive, or more powerful.
Maybe this difference is because of ubiquitous plugin formats like VST that translate across different DAWs?
1. It's significantly more standardized and straightforward for data interpretation. MIDI is standard (and OSC sort of fizzled), and audio clips (wav, aiff, whatever) are also very standard. You don't have the issues of color science, and you have a much smaller range of transformations that can be done to an audio clip.
2. A lot of infrastructure is standardized. From hardware interaction, to key mapping, but also things like plugins (Audio Units, VSTs, RTAS/AAX). It's so much simpler to go between apps.
3. A lot of audio workflows are treated as procedural and non-destructive.
Compare this to images:
1. Color science is horrific. Even Adobe often gets it wrong (Krita was actually the best for a long time).
2. Plugins are very application specific. So biggest marketshare often wins.
3. The range of transformations people want to do is massive. Each of them needs a very bespoke workflow, and due to the lack of standardized plugins, they're rarely shared.
4. A lot of image workflows are destructive by nature. A lot of image plugins as well are destructive.
5. Document interchange still sucks. For raster, you'll be plagued by color science issues. For vector, you'll be plagued by nobody implementing SVG the same.
6. Hardware APIs also vary wildly. For a long time, you had to target every vendor of pen you wanted to support for example.
I think a large part of it is due to the industries behind it. Video and Audio need to scale massively within a single project, across a lot of hardware devices, and production houses. The data is massive in comparison. Issues cost a lot.
Images are smaller in scale. An issue can be fixed very cheaply.
The Video and Audio industries fixed this by putting effort into standardization, education and interoperability. Images never had that attention.
This is a great summary. I'd emphasize that, as a result of the items mentioned here, both input (through external MIDI controllers) and output (through VST instruments) are actually cross-DAW. That consistency makes switching DAWs far easier, and makes what any one DAW is best at much narrower.
In the context of this article, it's that the security-scanning software companies/users are running seems to be indexing some of the 12-char links out of emails, which in some cases end up on public scan pages. Additionally, if domain.com/12-char-password is requested without https, then even if there is a redirect, that initial request went over the wire unencrypted and could therefore be MITMed, whereas with a login page there are more ways to guarantee that the password submission only ever happens over https.
I'm kind of surprised at the statement that Terraform is bad at bootstrapping things like Kubernetes — not the statement on its own, but in the context of using Talos. Yes, for a lot of roll-your-own Kubernetes cluster distributions it isn't great, and implementations are somewhat badly maintained, but for Talos specifically it's actually a very nice experience. They've done a good job on the provider and made it possible to bootstrap in an idempotent way, and it helps manage the lifecycle and upgrades going forward by talking to the Talos control plane after it's bootstrapped. It's still being actively developed, but I think their approach works better than most, and in some ways it feels nicer than trying to bootstrap something like EKS with Terraform. https://github.com/siderolabs/terraform-provider-talos
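For a sense of what that looks like, here's a rough sketch of a single-control-plane bootstrap with the siderolabs/talos provider, as best I remember its resource names (treat the endpoint, IP, and cluster name as placeholders, and check the provider docs for current attribute names):

```hcl
# Generates and stores the cluster's PKI/secrets in state.
resource "talos_machine_secrets" "this" {}

# Renders a machine config for a control-plane node from those secrets.
data "talos_machine_configuration" "cp" {
  cluster_name     = "example"
  machine_type     = "controlplane"
  cluster_endpoint = "https://10.0.0.1:6443"
  machine_secrets  = talos_machine_secrets.this.machine_secrets
}

# Pushes the config to the node over the Talos API; re-applies on changes,
# which is what makes upgrades and config drift manageable from Terraform.
resource "talos_machine_configuration_apply" "cp" {
  client_configuration        = talos_machine_secrets.this.client_configuration
  machine_configuration_input = data.talos_machine_configuration.cp.machine_configuration
  node                        = "10.0.0.1"
}

# Bootstraps etcd exactly once; safe to re-run, which is the idempotency win.
resource "talos_machine_bootstrap" "this" {
  client_configuration = talos_machine_secrets.this.client_configuration
  node                 = "10.0.0.1"
}
```

The notable part is that the whole lifecycle lives behind the Talos API, so Terraform doesn't need SSH, cloud-init hacks, or local-exec glue.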
This is reprimanding for the content of the message, not the scope of the code, which would have actual security implications. Furthermore, it is a warning about not violating an actual company policy, which is not far off from the scope this pop-up tool is designed for. While it is clear that this was done as a response to Google hiring this firm to dissuade folks from organizing, I could argue that it could be done to warn managers not to use the firm's presence as permission to violate a specific policy + law. IANAL, but this seems like an extremely grey legal area. For example, this could be aimed at managers to remind them that even though this firm is hired, they cannot enforce a ban on organizing according to that specific policy in the handbook. I think that's an appropriate use, IMO; it would save the company some serious money and headache if it stopped a manager from illegally retaliating against organizing.
I would not characterize this as evidence that this person is a security risk. It fits the existing culture at Google, including past incidents like changing the default desktop wallpaper for a protest that was happening, etc.
Also if this is true it is totally insane. Sounds like intimidation tactics to stop exactly what the pop-up warned against.
> They also dragged me into three separate interrogations with very little warning each time. I was interrogated about separate other organizing activities, and asked (eight times) if I had an intention to disrupt the workplace. The interrogations were extremely aggressive and illegal. They wouldn’t let me consult with anyone, including a lawyer, and relentlessly pressured me to incriminate myself and any coworkers I had talked to about exercising my rights at work.
I think you're assuming it's related to the message content, but that's not what Google are saying, and it's not how corporations work in my experience. How you do something matters a great deal in any large bureaucracy. If Spiers wanted to remind people they could unionise, there are communication systems that exist for people to talk to each other on their own initiative without approval, like email or even memegen.
Modifying the behaviour of people's web browsers isn't a channel intended for employees to push personal messages to each other and this should have been really obvious to her. She and her colleagues were trusted with a tremendous amount of power which could be readily abused (see my other comment on this thread), and the expectation was clear that it'd be used only within the bounds of what her management asked her to do, namely corporate security.
When she went outside those bounds and started using her immense technical privileges in ad-hoc ways, and (worse) making arguments like "I got a colleague to approve a code review so it was OK" she gave an extremely clear demonstration that management simply couldn't trust her. It's not about unionisation. It's about someone with the power to steal cookies from her own colleagues going rogue and deciding her own personal political priorities matter more than company policies she had agreed to follow.
In the age of treating servers (or containers) like cattle instead of pets, the "Back up everything" mantra has fallen by the wayside. In order to get away with selective backups you have to know exactly where long-term state is stored and you need to have the infrastructure in place to manage re-provisioning everything and restoring snapshots. It's not something you can tack on later. Iterate, test, integrate, document, audit, review. It ends up being much more complicated than periodic wholesale snapshots on a server.
There's a certain elegance and assurance you get from this that has been lost with the times, akin to how monolithic server software, with all functionality natively available in the code, has gone away in favor of microservices. Now you have message queues, k/v stores, caches, and search engines as microservices tacked onto the core services, rarely fully understood by the engineering team, and containing more functionality than the codebase ever really utilizes. It ends up being more complicated to manage in a lot of ways. I think the emergence of microservices is one of the driving forces behind selective state backups, because you can never back up the entire state at once; everything is too spread out. You're not going to back up the running state of the k8s node, or whatever.
Thank you. I feel kind of silly about this, but I've had a hard time understanding when an org should, or could, use something like this. I have seen them mentioned, but every time it's explained, it's explained with more abstract language layered on top, which confuses me. I keep hearing "it manages business processes", but it's never clear whether that means a human being's process within an org, or something coupled with an application of some sort that has business processes in it. Does this type of thing replace sort of what Jira does: make a ticket and then pass it off to the next team or whatever? Do you ship it with the app for on-premise deployments of a software product? I have a hard time seeing the big picture with things like this sometimes. Then I hear "workflow orchestrator" and I think, oh okay, so like Ansible, but for, work...flows? But what is a workflow, really, exactly?
This could also be used to kill off systems like SharePoint in many businesses and that would be great.
Seriously, its workflow engine has race conditions, randomly fails, and has no transaction management. But there are few alternatives. I don't know why there hasn't been any real contender. You would need a full suite to challenge it though.
Yep! The one thing I would change, though, is that it's common for workflows to be started by an event. So in your example, the first step would be UserService.signupUser, which emits a sign-up event, which starts a workflow that sends the email.
Without the workflow/orchestration, we're effectively coupling the EmailService to the UserService, and it's that type of coupling that reduces reusability and isolation.
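That decoupling can be shown with a toy in-memory event bus (all names here are hypothetical; a real system would use a broker and a workflow engine rather than direct function calls):

```python
# Toy in-memory pub/sub bus illustrating event-driven decoupling.
# All service and event names are made up for the example.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()
sent = []

# UserService knows nothing about email: it just records the user and
# announces that a signup happened.
def signup_user(email):
    bus.publish("user.signed_up", {"email": email})
    return {"email": email}

# The welcome-email workflow is wired up separately, keyed off the event,
# so EmailService is never referenced from UserService.
def welcome_email_workflow(event):
    sent.append(f"welcome -> {event['email']}")

bus.subscribe("user.signed_up", welcome_email_workflow)
signup_user("ada@example.com")
```

Swapping out or adding consumers (mailing-list subscription, analytics) then means only adding subscribers, never touching UserService.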
Thank you very much for your reply!
I wonder if the flow can start from Workflow not from UserService.
E.g.,
1. Browser requests with UserSignup event (or API gateway receives it, do not call UserService but emits event)
2. Workflow receives the event, then calls UserService.signupUser activity
3. UserService creates a user then return the call back to Workflow
4. Workflow resolves the call (signupUser) then calls EmailService.sendEmail
5. EmailService.sendEmail sends an email then calls back to Workflow
6. Workflow resolves the call (sendEmail) and the flow is completed
The difference is that every workflow would be defined inside the Workflow, and services wouldn't serve requests directly, which I believe gives a complete view of the flow.
However, there must be something I'm missing here, since what I'm describing seems like an anti-pattern.
It's certainly acceptable to start workflows explicitly! However it wouldn't be a good fit for the user signup process.
In the above example, the user just wants to sign up. They don't care about receiving a welcome email or being subscribed to a mailing list or anything else; they just want to register an account. That's a pretty good use case for just `POST /signup`, which hits the user service and spits out an event that the user has signed up.
Starting a new workflow when that event is published makes sense in this case.
An example when a workflow is started explicitly could be something like doing a fire system test. You could:
1. shoot off a command that starts the FireSystemTestWorkflow
2. the workflow sends a bunch of commands to test sensors and sirens
3. those things publish events that they're functioning correctly
4. the workflow waits for all of these events to come back
5. the workflow publishes a FireSystemTestedSuccessfully event
The nice side of this is that the workflow can respond if a sensor or siren fails and perform what's called a "compensating action", i.e. compensate for the deviation from the successful path with a corrective step, like sending a command to start the device or notifying a technician.
Wow. Thanks for the extra explanation of when the different approaches are beneficial.
I love you. Will definitely give bus-workflow a thorough shot.
And one very last question, if you could spare a bit more time...
For the user signup & event flow again: when Workflow calls EmailService.sendEmail (given that the communication is via RPC and EmailService.sendEmail is an async operation that resolves once the email is sent successfully), should Workflow wait for the sendEmail operation to resolve and then complete the flow?
Or should EmailService dispatch an EmailSent event so that Workflow can complete the flow?
This is a bit off topic, but I've been sticking with RPC-style calls rather than events, and I still don't know what the best practice is.