Yes! Letta Code also has a "danger" mode: `--yolo`. If you're running Claude Code headless in a sandbox, Letta Code has that too; just do something like `letta -p "Do something dangerous (it's just a sandbox, after all)" --yolo`
Their business was always a bit chaotic, but the technical side of the organization was competent. We were colo'd in one of their datacenters, so it was nice to be able to rent additional capacity in the same facility. Servers were manually provisioned, but once provisioning was complete they were manageable via an online portal, as you'd expect.
So... not a glowing recommendation, I guess, given their corporate instability? But a recommendation nonetheless: the instability never impacted our technical operations, and the product was good.
OVH's Toronto datacentre is not operational yet. But you can pre-order.
It's going to be interesting to see what this does to pricing in the Toronto hosting and compute market, as that location has always been more expensive than other North American locations.
Right now it looks like OVH will not be offering their lower-tier offers out of the Toronto DC.
Hydro power is not green. The benefits are that it's cheap (in terms of cost to produce) and produces no CO2 emissions. The downside is that it makes it harder for fish to travel from the ocean to their spawning sites. Like all power sources, it has benefits and drawbacks, and it absolutely impacts the environment.
Hydro in the north of Quebec (where most of the electricity is produced) is not a fish-rich area at all. But yes, in some areas of the world that's a concern.
The things I've taken from scrum and use on every team:
- plan in 2-week chunks
- estimate in points (relative size to something you've already done), with an emphasis on consistent estimates from each dev.
- make sure you define what 'done' means, and make sure it relates to exactly what you're trying to measure (e.g. just coding effort, or work until the feature can ship, etc.). This is probably the trickiest bit.
- capture total velocity every 2 weeks and eventually use the avg for future planning (see the sketch after this list)
- review the entire process and modify things that take a lot of time for devs.
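A minimal sketch of what that velocity bookkeeping looks like in practice (all numbers are made up; Python just for illustration):

```python
# Hypothetical history: points completed in each 2-week sprint.
completed_points = [21, 18, 25, 22]

# Average velocity, used only for rough future planning.
avg_velocity = sum(completed_points) / len(completed_points)

backlog_points = 130  # hypothetical points remaining in the backlog
sprints_left = backlog_points / avg_velocity

print(f"avg velocity: {avg_velocity:.1f} pts/sprint")
print(f"~{sprints_left:.1f} sprints (~{2 * sprints_left:.0f} weeks) remaining")
```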
I have long since abandoned scrum for Kanban. I don't care about sprints and getting things done by the end of one. Just give me (or, since I'm the team leader now, often I'm the one giving) the next thing to work on, and when it's done I'll start the next. Nobody cares about what you got done this sprint; they care about what got into the next release. The next release includes a lot of manual testing: despite a very good automated test suite, we constantly discover serious bugs in manual testing that are difficult to automate.
We gave up on points. All anyone cares about is days. Thus it is better to retro on the days estimated vs. the days to deliver and make adjustments on our end. Nobody cares about days for an individual story anyway - they want the days for the complete feature (or at least enough of the feature that we can ship it).
Yeah, you're doing it wrong if scrum becomes deadlines. I've worked with people who had to pull all-nighters to get all the sprint content done before the sprint closed. I use it more as a window to do some cheap analysis of our progress.
They are always deadlines as long as the cycle is official. An informal status update can be done with the project management tool, a 1:1 meeting, or a quick team meeting (in that order).
> - capture total velocity every 2 weeks and eventually use the avg for future planning
I have never got to this stage. Someone is added to the team. Someone leaves the team. New team members get more knowledge. Old team members get sick or take a lot of leave. The focus of what you're working on moves from one part of the code base to another.
Every time you have to throw your velocity out the window because you're not the same team any more, and those metrics are for a different team that no longer exists.
You could argue points are useful as a discussion prompt to make sure there isn't some massive piece of complexity hiding in something (everyone says 3 points, and the quiet person who knows the most about it says 13), but even t-shirt sizing covers that, imo, and regardless, after that you should just throw them away.
Yeah, it won't work without a stable team. And that may be okay in a true agile environment, but I've always had a manager who wants some type of estimate/high-level schedule.
We do T-shirt sizes mapped to numbers, because recording effort in numbers lets you compute an average etc. (see the sketch below).
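A tiny sketch of that mapping (the specific size-to-number scale here is made up):

```python
# Hypothetical T-shirt-size-to-number mapping so effort can be averaged.
SIZE_POINTS = {"XS": 1, "S": 2, "M": 3, "L": 5, "XL": 8}

completed = ["S", "M", "M", "L", "XS"]  # sizes delivered this cycle

total = sum(SIZE_POINTS[size] for size in completed)
print(f"total effort: {total}, avg per story: {total / len(completed):.1f}")
```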
As an example, you'd write your entire API as a Flask app, then deploy that app to Lambda and send all requests to that one Lambda. As long as your startup time is quick (and your datastore is elsewhere, like in DynamoDB), it will work great for quite a while: Lambda will basically run enough copies of your app to handle all the requests.
You have to be careful to design it so that you don't rely on subsequent requests reaching the same machine, but you can also design it so that if they do reach the same machine it benefits, using a tiered cache (see the sketch below).
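A minimal sketch of that shape, assuming API Gateway in front and the apig_wsgi package as the WSGI-to-Lambda bridge (aws-wsgi and serverless-wsgi work similarly); the route and table name are made up:

```python
import boto3
from apig_wsgi import make_lambda_handler
from flask import Flask, jsonify

app = Flask(__name__)

# Tier 1: a module-level dict. It survives between invocations on a warm
# container, but any container can disappear at any time, so treat it as
# best-effort only, never the source of truth.
_local_cache = {}

# Tier 2: the authoritative datastore lives outside Lambda (DynamoDB here).
table = boto3.resource("dynamodb").Table("items")  # hypothetical table name

@app.get("/items/<item_id>")
def get_item(item_id):
    if item_id in _local_cache:  # fast path if we landed on a warm container
        return jsonify(_local_cache[item_id])
    resp = table.get_item(Key={"id": item_id})
    item = resp.get("Item")  # real code should convert Decimals before jsonify
    if item is None:
        return jsonify(error="not found"), 404
    _local_cache[item_id] = item  # warm tier 1 for any follow-up requests
    return jsonify(item)

# API Gateway invokes this handler; Lambda runs as many concurrent copies
# of the whole app as the request volume needs.
lambda_handler = make_lambda_handler(app)
```

Nothing here assumes two requests share a container, but when they do, the second one skips the DynamoDB round trip.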
z/OS (aka OS/360, aka MVS) supports programs going back to the '60s, and I just talked with a DE at IBM who is still using a program compiled circa the Apollo 11 mission.
Yes, you got it right. The Dorados now run a binary emulator on top of a microcomputer (x86_64) architecture, while IBM Z (itself essentially a 64-bit S/390 arch) kept a mainframe configuration.
What's a DE? Also did they tell you what the program did?
Other systems that will run or automatically translate >30-year-old binaries:
- I believe IBM i on POWER (i5/AS400) will run stuff from the System/38 (1980).
- HPE NonStop (aka Tandem Guardian) on x86-64 will run or translate binaries from the original proprietary TNS systems (late 1970s) and the MIPS systems (1991).