
I'm very interested in trying this out! I run Claude Code in a sandbox with `--dangerously-skip-permissions`. Is that possible with Letta?


Yes! Letta Code also has a "danger" mode: `--yolo`. If you're running Claude Code in a sandbox in headless mode, Letta Code has that too; just do something like `letta -p "Do something dangerous (it's just a sandbox, after all)" --yolo`.

More on permissions here: https://docs.letta.com/letta-code/permissions

Install is just `npm install -g @letta-ai/letta-code`


Do we have Windows support yet?


It should work! Our demo may not (as I haven't tested it, so don't want to advertise it).


I also recommend listening to the Drum History podcast, which has episodes with Zildjian and Sabian family members.


Clink + Windows Terminal + Git tools is the perfect setup IMO.


First 10 minutes are quite interesting. Analog vs digital, longevity of music, “open” formats.


Any recommendation for a provider with dedicated servers in the US?

I’m with a provider now who is phasing them out.


OVH have some in the US. I've only had good experiences with them. I like Server Hunter to get a general overview:

https://www.serverhunter.com/#query=product_type%3Adedicated...


At a prior company I'd occasionally lease dedicated servers from INAP (now calling that part of their business HorizonIQ after bankruptcy and reorg) - https://www.horizoniq.com/services/compute/bare-metal/

Their business was always a bit chaotic, but the technical side of the organization was competent. We were colo'd in one of their datacenters, so it was nice to be able to rent additional capacity in the same facility. Servers were manually provisioned, but manageable as you'd expect via an online portal after provisioning was complete.

So... not a glowing recommendation, I guess, given their corporate instability? But a recommendation nonetheless: the corporate instability never impacted our technical operations, and the product was good.


There are several dedicated server providers in the US, but not much is known about their track record.

OVH has a datacenter in Toronto, which may be close enough to the US for many people. They provide dedicated servers.


OVH's Toronto datacentre is not operational yet. But you can pre-order. It's going to be interesting to see what this does to pricing for the Toronto hosting and compute market, as that location has always been more expensive compared to other North American locations. Right now it looks like OVH will not be offering their lower tier offers out of the Toronto DC.


OVH operates a datacenter close to Montreal (BHS), with 8ms latency to NYC. It's also powered 100% by green hydro electricity.


Hydro power is not green. The benefits are that it's cheap (in terms of cost to produce) and produces no CO2 emissions. The downside is that it makes it harder for fish to travel from the ocean to their spawning sites. Like all power sources, it has benefits and drawbacks, and it absolutely impacts the environment.


Hydro in the north of Quebec (where most of the electricity is produced) is not a fish-rich area at all. But yes, in some areas of the world that's a concern.


No suggestions from my own experience, but Leaseweb or OVH are probably worth checking.


The things I've taken from scrum and use at every team:

- plan in 2 week chunks

- estimate in points (relative size compared to something you've already done), with emphasis on consistent estimates from each dev.

- make sure you define what 'done' means, and make sure it relates to exactly what you are trying to measure (e.g. just coding effort, or work until the feature can ship). This is probably the trickiest bit.

- capture total velocity every 2 weeks and eventually use the avg for future planning (see the sketch after this list)

- review the entire process and modify things that take a lot of time for devs.
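
A rough sketch of the velocity math, with entirely made-up numbers (the sprint history and backlog size here are hypothetical):

  import math

  # Hypothetical history: total story points completed per 2-week sprint.
  completed_per_sprint = [21, 18, 25, 23]

  # Average velocity across the recorded sprints.
  avg_velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 21.75

  # Rough forecast for a (hypothetical) 130-point backlog.
  backlog_points = 130
  sprints_needed = math.ceil(backlog_points / avg_velocity)  # 6
  print(f"~{avg_velocity:.2f} pts/sprint -> ~{sprints_needed} sprints")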


I have long abandoned Scrum for Kanban. I don't care about sprints or getting things done by the end of one. Just give me (or, since I'm the team leader now, often I'm the one giving) the next thing to work on, and when it is done I'll start the next. Nobody cares about what you got done this sprint; they care about what got into the next release. Each release includes a lot of manual testing: despite a very good automated test program, we constantly discover serious bugs in manual testing that are difficult to automate.

We gave up on points. All anyone cares about is days. Thus it is better to retro on the days estimated vs days to deliver and make adjustments on our end. Nobody cares about days for an individual story anyway; they want the days for the complete feature (or at least enough of the feature that we can ship it).
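
As a sketch of that retro, with made-up numbers (the history here is hypothetical):

  # Hypothetical (estimated days, actual days) pairs for delivered features.
  history = [(5, 8), (3, 4), (10, 13)]

  # Overall overrun factor: multiply future estimates by this to correct for bias.
  factor = sum(actual for _, actual in history) / sum(est for est, _ in history)
  print(f"Estimates run about {factor:.2f}x in practice")  # ~1.39x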


> despite a very good automated test program, we constantly discover serious bugs in manual testing that are difficult to automate

+1 :)

Yes, I could just upvote. But this deserves more emphasis than that.


Agreed, Scrum is a death march. Kanban is the way.


Yeah, you're doing it wrong if scrum means deadlines. I've worked with people who had to pull all-nighters to get all the sprint content done before the sprint closed. I use it more as a window to do some cheap analysis of our progress.


They are always deadlines as long as the cycle is official. An informal status update can be done with the project management tool, a 1:1 meeting, or a quick team meeting (in that order).


My experience is very different. Sprint wasn't a deadline in any of the companies I've worked at.


> - capture total velocity every 2 weeks and eventually use the avg for future planning

I have never got to this stage. Someone is added to the team. Someone leaves the team. New team members get more knowledge. Old team members get sick or take a lot of leave. The focus of what you're working on moves from one part of the code base to another.

Every time you have to throw your velocity out the window because you're not the same team any more, and those metrics are for a different team that no longer exists.

You could argue points are useful as a discussion tool to make sure there isn't some massive piece of complexity hiding in something (everyone says 3 points, the quiet person who knows the most about it says 13), but even t-shirt sizing covers that IMO, and regardless, after that you should just throw them away.


Yeah, it won't work without a stable team. And that may be okay in a true agile environment but I've always had a manager who wants some type of estimate/high level schedule.

We do T-shirt sizes mapped to numbers, because recording effort in numbers lets you get an average, etc.


“Past performance is not a predictor of future success”

Capacity planning only really works where you are creating the same thing over and over.

Otherwise I’d suggest it is better to just bring work in and work on it (kanban basically)


> estimate in points (relative size to something you've already done), emphasis on consistent estimates for each dev.

> capture total velocity every 2 weeks and eventually use the avg for future planning

This aspect of scrum has never made sense to me. Planning with average velocity turns points into an obfuscated time estimate - why use points at all?


It's a psychological trick to counter our bias toward scheduling optimism.


Sounds like an agile application of scrum.


Pretty cool! How do you persist storage?


The localfs driver uses littlefs and IndexedDB.


Later on I will add drivers for Google Drive, Dropbox, etc.


Hm. Won't that run afoul of same-origin stuff? (really asking - I don't know much about web security)


There are web APIs for accessing Google Drive and Dropbox, so I do not think so. But I have to try it to confirm.


> And did you know that you can deploy a monolith to Lambda and still get all the benefits of Lambda without building services

I did not know. Does anyone have pointers or examples on this?


As an example, you'd write your entire API as a Flask app, and then deploy that app to Lambda. Then send all requests to that one Lambda. As long as your startup time is quick (and your datastore is elsewhere, like in DynamoDB), it will work great for quite a while. Lambda will basically run as many instances of your app as needed to handle all the requests.

You have to be careful to design it such that you don't rely on subsequent requests coming to the same machine, but you can also design it so that if they do come to the same machine it helps, using a tiered cache.
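
A minimal sketch of the pattern, assuming the apig-wsgi adapter (the route, cache, and DynamoDB helper here are hypothetical):

  # app.py - the whole monolith is one Flask app behind one Lambda.
  from flask import Flask, jsonify
  from apig_wsgi import make_lambda_handler  # WSGI <-> API Gateway adapter

  app = Flask(__name__)

  # Tier 1: per-instance in-memory cache. A warm Lambda instance keeps this
  # between requests, but a fresh instance starts empty - treat it as an
  # optimization, never as the source of truth.
  _local_cache = {}

  def fetch_item(item_id):
      # Hypothetical stand-in for a DynamoDB lookup (tier 2, e.g. via boto3).
      return {"id": item_id}

  @app.get("/items/<item_id>")
  def get_item(item_id):
      if item_id not in _local_cache:
          _local_cache[item_id] = fetch_item(item_id)
      return jsonify(_local_cache[item_id])

  # API Gateway routes every path to this single handler.
  lambda_handler = make_lambda_handler(app)

The app stays one deployment unit; scaling is just Lambda spawning more instances of it.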


Even more “insanity”:

z/OS (aka OS/360, aka MVS) supports programs going back to the '60s, and I just talked with a DE at IBM who is still using a program compiled circa the Apollo 11 mission.


That's common in the mainframe world. Unisys (ex-Univac) still keeps its Dorado mainframes binary-compatible with the Univac 1100, released in 1962.


I think I remember reading a while back that System/360 binaries can still run on modern z/Architecture mainframes.


Yup, that’s the example I cited above.


Oops, missed that, I came in from the comments link.


I thought the Unisys mainframes have been running emulated on x86/x86-64 for a while? I assume they have some sort of binary translator.


Yes, you got it right. The Dorados now run a binary emulator on top of a microcomputer (x86_64) architecture, while IBM Z (itself essentially a 64-bit S390 arch) kept a mainframe configuration.


What's a DE? Also did they tell you what the program did?

Other systems that will run or automatically translate >30-year-old binaries:

- I believe IBM i on POWER (i5/AS400) will run stuff from the System/38 (1980).

- HPE NonStop (aka Tandem Guardian) on x86-64 will run or translate binaries from the original proprietary TNS systems (late 1970s) and MIPS systems (1991).


Distinguished Engineer?


Bingo.

