Hacker News | gck1's comments

Which social network would you choose if you wanted reach?

There isn't any that isn't run by questionable people.


I'm working on a project now and what you're saying is already true. I have agents that are able to handle other things apart from code.

But these are MY agents. They are given access to MY domain knowledge in the way that I configured. They have rules as defined by ME over the course of multi-week research and decision making. And the interaction between my agents is also defined and enforced by me.

Can someone come up with a god-agent that will do all of this? Probably. Is it going to work in practice? Highly unlikely.


> Windows cheats here

Slightly off-topic: it also cheats in how the TPM works for BitLocker when you use TPM + PIN. One would assume the PIN becomes part of the encryption key, but in reality it's just used as the auth for the TPM to release the key. So while it sounds like a two-factor solution, it's really single factor.

So BitLocker without TPM is actually a better idea, and Windows makes it very painful to set up if TPM is on.


I don’t know much about the TPM, but if it’s anything like Apple’s Secure Enclave, it should require exponentially longer waits after each incorrect PIN past the first one, making it so you can’t reasonably brute force it without getting lucky.

I’m not sure how the typical “two factor” best practices would interpret one of the factors basically self destructing after 10 guesses, but IMO it’s a pretty decent system if done right.


That's not the issue. The TPM isn't blinded in the above description, meaning that if someone cracks the TPM they can get your key. Ideally, both factors are always required to reconstruct the secret.

If you're wondering, yes this is a security issue in practice. There have been TPM vulnerabilities in the past that enabled exfiltration of secrets.


Aren't PINs usually short, and often made up of just digits in the first place? So would there be a real security benefit in adding one to the key?

You can make PINs as complex as you want; the only limitation is a maximum length of 20 characters. There's no difference between passwords and PINs in Windows, except that Windows calls it a PIN when it's used with the TPM. And yes, it does nudge you toward making it simple because "the TPM guarantees security", but you don't have to.

You've clearly touched on the problem with healthcare in general, though. If it's not life threatening, it's not taken seriously.

There are a lot of health-related issues humans can experience that affect their lives negatively without being life threatening.

I'll give you a good example: I've suffered from mild skin issues for as long as I can remember. It's not a big deal, but I want my skin to be in better condition. I went through dozens of doctors, and they all prescribed essentially some variation of the Tylenol equivalent of skin treatment. With AI, I've been able to identify the core problems that every licensed professional overlooked.


It depends on where you live and what the issue is.

Where I live, doctors are only good for life threatening stuff - the things you probably wouldn't be asking ChatGPT anyway. But for general health, you either:

1. Have to book in advance, wait, and during the visit the doctor just says it's not a big deal, because they really don't have the time or capacity for it.

2. You go private, the doctor goes on a wild hunt with you, you spend a ton of time and money, and then 3 months later you get the answer ChatGPT could have given you in a few minutes for $20/mo (and probably backed by better, more recent research).

If anything, the only time ChatGPT answers wrong on health-related matters is when it tries to be careful and omits details with the "be advised, I'm not a doctor, I can't give you this information" bullshit.


The second part is what I'd also like to have.

But I think it should be doable. You can tell it how YOU want the state to be managed and then have it write a custom "linter" that makes the check deterministic. I haven't tried this myself, but Claude did create some custom Clippy scripts in Rust when I wanted to enforce something that isn't automatically enforced by anything out there.
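As a sketch of what such a deterministic check could look like (the rule, the `STATE`/`set_state` names, and the pattern are all hypothetical, just to show the shape):

```python
import re

# Hypothetical project rule: application state may only be mutated
# through set_state(); direct writes to the global STATE dict are flagged.
FORBIDDEN = re.compile(r"\bSTATE\[[^\]]*\]\s*=")

def find_violations(source: str) -> list[int]:
    """Return 1-based line numbers that mutate STATE directly."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if FORBIDDEN.search(line)]

# Run over every source file in CI (or from an agent hook) and fail
# the build when any file returns a non-empty list.
sample = "STATE['user'] = load()\nset_state('user', load())\n"
print(find_violations(sample))  # → [1]
```

Because the check is a plain script rather than an LLM judgment, it gives the agent the same deterministic pass/fail signal a compiler does.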


Lints are typically well suited for syntactic properties or some local semantic properties. Almost all interesting challenges in software design and evolution involve nonlocal semantic properties.

> In frankness, the whole menu of /-commands is intimidating and I don't know where to start.

claude-code has a built-in plugin it can use to fetch its own docs! You never have to touch anything yourself; it can add the features to itself, by itself.


This is great advice. What I would add is to avoid the internal plan mode and build your own. The built-in one creates md files outside the project, gives the files random names, and makes them hard to reference in the future.

It's also hard to steer the plan mode or have it remember behavior that you want to enforce. It's much better to create a custom command with custom instructions that acts as plan mode.

My system works like this:

The /implement command acts as an orchestrator and plan mode, and it is instructed to launch a predefined set of agents based on the problem and have them utilize specific skills. Every time the /implement command is initiated, it has to create a markdown file inside my own project, and each subagent is also instructed to update that file when it finishes working.

This way, the orchestrator can spot when an agent misbehaves, and the reviewer agent can see what the developer agent tried to do and why it was wrong.


And you can automate all of this so that it happens every time. I have an `/implement` command that is basically instructed to launch the agents and then go back and forth between them. Then there's a Claude Code hook that makes sure all the agents, including the orchestrator and the agents it spawns, have respected their cycles: it runs `claude` with a prompt telling it to read the plan file and check whether the agents did what was expected in this cycle, and it gets executed automatically whenever an agent finishes.
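A minimal sketch of what such a hook script could look like. The plan-file path and the "## agent: <name>" / "Status: done" format are my own inventions (each setup defines its own); `claude -p` is Claude Code's non-interactive print mode:

```python
import re
import subprocess
from pathlib import Path

PLAN = Path("docs/plans/current-plan.md")  # hypothetical plan-file location

def incomplete_agents(plan_text: str) -> list[str]:
    """Return agents whose plan section lacks a 'Status: done' line.

    Assumes each subagent appends a '## agent: <name>' section to the
    plan file and marks it 'Status: done' when its cycle finishes."""
    pending = []
    for match in re.finditer(r"^## agent: (\S+)\n(.*?)(?=^## |\Z)",
                             plan_text, re.M | re.S):
        name, body = match.groups()
        if "Status: done" not in body:
            pending.append(name)
    return pending

def main() -> None:
    pending = incomplete_agents(PLAN.read_text())
    if pending:
        # Spawn a fresh reviewer instance in print mode to investigate.
        subprocess.run(["claude", "-p",
                        f"Read {PLAN} and verify that agents {pending} "
                        "completed their cycle; report any violations."])
```

The deterministic part (did every agent write its status?) is a plain string check; only the judgment call about *why* a cycle was skipped is delegated back to a model.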

Interesting. Another thing I'll try is editing the system prompts. There are some projects floating around that can edit the minified JavaScript in the client. I also noticed that the "system tools" prompts take up ~5% of the context (10 ktok).

This is kind of why I'm not really scared of losing my job.

While Claude is amazing at writing code, it still requires human operators. And even experienced human operators are bad at operating this machinery.

Tell your average Joe - the one who thinks they can create software without engineers - what "tools-in-a-loop" means, and they'll make the same face they made when you tried explaining iterators to them, before LLMs.

Explain to them how a type system, E2E tests, or integration tests help the agent, and suddenly they have to learn all the things they'd be required to learn to write it on their own.

