There are also rate limits on Google's side for incoming mail. Just by forwarding four domains to my Gmail I used to hit them quite often. Then two years ago I stopped forwarding and switched to the now-discontinued Gmail feature for fetching domain mail over POP...


I'm wondering, now almost three years after the Forgejo/Gitea fork, which side of the fork ended up better. Both still seem very active, with thousands of commits each.

I run a Gitea server (since long before the fork, constantly updated) that handles issues, pull requests, signed commits, CI/CD, actions, and even serves my containers and packages. It's been amazing.

Of course Forgejo can do the same. For those who’ve followed both projects closely — which fork would you say has come out ahead? Codeberg being Forgejo's SaaS offering likely gives them more resources, but I also wonder if that means their priorities lean more toward SaaS than self-hosting.


When I checked a couple months ago, Forgejo was getting quite a bit more developer activity, which makes sense to me given the reason for the split: https://honeypot.net/2025/05/14/gitea-vs-forgejo-development...


> their priorities lean more toward SaaS than self-hosting

It was FUD when the fork was announced, and it is FUD now. Look at the commercial images and what differentiates them from the MIT version — it's pretty much just SAML and not much else. Their actual development policy is "you pay us for the feature you need — we build it under MIT and ship it for everyone"; their collaboration with Blender is the most prominent example of this that I know of.

I've also been wondering whether to jump ship, and have been comparing release notes — how many features were shipped in the same period of time, which bugs were fixed, etc. I've seen no reason to migrate; Gitea continues to advance faster, even though Forgejo copies over some of its commits that still apply relatively easily.

Forget about commit counts, issues closed, and other artificial metrics — they're significantly inflated on Forgejo's side by heavy use of bots (like bumping dependencies) and merge commits (which Gitea's development process doesn't use). Look at the release notes.


How is Gogs, the original project, doing these days?


I use LLMs (like claude-code and codex-cli) the same way accountants use calculators. Without one, you waste all your focus on adding numbers; with one, you just enter values and check if the result makes sense. Programming feels the same—without LLMs, I’m stuck on both big problems (architecture, performance) and small ones (variable names). With LLMs, I type what I want and get code back. I still think about whether it works long-term, but I don’t need to handle every little algorithm detail myself.

Of course there are going to be discussions about what is real programming (just as I'm sure there were discussions about what was "real" accounting with the advent of the calculator).

The moment we stop treating LLMs like people and see them as big calculators, it all clicks.


The issue with your analogy is that calculators do not hallucinate. They do not make mistakes. An accountant is able to fully offload the mental overhead of arithmetic because the calculator is reliable.


> The issue with your analogy is that calculators do not hallucinate. They do not make mistakes. An accountant is able to fully offload the mental overhead of arithmetic because the calculator is reliable.

If you've ever done any modeling/serious accounting, you'll find that you feel more like a DBA than a "person punching on a calculator". You ask questions and then you figure out how to get the answers you want by "querying" excel cells. Many times querying isn't in quotes.

To me, the analogy of the parent is quite apt.


But the database doesn't hallucinate data; it always does exactly what you ask it to do and gives you reliable numbers, unless you ask it to do a random operation.


I agree databases don't hallucinate but somehow most databases still end up full of garbage.

Whenever people are doing the data entry you shouldn't trust your data. It's not the same as LLM hallucinations but it's not entirely different either.


I really don't understand the hallucination problem now in 2025. If you know what you're doing, know what you need to get from the LLM, and can describe it well enough that it would be hard to screw up, LLMs are incredibly useful. They can nearly one-shot an entire (edited here) skeleton architecture that I only need to nudge into the right place before adding what I want on top of it. Yes, I run into code from LLMs that I have to tweak, but it has been incredibly helpful for me. I haven't had hallucination problems in a couple of years now...


> I really don't understand the hallucination problem now in 2025

Perhaps this OpenAI paper would be interesting then (published September 4th):

https://arxiv.org/pdf/2509.04664

Hallucination is still absolutely an issue, and it doesn’t go away by reframing it as user error, saying the user didn’t know what they were doing, didn’t know what they needed to get from the LLM, or couldn’t describe it well enough.


That is why you check your results, if you know what the end outcome should be. It doesn't matter if it hallucinates. If it does, it probably already got you 90% of the way there, which leaves less work for you to finish it.


This only works for classes of problems where checking the answer is easier than doing the calculation. Things like making a visualization, writing simple functions, etc. For those, it’s definitely easier to use an LLM.

But a lot of software isn’t like that. You can introduce subtle bugs along the way, so verifying is at least as hard as writing it in the first place. Likely harder, since writing code is easier than reading for most people.


Except verifying code is much harder than writing it.


Exactly, thank you.


I recognize that an accountant’s job is more than just running a bunch of calculations and getting a result. But part of the job is doing that, and it would be a real PITA if their calculator was stochastic. I would definitely not call it a productivity enhancer.

If my calculator sometimes returned incorrect results I would throw it out. And I say this as an MLE who builds neural nets.


You still make mistakes. Just because you did it yourself doesn't mean it's error free. The more complex the question the more error prone.

Thankfully, the more complex the question, the more likely it is that there's more than one way to derive the answer, and you can use that to check.


Replace calculator with the modern equivalent: Excel.

It does make mistakes and is not reliable[0]. The user still needs to have a "feel" for the data.

(to be pedantic "Excel" doesn't make mistakes, people trusting its defaults do)

[0] https://timharford.com/2021/05/cautionary-tales-wrong-tools-...


With an LLM there is no learning curve though (or a minimal one at best). No expert can prevent an LLM from hallucinating, even (and especially) the people building them.


> (to be pedantic "Excel" doesn't make mistakes, people trusting its defaults do)

So what is your point? An expert who has mastered Excel doesn't have to check that Excel calculated things correctly; they just need to check that they gave Excel the right inputs and formulas. That is not true for an LLM: you have to check that it actually did what you asked, no matter how good you are at prompting.

The only thing I trust an LLM to do correctly are translations, they are very reliable at that, other than that I always verify.


"Just" check that every cell in the million row xlxs file is correct.

See the issue here?

Excel has no proper built-in validation or test suite, not sure about 3rd party ones. The last time I checked some years back there was like one that didn't do much.

All it takes is one person accidentally or unknowingly entering static data on top of a few formulas in the middle, and nobody will catch it. Or Excel "helps" by changing the SEPT1 gene to "September 1, 2025"[0] - this case got so bad they had to RENAME the gene to make Excel behave. "Just" doing it properly didn't work at scale.

The point I'm trying to get at here is that neither tool is perfect and both require validation afterwards. With agentic coding we can verify the results, we have the tools for it - and the agent can run them automatically.

In this case Excel is even worse, because one human error can escalate massively and there is no simple way to verify the output; Excel has no unit-test equivalents or validators.

[0] https://www.progress.org.uk/human-genes-renamed-as-microsoft...


You are describing a garbage in, garbage out problem. However, LLMs introduce a new type of issue, the “valid data in, garbage out” problem. The existence of the former doesn’t make the latter less of an issue.

“Just” checking a million rows is trivial depending on the types of checks you’re running. In any case, you would never want a check which yields false positives and false negatives, since that defeats the entire purpose of the check.


That's why you tell claude code to write tests, and use them, use linting tools, etc. And then you test the code yourself. If you're still concerned, /clear then tell claude code that some other idiot wrote the code and it needs to tear it apart and critique it.

Hallucination is not an intractable problem, the stochastic nature of hallucinations makes it easy to use the same tools to catch them. I feel like hallucinations have become a cop out, an excuse, for people who don't want to learn how to use these new tools anyway.


> That's why you tell claude code to write tests, and use them

I've seen Python unit tests emitted by an LLM that, for a given class under test, start with:

    def test_foo_can_be_imported(self):
        try:
            from a.b.c import Foo
        except ImportError:
            self.fail()

    def test_foo_can_be_instantiated(self):
        from a.b.c import Foo
        instance = Foo()
        self.assertIsNotNone(instance)
        self.assertTrue(isinstance(instance, Foo))

    def test_other_stuff_that_relies_on_importing_and_instantiating_foo(self):
        ...
And I've watched Cursor do multiple rounds of

"1: The tests failed! I better change the code. 2: The tests failed! I better change the tests. GOTO 1"

until it gets passing tests, sometimes by straight out deleting tests, or hardcoding values to make them pass.

So I don't have the same faith in LLM-authored tests as you do.

> I feel like hallucinations have become a cop out, an excuse, for people who don't want to learn how to use these new tools anyway.

I feel like you've taken that attitude so you can dismiss concerns you don't agree with, without having to engage with them. It's disappointing.


> you now have to not only review and double-check shitty AI code, but also hallucinated AI tests too

Gee thanks for all that extra productivity, AI overlords.

Maybe they should replace AI programmers with AI instead?


I said to make the chatbot do it, not to do all the reviewing yourself. You can do manual reviews once it makes something that works. In the meantime, you can be working on something else entirely.


> In the meantime, you can be working on something else entirely.

Like fixing useless and/or broken tests written by an LLM?

(Thank you, AI overlords, for freeing me from the pesky algorithmic and coding tedia so I can instead focus on fixing the mountains of technical debt you added!)


It depends on how much you want the LLM to do. I personally work at the function level and can easily verify whether it works with a glance and a few tests.


I'm assuming based on the granularity you're referring to autocomplete, and surely that already doesn't feel like dialup.


Browser password managers with passkeys are more convenient for me, but a pass vault can still be useful for recovery codes and API keys.

I used pass for a while but couldn’t see what threat model it actually solves:

If you let GPG agent cache your key, any script (e.g. an npm post-install) can just run `pass ls` or `pass my/secrets` and dump all your credentials. At that point it’s basically just full-disk encryption with extra steps—might as well keep everything in ~/passwords.txt.

If you don’t cache the key, you’re forced to type your long GPG password every single time you need a secret.

I tried a YubiKey for on-demand unlocking, but the integration is clunky and plugging it in constantly is a pain if you need passwords multiple times per hour.

I eventually switched to Bitwarden.
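
To make the first point concrete, this is roughly the sort of thing any script running as your user could do while the agent still has your key cached. A hedged sketch only: the store path is just pass's default, and the loop is illustrative, not a real exploit:

    # While gpg-agent still has the key cached, any process running as your
    # user can call `pass` and read every entry in the store without a prompt.
    import subprocess
    from pathlib import Path

    store = Path.home() / ".password-store"   # pass's default store location

    for gpg_file in store.rglob("*.gpg"):
        entry = str(gpg_file.relative_to(store))[:-len(".gpg")]
        # `pass show <entry>` decrypts via the cached agent key
        secret = subprocess.run(["pass", "show", entry],
                                capture_output=True, text=True).stdout
        print(entry, "->", secret.splitlines()[0] if secret else "<no output>")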


That’s true for any password manager. If the database/store is unlocked (so the master password is cached or available in RAM), all passwords can be extracted. You have to lock the password manager when you don’t need it.

In fact, with Bitwarden, the cached password is exposed to the browser, which has a large attack surface (including interacting with random remote servers). There was just a vulnerability in most browser-based password managers, including Bitwarden, that would allow a remote attacker to trick a user into sending out their passwords.

I use Bitwarden but mostly for non-critical passwords.


Doesn't good 2FA minimize a large attack surface like this?

I like the idea of storing password data in individual encrypted files and using git to store changes, but I wonder if it creates more friction to retrieve the information. I haven't tried this solution yet; I will when I get more time. It seems like this solution would benefit from a more standardized specification for storing and retrieving information. I know it's not every person's cup of tea, but maybe some kind of separate add-on for streamlining this process could be beneficial.


>That’s true for any password manager

Modern operating systems isolate individual apps such that a malicous app can not access the RAM of another app. There is a difference between not making an effort to protect passwords and requiring an OS exploit to do so.


Memory isolation doesn't really help, though. If you have a malicious process running under the same user account as your password manager, it's still game over since that process could e.g.

- capture keyboard input

- capture your screen

- silently install browser extensions to capture your credentials

- modify your shell config, .desktop files, $PATH, … to have you e.g. call a backdoored version of your password manager, or put a modified version of sudo on your $PATH that logs your password (=> root access => full memory access)

- …


You can use Qubes OS for true VM-level isolation, or use hardware security keys where possible, or run sensitive applications in dedicated VMs.

I think that in general it is game over the moment you have malicious processes running. I use firejail for most applications, which I believe is the bare minimum, or bubblewrap.


Yeah. Personally, I'm crossing my fingers for SpectrumOS[0] to make things a bit easier. As the developer notes on her website[1]:

  <qyliss> I have embarked on the ultimate yak shave
  <qyliss> it started with "I wish I could securely store passwords on my computer"
  <qyliss> And now I am at the "I have funding to build my own operating system" level

[0]: https://spectrum-os.org/

[1]: https://alyssa.is/about/


What else can you tell me about Spectrum OS? Is it actively maintained? Is it usable? How does it compare to Qubes OS?

Also what do you think about Subgraph OS[1]? Although I think it is not maintained anymore, or is it?

[1] https://subgraph.com/img/sgos.png (old image which I remembered it by) (https://web.archive.org/web/20241206072718/https://subgraph....)


I don't know how usable SpectrumOS is so far – I guess we'd have to compile it ourselves in order to find out. Either way, it is being developed quite actively, see https://spectrum-os.org/git/

As for how it compares to Qubes, I don't think I'll be able to tell you more than https://spectrum-os.org/design.html & friends. I suppose the upshot is:

- KVM instead of Xen

- One VM per application

- Single file system for user data (to which users can grant VMs access on a folder-by-folder basis)

- Package system from NixOS (nixpkgs) for reproducibility & immutability


Ugghh, once again I forgot that HN removes line breaks unless you use double line breaks or indent by 2 spaces, and now it's too late to edit my comment.

@dang People keep running into this. (See e.g. this comment[0] from a few days ago.) It also makes it rather awkward to write lists IMO. What's the reason for removing line breaks and could this be changed?

[0]: https://news.ycombinator.com/item?id=44946386


On modern operating systems, capturing keyboard input is locked down to avoid keyloggers. Capturing your screen requires explicit user permission, popping up a dialog. Apps are isolated, so another app can't interfere to install a browser extension, modify shell configs, etc.


And those modern operating systems would be … ? macOS, I assume?


Can you name one of these modern operating systems?


iOS is a modern operating system.


It's also impossible to use it for anything productive.


The OS protections apply to all applications. In addition, the job of agents like gpg-agent or ssh-agent is to protect secret keys while they are cached (like preventing the OS from writing keys to swap). You can configure them to erase keys after a certain time, require the user's confirmation for each key operation, or store GPG keys in an internal TPM or an external HSM, and clients talk to the agent through specific sockets.

Unlike browser-based password managers, the agents don't continuously interact with browser code and remote elements (they probably don't have network access at all).

One area that matters, which I forgot to mention in my comment below, is that as a result of all the above, pass doesn't check domains and doesn't protect against phishing. There might be extensions, but at that point you might as well use KeePassXC.


I store my passwords in a SQLite database on an encrypted partition. My script grabs the password and immediately closes the partition afterwards.

You can also just encrypt your passwords into individual files (one per password) and have your script clear the gpg agent after a passfile is decrypted (something like the sketch below).
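
A rough sketch of what I mean, assuming a made-up ~/.pw directory of one GPG file per password (not a real tool, just the idea):

    # Hypothetical sketch: one GPG file per password under ~/.pw/, and the
    # agent's passphrase cache is flushed right after each decryption.
    import subprocess, sys
    from pathlib import Path

    STORE = Path.home() / ".pw"   # assumed directory of <name>.gpg files

    def get_secret(name):
        encrypted = STORE / (name + ".gpg")
        # gpg prompts through the agent/pinentry as usual
        secret = subprocess.run(
            ["gpg", "--quiet", "--decrypt", str(encrypted)],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        # flush the agent's passphrase cache so nothing stays unlocked
        subprocess.run(["gpg-connect-agent", "reloadagent", "/bye"],
                       check=True, capture_output=True)
        return secret

    if __name__ == "__main__":
        print(get_secret(sys.argv[1]))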


If you can spare a USB port you can use one of their Nano keys that just stays plugged in.

Even if someone/malware was to steal my yubikey pin they'd still need to convince me to tap the thing over 1,000 times to steal all my passwords.


I just leave my yubi plugged in. It requires a physical touch anyway (at least you can configure it for that which I have). And my place is physically secure.

The good thing also is that unlike with fido2 you only have to enter the pin once for OpenPGP. Then it stays unlocked while it's plugged in. But still needs the physical touch for every password. Perfect and convenient for me.

It also works great on mobile with openkeychain and password store. Both are not really maintained now but I don't really care because the encryption is in hardware anyway (yubikey over nfc)


> a pass vault can still be useful for recovery codes and API keys

You might already be aware of this, but Bitwarden also has a CLI client that can be used for this purpose, at least casually.


And can run a local webserver to expose an API (though they still need to tighten up security on it)


I can't remember how, but pass works for me in Brave and Firefox, as well as on mobile. It's my only password manager. I'm assuming it's via some browser plugin.


You can configure the YubiKey to need a PIN and/or touch to authorise the use of a GPG key.

My main issue with pass is that it doesn’t work great on iOS with yubikeys.


Is the biometrics step (fingerprint reader) on macOS much different from a YubiKey? I imagine the implementations have some differences, but in practice it seems I can already protect access to my GPG key using the built-in reader, so what's the advantage of a YubiKey in that respect? Genuinely curious.


TouchID is bound to a device - of course, I could copy my secret into a secure enclave that is only accessible through TouchID, or even just store my GPG key there. With a YubiKey, I generate the key on an airgapped device and store it on the YubiKey. No other piece of hardware ever needs to see my secret key in plaintext. I could achieve the same with TouchID by generating the secret key inside the enclave, but then I cannot move the secret key out without some other computer bearing witness to it.

I really do not want to give Apple any more leverage over me, I'm looking to minimize it.


It took a while to get it to work well, but I use a YubiKey here and recommend it. I do need to find it and plug it in sometimes, but overall I might just leave it plugged in. And I have it configured to require a touch for every operation.


Is bitwarden in some way able to protect passwords while still being unlocked?


It’s great on the ESP32 with MicroPython. It even has support for server-sent events (SSE). Paired with htmx, SSE gives some fun interactive web experiences for IoT devices - instant GPIO status indicators, etc. Loved tinkering with it. The source code is very readable too.
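
If anyone's curious, here is a bare-bones sketch of the SSE part at the raw socket level. No particular framework; the pin, port, and markup are placeholders rather than my actual setup, and it handles one client at a time:

    # Minimal MicroPython-style SSE endpoint streaming a GPIO state once per
    # second, e.g. for the htmx SSE extension to swap into the page.
    import socket, time
    from machine import Pin

    pin = Pin(0, Pin.IN, Pin.PULL_UP)   # assumed input pin

    server = socket.socket()
    server.bind(("0.0.0.0", 80))
    server.listen(1)

    while True:
        client, _ = server.accept()
        client.recv(1024)               # ignore the request details in this sketch
        client.send(b"HTTP/1.1 200 OK\r\n"
                    b"Content-Type: text/event-stream\r\n"
                    b"Cache-Control: no-cache\r\n\r\n")
        try:
            while True:
                # each "data:" line is one SSE message with the current pin value
                client.send(("data: <span>GPIO0 = %d</span>\n\n" % pin.value()).encode())
                time.sleep(1)
        except OSError:
            client.close()              # browser went away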


I thought structured output is a solved problem now. I've had consistent results with ollama structured outputs [1] by passing Zod schema with the request. Works even with very small models. What are the challenges you're facing?

[1] https://ollama.com/blog/structured-outputs
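
For reference, this is roughly what it looks like with the Python client and Pydantic instead of Zod, along the lines of that blog post; the model name and fields are just placeholders:

    from ollama import chat
    from pydantic import BaseModel

    class Country(BaseModel):
        name: str
        capital: str
        languages: list[str]

    response = chat(
        model="llama3.1",
        messages=[{"role": "user", "content": "Tell me about Canada."}],
        format=Country.model_json_schema(),  # constrain output to this JSON schema
    )

    country = Country.model_validate_json(response.message.content)
    print(country)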


Structured output is solved; it's structuring data that's not, because that is an unbounded problem. There is no limit to how messy your data may be, and no limit to the accuracy and efficiency you may require.

I have used such models to structure human-generated data into something a script can then read and process, getting the important aspects of the data (e.g. what time the human reported doing X, for how long, with whom, etc.) into something like a CSV file with columns for timestamps and whatever variables I am interested in.


For anyone who thinks it isn't "solved", outlines debunked the paper which claims that "structured generation harms creativity":

https://blog.dottxt.co/say-what-you-mean.html


If the argument for a password login is being able to log in from anywhere, just store a spare ssh key (password protected) in your gmail or similar that's reasonably safe and accessible from anywhere.

But I'm having a hard time imagining those "anywhere" machine scenarios. Strangers' machines that you trust enough to connect to your servers, and are able to install PuTTY or your preferred SSH client of choice on? Better to just have SSH on your own phone and laptop.


> I'm having hard time imagining those "anywhere" scenarios

Hold my beer.

You ski in the Alps, it's noon, and you get an alert that your DB is down.

You know this may happen because of invasive bots, and you know what to do, so you just find a calm spot at the high-altitude cafe, ssh from the phone, find the infringing bot's IPs, block them with ipset and send yourself an email to deal with the problem properly later.

Then you ski happily until dusk, knowing that users won't be affected.


I think "anywhere" here has to mean "any random device you come across", not merely "any strange location", as the premise is being able to log in with just a password rather than a key... I often use my phone to do tasks, but I do it with an ssh key on my phone.


Back when I worked from my phone while in the ski-lift line, the solution really was to keep an SSH key on the phone if I intended to do any work from it.

If I really had to access work resources from any random device, I'd go through the ordeal of logging into the SSO to log in to the web console to open a temporary cloud SSH session with the multiple layers of 2FA and probably even SecOps manual approvals that's likely required.


For some reason I don't mind an ephemeral SSH session on a random device but I'm less likely to do webmail/email.


As someone who works with SREs every day, this breaks my heart.

1 - Don't be on-call while going to ski

2 - fail2ban and other automated systems can do this for you

3 - Passwords suck and are typically not regularly rotated unless you're using some centralized IdP

If you're in this situation you have already failed. If you use password auth use 2FA as well, and then I don't cry, it's just toil though.


1. It breaks my heart to see indie dev spirit die even on HN.

2. It's brittle and too automated for my taste. There may be false positives that I'd fail to review if it were too automated.

3. There should be a very limited set of passwords for your main assets. For instance, one for infrastructure, one for a password manager, one for the safe at home. And they should never be rotated. They are meant to be ingrained in muscle memory and stay with you for many years.


> There may be false positives that I'd fait to review if it was too automated.

On my little VPS, fail2ban has added over 23,000 IPv4 addresses to its f2b-ssh ipset. There is no way I'm reviewing that manually.

For what it's worth, I don't allow passwords, so there is not a lot of additional security to be gained from fail2ban; that's not why I use it. I use it because hundreds of login attempts bring my very cheap VPS, with bugger-all RAM, to its knees. I don't particularly care that it runs like a dog when it's on its knees, but the OOM killer taking out the services I actually use it for is a step too far.

> it's brittle and too automated to my taste.

That problem largely disappears when you get rid of passwords. Fail2ban triggers on failures, and allowing passwords means you must tolerate some failures. People don't mistype public keys.


> ssh from the phone

That strengthens the previous commenter's point. That personal phone is not an "anywhere" device but one that already carries the necessary software and can either interface with your YubiKey or carry your encrypted keys.

A better example would be the same ski trip, but where the data connection is bad or nonexistent, so you borrow the hotel's computer to make the emergency fix.

We used to do things like that, complete with post trip password rotations. I carried a laminated card in my wallet with the important key fingerprints. But with devices like the yubikey and cheap international data roaming, that has gotten less common.


A Google or Apple phone carrying encryption keys to my precious servers? Hm... I feel already pwned.

Jokes aside, I cannot be bothered installing SSH keys on my phone. Phones change, get broken, or get stolen. SSH clients on phones change as well and cannot always be relied upon. I want to be 100% sure I have SSH access to my servers in whatever improbable situation.

As for Yubikey... I used it for a while as a keyboard emulator to generate a string to prepend to my corporate laptop password that had insane strength requirements.

For personal and small business auth... it is too complex and brittle.

And frankly, what's the problem with a strong password? Like... a quote from Nietzsche translated into a mix of French and Dutch with a couple of special chars thrown in?


We can all dream up improbable scenarios that will neuter reasonable planning and precautions.

I have traveled full-time and worked remotely for over a decade. I have lost my phone once. Both Apple and Android phones sync passwords and SSH keys (if you set it up) to their encrypted cloud services. If you get a new phone, everything comes back.

I put my most crucial keys and backup codes on a biometric-locked USB key that I protect along with my passport. I have never needed to use it, but in case I lose my phone and can’t get into my cloud account I have that.

I use a Yubikey for 2FA where supported, I have two, one handy and one secured with my passport.


Yubikey with libfido works beautifully.

>As for Yubikey... I used it for a while as a keyboard emulator to generate a string to prepend to my corporate laptop password that had insane strength requirements.

Wtf? Tell me you don't know how to use a yubikey without telling me you don't know how to use a yubikey.


I bet you did not know Yubikeys have keyboard emulation mode )


Lol. I'm pretty sure everyone has a coworker that has accidentally "keyboard emulated" their OTP into a public slack message.


Another one: you sold an online business and forgot about it until the moment the buyer contacts you asking for a meeting, exactly when you're deciding whether to go to the bomb shelter or risk staying in the apartment building so conveniently located next to a dam that protects Kyiv from flooding.

You decide that staying on the 9th floor on the path of cruise missiles to the dam is too risky, pick up your good old Toughbook that has enough juice to last until dawn, and go downstairs, asking the buyer over the phone to reset the root password and send it over WhatsApp.

Once installed in the shelter, you quickly realize the disk is full, clean the logs, and give further instructions to the buyer to pass on to his IT.


Instead: you WhatsApp your public SSH key to the buyer and log in once they confirm your key has been added.

I have had to send my SSH public key over all sorts of messaging platforms.


No way this person would understand what I want him to do. And if he did not understand, he would grow suspicious. No, no, and no again.


Just making sure I understand.

You have sold your business but are still responsible for IT support.

You are responsible for IT support but don't already have a defined access path.

The new buyer knows what a root password is and how to gain access to a Linux machine and reset it, but does not know what an SSH key is, or how to check for a full disk.

Despite clearly being a (very specific kind of) novice the new owner is suspicious of the person responsible for his IT giving him instructions he doesn't understand?


Nope.

It happened once and I hope it won't happen again. I make myself available out of courtesy, and I made a point that I will not be remunerated outside of the long-expired knowledge transfer agreement.

Funnily, I got remunerated this one time with a lavish gift card for Neuhaus, but it only proves the point.

Of course the buyer knew the root password, details of the setup and all passwords reset procedures were part of the deal.

And of course they can reset the password when properly guided, people are not dumb even when they are not software engineers.

OTOH, most people are totally unaware of Diffie-Hellman key exchange or the roles of public and private keys, and they have limited patience and even less interest in learning new things in a stressful situation.

And yes, people with money and authority have a particular distrust for people with skills and knowledge.


> It happened once and I hope won't happen again.

I should certainly hope not. I definitely won't be making "I'm providing free IT support while being bombed to someone who apparently thinks I'm some kind of conman instead of trying to help" a priority scenario in my infrastructure planning.


If I’m skiing in the alps there’s no fucking way I am on call, and you shouldn’t accept it either…


Can you imagine that some people are their own bosses, with no backup whatsoever?


One person isn't enough to run a business with a 24/7 support requirement.


Peter Levels enters the room )


When the outage started, duckduckgo.com just returned no results for me, with the search bar still visible. The DDG homepage was still working. I've been using "my search term !g" for now; DDG just redirects the search to Google, so I don't have to change the search provider in my browsers.


Exactly. I've been using dehydrated on two servers since it was named acme.sh (pre-2015) and have updated it once since. It just works, with no dependencies.


Dehydrated and acme.sh seem to be different projects. Is one of them a fork?

https://github.com/acmesh-official/acme.sh

https://github.com/dehydrated-io/dehydrated


Brainfart on my side. When I first installed dehydrated, it was originally named letsencrypt.sh (not acme.sh) and was later renamed to dehydrated. That was quite a while ago.


My favourite variation on this is the short movie by Joel Haver https://www.youtube.com/watch?v=hnUpTyKSjag


"Who? Who are the 12 people who found this helpful?"

