I’ve ended up in the same place as you. I had previously set up my gpg key on a Yubikey and even used that gpg key to handle ssh authentication. Then at some point it just stopped working; maybe the hardware on my key broke. 2FA still works, though.
In any case I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough. The fact that the private key never leaves the 1Password vault unencrypted and is synced between my devices is pretty neat. From a security standpoint it is indeed a step down from having my key on a physical key device, but the hassle of setting up a new Yubikey was not quite worth it.
I’m sure 1Password is not much better than having a passphrase-protected key on disk. But it’s a lot more convenient.
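For anyone wanting to replicate the setup, the client side is just an `IdentityAgent` line in `~/.ssh/config`. The socket paths below are the ones 1Password documents for macOS and Linux; verify them against 1Password's docs for your platform and version.

```
# ~/.ssh/config — point OpenSSH at 1Password's SSH agent socket.
# macOS path (1Password 8); on Linux it is typically ~/.1password/agent.sock
Host *
  IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"
```

With this in place, `ssh` and `git` pick up keys from the 1Password vault without the private key ever being written to disk unencrypted.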
> I had previously set up my gpg key on a Yubikey and even used that gpg key to handle ssh authentication. Then at some point it just stopped working, maybe the hardware on my key broke
Did you try SSHing in verbose mode to ascertain the error? Why did you assume the hardware "broke" without any objective qualification of an actual failure condition?
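For anyone debugging this kind of failure, standard OpenSSH gives you two useful views: `-G` prints the effective client configuration without connecting, and `-vvv` shows the full key-exchange and agent negotiation (the host here is just an example).

```shell
# Offline: show which identity files and agent the client would use
cfg=$(ssh -G github.com)
echo "$cfg" | grep -iE 'identityfile|identityagent'

# Online: attempt the connection with full debug output; look for lines
# like "Offering public key", "sign_and_send_pubkey", or agent errors:
#   ssh -vvv -T git@github.com
```

A hardware fault on a token usually shows up as the agent refusing the signing operation rather than the key simply not being offered.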
> I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough
How is trusting a closed-source, for-profit, subscription-based application with your SSH credential "secure enough"?
Choosing convenience over security is certainly not unreasonable, but claiming both are achieved without any compromise borders on ludicrous.
I love that Kagi puts the "monetization" icon right next to results so I can avoid navigating to them. This means I'm much less likely to click on Medium.com links and other monetized blogs and sites. Oftentimes the good content is on some personal website where the creator doesn't really care about earning money off it.
Another neat feature is the possibility to rank results or block them manually so you can lower the visibility of certain sites. Really helps push the scammy sites down.
Compare this to Google Search where the first half page is paid results (ads) and the rest of the results are of dubious quality. And you don't really have much of a way to influence your search results.
> love that Kagi puts the "monetization" icon right next to results so I can avoid navigating to them
One of the things I love about Kagi is that it isn't overly opinionated. I'm not particularly sensitive to this issue. You are. Yet until this comment, I didn't notice that Kagi was doing this. It informed you. It didn't get in my way. That's good design.
> Another neat feature is the possibility to rank results or block them manually so you can lower the visibility of certain sites. Really helps push the scammy sites down.
The ad-driven search engines refusing to implement this really drives home their conflicts of interest.
I don’t mind Medium being monetized, but I have the domain downranked, because posting on medium is a very strong signal that the content is worthless.
Scaleway is indeed the closest thing we have to AWS, Google Cloud and Azure by a European company. They are fast building out a comprehensive managed cloud with IAM, managed databases, containers, etc. I do hope they succeed. I've only used them for hobby projects, so my experience is limited to lighter workloads. But the UI is pretty good, and they have APIs and CLI for all operations.
This. Relying on developers manually following a style guide is a recipe for not having a consistent style. Instead, something like pgFormatter should be used. I'm not sure what the state of SQL formatters and IDE support is these days, nor how many command-line options there are.
And people who use things like DataGrip or other IDEs will probably format with their IDE's preferences unless there is a plugin for something like pgFormatter. This works well if there is a company-mandated editor/IDE, but not so well when you have developers across various editors and IDEs.
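For reference, a minimal pgFormatter run looks like the sketch below. The `-s` and `-u` flags are from pgFormatter's manual (`-s` indent width, `-u` keyword case: 0 unchanged, 1 lower, 2 upper, 3 capitalize); the invocation is guarded since the tool may not be installed.

```shell
# Create a deliberately ugly query to format
cat > query.sql <<'SQL'
select id,name from users where active=true order by name;
SQL

# pg_format is pgFormatter's CLI; check `pg_format --help` for your version
if command -v pg_format >/dev/null 2>&1; then
  pg_format -s 4 -u 2 query.sql
fi
```

Running this as a pre-commit hook or CI step removes style from code review entirely, regardless of which IDE each developer uses.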
A killer feature rustic has over restic is built-in support for .gitignore files, so all your dependencies and build output are automatically excluded from your backups.
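As a sketch, this is a single flag on the backup command. The `--git-ignore` flag is taken from rustic's backup options and the repository path is hypothetical; verify both against `rustic backup --help` for your version.

```shell
# Back up a workspace while honoring .gitignore files, so node_modules,
# target/, and other build output stay out of the snapshot.
cmd="rustic -r /backups/repo backup $HOME/code --git-ignore"
if command -v rustic >/dev/null 2>&1; then
  $cmd
else
  echo "rustic not installed; would run: $cmd"
fi
```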
Nice. Using `.gitignore` would simplify my Restic, Borg/Borgmatic, and Rsync-based backup scripts/configs. (Right now, I end up duplicating the same information in a few places, not very well.)
At first I thought that sounded great, but then I realized it would exclude files that I do want backed up, like `dir-locals-2.el`, which should be excluded from git but still backed up. There doesn't seem to be a great solution to that in general.
Wouldn’t you back up your git repos by pushing them somewhere? Even if that somewhere is a different directory on the same drive. Backing up your local working copy sounds a bit odd.
I see they have gotten support for S3 (and other storage providers) via OpenDAL. Might need to revisit rustic for my backup needs then! I once started looking at what it would take to build a GUI using Tauri (Rust backend <-> JS/Web frontend), but didn't have time to figure out the APIs.
What I really like about rustic is that it understands .gitignore natively, so you can back up your entire workspace without dragging a lot of dependencies, compiled binaries, and other unnecessary data into your backups.
As a developer who has worked extensively with React and Reagent (a ClojureScript wrapper around React), I actually enjoy this kind of syntax. Better that than some custom HTML templating syntax I need to learn in addition to the language.
It doesn't look too bad if one also breaks the code into multiple functions to make "layouts" and "components".
I have had lots of fun building with Bun, ElysiaJS, and HTMX. Might test your go library out as well. Looks pretty neat.
This! Requiring frequent changes just makes people who don't use password managers choose weaker passwords they can remember easily. And they'll almost guaranteed just choose the same password as before with a new suffix or prefix: "mychildhoodteacher1", "mychildhoodteacher2", etc.
It would be better to encourage users to use a single random four-word passphrase and stick to that forever. Add 2FA and you are golden. But legacy systems gonna legacy. I still see systems with max password lengths of 12 characters in the wild, and no 2FA to boot. It's been a while since I got my password back in clear text, though, so perhaps we're moving in the right direction.
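For illustration, a four-word passphrase can be generated with standard tools. The inline word list here is a tiny placeholder; a real setup would draw from a large dictionary such as the EFF word list.

```shell
# Pick four random words and join them with hyphens.
# The word list below is a stand-in for demonstration only.
words="correct horse battery staple orbit walnut canyon velvet ember quartz"
passphrase=$(echo "$words" | tr ' ' '\n' | shuf -n 4 | paste -sd '-')
echo "$passphrase"
```

With a real dictionary of ~7,000 words, four words give roughly 51 bits of entropy, which comfortably beats the typical "word plus digit" password.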
This is the number one thing that made Clojure work for me despite its being dynamically typed. Having confidence that values did not change under my feet after sending them into functions or across thread boundaries was so refreshing. In immutable-value languages, even if you technically might be sending a reference to an immutable value, you can at least practically think about it as pass-by-value.
I never really got into F# or Haskell (more than some tutorials) so can't really comment on the type safety part.
The value of static typing depends very much on the application domain. In a closed-world application domain where you can be reasonably sure that you know upfront all entities, their attributes, their valid ranges, etc, static typing is extremely valuable. That applies to domains like compilers, system software, embedded systems, and more.
In open-world domains, like business information systems, static typing is often an obstacle to fast adaptation.
Whereas immutability provides value in every domain, unless the performance requirement cannot be met.
I've heard that argument before, but it never really clicked, and I'm a former fan of dynamic typing.
The application domain is not relevant because you rarely know the domain up-front. Even if the domain is fully known by business stakeholders, it's not known by the application developers, and those domains can be vast. Application development is a constant process of learning the domain and of extending the existing functionality.
This is why all the talk about how LLMs are going to make it possible to replace programmers with people using prompts in English doesn't make much sense. Because the act of programming is primarily one of learning and translating requirements that are initially confusing and context dependent into a precise language. Programming is less about making the computer dance, and more about learning and clarifying requirements.
Static typing helps with refactoring, A LOT!
So when your understanding of the domain changes, YOU WANT static typing because you want to safely change already existing code. You want static typing precisely because it gives you “fast adaptation”.
It's the same argument for why someone would pick Clojure over other dynamic languages. Clojure's pervasive use of immutability gives you some guarantees, and with them a clearer view of an API's contract and how you can change it. But statically typed FP goes even further.
I've been involved in projects using dynamic typing (PHP, Perl, Ruby, Python) and, without exception, the code became a mess due to the constant evolution. This is one reason why we preferred a microservices architecture: it forces you to think of clear boundaries between services, and then you can just throw away or rebuild services from scratch. Large monoliths are much more feasible in statically typed languages, due to the support for refactoring. And no, while unit testing is always required, IMO it isn't the same thing.
Static typing exists on a gradient. An ounce of static typing is useful to help with refactoring, as you suggest, but the tradeoffs seem to quickly take over once you go beyond typing basics. Not even the static typing die-hards are willing to write line-of-business applications under a complete type system.
I, for one, prefer more static typing, rather than less. I prefer Scala, OCaml, F#, or Rust. And I've seen some difficult refactorings accomplished in Scala due to its expressive type system, although I can understand why it can be a turnoff.
The downside of having more static typing is a bigger learning curve, so you end up sacrificing horizontal scaling of software development (hiring juniors fast) for vertical scaling (doing more with fewer, more senior people).
Another downside is often a slower compiler, which changes how you work. Once the code compiles, it may be correct, but you end up doing less interactive development, so you work more in the abstract instead of interactively playing with the code. E.g., Python's `pdb.set_trace()` is rarely available in static languages. I've always found this difference between dynamic and static languages quite interesting.
> I, for one, prefer more static typing, rather than less. I prefer Scala, OCaml, F#, or Rust.
Why, then, don't you prefer languages with more static typing? Scala, OCaml, F#, and Rust are middle of the road at best.
It seems you're echoing that the pragmatic choice for a business application is to stick to typing basics (within some margin of what is considered basic).
I recognize there are diminishing returns, and also, while I don't want first-tier mainstream languages, at the very least I want second-tier mainstream languages :)
Haskell, for example, is harder to pick, and I wouldn't pick Idris even if I founded my own company.
I am a former fan of extreme static typing (think Haskell higher-order type classes), so, yes, I understand the value. But everything you build is very brittle in the face of changing requirements. When requirements change because developers are still learning the domain, that is fine; this churn is unavoidable. But if business people tell you that you have to pass some additional information through your system without doing anything to it, and you answer that you have to refactor all your type definitions, then you have some explaining to do.
> But everything you build is very brittle in the face of changing requirements.
It's supposed to be 'brittle', in the sense that the compiler verifies your code and if the logic changes, the compiler complains. Everything else is a bug.
> But if business people tell you that you have to pass some additional information through your system without doing anything to it, and you answer that you have to refactor all your type definitions, then you have some explaining to do.
And the explanation is that this is software development: there is no "without doing anything to it". If there's a new requirement, I need to adapt the code; take it or leave it, I'm not a wizard. And yes, I have already done that in some form, and it was almost always received well. "Business people" sometimes have no understanding of software development.
The logic "code changes -> only dynamic typing" isn't valid, in my opinion.
The data model did change though. Adding an extra field, even if you don't use it, changes the shape of your data.
Ultimately your data is going to be typed with or without your approval. It's unavoidable, because eventually the data needs to be bits on a disk or on the wire. It's just a matter of how aware of it you want to be.
If you can reasonably keep the shape, and all possible shapes, in your head then fine. But I think you'll find this becomes less feasible as systems grow, and even less feasible in a corporate environment when teams come and go.
Apologies for being pedantic here, but Clojure, too, is a strongly typed, dynamically typed language. This means types are inherent to the values, not the variables that hold them, providing both safety and flexibility: even though variables don't have fixed types, every value in Clojure has a type.
In contrast, JavaScript is weakly typed. Yet ClojureScript, which compiles to JS, retains Clojure's strong typing principles even in the weakly typed JavaScript runtime. That, to a certain degree, provides benefits that even TypeScript cannot.
TypeScript's type checking is limited to the program's boundaries and often doesn't cover third-party libraries or runtime values. TypeScript relies on static type analysis and type inference to catch type errors during development. Once the code is compiled to JS, all type information is erased, and the JS engine treats all values as dynamic and untyped.
ClojureScript uses type inference, runtime checks, immutable data, and dispatch mechanisms, optimized by its compiler, to achieve strong typing for the code running in the JS engine.
I mean, if a website claims to have tens if not close to a hundred "legitimate interest" cookies, I'm reasonably sure they are living off wildly invasive ad tracking. I immediately close these websites, just as you do.
It would be swell if more of the web were made by passionate people sharing knowledge for free. I know this is a privileged attitude, as creating content takes time, which is not free. But some of the best websites are the ones without monetisation. We need a better monetisation system for the web, one based on people paying for content instead of people being sold as user data.