
I love it. I loved my Pebbles. The only thing that always annoyed me was the design.

My dream is that the Pebble eventually becomes a beautiful timepiece. Maybe Teenage Engineering could be convinced to give the Pebble a design refresh.


Cool. In the meantime, Google Assistant still fails to reliably call my contacts via voice command. And trying to use Google's AI offering as a paying customer of Google Apps for Work is a giant shitshow. So much so that I'm finally contemplating dropping the Google ecosystem altogether.


How do they know the study participants consumed honey and not corn syrup?


How do they know 1803 study participants were women?


The developer time required to learn and properly use nix makes it unattractive to most teams. The benefits don't outweigh the costs of adoption.

Instead of debugging code, the team would have to spend significant time maintaining the build system for the build system's sake. Don't get me wrong, I want something nix-like in my toolbox. I want to love nix. But I wouldn't dare ask my team to commit to the world of pain that comes with it.

There's a good reason that nix hasn't seen wide adoption in the industry.


In my experience, Nix is very high leverage. My company has ~5 nix gurus, but Nix is invisibly used by hundreds of engineers. Most engineers know we use Nix and that's about it.


Similar experience for me. In my company, adopting nix paid off within weeks, with no prior experience. Very happy with it almost 10 years later and at a much larger scale. The difference between things working reliably or not is hard to overstate.


I tried using Nix but stopped for two very practical reasons: it's very slow and it's extremely disk heavy. Install a couple of things and suddenly your nix store weighs in at 100 GB.


Use only stable Nix. Override nixpkgs for any inputs you add. After the first build, rebuild with the offline and no-substitute flags and alias that command. Use nix-direnv.

Read up on the store/GC settings and configure them so they work for you. Don't use nix-env or nix profile.
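The "override nixpkgs for inputs you add" part is the flakes follows mechanism; a minimal sketch (the extra input name is made up):

    {
      inputs = {
        # stay on a stable nixpkgs branch only
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
        sometool = {
          url = "github:example/sometool";     # hypothetical extra input
          inputs.nixpkgs.follows = "nixpkgs";  # reuse the pinned nixpkgs instead of its own copy
        };
      };

      outputs = { self, nixpkgs, sometool }: {
        # packages/devShells all built against the single pinned nixpkgs
      };
    }

After the first successful build, something like alias nb='nix build --offline --no-substitute' keeps rebuilds off the network, and nix-direnv caches the dev shell so it isn't re-evaluated every time you enter the directory.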


Interesting. For me it's generally much faster than other package managers. The evaluation takes some time, but copying derivations from a cache to the Nix store is so much faster than traditional package management.

I wonder if you somehow ended up eval'ing many versions of nixpkgs?

> your nix store weighs in at 100 GB

¯\_(ツ)_/¯ outside very constrained devices, who cares? I just checked my NixOS dev VM that I have used for months now and cannot remember when I last garbage collected. It's 188GiB, but I have many different versions of CUDA, Torch, etc. (the project I'm currently working on entails building some kernels for many different build configurations), and I run nixos-unstable, where a lot of stuff changes, so generations are pretty unique.

A 2TB NVMe SSD is just over 100 Euro. Caring about 100GiB seems to be optimizing for the wrong things.

I completely agree on embedded machines though. Just deploy by copying the system closure and garbage collecting everything but the previous closure (kept as a backup); it'll be pretty much the same size as any other Linux system.


> For me it's generally much faster than other package managers.

I don't know what kind of package manager you were using, but I've never seen an update take a good part of an hour before Nix.

> outside very constrained devices, who cares?

Seriously, are we going to shame people who can't afford to buy lots of storage?? My smaller laptop has only 250GB, but that's freaking plenty if I stick with apt. But I can barely run Nix on it.


> Seriously, are we going to shame people who can't afford to buy lots of storage??

It's not just storage, though: storage may be cheap, but once your machine is at capacity (the physical space in laptops is an important constraint) you have to replace perfectly good hardware to accommodate absurdly space-hungry software (looking at you, Vivado).

Also, don't forget that not everyone has always-available, fast, reliable, cost-free internet. By rural standards my connection's very good, but 100 GB would still tie it up for several hours, assuming I didn't need it for anything else in that time.

Digital wastefulness is a problem, and I do think we need to take it more seriously.


> but 100 GB would still tie it up for several hours, assuming I didn't need it for anything else in that time

Except that Nix does not download 100 GiB unless you are installing a gazillion packages. First, Nix downloads compressed output paths. Second, it's not like Nix packages are substantially larger than Debian, Ubuntu, or Fedora packages. The extra storage space comes from (1) Nix keeping multiple generations to allow you to roll back to previous versions of the system -- if you break something, you can always roll back; and (2) people using multiple different versions of nixpkgs, which can lead to having multiple versions of system libraries.

(1) is a feature of Nix/NixOS; if you want to use less space, you can trade the ability to roll back for space. You could always garbage collect everything except the current generation, and it would be similar to other distributions. For (2), avoid using multiple nixpkgs versions.

I generally like keeping around a lot of generations, etc., so I don't mind my history of NixOS systems taking up 100-200 GiB. But if you care about space, garbage collect and it won't take up that amount of space.


Thanks - I appreciate the background info.


> I don't know what kind of package manager you were using, but I've never seen an update take a good part of an hour before Nix.

Pretty much all popular package managers. APT/dpkg, DNF/rpm, pacman, etc.

I have just updated one machine to the latest unstable. It updated 333 packages, a substantial part of that system. It took 1 minute and 50 seconds, most of it downloading. So, not sure how it takes a good part of an hour for you.

> Seriously, are we going to shame people who can't afford to buy lots of storage??

I'm not shaming anyone. Just saying that 1 or 2 TB is pretty normal nowadays (outside Mac, because Apple makes you pay for it). At any rate, you can make the size pretty similar to any other distribution. It's not like glibc or GNOME takes up substantially more disk space on Nix.

If you end up using 100 GiB of storage, you are either keeping a lot of system generations around or you somehow have different nixpkgs versions in your system's closure, ending up with duplicate versions of glibc, etc. If the former is the case, set up automatic garbage collection and the space use will be far less. E.g. on one machine I have only three NixOS unstable generations and the system is 18 GiB (which includes a bunch of machine learning models, etc.). It would probably be substantially less on NixOS stable, since there are fewer differences between generations (e.g. I have qemu, webkitgtk, etc. three times).
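For reference, that automatic garbage collection is only a few lines of NixOS configuration; a sketch (the retention window is just an example):

    {
      nix.gc = {
        automatic = true;                     # run GC on a schedule
        dates = "weekly";
        options = "--delete-older-than 14d";  # keep roughly two weeks of rollback history
      };
      # hard-link identical store files to reclaim additional space
      nix.optimise.automatic = true;
    }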


Adding some data here:

Total size of installation is roughly comparable between NixOS and, say, Ubuntu.

My laptop's Nix closure of 1 generation is 33 GB. My desktop Ubuntu has 27 GB (20 GB /usr + 7 GB in /var, where snaps and flatpaks are stored).

Indeed the disk usage of Nix comes from multiple generations. Every time there is a new version of glibc, gcc, or anything that "the world" depends on, it's another 33 GB download. Storing the old generation is entirely optional. The maximum disk space needed is 2 generations.

Updating Ubuntu to a new LTS version almost always costs me multiple hours, caused by interleaved questions on how to merge changed config files in /etc (which unfortunately one cannot seem to batch), apt installation being rather slow, and during recent years, the update generally breaking in some way that requires a major investigation (e.g. the updater itself dies, or afterwards I have no graphics). On NixOS, these problems do not exist, and the time to update is usually < 30 minutes.


In my experience Nix is a force multiplier. But you need someone on the team who has plenty of Nix experience, because you inevitably need to write your own derivations and smooth over issues that you might encounter in nixpkgs.

We use Nix with Cachix in the team I currently work in. We use a lot of ML packages/kernels, which are nearly impossible to manage in Python venvs (long build times because we have to patch some dependencies, version incompatibilities, etc.). Now you can set up a development environment in seconds. The nicest thing is when we switch between branches we automatically have the state of the world needed for that branch (direnv yay).

It was some work to set up, but it saves so much time now.
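The shape of it is just a flake devShell plus an .envrc that says "use flake"; a minimal sketch with a made-up package list:

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

      outputs = { self, nixpkgs }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          devShells.x86_64-linux.default = pkgs.mkShell {
            # whatever the branch needs; direnv drops you into the right shell on checkout
            packages = [ pkgs.python3 pkgs.gcc pkgs.cmake ];
          };
        };
    }

Since the flake lives in the repo, checking out another branch re-evaluates that branch's flake and you get the matching environment automatically.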


How do you do the initial setup? I'm concerned with anything that happens before activating the dev shell.

Right now I have a bash script that checks for nix, direnv, git, gpg, etc. But it feels a bit clumsy compared to the flake that contains the dev shell.

For my own system I set up Home Manager. But I don't want to make the use of Home Manager a requirement, as it can be quite opinionated (e.g. setting up direnv is done by generating a .zshrc, which can be limiting for some).


For our particular project you only need to install Nix and then run nix develop, but I'd indeed recommend using direnv. For me it's not an issue, since I run NixOS on development VMs, but a colleague who was not using Nix before (I think) also wrote a bash script to set up an AWS VM with the NixOS AMI and then roll out a minimal NixOS configuration.

I think for people who don't want to dive into Nix much, doing an imperative install (nix profile install) of the necessary packages is also fine. You could even make your own small meta-package that depends on everything that is needed. Then they could do a nix profile install yourflake#yourmetapackage and have all the tools they need. But I agree direnv is a bit harder, since you'll have to put something in the shell rc/profile.
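Such a meta-package can be a single buildEnv in the flake; a sketch (the name "devtools" and the tool list are made up):

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

      outputs = { self, nixpkgs }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          # everything a new teammate needs, installable with
          #   nix profile install yourflake#devtools
          packages.x86_64-linux.devtools = pkgs.buildEnv {
            name = "team-devtools";
            paths = [ pkgs.git pkgs.gnupg pkgs.direnv pkgs.nix-direnv ];
          };
        };
    }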


The imperative install is as many lines of code as the flake itself. That’s what’s bothering me. But a meta package would be a step in the right direction.

Thank you!


The vaccination campaign has been targeted solely at girls and women, which is crazy, since boys and men can get cancer from HPV too.

I really don't understand the bias in public health communication here. OWID is making the same mistake again. How are boys supposed to know that they should get the shot if every public health professional fails to tell them?

> HPV is also thought to cause about 95% of anal cancers, 75% of oropharyngeal cancers, 75% of vaginal cancers, 70% of vulvar cancers, and 60% of penile cancers.2 Low-risk or non-oncogenic genotypes (eg, types 6 and 11) cause anogenital warts, low-grade cervical disease, and recurrent respiratory papillomatosis. In the USA, the incidence of oropharyngeal cancer in men now exceeds that of cervical cancer in women, and by 2020 the annual number of HPV-associated oropharyngeal cancers will exceed that of cervical cancers. As a result, it is important to consider ways to expand our HPV prevention efforts to boys and men.

https://www.thelancet.com/journals/lancet/article/PIIS0140-6...


Why are you happier exactly?


If you don't need the pinouts, it's likely cheaper to get a used thin client or an N100-based machine. You'd get similar power draw, plus a case and a PSU.


It doesn't seem even close. A Raspberry Pi 4 starts at €40 or so, while the cheapest N100 PC my Amazon query returns is €180.


I suspect that either you care about performance and the pi loses badly, or you care about price and the pi 4 loses to... probably other pis, actually. Or used x86 off eBay. Or you care about absolute power consumption and again other pis win.


Used thin clients start at €20; that's unbeatable (because they include a case and PSU, allow SATA hard drives, replaceable RAM…).


> while the cheapest N100 PC my amazon query returns is €180

Check AliExpress. Much cheaper for N100s.


Raspberry Pi 4 cannot be compared to N100. It's miles behind.


How does it mitigate the issues outlined in the article?


The root cause of the PHP vulnerability is trying to parse unstructured text. The actual information in WHOIS has structure: emails, addresses, dates, etc. This info should be provided in a structured format, which is what RDAP defines.

IMHO, there is no reason for a registrar not to support RDAP and have its RDAP server's address registered with ICANN.


Banks in Germany offer access to consumers via the HBCI standard. Not sure about the rest of the EU.


Banks in Germany provide it because of EU regulation.

https://www.digiteal.eu/open-banking-apis-all-you-need-to-kn...


Those regulations are about sharing via trusted third parties, not directly with users.


Some similar tools offer a way to parse the PDF files provided by your bank and import them. I wish we had something similar here.


This sounds illegal and against what GDPR stands for.


Why is accessing your own banking data through a standard against what GDPR stands for? GDPR has a right to data portability.


I misinterpreted. I thought someone else could gain access to consumer data.


Can you point to any good research to back up your claim that this is a good treatment?


I liked this one https://www.nature.com/articles/s41598-024-54249-9

They went with the premise of "a little a day seems to help, so what happens if we feed ridiculous quantities once?". It's the sort of medical research that appeals to the engineer in me, and something you can only really do with compounds that are already known to be unlikely to kill your patient if you exceed the usual dose. Also, I strongly suspect they did this to students, given the age range.

There's more literature out there than this; that paper's references are probably a reasonable starting point. But as creatine is very cheap and has been widely consumed by athletes for ages now, it's about as low-risk a gamble as any out there.

