srjilarious's comments | Hacker News

I just learned about the whole homelab thing a week ago; it's a much deeper rabbit hole than I expected. I'm planning to set up Proxmox today for the first time, in fact, and retire my Ubuntu Server setup running on a NUC that's been serving me well for the last couple of years.

I hadn't heard about mealie yet, but sounds like a great one to install.


> Ubuntu Server setup running on a NUC that's been serving me well

In my book, that's a homelab; it's just a small one (an efficient one, perhaps?).


I've set up half a dozen different home labs over the years but never used anywhere near the compute or disk capacity I had. It was more about learning things, I guess. I laughed when he mentioned the number of cores he has available.


I used to have a large server serving a couple of important things.

I was able to put everything on a fanless Zotac box with a 2.5" SATA SSD, and it has served me well for many years (and it uses QUITE a bit less electricity, even running 24/7).


Proxmox is awesome! I've been running it for ~5 years and it's been absolutely stable and pleasant to run services on.

The Proxmox Backup Server is the killer feature for me. Incremental and encrypted backups with seamless restoration for LXC and VMs has been amazing.
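
If it helps anyone, a one-off backup of a guest to a PBS-backed storage is a single vzdump call (the storage name and guest ID below are placeholders):

  # back up guest 101 to a Proxmox storage entry named "pbs"
  vzdump 101 --storage pbs --mode snapshot

The scheduled jobs under Datacenter > Backup in the UI boil down to the same command.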


I've been looking to get offsite backups going. Where do you keep your backups? NAS + cloud?

I also wanted to back up my big honking zpool of media, but it isn't economical to store 10+ TB offsite when the data isn't really that critical.


My PBS server has two datastores: one on a local external drive and one synced to Backblaze B2. I snapshot to the local drive frequently throughout the day, and to B2 once in the evening.

Yeah, I don't back up my media zpool either. It can all be replaced quite easily; it's not worth paying for the backup storage.
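
If anyone wants to replicate the B2 leg, it can be as simple as an rclone sync of the datastore path on a timer (the remote and paths below are made up; adjust to your setup):

  # mirror the local PBS datastore to a B2 bucket once a day
  rclone sync /mnt/datastore/pbs b2:my-pbs-bucket --transfers 8

PBS datastores are content-addressed chunk stores, so each sync only uploads the new chunks.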


In my scenario, PBS runs in a VM on my Synology. The Synology does automated backups to Backblaze B2 daily, which averages about $5/TB in B2 storage costs for me. I only back up the critical stuff I don't want to lose.


If you want to go down another, related rabbit hole, check out the DataHoarder subreddit. But don't blame me if you're buying terabytes of storage over the next few months :)


Data hoarding is a bit more involved than just a homelab. You don't want your data hoard to go down or go missing while you're labbing new techs and protocols.


Don't blame me if you're buying terabytes of USB drives and pulling the hard drives out of them.


I can vouch for Mealie. My wife and I run it locally for family recipes and to pull down recipes from websites. I have a DNS ad blocker running, but most recipe sites are still a mess to navigate on mobile.

You can also distill recipes down: I find a lot of good recipes online that have a lot of hand-holding within the steps, which I can just eliminate.


You should definitely try Mealie, yes. On top of being a good way to host your own recipes, the entire thing just feels... really well put together?

I'm not even using the features beyond recipes yet, but I'm already very happy that I can migrate my recipes from Google Docs over there.


As others have said, Mealie is an excellent app for any homelab. My wife and I use the meal planning feature and connect it to our Home Assistant calendar that is displayed on a wall-mounted tablet. The ingredient parsing update is amazing and being able to scale recipes up/down is such a time saver.
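
For anyone who wants to kick the tires, a minimal docker-compose sketch is enough to get Mealie running (the host port and data path below are arbitrary; double-check the image tag against the Mealie docs):

  services:
    mealie:
      image: ghcr.io/mealie-recipes/mealie:latest
      container_name: mealie
      ports:
        - "9925:9000"   # Mealie listens on 9000 inside the container
      volumes:
        - ./mealie-data:/app/data
      restart: unless-stopped

That runs with the default SQLite backend, which is plenty for a household.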


I've had a ton of fun with CasaOS in the past few months. I don't mind managing docker-compose text files, but CasaOS comes with a simple UI and an "App Store" that make the process really simple, and it doesn't overcomplicate things when you want to customize something about a container.


I have Proxmox running on top of a clean Debian install on my NUC. I wanted to allow Plex to use hardware decoding, and it got a bit fiddly trying to do that with Plex running in a VM, so it runs on the host and I use VMs for other stuff.


It's very easy to do this with LXC containers in Proxmox now, as passing devices to a container is now possible from the UI.
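
Under the hood, that UI option just writes a device entry into the container's config. A sketch for an Intel iGPU render node (the VMID, device node, and GID below will differ per setup):

  # /etc/pve/lxc/101.conf
  dev0: /dev/dri/renderD128,gid=104

The gid option maps the device to the container's render group so the media server's user can open it.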


With containers, making backups seemed to become impractical with large libraries, since it appears to back up files individually?

I had to switch to a VM because of that, passing through the GPU.


Just as easy with VMs; you just have to pass the device through to the VM.


The only downside is that you essentially lock the GPU to a single VM, which there's nothing wrong with doing. With LXC, at least, you can share the device across multiple containers.


I have an Intel (12th-gen i5-12450H) mini PC and at first had issues getting the GPU firmware loaded and working in Debian 12. However, upgrading to Debian 13 (trixie) and running an apt update and upgrade resolved the issue, and I was able to pass the onboard Intel GPU through Docker to a Jellyfin container just fine. I believe the issue is related to older Linux kernels and GPU firmware compatibility. Perhaps that's your issue.
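
For reference, the Docker side is just handing the render node to the container. A sketch using the official image (paths below are illustrative):

  docker run -d --name jellyfin \
    --device /dev/dri/renderD128:/dev/dri/renderD128 \
    -v /srv/jellyfin/config:/config \
    -v /srv/media:/media \
    -p 8096:8096 \
    jellyfin/jellyfin

Then enable Intel QSV hardware transcoding in the Jellyfin dashboard's playback settings.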


Jellyfin and Jellyseerr on a QNAP TS-464 run perfectly well, even serving 4K x265.


A Few Moments Later


There is time dilation in the homelab vortex... what feels like a few hours can turn out to be years in the real world.


My best McConaughey voice: “this little server is gonna cost us 51 years”


That's precisely what I meant! I'm in my sixth year, I guess. Maybe longer; I've lost count.


I've had the same experience. I spent 6 months last year really digging into Rust and came to the conclusion that, for the software I'm writing, it's trying to save me from problems I just don't run into often enough to make it worth it.

I ended up jumping over to Zig and have been really enjoying it. I ported the same hobby 2D game engine project from C++ to Rust, and then over to Zig. A simple tile map loader and renderer took me about a week to implement in Rust and 3 hours in Zig. The trade-off was a single memory bug that took 15 minutes to figure out.


I end up installing mcfly (https://github.com/cantino/mcfly) in all my shells, and it works great in fish as well.
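
Setup in fish is just the init hook from the mcfly README:

  # ~/.config/fish/config.fish
  mcfly init fish | source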


There's also fzf.fish, the only plugin I use.

https://github.com/PatrickF1/fzf.fish
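
If you use fisher, it's a one-liner to install (per the repo's README; fzf itself has to be installed separately):

  fisher install PatrickF1/fzf.fish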


Same. A shell without fzf now feels weird.


I dual booted Windows on my desktop and laptop for a few years and also noticed lots of weird issues: reduced battery life on my laptop, sleep/hibernate being broken, GRUB occasionally just dying on me. I eventually got rid of Windows altogether and now just run Manjaro. I was surprised that the suspend and battery life issues on my laptop, for instance, completely went away.

The main thing that kept me on Windows for years was games, but once I jumped into using Proton via Steam on Linux (and now the tweaked Proton GE), I can run almost all of my game library at full speed. The few games I can't play are due to anti-cheat software like BattlEye.


Just wanted to +1 this. I've been a happy customer of Fastmail since ~2013 and have never had a single issue. Great service.


> never had a single issue

Fastmail was blown offline by a couple of DDoS attacks recently. Both of them impacted my ability to access Fastmail, but I suppose you didn't happen to try to access your account during those attacks.


Ha, good eye. Corrected to 1e6 :)


Nice work. FPGA design appears to be very similar to GPU shader programming. This is the first time I've read anything about FPGA design that connected; usually FPGA stories get lost in data-flow jargon and I learn nothing.


There is no programming in FPGAs at all. You describe your hardware using a hardware description language like VHDL or Verilog.


You'll notice I didn't apply the term "programming" to FPGAs. Reading the post, I noted this was a likely hang-up among FPGA designers and carefully employed the preferred jargon. I imagine this sensitivity is the product of much frustration with forever being conflated with mere programmers. Must be awful.


Then no programming exists at all. When writing C code you are describing a program that runs on the C abstract machine. The same thing holds for all "programming" languages.


Sorry, I am not ready for a philosophical discussion. We can take the definition from Wikipedia: https://en.m.wikipedia.org/wiki/Computer_programming Programming involves code execution on a computer. There is no computer in an FPGA.


Then why did you start the discussion? An FPGA is a computer just as much as any CPU is.


Technically, an FPGA is a piece of memory. The functionality of the device depends on how the bits in this memory are set, and the size of this memory is constant. The bitstream format is not public; brave hackers are working hard to reverse engineer it. You can make a CPU in an FPGA, but not the other way around, performance-wise. A complex simulation involving a couple of 4K-resolution pictures takes days.

Edit: the people here are decent enough to be worth starting a discussion with.


Yes, an FPGA is a different computer than a CPU. It's still a computer, though. This is the definition of a computer:

> An electronic device for storing and processing data, typically in binary form, according to instructions given to it in a variable program. [1]

It fits an FPGA perfectly.

[1] https://www.lexico.com/en/definition/computer


There is the term "variable program" in your link. When you add peripherals to the chip on the printed circuit board, it loses flexibility very fast. The whole system is made for a very specific task. But yes, you've convinced me that an FPGA might be treated as a computer in an extreme case.


An FPGA accepts "variable programs". What you are talking about are peripherals, and a CPU with certain peripherals can also be completely inflexible. That is completely outside the scope of what a CPU or an FPGA is, though.


I don’t really see how FPGA programming is similar to shader programming.


Thanks! I too think GPU programming is quite similar, in that you need to think of things in a more data-streaming sense. It's sort of functional that way too: building up pipelines of transforms.


Very nice article, thanks.

That would more typically be written as 200e6; there's no need for the explicit multiplication when using standard float-literal notation.
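
To illustrate in C-style notation (the variable name here is made up; Verilog real literals use the same scientific notation):

  // 200e6 is a floating-point literal equal to 200 * 1000 * 1000
  double clk_hz = 200e6;  /* 200 MHz */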


The iCEstick is also nice and a bit cheaper at ~$25, but it has a smaller 1k-logic-element iCE40 FPGA on it, whereas the TinyFPGA BX has a larger 8k-logic-element part.


I agree about using C++ for actual IP block implementation; my experience has been pretty mixed. Mostly because the tools (Intel HLS in my case) don't always give you a great idea of which constructs cause you to generate inefficient HDL.

For example, passing a variable by reference in one context cost me an extra 10% of logic blocks, and in another it lowered usage by 10%. It became a bit of a shotgun approach to optimising.


One does not pass a variable in an HDL design ;-). Trying to transplant software principles onto FPGAs wastes so much performance. Become one with the underlying hardware and map your problem onto it, not onto an intermediate software-like representation. Like another comment mentioned, become one with the clock and your design will fly.


I find this to be true for a lot of applications. Sometimes it seems like FPGAs are hammers looking for a nail.

Where they can shine is when you need some odd combination of peripherals attached to a microcontroller: think of something like a µC with four UARTs or multiple separate I2C buses.

Anywhere you need a lot of parallel processing that you can guarantee won't be interrupted, like a video processing pipeline, is also a good fit.


Really excited to see this out. I've been learning Elixir and Phoenix with the 1.3 RC and have really been enjoying it!

I'm a fan of Contexts myself, as that is typically how I architect apps on mobile as well. I like having everything separated more explicitly and testable individually, and Contexts seem to promote that in a really nice way.

