I've been using Elixir for the past 5-6 years for my startup. We use pg_notify extensively to broadcast changes between running nodes (basically, we use Phoenix.PubSub locally in our apps, with a GenServer that subscribes and re-broadcasts via pg_notify).
This has been a really elegant, low-complexity way to get distributed pubsub without running a distributed Erlang cluster (which seems a little painful in a K8s + continuous-deploy world).
There -are- some big downsides to be aware of though.
1. You can't use PgBouncer w/ LISTEN/NOTIFY. This has been really painful: pgsql connections have high memory overhead, Elixir keeps a pool of them open, and the tried-and-true method of scaling here - just use PgBouncer - is off the table. We've kicked the can by vastly over-provisioning our pg instance, but this has cost tens of thousands of dollars on the cloud. Of course, it's solvable (a dedicated non-PgBouncer connection pool just for LISTEN/NOTIFY, for example), but painful to unwind.
2. The payload has a fixed size limit (8KB, IIRC). This has bitten us a few times! (A common workaround is sketched below.)
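A common workaround for the size cap (a sketch of the usual pattern, not necessarily what we did; the helper names and id column are hypothetical) is to notify with only a row key and have subscribers re-read the data themselves:

    def notify_row_changed(cur, table, row_id):
        # Payload stays tiny regardless of row size; pg_notify takes
        # (channel, payload) and is subject to the same size cap.
        cur.execute("SELECT pg_notify(%s, %s)", (table + "_changed", str(row_id)))

    def handle_row_changed(cur, channel, payload):
        # The subscriber fetches the full row itself, so only the key
        # ever travels through NOTIFY.
        table = channel.removesuffix("_changed")  # our own naming scheme, trusted
        cur.execute("SELECT * FROM " + table + " WHERE id = %s", (payload,))
        return cur.fetchone()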
Even though I really like pg_notify, I think that if I were starting over, I'd probably just use Redis Pub/Sub to accomplish the same thing. A tad more complex if you're not already running Redis, but without the downsides. (Of course, w/ Redis you don't get the elegance of firing a notification via a pg trigger.)
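For comparison, here's roughly what the Redis flavor looks like using the redis-py client (a minimal sketch; connection details and error handling omitted, host/port assumed local):

    import redis

    r = redis.Redis()  # assumes a local Redis; point at your real host/port

    def publish(channel, payload):
        # No ~8KB payload cap, and connection poolers don't get in the way
        r.publish(channel, payload)

    def listen_forever(channel, callback):
        p = r.pubsub()
        p.subscribe(channel)
        for message in p.listen():
            if message["type"] == "message":
                callback(channel, message["data"])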
We got around this at my company by pooling all of the LISTEN/NOTIFY streams into a single database connection in software. Here's a sample implementation (sketched in Python):
from collections import defaultdict

listeners = defaultdict(list)  # channel -> list of callbacks

def software_listen(channel, callback):
    # Issue a server-side LISTEN only for the first subscriber on a channel
    if not listeners[channel]:
        sql("LISTEN " + channel)  # `sql` runs a statement on the shared connection
    listeners[channel].append(callback)

def on_message(channel, data):
    # Fan the NOTIFY payload out to every local subscriber
    for listener in listeners[channel]:
        listener(channel, data)

def unlisten(channel, listener):
    listeners[channel].remove(listener)
    # Drop the server-side subscription once nobody is left listening
    if not listeners[channel]:
        sql("UNLISTEN " + channel)
For #1, I've been keeping a keen eye on pgcat, in particular https://github.com/postgresml/pgcat/issues/303, which implies that it should be possible to add transaction-mode LISTEN/NOTIFY support.
Triggers can affect throughput quite a bit on busy tables.
And we didn't want people's schemas polluted with triggers.
But we also use Erlang distribution and Phoenix.PubSub because, on a global network, clients connected to the same node in the same region get normal broadcast messages to each other faster. If we ran them through Postgres or Redis, the added latency wouldn't work for a global product.
I’ve been running my startup on Google Cloud for the past 5 years. We initially got around $100K in credits for the first year and have been steadily spending $5-10K/mo since - not huge spend, but not nothing either.
In the early days, quota increases felt like a formality - put in a request and it’d get approved a few minutes later. Sometimes it was a bit of a puzzle (you can’t increase X unless you also know to ask for Y) - but never a roadblock.
Over the past 2-3 years, I’ve started having quota increases denied, followed by a helpful account manager insisting we jump on a call to get the increase approved. I’m not talking about anything crazy either - like going from an existing 72 vCPU quota to 144.
Doesn’t make me feel very confident that I’d actually be able to use the cloud to do cloudy things in a pinch, like bursty on-demand scaling.
Tangential comment - but that Netezza appliance was really cool. I was able to use one early in my career, circa 2008. It had a nice CLI query interface similar to psql (maybe it spoke the Postgres protocol, I don't remember) - and was blazing fast. I ran totally rookie-mistake-filled, unoptimized queries on tables w/ billions of rows and got results back in ~seconds. Of course, not so impressive now, but it was really impressive 15 years ago!
They had a really wild architecture with FPGAs that, if I remember correctly, sat between the disk and the CPU and offloaded some of the query execution.
It was wildly expensive to buy and to maintain, so the company ended up (unsuccessfully, I think) jumping to Hadoop. Crazy times :)
I really enjoy working with LiveView and have personally built several substantially sized apps with it.
One of my current frustrations is the inability, out of the box, to write to sessions from LiveView. I understand the reason why (cookies can only be set on an HTTP response, and LiveView talks over a websocket after the initial request), but I personally find the "redirect to a controller to set the cookie" pattern to be a bit hacky and more work than I'd like in order to just... set a cookie.
I hope that a future release has a built-in pattern for writing sessions and cookies in general.
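For anyone unfamiliar with the pattern: the controller side is just a plain HTTP endpoint that sets the cookie and bounces the client back. Here's a framework-agnostic sketch (Flask used purely for illustration since this flow isn't Phoenix-specific; route, param names, and cookie are all hypothetical):

    from flask import Flask, make_response, redirect, request

    app = Flask(__name__)

    @app.route("/set-cookie")
    def set_cookie():
        # The websocket-driven view can't emit Set-Cookie headers itself,
        # so it redirects the client through this plain HTTP endpoint
        resp = make_response(redirect(request.args.get("return_to", "/")))
        resp.set_cookie("prefs", request.args["value"], httponly=True)
        return resp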
I don't understand. Why isn't the session cookie created at initial load, and then all the data in the session kept on the server side? Writing anything into a cookie except the session id seems weird to me. What am I missing?
Could you use a 0-pixel first-party iframe instead of a redirect to set the cookie? At least that way you could set and update it from within the LiveView code (you would still need a controller endpoint for the iframe).
Avantek was, until recently, one of the few "price on the public box" options I was aware of in both server and desktop form factors. In desktop form factor there was another recent addition, the AVA Developer Platform, with a range of Altra SKUs. But it's very expensive, just like the Avantek one. In general, Ampere is the only real competitor left in this space, and they're only focusing on major buyers. At this point you'd think the higher volume would offset some costs and trickle down to us, but not really.
If I'm being completely honest: unless your opposition to Apple machines is purely political or whatever (which is whatever, do your thing), your best bet is probably just to buy an M1 Mac Mini, install Asahi Linux on it, and run it headless. You can even get aftermarket rackmount kits that bundle 1-2 Minis, if you actually have racks. The performance/watt/$-spent is simply much better all around due to the massive economies of scale and consumer focus, and it's a very modern ARMv8.something machine. All of the competitors are simply much slower in raw performance, have buggier hardware/firmware (often both), and run much more expensive when that isn't the case. Hell, even if you bought the Mac Studio and just ran Linux on it, it would probably still be reasonably price competitive, all things considered, even with like half the chip currently non-functional (GPU, NPU, etc). The hardware really is pretty good.
At this point I'm waiting for their ARMv9 chips to start rolling out before jumping on the Linux train. Maybe they'll do a Mac Studio refresh in a year or two from now...
Upstream kernel support is a bit sparse for the non-server boards.
When I was looking around, there was some stuff in the $500 to $2000 range:
Nvidia's usual Jetson/Tegra lineup
- Sometimes has decent CPUs
- Good luck running anything other than Nvidia's slowly updated Kernel with blobs all over. Support drops real fast for older boards, leaving one stuck on old kernels (Jetson Nano and its upstream family was "EOL'd" in terms of kernel upgrades a while ago, even though Nvidia will still gladly take your money for a new Jetson Nano).
- Has a lot of Nvidia stuff attached, so you do get a GPU and a PCIe slot.
NXP LayerScape
- lots of CPU cores, but not great ones
- Claims the 2nd-highest level of Arm SystemReady compliance, so it may work with mainline kernels?
- Has a lot of networking stuff attached, since this seems to be the successor to generations of PPC networking chips.
Apple
- IMO, you already know the pros and cons, and will have already decided whether to purchase an M1 Mac by now.
- Popular vendor with a tendency not to make too many different variants, so some are trying to mainline device support in the Linux kernel (Asahi Linux).
Qualcomm's developer platform for WoA
- Somewhat limited in specs and afaik, no mainline kernel support of note.
There are others in this range, such as Amazon's Annapurna Labs, Marvell's various SoCs, Broadcom, Ampere, etc. IMO, none of them really target consumers or workstations. Some (Marvell, Broadcom) treat simple public datasheets and documentation as a sin - Marvell's takedowns of the former XScale documentation in particular. Even Nvidia isn't quite this prudish with their documentation, though they do hold back a bit vs the big x86 giants. Annapurna chips are found all over the place (Qnap NASes and Mikrotik routers are two places I've been surprised to find them), so there may be reclaimed consumer hardware, but Amazon is similarly stingy with documentation.
Ultimately, I don't feel it is really the year of the ARM workstation quite yet.
> Support drops real fast for older boards, leaving one stuck on old kernels (Jetson Nano and its upstream family was "EOL'd" in terms of kernel upgrades a while ago, even though Nvidia will still gladly take your money for a new Jetson Nano).
Note that the Jetson Nano is supported pretty well by Fedora with a fully upstream kernel. This includes GPU acceleration through Nouveau, without reclocking catches.
And just before deprecating support for it entirely (it won't get BSP releases beyond JetPack 4.x), they gave it U-Boot on SPI with the UEFI module, which wasn't used at all before.
The new BSP release coming out tomorrow in public preview is Xavier onwards only, and sets the baseline to Linux 5.10, with UEFI across the board.
The older BSP release that supports Tegra X1 onwards, including the Jetson Nano, will continue getting security updates for years to come. Just don't expect new features anymore on the NVIDIA binary UM driver stack.
> It's a heavily diverged kernel tree, but it's all GPLv2, including the full GPU kernel-mode driver (https://nv-tegra.nvidia.com/r/gitweb?p=linux-nvgpu.git;a=sum...) for those. (there's no binary kernel modules present at all on the platform)
> Firmware, like everyone else, and userspace is where you have the proprietary bits.
My experience with this is that moving away from the Nvidia kernel isn't practically feasible. Userspace may be a refuge, but even trying to upgrade away from the Nvidia/Ubuntu 18.04 userspace was always going to bring up general incompatibilities and small problems. The long-awaited 5.10 update targets Ubuntu 20.04, just as 22.04 is progressing through beta. Backporting PCIe device drivers and modules was painful enough that I eventually gave up on my Jetson adventures.
In the end, I spent enough time trying to get the platform to work normally that I realized I was wasting time when I could just get an x86 board for similar cost or a Raspberry Pi CM4, and I quit trying to grow the Jetson Nano out of its embedded-system roots.
Nvidia needs to step it up, big time. Jetson Xavier users are only now getting a beta for CUDA 11, when x86 and SBSA ARM users have had Nvidia's official CUDA 11 for almost 2 years.
> My experience with this is that moving away from the Nvidia kernel isn't practically feasible
Oh yup... even just getting to 4.14 (from the NV 4.9 kernel) took so much work to keep up...
I eventually had a project to decouple nvgpu so that it could be used with a regular mainline kernel as a DKMS module... but I didn't get around to it. The situation on that front will become better in the future, but it's taking way too long.
> Userspace may be a refuge, but even trying to upgrade away from the Nvidia/Ubuntu 18.04 userspace was always going to bring up general incompatibilities and small problems.
CUDA 10.2 doesn't support Ubuntu 20.04, so hello problems on that front. Enough said...
> Nvidia needs to step it up, big time. Jetson Xavier users are only now getting a beta for CUDA 11, when x86 and SBSA ARM users have had Nvidia's official CUDA 11 for almost 2 years.
Yes... the thing is that Jetson has been put exclusively on an LTS lifecycle and that posed a bunch of problems. We have promises that it won't happen again in the future, with CUDA being decoupled from L4T shipping sometime towards the end of the year.
And the beta shipping tomorrow is for CUDA 11.4, not the latest 11.6.
Also, as a side note: if you're on Xavier and trying to OTA-update to the BSP released tomorrow, just don't - that flow is simply not supported. A full reinstall is required between major BSP releases.
I've been looking at the NVIDIA Jetson devices for a while. They are marketed at machine learning, but they run Linux, so it looks like I could use them for generic compute.
Not available quite yet, but several companies have recently announced RK3588-based systems that are something of a middle ground between the existing options: up to 16GB RAM and somewhat higher CPU performance than the Pi for $100-300ish, depending on the board and RAM amount.
I've had disappointing experiences with kernel support for Rockchip systems. Hopefully things have improved since I last experimented (it's been about 3 years), but I really struggled to get stable drivers running for RK3399 and RK3328 chipsets.
After using a HiDPI display for the last 3 years I would never willingly go back.
And while I'm no particular fan of larger monitors, it seems that a lot of software just assumes them these days. So I would expect 5K @ 27" and 6K @ 30-some inches to be everywhere by now (I guess 4K @ 24" as well).
I guess it's just a matter of being somewhat too expensive to go mainstream, where either a large lower-res screen or a smaller hi-res screen is much cheaper and "good enough".
How would you connect such a beast to your graphics card? If I understand correctly, Thunderbolt is the only connection standard with enough bandwidth - or maybe USB-C would work?
Dell has a new 32” 8K monitor that connects via two DisplayPort cables.
HDMI 2.1? I've yet to see a computer that supports it, though. Even the latest MacBooks don't - which is one of their biggest downsides. Why include a port that's already outdated the moment you add it?
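Back-of-envelope math on why a single cable struggles (a rough sketch; assumes 8-bit RGB and ignores blanking/encoding overhead):

    # Uncompressed 8K @ 60 Hz, 8-bit RGB:
    w, h, fps, bpp = 7680, 4320, 60, 24
    print(w * h * fps * bpp / 1e9)  # ~47.8 Gbit/s before overhead

    # A single DP 1.4 link carries ~25.9 Gbit/s of payload, hence Dell's
    # two-cable approach; HDMI 2.1 tops out at 48 Gbit/s and still leans
    # on DSC once real-world overhead is included.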
I made a similar jump over the past few years out of frustration with stagnating Apple hardware (pre-M1). I spent a year with a hackintosh, which worked pretty well, but I became disenchanted by the continued locking-down of the OS.
For the most part, daily driving Linux as my desktop has been great - no small thanks to Electron. Slack, Spotify, VSCode, etc. all just mostly work.
Going the arch-route took extra upfront work since you're effectively building a desktop environment from scratch, but the benefit is knowing exactly how -everything- works. If I press my "volume up" shortcut and the overlay volume bar isn't displayed, I know exactly which sway config and executable to look at. It's refreshingly simple.
The downsides are that upgrading is a bit anxiety-producing (will I break anything?). HiDPI on Linux is still (in my experience) a bit of a mess. If you run Wayland, you need to patch xwayland/sway/wlroots if you don't want blurry X11 apps. And there are some quirks - like, I can't drag files into Slack. Maybe it's fixable, but at some point you become satisfied with "good enough".
>The downsides are that upgrading is a bit anxiety-producing (will I break anything?).
I don't understand why Arch users put up with this. There are plenty of distros that let you build your DE from scratch but that have regular releases and are extremely stable.
Arch users don't really "put up" with this. Every computer I run (besides my work Windows machine) is Arch, and they have broken exactly 0 times. I update once a week, 0 problems.
For what it's worth, when one of my Macs upgrades, and starts rebooting 8 times over the course of an hour, I get pretty damn anxious too. At least with Arch it's just a bunch of packages being replaced and then a reboot. Plus, if you run a snapshotting filesystem like btrfs, you can always just roll your whole system back a few hours if things are really borked; though I've never personally had to do that. No option like that on Macs. If you upgrade and something important stops working, you're shit outta luck.
I work on a SaaS app in the healthcare space where IE11 is the preferred browser, and I was getting worried watching all of our favorite tools begin to completely drop IE11 support (Tailwind, Bootstrap) - effectively punishing us for the sins of our customers' IT orgs.
This brings me hope. But only a little. I’m sure they’ll find a way to keep running it.
>in the healthcare space where IE11 is the preferred browser
Do you know why that is?
I noticed that there are prominent links to a Korean and a Japanese version, presumably because Internet Explorer is still used to a large extent in those two countries. Korea had some crypto stuff that only worked in IE, but that was years ago. Why haven't those markets moved on to more modern browsers?
I worked at a hospital about 10 years ago, and at that time only IE was available on our PCs. Workstations were locked down for security reasons, and users were not allowed to install 3rd-party software without approval (including Chrome, etc.). Chrome's browser extensions were also a security concern (esp. for medical records, HIPAA regulations, etc.).
It was also a time when IE was used in the enterprise mostly for legacy web apps written long ago. We were also using command-line apps (TUI-based) back then, mostly for nurses and doctors, but we were migrating to the fancy web apps.
As for Korea, IE was mostly for ActiveX, but now most Korean websites support modern browsers, i.e. Chrome.
> users were not allowed to install 3rd-party software without approval (including Chrome, etc.)
This should be, and should always have been, the case. Chrome is such a bad actor in allowing user-space installs that we mark it as malware to prohibit unauthorized installation. Chrome Enterprise policies don't seem to be able to disable this; marking it as malware is the only way out...
That said, we offer two modern browsers to everyone in our environment, Edge and Firefox.
Same as anywhere else, I suppose: because changing it costs time/money and has no obvious value. In regulated areas like health care there can also be a recertification or audit support cost if you change something.
The only hope is more and more services actually having the balls to drop IE11, e.g. Office 365.
Imagine if Google didn't support IE11 - I'm sure the pressure to upgrade these browsers would be much higher (not sure about the health care space though).
This announcement is a much better hope: IE11 being replaced by Edge, which will have a modern rendering engine - the only one anyone will need to support.
To the extent (if any - though I'd be surprised if there were none) that IE11 doesn't support standards required effectively, if indirectly, via the HITECH guidance on secured PHI, IE11 use could have some adverse consequences under HIPAA (and, as a SaaS operator with a BAA, that would include the vendor, not just the customer) - but use or support is not in and of itself noncompliance. Mostly it just makes it more likely that situations become reportable breaches.
As part of the 41% that recently swapped out their "gas guzzler" Porsche for a Tesla Model 3, I can confirm that they're ready for prime time.
The Tesla Supercharging experience has been pretty great. The route planner calculates charging stops automatically and you only need to wait 5-10 mins to top off ~100 miles before you're back on the road.
Without the Superchargers, though, the experience would be pretty terrible - how is the current charging station grid for non-Tesla EVs?
> how is the current charging station grid for non-Tesla EVs?
Acceptable, but different. More stations, fewer chargers per station. We haven't had any difficulty, the experience is largely similar to our Tesla. We do have to swipe a card (or use Apple Pay, which is what I do) on the charger before the electrons start flowing, but aside from that it doesn't really change our road trip experience.