We've been storing jobs in the DB since long before SolidQueue appeared. One major advantage is that we can snapshot the state of the system (or of one customer account) into our dev environment and see it exactly as it is in production.
We still keep rate limiters in Redis, though: it would be pretty easy for some scanner to overload the DB if every rogue request needed a round trip to the DB before being processed. And because we only store ephemeral data in Redis, it doesn't need backups.
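For illustration, a minimal sketch of that kind of Redis-side rate limiter in Ruby using the redis gem; the key scheme, limits, and the `allowed?` helper are made up for the example:

```ruby
require "redis"

REDIS = Redis.new

# Fixed-window rate limiter: allow at most `limit` requests per `window`
# seconds per client. All state lives in Redis and expires on its own,
# so nothing here ever needs a backup.
def allowed?(client_id, limit: 100, window: 60)
  key = "ratelimit:#{client_id}:#{Time.now.to_i / window}"
  count = REDIS.incr(key)
  REDIS.expire(key, window) if count == 1
  count <= limit
end

# Rejecting a rogue request costs one Redis round trip and never touches the DB:
# halt 429 unless allowed?(request.ip)
```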
The difference can be explained in large part by urban design: many US shoppers need a car to drive to the supermarket and only go there once a week or less. In Europe you live much closer to a supermarket, so you go more often and get more fresh food and less frozen or canned.
Some Americans are surprised to learn that many supermarkets inside cities do not even provide parking; everyone walks or bikes there. People go to the supermarket every day.
Europe isn't a monolithic place, though. I don't think there is a non-microstate country in Europe where the absolute majority of people don't commute by car.
Living outside of dense urban areas without a car is still generally tricky. And in quite a few cities there are no large supermarkets in the densest parts, so you have to drive away from the center to find one; going without a car can be awkward even in the city.
Eh, this just isn't borne out by observing what people actually do, though.
When I first lived in Europe for a couple of months, I grocery shopped like an American - filled up an entire cart with a week or two worth of groceries - and everyone stared at me when I checked out.
It's absolutely true that Europeans who live in walkable cities go to the market to pick up groceries a few times a week. Americans simply do not, with very few exceptions.
The grocery store density is much higher though. There were at least 2 grocery stores within a 5 minute walk from anywhere I've stayed in a city core in Europe. At least a dozen within 15 minutes.
It's simply a difference in culture. There are plenty of places in the US where you could drive to half a dozen grocery stores within 15 minutes but people simply don't do so. The store sizes reflect this cultural difference too. The average grocery store in the US seems to be 4-6x larger than those in Europe.
>I grocery shopped like an American - filled up an entire cart with a week or two worth of groceries
Is that really how the average American shops, though? The majority of shoppers these days are in the self-checkout or "15 items or less" lines with only a single basket of stuff, at least in the stores I frequent. Granted, I'm close to a city center, but the store I go to is not in a very walkable spot.
Your mileage varies, I guess. I used to live within easy walking distance of an upscale supermarket, yet I did most of my shopping by driving to a different one farther away. Buying groceries with a car is simply more convenient.
Even after I moved out of that neighborhood, it wasn't unusual for me to stop at the grocery store every afternoon on my drive home.
Yes, it's not just homeless people who face this bootstrapping problem. When I first arrived in the US in the nineties as a student, I needed a social security number; for that I needed a P.O. Box (they did not accept the dorm as an address). For the P.O. Box, I needed a social security number. Most international students ended up breaking the deadlock by making up a social security number.
I had a similar issue living abroad. My wife had a work visa (which was the reason we were moving), and I was allowed to come along as a spouse, but once there, getting a work permit of my own was impossible without a job, and a job was impossible without a work permit.
There were ways around it, but it took finding a job at a really big company to make it work: they had dealt with this before and had HR people who specialized in it. Once "on paper", I was pretty free to move around. I would not be surprised if their method was just putting all zeros into the system until the permit number came back.
Well, just add "puts caller" to the function to find out. You can do this in your own code, but you can also briefly patch the library you're working with, if that's where the method in question lives.
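For example, a quick sketch of both approaches; `SomeGem::Worker#perform` is a made-up method standing in for whatever you're tracing:

```ruby
# In your own code, drop it into the method you're curious about:
def some_function
  puts caller # prints the chain of "file:line:in `method'" frames that led here
  # ... rest of the method
end

# For a library method you don't own, a temporary prepend patch works:
module CallerTracer
  def perform(*args, &block)
    puts caller
    super
  end
end

SomeGem::Worker.prepend(CallerTracer)
```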
By the way, the generated identifiers are more a Rails thing than a Ruby thing.
Doesn't that just tell you the functions that happen to call it on that particular run of the program? That's not remotely as good as getting a complete list at the click of a button.
I don't like this change. There are a lot of SaaS businesses that let you create a CNAME along the lines of "saas_app_name.yourbusiness.com"; Fastmail and Zoho do that, for example, and our business offers the feature as well. When you arrive at our site, we redirect to a proper https URL.
But a browser will not accept a redirect from a domain with an incorrect certificate (and rightly so), so this will start failing if https becomes the default, unless we generate certificates for all those customers: many thousands, in our case. And then we would need to get those certificates onto the AWS load balancer where we terminate https (I'm not even sure it can handle that many). I think we may need to retire that feature.
I don't know your business, but I would never consider using such a feature that didn't support HTTPS all the way through as a business customer. It's not like this can't be done at scale (all custom domains served by Netlify use Let's Encrypt certs, for example).
The complexity of AWS versus bare metal depends on what you are doing. Setting up an Apache app server is just as easy on bare metal. Setting up highly available MySQL with hot failover is much easier on AWS, and a lot of businesses need a highly available database.
A high-availability MySQL server on AWS is about the same difficulty as on your own kubernetes cluster (I've got a toy one on one of those $100 N100 machines with 16G of memory).
Then, once a MariaDB operator is installed, you can just provision a MariaDB "kind": you kubectl apply a manifest specifying the database name, maximum memory, type of high availability (single primary or multi-master), and a secret reference, and there you go: a new database, ready to be plugged into other pods.
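Roughly what such a manifest looks like; this is an illustrative sketch loosely modeled on the mariadb-operator CRD, and the exact field names vary by operator and version:

```yaml
# Illustrative only: field names differ between operators and versions.
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: appdb
spec:
  rootPasswordSecretKeyRef:   # the secret reference mentioned above
    name: appdb-secrets
    key: root-password
  replicas: 3
  replication:                # single-primary HA; a Galera section gives multi-master
    enabled: true
  resources:
    limits:
      memory: 1Gi
  storage:
    size: 10Gi
```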
The N100 is my homelab, for playing. For instance, I have a kubernetes cluster running KubeVirt, which runs 5 VMs, which ... have a kubernetes installation (so I have multiple worker nodes doing a "distributed filesystem", all of which is resharing disks from the same SSD). My production servers are generally older Xeons with ECC RAM, also running kubernetes.
If your database has a hardware failure, you could lose all sales and customer data since your last backup, plus eat the cost of the downtime while you restore. I struggle to think of a business where that is acceptable.
Why are you ignoring the huge middle ground between "HA with fully automated failover" and "no replication at all"?
Basic async logical replication in MySQL/MariaDB is extremely easy to set up, literally just a few commands to type.
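For reference, on MySQL 8 the core of it looks roughly like this (hostnames and credentials are placeholders; it assumes GTIDs and binary logging are enabled on both servers, and MariaDB spells these commands slightly differently):

```sql
-- On the primary: create a user the replica will connect as.
CREATE USER 'repl'@'%' IDENTIFIED BY 'replica-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the replica: point it at the primary and start replicating.
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'primary.example.internal',
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = 'replica-password',
  SOURCE_AUTO_POSITION = 1;
START REPLICA;
```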
Ditto for doing failover manually the rare times it is needed. Sure, you'll have a few minutes of downtime until a human responds to the "db is down" alert and initiates failover, but that's tolerable for many small to medium sized businesses with relatively small databases.
That approach was extremely common ~10-15 years ago, and online businesses didn't have much worse availability than they do today.
I've done quite a few MySQL setups with replication. I would not call the setup "extremely easy", but then, I'm not a full-time DB admin. MySQL upgrades and general troubleshooting are so much more painful than on AWS Aurora, where everything just takes a few clicks. And things like blue/green deployment, where you replicate your entire setup to try out a DB upgrade, are really hard to do onprem.
Without specifics it's hard to respond. But speaking as a software engineer who has been using MySQL for 22 years and learned administrative tasks as-needed over the years, personally I can't relate to anything you are saying here! What part of async replication setup did you find painful? How does Aurora help with troubleshooting? Why use blue/green for upgrade testing when there are much simpler and less expensive approaches using open source tools?
When I worked at AWS, the majority of customers who thought they had database backups had not tested recovery. The majority of them could not recover. At that point, RDS sells itself.
The other huge middle ground here is developer competency and meticulousness.
People radically overestimate how competent the average company writing software is.
Putting aside the fact that replication and backups are separate operational topics -- even if a company has no competent backend engineers, there are plenty of good database consultancies that can help with this sort of thing, as a one-time cost, which ends up being cheaper than the ongoing markup of a managed cloud database product.
There's also a big difference between incompetent and inexperienced. Operational incidents are how your team gains experience!
Leaning on managed cloud services can definitely make sense when you're a small startup, but as a company matures and grows, it becomes a crutch -- and an expensive one at that.
My "Homeserver" with its database running on an old laptop has less downtime than AWS.
I expect most, if not 99%, of all businesses can cope with a hardware failure and the associated downtime while restoring to a different server, judging from the impact of the recent AWS outage and the collective shrug in response. With a proper RAID setup, data loss should be quite rare; if more is required, a primary + secondary setup with manual failover isn't hard.
Your girlfriend was somewhat right, though: if you click "Reject all", they cannot show you targeted ads and will show you generic ads instead. That's why I always accept the tracking cookies; for me, the more relevant ads are worth the privacy intrusion.
I always consent as well. They can show much more relevant ads when you consent to cookies. If I block cookies I get generic ads about stuff I don't care about.
Ah, I can't think of any level of relevance that would make me want to see ads. And in the areas where I do want suggestions, like recommendation systems, I've found they are better when based only on the content I am currently looking at, rather than on a profile built from my whole history.
The popup never lets you choose to see fewer ads. It's a common misconception among laypeople that blocking cookies means fewer ads, but of course that's not what happens. So you may as well get relevant ones.
Just today I got an ad for a new theater show in town I'd like to see; I might have missed it if it weren't for the targeted ad. Did they "manipulate" me into seeing it? I guess so. Do I mind? No, I'm capable enough to decide for myself.
A power supply can operate most efficiently if its power output is close to what it was designed to supply. Typically, a PoE switch has a large power budget to take into account the myriad devices that might be connected to it.
If you have one small PoE device connected to a large PoE switch, the switch's supply runs at a tiny fraction of its rated load, where conversion efficiency is typically poor; a non-PoE switch plus a small dedicated power supply for the device would be more efficient.
I agree that running servers onprem does not need to be hard in general, but I disagree when it comes to doing production databases.
I've done onprem highly available MySQL for years, and getting the whole master/slave dance to go just right during server upgrades was really challenging. On AWS, upgrading a MySQL server ("Aurora") really is just a few clicks. It can even do a blue/green deployment for you, where you temporarily get the whole setup replicated and in sync so you can verify that everything went OK before switching over. Disaster recovery (regular off-site backups and the ability to restore quickly) is also hard to get right when you have to do it yourself.
It's really hard to do blue/green on prem with giant expensive database servers. Maybe if you're super big and you can amortize them over multiple teams, but most shops aren't and can't. The cloud is great.