On an unrelated note, I opened this url in a new tab, took a quick look, then came back to read the comments.
A few minutes later I hear the fans on my PC start ramping up. Sure enough, I open the system monitor and see Chrome going crazy on my CPU. In Chrome, I open the task manager and sort by CPU. The entry at the top of the list reads:
> subframe: facebook[dot]com
Double-clicking the entry takes me to the open downdetector.com tab. After closing the tab, everything goes back to normal.
Does anyone know what downdetector/facebook would be doing that requires 100% of my CPU's resources?
PS: I have uBlock Origin installed. My CPU is an i9-12900K.
That is unrelated, but I've seen similar issues recently when logging into Canvas (you know, for school). It maxes out my CPU, and if I leave it long enough it crashes the tab due to memory... and it's not displaying anything special...
of all things to connect to a smart home, the locks to my house would be the absolute bottom of the list. i hate coming home when the power is out and cannot open the garage door. i couldn't imagine not being able to get in at all.
as for not getting out, any lock that can't be unlocked from the inside seems like something that should never be allowed to be made. ever.
The locks are purely additive in functionality; they lose nothing compared to a manual lock. There's still a key for manual unlocking, and still a twist knob on the inside.
But now I can see the status of the lock when I'm away from home, and I can lock it remotely if necessary. I can give out a keypad code to a house sitter, or let someone in remotely in real time.
All my 'smarthome' technology at home is this way. Nothing requires the Internet to work, and if the server fails then only the automation itself stops; all of the switches, locks, and such fall back to working like any old-school switch/lock/etc.
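As a concrete illustration of what "nothing requires the Internet" means in practice, here's a minimal sketch (not my actual setup), assuming the lock is bridged onto a local MQTT broker; the broker address and topic are made-up placeholders:

```python
# Minimal sketch: telling a bridged lock to engage via a *local* MQTT broker.
# The broker IP and topic are hypothetical placeholders. Nothing here leaves
# the LAN, so an internet or cloud outage only takes out the automations;
# the lock itself keeps working like an ordinary deadbolt.
import paho.mqtt.publish as publish

publish.single(
    topic="home/frontdoor/lock/set",  # hypothetical topic exposed by the bridge
    payload="LOCK",
    hostname="192.168.1.10",          # local automation server on the LAN
    port=1883,
)
```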
1. I'm not aware of any smart lock that cannot be locked and unlocked manually from the inside. This would violate fire code for residential structures in a lot of US jurisdictions.
2. Electronic locks in general are either line-powered or battery-powered. Line-powered locks are unusual in residential environments because of the higher complexity of installation (they're more often strikeplates than actuators, although in-door actuators are available).
Battery-powered locks take one of two approaches to resolving power issues. Most commonly on residential locks, there is still a key cylinder on the outside to manually lock and unlock. Less commonly on residential locks but more typically on commercial ones, there is no key cylinder but instead an external connector that allows the programming tool (very common on commercial systems) or a 9V battery (common on residential units) to be connected to provide external power.
3. Cloud-reliant smart locks are pretty rare for practical reasons. Most are still fully functional (often minus remote control via app, but not always) without internet service. Even most commercial systems fall back to cached credentials in the door controller when the connection to the access server is lost, although annoyingly some of the newer "smarter" systems don't.
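For what it's worth, the cached-credentials fallback in point 3 boils down to something like this sketch (purely illustrative; the names and the authorize() call are hypothetical, not any real vendor's API):

```python
# Illustrative sketch of a door controller falling back to cached credentials
# when the access server is unreachable. All names here are hypothetical.
def check_badge(badge_id, access_server, local_cache):
    try:
        allowed = access_server.authorize(badge_id)  # normal path: ask the server
        local_cache[badge_id] = allowed              # keep the local cache fresh
    except ConnectionError:
        # Server (or network) is down: decide from the last known answer.
        # The newer "smarter" systems that skip this step are the problem.
        allowed = local_cache.get(badge_id, False)
    return allowed
```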
I have the Nest/Yale lock without the key, and it definitely is not reliant on the Nest cloud being up (or the internet working) for access, only for remotely locking/unlocking or programming new codes. Do any locks actually fail to work with an already-programmed PIN code if the internet is out? That seems like a massive failure. Go to dinner, your router crashes, and you can't get back inside to fix it? Wow.
The only examples I know of are commercial systems, and specifically "cutting edge" commercial systems that are completely IP-based and cloud-managed. These are honestly kind of a disaster and I hope they don't catch on; they can be cheaper to install than conventional commercial systems (with ACU cabinets), but they achieve that cheapness by abandoning most of the reliability and security features of conventional designs. That said, some of these get installed fail-open (i.e., loss of management means they stay unlocked) for fire-egress reasons.
* Stores codes on the device itself and still unlocks with no internet connectivity
* Is a physical deadbolt inside that works without power
* If the lock runs out of battery power, you can "jump" it from the OUTSIDE with a 9-volt battery
* Allows me to auto-lock after X period of time, or at night
* Allows me to NEVER carry keys, ever. Or ever have to worry about keys.
* Allows me to manage multiple, time-boxed codes for people (housekeeper can't get in at midnight)
It's pretty damn great, honestly. And I stow a 9-volt in a flower box in case of emergency. (You still need to know the code, obviously.)
It's also absolutely pick-proof/bump-proof because it has no key at all. Not even a backup key.
It's the Yale x Nest lock and is really really nice.
My house still has a deadbolt like that, too. They were popular for a while, especially on doors with large glass panes (so that would-be intruders couldn’t smash the glass and unlock the door w/o a key).
I have Z-Wave locks and have no problem having them as part of my smart home. The 4x lithium AA batteries in them last over a year, they don't talk to a "cloud service" but instead to a physical server that I have total control over, and you can still use an old-fashioned key to unlock them.
Every smart or electronic lock I've used just augments the deadbolt and still has your physical key and its tumbler lock as a backup. I have an electronic deadbolt and have never gotten locked out even when its battery dies.
you ask that like you are challenging my idea of not ever using smart locks on my home. instead, you're bringing up another reason to support that decision.
so, which direction were you attempting to move the needle?
Batteries going out have never locked me out of my home.
Seems like you're putting way too much thought into something that probably won't happen, it being 2022 with notifications and all. I don't justify my choices with 0.1% chances.
You seem to put trust in technology to do important tasks, when they have problems securing a stupid light bulb.
There's a joke about it:
Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via alexa! I love the future!
Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.
I don't have a gun for that, but a baseball bat instead. That scene from Office Space made such an impression, and I swear one day I will recreate it on a piece of gear that steps out of line.
August 24th was the first time we saw exactly this issue in 7 years of heavy, multi-region, AWS use. So we put in place the ability to semi-automatically route around this more quickly, but we didn't fixate on it. Two data points is a line, however. (But maybe not yet a trend?)
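For anyone wondering what "semi-automatically route around this" can look like, here's a generic sketch (not necessarily our exact mechanism): drain the weighted DNS record for the impaired region so traffic shifts to the healthy one. The hosted zone ID, record name, and ALB hostnames are placeholders.

```python
# Generic sketch: shift a Route 53 weighted record so traffic drains away
# from an impaired region. Zone ID, record name, and ALB hostnames are
# placeholders, not a real environment.
import boto3

route53 = boto3.client("route53")

def set_region_weight(zone_id, record_name, region_id, alb_dns, weight):
    """Weight 0 drains a region; restoring the old weight brings it back."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": f"set {region_id} weight to {weight}",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "SetIdentifier": region_id,  # distinguishes the weighted records
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": alb_dns}],
                },
            }],
        },
    )

# Drain the impaired region; leave the healthy region's record untouched.
set_region_weight("Z123EXAMPLE", "api.example.com.", "us-west-2",
                  "uswest2-alb.example.com", weight=0)
```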
It's crazy to me how long it takes Amazon to notify users that there is even an issue. I think it took 15 minutes for them to acknowledge an issue, and that's a lot of time for our services.
That's because it's a political issue inside AWS. They have the technology to report it automatically, but there's strong pressure not to post "green-i", yellow, or red, because those things impact SLA payments.
So if there's any way they can spin it as not an outage they will try not to post it.
Can confirm this, it's political. It's always great seeing the "rationalization" for not doing the right thing. After the three outages last December, I can remember a certain person in a certain global outage Slack channel laughing at customers who weren't "resilient" enough to withstand the outage. Then AWS ran a resiliency campaign to document all those customers' risks, as if AWS's own poorly designed system wasn't the cause of people's inability to fail over. Glad I'm out of that place, it's super toxic these days.
I know many services that do this. And even if you fight with customer care for an RCA, they go silent and pass you around to other team members, to whom you have to explain the problem again and again.
You mean on the AWS/service-provider side? I've also participated in internal incident response on the service-provider side, and we likewise made SLA refund decisions based on actual customer impact, not our status page. But then again, we diligently update our status pages, so there's that.
It was 41 minutes from the first sign of trouble in our monitoring (an SQS queue backing up because the lambda it triggers wasn't getting invoked) to AWS posting the "informational" status.
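That kind of signal is cheap to alarm on. A rough sketch of the sort of alarm that catches it (queue name, SNS topic ARN, and thresholds are placeholders, not our actual config):

```python
# Sketch: page when the oldest message in an SQS queue sits unprocessed too
# long, i.e. the Lambda consumer has stopped being invoked. All names and
# thresholds are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-queue-consumer-stalled",
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": "orders-queue"}],
    Statistic="Maximum",
    Period=60,                # one-minute datapoints...
    EvaluationPeriods=5,      # ...breaching for five consecutive minutes
    Threshold=300,            # oldest message older than 5 minutes
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="breaching",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:oncall-page"],
)
```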
I continue to be irritated that services that consistently return errors are characterized as "increased error rates" or "increased latency". They seem to use those phrases for any kind of outage.
It's all networking issues since you're accessing them remotely, so of course it's "increased error rates" or "increased latency". When a piece of gear stops responding, the network tries to self-heal as designed, which causes those issues.
It's all in the messaging. I once worked at a place that was having bandwidth problems because of an "issue with an upstream vendor." The "issue" was we hadn't paid our bill in a couple months, so they disconnected us.
The moment I get a whiff of AWS issues, that's basically a wrap on the day. I'm not going to spend my time wondering why something isn't working today because somewhere down the chain it is undoubtedly a silently failing AWS service driving me mad. The whole house of cards comes tumbling down and things stop working that Amazon will swear up down left and right are 100% healthy.
Sadly this has become a monthly occurrence at this point. Monoculture, it turns out, is a really really bad idea.
People, I've just started setting up API GW and deploying my code to another region. Rest assured the outage will normalize itself before I finish migrating. ETA: 15min-ish
The first rule of publishing ETAs for updates is to triple the estimate. Be the hero that finishes before the published time vs. finishing after! The second rule of ETAs is don't specify a time.
I'm still amazed at the number of cults that place a specific date on the return of the savior, and even more by the people that go along with the rescheduling. The fact that I'm still waiting in Dallas for JFK's return is irritating. /s
Honest question, and frankly I'm scared to ask because it sounds stupid: if you have ~30 minutes of downtime on AWS every year and spend 3x the cost managing their infrastructure risk by deploying multi-AZ and multi-region (with AWS thereby pushing reliability management back onto the customer), is the value proposition of cloud just having some dude install a disk when one dies in a rack in your office? Maybe there is a perverse incentive for AWS to keep their AZs slightly unreliable so that customers spend 3x or 9x or whatever to make sure nothing ever goes down.
Like, what's wrong with on-prem? Lack of diesel generators? We could just have that without AWS: a bare-metal datacenter. Counter to most opinions, I think managing a server isn't that difficult. I'm a semi-professional/prosumer who has had no trouble managing servers for years on end with less downtime than a whole fricking datacenter.
There is more serious discussion and new revelations around this [1], [2]. Sometimes it is hard to ask questions about layers of abstractions that have built up and no one dares to think about getting rid of them.
Having managed a small data center in the past and having seen what it takes to manage multiple enterprise-scale data centers, the answer is "no, AWS does it better."
The company that I'm at right now has two engineers (including myself) who are building and maintaining a product that serves millions of streams a week. There's no fucking way we could have done this ourselves. One F5 would cost more than our entire AWS bill for two years, and we'd have to have at least four F5s if we wanted to try to match AWS. Plus the media encoders would cost a fortune.
For some things it's fine to head over to lowendbox.com and pick up a cheap VPS hosting package. We could theoretically build our stack on top of a bunch of VPSs, sync everything with rsync, etc. But then we'd be spending time building infrastructure (which is pretty much valueless) instead of our product.
If you're a large multinational, you basically face the same threats as Google/AWS/MSFT but there's no way you can hire, train and keep as good a production security team as them (well maybe better than Azure, but I digress).
You can't afford the upfront contractual / capital costs to maintain datacenters in every region.
And finally you can't afford the armies of lawyers and compliance engineering teams to try and reason about your data residency and things like GDPR and CCPA.
In other words, you're mostly paying for production security / privacy incident response, compliance (lawyers) and datacenters.
That's really interesting. We didn't get any user reports before our canaries fired, either. Now I have to think about what might explain the difference between your systems and ours. We're monitoring API Gateway health (more or less), because that's what we care about in this part of our infrastructure.
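Conceptually, that kind of canary is just a scheduled probe that records success and latency as custom metrics. A simplified sketch (the endpoint URL and metric names are placeholders, not our actual code):

```python
# Simplified canary sketch: call an API Gateway endpoint and publish
# success/latency as custom CloudWatch metrics. URL and names are placeholders.
import time
import urllib.request

import boto3

ENDPOINT = "https://abc123.execute-api.us-west-2.amazonaws.com/prod/health"
cloudwatch = boto3.client("cloudwatch")

def probe():
    start = time.monotonic()
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
            ok = 1.0 if resp.status == 200 else 0.0
    except Exception:
        ok = 0.0
    latency_ms = (time.monotonic() - start) * 1000

    cloudwatch.put_metric_data(
        Namespace="Canary/ApiGateway",
        MetricData=[
            {"MetricName": "Success", "Value": ok, "Unit": "Count"},
            {"MetricName": "Latency", "Value": latency_ms, "Unit": "Milliseconds"},
        ],
    )

if __name__ == "__main__":
    probe()
```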
[10:33 AM PDT] We are investigating increased error rates for invokes in the US-WEST-2 Region. We do not yet have a root cause, but are investigating multiple potential root causes in parallel. In addition, we are implementing filters on inbound traffic from a set of sources with recent significant traffic shifts, which may help mitigate the impact. We do not yet have a solid ETA, but will continue to provide updates as we progress.
[10:13 AM PDT] [10:10 AM PDT] We are investigating increased error rates for invokes in the US-WEST-2 Region.
> While we have seen improvements in error rates since 10:40 AM PDT, recovery has stalled and we do not have a clear ETA on full recovery. For customers that have dependencies on API Gateway and are experiencing error rates, we do not have any mitigations to recommend to address the issue on the customer side.
Yeah, you can select "edge" for some resources (API Gateway and Lambda are two that come to mind), which roughly means it's fronted by, and in Lambda@Edge's case replicated across, AWS's global edge network (CloudFront points of presence) rather than served from a single regional endpoint. AWS puts some restrictions on edge resources since the edge locations don't have full-region capacity or functionality.
Usually you pick this for stuff that is CDN-oriented and front a regional service with it.
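If you're creating the API yourself, the edge-vs-regional choice is just the endpoint type on the REST API. A hedged illustration with boto3 (the API names are placeholders):

```python
# Illustration of the two API Gateway endpoint types. Names are placeholders.
import boto3

apigw = boto3.client("apigateway", region_name="us-west-2")

# Fronted by CloudFront's global edge network.
edge_api = apigw.create_rest_api(
    name="my-edge-api",
    endpointConfiguration={"types": ["EDGE"]},
)

# Served directly from a single region's endpoint.
regional_api = apigw.create_rest_api(
    name="my-regional-api",
    endpointConfiguration={"types": ["REGIONAL"]},
)
```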
My Roomba isn't working!!!
hahahaha!