Hacker News | dynamite-ready's comments

This is interesting. At what level and team size? There's going to have to be a point where you just give in to the 'vibes' (whether it's from a human, or a machine), otherwise you become the bottleneck, no?

Better a bottleneck than constant downtime.

Only 4 or so people...so small, but that's how agile teams should be.

I think there's a place for this; it's not rare for one person to be the PR bottleneck like this, but I don't think it would work for me in either position; people should be able to be responsible for reviewing each other's work, imo. Incidentally, "Agile" with a capital A sucks and should die in a fire, but lowercase-a "agile" probably does by necessity mean smaller teams.

That's not always the intention behind that style of writing.

Often, when I'm communicating with someone who is dyslexic, or who uses English as a second (or even third or fourth) language, I make an effort to shorten sentences, and almost turn them into bullet points.

It's actually a good exercise for the person writing too. Less can indeed be more.


Some of the sites I maintain are fine. But I'm guessing it's just a matter of time?


The whole industry walked straight into the cloud service lock-in trap. How would we begin to wind back? I also think Docker is as much to blame as the bigger cloud vendors.


I don't think it wants to. Ask any on-call engineer or support tech how they felt when, after having their phone blow up at 1am because everything is falling apart, they found out that this was an AWS-wide outage.


Why is docker to blame?


It's subjective I guess, but I feel as though containerisation has greatly supported the large cloud vendors' desire to subvert the more common model of computing... Like, before, your server was a computer, much like your desktop machine, and you programmed it much like your desktop machine.

But now, people are quite happy to put their app in a Docker container and outsource all design and architecture decisions pertaining to data storage and performance.

And the likes of ECS, Dynamo, RedShift, etc. are a somewhat reasonable answer to that. It's much easier to offer a distinct proposition around that state of affairs than, say, a market based solely on EC2-esque VMs.

What I did not like, but absolutely expected, was this lurch towards near enough standardising on one specific vendor's model. We're in quite a strange place atm, where AWS-specific knowledge might actually have a slightly higher value than traditional DevOps skills for many organisations.

Felt like this all happened both at the speed of light, and in slow motion, at the same time.


Containers let me essentially build those machines, but sized to the actual requirements of a particular system. So instead of 10 machines I can build 1. I then don't need to upgrade that machine if my service changes.

It's also more resilient, because I can trash a container and load up a new one with low overhead. I can't really do that with a full machine. It also gives some more security by sandboxing.
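
For concreteness, a rough sketch of that trash-and-replace flow with the Python Docker SDK; the image name, container name, and port mapping below are made-up placeholders, not anything specific.

    import docker
    from docker.errors import NotFound

    client = docker.from_env()

    # Trash the old container, if one is already running...
    try:
        old = client.containers.get("my-service")
        old.stop()
        old.remove()
    except NotFound:
        pass

    # ...and load up a new one with low overhead.
    client.containers.run(
        "my-service:latest",
        name="my-service",
        detach=True,
        ports={"8080/tcp": 8080},
    )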

This does lead to laziness by programmers, accelerated by myopic management. "It works", except when it doesn't. It's easy to say you just need to restart the container, rather than figure out the actual issue.

But I'm not sure what that has to do with the cloud. You'd do the same thing self-hosting. Probably save money too. Though I'm frequently confused why people don't do both: self-host and host in the cloud. That's how you create resilience. Though you also need to fix problems, rather than just restart, to be resilient.

I feel like our industry wants to move fast but without direction. It's like we know velocity matters, but since it's easier to read the speedometer, we pretend speed and velocity are the same thing. So "fast and slow" makes sense: fast by magnitude of the vector, slow if you're measuring how much progress we make in the intended direction.


Containers have nothing to do with storage. They are completely orthogonal to storage (you can use Dynamo or RedShift from EC2), and many people run Docker directly on VMs. Plenty of us still spend lots of time thinking about storage and state even with containers.

Containers allow me to outsource host management. I gladly spend far less time troubleshooting cloud-init, SSH, process managers, and logging/metrics agents.


> Containers have nothing to do with storage. They are completely orthogonal to storage

Exactly.

And sure, you can use S3/Dynamo/Aurora from an EC2 box, but what would be the point of that? "Just get the app running in a container, and we can look into infrastructure later."

It's a very common refrain. That's why I believe Docker is strongly linked to the development of these proprietary, cloud-based models of computing, which place containerisation at the heart of an ecosystem that bastardises the classic idea of a 'server'.

The existence of S3 is one good result of this. IAM, on the other hand, can die in a dumpster fire. Though it won't...


> And sure, you can use S3/Dynamo/Aurora from an EC2 box, but what would be the point of that?

An easy API? Easy replication / failover / backups? I would absolutely use S3 even with EC2.

> IAM, on the other hand, can die in a dumpster fire.

I’m no great fan of AWS’s approach to IAM, but much of the pain is just the nature of fine-grained / least-privilege permissioning. On EC2 it’s more common to just grant broader permissions; IAM makes you think about least privilege, but you absolutely can grant admin for everything. And as far as a permissioning API goes, IAM is much cleaner/saner than Linux permissions.
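
To make the fine-grained point concrete, here's a rough sketch of what a least-privilege policy looks like via boto3; the bucket and policy names are made-up placeholders. The "grant admin for everything" shortcut is just attaching the AWS-managed AdministratorAccess policy instead.

    import json
    import boto3

    # A least-privilege policy: read-only access to one bucket, nothing else.
    # "example-bucket" and the policy name are placeholders.
    read_only_s3 = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="example-bucket-read-only",
        PolicyDocument=json.dumps(read_only_s3),
    )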


I don't see how Docker makes that worse.

Before Docker you had things like Heroku and Amazon Elastic Beanstalk with a much greater degree of lock in than Docker.

ECS and its analogues on the other cloud providers have very little lock in. You should be able to deploy your container to any provider or your own VM. I don't see what Dynamo and data storage have to do with that. If we were all on EC2s with no other services you'd still have to figure out how to move your data somewhere else?

Like I truly don't understand your argument here.


Containerization was basically a way to get rid of the problem of "it works on my machine", mainly the OS version and installed libraries. There are plenty of instances where program X will work on system A but not system B, while program Y works on system B but not A. Or X is supported on Redhat/Ubuntu/etc. but you can't or don't want to build from source.

Even if that is not a problem, you avoid having to install the kitchen sink on your host and make sure everything is configured properly. Just get it working in a container, build an image, and spin it up when you need it. Leaves the host machine fairly clean.
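
As a rough illustration (Python Docker SDK; the Dockerfile location and the "my-app:dev" tag are placeholders), "build an image and spin it up when you need it" amounts to roughly this:

    import docker

    client = docker.from_env()

    # Build an image from the local Dockerfile; nothing gets installed on the host.
    client.images.build(path=".", tag="my-app:dev")

    # Spin it up only when it's needed, and throw it away when it exits.
    client.containers.run("my-app:dev", detach=True, name="my-app", auto_remove=True)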

You can run a bunch of services as containers within a single host. No cloud or k8s needed. docker-compose is sufficient for testing or smallish projects.

Also, there is a security benefit: if the container is compromised, the problem is limited to that container, not the entire host.


I really don't remember voting on this web censorship issue, or ID cards, because both of those policies would have changed my vote, for sure.


Of course you don't remember - you don't vote on individual laws, you vote for politicians.

Politicians always lie/"break promises", so whatever they say before an election only has a loose correlation with what they actually do. Pay attention to their track record and vote accordingly next time.


What are you suggesting though? That there were clues to herald these changes? Tbh, I did look at my (new) MP's record on social justice and welfare... They were surprisingly out of Party character on a few issues. But certainly better than the other candidates in aggregate.

That's about the best we can all do.

Like many others, I would like to see a Swiss style vote on anything that gathers enough public support.

Apparently, there are a few sources that suggest a 'silent' majority supports the OSA. I think far-reaching laws like these (the assisted death law changes being another) should always be put before the public. That way at least, I can better understand my position, and consider whether I'm in the wrong.

(I feel the OSA should have forced the ISPs to add parental controls, and let each household manage their own patterns of consumption)


I suppose the missing part of the story is why they held back on pursuing this market.

At the time, the console market was wide open, with little innovation in terms of hardware, until Nintendo released the Switch.

Even now, I'd be quite happy to own a Valve branded, small form PC that plugs into a TV.

The Steam Link was a cop-out to me.


I still use my steam link all the time. I have it fiber back-hauled to the computer that it runs off of. I'm thinking of buying a couple more. Give one to my kid, and put one on a projector so I don't have to keep moving it back and forth.

Also, I think the device you're looking for is a Deck, because you can plug that into a television and use a wireless remote with it.

The steam link is the best remote display device I've ever used. No frame drops or artifacting, even on scenes that make the 3090 chug. It forwards controllers to the PC.

Now, the software, "big picture mode" and otherwise using a controller for PC input aren't the greatest, but you gotta figure it's me and like 2 other people still using this.

BTW airscreen/miracast/screen mirroring/"wireless display" all suck. If your TV has smarts built in that support Miracast, that in my limited experience is the second-lowest latency, then Fire TV devices, and then Roku and everything else. Roku is only usable for presentations or digital signage, unless it's first-party built in.

No idea why.


I have used an old sony bravia tv to cast COD from an android phone. Every time I connected, latency varied from 100 - 200 ms to 10 seconds. I had to reconnect several times until the latency was satisfactory.

Conclusion: it has been technically possible to cast to a tv for some years.


Yes, miracast/screencast or whatever was a thing prior to the Steam Link being released in November of 2015 (9 years and some change ago). Some of the current devices can actually do sub-100ms of input latency, but you can't be in the same room as the source device or you'll go crazy. The Roku stand-alones have the worst network and input latency; they're unusable for anything other than presentations.

The Firestick was <100ms network and barely noticeable input latency (on the order of ~20ms, so interframe lag at 60fps). The Steam Link is link latency plus some small constant (whatever the "frameserver" processing takes, call it 3ms but definitely <10ms), and that's both network and input.
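
A quick back-of-the-envelope check of those figures, in plain Python:

    # Back-of-the-envelope check of the latency figures quoted above.
    frame_time_ms = 1000 / 60   # one frame at 60fps is ~16.7 ms
    firestick_input_ms = 20     # quoted Fire TV input latency: roughly one frame
    steam_link_extra_ms = 3     # quoted Steam Link overhead: well under one frame

    print(round(frame_time_ms, 1))                       # 16.7
    print(round(firestick_input_ms / frame_time_ms, 1))  # ~1.2 frames
    print(steam_link_extra_ms < frame_time_ms)           # True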

When I said network, I meant both the network and the actual refresh of the screen. Watching a movie is one thing, but pushing "Y" and your character jumping should be "as instant as practicable", and the Steam Link is the only one I've used that manages that, so far.


I guess Proton/Wine/Linux gaming wasn't mature enough back then. Also, a handheld wasn't really an option, because there weren't any x86 chips available yet that were powerful enough, energy efficient, and cheap.


Proton didn't exist yet, IIRC. The Steam Boxen relied on devs/studios/publishers being able and willing to port their games to Linux natively. The result was a handful of AAA and indie games that put proper effort into ports that ran well, a modest but larger selection of AAA games sloppily ported (such that they often can't run on current distros without containerization or extensive library preloading shenanigans), and a deluge of indie shovelware / forever-in-early-access vaporware produced by clicking the "gib me Linux" button in Unity and calling it a day.

Unfortunately, while it was certainly a boom in the number of games Linux users could play (easily enough for me to ditch Windows entirely and game exclusively on Linux and consoles), it wasn't quite the critical mass needed for Steam Boxen to be a commercial success. Proton was the missing piece.


I'd wager the chances of that happening are much lower under this current administration. Surprised Biden didn't consider it though.


Neither Obama nor Biden would ever pardon Assange, because it would not win them any votes with the constituencies they are really after. At least Manning carried some trans votes.


This is it.

Well-organised and destructive conservatives across much of the western world have conspired successfully to nullify the positive effect of a word once used to elide wide-ranging ideas and discussions on the subject of social justice.

This is social media at its most galling.

Though alongside that, we now have a wider appreciation of a long list of historical crimes, and the longstanding effects of those transgressions.

In that sense, we have all become 'woke'.


> Whenever anyone tries to ban saying something that we'd previously been able to say, our initial assumption should be that they're wrong. Only our initial assumption of course. If they can prove we should stop say

For example, is a discussion about the defacement of the Black Hills a 'priggish' waste of time, or a valuable lesson about the real history of the United States?


Nick Clegg left just in time.


I think this comment comes across as slightly ignorant.

Many examples exist where a misguided belief in scientific 'facts' (usually a ropey hypothesis, with seemingly 'damning' evidence), or a straight up abuse of the scientific method, causes direct harm.

Suspicion is often based on facts or experience.

People have been infected with diseases without their knowledge.

People have been forced to undergo surgical procedures on the basis of spurious claims.

People have been burnt alive in buildings judged to be safe.

And look at Boeing.

No one has a problem with science itself per se. Everyone accepts the scientific method to be one of our greatest cultural achievements.

But whether one is "less bright" or super smart, we all know that we, as humans, are prone to mistakes, and just as prone to bending the truth to cover up those mistakes.

There's nothing plebeian about this form of suspicion. In fact, the scientific method relies on it (peer review).


> No one has a problem with science itself per se. Everyone accepts the scientific method to be one of our greatest cultural achievements

This is just wrong and naive. You can be happy if a majority of people agree to this.


As written, possibly. Taken literally, it's full of holes.

But if you're not a pedant, I essentially mean that most parents will vaccinate their children, many passengers will book flights, and a majority of the citizens in a population do respect their officials (etcetera).

And I think if you were to dig deeper than this, and test that hypothesis with... well... a scientific experiment of some kind, the result would probably support it.

But a good number of people will naturally question the outcome!

