I agree, and the "not the first time" I think is key there. Setting expectations I think is crucial. For ours (5yo), we're clear about what he can watch and for how long. We control the device. "Two episodes before dinner" or so. Over time, he learns how this works. And we're not afraid to tell him that now isn't a good time for the TV.
It's not to say we never have any complaints over this, but when we do, it's rare and usually because something else is amiss (hungry, frazzled, tired).
But most instances it's like last night, where we were clear that we had time for two episodes of Tumble Leaf before dinner. At the end of the second one he announced "last one!" and got up off the couch as we picked up the remote.
5yo parent here. Agreed. And sometimes they just need to chill.
I agree with the overall sentiment. Too much screen time is bad. Kids need to get out and play, indoors or out. In our house, it's a lot of biking and playing with friends outside, Legos, Brio, Magnatiles, matchbox cars, or just crafts.
But sometimes they're frazzled, out of sorts, and would benefit from just being able to sit and chill.
So we'll put on something for him that we're comfortable with. Tumble Leaf, Blaze & The Monster Machines, Trash Truck, or the occasional Ghibli movie.
We do not give him a tablet or other portable device. He sits and watches on the couch, we set an expectation, and we stick to it.
I think controlling the device is important. Keeping the screen as something we manage, not something he carries around, keeps us in charge of the limits and helps him understand them. 90% of the time, we have no fuss.
And it's not bad. In moderation, TV can be just fine. Often it genuinely helps him settle and relax (especially if he's been really active and engaged all day), and as you said, it helps us get something done. Two episodes of one of his favorite shows is great for unwinding while we're making dinner.
But we keep time/episode limits as well, and that seems to keep things in balance along with the aforementioned things.
Seconding this. We've made Daddy Mix Tapes, "Mommy Reads Stories", and other compilations.
Adding to the plethora of good ideas here: My wife bought these hanging tabs to stick onto the cards[1], and then strings a keycable[2] through them so my son has groups of them together. Yoto makes folding binders for them as well, but the keycable method seems to be a bit easier for our 5yo to handle.
That would be quite the "budget" SMP build. The 366MHz "Mendocino" was based on the prior Pentium II core I believe. So quite the disparity in single-threaded workloads.
Yes, because there weren't really CPUs then that had double the performance.
Celerons usually shared the same core architecture as the Pentium line of the time, but often had a lower core clock, a slower bus/memory speed, and/or a smaller L2 cache.
Workloads have different constraints, however, and simply doubling cache, clock speed, or memory bandwidth doesn't necessarily double performance, especially when running more than one application at once. Keep in mind, this is the Windows 98/NT/2000 era here.
Symmetric multi-processing (SMP), however, could be of huge benefit, far more than simply doubling any of the above factors. Running two threads at once was unheard of on the desktop; that capability was usually reserved for higher-end parts like full-fledged Pentium workstations and Xeons (usually the latter). Abit's board gave users a taste of it on a comparative budget. Were two Celerons cheaper than a single fast CPU? Probably not in all cases (it depended on speeds). But the board gave users an option in between a single fast Pentium and an orders-of-magnitude-more-expensive professional workstation: a pair of cheap CPUs for desktop SMP. And that was in reach of far more people.
In short, two Celerons were probably more expensive than a single fast Pentium, but having SMP meant being able to run certain workloads faster, or more workloads at once, at a time when any other SMP system would have cost a fortune.
>Celerons usually shared the same core architecture as the Pentium line of the time, but often had a lower core clock, a slower bus/memory speed, and/or a smaller L2 cache.
This had an interesting side effect: Celerons of that era overclocked extremely well (stable 300 -> 500MHz+), due to the smaller and simpler on-die L2 cache relative to the Pentiums of the era, whose L2 cache was much larger but had to be off-die (and less amenable to overclocking) as a result.
An overclocked dual Celeron could easily outperform the highest-end Pentiums of the era on clock-sensitive, cache-insensitive applications, especially those designed to take advantage of parallelism.
IIRC the Celeron's cache was actually faster because it was on-die (it ran at full core clock, while the Pentium II's off-die cache ran at half speed); the Pentiums mitigated this by having more of it. It seemed like in games the faster cache performed better.
Another thing that helped the Celeron overclocking craze is that Intel badly damaged the brand out of the gate. The original Celerons had no L2 cache at all, performed terribly, and took a beating in PC reviews. So even though the A variants were much better, the stink still lingered.
The thing that probably helped the Celeron the most with overclocking, though, was that Intel gimped it with a 66MHz front-side bus. Since the multiplier was locked, raising the bus speed was the only way to push the CPU clock up, and that worked in your favor: buy a capable motherboard and run the bus at a stable 100MHz. A Celeron 300A with its locked 4.5x multiplier, for example, went from 300MHz at a 66MHz bus to 450MHz at 100MHz. Whereas you'd have a lot more system-wide problems trying to push a Pentium's 100MHz bus higher.
That was a bit of a two-edged sword: the heavily overclocked Celerons would benchmark extremely well, but be somewhat disappointing in actual applications due to the lack of cache space. It was right at the start of the era where cache misses became the defining factor in real-world performance. CPUs ran ahead of DRAM and it has never caught back up, even as per-core CPU performance plateaued.
Going from a single CPU to a dual CPU would, in theory, double performance _at best_. In other words, only under workloads that supported multithreading perfectly.
But in the real world, the perceived improvement was more than a doubling. The responsiveness of your machine might seem 10 or 100x better, because suddenly that blocking process is no longer blocking the new process you're trying to launch, or your user interface, or whatever.
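To put rough numbers on that distinction, here's a minimal back-of-the-envelope sketch in Python, assuming simple Amdahl's-law scaling and made-up illustrative latencies (none of these figures come from the thread):

    # Throughput view (Amdahl's law): a second CPU at best doubles performance,
    # and only if the workload parallelizes perfectly.
    def amdahl_speedup(parallel_fraction: float, n_cpus: int) -> float:
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_cpus)

    print(amdahl_speedup(1.0, 2))   # 2.0   -> perfect multithreading, the best case
    print(amdahl_speedup(0.5, 2))   # ~1.33 -> half the work is serial

    # Responsiveness view: an interactive task that used to queue behind a long
    # compute burst on the single CPU now runs immediately on the idle second CPU.
    # Illustrative numbers only: 10 ms of UI work stuck behind a 1000 ms burst.
    ui_latency_one_cpu = 1000 + 10   # ms, waiting for the hog to finish first
    ui_latency_two_cpus = 10         # ms, scheduled on the free CPU right away
    print(ui_latency_one_cpu / ui_latency_two_cpus)  # ~100x perceived improvement

Throughput barely changes, but the latency of the interactive task collapses, which is the "feels 10 or 100x faster" effect.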
Very interesting observation. Multicore systems have been fairly standard for the last 10+ years, and while you occasionally notice a misbehaving process hogging an entire core, it rarely has a visible impact on overall performance because there are still several idle cores, so the "hogs" mostly go unnoticed.
It's much rarer to see misbehaving multithreaded processes hog all of the cores. Perhaps most processes are not robustly multithreaded, even in 2025. Or perhaps multithreading is a sufficiently complex engineering barrier that highly parallelized processes rarely misbehave, since they are developed to a higher standard.
> Multicore systems have been fairly standard for the last 10+ years, and while you occasionally notice a misbehaving process hogging an entire core, it rarely has a visible impact on overall performance because there are still several idle cores, so the "hogs" mostly go unnoticed.
Except on Windows laptops, where, even though the computer is idle, your favourite svchost.exe will heat up the system and trigger thermal throttling.
100%. It's common for non-technical users to complain that their laptop is faulty because it gets hot and the battery drains very quickly. They have no concept of a runaway process stuck in a hard loop causing this.
Granted, I wasn't good at video games in general. And this one infuriated me, because I loved it. I could easily beat the first level, but then I crashed on carrier landing. This happened for years. I only ever saw the first level of this game.
Then one day, while staying at my elementary afterschool sitter's house, one of the kids there told me he played Top Gun as well. He could land, but wasn't very good at the rest of the game.
A plan was formed.
The next day, I brought the cartridge over and we settled in. I'd play the level, then hand him the controller, at which point he'd plant it on the deck. Rinse and repeat. Top Gun and Top Gun: The Second Mission didn't have too many levels (6 maybe?), and I don't think it took us too long to beat them. Neither one of us had seen much of either game, but working together, we beat both in a matter of hours.
I still look back on that as one of the few NES games I finished without codes or a Game Genie, just the help of a friend. =D
The blog says that failing to land on the carrier didn't actually fail the mission. Maybe you're misremembering? I just remember this game being so frustrating that I never replayed it.
Entirely possible I misremembered. As another commenter pointed out, it might be that I never got past mission 2. On further recollection, I think it was just Top Gun: The Second Mission that I owned. I remember playing both, but it might have been the second that vexed me the most.
I also clearly recall it failing the mission. Could there possibly have been different versions of the game? I've heard of Nintendo distributing slight variations of the same game by region back in those days; perhaps that's what's going on?
This Microsoft response reminds me of the 2018 Blizzcon event, where the Diablo Immortal developer challenged the audience with "Do you guys not have phones?" when the audience asked if the game was coming to PC.
Then - like now - it seemed that they couldn't understand that what they made was not what their customers wanted.
Don't forget the audience member who literally asked if it was a joke - and got cheers and applause from the rest of the audience. It was probably one of the biggest PR disasters in gaming history - and it does seem like the AI CEOs have been taking quite a bit of inspiration from it.
I think the intent is to provide a sense of pride and accomplishment when rivaling the same monetary dedication on the mobile platform comparable to the PC counterpart. You think you want bread, but you don’t: we are making subscription-based cake available which is better in every way.
I just got curious and googled how much Diablo Immortal made: "Diablo Immortal has achieved over $500 million in revenue in its first year". To put it bluntly, nobody internally cares about this PR disaster, because in the end they made a lot of money and proved the critics wrong.
My local state representatives just attempted this at our latest "town hall meeting" [i.e. to participate: scan the 8.5"x11" QR code taped to each chair].
I do not carry a phone, let alone one that scans QR codes... so instead I just provided 300 pound union dude commentary throughout our entire meeting. I definitely participated.
My thought exactly. In hindsight, Diablo Immortal is not a bad game, but that moment was really…not great. I guess the guy knew phone games were gaining momentum, but unfortunately that specific group of users at BlizzCon didn't want a phone game.
I think if they'd teased a phone game it would have been well received. From memory, the problem was they teased something much larger/more exciting (a new Diablo, not a Chinese ARPG reskin), so when the reveal hit, everyone was massively let down.
I guess this is kind of similar though: what is promised isn't, and likely won't be, delivered.
I actually think that 2018 was about the time when phone games had very much lost momentum and now are much less exciting than they were circa 2013. By 2018, both the potential and the limitations of phone games were very much understood by the audience. I'd argue that the top of the hype cycle of "maybe phone games will actually become really good" was 2010's Infinity Blade. Clash of Clans came out in 2012, and by 2018 phone games were fully devoid of momentum.
It also had the absolute worst monetization scheme in history, with the general sentiment being that they abused every dark pattern and made the experience horrible.
And yet Diablo Immortal made about a billion dollars, orders of magnitude more than the other Diablo games combined. Sounds like they knew exactly what their customers wanted.
The nuance there, I think, is that over half the players are reportedly new to the Diablo games, which suggests that their primary intended market was likely not existing Diablo players.
The kernel of it always seemed, to me, to be an extension of the Diablo 3 RMT auction house idea: they wanted a recurring revenue source from a franchise where they traditionally hadn't charged one, and in this case they squared that circle by appealing to users who were not existing players and so didn't have those norms in mind.
Yes, but the same holds for Windows: they know (or at least surmise) that they can make more money with AI features than without. That hypothesis has yet to be tested, but it doesn't mean they don't know what they're doing.
I do still strongly suspect Microsoft's endgame is to get people off Windows in the consumer space, and that most of what's going on right now with 11 is froth: features they think will make money in the near term or be useful in the non-consumer space, even as they drive consumers away, not features they sincerely think people will find a net gain in the consumer space.
So yes, I agree it's likely not primarily ignorance driving this.
I would assume it's because it's hideously expensive to maintain a full OS, with support and compatibility guarantees, against all the random horseshit consumer platforms throw at them, and they did the math and concluded they liked the profit margins better for purely online and non-consumer-targeted things, where they can more effectively constrain what is and isn't supported.
In particular, my guess is that they looked at their estimates for how much they could make off recurring revenue sources in desktop OSes, and their estimates for how the desktop market is changing with more younger users not using them or viewing them as legacy platforms, and decided they should pivot to primarily being a services provider, in much the same way they're aggressively trying to slap the Xbox branding on other things and getting out of the console market as fast as they can run.
Could be wrong, I don't work there, but usually my experience with companies that large making apparent missteps is that their goal isn't the one you think it is, and attempting to extract as much data as they can from desktop users really sounds like what you do when you're trying to squeeze the sponge before you throw it out.
It's true that the cloud is a big revenue driver for them, but I highly doubt they'd get rid of one of their flagship products, much less one used by billions of people as well as other corporations and governments.
I don't think they want to kill it entirely in the next 5 years, at least, but I do think they just want to stop supporting the non-enterprise users because that lets them significantly constrain what hardware and features they have to maintain, and all their big software offerings are very content being sold as cloud-based recurring revenue sources.
I would assume the earliest they'd consider anything more drastic is after 11 LTSC finally hits EOL, but I wouldn't speculate on whether it'll still look like a good idea by then.
It may sound wild, and it's certainly possible time will prove me wrong (I'm not an oracle), but the ongoing failures in basic Windows functions make it look like they're pulling significant investment from it as a reliable platform for general use, and their recent introduction of things like the Xbox handheld running Windows makes me suspect the goal is to constrain where it's still used and trim how much it costs to maintain.
Google made billions by scamming the world with "free email" and a search engine that would "never display ads" or "censor content".
It was "exactly what customers wanted". Microsoft Windows is just as successful....financially speaking.
Now, if I could just get teenagers to pay more money for a magic digital rune, besides extracting all that juicy marketing data from their phone app... Because more money = better corporation.
But it's unwise to make money at any cost. It can cost your corporation much more in the long term. I see MS Windows on the brink of irreparable reputation damage. I believe Elon Musk is starting to work on MacroHard, and people might flood into that system just out of spite for Microsoft.
At the time I got the feeling that the presenter genuinely believed players would at least not be completely disappointed by the announcement.
Here it's hard to understand Microsoft's surprise, when almost everything Windows has done for the last ten years has been despised by almost everyone. I was thinking the decision makers knew they were making unpopular moves but didn't care, since there's no way Windows can lose market share. I assume he must be faking surprise, but I'm not sure to what end, since staying silent and pushing forward would have drawn less press. Well, I guess bad publicity is still publicity.
Hi Warren! I'm Chris, and I'm with AWS, where among other things, I work on the Well-Architected Framework. Would you be willing to talk with us? You can reach me at [email protected]. Thanks!
Amazonian here. My views are my own; I do not represent my company/corporate.
That said...
We do our very best. But I don't know anyone here who would say "it can never happen". Security is never an absolute. The best processes and technology will lower the likelihood and impact towards 0, but never to 0. Viewed from that angle, it's not if Amazon will be hacked, it's when and to what extent. It is my sincere hope that if we have an incident, we rise up to the moment with transparency and humility. I believe that's what most of us are looking for during and after an incident has occurred.
To our customers: Do your best, but have a plan for what you're going to do when it happens. Incidents like this one from checkout.com show some of the positive actions that can be taken.
> But I don't know anyone here who would say "it can never happen". Security is never an absolute.
Exactly. I think it is great for people like you to inject some more realistic expectations into discussions like these.
An entity like Amazon is not - in the longer term - going to escape fate, but they have a bigger budget and (usually) much better internal practices, which rule out the kind of thing that would bring down a lesser org. But in the end it is all about the budget: as long as Amazon's budget is significantly larger than the attackers', they will probably manage to stay ahead. But if they ever get complacent or start economizing on security, the odds change very rapidly. Your very realistic stance is one of the reasons it hasn't happened yet: you are acutely aware that, in spite of all your efforts, you are still at risk.
Blast radius reduction by removing data you no longer need (and that includes the marketing department, who more often than not are the real culprit) is a good first step towards more realistic expectations for any org.