The sad part of all of this was that the company that does this tried to poach me back in 2013 or 2014, but I was disgusted by the practice, so I refused to even interview.
Since then, I've made sure every single TV I own has this turned off (I go through the menus extensively to disable it, and search Google and Reddit if it's not obvious how, as was the case with Samsung).
I have an LG Smart TV, and just a week or two ago I was going through the settings and found Live Plus enabled, which means either they renamed the setting (and defaulted it to on), or they overrode my original setting.
Either way, I'm super annoyed. I want to switch to firewalling the TV and preventing any updates, but I need a replacement streaming device to connect to it.
Does anyone have recommendations for a streaming device (preferably one with HDMI-CEC that supports 4K and HDR)? I use the major streaming services (Netflix, Prime, Hulu, Apple TV) and Jellyfin.
I used to manage a team working on the news feed at Facebook (main page).
We did extensive experimentation, and later user studies to find out that there are roughly three classes of people:
1) Those that use interface items with text
2) Those that use interface items with icons
3) Those that use interface items with both text and icons.
I forget the details of the user research, but the mental model I walked away with is that these elements increase "legibility" for people, and by leaving either one off, you make that element harder to use.
If you want an interface that is truly usable, you should strive to use both wherever possible. When you can't, try to economize in ways that reduce mental load the least (e.g., group interface elements by theme and drop text or icons from only some of the elements in that group, so that some of the extra "legibility" carries over from the other elements in the group).
Sounds like me:
1. For new UI/tool, I depend on text to navigate.
2. Once I'm more familiar, I scan using icons first then text to confirm.
3. With enough time, I use just icons.
4. Why the ** do they keep moving it/changing the icons?
Hooray, actual user research and data!! This is what I tell all my clients: "We can speculate all day long, but we don't have to. The users will tell us the correct answer in about 5 minutes."
It's amazing that even in a space like this, of ostensibly highly analytical folks, people still get caught up arguing over things that can be settled immediately with just a little evidence.
This is the bane of my existence since icons aren't standardized* and the vast majority of people suck at designing intuitive ones. (*there are ISO standard symbols but most designers are too "good" to use them)
Cite cliché about the only intuitive user interface is the nipple; everything else is learned.
Having done my share of UI work, my value system transitioned from esthetics to practicalities. Such as "can you describe it?" Because siloed UI, independent of docs, training and tech supp, is awful.
All validated by usability testing, natch. It's hard to maintain strong opinions about UI after users shred your best efforts. Humiliating.
Having said all that... If stock icons work (with target user base), I'm all for using them.
I recently learned about the (ancient?) greek concept of amathia. It's a willful ignorance, often cultivated as a preference for identity and ego over learning. It's not about a lack of intelligence, but rather a willful pattern of subverting learning in favor of cult and ideology.
The only actual problem with cheating is leaderboards.
When you have accurate matchmaking, you will be playing against other players of a similar skill level. If you were playing in single-player mode, it wouldn't bother you that some of the players were better than others.
Whether the person you're playing against is as good as you because they have aim assist, while you have a 17g mouse and twitch reflexes shouldn't matter. You're both playing at equivalent skill levels.
The only reason it matters to anyone is that they want their skills to be recognized as better than someone else's. Take down the leaderboards, and bring back the fun.
Comments like this just make me upset to the point I can't form a coherent argument. It's so out-of-touch with reality and so completely ignores the core problem that I have to believe you're just fucking with us.
No, it is not fun to play against smurf accounts using hacks. They aren't doing it for the leaderboards, they actively downrank themselves to play against worse players!
And no, it's not fun to play against cheaters who are so bad at situational awareness their rank is still low, but who instantly headshot you in any tense 1v1 and ruin your experience.
And no, I actually do care that people are cheating in multiplayer games because it's not fair. Since when do we reward immoral fuckwits who can't or won't get better at the game?
Why don't we just start letting basketball players kick each other and baseball players tar their hands while we're at it. Who cares if the sanctity of the sport or competition is ruined - we're a community of apathetic hacks.
I play online FPS with friends for fun. I don't care about leaderboards, but I know people that do and don't want to take them away from them.
You can't have accurate matchmaking and allow cheating. People cheat for a variety of reasons; a lot of cheaters are just online bullies who enjoy tormenting other players. In low ELO lobbies, you would have cheaters who have top-tier aim activated only when they lose too much, making the experience very inconsistent.
Top tier ELO would revolve around how the server handles peeker advantage and which cheater has the fastest cheating software. It's an interesting technical challenge, but not a fun game. As soon as a non-cheating player is in view of a cheating player, the non-cheating player dies. That doesn't make for a fun game mechanic.
>"Top tier ELO would revolve around on how the server handle peeker advantage and which cheater as the fastest cheating software. It's an interesting technical challenge, but not a fun game"
Fun fact, this does exist. There used to be old CS:GO servers that were explicitly hack v hack, would make it abundantly clear to any new visitors that stumbled upon the servers that you would NOT have any fun without a "client", and it was a bunch of people out-config'ing each other. It was actually kinda cool for those people, it would NEVER be fun for anyone else.
Try playing Rust without anti-cheat and you will immediately change your tune. It isn't fun playing a game where you can lose everything to a guy who can cause bullets to bend around objects.
Yea, that's one game that's more fun to watch than play I will admit, so mostly I'm a "pro rust watcher with over 300 hours watching rust" (this is a bit of an in-joke, sorry) who sees the annoyance and lack of fun people have when they get destroyed by cheaters. I did play one wipe, and spent 25 hours over 3 days in the game, so I chose to quit right there instead of doing that on a regular basis.
Oh for sure, and even now I occasionally watch videos as background noise during work - but it's just not a fun game for the vast majority of times I log in. I'm about 175 hours in now; and for the one time a month I play, I can't be assed joining anything besides a 3x anymore with a wipe at least 10 days away, it's just too much sunk cost and wasted time even getting a 2x1 down in official or vanilla servers.
It's fun when I get something down (and fully use a starter kit, if a server has it) and have neighbors to dick around with, or there's not enough traffic to actually enjoy landmarks and underground, or occasionally when I meet someone that's down to team up or just talk on mic while pacing around one of our bases. But when that doesn't happen, I'm just not having fun haha
Not just single player. Even in competitive multiplayer, a lot of the complaints about "cheating" are actually complaints about matchmaking, and "cheating" is a giant red herring (griefing is a different matter, of course, though it gets lumped into the umbrella term of "cheating"). But trying to explain this is typically like pissing against the wind, because people already believe in the existing status quo (no matter how irrational it is), and no one wants to change their beliefs unless doing so obviously and immediately benefits them in the short term.
At least in the world of chess (which has the OG matchmaking system, ELO), cheating is genuinely a problem.
The problem is that it doesn't matter how good you are. You will not beat a computer. Ever. Playing against someone who is using a computer is just completely meaningless. Without cheating control, cheaters would dominate the upper echelons of the ELO ladder, and good players would constantly be running into them.
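For anyone who wants to see why that plays out the way it does on a rating ladder, here's a minimal sketch of the standard Elo update (the K-factor and the win streak below are illustrative numbers I picked, nothing more):

    # Toy Elo update (illustrative only; real sites use variants like Glicko).
    def expected_score(r_a, r_b):
        """Probability that player A beats player B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def update(r_a, r_b, score_a, k=32.0):
        """Return A's new rating after a game (score_a: 1 win, 0.5 draw, 0 loss)."""
        return r_a + k * (score_a - expected_score(r_a, r_b))

    # An engine-assisted "player" effectively wins every game, so their rating
    # keeps climbing until they sit at the top of the ladder, exactly where the
    # strongest honest players get matched against them.
    rating = 1500.0
    for opponent in (1500, 1700, 1900, 2100, 2300, 2500):
        rating = update(rating, opponent, score_a=1.0)
    print(round(rating))  # roughly 1660 after just six wins, and still climbing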
> Without cheating control, cheaters would dominate the upper echelons of the ELO ladder, and good players would constantly be running into them.
...and, even worse, if they ever got to the very top of the ladder and started only playing against other cheaters, then they'd actually weaken their cheats so that they could drop down in ranking to play against (and stomp) non-cheaters again, and/or find creative ways to make new accounts.
Cheaters ruin games. The fact that the GP is so deluded as to claim that "The only actual problem with cheating is leaderboards." suggests that they've never actually played a competitive matchmade game on a computer before.
I worked at a company once that didn't use any anti cheat. I once asked why, and they said the matchmaking system solved the problem for them. The matchmaking was good enough so cheaters only ever played with other cheaters, and it kept the numbers up.
Honest players never really complained, so I guess it worked for them.
This is fine if you are low level, because the cheaters will be too good to play in the low level games.
If you are in the higher skill levels, you might end up playing too many cheaters who are impossible to beat. If the cheat lets you be better than the best human players, the best human player will end up just playing cheaters.
> If you are in the higher skill levels, you might end up playing too many cheaters who are impossible to beat.
It's almost kind of worse than this. If you are in higher skill levels, you end up getting matched with cheaters who lack the same fundamental understanding of the game that you do and make up for it with raw mechanical skill conferred by cheats.
So you get players who don't understand things like positioning, target priority, or team composition, which makes them un-fun to play with, while the aimbots and wallhacks make them un-fun to play against.
And as a skilled player, you are much better equipped to identify genuine cheaters in your games. Whereas in low skill levels cheaters may appear almost indistinguishable from players with real talent so long as they aren't flat out ragehacking with the aimbot or autotrigger.
I've been using an 8k 65" TV as a monitor for four years now. When I bought it, you could buy the Samsung QN700B 55" 8k, but at the time it was 50% more than the 65" I bought (TCL).
I wish the 55" 8k TVs still existed (or that the announced 55" 8k monitors were ever shipped). I make do with 65", but it's just a tad too large. I would never switch back to 4k, however.
Average bitrate from anything that isn't a Blu-ray is not good even for HD, so you don't benefit from more pixels anyway. Sure, you are decompressing and displaying 8K worth of pixels, but the actual resolution of your content is more like 1080p, especially in the chroma channels.
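To put rough numbers on that (the stream bitrates below are my own ballpark assumptions, and codec efficiency differences mean the comparison is only directional):

    # Rough bits-per-pixel comparison; actual streaming bitrates vary a lot
    # by service, codec, and scene complexity.
    def bits_per_pixel(bitrate_mbps, width, height, fps=24.0):
        return bitrate_mbps * 1_000_000 / (width * height * fps)

    print(round(bits_per_pixel(30, 1920, 1080), 3))  # ~0.603 bpp, 1080p Blu-ray
    print(round(bits_per_pixel(15, 3840, 2160), 3))  # ~0.075 bpp, typical 4K stream
    print(round(bits_per_pixel(50, 7680, 4320), 3))  # ~0.063 bpp, hypothetical 8K feed
    # The pixel count grows far faster than the bit budget, so the extra
    # pixels mostly carry upscaled detail rather than real information.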
Normally, games are the place where arbitrarily high pixel counts could shine, because you could literally ensure that every pixel is calculated and make real use of it, but that's actually stupidly hard at 4K and above, so NVIDIA just told people to eat smeary, AI-upscaled garbage instead, throwing away the entire point of having a beefy GPU.
I was even skeptical of 1440p at higher refresh rates, but bought a nice monitor with those specs anyway and was happily surprised with the improvement, but it's obvious diminishing returns.
This is exactly why 8K TVs failed in the market, but the point here is that your computer desktop is _great_ 8K content.
The TVs that were sold for sub-$1000 just a few years ago should be sold as monitors instead. Strip out the TV tuners, app support, network cards and such, and add a DisplayPort input.
Having a high-resolution desktop that basically covers your useable FOV is great, and is a way more compelling use case than watching TV on 8K ever was.
HDMI 2.1 is required, and the cables are not too expensive now.
For newer GPUs (NVIDIA 3000-series or equivalent) and high-end (or M4+) Macs, HDMI 2.1 works fine, but Linux drivers have a licensing issue that makes HDMI 2.1 problematic.
It works with certain NVIDIA drivers, but I ended up getting a DP-to-HDMI 8K cable, which was more reliable. I think it could work with AMD and Intel as well, but I haven't tried.
In my case I have a 55" and sit a normal monitor distance away. I made a "double floor" on my desk with a cutout for the monitor, so the monitor legs sit some 10cm below the actual desk and the screen starts basically at the level of the desk surface. The gap between the desk panels is nice for keeping USB hubs, drives, headphone amps and such. And the Mac mini.
I usually have reference material windows upper left and right, the coding project upper center, the coding editor bottom center, and 2 or 4 terminals, Teams, Slack, and mail on either side of the coding window. The center column is about twice as wide as the sides. I also have other layouts depending on the kind of work.
I use layout arrangers like fancyzones (from powertoys) in windows and a similar mechanism in KDE, and manual window management on the mac.
I run double scaling, so I get basically 4K desktop area but at retina(-ish) resolution. 55" is a bit too big, but since I run doubling I can also read stuff in the corners. A 50" 8K would be ideal.
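The back-of-the-envelope arithmetic (my own numbers, nothing official):

    import math

    # Pixel density and effective desktop size of an 8K panel at 2x scaling.
    width_px, height_px = 7680, 4320
    for diagonal_in in (50, 55, 65):
        ppi = math.hypot(width_px, height_px) / diagonal_in
        print(f'{diagonal_in}": {ppi:.0f} PPI physical, '
              f'{width_px // 2}x{height_px // 2} logical desktop at 2x scaling')
    # 50" ~176 PPI, 55" ~160 PPI, 65" ~136 PPI; in every case the 2x-scaled
    # desktop gives you a 3840x2160 "4K" worth of logical area.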
Basically the biggest problem with this setup is it spoils you and it was only available several years ago. :(
I run a ttyd server to get a terminal over HTTPS, and I have used carbonyl over that to get work done. That setup only gives you a web browser (the point being access to resources not exposed via the public internet), so having full GUI support is very useful.
I looked it up, and it turns out you're right. Both the iPhone 17 and the iPhone Air use USB2.
USB3 was introduced in 2008 (!!!). That is 17 years ago.
I already wasn't interested in this tech, to be fair, but I've had to support family phones synchronizing/backing up over the cable, and even at full theoretical speed for the transfer, we're talking over an hour vs just under 7 minutes. Which, considering the flash most likely supports the read in under a minute, is crazy.
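Back-of-the-envelope, assuming a hypothetical 256 GB backup moved at theoretical line rates (my own numbers, just to show the scale of the gap):

    # Transfer time at theoretical line rates for a hypothetical 256 GB backup.
    GB = 1_000_000_000
    backup_bytes = 256 * GB

    links = {
        "USB 2.0 (480 Mbit/s)": 480e6 / 8,      # bytes per second
        "USB 3.x Gen 1 (5 Gbit/s)": 5e9 / 8,
    }
    for name, bytes_per_s in links.items():
        print(f"{name}: ~{backup_bytes / bytes_per_s / 60:.0f} min")
    # USB 2.0: ~71 min, USB 3 Gen 1: ~7 min -- and real-world USB 2 throughput
    # sits well below the theoretical 60 MB/s, so it's even worse in practice.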
Which is literally what Apple announced in this video:
"and the 2x telephoto has an updated photonic engine, which now uses machine learning to capture the lifelike details of her hair and the vibrant color of her jacket"
"like the 2x telephoto, the 8x also utilizes the updated photonic engine, which integrates machine learning into even more parts of the image pipeline. we apply deep learning models for demosaicing"
They've been using that terminology for like a decade. They take multiple photos and use ML to figure out how to layer them together into a final image where everything is adequately exposed, and to apply denoising. Google has done the same thing on Pixels since they've existed.
That's very different from taking that final photo and then running it through generative AI to guess what objects are. Look at the images in that article. It made the stop sign into a perfectly circular shiny button. I've never seen artifacting like that on a photo before.
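For intuition about why multi-frame stacking is a different beast, here's a toy simulation of the idea; it's a cartoon of the concept, not Apple's or Google's actual pipeline (which also handles alignment, exposure fusion, tone mapping, and learned denoising):

    import numpy as np

    # Averaging N aligned noisy frames cuts noise roughly by sqrt(N).
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 1.0, size=(64, 64))            # "true" scene luminance
    frames = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(9)]

    single = frames[0]
    stacked = np.mean(frames, axis=0)

    print(f"noise, 1 frame:  {np.std(single - scene):.3f}")   # ~0.100
    print(f"noise, 9 frames: {np.std(stacked - scene):.3f}")  # ~0.033, about a third
    # Every output pixel is still derived from real captured light; nothing is
    # hallucinated, which is the difference from generative "enhancement".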
I used to work at Meta (back when it was just Facebook), and I pioneered a similar effort back in 2016-2017-ish. Now, I don't know anything about the current version (which seems to offer cloud processing as well), but when I was there, the effort was entirely local to the phone.
We had caffe2 running a small model on the phone to try and select and propose photos for the user to share.
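The general shape of that kind of on-device flow looked roughly like the sketch below; every name and threshold here is a hypothetical stand-in, not actual Facebook code:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Photo:
        path: str
        score: float = 0.0  # "shareworthiness" from the small on-device model

    def suggest_photos(camera_roll: List[Photo],
                       score_fn: Callable[[Photo], float],  # local model inference
                       threshold: float = 0.8,
                       max_suggestions: int = 5) -> List[Photo]:
        """Rank photos locally and return a handful to *propose* to the user.

        Nothing leaves the device; the user explicitly confirms before sharing.
        """
        for photo in camera_roll:
            photo.score = score_fn(photo)
        candidates = [p for p in camera_roll if p.score >= threshold]
        return sorted(candidates, key=lambda p: p.score, reverse=True)[:max_suggestions]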
We were trying to offer an alternative sharing model that both made sharing easier, while offering the user the controls that made them feel comfortable with photo suggestions. (for those who never noticed, we launched Moments, which was an app that allowed automatic private sharing of your camera roll with a close selection of friends and family, but the experience wasn't great because it was centered around group events and sharing photos with the people who were there, not connecting with the ones who weren't)
Ultimately, it was scrapped, because we were paranoid that we hadn't come up with a user experience that made it clear that this was happening only on the phone (I think we even tried a notification model), or that we'd accidentally surface someone's boudoir photos, and we were too worried about the kind of knee-jerk reactions that you're seeing in this thread.
I'm guessing that someone at Meta either had a more successful go at the UX, or they feel that the opinions about AI have shifted enough that there will be less fear.
Upon reading the article, it looks like there are two options, one which is local-only, and similar to what we built, and a second one which tries to make better suggestions using online, and that is only enabled after asking the user.
I would suspect that the cloud processing version also runs a local model to attempt to filter out racy photos before sending them to the cloud, but I don't know for sure.
I think the article is a bit disingenuous in its presentation, though it's possible that I'm biased because I know how a similar thing was built; it definitely sounds like fear-mongering to me.
As to MapReduce, I think you're fundamentally mistaken. You can talk about map and reduce in the lambda calculus sense of the term, but in terms of high performance distributed calculations, MapReduce was definitely invented at Google (by Jeff Dean and Sanjay Ghemawat in 2004).
Dean, Ghemawat, and Google at large deserve credit not for inventing map and reduce—those were already canonical in programming languages and parallel algorithm theory—but for reframing them in the early 2000s against the reality of extraordinarily large, scale-out distributed networks.
Earlier takes on these primitives had been about generalizing symbolic computation or squeezing algorithms into environments of extreme resource scarcity. The 2004 MapReduce paper was also about scarcity—but scarcity redefined, at the scale of global workloads and thousands of commodity machines. That reframing was the true innovation.
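To be concrete about what was and wasn't new, here's the map/reduce pattern itself as a single-process toy word count (my own sketch); Google's 2004 contribution was not these two functions but the runtime around them: sharding the map phase across thousands of commodity machines, shuffling by key, and handling stragglers and failures transparently.

    from collections import defaultdict
    from itertools import chain

    def map_phase(document):
        # Emit (key, value) pairs, here one count per word occurrence.
        for word in document.split():
            yield (word.lower(), 1)

    def reduce_phase(pairs):
        # Combine all values that share a key.
        counts = defaultdict(int)
        for key, value in pairs:
            counts[key] += value
        return dict(counts)

    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    result = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
    print(result["the"])  # 3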
My main reference is the head of computing at CERN, who explained this to me. He gave some early examples of ROOT (https://en.wikipedia.org/wiki/ROOT) using parallel processing of the ROOT equivalent of SSTables.