FragmentShader's comments

I'd rather have MS taking 8GB away from my 1TB disk than having Linux take away at least 6 hours of my life per week because something broke or because I have to jump through hoops across the entire internet to install or configure some shit that could've been a single click in Windows.


This isn't 2010 any more. If you're doing any dev work at all native linux will cause way less headaches than WSL/mingw or whatever. Even the old folks in my family are all using linux these days because everything they do is in a browser and it's easier for them than windows. That and microsoft constantly changing things out from under you and reverting settings you thought you picked for your own computer.


> This isn't 2010 any more.

I used Linux Desktop for 2 years, around 2018, for IT studies; it was mandatory. We were about 20 students, and there was a new Linux-related complaint, time sink, or workaround every day. Nobody got their degree and thought "Y'know what? I'm gonna use Linux at home!"

I still use Linux for servers nowadays, though.

> If you're doing any dev work at all native linux will cause way less headaches than WSL/mingw or whatever.

I use visual studio and I don't dev for linux, so I don't have this problem.

> Even the old folks in my family are all using linux these days because everything they do is in a browser and it's easier for them than windows.

If all you do is use a web browser, you might as well just use a chromebook.

> That and microsoft constantly changing things out from under you and reverting settings

My settings are never touched, but I agree they sometimes change stuff in a way that bothers me. Like, removing file explorer functionality in W11 and remaking it months later. But they usually make up for it by adding other cool features. Such as native (but slow) unzipping, file explorer tabs, power toys and so on.


Yes, Microsoft will not touch your settings if you haven't disabled all the anti-features they force upon users.

For the rest of us, they are relentless. They won't take no for an answer. They won't hesitate to take their users' precious time as hostage. During each update, they coerce users to enable their numerous spyware with full screen nags riddled with dark patterns. They revert explicit opt-outs. They remove user choice altogether if things don't go their way. This has been reported in the media so many times. You can't just pretend that this hasn't happened.

When is enough enough? Windows bombards you with ads, installs junkware users never asked for, forces you to use Edge, collects keyboard input, records your screen, and outright steals your email. On top of that, their UX is far worse than it was 2 decades ago.

In contrast, desktop Linux has improved dramatically. Unzipping and file browser tabs? They've been around since forever. They're only cool new features on Windows.


> I use visual studio

That's not a solution. Visual Studio isn't portable, it's very close to worthless software. You can use VS, my company does too. But the baggage you take with it is pretty extreme. And now we're still maintaining COM stuff (sigh...)

> If all you do is use a web browser, you might as well just use a chromebook

Ironically, chromebooks have a full Linux environment and you can install graphical applications and whatnot. Also they run android apps.

> Such as native (but slow) unzipping, file explorer tabs, power toys and so on

Those are "neat" features, but IMO Windows 11 is still not a good desktop environment. KDE has significantly more features, is much more customizable, and also much more consistent in look and feel. Windows USED to at least be polished. Now there's 3 different setting apps and also Computer Management exists and none of them even look like they belong on the same system.


> cool features. Such as native (but slow) unzipping, file explorer tabs

Linux circa 2005. I will admit that the file explorer ads are a nice feature. Maybe one day ubuntu will figure out how to do this.


> If all you do is use a web browser, you might as well just use a chromebook.

Which is Linux.


Linux without the shitty aspects of having to use desktop Linux.

The only downside is how firmly it is wedded to Google's cloud ecosystem.


We know MacOS is Unix. We know Linux server runs most of the internet. We know Android is Linux. Thanks for your input.

We were obviously talking about the classic Linux Desktop distros, and based on the whole conversation you should know that. If ChromeOS and Android were categorized as Linux distros, then nobody would hate/love Linux Desktop per se.


> We know MacOS is Unix.

We're obviously talking about Linux, not "Unix", and based on the whole conversation you should know that. Thanks for your input.

> If ChromeOS and Android

One is traditionally used on a laptop/desktop, one on a phone, not sure why you're conflating the two.

And the point is, if you are "just using a web browser", desktop linux has done this for decades. And Chrome OS is definitely a desktop linux distro, why do you not consider it to be one?


Yeah, I know Linux can do gaming, dev and is stable and good. But there is always something which needs a hack / fix, different lib or something.

"Normal" people just take the convenience of a working system out of the box than having a privacy respecting OS.

Personal anecdote, installed Linux Mint on an old Laptop of a friend. First thing, we open Firefox and go to YouTube, hard freeze of the whole OS. Not a single key worked anymore.

Not the best advertisement for the stable and better OS Linux I was showing him.


How certain are you it was a problem with Mint and not an issue with the laptop itself?


> but when you create your own engine you are _fiercely_ aware of _everything_ it can and can't do

The problem is that if you start gamedev by making engines, then you aren't aware of what you need to do.

To give an example: if you implement font rendering and it looks blurry/pixelated, what now? Oh, and a simple 2D game takes 8 seconds to load, wonder why?

Meanwhile, if you have ever made a Unity game, chances are you already know the keywords "SDF" and "texture compression", because you dug through an already big engine to optimize your game and accidentally learned what features a game needs.
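(For anyone wondering, the SDF trick is basically one smoothstep in the fragment shader: the glyph atlas stores the distance to the glyph edge instead of raw coverage, so text stays crisp at any scale. A rough sketch of the idea, written the way you'd hand the WGSL to WebGPU from TypeScript; the binding names are made up, not any engine's actual setup:

    // Sketch only: SDF text fragment shader, as passed to
    // device.createShaderModule({ code: sdfFragment }). Binding names are hypothetical.
    const sdfFragment = /* wgsl */ `
      @group(0) @binding(0) var atlas: texture_2d<f32>; // single-channel distance field
      @group(0) @binding(1) var samp: sampler;
      @fragment
      fn fs(@location(0) uv: vec2f) -> @location(0) vec4f {
        let d = textureSample(atlas, samp, uv).r;    // 0.5 = glyph edge
        let w = fwidth(d);                           // screen-space anti-aliasing width
        let alpha = smoothstep(0.5 - w, 0.5 + w, d);
        return vec4f(1.0, 1.0, 1.0, alpha);          // crisp at any zoom, no blur
      }
    `;

A real text renderer needs more than this (multi-channel SDFs, gamma, outlines), but that one smoothstep is the difference between blurry and crisp.)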


> What now?

> wonder why?

What now is that you have a fantastic opportunity to learn some topics in depth. Using Unity is also no guarantee that you'll come across those terms. And even if you do, if the Unity solution is just to check the correct boxes, you're not exactly better off from a knowledge point of view.

I'm not advocating for not using Unity, but I am advocating for learning, increasing the depth of your understanding, and just a general approach of curiosity and problem solving.


This was my experience.

I dove into writing a niche game engine and stumbled over every hurdle that modern game engines solve.

Been learning Godot lately, and going back to writing an engine, I'm confident I could trivially solve a lot of those hurdles.

Additionally, if I'm trying to make a basic editor, I can now see what is tenfold easier to do graphically (animations) and what I don't mind doing in code.


Writing an engine has also made me so much more aware of the implicit decisions of other engines and the "why" behind them. Largely, I've come out of the process with much more respect for what they've done to get to where they're at.


I will say that you definitely shouldn't start gamedev by making your own engine. To your point, you need to learn the language of game development to actually understand how game development works and is different from other fields of software development.


Is there any software that can generate CSS visually by combining nodes in a graph, akin to shader graphs in game engines? (https://unity-connect-prd.storage.googleapis.com/20200902/le...). It'd be so useful for stuff like this.


> WebGPU is super slow on GPU and all the official benchmarks only care about CPU performance.

omg I thought I was the only one that found that. I tried webgpu (On a native context) and it was slowwwww. Only 10k non-overlapping triangles can bring my RTX GPU to its knees. It's not the shader because it was only a color. It's not the overlapping (And I tried a depth prepass as well). It's not the draw calls. The API is slow, straight up.

In fact, you can try to open a WebGPU demo in your browser and check the GPU usage in the Task Manager. Close it, open a random webgl Unity game and you'll see how much a single WebGPU triangle takes compared to a full-fledged game.

On my computer, the average Unity game with shadows, shaders 'n stuff takes 5% GPU and a simple WebGPU demo takes 7%.


> Only 10k non-overlapping triangles can bring my RTX GPU to its knees

Your benchmark doesn't match the experience of people building games and applications on top of WebGPU, so something else is probably going on there. If your benchmark is set up well, you should be limited by the fill rate of your GPU, at which point you should see roughly the same performance across all APIs.

> On my computer, the average Unity game with shadows, shaders 'n stuff takes 5% GPU and a simple WebGPU demo takes 7%.

GPU usage isn't a great metric for performance comparisons in general because it can actually imply the inverse depending on the test case. For example, if the scenes were exactly the same, a lower GPU usage could actually suggest that you're bottlenecked by the CPU, so you can't submit commands fast enough to the GPU and the GPU is sitting idle for longer while it waits.


> Your benchmark doesn't match the experience of people building games and applications on top of WebGPU

Here's an example of Bevy WebGL vs Bevy WebGPU:

I get 50 fps on 78k birds with WebGPU: https://bevyengine.org/examples-webgpu/stress-tests/bevymark...

I get 50 fps on 90k birds with WebGL: https://bevyengine.org/examples/stress-tests/bevymark/

So you can test the difference between them with technically the same code.

(They can get 78k birds, which is way better than my triangles, because they batch 'em. I know 10k drawcalls doesn't seem good, but any 2024 computer can handle that load with ease.)

Older frameworks will get 10x better results, such as Kha (https://lemon07r.github.io/kha-html5-bunnymark/) or OpenFL (https://lemon07r.github.io/openfl-bunnymark/), but they run at lower res and this is a very CPU-based benchmark, so I'm not gonna count them.

> be limited by the fill rate of your GPU

They're 10k triangles and they're not overlapping... There are no textures per se. No passes except the main one, with a 1080p render texture. No microtriangles. And I bet the shader is less than 0.25 ALU.

> at which point you should see roughly the same performance across all APIs.

Nah, ANGLE (OpenGL) does just fine. Unity as well.

> a lower GPU usage could actually suggest that you're bottlenecked by the CPU

No. I have yet to see a game on my computer that uses more than 0.5% of my CPU. Games are usually GPU bound.


> Here's an example of Bevy WebGL vs Bevy WebGPU

I think a better comparison would be more representative of a real game scene, because modern graphics APIs are meant to optimize typical rendering loops and might even add more overhead to trivial test cases like bunnymark.

That said though, they're already comparable which seems great considering how little performance optimization WebGPU has received relative to WebGL (at the browser level). There are also some performance optimizations at the wasm binding level that might be noticeable for trivial benchmarks that haven't made it into Bevy yet, e.g., https://github.com/rustwasm/wasm-bindgen/issues/3468 (this applies much more to WebGPU than WebGL).

> They're 10k triangles and they're not overlapping... There are no textures per se. No passes except the main one, with a 1080p render texture. No microtriangles. And I bet the shader is less than 0.25 ALU.

I don't know your exact test case so I can't say for sure, but if there are writes happening per draw call or something then you might have problems like this. Either way your graphics driver should be receiving roughly the same commands as you would when you use Vulkan or DX12 natively or WebGL, so there might be something else going on if the performance is a lot worse than you'd expect.

There is some extra API call (draw, upload, pipeline switch, etc.) overhead because your browser executes graphics commands in a separate rendering process, so this might have a noticeable performance effect for large draw call counts. Batching would help a lot with that whether you're using WebGL or WebGPU.
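To make the batching point concrete, the fast path looks roughly like this on the WebGPU side (just a sketch under assumed setup, not Bevy's or any engine's actual code; device, pipeline, bindGroup, and instanceBuffer are presumed to be created elsewhere): one buffer upload and one instanced draw per frame instead of 10k separate calls.

    // Sketch: all per-instance data in one storage buffer, drawn with a single call.
    declare const device: GPUDevice;          // assumed WebGPU setup (@webgpu/types)
    declare const pipeline: GPURenderPipeline;
    declare const bindGroup: GPUBindGroup;    // exposes instanceBuffer to the vertex shader
    declare const instanceBuffer: GPUBuffer;  // created with STORAGE | COPY_DST usage
    const instanceData = new Float32Array(10_000 * 4); // e.g. xy offset, scale, angle per triangle

    function frame(view: GPUTextureView) {
      device.queue.writeBuffer(instanceBuffer, 0, instanceData); // one upload for everything
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginRenderPass({
        colorAttachments: [{ view, loadOp: "clear", storeOp: "store" }],
      });
      pass.setPipeline(pipeline);
      pass.setBindGroup(0, bindGroup);
      pass.draw(3, 10_000); // 3 vertices, 10k instances: a single call crosses the process boundary
      pass.end();
      device.queue.submit([encoder.finish()]);
    }

Whether the data lives in a storage buffer or a vertex buffer matters less than the fact that the per-frame chatter between the page and the GPU process is a couple of calls instead of tens of thousands.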


> I think a better comparison would be more representative of a real game scene, because modern graphics APIs are meant to optimize typical rendering loops and might even add more overhead to trivial test cases like bunnymark.

I know, but that's the unique instance where I could find the same project compiled for both WebGL and WebGPU.

> Either way your graphics driver should be receiving roughly the same commands as you would when you use Vulkan or DX12 natively or WebGL, so there might be something else going

Yep, I know. I benchmarked my program with Nsight and the calls are indeed native, as you'd expect. I forced the DirectX 12 backend because the Vulkan and OpenGL ones are WAYYYY worse; they struggle even with 1000 triangles.

> That said though, they're already comparable which seems great considering how little performance optimization WebGPU has received relative to WebGL (at the browser level).

I agree. But the whole internet is marketing WebGPU as the faster thing right now, not in the future once it's optimized. The same happened with Vulkan but in reality it's a shitshow on mobile. :(

> There is some extra API call (draw, upload, pipeline switch, etc.) overhead because your browser execute graphics commands in a separate rendering process, so this might have a noticeable performance effect for large draw call counts. Batching would help a lot with that whether you're using WebGL or WebGPU.

Aha. That's kinda my point, though. It's "slow" because it has more overhead, so by default I get less performance and higher GPU usage than I would with WebGL. Except this overhead seems to be in native WebGPU as well, not only in browsers. That's why I consider it way slower than, say, ANGLE, or a full game engine.

So, the problem after all is that by using WebGPU, I'm forced to optimize it to a point where I get less quality, more complexity and more GPU usage than if I were to use something else, due to the overhead itself. And chances are that the overhead is caused by the API itself being slow for some reason. In the future, that may change. But at the moment I ain't using it.


> It's "Slow" because it has more overhead, therefore, by default, I get less performance with more usage than I would with WebGL.

It really depends on how you're using it. If you're writing rendering code as if it's OpenGL (e.g., writes between draw calls) then the WebGPU performance might be comparable to WebGL or even slightly worse. If you render in a way to take advantage of how modern graphics APIs are structured (or OpenGL AZDO-style if you're more familiar), then it should perform better than WebGL for typical use cases.
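Concretely, the "writes between draw calls" pattern I mean looks something like this sketch (hypothetical setup with one tiny uniform buffer per object, not your actual code): every object does its own little upload, bind, and draw, so a 10k-triangle scene turns into roughly 30k API round trips per frame.

    // Anti-pattern sketch: OpenGL-style per-object writes and draws on WebGPU.
    declare const device: GPUDevice;
    declare const pipeline: GPURenderPipeline;
    declare const objects: {
      uniforms: Float32Array;   // per-object transform, color, etc.
      uniformBuffer: GPUBuffer; // one tiny UNIFORM | COPY_DST buffer per object
      bindGroup: GPUBindGroup;
    }[];

    function frameNaive(view: GPUTextureView) {
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginRenderPass({
        colorAttachments: [{ view, loadOp: "clear", storeOp: "store" }],
      });
      pass.setPipeline(pipeline);
      for (const obj of objects) {
        device.queue.writeBuffer(obj.uniformBuffer, 0, obj.uniforms); // tiny write per object
        pass.setBindGroup(0, obj.bindGroup);                          // state change per object
        pass.draw(3);                                                 // one triangle per call
      }
      pass.end();
      device.queue.submit([encoder.finish()]);
    }

Each of those calls is cheap on its own, but in a browser they all become messages to a separate GPU process, which is where trivial benchmarks can make WebGPU (or WebGL) look worse than the hardware actually is.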


The problem is that it's gonna be hard to use WebGPU in such cases, because when you go that "high" you usually require bindless resources, mesh shaders, raytracing, etc, and that would mean you're a game company so you'd end up using platform native APIs instead.

Meanwhile, for web, most web games are... uhhh, web games? Mobile-like? So, you usually aim for the best performance where every shader ALU, drawcall, vertex and driver overhead counts.

That said, I agree on your take. Things such as this (https://voxelchain.app/previewer/RayTracing.html) probably would run way worse in WebGL. So, I guess it's just a matter of what happens in the future and WebGPU is getting ready for that! I hope that in 10 years I can have at least PBR on mobiles without them burning.


Mobile is where WebGPU has the most extreme performance difference to WebGL / WebGL2.

I'm not convinced by any of these arguments about "knowing how to program in WebGPU". Graphics 101 benchmarks are the entire point of a GPU. Textures, 32-bit data buffers, vertices: it's all the same computational fundamentals and literally the same hardware.


> I'm not convinced by any of these arguments about "knowing how to program in WebGPU". Graphics 101 benchmarks are the entire point of a GPU.

You're totally right that it's the same hardware, but idiomatic use of the API can still affect performance pretty drastically.

Historically OpenGL and DX11 drivers would try to detect certain patterns and fast path them. Modern graphics APIs (WebGPU, Vulkan, DX12, Metal) make these concepts explicit to give developers finer grained control without needing a lot of the fast path heuristics. The downside is that it's easy to write a renderer targeting a modern graphics API that ends up being slower than the equivalent OpenGL/DX11 code, because it's up to the developer to make sure they're on the fast path instead of driver shenanigans. This was the experience with many engines that ported from OpenGL to Vulkan or DX11 to DX12: performance was roughly the same or worse until they changed their architecture to better align with Vulkan.

Simple graphics benchmarks aren't a great indicator for relative performance of graphics APIs for real use cases. As an extreme example, rendering "hello triangle" for Vulkan vs. OpenGL isn't representative of a real use case, but I've seen plenty of people measure this.


Mostly because their drivers suck, and don't get updates.

Android 10 made Vulkan required, because between Android 7 and 10, most vendors didn't care, given its optional status.

Android 15 is moving to OpenGL on top of Vulkan, because yet again, most vendors don't care.

The only ones that care are Google with their Pixel phones (duh), and Samsung on their flagship phones.

There is also the issue that by being NDK only, and not having managed bindings available, only game engine developers care about Vulkan on Android.

For everyone else, devs still have better luck targeting OpenGL ES than Vulkan, given the APIs and driver quality, so it isn't a surprise that Google is now trying to push a WebGPU subset on top of OpenGL ES.


> I have yet to see a game on my computer that uses more than 0.5% of my CPU.

Just a nitpick here, you probably have some multicore CPU while the render-dispatch code is gonna be single threaded. So that 0.5% you're seeing is the percent of total CPU usage, but you probably want the % usage of a single core.


Yeah, you're right. Sorry about that one.


This looks to be CPU bound. I'm not getting full GPU utilization, but I am seeing the JavaScript thread using 100% of its time trying to render frames.

The WebGPU and WebGL APIs are pretty different, so I'm not sure you can call it "technically the same code".


> looks to be cpu bound.

Apparently "Bevy's rendering stack is often CPU-bound"[0], so that would make sense.

To be fair that quote is somewhat out of context, but it was an easy official source to quote and I've heard the same claim repeated elsewhere too. (I'm not a Bevy user but am using Rust for fulltime indie game dev, so discount this comment appropriately.)

[0]: https://bevyengine.org/news/bevy-0-14/#gpu-frustum-culling


> The webgpu and webgl apis are pretty different so im not sure you can call it “technically the same code”.

Isn't Bevy using wgpu under the hood, and then just compiling it for both WebGL and WebGPU? That should be the same code Bevy-wise, and any overhead or difference should be caused by either the wgpu "compiler" or the browser's WebGPU.


Yes, but also no. WebGL lacks compute shaders and storage buffers, so Bevy takes a different path on WebGL than on WebGPU. A lot of the code is shared, but a lot is also unique per platform.

---

This is also as good a place as any, so I'll just add that doing 1:1 graphics comparisons is really, _really_ hard. OS, GPU driver, API, rendering structure, GPU platform, etc all lead to vastly different performance outcomes.

One example is that something might run at e.g. 100 FPS with a few objects, but 10 FPS with more than a thousand objects. A different renderer might run at 70 FPS with a few objects, but also 60 FPS with a few thousand objects.

Or, it might run well on RDNA2/Turing+ GPUs, but terribly on GCN/Pascal or older GPUs.

Or, maybe wgpu has a bug with the swapchain presentation setup or barrier recording on Vulkan, and you'll get much different results than the DirectX12 backend on AMD GPUs until it's fixed, but Nvidia is fine because the drivers are more permissive about bugs.

I don't trust most verbal comparisons between renderers. The only real way is to see if an engine is able to meet your FPS and quality requirements on X platforms out of the box or with Y amount of effort, and if not, run it through a profiler and see where the bottleneck is.

- A Bevy rendering contributor


I have to pay my bills today, not whenever the professional software I use gets released on Linux. Also, the Linux experience is so miserable that if Windows were to die, I'd just switch to MacOS tbh.


> Linux experience is so miserable

This is just not true. I switched back to linux recently https://punkx.org/jackdoe/linux-desktop.html and honestly the experience is amazing. I even did dist-upgrade and everything still works, including all my pytorch stuff.

Windows experience is hundreds of times worse.

MacOS is quite good, I would recommend it as well, but don't dismiss linux without trying.


Using Debian (including the upgrade to 12) with GNOME feels very polished. I think I'd call it close to a "default" Linux experience. That doesn't sound sexy, but it means it's the happy path: thoroughly tested, no surprises. The closest thing I've had to an issue in 6 months is extrepo duplicating some repo keys, and it was as easy to fix as deleting the file once.

Modern linux can be a boring and productive place if you want it to be.


When was the last time you tried it?


Genuinely curious to what makes it miserable. I mostly love my basic Ubuntu setup (and especially compared to the clown show that is modern Windows), and although I’m a software engineer I am neither particularly good at nor have any interest in tinkering with my Linux setup. It mostly just works (except for distribution of apps outside of the package manager - that sucks!). That said, I’m on desktop - I think anything with batteries and touch interfaces is often more buggy.


I wouldn't say miserable, but there are sharp edges that don't exist in Windows or MacOS. I've been steadily getting annoyed at Windows so I've started trialing my steam deck as a desktop replacement when I travel. It's mostly fine. But the lack of ability to set scroll wheel speed across the entire OS annoys me every time and after weeks of trying to find a solution, I've learned way more than I should need to about mouse drivers and Wayland and libinput, and I still don't have an acceptable solution to it. Many others have had this issue and each component blames the other for some idealistic reasons. The users don't care. They want to have a mouse experience that just works.

These edge cases are annoying to test and fix and you have to pay smart people to grind out the time to do it. You need program managers who coordinate across teams to drive a solution. This is why linux hasn't solved it after all these years. The smart people would rather work on cool new features.


> The users don't care. They want to have a mouse experience that just works.

I do agree with this and share your pain, even though to me it's never been that bad. But you're right, it's a big blame game between linux, distros, gnome, wayland, etc, and the result is fragmentation. And a ton of users are left in the middle when mom and dad are fighting violently over age-old issues like dynamic linking and which window manager is the best. The fact that linux has a much more narrow scope than commercial OSs has some significant downsides.

Ironically, I get the sense that Torvalds agrees with this; it's just that he can't afford to increase his scope outside of the kernel, which is already a super-human effort. I got the sense he wishes everyone else could iron out their differences, while he (correctly, probably) assumes it's wiser to tend to his own garden than to get involved in other battles.


And get confronted with Apple’s version of enshittification? The grass is not as green as it used to be in cupertino world.


I have an M1 laptop, so I kinda suffered some aspects of MacOS. But when I have a problem, it just feels like this:

Windows: Chances are the problem doesn't exist.

Linux: Spend 6 hours to fix it.

MacOS: Spend $50 on software that fixes it.


Apple has a long ways to go before they get to Microsoft levels of enshittification. The News app has ads and premium-only content mixed into the content you do have access to. They know I don't pay for News so stop showing me the articles I can't read, please. I know that's the whole point. They want me to get annoyed and pay for it. And I think there are random nudges throughout the OS to pay for iCloud. But it's a far cry from the ads and clickbait and unwanted notifications and resetting user preferences (Edge browser, etc.) that Windows does.


excuses excuses

spoken like someone who hasn't used Linux in a decade. The experience is much better in Linux than the other OSes these days

If you need to use software in Windows for work that's one thing, but you probably don't need that software on every personal device and your opinion about using Linux is definitely just your opinion


> The experience is much better in Linux than the other OSes these days

Until the next system update breaks Wayland or your GPU driver again.


Even if you do need Windows for your software, projects like Wine are making progress every day. There's only a few things I use that don't work out of the box, and for that I have a stripped down Windows VM.


Learning how to spin up a Windows VM in Linux was what finally gave me the courage to take the training wheels off and abandon Windows as my daily driver, after decades.


I agree, but

> Maybe one day they'll finally get to follow game engines in deleting all of their atlas hacks.

Are there any public game engines using bindless resources? Unreal seems to be the only one, but they pretty much don't care about low-end Android.


> you have to maintain the ship for the entire duration of the journey. Making sacrifices in power and computing ability to resolve problems that occur throughout the ship.

There's a multiplayer game where you get in a match and play tiny minigames where you fix the issues of the ship, and once you fix them all, you win.

The twist is, there is an assassin among the players, and his goal is to sabotage your ship even more and to kill everyone without getting caught. So the duration of the journey is only based on how quickly you either fix the ship, find the impostor, or die.

Wish I could remember the name, it was pretty popular during the pandemic.


I think you're thinking of Among Us.

https://en.wikipedia.org/wiki/Among_Us


Yeah! Thanks :)



Yeah, this isn't the right game, but it's much closer to a full game based on that idea than Among Us.


Space Station 13 is a much deeper take on a similar premise.



Are you thinking of SpaceTeam? We had a ton of fun with this game at a previous job.

https://play.google.com/store/apps/details?id=com.sleepingbe...


Scratch that, I missed a paragraph in OP's post about the assassin. Definitely not SpaceTeam and more likely Among Us.

Give SpaceTeam a try anyway. You have to love a game that bills itself as a "cooperative shouting game."


Spaceteam also exists as a physical card game. The mechanics seem to be a bit different from the app (not sure which of them was released first), but it's still lots of fun and shouting.


Among Us.

I tried it once and was looking forward to it, but it was just a bunch of kids running around with no coordination whatsoever killing each other. After a few plays I asked Steam for a refund. Maybe I was doing it wrong and should have tried it with friends.


I've only played Among Us in private lobbies with friends to have fun during the pandemic. That's why it was so popular. I would never think to play it online with strangers. There's really not much to it; it's just about as complex as a simple board game. It was just an easy way to socialise with everyone you knew during lockdown, even if they weren't hardcore gamers.


"Online board game" is a good way to look at it - https://www.fantasyflightgames.com/en/products/red-november/ and similar, for example


I played mostly with friends and it was fun.

When I first got into public lobbies - yeah, everybody was a kid and the first color shouted into the chat would get kicked.

The trick I found was to switch the search language to English as I was in European servers with my native language. That way, I would match with Europeans that manually changed their language to English as well... or just british kids.

Anyways, it got way better, no more "red is sus" and then kicking without proof. But then everyone was too rude. People always got EXTREMELY angry when I found out who the impostors were by either using the cameras or just remembering who was with who + kill spots + behaviour. Like, we'd win the game because I confirmed those users were impostors based on actual proof, and then they'd cuss me out and ban me from the lobby!

So, yeah, I kinda stopped playing in public lobbies for those reasons. But with friends it was fun!


Yeah, random lobbies are absolutely unplayable. Lobbies that are coordinated in some public Discord server tend to be a coinflip if they're going to be passable or not. But the game really shines when you have a core group of people to play with that all engage with it in good faith.

Of course, all those groups burnt out on the game after playing way too much of it over the pandemic, so it is what it is.


Not the game you were thinking of, but Spaceteam is a very fun game with a related premise. You're flying a ship and the ship is breaking down so you need to activate the right controls to keep it moving.

The ship will tell you which controls need to be activated, but the catch is that each player only has access to a portion of the controls while receiving instructions for all of them. Cue lots of yelling back and forth to try to communicate the appropriate directions :)


> is a strong indication that they do not value freedom (of the web) and ethics.

I don't think the average barbershop/restaurant owner will care about that, for instance. They just wanna set up a Facebook/Instagram and be done; they can now instantly receive messages from clients to make reservations and also share their stuff with posts. I bet they don't even know they can make a website.

Also, every time they do end up getting a website, it's powered by WordPress hosted on the slowest server you can imagine. And it will end up redirecting you to a proprietary service to make your reservation (WhatsApp, Facebook, Instagram...)

At least that's what I see in Europe and South America; I have no clue how it is everywhere else.


I think web browsers should already implement an API that allows developers to track any user in a "private" way, by generating a unique hash using your computer specs or something, and making it different for each website.

So, if you visit Google, your hash would be something like "h38kflak". If you're visiting twitter, the API would generate something different, so you won't be tracked across websites.

That way, even if you clean your cookies, you can still be identified as the same user.

The use case? Fraud detection and that kinda stuff. For example, you may create a web game where you allow users to play instantly without "creating" an account. So, an anonymous account would be created in the background in order to log in. Any bad actor can just clear their cookies/storage to bypass a ban. IP banning isn't reliable, as multiple users may share an address.

It's a shame that we have to rely on web api hacks in order to fingerprint users for legitimate reasons, and that ends up in an eternal cat and mouse game, because anything you try today may be broken tomorrow.
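To be clear about what I'm imagining: a per-site scoped identifier derived from a secret the browser holds and never exposes. A hypothetical sketch of what the browser could compute internally (none of this exists as a real API today; deviceSecret and the function name are made up):

    // Hypothetical sketch: same origin => same stable ID (survives clearing cookies),
    // different origin => unrelated ID (so no cross-site tracking).
    async function scopedVisitorId(deviceSecret: Uint8Array, origin: string): Promise<string> {
      const key = await crypto.subtle.importKey(
        "raw", deviceSecret, { name: "HMAC", hash: "SHA-256" }, false, ["sign"],
      );
      const mac = await crypto.subtle.sign("HMAC", key, new TextEncoder().encode(origin));
      return Array.from(new Uint8Array(mac), b => b.toString(16).padStart(2, "0")).join("");
    }
    // e.g. scopedVisitorId(secret, "https://some-game.example") -> an "h38kflak"-style stable ID

The browser would run something like this itself and only hand each site the resulting string, so sites get ban-evasion resistance without ever seeing a cross-site identifier.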


Because users do not want to be tracked or fingerprinted. I don't care about fraud detection and I am not a fraudster so why do I have to be tracked? There is no way that a feature like that would not get abused in one way or the other.


Unlike y'all, I actually love the website! I mean, yeah, it could use some extra opaqueness and padding, but it looks cool. Rad, even.

Most of the personal blogs on HN that I see on the front page look like they were made in the '90s, with either no CSS at all or simple CSS: just big paragraphs of text and nothing in between. I cannot stand them. My zoomer ass needs to be stimulated by getting eye candy while reading the content.

