iMerNibor's comments

SourceGit comes pretty close to it for me (same situation)


Hetzner cloud servers perform a lot better than OVH VPSes in my (limited) experience, YMMV though. (Happy customer of both.)


I've had the same experience. Hetzner's ARM VPSes have been noticeably better than even their own AMD and Intel ones (the Intel ones are awful and clearly running on old customer hardware).


I would assume they'd just decline, shut down operations for that particular domain, and create a new domain/account on Cloudflare for the new site?

Not sure how attached these sites are to their specific brand/domain (or if this is indirect where main sites link to other sites that host the video)


What gets me is that the thumbnails are now so big, they're blurry, since the images need to be stretched to fit!

The preview is 530x300px on a 1920x1080 screen, while the image actually served is 336x188px.

How this passed any sort of QA is beyond me
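
You can verify the mismatch yourself; here's a rough devtools-console sketch. It scans every <img> on the page, since I don't know YouTube's exact markup, so the scan is a guess rather than their real selectors:

```typescript
// Rough devtools-console check for upscaled images. Scans every <img>,
// since I don't know YouTube's exact markup; the selector is a guess.
document.querySelectorAll<HTMLImageElement>("img").forEach((img) => {
  if (!img.naturalWidth || !img.naturalHeight) return; // not loaded yet
  const scaleX = img.clientWidth / img.naturalWidth;
  const scaleY = img.clientHeight / img.naturalHeight;
  if (scaleX > 1.1 || scaleY > 1.1) {
    // e.g. 530/336 ≈ 1.58x horizontal stretch for the sizes quoted above
    console.log(`${img.src}: upscaled ${scaleX.toFixed(2)}x/${scaleY.toFixed(2)}x`);
  }
});
```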


They clearly need to conserve bandwidth for the most important assets: the 12 whole megabytes of JavaScript.


Genuine question. I'm assuming that, since YouTube is owned by one of the largest tech companies in the world, they've optimized their delivered JS to only what is necessary to run the page.

What on the YouTube home page could possibly require 12MB of JS alone? Assuming 60 characters per line, that's 200k lines of code. Obviously that's a ballpark and LoC != complexity, but it seems absurd to me.


Webpages are dumptrucks for every bad feature anyone ever thought up, and are in a constant state of trying to re-framework their way out of the complete mess of utils that get shipped by default. Need a gadget that implements eye tracking via side channels? Yeah, they've got that. And then they justify it with "analytics", or anti-fraud and abuse, or "no clickjacking", or whatever, and roll it out times 1000.


>What on the YouTube home page could possibly require 12MB of JS alone?

All of the code that hoovers up analytics on what's been looked at, what's been scrolled past, etc. Maybe I'm just jaded, but I'd suspect much of it is nothing but tracking and does little to make the site function.
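
For a sense of how that kind of tracking is commonly built, here's a hedged sketch using IntersectionObserver; the "/log_event" endpoint and the data-track attribute are invented for illustration, not anything YouTube actually uses:

```typescript
// Sketch of scroll/impression tracking as it's commonly built: an
// IntersectionObserver fires as items scroll into view, and a beacon
// reports them. The "/log_event" endpoint and the data-track attribute
// are made up for illustration.
const seen = new Set<Element>();

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting && !seen.has(entry.target)) {
        seen.add(entry.target);
        navigator.sendBeacon(
          "/log_event",
          JSON.stringify({ id: entry.target.id, ts: Date.now() })
        );
      }
    }
  },
  { threshold: 0.5 } // count an impression once half the element is visible
);

document.querySelectorAll("[data-track]").forEach((el) => observer.observe(el));
```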


Fun fact: Google's own web performance team recommends avoiding YouTube embeds because they're so obscenely bloated. Placing their <iframe> on a page will pull in about 4MB of assets, most of which is JavaScript, even if the user never plays the video.

https://developer.chrome.com/docs/lighthouse/performance/thi...

YouTube's frontend people just don't care about bloat, even when other Googlers are yelling at them to cut it out.


We lazy-load YouTube iframes, which fixes the problem pretty easily.


Depends on how you do it. loading="lazy" helps a bit, but the iframe still gets loaded when it enters the viewport, even if the user has no intention of watching the video. The best approach is to initially show a fake facade of the player and only swap in the real iframe after the user interacts with it, which is what Google recommends in that article.
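
A minimal sketch of that facade approach: the i.ytimg.com thumbnail URL scheme is YouTube's public one, but the sizes, attributes, and function name here are illustrative, not anything from Google's article:

```typescript
// Minimal facade sketch: render just a thumbnail, and only create the heavy
// YouTube <iframe> when the user actually clicks.
function makeFacade(container: HTMLElement, videoId: string): void {
  const thumb = document.createElement("img");
  thumb.src = `https://i.ytimg.com/vi/${videoId}/hqdefault.jpg`;
  thumb.alt = "Play video";
  thumb.style.cursor = "pointer";

  thumb.addEventListener("click", () => {
    const iframe = document.createElement("iframe");
    // autoplay=1 so the click that dismissed the facade also starts playback
    iframe.src = `https://www.youtube.com/embed/${videoId}?autoplay=1`;
    iframe.width = "560";
    iframe.height = "315";
    iframe.allow = "autoplay; encrypted-media";
    container.replaceChild(iframe, thumb);
  });

  container.appendChild(thumb);
}
```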


>but the iframe still gets loaded when it enters the viewport even if the user has no intention of watching the video

That doesn't affect page speed scores if the video is "below the fold", and that's all that I really care about. If Google Lighthouse doesn't complain about it, then my job is done.


> Assuming 60 characters per line, that’s 200k lines of code?

The code is minified, so there are relatively few characters for each source line; if you run it through a pretty-printer to restore sensible formatting, it turns into well over half a million lines of code.
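
If you want to reproduce that measurement, here's one way, as a sketch: it assumes Prettier 3's async format() API and a locally saved bundle with a made-up filename:

```typescript
// Sketch: pretty-print a saved bundle and count the resulting lines.
// Assumes Prettier 3 (async format()) and a hypothetical local filename.
import { readFile } from "node:fs/promises";
import * as prettier from "prettier";

const minified = await readFile("youtube-bundle.js", "utf8"); // hypothetical dump
const pretty = await prettier.format(minified, { parser: "babel" });
console.log(`${pretty.split("\n").length} lines after formatting`);
```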


That's the full YouTube player - you were assuming it just has the code for the homepage, but actually it gets the entire player right at the start.


Meanwhile, loading up a channel page with Invidious pulls in about 700kB, and half of that is the banner. JavaScript was not mandatory (on public instances), but it is now due to AI scrapers.


I've recently noticed that the thumbnails on the homepage are higher resolution than the thumbnails on the subscriptions page


Same for me. How strange.


They want more money: fewer videos, more ads. The UX/UI team was probably against it, but you know how those big tech companies are.


The perfect opportunity for more AI: image upscaling! /s

Or maybe the next step will be automated AI-generated thumbnails based on the video and the user themselves, so each user gets grouped into a different category and served a different thumbnail accordingly.


We could do this by sending a header to the website.

What should we call this.. mmh..

"Do Not Track" is a bit long, maybe we just shorten it to DNT?

Nah, that's dumb. /s

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/DN...
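
For anyone curious, honoring it server-side was always this trivial. A minimal sketch with Node's built-in http module (the port and response bodies are made up; the header itself is real, and fittingly now deprecated and near-universally ignored):

```typescript
// Sketch: reading the DNT header with Node's built-in http module.
import { createServer } from "node:http";

createServer((req, res) => {
  const dnt = req.headers["dnt"]; // "1" means the user opted out of tracking
  if (dnt === "1") {
    res.end("No tracking for you."); // respect the preference: skip analytics
  } else {
    res.end("Business as usual.");
  }
}).listen(8080);
```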


> Maybe they could have gotten away with this with UE5's Nanite

Exactly.

If Unity actually delivered a workable graphics pipeline (for the DOTS/ECS stack, or kept up at all with what UE seems to be doing), these things probably wouldn't be an issue.


DOTS/ECS has nothing to do with geometry LODs. It purely optimizes CPU-side systems.

Even if DOTS was perfect, the GPU would still be entirely geometry throughput bottlenecked.

Yes, UE5 has a large competitive advantage today for high-geometry content. But that wasn’t something Unity claimed could be automatically solved (so Unity is in the same position as every other engine in existence apart from UE5).

The developer should have been aware of the need for geometry LODs from the beginning: it is a city-building game! The entire point is to position the camera far away from a massive number of objects!
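
For illustration, here's an engine-agnostic sketch of what distance-based LOD selection boils down to; the types, names, and thresholds are invented, not Unity's (or any engine's) actual API:

```typescript
// Engine-agnostic sketch of distance-based LOD selection — the core idea
// behind what a city builder needs for far-away objects.
interface LodLevel {
  maxDistance: number; // use this mesh while the camera is closer than this
  triangleCount: number;
}

function pickLod(levels: LodLevel[], cameraDistance: number): LodLevel {
  // levels are sorted from most to least detailed
  for (const level of levels) {
    if (cameraDistance <= level.maxDistance) return level;
  }
  return levels[levels.length - 1]; // beyond all thresholds: cheapest mesh
}

// A citizen model might go from thousands of triangles up close to a
// billboard-like stand-in at city-overview distances (numbers invented).
const citizenLods: LodLevel[] = [
  { maxDistance: 20, triangleCount: 8_000 },
  { maxDistance: 100, triangleCount: 800 },
  { maxDistance: Infinity, triangleCount: 12 },
];
console.log(pickLod(citizenLods, 250)); // → the 12-triangle stand-in
```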


To quote from the blog post:

> Unity has a package called Entities Graphics, but surprisingly Cities: Skylines 2 doesn’t seem to use that. The reason might be its relative immaturity and its limited set of supported rendering features

I'd hazard a guess that their implementation of whatever bridge exists between ECS and rendering isn't currently capable of LODs (for whatever reason). I doubt they simply forgot to slap the standard Unity LOD component on during development; there's got to be a bigger roadblock here.

Edit: the absence of LOD'ed models in the build does not necessarily mean they don't exist. Unity builds will usually not include assets that aren't referenced, so they may well exist, just waiting to be used.


The main issue is that DOTS and the whole HDRP/URP effort started at about the same time, but with completely different goals, so it would have been nearly impossible to get them working together while DOTS was a moving target. Devs have already weathered multiple breaking updates since the alpha versions of DOTS; an entire GPU pipeline sure wasn't going to rely on that.

>Unity has a package called Entities Graphics

Well, that's news to me. Which means that package probably isn't much older than a year. Definitely way too late for a game that far into production to adopt.

Oh, so they rebranded the hybrid renderer. That makes a lot more sense: https://forum.unity.com/threads/hybrid-rendering-package-bec...

I'm fairly certain the hybrid renderer was not ready for production.


To give an example:

A user installs the game on their PC? Unity wants the fee paid for that.

The same user installs it on their laptop? Pay again.

The same user upgrades their PC and has to reinstall the game? Unity wants their install fee.

I believe they also initially said that uninstalling and reinstalling would incur another charge, but backpedalled on that (weird).


Being able to stop constantly keeping things in the back of your head, and just trusting the compiler to complain if something is off, was the biggest differentiator for me by far. Fewer footguns = more better.


When I switched to C++11 and unique_ptr, things got a lot better. There is still a lot of cruft from old code, but C++ is a lot better as of 12 years ago. I don't let people manage raw pointers without good reason (I wouldn't let someone use unsafe Rust without good reason either).


I'm a customer of Virgin's FTTP connection, which is converted from fibre to coax on premises - so yes, actual fibre going to your house, but running DOCSIS in some fashion or other.

The article you linked covers this as well:

> while more than 1 million of their premises are also being served by “full fibre” FTTP using the older Radio Frequency over Glass (RFoG) approach to ensure compatibility between both sides of their network.

As for them going symmetric in the future: I'll believe it when they do it. Not holding my breath.


From what I recall, it's about physical-to-virtual color matching: Pantone offers samples of, say, plastic in exactly the color that matches the virtual one, so you can pick a color, tell the manufacturer you want the plastic molded in that color, and be pretty sure you'll get the right color of product (or, if not, you can go back and tell them to do it according to spec).

You'd also want to calibrate your monitor accordingly, of course.


You'll never get exactly matching colors between a digital representation and a physical sample - there are too many variables at play and distinct physical differences that can't be made up for.

I have never found the Pantone hex values to be particularly close to even the basic coated/uncoated guide colors, either, despite having about as good a color-matching setup as one can get at the prosumer level (and I do not see how going from the four-digit to the five-digit range would close the gap in color accuracy on these hexes).

As someone with 3 Pantone decks and 2 RAL decks within arm's reach while writing this, I've never understood the value proposition of these virtual libraries beyond a quick and dirty starting point for digital representation. When something goes to print, your printer isn't going to be comparing against what it looks like digitally, either. They'll either use their proprietary spot ink/dye mix/etc., or pull out their guide and compare physical to physical.

Every time I've sent stuff to a printer that has spot color in it, they've wanted it manually referenced as well, so I've never been able to just hand over an EPS or PDF that had spot color in it and get it done without additional work anyway.


Yes, at best with experience a designer can visualize how a particular color will look in print when they see it on screen.

But that's true when viewing anything to be printed on-screen.

The only way to get there is to do a lot of printing.


Of course, at that point you've already bought into the ecosystem with physical samples (which are not cheap), monitor calibration and all, so it feels like a "double dip" for no value added.


It's not really a double dip. It's more like a triple dip, as they charge the designers for the physical samples, then they charge the designers in order to reference those physical samples in their Photoshop designs, and then they charge the printers in order to produce the output that the Photoshop files are referencing.


So… they charge people for using their cross-referencing system is what you mean?

It’s no more triple dipping than two people each needing their copy of photoshop to work on the same psd.


They want to make sure you pay at every single point where you might think "PANTONE®". So in addition to having to buy the PANTONE® sample book in order to actually know what the PANTONE® colours are, you now need to pay to reference the PANTONE® colours in a Photoshop document so that the receiving party knows which PANTONE® colours they need to get out of a printer.

If they could make you pay for sending an email containing "I need the background to be in PANTONE® Red 032 C", they would.


I would argue that a multi-sided marketplace isn't always a double dip. If I want a Pantone 628 C coffee mug from China, I can order it and know exactly what colour I'm getting. It saves designers tons of money and time with avoided back-and-forth in the prototyping process.

