Hacker News | urlgrey's comments

In 1997 I saw Linus Torvalds speak at UC Berkeley following his move to California to work at Transmeta. I was a computer science undergrad at UC Davis, and took Amtrak to Berkeley along with some friends to see Linus in person. Linux was building momentum, and Linus was a real celebrity to those in the space.

Supporting Linus and the Linux community is a great legacy for Transmeta, even if their products didn't find commercial success.


Another important feature: HEVC playback on macOS. An increasing amount of video content, especially content recorded on mobile phones, uses HEVC. Inconsistent playback support is a real barrier to HEVC adoption (among other things), and Firefox support goes a long way toward making HEVC more widespread.


There are several options from Amazon, try searching for "Clip-on Ferrite Ring".


I know; I was hoping for a site that sells all the types of rings: binocular, clip-on (which are great for choking VHF vertical dipoles whose bottom leg is a set of spokes that are obviously too short for the frequency), but I digress.

I've also just grown weary of Amazon.


Author here, would love to discuss!


I'm amazed that their architecture doesn't include a CDN. These days I expect nearly all high traffic websites to make use of a CDN for all kinds of content, even content that's not cached.

They said Cloudflare wasn't used due to privacy concerns. It'd be interesting to hear more about that, as well as why other CDNs weren't considered worth evaluating.


What they are doing is unfortunately not legal. There are precedents of Cloudflare ratting out manga site operators, which have led to arrests [1] (the person who ran Mangamura got a three-year sentence and a $650k fine [2]). And at some point they were going after MangaDex the same way too [3].

A lot of their infrastructure design choices should be viewed with OPSEC constraints in mind.

[1] https://torrentfreak.com/japan-pirate-site-traffic-collapsed...

[2] https://torrentfreak.com/mangamura-operator-handed-three-yea...

[3] https://torrentfreak.com/mangadex-targeted-by-dmca-subpoena-...


Which is interesting considering they take no issue with sites like KiwiFarms, which harasses people to literal death, or with terrorist groups, criminals (carders, phishers, etc.), racists, and other purveyors of hate speech.

I guess it all depends on how much money you bring in for them really.


It's effectively a warez site. There's a reason why they host in the places they do and can't be too picky about providers.

CF will also pass through things like DMCAs easily.

Based on their sidebar, it's probably hosted at Ecatel or whatever they are called now (cybercrime host) via Epik as a reseller, the provider famous for hosting far-right stuff.


What's the reason behind where they host and the issues with providers? I haven't heard this before.

Regarding DMCAs, as an entity doing business where they're legal, what should they do as a middleman?


> Regarding DMCAs, as an entity doing business where they're legal, what should they do as a middleman?

Don't use them and instead have your middleman be in a country that ignores intellectual property rights and copyright?

I'm not saying CF is wrong to pass them through. I'm just saying CF is not the right choice for a warez site for longevity.


They do have a crowdsourced CDN called Mangadex@Home. I participated in it from last year until the site was hacked. The aggregate egress speed was around 10 Gbps.

The NSFW counterpart of MD also has a CDN appropriately named Hentai@Home run by volunteers.

These two sites are the only ones I know of that roll their own crowdsourced CDN for free.


I think the usual argument re: Cloudflare on the privacy front is the fact that they pretty aggressively fingerprint users, and will downgrade or block traffic originating from VPNs or some countries. This is a natural side effect of those things often being tied to abusive traffic, and a lot of it is likely configurable (at least on their paid plans) but it often comes up around this.


What's the benefit of a CDN if nothing is cacheable? Slightly lower latency on the TCP/TLS handshake? That seems pretty insignificant.


The CDN part is kind of pointless because they can't really have nodes in large parts of the western world, since it's a warez site. The CDN providers will get takedowns, requests to reveal the backing origin, etc. You can't use a commodity CDN provider for this.


In their case (manga), seems like the vast majority of the content is cacheable.


Latency makes a bigger impact on UX than throughput for general browsing. A TLS handshake can take multiple round trips, which benefit greatly from lower latency, especially on mobile devices.

Modern CDNs also provide lots of functionality, from security (firewall, DDoS protection) to application delivery (image optimization, partial requests).
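To make the round-trip point concrete, here's a back-of-envelope sketch. The RTT figures (20 ms to a nearby edge, 250 ms to a distant origin) are illustrative assumptions; the handshake counts (1 RTT for TCP, 2 RTTs for a full TLS 1.2 handshake, 1 for TLS 1.3) are the usual ones.

```python
# Connection setup cost is dominated by round trips, not bandwidth.
# Assumed counts: 1 RTT for the TCP handshake, plus 2 RTTs for a full
# TLS 1.2 handshake (or 1 RTT for TLS 1.3).

def setup_time_ms(rtt_ms, tls_rtts=2):
    """Time spent on TCP + TLS handshakes before the first HTTP request."""
    return rtt_ms * (1 + tls_rtts)

# Nearby CDN edge vs. a distant origin, before a single byte of content:
print(setup_time_ms(20))   # prints 60  (milliseconds)
print(setup_time_ms(250))  # prints 750 (milliseconds)
```

Even with nothing cacheable, terminating TLS at a nearby edge and reusing a warm connection to the origin cuts that setup cost for every new visitor.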


A properly tuned NGINX instance on a physical server can handle an incomparably larger static-content load than some of the "cloud" storage services around.

The "trick" has been known for a decade or more: make as many things static as possible, and use backend logic only for the bare minimum.


That's the raison d'être of nginx, so it is performant for this kind of thing. However, the advantage of a CDN is that they have points of presence around the world, so your user in Singapore doesn't have to do a trip around the world to get to your nginx on a physical box in Lisbon.


This literally cost me sleep last night, paging me for new Kubernetes nodes that failed to transition to 'Ready' because they were unable to pull Calico images during bootstrapping. After some duct-taping to get those initial nodes up and running, we just moved the Quay-hosted images to a GCR repository and moved on with life.

But that doesn't diminish the fact that this outage is a complete disaster for Quay.


At Mux we use Cedexis Openmix to dynamically select between Fastly and Highwinds (now Stackpath) CDNs to stream a lot of video. The end-user experience with a CDN is going to vary dramatically depending on where your users are located around the world, which ISP they use, and many other factors that change by the minute. I wrote a blog[1] post about how & why we use multiple CDNs with our video service.

You might also want to evaluate a video QoE service to see what your users are experiencing in terms of video start-up times, rendition switching, buffering, etc.

[1] https://mux.com/blog/multi-cdn-support-in-mux-video-for-impr...


That is cool. I've been stalking Mux for a while, since I've had to implement some of the things Mux offers on my own.

I actually completed the video streaming part of my CMS right before Mux offered the /video service. By that time it was too late, lol; I had already solved most of the problems.

Also saw that you guys use Elixir on your backend, which coincidentally was also my language of choice.


> As of 2012, videos that didn’t load in two seconds had little hope of going viral.

This finding aligns with the results of a survey of users of the Mux video analytics service. We asked them "Which of the following streaming video problems is the most frustrating for you when it occurs?" Video rebuffering or stalled playback was considered the most annoying problem for 47% of respondents; video picture quality was chosen by only 14.3%. Slow startup times and rebuffering have a huge impact on the perceived QoE compared to video picture quality.

https://mux.com/blog/rebuffering-the-most-frustrating-and-fr...


Periscope developed a Low-Latency HTTP Live Streaming (LHLS) technique that relies on HTTP chunked transfer-encoding to stream video bytes as they are encoded at the origin. This is still subject to TCP packet retransmission overhead, but the time-to-first-byte is reduced significantly and leads to less buffering on the client.

Here's a Periscope post about LHLS: https://medium.com/@periscopecode/introducing-lhls-media-str...

Most systems that serve HLS media use fixed content-length segments, which requires knowing the length of a segment before the first byte can be sent over the wire. So for a 5-second segment you would need to encode the entire 5 seconds before sending the first byte; this doesn't apply when streaming segments with chunked transfer encoding.

Incidentally, at Mux we also use chunked transfer-encoding to stream video that is encoded on-the-fly with great performance.
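For the curious, the wire format that makes this possible is simple. Here's a minimal sketch of HTTP/1.1 chunked transfer-encoding framing (RFC 7230 §4.1): each chunk is a hex length, CRLF, the data, CRLF, with a zero-length chunk terminating the stream. The `moof+mdat#N` payloads are placeholder stand-ins for encoded media bytes.

```python
# HTTP/1.1 chunked transfer-encoding framing: this is what lets a server
# push video bytes as the encoder produces them, without knowing the
# final segment length up front.

def encode_chunk(data: bytes) -> bytes:
    # <hex length>\r\n<data>\r\n
    return b"%x\r\n%s\r\n" % (len(data), data)

def encode_stream(chunks) -> bytes:
    body = b"".join(encode_chunk(c) for c in chunks)
    return body + b"0\r\n\r\n"  # terminating zero-length chunk

# Pretend these arrive from the encoder one at a time:
wire = encode_stream([b"moof+mdat#1", b"moof+mdat#2"])
```

A real server would write each `encode_chunk(...)` to the socket as soon as the encoder emits it, rather than joining them up front.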


I've heard from colleagues that this won't be possible with DASH due to the switch to fMP4 format. One of my co-workers tells me that fMP4 requires the entire segment to be loaded before playback can begin while TS segments don't require this. We've been looking into very small segments (e.g. 1s duration) to reduce latency but I've been interested in the LHLS approach since I first heard of it.


> I've heard from colleagues that this won't be possible with DASH due to the switch to fMP4 format.

That's incorrect. With DASH the latency depends on the fragment duration, not the segment duration. You can start sending the segment when its first fragment is generated, and use chunk-based HTTP transfer as mentioned in other comments.

Link for further details on low latency: https://www.gpac-licensing.com/2014/07/09/lowering-dash-live...


Very short segment durations are effective only when latency is more important than quality.

Each TS segment must start with a key-frame, and the GOP size can't exceed the duration of a segment (e.g. one second). Lowering the segment duration increases the frequency of key-frames, which has the effect of lowering the quality you can achieve at a given bitrate.
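The overhead is easy to quantify. A rough sketch, assuming 30 fps and one key-frame (IDR) at the start of every segment, with the GOP capped at the segment duration:

```python
# Shorter segments force more frequent key-frames. Since a key-frame is
# several times larger than a typical P/B frame, each extra one eats
# into the bitrate budget available for the rest of the picture.

def keyframes_per_minute(segment_seconds, fps=30):
    gop_frames = segment_seconds * fps  # GOP can't exceed the segment
    return int(60 * fps / gop_frames)

print(keyframes_per_minute(5))  # prints 12
print(keyframes_per_minute(1))  # prints 60
```

Going from 5-second to 1-second segments quintuples the key-frame rate, which is exactly the quality-at-a-given-bitrate cost described above.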


Note that this is an Apple requirement for HLS. Most people don't realize that the GOP size doesn't impact latency, but it does impact start-up time.


fMP4 has an index for each chunk, so you have to buffer the whole thing to create the index on the writing side. However, with DASH you also have the option of WebM, which doesn't need an index and can be streamed. Really small fMP4 segments work too.
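The structure behind this is straightforward: every ISO BMFF (MP4) box starts with a 4-byte big-endian size and a 4-byte type, and an fMP4 fragment is a `moof` (metadata/index) box followed by an `mdat` (media data) box. A sketch of walking top-level box headers, using tiny synthetic header-only boxes rather than a real file:

```python
import struct

# Walk top-level MP4 boxes: 4-byte big-endian size, then 4-byte type.
# In fMP4 each fragment is a `moof` box (which indexes the samples)
# followed by an `mdat` box, which is why the writer has to buffer the
# whole fragment before it can emit the moof.

def box_types(data: bytes):
    types, offset = [], 0
    while offset + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, offset)
        types.append(btype.decode("ascii"))
        offset += size
    return types

# Two tiny synthetic boxes (size 8 = header only, empty payload):
sample = struct.pack(">I4s", 8, b"moof") + struct.pack(">I4s", 8, b"mdat")
print(box_types(sample))  # prints ['moof', 'mdat']
```

Making the fragments (and thus the `moof` boxes) small is what bounds the buffering delay on the writing side.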


The breach notice indicates that hashed passwords were compromised but doesn't mention whether a salt was used when computing the hashes.

Use of a salt makes all the difference, guarding against the use of rainbow tables to look up precomputed hashes of common passwords.


> The affected information included usernames, email addresses, and hashed passwords - the majority with the hashing function called bcrypt used to secure passwords.

If they're using bcrypt, then they're using salts, since salts are built into bcrypt.
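Right: bcrypt generates a random salt per password and embeds it in the hash string it outputs. The same idea can be sketched with stdlib PBKDF2 instead of bcrypt (purely illustrative; the iteration count is an arbitrary assumption): store the (salt, hash) pair, and recompute at login.

```python
import hashlib, hmac, os

# A per-user random salt means identical passwords produce different
# hashes, so precomputed rainbow tables of common passwords are useless.

def hash_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))  # prints True
print(verify("wrong", salt, digest))    # prints False
```

With bcrypt you don't manage the salt yourself at all; `hashpw` picks one and `checkpw` reads it back out of the stored hash.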

