
Nitpick: "raymarching" is not "raytracing done in a shader" and it's not polygon-based.

Raymarching is a raytracing technique that takes advantage of Signed Distance Functions to have a minimum bound on the ray's distance to complex surfaces, letting you march rays by discrete amounts using this distance[0]. If the distance is still large after a set number of steps the ray is assumed to have escaped the scene.
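The marching loop is tiny. Here's a 2D TypeScript sketch rather than shader code (the names `circleSDF`/`march` and the step-budget/epsilon constants are my own illustrative choices, not from any particular renderer):

```typescript
type Vec = [number, number];

// Exact SDF of a circle at `center` with radius `r`.
const circleSDF = (center: Vec, r: number) => (p: Vec): number =>
  Math.hypot(p[0] - center[0], p[1] - center[1]) - r;

// March a ray from `origin` along unit vector `dir`. The SDF value at the
// current point is a guaranteed-safe step size, because no surface can be
// closer than that distance.
function march(sdf: (p: Vec) => number, origin: Vec, dir: Vec): number | null {
  let t = 0;
  for (let i = 0; i < 64; i++) {            // fixed step budget
    const p: Vec = [origin[0] + dir[0] * t, origin[1] + dir[1] * t];
    const d = sdf(p);
    if (d < 1e-4) return t;                 // close enough: count it as a hit
    if (t > 100) return null;               // assume the ray escaped the scene
    t += d;                                 // step by the safe distance
  }
  return null;
}
```

A ray aimed straight at a circle converges in a couple of steps; a ray that misses sees ever-growing distances until it passes the escape threshold.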

This allows tracing complex geometry cheaply because, unlike in traditional raytracing, you don't have to calculate each ray's intersection with a large number of analytical shapes (SDFs are O(1), analytical raytracing is O(n)).

There are disadvantages to raymarching. In particular, many useful operations on SDFs only produce a bound on the true distance, actually yielding a pseudo-SDF that is not differentiable everywhere, might be non-Euclidean, etc., which can introduce rendering artifacts.
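As a concrete instance of that bounded-result problem: intersecting two exact SDFs with `max` gives a correct inside/outside test but only a lower bound on the distance. A TypeScript sketch (names are mine) where the pseudo-SDF underestimates the distance to the intersection of two circles, which keeps marching safe but forces smaller steps:

```typescript
type Vec2 = [number, number];
type SDF = (p: Vec2) => number;

// Exact SDF of a circle.
const circle = (cx: number, cy: number, r: number): SDF => ([x, y]) =>
  Math.hypot(x - cx, y - cy) - r;

// Boolean intersection: exact sign, but only a *bound* on the distance.
const intersect = (a: SDF, b: SDF): SDF => p => Math.max(a(p), b(p));

// Two overlapping circles; their intersection is a lens around x = 1.5.
const lens = intersect(circle(0, 0, 2), circle(3, 0, 2));

// From (1.5, 3) the nearest lens point is the top tip at
// (1.5, sqrt(4 - 1.5^2)) ≈ (1.5, 1.323), so the true distance is ≈ 1.677,
// but the pseudo-SDF reports only ≈ 1.354.
```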

You can do analytical raytracing in fragment shaders[1].

[0] https://en.wikipedia.org/wiki/Ray_marching#Sphere_tracing Good visualization of raymarching steps

[1] https://www.shadertoy.com/view/WlXcRM Fragment shader using Monte Carlo raytracing (aka "path tracing")


SDFs still scale with geometry complexity, though: it costs instructions to evaluate each SDF component. You could still use something like a BVH (or Matt Keeter's interval-arithmetic trick) to speed things up.
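For instance, the naive way to combine n shapes into one scene SDF is a min over all of them, so every single march step pays for every shape (TypeScript sketch; the names are mine):

```typescript
type SDF2 = (p: [number, number]) => number;

// Union of n shapes: each evaluation walks the whole list, so one marching
// step costs O(n) in scene complexity — exactly the cost a BVH or
// interval-arithmetic pruning would cut down.
const union = (shapes: SDF2[]): SDF2 => p =>
  shapes.reduce((best, s) => Math.min(best, s(p)), Infinity);

// Example: two circles on the x axis.
const circleAt = (cx: number, r: number): SDF2 => ([x, y]) =>
  Math.hypot(x - cx, y) - r;
const scene = union([circleAt(0, 1), circleAt(10, 1)]);
```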


> Raymarching is a raytracing technique…

Did you mean ‘raymarching is a technique…’? Otherwise you’re somewhat contradicting the first sentence, and also ray marching and ray tracing are two different techniques, which is what you’re trying to say, right?

Raymarching can be polygon based, if you want. It’s not usually on ShaderToy, but there’s no technical reason or rule against raymarching polygons. And use of Monte Carlo with ray tracing doesn’t necessarily imply path tracing, FWIW.


Sorry, let me clarify, the terms are used imprecisely.

Some people use "raytracing" only for the ray intersection technique, but some people (me included, in the post above) consider it an umbrella term and raymarching, path tracing, etc. only as specific techniques of raytracing.

So what I meant is "'raymarching' is not 'raytracing in shaders' but just a technique of raytracing, in shaders or not".

I was not correcting OP, just adding clarifications on top.

> Raymarching can be polygon based, if you want

But not polygon-intersection-based, it'd still be a SDF (to an implicit polygon).


> Suddenly this typing problem goes away, because the type of your "flatten" method is just "MyStructure -> [MyElements]".

How is that less maintenance burden than a simple Flatten type? Now you have to construct and likely unwrap the types as needed.

And how will you ensure that you're flattening your unneeded type anyways? Sure you can remove the generics for a concrete type but that won't simplify the type.

It's simple: it's just recursively flattening an array in 4 lines. Unlikely to ever change, unlike the 638255 types that you'd have to introduce and maintain for no reason.

There are many reasons not to do that. Say your business logic changes and your type no longer needs one of the alternatives: you are unlikely to notice because it will typecheck even if never constructed and you will have to deal with that unused code path until you realize it's unused (if you ever do).

You made code harder to maintain and more complex for some misguided sense of simplicity.


For those unfamiliar with TS, the above is just...

    function flat(arr) {
      if (arr.length === 0) return [] // base case: empty array flattens to []
      const [head, ...tail] = arr
      return Array.isArray(head)
        ? [...flat(head), ...flat(tail)]
        : [head, ...flat(tail)]
    }
...in TS syntax.


Well, it is the type of that, in TS syntax. Few are the statically-typed languages that can even express that type.
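For reference, that type can be written with recursive conditional types. A sketch of what it presumably looks like (my own naming), mirroring the value-level function case by case:

```typescript
// Flattens a nested tuple type element by element, mirroring the runtime
// flat(): empty tuple -> [], array head -> recurse into it, else keep it.
type Flat<T extends readonly unknown[]> =
  T extends readonly [infer Head, ...infer Tail]
    ? Head extends readonly unknown[]
      ? [...Flat<Head>, ...Flat<Tail>]
      : [Head, ...Flat<Tail>]
    : [];

// Type-level check: the nested tuple flattens to [number, boolean, string].
const example: Flat<[number, [boolean, [string]]]> = [1, true, "x"];
```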


Java: List<Object>

Python: list[Any]

...what am I missing?


You're missing the specialisation of Object/Any. For example Array.flat called with [int, [bool, string]] returns a type [int, bool, string]. Admittedly this is somewhat niche, but most other languages can't express this - the type information gets erased.


You're missing the input type, essentially. Those are just array types. The TypeScript type signature is more of a function type: it expresses flattening an n-dimensional array (the input type) into a flat array (the output type).


Steam Deck is not free software, is it?

The repo[0] is basically an issue tracker and the hardware is not open either (but they're repair-friendly which is already an improvement over... everything else.)

[0] https://github.com/ValveSoftware/SteamOS


They run their own Gitlab instance with many more steamos projects:

https://gitlab.steamos.cloud


HTTP/3 does not need obscure DNS records (but it's greatly enhanced by them).

Messing with firewalls in what way?


> there is nothing today as secure as GPG

Depending on which part of the huge hulk that is GPG you're talking about, there are many tools that are as secure as it (or more so).

For encryption age[0] comes to mind. For signing minisign[1] or, more recently, plain ssh-keygen[2]. For encryption at rest, restic[3].

PGP having all this built-in with forward-compatibility is a liability.

[0] https://github.com/FiloSottile/age

[1] https://github.com/jedisct1/minisign

[2] https://man.openbsd.org/ssh-keygen.1

[3] https://github.com/restic/restic


The 4 tools you've listed all lack any notion of trust inheritance, which is an utterly vital property of any good crypto system.

The only viable alternative for that is x509 and that's useless for individuals due to the design.


DNS is a hierarchical protocol. You can exfiltrate data as long as the DNS server is resolving recursively.
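To make the exfiltration path concrete: the client encodes secret bytes into hostname labels under a domain whose authoritative server the attacker controls, and any recursive resolver will dutifully forward the lookup there. A Node/TypeScript sketch (the function name and domain are made up for illustration):

```typescript
// Encode secret data as a DNS query name. A recursive resolver asked for
// this name eventually contacts the authoritative server for `domain`,
// which simply logs the hex labels to recover the data.
function encodeExfilQuery(secret: string, domain: string): string {
  const hex = Buffer.from(secret, "utf8").toString("hex");
  const labels: string[] = [];
  for (let i = 0; i < hex.length; i += 63) {  // a DNS label is at most 63 bytes
    labels.push(hex.slice(i, i + 63));
  }
  return [...labels, domain].join(".");
}
```

The resulting name looks like any other lookup to the local network, which is why blocking exfiltration requires inspecting or restricting recursive resolution itself.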


Good to know. I didn't know that.

For my devices in question I can see the size and frequency of the requests in OpenWRT and doubt it's actually doing so.


How old are you? I'm in your boat but I suspect we'll change our tune when we get older.


My 40s aren't too far off. I don't expect to lose that much of my ability.


Yeah I'm not worried about my ability, but the perceived value from employers. We're probably in the sweet spot where we're still "young" but also very experienced.


That would be quite ridiculous in my opinion. Most of my peers hardly stay in one job for more than 2-3 years anyway, so unless you're retiring in the next two years I don't see why they would have a problem with it.

Of course I live in a country where retirement savings isn't your employer's responsibility. I think the US has some ridiculous retirement practices that may make older employees a bit of a hot potato situation?


> If you told someone in 1995 that within 25 years [...] most people would find that hard to believe.

That's not how I remember it (but I was just a kid so I might be misremembering?)

As I remember (and what I gather from media from the era), the late 80s/early 90s were hyper optimistic about tech. So much so that I distinctly remember a (German?) TV show when I was a kid where they had what amounts to modern smartphones, and we all assumed that was right around the corner. If anything, it took too damn long.

Were adults outside my household not as optimistic about tech progress?


To your point, AT&T's "You Will" commercials started airing in 1993 and present both an optimistic and fairly accurate view of what the future would look like.

https://www.youtube.com/watch?v=RvZ-667CEdo


I didn't know about these ads, thanks for sharing! Can't imagine how people reacted to that when they aired — the things they described sound so "normal" today, I wonder if it was seen as far fetched, crazy or actually expected.


I was in my late teens at the time. My memory is that I felt like the tech was definitely going to happen in some form, but I rolled my eyes heavily at the idea that AT&T was going to be the company to make it happen.

If you’re unfamiliar, the phone connectivity situation in the 80s and 90s was messy and piecemeal. AT&T had been broken up in 1982 (see https://www.historyfactory.com/insights/this-month-in-busine...), and most people had a local phone provider and AT&T was the default long-distance provider. MCI and Sprint were becoming real competition for AT&T at the time of these commercials.

Anyway, in 1993 AT&T was still the crusty old monopoly in most people’s minds, and the idea that they were going to be the company to bring any of these ideas to market was laughable. So the commercials were basically an image play. The only thing most people bought from AT&T was long-distance service, and the main threat was customers leaving for MCI and Sprint. The ads were memorable for sure, but I don’t think they blew anyone’s mind or made anyone stay with AT&T.


We’re the same age, and I had exactly the same reaction.

AT&T and the baby bells were widely loathed (man I hated Ameritech…), so the idea they would extend their tentacles in this way was the main thing I reacted to. The technology seemed straightforwardly likely with Dennard scaling in full swing.

I thought it would be banks that owned the customer relationship, not telcos or Apple (or non-existent Google), but the tech was just… assume miniaturization’s plateau isn’t coming for a few decades.

Still pretty iconic/memorable, though!


Thanks for that context. Was your expectation/prediction about that tech that it was going to happen in say 2000s or sooner/later?


In these commercials, it wasn't the technology itself but the ease of access and visualized integration of these technologies into the commoners' everyday lives that was the new idea.


Wow, that genuinely gave me goosebumps. It is incredible to live in a time where so much of that hopeful optimism came to pass.


Same here. It's wild that every single thing in that video came to pass, and more. Within our own lifetimes.

Still no flying cars, though.


Indeed, AI now is what people in the 1980s thought computers would be doing in 2000.


Except people thought it would get basic facts right.


We can't decide whether to take a vaccine even when we are dying left and right. And we have brains, not chips inside.


Or hell, as Neil deGrasse Tyson said in a video, just put 2 lines with different arrows at the ends and our brains can't even tell if they're the same size!

https://en.wikipedia.org/wiki/Müller-Lyer_illusion


That’s how I remember it too. The video is from 1999, during the height of the dot-com bubble. These experts are predicting that within 10 years the internet will be on your phone, and that people will be using their phones as credit cards with the phone company managing the transaction. That actually comes pretty close to the predictions made by bitcoin enthusiasts.

https://bsky.app/profile/ruv.is/post/3liyszqszds22

Note that this is the state TV broadcasting this in their main news program. The most popular daily show in Iceland.


Still waiting on my flying car.


To be fair, that has been a Sci-Fi trope for at least 130 years and predates the invention of the car itself (e.g. personal wings/flying horse -> flying ship -> personal balloon -> flying automobile). So countless generations have been waiting for that :)


Might not be waiting for long.


There's no way I'm trusting the current driving cohort with a third dimension. If we get flying cars and they aren't completely autonomous, I am moving to the sticks.


Self-flying cars? I wonder if it's actually easier to have autonomous vehicles operating in 3D than in "2D".


Aren't green threads and async-await orthogonal concepts?

As I understand it async-await is syntax sugar to write a state machine for cooperative multitasking. Green "threads" are threads implemented in user code that might or might not use OS threads. E.g.:

- You can use Rust tokio::task (green threads) with a manually coded Future with no async-await sugar, which might or might not be parallelized depending on the Tokio runtime it's running on.

- ...or with a Future returned by an async block, which allows async-await syntax.

- You can have a Future created by an async function call and poll it manually from an OS thread.

- Node has async-await syntax to express concurrency but it has no parallelism at all since it is single-threaded. I think it has no green threads either (parallel or not), since Promises are stackless?

Is this a new usage of the term I don't know about? What does it mean? Or did I misinterpret the "but"?

As a non-Haskeller I guess it doesn't need explicit async-await syntax because there might be some way to express the same concept with monads?
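The "syntax sugar for a state machine" point can be made concrete in TypeScript: a generator plus a tiny driver reproduces what the compiler generates for an async function, with `yield` standing in for `await` (the `run` helper here is my own sketch, not a library API):

```typescript
// Drives a generator of promises to completion, resuming it with each
// resolved value — a hand-written equivalent of the state machine the
// compiler generates for an async function.
function run<T>(gen: () => Generator<Promise<unknown>, T, unknown>): Promise<T> {
  const it = gen();
  const step = (input?: unknown): Promise<T> => {
    const { value, done } = it.next(input);
    return done
      ? Promise.resolve(value as T)
      : (value as Promise<unknown>).then(step);
  };
  return step();
}
```

Calling `run(function* () { const x = yield somePromise; ... })` behaves like the corresponding `async function`, and notably none of it needs threads, green or otherwise.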


You don't need "monads" (in plural) since GHC provides a runtime where threads are not 1:1 with OS threads but rather are managed at the user level, similar to what you have in Go. You can implement async/await as a library, though [1]

[1] https://www.cambridge.org/core/services/aop-cambridge-core/c...


What do you mean with "in plural"?


In the sense that you only need to work with the standard IO monad to get the benefits of the runtime.

