Aaron from Deno here. I'm glad to see this come to light and very much looking forward to developing and strengthening foundational Web APIs underpinning modern JS runtimes such as Deno & Cloudflare Workers.
I don't think it's unfair to say that one man-hour equates to many machine-hours (in terms of dollar cost). As for per-core performance, modern JavaScript runtimes are nearly competitive with Go, and per-core performance is ultimately what matters at scale, since you'll load-balance across cores or VMs to saturate your compute.
Per-core performance is one thing, realized performance on today's multi-core machines is something else.
I have been writing back ends in Java and C# for more than a decade, and it is common for people to take advantage of multi-core systems in two ways when they can: (1) threads sharing data structures such as system configuration and caches (e.g. it is no problem to have 10 or 100 megabytes of configuration data in a Java-based system), and (2) using Executors to split tasks into smaller pieces and run them concurrently.
In Node.js, Python, and other GIL-style runtimes you can't do either of those, and you slow down from "configuration at the speed of RAM" to "parsing configuration over and over again", "configuration at the speed of the database", etc.
I see people using Node.js for build systems but I think it's still an unusual choice (like Python) for a back end for a commercial system.
I'm not disagreeing with you, but I am curious: what's the effective difference for most people (concerning perf and utilization) between multicore support in Java and Go and just running multiple processes in Node? (edit: let's just ignore Python to make this simpler)
> what's the effective difference for most people (concerning perf and utilization) between multicore support in Java and Go and just running multiple processes in Node
The perf difference here probably isn't that great (although Java/Go will likely still be faster), but if you're at the point of running multiple processes you may well find that it's less dev effort to write your code in Java/Go (assuming you are familiar with both).
Yes I agree, it's always been simpler for me to operate Go apps rather than Node ones (but also because of memory usage in callback- and streaming-heavy JavaScript).
Multi-process Node.js web serving is doable but operationally more of a pain. The cluster module will let you spawn a bunch of processes all sharing the same listening port, but sharing caches, connection pools, etc. becomes much more difficult. If some aspect is particularly CPU-bound you can use worker threads, but for run-of-the-mill web requests I'm not sure it's worth it.
Processes eat more resources, and even taking that out of the equation, dynamic language runtimes have fewer opportunities for good JIT code optimization than the type systems of languages like Java and Go allow for.
That's in regard to Node; as for Python, 99% of deployments are mostly plain CPython anyway.
Because you're missing the picture that a goroutine doesn't pay for the stack size, the per-process OS data structures on the heap, or the CPU context switches into kernel code that a full-blown process does.
Let alone the detail that a goroutine is full-blown native code, while the Node/Python process is interpreted, and even when a JIT is used, many C2-level optimisations are out of reach for dynamic languages.
It is no wonder that even with the herculean effort that has gone into V8, for the ultimate performance it needs help from GPU shaders and WebAssembly, both typed.
One of the big reasons for going with Deno is that it's an open runtime closely based on web standards. You can download the open source Deno CLI, and all code written for our edge layer will run exactly the same there.
As more and more front-end frameworks start leaning into running part of their code at the edge, we felt it was important to champion an open, portable runtime for this layer vs. a proprietary runtime tied to a specific platform.
Anything running JS comes with some TS support; you just have to transpile it before releasing :) I'm not sure why shipping the transpiler on the production server rather than keeping it in your CI is a good idea, but I think that's what Deno is doing.
> I'm not sure why shipping the transpiler on the production server rather than keeping it in your CI is a good idea, but I think that's what Deno is doing.
IMHO, the decoupling of the build step and runtime step in JavaScript was a terrible mistake. I've wasted hours just trying to find tsconfig settings that are compatible with the other parts I'm using. Shipping a transpiler with a known-good configuration alongside the runtime forces everyone to write their packages in a way that is compatible with that configuration, instead of creating a wild west.
The current state of modules and npm reminds me a bit of the bad old "php.ini" days, where you had to make sure you enabled the language features required by the code you wanted to import. What a mess.
> I've wasted hours just trying to find tsconfig settings that are compatible with the other parts I'm using.
Deno only “solves” that problem by not having a legacy ecosystem, and that’s only if you stick to the happy path of only using modules with first class Deno support. If you try to tap into the vast Node ecosystem, where Deno’s lacking, through e.g. esm.dev, you can waste hours just as easily. Even packages that claim Deno support sometimes have minor problems.
I understand that it might be a problem for the browser target, but Node.js is pretty easy to target (at least I've never had any issue).
Also, speaking of the wild west, Deno didn't even manage to keep its TS the same as everyone else's: apparently they import with the .ts file extension, while everyone else uses .js. I feel like this creates more mess than it fixes...
I'm surprised that the function is async but context.rewrite() doesn't use an await. Is that because the rewrite is handed back off to another level of the Netlify stack to process?
Promises are flat, so if an async function or promise callback returns a promise, the result is just a promise, not Promise&lt;Promise&gt;.
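A quick sketch of that flattening behavior (standard JS semantics; the function names are just for illustration):

```javascript
// An async function that returns a promise still yields a single
// Promise<number> to its caller, never a nested Promise<Promise<number>>.
async function inner() {
  return 42;
}

async function outer() {
  return inner(); // returning a promise directly, no await needed
}

outer().then((value) => {
  console.log(value); // 42, the resolved number, not a promise
});
```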
Using async for functions that do not use await is still a good idea because thrown errors are converted to rejected promises.
`return await` can be useful because it signals that the value is async, causes the current function to be included in the async stack trace, and completes local try/catch/finally blocks when the promise resolves.
Actually `context.rewrite` returns a `Promise<Response>`. The `async` isn't necessary here, but it also doesn't particularly hurt. You can return a `Promise` from an async function no problem.
Since it's being returned it doesn't really matter whether `.rewrite()` is returning a promise or not. `return await x` is mostly equivalent to `return x` within an async function.
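One place where `return x` and `return await x` genuinely differ is inside a try/catch, per standard JS semantics (function names below are illustrative):

```javascript
// A rejection from `return p` escapes the local catch block;
// `return await p` surfaces it inside the try, where catch can handle it.
async function failing() {
  throw new Error("boom");
}

async function withoutAwait() {
  try {
    return failing(); // rejection is NOT caught locally
  } catch {
    return "caught";
  }
}

async function withAwait() {
  try {
    return await failing(); // rejection IS caught locally
  } catch {
    return "caught";
  }
}

withoutAwait().catch((e) => console.log(e.message)); // boom
withAwait().then((v) => console.log(v)); // caught
```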
Netlify and Supabase use Deno's infrastructure for code execution (https://deno.com/deploy/subhosting). Vercel hosts their edge functions on Cloudflare (nothing to do with Deno). Slack's Deno runtime is hosted on AWS.
Are you willing to talk a bit about how Deno Deploy works internally? I think you have an internal build of Deno that can run multiple isolates (unlike the CLI, which basically runs one). How do you limit the blast radius in case of a vuln in Deno?
Kenton Varda did a pretty great writeup on CF worker security [0]. Would love to see Deno Deploy do something similar.
We probably will eventually. A talk like this takes a _lot_ of time to prepare though, so it's not at the top of our priority list. But it will happen eventually.
The TLDR is that Deno Deploy works pretty similarly to CFW in that it can run many isolates very tightly packed on a single machine. The isolation strategy differs slightly between CFW and Deploy, but both systems make extensive use of "defense in depth" strategies where you minimize the blast radius by stacking two or more defenses against the same issue on top of each other. That makes it _much_ more difficult to escape any isolation - instead of breaking out of one sandbox, you might have to break out of two or three layers of isolation.
These levels of isolation can happen at different layers. For example, network access could be restricted first by an in-process permission check, then additionally by a network namespace, and finally by a routing policy on the network the machine is connected to. Imagine this, but not just for network access, but also for compute, storage, etc.
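To make the innermost layer concrete, here's a purely hypothetical sketch of an in-process allowlist check; `ALLOWED_HOSTS` and `checkOutboundAllowed` are my own illustrative names, not Deno Deploy internals, and the outer layers (network namespace, routing policy) live outside the process entirely:

```javascript
// First defense layer only: reject disallowed outbound hosts in-process.
// Even if this check is bypassed, the namespace and routing layers remain.
const ALLOWED_HOSTS = new Set(["api.example.com"]); // hypothetical policy

function checkOutboundAllowed(url) {
  const host = new URL(url).hostname;
  if (!ALLOWED_HOSTS.has(host)) {
    throw new Error("outbound request to " + host + " blocked by in-process policy");
  }
  return true;
}

console.log(checkOutboundAllowed("https://api.example.com/data")); // true
```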
How many Deno instances might an edge server run? Does each tenant have an instance or is there multi-tenancy? What interesting tweaks have you made making a cloudified offering of Deno tailored for http serving?
We're building a highly multi-tenant "isolate cloud" (think VMs => containers => isolates, as compute primitives).
The isolate hypervisor at the core of our cloud platform is built on parts of Deno CLI (since it has a modular design), but each isolate isn't an instance of Deno CLI running in some kind of container.
Isolate clouds/hypervisors are less generic, and thus less flexible, than containers, but that specialization allows novel integrations and high density/efficiency.
I actually think this is quite neat, but I am a bit worried about caching.
Someone mentioned Rails, and Rails has a lot of facilities for setting correct cache headers for assets (CSS, JS, images, etc.) and for dynamic content (for logged-in users and/or for pages that are dynamic but public).
If you're deploying static files via a vanilla web server, you also get a lot of that for free via the file metadata.
I would expect a framework for publishing sites to showcase a minimum of good caching (client cache, the ability to interact with a caching reverse proxy like Varnish, and/or a CDN).
What other kinds of examples would you like to see?
The goal was to showcase simple yet intuitive JSX + Tailwind at the edge; we didn't elaborate on more advanced use cases like authenticated pages, API endpoints/forms, dynamic pages (location, etc...), or parametric routes.
Aaron from Deno here. Of course it's producing HTML as output, but the point is that you can use JSX and familiar technologies like Tailwind to dynamically generate that HTML at the edge vs. client-side.
And unlike a pure static site, you can add API or form routes.
Wow, a lot of misunderstanding of what Deno is and how it works in these comments! Must be frustrating for you.
I'm a huge fan of runtimes that reduce boilerplate and configuration, so that's what makes me most interested in Deno. What I'm most concerned about is that we're pushing the idea that Deno's approach to third party imports solves all the problems of npm et al. If we teach developers to think of third party and native libraries as equivalent, I think we're hiding a lot of problems rather than solving them, which could be even worse.
I can appreciate what's being done here, but I think a more compelling demo would have had a bit of dynamic rendering just to emphasize the point (since in the real world, a fully-static site like this would be better served by a static-only hosting service with no custom server running at all, even one on the edge). Even something as simple as grabbing the current timestamp and displaying it in the returned HTML, just to show that logic is running on every request.
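Something like the following would do it; this is a hypothetical sketch, where the `(request, context)` handler shape is assumed from the `context.rewrite()` call discussed elsewhere in the thread, and `handler` is my own name for it:

```javascript
// A minimal edge handler that stamps the current time into the HTML,
// proving that logic runs on every request rather than at build time.
export default async function handler(request, context) {
  const html = `<html><body>
    <h1>Hello from the edge</h1>
    <p>Rendered at ${new Date().toISOString()}</p>
  </body></html>`;
  return new Response(html, {
    headers: { "content-type": "text/html; charset=utf-8" },
  });
}
```

Two requests a second apart would return different timestamps, which is the whole point of the demo.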
This is a very lazy comment. I'm sure it makes you feel smart, but it drags down the entire conversation, and doesn't add anything of value. You seem very capable and accomplished, so I'm confused why you would spend any of your time to simply shit-post on someone who is trying to build something of use to many people.
You are right. It is a lazy comment. I would delete it at this point if I could, but that's not possible.
It really comes from my frustration. So much effort is being poured into new tech, and the result (at least as I and others in the comments here noted) is something reflective of pre-existing technology that has been around for decades.
I get that through enough of these exercises true innovation does emerge. However, there is a whole lot of "reinventing the wheel" in between, which is frustrating as it seems to be so prevalent.
On many of our internal benchmarks, Deno's std/http server outperforms Node's http (so express/koa too). There's still some room for improvement on our end and we would like to put together rigorous benchmarks before sharing that broadly.
Aaron@Deno here, we've been exploring something with the Tauri team but don't have a concrete release on the roadmap since we're focusing on other priorities.
I believe an Electron alternative is an important part of the Deno stack, so hopefully we'll ship a first iteration next year.
Thanks, Aaron. I think people who have been using Node for years are skilled at the old Node ways and have huge inertia in their skills, their own code, and others' Node code. For them, the ideal platform would be Node plus some upgrades. I'm guessing they would rather extend their inertial frame of reference than leave it behind.
Then there are others of us who have been saying no to Node and legacy JS for years. We have no such legacy to maintain and no intention of ever creating any. But some of us (at least I) would reconsider platforms built from scratch on a new TypeScript foundation rather than layered on a pre-ES6 foundation. That would include a Deno-based Electron. You might have more luck converting people who don't use Node than getting Node users to abandon their legacy.
The state of cross-platform desktop apps is terrible. All attention is on mobile, and desktop OS makers have almost zero interest in supporting cross-platform desktop apps. (MS cares a little more than zero, Apple less than zero and barely tolerates their own Mac-only developers.) Only something browser/Chromium based seems realistic for the next few years.
On the server, there are a lot of alternatives to Node that are considered better by (and very popular with) large segments of the market. Deno will be one of them, I think. But for cross-platform desktop apps, Electron would be rejected completely if the alternatives weren't so bad and unlikely to get better. A better Electron, despite its inherent problems, could end up more popular than server-side Deno. Just a thought.
Whilst not classical de/serialization, I wrote serde_v8 (https://github.com/denoland/serde_v8), an expressive and ~maximally efficient bijection between V8 and Rust.
Honestly, using serde for language interop is one of my favorite things about serde, whether it's "classical de/serialization" or not. I recently had the very pleasant experience of writing some code that passes geospatial data back and forth between Python and Rust. I found that the geojson crate, even though it's nominally for JSON, actually works with other serde-compatible things, including (something I found kind of miraculous) Python objects via the pythonize crate, which can walk them with serde visitors. So as long as I can get my data into a roughly GeoJSON-shaped thing on the Python side, I can consume it on the Rust side without ever actually producing JSON.
I maintain the Node.js bindings for FoundationDB. FoundationDB refuses to publish a wire protocol, so the bindings are implemented as native code wrapped with N-API. The code is a rat's nest of calls to methods like `napi_get_value_string_utf8` to parse JavaScript objects into C (e.g. [1]). As well as being difficult to read and write, I'm sure there are weird bugs lurking somewhere in all that boilerplate code. I've made my error checking a bit easier using macros, but that might have only made things worse.
I'd much prefer all that code to just be in rust. serde-v8 looks way easier to use than all the goopy serialization nonsense I'm doing now. (Though I'd want a serde_napi variant instead of going straight to v8).
While I'm not using it for performance, I'm also using Serde as a bridge for my template engine, and it's such a nice experience. I just wish it were possible to pass auxiliary information between serializers without having to resort to thread locals.
Happy to answer any questions you may have!