
Okay, then it sounds like we've been using different definitions of "async" all along.


Why do you say that? It’s definitely async on Node, even if a single request/response is entirely sequential.

Maybe some pseudocode will help illustrate. Here is a hypothetical[1] implementation of the Node equivalent of the PHP design you described:

  const handleRequest = async (request) => {
    try {
      const userData = await getUserData(request)
      const blogData = await getBlogData(request)

      return whatever(userData, blogData)
    } catch (error) {
      return errorResponse(error)
    }
  }
In this function, each `await` expression suspends the current function and yields to the event loop until the `Promise` returned by the awaited call settles. While suspended, Node is free to accept additional requests on the same thread.
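If it helps to see that yielding in action, here's a tiny self-contained sketch (mine, not part of the design above; `sleep` and `handle` are made-up names) of two "requests" interleaving on a single thread:

```javascript
// `sleep` stands in for an IO call; each `await` yields to the event loop
// instead of blocking it, so the two handlers interleave.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

const log = []

const handle = async (id) => {
  log.push(`${id}: start`)
  await sleep(10) // suspends here; the event loop is free to run other work
  log.push(`${id}: done`)
}

// Kick off both "requests" before either one finishes.
Promise.all([handle('A'), handle('B')]).then(() => {
  console.log(log.join(', ')) // A: start, B: start, A: done, B: done
})
```

Both handlers log their "start" before either logs its "done", which is exactly the idle-while-waiting behavior described above.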

But because, in this hypothetical implementation, `getBlogData` doesn’t depend on `getUserData`, it could be written this way too:

  const handleRequest = async (request) => {
    try {
      const [ userData, blogData ] = await Promise.all([
        getUserData(request),
        getBlogData(request)
      ])

      return whatever(userData, blogData)
    } catch (error) {
      return errorResponse(error)
    }
  }
Both implementations are functionally equivalent, but the latter risks wasted work (if one call fails, the other's in-flight result is discarded) in exchange for IO concurrency in the non-error case. This is a common performance optimization for single-threaded runtimes like Node, where time spent in `await` can be a significant portion of the response time.
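To make the benefit concrete, here's a rough timing sketch (illustrative names like `fakeIo`, nothing from a real API): two ~20ms fake IO calls take roughly 40ms sequentially but roughly 20ms under `Promise.all`, because the waits overlap.

```javascript
// A fake IO call that resolves with `value` after `ms` milliseconds.
const fakeIo = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms))

let sequentialMs, concurrentMs

const main = async () => {
  let t = Date.now()
  const userData = await fakeIo(20, 'user')
  const blogData = await fakeIo(20, 'blog')
  sequentialMs = Date.now() - t // roughly 40ms: the waits add up

  t = Date.now()
  const [u, b] = await Promise.all([fakeIo(20, 'user'), fakeIo(20, 'blog')])
  concurrentMs = Date.now() - t // roughly 20ms: the waits overlap

  console.log(concurrentMs < sequentialMs) // true
}

main()
```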

To your original question: why `await` at all? For one, because you have no choice. With very few exceptions, all IO in Node is asynchronous, and async colors[2] JS functions. More importantly, because JS is (mostly) single-threaded, and spawning new processes is generally either unavailable or expensive to do on demand, getting the most processing out of a single process is the whole point.

Other environments accomplish something similar with threads, or spin up a process per request and rely on the OS to schedule the work. These are all slightly different trade-offs on the same general theme: make idle resources available while waiting. Node's trade-off is that it has to manage a global (or at least process-wide) task queue. Threaded approaches put more scheduling control, resources, and complexity in the hands of devs. Process-per-request eliminates the need to think about any of this, at the expense of cold start times and memory.

All of those trade-offs are reasonable depending on the workload and depending on the environment. Part of the reason `async` (the keyword and/or language facility) has gained popularity across a wide variety of use cases is that the combination of concurrent state complexity, cold start time, and memory pressure makes it appealing to model the problem in a way that almost looks synchronous.

1: Node devs stumbling upon this: yes I realize this isn’t the typical interface for a request callback. That’s intentional to keep it simple, but you can do it this way if the traditional callback hell interfaces are not your cup of tea… you just need a simple wrapper.

2: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...



