Happy to see Deno get some financial backing!

I've been building my new multiplayer games website [1] with Deno over the last 4 months and apart from some minor growing pains, it's been a joy to use.

The lack of unnecessary package management and the TypeScript-by-default approach make Web dev much nicer. We're also using TypeScript on the client side, relying on VS Code for error reporting. We use sucrase to strip the types just as we're serving the script files, so there is no extra build time: it feels like TypeScript is Web-native, and we can share typed code with the server.
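
For a rough sense of what that sharing looks like (the module and type names below are made up for illustration, not Sparks.land's actual code), a single .ts module can be imported by both the Deno server and the browser client:

    // shared/messages.ts (hypothetical file name): imported by both the Deno
    // server and the browser client, so message shapes are checked on both ends.
    export interface JoinRequest {
      roomId: string;
      nickname: string;
    }

    export interface JoinResponse {
      ok: boolean;
      token?: string;
    }

    // A tiny runtime check that both sides can reuse.
    export function isJoinRequest(value: unknown): value is JoinRequest {
      if (typeof value !== "object" || value === null) return false;
      const v = value as Partial<JoinRequest>;
      return typeof v.roomId === "string" && typeof v.nickname === "string";
    }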

[1] Not yet launched, but we ran a preview last weekend with hundreds of players over WebSockets: https://twitter.com/MasterOfTheGrid/status/13757583007179735... - https://sparks.land


https://sparks.land doesn't load properly on mobile (iOS, Firefox)


Ah, thanks for the heads up. It requires WebGL 2, which I believe isn't yet supported in iOS's Web engine, and IIRC all browsers on iOS have to use that engine. It does work on Android.


No WebGL 2, but a lot of WebGL 2 features are available as extensions. The biggest omission for me is not being able to render to float textures. (Although a lot of Android devices can't do this either.)


Peeking at sparks.land, I see that you're serving .ts files. I assume that's what you mean by using sucrase: you're transpiling "live" instead of building/deploying bundles offline?

I notice your script files are all pretty small, have you run into any upper limits on performance or scalability so far with this approach?


Correct! In production we've got Cloudflare in the middle, so we're only using sucrase on-the-fly for each .ts file during development. So far it's unnoticeable in terms of loading times.

> I notice your script files are all pretty small, have you run into any upper limits on performance or scalability so far with this approach?

Not that I can tell. But if we need to, we can always do a minified bundle in production later on. So far it's just nice to not have to even think about it!


Wait, so you're running Sucrase in a Cloudflare Worker?

It compiles, and then caches the output I assume?

That's a really cool use case I hadn't thought of..


Not quite. I'm running Sucrase on my Deno HTTP server: if the extension is ".ts", I put the file through sucrase before serving it as text/javascript. In development, it happens every single time I reload (and it's fast enough that I don't even notice). In production, Cloudflare requests the .ts file from my server once (triggering sucrase), and then caches it.
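
In case it helps anyone picture it, here's a minimal sketch of that serving path (not the actual Sparks.land server; the npm: specifier, directory layout, and cache header are assumptions, while sucrase's transform() API is the documented one):

    // Minimal sketch of on-the-fly .ts serving (not the actual Sparks.land code).
    // Assumptions: sucrase pulled in via an npm: specifier, scripts under
    // ./public, no path sanitization.
    import { transform } from "npm:sucrase";

    Deno.serve(async (req) => {
      const path = new URL(req.url).pathname;
      if (!path.endsWith(".ts")) {
        return new Response("Only .ts handled in this sketch", { status: 404 });
      }
      const source = await Deno.readTextFile(`./public${path}`);
      // Strip TypeScript syntax only; imports/exports stay as native ES modules.
      const { code } = transform(source, { transforms: ["typescript"] });
      return new Response(code, {
        headers: {
          "content-type": "text/javascript; charset=utf-8",
          // In production, a CDN like Cloudflare caches this response.
          "cache-control": "public, max-age=3600",
        },
      });
    });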


Is the VSCode support good? I tried using Deno with WebStorm a few months ago and it wasn't a great experience.


It's getting there! They finished a rewrite of the extension recently and it's quite nice.

If you're on Windows like me, sadly there's still a nasty bug with path mismatches between the LSP server and the VSCode extension (https://github.com/denoland/deno/issues/9744) which requires reloading the window to fix spurious errors, but I'm sure it'll be fixed soon enough.


The JetBrains extension hasn't been updated much since release and doesn't interface with the official LSP. The experience is poor and outdated.

The VS Code extension is maintained by the official team and will provide the best experience. There are also unofficial plugins for Sublime Text and Vim; they use the LSP too and provide a comparable experience.


Hey, the Discord invite link is not active anymore.


Which one? The home page one seems to work for me. Otherwise try: https://sparks.land/discord


Are you running multiple cores/threads of Deno? If so, how are you holding/communicating state server-side?


There's a central Deno server program called the switchboard, which serves static content, runs a small REST API for account management / login, and starts a WebSocket server for game servers to register themselves.

Each game server (a stand-alone Deno program that might or might not run on its own machine) connects to the switchboard over WebSocket and authenticates itself with an API key (since people will be able to make their own game servers).

When a player wants to join a server, they POST a request to the switchboard, which gives them back a token that they can send to the game server after establishing a WebSocket connection to it. The game server checks the token with the switchboard and gets back public user account info if it's valid.
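
As a very rough sketch of the game-server side of that handshake (the switchboard URL, the /verify-token endpoint, and the message shapes below are made up for illustration; the real protocol isn't public):

    // Hypothetical sketch of the join handshake, game-server side.
    const SWITCHBOARD_URL = "https://switchboard.example";

    async function verifyToken(token: string): Promise<{ userId: string } | null> {
      const res = await fetch(`${SWITCHBOARD_URL}/verify-token`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ token, apiKey: Deno.env.get("GAME_SERVER_API_KEY") }),
      });
      return res.ok ? await res.json() : null;
    }

    Deno.serve((req) => {
      // Upgrade the player's HTTP request to a WebSocket connection.
      const { socket, response } = Deno.upgradeWebSocket(req);

      socket.onmessage = async (event) => {
        // The player's first message is expected to carry the join token.
        const { token } = JSON.parse(event.data);
        const account = await verifyToken(token);
        if (account === null) {
          socket.close(4001, "Invalid token");
          return;
        }
        socket.send(JSON.stringify({ type: "welcome", userId: account.userId }));
      };

      return response;
    });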

Each game server's logic is currently single-threaded. I guess we might end up offloading some work to WebWorkers later on.

A server can publish some state info through the switchboard that will be broadcast to other servers from the same user. This is used to show player counts in game rooms from a lobby, things like that.

I run the whole thing on a couple of cheap Scaleway servers, with Cloudflare in front (no AWS, no containers, nothing of the sort). My previous platform, built with Node.js (https://jklm.fun), is able to sustain at least 2000 concurrent players like that, though admittedly those are board-like games which are not very demanding, unlike the games for Sparks.land, which will be more fully-fledged... so we'll see how that holds up!


Thanks. I run something similar that can only really handle 300 players before it starts to lag badly, but TBH it needs to be rewritten, as it's all single-threaded and one process controls the lobby, every game, all chats, etc. Don't do what I did -_-


Haha, I feel your pain. I did the same initially; it took a few rewrites over the years to get to something I'm happy with and that runs well.


For completeness, you should check out Elixir and Phoenix (channels and presence) for the server. Easy websockets, isolated player processes, non-blocking VM, plus deep introspection for debugging issues. https://youtu.be/JvBT4XBdoUE. We see more and more indie games being built with Phoenix LiveView.


Indeed! I swore off Web development a couple of times because of the mess, but now my whole party games platform is made with plain JS, no bundling, no post-processing, no big frameworks, and it's pretty good. (Although I did recently augment it with some JSDoc types, for long-term maintainability.)

At least three devs have asked why I wasn't using some reactive framework or big library. I know why: I'm much more productive without! Simple DOM helpers go a long way: https://jklm.fun/common/dom.js
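
To give an idea, helpers in that spirit are only a few lines each (simplified sketch below, not the exact contents of dom.js):

    // Simplified sketch of helpers in this spirit (not the exact dom.js).
    export function $(selector, parent = document) {
      const element = parent.querySelector(selector);
      if (!element) throw new Error(`Not found: ${selector}`);
      return element;
    }

    export function make(tag, props = {}, parent) {
      // Create an element, assign properties, optionally append to a parent.
      const element = Object.assign(document.createElement(tag), props);
      if (parent) parent.appendChild(element);
      return element;
    }

    // Usage: const button = make("button", { textContent: "Join" }, $(".lobby"));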

(If you're curious: https://jklm.fun)


Well isn’t that the economics of it? A new dev is much more productive in the new thing they learned. A more seasoned dev is more productive in all the things they already know.

The question is, who gets to dictate the economics here? The answer is always Thanos, balance.

In a true meritocracy this is a legitimate fight; neither side has the upper hand. We have to weigh the pros and cons of the old and the new and iterate.


Can you point to some games (or game projects) using Grid Engine?


No, I'm sorry; most of our users are hobbyists, and for the most part we're not aware of the state of their projects.

We hope to improve the engine to better serve the indie and hobbyist gamedev community with tools that aren't available elsewhere, and if we hear about any projects their creators would like to showcase, we'd be happy to feature them!


I've been focusing on re-making my various Web party games into a more unified platform at https://jklm.fun/

The previous iterations were built years ago using CoffeeScript, Grunt/Gulp, Jade, Stylus, etc. This time around, I went for vanilla HTML/JS/CSS: no transpilation, no bundler, no build step at all (and no frameworks). It's been a joy. I'm using the TypeScript compiler in VS Code for sanity checking, and might add some JSDoc to leverage the type checks even more, but for now it's quite nice as is.
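
For anyone curious what that looks like, with // @ts-check at the top of a plain .js file the TypeScript language service checks JSDoc annotations with no build step (hypothetical snippet for illustration, not actual jklm.fun code):

    // @ts-check
    // Hypothetical snippet: plain JS checked by the TypeScript language service
    // in VS Code via JSDoc annotations, with no build step.

    /**
     * @param {string} roomCode
     * @param {{ nickname: string, isSpectator?: boolean }} options
     */
    async function joinRoom(roomCode, options) {
      const res = await fetch(`/rooms/${roomCode}/join`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(options),
      });
      if (!res.ok) throw new Error(`Join failed: ${res.status}`);
    }

    // joinRoom(42, {}) would be flagged: 42 isn't a string and nickname is missing.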

I've also enjoyed building up a Discord community for it all, got almost a thousand people in there now and it's a lot of fun interacting on a daily basis.




See also this recent blog post + paper from OpenAI: they designed curiosity in an agent by having it try to predict the output of a randomly initialized neural network from the game's current frame and input. The better the prediction, the better its mental model of the world, and the less it gets rewarded (encouraging it to go and find unexplored situations). https://blog.openai.com/reinforcement-learning-with-predicti...
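
A toy sketch of the reward idea (this isn't OpenAI's code; real random network distillation uses trained neural networks, while both "networks" below are stand-ins and the predictor is left untrained):

    // Toy sketch of the curiosity bonus: a fixed random "target" maps an
    // observation to a vector, a learned "predictor" tries to match it, and the
    // prediction error is the bonus. Familiar states -> low error -> low bonus.
    type Vector = number[];

    const DIM_IN = 8, DIM_OUT = 4;

    // Fixed random linear map standing in for the randomly initialized network.
    const targetWeights: number[][] = Array.from({ length: DIM_OUT }, () =>
      Array.from({ length: DIM_IN }, () => Math.random() * 2 - 1));

    function targetNetwork(obs: Vector): Vector {
      return targetWeights.map((row) => row.reduce((sum, w, i) => sum + w * obs[i], 0));
    }

    // The predictor would normally be trained to imitate targetNetwork on states
    // the agent has visited; here it's an untrained stub.
    function predictorNetwork(obs: Vector): Vector {
      return new Array(DIM_OUT).fill(0);
    }

    // Curiosity bonus: squared prediction error between the two outputs.
    function curiosityBonus(obs: Vector): number {
      const target = targetNetwork(obs);
      const predicted = predictorNetwork(obs);
      return target.reduce((sum, t, i) => sum + (t - predicted[i]) ** 2, 0);
    }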


Would "Me talking to you" be an equivalent of "mi do tavla", without temporal tense? Or did I miss the point?


You missed the point, but it's a hard point.

Suppose I utter {mi do tavla .i ba go'i}. {ba go'i} means "the previous utterance, logically, but with a future tense". English has no satisfactory translation for this entire phrase; the closest we might come is, for example, "I'm talking with you. In the future."


Yes, there are some things that you can convey more accurately or precisely in Lojban, but... I think this is a poor example. Also, I think you're forgetting that translation is about intent, not literal word/phrase conversion. The intent of "I'm talking with you," followed by something that indicates that the prior sentence takes place in the future, translates very clearly into "I will be talking with you in the future." You can get all picky about verb tenses and say "oh, English doesn't have that tense," but a) it's not true ("I am talking with you in the future.") and b) it ignores the point of communication: to communicate concepts between entities. The concept is that in the future, communication will happen between you and me. "In the future we are talking." or "We are talking in the future." "Talking" isn't temporal, it's active. It can be future ("we will be talking"), past ("we were talking"), or present ("we are talking").

I'm sure there are some things which can't be translated nearly as clearly into English, but I don't think this is one of those things.


That's probably IL2CPP, developed at Unity: https://docs.unity3d.com/Manual/IL2CPP.html


itch.io's new open source wharf & butler tools might be what you're looking for: https://itch.io/docs/wharf/ and https://itch.io/docs/butler/

Quoting Wharf's spec intro:

    Wharf is a protocol that enables incremental uploads and downloads to keep software up-to-date. It includes:

    A diffing and patching algorithm, based on rsync
    An open file format specification for patches and signature files, based on protobuf
    A reference implementation in Go
    A command-line tool with several commands
Butler is the command-line tool for generating patches (it can negotiate small diffs from the server without requiring a full local copy of the thing you're diffing against), uploading them, and applying them back on the client.

It is used to power itch.io's Steam-like application, itch: http://itch.io/app, delivering multi-gigabyte game installs & updates.


Wow, that's really impressive. I'll investigate, thank you for the link.


Hey, amos here, main developer of wharf/butler. Here's a quick technical summary so you don't have to do the digging yourself:

- File formats are streams of protobuf messages: efficient serialization, easy to parse from a bunch of programming languages. Most files (patches, signatures) are composed of an uncompressed header and a brotli-compressed stream of other messages (in the reference implementation, compression formats are pluggable).

- The main diff method is based on rsync. It's slightly tuned, in that it operates over the hashes of all files (which means rename tracking is seamless: the reference implementation detects renames and handles them efficiently), and it takes partial blocks into account (at the end of files, smaller than the block size). A rough sketch of the block-signature idea follows after this list.

- The reference implementation is quite modular Go, which is nice for portability, and, like elisee mentioned, it's used in production at itch.io. We assume most things are streaming (so that, for example, you can apply a patch while downloading it, no temporary writes to disk needed); we actually use a virtual file system for all downloads and updates.

- The reference implementation contains support for block-based (4MB default) file delivery, which is useful for a verify/heal process (figure out which parts are missing/have been corrupted and correct them)

- The wharf repo contains the basis of a second diff method, based on bsdiff, for a secondary patch optimization step. The bsdiff algorithm is well-commented with references to the original paper, and there's an opt-in parallel bsdiff codepath (as in multi-core suffix sorting, not just bsdiff operating on chunks).

- A few other companies (including well-known players in the gaming industry) have started reaching out or using parts of wharf themselves; I'll happily name names as soon as it all becomes more public :)
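
To make the rsync-style idea above concrete, here's the stripped-down block-signature sketch I mentioned (this is not wharf's actual format or algorithm; real rsync-style diffing also uses a rolling weak hash, so matches aren't limited to block-aligned offsets):

    // Illustration only: hash fixed-size blocks of the old file, then emit either
    // "reuse old block N" or "here are fresh bytes" for each block of the new file.
    const BLOCK_SIZE = 64 * 1024; // illustrative; wharf's defaults differ

    async function blockHash(block: Uint8Array): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-256", block);
      return Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, "0")).join("");
    }

    // Signature: ordered list of block hashes for one file.
    async function signature(data: Uint8Array): Promise<string[]> {
      const hashes: string[] = [];
      for (let offset = 0; offset < data.length; offset += BLOCK_SIZE) {
        hashes.push(await blockHash(data.subarray(offset, offset + BLOCK_SIZE)));
      }
      return hashes;
    }

    type Op =
      | { kind: "reuse"; oldIndex: number }   // block already present in the old file
      | { kind: "fresh"; bytes: Uint8Array }; // block must be shipped in the patch

    async function diff(oldData: Uint8Array, newData: Uint8Array): Promise<Op[]> {
      const indexByHash = new Map<string, number>();
      (await signature(oldData)).forEach((hash, i) => indexByHash.set(hash, i));

      const ops: Op[] = [];
      for (let offset = 0; offset < newData.length; offset += BLOCK_SIZE) {
        const block = newData.subarray(offset, offset + BLOCK_SIZE);
        const oldIndex = indexByHash.get(await blockHash(block));
        ops.push(oldIndex !== undefined ? { kind: "reuse", oldIndex } : { kind: "fresh", bytes: block });
      }
      return ops;
    }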

I'd be happy to answer any questions!


That's incredibly thorough. How familiar are you with Blizzard's NGDP protocol / CASC file format?


Not at all, but after a cursory look it seems to solve a slightly different (and easier, imho) problem. I might be mistaken!


It's actually exactly what you described; the documentation is very sparse on it because it's an internal thing (I'm guessing you found the CASC documentation, not the NGDP one). If you're interested, shoot me an email and I can send you some more details, but it'd simply be for intellectual curiosity; as I said, it's an internal protocol.

