Tangential to this cool project, Hytale has an amazing modder experience and toolset built into the game. Everything is JSON driven, with hot reload, so we can build tooling in any language (without needing to name said language, lol to that thread here, ahem, I'm using CUE, ahem). They also have flags to dump the JSON Schema for everything, including mods.
More related: Worldgen v2 is pretty amazing compared to Minecraft. What is the basis for the worldgen in this project? Not too different from a 3D voxel game? I'm pretty new to it and still have a general curiosity.
Thanks! Right, I asked Claude to write a ROADMAP.md and it was super long and a lot of YAGNI. I try to work much more intentionally but I got some great ideas from it.
I'm thinking first of my own needs and click/mouse handlers are definitely yes.
It would be nice to enable popups when hovering over nodes/edges.
I looked at CUE and lattices and I realize CUE fits right in with some other work I'm doing!
For now I made it possible to draw Hasse diagrams. That seemed like a fun way to extend the project. I did not add any CUE examples; that seemed out of scope, but who knows, there could be some applied examples in the future.
I see that lattices are commonly drawn with straight lines, but that is mostly historical (how it was always done). I decided to allow bezier edges as well because, why not?
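For anyone curious what "drawing a Hasse diagram" involves: the diagram keeps only the covering relations of the poset, i.e. edges a → b where a < b and no element sits strictly between them. A minimal sketch in Python (a hypothetical helper, not code from the project):

```python
def hasse_edges(elements, leq):
    """Return the covering pairs (a, b) of a finite poset.

    `leq(a, b)` should return True iff a <= b in the partial order.
    b covers a iff a < b and no c lies strictly between them.
    """
    edges = []
    for a in elements:
        for b in elements:
            if a != b and leq(a, b):
                if not any(c != a and c != b and leq(a, c) and leq(c, b)
                           for c in elements):
                    edges.append((a, b))
    return edges

# Example: divisors of 12 ordered by divisibility
divisors = [1, 2, 3, 4, 6, 12]
print(hasse_edges(divisors, lambda a, b: b % a == 0))
# → [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

Those edges are exactly what gets laid out and rendered, whether with straight lines or beziers.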
My personal project is https://github.com/hofstadter/hof, a CUE centric developer experience, and perhaps where I will try out your project first, for visualizing CUE things.
1. The test setup is divergent from the real world (single node k8s cluster)
2. Their product is #1 in every metric they measured, but is also missing features from those compared against. Will those features change the results?
> The test setup is divergent from the real world (single node k8s cluster)
The setup was chosen to simplify the suite, so it can be easily run anywhere.
In the real world, log collectors are mostly deployed as DaemonSets, and this is what was tested during the benchmark. vlagent was initially developed to run as a Deployment, though. So I don't think changing the setup will affect its performance.
> Their product is #1 in every metric they measured, but is also missing features from those compared against. Will those features change the results?
That depends on which features are involved in the testing. In the benchmark, all collectors are doing the same job: collecting logs, parsing JSON, shipping log records. So they are even in terms of the features used.
Of course, product #1 is missing features for log transformations, but these features aren't used during testing and shouldn't affect the performance of the other log collectors.
The bottom of the post has a "Should I switch?" section explaining, for transparency, what's still missing in product #1.
Meh, I don't get it—what's stopping you from running the same benchmark on a Kubernetes cluster and sharing your own benchmark results, instead of just claiming that this benchmark is crap?
The company selling a product and trying to convince me with benchmarks should be the one putting in the honest effort. The point is to nudge them towards doing this themselves, because people who know about these things (how to run distributed system benchmarks) will silently close the tab. On HN we give feedback to each other so we can improve. Do you have a problem with that?
Another thing we try to do here is avoid comments that lower the level of discourse, it's in the guidelines.
It seems you are biased toward the OP; your comment does more harm than good to their effort. (Why would I want to use this project if the people around it make snide remarks?)
I have migrated away from Next.js for similar reasons: it's too much of a special case in the ecosystem. Next in particular is designed for the way Vercel runs applications and comes with extra pain if you don't run on their edge (see middleware as an example; I think they renamed it since I left).