As I said in the full launch post, I believe LLMs to be quite random. AGI stands for Artificial Gacha Interface: it's a very useful slot machine, which is very apparent in image generation, for example!
Yggdrasil is honestly a product of frustration. After waiting almost a year for something more advanced than linear chat interfaces, I decided to build it myself.
My primary belief is that LLMs are better at augmenting humans than replacing them. They are incredible tools for learning and planning, given the right interface.
Yggdrasil started as just a chat interface, with the main focus on enhanced branching to emulate the natural process of human problem solving. But as I developed it further, I ended up adding my own custom agent harness called Valkyrie.
Turns out, chat + agent interface is much more productive than either alone.
Yggdrasil is still in very active development, and I am the only person working on it, so I would love for everyone to give it a go!
Currently we are offering limited free usage so everyone can experience what Yggdrasil offers; it's hard to convey without using it.
Yggdrasil is available as a desktop app (Linux, Windows [WSL supported], and Mac) and on the web.
> If you can describe why it is slop, an AI can probably identify the underlying issues automatically
I would argue against this. Most of the time, the things we find in review are due to extra considerations (business, architectural, etc.), things the AI doesn't have context on, and it is quite bothersome to provide that context.
I generally agree that results from vague one-shot prompting can vary.
I also feel all of those things can be explained over time into a compendium that is provided as input. For example, every time it is right or wrong, write a comment and add it to an .md file. Better yet, have the CLI AI tool append it.
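A minimal sketch of that compendium idea, assuming a simple file-based workflow: each time a suggestion turns out right or wrong, a dated note is appended to a markdown file that later gets pasted into prompts. The file name `LESSONS.md`, the `log_lesson` helper, and the entry format are all my own assumptions for illustration, not part of any particular tool.

```python
# Hypothetical compendium logger: append dated right/wrong notes
# to a markdown file that can be fed back into future prompts.
from datetime import date
from pathlib import Path

def log_lesson(outcome: str, note: str, path: str = "LESSONS.md") -> None:
    """Append one dated RIGHT/WRONG bullet to the compendium file."""
    entry = f"- [{date.today().isoformat()}] {outcome.upper()}: {note}\n"
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(entry)

# Example entry recording a reviewer correction:
log_lesson("wrong", "Suggested async here; our codebase is sync-only by policy.")
```

The appeal of plain markdown is that the same file works as human-readable documentation and as prompt input, with no tooling beyond the filesystem.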
We know that what is included directly in a prompt (like the above) is attended to more reliably.
My intent isn't to make more work, it's just to make it easier to highlight the issues with code that's mindlessly generated, or is overly convoluted when a simple approach will do.
If you sent the Python file to Gemini, wouldn't it be in your database for the chat? I don't think relying on the uncertain context window is even needed here!
A big goal while developing Yggdrasil was for it to act as long-term documentation for scenarios like the one you describe!
As LLM use increases, I imagine each dev generating much more data than before; our plans, considerations, and knowledge have partially moved into the LLMs we use!
It started as a solution to LLM front ends having terrible native branching features. But slowly I realized most of our data will be going through LLMs, so Yggdrasil is evolving into a platform that consumes all your LLM queries while keeping them easy to query and reference.
And now I have begun to realize how detrimental LLM-assisted coding can be for someone who starts depending on it too much, so Yggdrasil is a bet in the other direction compared to the mainstream. Instead of agents/AI doing everything, I believe human + AI assistance will win in the end.
Yggdrasil has a simple agent called Valkyrie, so agents have their place, but I believe they should be the last step, after the developer has discussed and planned thoroughly through our tree interface, Heimdall.
And if someone replaces the dev, they can browse their conversations with the LLM: their mind map, the questions they asked, the extra things they considered (branches), the whole thought process easily navigable and visible.
Personally, after using Yggdrasil, I feel quite confident using the LLM, as I can ask all the silly questions I want without worrying about context pollution. It aligns really well with the natural exploratory, tangential thoughts we have when trying to find solutions or learn something.
Hey! Author of the blog post here, and I also work on Ollama's tool calling. There has been a big push on tool calling over the last year to improve the parsing. What are the issues you're running into with local tool use? What models are you using?