1) functionally, not that I'm aware of. the only thing right now is running from a specific block onwards, something that we are rolling out this week. you'll be able to run the workflow in debug mode and get really granular information about each node, the inputs and outputs, and resume from any point in the workflow with mock data
2) there is, we also have a code node that uses E2B to run code in isolated sandboxes. it supports python and ts/js
3) yes, in the code node you can pull in frameworks/libs since we do full remote code execution (RCE) in the sandbox
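for a feel of it, here's a minimal sketch of what a ts code-node body could look like — note the `input` variable and the return convention are my assumptions about how the code node passes data in and out, check the docs for the actual shape:

```typescript
// Hypothetical code-node body: hash an incoming payload using Node's stdlib.
// The `input` shape and returning a plain object are assumptions about
// Sim's code-node conventions.
import { createHash } from "node:crypto";

function run(input: { text: string }): { sha256: string } {
  // Any npm framework/lib could be imported the same way, since the
  // code executes in an isolated E2B sandbox.
  const sha256 = createHash("sha256").update(input.text).digest("hex");
  return { sha256 };
}
```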
You actually can, if you self-host there are environment variables to control what models are available to the copilot but it’s tuned to Azure for the time being. We can work on generalizing it further and documenting it better
Azure is just fine, as long as it's documented someplace. I'll take a look, although I also couldn't find prebuilt Docker images referenced in the compose.local file (I will look into what is being built into ghcr.io)
the prebuilt images are referenced in the compose.prod file only, not in compose.local.
since the copilot is a managed service, you’d be setting those azure credentials in your .env and the copilot would call into your azure openai deployment.
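roughly, that'd look something like this in your .env — the variable names below are my guess, check the self-hosting docs/source for the actual ones:

```shell
# Hypothetical .env entries for pointing the copilot at your own
# Azure OpenAI deployment. The exact variable names are assumptions;
# check the self-hosting docs for the real ones.
AZURE_OPENAI_API_KEY=<your-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com
AZURE_OPENAI_DEPLOYMENT=<your-deployment-name>
```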
We keep all workflows running on Sim cloud backwards compatible always. The idea is that you build, deploy, and never have to make modifications again unless you want to.
If we release a breaking change that requires a migration for existing local workflows, we give notice at least a few weeks ahead of time and bake it into the db migrations.
In case significant changes are made, everything is versioned, so you opt in to upgrading.
Node-RED is great for IoT/edge/data flows. Sim is built specifically for AI agents—native LLM support, tool-use control, structured outputs, token-level observability, etc.
afaik, there are actually a ton of unique IoT integrations that node-red has. a majority of their nodes/flows have accompanying physical devices and sensors
1. we wanted to have full control over the agent orchestration and the execution since we didn't like the abstractions that many of the existing frameworks had built, and didn't want to have dependencies in places we didn't need them. so, we built the orchestration and execution engine from scratch, allowing us to do neat things like human in the loop, settings that run the same block 10 times concurrently, etc.
2. this would kind of serve as a drop-in replacement for langgraph. you could build a workflow with an agent and some tools, perhaps some form of memory. then, just deploy that as an API, call it from your frontend, and consume the streamed response in your chat client, without needing to maintain any infra at all.
3. we have a generic code block and an api block used to call APIs for integrations that we may not have, and you can use those to plug (langgraph) agents into the Sim ecosystem.
4. we are adding in the ability to deploy your workflow as an MCP server in the next week, stay tuned :) in the meantime, you can deploy the workflow as an API and have the agent call it as a tool. moreover, you can use the workflow block in sim to call other agents/workflows as well, so it's easy to encapsulate a lot of complexity in a `parent` workflow that dynamically routes and uses different tools based on the task at hand
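on the point 2 flow (deploy as API, consume the stream from your frontend), here's a rough client-side sketch — the endpoint URL, request shape, and SSE-style `data:` framing are all assumptions about the wire format, so adjust to what the deployed API actually streams:

```typescript
// Hypothetical client for a workflow deployed as a streaming API.
// Endpoint URL, request payload shape, and SSE-style framing are assumptions.

// Pull payloads out of "data: ..." lines, dropping the end-of-stream marker.
export function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((payload) => payload !== "[DONE]");
}

export async function streamWorkflow(
  url: string,
  input: unknown,
  onToken: (token: string) => void,
): Promise<void> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(input),
  });
  if (!res.body) throw new Error("response is not streamed");
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const token of parseSseChunk(decoder.decode(value, { stream: true }))) {
      onToken(token);
    }
  }
}
```

one caveat: this sketch assumes chunks arrive line-aligned; a production client would buffer partial lines across reads.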