While the way secrets work in Swarm can seem odd compared to Kubernetes, this is usually easy to solve with a quick entrypoint override in the Docker stack file that does essentially this:
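A minimal sketch of the idea (service, secret, and binary names are placeholders, not from any particular project): Swarm mounts each secret as a file under /run/secrets, so the overridden entrypoint exports the file contents as an environment variable before handing off to the real process.

```yaml
version: "3.8"
services:
  app:
    image: myorg/myapp:latest          # placeholder image
    secrets:
      - db_password
    # override the entrypoint so the secret file becomes an env var;
    # $$ escapes $ so compose doesn't interpolate it at deploy time
    entrypoint: ["/bin/sh", "-c"]
    command:
      - 'export DB_PASSWORD="$$(cat /run/secrets/db_password)" && exec /usr/local/bin/myapp'
secrets:
  db_password:
    external: true                     # created beforehand with `docker secret create`
```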
Speaking as a Co-Founder of NeuroForge here, with some exciting news: we're proud to announce a new alpha release of our project Swarmgate, version 0.7.0. What started as a simple experiment within our team has grown into a project aimed at the practical challenges of Docker Swarm management, whether that's juggling multiple users in a single swarm or making better use of resources across several swarms. You can find it over at:
What's Swarmgate?
Swarmgate is our innovative Docker Socket Proxy designed to provide a tenant-specific view onto a Docker Swarm. It supports all necessary operations for deploying stacks, along with managing volumes, secrets, configs, networks, and more. This allows multiple teams to work collaboratively within the same cluster without interfering with each other, thanks to unique labels that filter requests based on resource ownership. It's about making Docker Swarm environments as efficient and user-friendly as possible.
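To make the ownership model concrete, here is an illustrative sketch of what a tenant-owned service could look like in the cluster; the actual label key Swarmgate uses may differ, so check the repository for the real convention.

```yaml
services:
  web:
    image: nginx:alpine
    deploy:
      labels:
        # hypothetical ownership label; the proxy would only show this
        # service to requests authenticated as team-a
        tenant: team-a
```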
Under the Hood:
We've built Swarmgate using Node.js and Express, with the dockerode (and docker-modem) libraries for Docker interaction. This stack integrates directly with Docker's API and has proven reliable for us so far.
What's New in 0.7.0?
Docker Registry Auth Verification: Enhanced security through Docker Registry authentication checks to prevent unauthorized access to images.
Security Enhancements: Removed the :version/swarm endpoint, which exposed swarm join tokens and was therefore a potential security hole.
Simplified Proxying: Introduced the proxyRequestToDocker function for straightforwardly proxying requests that don't need tenant filtering.
Resolved Log Issues: We've fixed service/task log parsing issues to ensure compatibility and ease of use with the Docker CLI.
A Gentle Reminder:
As enthusiastic as we are about Swarmgate 0.7.0, it's important to remember that this is still ALPHA software. It's primarily a defense against accidental disruptions within clusters. While we're diligently enhancing security features, it's advisable to use Swarmgate in environments where there's a high level of trust among users.
What's up next:
A better tutorial on how to set this up
More ideas from https://github.com/neuroforgede/swarmgate/issues/1
I love DRF for CRUD APIs. It just gets the job done and you can focus on data modelling.
We built our data hub/data integration solution on top of it. [1] It was a good choice.
By far the most extensible and overridable library I have worked with so far. Even when you need to resort to hacks, they never seem to break when upgrading a version.
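For anyone who hasn't tried it, a minimal sketch of what a full CRUD API takes in DRF (the Article model is just an example):

```python
from django.db import models
from rest_framework import routers, serializers, viewsets


class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()


class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = ["id", "title", "body"]


# ModelViewSet provides list/create/retrieve/update/delete out of the box
class ArticleViewSet(viewsets.ModelViewSet):
    queryset = Article.objects.all()
    serializer_class = ArticleSerializer


router = routers.DefaultRouter()
router.register(r"articles", ArticleViewSet)
# wire router.urls into your project's urlpatterns
```

Every layer here (serializers, viewsets, routers) can be subclassed and overridden, which is where that extensibility comes from.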
If you want to stick to one machine, you can always just use a single-node Docker Swarm to get the fully automated zero-downtime deploys you want with Docker Compose:
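A rough sketch of the setup (service name and image are placeholders): after a one-time `docker swarm init`, a compose file with rolling-update settings can be deployed with `docker stack deploy`.

```yaml
version: "3.8"
services:
  web:
    image: myorg/web:latest        # placeholder image
    deploy:
      replicas: 2
      update_config:
        order: start-first         # bring the new task up before stopping the old one
        parallelism: 1
      restart_policy:
        condition: on-failure
```

Each subsequent `docker stack deploy -c docker-compose.yml myapp` rolls the service over to the new image; adding a healthcheck to the image makes the start-first handover more trustworthy.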
I have implemented this for our tool NF Compose, which allows us to build REST APIs without writing a single line of code [0]. I didn't go the route of triggers because we generate database tables automatically, and we used to have a crazy versioning scheme, inspired by Data Vault and anchor modelling, where we stored every change to every attribute as a new record. This allowed for simple point-in-time queries.
Sounded cool, but in practice it was really slow. The techniques Data Vault usually employs to fix this seemed too complex. Over time we moved to an implementation that handles the historization dynamically at runtime by generating historizing SQL queries ourselves [1]. We now use transaction time to determine winners and an autoincrementing column to break ties. A lot of brainpower went into ensuring this design is concurrency safe. On a side note: generating SQL in Python sounds dangerous, but we spent a lot of time on making it secure. We even have a linter that checks that everything is escaped properly whenever we are in dev mode [2].
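Not NF Compose's actual code, but a sketch of both ideas under assumed table/column names (entity_history, tx_time, seq_id): the query picks the latest transaction-time row per entity, with the autoincrementing column as tie-breaker, and identifiers go through psycopg2's sql module rather than raw string formatting.

```python
from psycopg2 import sql


def point_in_time_query(table: str, id_col: str) -> sql.Composed:
    # Identifiers are escaped via sql.Identifier; the cutoff timestamp is
    # supplied at execution time as a bound parameter, never interpolated.
    return sql.SQL("""
        SELECT * FROM (
            SELECT h.*,
                   ROW_NUMBER() OVER (
                       PARTITION BY {id_col}
                       -- latest transaction time wins; seq_id breaks ties
                       ORDER BY h.tx_time DESC, h.seq_id DESC
                   ) AS rn
            FROM {table} h
            WHERE h.tx_time <= %(cutoff)s
        ) ranked
        WHERE rn = 1
    """).format(table=sql.Identifier(table), id_col=sql.Identifier(id_col))
```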
We do very similar things ourselves: our insights product (https://incident.io/learn) uses Metabase to power the dashboards.
The data behind those insights can be quite complex: the queries thread JSON parameters into BigQuery SQL via JavaScript UDFs to power the dashboard filters (e.g. show incidents with these custom field values). This works pretty well with signed Metabase dashboard links.
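A sketch of the pattern (table and column names are invented, not incident.io's schema): a JavaScript UDF parses the JSON filter parameter and decides per row whether the custom field values match.

```sql
-- Hypothetical schema: incidents(custom_fields_json STRING, ...)
CREATE TEMP FUNCTION matches_filter(custom_fields STRING, filter STRING)
RETURNS BOOL
LANGUAGE js AS r"""
  const fields = JSON.parse(custom_fields);
  const wanted = JSON.parse(filter);
  // keep the row only if every requested field has the requested value
  return Object.keys(wanted).every(k => fields[k] === wanted[k]);
""";

SELECT *
FROM incidents
WHERE matches_filter(custom_fields_json, @filter_json);  -- @filter_json supplied by the dashboard
```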
We have hit limitations with Metabase though. Performance of the instance can be a bit unpredictable and their support has been poor when things do go wrong, with very little willingness to take our feedback into account for new product features.
For that reason and more (such as more flexible dashboards) we’re going to move ourselves to Omni (https://omni.co/) for internal business analytics use cases, and will reconsider Metabase for our customer facing product dashboards when we do. Omni may work for these or we might build them bespoke, we’ll see at the time.
I recently ran a little shootout between Superset, Metabase, and Lightdash — all open source with hosted options. All have nontrivial weaknesses but I ended up picking Lightdash.
Superset is the best of them at data visualization, but I honestly found it almost useless for self-serve BI by business users if you have an existing star schema. This issue about how to do joins in Superset (with stalebot making a mess XD) captures everything difficult about Superset for BI in a nutshell. https://github.com/apache/superset/issues/8645
Metabase is pretty great and it's definitely the right choice for a startup looking to get low cost BI set up. It still has a very table centric view, but feels built for _BI_ rather than visualization alone.
Lightdash has significant warts (YAML, pivoting being done in the frontend, no symmetric aggregates) but the Looker inspiration is obvious and it makes it easy to present _groups of tables_ to business users ready to rock. I liked Looker before Google acquired it. My business users are comfortable with star and snowflake schemas (not that they know those words) and it was easy to drop Lightdash on top of our existing data warehouse.
I don’t think we did consider this, probably because we have a preference to buy instead of build with tools like these and prefer a team who can respond to our feedback.
It looks like a promising tool though! I’m sure we’ll blog about our experience with the new tooling once we’ve moved over, the team will no doubt have a lot to say about it.
I had a feeling from the article and your comments, which is why I mentioned the hosted service. :)
From their website[0]:
> Preset was founded by the original creator of Apache Superset™. Our team of experts contributes over 75% of all commits to the open-source software project.
I'd be interested to see your blog post, regardless of tool.
Curious, have you tried speeding things up with e.g. cube.js? We used it in a fully custom project and it was a performance lifesaver. It works quite well with Superset, actually.