One of the benefits of dynamic secrets is that access to, say, a database carries a short TTL. Vault manages the lifecycle of these credentials and automatically revokes them once the defined lease has expired. To obtain credentials, a user authenticates to Vault with, say, LDAP; this access can be controlled centrally, with policies defining access to secrets at an individual user or group level.
Should an individual leave an organisation, the credentials they have obtained from Vault to access a datastore expire automatically; the normal leaver process applies to remove them from LDAP and disable their ability to request further credentials.
There is always a process problem with managing secrets, but dynamic secrets in Vault eliminate long-lived credentials and reduce unofficial password sharing.
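As a sketch of how the central control works, a Vault policy like the following (path and role names are illustrative) could be attached to an LDAP group so its members can request short-lived database credentials:

```hcl
# Allow members of the mapped LDAP group to request dynamic
# credentials for the "readonly" database role. The role name
# here is illustrative.
path "database/creds/readonly" {
  capabilities = ["read"]
}
```

Running `vault read database/creds/readonly` would then return a generated username and password pair that Vault revokes automatically when the lease TTL expires.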
Nomad is very much alive, and HashiCorp is committed to delivering a scheduler which concentrates on operational simplicity so teams can concentrate on building applications. It also gives you the capability to run workloads other than Docker, such as isolated fork/exec for binaries, non-containerised Java, etc. 0.8 is a great release, and many features are planned for the rest of the year.
Integrating Nomad with Vault and Consul is super easy and allows you to provide secrets, configuration, and service discovery to the application with the right layer of abstraction: the application should not be aware of the scheduler it is running on. Cloud auto-join makes cluster configuration trivial, and job files are declarative.
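To illustrate the declarative job files, here is a minimal sketch (job name, image, and resource figures are made up):

```hcl
# A minimal illustrative Nomad job file; you declare what
# should run, and the scheduler decides where to place it.
job "web" {
  datacenters = ["dc1"]

  group "frontend" {
    task "app" {
      driver = "docker"

      config {
        image = "myorg/web:1.0.0"
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```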
Yes, Nomad does not have all the features of Kubernetes, but we take a different approach, believing in workflows and the Unix philosophy of a single tool for a single job. A fairer comparison would be the HashiCorp suite of OSS tools (Nomad, Vault, Consul, Terraform) against K8s; together they give you the capability to manage both legacy and modern workloads.
Go Mobile is pretty decent for building SDKs. I am always wary of solutions which abstract the native APIs, as you can get pinned to a particular iOS or Android version while waiting for updates.
Go Mobile allows you to use the Go standard library and packages to build an SDK which integrates as a library by exposing C interfaces. For Android it generates the JNI bindings for you; for iOS, Objective-C bindings. As long as you avoid, or are at least aware of, any other C-based dependencies, it is pretty safe and maintainable.
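A sketch of the kind of package `gomobile bind` can wrap: exported functions with simple types become Java methods (via JNI) on Android and Objective-C methods on iOS. The names here are illustrative, and a real bindable package would not be `package main`; it is written that way here so the sketch runs standalone.

```go
package main

import "fmt"

// Greet is the shape of function gomobile bind exposes to the
// mobile host language: exported, with simple parameter and
// return types. It uses only the standard library, so the
// generated bindings carry no extra C dependencies.
func Greet(name string) string {
	return fmt.Sprintf("Hello, %s!", name)
}

func main() {
	fmt.Println(Greet("mobile"))
}
```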
Building an iOS framework still requires a Mac, as it needs to shell out to xcodebuild, but it does work pretty well.
With all of these tools you have to consider the software development lifecycle. What is the lifecycle of the application? Are you prepared to completely rewrite it if the tooling disappears? Can you tolerate not being able to use the latest native framework while waiting for the tools to catch up, or are you prepared to pitch in and update the tooling yourself?
For me the main complexity with native apps is the UI; these APIs require specialist knowledge and can often differ wildly between platforms. React Native does a good job of abstraction, and maybe that coupled with Go Mobile is a dream partnership.
This is incredible news. Protocol buffers are an amazing and performant alternative to JSON, and Swift gRPC-based backends for iOS apps would be phenomenal.
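For anyone unfamiliar with the workflow: a single service definition like the one below (names are illustrative) generates both the Swift client for the iOS app and the server stubs, with compact binary serialisation instead of JSON.

```protobuf
syntax = "proto3";

// An illustrative gRPC service definition; protoc plugins emit
// a Swift client and server code in your backend language of
// choice from this one file.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```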
I'm the author of this post, and thanks for the feedback; I just wanted to make a few comments...
1. Half baked
Not sure I think this is a fair comment, as it implies a lack of care. I do, however, think that "work in progress" is a fair assessment, and I actually state this myself: the approach taken is an evolutionary one. With every use, identify improvements and make them so that next time things are better. The more we share our learnings, the faster the industry will advance, and to many, microservices patterns are still a new approach; there are huge learnings to be had, some from personal experience, others from avoiding the failings of others. I am just trying to share what I know in the hope it might help someone else.
2. Dependency Injection
I was fairly confident this is a marmite (love it or hate it) approach with Go; there are threads out there longer than the complete works of Shakespeare arguing whether it is needed or not. For me it helps with decoupling when taking a test-first approach, and this is the main reason I choose to use it as a pattern. I have written services with and without DI, and I definitely prefer with. I did say it was opinionated :)
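The decoupling I mean is constructor injection: the handler receives its dependency as an interface rather than constructing it internally, so a test can pass in a mock. A minimal sketch (all names are illustrative, not from the post):

```go
package main

import "fmt"

// Mailer is the dependency expressed as an interface; tests can
// substitute a mock, which is the decoupling benefit of DI.
type Mailer interface {
	Send(to, body string) error
}

// SMTPMailer stands in for the real implementation.
type SMTPMailer struct{}

func (s *SMTPMailer) Send(to, body string) error {
	fmt.Printf("sending %q to %s\n", body, to)
	return nil
}

// WelcomeHandler receives its dependency via the constructor
// rather than creating it itself (constructor injection).
type WelcomeHandler struct {
	mailer Mailer
}

func NewWelcomeHandler(m Mailer) *WelcomeHandler {
	return &WelcomeHandler{mailer: m}
}

func (h *WelcomeHandler) Welcome(user string) error {
	return h.mailer.Send(user, "welcome!")
}

func main() {
	h := NewWelcomeHandler(&SMTPMailer{})
	h.Welcome("test@example.com")
}
```

In a test you would call `NewWelcomeHandler` with a mock `Mailer` and assert on what was sent, with no SMTP server involved.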
3. Service Discovery, Failover, Storage, etc
This is a huge topic in itself, yet it comes up often when I discuss microservices with people. I personally hope that service discovery will be picked up at a platform level by Mesosphere, Kubernetes, or newer PaaS providers like Docker Cloud or Google Container Service. At present I am using Consul, Consul Registrator, and HAProxy, but the bigger and more complicated the system, the bigger this problem becomes, and I am not 100% happy with things in this space. Storage is best avoided where possible; Docker certainly introduces problems with mutability. S3 is an option at the moment for shared storage, but there are interesting things in the works with Docker Volumes. Event sourcing is another huge area for microservices: effectively decoupling services removes many of the problems around failover, but it is in itself a huge, complicated area. In general, designing for failure is a good approach; deciding how things fail should be a cross-functional requirement when discussing features.
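For context on the Consul setup: each service is registered in the catalogue with a health check, roughly like the definition below (service name, port, and endpoint are illustrative; Registrator generates these automatically from container metadata, and the HAProxy config can then be templated from the catalogue):

```json
{
  "service": {
    "name": "orders-api",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```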
4. Feature Toggles
Good call, and in fact at my day job we use these extensively; we could not deploy without them. At the moment this is not a problem I have had to tackle, so I have not added this feature. I suspect it is something that will affect me in the future once I move from writing new services to changing and maintaining existing ones. Feature toggling in microservices opens up a completely new set of questions: do you toggle within the service code, or run a different copy of the service itself and toggle within the service routing?
Really appreciate all the comments; I have certainly learned some things, and I'm really looking forward to seeing where the industry heads in the next 12 months.