Hacker News

The branch deploy sounds pretty interesting, I wish there was more detail on it than just a footnote


We do the same - it was surprisingly simple to implement -

  - CircleCI listens on all Git branches and injects $CIRCLE_BRANCH into ENV
  - A build step populates a vhost template to point "$CIRCLE_BRANCH.qa.bofh.com" to "/var/www/$CIRCLE_BRANCH/current"
  - Another build step creates "manifest.json" with $CIRCLE_BRANCH and $CIRCLE_BUILD_NUM
  - The vhost config and the manifest are added to the release tarball/Docker image
  - The tarball/image gets shipped to the staging server 
  - (we ship to S3/Docker, then pull because of security concerns. But can ship inside the build process)
  - In staging, a script reads manifest.json and moves content to /var/www/$CIRCLE_BRANCH/$CIRCLE_BUILD_NUM
  - The provided nginx vhost is copied into a standard dir with other vhosts
  - A symlink of "/var/www/$CIRCLE_BRANCH/current" is forced to point to the fresh release
  - nginx -s reload && profit
You can use a wildcard SSL certificate to cover the subdomains, or add a step to trigger Let's Encrypt certbot as required.
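A minimal sketch of the vhost-templating step, assuming a sed-based template (the `__BRANCH__` placeholder and file names are illustrative, not the exact build step):

```shell
#!/bin/sh
# Hypothetical sketch: render a per-branch nginx vhost from a template.
# The __BRANCH__ placeholder and file names are assumptions.
CIRCLE_BRANCH="${CIRCLE_BRANCH:-feature-x}"

cat > vhost.template <<'EOF'
server {
    listen 80;
    server_name __BRANCH__.qa.bofh.com;
    root /var/www/__BRANCH__/current;
}
EOF

# Substitute the branch name; the result ships inside the release tarball
sed "s/__BRANCH__/${CIRCLE_BRANCH}/g" vhost.template > "${CIRCLE_BRANCH}.conf"
```

On the staging side it's then just a copy into the shared vhost dir followed by `nginx -s reload`.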

This method works equally well with Docker images - it just requires an interim step to launch the container. In staging, we use a naming convention to launch every app on predictable ports. The first two digits are app-specific, the last three are build-specific. For example, "25$CIRCLE_BUILD_NUM" (where $CIRCLE_BUILD_NUM is just the last three digits of the build number).
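The port convention can be sketched like this (the app prefix and the launch command are illustrative):

```shell
#!/bin/sh
# Illustrative sketch of the predictable-port convention: a two-digit
# app prefix plus the last three digits of the CI build number.
APP_PREFIX=25                               # app-specific, assumed
CIRCLE_BUILD_NUM="${CIRCLE_BUILD_NUM:-41876}"

BUILD_SUFFIX=$(printf '%s' "$CIRCLE_BUILD_NUM" | tail -c 3)
PORT="${APP_PREFIX}${BUILD_SUFFIX}"         # 25876 for build 41876

# The interim launch step might then look like:
# docker run -d -p "$PORT":80 "myapp:${CIRCLE_BRANCH}-${CIRCLE_BUILD_NUM}"
```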


How do you handle the database? Do you reuse the one on staging or create a new instance with prepopulated data? If the former, how do you deal with migrations and schema edits?


Yes, the db has been the most challenging aspect for us. We have 3 situations -

1. "Common baseline". With a relatively stable product, most branches (as in ~51%) do not impact the schema. For testing/QA purposes, these share one central QA db and pollute each other's data. It turns out this is quite OK a lot of the time, because the PR is about how the data is displayed, or improved logging, or a UX change, or a security layer - anything other than core domain knowledge - so it doesn't care about the data that much.

2. "I'm special". Some branches do modify data (whether the format or the structure). To handle these, the manifest.json file has an option to request a separate database. If present, the rollout script will do "pg_dump + copy" of the shared staging DB, duplicate it into "qa_$BRANCH", then update the config file (or .env for Docker) with the appropriate connection value. Additionally, it will run all *.sql files in a dir specified in manifest.json against the cloned DB. This is done on every release, which does get annoying because it resets the QA data (we could add another manifest switch here). On the upside, it forces you to codify all data migration rules from the start.
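A dry-run sketch of that rollout logic - the DB names, migrations dir, and .env handling are all assumptions, not the actual script:

```shell
#!/bin/sh
# Dry-run sketch (RUN=echo) of the per-branch DB clone; set RUN='' to
# execute for real. All identifiers (qa_shared, migrations/, the .env
# key) are illustrative assumptions.
RUN=echo
BRANCH="${CIRCLE_BRANCH:-feature-x}"
CLONE_DB="qa_${BRANCH}"

# Duplicate the shared staging DB into a branch-specific clone
$RUN createdb "$CLONE_DB"
$RUN sh -c "pg_dump qa_shared | psql $CLONE_DB"

# Apply every *.sql file the branch ships against the clone
for f in migrations/*.sql; do
    if [ -e "$f" ]; then
        $RUN psql "$CLONE_DB" -f "$f"
    fi
done

# Point the app config at the clone
$RUN sed -i "s|^DATABASE_URL=.*|DATABASE_URL=postgres://staging/${CLONE_DB}|" .env
```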

3. "I am very special". Some changes transform data in a way that requires business processing and cannot be done with simple SQL. Sorry, out of luck - we don't automate special cases yet. The developer has to pull the QA database to localhost, do their magic, and push it back. Not ideal, but it hasn't caused any problems yet. If it ain't broke...


Thanks for taking the time to answer!

FWIW this topic would make for a great technical post/how-to.

I also seem to recall that Automattic does this with their front end (Calypso), which handles wordpress.com.


We have a job every morning that clones the current production db instance and then deploys it onto a staging instance. All PRs with schema changes will be applied to this instance when merged. If something goes wrong, we just redeploy the saved image.
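A hedged sketch of such a nightly job, assuming AWS RDS (the commenter doesn't say which stack they use; the instance and snapshot identifiers are made up):

```shell
#!/bin/sh
# Dry-run sketch (RUN=echo) of a nightly "clone prod into staging" job,
# assuming AWS RDS; instance/snapshot identifiers are illustrative.
RUN=echo
SNAPSHOT="staging-refresh-$(date +%Y%m%d)"

# Snapshot production...
$RUN aws rds create-db-snapshot \
    --db-instance-identifier prod-db \
    --db-snapshot-identifier "$SNAPSHOT"

# ...and restore it as a fresh staging instance. If something goes
# wrong, redeploying means restoring from the same saved snapshot.
$RUN aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier staging-db \
    --db-snapshot-identifier "$SNAPSHOT"
```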


In a previous job, I built (what sounds like) a similar system. From a web UI, coworkers would see a list of recent commits, from which they could fire up an instance of the web app on demand. This created a new Docker container running the server that they could navigate to. A unique port distinguished each instance. Git tags indicated particularly important commits, e.g. new features that required more extensive testing.

Docker worked great for this task, allowing instances to be created and destroyed quickly. In a fresh instance of the web app you could create new accounts and import data then throw away the container when you were finished or wanted a fresh start. It worked especially well for non-technical people to see the latest changes as soon as they were pushed.
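A rough sketch of the create/destroy cycle (the image name, registry, and port choice are assumptions):

```shell
#!/bin/sh
# Dry-run sketch (RUN=echo) of a throwaway per-commit instance;
# the image name, registry, and port are illustrative.
RUN=echo
COMMIT="abc1234"
NAME="webapp-${COMMIT}"
PORT=8081    # in practice each instance gets its own unique port

# Fire up an instance of the web app for this commit
$RUN docker run -d --name "$NAME" -p "${PORT}:80" \
    "registry.example.com/webapp:${COMMIT}"

# ...and throw the container away when finished
$RUN docker rm -f "$NAME"
```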


That is pretty nifty.


We're doing the same for microservice deployments to Kubernetes, using Jenkins' multibranch pipeline - Jenkins just watches the git repo, and any commit to a branch that contains a Jenkinsfile gets built. At the end of the build process, we publish a docker image tagged with the branch and build number, and then use helm to upgrade or create a deployment in a Kubernetes cluster, tagged to run the just-published docker image. Our cluster's running in AWS and has a package deployed to it called 'area53' which can set up Route 53 DNS records for Kubernetes services automatically.

Upshot of all that is that if you create and push a new branch called 'foo', a few minutes later that branch's code is up and running in AWS, under the name 'service-foo.dev.example.com'. We can then automatically run integration tests against that endpoint.
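The publish-then-deploy step might look roughly like this (the registry, chart path, and values keys are assumptions about a typical setup, not their actual Jenkinsfile; $BRANCH_NAME and $BUILD_NUMBER are standard Jenkins env vars):

```shell
#!/bin/sh
# Dry-run sketch (RUN=echo) of the per-branch publish + deploy step;
# registry, chart path, and values keys are illustrative assumptions.
RUN=echo
BRANCH="${BRANCH_NAME:-foo}"
BUILD="${BUILD_NUMBER:-42}"
TAG="${BRANCH}-${BUILD}"

# Publish the image tagged with branch and build number
$RUN docker push "registry.example.com/service:${TAG}"

# One helm release per branch: --install creates it on the first build,
# upgrades it on every later build of the same branch.
$RUN helm upgrade --install "service-${BRANCH}" ./chart \
    --set image.tag="${TAG}" \
    --set ingress.host="service-${BRANCH}.dev.example.com"
```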

Best part is that the build process defined by the Jenkinsfile, and the deployment defined by the helm chart, all live in the source, so you can play with them on a branch. Want to add another testing step to the build process? Branch, add it to the Jenkinsfile, and your build pipeline change gets executed just for your branch, without stopping other branches from building. Once you're happy with the addition, merge the change and now every future branch gets that extra step. Likewise, if you modify the deployment chart, you can test it in a branch.


Is your Jenkinsfile open source? I'm trying to make something similar using Helm charts, would use that as reference.


Hey, we will cover that in another article in the near future. :)

In general, the workflow that “e1g” described is relatively similar to our approach.



