Hacker News | bringking's comments

If anyone wants to know what looking at an animal or some objects on LSD is like, this is very close. It's like 95% understandable, but that last 5% is really odd.


Yeah! I've tried to explain to people who've never experienced it what taking LSD can be like. It's very similar to the output from these tools: the same stimulus but exaggerated, wrong in subtle or not-so-subtle ways, uncanny and fascinating. It basically never creates something out of whole cloth, from nothing so to speak.


One of the hardest challenges I have had in my career is convincing our company to move to Continuous Delivery. 90% of the challenges weren't technical, but emotional. Shipping software comes with lots of feelings, fear, politics, etc. I had to personally work with various leaders across the org to help them through these feelings and perceived blockers.

We aren't 100% there yet, but we are shipping numerous times per day across 20 or so services, and quality has gone _up_, not down.


That is interesting, because I am still trying to figure out how people do CD. Of course I know CI; we have it all set up. But we still work in sprints with manual testing and release every two weeks.

I could spend time marking features 'frontend only, low impact', which we could deploy pretty much the same day. Still, there are quite a few features that need a bigger amount of work, where they might be 'done' by the dev but I am sure they are not actually done, because of security or error checking. Usually one dev has his untested feature merged to develop while another dev has a production-ready one. Then, if one feature is production ready, I would also have to put in time to make a release and pick only the changes for the accepted feature. I am not sure that the additional work of checking what we can release 'right now' pays off versus just taking time for fixes at acceptance, done by the people who worked on the code, and then releasing (after one or two weeks, depending on how fast it gets done in the sprint).

So do you have people who work only on picking out low-impact changes, or on making stuff production ready by picking from develop? Maybe you pick the changes yourself, or do you just defer manual testing to end users and rely on automated unit/integration tests?

p.s. The funny thing with automated tests is that they are good at keeping old stuff working, but not at testing newly developed features, where an actual tester can exercise the new GUI and new features. If you have a lot of GUI changes, you cannot automate the first round of tests...


A couple of obvious things stand out from your situation. First, only allow merges of code that has been reviewed and has enough tests along with it. Second, have a pipeline that automatically runs all the tests whenever you merge and, if they pass, goes on to deploy automatically. It's really that simple. It's not easy to get to that point, but it is simple.

Mostly it comes down to organizational changes and everybody getting used to what constitutes "enough" tests.
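As a rough illustration of that merge-then-deploy gate, here is a minimal sketch in Python. The stage names and the `true`/`false` placeholder commands are illustrative only, not a real CI configuration; a real pipeline would invoke the actual test suite and a deploy tool.

```python
import subprocess

def pipeline(stages):
    """Run each stage's command in order; the first failure blocks the deploy."""
    for name, cmd in stages:
        if subprocess.run(cmd, shell=True).returncode != 0:
            return f"{name} failed"   # a red stage stops everything downstream
    return "deployed"                 # all stages green: safe to ship

# Placeholder stages; swap in real lint/test/deploy commands.
stages = [("lint", "true"), ("tests", "true"), ("deploy", "true")]
```

The point is only the shape: deployment is just the last stage of the same automated run that executes the tests, so nothing reaches prod without passing them.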


It was pretty important to work with the project managers and "scrum masters" to decouple our release cycle from our sprint cycle. In our case, like yours, they were coupled together for no technical reason. It's hard to sum up all the changes we made to allow it to happen, but it mostly boiled down to a few technical decisions:

* Every PR is treated as "production" ready. This means if it isn't ready for user eyes, it gets feature flagged or dark shipped. Engineers have to assume their commit will go into prod right away. Feature flags become pretty important.

* Product Owners and QA validate that code is "done" in lower environments (acceptance or staging, or even locally during the PR phase). This helped us decouple code being in prod from the "definition of done" in our sprints.

* All API changes and migrations follow "expand and contract" models that keep the code shippable. E.g. even if we are building new features, we can ship our code at any time because the public API is only expanding.

* More automated quality checks at PR time: unit tests, integration tests, danger rules, etc. These vary from codebase to codebase. A key part of this is trusting the owners of that code or service. To a degree, if they are happy with their coverage, then they can ship. (Within limits, of course. Not having unit tests at all would be a red flag.)
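The feature-flag / dark-ship idea in the first bullet can be as small as a dictionary lookup. A minimal sketch, where `FLAGS`, `is_enabled`, and the flag names are all illustrative and not taken from any particular library:

```python
# Dark-shipped code lives in prod but is off for everyone; some flags
# are opened up to a group (e.g. QA) before general release.
FLAGS = {
    "new-checkout": {"enabled": False},                  # dark-shipped: off for all
    "bulk-export":  {"enabled": False, "allow": {"qa"}}, # visible only to the QA group
}

def is_enabled(flag, group=None):
    """A flag is on if globally enabled, or if the caller's group is allowed."""
    cfg = FLAGS.get(flag, {})
    return cfg.get("enabled", False) or group in cfg.get("allow", set())
```

With a gate like this, merging a half-finished feature into develop is safe: the code ships, but users never see it until the flag flips.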

Also, we still ship numerous times a day without a full automated test suite; we just make sure each release is really small (1-3 commits). The smaller the release, the easier it is to manually QA. So nothing fancy is needed, just smaller releases.

So to answer your question: we don't pick and choose work that is "low" impact or "high" impact; all code gets shipped the same way and with the same cadence. It is our responsibility to ensure that when it goes to prod, it won't break anything.


Facebook transitioned from a weekly release cycle [1] to continuous delivery while I was there. It was a big project. There's a nice writeup here:

https://code.fb.com/web/rapid-release-at-massive-scale/

[1] Commits were pushed to employees immediately on commit (that's how we tested), but not to the public.


2017? Wow! My company shifted back in 2011!


I have come to be against continuous delivery - in 'whole product' terms there are vast hidden expenses.

In particular - documentation and support.

Using complicated interfaces is sometimes very challenging, to the point of being obscene - Google searches for help turn up a variety of outdated answers, and it's impossible to know what's what.

I'm using Facebook Ads quite a lot right now - it's a complex system that just shifts like quicksand under your feet.

Users make incredible efforts to learn the product, only to have it shift away from them like a ghost.

Documentation may or may not be up to date.

Locations of things change.

And who can you ask? Where do you search for support? Facebook has never really provided me answers to many questions in their documentation.

Tiny example - an advisor used to know how to list the people who had commented on a post, so that you could 'invite them to like your page'. But it's changed and now he can't find it. One small thing he no longer knows how to do, a tool lost from his tool-chest.

And who really gains from all of this? Seriously? I don't think anyone.

Is that 'new feature' really that important? It needs to be rolled out 'now' instead of in a major/minor delivery?

These changes are seldom well communicated either.

I suggest the total opposite might be better: release major iterations every year, minor iterations quarterly, and patches as needed.

Every time there is a release, provide users with friendly release notes - something they can read at a quick glance to get up to speed. A little 10-second video for every change: "Oh, now you can do XYZ like this."

This way - you have predictability and consistency so users can know how to 'keep up'.

Also - the ability to use the OLD interface where possible for at least a year, or something like that - so that we users are not forced onto the quicksand.

"Ok it's Jan 1 - FB Ads 2019 is released in 1 month - let's go over the changes - Mary, you can be responsible for highlighting the major changes and communicating them to the rest of the team, and highlighting any risks for us"

Otherwise, you're trolling along, these companies make changes that could feasibly have major impact on your business and you're out to lunch.

Maybe internally continuous delivery could be a useful thing ... but for the world at large the downsides are real and the upsides are limited.



Oh sorry, I should read better!


Things have recently changed, please try again if you are still interested!


I'll do that. Thanks!


Tucson, AZ and Denver, CO locations as well ;)


We hugged it to death


I’m going to go out on a limb here, and say if you are relying on Product Hunt and Hacker News to send you all your traffic, you are still screwed.

Heh.


HN's smothering love.


I prefer the even simpler Michael Pollan guide - “Eat food. Not too much. Mostly plants.”


Pollan's quote could even be a good bumper sticker.


That's mentioned in the article.


It's just a beta and we are actively working on new features, so any feedback would be greatly appreciated!

Next on the plate are time-travel state replacement and in-browser NPM importing, so you can require other components on the fly.


I am using Yahoo's Fluxible and it is quite good. It doesn't feel verbose at all in everyday use.


I am not sure why it's becoming popular to say "It's unacceptable that the underlying framework is bigger than the application itself." If your domain-specific implementation is smaller than the framework, then the framework did exactly what it was supposed to do: it reduced the amount of work you had to do to implement your app.


It is really unacceptable when your app can do without it - and that's what Riot is trying to prove. It doesn't mean to hurt your feelings.


I agree that it is unacceptable/irresponsible in a way to have a larger payload than necessary, but I think that is a different issue. Something like selectively importing only what you need from the framework vs. the whole shebang.


That's another way of staying minimal, but still, the library makers have to maintain that pile of rarely used features, and that may slow down development and delay releases.

