cmcluck's comments | Hacker News

Context: Was product guy at Google (built a few cloud products, did some work in the open source ecosystem), now CEO of a startup.

Background: I don't think I picked product management, it sort of picked me. When I was a really junior engineer I worked in a small team environment with much more senior engineers. We didn't have product management support, so someone needed to talk to the customer and figure out what they needed, and then later have the hard conversation when we were slipping our date. That ended up being me. Someone needed to document what we were doing, that was me. At the end of the day when we had little management support, someone had to represent the needs of the team and hold the team together during an aggressive corporate downsizing. The team looked to me to do that. I sort of drifted into this role without ever being asked to do it. I loved coding, but turns out I liked solving business problems just as much. My path to proper product management went through program management at Microsoft which was a bit of a half-way house. Good customer passion but more focused on execution than on the health of the business.

This doesn't directly answer your question, but I hope it is helpful: what are the attributes I have seen in successful PMs?

* Have good technical instincts. You don't necessarily have to code well, but you need to smell credible to engineers and not have them flip the bozo bit on you. I watched a product guy argue that we should figure out how to reduce latency between global data centers, and then someone kindly pointed out that the speed of light was the problem at hand and we really couldn't do much about it. Don't be that guy; you will never come back from that point.

* Champion the customer. Product managers have to really 'get' their product deeply, understand it, use it, live with it. They need to be able to see it the way a customer sees it and represent the heisen-customer to the team. The primary work product of the PM is the PRD (product requirements document), and the customer should shine through.

* Own your business. It isn't enough to build neat technology that people love; if no one knows about it or you can't sell it, you are wasting your time. Know your sales people, know your marketing strategy, understand the pricing model. Make sure they all get what the product does and is good for.

* Be the janitor before you try to be the CEO. There are a million things a team needs to do. The product manager needs to fill the gaps. Win by doing the things the engineers can't or don't want to do, but don't 'wallpaper' over problems with the team structure. Remember, however, that doing a gap job well indefinitely gets in the way of creating high functioning teams. You need to work your way out of a gap filling job.

* Knowledge is currency. To lead, you have to have something and see something the engineers don't. Understand your competition, use their products, speak to a lot of customers, bring that knowledge back to the team and they will start to trust you.

* Stay out of execution: you are not a project manager. The eng function should not be babied; they need to hire their own project managers to run their scrums, organize execution, etc. If things go well for your product, you are going to be talking to customers, negotiating partnerships, etc. just as the team starts to hit an inflection curve in execution; you can't afford to be trapped in the office running their processes.

Hope that helps.


[disclosure: Craig -- CEO] To name a few:

1. Support and services. This is a really important factor for most enterprises; they want to know they have expert staff on call who have a decent shot at getting a change they need upstreamed.

2. Consolidation and operations at scale. It turns out Kubernetes is being deployed as a 'devops tool' today and people create lots of teensy clusters. We built it to be super flexible and work either for smaller clusters or at scale (using namespaces, etc.; a rough sketch follows below). There are advantages to running larger, expert-operated clusters (Borg style) and we want to help enterprises get there with consolidation and operations tools.

3. Integration tech. There are oceans of 'legacy' systems that basically run the enterprise today and need to be integrated with.

4. Help for non-Google environments. Despite the tons of interested commercial parties in K8s, aside from a few awesome community folks, companies aren't putting much resource into AWS, OpenStack, etc. or doing the unglamorous testing work. We would love to help make the cloud provider model sustainable, and put effort into things like testing and better deployment tech.
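As a rough illustration of the namespace point in (2): consolidation often just means giving each team a namespace on one larger, expert-operated cluster instead of its own teensy cluster. This is only a sketch, assuming a reasonably recent client-go; the team names and kubeconfig path are placeholders:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a local kubeconfig (path is a placeholder).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/you/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // One namespace per team on a single shared cluster, rather than
        // one small cluster per team.
        for _, team := range []string{"payments", "search", "frontend"} {
            ns := &corev1.Namespace{
                ObjectMeta: metav1.ObjectMeta{
                    Name:   team,
                    Labels: map[string]string{"team": team},
                },
            }
            if _, err := cs.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{}); err != nil {
                fmt.Printf("namespace %q: %v\n", team, err)
            }
        }
    }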


A company centered and focused on helping enterprises embrace open cloud, cloud native systems and helping them realize value is fantastic. We are at a juncture where building a company on such tenets makes sense... a better chance of building a viable business model around it. The pain of vendor lock-in in the enterprise software space will only grow in the coming years.


Greg, is the business model centered on delivering and facilitating the heavy lifting necessary for large enterprises? Will this heavy lifting be delivered as a "professional & support" service... batteries-included solutions for legacy system integrations? Primarily K8S-focused, high-powered consultancy services?


cmcluck: can you say more about the "advantages to running larger expert operated clusters"?


The story goes like this: Joe and I did Google Compute Engine together. Once that was on rails we started looking at the gap between GAE and GCE. Joe found Docker way back before it was a household name, and we started thinking hard about the 'compute continuum': what beyond the container format was needed. Brendan was working in the meantime on something that looked like CloudFormation, which we were also playing with. The three of us had instantly good chemistry, and he started looking at Docker too.

To raise awareness of Docker in Google, I asked Brendan to pull together a demo for our all-hands. In a nutshell, what he produced was the bones of Kubernetes. I remember looking at it and having a moment: he had built a mini Borg cell on VMs. He had basically made Borg a devops-accessible tool, not just a monolithic clustering tool (like Mesos was). When I saw that, the product ramifications were obvious; I called Joe over to look and the rest is history.


This was all in the Seattle office. From the Mountain View office, some of the Borg team had been muttering about "something something Borg as a service", but we really didn't know the first thing about cloud products. When we saw the first demos of what would become Kubernetes, we knew pretty quickly that this was the way. From that point on, the synergy was pretty electric.


(This is Craig -- CEO) Ouch :). I was never much good at naming. Take 'Kubernetes' as an example. I still get razzed about it.

I would describe this as a case of 'domain based naming' (i.e. we could get the domain name, it was uncontested space).

I hope to create something positive out of it; if we do it up right, hopefully the company character will dominate the name.


[This is Craig -- CEO of this new venture, co-founder of K8s (with Joe and Brendan), and person who started CNCF (with Jim Zemlin, and a bunch of community wonks from big tech)]

It is funny you say this. I spent a lot of time looking around the community at what existed before starting CNCF, and agonized over this. We needed to take K8s to a foundation so that it wouldn't be a 'Google project'. Google was actually the best steward of the tech you could imagine, because the plan was always to make k8s ubiquitous and just win on quality of infrastructure, but the community had no way to know that.

I looked at OpenStack hard, and liked the energy and enthusiasm, but was really worried about (1) the balkanization that was emerging with no 'true north' -- it just didn't have technical taste, (2) the tragedy of the commons -- most vendors were focused on their own interests and neglected the end users, and (3) a lack of coherence.

When designing CNCF I tried hard to work through this by creating a better foundation structure:

(1) The business board has very limited authority over projects, hopefully making sure that we avoid it being a pay-for-play affair.

(2) We made provisions for little companies to get top-level seats based on community contributions (ditto).

(3) We created an empowered end user group, with authority equal to any other group's, to make sure real users' interests are promoted.

(4) We added a TOC (technical oversight committee), the most empowered group, to establish true north; it is community elected. The idea is that they need to champion the projects and establish technical 'taste' (e.g. Brian Grant from Google, the guy who drives consistency, sat on this group -- not me, the guy who had access to the purse strings and who was focused on the business).

(side note: I picked this structure because I was geeking out on government structures at the time, and figured that the separation of powers yields more sustainable administration)


So, being a PTL in OpenStack, I have various comments and questions that it would be nice to have your thoughts on.

In terms of looking at OpenStack hard and reaching decisions based on various <things>: did you do any outreach to the OpenStack community to actually communicate the things you found or heard or concluded, so that the group there (including myself) could actually work on improving itself (or so that, where some of the reasons you stated aren't even correct, the community could have helped you clarify them)? If not, then it concerns me that you may have reached conclusions without actually talking with that community (but I don't want to jump to any conclusions without getting your thoughts/input).

So far, looking from the outside in on the CNCF and seeing how it compares to the OpenStack community (which I am more involved with, including other small side-communities that I also work in), I've yet to understand what exactly the CNCF is targeting. It seems to be a body that is just adopting various projects that align to some mission (?). I personally have a hard time understanding the reasoning behind some of the projects that have been adopted; maybe you can shed some light on that. What is true north for the CNCF? Where is it written down? What is the TOC actually making adoption yes/no decisions on? What criteria? What is the technical taste you talk about, and where is it written down?

The nice thing about OpenStack is that they are writing most/all of this down and agreeing on those kinds of questions in public:

https://github.com/openstack/governance/tree/master/referenc... (github is a mirror, not the source of this repo, but easier for browsing purposes).


A fair point. One thing worth remembering is that this was a point-in-time thing. I have seen a lot of movement and some very positive signals around convergence of OpenStack, and a real focus on the end user community. When I was doing the digging things felt different, and there is a decent chance that, had OpenStack been where it is now, I would have taken a different position.

The mission of CNCF is the promotion of 'cloud native technologies' -- specifically container-packaged, dynamically scheduled, microservices-oriented workloads. It isn't about picking winners; it is about establishing a safe space for innovation and bringing the collective communities to bear. We have legitimately taken some time in getting the identity of the foundation established, but I feel like Dan Kohn (our new ED) is doing super work in creating a collaborative space for new projects.


Thanks for that part of the response, though I'm still concerned about the 'felt' part, especially if that felt part didn't involve talking with much/any(?) of that community in a public forum. Is there anything I can do to help you understand it better? I'd at least like to be able to echo whatever concerns you had back to that community, because at that point it becomes actual data behind the decision to create the CNCF, and not just feelings or thoughts of movement or positive signals (all very fluffy things IMHO).


So in the CNCF, are competing implementations allowed / encouraged?

For me that would have repercussions for what other tech a CNCF project supports (e.g. is all stats monitoring based on Prometheus, or can 2 projects in the CNCF support different technologies?)


Hi, I'm Alexis Richardson, chair of the TOC for CNCF. The answer is YES, we allow competing implementations.


to elaborate for @hueving @mugsie et al., consider that OpenStack is organised around Nova, the scheduler++ that is at the heart of any OpenStack deployment. If the CNCF was "like OpenStack" then it could mandate that all projects are organised around Kubernetes, playing a role analogous to Nova. But we didn't want to be solely a "Kubernetes foundation". The market is early stage, and there are other valid approaches to orchestration, including Docker Swarmkit, Hashicorp Nomad, Mesos & DCOS templates like Marathon, and others. So, we need a different approach.

Of course there are people who want a KubeStack that is like OpenStack, for better or worse. That's fine too! We just don't want that to be the ONLY choice for customers.


Do you allow competing APIs for the same service? If not, how is that any different from OpenStack? If so, how do you address the issue of fragmentation across deployments?


Yes we do allow competing APIs.

You said it yourself in another comment on here: "It's never blatant, it's always calls for seemingly good things like extra pluggable points to make sure we don't favor particular solutions. Then it's making sure that any decision is brought to a huge vote by a giant committee that spends weeks arguing about if it's something they should even decide on, etc."

This kind of premature generalisation by committee is what has pulled OpenStack down; a situation from which it is now apparently recovering. CNCF seeks to avoid this, by encouraging projects towards interop but not in mandated ways.


Cool.

Do you ask projects to support all the implementations or just choose one?


Projects can do what they like. We believe that users, communities, market pressures, and so on, will drive good outcomes here. For example to date, all projects have worked to interoperate of their own volition. No committees were formed to achieve this.


So, also being a PTL in OpenStack, I would say that the foundation design is very similar to the OpenStack one.

1 - The OpenStack board has zero technical input into projects; the way to get things done in OpenStack is still to throw developers at it.

This does somewhat push the balance of power to larger companies - they have the money to employ developers.

2 - We have "community directors" who are elected by the people who actually commit code.

3 - A definite improvement over the initial setup of OpenStack, but that is currently changing with the User Committee.

One question I would have about this: how are the end user groups' requirements put forward? What mechanisms are there to ensure developers work on the defined priorities?

4 - Yup - we have the equivalent with the Technical Committee (the TC in OpenStack slang)

Separation of powers is ++, but how does that play out when the TOC decides that they want to do something that does not mesh with the board's plans?


"how are the end user groups requirements put forward? what mechanisms is there to ensure developers work on the defined priorities?" --> projects are run by their leads, they are not told what to work on. In this sense, CNCF operates more like IETF/ASF but with (arguably) less intrusive governance.

The underlying idea here is that a well-run open source project gets plenty of strong direction from actual users, who must be interacted with directly.

There is a still-forming End User Board designed to create a strong forum for some types of User-Project discussion. But overall CNCF will lean towards "voluntary" and not "mandatory" requirements.


Disclosure: I am one of the Google people who founded the k8s project. Product guy, not engineer though.

We are really concerned about 'writing letters from the future' to the development community and trying to sell people on a set of solutions that were designed for internet scale problems and that don't necessarily apply to the real problems that engineers have.

I spent a lot of time early on trying to figure out whether we wanted to compete with Docker, or embrace and support Docker. It was a pretty easy decision to make. We knew that Docker solved problems that we didn't have in the development domain, and Docker provided a neat set of experiences. We had 'better' container technology in the market (anyone remember LMCTFY?) but the magic was in how it was exposed to engineers, and Docker put lightning in a bottle. Creating a composable file system and delivering a really strong tool chain are the obvious two big things, but there were others. I remember saying 'we will end up with a Betamax to Docker's VHS' and I believe that remains true.

Having said that, there are a number of things that weren't obvious to people who weren't running containers in production and at scale. It is not obvious how judiciously scheduled containers can drive your resource utilization substantially north of 50% for mixed workloads. It isn't obvious how much trouble you can get into with port mapping solutions at scale if you make the wrong decisions around network design. It isn't obvious that label-based solutions are the only practical way to add semantic meaning to running systems. It isn't obvious that you need to decentralize the control model (a la replication managers) to accomplish better scale and robustness. It isn't obvious that containers are most powerful when you add co-deployment with sidecar modules and use them as resource isolation frameworks (pods vs discrete containers), etc, etc, etc. The community would have got there eventually; we just figured we could help them get there quicker given our work and having been burned by missing this the first time around with Borg.
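To make the pods/labels/sidecar part concrete, here is a minimal sketch using today's Kubernetes Go client types. This is an illustration only: the image names and label values are made up, and the calls assume a reasonably recent client-go (it uses the fake clientset so it runs without a real cluster). It shows an app container co-deployed with a sidecar in one pod, and a label selector used to find pods by meaning rather than by name:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/fake"
    )

    // buildPod: the app container and a (hypothetical) log-forwarder sidecar
    // are co-scheduled in one pod and share its resource/network boundary;
    // labels carry the semantic meaning that selectors and controllers key off.
    func buildPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "web-0",
                Labels: map[string]string{"app": "web", "tier": "frontend"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "web", Image: "example.com/web:1.2"},
                    {Name: "log-forwarder", Image: "example.com/log-forwarder:0.4"}, // sidecar
                },
            },
        }
    }

    // listFrontendPods finds pods by label selector, not by name.
    func listFrontendPods(ctx context.Context, cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
            LabelSelector: "app=web,tier=frontend",
        })
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name)
        }
        return nil
    }

    func main() {
        // The fake clientset stands in for a real API server in this sketch.
        cs := fake.NewSimpleClientset()
        ctx := context.Background()
        if _, err := cs.CoreV1().Pods("default").Create(ctx, buildPod(), metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        if err := listFrontendPods(ctx, cs); err != nil {
            panic(err)
        }
    }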

Our ambition was to find a way to connect a decade of experience to the community, and to work in the open with the community to build something that solved our problems as much as outside developers' problems. Omega (Borg's successor) captured some great ideas, but it certainly wasn't going the way we all hoped. Kind of a classic second-system problem.

Please consider K8s a legitimate attempt to find a better way to build both internal Google systems and the next wave of cloud products in the open with the community. We are aware that we don't know everything, and we learned a lot by working with people like Clayton Coleman from Red Hat (and hundreds of other engineers) while building something in the open. I think k8s is far better than any system we could have built by ourselves. And in the end we only wrote a little over 50% of the system. Google has contributed, but I just don't see it as a Google system at this point.


Thank you, a very insightful comment. FWIW I think you made a smart choice by leveraging Docker and its ecosystem (not just the tools; the mind share and the vast amount of knowledge spread via blog posts and SO also have to be considered). While Docker's technical merits over competing solutions were indeed arguable then, its UX was a game changer in this domain: everyone got the concept of a Dockerfile and "docker -d", and many embraced it quickly.

Kubernetes came around exactly at the time when people started trying to use it for something serious and things like overlay networking, service discovery, orchestration, and configuration became issues. For us, k8s, even in its early (public) days, was a godsend.

Also, I can confirm: the k8s development process is incredibly open. For a project of this scale, being able to discuss even low-level technical details at length and seeing your wishes reflected in the code is pretty extraordinary. It will be interesting to learn how well this model will eventually scale in terms of developer participation; I figure the amount of management overhead (issue triaging, code reviews) is pretty staggering already.


Thanks for the insightful response. I didn't mean to say that the entire project came from some shot of inspiration by some Google engineer, nor do I think of Kubernetes as a "Google system" -- I'm well aware of that separation.

The main thing that I think makes Kubernetes the perfect example of a modern free software community is the open design process and how the entire architecture of Kubernetes is open to modification. You don't see that with Docker, and you don't see it with many other large projects that have many contributors.


So is there ongoing work to replace Borg with K8s (afaik the adoption of Omega pretty much stalled and/or died)?

I myself would feel more confident in K8s if there were usage of K8s outside of GKE and the dogfood being created were eaten by the rest of Google as well (or at least there were a publicized plan to make that happen), because otherwise it feels more like K8s is an experiment that Google folks are waiting to see play out before the rest of Google adopts it.

Thoughts?


Disclaimer: I work at Google and was a founder of the Kubernetes project.

In a nutshell, yes. We recognized pretty early on that fear of lock-in was a major influencing factor in cloud buying decisions. We saw it mostly as holding us back in cloud: customers were reluctant to bet on GCE (our first product here at Google) in the early days because they were worried about betting on a proprietary system that wasn't easily portable. This was compounded by the fact that people were worried about our commitment to cloud (we are all in, for the record, in case people are still wondering :) ). On the positive side, we also saw lots of other people who were worried about how locked in they were getting to Amazon, and many at the very least wanted to have two providers so they could play one off against the other on pricing.

Our hypothesis was pretty simple: create a 'logical computing' platform that works everywhere, and maybe, if customers liked what we had built, they would try our version. And if they didn't, they could go somewhere else without significant effort. We figured at the end of the day we would be able to provide a high-quality service without doing weird things in the community, since our infrastructure is legitimately good and we are good at operations. We also didn't have to agonize about extracting lots of money out of the orchestration system, since we could just rely on monetization of the basic infrastructure. This has actually worked out pretty well. GKE (Google Container Engine) has grown far faster than GCE (actually faster than any product I have seen) and the message around zero lock-in plays well with customers.


Not speaking in an official capacity, but the analogy I've seen used is that big companies don't want to relive the RDBMS vendor lock-in experience.

I'm speaking about something other than k8s (Cloud Foundry), but the industry mood is the same. Folk want portability amongst IaaSes. Google are an underdog in that market, so it behooves them to support that effort -- to the point that there are Google teams helping with Cloud Foundry on GCP.

Disclosure: I work for Pivotal, we donate the majority of engineering to Cloud Foundry.


k8s is essentially "AWS in a box" and it's a product that locks you in. As soon as a k8s cluster is running in GKE, it isn't that portable at all, due to operational complexity as well as being tied to the Google infra.


Disclaimer: I am a founder of the Kubernetes project and did the article with Cade at Wired. I also was product lead for Compute Engine back in the day, fwiw :).

I am not sure which projects you have looked at from Google in terms of open source, but in the case of Kubernetes we have worked pretty hard to engage a community outside of Google and work with the community to make sure that Kubernetes is solid. One of the things that I like about it is that many of the top contributors don't work at Google. People like Red Hat have worked very closely with us to make sure that (1) Kubernetes works well on traditional infrastructure, (2) it is a comprehensive system that meets enterprise needs, and (3) the usability is solid. People like Mirantis are working to integrate Kubernetes into the OpenStack ecosystem. The project started as a Google thing, but is bigger than a single company now.

Another thing worth noting: building a hosted commercial product (Google Container Engine) in the open by relying exclusively on the Kubernetes code base has helped us ensure that what we have built is explicitly not locked into Google's infrastructure, that the experience is good (since our community has built much of the experience), and that the product solves a genuinely broad set of problems.

Also consider that many of our early production users don't run on Google. Many do, but many also run on AWS or on private clouds.

-- craig


I'd be interested to see whether Google follows Pivotal's lead and donates its IP to an independent foundation, as happened with Cloud Foundry.

Disclaimer: I work for Pivotal, in Pivotal Labs.


(disclaimer: I work at Google and was one of the founders of the project)

When we were looking at building k8s, our mission was to help the world move forward to a more cloud native approach to development. By cloud native I mean container-packaged, dynamically scheduled, microservices-oriented. We figured that in the end our data centers are going to be well suited to run cloud native apps, since they were designed from the ground up for this approach to management, and will offer performance and efficiency advantages over the alternatives. We also, however, recognized that no matter how cheap, fast, and reliable the hosting offering is, most folks don't want to be locked into a single provider, and Google in particular. We needed to do what we were doing in the open, and the thing that we built needed to be pattern-compatible with our approach to management and, quite frankly, address some of the mistakes we had made in previous frameworks (Borg mostly, as a first system).

We looked really closely at Apache Mesos and liked a lot of what we saw, but there were a few things that stopped us just jumping on it:

(1) It was written in C++ and the containers world was moving to Go -- we knew we planned to make a sustained and considerable investment in this and knew firsthand that Go was more productive.

(2) We wanted something incredibly simple to showcase the critical constructs (pods, labels, label selectors, replication controllers, etc.; a toy sketch of the replication controller idea follows below) and to build it directly with the community's support, and Mesos was pretty large and somewhat monolithic.

(3) We needed what Joe Beda dubbed 'over-modularity' because we wanted a whole ecosystem to emerge.

(4) We wanted the 'cluster environment' to be lightweight and something you could easily turn up or turn down, kinda like a VM; the systems integrators I knew who worked with Mesos felt that it was powerful but heavy and hard to set up (though I will note our friends at Mesosphere are helping to change this).
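On the replication controller construct mentioned in (2): such controllers continuously compare desired state against observed state and act to converge the two. The sketch below is a toy, not the real Kubernetes code -- the Cluster type and workload name are made up -- and it only shows the shape of the observe/diff/act loop:

    package main

    import (
        "fmt"
        "time"
    )

    // Cluster is a stand-in for observed state; real controllers watch the
    // API server rather than a local map.
    type Cluster struct {
        running map[string]int // workload name -> observed replica count
    }

    // reconcile compares desired vs observed and acts to converge them.
    func (c *Cluster) reconcile(workload string, desired int) {
        observed := c.running[workload]
        switch {
        case observed < desired:
            fmt.Printf("%s: %d/%d running, creating %d replica(s)\n",
                workload, observed, desired, desired-observed)
            c.running[workload] = desired
        case observed > desired:
            fmt.Printf("%s: %d/%d running, deleting %d replica(s)\n",
                workload, observed, desired, observed-desired)
            c.running[workload] = desired
        default:
            fmt.Printf("%s: steady state at %d replica(s)\n", workload, desired)
        }
    }

    func main() {
        c := &Cluster{running: map[string]int{"web": 1}}
        for i := 0; i < 3; i++ {
            c.reconcile("web", 3) // converges to the desired count, then holds
            time.Sleep(10 * time.Millisecond)
        }
    }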

So we figured we'd do something simple: create a first-class cluster environment for native app management, 'but this time done right', as Tim Hockin likes to say every day.

Now, we really like the guys at Mesosphere and we respect the fact that Mesos runs the vast majority of existing data processing frameworks. By adding k8s on Mesos you get the next-generation cloud native scheduler and the ability to run existing workloads. By running k8s by itself you get a lightweight cluster environment for running next-gen cloud native apps.

-- craig


Thanks for this, it cleared up some confusion in my mind. A blogpost capturing these thoughts would be great.


"Adding k8s on mesos you get the next-generation cloud native scheduler and the ability to run existing workloads. by running k8s by itself you get a lightweight cluster environment for running next gen cloud native apps." @cmcluck

More references:

[1] https://mesosphere.com/blog/2015/04/22/making-kubernetes-a-f...

[2] http://blog.kubernetes.io/2015/04/kubernetes-and-mesosphere-...

[3] http://thenewstack.io/mesosphere-now-includes-kubernetes-for...


(disclosure: I work at Google and picked the name)

The comments above are right -- we wanted to stick to the nautical theme that was emerging in containers, and 'kubernetes' (Greek for helmsman) seemed about right. The fact that the word has strong roots in modern control theory was nice also.

Fun fact: we actually wanted to call it 'Seven' after Seven of Nine (a more attractive Borg), but for obvious reasons that didn't work out. :)


The GFS cell used back in 2004 for staging Borg binaries to production was /gfs/seven/, for the same reason :-)

