Hacker News
Ask HN: How to convince big tech team that tests and code quality matter?
23 points by throwaway8365 on Dec 7, 2021 | 71 comments
I recently joined a big tech company. I'm slowly realizing our codebase is too lax regarding automated tests, and frankly code quality in general (e.g. no teamwide style guide).

This must lead to a lot of problems which I'll only face after ramping up, but even onboarding is painful if you don't have e.g. clear automated testing set up.

I've been in the industry for about 10 years and testing has always been an important part of the job. I guess I saw it as a given; to me part of the job _is_ writing tests. I assumed this was fairly obvious in 2021.

How should I approach this with the team and my manager? Keep in mind I'm new and an IC, not very senior. Others on the team might already have struck this chord in the past as well. Or perhaps I should drink the Kool-Aid and shut up.



You should first establish yourself as a productive member of the team. No one is going to want to hear about how they could do things better from someone who just started. Once you have credibility and maybe some allies you'll want to have some evidence to support any new processes or workflows. Anecdotes about how you did things at other jobs won't carry a lot of weight.

The problem with software development techniques and tools like automated testing and style guides is that while they look like good ideas, and work at some places, there's not a lot of empirical evidence to support claims that automated tests will improve developer productivity or code quality.

Programmers usually perceive any new code base as crap in desperate need of new ideas and tools. It takes a while to get into a code base and really understand it, and by that time some things that seemed terrible or confusing at first will make sense.


> No one is going to want to hear about how they could do things better from someone who just started

I don't understand why more people don't get this. Like thanks for showing up and implying everyone is incompetent with your suggestions?


Lack of humility kind of plagues our profession.


It gets even worse when somebody succeeds at being the new guy who doesn't understand the new codebase but makes big changes to recreate their previous workplace anyway.

Fresh eyes and a fresh perspective are useful but don’t go recreating the universe right away.


You shouldn't judge quality or suggestions by seniority


It's not seniority, it's someone showing up who doesn't understand the challenges that led to the current situation and doesn't learn them before pointing out issues.


It's not seniority that's the criterion. It's street cred / rapport.


This is an excellent observation, thank you for making it. A prerequisite for "convincing" a large team is respect from the team. Assuming it's not a toxic jungle, proven competence over a period of time is a good way of earning said respect. Once you have that, you can cash it in for attention and people will listen.

A more tactical thing is to actually do some work. You said that they don't have a style guide. Create one which you personally use and share it with your immediate team so that your day-to-day work is better. Your immediate manager and teammates will have to work with you, and if you start doing things which make your work obviously better, they'll pick up on it, and that will give you mileage in the long run.

Automated testing is hard in practice when there are a lot of external dependencies and when there's a huge legacy codebase. This is a case by case thing but just like before, if you have something to show and the respect to buy attention, you might be able to sell an idea to them.


"Show, don't tell" is a good principle. Don't tell the team how they should write better code. Show them an example that you can credibly argue represents improvement. Expect pushback and alternate opinions, don't fall in love with your particular style. Teams can get derailed over very minor things like tabs vs. spaces, so remember the important thing is a consistent style, not a particular style. Automated tests can be useful but don't expect the team to retrofit that, or for everyone to buy into it. You have to show the benefits in that environment.


Totally agree with all your points except

>there's not a lot of empirical evidence to support claims that automated tests will improve developer productivity or code quality.

If you'd said "there aren't any published papers _proving_ this" I would've agreed, simply because I don't know whether such studies exist (they might); but I'd argue that automated tests obviously improve code quality and productivity. And also ramp-up time, which is important in fast-growing teams (my case).


I'm replying to this and also some of the sibling comments.

No, automated development-time tests do not necessarily equate to higher quality, given that they:

- Take time away from other activities

- May pass despite a bug, lulling developers into a false sense of confidence

- Increase the time taken for certain CI/CD operations, sometimes by hours or days!

- Add a large volume of "test" code, which makes certain refactoring operations take a lot longer, a real cost

People go into some work environments where tests DID help, and they generalise that to ALL work environments.

Automated build-time tests are less important, and entire categories may be potentially unnecessary when:

- The product is written in a strongly-typed language that catches most errors at compile time.

- If high-level "linting" tools are used to further enhance the code quality before the executable ever runs.

- If feedback from the production environment is more relevant. E.g.: for web applications with performance issues being monitored by an APM, but no critical code stability concerns at build time. Think typical web apps with no real consequences to a crash, not finance applications where a small error could mean Real Money.

- Where time-to-market is more important than chasing 100% robustness at the expense of meeting release dates.

You often see people from a PHP or Python background go on and on about the importance of tests, but those are dynamically typed languages with lots of unexpected pitfalls that only testing will uncover. Meanwhile entire teams do just fine without any automated tests when using languages like C#, Java, or Rust.


Types do a great job at assuring you’re getting generally the right kind of data in and out of different functions/methods, but they don’t assure that the program is actually doing the right thing. Strong typing just helps to assure that the program isn’t doing specific categories of the wrong thing (in the same way that Rust and Go make it difficult to write memory-unsafe code). I mean, Go and Rust both have testing capabilities built in. If it wasn’t necessary, why would they do that?
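The distinction between "the right kind of data" and "the right thing" can be shown with a tiny hypothetical example (Python with type hints, names invented for illustration): the function below satisfies any static type checker, yet the behaviour is wrong in a way only a test would notice.

```python
def apply_discount(price_cents: int, percent: int) -> int:
    # Type-correct but logically wrong: it ADDS the discount instead
    # of subtracting it. A type checker accepts this without complaint.
    return price_cents + price_cents * percent // 100

# The behavioural check that no type system performs for you:
result = apply_discount(1000, 10)   # a 10% discount on $10.00
print(result)                       # 1100, not the expected 900
```

Strong typing rules out whole categories of mistakes, but "int in, int out" says nothing about which int comes out.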

Beyond the most trivial of projects, I can’t think of a single project that I’ve worked on (or even looked at on GitHub) that has weighed the costs and benefits of automated testing and decided it just wasn’t worth it.


Conversely, I've actually never seen automated tests "in the wild". Every project I've come across has had zero tests, or close to it.

I'm dealing with a migration of a legacy code base right now, today, that could probably use some tests! But even suggesting this is a complete non-starter: The time required to write the tests vastly exceeds the time and budget allocated to the migration project. I'll simply enable an APM tool that costs $50 a month and call it a day. If something crashes in UAT we'll catch it. If not, we'll definitely catch it in PRD. If not, then it's probably not worth dealing with!

I've added tests to some of my projects in the past and caught virtually nothing with them. I always run my code "through its paces" at least a few times with a REPL, debugger, or some sort of tracing tool before it's ever committed to the repo.

The kind of bugs I find in production would never be found by typical automated test suites. Things like logic errors due to misunderstanding the requirements, or deadlocks in a database that only manifest with an obscure multi-user workflow combination.

As a random example, a significant issue in one codebase was that a distinct sort was case-insensitive but should have been case-sensitive. It was dropping a few hundred items out of hundreds of millions. Will your tests find this issue? What if I told you nobody noticed for years, and that the processing where this manifested takes 4 hours?

Will your dev team still be productive if changes require 4-hour tests every time they hit build or check in the code? To find a one-time issue that won't reoccur now that it's been fixed? Really?

Small, fast tests often find nothing and are worthless because they don't test anything of interest.

Big complicated tests are time consuming to write, still often find nothing, and slow down processes.


4 hour tests are a nonstarter, but ponder this: how can you guarantee that bug will never happen again? You can be pretty sure. You can reason about the code that you’re now familiar with and infer (based on your current understanding) that something like that won’t happen again. But you can’t guarantee it.

Some bugs don’t really deserve tests that run all the time - that I’ll grant you. I frequently have tests for stuff like that as a form of documentation for myself down the road. A year from now, will you remember the specific steps you took to fix your case sensitivity bug? I wouldn’t. But my VCS will. Future me will be very grateful if I write a “verify this annoying case sensitivity thing isn’t a problem” script somewhere so that if I’m seeing weird behavior in the future, I can be reasonably sure that it’s a different problem (or that my fix got reverted or overwritten).
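A "verify this annoying case sensitivity thing isn't a problem" script can be very small. This is only a sketch, assuming the deduplication step is reachable as a plain function (the `distinct` helper here is hypothetical, standing in for whatever the real pipeline does):

```python
def distinct(items):
    """Case-SENSITIVE distinct, preserving first-seen order.

    The original bug was deduplicating case-insensitively, which
    silently dropped items differing only in case.
    """
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def test_distinct_is_case_sensitive():
    # "ABC" and "abc" are different keys and must both survive.
    assert distinct(["ABC", "abc", "ABC"]) == ["ABC", "abc"]
```

A regression test like this doesn't need to run on every build; even sitting in the repo, it documents exactly what "fixed" meant.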

I also don’t find a lot of value in unit tests or having 100% code coverage. But that doesn’t invalidate automated testing altogether. I cannot tell you how often tests - even just simple end to end, happy path tests - have saved my ass.

I’m willing to bet that you’re already testing your software manually, right? Like if you change (for the sake of example) a form, you’re probably going to submit that form a few times with different inputs. Somebody else down the road is going to do that too. Why wouldn’t you write down the different inputs you gave so that the next person has them available? Maybe that person doesn’t have the same context that you do. Maybe they just aren’t as thorough as you. Either way though: if you’ve already done the work to figure out how to fully exercise the form, why would you make someone else do that work again?

The next logical step after writing that stuff down is to automate it if for no other reason than to avoid having smart, expensive people sitting around doing data entry when a computer can do it for them.
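"Writing that stuff down" can literally be a table of cases fed to one loop. A minimal sketch, where the form handler and field names are hypothetical stand-ins for the real application:

```python
# Hypothetical form handler standing in for whatever the real app does.
def submit_form(name: str, email: str) -> bool:
    return bool(name.strip()) and "@" in email

# The written-down inputs: each row is (name, email, should_succeed).
CASES = [
    ("Ada", "ada@example.com", True),
    ("",    "ada@example.com", False),  # missing name
    ("Ada", "not-an-email",    False),  # malformed email
    ("  ",  "ada@example.com", False),  # whitespace-only name
]

for name, email, expected in CASES:
    assert submit_form(name, email) == expected
```

The next person doesn't need your context; they just add a row to the table.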

That seems extremely uncontroversial to me and I have a hard time understanding why someone wouldn’t be on board with that.


> But you can’t guarantee it.

The fundamental logic error in that justification for tests is the assumption that I need to guarantee the issue will never reoccur. I don't. A reasonable level of confidence is more than sufficient for most projects, most of the time.

As an example of an environment where automated tests are critical: I read a post by someone complaining that when they worked at Oracle the automated test suite ran through hundreds of thousands of individual tests and took hours despite using a huge farm of servers.

That guy was wrong! I would absolutely test something like a commercial RDBMS to death. Those tests are not optional. Similarly, if I was developing a file system such as ZFS, I would also test the heck out of it. Famously, Sun had a test lab where robots would physically pull drives out of servers!

But would I test a typical web form with automated tests? No. It's boring CRUD code. It's going to work, because I use parametrised queries through a strongly-typed ORM to a COTS database platform, using "boring" code paths that are well known to work. If the DB schema changes, the build will fail.

I'm more interested in testing if the form UX layout "looks pretty" and has a "nice layout to help the workflow." Ensuring those will also exercise the boring parts of the back end code anyway as I click through the form a bunch of times.

Once the code has been established as working, and strong typing is used end-to-end, why bother testing it over and over?


Personally, I don't have time to spend on solving the same problem twice. I want a guarantee that something isn't going to break again. Yeah, it probably won't break again in the exact same way, but if you're writing your test to cover one narrow failure mode, perhaps it's time to brush up your software QA skills.

You're making a set of assumptions when you write a piece of code, right? All the steps between when you actually submit a form and when the data is saved in your database are assumptions that you're implicitly relying on in order for your application to function. Individually, each of those components can work just fine, but no matter how "boring" they are, your application can still be broken.

For instance:

> a distinct sort was case-insensitive but should have been case-sensitive. It was dropping a few hundred items out of hundreds of millions.

You don't know that it's _going_ to work until you've _tested_ that it works. Right now, it sounds like your testing is almost exclusively manual. That's a valid approach, but it doesn't scale very far (neither in terms of application complexity nor team size and composition nor project age).

> Once the code has been established as working, and strong typing is used end-to-end, why bother testing it over and over?

You've established that it's working as expected _now_. What happens if somebody submits incomplete or malformed or outright malicious data? What happens if somebody opens two copies of your form, fills them both out, and then submits each of them in sequence? Which one should take precedence and does the notion of submitting this particular form more than once even make sense? What happens if your database isn't reachable but you have data that you need to save? What happens if another developer on your team updates the ORM and that includes a change to how data is sanitized on its way into the database that breaks some implicit assumption about how your data is going to be stored?

I hope you have some way to catch that stuff in code review. Syntactic rigor and being a bottleneck in the code review process are only going to get you so far.

Personally, I don't want to be figuring any of that stuff out at 3am when the pager is going off. I want to be confident that the team has solved _and documented_ all of the potential problems that they can think of (in the form of tests).


Lots of good reasons above. Another is that if code has solid tests, it makes code review a lot easier / faster, especially if somebody else is modifying your code.


I worked on a project with no tests. We had to refactor a specific piece of functionality: changing database entities, augmenting algorithms, and adding behaviour. Hundreds of possible cases, no way to test them manually. I refactored all the code to be "testable" and wrote hundreds of tests. I wouldn't have done it without tests. In the same project, a single algorithm had a dozen parameters and dozens of possible scenarios. An issue occurred in prod. Luckily the algorithm had over 60 tests; we added a red test, fixed it, ran all green, and were very confident of the release. The same confidence would not have been possible without tests.


+1. That's one of the first things I look for if I need to refactor something. I want tests to exist to verify refactoring doesn't change functionality.


It's so obviously self-evident that automated tests improve productivity, especially for developers new to a codebase.

A developer makes a non trivial change. How do they know nothing broke?

If there are no tests, they must spend an inordinate amount of time learning the entire system / product to gain any confidence they didn't break something, only to likely miss something that a senior team member will point out. The back/forth will take multiple developers time. At worst, it gets punted all the way to a QA team. Could be days.

Or, with tests, that happens far less often and the feedback loop can be more like 10 minutes.

Of course, tests don't solve onboarding, but they provide nice guard rails.


This assumes the tests are mostly valid. My experience in recent years has been that they won't be.


Of course I'm assuming useful tests, why wouldn't I? Note I don't claim tests completely solve all productivity or onboarding problems.

It's just odd to claim it's somehow ~unclear that automated tests are good for productivity. Forget the extremes (over-testing, too many unit tests vs. integration tests, potential rigidity): tests done right are productivity multipliers.

And that's not a "No True Scotsman" retort, because the original claim had no qualifiers and was suggesting that any version of automated tests is orthogonal to developer productivity. All you must do to refute that is think of any codebase you were able to make progress on faster because a test told you what was wrong faster than it'd take you to blame the source code and email a developer you may never have met.


> It's just odd to claim it's somehow ~unclear that automated tests are good for productivity.

No it's not, because it is unclear. Where's the actual evidence? How do you even define and measure "productivity" in software development?

My customers define productivity as "Delivered according to spec, on time, and within budget." That's the only measurement they care about. As hard as it is for programmers to understand, beautiful "clean" code and automated tests are generally not stakeholder priorities. Yes, bad code and no testing have a cost that may have to get paid down the road. I think stakeholders understand that software will fall short of perfection.

For a long time the standard practice was unit testing, integration testing, and (if you were lucky) rigorous QA. Plenty of usable software got written that way. There's nothing inherently wrong with TDD and automated testing, and when done right those techniques can reduce the need for more complex testing (integration and QA) and help the team understand what the code is supposed to do. That's not the only way to write good code that works, but it's one way.

The problems of defining, measuring, and improving programmer productivity have been known and discussed since the 1960s. We have a set of fairly vague best practices (also known since the 60s/70s) but there's no silver bullet. The problems with developing non-trivial software in team environments mainly come from communication and team dynamics, not from lack of style guides or automated tests. Every little bit can help, of course, but let's not kid ourselves that TDD and automated testing are magic fairy dust.


> Of course I'm assuming useful tests, why wouldn't I?

Because useful tests seem pretty hard to have in practice.

Especially when starting testing for a project without tests and a team without testing experience.

Early testing reduces productivity, and the proposed gains may never develop.


How does the developer know if a failing case is due to a bug or something that needs to be changed? Is the new developer being dropped in with no support and no peer review and asked to make a non-trivial change? If so, then tests are not so useful in a strongly typed environment, since interface changes will be caught at compile time.

It is not obviously self-evident to me.


The question that matters is not whether automated tests improve code quality and productivity.

The question that needs to be answered is "if I took the time I previously spent on automated testing, and instead spent it on something else like code review or design review, would that result in greater code quality and productivity than automated testing did?"

And phrased that way, you see it's not a dichotomy. It depends on what you would spend the time on instead of automated testing. It depends on whether you already had peer review in your process. It depends on what sort of automated tests you spend time on. It's very hard to say something general without those details.

----

Note that I'm not taking a stance against automated testing. I prefer working in a test-first style myself. I'm just saying it's not that simple.


I think you are agreeing with me. There's anecdotal evidence like you offer. It's not necessarily obvious to everyone that automated tests improve code quality or it would be a settled issue (it's not).

In my own career (40+ years) I have worked with automated testing and TDD, and without, and I can't say that TDD "obviously" improves code quality. I know other professional programmers who have that experience. Like everything else in software development it's probably situational and depends on the individuals and the team.


Generally speaking "empirical evidence" means "published papers", with a little bit of flex for preprints or trustworthy-but-not-peer-reviewed experimental findings.


I was using it to mean actual data rather than anecdotes.

The subject of programmer productivity is both much-studied and discussed and very hard to define, measure, and compare. The studies that do have some empirical evidence seem to show that team dynamics and individual programmer personalities have the most dramatic effects, compared to things like programming language, tests, style guides, etc. It's not that consistent styles and automated testing are bad ideas, it's that they don't clearly make a big dent in team productivity in cases where one or a few incompetent (or merely slow) programmers, or one jerk, can derail everyone's productivity.

This is why I emphasized demonstrating a commitment to the team and showing some contributions before criticizing the environment and trying to propose changes. If you want the project to grind to a halt, tell your new co-workers they are doing everything wrong. No automated test suite is going to fix that problem.


One thing that can help build credibility before making larger suggestions is working on obvious, uncontroversial pain points in any spare time you have.


> It takes a while to get into a code base and really understand it, and by that time some things that seemed terrible or confusing at first will make sense.

Classic Stockholm Syndrome.


That's one way to look at it. When you put a bunch of smart people together on a team they all tend to think they're right and they can get very passionate, even when they don't have any data or measurements to back up their opinions. A little humility and time spent adapting to the environment, team, and code base can go a long way.

Anecdotally my experience is that most programmers react to almost all code they didn't write as if it was crap. The more code they have to read and understand the more likely they are to complain about the quality of the code and the overall design of the system. Imagine someone looking at a photo of your house and telling you a hundred things wrong with it, without stepping foot inside or spending some time living there.


Some call it stockholm syndrome, we call it the Spotify model!


Probably should mention I'm an ex-Spotifier.


Aside from what Flankk suggested about being one with the tribe: while you're waiting for your tribal acceptance card, measure the problem in silence.

The change defect rate (how many percent of your commits are made to fix a problem introduced in an earlier commit) is very easy to sample manually, and gives you an idea of the scale of the problem.

Then you can go to the team and say, "Look, 40 % of the things we do are fixing up problems we created in the first place. With better processes in place, we could free up a lot of bandwidth for things that matter."

Or maybe you find out that you have a change defect rate of only 12 %, and the things you think are important, for some reason, don't matter in this team.
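Sampling the change defect rate by hand really can be this cheap: classify a random batch of recent commits as "new work" or "fixing earlier work". A rough heuristic sketch (the keyword list is my assumption; reading the actual diffs is more accurate than matching messages):

```python
import random
import re

# Crude proxy: commit subjects that suggest repairing earlier commits.
FIX_PATTERN = re.compile(r"\b(fix|revert|hotfix|regression)\b", re.IGNORECASE)

def change_defect_rate(commit_messages, sample_size=50, seed=0):
    """Estimate the fraction of commits that repair earlier commits."""
    random.seed(seed)
    sample = random.sample(commit_messages, min(sample_size, len(commit_messages)))
    defects = sum(1 for msg in sample if FIX_PATTERN.search(msg))
    return defects / len(sample)

# e.g. feed it the output of `git log --format=%s`, one subject per entry
messages = ["add login page", "fix login redirect", "revert cache change", "update docs"]
print(f"{change_defect_rate(messages):.0%}")
```

For a real sample you'd spot-check each flagged commit by hand, but even the heuristic gives you a number to bring to the conversation.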

----

Measurements that complement the change defect rate for a fuller picture, according to Forsgren et al.:

- Deployment frequency (literally, what's the average time between deployments to production)

- Time to fix (when a problem is first reported, how long does it take on average until the working fix is deployed to production?)

- Lead time (once a developer considers themselves "done" with the code, how long passes until that code is deployed to production?)

The time to fix and change defect rate measure quality, so these are the ones you probably would focus on. The other two measure speed.
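If you have timestamps (deploy log entries, ticket open/close times), the speed metrics fall out of simple date arithmetic. A sketch assuming ISO-formatted timestamps, with made-up sample data:

```python
from datetime import datetime, timedelta

def avg_gap(timestamps):
    """Average time between consecutive events, e.g. deployments."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps, timedelta()) / len(gaps)

# Deployment frequency: average gap between production deploys.
deploys = ["2021-12-01T10:00", "2021-12-03T10:00", "2021-12-07T10:00"]
print(avg_gap(deploys))  # one deploy every 3 days in this sample
```

The same function works for time-to-fix (report time vs. fix-deployed time) if you feed it paired timestamps instead.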


But no one does anything like what you mentioned.

OP already has his mind set: *This must lead to a lot of problems which I'll only face after ramping up, but even onboarding is painful if you don't have e.g. clear automated testing set up.*

He just KNOWS that it MUST lead to a lot of problems.

I am fighting with this mindset at my work and I am sick and tired because people who KNOW - don't show any measurements.

Even if I do measurements showing that in reality those unit tests of CRUD code are a waste of time, I'll be shot down with "that is what professionals do", even though we have basically zero defects caused by backend code and all the issues are on the frontend because of CSS/browser quirks.


One of my favourite sayings:

> It ain't what you don't know that gets you into trouble; it's what you know for sure that just ain't so.


You want to become one with the tribe, wear the same clothing, and dance the same dance. Right now you are the outsider and outsiders are always a threat. If you come in there with your outsider ways you will immediately face resistance and risk not being accepted into the tribe. Once you have shown the other primates you are a trusted member of the group, then you can suggest that they cook the meat before eating it.


One more thing: once you become an outsider, it’s nearly impossible to become part of the tribe.


That cannot possibly be true since everyone starts out as an outsider and we know tribes form.


You two are using different definitions of outsider. Parent is saying “one who has been labeled as other” and you are saying “a brand new entrant”.


While almost every process can be improved, you've either made a big mistake or want to make one.

If they're suffering the dire consequences that you predict, the mistake that you've made is joining the team. Newcomers can't fix systemic problems unless the organization actually wants said problems fixed AND said newcomers are tasked with fixing said problems. Even then, odds are against success.

If they are shipping reasonable product, then they're not suffering said dire consequences. (You did check whether they ship before you joined, right?)

In that case, you're fairly ignorant about some key aspects of the company and how it works.

Such ignorance is a bad basis for suggesting change, no matter how good said change might be.


Formal style guidelines and mandatory tests for everything are largely wastes of time and often detrimental to software development.

To elaborate: formal style guides prevent simple things like formatting code in a manner that is easier to read in some cases ("why is this newline here? Why did you column-align this block of expressions?"), and more insidiously force you to write worse code because they constrict you. Style guidelines should be just that, guidelines, and engineers should have the decency not to request stylistic changes during code reviews unless they spot code that is obviously sloppy.

As for tests: insisting on having tests for everything impedes development in the present as well as in the future. More crucially, the compulsive desire to have everything testable makes you write worse code, as you need to abstract away parts that would otherwise be simple function calls, or use patterns that obfuscate the code. Some will claim that this results in better-designed code, but realistically speaking it just results in more complex code, which is almost always worse. You don't need to use abstractions everywhere. Developers should focus on making shit work well and not over-engineering code because they want to feel smart, all the while making rationalizations about testable code and whatnot. I digress though, so back to my original point regarding tests: in order for tests to justify their existence they have to test something significant and/or test a module that provides a service (read: hidden behind a well-defined API, an API which you test to ensure it doesn't break its "contract").


Wait until a bug comes up, fix it, write a test for it, then describe the problem, fix, and test on the next stand-up. Do that a few times and eventually others will catch on without correction or conflict. In short, just raise the bar.


When the team has a demo day, brown bag lunch, or something where people share new ideas, that’s a great time to show off testing.

Are tests part of your CI/CD pipeline now? You can always add them to your own code. And slowly increase the number of tests that way.

You might have a harder time adding a style guide if no one cares. You could try to add the style guide as part of your build pipeline. You might need to start by fixing all the low hanging fruit errors yourself.

Whatever you do, don’t do what I did and start complaining about stuff at every step while in a new job. It’s a big ship. Turn it slowly, or be prepared to live in the worst cabin and/or be thrown overboard.

Bring it up in your 1 on 1 with your manager, too. Just ask their thoughts. Maybe they also feel the same way, and have wanted to improve things.

Is there anything extraordinary about this job? You could always just find another job where testing & code quality are taken more seriously if it’s really important to you. If something is special, be prepared to live with the things you don’t like for a while.


See Chesterton's fence: "The principle that reforms should not be made until the reasoning behind the existing state of affairs is understood." [https://en.wiktionary.org/wiki/Chesterton%27s_fence]

There might not be a particular reason for why things are the way they are. It might have just grown to be this way.

I haven't read it but I've heard good things about "Working effectively with legacy code" by Michael Feathers.


I'm a huge fan of citing Chesterton's Fence, but it's much more applicable to a presence than to an absence. When something is missing the most common reason is sloth, and that doesn't deserve the same detailed consideration as when something's there.


Have you considered that for this business or function within the business, maybe it doesn't matter?

> This must lead to a lot of problems which I'll only face after ramping up, but even onboarding is painful if you don't have e.g. clear automated testing set up.

Maybe you'll learn why this is not the case after several months of working in your new role.


I fought exactly this battle for two years on my second team at Facebook. Never got very far TBH. Part of the reason is the "become a member of the tribe first" thing that others have mentioned, but there's a kicker: any variance from the existing attitude about tests will itself tend to make you an Outsider. Here are some suggestions for things you can do to gently and slowly nudge things toward a better place.

  * Make sure your own changes are properly tested.

  * Make sure existing tests are reliable and quick to run, since these are common excuses for not doing more.

  * Develop tooling to make writing tests easier, or to write new kinds of tests. Key word is "new" because that taps into the neophilia that's endemic at these places.

  * Use new techniques to reliably reproduce bugs that are still fresh in people's minds, and to show how this can find related bugs as well.
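On the "reliable and quick to run" point, one low-friction pattern is to quarantine slow tests behind a marker so the default run stays fast. A minimal sketch in pytest, assuming a Python codebase (all names here are illustrative, not from the thread):

```python
# test_example.py -- illustrative names only
import time

import pytest


def test_fast_path():
    # Runs on every invocation; keeps the default suite quick.
    assert 1 + 1 == 2


@pytest.mark.slow
def test_expensive_integration():
    # Opt-in only: run with `pytest -m slow`.
    time.sleep(0.01)  # stand-in for genuinely slow work
    assert True
```

Register the marker once (e.g. `markers = slow: slow tests` under `[pytest]` in pytest.ini) and run `pytest -m "not slow"` by default, so nobody can use a slow suite as an excuse.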
Don't spend too much time on any of these things, though. That will reduce your recognized "impact" and slow the process of gaining trust from the tech-lead "guardians of culture" who have usually decided (based on a mere 5-10 years' experience at only one company) that their current approach is perfect. Better to maximize that impact and then start promoting change. So, to a large extent, "drink the Kool Aid and shut up" really is the best approach available.


Since you mention a big tech company, what you'll probably find is that there are people around you who care about these things, and they're probably doing what they can - and could use support, not necessarily help with the work, even just moral support. (I like to pretend I'm one of those people in the quality space in my area of my big tech company...)

As others mention, be careful trying to change the world (and especially saying bad things about what you see) before you grow your credibility. But ask a few questions in team meetings (or, at something like FB, in workplace groups) about what test or staging or related infrastructure is available as part of ramping up, and that might get people that care to notice you’re a potential ally.

There's even the (admittedly very) small chance that the team wants to improve here and just doesn't know how, or didn't have the resources to do better before - either way, you'll learn more without alienating anyone.

Big tech companies also often offer mobility and culture variety, so keep your eye out for teams that align with what you care about. Learn what they do and how they got started at least - or possibly move there.

(If you’re at FB, feel free to reach out to me - same account name.)


Have you asked why there's no, or very little, testing happening? Make sure to ask a few folks at different seniority levels, including whoever is the Big Kahuna.

The answers should be pretty revealing about the engineering culture at the company. You can then decide what it is that you want to do about it with better information.

You never know, maybe the Big Kahuna tells you you have an open mandate to make improvements in the area... stranger things have happened.


In my experience, if you mention to people there's not enough testing, most of them will agree. They've probably been fighting that fight for a long time.

Cut your team some slack. Before you go telling them they're doing it wrong, do better yourself. Make sure your code is well tested. When doing code reviews, request more tests strategically. (Meaning, don't complain that there are no tests. Point out specific tricky or dangerous areas where an extra test could make a difference). If they don't know how, help them.

ASK about current practices ("Hey, is there a style guide I can use to help me get up to speed?"). DON'T tell them you know better. If you make them resent you from the beginning, you'll never make any progress.


Well, how about just doing your work first? If you like writing tests, do it. If you're allowed to commit them, OK. If it is good, others will follow.

But don't talk too much about it, and beware of forcing others to do their work the way you do yours. If there is something like code review meetings, you can show it to others there. If there is no SonarQube or similar infrastructure, establish it yourself, e.g. in Docker on your own system. Have a plan in mind for how to transfer it to the company infrastructure, but don't force it on anyone - only if someone asks you.

You have an opportunity here to integrate YOUR personal way of testing and spreading the knowledge. Have faith that good things will prevail. Give it a year...


I was working for a world-leading semiconductor company, in one of the software groups, and I was new to the team. I was surprised that the team did not test the software they were releasing. When I asked if I could set up automated build and test servers, I was flatly told "we don't have time". The next day, 5000 (I am not kidding you) static analysis defects were dumped into my lap with a deadline of a few days.

So I asked: how can one engineer cover 5000 static analysis defects in a few days? The reply baffled me. I was told most of the defects had already been "resolved"; I just needed to mark them as "resolved" again. So I asked: if these are already resolved, why do they appear in the latest report, and why do I have to repeat this? Even marking a defect resolved needs investigation. The reply was "we have always been doing it this way"!

All efforts to automate builds, tests and static analysis were shot down. I left the company in less than two years. The other team members got high raises and praise for their "efforts", and the company ultimately failed to enter the market it was desperately trying to get into.

That is a big company in a nutshell: management is busy spinning employees, so the good employees immediately call out the BS and leave. What is left is okayish-to-not-so-okay employees. Ultimately the company suffers, but who cares, when bonuses are handed out without anyone understanding what is really happening in the company.


It takes a little time for people to see why testing is important. Since you're new to the organization, tread lightly and keep an eye out for places where testing can really make everyone's life better.

I've found that there are a few key junctures at which people take to testing. Here are some:

1) When dead-simple unit-tests catch old bugs. If cos(0) doesn't return 1, there's a problem.

2) When unit tests, for the first time, catch an important new bug in code they're writing.

3) When there's enough testing coverage that you can rip the guts out of a function, install new guts, run the tests, and be certain that everything is fine.

4) When good coverage newly applied to old code finds really subtle very old bugs in code everyone trusts.

Use the easiest-to-use unit-testing library you can. If there's no test-coverage today, some testing is worlds better than no testing. Once people can see for themselves how helpful it can be, the organization can get fancier if it needs to do so. I absolutely love GNU Octave's testing framework [1], as the syntax is as simple as

  function r = foo(x)
     r = x+2;
  end

  %!assert (foo(3) == 5)
When it's easy, people take to it like a duck to water. I perpetually emphasize to students the importance of writing the most boring, simple test first.

[1] https://wiki.octave.org/Tests
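The same starter test is barely more ceremony in Python with pytest (`foo` here is a hypothetical example mirroring the Octave snippet above):

```python
# test_foo.py -- hypothetical example mirroring the Octave snippet
def foo(x):
    return x + 2


def test_foo_adds_two():
    # The most boring, simple test first.
    assert foo(3) == 5
```

Running `pytest test_foo.py` discovers and executes it with zero configuration, which is exactly the low barrier that gets a no-test team to write test number one.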


> some testing is worlds better than no testing

While that is probably true most of the time, like many things in life, "it depends".

When you are developing some not-so-well-defined functions and everything is on quicksand, tests may just slow your progress while not really providing useful feedback (i.e., when you always modify a function and its test in tandem). Ideally you should clarify your requirements first, but sometimes the process of producing the code is the refining step.


You’re not going to convince people to change their views with words alone as a new team member.

Just do what you need to do in order to be productive. If that means writing tests, do it. It doesn't matter whether there's CI in place; maybe only you run the tests, occasionally, on your local machine. Just do it. Testing is part of software engineering, and you don't have to ask permission to do your job.


This is tough. It all begins with the person leading the team. Even as CEO, I had a tough time convincing my CTO of this. There are only two ways I see this could change: either the leadership changes, or something drastic happens that forces them to understand the importance of this. Mostly the former is what is going to pan out, IMO. People rarely change.


Unless you've been brought in as leadership to turn things around, better to run far and fast. Working without sufficient tests is, from my experience, the easiest way to get 1 month of work done in 6. This will also affect your personal skills development and your outlook on how effective teams work. Get out before it infects you too.


I enforced it via code review, but this worked because I led by example, seeding the entire project as the architect. However, my mandate for 100% code coverage eroded over time because people are either lazy or just not smart enough. Here is the thing: that's OK, because when the shit hits the fan it will be their responsibility to stand up and explain why their shit code caused an issue.

Practice what you preach and ship excellence and if you do it well with good management then you'll be able to shape the team. Otherwise, good luck because we have yet to iron out what good quality even means.


Join a team/company that values tests and code quality. That should be part of how you interview the company. Trying to single-handedly change the culture of a large company where you'll have few allies (because by this point, they'll have all self-selected away) is a fool's errand.


One general strategy is to make a suggestion, allow it to be shot down, then start gathering data about why fixing the thing would help.

I operate at a very high level in an organization, but in general I accept disagreement, keep track of what the outcome was, and if I was right, decide whether I want to revisit the issue with the new data. The best outcome is that I was wrong and nothing needs to change; second best, that I was right but the impact was small, so the effort was unjustified. If it turns out I was seriously right, then we can start setting things in motion to fix the issue, and I get credit for voicing concern early. I hope to be wrong!


Thanks everyone, great advice.

For all those saying "get a new job": not that simple. Immigration. If things are real shitty (which they aren't!), I'll get a new job anyway and move back home. But I'd rather not.


Just quit and move on. If they're that petty about what is industry standard, then they're going to be petty about other things as well. And if the other developers are okay with not writing unit tests, then it's a shit job.

But if you want to try: measure downtime or loss of productivity, then convert it to dollars assuming an average industry rate. Put the dollars in parentheses, ($10,000), to denote the loss.

Write an email, forward it up the chain. Then when the next outage happens, gently point out that email, and be prepared to offer solutions.
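As a sketch of that conversion (every number here is a made-up assumption, not data from the thread):

```python
# Back-of-envelope cost of one outage traced to an untested change.
engineers = 5        # people pulled into firefighting (assumed)
hours_each = 8       # hours lost per person (assumed)
loaded_rate = 125    # fully loaded hourly cost in dollars (assumed)

cost = engineers * hours_each * loaded_rate
print(f"(${cost:,})")  # prints ($5,000)
```

Crude as it is, a parenthesized dollar figure in an email travels up the chain far better than "we should write more tests".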


Do it for yourself first. Make a style guide and follow it, set up an automated formatter that enforces it, and apply it to your own code.

Write tests for things that you work on.

If it's valuable, and you are a productive team member, you will eventually be able to convince others to adopt your practices.

The best way to convince people to test is to show it's valuable by finding and preventing bugs.


If I were your team member, my response would be "STFU and finish your task(s)".


Related question: how do you convince small teams (~15 engineers) that QA, tests and code quality matter?


Find a new job. Don't try to fix cultures, you can't.


This is not true. People who fix cultures are called "leaders" and the good ones do it every day, little by little. Chisel at the marble slab and eventually you'll sculpt a masterpiece.


Depends on the state of the culture. It has to be at least "fixable" with somewhat open-minded people at the top. There are 10x more people who try to fix cultures but end up burning out and being a scapegoat for everything.


Good luck being a leader when you are a new joiner hired as an IC.



