Is that why we do code reviews, because we don't trust each other? I find this attitude problematic. We do code reviews because humans make mistakes. Requirements can be misinterpreted. Different work in progress can be in conflict with each other. Reviews are a good way to learn from each other and keep abreast of what work is occurring outside your own bubble. Not doing code reviews because of "trust" kind of misses the point.
I find even just cursory and brief code reviews are beneficial and often pick up on issues. It also creates a more collaborative team and gives me a reason to communicate with devs I might not otherwise communicate with, even if that is just a brief comment on a PR.
Rejected.
ljm
If something hit production and caused a major fuck-up because there was no peer review process, then at every place I've worked the first action item in the post-mortem would be "we should introduce peer review." Otherwise someone would ask how we could ensure it wouldn't happen again, and no one would be able to say "because we trust them" and leave it at that. It would sound more like "I trust you will never make that mistake again," which sounds more like a threat. You also can't go to your clients and say "sorry, we let our team deploy stuff without reviewing because we trust them... so please continue to deal with our avoidable problems."
Thing is, I've led my fair share of teams and I trust my team-mates and colleagues implicitly. It makes for a strong team. But we still do code review (peer review) because we want to build software that works well and support each other. We're not in the job to simply deploy code to prod as quickly as possible.
I'd also add that I think any software engineer should try and have the experience of working in a highly regulated field, like healthcare. It's hard to get an appreciation for why these things exist until you realise you're held to a much higher level of accountability because your oopsie moments can have much larger consequences. For me, it's been hard to go back to my old, cavalier attitude after that.
alkonaut
Exactly. I want my code peer reviewed partly because it makes it clear and formal that while we succeed as a team we also fail as a team. It’s much easier to talk about failures when a failure doesn’t have a single person attached to it.
mpweiher
> I want my code peer reviewed
TFA:
> Engineers [..] request reviews when they think it's necessary.
Problemo solved.
ahtihn
If code reviews aren't an expected part of the development process and there's pressure on delivering, it won't happen.
Ntrails
I don't believe that to be true. I believe the expected outcome is that people will still want code reviews on anything vaguely complex as insurance against fuckups. "Shit, WE missed something" is a nicer place to be than "I was sure I was perfect and I'm actually an idiot".
It does however mean you don't sit around waiting for someone to approve a three line delete PR that you know is safe. The "Rubber Stamp" review is a thing and it does waste time.
btschaegg
Since I am currently working in an environment that resembles GP's premise, I have to say that I find this take to be way too optimistic.
I'd be exactly in the boat you describe (and actually: I've yet to see a situation where someone reviewing my code did not lead to improvements). But: If I want my code reviewed, I have to actually fight for it. Since everybody's workload is too high, even the willing often simply don't have the time.
From higher up, at our place, there's no one that actively opposes code review as a practice, but they don't seem to get that it requires investment (i.e. time), either. That means that if management refuses to change their ways of planning or explicitly enshrine reviews as priority tasks, it simply won't happen except for maybe the gnarliest cases.
And that's from a perspective of the change's author being willing and proactive. I work with individuals that produce very questionable code at times. And I'm not talking style here: Code in 2021 that still is prone to SQL injection at every turn. Methods that are hundreds of lines long with a gigantic cyclomatic complexity that no one (including the author) will ever really understand. Method names like "process" and "process2". And so on.
Needless to say, those programmers will never push for their changes to be reviewed. The rest of the team simply discovers them after they blow up (which they do regularly) and we're debugging.
Now, these people are luckily a tiny minority (which at least makes this somewhat bearable), but over and over again I find myself wishing for someone with more authority to show that they're interested in the matter. Which a policy on code review would do -- if only as a signal. Yes, we want to check each other's code. Yes, we want collaboration. Yes, that's more important than some arbitrary deadline for minor feature X. Yes, we expect our programmers to be professionals.
Another symptom of such an environment is the fact that everyone is the king of their particular hill. No one ever familiarizes themselves with another person's code unless they really have to. By now, I've resorted to insisting that every change I'll ever make to a critical component be done through pair programming, and so far, at least that seemed to stick. But again -- that only happens if both parties involved actively fight for it instead of everybody understanding that this is just an expected part of work.
Ntrails
I appreciate the context and the different point of view! I live in a bubble of how I perceive my coworkers (and indeed myself), alongside the culture of my employer, where "getting it right is more important than getting it out".
It was interesting to hear your experiences, and maybe I'll update my expectations of workplaces somewhat.
btschaegg
Thanks for your response :)
> [...] alongside the culture of my employer [...]
I think you've hit the nail on the head here. As many others have said in different HN posts, culture is hard to change. And I would also understand the argument that one can also enforce culture changes with policies; my hope up to now simply has always been that engineers could "overrule" some artificial time limits in favor of more collaboration by referencing such a policy.
I guess if you're starting from a context where code reviewing practices "just work", introducing explicit policies also can't improve much, anyway. On that note: If you happen to know a good heuristic to detect such cultures during interviews or the like, I would be extremely interested :)
jolux
This is a fantastic comment, thank you for writing it. I especially appreciate the explanation of how management decisions produce engineering practices.
jksmith
This is where properly done scrum shines, provided the nature of the work lends itself to greenfield work (cycle time of one iteration). Good scrum is a dysfunction-surfacing machine, so if you have a good system of risk management in place, the team can address it in a constructive way. For all the scrum haters out there: come up with some system that codifies certainty, stability, and quality before your leadership starts pointing fingers.
healsjnr1
> If code reviews aren't an expected part of the development process and there's pressure on delivering, it won't happen.
This hits at the heart of why I strongly agree with this post and think PRs and code reviews are ultimately destructive.
Reviews are going to get dropped _regardless_ of whether they are in the process or not. Pressure makes code reviews and quality worse, not better.
More than that, the need to "appear" to review is going to slow things down and can cause an exponential backlog. I saw this happen about a month ago: a 3-day release cycle got bogged down and then took _3 weeks_ to get a single release out.
Process goes out the window when pressure hits. You are then operating in a foreign environment at the most critical point, and your safety net is useless.
Your safety net when things need to go fast should be a highly automated, fast build and deploy pipeline, coupled with great test coverage, from unit tests to end-to-end.
The only exception might be if you work in a highly regulated field where you are legally required to have a very low Mean Time To Failure. In that case, things move slowly unless you are very well resourced.
Otherwise (i.e. for the majority of products), focus that effort on having a very low Mean Time To Resolve. If you absolutely need some form of quality assurance, pair. Do continuous PRs and knowledge sharing rather than out-of-context, after-the-fact reviews.
gunapologist99
This sounds similar to the peer review process in publishing. Typically, getting other people to actually agree to voluntarily review your work is indeed a significant hurdle if the requested paper/book/chapter/white paper/code review is lengthy.
There is an easy way to resolve this:
Request peer review for SMALLER chunks of code.
Git makes this exceptionally easy -- it's certainly far easier than reviewing a large document with Word track changes!
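A sketch of that workflow (branch names and commit messages are invented for illustration):

    # Split an oversized working-tree change into two small, reviewable PRs.
    git checkout -b extract-query-builder main
    git add -p                                  # stage only the hunks for PR #1
    git commit -m "Extract query builder into its own module"
    git push -u origin extract-query-builder    # open PR #1: small and focused

    git checkout -b add-retry-logic             # stack the next slice on top
    git add -p                                  # stage the remaining hunks
    git commit -m "Retry transient DB errors"
    git push -u origin add-retry-logic          # open PR #2 once #1 is approved

Each PR then stays small enough that a reviewer can respond in minutes rather than scheduling an afternoon.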
ahtihn
The problem is that small changes are barely worth reviewing. Also a series of small changes can obfuscate the larger context of all those changes.
Each small change on its own might make sense and be ok to a reviewer, but if you step back and look at the series of changes together there might be some issues that could have been noticed.
mpweiher
Fix that.
alkonaut
Sure. But if you read my previous message, this relies on people being comfortable asking. I am, but not all of my colleagues are. A formal process would give them the benefit that I can already get when I think it's necessary.
mpweiher
> people being comfortable asking. I am, but not all of my colleagues are.
Then that's what you need to fix.
alkonaut
Yes. And formal processes are 10x easier than magically “adding trust” or “confidence” or “seniority”.
pbalau
Good engineers will ask for code review. Most engineers won't.
mpweiher
> Good engineers will ask for code review
...when they need it. Actually, really good engineers will ask to pair when they need it. Review is a distant second.
btschaegg
> Review is a distant second.
I agree with you there, but I'm also sure that one reason for this is that many people still see reviews as the task that only happens after everything else.
I'm convinced there's a lot of useful middle ground to be had if reviews aren't just "here's 500 LOC, please review", but maybe more… staged.
Say: Rummage around in the code a bit for yourself (ask questions if you have to), come up with a strategy for solving your problem, and submit that for review. Not in code, but maybe using markdown, pseudocode, small diagrams or the like. In person (i.e. via screenshare nowadays). And if the other party doesn't find obvious holes in your strategy, then go implement it.
I remember an interview with some Linux kernel maintainer that mentioned the same effect -- people submitting a gigantic patch for something that will simply never be merged. And all that would have been necessary to avoid this wasted work was an e-mail asking if the maintainers are at all interested in the change.
rdw
This post sounds like wisdom, but after looking at it hard, I think the point is no more than "peer review processes are popular". Not to say that code review is bad in any way (I do it at my current role), but this argument for them is not very good.
One downside of code review is reduced velocity. There are teams out there who use code review so effectively that they never ship big fuck-ups to production. They never have to tell clients that they have bad practices. Instead, they struggle to get clients because they pay much more per line of code shipped and therefore don't deliver as good a value as their competitors.
This is where the argument needs to be: whether and how code reviews can be a net positive.
ljm
Ultimately I think it's down to where you want to pay for that decision (for want of a better term), or where to put the bottleneck. I think that putting the bottleneck in production (which would block all work unrelated to a fix if a catastrophic fuckup took place) might as well be a hedge against the possibility of things fucking up so badly in a given timeframe. You could also make code review a bottleneck, but I haven't seen that work well in practice compared to putting it at QA or as part of a release process (unless you combine them in a continuous delivery workflow).
I can see how that equation balances in favour of no code review when it's in the context of a startup that is probably still figuring out how to make money, especially with a subscription based app launcher. As far as prototyping and early stage iterations go, you're probably just seeing what works and reviewing the code isn't so important at that point. There would still be one or two things you'd ask someone to check, e.g. security related stuff.
I think you make a good point though: what makes an effective code review process? I don't think there's a single answer to that because it depends on the circumstances.
I would at least say that it goes better if it's treated as a priority and issues are raised and dealt with promptly, rather than leaving it as a chore you eventually get around to. And you would have to see value in it beyond it being a sanity check, as others in this thread have described (knowledge transfer, for example).
baron816
I'd argue that social proof often makes for a good argument. Perhaps not the best argument, but still a good one. "Best practices" still carry weight. Of course you can make an argument that the thing everyone does is wrong, and win. That's happened millions of times in this industry. But if the argument is nearly strictly between "this is what everyone does" vs "this is what I do", the former is going to win.
jolux
Causing a major fuck-up in production is probably also a sign that you need better release validation and deployment practices.
jaredsohn
On many teams, developers merge their branches directly to production, so the pull request review is where that checking happens.
jolux
Sure, I mostly meant automated processes. They won't always save you from yourself but doing blue-green, having an extensive test suite, etc, are all things that will help reduce the risk of a bad deployment.
lewisl9029
Yeah exactly, code review isn't a good place to catch actual bugs. Humans are terrible at catching bugs consistently... Humans probably introduced those bugs in the first place. It's also an exceedingly poor use of human time, which costs a heck of a lot more than machine time.
We really should be relying on automated systems to catch bugs for the most part, and leave code review for what it's good for: making sure a different human can understand and maintain the code.
In fact, if we manage to catch a bug in a code review that didn't also trigger our automated systems to alert us, that should probably prompt us to look into how we can expand the coverage of that automated system, similar to what we'd do in a postmortem if the bug wasn't caught in code review and reached production, because we can't rely on that happening the next time.
jolux
> code review isn't a good place to catch actual bugs.
Disagree, but it's also a good place to be ensuring that the system is evolving in a way that is reasonable and consistent — or that it is unreasonable and inconsistent for good reasons.
munchbunny
How do you systematically (and proactively) ensure your testing is thorough? With testing you often run into unknown unknown cases.
I’ve always code reviewed/gotten code reviewed on both the implementation and the tests, with completeness as the main quality you’re reviewing test code for. If that’s how we do it, we’re right back at code reviews.
lewisl9029
That's definitely an unsolved (unsolvable?) problem. The best we can really do is start with a common sense set of initial tests based on the user observable behaviors we want from the system and then add more tests organically as we learn more about the system's edge cases through incidents, bugs and regressions.
I'm not saying code review can't find any bugs (in practice bugs are found in code review all the time), I'm just saying that if we do find a bug through code review, it means we got lucky, and we shouldn't count on getting lucky again, so better add a test/linter rule for the next time when we might not get so lucky. That's just one more way to grow your automated systems organically.
brabel
If there's one thing I've learned over many years of writing lots and lots of automated tests for everything, it's that no matter how far you go with your tests, barring formal proofs or equivalent, the tests cannot save you from, at some point, breaking production badly. One small, tiny thing slips through your tests and boom - huge fuck-up (even small things can easily cause your whole new feature to fail miserably on the big launch).
jaredsohn
> to fail miserably on the big launch
What big launch? :) It's useful to feature flag new functionality so that you can release gradually and spot the bugs - often via error monitoring and user feedback - before they affect everyone. Yes, I realize it can happen, but there are techniques to minimize the chance of it happening.
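A minimal sketch of the percentage-rollout idea - the flag name, helper, and threshold here are invented for illustration, not any particular product's API:

    import hashlib

    def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
        """Deterministically bucket a user into a percentage rollout.

        Hashing keeps each user in the same bucket across requests, so the
        rollout can be widened from 1% to 100% without users flip-flopping.
        """
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < rollout_percent

    # Ship dark, watch error monitoring and user feedback, then widen.
    if flag_enabled("new-checkout-flow", "user-42", rollout_percent=5):
        print("new code path")   # hypothetical new functionality
    else:
        print("old code path")   # existing behavior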
watwut
Ok, but many companies don't use such a ridiculous process.
jaredsohn
If you want to learn more about this, read up on 'continuous deployment'. I remember when I was first exposed to it, it seemed dangerous/scary, but with the right culture changes it can work really well; lots of companies do it.
kcb
It's not as ridiculous as it sounds as long as the testing and verification happens as part of the PR.
saurik
Sure, but then you are just begging the question: if you have a better process for release verification, you might not need any per-commit code review at all.
dtech
iirc Google does it, it's not ridiculous at all. You need a lot of automated tests and canary deployment to pull it off though.
joshuamorton
Not really. Googlers merge to HEAD, but systems aren't running directly from HEAD.
sharken
It's the same in the financial sector, you can't just push code without a review.
However, the review process is far from perfect and can create a false sense of security.
To review a piece of code, that code should ideally be small in scope. For larger pieces it's rather common that the reviewer doesn't have enough time to do a good job.
If there were better incentives for doing a review, then it would greatly improve the situation.
But I haven't seen that yet.
XorNot
I've had a lot of trouble getting people to do proper reviews. A minimum to me is that you actually compile and execute the code in some way to check it's sane. Better would be the reviewer actually adding to the test suite for the code, to prove their expectations of how it works.
In almost all cases it's very hard to get people to look outside the web browser for the diffs. Diffs show you something, but never the whole context.
The other problem I've found is stamping down on style arguments: they have no place in code reviews. A lot of the time you get a round of "please change this variable to be named something else" - which might be valid, but it raises the question of why the reviewer didn't just do it, and more often it leaves a weird authority gap: I have no power to simply reject such a request because I think it's invalid, nor does the business have any way to resolve the issue. In fact, if there's a dispute at all, very rarely is there any process for bringing in a third party to mediate or break the stalemate (which is hilarious in some ways, since everyone by now should realize 2-party systems can't achieve consensus).
Code reviews are very, very cargo-culty and I've not seen them ever include a thorough design: people give up on the very near edge cases, and so the whole system falls apart because it's just a hurdle to get through and not a collaborative or constructive process.
kqr
Why should it be on the reviewer to compile and execute the code, rather than the author?
Or are you saying that one can actually find more bugs when two people compile and execute the same code rather than just one?
When I review code, I want the author to tell me how they have verified that the code has the desired effect -- but if they do that, I'm going to trust that they did the actual verification and I'm not going to simply retrace their steps. That seems like a waste of time to me.
DoctorDabadedoo
Not OP here. Code reviews, to me, are opportunities to:
1) Catch edge use case / business requirements misunderstandings
2) Mentor new hires/junior engineers on the ways things should be done (e.g. internal culture about some architecture/design).
BUT, I absolutely despise code reviews that nitpick on undocumented or 'because I prefer it that way' stuff:
1) Task tickets that only have a vague description force you to play detective about the requirements - and then a lot of new requirements surface during code review.
2) Requirements are met, but there are internal/personal technical directives that are written nowhere and not enforced with automated checks, yet may dramatically change how a feature/patch is implemented (I can only think of that guy who worked at a company in which only LEFT JOIN SQL statements were allowed).
Fuck that noise.
ahtihn
Compiling and running the code is handled by a build server and automated tests. That's not the point of code reviews.
The main purpose of code reviews is checking the general structure of the code and tests. Is the code well designed? Are error cases considered and tested?
Another important purpose of code reviews is knowledge sharing.
hsn915
This strikes me as a weird excuse for code reviews.
To spot quality problems before going to production... what you need is testing.
Not "unit testing" - which is just another pointless process that does not actually provide the purported benefits.
You need a separate QA team that handles testing - manual and automated.
kevingadd
If you don't catch major errors until the QA team has time to do a full comprehensive test of the product then those errors are probably sitting in the codebase for days or weeks, and additional changes are being piled on top of them which will make it harder to identify which commit was the problem. Do a code review and you'll catch a bunch of stuff before it even hits QA.
kqr
Correct. Any time you need a completely separate team to handle any aspect of the lifecycle of the product you're introducing delays and miscommunication. Besides, a separate team to do a part of the job scales badly: it needs to grow at least linearly with the number of developers, sometimes superlinearly.
You should absolutely have a small QA team, but their job shouldn't be doing the QA -- it should be helping others do their own QA.
pc86
Major errors are easy to spot and will likely take QA minutes to find. They're also probably not going to do a full test of the entire system with every change unless you've got a very small product.
One thing I've seen that I like a lot is individual changes creating individual front ends. You test the one thing in that change, and it's done. This tends to cause a lot of problems with CORS, but then what doesn't?
Agreed with you though that good code reviews (whatever that means) obviates most of this.
kqr
The actual job is always fast. What takes time is for the job to sit in the queue outside the QA department. The figurative papers moving between desks is always what kills your efficiency, not people doing work slowly.
ljm
I don't see that as mutually exclusive. A QA team can be more effective if you're taking steps beforehand to limit the scope of their work, which means your engineers need to start getting more hands on with that kind of work (code reviews, automated testing, whatever...)
Your quality problems might well come from poor or non-existent architectural decisions and tech-debt, and QA won't pick up on that stuff unless it manifests as a faulty requirement.
The earlier you catch a problem, the easier it is to fix.
asiachick
Our tests have caught 10s of thousands of bugs. We couldn't move forward without them. Maybe they just fit our use case more than yours.
In our case we have a public API and several backends. The tests run against the public API. To know that a new backend is working it has to pass all the tests. Getting any new backend working correctly without the tests would be nearly impossible.
quartesixte
> For me, it's been hard to go back to my old, cavalier attitude after that.
I’ve heard from my workplace’s software team that this makes hiring from certain sectors particularly difficult, because candidates don’t get this. They’re too used to “move fast and break things” and MVPs.
mumblemumble
We (at least should) also do code reviews in order to make sure everyone understands the code.
On my team, the majority of code review comments are not discussing potential bugs, they're making sure everybody knows how the new thing works, why it was designed that way, implementation tradeoffs, etc. All that discussion is extremely valuable over the long run. For example, it means that nobody has to pay attention to email while they're on vacation in case something goes pear-shaped with a module they implemented.
That said, I 100% agree with the article's concluding paragraph about YMMV. Raycast is operating in an entirely different business domain from the one I'm in. It sounds like the reasons why Raycast likes their way of doing things don't really apply to us, and the reasons why we like our way of doing things don't really apply to Raycast.
mike_d
> do code reviews in order to make sure everyone understands the code.
I like a hybrid "the person who wrote it can't deploy it" model. You get an informal code review and two sets of eyes/brains, to the extent that the deploying engineer feels like they understand it enough to shepherd it into prod, weighted against how important the component is and the scope of the change.
It self scales from "copy change on the website yolo" to "this involves two new racks of servers and a database migration, I'm going to need to spend a week understanding it to the point I can make a plan."
kqr
Ooh, I really like this. Do you have practical experience with this or do you like it in principle?
rebeccaskinner
> We (at least should) also do code reviews in order to make sure everyone understands the code.
This is the most common reason I see cited for code reviews, but I'm not entirely sure if I buy it, at least not for a lot of teams.
In most cases you are going to have one, maybe two other people looking at a given PR. The positive way of looking at this is that you've just doubled (in some cases tripled) the number of people who are familiar with the code. In reality though, this is still going to be a very small fraction of most engineering organizations. If you have a small team who wholly owns a small codebase then it could be meaningful, but a lot of the time you're still talking about a rounding error from zero people knowing the code: most people in the organization will still be touching the code in the future without any prior knowledge of it, and will have to start understanding it from zero.
Even if you are dealing with a small enough code base, in a small enough team, that the reviewer is adding a substantial margin of familiarity with the codebase to the team, reviewing the code is only going to give an extremely shallow level of understanding. The reviewer isn't going to have all of the context that the original developer had. They won't know all of the things that weren't done or why, the tradeoffs that were made, requirements, business context, etc. Best case scenario, someone wrote that down in a well-known place (comments in the code near the feature, something relevant in a docs directory, hopefully not as an archived jira issue that will never be looked at again). The reviewer probably has their own work to do anyway, and once they approve the PR they'll flush most of what they had out of their short-term memory anyway.
In a lot of ways, this is the same set of reasons that people aren't very likely to find bugs in a code review either. Someone without a lot of context on the problem isn't likely to find many business logic errors, or missed requirements. Since you were busy working on your feature, you probably don't know who else on what other teams might have been doing something relevant to your work, so you wouldn't know to get a review from people who might have something useful to contribute (and they wouldn't know to proactively review your code). One of the reason senior engineering roles often devolve into telling people to talk to one another is to try to address this kind of situation, but that only scales up to a small fraction of the overall code churn at organizations of any size.
For what it's worth, I ask for PRs, and I review my co-workers' code. I make comments when I see something that might be useful, and I graciously accept feedback and try to make sensible decisions about when to make changes and when to let the feedback go. I actively search for PRs that might touch sections of the code I've been working in or am very familiar with, to try to optimize the quality of the feedback I can give.
I'm not really convinced any of this actually helps deliver better code in the moment, but maybe it does some good as a way to help teach my co-workers about things (or to learn about things they can teach me). And anyway, it's such an ingrained part of engineering culture that it doesn't seem worth the political capital you'd have to spend to try to change it; even if you succeeded, it would be a prime target for scapegoating if anything went wrong later.
It does slow down the development process, I think, but people need down time anyway. If waiting on a code review gets people an hour or two of downtime to check their email or read HN between tasks, then maybe it's actually worth it for that. The burnout caused by superfluous process, and the occasional large pile-ons with people demanding poorly thought out changes gatekeeping your ability to deliver business-critical work, is essentially a cost of doing business that we seem to have collectively accepted in software.
I should also note that I also work on some open source projects, and in that setting PRs work much better. Not only does such code often need more review, but you also typically have a smaller group of people who know the whole codebase, and are much better central repositories of knowledge. The review process is also a much more effective way of doing collaborative decision making in an open source project IMHO.
alecbz
Yeah, IMO this is like a publisher saying "we don't have editors: we trust our authors". The intended ethos of editors and code review is that mistakes and imperfections are the norm, and you need a second pair of eyes to iron them out.
That said, I do sometimes encounter individuals or cultures that seem to view code review more as a mechanism for catching abnormal/unexpected mistakes than as a normal part of the process of writing code.
jiggawatts
Something that I've discovered is that as the volume of text that I write goes up, the errors increase. More experience hasn't resulted in fewer typos. On the contrary, I make far more small mistakes now than I used to.
It's precisely because I've learned to read and write so fast that I "skip over" the simple parts of sentences, making me oddly blind to certain categories of spelling errors!
Especially when I've gone over the same text over and over, I'll have memorised it to the point that I can skip entire sentences.
This is why you need someone else to step in and read the text for the "first time" and not skip anything.
The same applies to programmers and code review.
hsn915
Books are published once, not through iterations.
The equivalent in book writing would be that you are not allowed to write more than one page at a time, and someone must approve it before you start working on the next page.
kqr
Another difference between books and code is that books are read and interpreted by humans (who are very tolerant of grammar bad, in the grand scheme of things) and code is read and interpreted by computers (who will swallow any ambiguity whole and just steam on with their interpretation.)
If books were read by machines rather than humans, and were published incrementally, then it would make sense to ask someone to edit each iteration.
dragonwriter
> Books are published once, not through iterations.
Incorrect. Bugs (“errata”) are regularly fixed in bugfix versions (“printings”) of books, whereas feature changes regularly arrive in more major versions (“editions”). And that’s after the equivalent of a “1.0” release.
Books are also regularly iteratively released either to a closed group of reviewers or even the public (e.g., Manning MEAPs, PragProg Beta Books, etc.) prior to the equivalent of a “1.0” release.
mpweiher
> Bugs (“errata”) are regularly fixed in bugfix versions (“printings”) of books,
But those fixes don't fix the already printed books.
dragonwriter
> But those fixes don't fix the already printed books.
For actual physical books (as opposed to ebooks, where making the complete new version available to purchasers is common for errata, though usually not for new editions), this is usually true for logistical reasons. Still, issuing errata sheets to keep existing users up-to-date is not uncommon (usually online now; the practice is older, but notification and distribution were harder in the past).
mpweiher
> this is like a publisher saying "we don't have editors: we trust our authors"
No it's not.
If an author produces gibberish and nobody checks, the published result is gibberish. And with print books uneditable in a run starting in the thousands.
If a programmer produces gibberish, it won't compile.
It also won't pass the unit test or acceptance tests, never mind QA or the alpha/beta testers.
eterm
My experience tells me there's no shortage of compilable gibberish.
mpweiher
My experience tells me there's no shortage of published gibberish. ¯\_(ツ)_/¯
erpellan
That final, one-word sentence captures one of the major problems of PRs. An engineer might spend several hours really thinking through a problem, talking with colleagues, whiteboarding options and coming up with a workable solution that addresses all the obvious issues and a bunch of non-obvious ones.
Only for a drive-by take-down by someone with none of the context. It's a totally asymmetric investment of time. Even helpful review comments might only take a minute to write but a day to incorporate.
detaro
> Only for a drive-by take-down by someone with none of the context
If that's a regular issue, that's a culture problem. It starts with "if you've been talking with colleagues about your problem, why is someone with no context reviewing the result?" and continues with people not investing time in reviews, people doing "take-downs" instead of asking questions when they don't understand things, holding up merges for non-urgent concerns ...
> Even helpful review comments might only take a minute to write but a day to incorporate.
Either the feedback is worth the investment of a day of time, in which case there's no problem, or it's not that important and doesn't need to be done (or, depending on what it is, can be done later, or ...)
tshaddox
> If that's a regular issue that's a culture problem.
Couldn’t one also argue that if code reviews are routinely catching problems mentioned earlier in the thread (misinterpreted requirements, conflicts with other WIP, etc.) then that’s a cultural problem? It just seems odd to assume that the person assigned the task couldn’t possibly be expected to routinely avoid those problems, but that adding rigorous code review could routinely avoid them.
kqr
Yup, definitely. Code review discovers both issues stemming from cultural problems and silly bugs and understandability issues. Peer review in general is a very powerful tool for things that should be designed to satisfy customer needs.
cfer43432t
> otherwise it's not that important and doesn't need to be done
Except that this decision is often not up to the code author, because the tooling rejects, for example, committing changes until all discussions are resolved (see Gitlab for example), and some @holes make it their quest to find any reason not to resolve the discussions they started until the code author writes the exact code, with the exact words, in the exact indentation, with the exact architecture, etc. that those @holes like. That makes the whole code review feel like torture and the REAL productivity killer. Not to mention that in most cases this does not provide any improvement to the code base either.
dromtrund
> drive-by take-down by someone with none of the context.
If this is an issue in your workday, you should bring it up with management or the individual rejecter. If someone has the authority to kill PRs, they should also be required to put in the effort to understand it and/or be available for discussion at an earlier stage of the development process. PRs shouldn't ever be rejected without mutual agreement - neither internally in an organization, or in an open source project.
sanderjd
Is this a common practice? I've never worked under a code review process where the accept / reject decision was being made by "drive-by" reviews from "someone with none of the context". It has always been team members with a lot of context reviewing each others' code.
plorkyeran
I've never even worked with a code review process where outright rejecting PRs is a normal part of the process for anything other than external contributions. The normal process is more of that if you think a PR is a bad idea it's up to you to persuade the author of that. You'll certainly sometimes have the author withdraw the PR without being truly convinced, but something like "I don't like this. Rejected." would just be disregarded if you can get other people to approve it.
cfer43432t
I think the issue is not that people make drive-by issues, the problem is that people argue about things endlessly which does not bring any value neither to the customers nor the company, that is only for the vanity of the reviewer, and many reviewer get offended when someone does not take blindly their "wisdom" and for retaliation they block code commits endlessly which is enforced by automated tools like Gitlab.
kps
> Only for a drive-by take-down by someone with none of the context.
Then write it down. If the code reviewer can't follow what's going on, what hope is there for the new hire looking at it six months from now?
ipaddr
Because looking through 6-month-old PRs is what new hires do to learn the codebase?
fragmede
Running through `git blame` and looking at the commit message, the PR, and the PR comments is a very practical way to learn about a codebase! Beyond some high level code organization stuff that exists in a readme, learning a codebase is really about getting a history lesson of how the product evolved, the company pivoted, and the team re-org'd.
It'll be 6 months if you're lucky. Try figuring out why there's a particular "if" clause one or two or four years later. Software maintenance, especially when the original author has moved on to another team/organization/company/career, is a frustrating art of almost-remembered stories and hoping that you can figure out Chesterton's fence, lest it become Chesterton's barbed wire. The worst is when you fix a small bug with code and introduce an even bigger bug in the process.
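For instance, a typical dig with stock git (the file path and line range here are made up):

    # Who last touched lines 120-140, and in which commit?
    git blame -L 120,140 src/billing.py

    # Read the whole story behind that commit: message, author, diff.
    git show <commit-sha>

    # Or trace the full history of just those lines across refactors.
    git log -L 120,140:src/billing.py

From the commit message you can usually recover the PR and its review discussion, which is where the "almost remembered stories" live.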
detaro
Not necessarily looking through PRs, but presumably they need to work with actual code in your code base that was merged at some point, unless your product is growing so fast that new people just write new code all the time? (Same applies for not-new coworkers that now need to touch that code area anyways)
yxhuvud
Sure. If they are good and have a decent grasp of version control, then they will look at the history of a certain piece of code if they don't understand why it is the way it is. I do it often, even when I'm no longer a new hire.
lalaithion
That is one of the things I do to learn new codebases.
Nursie
> Only for a drive-by take-down
Code reviews aren't a "take-down", they're a process of helping each other produce better solutions and better code.
> Even helpful review comments might only take a minute to write but a day to incorporate.
Then like all pieces of work, a decision must be taken whether that day is worth it, no?
cfer43432t
That's what YOU think. But in practice they produce sub-par solutions and worse quality, in my experience. Code reviews are basically a virtual pissing contest where people argue endlessly, everyone trying to show off their intellectual superiority. And reviewers (not code authors) often can't handle rejection for some reason.
Nursie
It sounds like you've worked in some massively toxic teams. That's all I can say.
strken
Take this with a grain of salt, but a good code review should be an exploratory process. Most of the non-trivial comments should be questions[0], and the reviewer should be operating under the assumption they'll have to jump on a call or even fix it themselves.
[0] Not just because the reviewer is trying to be nice, but because they don't know what constraints and problems the author discovered in the process of writing the PR. If the reviewer writes "use a map here, instead of scanning an array" without knowing how many items are handled, then they may be making the PR worse by replacing a cheap linear scan over a maximum of 50 items with a bunch of costly yet constant time hashing. Often the solution is an explanatory comment rather than the "obvious" fix.
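To make the footnote concrete, here's an invented Python example of the kind of spot where the "obvious" fix could be a regression, and where an explanatory comment is the better outcome of the review (all names are hypothetical):

    MAX_CODES = 50  # upper bound enforced upstream (an assumption for this sketch)

    def find_code(codes: list[str], wanted: str) -> int | None:
        """Return the index of `wanted`, or None if absent.

        Deliberately a linear scan: with at most 50 items, building a
        dict (or set) on every call would cost more in hashing and
        allocation than the scan it replaces. Revisit if MAX_CODES grows.
        """
        for i, code in enumerate(codes):
            if code == wanted:
                return i
        return None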
coffeefirst
Fair, and I've seen this, but usually it's because the reviewers are treating "I would personally prefer to do this another way" as blocking feedback, when it isn't.
Good reviews can take a light touch, in part by differentiating non-blocking ideas and suggestions from "hey I think this is a bug."
Some of my favorite reviews of all time don't change a single line of code, but the questions reviewers ask zero in on what's going to confuse the next developer, so it can be documented appropriately.
Tobias42
Usually I answer unspecific one-sentence criticisms by asking what alternative solution the reviewer would suggest. Sometimes they look into it more deeply and really do come up with a better solution. If they don't, at least the discussion is over.
physicsguy
I require PRs with code review on projects I am an admin on because I don’t trust myself. I sure as hell don’t trust other people!
AnimalMuppet
It's worse than that. I don't trust me. You shouldn't either. I need the review.
watwut
The best code I have seen was on projects without code review. And projects with it were more of a mess - consistent on the surface, but overall hard to comprehend.
The deciding factor was ownership and accountability, though - you maintained your own code, and if you did it crappily, you knew.
Code review is tied to the assumption that everyone can change everything - which, all in all, means an inconsistent mess. It is also tied to the unfortunate assumption that code review is how people learn the system. They don't, unless they are the original author. The rest of them know only pieces of it.
paiute
I prefer a top-down approach where project leads are responsible for code, i.e. they review all code and they are simultaneously responsible for getting things done. It forces balance. I will let juniors make a mess in certain areas and not others. Sometimes you even let bugs in if you know a specific test will fail, so people are more careful and don't depend on reviews too much. If you are a bad lead and nitpick every little thing, then you get to explain why your project isn't ready. Honestly, getting devs to `git add -p` and look over their own PR catches much of it.
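For instance, a self-review pass before opening the PR might look like this (branch name and message invented):

    git add -p                # stage hunk by hunk, re-reading each diff as you go
    git diff --staged         # one last read of exactly what will be committed
    git commit -m "Limit retries to idempotent requests"
    git push -u origin limit-retries    # then skim the PR diff once more

The `-p` flag forces you to re-read every hunk, which catches leftover debug code and accidental junk before anyone else has to.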
I agree that code review is not a great medium to disseminate information about the codebase, but assigning individual ownership and accountability doesn't make the need to share information go away.
Ownership and accountability are workable as long as the same person sticks around long enough to fix all their mistakes. If that person leaves, the code becomes orphaned and it's up to one of their teammates to find out where the skeletons are buried.
watwut
No, it does not. But it also gives you one or two clear people you know to ask. It also leads to the same person explaining the same thing multiple times, so as long as they care just a little, the explanation will improve.
And the person doing the explaining actually understands the whole part. That is a big one too - you are not having bits of it explained by someone who knows only small parts of it. And the system was not changed by a third party without the person who explains it having at least a vague idea that it was changed.
> Ownership and accountability are workable as long as the same person sticks around long enough to fix all their mistakes. If that person leaves, the code becomes orphaned and it's up to one of their teammates to find out where the skeletons are buried.
That is actually exactly the same without clear areas, except everyone is new all the time.
If you have high turnover, this aspect will be bad no matter what.
And it is not that bad with clear responsibility either. The old person explains things to the new person. Taking over, when you have been working consistently in the same area, is not that hard, actually. Some parts you conclude to be a mess, but usually not the whole of it.
---------
And IMO, quality and knowledge heavily depend on other parts of the team: analysis and testing. Both, if they work reasonably well, are a massive resource for knowing how the hell the system is supposed to work.
And once you have a reliable source for requirements that is not the developers, then deciphering the code is much easier.
devanl
In my mind at least, the issue with relying on ownership, especially in a smaller team with limited resources, is that the owner isn't just the person who authoritatively understands that subsystem. The owner ends up being assigned _all_ of the work on that subsystem because it's "their responsibility".
As a result, any work that's assigned to someone else will only interact with it at the interfaces / boundaries. And of course, that's sort of the intention with modularity / loose coupling - it's not necessary to learn the gory details of the XYZ subsystem's implementation, only its API. Knowing the details is certainly valuable, but there's no baseline impetus to get the owner to explain them.
Taking over from someone who's ceding responsibility, but staying at the company, I agree that's manageable. But sometimes people get laid off suddenly (or perhaps get hit by a bus, etc) and there's no chance to get the one person who has it all in their head to explain it to you.
You mentioned high turnover, but if anything, a place with high turnover has less information locked up in a single person's head. If Alice who's worked at the company for 25 years and has always owned the code leaves, you're going to have a much harder time getting all of the tribal knowledge out of her head on two weeks notice compared to Bob who only made it 6 months.
To me, the value of code review is that even at its worst, someone other than the author is forced to look at the code. No process can force a reviewer to take an active interest in a particular subsystem, but at least we can make them look at pieces of it and see if it gets them curious enough to ask further questions of the author and better understand the system.
For myself, knowing that someone else will have to understand my code, even through the limited lens of a code review, makes me more diligent about making sure that the code is clean and the pull request has enough context that someone playing code archeologist will be able to make sense of it, even if I'm not walking them through it in real time. I will admit that this is not a universal thing - I've worked with coworkers where no matter what process is in place, they won't do the basic courtesy of reading their own changelists to see that they accidentally committed random junk files from their local machine.
I agree that good documentation around requirements is valuable. I find pull requests / code reviews are a great opportunity to highlight how the code relates to specific requirements, since the focus is narrower.
sanderjd
This is the exact opposite of my experience.
Jensson
The more problems you have the more policies you add to prevent future problems. I really doubt that a low quality codebase would become higher quality with the addition of code reviews and vice versa.
marcosdumay
A formal review step where you accept or reject the code, yes, is always there because of lack of trust.
Informal reviews can have a lot of different shapes, and provide a lot of different benefits. But anything that is a stop-the-process activity is there because you expect quality problems from the earlier steps.
(That said, no good developer trusts oneself. So if you don't have anything on your process that makes you trustworthy, lack of trust is the correct approach.)
mulmen
I tend to scan most reviews for the team even if the code review itself is not directed to me (we use a round-robin/domain-expert routing method). It helps me understand what my teammates are working on and gives me a better understanding of our system. I can't imagine anyone interpreting that as a lack of trust. Is looking at our source code inherently distrustful?
asiachick
Agreed, so many times reviewers have saved me from myself as well as passed on solutions I didn't know, pointed out functions in our giant codebase I wasn't aware of, taught me new language concepts I wasn't familiar/comfortable with (ie, C++11->14->17->20) and generally helped limit our technical debt.
I'm a fan of code review
For me, code review is collaboration. We're helping each other.
The one thing I'd suggest is, as much as possible, try to phrase reviews like a question: "Do you mean to do X here?", "Would using fooTheFob here work?", "Is this a race condition?". At least it feels better to me.
snarf21
I think it is trust in the trust and verify sense. Meaning, we trust each other to do our best but let us verify that we are all on the same page and aren't breaking things or doing something harmful.
joombaga
What is the "trust and verify sense"? I've been told this by managers before but I've honestly never understood it. "Trust" to me means "be willing to take action without further examination of the facts at hand". If I trust your code I wouldn't review it. But I don't trust your code, and I don't trust you to have taken all of the appropriate considerations when you were writing it. What's the difference between "trust but verify" and "don't trust, verify"?
Is the phrase "trust but verify" an attempt to avoid damaging self-esteem? There seems to be some emotional/ego-based attachment to the concept of trust.
jayd16
It's a Reagan quote, but in this context it means that you trust their intentions, effort, etc., but it's still worth checking.
strken
I think it's differentiating total lack of trust ("all your PRs suck while mine are perfect, so I have to read yours") from universal fallibility ("your PRs are great, but all code has bugs, so we all review each other's code to catch some of them").
sanderjd
Yes, catching mistakes (which everyone makes) and bidirectional communication are the points of code reviews, it's not a lack of trust.
islon
I have an open source project in which I work alone.
I never work directly on master, always on a branch, and when I'm done I create a PR for myself.
Can't remember a single PR that I reviewed where I didn't find some issue or bug.
I guess you can say I don't trust myself...
actually_a_dog
Absolutely. I take issue with all 3 of his main points. You covered the first point well, so, I'll take the last 2.
> Pull requests don't prevent bugs. Reviewing code is hard! You can catch obvious mistakes, but you won't have consistent in-depth scrutiny. Many issues reveal themselves only after extensive use, which doesn't happen during code reviews. We rely on fast iterations with user feedback.
So many things I can say about this. "Fast iterations with user feedback" are great, until you introduce a security bug that you find out about because someone exploits it. Catching "obvious" mistakes alone has significant value. Since he acknowledges that code reviews catch some bugs, this criticism seems to be that they don't catch all bugs. To that, I say: show me a practice which does catch all bugs, and I'll show you why it doesn't. Even the most rigorous development process doesn't prevent all bugs. [0] This just doesn't sound like a fair criticism to me.
> Pull requests slow down. Code reviews aren't high priority for engineers. They have to make time to clean up their review queue. In reality, PRs are an afterthought and not what you want to spend your time on as an engineer.
This is largely a culture problem. Code review should be a high priority for engineers. Maybe it's not the highest priority, but, it's fair to expect one could get a decent review on a PR that isn't too large in, say, 1-2 days or less. If you have PRs sitting in review for weeks at a time on a regular basis[1], one of two things is happening: either engineers aren't properly prioritizing code review, or your PRs are too large and/or have too many dependencies.
Which brings me to the second thing you can do, which is make PRs that are as small and simple as possible, but no simpler. Minimize dependencies to help make your PRs understandable. And it should be fairly obvious, but the fewer lines of code (including config changes) you have in your PR, the easier it will be to review, so the faster it will get done.
Code review is also a great opportunity to learn and teach. Every senior+ engineer should be reviewing code frequently, IMO.
Finally, yes, code reviews will slow you down if your basis for comparison is just "engineers merge code when they think it's ready." That's intentional, and good. The cost is more than paid for by better code quality, leading to fewer instances of "here's a PR to fix the issues with my previous PR."
[1]: I've only had this happen once or twice in my career so far. In each case, it was because the code surrounding the code I was working on was not stable and kept changing. These were clearly exceptional instances.
damethos
I completely agree. This article is bad advice, please do not follow it.
jayd16
If it wasn't about trust, you could just read the commit history. It's certainly about trust.
mpweiher
> Is that why we do code reviews, because we don't trust each other?
I don't see any other reason to require code reviews before a change can be merged.
> humans make mistakes
Then write unit tests, religiously. And pair. Or "... request reviews when they think it's necessary".
In my experience code reviews are not very good at catching mistakes.
> Different work in progress can be in conflict with each other.
Code reviews are a cause of too much work in progress, not the solution. Not the only cause, but a real one.
> Reviews are a good way to learn from each other
In my experience, they are pretty bad at this. Pairing is vastly superior, or giving talks.
> reason to communicate with devs I might not otherwise communicate with
Then communicate with them. And if you want a review "... request reviews when they think it's necessary".
> Rejected
Exactly. That's not a useful form of interacting with your peers.
nrb
> I don't see any other reason to require code reviews before a change can be merged.
In the US, a SOC 2 audit of your org’s change management process is going to be a really bad time without this.
mpweiher
Do I really have to add "...unless required by law/contract"?
nrb
Well, doing this _before_ undergoing these certifications would still be a good reason that was omitted from the very absolute claim that was made.
xmlninja
I trust you with my life but do I also trust you with my money and my wife?
globalise83
Trust but verify
mixedCase
I'm glad this works better for them, or at least that they feel it does. I'd also bet that they're either among the small minority of teams that jell so well they manage to scale a codebase without it turning into a disgustingly inconsistent mess, or that their codebase indeed qualifies as such, or is still too small for the effects to be very visible.
Code reviews are not a tool intended mainly to catch bugs. Types, automated tests, and manual tests will do the bulk of your bug-catching, after all. Code reviews' more relevant purpose is to act as QA for the code itself, far more than to verify that the code achieves what it sets out to do. This serves both maintainability (which impacts your ability to make future changes, as well as staff retention) and performance (by giving another dev a chance to detect suboptimal approaches).
Once your teams grow, code piles up, and static analysis can't cover all your bases, either your team verifies that the code you write matches (or defines) group standards, or each developer's design accumulates as "organic growth", carefully duct-taped together to barely work.
And just to state the obvious, code reviews are merely a tool to achieve this. Pair programming is another.
commandlinefan
> Code reviews are not a tool intended mainly to catch bugs
The source actually says that informal code reviews are a very poor way to catch bugs, while formal code reviews are one of the best.
I don't have "Code Complete" at hand to follow up on that, but I strongly suspect that a formal code review was defined in this research as reviewing the entire codebase line-by-line in meetings, as opposed to reviewing just a single change to the system.
Can you/anyone shed some light on this?
Fishkins
My reading is that a formal code review is still focused on a single change, but with a more explicit/required process for review. This sentence in the article backs that up:
> Before any code at Google gets checked in, one owner of the code base must review and approve the change (formal code review and design inspection)
That's clearly referring to a specific change, not a review of the whole codebase.
I used to work for a large company. They have homegrown review software with extensive checklists of criteria the code must meet before it can pass review. The software won't let the code progress unless two reviewers go through and explicitly check off every box to certify it meets their standards. I'd call that formal code review.
I work on a smaller team now. We do consistently perform code reviews, but there's nothing actually preventing someone from merging without a review. We also have a list of things to look for when performing a code review, but nothing forces reviewers to look at the document regularly. That process is closer to an informal code review.
I still find bugs regularly when reviewing code. That's both true of other people's PRs as well as my own. I imagine there's a lot more variability with an informal process than a formal one, but IMO informal review is still useful. Code review of course also provides benefits other than bug detection.
I always review my own PR before passing it to someone else. Another commenter recommended that practice, and I heartily endorse it.
whoomp12342
IMO code reviews are a tool for mentorship and keeping architecture in line
ipaddr
They can also be used for bullying, micromanaging and other social constructs
xboxnolifes
something, something, quote about all tools being able to be misused.
azov
Coming up next: “We don’t use source control. Git just slows us down. Our engineers don't want to spend time writing commit messages when they can be writing features instead. We keep all our code in a shared folder and we trust our team members to not mess it up!”
Well, this can be done. Lots of software was written before code reviews or source control existed. But I’m not longing for that time.
mslate
Early Facebook is one notable example.
LenP
Interesting read, and it may be working well for Raycast, but it didn’t resonate with me. The main drawback of code reviews, as they correctly note, is a lack of velocity due to engineers looking at code review as a “not my real work” thing. That’s a cultural issue in my opinion.
At my current startup, we are trying something different: the code reviewer fetches the feature branch and writes tests for the change as part of the review. As ridiculous as that sounds, it’s been working better than we’d imagined.
First, it’s really forced us to design modular and testable APIs. Separation of concerns at the file or library level gives a false sense of good, modular, reusable design. However, having an engineer write a test for your implementation (essentially a second client to consume your interface) really puts this to the test (!). The quality of feedback emerging from this process is often much richer than what a “skim the diff” code review offers.
Additionally, writing the test suite is an “unavoidable” task for engineers. This works two-fold: firstly, we don’t put it off as an “end of the day” item as with traditional code reviews on which you can spend as much or as little time as you have (it’s a ‘compressible’ task, unlike writing tests). Secondly, people send smaller, focused, easy-to-test PRs as a courtesy to the reviewer - and perhaps in part due to what we call MARBs... mutually-assured review-bombs :)
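To make the mechanics concrete, here's a minimal sketch of what a reviewer-authored test can look like. Everything here is illustrative: `parseDuration` stands in for whatever function the branch under review adds, and the harness is vitest-style.

```ts
// Reviewer-authored test for a PR that adds a hypothetical
// `parseDuration` helper. The reviewer acts as a second client
// of the new interface.
import { describe, expect, it } from "vitest";
import { parseDuration } from "./parseDuration";

describe("parseDuration (written by the reviewer)", () => {
  it("parses simple units into milliseconds", () => {
    expect(parseDuration("90s")).toBe(90_000);
    expect(parseDuration("2m")).toBe(120_000);
  });

  it("rejects malformed input instead of guessing", () => {
    // Consuming the interface cold quickly surfaces
    // under-specified behavior: what *should* this do?
    expect(() => parseDuration("two minutes")).toThrow();
  });
});
```

If the reviewer can't write this test without asking the author what the function is supposed to do, that's feedback in itself.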
wreath
> At my current startup, we are trying something different: the code reviewer fetches the feature branch and writes tests for the change as part of the review. As ridiculous as that sounds, it’s been working better than we’d imagined.
How is this different from the "traditional" QA-writing-tests-for-devs process?
LenP
QA writing unit tests for devs is not “traditional” - it’s a signal to leave the place ASAP.
As an engineer, you’re expected to write, review, and test code. We simply shuffle responsibilities such that you usually test other engineers’ code and not your own (there is some discretion involved - for very small changes we often write or tweak the test ourselves)
wreath
Yeah, that's why I put traditional in quotes. I guess, rather, outdated? But whatever... the question is how is this different? The engineer who writes the code is still not the one who tests it, am I misunderstanding this?
LenP
Ack. You’re right in that the engineer who writes the code is not the one who writes the test, but recall that all this happens at the code-review phase.
When QA finds holes when writing unit tests (rare), it goes on a backlog. For us, if the reviewer finds a bug or finds the code not very testable, your code isn’t getting merged. We catch both errors and design issues (especially premature optimizations and overly large API surface area) very early. Habitually, our engineers write simple, well-documented code with concise APIs at a well-defined level of abstraction (we also follow some other heuristics to make this obvious) on the first revision, so we rarely (6 total in the last 3 months if I’m querying correctly) have more than two revisions despite the additional ‘write tests’ step from the reviewer.
We found that engineers take great joy in trying to break others' code, but also enjoy collaborating in a scratch-each-other's-backs way when they get to give useful feedback. We've been able to channel that into great test coverage (around 99% FWIW). We're pre-launch, so ultimately we may uncover bugs in production at the same rate as traditional code-reviewing shops ¯\_(ツ)_/¯, but we've had close to zero issues needing manual intervention in months of early user testing.
tombert
I think I largely agree with this article.
When I worked for Apple, we had a strict "all code needs to be reviewed" policy, which I had no problem with. Then, after I'd been there about 1.5 years, I made a PR, asked two different people to approve it, which they did, and it got merged. A week later the code was released, and it turned out my code had caused a fairly major bug.
My manager and my manager's manager had a meeting with me and told me this was unacceptable. I asked, "What exactly was I supposed to do differently? Clearly this bug was sneaky enough that two other people also didn't see it." Their response was "the code needs to work, you wrote the code, the code was sloppy. This is your fault." Apparently they took the bug pretty seriously, since it was actually mentioned in my yearly review.
This is fine, and I probably should have tested the code a bit better, but it always annoyed me that my "sloppy" code managed to pass code review [1], and yet I was the only one to get in trouble over it. If the PRs aren't catching bugs and I'm still going to get in trouble, I fail to see why I should bother waiting an hour for someone to look through my code. Instead, why not make it so we can merge quickly, test quickly, and revert if something breaks?
[1] Obviously I'm biased here, but I honestly think that in this particular case I was pretty easily the least-liked person on the team, and it was easier to throw me under the bus than anything else.
igetspam
You had shitty management.
tombert
No argument here. I should probably have switched teams, since some of my friends who are still at Apple seem to have had a better experience.
dsjoerg
> test it quickly
Yes, in your scenario the lack of testing stands out as the major opportunity for improvement. If a bug is so important that it's mentioned in a performance review, then it's important enough that there should be tests that would have caught it. Automated preferably, or manual if necessary.
And everyone involved with the software, from you on up and sideways, should be calling for this testing. It's clearly important / worthwhile!
tombert
There actually were unit tests for it; there was just an edge case I didn't think to test that caused things to crash, which set off a cascading failure.
janstice
Edge cases and cascading failures are things code reviews should be focusing on, since that's a place where more brains are better than one, rather than the usual "does the code match the formatting and naming conventions".
dsjoerg
Sure. Seems like a classic "five whys" kind of situation... coming down hard on the person who wrote the bug is counterproductive in a way the whole tech industry understands, except for your former bosses at Apple.
tombert
To be clear, I’m not absolving myself on this. The bug was still my fault at the end of the day, I just feel like my managers handled it poorly.
kqr
Did you purposefully introduce this bug? Then it was your fault. Otherwise, from all the evidence we have, you couldn't have predicted it from where you were standing at the time. You can't be at fault for that.
Any other conclusion is a cultural problem that hints at a "shoot the messenger" philosophy at work.
If we want a Westrum generative culture (and we do!) we can't go around assigning fault to ourselves or others when we end up in bad situations.
Focus on whether the process is good or bad, not where individuals ended up due to random variance around the mean.
End rant. Sorry, this is one of the major things that upset me about how other people run their organisations.
999900000999
That's on the QA team.
If you're able to write code that screws up key functionality, QA should catch it in beta.
Unless you did something wacky like deploy straight to prod.
dboreham
> worked for Apple
Well at least it had one positive effect :)
tombert
I guess from a resume-fuel perspective. I don’t think I was a great fit for the company.
baskethead
I don't know what your point is. You would have gotten in trouble whether or not code reviews were present. If you're saying that code reviews should not be default, then the entire blame would rest on your shoulders anyway. Are you saying you wish you got in less trouble at Apple and the code reviewers should have gotten in more trouble for missing YOUR bug?
tombert
> Are you saying you wish you got in less trouble at Apple and the code reviewers should have gotten in more trouble for missing YOUR bug?
If “my” bug was a result of sloppy code, my PR should have been rejected.
I don’t really think anyone should get in trouble for a code bug (at least in a majority of cases), but yes, if this bug was obvious enough for me to be in trouble for not catching it, then the people approving the PR should share responsibility for it.
baskethead
If code reviewers share responsibility for the code they review, why on earth would anyone review code? Everyone would stop reviewing other people's code. Then you're in the same situation where no one reviews code, and you're responsible for your own code by yourself.
It sounds like you think you shouldn't have gotten in trouble for having bugs in your code. I have no comment on that situation because there's not enough backstory. Generally, "punishing" coders for bugs isn't good policy unless it's particularly egregious. If you were told that your code was sloppy, and if your performance review was affected by it, maybe it was sloppy code? Like I said, there's not enough backstory.
If it was an innocuous bug or if it was a hard-to-determine bug, then I don't think you should have been written up, but again, there's not enough backstory to determine that.
But to say that your responsibility is lessened because it was blessed by code reviewers and they should share in your punishment, is, frankly, immature. It does sound like they were ineffective code reviewers. If your code was sloppy, they should have picked it up, and if it was a hard-to-detect bug, neither you nor the code reviewers should be written up for it.
lmm
> If code reviewers share responsibility for the code they review, why on earth would anyone review code?
For the same reason that people write code they'll be responsible for: it's their job. Code review works when it's seen as equal priority and equal importance to writing code.
tombert
> If code reviewers share responsibility for the code they review, why on earth would anyone review code?
Ok, and if there's no shared responsibility, then what's to stop the reviewers from just mindlessly clicking "approve" to shut the person up? I know for a fact that happens all the time.
> If you were told that your code was sloppy, and if your performance review was affected by it, maybe it was sloppy code?
Sure, but again, isn’t “sloppy code” the lowest hanging fruit for a code reviewer? If the PR process doesn’t spot that, what exactly does it offer?
> But to say that your responsibility is lessened because it was blessed by code reviewers and they should share in your punishment, is, frankly, immature
I think we're just going to have to agree to disagree on this. If you're signing off on something, you're attaching your name and credibility to it. I didn't really want any of the reviewers to get into "trouble", but I do think they should share a percentage of the responsibility for breakage, if I'm going to get in trouble after they signed off on something that was "sloppy".
cdaringe
> Pull requests don't prevent bugs
Pull requests have found bugs on every software and hardware team I have ever been part of. I cannot imagine living without this process. PRs increase knowledge and awareness of features. I fully concede that the process is discouraging in some circumstances, but those tend to be the exception, not the rule. Most often it's discouraging around a core set of individuals who bring more dogmatism and policing to PRs than is entirely welcome.
Power to this team, but either they’ve discovered the holy grail, or they will be compensating for this in some other internal process, certainly not user feedback.
geoah
Couldn't agree more. I'm fairly certain that I'm a better coder and team member than I used to be because I've been on both ends of many, many PRs.
dboreham
I think the underlying problem with this debate is a widespread failure to understand the purpose and practicality of reviews. Actually, I see this pattern repeated with many other practices in our field: tests, for example. It's a kind of mass delusion, imho.
Yes, having human #2 look at whatever code human #1 wrote, with the goal of improving it and catching errors, is in the abstract a good idea.
But _requiring_ all code to be reviewed before merge to main, while allocating no real resources to that activity, while also expecting high velocity of development, is magical thinking.
It typically leads to "software engineering theater". Code reviews look to be happening, but for the most part they're not. And worse: some things don't get fixed because the participants see review as an obstacle unworthy of their energy to surmount. Some great features don't get developed because the engineer with some spare creativity perceives that their weekend efforts may be for naught due to ensnarement in review webs.
I think the author is on the right track by at least refusing to enter into the delusion, and rejecting the magical thinking.
notmyfuture
For a small team of capable engineers, working on something where any single change failing has low consequences, plus a strong business motivation to move fast: sure, do without code reviews if you want.
There is no universal "best practice" for building software; there is just whatever works best for your context. Practices can, and should, change as your business context does. There are some things that are net beneficial for a team in the majority of situations, though. I'd consider code reviews one of those things and make them the default.
SkyPuncher
I feel like this article is written with a bunch of false dichotomies. You can still have code reviews and still have trust.
We're required to do code reviews for SOC 2 compliance. Even if we weren't, I'd likely still encourage my team to do them.
I trust everyone on our team. They're all extremely talented and competent. They could all ship things to prod everyday without a peer code review and I doubt we'd notice any difference in quality.
The reality is code reviews serve way more than making big brother happy. They help to avoid silos. They help to gain perspective. They help people mentor and grow. They help the team build a collective understanding of how to work effectively.
that_guy_iain
> Pull requests don't prevent bugs.
I remember once I was giving a talk on code review and I said that code review doesn't prevent bugs, and people were shaking their heads. Everyone hangs on to this one thing, but if you look at the majority of pull requests that have been approved, you'll see a lack of comments about potential bugs. Also, if you do see a comment about a bug, it'll often be disregarded. This drives me up the wall, since I've pointed out bugs that were later confirmed to be actual bugs and had to be fixed.
I think the biggest problem with code reviews is that they often become adversarial and people want to come out on top. I've had it more than once that someone suggested another way of doing something; when I asked what the benefit of their way was, or what the downside of mine was, no answer was forthcoming, yet I was expected to implement their way. This becomes enraging when you point out flaws in their way and they can't find any benefit of it.
As well as that, often people just don't want to do the extra work. You point out a bunch of small improvements. It's a pain.
You end up with people asking for code style changes even though there is a code style guide and the changes they're asking for aren't in those guidelines.
In my opinion, code review is a massive waste of time that is often done purely as cargo cult. People know it's a good thing to do, but they have no idea how to code review and what is and isn't beneficial during code review.
curious_cat_163
> I think the biggest problem with code reviews is that they often become adversarial and people want to come out on top.
Is that really a problem with code reviews? Or, is that a cultural issue for a given team?
baskethead
It sounds like your company has a shitty culture. I've never had issues with code reviews at the companies I've worked at. One company was particularly rigorous: we had to print the code out on paper, and a room of engineers would go over it line by line. But that company also had the best-engineered software I've seen in my career.
Any issues of style should be in the style guide. Anything not in the style guide can be ignored. Or the style guide should be updated.
There should be a distinction between opinions and technical problems. If you suggest a change that is opinion-based, then it can be taken or ignored. I never comment on suggestions, because it's a waste of my time and theirs. If you are suggesting a change that fixes a technical problem, then that should be changed. If the person doesn't want to make the change for the technical problem, then that's a culture problem in the team or company.
throwawaythekey
> Any issues of style should be in the style guide. Anything not in the style guide can be ignored. Or the style guide should be updated.
I find that there are two components of style:
- Formatting
- Clean abstractions, modularization... everything else that goes into making you a great (not just passable) programmer
Formatting should be handled by tools like others have said. For everything else I have the options:
- Turn the style guide into a book, which no one will read anyway
- Ask them to read an actual book, but that isn't practical in the scope of a PR, and even after reading, the ideas take a while to digest
- Attempt to mentor the employee via the PR, often appearing pedantic, because it's hard for a novice to appreciate what they don't know.
- Code that passes CI is working code - screw everything else!
I take the mentoring route whenever it's presented to me, but I've generally found that some team members (especially as they get older) have no interest in best practice and become annoyed by the code review process. If the mentee has persistence it tends to work out, but more so over 6-month to 1-year timeframes. For mentees without the patience, we drop the code review and keep them in areas of the codebase where they can't do too much damage.
I guess what I'm trying to say is that our culture is willing to bend the processes to fit individual's strengths, and overall our workplace is pretty happy/chill. Maybe the engineering quality takes a hit but I can definitely sympathize with where the OP is coming from.
that_guy_iain
> It sounds like your company has a shitty culture.
People say this a lot, and then I look at their code reviews and they're basically the same. As I said in another comment, the root cause of this crappiness is that the majority of people can't code review for crap and they're rushing it.
> Any issues of style should be in the style guide. Anything not in the style guide can be ignored. Or the style guide should be updated.
This doesn't stop people asking. I have a default comment: "This is a code style change, and as a rule I don't do code style changes unless they're required by the code style guidelines, because everyone has their own preferences for code style. If I were to make this change, then I'd have to make others, and that would result in lots of pull requests being updated just because one reviewer wanted it one way and another reviewer on another pull request wanted it another way." Sometimes they then go and have the code style guide changed, stating "Iain won't do code style changes unless they're in the guidelines", which I personally feel is a bitchy comment about the fact that they just bikeshedded on my pull request and I wouldn't make the change.
> If the person doesn't want to make the change for the technical problem, then that's a culture problem in the team or company.
Here comes the crux: what if the other person just disagrees with your assessment and decides the technical problem doesn't exist, or that the business value of changing it makes it a bad idea to change? For example, in code review I create a table and someone comes along and says that tables with a large number of columns are a performance issue. I disagree and point out that splitting it into multiple tables with joins is a performance issue. Now we have a whole confrontation. Someone points out a valid issue that would affect 1 out of 1 million users and would take 3 days to fix. Again, confrontation. Sure, you can say that with the right culture these issues are easy to work around, but the fact is office politics is such a big thing that we have a phrase for it which everyone understands. So many people are trying to advance their careers that they all want to be seen as really good. And for most people, the way to look good is to make someone else look bad.
autarch
> Any issues of style should be in the style guide. Anything not in the style guide can be ignored. Or the style guide should be updated.
I'd say any issues of style should be caught by required code formatting and linting tools, which optionally run pre-commit and _must_ pass in CI. This should all be handled _before_ code review. Then review can focus on substance, not style.
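As a sketch of the idea (using the JS ecosystem's tools as the example, with Prettier for formatting and ESLint for linting; none of this is from any project in this thread), the rules live in config rather than in review comments, and CI runs something like `prettier --check .` and `eslint .` as required gates:

```js
// eslint.config.js -- a minimal flat-config sketch. Anything
// style-adjacent is encoded here and enforced mechanically in CI,
// so it never needs to come up in code review.
import js from "@eslint/js";

export default [
  js.configs.recommended,
  {
    rules: {
      // Catch real problems automatically...
      "no-unused-vars": "error",
      // ...and settle recurring style arguments once, in config.
      eqeqeq: "error",
    },
  },
];
```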
zzzeek
> I remember once I was giving a talk on code review and I said that code review doesn't prevent bugs, and people were shaking their heads. Everyone hangs on to this one thing, but if you look at the majority of pull requests that have been approved, you'll see a lack of comments about potential bugs.
that's because the UX for github PRs sucks. Use Gerrit instead. It's not as pretty but it's super productive.
> I think the biggest problem with code reviews is that they often become adversarial and people want to come out on top. I've had it more than once that someone suggested another way of doing something; when I asked what the benefit of their way was, or what the downside of mine was, no answer was forthcoming, yet I was expected to implement their way. This becomes enraging when you point out flaws in their way and they can't find any benefit of it.
that's not a problem caused by code review, it's caused by people who don't know how to work with others (or otherwise simply refuse to). to the degree that the code review process shines a light on this, that's good news, because that's a people problem that should be fixed.
> As well as that, often people just don't want to do the extra work. You point out a bunch of small improvements. It's a pain.
when you look at a code review and you see problems in what's there, catching them in the code review is great, because that means less chance of having to fix it later.
if you see a code review and nothing's wrong with it, you just approve it. sometimes you can note things as "nits", meaning, fine for now but maybe next time do it this way.
> You end up with people asking for code style changes even though there is a code style guide and the changes they're asking for aren't in those guidelines.
syntactical style and formatting should be automated, if not with tooling that actually rearranges the code in that way (e.g. [black](https://github.com/psf/black) for Python) then at least including well tuned linter rules. We use black, my own zimports tool for sorting imports, flake8 with four or five plugins as well as "black check", so we spend exactly zero time dealing with anything to do with style or code formatting, including having to even type any of it. if people are fighting over code style then there need to be tools that lock all of that down so there's no disagreements.
> People know it's a good thing to do, but they have no idea how to code review and what is and isn't beneficial during code review.
I work on the Openstack project and that is where I learned to do code review with Gerrit, it is completely essential for large projects and is an enormous productivity enhancement. Decades ago before CVS was even invented, the notion of using source control seemed crazy to me. That position was obviously insane, even though we were stuck using Visual Source Safe at the time. That's how it feels to me to be doing a real project today without code review and formatting tooling in place. Like even for myself, if working alone, for a large project with lots of changes I still use a code review tool, just to review myself.
lucumo
> sometimes you can note things as "nits", meaning, fine for now but maybe next time do it this way.
That's actually one of the costs of PRs (or any kind of mandatory and blocking code review). I've worked in places where code review wasn't a hard gate. People fixed these nits as a final step before pushing the whole thing to master. With blocking code reviews, people either do that "nit" thing or keep the remark to themselves because it isn't worth fussing over.
All code review processes have some degree of rubber stamping, but the mandatory ones have a bit more.
Orthogonally, I feel that more reviewers makes the process and the results better. The best review process I've seen was slow as hell: (nearly) every team member would do (nearly) every code review. People would start riffing off of each other and actually learned from each other. You could actually get a good team-wide conversation going about the best approach to doing stuff a certain way.
that_guy_iain
> that's because the UX for github PRs sucks. Use Gerrit instead. It's not as pretty but it's super productive.
That won't affect the quality of what people find and comment about.
> that's not a problem caused by code review, it's caused by people who don't know how to work with others (or otherwise simply refuse to). to the degree that the code review process shines a light on this, that's good news, because that's a people problem that should be fixed.
You say that like fixing people is an easy thing to do. It's the hardest thing to do. And the fact I've seen this at so many companies/teams leads me to believe that it's an extremely common thing.
> when you look at a code review and you see problems in what's there, catching them in the code review is great, because that means less chance of having to fix it later.
> if you see a code review and nothing's wrong with it, you just approve it. sometimes you can note things as "nits", meaning, fine for now but maybe next time do it this way.
Yea... Those aren't getting done at 9/10 companies. They're getting ignored.
> syntactical style and formatting should be automated, if not with tooling that actually rearranges the code in that way (e.g. [black](https://github.com/psf/black) for Python) then at least including well tuned linter rules. We use black, my own zimports tool for sorting imports, flake8 with four or five plugins as well as "black check", so we spend exactly zero time dealing with anything to do with style or code formatting, including having to even type any of it. if people are fighting over code style then there need to be tools that lock all of that down so there's no disagreements.
Yea, we have those. I still get asked for stupid-ass code style changes, and have at every company for the last 8 years. Why? Because people can spot them and understand them. But they can't code review for shit. They can't understand the idioms of the programming language, they can't understand how certain paradigms work, they don't pay attention to the root causes of bugs, they don't know how to design a database, they don't pay attention to performance, etc. I would say 80% of the industry can't code review for crap. That is the biggest problem, and because they can't do that, all the other problems are just symptoms.
> I work on the Openstack project and that is where I learned to do code review with Gerrit, it is completely essential for large projects and is an enormous productivity enhancement. Decades ago before CVS was even invented, the notion of using source control seemed crazy to me. That position was obviously insane, even though we were stuck using Visual Source Safe at the time. That's how it feels to me to be doing a real project today without code review and formatting tooling in place. Like even for myself, if working alone, for a large project with lots of changes I still use a code review tool, just to review myself.
This doesn't dispute or refute my point. It merely says that in some cases it's used well. It doesn't change the fact that the majority of the time it's cargo cult, that the majority of people who code review can't code review for crap, and that a high percentage of people just skim the review.
zzzeek
sounds like you just work at awful jobs. i wouldn't stay at companies with the problems you describe above. wishing you good luck.
that_guy_iain
You will find that that's the majority of IT companies. The thing is, the root of the issue is people, and people are generally the same wherever you go.
jolux
1. Let's separate the idea of code review from the GitHub Pull Request feature. They are often coupled, but they don't need to be. You could easily have a policy and a workflow that requires reviews, but doesn't care whether they happen before or after a merge. I'm pretty sure this is how things work at Etsy, and I know there to be a diversity of code review practices more broadly in high-performing engineering organizations.
2. Of course code reviews don't catch all bugs. This is the same argument that is frequently used to reject static type systems and memory safety. The purpose of code review is to ensure that more bugs are caught, the kinds of bugs that can be caught through code review. It also helps ensure that a high level of code quality is maintained, and that at least two people understand how a change works, which is good for team cohesion and resilience.
atomashpolskiy
The problem is not with code reviews but with how adversarial they have become. Instead of making the best attempt to comprehend and embrace the author's style and intention, and finding compelling arguments for the question "why is this code probably fine and safe to merge as-is?", people nit on all kinds of stuff, mostly highly subjective. Along the way they don't actually catch that many problems or bugs; instead, new problems and bugs are introduced while their nits are implemented. All surrounded by a lot of noise that has to be dealt with in an already stressful multitasking environment.
I agree with other posters in this thread that most people can't code review for crap and have a very perverted idea of code review's aims and goals. Code reviews as they are practiced now only serve to destroy trust, velocity, and team morale.
As someone noted elsewhere in this thread: if it's me (directly or indirectly) who's going to be held accountable for a deficiency in my code, whether it has been reviewed or not, why do I have to adjust my code to someone's arbitrary criticism? Common sense tells me that not requiring my code to be reviewed at all times, and thereby stressing my personal responsibility for the things I do, would actually increase my sense of ownership and keep me on my toes, more likely leading to better coded and tested changes, wouldn't it? That is, it's not about blindly trusting all code that I produce, but rather trusting my ability to judge and make final decisions: to request code reviews or not, to address comments or not, to do my job as I see fit or not.
RegW
> The problem is not with code reviews but with how ... they have become.
Perhaps "banal" is the word you are meant to use.
There's a saying about that a 10 line PR will get 10 comments, but a 1000 line PR will get a "looks good!".
Of course this is not really what is happening. A 10 line PR can be completely unintelligible and a 1000 line PR might be a joy to read.
Once the code base has become a ball of mud, no one really bothers to review anything properly - it's going to take as long as writing it yourself. So perhaps we get picky about some worn-out code style point (incorrect indent, etc.) just to show we are alive.
theonething
> Once the code base has become a ball of mud, no one really bothers to review anything properly
This is the standard code review experience I've had across companies I've worked at. It's just theatre, bad and irritating theatre.
abeppu
> Instead of making the best attempt to comprehend and embrace the author's style and intention, and finding compelling arguments for the question "why is this code probably fine and safe to merge as-is?" ...
I don't think we should take it as given that the goal should be to approve the code in the originally written form. In some sense, I think if you're in an environment where you're going to be doing PRs for every change anyway, if your code is consistently perfect when you cut the PR, then probably you've cut it too late?
What if, instead, we framed PRs as the basis for an artifact-based discussion about how to do something? The code needs to exist in a complete enough form to provide a concrete basis for discussion. But if you see your reviewer as a resource and collaborator, who might have ideas or insights that you didn't, then ask for those ideas and insights a bit earlier, before the code is polished.
The best PRs aren't the ones that catch a bug. The best PRs offer some criticism or question or suggestion that allows you to reframe an abstraction or refactor something to be simpler, more flexible, more testable, more readable, faster, whatever, and from which the original author learns something. Your code might not have had a bug before, but that doesn't mean it cannot and should not be improved.
Circling back to 'trust' though, this approach does require a degree of trust. The reviewer can't be focused on nits and gotchas; more minor things are inevitable if our colleagues pull us in sooner, but we have to trust that they'll get sorted out.
detaro
> The problem is not with code reviews but with how adversarial they have become.
I've never really seen that. What do you think is the reason for people thinking that's the thing to do?
adinisom
Incentives set up a race to the bottom.
I worked in an environment that was like Raycast and later adopted gatekeeping code reviews. If the code reviewer wasn't happy you couldn't merge. This gives the reviewer power to have the author shape the code the way the reviewer prefers at little cost to the reviewer. And of course the authors often respond by doing likewise when asked to review code. My approach as a reviewer was to provide feedback without blocking merges, but this alone is not enough to incentivize reciprocal behavior.
There was a guy at work who would always complain about numbers that weren't given names as #defines regardless of whether it made sense or not (Magic Numbers!). My co-worker asked him why he was so picky about it. Apparently because someone did that to him.
yupper32
> Pull requests don't prevent bugs.
Is this some anti-vax satire?
Of course pull requests (code reviews) reduce bugs. And of course some slip through. It doesn't need to be 100% to be useful.
Abroszka
I haven't seen any research that supports either. I wouldn't be surprised to learn that code review does nothing other than share knowledge, or to learn that it reduces bugs by 50%. I just honestly have no idea, I do code reviews, because we do code reviews.
In fact, I have seen alarmingly little research about code management. Code review, standups, agile, etc.: does any of it do anything useful? I've only come across anecdotal evidence, which can be dismissed.
buttersbrian
I think pull requests are synonymous with code reviews here.
I think it is unquestionable that code reviews catch bugs. Whether they're a good use of time is another question.
Abroszka
> I think it is unquestionable that code reviews catch bugs.
Yes, it does catch some, but does it catch more than no code review? There is no point in catching bugs if it also creates bugs to catch.
yupper32
Sorry, are you saying that it's possible that code reviews cause more bugs than they solve?
That's absurd...
Verdex
Perhaps unlikely. Or really surprising. But not absurd.
Imagine some reviewer who rejects all code reviews that don't have some recognizable pattern in them from the gang of four book. Fully complete and working code gets hacked up last minute to accommodate this person who has more seniority than sense.
Or... maybe someone who always wants there to be a single return in every function. Or heck, someone who demands that you always do early returns. Either way, they demand that some carefully crafted code be swapped over to its equivalent dual, and during that transformation a mistake is made that nobody notices.
I think the point is that the scientific way to show that code reviews work is to actually run the experiment, instead of just saying, "why, it's absurd that a code review could cause more defects."
I mean isn't that how hand washing got introduced into medicine? "Hey everyone, let's try washing our hands!" "Why, the hands of a gentleman are always clean. It's absurd to suggest that we should wash our hands."
adinisom
I've seen it happen.
I was on a team of competent people working on a dense code base before and after the introduction of gatekeeping code reviews. The code base was large, so no two people would be working in the same area. Before code reviews:
- Team members working on a section of code would find and fix bugs in it.
- Team members would watch the repository history to monitor for any mistakes.
When gatekeeping reviews were introduced, they were not particularly effective at finding bugs because bugs tended to be subtle and required time working on the particular code in question to identify. But the introduction of gatekeeping code review caused bugs because:
- Once reviewed, code was assumed to be good enough because hey, two people agreed on it.
- Any bugs identified would require code reviews to fix which would be an uphill battle and probably involve project management.
- Nobody's looking that closely at history because who wants to deal with finding problems and pushing through fixes? And hey, it's already reviewed.
- Knowledge of code that is being reviewed becomes stale as you start working on other features. This undermines pre-commit testing as you make changes to mollify the reviewer, and can thus lead to bugs.
- Lowering velocity lowers the rate of fixing bugs.
- A bug could be fixed but stuck in review, so it's still in the codebase for other developers to run across. You could argue: well, it's documented in JIRA! But bugs can manifest in different ways and affect multiple systems.
Abroszka
Why would it be absurd? If there is a process to reduce risk, people take more risks. I have been guilty of submitting code for review when I wasn't 100% sure it was perfect, just because I knew there was a code review process. So this definitely exists; I'm just not sure how common it is.
deckard1
Also, the trick to code reviews is to leave in a few low-hanging obvious bike sheds. The reviewers will tell you what color to paint them. Done.
I'm being slightly sarcastic. But not sarcastic to the point that I haven't done just that. Just as the nail that sticks out gets hammered, the code that is too perfect gets increased scrutiny.
aflag
It depends on the team’s culture. Where I work most of the code reviews have 0 comments
buttersbrian
If it catches ANY bugs, it is objectively better than no PR/code-review from a bug standpoint. Unless you're arguing that PR/code-review creates more bugs than it solves?
Again, I think the question is value. Is a dev's time best used in code review vs. something else? It's almost certainly the case that sometimes the time is better spent elsewhere, depending on the developer and the needs of the organization.
munchbunny
I think the question is the same one as "do bike helmets reduce bike injuries?" I.e. without code reviews, is everyone more careful? I can't think of any proper academic research that answers the question.
In my anecdotal experience, there is at least one class of bugs that code reviews are good at catching that a large expenditure of self-review effort often doesn't catch: security bugs.
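To sketch what I mean (the handler and the `Db` interface here are hypothetical), this is the sort of thing an author can stare past fifty times while a reviewer spots it immediately:

```ts
// Assumed interface, just to keep the sketch self-contained.
interface Db {
  query(sql: string, params?: unknown[]): Promise<unknown[]>;
}

// Before review: user input is interpolated straight into the SQL.
// With userId = "1 OR 1=1", this returns every user's orders.
async function getOrdersUnsafe(db: Db, userId: string) {
  return db.query(`SELECT * FROM orders WHERE user_id = ${userId}`);
}

// After review: a parameterized query keeps input as data, not SQL.
async function getOrders(db: Db, userId: string) {
  return db.query("SELECT * FROM orders WHERE user_id = $1", [userId]);
}
```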
Abroszka
> If it catches ANY bugs, it is objectively better than no PR/code-review from a bug standpoint.
No, that's false. That's only true if you also assume that people write the same quality code whether there is a code review process or not. Which might or might not be true, no idea.
cunac
If anything, I would think that person would write the same code, but without a PR they'd lose the opportunity for better quality. People are blind to their own mistakes. I can't count how many times I've stared at code without seeing something, and another person spotted the problem almost immediately.
aflag
I don't have a measure of how many bugs it prevents. But I've caught a few in code reviews and people caught a few of mine too. So, it does prevent some.
canyonero
Yeah -- I LOLed at this one because I've caught at least hundreds, maybe thousands of bugs during review on pull requests.
It's weird and silly to expect prevention from a tool that is not designed to prevent. Peer review (and the PR model) exists as a quality-control tool for many different aspects of someone else's work.
jeremy8883
Yeah, in the past month I've caught 2 bugs in code review. One was a performance degradation due to a function being passed into a React `useMemo` hook's dependency array. The other was code that fixed a symptom rather than the cause.
I guess it depends on the quality of the reviewers.
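For the curious, a minimal sketch of the first kind of bug (the components and names are made up, not the actual code):

```tsx
import { useCallback, useMemo } from "react";

// Bug: `format` gets a brand-new identity on every render of Parent...
function Parent({ items }: { items: number[] }) {
  const format = (n: number) => `#${n}`;
  return <Child items={items} format={format} />;
}

function Child({ items, format }: { items: number[]; format: (n: number) => string }) {
  // ...so this dependency array changes each render and the
  // "memoized" work re-runs every time, defeating useMemo.
  const labels = useMemo(() => items.map(format), [items, format]);
  return <ul>{labels.map((l) => <li key={l}>{l}</li>)}</ul>;
}

// One possible fix: give the callback a stable identity.
function FixedParent({ items }: { items: number[] }) {
  const format = useCallback((n: number) => `#${n}`, []);
  return <Child items={items} format={format} />;
}
```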
rmk
This was the way most software used to be built. In the past decade there has been a heavy reliance on code reviews, perhaps because of the rise of dynamic languages, where it helps to have test coverage and eyeballs on the code to get close to the help a compiler provides. Is the software of this decade much better quality-wise than the stuff built without extensive code reviews? I somehow doubt it.
Code review has become a way to design by committee: if every team member gets to critique one aspect or another on every code review, most design paradigms will fail to make it through the gauntlet of the ad-hoc design process the code review entails.
Code review has also become a way to micromanage individual contributors without explicit effort from the engineering manager or similar authority figure. It can be administered by your colleagues, who can revel in the power it gives them over you.
Finally, code review is the ultimate busy-work that caters to the conceit of today's software developers: that they are craftsmen who produce bespoke objects of art and beauty. After all, if each piece of code is so thoughtfully critiqued and subjected to such scrutiny, it must be exquisite indeed! Oftentimes the reality is that the team comprises a half-dozen overqualified people polishing a turd that exists to squeeze both sides by creating a cozy monopoly or an oligopoly in a two-sided market. But that is too embarrassing to admit, and it certainly doesn't bode well for pulling in obscene amounts of funding and prestige!
sinaa
The core value of code reviews is knowledge transfer and increased harmony across the codebase.
You certainly don't want to work in a codebase riddled with different ways of doing the same thing. Code reviews help with making the patterns more structured and more widely adopted. It also helps everyone at all skill levels to learn something from their peers, through reading their code and sharing thoughts.
It also makes more people in the team familiar with the codebase, such that more people are capable of changing and improving the related parts rather than just the original author.
The potential benefit of finding defects is only a nice to have!
Linus Torvalds' talk at Google was about how Git is more a way of working than a piece of software. Everybody commits upward in a tree of trust. If you do it this way you get automatic code reviews, and in any team someone should be responsible for the "product" anyway and sit highest in the hierarchy.
jeffbee
It's a great example of how Linus has a massive blind spot for the failings of the git model. He's "sorry" they use Perforce. Git is "better than everything else out there" but he doesn't mention Perforce. And yet, the Perforce model is the one that got Google to be one of the fastest-moving organizations on the planet, with the biggest code base.
nix0n
> the Perforce model
I've used both Git and SVN enough to understand how a VCS might change how an organization creates code, but I've never used Perforce. What is "the Perforce model"?
jeffbee
In Perforce the concepts are different. You have a client, with a view that includes a subset of the repo, and you have changelists, which incorporate your related edits. When done with a changelist, you submit it, usually to trunk, because while you can branch a Perforce repo, you mostly don't need to.
Basically all the stuff in the first half of the talk where he whines about stepping on toes, conflicting with other people, politics, and special write-access groups: these are non-issues that will not sound familiar to Perforce users. If two people are using Perforce, they can both work on changes to the same files taken from the main branch (trunk) and submit them separately, without conflict or even awareness of the other person. As a bonus, this is all dramatically faster in Perforce than in git for large repos.
In this talk Linus demonstrates that he believes git is a state-of-the-art SCM system, without being familiar with the actual state of the art.
kubanczyk
Let's say we collaborate on the code below. I'll change line 1, you'll change line 5, we'll merge both changes, and then we'll see how Perforce makes the code run successfully "without conflict or even awareness of the other person".
1 x := 0
2 x++
3 x++
4 x++
5 if x > 7 {
6 fail_horribly_at_runtime()
7 }
LenP
To answer your question: neither Git nor Perforce will produce a merge conflict here. You'd need an SCM that isn't line-based for that.
With large teams, you need code review (which this bug can slip past), great test coverage (the tests will likely have a merge conflict), and, very importantly, a commit-by-commit build health indicator after merging. You can also do this in batches and bisect to find poisonous commits should a batch be broken - flakes notwithstanding.
jeffbee
Explain how you think git is different in this regard.
detaro
You're the one that claimed Perforce was better than git and could do this "without conflict or even awareness of the other person", so why don't you explain how git is worse? (I've never used Perforce and am genuinely curious what clever kinds of things it can do around merges, but more detail than "it's state-of-the-art and git is not" would be useful)
jeffbee
No, it is Linus who makes a bunch of claims about how Perforce is worse, in the video.
If two people edit the same line of code based on the same original version, that's a conflict, and there's nothing any SCM can do about it. A human must merge. But there's no reason several developers can't work on different functions or other disjoint areas of the same file, contrary to what Linus claims in the video.
The video is essentially one long straw-man argument where Linus takes the shitty parts of CVS and SVN, which were terrible, and applies them to Perforce, which it's clear he has never used.
tick_tock_tick
> In this talk Linus demonstrates that he believes git is a state-of-the-art SCM system, without being familiar with the actual state of the art.
I don't think that's very fair. Perforce is mostly just a shitty subset of git's features that some people pretend is sufficient. You can use git almost the same way as Perforce and have almost as bad an experience as using Perforce directly.
larsrc
Apart from the other reasons for code reviews already mentioned, there's this: an inexperienced developer (in a particular area) may not _know_ when a code review is appropriate. A change that looks innocent may have wider consequences than expected. Code reviews spread knowledge, and knowledge is the true output of software engineering.
alkonaut
Sounds nice, but it's perhaps the best fit if 1) you are a small team, 2) people know and trust each other already, 3) people are all equally comfortable asking for reviews (disturbing others, essentially), and 4) the code prioritizes feature progress over stability and correctness.
The situation can be very different in any of those aspects and mandatory review can still be a good idea. Are you building code that runs on surgery equipment together with 4 people you just met? I think you should probably have a formal review process. Or are you iterating towards an MVP for some webapp with a few people you consider friends? Lgtm just deploy to prod.
codeptualize
Good CRs are great; bad CRs are an incredible waste of time, and you might as well not do them at all.
We had a similar system where you trust people to know when they can skip the CR, and it worked great. We also ranked CR comments by importance, focused on preventing issues, and avoided subjective discussions as much as possible. It meant CRs were mostly about bug catching, sometimes constructive conversations weighing pros and cons, and other times suggestions that may or may not be applied.
It's so much nicer than a dogmatic, nitpicking CR culture that completely forgets what's valuable and what's not. Maybe it's my relatively small sample size, but I feel the two always go together.
etchalon
The only time I make anyone on my team submit a code review is if they're a junior developer, and we're actively trying to make them a better developer.
Otherwise, my experience with code reviews has been that it's one developer telling another developer how they'd approach the problem, or their preferred style, or some such nonsense, with the originally submitted code good enough to get the job done.
Everything else this author says rings true to me when code reviews are a universal practice.
They slow everything down, and only occasionally is that slowdown worth it.
przemelek
I used to love code reviews when I worked for Motorola in 2004; they did wonders. We observed a "phantom developer" show up, where the mix of inputs from different people produced a new way of looking at the code.
Those were Fagan-like code inspections. A lot of paperwork (we were doing formal reviews on paper, with one person in the role of reader), but it worked. All that paperwork forced people to follow the rules, and it seemed to work (though it was sometimes very painful).
Later, at other companies, I found that code reviews are in most cases bullshit placeholders, and in many cases exist only to make some developers feel more important, or to keep other devs from losing contact with the code. I saw the most stupid things in code held up as examples of how to write code, only because of code reviews...
Good code reviews do wonders, but as far as I can see, without a very strong culture in the dev teams and the whole company, code reviews don't add anything useful. And that strong culture is difficult. From my observations, it seems easier to build a team culture of working "master/main only", without branches or code reviews, than to build a culture of code reviews that actually works.
From my observations, most comments in code reviews are variations on "I would code it in a different way" or "why are you doing it this way / I don't like your variable name", usually written in a less direct way, of course.
Last year, the problems with Log4j 2 showed that even in open source, peer review doesn't help. Good developers let something into one of the most-used libraries that never should have been introduced there. For sure the variable names were nice, but somehow the whole "why are we adding this" was lost, because devs were looking only for easy-to-spot things.
So... yep, good code reviews are important, but my guess is that most code reviews exist only to make some devs happier that they still know what happens in the code, and that variables are named in ways they find acceptable...
system16
I guess it depends a lot on the company's dev culture or team dynamic.
In my current company, code reviews are essentially a second set of eyes from a peer. When things are "flagged" in code review, it's never confrontational. Mainly, reviews give those who haven't worked on the code base as long some insight into why things are the way they are, and they can open up discussions about improvements (or at least things to ponder for the backlog). The odd time, neat language or platform tricks get learned too.
I've worked in other places though where code reviews are done by rockstars or ninjas just looking for a reason to criticize or find fault with another's code so they can stroke their own ego. And if their code is ever questioned, expect a tantrum. Toxic environments like these say more about the company and people there than code reviews though.
Looking forward to the follow-up "no QA by default".
hsn915
I seem to be in the minority of people for whom this post resonated.
In my experience, code reviews are awful in the same way that micromanagement is awful. They're based on a lack of trust, first and foremost. They're also insulting. What's more insulting is being forced to deal with unreasonable review comments.
More than once, I had the experience of spending several days working on a feature, only to be blocked by some unreasonable "team member" who has weird fetishes about spacing and variable names and such fluff. It's really frustrating and insulting.
What's more, it doesn't really contribute anything to actual code or product quality.
The overwhelming sentiment I see in comments supporting mandatory code reviews on every single commit seems to be based on the idea that everyone is a junior who makes obvious mistakes that are easy to catch by just having someone else take a look at the code.
I suppose this should not be very surprising.
I've noticed in the last few years that more and more companies prefer to hire beginners (since they are cheap), and there's a huge wave of people learning programming to improve their careers and their lives -- which is a good thing for them -- but it means a lot of teams are full of newbies.
So it makes sense in this context to install a lot of "guard rails".
What I don't understand is why everyone thinks mandating code reviews is a good idea in the general case.
eyelidlessness
In my experience, code reviews promote trust. They make changes a collaboration and a conversation, where reviewers and implementor(s) bring different intuitions and concerns, and help expand understanding of risks and goals. Sure, a lot of that can happen before any code is written, but it’s inherently more abstract.
Code reviews aren’t just about code, and probably shouldn’t be framed that way.
pxllh
I always really enjoy reading both the Raycast and Linear engineering blogs because they’ve really got a great approach to empowering distributed teams, and giving engineers the autonomy to work well.
I can’t necessarily say this approach would work everywhere but it’s nice to see companies critiquing and challenging methodologies.
windows2020
I've seen a number of flavors of code reviews in practice:
* Superficial - Omission of an optional trailing comma
* Valid in context - Quadratic logic on small finite set
* Ensure intention - Task not completed/completed with adverse effects
Each takes progressively more effort, skill and context knowledge. There's a constant battle between resiliency and efficiency. There also tends to be a gradient of skill involved. Context is king when it comes to an opinion on this.
lazyant
According to LinkedIn, this company has 12 employees. Once they hit like 20 - 50 engineers and some decide to write code "their way" and some leave, it's going to be a mess to be onboarded there and make sense of code you haven't written yourself.
abstract_put
I completely agree. I have worked at a company that had exactly this trajectory. It's also not just the number of employees, but also the age of the code and the number of micro-pivots that nobody ever bothered to refactor through the code. Specifically, the company went:
- All senior developers who had a strong sense of the actual product and what they're trying to do -> no pull requests, everybody happy
- An awkward middle area where eventually they had tripled in size and code quality was haemorrhaging customers and paralyzing attempts to release -> "what do we change?", nobody happy
- Another doubling in size, giant organizational change -> PRs are mandatory, people aren't happy, but any attempt to reduce the PR burden leads almost in a straight line to bugs in production
0xbadcafebee
> we trigger an internal release every night [...] This allows us to test new changes within 24 hours [...] This workflow helps us to consistently ship updates every other week.
If instead of testing 1x a day, you tested 3x a day, you'd debug 3x faster, meaning you could ship 3x faster. Now imagine testing 100x a day. Every merge to main should test immediately and ship immediately. That's how you build trust, speed, and quality.
Your developers should be going, "fuck. I'm not merging this piece of shit yet and breaking production. I need more tests." Some people will say that waiting until you have enough tests is crazy because it's not fast enough. But the only way to get fast is to have trust, and the only way to have trust is more quality. More quality means fewer bugs, which means less cost and delay, which means shipping faster and cheaper. Go slow to go fast; shift left.
Tests are better than code reviews. With a code review, you're only reviewing one change at one point in time. 300 commits later, you're not reviewing how the 1st commit is affected by the 301st commit. But a test around the 1st commit is verifying against the 301st commit. Tests are the eyeballs that never stop looking.
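To make that last point concrete, here's a minimal sketch (the function and file names are hypothetical, not from the thread) of how a test written alongside an early commit keeps verifying behavior against every later change:

```ts
// regression.test.ts -- hypothetical example; runnable with `node --test`
// after transpiling. A test added with "commit #1" keeps verifying the
// behavior when "commit #301" touches the same code months later.
import { test } from "node:test";
import assert from "node:assert/strict";

// Imagine this function landed in an early commit.
function parseVersion(tag: string): { major: number; minor: number } {
  const match = /^v(\d+)\.(\d+)$/.exec(tag);
  if (!match) throw new Error(`invalid tag: ${tag}`);
  return { major: Number(match[1]), minor: Number(match[2]) };
}

test("parseVersion parses well-formed tags", () => {
  assert.deepEqual(parseVersion("v1.12"), { major: 1, minor: 12 });
});

test("parseVersion rejects malformed tags", () => {
  assert.throws(() => parseVersion("1.12")); // no leading "v"
});
```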
kgeist
It probably makes sense when all the developers are experienced seniors with deep knowledge of the codebase, but often that's not the case. I don't think it scales beyond a small team of senior founders who are just starting to build their first MVP. According to Crunchbase, they were founded 2 years ago and have fewer than 10 employees.
At our company, code reviews help:
1) onboard new devs and teach them about the codebase, and catch silly rookie mistakes early on
2) encourage readable code: what I write is readable to me but can be incomprehensible to others, and code reviews help figure that out
3) catch mistakes from seniors, too; it's nice to have another line of defense (some mistakes can be ticking bombs, not immediately catchable in testing)
In my opinion, as a developer, you shouldn't even trust yourself, let alone others, to deliver good product (especially if it's mission-critical software with a large userbase), because errare humanum est.
fjfaase
The article is not saying that no code reviews are done, but that they are not mandatory. For onboarding, code reviews are useful, but you could also do some pair programming or work together on the code.
kgeist
We have not only code reviews but also mandatory architecture reviews. Everyone dumping random code to main doesn't scale in the long term without a coherent architectural/technical vision. Without it, it's very likely to end in a Big Ball of Mud. We started that way too - faster iterations, little design - and it did help us gain some foothold, but after a few years it became unmanageable and hard to extend/maintain. It's been 3 years since we started untangling our legacy system built that way, and our growth is still impeded by it. Context matters: their approach works OK if you're only starting out, but at some point you're going to have to rethink your processes. Many (most?) companies start that way, foregoing code reviews, even tests and proper QA, to release earlier; but eventually you start to appreciate them as your product matures. It's important not to miss the moment when your codebase becomes large/complex enough that you need to rethink your processes.
codeapprove
If you view code reviews as expressing a lack of trust, you're probably doing them wrong. Code reviews should be a conversation among people who trust each other. You're saying "hey I'd like to know your thoughts about this, and I'd also like you to learn a bit about what I am doing"
When done right code reviews spread knowledge and build trust among team members. Of course there are certainly situations which call for a push without a code review, but those should be exceptions not the rule.
While a lot of the issues around code review are cultural, I also think there's a lack of good tooling. That's why I'm building CodeApprove which makes reviewing code on GitHub faster and more enjoyable:
https://codeapprove.com/
pearjuice
Code reviews make sure the other party at least thinks a bit about the code they write: "Someone is going to look at this, so let's at least tidy it up so it's not cobbled-together, underperforming spaghetti." Sure, you can trust your co-workers, but do you trust them enough that they have that thought every day of the year?
foxbarrington
Agreed. It's amazing how code magically gets better when you know someone else will be reading it.
brunojppb
Early in my career I was like "cool, I can just merge this to master". Now, after working on several projects and in several small and large teams, I am like "Geez, I can't merge this straight to master. I need a second pair of eyes on this."
da39a3ee
> PRs are an afterthought and not what you want to spend your time on as an engineer.
Couldn't disagree more, and I don't even understand where it's coming from. When you've done a substantial body of work, it certainly makes sense -- and is satisfying, and enjoyable -- to explain what it is and how it works and present it to people. And it makes sense for those reviewers, given that they work on the same codebase, to learn about the code change.
I get the impression that the authors of TFA maybe just aren't writing any interesting code.
loosescrews
If you find code reviews are significantly slowing down your process, the problem might be your code review tool. My employer switched from Github to Gerrit and I have for the most part stopped feeling like code reviews are slowing me down. I just keep stacking my changes and the reviewers can get to them when they have time. This also allows me to review others' changes when I have time and not feel like I need to drop everything in order to unblock people.
donatj
This is a remarkably frustrating read. Of the value we get from code review, catching bugs is one of the smallest.
Things we catch include
- Duplicating functionality we have elsewhere.
- Duplicating business logic we have elsewhere.
- Recommending cleaner abstractions, particularly with younger devs.
- Query optimization - The time for your DBA/data developer to review things is well before they are a problem.
Developers with this focus on “speed” in my experience tend to build a lot of throwaway code.
wreath
> - Duplicating functionality we have elsewhere. - Duplicating business logic we have elsewhere.
This is especially important when you have people otherwise working in silos, which seems to be the case for Raycast.
brokencode
I think this would work really well for a small and experienced team, though where I work, we have a lot of developers fresh out of college, and I would be pretty nervous setting them loose without close review of their work.
skeeter2020
Especially because a HUGE benefit of code reviews is exposure and knowledge transfer. For us it's a key area where anyone can get insight into the how and why, plus control their participation from requester/reviewer to passive observer. It's actually a pretty poor place to catch bugs introduced by new, junior devs.
klabb3
The idea is not that everyone has to be omnipotent, it's that you trust people enough to send for review and ask for help when appropriate, instead of always. I.e. when you're new to the code base or language.
brokencode
I don’t think I’ve ever reviewed code from a new hire that didn’t have multiple problems or style issues that needed to be addressed. That’s also actually true for code from most experienced devs too. And it’s rare for my code to get through code review with no issues found. So bottom line, I just don’t think this would work for most teams and developers.
klabb3
New hires definitely fall into the category where code reviews (and many other forms of knowledge sharing) make sense. That's not what this is about; it's that they become mostly ritualistic when mandatory, the same way that standups and many meetings are often useless, especially when mandated for no clear purpose.
ReleaseCandidat
> I just don’t think this would work for most teams and developers.
Most teams and developers don't do code reviews because the only person that could do code reviews would be the developer himself.
brokencode
Do you mean to say that most developers work alone, so don’t have anybody to review their code? That’s probably true for hobby projects and the like, but companies rarely have teams of one working on anything.
ReleaseCandidat
> Do you mean to say that most developers work alone, so don’t have anybody to review their code?
Yes and no. They do work in teams, but these teams are so small that everybody has 'their' part of the code, which nobody else even looks at, much less changes. I'm talking mostly about non-software companies.
Btw, most people who actually do work alone are self-employed. But code reviews don't make sense if there is no other developer.
Someone1234
When my code has been code reviewed I've had bugs found, improvements made, and edge cases brought up. I genuinely believe my code is better for having been reviewed, and I work harder on the quality of my code because it is going to get reviewed.
Now, code reviews aren't a panacea, there's an awful lot of bike-shedding going on (in particular whenever interfaces are reviewed) but they're a net positive in my view. It isn't about trust, it is about creating a quality product, along with postmortems (an under-utilized philosophy) and automated tooling (inc. testing, validators, etc).
autarch
I think the word "trust" is covering a lot of ground in this piece and the discussion here. There are a lot of different things we may or may not trust about our colleagues and ourselves.
Here's how I think about it:
* I trust my colleagues and myself to do good work to the best of their ability.
* I trust my colleagues and myself to care about the quality of their code.
* I trust my colleagues and myself to follow coding standards.
* I trust my colleagues and myself to write tests and to do manual testing when needed.
* I trust my colleagues and myself to pay attention to CI and code quality tool output and to act appropriately on that output.
* I _do not trust_ my colleagues or myself to do perfectly bug-free work all the time.
* I _do not trust_ my colleagues or myself to be able to model the entire system we are working on in their heads.
* I _do not trust_ my colleagues or myself to always come up with a complete set of tests, including all corner cases and paths through the code.
* And finally, I _do_ trust my colleagues and myself to be human, which means we are not perfect, we get tired, we get distracted, we forget things, and we make mistakes. But I _also_ trust my colleagues and myself to try to help each other avoid problems in the face of this imperfection, and one of the best ways to do this is with code review.
canyonero
I could see this working well for teams that are engineering led and where customers don't expect a ton from the software. If you're trying to ship fast and get to market quickly, then fine, skip the tests, skip the code review.
Once you have lots of engineers shipping many different apps and needing to work within and across teams, this system is not going to be fun. When a nasty bug ships and you're responsible, you'll wonder "should I have requested review for that one?" When one of your colleagues ships a few bugs that get you paged during your on-call, this system is likely to erode trust on both ends. IMO, a no-code-reviews model is going to stunt junior engineers' career growth, as they will not be performing what is common practice on software engineering teams. It's also going to keep out many other stakeholders who may wanna weigh in on the software you're delivering, be it UX, Accessibility, Security, Documentation, or Product experts. Pull requests keep others on your team in the loop and educated.
I agree that this model gives you the advantage of speed, but I don't think it builds trust. I trust my colleagues and we do pull requests. I don't feel like I'm missing trust because I can't push to `main`. Code reviews have very little to do with trust. They serve as a communication tool, as a way to give the best possible experience to customers, and as an opportunity for alignment with our colleagues.
renewiltord
Most small tech companies probably operate in this manner. We do, too. The practice is that devs do want a second pair of eyes on the code since they want other people to detect flaws in their assumptions or choices before there is pain.
If I'm doing something trivial, I might just push to `master`. For instance, I don't run tests on a `README.md` change.
Interesting to encode it explicitly, though. Strong culture choice, for sure. And probably well suited to their product since they are probably all dogfooding `master` and pushing only periodic tags off it.
siva7
He basically described the classic Subversion workflow using git. I feel like people rediscover subpar approaches using technologies that were intended to evolve those approaches into a better workflow.
abstract_put
I've heard it referred to as "trunk-based development". I think it's not so much subpar as contextual. If there's a team of 2 developers pushing as hard as possible to get something out, I completely agree with OP that code reviews are a waste of time (unless requested). If there's a team of 100 developers, I can't imagine the insanity of pushing directly to the shared branch without review.
Somewhere between those, the context is such that the benefits of code reviews outweigh the costs. Do you think otherwise?
wreath
The article mentions fast user feedback and dogfooding as alternatives to code reviews for getting feedback. I think these are two different things: user feedback and dogfooding are great ways to know whether you built the right thing or an intuitive solution, but they have nothing to do with the underlying engineering of the solution.
I much prefer (software) design reviews, they make code reviews a sort of a sanity check to see if the design was followed (reasonably), we're testing the right thing, and maybe there were some business rules that weren't followed because they weren't obvious in the design process. This is especially important when you have a lot of engineers in the team/organization.
Clicking around in Raycast's jobs page, I get the impression they prefer ICs to work individually and ship things on their own with little to no collaboration, so no code reviews seems to be aligned with their values.
CivBase
> Pull requests discourage trust.
Good. You shouldn't trust my code until it's been reviewed. I don't even trust my own code until it's been reviewed! I've gotten much better with time specifically because of the things I've learned from reviews, but having another pair of eyes on my code still frequently uncovers things I overlooked while in the weeds.
> Pull requests don't prevent bugs.
This is just wrong. It doesn't prevent all bugs, but nothing does. Reviews, testing, static analysis, requirements, proofs - they're all designed to improve confidence in software. They all work to varying degrees, but none of them are perfect.
> Pull requests slow down.
Good. This is a feature. If a change is big enough that it takes significant time for another qualified engineer to review, there's a high chance that an issue will be uncovered. At the very least, it ensures another engineer is familiar with the changes.
If reviews drag on because engineers don't treat them as high priority, then that's a culture problem - not a problem with reviews.
ram_rar
This works great when the team size is tiny and everyone is an experienced engineer. I grew my team from just 2 -> 8. Without a robust CI/CD pipeline and established code review practices, we would have been breaking prod every other day.
I agree with the author that trust is proportional to velocity. But it takes some time to build that trust. Reviewing code is a skill that engineers need to acquire; a lot of it is more EQ than IQ. Code reviews should give the PR owner context that they don't get just by going through the code. Let the review bots and other integration tests take care of breakage etc.
I check PR stats for my team's repo and track the mean time to PR approval and a few merge-time metrics. This is just to make sure there is no endless back and forth happening during reviews and that the median time to merge a PR stays relatively constant. One can leverage these metrics to know which team member is being a douchebag.
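As a rough illustration of tracking such metrics, here's a sketch against the GitHub REST API (owner, repo, and token are placeholders, and it approximates "time to approval" with time from open to merge; approval times proper would need the reviews endpoint):

```ts
// pr-stats.ts -- illustrative sketch only; requires Node 18+ for global fetch.
const OWNER = "your-org";   // placeholder
const REPO = "your-repo";   // placeholder

async function medianMergeHours(): Promise<number> {
  const res = await fetch(
    `https://api.github.com/repos/${OWNER}/${REPO}/pulls?state=closed&per_page=100`,
    { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } },
  );
  const prs: { created_at: string; merged_at: string | null }[] = await res.json();

  const hours = prs
    .filter((pr) => pr.merged_at !== null) // skip PRs closed without merging
    .map((pr) => (Date.parse(pr.merged_at!) - Date.parse(pr.created_at)) / 36e5)
    .sort((a, b) => a - b);

  return hours[Math.floor(hours.length / 2)]; // median (even-count nuance elided)
}

medianMergeHours().then((h) => console.log(`median time to merge: ${h.toFixed(1)}h`));
```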
DaveEM
This is an interesting read and I'm glad to see that the closing paragraphs emphasize the need for each team to evaluate their own needs. That said, for me it all comes down to the definition of "best product" in this statement:
"They all want to build the best product in the shortest time possible"
From my perspective, the "product" is not limited to what customers see or the code itself. It's the set of outcomes from the code being written and executed. For example... Do other engineers understand the code? Can they support it efficiently? Does it fail gracefully? What is the impact of failures? Are failures automatically detected? Does it lead to security breaches? Did the code author learn things they could do better that they didn't know to ask about? Did other engineers learn from reading the code or asking questions? Are customer requirements satisfied? How much is customers' trust impacted by failures, bugs, and features that don't meet requirements? How big is your cloud provider bill? Etc...
If the priority is quickly delivering and iterating on features that satisfy customers' needs AND engineers will be supporting their own code, then the "No Code Review by Default" approach has its merits. In fact, I have personally found this approach to work well on constrained, isolated pilot projects where the priority is to unblock key scenarios for a limited set of customers and/or learn about customer needs before building the "real" solution.
For anything else, I would suggest following the author's own advice: "ask yourself if the circumstances of others apply to you".
ryanbrunner
I don't trust myself to never, ever introduce a bug. I'd rather my teammates look it over, because they may have more context on certain areas of the system than I do, or can even just catch a typo.
I trust my teammates, but if "trust" means "assume they are perfect developers with perfect knowledge of every part of the system who will never make a mistake", sure I don't trust them I guess.
zarzavat
This comes under "do things that don't scale". If you have 100 developers is this an efficient way to work? Absolutely not.
If you have <10 is it an efficient way to work? Yes, it can be. Replace the formal process of code reviews with an informal process of reviewing new commits. Encourage everyone to regularly read the code and not silo themselves away.
keithnz
I feel their "responsibility" and "trust" argument is maybe a little oversold here, but their final paragraph is right on the money.
I would argue a better title is "No rules/process by default". Which doesn't mean you go wild, having a strategy to develop software is very important, but strategies need to be customized. It means you apply just enough and constantly adapt so you have just enough of whatever you need to make sure development works smoothly. People will say PRs improve communication, can help prevent problems, help understand what changes are going into the code base etc, but all of those things can be done in many different ways, and depending on the group of people, there can be more effective ways of achieving that. So adopt what you need ( which may be PRs, or some variation ). But whatever you do, don't go overboard with justification.
The dangers with overselling and defending your justification for doing or not doing something can be a problem. It also can mean you don't adapt quickly to change. By overly emphasizing your approach being about "responsibility" and "trust" you kind of make out that PRs are about a lack of trust and responsibility, which is incorrect and may prevent you from adapting because you have misattributed a quality to a practice/process. This often happens when people go from A (problematic) -> B ( much better). They attribute too much weight to what makes B better than A when they may need to change those things to go to C (even better). In my 40 odd years of observing software development, this is really common, especially for many devs with around 5-10 years experience (including myself). You find much better ways of doing things, and you start becoming advocates for those things. In reality, you always need to be adapting and questioning and always deconstructing and reconstructing your strategies about how to do software.
cfer43432t
The biggest misconception about code reviews is that they are for catching mistakes and improving quality. Well, you know what improves quality? QUALITY ASSURANCE. If you have never heard of it, THAT is the issue, and your group/section/company should really educate itself if this is a new concept.
Also, code review as a communication tool? Maybe, but again, there are better tools for the job.
bob1029
There is a Russian proverb I heard used very early on in my career that has really stuck with me over the years:
"Trust, but verify"
I think a lot of other commenters nailed the trust equation in more verbose terms. I would add that in some industries or for certain customers, a 2+ person code review rule is mandatory for compliance, regardless of any perceptions around how mundane the changeset might be.
mkl95
Code reviews where some coworker looks at the changes, types "LGTM", and proceeds to approve it do next to nothing to prevent bugs and bad decisions.
Code reviews where some coworker runs the code on their machine and has plenty of time and power to express the ways it could break or it could be improved are useful.
When the former kind of review happens more often than the latter, it often has to do with soft skills rather than hard skills. Some requirements for a good review are honesty, mutual respect, and the ability to handle criticism. Beyond that, some engineers struggle to understand that there may be more than one good approach to a problem.
Ultimately, engineers are not entirely to blame for bad reviews. In my experience it is often a company culture issue. Authoritarian leadership trickles down to middle management, senior engineers, etc. If people don't feel empowered to challenge others' code regardless of their seniority or perceived status, weird or downright bad technical decisions will be made.
the_arun
OP is describing their side of the story and what they did. I feel people are going overboard with their emotions. A few points:
As OP says, team members can ask for a review if they are not confident in what they are changing. But code reviews usually make everyone wiser -- domain context, design thinking, etc. -- so we need to find other channels to ensure that.
A few questions:
1. Do we have data on how many bugs we fix at the review stage?
2. How many times do we blindly approve based on our "trust" in our colleagues? Biases definitely play a major role in reviews.
3. What happens if one engineer quits and the others don't have any context? So we cannot say code review is not needed for any team; it may work for some.
4. How many PRs do we merge using our admin rights, without getting any peer review, because of delays in getting attention from our peers?
touchpadder
This may work in a small colocated team where everyone is an experienced/smart cofounder and knows the codebase by heart, but in a larger team where new devs keep joining, it would be a total disaster. Undoing errors that were pushed to master is a bigger waste of time than doing reviews. If all developers are good, reviews usually take a minute and are only a quick scan for anything alarming.
In my private projects, when going over my code again in e.g. Sourcetree, I'm tempted to stage stuff that is messy, has many comments, and should be rewritten, hoping I'll come back to it another day. The temptation is too big when there's no one looking. The fact that someone will be reading your code encourages a proper cleanup before creating a PR.
efxhoy
There is obviously a tradeoff here, and the optimal amount of code review varies with impact and difficulty. Dogmatically doing, and waiting for, an in-depth review of a tiny and likely insignificant change is silly. Blindly pushing big db migrations is also silly.
simonbarker87
I can see their point of view but it sounds like they focus on the downsides of code review and not the benefits.
Code reviews increase awareness of changes, help with desiloing, give junior devs a chance to see how others write code in their own time, reduce the chance of mistakes, and encourage collaboration.
To stop the endless ping-ponging back and forth, I advocate for Must/Should/Could in all code reviews. As long as everyone gets it, it massively speeds up code review and increases team happiness around the process.
We're very small (3 full time devs, a mostly full time tester, a part time student, and another student temporarily full time), and the three full timers are pretty seasoned, so this may not apply universally, but....
We don't review code as a matter of course. But if any one of us writes code that doesn't pass our personal smell test, or that we think won't pass someone else's smell test, or <insert criteria here>, screens are shared and ideas tossed about.
We almost always <solve the problem we were having|create a better solution|decide to accept the debt and open an issue until we have a better way>, in pretty short order.
So it's sort of JIT/OnDemand code reviews. Kinda.
jkingsbery
> They can request a review if they want somebody else to take another look at their code.
This can work sometimes and with some groups of engineers, but the engineers I worked with in my career who most needed this (1) would never ask for another set of eyes and (2) were the exact type of person who think a code review equals "people don't trust me."
When I started at my current large tech company as a senior software engineer, I learned a ton from having "more junior" engineers review my code. I also learned a ton from reviewing other people's code. Even now that I have the fancy title of Principal Software Engineer, if I want to make a change to our team's code, I need two ship-its from my teammates, same as everyone else. It's not about "not being trusted," it's about earning trust of others by not pretending your poop doesn't smell.
> Code reviews aren't high priority for engineers.
Well there's your problem. On teams I've been on that do code reviews well, code reviews are considered a high priority. When code reviews are given high priority, and when engineers take the time to plan their work so that changes are small enough to understand, code reviews really don't take that long and are not a blocker to productivity.
> It's up to you and your colleagues to set the rules for your team. Don't adopt best practices in a dogmatic fashion. Rather, ask yourself if the circumstances of others apply to you.
This is true - companies shouldn't blindly copy practices from other companies. I have been a part of teams before that didn't have required code reviews and were still effective, but this required a unique set of circumstances in which all engineers were pretty senior, and so we were doing a lot of owning features end-to-end. It is fun to be part of a team like that, but it doesn't scale very well.
What this article also doesn't address is that reviewing code is its own skill, and many engineers are not naturally good at it. Just as it is bad to blindly copy other companies, it's also not great to blindly equate "we're not good at a thing" with "this thing isn't useful."
anthonyskipper
Luckily, Raycast works in an unregulated industry. In a lot of regulated industries this would not be an option, as 4-eye reviews are a mandatory policy requirement.
I also don't know how smart it is to brag about; it seems like just asking for something to go wrong.
adam_arthur
There are other ways to solve the problems the author calls out.
Reviews being slow can be largely solved with review SLAs and automation to ping reviewers. Code reviews themselves can be quite fast if you know the code author well. Some people you can trust to make high-quality changes, and the review covers more high-level/directional questions. Others are more junior or haven't built that trust yet and require more scrutiny.
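A minimal sketch of that kind of reviewer-ping automation, with placeholder names and a log line standing in for a real Slack/email notification:

```ts
// stale-reviews.ts -- illustrative sketch; run on a schedule (e.g. hourly).
const OWNER = "your-org";   // placeholder
const REPO = "your-repo";   // placeholder
const SLA_HOURS = 24;       // review SLA; tune per team

async function pingStaleReviews(): Promise<void> {
  const res = await fetch(
    `https://api.github.com/repos/${OWNER}/${REPO}/pulls?state=open&per_page=100`,
    { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } },
  );
  const prs: {
    number: number;
    created_at: string;
    requested_reviewers: { login: string }[];
  }[] = await res.json();

  for (const pr of prs) {
    const ageHours = (Date.now() - Date.parse(pr.created_at)) / 36e5;
    if (ageHours > SLA_HOURS && pr.requested_reviewers.length > 0) {
      // A real setup would notify via chat; logging stands in here.
      console.log(
        `PR #${pr.number} has waited ${ageHours.toFixed(0)}h on: ` +
          pr.requested_reviewers.map((r) => r.login).join(", "),
      );
    }
  }
}

pingStaleReviews();
```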
But having done a large number of reviews, it really doesn't take long to do even an in-depth review if you're already familiar with the area of the code in question. The goal of a review is not to test the code for the author, or to identify bugs for them... which is where things could really slow down. I used to do 10 or so reviews a day, and it would take an hour or so to get through them while still writing a lot of commentary where applicable... not a rubber stamp.
I know the author mentions not wanting there to be a lack of trust, but it's pretty evident they'll realize the need to build trust first as they scale.
There is also an aspect of the code author writing code to a higher standard when they know it will have to go through another set of eyes before being merged. The best people can set this standard for themselves regardless, but it's unlikely that you could run an org at scale on this premise without a drop in code quality.
It is an interesting idea to try to run a company only hiring people that have the skills/attitude to be implicitly trusted. I think this approach is very feasible at a small scale if you can maintain very high hiring standards and are able to nab mostly 10x style engineers. These people will make mistakes, but will recognize them and self correct. Then you could just have people contributing under a guiding set of principles/standards, and "retroactive" review to ensure standards being followed.
But almost certainly impossible to hire at this quality/bar at scale.
Having said all that, I did work in a startup where in the early days certain people had write access and pushed directly. It mostly worked well at that small scale, but lots of technical debt developed in certain areas of the product that took years of effort to reverse/fix.
Code review: the process by which a team member learns something useful from another team member (and occasionally also vice versa), by reviewing their code with them.
"We unashamedly don't do code reviews unless requested explicitly" == "We are proud to announce we don't like learning from each other unless shit hits the fan*"
* which it will
weatherlight
We do code reviews because people make mistakes and because code reviews are learning opportunities, not because we do or don't trust each other.
bhawks
Two things I love about code reviews: first, the whole team keeps a general awareness of the changes happening in the code base; second, as the team grows and changes, cultural norms around testing, style, interface design, and so on get established, normalized, and strengthened.
Having worked in places where there were no code reviews the software development culture was all over the place. I wouldn't go back.
dan-robertson
I’m curious how they avoid pushing changes that break the build. Maybe they have some other reason this doesn’t happen or is hard, but having a large number of people trying to collaborate on a constantly broken repo is not a recipe for productivity.
I also wonder if there are typically 0 reviews or typically 1 — that is, are people even reading their own changes before they push them.
ReleaseCandidat
> I’m curious how they avoid pushing changes that break the build.
By not pushing changes that break the build.
In other words: that does not work if you can't test (actually test, not just try to build) your code before pushing it to main.
AlfeG
Is there anyone on the planet who always runs all the tests locally, even for a last-second minor change that definitely will not break the build?
It's especially fun for teams distributed around the globe, when you spend half a day fixing a build issue from the "night" workers. I've eaten a lot of that.
ec109685
I wonder how much refactoring is _not_ done because of the need for a code review. In the era before code reviews, I used to go to town on my codebase when I saw something I wanted to fix. Are folks more hesitant to make such changes now because they are afraid of the burden on the code reviewer?
justrudd
Not really. But if I start doing a major refactoring while working on a new feature, I’ll generally cut a new branch, move the refactoring over to it, and do a PR of just the refactoring. That makes it easier for the reviewer to only have to look for degradation of current functionality. While the refactoring is being reviewed, I’ll then usually start working on the new functionality on top of that branch.
It can get a little unwieldy if I’ve got a lot of refactoring going on, which I currently do. I inherited a codebase that never said no to any “hey, that’s a neat blog post, let’s try that way for this feature”.
geuis
Absolutely not. This is a recipe for disaster and a terrible example to set for any upcoming engineers that haven’t had much industry experience yet.
Code reviews help me be more confident in my own work. Even if 90% of the time it’s fine, there’s always that 10% where a second pair of eyes catches a mistake or offers a suggestion that makes the code even better.
klabb3
Meh. If 90% of the time it's a mindless ritual there is probably a better way to achieve the same goal. Code reviews are a huge time sink, especially for unimportant style- and naming nits.
Code review as mentoring can be great, if it's a directed 1:1 effort. Usually, it is not.
marcinzm
>Code reviews are a huge time sink, especially for unimportant style- and naming nits.
This isn't a problem with code reviews, this is a problem with your team's processes that code review has highlighted. These problems will show up in others ways if not during code reviews since clearly there's strong disagreement on coding styles and conventions. So go and fix or implement the style guides, linters and so on.
This is like saying that the way to fix fevers is to get rid of thermometers.
klabb3
> since clearly there's strong disagreement on coding styles and conventions
The same people who are perfectly happy to do a back and forth, adding 3 days of code review time, would never go in and refactor existing code to "fix" stylistic issues. It's a cognitive bias that we think everything is up for philosophical debate in reviews.
Btw, I'm one of those people who drove stylistic debates in an organized fashion. I learned that you simply can't police everything, even if we pretend that it's a good idea to do so. It's much better to share ideas and have them spread through voluntary mechanisms, such as dedicated tech talks, mentoring, postmortems, and other deliberate knowledge sharing.
That said, automated enforcement of pure formatting can work well if there's standard tooling, e.g. gofmt and rustfmt. But contrary to reviews, that's a way to suppress debate during day to day business.
Btw, my experiences are from a mega corp, not claiming it's true for every org.
tuyiown
> So go and fix or implement the style guides
How do you make this converge across devs without endless debate or resentment?
> linters and so on.
What happens when a rule has to change? Update the entire codebase, impacting everyone with conflicts? Tolerate divergence in existing code?
Talking about thermometers, are you actually tracking fevers, or merely minor bad breath ?
abstract_put
I think there's a lot of grey area in what "style" is, but formatting and whatnot, for example, is pretty easy and non-contentious in my experience. Most people want consistency more than they want their way.
How I did it as part of a ~35 person dev team was adopt one of the popular ones from the community (e.g. Google's) - treat it as a benevolent dictator. Apply it to the entire code base in one giant shot (disrupting any pull requests that were in flight but fixing them proactively), then enforce it with tooling. Leave the door open to anyone who disagrees with the defaults. It's on them to propose the change with rationale then put it to a quick majority vote with a subset of the team. I think 2 things have actually changed since the very beginning (line length from 80 -> 120 and something else that I don't remember right now).
When something changes, making a sweeping change is part of accepting it - if it's too risky/disruptive then it has to be very valuable to do.
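As one sketch of the "enforce it with tooling" step, assuming a JS/TS codebase with Prettier (the 80 -> 120 decision above would just be `printWidth: 120` in the shared config), a CI script can fail any change that drifts from the agreed format:

```ts
// format-check.ts -- illustrative; assumes Prettier is installed.
// Usage: pass the changed file paths as arguments in CI or a hook.
import { readFileSync } from "node:fs";
import * as prettier from "prettier";

async function main(files: string[]): Promise<void> {
  let dirty = 0;
  for (const file of files) {
    const source = readFileSync(file, "utf8");
    const options = (await prettier.resolveConfig(file)) ?? {};
    // prettier.check() returns false when the file isn't already formatted.
    if (!(await prettier.check(source, { ...options, filepath: file }))) {
      console.error(`needs formatting: ${file}`);
      dirty++;
    }
  }
  if (dirty > 0) process.exit(1); // fail the CI step / pre-commit hook
}

main(process.argv.slice(2));
```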
etchalon
There's a large number of things in any decently complicated system that a "style guide" can't cover and that there are dozens of potential ways to solve.
LaserDiscMan
As well as a time sink, code review can be detrimental if it's used as a KPI gaming mechanism or for passive aggressive one-upmanship.
watwut
An hours-long discussion about whether to call it "thing" or "thingId". True story.
mdaniel
> offers a suggestion that makes the code even better
plus, software engineering is a team sport, so if something works but is opaque, that's a fine thing to point out in a review, otherwise one runs the risk of being The "Foo Component" Owner™ and that's usually no fun and is for sure not healthy for any reasonably sized org
I am also a monster fan of the "Apply Suggestion" button in GitLab, which allows me to meet the reviewer half-way by even doing the legwork for the suggested change. If they agree, push button and we're back on track. It's suboptimal for any process that requires/encourages signed commits, but damn handy otherwise
watwut
> there’s always that 10% where a second pair of eyes catches a mistake or offers a suggestion that makes the code even better.
Testing is a pretty awesome process for finding bugs.
Clubber
QA departments are expensive. Execs in their infinite wisdom opted to dump that task on developers who earn 3x as much as QA people. Synergies!
seattle_spring
As a dev, I'd much rather do my own testing than have to explain extremely basic shit to the average SDET I've worked with. No, Chuck, the 404 for the favicon on page A did not cause the padding to be off on page B.
Clubber
You've had some bad experiences with QA people. The people I've worked with are methodical and work like the terminator to find bugs. One guy was so good he could even write queries against QA to show me where the data was inserting / updating improperly. A good QA person is worth their weight in gold.
In my experience, it's hard to have a dev as a good tester. I'm ok at it but not as good as a pro QA person. Good devs look for shortcuts and efficiencies, which is the opposite mentality for good testing.
marcosdumay
> who earn 3x as much as QA people
And never learned how to properly test software... Or earn 10x as much and consider it not a priority.
passive
I think a key thing here is that they are only doing releases every other week.
So when changes skip a code review, they are still subjected to a good amount of peer testing before they ever reach a customer.
I think that's a reasonable approach in the situation, and I can see why they have gotten good results from it.
lwhi
> "Every company or team is different, but one thing is the same: They all want to build the best product in the shortest time possible."
Maybe this is part of the problem?
Why do we need to build so incredibly quickly? Is this the only way?
fiala__
It's definitely part of the problem and a big reason why software engineering is in the state it's in. To me it always seemed almost like a religion. I wish we took more from the stereotype of traditional engineering, where standards, thorough testing and reliability trump satisfying some trust fund kid founder's impatient demands.
Emigre_
Code reviews are great, they prevent silos. It gives other teammates the chance of looking at your code, and understand what it does. It’s a knowledge-spreading tool, apart from their bug-catching aspect.
curious_cat_163
Code reviews help. However, it is unclear to me if PRs are the best way to do them. I often find a walkthrough on a screen share a lot more helpful to receive/give feedback. It also ends up saving time. The downside of this approach is that it has to be synchronous.
jyounker
GitHub's & GitLab's review systems are atrocious.
jayd16
I don't think every single commit needs to be reviewed, especially early in a dev cycle when there is no production environment to even screw up yet. In that sense, I think reviews should be optional.
Code moving to prod gets reviewed though.
cryptonector
> Side note: We use rebase instead of merge to keep a much cleaner project history and eliminate unnecessary merge commits.
Yes!
micromacrofoot
Good luck getting any sort of security certification without reviews. This works fine for small companies, but once you start getting customers at a certain scale it will be a dealbreaker. Run without reviews as long as you can, it feels great!
jsiaajdsdaa
My current company (dinosaur insurance company) has a one developer per service mapping right now and it is absolute HEAVEN on Earth.
The best services rise to the top, and the other ones are refactored until they are ready to be consumed.
alexhf
I wish this talked about the engineering time lost when a bad PR makes it to production. Fixing those may eliminate any time gains from skipping code review. Also, this violates security compliance standards (e.g. SOC2).
integrate-this
This is a great way to end up in a situation where you have to re-write entire modules whenever you need to update old code... which is kinda where our teams tend to end up anyway so I guess it's not _that_ awful?
northisup
Congratulations on finding engineers without egos who know when to ask for it. I've worked with a ton of great, if overconfident, people who 100% needed code review to be policy in order not to take the whole system down.
Why I like code review:
1) Code review is a great place for mentorship.
2) People mess up, and that's OK; major issues have been caught in code review.
nikolay
Well, you don't have this luxury if you're SOC-2 compliant - code reviews are mandatory.
revskill
Code review in the real world means peer review; it's the PEER part that matters, not the CODE part.
I think what matters here is collaboration, not trust or code.
Good software comes from good collaboration.
blank_dan
If your code is unimportant then so are your bugs that get into production. So I guess I agree. Don’t waste time writing, testing, reviewing unimportant code.
sbeckeriv
The only thing I agree with here is "Make your own rules". I feel like it is clickbait otherwise. Reading this, I would apply for a job here personally.
zzzeek
So they didn't like GitHub PRs... so they ditched code review altogether? How strange and sad. There are lots of tools that can be used for code reviews that have no bearing on whether or not you also use GitHub for source control.
We at the SQLAlchemy org actually have Gerrit integrated with GitHub PRs, so you can use both UXes for the same patch at once, and other orgs do this too.
honkycat
Code reviews are a great opportunity to mentor newer developers as well.
lxe
Code reviews are kinda forced if regulatory compliance is necessary.
bawolff
I think code review is like any other quality control: it comes at a cost, but it's worth it if implemented well. If the culture around code review is bad, then it's utter hell.
maxehmookau
> Code reviews aren't high priority for engineers.
This is a far bigger issue for me than a lack of trust. Code reviews should be a high priority for your engineers, especially the senior ones.
spamizbad
This is written by a company that recently raised a series A[1], so I'm not surprised they're willing to optimize for velocity at all costs. It's not the worst trade-off you can make at that stage, but it's a trade-off only worth making early on. And I would argue trust (or lack thereof) shouldn't be a factor in code reviews.
Most of the value of code reviews comes into play when working on more mature systems that sit on top of a stable foundation. And the benefits of those reviews are significant: they help socialize the codebase (I'm using socialize rather than "understand" because I don't think you can get a complete picture of it by doing them, but you can lay down a mental foundation from them) to the team, they let you share techniques and information on how to best utilize a given tech stack, catch missing test cases, identify faulty assumptions, and help Jr/Associate level engineers to be stronger contributors. I've also seen them used successfully as glorified whiteboards/sketchpads to share and develop ideas between engineers.
To extract this value out of code reviews, there are three key things that must be done.
First, automate away as much dumb bikeshedding as you can: have precommit hooks for code formatting, linting, type checking, dependency validation, test coverage, etc. Tools already exist for most languages to do this, so you're not stuck writing your own. A few dozen lines of YAML eliminate hundreds of hours of worthless debates and churn. Nobody should waste time commenting about this stuff. (A minimal sketch of such a gate follows after this comment.)
Second, avoid "large" PRs. Don't get too caught up in change sizes, but think more about "How many major components will people actually need to review?" Sometimes a 2000-line changeset has 1500 lines you just skim and 500 lines of real stuff you have to pay attention to.
Third, you need to give some context in the PR message. Link out to relevant things, explain what you're doing, and even call out stuff you want people to see: "Please draw your attention to FooBar in baz.py - I'm a little unsure this handles all situations."
Code reviews often get wedged into SDLCs because someone broke something in prod and people concluded "code reviews would've prevented this!" -- I personally think this is the wrong mentality. You should instead see them as a way to improve the quality of the codebase and your team's skills and knowledge. Rather than trying to "catch" a catastrophic production bug, focus instead on reducing the number of catastrophic bugs that get created in the first place - via better tests, better shared understanding, and sharper skills.
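Here's the minimal gate sketch referenced above: one script (the commands are placeholders for whatever your stack uses) that runs the automatable checks so reviewers never have to comment on them:

```ts
// precommit.ts -- illustrative sketch; wire it up as a git pre-commit hook
// (e.g. via husky) or as the first CI step. Each command is a stand-in.
import { execSync } from "node:child_process";

const checks: [string, string][] = [
  ["format", "prettier --check ."],
  ["lint", "eslint ."],
  ["types", "tsc --noEmit"],
];

for (const [name, cmd] of checks) {
  try {
    execSync(cmd, { stdio: "inherit" }); // surface each tool's own output
    console.log(`ok: ${name}`);
  } catch {
    console.error(`failed: ${name} (${cmd})`);
    process.exit(1); // block the commit/merge
  }
}
```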
And that's from the perspective of a change's author being willing and proactive. I work with individuals who produce very questionable code at times. And I'm not talking style here: code in 2021 that is still prone to SQL injection at every turn. Methods that are hundreds of lines long, with gigantic cyclomatic complexity, that no one (including the author) will ever really understand. Method names like "process" and "process2". And so on.
Needless to say, those programmers will never push for their changes to be reviewed. The rest of the team simply discovers them after they blow up (which they do regularly) and we're debugging.
Now, these people are luckily a tiny minority (which at least makes this somewhat bearable), but over and over again I wish someone with more authority would show that they're interested in the matter. Which a policy on code review would do -- if only as a signal. Yes, we want to check each other's code. Yes, we want collaboration. Yes, that's more important than some arbitrary deadline for minor feature X. Yes, we expect our programmers to be professionals.
Another symptom of such an environment is that everyone is the king of their particular hill. No one ever familiarizes themselves with another person's code unless they really have to. By now, I've taken to insisting that every change I make to a critical component be done through pair programming, and so far that seems to have stuck. But again -- that only happens if both parties involved actively fight for it, instead of everybody understanding that this is just an expected part of the work.
It was interesting to hear your experiences, and maybe I should update my expectations of workplaces somewhat.
> [...] alongside the culture of my employer [...]
I think you've hit the nail on the head here. As many others have said in different HN posts, culture is hard to change. I would also understand the argument that one can enforce culture changes with policies; my hope up to now has simply been that engineers could "overrule" some artificial time limits in favor of more collaboration by referencing such a policy.
I guess if you're starting from a context where code reviewing practices "just work", introducing explicit policies also can't improve much, anyway. On that note: If you happen to know a good heuristic to detect such cultures during interviews or the like, I would be extremely interested :)
This hits at the heart of why I strongly agree with this post and think PRs and code reviews are ultimately destructive.
Reviews are going to get dropped _regardless_ of whether they are in the process or not. Pressure makes code reviews and quality worse, not better.
More than that, the need to "appear" to review is going to slow things down and can cause an exponential backup. I saw this happen about a month ago: a 3-day release cycle got bogged down and then took _3 weeks_ to get a single release out.
Process goes out the window when pressure hits. You are then operating in a foreign environment at the most critical point, and your safety net is useless.
Your safety net when things need to go fast should be a highly automated, fast build and deploy, coupled with great test coverage, from unit to end-to-end.
The only exception might be if you work in a highly regulated field where you are legally required to have a very high Mean Time To Failure. In that case things move slowly unless you are very well resourced.
Otherwise (i.e. for the majority of products), focus that effort on having a very low Mean Time To Resolve. If you absolutely need some form of quality assurance, pair. Do continuous PRs and knowledge sharing rather than out-of-context, after-the-fact reviews.
There is an easy way to resolve this:
Git makes this exceptionally easy -- it's certainly far easier than reviewing a large document with Word track changes!
Each small change on its own might make sense and be OK to a reviewer, but if you step back and look at the series of changes together, there might be some issues that could have been noticed.
Then that's what you need to fix.
...when they need it. Actually, really good engineers will ask to pair when they need it. Review is a distant second.
I agree with you there, but I'm also sure that one reason for this is that many people still see reviews as the task that only happens after everything else.
I'm convinced there's a lot of useful middle ground to be had if reviews aren't just "here's 500 LOC, please review", but maybe more… staged.
Say: rummage around in the code a bit for yourself (ask questions if you have to), come up with a strategy for solving your problem, and submit that for review. Not in code, but maybe using markdown, pseudocode, small diagrams, or the like. In person (i.e. via screenshare nowadays). And if the other party doesn't find obvious holes in your strategy, then go implement it.
I remember an interview with some Linux kernel maintainer that mentioned the same effect -- people submitting a gigantic patch for something that will simply never be merged. And all that would have been necessary to avoid this wasted work was an e-mail asking if the maintainers are at all interested in the change.
One downside of code review is reduced velocity. There are teams out there who use code review so effectively that they never ship big fuck-ups to production. They never have to tell clients that they have bad practices. Instead, they struggle to get clients because they pay much more per line of code shipped and therefore don't deliver as good a value as their competitors.
This is where the argument needs to be: whether and how to have code reviews be a net positive.
I can see how that equation balances in favour of no code review when it's in the context of a startup that is probably still figuring out how to make money, especially with a subscription based app launcher. As far as prototyping and early stage iterations go, you're probably just seeing what works and reviewing the code isn't so important at that point. There would still be one or two things you'd ask someone to check, e.g. security related stuff.
I think you make a good point though: what makes an effective code review process? I don't think there's a single answer to that because it depends on the circumstances.
I would at least say that it goes better if it's treated as a priority and issues are raised and dealt with promptly, rather than leaving it as a chore you eventually get around to. And you would have to see value in it beyond it being a sanity check, as others in this thread have described (knowledge transfer, for example).
We really should be relying on automated systems to catch bugs for the most part, and leave code review for what it's good for: making sure a different human can understand and maintain the code.
In fact, if we manage to catch a bug in a code review that didn't also trigger our automated systems to alert us, that should probably prompt us to look into how we can expand the coverage of that automated system, similar to what we'd do in a postmortem if the bug wasn't caught in code review and reached production, because we can't rely on that happening the next time.
Disagree, but it's also a good place to be ensuring that the system is evolving in a way that is reasonable and consistent — or that it is unreasonable and inconsistent for good reasons.
I’ve always code reviewed/gotten code reviewed on both the implementation and the tests, with completeness as the main quality you’re reviewing test code for. If that’s how we do it, we’re right back at code reviews.
I'm not saying code review can't find any bugs (in practice bugs are found in code review all the time), I'm just saying that if we do find a bug through code review, it means we got lucky, and we shouldn't count on getting lucky again, so better add a test/linter rule for the next time when we might not get so lucky. That's just one more way to grow your automated systems organically.
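To make that concrete with an invented example: say review catches an off-by-one in pagination. Rather than count on the next reviewer being as sharp, you pin the fixed behaviour with a test so the automated suite catches any regression. A minimal sketch (the function and test names are hypothetical):

    # Hypothetical: review caught an off-by-one in the start index here.
    def paginate(items, page, per_page):
        """Return the 1-indexed `page` of `items`, `per_page` at a time."""
        start = (page - 1) * per_page
        return items[start:start + per_page]

    # The regression test added as part of addressing the review comment.
    def test_last_partial_page_contains_the_final_item():
        items = list(range(10))
        # With per_page=3, page 4 is the short last page holding item 9.
        assert paginate(items, 4, 3) == [9]

Run it with pytest; the next time someone reintroduces the off-by-one, the suite catches it instead of a lucky reviewer.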
What big launch? :) It's useful to feature flag new functionality so that you can release gradually and spot the bugs -- often via error monitoring and user feedback -- before they affect everyone. Yes, I realize it can happen, but there are techniques to minimize the chance of it happening.
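For anyone unfamiliar with the technique, here's a minimal sketch of a percentage-based flag (the names and the 10% figure are invented for illustration):

    import hashlib

    ROLLOUT_PERCENT = 10  # illustrative: expose the new path to 10% of users

    def flag_enabled(user_id: str) -> bool:
        # Hash the user id into a stable bucket 0-99, so each user keeps
        # the same experience while the rollout percentage ramps up.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < ROLLOUT_PERCENT

    def search(query: str, user_id: str) -> str:
        if flag_enabled(user_id):
            return f"new-search: {query}"  # new code path behind the flag
        return f"old-search: {query}"      # stable fallback for everyone else

Bump ROLLOUT_PERCENT as error monitoring and user feedback stay quiet, and drop it back to 0 to "revert" without a deploy.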
However, the review process is far from perfect and can create a false sense of security.
To review a piece of code, that code should ideally be small in scope. For larger pieces it's rather common that the reviewer doesn't have enough time to do a good job.
If there were better incentives for doing a review, then it would greatly improve the situation.
But I haven't seen that yet.
In almost all cases it's very hard to get people to look outside the web browser for the diffs. Diffs show you something, but never the whole context.
The other problem I've found is stamping down on style arguments: they have no place in code reviews. A lot of the time you get a round of "please change this variable to be named something else" -- which might be valid, but it raises the question of why the reviewer didn't just do it, and more often leaves a weird authority gap. I have no power to reject such a request because I think it's invalid, nor does the business have any way to resolve the issue. In fact, if there's a dispute at all, very rarely is there any process for bringing in a third party to mediate or break the stalemate (which is hilarious in some ways, since everyone by now should realize two-party systems can't achieve consensus).
Code reviews are very, very cargo-culty and I've not seen them ever include a thorough design: people give up on the very near edge cases, and so the whole system falls apart because it's just a hurdle to get through and not a collaborative or constructive process.
Or are you saying that one can actually find more bugs when two people compile and execute the same code rather than just one?
When I review code, I want the author to tell me how they have verified that the code has the desired effect -- but if they do that, I'm going to trust that they did the actual verification and I'm not going to simply retrace their steps. That seems like a waste of time to me.
1) Catch edge use case / business requirements misunderstandings
2) Mentor new hires/junior engineers on the ways things should be done (e.g. internal culture about some architecture/design).
BUT, I absolutely despise code reviews that nitpick on undocumented or "because I prefer it that way" stuff:
1) Task tickets that only have a vague description already force their own game of "play detective" about the requirements, and then a lot of new requirements surface during code review.
2) Requirements are met, but there are internal/personal technical directives that are written nowhere, are not enforced with automated checks, and may dramatically change how a feature/patch is implemented (I can only think of that guy who worked at a company in which only LEFT JOIN sql statements were allowed).
Fuck that noise.
The main purpose of code reviews is checking the general structure of the code and tests. Is the code well designed? Are error cases considered and tested? Another important purpose of code reviews is knowledge sharing.
To spot quality problems before going to production, what you need is testing.
Not "unit testing" - which is just another pointless process that does not actually provide the purported benefits.
You need a separate QA team that handles testing - manual and automated.
You should absolutely have a small QA team, but their job shouldn't be doing the QA -- it should be helping others do their own QA.
One thing I've seen that I like a lot is individual changes creating individual front ends. You test the one thing in that change, and it's done. This tends to cause a lot of problems with CORS, but then what doesn't?
Agreed with you though that good code reviews (whatever that means) obviates most of this.
Your quality problems might well come from poor or non-existent architectural decisions and tech-debt, and QA won't pick up on that stuff unless it manifests as a faulty requirement.
The earlier you catch a problem, the easier it is to fix.
In our case we have a public API and several backends. The tests run against the public API. To know that a new backend is working it has to pass all the tests. Getting any new backend working correctly without the tests would be nearly impossible.
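As a hedged sketch of that shape (with toy classes standing in for real backends), the same API-level suite can be parametrized over every implementation, so a new backend is "done" when it goes green:

    import pytest

    # Toy stand-ins for real backends; both expose the same public API.
    class MemoryBackend:
        def __init__(self):
            self._data = {}

        def put(self, key, value):
            self._data[key] = value

        def get(self, key):
            return self._data.get(key)

    class OtherBackend(MemoryBackend):
        """Placeholder for a second implementation of the same interface."""

    @pytest.fixture(params=[MemoryBackend, OtherBackend])
    def backend(request):
        # Every test below runs once per backend implementation.
        return request.param()

    def test_roundtrip(backend):
        backend.put("key", "value")
        assert backend.get("key") == "value"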
I’ve heard from my workplace’s software team that this makes hiring from certain sectors particularly difficult, because they don’t get this. Too used to "move fast and break things" and MVPs.
On my team, the majority of code review comments are not discussing potential bugs, they're making sure everybody knows how the new thing works, why it was designed that way, implementation tradeoffs, etc. All that discussion is extremely valuable over the long run. For example, it means that nobody has to pay attention to email while they're on vacation in case something goes pear-shaped with a module they implemented.
That said, I 100% agree with the article's concluding paragraph about YMMV. Raycast is operating in an entirely different business domain from the one I'm in. It sounds like the reasons why Raycast likes their way of doing things don't really apply to us, and I don't think the reasons why we like our way of doing things apply to Raycast.
I like a hybrid "the person who wrote it can't deploy it" model. You get an informal code review and two sets of eyes/brains to the extent that the deploying engineer feels like they understand it enough to shepherd it into prod weighted against how important the component and the scope of the change is.
It self-scales from "copy change on the website, yolo" to "this involves two new racks of servers and a database migration; I'm going to need to spend a week understanding it to the point I can make a plan."
This is the most common reason I see cited for code reviews, but I'm not entirely sure if I buy it, at least not for a lot of teams.
In most cases you are going to have one, maybe two other people looking at a given PR. The positive way of looking at this is that you've just doubled (in some cases tripled) the number of people who are familiar with the code. In reality, though, this is still going to be a very small fraction of most engineering organizations. If you have a small team who wholly owns a small codebase then it could be meaningful, but a lot of the time you're still talking about a rounding error from zero people knowing the code: most people in the organization will still be touching the code in the future without any prior knowledge of it, and will have to start understanding it from zero.
Even if you are dealing with a small enough code base, in a small enough team, that the reviewer is adding a substantial margin of familiarity with the codebase, reviewing the code is only going to give an extremely shallow level of understanding. The reviewer isn't going to have all of the context that the original developer had. They won't know all of the things that weren't done or why, the tradeoffs that were made, requirements, business context, etc. Best case scenario, someone wrote that down in a well-known place (comments in the code near the feature, something relevant in a docs directory, hopefully not as an archived Jira issue that will never be looked at again). The reviewer probably has their own work to do anyway, and once they approve the PR they'll flush most of what they had out of their short-term memory.
In a lot of ways, this is the same set of reasons that people aren't very likely to find bugs in a code review either. Someone without a lot of context on the problem isn't likely to find many business logic errors or missed requirements. Since you were busy working on your feature, you probably don't know who else on what other teams might have been doing something relevant to your work, so you wouldn't know to get a review from people who might have something useful to contribute (and they wouldn't know to proactively review your code). One of the reasons senior engineering roles often devolve into telling people to talk to one another is to try to address this kind of situation, but that only scales up to a small fraction of the overall code churn at organizations of any size.
For what it's worth, I ask for PRs, and I review my co-workers' code. I comment when I see something that might be useful, and I graciously accept feedback and try to make sensible decisions about when to make changes and when to let the feedback go. I actively search for PRs that might touch sections of the code I've been working in or am very familiar with, to try to optimize the quality of the feedback I can give.
I'm not really convinced any of this actually helps deliver better code in the moment, but maybe it does some good as a way to help teach my co-workers about things (or to learn about things they can teach me). Anyway, it's such an ingrained part of engineering culture that it doesn't seem worth the political capital you'd have to spend to try to change it, and even if you succeeded it would be a prime target for scapegoating if anything went wrong later.
It does slow down the development process, I think, but people need downtime anyway. If waiting on a code review gets people an hour or two of downtime to check their email or read HN between tasks, then maybe it's actually worth it for that. The burnout caused by superfluous process, the occasional large pile-ons, and people demanding poorly-thought-out changes while gatekeeping your ability to deliver business-critical work is essentially a cost of doing business that we seem to have collectively accepted in software.
I should also note that I also work on some open source projects, and in that setting PRs work much better. Not only does such code often need more review, but you also typically have a smaller group of people who know the whole codebase, and are much better central repositories of knowledge. The review process is also a much more effective way of doing collaborative decision making in an open source project IMHO.
That said, I do sometimes encounter individuals or cultures that seem to view code review more as a mechanism for catching abnormal/unexpected mistakes than as a normal part of the process of writing code.
It's precisely because I've learned to read and write so fast that I "skip over" the simple parts of sentences, making me oddly blind to certain categories of spelling errors!
Especially when I've gone over the same text over and over, I'll have memorised it to the point that I can skip entire sentences.
This is why you need someone else to step in and read the text for the "first time" and not skip anything.
The same applies to programmers and code review.
The equivalent in book writing would be that you are not allowed to write more than one page at a time, and someone must approve it before you start working on the next page.
If books were read by machines rather than humans, and were published incrementally, then it would make sense to ask someone to edit each iteration.
Incorrect. Bugs (“errata”) are regularly fixed in bugfix versions (“printings”) of books, whereas feature changes are regularly fixed in more major versions (“editions”). And that’s after the equivalent of a “1.0” release.
Books are also regularly iteratively released either to a closed group of reviewers or even the public (e.g., Manning MEAPs, PragProg Beta Books, etc.) prior to the equivalent of a “1.0” release.
But those fixes don't fix the already printed books.
For actual physical books (as opposed to ebooks, where making the complete new version available to purchasers for errata, but usually not for new editions, is common), this is usually true for logistical reasons. Issuing errata sheets to keep existing users up to date is not uncommon, though (usually online now; the practice is older, but notification and distribution were harder in the past).
No it's not.
If an author produces gibberish and nobody checks, the published result is gibberish. And with print books, it's uneditable across a run starting in the thousands.
If a programmer produces gibberish, it won't compile.
It also won't pass the unit test or acceptance tests, never mind QA or the alpha/beta testers.
Only for a drive-by take-down by someone with none of the context. It's a totally asymmetric investment of time. Even helpful review comments might only take a minute to write but a day to incorporate.
If that's a regular issue, that's a culture problem. Starting with "if you've been talking with colleagues about your problem, why is someone with no context reviewing the result?", people not investing time in reviews, people doing "take-downs" instead of asking questions when they don't understand things, holding up merges for non-urgent concerns...
> Even helpful review comments might only take a minute to write but a day to incorporate.
Either the feedback is worth the investment of a day of time, in which case no problem, or it's not that important and doesn't need to be done (or, depending on what it is, can be done later, or...).
Couldn’t one also argue that if code reviews are routinely catching problems mentioned earlier in the thread (misinterpreted requirements, conflicts with other WIP, etc.) then that’s a cultural problem? It just seems odd to assume that the person assigned the task couldn’t possibly be expected to routinely avoid those problems, but that adding rigorous code review could routinely avoid them.
Except that this decision is often not up to the code author, because the tooling rejects committing changes until all discussions are resolved (see GitLab, for example), and some @holes make it their quest to find any reason not to resolve the discussions they started until the code author writes the exact code, with the exact words, in the exact indentation, with the exact architecture, etc. that those @holes like. That makes the whole code review feel like torture and is the REAL productivity killer. Not to mention that in most cases this doesn't provide any improvement to the code base either.
If this is an issue in your workday, you should bring it up with management or the individual rejecter. If someone has the authority to kill PRs, they should also be required to put in the effort to understand them and/or be available for discussion at an earlier stage of the development process. PRs shouldn't ever be rejected without mutual agreement -- neither internally in an organization, nor in an open source project.
Then write it down. If the code reviewer can't follow what's going on, what hope is there for the new hire looking at it six months from now?
It'll be 6 months if you're lucky. Try figuring out why there's a particular "if" clause one or two or four years later. Software maintenance, especially when the original author has moved on to another team/organization/company/career, is a frustrating art of almost-remembered stories and hoping that you can figure out Chesterton's fence, lest it become Chesterton's barbed wire. The worst is when you fix a small bug and introduce an even bigger bug in the process.
Code reviews aren't a "take-down", they're a process of helping each other produce better solutions and better code.
> Even helpful review comments might only take a minute to write but a day to incorporate.
Then like all pieces of work, a decision must be taken whether that day is worth it, no?
[0] Not just because the reviewer is trying to be nice, but because they don't know what constraints and problems the author discovered in the process of writing the PR. If the reviewer writes "use a map here, instead of scanning an array" without knowing how many items are handled, then they may be making the PR worse by replacing a cheap linear scan over a maximum of 50 items with a bunch of costly yet constant time hashing. Often the solution is an explanatory comment rather than the "obvious" fix.
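If the two sides want to settle it, measuring with realistic sizes beats arguing in the abstract. A quick sketch (the numbers will vary by language, element type, and where the hits land):

    import timeit

    items = list(range(50))  # the PR's "cheap linear scan" case
    lookup = set(items)      # the reviewer's suggested map/set

    # Whether hashing beats a short scan depends on N and where hits land;
    # don't assume either way -- measure.
    for needle in (0, 49):
        scan = timeit.timeit(lambda: needle in items, number=100_000)
        hashed = timeit.timeit(lambda: needle in lookup, number=100_000)
        print(f"needle={needle}: list={scan:.4f}s set={hashed:.4f}s")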
Good reviews can take a light touch, in part by differentiating non-blocking ideas and suggestions from "hey I think this is a bug."
Some of my favorite reviews of all time actually don't change a single line of code, but the questions reviews ask zero in on what's going to confuse the next developer so it can be documented appropriately.
The deciding factor was ownership and accountability, though: you maintained your own code, and if you'd done it crappily, you knew.
Code review is related to the assumption that everyone can change everything -- meaning, all in all, an inconsistent mess. It is also related to the unfortunate assumption that code review is how people learn the system. They don't, unless they are the original author. The rest of them know only pieces of it.
Ownership and accountability are workable as long as the same person sticks around long enough to fix all their mistakes. If that person leaves, the code becomes orphaned and it's up to one of their teammates to find out where the skeletons are buried.
And the person doing the explaining actually understands the whole part. That is a big one too: you are not having bits of it explained to you by someone who knows only small parts of it. And the system was not changed by a third party without the person who explains it having at least a vague idea that it was changed.
> Ownership and accountability are workable as long as the same person sticks around long enough to fix all their mistakes. If that person leaves, the code becomes orphaned and it's up to one of their teammates to find out where the skeletons are buried.
That is actually exactly the same without clear areas of ownership, except everyone is new all the time.
If you have high turnover, this aspect will be bad no matter what.
And it is not that bad with responsibility either. The old person explains things to the new person. Taking over, when you are working consistently in the same area, is not that hard, actually. Some parts you conclude to be a mess, but usually not the whole of it.
---------
And IMO, quality and knowledge heavily depend on other parts of the team: analysis and testing. Both, if they work reasonably well, are a massive resource for knowing how the hell the system is supposed to work.
And once you have a reliable source for requirements that is not the developers, deciphering the code is much easier.
As a result, any work that's assigned to someone else will only be interacted with at the interfaces/boundaries. And of course, that's sort of the intention with modularity/loose coupling -- it's not necessary to learn the gory details of the XYZ subsystem's implementation, only its API. Knowing the details is certainly valuable, but there's no baseline impetus to get the owner to explain the details.
Taking over from someone who's ceding responsibility, but staying at the company, I agree that's manageable. But sometimes people get laid off suddenly (or perhaps get hit by a bus, etc) and there's no chance to get the one person who has it all in their head to explain it to you.
You mentioned high turnover, but if anything, a place with high turnover has less information locked up in a single person's head. If Alice who's worked at the company for 25 years and has always owned the code leaves, you're going to have a much harder time getting all of the tribal knowledge out of her head on two weeks notice compared to Bob who only made it 6 months.
To me, the value of code review is that even at its worst, someone other than the author is forced to look at the code. No process can force a reviewer to take an active interest in a particular subsystem, but at least we can make them look at pieces of it and see if it gets them curious enough to ask further questions of the author and better understand the system.
For myself, knowing that someone else will have to understand my code, even through the limited lens of a code review, makes me more diligent about making sure that the code is clean and the pull request has enough context that someone playing code archeologist will be able to make sense of it, even if I'm not walking them through it in real time. I will admit that this is not a universal thing - I've worked with coworkers where no matter what process is in place, they won't do the basic courtesy of reading their own changelists to see that they accidentally committed random junk files from their local machine.
I agree that good documentation around requirements is valuable. I find pull requests / code reviews are a great opportunity to highlight how the code relates to specific requirements, since the focus is narrower.
Informal reviews can have a lot of different shapes, and provide a lot of different benefits. But anything that is a stop-the-process activity is there because you expect quality problems from the earlier steps.
(That said, no good developer trusts themselves. So if you don't have anything in your process that makes you trustworthy, lack of trust is the correct approach.)
I'm a fan of code review.
For me, code review is collaboration. We're helping each other.
The one thing I'd suggest is, as much as possible, try to phrase reviews like a question: "Do you mean to do X here?", "Would using fooTheFob here work?", "Is this a race condition?". At least it feels better to me.
Is the phrase "trust but verify" an attempt to avoid damaging self-esteem? There seems to be some emotional/ego-based attachment to the concept of trust.
I never work directly on master, always on a branch, and when I'm done I create a PR for myself.
Can't remember a single PR that I reviewed where I didn't find some issue or bug.
I guess you can say I don't trust myself...
> Pull requests don't prevent bugs. Reviewing code is hard! You can catch obvious mistakes, but you won't have consistent in-depth scrutiny. Many issues reveal themselves only after extensive use, which doesn't happen during code reviews. We rely on fast iterations with user feedback.
So many things I can say about this. "Fast iterations with user feedback" are great, until you introduce a security bug that you find out about because someone exploits it. Catching "obvious" mistakes alone has significant value. Since he acknowledges that code reviews catch some bugs, this criticism seems to be that they don't catch all bugs. To that, I say: show me a practice which does catch all bugs, and I'll show you why it doesn't. Even the most rigorous development process doesn't prevent all bugs. [0] This just doesn't sound like a fair criticism to me.
> Pull requests slow down. Code reviews aren't high priority for engineers. They have to make time to clean up their review queue. In reality, PRs are an afterthought and not what you want to spend your time on as an engineer.
This is largely a culture problem. Code review should be a high priority for engineers. Maybe it's not the highest priority, but, it's fair to expect one could get a decent review on a PR that isn't too large in, say, 1-2 days or less. If you have PRs sitting in review for weeks at a time on a regular basis[1], one of two things is happening: either engineers aren't properly prioritizing code review, or your PRs are too large and/or have too many dependencies.
Which brings me to the second thing you can do, which is make PRs that are as small and simple as possible, but no simpler. Minimize dependencies to help make your PRs understandable. And, it should be fairly obvious, but the fewer lines of code (including config changes) you have in your PR, the easier it will be to review, so, the faster it will get done.
Code review is also a great opportunity to learn and teach. Every senior+ engineer should be reviewing code frequently, IMO.
Finally, yes, code reviews will slow you down if your basis for comparison is just "engineers merge code when they think it's ready." That's intentional, and good. The cost is more than paid for by better code quality, leading to fewer instances of "here's a PR to fix the issues with my previous PR."
---
[0]: https://news.ycombinator.com/item?id=29801451
[1]: I've only had this happen once or twice in my career so far. In each case, it was because the code surrounding the code I was working on was not stable and kept changing. These were clearly exceptional instances.
I don't see any other reason to require code reviews before a change can be merged.
> humans make mistakes
Then write unit tests, religiously. And pair. Or "... request reviews when they think it's necessary".
In my experience code reviews are not very good at catching mistakes.
> Different work in progress can be in conflict with each other.
Code reviews are the cause of too much work in progress, not the solution. Not the only cause.
> Reviews are a good way to learn from each other
In my experience, they are pretty bad at this. Pairing is vastly superior, or giving talks.
> reason to communicate with devs I might not otherwise communicate with
Then communicate with them. And if you want a review "... request reviews when they think it's necessary".
> Rejected
Exactly. That's not a useful form of interacting with your peers.
In the US, a SOC 2 audit of your org’s change management process is going to be a really bad time without this.
Code reviews are not a tool intended mainly to catch bugs. Types, automated tests, and manual tests are going to do the bulk of your bug-catching, after all. Code reviews' more relevant purpose is to act as QA for the code itself, far more than to verify that the code achieves what it sets out to do. This is in service of both maintainability (which impacts your ability to make future changes, and staff retention) and performance (by introducing the chance for another dev to detect suboptimal approaches).
Once your teams grow, code piles up, and static analysis can't cover all your bases, either your team corroborates that the code you write matches/defines group standards, or each developer's careful design accumulates as "organic growth" carefully duct-taped together to barely work.
And just to state the obvious, code reviews are merely a tool to achieve this. Pair programming is another.
Well, not to detract from your point (I agree), but what little research has been done in this area suggests that code reviews are actually one of the best ways to catch bugs: https://kevin.burke.dev/kevin/the-best-ways-to-find-bugs-in-...
I don't have "Code Complete" at hand to follow up on that, but I strongly suspect that the formal code review was defined in this research as reviewing the entire codebase line-by-line in meetings. As opposed to reviewing just a single change to the system.
Can you/anyone shed some light on this?
> Before any code at Google gets checked in, one owner of the code base must review and approve the change (formal code review and design inspection)
That's clearly referring to a specific change, not a review of the whole codebase.
I used to work for a large company. They have homegrown review software with extensive checklists of criteria the code must meet before it can pass review. The software won't let the code progress unless two reviewers go through and explicitly check off every box to certify it meets their standards. I'd call that formal code review.
I work on a smaller team now. We do consistently perform code reviews, but there's nothing actually preventing someone from merging without a review. We also have a list of things to look for when performing a code review, but nothing forces reviewers to look at the document regularly. That process is closer to an informal code review.
I still find bugs regularly when reviewing code. That's both true of other people's PRs as well as my own. I imagine there's a lot more variability with an informal process than a formal one, but IMO informal review is still useful. Code review of course also provides benefits other than bug detection.
I always review my own PR before passing it to someone else. Another commenter recommended that practice, and I heartily endorse it.
Well, this can be done. Lots of software was written before code reviews or source control existed. But I’m not longing for that time.
At my current startup, we are trying something different: the code reviewer fetches the feature branch and writes tests for the change as part of the review. As ridiculous as that sounds, it’s been working better than we’d imagined.
First, it’s really forced us to design modular and testable APIs. Separation of concerns at the file or library level gives a false sense of good, modular, reusable design. However, having an engineer write a test for your implementation (essentially a second client to consume your interface) really puts this to the test (!). The quality of feedback emerging from this process is often much richer than what a “skim the diff” code review offers.
Additionally, writing the test suite is an “unavoidable” task for engineers. This works two-fold: firstly, we don’t put it off as an “end of the day” item as with traditional code reviews on which you can spend as much or as little time as you have (it’s a ‘compressible’ task, unlike writing tests). Secondly, people send smaller, focused, easy-to-test PRs as a courtesy to the reviewer - and perhaps in part due to what we call MARBs... mutually-assured review-bombs :)
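A minimal sketch of the flow (the function and tests are invented for illustration): the author ships the implementation on the feature branch, and the reviewer writes the tests as a second client of the interface:

    # --- author's code, on the feature branch ---
    def normalize_email(raw: str) -> str:
        """Canonicalize an email address for storage and comparison."""
        return raw.strip().lower()

    # --- reviewer's tests, written during the review ---
    def test_strips_whitespace_and_lowercases():
        assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

    def test_is_idempotent():
        once = normalize_email("Bob@Example.com")
        assert normalize_email(once) == once

If the reviewer can't write those tests without reading the implementation line by line, that itself is design feedback.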
How is this different from the "traditional" QA-writing-tests-for-devs process?
As an engineer, you’re expected to write, review, and test code. We simply shuffle responsibilities such that you usually test other engineers’ code and not your own (there is some discretion involved - for very small changes we often write or tweak the test ourselves)
When QA finds holes when writing unit tests (rare), it goes on a backlog. For us, if the reviewer finds a bug or finds the code not very testable, your code isn’t getting merged. We catch both errors and design issues (especially premature optimizations and overly large API surface area) very early. Habitually, our engineers write simple, well-documented code with concise APIs at a well-defined level of abstraction (we also follow some other heuristics to make this obvious) on the first revision, so we rarely (6 total in the last 3 months if I’m querying correctly) have more than two revisions despite the additional ‘write tests’ step from the reviewer.
We found that engineers take great joy in trying to break others’ code, but also enjoy collaborating in a scratch-each-others'-backs way when they get to give useful feedback. We’ve been able to channel that into great test coverage (around 99% FWIW). We’re pre-launch, so ultimately we may uncover bugs in production at the same rate as traditional code-reviewing shops ¯\_(ツ)_/¯, but we’ve had close to zero issues needing manual intervention in months of early user testing.
When I worked for Apple, we had a strict "all code needs to be reviewed" policy, which I had no problem with. Then, after being there for about 1.5 years, I make a PR, ask two different people to approve it, which they do, and it gets merged. A week later, the code is released, it looks like my code caused a fairly major bug.
My manager's manager and my manager had a meeting with me, and told me that this was unacceptable. I ask "what exactly was I supposed to do differently? Clearly this bug was sneaky enough to where two other people also didn't see it." Their response was "the code needs to work, you wrote the code, the code was sloppy. This is your fault." Apparently they took the bug pretty seriously, since it was actually mentioned in my yearly review.
This is fine, I probably should have tested the code a bit better, but it always annoyed me that my "sloppy" code managed to pass code review [1], and yet I'm the only one to get in trouble over it. If the PRs aren't catching bugs and I still am going to get in trouble, then I'm failing to see why I should bother waiting an hour for someone to look through it. Instead, why not make it so we can merge quickly, test it quickly, and revert it if something breaks?
[1] Obviously I'm biased here, but I honestly think that in this particular case I was pretty easily the least-liked person on the team, and it was easier to throw me under the bus than anything else.
Yes, in your scenario the lack of testing stands out as the major opportunity for improvement. If a bug is so important that it's mentioned in a performance review, then it's important enough that there should be tests that would have caught it. Automated preferably, or manual if necessary.
And everyone involved with the software, from you on up and sideways, should be calling for this testing. It's clearly important / worthwhile!
Any other conclusion is a cultural problem that hints at a "shoot the messenger" philosophy ruling.
If we want a Westrum generative culture (and we do!) we can't go around assigning fault to ourselves or others when we end up in bad situations.
Focus on whether the process is good or bad, not where individuals ended up due to random variance around the mean.
End rant. Sorry, this is one of the major things that upset me about how other people run their organisations.
If you're able to write code that screws up key functionality, QA should catch it in beta.
Unless you did something wacky like deploy straight to prod.
Well at least it had one positive effect :)
If “my” bug was a result of sloppy code, my PR should have been rejected.
I don’t really think anyone should get in trouble for a code bug (at least in a majority of cases), but yes, if this bug was obvious enough for me to be in trouble for not catching it, then the people approving the PR should share responsibility for it.
It sounds like you think you shouldn't have gotten in trouble for having bugs in your code. I have no comment on that situation because there's not enough back story. Generally "punishing" coders for bugs isn't good policy unless it's particularly egregious. If you were told that your code was sloppy, and if your performance review was affected by it, maybe it was sloppy code? Like I said, there's not enough back story.
If it was an innocuous bug or if it was a hard-to-determine bug, then I don't think you should have been written up, but again, there's not enough backstory to determine that.
But to say that your responsibility is lessened because it was blessed by code reviewers, and that they should share in your punishment, is, frankly, immature. It does sound like they were ineffective code reviewers. If your code was sloppy, they should have picked it up, and if it was a hard-to-detect bug, neither you nor the code reviewers should be written up for it.
For the same reason that people write code they'll be responsible for: it's their job. Code review works when it's seen as equal priority and equal importance to writing code.
Ok, and if there's no shared responsibility, then what's to stop the reviewers from just mindlessly clicking "approve" to shut the person up? I know for a fact that that happens all the time.
> If you were told that your code was sloppy, and if your performance review was affected by it, maybe it was sloppy code?
Sure, but again, isn’t “sloppy code” the lowest hanging fruit for a code reviewer? If the PR process doesn’t spot that, what exactly does it offer?
> But to say that your responsibility is lessened because it was blessed by code reviewers and they should share in your punishment, is, frankly, immature
I think we are just going to have to agree to disagree on this. If you are signing off on something, you are attaching your name and credibility to it. I didn't really want any of the reviewers to get into "trouble", but I do think they should share a percentage of the responsibility for the breakage if I am going to get in trouble for something "sloppy" that they signed off on.
Yes having human #2 look at whatever code human #1 wrote with the goal of improving it and catching errors, is in the abstract a good idea.
But _requiring_ all code to be reviewed before merge to main, while allocating no real resources to that activity, while also expecting high velocity of development, is magical thinking.
It typically leads to "software engineering theater". Code reviews look to be happening, but for the most part they're not. And worse: some things don't get fixed because the participants see review as an obstacle unworthy of their energy to surmount. Some great features don't get developed because the engineer with some spare creativity perceives that their weekend efforts may be for naught due to ensnarement in review webs.
I think the author is on the right track by at least refusing to enter into the delusion, and rejecting the magical thinking.
There is no universal "best practice" for building software; there is just whatever works best for your context. Practices can, and should, change as your business context does. There are some things that are net beneficial for a team in the majority of situations, though; I'd consider code reviews one of these things and make them the default.
We're required to do code reviews for SOC 2 compliance. Even if we weren't, I'd likely still encourage my team to do them.
I trust everyone on our team. They're all extremely talented and competent. They could all ship things to prod everyday without a peer code review and I doubt we'd notice any difference in quality.
The reality is code reviews serve way more than making big brother happy. They help to avoid silos. They help to gain perspective. They help people mentor and grow. They help the team build a collective understanding of how to work effectively.
I remember once I was giving a talk on code review and I said that code review doesn't prevent bugs, and people were shaking their heads. Everyone hangs on to this one thing, but if you look at the majority of pull requests that have been approved, you'll see a lack of comments about potential bugs. Also, if you do see a comment about a bug, it'll often be disregarded. This drives me up the wall when I've pointed out bugs that were later found to be actual bugs and had to be fixed.
I think the biggest problem with code reviews is that they often become adversarial and people want to come out on top. I've had it more than once that someone has suggested another way of doing something; when I asked what the benefit of their way was, or what the downside of my way was, no answer was forthcoming, yet I was still expected to implement their way. This becomes enraging when you point out flaws in their way and they can't find any benefit of it.
As well as that, often people just don't want to do the extra work. You point out a bunch of small improvements. It's a pain.
You end up with people asking for code style changes even though there is a code style, and the changes they're asking for aren't in those guidelines.
In my opinion, code review is a massive waste of time that is often done purely as cargo cult. People know it's a good thing to do, but they have no idea how to code review and what is and is not beneficial during code review.
Is that really a problem with code reviews? Or, is that a cultural issue for a given team?
Any issues of style should be in the style guide. Anything not in the style guide can be ignored. Or the style guide should be updated.
There should be a distinction between opinions and technical problems. If you suggest a change that is opinion-based, then it can be taken or ignored. I never comment on suggestions, because it's a waste of my time and theirs. If you are suggesting a change for a technical problem, then that change should be made. If the person doesn't want to make the change for the technical problem, then that's a culture problem in the team or company.
I find that there are two components of style:
- Formatting
- Clean abstractions, modularization... everything else that goes into making you a great (not just passable) programmer
Formatting should be handled by tools like others have said. For everything else I have the options:
- Turn the style guide into a book, which no one will read anyway
- Ask them to read an actual book, but that isn't practical in the scope of a PR, and even after reading, the ideas take a while to digest
- Attempt to mentor the employee via the PR, often appearing pedantic because it's hard for a novice to appreciate what they don't know.
- Code that passes CI is working code - screw everything else!
I take the mentoring route whenever it's presented to me, but I have generally found that some team members (especially as they get older) have no interest in best practices and become annoyed by the code review process. If the mentee has persistence it tends to work out, but more so over 6-month to 1-year timeframes. For mentees without the patience, we drop the code review and keep them in areas of the code base where they can't do too much damage.
I guess what I'm trying to say is that our culture is willing to bend the processes to fit individual's strengths, and overall our workplace is pretty happy/chill. Maybe the engineering quality takes a hit but I can definitely sympathize with where the OP is coming from.
People say this a lot; then I look at their code reviews and their code reviews are basically the same. As I said in another comment, the root cause of this crappiness is the fact that the majority of people can't code review for crap and they're rushing it.
> Any issues of style should be in the style guide. Anything not in the style guide can be ignored. Or the style guide should be updated.
This doesn't stop people asking. I have a default comment: "This is a code style change, and as a rule I don't do code style changes unless they're required by the code style guidelines, because everyone has their own preferences for code style. If I were to make this change, then I would have to make other changes, and that would result in lots of pull requests being updated just because one reviewer wanted it one way and then another reviewer on another pull request wanted it another way." Sometimes they then go and have the code style changed, stating "Iain won't do code style changes unless they're in the guidelines", which I personally feel is a bitchy comment about the fact that they just bikeshedded on my pull request and I wouldn't make the change.
> If the person doesn't want to make the change for the technical problem, then that's a culture problem in the team or company.
Here comes the crux: what if the other person just disagrees with your assessment and decides the technical problem doesn't exist, or that the business value of changing it makes it a bad idea to change? For example, in code review I create a table and someone comes along and says that tables with a large number of columns are a performance issue. I disagree and point out that splitting it out and having multiple tables with joins is a performance issue. Now we have a whole confrontation. Someone points out a valid issue that would affect 1 out of 1 million users and would take 3 days to fix. Again, confrontation. Sure, you can say that with the right culture these issues are easy to work around, but the fact is office politics is such a big thing that we have a phrase for it which everyone understands. So many people are trying to advance their careers that they all want to be seen as really good. And for most people, the way to look good is by making someone else look bad.
I'd say any issues of style should be caught by required code formatting and linting tools, which optionally run pre-commit and _must_ pass in CI. This should all be handled _before_ code review. Then review can focus on substance, not style.
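As a sketch of that setup with pre-commit (the versions below are placeholders -- pin whatever your team actually uses), the same checks can run locally at commit time and again in CI:

    # .pre-commit-config.yaml
    repos:
      - repo: https://github.com/psf/black
        rev: 24.4.2        # placeholder version
        hooks:
          - id: black
      - repo: https://github.com/pycqa/flake8
        rev: 7.0.0         # placeholder version
        hooks:
          - id: flake8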
That's because the UX for GitHub PRs sucks. Use Gerrit instead. It's not as pretty, but it's super productive.
> I think the biggest problem with code reviews is that they often become adversarial and people want to come out on top. I've had it more than once that someone has suggested another way of doing something; when I asked what the benefit of their way was, or what the downside of my way was, no answer was forthcoming, yet I was still expected to implement their way. This becomes enraging when you point out flaws in their way and they can't find any benefit of it.
that's not a problem caused by code review, it's caused by people who don't know how to work with others (or otherwise simply refuse to). to the degree that the code review process shines a light on this, that's good news, because that's a people problem that should be fixed.
> As well as that, often people just don't want to do the extra work. You point out a bunch of small improvements. It's a pain.
when you look at a code review and you see problems in what's there, catching them in the code review is great, because that means less chance of having to fix it later.
if you see a code review and nothing's wrong with it, you just approve it. sometimes you can note things as "nits", meaning, fine for now but maybe next time do it this way.
> You end up with people asking for code style changes even though there is a code style, and the changes they're asking for aren't in those guidelines.
syntactical style and formatting should be automated, if not with tooling that actually rearranges the code in that way (e.g. [black](https://github.com/psf/black) for Python) then at least including well tuned linter rules. We use black, my own zimports tool for sorting imports, flake8 with four or five plugins as well as "black check", so we spend exactly zero time dealing with anything to do with style or code formatting, including having to even type any of it. if people are fighting over code style then there need to be tools that lock all of that down so there's no disagreements.
> People know it's a good thing to do, but they have no idea how to code review and what is and is not beneficial during code review.
I work on the Openstack project and that is where I learned to do code review with Gerrit, it is completely essential for large projects and is an enormous productivity enhancement. Decades ago before CVS was even invented, the notion of using source control seemed crazy to me. That position was obviously insane, even though we were stuck using Visual Source Safe at the time. That's how it feels to me to be doing a real project today without code review and formatting tooling in place. Like even for myself, if working alone, for a large project with lots of changes I still use a code review tool, just to review myself.
That's actually one of the costs of PRs (or any kind of mandatory and blocking code review). I've worked in places where the code review wasn't hard. People fixed these nits as a final step before pushing the whole thing to master. With blocking code reviews, people either do that 'nit' thing, or keep the remark to themselves because it isn't worth fussing over.
All code review processes have some degree of rubber stamping, but the mandatory ones have a bit more.
Orthogonally, I feel that more reviewers makes the process and the results better. The best review process I've seen was slow as hell: (nearly) every team member would do (nearly) every code review. People would start riffing off of each other and actually learned from each other. You could actually get a good team-wide conversation going about the best approach to doing stuff a certain way.
That won't affect the quality of what people find and comment about.
> that's not a problem caused by code review, it's caused by people who don't know how to work with others (or otherwise simply refuse to). to the degree that the code review process shines a light on this, that's good news, because that's a people problem that should be fixed.
You say that like fixing people is an easy thing to do. It's the hardest thing to do. And the fact I've seen this at so many companies/teams leads me to believe that it's an extremely common thing.
> when you look at a code review and you see problems in what's there, catching them in the code review is great, because that means less chance of having to fix it later. > if you see a code review and nothing's wrong with it, you just approve it. sometimes you can note things as "nits", meaning, fine for now but maybe next time do it this way.
Yea... Those aren't getting done at 9/10 companies. They're getting ignored.
> syntactical style and formatting should be automated, if not with tooling that actually rearranges the code in that way (e.g. [black](https://github.com/psf/black) for Python) then at least including well tuned linter rules. We use black, my own zimports tool for sorting imports, flake8 with four or five plugins as well as "black check", so we spend exactly zero time dealing with anything to do with style or code formatting, including having to even type any of it. if people are fighting over code style then there need to be tools that lock all of that down so there's no disagreements.
Yea, we have those; I still get asked for stupid-ass code style changes at every company, and have for the last 8 years. Why? Because people can spot them and understand them. But they can't code review for shit. They can't understand the idioms of the programming language, they can't understand how certain paradigms work, they don't pay attention to the root causes of bugs, they don't know how to design a database, they don't pay attention to performance, etc. I would say 80% of the industry can't code review for crap. And that is the biggest problem, and because they can't do that, all the other problems are just symptoms.
> I work on the Openstack project and that is where I learned to do code review with Gerrit, it is completely essential for large projects and is an enormous productivity enhancement. Decades ago before CVS was even invented, the notion of using source control seemed crazy to me. That position was obviously insane, even though we were stuck using Visual Source Safe at the time. That's how it feels to me to be doing a real project today without code review and formatting tooling in place. Like even for myself, if working alone, for a large project with lots of changes I still use a code review tool, just to review myself.
This doesn't dispute or refute my point. It merely says that in some cases it's used well. It doesn't remove the fact that the majority of the time it's cargo cult, that the majority of people who code review can't code review for crap, or that a high percentage of people just skim code reviews.
2. Of course code reviews don't catch all bugs. This is the same argument that is frequently used to reject static type systems and memory safety. The purpose of code review is to ensure that more bugs are caught, the kinds of bugs that can be caught through code review. It also helps ensure that a high level of code quality is maintained, and that at least two people understand how a change works, which is good for team cohesion and resilience.
Perhaps "banal" is the word you are meant to use.
There's a saying that a 10-line PR will get 10 comments, but a 1000-line PR will get a "looks good!"
Of course this is not really what is happening. A 10 line PR can be completely unintelligible and a 1000 line PR might be a joy to read.
Once the code base has become a ball of mud, no one really bothers to review anything properly - it's going to take as long as writing it yourself. So perhaps, we get picky about some worn out code style point (incorrect indent etc) just to show we are alive.
This is the standard code review experience I've had across companies I've worked at. It's just theatre, bad and irritating theatre.
I don't think we should take it as given that the goal should be to approve the code in the originally written form. In some sense, I think if you're in an environment where you're going to be doing PRs for every change anyway, if your code is consistently perfect when you cut the PR, then probably you've cut it too late?
What if instead we frame PRs as the basis for an artifact-based discussion about how to do something? The code needs to exist in a complete enough form to provide a concrete basis for discussion. But if you see your reviewer as a resource and collaborator, who might have ideas or insights that you didn't, then ask for those ideas or insights a bit earlier, before the code is polished.
The best PRs aren't the ones that catch a bug. The best PRs offer some criticism or question or suggestion that allows you to reframe an abstraction or refactor something to be simpler, more flexible, more testable, more readable, faster, whatever, and from which the original author learns something. Your code might not have had a bug before, but that doesn't mean it cannot and should not be improved.
Circling back to 'trust' though, this approach does require a degree of trust. The reviewer can't be focused on nits and gotchas; more minor things are inevitable if our colleagues pull us in sooner, but we have to trust that they'll get sorted out.
I've never really seen that. What do you think is the reason for people thinking that's the thing to do?
I worked in an environment that was like Raycast and later adopted gatekeeping code reviews. If the code reviewer wasn't happy you couldn't merge. This gives the reviewer power to have the author shape the code the way the reviewer prefers at little cost to the reviewer. And of course the authors often respond by doing likewise when asked to review code. My approach as a reviewer was to provide feedback without blocking merges, but this alone is not enough to incentivize reciprocal behavior.
There was a guy at work who would always complain about numbers that weren't given names as #defines regardless of whether it made sense or not (Magic Numbers!). My co-worker asked him why he was so picky about it. Apparently because someone did that to him.
Is this some anti-vax satire?
Of course pull requests (code reviews) reduce bugs. And of course some slip through. It doesn't need to be 100% to be useful.
In fact I have seen alarmingly little research about code management. Code review, standup, agile, etc. does any of it do anything useful? I only came across anecdotal evidence which can be dismissed.
I think it is unquestionable that code reviews catch bugs. Their value as a use of time is another question.
Yes, it catches some, but does it catch more bugs than having no code review at all? There is no point in catching bugs if the process also creates bugs to catch.
That's absurd...
Imagine some reviewer who rejects all code reviews that don't have some recognizable pattern in them from the Gang of Four book. Fully complete and working code gets hacked up last minute to accommodate this person, who has more seniority than sense.
Or ... maybe someone who always wants there to be a single return in every function. Or heck, someone who demands that you always do early returns. Either way, they demand that some carefully crafted code is swapped over to its equivalent dual, and during that transformation a mistake is made that nobody notices.
I think the point is that the scientific way to show that code reviews work is to actually do the experiment, instead of just saying "why, it's absurd that a code review could cause more defects."
I mean isn't that how hand washing got introduced into medicine? "Hey everyone, let's try washing our hands!" "Why, the hands of a gentleman are always clean. It's absurd to suggest that we should wash our hands."
I was on a team of competent people working on a dense code base before and after introduction of gatekeeping code reviews. The code base was large so no two people would be working in the same area. Before code reviews:
- Team members working on a section of code would find and fix bugs in it.
- Team members would watch the repository history to monitor for any mistakes.
When gatekeeping reviews were introduced, they were not particularly effective at finding bugs because bugs tended to be subtle and required time working on the particular code in question to identify. But the introduction of gatekeeping code review caused bugs because:
- Once reviewed, code was assumed to be good enough because hey, two people agreed on it.
- Any bugs identified would require code reviews to fix which would be an uphill battle and probably involve project management.
- Nobody's looking that closely at history because who wants to deal with finding problems and pushing through fixes? And hey, it's already reviewed.
- Knowledge of code that is being reviewed becomes stale as you start working on other features. This impacts pre-commit testing as you make changes to mollify the reviewer, and can thus lead to bugs.
- Lowering velocity lowers the rate of fixing bugs.
- A bug could be fixed but stuck in review, so it's still in the codebase for other developers to run across. You could argue: well, it's documented in JIRA! But bugs can manifest in different ways and affect multiple systems.
I'm being slightly sarcastic. But not sarcastic to the point that I haven't done just that. Just as the nail that sticks out gets hammered, the code that is too perfect gets increased scrutiny.
Again, I think the question is value. Is a dev's time best used in code review vs. something else? It's almost certainly the case that time is sometimes better spent elsewhere, depending on the developer and the needs of the organization.
In my anecdotal experience, there is at least one class of bugs that code reviews are good at catching that a large expenditure of self-review effort often doesn't catch: security bugs.
No, that's false. That's only true if you also assume that people write the same quality code whether there is a code review process or not. Which might or might not be true, no idea.
It's weird and silly to expect prevention from a tool that is not designed to prevent. Peer review (and the PR model) exists as a quality control tool for many different aspects of someone else's work.
I guess it depends on the quality of the reviewers.
Code review has become a way to design by committee: if every team member gets to critique one aspect or another on every code review, most design paradigms will fail to make it through the gauntlet of the ad-hoc design process the code review entails.
Code review has also become a way to micromanage individual contributors without explicit effort from the engineering manager or similar authority figure. It can now be administered by your colleagues who can now revel in the power that it gives them over you.
Finally, code review is the ultimate busy-work that caters to the conceit of today's software developers: that they are craftsmen who produce bespoke objects of art and beauty. After all, if each piece of code is so thoughtfully critiqued and subjected to such scrutiny, it must be exquisite indeed! Oftentimes the reality is that the team comprises a half-dozen overqualified people polishing a turd that exists to squeeze both sides by creating a cozy monopoly or an oligopoly in a two-sided market. But that is too embarrassing to admit, and it certainly doesn't bode well for pulling in obscene amounts of funding and prestige!
You certainly don't want to work in a codebase riddled with different ways of doing the same thing. Code reviews help with making the patterns more structured and more widely adopted. It also helps everyone at all skill levels to learn something from their peers, through reading their code and sharing thoughts.
It also makes more people in the team familiar with the codebase, such that more people are capable of changing and improving the related parts rather than just the original author.
The potential benefit of finding defects is only a nice-to-have!
https://www.youtube.com/watch?v=4XpnKHJAok8
Linus Torvalds' talk at Google about how Git is more a way of working than a piece of software. Everybody commits upwards in a tree of trust. If you do it this way you get automatic code reviews, and in any team someone should be responsible for the "product" anyway and sit highest in the hierarchy.
I've used both Git and SVN enough to understand how a VCS might change how an organization creates code, but I've never used Perforce. What is "the Perforce model"?
Basically all the stuff in the first half of the talk where he whines about stepping on toes, conflicting with other people, politics, special write access groups, are all non-issues that do not sound familiar to Perforce users. If two people are using Perforce they can both work on changes to the same files taken from the main branch (trunk) and submit them separately without conflict or even awareness of the other person. As a bonus this is all dramatically faster in Perforce than in git for large repos.
In this talk Linus demonstrates that he believes git is a state-of-the-art SCM system, without being familiar with the actual state of the art.
With large teams, you need code review (which this bug can elide), great test coverage (the tests will likely have a merge conflict) and very importantly, a commit-by-commit build health indicator after merging. You can also do this in batches and bisect to find poisonous commits should a batch be broken - flakes notwithstanding.
If two people edit the same line of code based on the same original version, that's a conflict and there's nothing any SCM can do about it. Human must merge. But there's no reason that several developers can't work on different functions or other disjoint areas of the same file, unlike what Linus is claiming in the video.
The video is essentially a long straw-man argument where Linus takes shitty part of CVS and SVN, which were terrible, and applies them to Perforce, which it's clear he has never used.
I don't think that's very fair. Perforce is mostly just a shitty subset of git's features that some people pretend is sufficient. You can use git almost the same as Perforce and have almost as bad of an experience as using Perforce directly.
The situation can be very different in any of those aspects and mandatory review can still be a good idea. Are you building code that runs on surgery equipment together with 4 people you just met? I think you should probably have a formal review process. Or are you iterating towards an MVP for some webapp with a few people you consider friends? Lgtm just deploy to prod.
We had a similar system where you trust people to know when they can skip the CR; it worked great. We also ranked CR comments by importance, focused on preventing issues, and avoided subjective discussions as much as possible. It meant CRs were mostly about bug catching, sometimes constructive conversations weighing pros and cons, and other times suggestions that may or may not be applied.
It's so much nicer than a dogmatic nitpicking CR culture that completely forgets about what's valuable and what's not. Maybe it's my relatively small sample size, but I feel they always go together.
Otherwise, my experience with code reviews has been that it's one developer telling another developer how they'd approach the problem, or their preferred style, or some such nonsense, with the originally submitted code good enough to get the job done.
Everything else this author says rings true to me when code reviews are a universal practice.
They slow everything down, and only occasionally is that slow down worth it.
In my current company, our code reviews are essentially a second set of eyes by a peer. When things are "flagged" in code review it's never confrontational. Mainly they provide insight for those who haven't worked on the code base as long as to "why" things are the way they are, and it can open up discussions for improvements (or at least things to ponder for the backlog.) The odd time neat language or platform tricks are also learned.
I've worked in other places though where code reviews are done by rockstars or ninjas just looking for a reason to criticize or find fault with another's code so they can stroke their own ego. And if their code is ever questioned, expect a tantrum. Toxic environments like these say more about the company and people there than code reviews though.
Looking forward to the follow-up "no QA by default".
In my experience, code reviews are awful in the same way that micromanagement is awful. It's based on a lack of trust, first and foremost. It's also insulting. What's more insulting is being forced to deal with unreasonable review comments.
More than once, I had the experience of spending several days working on a feature, only to be blocked by some unreasonable "team member" who has weird fetishes about spacing and variable names and such fluff. It's really frustrating and insulting.
What's more, it doesn't really contribute anything to actual code or product quality.
The overwhelming sentiment I see on comments supporting the mandating of code reviews on every single commit seems to be based on the idea that everyone is a junior who makes obvious mistakes that are easy to catch by just having someone else take a look at the code.
I suppose this should not be very surprising.
I've noticed in the last few years more and more companies prefer to hire beginners (since they are cheap), and there's a huge wave of people who are learning programming to improve their careers and their lives -- which is a good thing for these people -- but it means a lot of teams are full of newbies.
So it makes sense in this context to install a lot of "guard rails".
What I don't understand is why everyone thinks mandating code reviews is a good idea in the general case.
Code reviews aren’t just about code, and probably shouldn’t be framed that way.
I can’t necessarily say this approach would work everywhere but it’s nice to see companies critiquing and challenging methodologies.
- All senior developers who had a strong sense of the actual product and what they were trying to do -> no pull requests, everybody happy
- An awkward middle area where eventually they had tripled in size, and code quality was haemorrhaging customers and paralyzing attempts to release -> "what do we change?", nobody happy
- Another doubling in size, giant organizational change -> PRs are mandatory, people aren't happy, but any attempt to reduce PR burden leads almost in a straight line to bugs in production
If instead of testing 1x a day, you tested 3x a day, you'd debug 3x faster, meaning you could ship 3x faster. Now imagine testing 100x a day. Every merge to main should test immediately and ship immediately. That's how you build trust, speed, and quality.
Your developers should be going, "fuck. I'm not merging this piece of shit yet and breaking production. I need more tests." Some people will say that waiting until you have enough tests is crazy because it's not fast enough. But the only way to get fast is to have trust, and the only way to have trust is more quality. More quality means fewer bugs, which means less cost and delay, which means shipping faster and cheaper. Go slow to go fast; shift left.
Tests are better than code reviews. With a code review, you're only reviewing one change at one point in time. 300 commits later, you're not reviewing how the 1st commit is affected by the 301st commit. But a test around the 1st commit is verifying against the 301st commit. Tests are the eyeballs that never stop looking.
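To make that point concrete, here's a minimal sketch in pytest style; `apply_discount` is a hypothetical helper, not anything from TFA. A review of the commit that introduced it happens exactly once, while the test below re-runs against every later commit that touches this code.

```python
# test_pricing.py - minimal sketch (run with pytest); apply_discount is
# a hypothetical example function, included here so the file is runnable.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamping so the result is never negative."""
    return max(price * (1 - percent / 100.0), 0.0)

def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_discount_never_goes_negative():
    # Suppose this edge case was written alongside the original commit.
    # Unlike a one-time review, it is re-checked on every future commit.
    assert apply_discount(10.0, 150) == 0.0
```

That second test is the "eyeball that never stops looking": the 301st commit can't silently break the invariant the 1st commit established.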
At our company, code reviews help:
1) onboard new devs and teach them about the codebase, catch silly rookie mistakes early on
2) code reviews encourage writing readable code; what I write is readable to me but can be incomprehensible to others, and code reviews help figure that out
3) seniors make mistakes, too, and it's nice to have another line of defense (some mistakes can be ticking bombs and not catchable in testing immediately)
In my opinion, as a developer, you shouldn't even trust yourself, let alone others, to deliver a good product (especially if it's mission-critical software with a large userbase), because errare humanum est (to err is human).
When done right code reviews spread knowledge and build trust among team members. Of course there are certainly situations which call for a push without a code review, but those should be exceptions not the rule.
While a lot of the issues around code review are cultural, I also think there's a lack of good tooling. That's why I'm building CodeApprove which makes reviewing code on GitHub faster and more enjoyable: https://codeapprove.com/
Couldn't disagree more, and don't even understand where it's coming from. When you've done a substantial body of work, it certainly makes sense -- and is satisfying, and enjoyable -- to explain what it is and how it works and present it to people. And it makes sense for those reviewers, given that they work on the same codebase, to learn about the code change.
I get the impression that the authors of TFA maybe just aren't writing any interesting code.
Things we catch include
- Duplicating functionality we have elsewhere.
- Duplicating business logic we have elsewhere.
- Recommending cleaner abstractions, particularly with younger devs.
- Query optimization. The time for your DBA/data developer to review things is well before they are a problem.
Developers with this focus on “speed” in my experience tend to build a lot of throwaway code.
This is especially important when you have people otherwise working in silos, which seems to be the case for Raycast.
Most teams and developers don't do code reviews because the only person who could do them would be the developer himself.
Yes and no. They do work in teams, but these teams are so small that everybody has 'their' part of the code, which nobody else even looks at, much less changes. I'm talking mostly about non-software companies.
Btw, most people who actually work alone are self-employed. But code reviews don't make sense if there is no other developer.
Now, code reviews aren't a panacea, there's an awful lot of bike-shedding going on (in particular whenever interfaces are reviewed) but they're a net positive in my view. It isn't about trust, it is about creating a quality product, along with postmortems (an under-utilized philosophy) and automated tooling (inc. testing, validators, etc).
Here's how I think about it:
* I trust my colleagues and myself to do good work to the best of our ability.
* I trust my colleagues and myself to care about the quality of our code.
* I trust my colleagues and myself to follow coding standards.
* I trust my colleagues and myself to write tests and to do manual testing when needed.
* I trust my colleagues and myself to pay attention to CI and code quality tool output and to act appropriately on it.
* I _do not trust_ my colleagues or myself to do perfectly bug-free work all the time.
* I _do not trust_ my colleagues or myself to be able to model the entire system we are working on in our heads.
* I _do not trust_ my colleagues or myself to always come up with a complete set of tests, including all corner cases and paths through the code.
* And finally, I _do_ trust my colleagues and myself to be human, which means we are not perfect: we get tired, we get distracted, we forget things, and we make mistakes. But I _also_ trust my colleagues and myself to try to help each other avoid problems in the face of this imperfection, and one of the best ways to do that is with code review.
Once you have lots of engineers shipping many different apps and needing to work within and across teams, this system is not going to be fun. When a nasty bug ships and you're responsible, you'll wonder "should I have requested review for that one?" When one of your colleagues ships a few bugs that get you paged during your on-call, this system is likely to erode trust on both ends. IMO, a no-code-reviews model is going to stunt junior engineers' career growth, as they will not be performing what is common practice on software engineering teams. It's also going to keep out many other stakeholders who may want to weigh in on the software you're delivering, be it UX, Accessibility, Security, Documentation, or Product experts. Pull requests keep others on your team in the loop and educated.
I agree that this model gives you the advantage of speed, but I don't think it builds trust. I trust my colleagues and we do pull requests. I don't feel like I'm missing trust because I can't push to `main`. Code reviews have very little to do with trust. They serve as a communication tool, give the best possible experience to customers, and provide an opportunity for alignment with our colleagues.
If I'm doing something trivial, I might just push to `master`. For instance, I don't run tests on a `README.md` change.
Interesting to encode it explicitly, though. Strong culture choice, for sure. And probably well suited to their product since they are probably all dogfooding `master` and pushing only periodic tags off it.
Somewhere between those the context is such that the benefits of code reviews outweighs the costs. Do you think otherwise?
I much prefer (software) design reviews, they make code reviews a sort of a sanity check to see if the design was followed (reasonably), we're testing the right thing, and maybe there were some business rules that weren't followed because they weren't obvious in the design process. This is especially important when you have a lot of engineers in the team/organization.
Clicking around in Raycast's jobs page, I get the impression they prefer ICs to work individually and ship things on their own with little to no collaboration, so no code reviews seems to be aligned with their values.
Good. You shouldn't trust my code until it's been reviewed. I don't even trust my own code until it's been reviewed! I've gotten much better with time specifically because of the things I've learned from reviews, but having another pair of eyes on my code still frequently uncovers things I overlooked while in the weeds.
> Pull requests don't prevent bugs.
This is just wrong. It doesn't prevent all bugs, but nothing does. Reviews, testing, static analysis, requirements, proofs - they're all designed to improve confidence in software. They all work to varying degrees, but none of them are perfect.
> Pull requests slow down.
Good. This is a feature. If a change is big enough that it takes significant time for another qualified engineer to review, there's a high chance that an issue will be uncovered. At the very least, it ensures another engineer is familiar with the changes.
If reviews drag on because engineers don't treat them as high priority, then that's a culture problem - not a problem with reviews.
I agree with the author that trust is proportional to velocity. But it takes some time to build that trust. Writing code reviews is a skill that engineers need to acquire, and a lot of it is more emotional quotient than IQ. Code reviews should give the PR owner context they don't get by just going through the code. Let the review bots and other integration tests take care of breakage etc.
I check PR stats for my team's repo and track mean time to PR approval and a few merge-time metrics. This is just to make sure that there is no back and forth happening during reviews and the median time to merge a PR stays relatively constant. One can leverage these metrics to know which team member is being a douchebag.
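For what it's worth, that kind of tracking can be a very small script. Here's a minimal sketch assuming GitHub's REST API; the org, repo, and token are placeholders, and paging/filtering is left out:

```python
"""Minimal sketch: median time-to-merge for recently closed PRs,
using GitHub's REST API. OWNER, REPO, and the token are placeholders."""
from datetime import datetime
from statistics import median

import requests

OWNER, REPO = "your-org", "your-repo"  # placeholder repo
resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
    params={"state": "closed", "per_page": 100},
    headers={"Authorization": "token YOUR_TOKEN"},  # placeholder token
)
resp.raise_for_status()

def hours_to_merge(pr: dict) -> float:
    # GitHub timestamps look like "2021-11-30T19:01:12Z".
    created = datetime.fromisoformat(pr["created_at"].rstrip("Z"))
    merged = datetime.fromisoformat(pr["merged_at"].rstrip("Z"))
    return (merged - created).total_seconds() / 3600

times = [hours_to_merge(pr) for pr in resp.json() if pr.get("merged_at")]
if times:
    print(f"median time to merge: {median(times):.1f}h across {len(times)} PRs")
```

Watching the median rather than the mean keeps one pathological review from skewing the picture, which fits the goal of spotting back-and-forth rather than punishing outliers.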
"They all want to build the best product in the shortest time possible"
From my perspective, the "product" is not limited to what customers see or the code itself. It's the set of outcomes from the code being written and executed. For example... Do other engineers understand the code? Can they support it efficiently? Does it fail gracefully? What is the impact of failures? Are failures automatically detected? Does it lead to security breaches? Did the code author learn things they could do better that they didn't know to ask about? Did other engineers learn from reading the code or asking questions? Are customer requirements satisfied? How much is customers' trust impacted by failures, bugs, and features that don't meet requirements? How big is your cloud provider bill? Etc...
If the priority is quickly delivering and iterating on features that satisfy customers' needs AND engineers will be supporting their own code, then the "No Code Review by Default" approach has its merits. In fact, I have personally found this approach to work well on constrained, isolated pilot projects where the priority is to unblock key scenarios for a limited set of customers and/or learn about customer needs before building the "real" solution.
For anything else, I would suggest following the author's own advice: "ask yourself if the circumstances of others apply to you".
I trust my teammates, but if "trust" means "assume they are perfect developers with perfect knowledge of every part of the system who will never make a mistake", sure I don't trust them I guess.
If you have <10, is it an efficient way to work? Yes, it can be. Replace the formal process of code reviews with an informal process of reviewing new commits. Encourage everyone to regularly read the code and not silo themselves away.
I would argue a better title is "No rules/process by default". Which doesn't mean you go wild, having a strategy to develop software is very important, but strategies need to be customized. It means you apply just enough and constantly adapt so you have just enough of whatever you need to make sure development works smoothly. People will say PRs improve communication, can help prevent problems, help understand what changes are going into the code base etc, but all of those things can be done in many different ways, and depending on the group of people, there can be more effective ways of achieving that. So adopt what you need ( which may be PRs, or some variation ). But whatever you do, don't go overboard with justification.
The dangers with overselling and defending your justification for doing or not doing something can be a problem. It also can mean you don't adapt quickly to change. By overly emphasizing your approach being about "responsibility" and "trust" you kind of make out that PRs are about a lack of trust and responsibility, which is incorrect and may prevent you from adapting because you have misattributed a quality to a practice/process. This often happens when people go from A (problematic) -> B ( much better). They attribute too much weight to what makes B better than A when they may need to change those things to go to C (even better). In my 40 odd years of observing software development, this is really common, especially for many devs with around 5-10 years experience (including myself). You find much better ways of doing things, and you start becoming advocates for those things. In reality, you always need to be adapting and questioning and always deconstructing and reconstructing your strategies about how to do software.
Also code review as a communication tool? Maybe yes, but again there are better tools for the job.
"Trust, but verify"
I think a lot of other commenters nailed the trust equation in more verbose terms. I would add that in some industries or for certain customers, a 2+ person code review rule is mandatory for compliance, regardless of any perceptions around how mundane the changeset might be.
Code reviews where some coworker runs the code on their machine and has plenty of time and power to express the ways it could break or it could be improved are useful.
When the former kind of review happens more often than the latter, it often has to do with soft skills rather than hard skills. Some requirements for a good review are honesty, mutual respect, and the ability to handle criticism. Beyond that, some engineers struggle to understand that there may be more than one good approach to a problem.
Ultimately, engineers are not entirely to blame for bad reviews. In my experience it is often a company culture issue. Authoritarian leadership trickles down to middle management, senior engineers, etc. If people don't feel empowered to challenge others' code regardless of their seniority or perceived status, weird or downright bad technical decisions will be made.
As OP says, team members could ask for review if they do not have confidence in what they are changing. But code reviews usually make everyone wiser (domain context, design thinking, etc.), so we'd need to find other channels to ensure that.
A few questions:
1. Do we have data on how many bugs we fix at the review stage?
2. How many times do we blindly approve based on our "trust" in our colleagues? Biases definitely play a major role in reviews.
3. What happens if one engineer quits with no one else having any context? So we cannot say code review is not needed for every team; it may work for some.
4. How many PRs do we merge using our admin rights, without getting any peer review, because of delays or trouble getting attention from our peers?
In my private projects, when going over my code again in e.g. SourceTree, I'm tempted to stage stuff that is messy, has many comments, and should be rewritten, hoping I'll come back to it another day. The temptation is too big when there's no one looking. The fact that someone will be reading your code encourages a proper cleanup before creating a PR.
Code reviews increase awareness of changes, help with desiloing, give junior devs a chance to see how others write code in their own time, reduce the chance of mistakes, and encourage collaboration.
To stop the endless ping-ponging back and forth, I advocate for Must/Should/Could in all code reviews. So long as everyone gets it, it massively speeds up code review and increases team happiness around the process.
https://careerswitchtocoding.com/blog/moscow-the-best-code-r...
We don't review code as a matter of course. But if anyone of us writes code that doesn't pass our personal smell test, or that we think it won't pass someone else's smell test, or <insert criteria here>, screens are being shared and ideas tossed about.
We almost always <solve the problem we were having|create a better solution|decide to accept the debt and open an issue until we have a better way>, in pretty short order.
So it's sort of JIT/OnDemand code reviews. Kinda.
This can work sometimes and with some groups of engineers, but the engineers I worked with in my career who most needed this (1) would never ask for another set of eyes and (2) were the exact type of person who think a code review equals "people don't trust me."
When I started at my current large tech company as a senior software engineer, I learned a ton from having "more junior" engineers review my code. I also learned a ton from reviewing other people's code. Even now that I have the fancy title of Principal Software Engineer, if I want to make a change to our team's code, I need two ship-its from my teammates, same as everyone else. It's not about "not being trusted," it's about earning trust of others by not pretending your poop doesn't smell.
> Code reviews aren't high priority for engineers.
Well there's your problem. On teams I've been on that do code reviews well, code reviews are considered a high priority. When code reviews are given high priority, and when engineers take the time to plan their work so that changes are small enough to understand, code reviews really don't take that long and are not a blocker to productivity.
> It's up to you and your colleagues to set the rules for your team. Don't adopt best practices in a dogmatic fashion. Rather, ask yourself if the circumstances of others apply to you.
This is true - companies shouldn't blindly copy practices from other companies. I have been a part of teams before that didn't have required code reviews and were still effective, but this required a unique set of circumstances in which all engineers were pretty senior, and so we were doing a lot of owning features end-to-end. It is fun to be part of a team like that, but it doesn't scale very well.
What this article also doesn't address is that reviewing code is its own skill, and many engineers are not naturally good at it. Just as it is bad to blindly copy other companies, it's also not great to blindly equate "we're not good at a thing" with "this thing isn't useful."
I also don't know how smart it is to brag about this; it seems like asking for something to go wrong.
Reviews being slow can be largely solved with review SLAs and automation to ping reviewers (a sketch of such a nudge job follows below). Code reviews themselves can be quite fast if you know the code author well. Some people you can trust to make high-quality changes, and the review covers more high-level/directional questions. Others are more junior or haven't built the trust yet and require more scrutiny.
But having done a large number of reviews, it really doesn't take long to do even an in-depth review, if you're already familiar with the area of the code in question. The goal of a review is not to test the code for the code author, or to identify bugs for them... which is where things could really slow down. I used to do 10 or so reviews a day and it would take an hour or so to get through them, while still writing a lot of commentary on them, where applicable... not a rubber stamp.
I know the author mentions not wanting there to be a lack of trust, but it's pretty evident they'll realize the need to build trust first as they scale.
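As for the SLA automation mentioned above, it can be as small as a scheduled job like the following sketch. It assumes GitHub's REST API; the org, repo, token, and 24-hour SLA are all placeholders:

```python
"""Minimal sketch of a reviewer-nudge job, meant to run on a schedule.
Assumes GitHub's REST API; OWNER, REPO, token, and SLA are placeholders."""
from datetime import datetime, timezone

import requests

OWNER, REPO, SLA_HOURS = "your-org", "your-repo", 24
HEADERS = {"Authorization": "token YOUR_TOKEN"}  # placeholder token

prs = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
    params={"state": "open"},
    headers=HEADERS,
).json()

now = datetime.now(timezone.utc)
for pr in prs:
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    waiting = (now - opened).total_seconds() / 3600
    if waiting > SLA_HOURS and pr.get("requested_reviewers"):
        # PRs share the issue-comment endpoint, so a ping is one POST away.
        requests.post(
            f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{pr['number']}/comments",
            json={"body": f"Friendly ping: this PR has been waiting ~{waiting:.0f}h for review."},
            headers=HEADERS,
        )
```

A real version would probably ping a chat channel instead of commenting, but the shape of the automation is the same.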
There is also an aspect of the code author writing code to a higher standard when they know it will have to go through another set of eyes before being merged. The best people can set this standard for themselves regardless, but it's unlikely that you could run an org at scale on this premise without a drop in code quality.
It is an interesting idea to try to run a company only hiring people that have the skills/attitude to be implicitly trusted. I think this approach is very feasible at a small scale if you can maintain very high hiring standards and are able to nab mostly 10x style engineers. These people will make mistakes, but will recognize them and self correct. Then you could just have people contributing under a guiding set of principles/standards, and "retroactive" review to ensure standards being followed.
But almost certainly impossible to hire at this quality/bar at scale.
Having said all that, I did work in a startup where in the early days certain people had write access and pushed directly. It mostly worked well at that small scale, but lots of technical debt developed in certain areas of the product that took years of effort to reverse/fix.
"We unashamedly don't do code reviews unless requested explicitly" == "We are proud to announce we don't like learning from each other unless shit hits the fan*"
* which it will
Having worked in places where there were no code reviews the software development culture was all over the place. I wouldn't go back.
I also wonder if there are typically 0 reviews or typically 1 — that is, are people even reading their own changes before they push them.
By not pushing changes that break the build. In other words: that does not work if you can't test (actually test, not just try to build) your code before pushing it to main.
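One low-tech way to enforce "actually test before pushing" is a pre-push hook. Here's a minimal sketch assuming a pytest-based project; drop it in `.git/hooks/pre-push` and make it executable, swapping in whatever your real test command is:

```python
#!/usr/bin/env python3
"""Minimal pre-push hook sketch: refuse to push if the test suite fails.
Assumes a pytest project; the test command is a placeholder."""
import subprocess
import sys

# Actually run the tests, not just a build.
result = subprocess.run(["pytest", "-q"])
if result.returncode != 0:
    sys.exit("pre-push: test suite failed; not pushing broken code to main")
```

Hooks like this are advisory (anyone can skip them with `--no-verify`), so they complement, rather than replace, running the same suite in CI on main.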
Especially fun for teams distributed around the globe, when you spend half a day fixing a build issue left by the "night" workers. I've dealt with a lot of this.
It can get a little unwieldy if I’ve got a lot of refactoring going on which I currently do. I inherited a codebase that never said no to any “hey. That’s a neat blog post. Let’s try that way for this feature”.
Code reviews help me be more confident in my own work. Even if 90% of the time it’s fine, there’s always that 10% where a second pair of eyes catches a mistake or offers a suggestion that makes the code even better.
Code review as mentoring can be great, if it's a directed 1:1 effort. Usually, it is not.
This isn't a problem with code reviews, this is a problem with your team's processes that code review has highlighted. These problems will show up in others ways if not during code reviews since clearly there's strong disagreement on coding styles and conventions. So go and fix or implement the style guides, linters and so on.
This is like saying that the way to fix fevers is to get rid of thermometers.
The same people who are perfectly happy to do a back and forth, adding 3 days of code review time, would never go in and refactor existing code to "fix" stylistic issues. It's a cognitive bias that we think everything is up for philosophical debate in reviews.
Btw, I'm one of those people who drove stylistic debates in an organized fashion. I learnt that you simply can't police everything, even if we pretend that it's a good idea to do so. It's much better to share ideas and have them spread through voluntary mechanisms, such as dedicated tech talks, mentoring, postmortems and other deliberate knowledge sharing.
That said, automated enforcement of pure formatting can work well if there's standard tooling, e.g. gofmt and rustfmt. But contrary to reviews, that's a way to suppress debate during day to day business.
Btw, my experiences are from a mega corp, not claiming it's true for every org.
How do you make this converge across devs without endless debate or resentment?
> linters and so on.
What happens when a rule has to change? Update the entire codebase, impacting everyone with conflicts? Tolerate divergence in existing code?
Talking about thermometers, are you actually tracking fevers, or merely minor bad breath?
How I did it as part of a ~35-person dev team was to adopt one of the popular style guides from the community (e.g. Google's) and treat it as a benevolent dictator. Apply it to the entire code base in one giant shot (disrupting any pull requests that were in flight, but fixing them proactively), then enforce it with tooling. Leave the door open to anyone who disagrees with the defaults: it's on them to propose the change with rationale, then put it to a quick majority vote with a subset of the team. I think two things have actually changed since the very beginning (line length from 80 -> 120, and something else that I don't remember right now).
When something changes, making a sweeping change is part of accepting it - if it's too risky/disruptive then it has to be very valuable to do.
plus, software engineering is a team sport, so if something works but is opaque, that's a fine thing to point out in a review, otherwise one runs the risk of being The "Foo Component" Owner™ and that's usually no fun and is for sure not healthy for any reasonably sized org
I am also a monster fan of the "Apply Suggestion" button in GitLab, which allows me to meet the reviewer half-way by even doing the legwork for the suggested change. If they agree, push button and we're back on track. It's suboptimal for any process that requires/encourages signed commits, but damn handy otherwise
Testing is a pretty awesome process for finding bugs.
In my experience, it's hard to have a dev as a good tester. I'm ok at it but not as good as a pro QA person. Good devs look for shortcuts and efficiencies, which is the opposite mentality for good testing.
And never learned how to properly test software... Or earn 10x as much and consider it not a priority.
So when changes skip a code review, they are still subjected to a good amount of peer testing before they ever reach a customer.
I think that's a reasonable approach in the situation, and I can see why they have gotten good results from it.
Maybe this is part of the problem?
Why do we need to build so incredibly quickly? Is this the only way?
Code moving to prod gets reviewed though.
Yes!
The best services rise to the top, and the other ones are refactored until they are ready to be consumed.
why i like code review:
1) code review is a great place for mentorship
2) people mess up, it's ok; major issues have been caught in code review
I do think collaboration is what matters here, not trust or code.
Good software comes from good collaboration.
We at the SQLAlchemy org actually have Gerrit integrated with Github PRs so you can use both UXes for the same patch at once, and other orgs do this too.
This is a far bigger issue for me than a lack of trust. Code reviews should be a high priority for your engineers, especially the senior ones.
Most of the value of code reviews comes into play when working on more mature systems that sit on top of a stable foundation. And the benefits of those reviews are significant: they help socialize the codebase (I'm using socialize rather than "understand" because I don't think you can get a complete picture of it by doing them, but you can lay down a mental foundation from them) to the team, they let you share techniques and information on how to best utilize a given tech stack, catch missing test cases, identify faulty assumptions, and help Jr/Associate level engineers to be stronger contributors. I've also seen them used successfully as glorified whiteboards/sketchpads to share and develop ideas between engineers.
To extract this value out of code reviews, there are three key things that must be done.
First, automate as much dumb bikeshedding as you can: have precommit hooks for code formatting, linting, type checking, dependency validation, test coverage etc. Tools already exist for most languages to do this, so you're not stuck having to write your own. A few dozen lines of YAML (a sketch follows this list) eliminate hundreds of hours of worthless debates and churn. Nobody should waste time commenting about this stuff.
Second, avoid "large" PRs. Don't get too caught up in change sizes, but think more about "How many major components will people actually need to review?" Sometimes a 2000-line changeset has 1500 lines you just skim and 500 lines of real stuff you have to pay attention to.
Third, you need to give some context in the PR message. Link out to relevant things, explain what you're doing, and even call out stuff you want people to see: "Please draw your attention to FooBar in baz.py - I'm a little unsure this handles all situations."
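As an illustration of the first point, here's a sketch of what that YAML can look like, assuming the pre-commit framework (https://pre-commit.com) and a Python codebase; the hook revisions are placeholders, so pin whatever your team actually uses:

```yaml
# .pre-commit-config.yaml - illustrative sketch for the pre-commit framework.
# Revisions below are placeholders; pin the versions your team actually uses.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-merge-conflict
  - repo: https://github.com/psf/black
    rev: 23.3.0
    hooks:
      - id: black      # code formatting
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8     # linting
```

Once this runs on every commit (and again in CI), whole categories of review comments simply stop existing.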
Code reviews often get wedged into SDLCs because someone broke something in prod and people concluded "code reviews would've prevented this!" -- I personally think this is the wrong mentality. You should instead see them as a way to improve the quality of the codebase and your team's skills and knowledge. Rather than trying to "catch" a catastrophic production bug, focus instead on reducing the number of catastrophic bugs that get created in the first place - via better tests, better shared understanding, and sharper skills.