23 Comments

I lead the organization that includes Critique. It was so heartwarming to read this post; thank you for this synthesis! Truly proud of all the hard work put in by all the teams mentioned here!

Thank you Niranjan for your kind words!

Isn't Critique actually Gerrit, just themed as Critique internally? I recall Gerrit was used for Android, and it really does look exactly like it.

Edit: See Brian's comment below.

Critique and Gerrit don't share the same code base at all. Gerrit has acquired a lot of similar functionality (and a similar look and feel to Critique) over time due to internal folks asking for that :)

Gerrit actually predates Critique by quite a bit. Around 2009, when Rong Ou and others were starting the project to write Critique (which was going to be a replacement for the previous system, Mondrian, written by Python creator Guido van Rossum), I tried to persuade the team to just adopt Gerrit for internal use, because I thought it was suboptimal to maintain two separate code review tools.

In retrospect, I was totally, totally wrong - Critique turned out to be far better than Gerrit in lots of ways, and the trend has been for Gerrit to acquire functionality that's similar to Critique over time.

This is a much better response than I could have written. Thank you for sharing!

LOVE love critique. Great intro on why it rocks

Thank you Erika!

Interesting (and well-researched) analysis. I loved the statistics at the end; they give a nice point of reference.

I’m curious to see when the process will be completely taken over by AI, not just suggesting fixes. Maybe one AI suggests and another one rates the suggestions 🙃

Thank you Anton! Haha, soon we'll just be writing docs and AI will be doing the implementing.

Check out CodeApprove (https://codeapprove.com) which brings a lot of what Xooglers love about Critique to GitHub PRs.

Thanks for sharing Sam!

Nice detailed analysis on Critique! I saw Nikhil's post on X the other day, but didn't know Critique was this good.

Thanks Abhinav!

Nice to see Gerrit mentioned; I used to work with it.

Great post, Leo.

Thank you NK! Really cool you've worked with Gerrit - I've actually never used it before.

Great inspo for the PR review tool we recently debuted at GitClear (https://www.gitclear.com/help/pull_request_code_review_quality_overview), thanks for posting this!

One thing that left me curious was what sort of "Static Analysis" is seen as desirable by Critique? I observe "Static Analysis" being mentioned as desirable both by the Reddit person and the Mozilla developers.

To some extent it seems like the interest in static analysis is to help keep devs in line with the style guides & best practices, but I would imagine that a linter / IDE would be the better way to enforce code conventions (while the code is being written, instead of waiting for review). Does anyone have examples of the valuable Static Analysis that Critique performs? If this analysis could be handled by a linter, why wouldn't that be preferred by Google? Too heterogeneous a dev IDE ecosystem?

GitClear looks great - thanks for sharing!

Static Analysis at Google covers a lot more than just linting (it includes linting). I'd recommend this publication by Google regarding static analysis for more info: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/3198e114c4b70702b27e6d88de2c92734c9ac4c0.pdf

Wow, what a goldmine of specific, interesting detail!! Thanks so much for sharing this, and for giving the world a good target to aim for in building the most useful possible PR review tool. I've long been mystified by how GitHub, GitLab, and Bitbucket are all effectively identical in their PR review experience. It's great to learn that there are tools that go beyond the boring GitHub implementation. So far my favorite features I've seen listed are "diff each file only since the version that was last reviewed," "how much did time to run tests change," and "suggest code that addresses this comment." Juicy stuff, thanks again for all this inspo!

You're right that the static analysis feature partially serves this function (although it's also for more expensive checks that wouldn't be convenient to run locally).

Google is not heavily prescriptive about the tool you use for actually writing code. Many of the IDEs and editors that people do use, particularly the ones that have strong internal support, have style checks built into them, and it's obviously much more efficient to expose / correct these things at the time that code is being authored.

However, if an engineer wants to, they can use `ed` to just edit the code as raw text. What the static analysis mechanism does is to provide a consistent leveling bar for every piece of code that's committed to make sure that the same standards are applied uniformly irrespective of the local development environment.
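
That "consistent leveling bar" idea can be sketched as a small commit-time gate that runs the same checks on every change, regardless of which editor produced it. This is purely illustrative; the check names and the `Finding` type are made up and are not Google's actual tooling:

```python
# Illustrative sketch: a commit-time static analysis gate that applies the
# same checks to every change, whether it was written in an IDE or in `ed`.
from dataclasses import dataclass


@dataclass
class Finding:
    line: int
    message: str


def check_line_length(source: str, limit: int = 80) -> list[Finding]:
    """Flag lines that exceed a hypothetical style-guide length limit."""
    return [
        Finding(i, f"line exceeds {limit} chars")
        for i, text in enumerate(source.splitlines(), start=1)
        if len(text) > limit
    ]


def check_trailing_whitespace(source: str) -> list[Finding]:
    """Flag trailing whitespace, a classic style-guide nit."""
    return [
        Finding(i, "trailing whitespace")
        for i, text in enumerate(source.splitlines(), start=1)
        if text != text.rstrip()
    ]


def run_analysis(source: str) -> list[Finding]:
    """Run every registered check over the change; the commit is gated on
    the result, so the same standards apply in every environment."""
    findings: list[Finding] = []
    for check in (check_line_length, check_trailing_whitespace):
        findings.extend(check(source))
    return sorted(findings, key=lambda f: f.line)
```

Because the gate sits at commit time rather than inside any particular editor, the uniform bar holds even for engineers editing raw text.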

Yep, that makes sense, thanks for sharing!

Only one of the presubmit checks is that there are no linter errors; there are also slow-running integration tests, tests on the transitively affected code, config diffs, and lots of little project-specific ones, like JS payload size.
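
A presubmit pipeline like the one described above might look roughly like this sketch, where linting is just one check alongside slower, project-specific ones. All of the function names and the `change` dict shape here are invented for illustration, not Google's real presubmit API:

```python
# Illustrative presubmit runner: linting is only one of several gates.
from typing import Callable

# A check takes a change description and returns (passed, human-readable summary).
Check = Callable[[dict], tuple[bool, str]]


def lint_check(change: dict) -> tuple[bool, str]:
    """Fail the presubmit if the change carries any linter errors."""
    errors = change.get("lint_errors", [])
    return not errors, f"lint: {len(errors)} error(s)"


def payload_size_check(change: dict, budget_kb: int = 500) -> tuple[bool, str]:
    """A project-specific check, e.g. keeping the JS bundle under a size budget."""
    size = change.get("js_payload_kb", 0)
    return size <= budget_kb, f"js payload {size}kB (budget {budget_kb}kB)"


def run_presubmit(change: dict, checks: list[Check]) -> bool:
    """Run every check; the change may be submitted only if all pass."""
    results = [check(change) for check in checks]
    for passed, summary in results:
        print(("PASS" if passed else "FAIL"), summary)
    return all(passed for passed, _ in results)
```

In practice slow checks (integration tests, transitive test runs) would run asynchronously, but the gating logic is the same: every check must pass before submit.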

Nice article, loved it
