Comments on: Some developers are more equal than others
https://bitergia.com/blog/metrics/some-developers-are-more-equal-than-others/

By: Jesus M. Gonzalez-Barahona https://bitergia.com/blog/metrics/some-developers-are-more-equal-than-others/comment-page-1/#comment-260 Thu, 31 Dec 2015 09:08:00 +0000
In reply to Doug Hellmann (@doughellmann).

I just noticed that most *-specs repositories are not included in the dashboard (only openstack/qa-specs and openstack/security-specs, with a total of 39 reviews), and that openstack/governance and openstack/requirements are not in the dashboard either.

You can check whether a given repository (or a wildcard matching several repositories) is in the dashboard by writing “project:repo_name” (e.g., “project:*-specs”) in the box labeled “Gerrit activity”, right below the main menu at the top of the dashboard. Once you get the dashboard filtered for those repositories, check the table “project:Descending”, near the bottom of the dashboard.

We collected the data with the idea of focusing on code, and that’s probably the reason why those repos are excluded (the two *-specs repos that remain are in fact a bug on our side).

By: Jesus M. Gonzalez-Barahona https://bitergia.com/blog/metrics/some-developers-are-more-equal-than-others/comment-page-1/#comment-259 Wed, 30 Dec 2015 23:05:39 +0000
In reply to Doug Hellmann (@doughellmann).

In this little post I was focused on finding out how people can “push” their changes through code review, because that’s an interesting characteristic. Time-to-merge is one of the factors in time-to-deploy (of new features, bug fixes), and therefore is an interesting metric to track.
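To make the time-to-merge metric concrete, here is a minimal sketch of how it could be computed from Gerrit change records. The `created` and `submitted` field names and timestamp layout follow Gerrit’s REST API, but the dict shown is illustrative data, not output from the dashboard:

```python
from datetime import datetime

def time_to_merge(change):
    """Hours from first upload to merge for one Gerrit change.

    `change` is assumed to be a dict with 'created' and 'submitted'
    timestamps in Gerrit's "YYYY-MM-DD HH:MM:SS.fffffffff" format.
    """
    fmt = "%Y-%m-%d %H:%M:%S"
    created = datetime.strptime(change["created"][:19], fmt)
    merged = datetime.strptime(change["submitted"][:19], fmt)
    return (merged - created).total_seconds() / 3600.0

change = {"created": "2015-12-01 10:00:00.000000000",
          "submitted": "2015-12-03 16:30:00.000000000"}
print(time_to_merge(change))  # 54.5
```

Aggregating this per author (median, quartiles) is what lets you see who “pushes” changes through review faster than others.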

But you’re completely right. If you want to find effective contributors, finding people who help others improve can be very valuable. In a different context we’re studying mentorship in code review, by analyzing who is reviewed by whom, what those reviews look like, and how people improve over time, trying to link that to specific “mentors” (reviewers who are especially helpful). And yes, that’s very interesting, and important for the long-term health of a project.
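The who-reviews-whom analysis boils down to building a weighted reviewer→author graph. A minimal sketch, assuming each change has been flattened to an `author` name and a list of `reviewers` (illustrative field names, not Gerrit’s exact schema):

```python
from collections import Counter

def review_pairs(changes):
    """Count (reviewer, author) pairs across a set of changes.

    Self-reviews are skipped; the resulting Counter is the edge-weight
    map of a reviewer -> author graph.
    """
    pairs = Counter()
    for change in changes:
        for reviewer in change["reviewers"]:
            if reviewer != change["author"]:
                pairs[(reviewer, change["author"])] += 1
    return pairs

changes = [
    {"author": "alice", "reviewers": ["bob", "carol"]},
    {"author": "dave", "reviewers": ["bob", "alice"]},
]
pairs = review_pairs(changes)
print(pairs.most_common(1))  # [(('bob', 'alice'), 1)] or another pair with count 1
```

Repeated high-weight edges toward the same newcomers are one signal of a mentoring relationship; tracking how those newcomers’ review outcomes change over time is the other half of the analysis.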

An interesting open path is the one you suggest in your comment: identifying good practices, such as outstanding reviewer comments. Thanks a lot for it.

By: Doug Hellmann (@doughellmann) https://bitergia.com/blog/metrics/some-developers-are-more-equal-than-others/comment-page-1/#comment-258 Wed, 30 Dec 2015 19:31:03 +0000

After thinking about this further, there may be better ways to identify effective contributors. For example, what could we learn if we could identify the reviewers who most often provide actionable review comments and then follow up on the results to help contributors land their patches? Could we help other reviewers improve the feedback they give by studying the way feedback is given on those successfully updated reviews? And could we compile a list of frequent comments (or at least themes) to help contributors craft their patches so they are more likely to be accepted with fewer revisions? To find the “good reviewers” I would start by looking for folks who vote negatively on one patchset and then +1 or +2 after a new patchset is added to the same review.
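The heuristic described above (a negative vote on one patchset followed by a positive vote on a later patchset of the same review) can be sketched as follows. The input structure, a mapping of change ids to `(patchset, reviewer, value)` vote tuples, is an assumption about how the Gerrit data might be flattened:

```python
def helpful_reviewers(reviews):
    """Find reviewers who voted negatively on one patchset and then
    positively on a later patchset of the same review."""
    helpful = set()
    for votes in reviews.values():
        by_reviewer = {}
        for patchset, reviewer, value in sorted(votes):
            by_reviewer.setdefault(reviewer, []).append((patchset, value))
        for reviewer, history in by_reviewer.items():
            negatives = [p for p, v in history if v < 0]
            # Helpful: a positive vote on a patchset later than the
            # earliest negative one, within the same review.
            if negatives and any(p > min(negatives) and v > 0
                                 for p, v in history):
                helpful.add(reviewer)
    return helpful

reviews = {
    "I123": [(1, "bob", -1), (2, "bob", 2), (2, "carol", 1)],
    "I456": [(1, "carol", -1)],
}
print(helpful_reviewers(reviews))  # {'bob'}
```

This only flags a candidate pattern, of course: confirming that the reviewer’s comments actually drove the fix would still require reading the comment threads.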

By: Doug Hellmann (@doughellmann) https://bitergia.com/blog/metrics/some-developers-are-more-equal-than-others/comment-page-1/#comment-257 Wed, 30 Dec 2015 16:47:23 +0000
In reply to Jesus M. Gonzalez-Barahona.

If you filter on repositories ending in “-specs” that would identify all of the design documents. You would also want to remove the “openstack/governance” repository, since that contains policy documents that would be subject to lengthy discussion requirements. There are other sorts of repositories containing artifacts that aren’t code, but it’s less clear that those should be filtered in the same way. For example, “openstack/releases” holds data files describing deliverables we prepare, but changes to that repository do require some thoughtful review and we try to review those fairly quickly, except during release freezes. The openstack/requirements repository with the list of allowed dependencies is similar in that regard.
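A minimal sketch of that filter, using Python’s `fnmatch` for the name patterns. The exclusion list follows the rules suggested above (“*-specs” plus openstack/governance); treating openstack/requirements as code or not is, as noted, a judgment call:

```python
import fnmatch

# Repositories whose review dynamics differ from ordinary code review,
# per the discussion above; adjust to taste.
EXCLUDE_PATTERNS = ["*-specs", "openstack/governance"]

def code_repos(repos):
    """Keep only repositories that should count as 'code' when
    measuring review velocity."""
    return [r for r in repos
            if not any(fnmatch.fnmatch(r, p) for p in EXCLUDE_PATTERNS)]

repos = ["openstack/nova", "openstack/nova-specs",
         "openstack/governance", "openstack/releases"]
print(code_repos(repos))  # ['openstack/nova', 'openstack/releases']
```

The same patterns would work directly as a negated `project:` filter in the Kibana query bar.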

By: Jesus M. Gonzalez-Barahona https://bitergia.com/blog/metrics/some-developers-are-more-equal-than-others/comment-page-1/#comment-256 Wed, 30 Dec 2015 16:12:55 +0000
In reply to Doug Hellmann (@doughellmann).

Thanks for the suggestions. We’re now working on a schema where you would have access to every transaction in the code review process (reviews, approvals, rejections, verifications, etc.). With that, you could do the kind of analysis of rejections by CI that you mention.

For spec repos (and other kinds of “special” repos that may have different characteristics with respect to code review), it is a matter of filtering them out together. That’s doable in Kibana, knowing the list of repos… Is there any list of them, or some pattern that could be used to match the names?

By: Doug Hellmann (@doughellmann) https://bitergia.com/blog/metrics/some-developers-are-more-equal-than-others/comment-page-1/#comment-255 Wed, 30 Dec 2015 14:24:13 +0000

It would be interesting to see separate data for patches that were rejected by the CI system because a Jenkins job failed versus those where a human reviewer left a -1 with a request for a change. Sometimes we end up with multiple patchsets on a review because the easiest way to run the integration tests is to let the automated systems do that for you. I wonder how often that really happens, though.
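Splitting CI rejections from human ones mostly comes down to classifying the voting account. A minimal sketch, where the `(reviewer, value)` vote pairs and the CI account list are assumptions to check against the real data (at the time, “jenkins” was the CI account in OpenStack’s Gerrit):

```python
# Accounts used by the automated test infrastructure; treat this as
# an assumption to verify against the actual Gerrit instance.
CI_ACCOUNTS = {"jenkins"}

def split_rejections(votes):
    """Partition negative review votes into CI and human rejections.

    `votes` is a list of (reviewer, value) pairs for one change;
    only negative votes are kept.
    """
    ci, human = [], []
    for reviewer, value in votes:
        if value < 0:
            (ci if reviewer in CI_ACCOUNTS else human).append(reviewer)
    return ci, human

votes = [("jenkins", -1), ("doug", -1), ("jesus", 2)]
ci, human = split_rejections(votes)
print(len(ci), len(human))  # 1 1
```

Counting how many new patchsets follow each kind of -1 would then answer the “how often does that really happen” question.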

It would also be interesting to separate out the “specs” repos, which contain design documents for which we expect much longer review times because of the nature of the content, from more standard code repos, where we hope to have a higher velocity by making incremental changes.
