The Journey Of A Jersey Commit

Let’s do a small behind-the-scenes trip and show you what tools and processes we are using in the Jersey team in order to take care of our code quality, build stability and automation of various tasks, such as integration testing against WebLogic and GlassFish servers. I am aware that we aren’t doing anything unseen before, we might not even be tech-hipster-compliant, but I thought someone might like to look behind our curtain and see what tools we chose. In the end, we are open source and we put emphasis on our community, so sharing these things (not completely, I won’t tell you passwords nor give you VPN access) is a natural choice. However, I will not cover everything extensively, it’s a bit late already, so let’s keep things short. I can always write a follow-up if there is some interest.

This post will mainly describe the lifecycle of one of our internal commits. Things are quite different with pull requests from the community via GitHub, which I might write about some other day.

Who would not like (g)it?

Probably an obvious fact – we use git as our versioning system. It is flexible, offers a variety of third-party convenience tools and works great with (surprise, surprise…) GitHub and with the tools we use to manage our commit lifecycle.

There is still a lot for me to learn about git as it is very powerful, but the good thing is that the documentation is great and the amount of resources on the internet is enormous. There is virtually no question you might have that hasn’t already been answered on StackOverflow or elsewhere.

Review time

After one of us pushes a commit, the process starts. We have a rule that every piece of code is checked by at least four eyes. And that minimum number of eyes does not seem likely to change in the near future, at least not until we have a one-eyed or three-eyed colleague.

Therefore we have incorporated a code review tool into our process. For Jersey commits, it is currently Gerrit, the tool originally developed for the Android project. BTW, have you noticed that so far all of our tools end with -it? But that’s going to change soon, read on.

But before a human looks at the change, the code is checked by a series of automated jobs. We call on Hudson for help with that – Gerrit triggers several Hudson jobs to verify the commit.

Copyright header check

Yes, we are free and open source, but we work within a large company and there is some “lawyer stuff” to be considered as well. Thus we have certain rules regarding the copyright and license headers in each file touched. This does not relate to code quality, but you know, better safe than sorry.
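Just to give you a rough idea, here is a minimal sketch of what such a header check could look like. It is a hypothetical illustration only, not our actual Hudson job – the header marker and the default source directory are made-up examples:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    /**
     * Hypothetical sketch of a copyright/license header check. The real job is
     * more involved; the header marker and the default directory are made up.
     */
    public class CopyrightHeaderCheck {

        private static final String HEADER_MARKER = "Copyright (c)";

        public static void main(final String[] args) throws IOException {
            final Path sourceRoot = Paths.get(args.length > 0 ? args[0] : "src/main/java");

            final List<Path> offenders;
            try (Stream<Path> files = Files.walk(sourceRoot)) {
                offenders = files
                        .filter(path -> path.toString().endsWith(".java"))
                        .filter(CopyrightHeaderCheck::missingHeader)
                        .collect(Collectors.toList());
            }

            offenders.forEach(path -> System.err.println("Missing header: " + path));
            if (!offenders.isEmpty()) {
                // A non-zero exit code fails the job, which shows up as a negative vote on the change.
                System.exit(1);
            }
        }

        private static boolean missingHeader(final Path file) {
            // Only the first few lines of a file need to be inspected for the header.
            try (Stream<String> lines = Files.lines(file)) {
                return lines.limit(20).noneMatch(line -> line.contains(HEADER_MARKER));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }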

Code style watchdog

As you may have already read in Michal’s in-depth blog post, we recently incorporated Checkstyle into our verification build in order to ensure consistent code quality and to keep adhering to our code conventions. It’s a tool that makes you want to scream in the beginning, but it slowly becomes less and less obtrusive as you get used to it.
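To show what that means in practice, here is a hypothetical before/after snippet with the kind of issues that common Checkstyle checks (such as UnusedImports, WhitespaceAround and NeedBraces) report. The class and the rules picked are just an illustration, not our actual configuration:

    // Hypothetical "before" snippet with typical Checkstyle findings:
    //
    //     import java.util.List;                  // unused import
    //
    //     public String greet(String name){       // missing space before '{'
    //         if (name == null) return "Hello!";  // 'if' statement without braces
    //         return "Hello, " + name + "!";
    //     }

    // The same method after cleanup, passing those checks:
    public class GreetingExample {

        public String greet(final String name) {
            if (name == null) {
                return "Hello!";
            }
            return "Hello, " + name + "!";
        }
    }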

Verification

After that (actually, it runs in parallel), two more jobs are executed – one builds the entire project and runs all the unit tests, and the other one performs the integration and end-to-end testing.
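To make the unit testing part a bit more concrete, here is a minimal sketch of the kind of test such a job runs, written against the Jersey 2.x test framework. The resource and the test class are hypothetical and serve only as an illustration:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Application;

    import org.glassfish.jersey.server.ResourceConfig;
    import org.glassfish.jersey.test.JerseyTest;

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class HelloResourceTest extends JerseyTest {

        // Hypothetical resource under test.
        @Path("hello")
        public static class HelloResource {
            @GET
            public String hello() {
                return "Hello, Jersey!";
            }
        }

        @Override
        protected Application configure() {
            // Register just the resource under test; the test container provider
            // on the classpath decides how the app is actually deployed.
            return new ResourceConfig(HelloResource.class);
        }

        @Test
        public void helloReturnsGreeting() {
            final String response = target("hello").request().get(String.class);
            assertEquals("Hello, Jersey!", response);
        }
    }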

If anything fails, we can either re-trigger one of the phases if we think the failure is intermittent, or fix the problem and push a new patch set. And of course, the process starts again from scratch.

Human touch

In order to give the otherwise automated process some humanity (no, that’s not the reason – we are developers…), there is one more step after those checks have passed. One of the colleagues has to take a look at the change and either approve it (wow) or give their comments (which is, at least in my case, more likely). New patch set, new verification, new review…

Jobs again

Most of the time, this loop isn’t infinite, so the commit finally gets approved. This means it is merged into our master branch. After this happens, another Hudson job periodically syncs the changes to (and from) GitHub. So if everything works nicely (and, luckily, it does most of the time), you can check out and use a very fresh (~5 minutes old) Jersey snapshot.

Things can go wrong even after

But there are more jobs that won’t let us rest (nor REST) on our laurels.

There is a whole bunch of other automated processes that test whether we still integrate well into GlassFish and WebLogic, that try to release the snapshot artifacts to the Maven repository at java.net, that test our code with other projects, such as MOXy, and that run the TCK tests to ensure we didn’t break the JAX-RS specification, etc. A whole bunch of jobs.

But those are things that don’t go wrong too often.

And even after – “after”

Of course, we are just humans, and our automated verifications are written by humans. So do we have bugs? Oh, hell yeah! And who finds them? FindBugs? Maybe to a small extent, but…

Of course it’s you, the good people from the Java community. And we do our best too. As do our colleagues who are using Jersey in their applications across the company.

And there are also other things than bugs to consider, so we also run some performance tests on a regular basis and try to automate and fine-tune that process as much as possible – more on that maybe next time.

Next week…

… this article might be outdated. Or maybe next month. These are things we do care about and we constantly try to evolve and improve them (as much as we can, because there is always enough other work to do), because we believe that good infrastructure is essential and protects us and our users from potential problems.
