On a complex client project, we faced the challenge of designing a complete development and build setup. On the one hand, it needed to allow for feature-based development. On the other, it had to provide insightful (test) results about the state of the software and all its components. Below, I’ve explained why we chose the specific project setup and how we built the components.
1. The Maven-based development project
Some key aspects of the project were clear right from the beginning:
- Maven-based development
- Multiple JEE web applications, which are to run in the WebSphere application container
- RCP client for system operators
- MQ messaging between the applications
- MQ messaging and HTTP communication for data providers and consumers
During the software development stage, the project setup naturally evolved. This included versioning the source code, which we migrated seamlessly from Subversion to Git while development was ongoing. But the changes to the project setup also involved the structure of the source code, the individual Maven modules, and the EAR containers. As part of these changes, we also migrated from WebSphere 7 to WebSphere 8.5 while development continued. Our experiences with the Git migration would probably fill a whole separate article. In the next section, I’ll describe the current setup and why we chose it.
2. Setting up the basic project structure with multiple Maven modules
The project consists of five independent web applications and an RCP client. This setup raised a key question: how should we organize their source code in the repository? At this point, we had several options: a repository for each Maven component, a repository for each application, or a repository for the entire project. One consideration was crucial here: the build and in particular the release build needed to be fast and as consistent as possible.
The project contains several Maven modules that are used jointly by the applications, so we had to make sure that the applications referenced the correct release version of these modules. This makes the application build quite complicated, because it inevitably involves several steps: first the (release) build of the shared modules, then updating the referenced versions in the dependent modules, and finally the release builds of those modules. This split really complicates things, as we had learned from painful experience before the Git migration. Up until that point, the components were individually versioned, which resulted in an incredibly complex release process. In particular, it made delivering a bug fix for the production version very complicated. And that is exactly when time pressure usually becomes a factor.
Ultimately, we came to the following decision:
- Just one overall repository
Versioning for the entire project takes place in a shared Git repository. This enables us to identify and reproduce the precise release status.
- Creating a clean structure for the Maven modules
We divided the project into a clean Maven structure. The root POM contains all the version definitions as well as references to the child modules. The next level of structuring takes place in the backend and frontend. The backend contains the WebSphere applications on the next levels, and the frontend contains the RCP client.
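Such a root POM might look roughly like this (a minimal sketch; the group ID, property names, and version numbers are illustrative, not the project’s actual values):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example.project</groupId>
  <artifactId>project-root</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <!-- Central version definitions, inherited by all child modules -->
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <tycho.version>0.22.0</tycho.version>
  </properties>

  <!-- The two structural levels directly below the root -->
  <modules>
    <module>backend</module>
    <module>frontend</module>
  </modules>
</project>
```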
3. Designing the Maven project structure
The module structure consists of the following sections:
```
Project root
+ backend
| + core
| | + coreModuleA
| | | - pom.xml
| | + coreModuleB
| | | - pom.xml
| | - pom.xml
| + common
| | + commonModuleA
| | | - pom.xml
| | + commonModuleB
| | | - pom.xml
| | - pom.xml
| + webservice
| | + webserviceModuleA
| | | - pom.xml
| | + webserviceModuleB
| | | - pom.xml
| | - pom.xml
| - pom.xml
+ frontend
| + client
| | + clientBundleA
| | | - pom.xml
| | + clientBundleB
| | | - pom.xml
| | + product
| | | - pom.xml
| | - pom.xml
| + client-tools
| | + toolBundleA
| | | - pom.xml
| | + toolBundleB
| | | - pom.xml
| | - pom.xml
| - pom.xml
- pom.xml
```
There are no profiles below the backend; they exist only in the frontend. This allows us to build the entire backend with a single build. The frontend is deactivated by default.
The frontend module build uses Tycho plug-ins: they connect the two worlds of the Maven and OSGi or RCP builds. However, a major restriction applies here: you can build just one RCP product within a Maven reactor, that is to say, a Maven build. This is where Maven profiles come in. They are an easy way to control precisely which RCP product is to be built.
Ultimately, we implemented the following profile setup:
frontend/pom.xml:

```xml
<modules>
  <module>client-tools</module>
</modules>
<profiles>
  <profile>
    <id>client</id>
    <modules>
      <module>client</module>
    </modules>
  </profile>
</profiles>
```

frontend/client-tools/pom.xml:

```xml
<profiles>
  <profile>
    <id>toolA</id>
    <modules>
      <module>toolBundleA</module>
    </modules>
  </profile>
  <profile>
    <id>toolB</id>
    <modules>
      <module>toolBundleB</module>
    </modules>
  </profile>
</profiles>
```
We integrated all the other modules directly into the modules section of their respective parent POMs. This setup allows us to build the entire project without the RCP components with a simple “mvn clean install” in the project root. By activating the corresponding profile, we can also build an individual RCP product without touching any other modules, using the same kind of Maven invocation.
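Sketched as concrete invocations (the profile IDs and paths match the structure shown above; the exact flags are an assumption, not the project’s documented commands):

```shell
# Build everything except the RCP components (frontend profiles are inactive by default)
mvn clean install

# Build the RCP client product by activating its profile in the frontend build
mvn clean install -f frontend/pom.xml -P client

# Build a single tool bundle via its profile
mvn clean install -f frontend/client-tools/pom.xml -P toolA
```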
4. Continuous Integration
We built the project entirely via Jenkins using the above structure and deployed it to various development and test systems. Fundamentally, two Jenkins jobs are enough to build the core components: one job for all the backend components and one for the frontend.
We also created two versions of each of these two jobs: one as a continuous job and one as a nightly job. The difference between them is that we trimmed the continuous job for speed; using the incremental build option in the advanced settings of the Jenkins Maven job is one way to achieve this. The nightly job, on the other hand, always does a full build and also performs the Sonar analysis of the entire project.
4.1. Backend build
In the version control system (VCS) configuration of the Jenkins job, we entered “frontend/.*” under Excluded Regions. This means that pushes that only touch frontend code do not trigger a backend build. We configured the build as “mvn clean deploy” in the root directory of the Git clone. Beyond this, the build doesn’t have any special features.
4.2 Frontend (client) build
The frontend build is more complex, as we also had to factor in pushes in the common part. We therefore configured the Excluded Regions accordingly: changes below the two excluded folders are ignored, while everything else triggers a build, including changes to the POM files. The individual build steps are more extensive at this point, because the parent POMs have to be available for a complete build of the frontend: even if we are building just the frontend, we still need all the parent POMs. One workaround might be resolution via a Maven proxy (Nexus). But this creates a risk: we might pick up outdated versions if the backend build has not yet deployed the respective components to the proxy.
For this reason, we explicitly used individual build steps to build all the relevant artifacts. Our aim is to have a complete and up-to-date Maven hierarchy:
Pre-build steps:

```
1. Root POM:
   POM file:      pom.xml
   Maven command: mvn --non-recursive clean install

2. Backend POM:
   POM file:      backend/pom.xml
   Maven command: mvn --non-recursive clean install

3. Common components:
   POM file:      backend/common/pom.xml
   Maven command: mvn clean install

4. Frontend POM:
   POM file:      frontend/pom.xml
   Maven command: mvn --non-recursive clean install
```

After these pre-build steps, all the prerequisites are in place to build and deploy the RCP client:

```
Client build:
   POM file:      frontend/client/pom.xml
   Maven command: mvn clean deploy
```
5. Fine-tuning the build
After implementing the above setup, continuous integration (CI) already works reliably. We can also follow the usual scheme with continuous and nightly builds. For example, we can trigger continuous builds via pushes. The nightlies on the other hand build once at night and perform advanced tests and the Sonar analysis.
However, various limitations quickly emerged. One of them was the sheer size of the project, which is reflected in how long it takes to run a build and in how much work is lost when a build is aborted (even though an aborted step would not affect the other steps within the build). Now it was time to fine-tune the build to make it faster and more tolerant.
5.1 Separating WebSphere deployment from the build
Initially, we deployed the individual applications to various WebSphere dev instances within the build itself. However, each application takes several minutes to deploy, which makes the build time extremely long. As a result, developers do not really get “fast” feedback.
To solve this problem, we moved the WebSphere deployment into a Maven profile and executed that profile in a separate build job. The actual build job now only builds the applications; downstream deployment jobs then deploy them to the WebSphere instances.
We therefore added a delivery module in all the root POMs of the WebSphere applications:
```
...
| + core
| | + coreDelivery
| | | - pom.xml
| | + coreModuleA
| | | - pom.xml
...
| + webservice
| | + webserviceDelivery
| | | - pom.xml
| | + webserviceModuleA
| | | - pom.xml
| | + webserviceModuleB
| | | - pom.xml
...
```
The process within the delivery modules is exactly the same for all WebSphere applications. Fundamentally, we have not changed the project build. However, we have introduced several downstream jobs for the WebSphere deployment of the individual components. Only the build job can trigger these jobs. This means that the downstream jobs do not start unless the application has been rebuilt.
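A delivery module’s deployment profile might look roughly like this (a sketch only: the profile ID matches the `deploy2WAS` profile used in the deployment jobs, but the choice of the exec-maven-plugin to call WebSphere’s wsadmin tooling, as well as all script names, are assumptions rather than the project’s actual configuration):

```xml
<!-- coreDelivery/pom.xml (sketch) -->
<profiles>
  <profile>
    <id>deploy2WAS</id>
    <build>
      <plugins>
        <!-- Hypothetical: run the WebSphere deployment script in the deploy phase -->
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <executions>
            <execution>
              <phase>deploy</phase>
              <goals>
                <goal>exec</goal>
              </goals>
              <configuration>
                <executable>wsadmin.sh</executable>
                <arguments>
                  <argument>-f</argument>
                  <argument>deployCore.py</argument>
                </arguments>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```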
Here is an example of the core deployment job:
```
POM file:      backend/core/coreDelivery/pom.xml
Maven command: mvn clean deploy -Pdeploy2WAS
```
This now means that a change triggers a build. WebSphere deployments do not run independently until afterwards. Usually these deployments run in parallel for the individual WebSphere applications.
5.2 Test, integration and production deliveries
For deployment to the test, integration and production environments, we needed to provide the release notes, transfer documents, any scripts for the database updates, and so forth. To simplify this step, we created additional profiles in the respective delivery modules. These profiles generate zip archives containing the necessary files via the Maven Assembly plug-in. We implemented the same procedure in all the delivery modules, a decision that really paid off.
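Such a packaging profile might be sketched like this (the profile ID and the descriptor path are illustrative; the assembly descriptor itself, which lists the release notes, scripts, and other files to include, is omitted):

```xml
<!-- Sketch of a delivery-packaging profile in a delivery module's pom.xml -->
<profile>
  <id>delivery</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
          <!-- Hypothetical descriptor defining the zip's contents -->
          <descriptors>
            <descriptor>src/assembly/delivery.xml</descriptor>
          </descriptors>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```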
6. Release build
The build cannot be performed in a single Maven run due to the RCP client. This rules out a real Maven release build via the Maven Release plug-in. For this reason, we set up a series of individual Jenkins jobs to replicate the individual steps. If these jobs are directly linked to each other, we can also perform the release build “with just one click.”
6.1 Breaking down the build steps
The individual steps of a release build are:
- Set version entries to release version and push them
- Build the backend and frontend
- Create a tag
- Set version entries for next dev iteration
- Create a site build of the release version
We carry out the entire release build on a separate release branch, which we create from the current development state immediately beforehand and merge back afterwards. This is also why the tag is not created until after a successful build.
We numbered the build jobs consecutively and gave them descriptive names. This makes it easy for any authorized user to perform a release build at any time:
```
ReleaseStep01-SetReleaseVersion
ReleaseStep02.1-BuildBackend
ReleaseStep02.2-BuildFrontend
ReleaseStep03-CreateTag
ReleaseStep04-SetDevVersion
ReleaseStep05-CreateSite
```
It’s possible to execute the two “actual” build steps in parallel. This is why we combined them in Step 2.
6.2 Setting the versions
The release build requires two updates of all the version entries: one for the actual release build with the effective release version, and then another to set the following development version.
The Maven versions plug-in can set the versions in all the POM files of a Maven project, and this works smoothly in a Jenkins job. It gets more problematic with a Maven RCP project (in our case, the frontend part), because you have to set the versions not only in the POM files but also in the feature and product files.
Fortunately, Tycho has a plug-in for this. However, you must follow the sequence of steps exactly. The Tycho versions plug-in first resolves the entire dependency tree before updating the versions of the individual components and files, so it must be able to resolve all the dependencies cleanly. The easiest way to ensure this is to deploy all the components to the Maven proxy by running a normal build beforehand. It is also essential to run the Tycho versions plug-in before the Maven versions plug-in; otherwise, the Tycho versions plug-in will not find the shared components, as they would already carry the new version. We therefore created the Jenkins job for setting the versions with the following structure:
- Clone the repo and check out the release branch
- Set the versions of the frontend part via the Tycho versions plug-in
- Set the versions of the backend part via the Maven versions plug-in
- Push the changes
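The two version-update steps above can be sketched as Maven invocations (the version number is illustrative; `tycho-versions:set-version` and `versions:set` are the respective plug-ins’ standard goals, but the exact flags used in the project are an assumption):

```shell
# Frontend first: Tycho updates the POMs plus the feature and product files
mvn -f frontend/pom.xml org.eclipse.tycho:tycho-versions-plugin:set-version -DnewVersion=1.2.0

# Backend afterwards: the Maven versions plug-in updates the remaining POM files
mvn versions:set -DnewVersion=1.2.0 -DgenerateBackupPoms=false
```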
We run this process twice during a release, so we set up this job as a parameterized job. In addition, we created two trigger jobs to trigger the actual version update job. The two trigger jobs are ReleaseStep01-SetReleaseVersion and ReleaseStep04-SetDevVersion.
6.3 Release build
There are no special features here: clone the repo, check out the release branch, and run the Maven build or Maven deployment.
6.4 Create a tag
This job creates the Git tag via a shell step: clone the repo, check out the release branch, create the tag, and push it.
6.5 Maven site build
The Maven site build is very special and not feasible in one step. One major reason is that the build aborts with out-of-memory (OOM) errors even with an extremely large Java heap. This is probably due to the many different steps in this process, which include creating the Javadocs and documenting the different database schemas via the SchemaSpy Maven plug-in. These steps consume a lot of resources, hence the OOM errors. We therefore divided the site build into individual steps and merged the results afterwards. The whole thing takes place inside a shell script, which I will not explain here.
The build and its duration still have potential for improvement. As I outlined above, the continuous build runs incrementally and is therefore sometimes very fast; how fast depends on the position of the triggering commit in the dependency tree. However, the deployments are triggered for all the WebSphere applications, because at this point it is not possible to determine which web application has actually been rebuilt.
An alternative would be to split the build into several separate jobs. These jobs would, in turn, map the dependencies. This has two advantages: Firstly, only the relevant components are rebuilt. Secondly, only the rebuilt components are deployed in WebSphere. This completely eliminates the deployment downtime for the applications that do not contain any changes and therefore do not have to be redeployed.