Organic Documentation

I have always enjoyed modelling requirements in use case form. It was a way for me to think a problem through and confirm my understanding. The issue I have with use case modelling is that once the solution is developed, the use case model quickly becomes outdated. Except of course if you are extremely disciplined.

From a longevity point of view, I believe there are principally two types of documentation. First, there is documentation that will rarely change: the documentation that describes the architecture of the application, the various interaction points and the components involved. Second, there is documentation that is more volatile and changes regularly as features are added to and removed from the system; the use case model, for example.

Ultimately one wants documentation to be organic, changing at the same rate as the code. With this in mind, the code is definitely a source of documentation, but it is not enough. The code only defines the how, not the what or the why. If you are following clean coding principles, then by making your code readable, without any unnecessary comments or Javadoc, some of the why is conveyed in the code. But not enough of the why is covered, in my opinion.

So what can we add to the code that carries this additional meaning and changes at the same frequency as the code? Well, the tests of course. All good software developers should be following a Test Driven Development (TDD) approach, and in my opinion the unit tests, if written appropriately, could be used to convey the appropriate functional application knowledge. However, unit tests are still technical, and this forces something into a space in which it should not really be playing.

Behaviour Driven Development (BDD) is a progression of the TDD approach to software development. BDD bridges the gap between Scrum's user stories and the test-driven approach [1]. It draws on the ubiquitous language described by Eric Evans in his book Domain-Driven Design: Tackling Complexity in the Heart of Software, allowing business users to relate their requirements to the development team, who in turn transform these requirements into tests and ultimately working software.

According to Dan North [2], BDD is just like TDD; the difference is that it is for everyone, not only developers. If you are in a small team of only developers then TDD is fine, and having a ubiquitous language to communicate with the business is not as beneficial. However, when you have product owners, business analysts, testers, project managers and developers all communicating around a single domain, BDD starts to add value.

So if you have BAs and product owners elaborating user stories in the Given, When, Then format, the developers can implement the tests and then complete the functionality. When you come to amending the feature, you can pull the tests out of version control, and whoever is defining the amendment can use the existing tests to further define or refine the requirements.
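To make the format concrete, below is a minimal sketch of a scenario and its matching step definitions, assuming Cucumber's Java bindings; the domain classes (Customer, Order, OrderStatus) and the scenario itself are illustrative, not taken from any real project.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertEquals;

// Feature file scenario, as written by the BA or product owner:
//   Given a customer with an overdue account
//   When the customer places a new order
//   Then the order is held for credit approval
public class OrderSteps {

    private Customer customer; // hypothetical domain classes
    private Order order;

    @Given("a customer with an overdue account")
    public void aCustomerWithAnOverdueAccount() {
        customer = Customer.withOverdueAccount();
    }

    @When("the customer places a new order")
    public void theCustomerPlacesANewOrder() {
        order = customer.placeOrder();
    }

    @Then("the order is held for credit approval")
    public void theOrderIsHeldForCreditApproval() {
        assertEquals(OrderStatus.HELD_FOR_CREDIT_APPROVAL, order.getStatus());
    }
}

The scenario text lives in version control next to the code, so when the feature changes, the failing step definitions point straight at the documentation that needs updating.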

I believe you now have the potential for documentation that truly lives with the application. In the process you are eliminating waste and adding value where it matters most: to your client. So the next thing on my list to write about is a concrete example using BDD.

References

  1. Behaviour Driven Development – available at http://www.code-magazine.com/article.aspx?quickid=0805061&page=1
  2. BDD is like TDD if … – available at http://dannorth.net/2012/05/31/bdd-is-like-tdd-if/

Using JPA – Adding relationships and queries

Last week I created an introductory post on using OpenJPA as your JPA implementation, which is available here. This week I wanted to build on it by adding a relationship and some simple queries.

So in this example I have built on last week's Customer entity. I have added an Address entity, which is referenced from Customer as both a residential address and a postal address. The residential address is required, hence it is an input into the Customer constructor, while the postal address is optional. Once again I have followed a Test Driven Development (TDD) approach and the methods are tested using JUnit.

As always I have used Maven to do the build and manage dependencies and I have used Liquibase as per my previous example to manage the database tables and test data.

The full source for the project is available on GitHub at https://github.com/craigew/JPARelationships.

First off, the new entity is called Address and is a very simplistic representation of an address: a basic POJO with the @Entity annotation.

[gist https://gist.github.com/craigew/6742083 /]
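If the gist does not render for you, an entity along those lines might look like the following sketch; the field names are illustrative, and I am assuming it extends the BaseEntity introduced in the previous post.

import javax.persistence.Entity;

@Entity
public class Address extends BaseEntity {

    private String street;  // illustrative fields, not necessarily those in the gist
    private String city;
    private String country;

    protected Address() {
        // no-arg constructor required by JPA
    }

    public Address(String street, String city, String country) {
        this.street = street;
        this.city = city;
        this.country = country;
    }
}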

Because I have created a new entity that I want OpenJPA to manage, I must add the following line to the persistence.xml file found in the META-INF folder.

[gist https://gist.github.com/craigew/6742118 /]
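For reference, the line in question is simply a class element registering the new entity with the persistence unit; the package name here is an assumption on my part.

<class>com.craigew.entity.Address</class>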

Next I added the relationship to the Customer entity as a residential address and a postal address. As stated before, the residential address is required whereas the postal address is optional; this is specified with the optional attribute on the @OneToOne annotation. The @JoinColumn annotation refers to the foreign key on the Customer table.

[gist https://gist.github.com/craigew/6742150 /]
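In case the gist is not visible, the mapping would be along these lines; the cascade setting and the join-column names are my assumptions, not confirmed from the gist.

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.JoinColumn;
import javax.persistence.OneToOne;

@Entity
public class Customer extends BaseEntity {

    // optional = false: JPA will refuse to persist a Customer without one
    @OneToOne(cascade = CascadeType.ALL, optional = false)
    @JoinColumn(name = "RESIDENTIAL_ADDRESS_ID")
    private Address residentialAddress;

    // optional = true: the postal address may be left null
    @OneToOne(cascade = CascadeType.ALL, optional = true)
    @JoinColumn(name = "POSTAL_ADDRESS_ID")
    private Address postalAddress;

    protected Customer() {
        // no-arg constructor required by JPA
    }

    public Customer(Address residentialAddress) {
        this.residentialAddress = residentialAddress;
    }

    public void setPostalAddress(Address postalAddress) {
        this.postalAddress = postalAddress;
    }
}

A cascade of this sort is what would let the unchanged service methods insert and update the addresses along with the customer.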

Even though I have added the additional fields to the Customer entity, the methods I created last week in the CustomerManagementService remain unchanged. JPA handles the inserting and updating of the new address fields for us.

What I did do was add the ability to query for customers to the CustomerManagementService. To enable this, I added the following method to the DataAccess class to hide the scaffolding code.

[gist https://gist.github.com/craigew/6742218 /]
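As a rough sketch of what such a method could look like (the method and parameter names are mine, not necessarily the gist's):

import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

// Inside the DataAccess class, which already holds an EntityManagerFactory:
public <T> List<T> executeQuery(Class<T> type, String jpql, Map<String, Object> parameters) {
    EntityManager entityManager = entityManagerFactory.createEntityManager();
    try {
        TypedQuery<T> query = entityManager.createQuery(jpql, type);
        for (Map.Entry<String, Object> parameter : parameters.entrySet()) {
            query.setParameter(parameter.getKey(), parameter.getValue());
        }
        return query.getResultList();
    } finally {
        entityManager.close();
    }
}

A call from the service might then look like executeQuery(Customer.class, "select c from Customer c where c.residentialAddress.country = :country", parameters).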

This method is then simply referenced from the CustomerManagementService to return a typed list of all customers, or all customers in a given country.

[gist https://gist.github.com/craigew/6742245 /]

And lastly, the Customer tests look like the following.

[gist https://gist.github.com/craigew/6742269 /]

Now that I have a grasp of how JPA is implemented, I will be refactoring the domain model into a more appropriate Customer and Address using archetypes, and then exposing these methods via a RESTful API to represent a simplistic hexagonal architecture.

As always if you find this useful, please give me a like ;-).


Staying close to the Gemba in Software Development

There are several posts about how the Gemba walk is not necessarily suited to software development [1] and is not enough when identifying areas of improvement in software development [2]. Based on my experience, I agree.

Executives or managers walking the floor in a software development environment are not going to get the same benefit as they would walking the floor in a factory. In a physical factory you can at least see things happening and moving. In a software development environment you are going to see a bunch of people sitting at desks, having stand-up discussions around whiteboards, or a bunch of stickies on a whiteboard. Not very insightful.

Before going any further, what is the Gemba? In business, the Gemba is the place where value is created, and the Gemba walk is the act of going to see the process, understand the work and ask questions [3].

I want to argue the importance of actually having leaders with influence and credibility in the Gemba, on the front line so to speak. If a developer becomes a manager, or an architect for that matter, and they do not continue developing code that is released to production, I feel they become less effective over the years. That is my experience. I was a developer who became a systems analyst, then a solution architect and then a development manager. I was tinkering all the time with various technologies, but I found it became harder and harder to make decisions as I became less hands-on in developing production code.

As a result, I believe that if you are not actively within the Gemba it becomes difficult to properly understand the abstract, intangible world of software development, and you start to communicate in generalisations rather than concrete examples. Organisations must therefore value leaders in development roles, in the product teams, who are actively coding. These are the people who can create the space for experimentation with techniques and practices to improve the development craft, thereby eliminating waste and adding more value to clients.

References

  1. The Traditional Gemba Walk Has Low Value in Software Engineering – available at http://zsoltfabok.com/blog/2013/06/gemba-walk-has-low-value/
  2. Gemba Walk is Not Enough – available at http://brodzinski.com/2013/01/gemba-walk-not-enough.html
  3. Gemba – available at http://en.wikipedia.org/wiki/Gemba

Lean Software Development Principle – Build quality in

This is the second post in a series on a journey into the world of Lean Software Development. This post focuses on some approaches to help you build quality into your delivery.

The people and culture

I feel it is important to first have the entire team on the same wavelength when it comes to quality. It is no good having testers who are only interested in testing a completed product, or developers who don't see the value in writing unit tests, or BAs who are only interested in writing specs and engaging with the business.

If a developer is not prepared to ask for a code review from a peer, then they should not be on the team. Code reviews and pair programming are particularly useful in identifying issues in the code, and they also foster a culture of learning and sharing.

A culture of quality must be embedded into everything that anyone on the team does. As Scott Ambler says [1], your process should not allow defects to occur. However, when defects do occur, the increments should be small, allowing you to validate, fix and iterate. This is where the agile development methodologies add a lot of value by working on small pieces of work.

Development approach

Developers must be writing unit tests; I feel it is inexcusable for developers not to be. And shame on all the managers who push teams to deliver at the expense of unit tests. Not writing unit tests results in quality problems being found late in the development cycle by under-appreciated testers. And that is if the defects are picked up at all. Nothing like letting your clients test for you! I know for a fact that if I am making changes, having a suite of unit tests to run against my code increases my confidence in making the change.

Developers must also be checking in often, and the code they are checking in must be production quality. Small, complete, quality pieces of work, that are unit tested, should be getting checked in multiple times per day.

Another approach to improving quality is pair programming. I have always had a good experience when pairing: we have generally produced better code, I have learnt something from the other developer, and the defect count has been lower. Having a developer ask for a code review from another developer is also beneficial, especially when the piece of code is complex. Four eyes are generally better than two when picking up possible issues.

Quality Assurance should rather be seen as Quality Assistance [2]. The QA team's time and effort is much better spent assisting BAs and developers to identify possible quality issues up front and being proactive. The test analysts should be helping the developers identify and write the correct tests with the correct test coverage, and they should be helping the BAs find potential quality issues in their specifications.

Testers should be first-class citizens in the development lifecycle, not the second- or third-class citizens many organisations treat them as. If you can get the test analysts assisting up front with identifying unit tests, firstly they are completely aware of the change and secondly they are helping to enforce a Test Driven Development approach.

The use of technology

Now that you have developers writing unit tests and testers adding value at the beginning of the development lifecycle, you need to automate. Set up a CI server such as Jenkins to run all the unit tests and deploy the code automatically into a test environment every morning. This way you know that you are deploying quality code (the unit tests have validated this for you), and yesterday's changes are available to be tested today.

Along with this, have a code quality tool such as Sonar running against the codebase. The developers should then get into the habit of viewing Sonar every morning to pick up any new violations they added the day before. This is a habit we have instilled in our team, and the first thing I do in the mornings, with a cup of coffee, is fix any violations from the day before.

And why do testers still perform regression packs manually? Regression packs should be automated and run daily. The test analyst should only be updating the automated regression packs, so that they can be adding value, not doing what a machine should be doing.

In closing

Quality must be everyone's mantra and should be instilled into everything that anyone on the project team does. Fixing defects adds no value to clients, and value can actually be degraded when defects are picked up in production by a client.

References

  1. Agility@Scale: Strategies for Scaling Agile Software Development – available at https://www.ibm.com/developerworks/community/blogs/ambler/entry/principles_lean_software_development?lang=en
  2. Agile Testing: It’s about time – Atlassian Summit 2011 – available at http://www.youtube.com/watch?v=dYFzehMukAc&feature=youtu.be

Using OpenJPA as your JPA implementation

In this post I provide an example of creating a persistence layer using Apache's OpenJPA implementation of the Java Persistence API (JPA). I created the example using a TDD approach and have once again used Liquibase to version control the database. I have used a few more Liquibase parameters than in my previous post on Liquibase, primarily around managing the local test database and the test data I needed to run the examples.

This is a very basic example of using JPA and I will be building on it over the next few weeks to provide a more concrete example. As always, the full source for the project is available on GitHub and all dependencies are managed via Maven. To do a mvn install you will need access to an Oracle schema; simply change the database connection properties to point to your Oracle instance.

The source is available here https://github.com/craigew/JPAIntroduction.

First off, you need to add the following dependency to your POM to add OpenJPA to your project.

[gist https://gist.github.com/craigew/6659242 /]
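In case the gist is not visible, the dependency would be along these lines; the version shown is an assumption (whatever was current at the time), so adjust as needed.

<dependency>
    <groupId>org.apache.openjpa</groupId>
    <artifactId>openjpa</artifactId>
    <version>2.2.2</version>
</dependency>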

Next, because I am unit testing my code and deploying onto Tomcat rather than running in a JEE container, I needed to add the following enhancer plugin to the POM. The enhancer adds code to your persistent classes at build time, adding the necessary fields and methods to implement the required persistence features. There are three approaches to enhancement:

  1. Build time (as per this example)
  2. On deployment
  3. Or at runtime

Further information on the enhancer is available on the OpenJPA site here.

[gist https://gist.github.com/craigew/6659212 /]
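For reference, a build-time enhancer configuration would look roughly like the following; the version and the includes filter are assumptions on my part.

<plugin>
    <groupId>org.apache.openjpa</groupId>
    <artifactId>openjpa-maven-plugin</artifactId>
    <version>2.2.0</version>
    <configuration>
        <includes>**/entity/*.class</includes>
    </configuration>
    <executions>
        <execution>
            <id>enhancer</id>
            <phase>process-classes</phase>
            <goals>
                <goal>enhance</goal>
            </goals>
        </execution>
    </executions>
</plugin>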

Before moving on to the actual JPA code I want to explain some of the other configuration in the POM.

Lines 11 and 12 – when I am developing, I want the tables to be dropped and recreated by Liquibase. The “test” context maps to contexts in the Liquibase change log, and with the context set to “test” my test data is only executed in the test environment.

Line 39 – this is the converse of lines 11 and 12. Because this is in the release profile, I do not want to drop and recreate all the tables, and because the “test” context is not set, none of my unit testing data is created in the database.

Line 27 – I want to run my tests when running the dev profile.

Line 53 – I don’t want to run the tests when running the release profile.

[gist https://gist.github.com/craigew/6659255 /]

Finally, we can get to the actual code.

First off, the unit tests. These give you an idea of the methods I have created. I prefer a TDD approach because it makes you do a little bit of thinking and planning up front, and you get the massive benefit of being able to confidently refactor your code. In this particular example, I started out with a solution I was not happy with and was able to continuously refactor, knowing that my tests would tell me if I had broken anything.

[gist https://gist.github.com/craigew/6659601 /]
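If the gist is not rendering, a test in this style might look like the sketch below; the entity fields and service method names are illustrative, not necessarily those in the gist.

import org.junit.Assert;
import org.junit.Test;

public class CustomerManagementServiceTest {

    private final CustomerManagementService service = new CustomerManagementService();

    // Create-then-read round trip through the service and the database.
    @Test
    public void should_create_and_read_back_a_customer() {
        Customer customer = new Customer();
        customer.setName("Craig");
        Customer saved = service.addCustomer(customer);
        Assert.assertEquals("Craig", service.findCustomer(saved.getId()).getName());
    }
}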

As you can see, the example is a very simple create, select and update.

Now that we have got everything wired up, it is easy to create the first entity.

[gist https://gist.github.com/craigew/6659621 /]

You will notice that it extends a class called BaseEntity. I created this because I want all my entities to generate their unique IDs, or primary keys, in the same consistent manner, and I prefer my entities to be as simple as possible.

I am using an Oracle sequence, shared across all entities, to generate the unique identifier. As with the tables, the Oracle sequence is generated via Liquibase.

[gist https://gist.github.com/craigew/6659643 /]
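If the gist is not rendering, the shape of such a base class is roughly the following; the sequence name is an assumption and would match whatever the Liquibase change log creates.

import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
import javax.persistence.SequenceGenerator;

// Mapped superclass: not an entity itself, but its mapping is inherited
// by every entity that extends it.
@MappedSuperclass
public abstract class BaseEntity {

    @Id
    @SequenceGenerator(name = "entitySequence", sequenceName = "ENTITY_ID_SEQ", allocationSize = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "entitySequence")
    private long id;

    public long getId() {
        return id;
    }
}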

Now that we have created our entity class, we need to wire it up so that OpenJPA knows about it. Note the “persistence-unit”; this is used in the code to create the entity manager, and you will see this implementation in the DataAccess class. In one of my next posts I want to explore whether this persistence unit would be a suitable technical match for what the Domain Driven Design guys call a Boundary.

[gist https://gist.github.com/craigew/6659656]
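A persistence.xml along those lines would look roughly like this; the unit name, entity package and connection details are my assumptions, chosen to match the rest of the example.

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="customer" transaction-type="RESOURCE_LOCAL">
        <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
        <class>com.craigew.entity.Customer</class>
        <properties>
            <property name="openjpa.ConnectionURL" value="jdbc:oracle:thin:@localhost:1521/xe"/>
            <property name="openjpa.ConnectionDriverName" value="oracle.jdbc.OracleDriver"/>
            <property name="openjpa.ConnectionUserName" value="dev"/>
            <property name="openjpa.ConnectionPassword" value="dev"/>
        </properties>
    </persistence-unit>
</persistence>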

When I am creating my service classes I don’t want my application polluted with scaffolding-type code, so I created a class, using generics, to hide all the scaffolding from the service classes.

[gist https://gist.github.com/craigew/6659684 /]
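As a sketch of the idea, assuming the persistence-unit name above; the gist may differ in detail.

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class DataAccess {

    private final EntityManagerFactory entityManagerFactory =
            Persistence.createEntityManagerFactory("customer");

    // Generic persist: opens an entity manager, wraps the write in a
    // transaction and guarantees the manager is closed again.
    public <T> T persist(T entity) {
        EntityManager entityManager = entityManagerFactory.createEntityManager();
        try {
            entityManager.getTransaction().begin();
            entityManager.persist(entity);
            entityManager.getTransaction().commit();
            return entity;
        } finally {
            entityManager.close();
        }
    }

    public <T> T find(Class<T> type, long id) {
        EntityManager entityManager = entityManagerFactory.createEntityManager();
        try {
            return entityManager.find(type, id);
        } finally {
            entityManager.close();
        }
    }
}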

And the service class becomes extremely simple.

[gist https://gist.github.com/craigew/6659695 /]
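For illustration, with the scaffolding hidden the service can be as small as this sketch (names assumed, matching the test sketch earlier in the post):

public class CustomerManagementService {

    private final DataAccess dataAccess = new DataAccess();

    public Customer addCustomer(Customer customer) {
        return dataAccess.persist(customer);
    }

    public Customer findCustomer(long id) {
        return dataAccess.find(Customer.class, id);
    }
}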

So this is a very simple first implementation of JPA. In some of my next posts I want to explore the relationships between entities and create a bigger domain model.

If you have found this useful then please give me a like. ;-).


Lean Principle – Eliminate waste

First some background

This is the first in a series of posts that I mentioned I would be doing in this post.  The observations for this move towards Lean Software Development are taken from a project where we were rejuvenating an online system. The system had been neglected for a considerable period of time and, prior to this project, we had tried unsuccessfully to replace it with a packaged solution. The organisation is currently moving away from a truly waterfall SDLC with very bloated analysis and design phases and no automation.

Waste Elimination

Waste is often one of those low-hanging fruits. As you have less confidence in the quality of the delivery, more time is added up front for analysis and design. With each failed release, more analysis and design is done as a stopgap. And the result is that you become really good at doing analysis and design, not at delivering software to the client.

This quote by Peter Drucker really sums it up for me – “There is nothing so useless as doing efficiently that which should not be done at all”.

So what is waste? Bugs are waste. Inefficient use of time is waste. Ultimately, waste is anything that does not add value to the customer, including unnecessary code or functionality, unclear requirements and slow communication or processes.

Below are my observations around how eliminating waste, as a Lean principle, assisted in turning our delivery around.

The importance of a change leader

Eric Ries, who authored The Lean Startup, notes that organisations have “muscle memory”. I have personally experienced this: habit is hard to break, and it is just too easy to fall back to the old way of doing things.

I feel it is imperative when embarking on a journey like this that someone in the team has the mandate and is empowered to bring about changes. This person should be constantly reminding the members of the team of the need to change. It might be best to bring in an outsider, or move someone from another team, because the “muscle memory” of the team will keep leading it back to the well-trodden path. Change is not easy, and for the first portion of the journey frequent reminders will be needed. But as the habits that embrace change form, the reminding will be required less often.

And without someone or something constantly challenging you to change, why should you change?

Automate, Automate, Automate.

When we started, we had a Jenkins build server that required multiple clicks to do a build and deployment to an environment. This resulted in someone having to wait for each step to complete before initiating the next step. Waste. Our build and deployment is now a one-click affair and is deployed automatically to our test environments in the early morning. This has had a couple of benefits:

  1. A developer is not required for the deployment.
  2. Those individuals who are dependent on test environments and who arrive at work early in the morning are immediately productive.
  3. With a developer not being required for deployments, BA’s and testers were now doing the deployment into the QA environments. This allowed for whoever was doing functional testing to pull work whenever they were ready. And developers had one less context switch because they did not have to stop what they were doing to babysit a deployment.

Another area to automate deployments is the database component. This is where the usage of a tool such as Liquibase assists with automating and versioning your database changes. I wrote about how we have started using Liquibase here, and we are still in the process of bedding this down into our environments.

Proximity

Having the team members in close proximity to each other is a must. BAs, testers and developers being able to quickly communicate about the status of changes is invaluable. Rapid feedback eliminates much of the context switching that happens when team members are distributed. I am always astounded at how even a different floor in the same building can affect communication.

It is difficult to put a value on the conversations had over the top of a monitor, or on simply being able to turn around and have a quick discussion with a colleague.

Documentation

At the end of the day, documentation adds very little value to clients. Working software is what counts. That does not mean there should be no documentation, but rather that the documentation should simply be good enough to transfer understanding.

I believe that documentation should be the lowest fidelity possible to effectively transfer meaning and understanding. Photos of whiteboards, annotated screen printouts or the paper you used to work out a flow are more than adequate. Once coding is complete, the code becomes the best form of documentation.

I will caveat this, though: if developers, testers and BAs are not working hand in hand and sitting with each other, then I would not advocate this, and would rather recommend an alternative method of communicating understanding and requirements.

Clean Code

Our code base is anything but pristine, so making the smallest change is not only risky but also consumes far more time than it should. I don’t think this particular task will ever be 100% complete, but with each change we make we ensure we clean and refactor the code, and create unit tests allowing us to confidently start making changes. Principles such as clean code and software craftsmanship are guiding us in this journey to deliver better quality code.

Developer tools

From a developer's point of view we all want to be as efficient as possible, and this means our tools need to be the best. A developer should be able to build the entire source code and run the unit tests multiple times an hour. This means the machines we work on must have more than enough processing power and memory. If developers are working on underpowered machines, they are most likely wasting massive amounts of time each day, especially if there is an expectation that unit tests are run with every build.

Feature branching

I mentioned in another post how we moved from feature branching in version control to feature branching in code. Prior to this move, waste was created every time we created a new branch to start development for a release: firstly, other branches were not getting the benefit of enhancements to our frameworks; secondly, massive amounts of time were spent creating and merging branches. Time spent merging code is a real waste of time and effort, and is risky.

By toggling changes, we have the ability to deliver change quicker. Small changes and big changes happen side by side, with big changes moving into production in an “off” state, not visible to clients or affecting them in any way. With this approach it is important to always commit production-quality code, so that you can deploy as and when needed.

A benefit of this is that if a defect is unfortunately picked up in production, it can be toggled off, thereby minimising the impact on the client and allowing the developers a grace period to correct the defect.
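A feature toggle does not need to be sophisticated. Here is a minimal sketch of a property-backed toggle; the file name and keys are illustrative, and our actual mechanism may well differ.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class FeatureToggles {

    private static final Properties TOGGLES = load();

    private static Properties load() {
        Properties properties = new Properties();
        try (InputStream in = FeatureToggles.class.getResourceAsStream("/toggles.properties")) {
            if (in != null) {
                properties.load(in);
            }
        } catch (IOException e) {
            // ignore: unknown or unreadable toggles default to off
        }
        return properties;
    }

    // A feature is off unless explicitly toggled on, so half-finished work
    // can ship to production dark.
    public static boolean isOn(String feature) {
        return Boolean.parseBoolean(TOGGLES.getProperty(feature, "false"));
    }
}

Guarding the new code path with if (FeatureToggles.isOn("newFeature")) means switching a defective feature off is a configuration change rather than a redeployment.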

So I have touched on a few areas where organisations would benefit from moving to an LSD approach. As I noted at the beginning of this post, this is the first in a series, and I will be coming back to refine, add to and update my thoughts over time while adding posts on Lean Software Development.


Using Jersey to create RESTful services with POJOs

All code for this example is available on GitHub at https://github.com/craigew.

This post will provide some examples of how to use the Jersey framework with POJO support. All the methods, barring the first example method, will consume and/or produce JSON in the request and response. I have provided examples for each of the most common verbs, namely:

  • GET – will provide a list based on the URI.
  • POST – will create a new entry.
  • PUT – will replace/update an entry based on the URI.
  • DELETE – will delete an entry based on the URI.

So first off I added my dependencies to my POM. As always my dependencies are managed using Maven.


<dependencies>
   <dependency>
     <groupId>com.sun.jersey</groupId>
     <artifactId>jersey-server</artifactId>
     <version>1.17</version>
   </dependency>
   <dependency>
     <groupId>com.sun.jersey</groupId>
     <artifactId>jersey-json</artifactId>
     <version>1.17</version>
   </dependency>
   <dependency>
     <groupId>com.sun.jersey</groupId>
     <artifactId>jersey-client</artifactId>
     <version>1.17</version>
   </dependency>
   <dependency>
     <groupId>com.sun.jersey</groupId>
     <artifactId>jersey-servlet</artifactId>
     <version>1.17</version>
   </dependency>
</dependencies>

Next I want to package this as a war, so I added the following to the POM file.


   <packaging>war</packaging>
   <build>
     <plugins>
       <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-war-plugin</artifactId>
          <configuration>
            <webXml>web\WEB-INF\web.xml</webXml>
          </configuration>
      </plugin>
     </plugins>
   </build>

I am deploying this example to Tomcat by simply copying the built war into the webapps folder of my Tomcat install. For some reason I could not run the example straight from my IDE, IntelliJ: it was not deploying the required dependencies when running or debugging. That is something to look at when I have time.

The final configuration to do is in the web.xml file.

The config below tells Jersey which package to scan for the annotated resource classes it needs to listen for.

<init-param>
   <param-name>com.sun.jersey.config.property.packages</param-name>
    <param-value>com.craigew.rest</param-value>
</init-param>

The next config tells Jersey that I want to use its POJO mapping features.

<init-param>
    <param-name>com.sun.jersey.api.json.POJOMappingFeature</param-name>
    <param-value>true</param-value>
</init-param>

Finally, you tell the Jersey servlet to intercept all requests with the URL pattern /api/*.

<servlet-mapping>
    <servlet-name>JerseyAPI</servlet-name>
    <url-pattern>/api/*</url-pattern>
</servlet-mapping>

Now we can start coding. The first method I create does not utilise the Jersey POJO mapping features; I simply create a JSON response manually for a very basic “Hello World” example.


@Path("/person")
public class PersonApi {

   @Path("/greet")
   @GET
   @Consumes(MediaType.APPLICATION_JSON)
   @Produces(MediaType.APPLICATION_JSON)
    public JSONObject sayHello() {
      try {
         return new JSONObject().put("greeting", "Hello world");
      } catch (JSONException e) {
         return null;
      }

    }

}

So the URI for this example would look something like:

…/api/person/greet

And it simply returns a JSON object with a string containing “Hello World”.

In the next example I take a parameter from the path, {name}.


@Path("/greet/{name}")
 @GET
 @Consumes(MediaType.APPLICATION_JSON)
 @Produces(MediaType.APPLICATION_JSON)
public PersonResponse sayHelloToSomeone(@PathParam("name") String name){
      PersonResponse personResponse =new PersonResponse();
      personResponse.setResponse("Hello " + name);
      return personResponse;
 }

Now we have started to use the Jersey POJO mapping features. In this code snippet we are returning a POJO object that Jersey then serialises into JSON for us.

So the URI for the above call would look something like:

…/api/person/greet/craig

And the service will then return JSON looking like the below snippet:

{
  "response": "Hello craig"
}

In the final example I will write about, we pass a JSON string in the payload. Jersey then deserialises the JSON into our POJO, allowing us to easily work with the object in Java.


package com.craigew.model;

public class Person {

    private String name;
    private String surname;

    public Person() {}

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getSurname() {
        return surname;
    }

    public void setSurname(String surname) {
        this.surname = surname;
    }
}

The first thing to note is that the empty constructor is required by the parser to initially create the object. I found the getters and setters were also necessary; simply making the attributes public did not work.

In the example below I simulate creating a new person.


@Path("/create")
 @POST
 @Consumes(MediaType.APPLICATION_JSON)
 @Produces(MediaType.APPLICATION_JSON)
 public PersonResponse createAPerson(Person person){
    System.out.println("Creating a person");
    PersonResponse personResponse =new PersonResponse();
    personResponse.setResponse(person.getName() + " " + person.getSurname() + "        added to the database");
    return personResponse;
 }

The URI for the above code example would look like:

…/api/person/create

With the following JSON in the payload of the POST request.

{"name" : "Craig", "surname" : "Williams"}

The nice thing about writing your RESTful service like this is that you can directly unit test them using JUnit.

@Test
public void should_create_a_person() {
    Person person = new Person();
    person.setName("Craig");
    person.setSurname("Williams");
    Assert.assertEquals("Craig Williams added to the database",
            new PersonApi().createAPerson(person).getResponse());
}

Although this is a very superficial example of unit testing, it allows you to remove the infrastructure while developing, and you can then practice TDD properly by first writing your tests and then implementing your methods.

For manually testing RESTful services, including the infrastructure, I use a Chrome plugin called Advanced REST Client. It is by far the best client I have used while developing RESTful services.


Seven essays on Lean Software Development

Over the course of the next seven weeks I will be writing an essay per week on each of the seven Lean Software Development principles. I will be writing about my experience in an amazing turnaround, where we went from being dysfunctional and barely delivering, with real quality issues, to where we are now: a decent delivery cadence, trust from the “business” and quality issues being eliminated.

I believe we achieved all of this by inadvertently applying the seven principles of LSD, namely:

  1. Eliminate waste
  2. Build quality in
  3. Create knowledge
  4. Defer commitment
  5. Deliver fast
  6. Respect people
  7. Optimise the whole

And we achieved this turnaround in less than eight months.

I hope to provide concrete examples of how we implemented the principles, and the tools we used to accomplish our turnaround. I will describe how we were, what we are like now, and the absolutely amazing difference this has made to the team.

Although we did not set out with LSD in mind, much of our thinking was shaped by Eric Ries's The Lean Startup and The Toyota Way, which are contributors to the LSD movement. I have not yet read any of Tom and Mary Poppendieck's books, but they are most definitely next on my reading list, and once I have read one I hope to come back and review these essays.


Versioning your database using Liquibase

Delivering database changes through the various environments and into production has always required a lot of overhead. You would have to write scripts, keep track of the order in which the scripts needed to be run, and provide detailed instructions to the DBAs running them. Add multiple developers into the mix and this becomes difficult to maintain and control. Enter Liquibase.

Liquibase is a fantastic tool that allows you to use your version control system to version your database scripts, and then to use your CI server to deploy the changes into your environments or to generate the SQL scripts for the DBAs to review and run.

I had a requirement whereby we wanted to execute our database changes directly into the development and test environments. However, once we moved into UAT we were required to generate the SQL scripts for the DBAs to review and execute on the UAT and finally the production environment. This is where the flexibility and power of Liquibase really comes to the fore.

The progression of database changes would happen as follows:

  1. A developer makes changes directly in the development environment using Liquibase from their IDE via Maven.
  2. Once the changes are committed, the overnight build runs the changes into the testing environment along with the deployment of code.
  3. When we are ready for UAT, the build generates the SQL file containing all the changes for the DBA to review and run in the UAT environment.
  4. The same file executed on UAT is promoted to production.

For this example I created a VirtualBox VM with Linux Mint, installed Oracle XE onto the box and created two schemas to replicate a development and a production environment. For installing Oracle onto the Linux Mint VM I followed these fantastic instructions; I would never have got the install right without them, and would most likely have just reverted to MySQL.

So first off I created a separate project to house our database scripts, and arranged the project as per best practice, with a file per release referenced in the master file. Below is an example of the change-log-master.xml.


<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
  <preConditions>
     <dbms type="oracle" />
  </preConditions>
  <include file="db.changelog-13.8.xml" relativeToChangelogFile="true" />
</databaseChangeLog>

I have created two properties files, one for the development environment and another for the “production” environment. Below is an example of the development properties file.


changeLogFile=db/ddl/db.changelog-master.xml
driver=oracle.jdbc.OracleDriver
url=jdbc:oracle:thin:@x.x.x.x:1521/xe
username=dev
password=dev
verbose=true
dropFirst=false

I then setup Maven to manage the dependencies and the build.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

    <groupId>com.craigew.database</groupId>

    <artifactId>LiquibaseScripts</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
      <dependency>
        <groupId>com.oracle</groupId>
        <artifactId>ojdbc6</artifactId>
        <version>11.2.0</version>
      </dependency>
    </dependencies>
    <build>
    <plugins>
      <plugin>
        <groupId>org.liquibase</groupId>
        <artifactId>liquibase-maven-plugin</artifactId>
        <version>2.0.3</version>
        <configuration>
           <changeLogFile>src/main/liquibase/changelog/db.changelog-master.xml</changeLogFile>
           <propertyFile>src/main/liquibase/properties/liquibase-${env}.properties</propertyFile>
           <promptOnNonLocalDatabase>false</promptOnNonLocalDatabase>
           <verbose>true</verbose>
       </configuration>
       <dependencies>
         <dependency>
           <groupId>org.liquibase.ext</groupId>
           <artifactId>liquibase-oracle</artifactId>
           <version>1.2.0</version>
         </dependency>
       </dependencies>
    </plugin>
  </plugins>
 </build>
</project>

To run the scripts against the different environments I have used Maven profiles to determine what I want Liquibase to do. In the development environment Liquibase must execute the scripts directly against the database when we do a Maven install, but in the “production” environment I want Liquibase to generate the SQL script. Below is an extract from the POM with the two different profiles.


<profiles>
  <profile>
    <id>dev</id>
    <activation>
        <activeByDefault>true</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
           <groupId>org.liquibase</groupId>
           <artifactId>liquibase-maven-plugin</artifactId>
           <executions>
             <execution>
               <phase>install</phase>
               <goals>
                  <goal>update</goal>
               </goals>
             </execution>
           </executions>
        </plugin>
      </plugins>
     </build>
 </profile>
 <profile>
    <id>release</id>
     <build>
       <plugins>
         <plugin>
           <groupId>org.liquibase</groupId>
           <artifactId>liquibase-maven-plugin</artifactId>
           <configuration>
               <migrationSqlOutputFile>src/main/liquibase/output/migration-release-${version}.sql
               </migrationSqlOutputFile>
           </configuration>
           <executions>
             <execution>
               <phase>install</phase>
               <goals>
                   <goal>updateSQL</goal>
               </goals>
              </execution>
           </executions>
         </plugin>
      </plugins>
   </build>
 </profile>
</profiles>

You will notice in the above extract from the POM that the goals differ between the two profiles: the dev profile executes the “update” goal, whereas the release profile executes the “updateSQL” goal.

The update goal executes the changes directly against the database, while the updateSQL goal generates the SQL scripts required to effect the changes described in the XML. By default the script is generated into the target directory in a file called migrate.sql; for this example I output it to a different folder with the release version appended to the file name.

So with the empty schemas that I set up earlier available in my local Oracle instance, I am ready to run my first change.


<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
<changeSet id="8.0_1" author="cw">
   <createTable tableName="person">
       <column name="address" type="varchar(255)"/>
   </createTable>
   <rollback>
       <dropTable tableName="person"></dropTable>
   </rollback>
 </changeSet>
</databaseChangeLog>

This change simply creates a table with a single column. As a standard, all our changeSets include a rollback script.
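Because every changeSet declares its rollback, the Maven plugin can also back a change out. Something along these lines should do it, though the exact parameter name is worth verifying against your liquibase-maven-plugin version:

mvn liquibase:rollback -Dliquibase.rollbackCount=1 -Denv=dev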

Running a Maven install from my IDE with the env property set to dev will execute this change directly against the database and create the new table for me.

If you look at your schema, the new table has been created as expected, but Liquibase has also generated two “helper” tables. DATABASECHANGELOG keeps track of all your changes so that they are not run more than once in an environment, and DATABASECHANGELOGLOCK is used by Liquibase to prevent multiple developers updating the database at the same time.

I can now write the necessary code to access this table along with the appropriate tests.

Now when we are ready to move to the UAT environment I can run the build using the following parameters.

mvn install -P release -Denv=prod

This will run Liquibase using the prod properties file (liquibase-prod.properties) with the release profile. And because we have used the updateSQL goal, the SQL script is generated. The generated SQL script goes through a final review by the DBAs and is executed in the UAT environment.

At this point we would normally create a release branch for the short period we are in UAT. At the same time we create our next change log in trunk so we can start making changes for our next release.

By using Liquibase we have been able to eliminate waste from our development lifecycle, allowing us to be leaner in our delivery. We no longer have to create multiple database deployment documents for our DBAs. The Liquibase scripts in our Subversion repository are the only source of truth, and we also have a repeatable process with no human intervention, apart from the DBAs manually running the script in UAT and production. The ultimate goal, however, is one-click deployment into all our environments, with no DBA needing to run the scripts.

A complete example project is available at https://github.com/craigew/LiquibaseIntegration.
