Tuesday, November 29, 2011

Eclipse moving forwards

Most Java devs (and others too) have a love/hate relationship with Eclipse. Many a flame war has been had on the subject.

From my own experience, I think Eclipse is moving in the right direction. Helios and Indigo both feel snappier, and installing/upgrading plugins is easier.

I had to upgrade my work instance of Eclipse, and even though there were conflicting versions of different plugins, the conflicts were easier to resolve than in previous versions of Eclipse.

Some of the other language editors could do with some love (PHP, for example), but overall I'm finding my productivity is increasing with the later versions of Eclipse, and I have to wrestle with it less.

Wednesday, June 22, 2011

Server configuration with Mercurial

I've been playing a lot lately with Mercurial, and in my opinion it's the best SCM around. I've also been administering some of my servers (actually Rackspace VMs) and ran into the age-old problem that sys admins have always had: keeping the server config synchronised.

The problem in a nutshell: you install a set of applications (through yum or apt-get or whatever) and configure them. However, you then run into the problems of config propagation, versioning/history, rolling back to a known configuration, and so on. I've seen a few sys admins roll their own solution, usually involving rsync and a lot of logging.

My solution was to use Hg to do all the heavy lifting, with a wrapper bash script (knocked up very quickly) that invokes Hg to version configuration files. It maintains knowledge of where each file came from by copying the file to a path relative to the repository directory. For example, if a user edits /etc/hosts, the copy will reside in the repository at $REPOS_HOME/etc/hosts

When you run
$ editconf <file>
the script does the following (sketched below).

  1. Resolves the absolute path of the file (using a realpath bash script that a friend knocked up)

  2. Checks if the file exists in the repository


    1. If it doesn't, the file is copied and added to the repository


  3. Drops through to the user's editor (defaults to vim)

  4. Copies the file to the repository when the user exits the editor

  5. Attempts to commit the file


If the file is being versioned for the first time (per the initial check), the script is smart enough to add it first. Mistakes are fixed by leaving an empty commit message (most SCMs won't commit in that case, of course) and reverting the file.
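For the curious, here's a minimal sketch of the idea (not the published script itself); it assumes REPOS_HOME points at an existing Hg repository and that a realpath command is available, and all names are illustrative:

#!/bin/bash
# Sketch of an editconf-style wrapper; REPOS_HOME and the paths are assumptions.

REPOS_HOME="${REPOS_HOME:-/var/lib/confrepo}"
EDITOR="${EDITOR:-vim}"

# 1. Resolve the absolute path of the file
target=$(realpath "$1") || exit 1
copy="$REPOS_HOME$target"
mkdir -p "$(dirname "$copy")"

# 2. If the file isn't in the repository yet, copy it in and add it
if [ ! -f "$copy" ]; then
    cp "$target" "$copy"
    hg -R "$REPOS_HOME" add "$copy"
fi

# 3. Drop through to the user's editor
"$EDITOR" "$target"

# 4. Copy the edited file into the repository
cp "$target" "$copy"

# 5. Attempt to commit; an empty commit message aborts, so revert the copy
hg -R "$REPOS_HOME" commit "$copy" || hg -R "$REPOS_HOME" revert "$copy"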

Branches can be made, merged, and state can be pushed around various servers with very little effort.

I mentioned this approach to some people; they thought it was a nifty idea and asked me to share my scripts. They can be found on BitBucket under the nesupport[1] SysTools project. They're licensed under the MPL, and since they were knocked up in a hurry, patches/feedback are always welcome. Further instructions are in the scripts themselves (where further action is required).

Further work could include the automated sharing of repository state (cron job) and synchronising what's in the repo with what's on the filesystem.

[1] The code was developed for a project I'm working on with somebody else who agreed to open source our sys admin scripts, of which editconf is a part.

Monday, May 30, 2011

Integration is the mess that no one wants to clean up

The title for this post came from a rather tongue-in-cheek comment by the sys admin at my workplace. However, I think it's a rather accurate description of a personal project that I've been working on for the past six months. The project was designed as an education in different technologies that I've wanted to play with, so a lot of the decisions were guided by that. Along the way I've thought about different issues and found various tips and tricks, and I thought I'd share them.

The project is an integration piece between a website (through RSS) and social media sites like Facebook, MySpace, and Twitter (with all the links fed back to the original site). The goal was to avoid replicating data across multiple sites, as well as to give me an excuse to play with new toys.

The runtime environment is Google App Engine, as this app runs infrequently and GAE is free at that scale. I've also wanted to write a GAE app for a while, so this seemed like a good fit. The language is Java, since that's my primary development language and I don't know Python.

The build tool is Ant, as I despise Maven. I use Maven at work, and I find it inflexible, bloated, and just plain annoying. It does have some good ideas, however, and given Ant's import abilities, one can write generic build tasks that implement good build practices without all the pain. I did consider other build tools like Gradle, but I found that after thinking about what I wanted to do for a while, I was able to code the build declaratively, with Ant smart enough to handle all the heavy lifting. I'm considering open sourcing the Ant build library I've accumulated once I've cleaned it up a little.

Maven also provides dependency management; however, I personally find Ivy's dependency management more mature, cleaner, simpler, and easier to configure and use than Maven's.

Discussions about build tools can often get people a bit hot under the collar, and I found The Java Posse podcast on the subject very educational (as a hint, they're not very Maven friendly).

My IDE of choice is Eclipse (Helios), with the relevant Google plugins. One more reason I dislike Maven is that the M2 integration sucks (I've heard it's a lot better with Indigo, but I've yet to try). Ant wasn't immune from problems either: the runtime I'm using is 1.8.1, and the Ant build editor doesn't recognise syntax from that version, so Eclipse tells me my build.xml has errors in it. It runs fine, however.

For my SCM I'm using Mercurial, as it's the best VCS around, beating Git by a gazillion miles (yes, I have actually used Git in a sophisticated manner, and Hg is so much easier and more intuitive).

The app itself is broken down into three parts (with Spring handling all the configuration). The first handles datastore/RPC services, with a GWT frontend, Spring MVC providing the CRUD RPC endpoints, and Objectify handling persistence. The second implements the manual workflow of OAuth2, with Spring MVC rendering the views (very simple JSPs) and accepting callbacks. The third actually makes posts to the social media sites and keeps everything in sync.

I spent a lot of time trying to bash JPA into working on GAE, but the DataNucleus implementation is quirky at best. It does certain tasks (like assigning PK values) differently from all other JPA implementations, and the JDO junk that gets stuck in your .class files can mess with other annotations or class behaviour. I spent one Saturday ripping out JPA and replacing it with Objectify (guided by tests, for you TDD purists out there), and I've had little trouble ever since.
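To give a flavour of why the switch was painless, here's a minimal Objectify sketch using the Objectify 2/3-era API (the Customer entity and class names are illustrative, not from the actual project):

import javax.persistence.Id;

import com.googlecode.objectify.Objectify;
import com.googlecode.objectify.ObjectifyService;

public class CustomerStore {

    // Entities are plain classes; no enhancement step, no JDO bytecode junk.
    public static class Customer {
        @Id Long id;   // assigned by the datastore on put()
        String name;
    }

    static {
        ObjectifyService.register(Customer.class);
    }

    public Long save(Customer customer) {
        Objectify ofy = ObjectifyService.begin();
        ofy.put(customer); // the generated id is written back onto the POJO here
        return customer.id;
    }
}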

A few big design principles I've been trying to hammer into myself are good old DRY and others from the SOLID acronym like SRP, guided by tests. The thing about working with a framework like Spring is that if you don't follow these ideas, you get into trouble quickly enough to realise something's amiss. For example, even when using Objectify one still has to deal with transaction management. It's the same bit of code that runs for all DAOs, so where do you put it? One should of course favour composition over inheritance. I went with an AOP (JDK proxy) approach where a transaction manager provides advice around the DAO methods, injecting an "entity manager" which the DAOs then use for DS operations. Very elegant, but not as easy if one doesn't code to interfaces (depend on abstractions, the D in SOLID).
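A hedged sketch of that proxy approach, with GAE's low-level transaction API standing in for the real thing (the interface and class names are illustrative, not from the actual project):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Transaction;

// The DAO codes to an interface so a JDK proxy can stand in for it.
interface CustomerDao {
    void save(Object customer);
}

// Advice that wraps every DAO call in a datastore transaction.
class TransactionAdvice implements InvocationHandler {

    private final Object target;

    TransactionAdvice(Object target) {
        this.target = target;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        Transaction txn = DatastoreServiceFactory.getDatastoreService().beginTransaction();
        try {
            Object result = method.invoke(target, args);
            txn.commit();
            return result;
        } finally {
            if (txn.isActive()) {
                txn.rollback(); // commit never happened, so roll back
            }
        }
    }

    // Wrap a DAO implementation in the transactional proxy.
    @SuppressWarnings("unchecked")
    static <T> T transactional(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new TransactionAdvice(target));
    }
}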

However, GAE itself doesn't help with the testing side of these principles, and it's an area where Google could really improve their tooling, especially if they want more than Mickey Mouse projects running on GAE. It's nigh impossible to preload the local DS file with test data for integration/acceptance tests (where the tests run in one JVM and the dev_server serving your app runs in another). I hacked a solution together, but it broke when the SDK was updated. The only way, therefore, is to load data through an interface you build into your app. Since most of my entities were served in an XML representation over HTTP to be consumed by a GWT front end, it wasn't too much of a pain to drive my tests through the browser using Selenium. For a more sophisticated project, however, it's a nail in the coffin for testability.

Learning Spring MVC was a joy, with the ability to render a view of the model (through JSP) or return data in an HTTP response body (a la REST/RPC) with minimal effort. As far as MVC frameworks go, it's the best I've worked with to date. There is some room for improvement in the way Content-Types (MIME) are handled in an AJAX world, but I've detailed that problem before.

Running anything other than native servlets on GAE is of course annoying, as some readers may already know, because of the way GAE attempts to serialize everything. I had to rework my architecture a bit to get around that, but I still get errors in the logs due to Spring classes not being Serializable.

I've probably left a few details out here and there, and questions/comments are welcome (unless you want to troll, in which case bugger off). Constructive criticism of technology choices is always interesting and I like to chin wag about that; just please don't start a flame war.

Overall I found this project satisfying and educational, and I've come out the other side a better engineer/developer.

Monday, May 16, 2011

Filling a table in Jasper Reports

You wouldn't think it, but figuring out how to fill a table with data in Jasper Reports (JR) was actually more difficult than it sounds: poor documentation, bad/incorrect/plain stupid examples, and forum posts that have been left for years with no answer! Due to library constraints on this project, this example uses JasperReports/iReport 3.4.7, and YMMV with other versions.

Pictorial example of a table in a JR report


Say you're producing a report with a table of Customers that is embedded in a report with other data (see image above). JR treats the table as a subreport (but with different XML tags), which means the data you fill the parent/master report with isn't automatically available to the table. This caveat isn't intuitive until you discover that the table is a subreport. To make matters worse, iReport assumes you're getting your data either from a straight JDBC connection, or it populates your table's <dataSourceExpression> with a JREmptyDataSource, which will populate your table fields with null.

How helpful.

If you're in any sort of enterprise system you'll no doubt have DAOs and different models (domain, DTOs, etc.) to feed into your reporting code, so you'll need to strip out the empty data source iReport sticks in your template.

Fortunately there is a JRBeanCollectionDataSource class that maps field names in the report template to properties in the data using the JavaBeans naming convention. The last step is to actually make your data available to the table, which is a combination of fixing the template and writing a reporting DTO class.

Firstly, a field in the report will need to be of a Java Collection type. I didn't have much success with non-JSE collections, and it's better to code to interfaces anyway.
<field name="customers" class="java.util.List"/>
The DTO will need to provide an instance of that collection type, with a getter matching the name of the field in the template. Using the example in the image:
public class CustomerList {
    public List<Customer> getCustomers() { ... }
}
Then for each field placeholder in the table, if it maps to a property getter in the Customer object then that property value will get substituted into the report.

The final configuration is to tell the table to source its data from the collection.
<dataSourceExpression><![CDATA[
    new net.sf.jasperreports.engine.data.JRBeanCollectionDataSource($F{customers})
]]></dataSourceExpression>
Then when the report is filled with an instance of CustomerList that you've prepared earlier, the report engine will iterate over the Collection and fill each row of the table.
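Putting it together, filling the report looks roughly like this (a sketch; the template path and empty parameter map are illustrative):

import java.util.Collections;
import java.util.HashMap;

import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

public class ReportFiller {
    public JasperPrint fill(CustomerList data) throws Exception {
        // Wrap the DTO in a single-element collection so the master report's
        // "customers" field resolves to getCustomers() via bean naming.
        JRBeanCollectionDataSource masterSource =
                new JRBeanCollectionDataSource(Collections.singletonList(data));
        return JasperFillManager.fillReport(
                "customer_report.jasper",       // compiled template (illustrative)
                new HashMap<String, Object>(),  // report parameters
                masterSource);
    }
}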

Once you've done some digging/filtering and realised how JR does its tables, it's actually pretty easy and plain obvious. However, because of the aforementioned reasons (incorrect examples sending one down the garden path), it can be time consuming and frustrating. Given that a table is a core requirement of most reports a business wants, it would make sense to me for putting a table in a report to be dead simple.

Friday, April 15, 2011

Intuitive frameworks is how it should be

On my current project, the server has to be able to receive image data embedded in a JSON object, so the JSON string representing the image is a base64 encoding of the binary data. The entity model (persisted to the DB through JPA/Hibernate) has an image data field of type byte[].

Turns out that JBoss' RESTEasy is smart enough to use Jackson's ability to decode base64 text based on the type of the destination field.
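A minimal sketch of the behaviour, using Jackson 1.x directly (the Photo class and payload are illustrative; RESTEasy does this wiring for you):

import org.codehaus.jackson.map.ObjectMapper;

public class Base64Demo {

    // Jackson maps a base64-encoded JSON string straight onto a byte[] field.
    public static class Photo {
        public String name;
        public byte[] imageData;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"name\":\"logo\",\"imageData\":\"aGVsbG8gd29ybGQ=\"}";
        Photo photo = new ObjectMapper().readValue(json, Photo.class);
        System.out.println(photo.imageData.length + " bytes decoded");
    }
}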

That intuitive step by the framework is what gives me a good feeling inside, knowing I don't have to deal with type conversions, just with what I want to do with the data.

Next task off the board please ....

Monday, March 28, 2011

FF obeys the rules and Spring 3 chokes

It's really quite sad/annoying when a toolkit as sophisticated as Spring chokes on simple matters.

I'm writing a GWT client that makes RPC (XML/HTTP) calls to a server implemented with Spring MVC. My Controller has a @RequestBody on the input type, which is a class annotated with JAXB annotations to serialise/deserialise from XML. My Controller is therefore at the mercy of the HttpMessageConverter that Spring uses to rip the data out of the HTTP request, turn it into a POJO, and hand it to my Controller.
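For context, the controller end looks something like this (a sketch; the class and mapping names are illustrative, not from the actual project):

import javax.xml.bind.annotation.XmlRootElement;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class RecordController {

    // JAXB-annotated payload; the message converter unmarshals the XML body into it.
    @XmlRootElement
    public static class Record {
        public String id;
    }

    // Spring selects an HttpMessageConverter by the request's Content-Type
    // header and hands the deserialised POJO to this method.
    @RequestMapping(value = "/records", method = RequestMethod.POST)
    @ResponseBody
    public Record create(@RequestBody Record record) {
        return record; // marshalled back to XML by the same converter
    }
}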

In the Spring config (I use the XML config), <mvc:annotation-driven/> by default registers an instance of AnnotationMethodHandlerAdapter, which (among other duties) is responsible for determining which message converter to use. What's nice is that a JAXB converter is registered automatically if the JAXB libs are on the classpath. The converter chosen is the one that can process the MIME type, or Content-Type, in the HTTP request header. In this case it would be application/xml, which is processed by the default MarshallingHttpMessageConverter, which in turn delegates the real work to JAXB.

However, Firefox (FF) obeys the rules regarding XHR requests in that it appends a charset to the Content-Type header, so application/xml becomes application/xml; charset=UTF-8. Because of this, the entire server side unravels: the converter resolution isn't smart enough to parse the charset out of the Content-Type header, and it throws an exception that the Content-Type is not recognised.

The solution, after much reading up on the internals of the Spring MVC framework, is to create my own bean tree where the supported media type contains the charset. The message converters support changing the supported MIME types, which are modelled by the MediaType class, and MediaType accepts a Charset in its constructor. I therefore end up with the following config.

<!-- Override the default AnnotationMethodHandlerAdapter that
     mvc:annotation-driven provides -->
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
    <property name="messageConverters">
        <list>
            <ref bean="stringHttpMessageConverter"/>
            <ref bean="marshallingHttpMessageConverter"/>
        </list>
    </property>
</bean>

<bean id="marshallingHttpMessageConverter"
      class="org.springframework.http.converter.xml.MarshallingHttpMessageConverter">
    <property name="marshaller" ref="jaxbMarshaller"/>
    <property name="unmarshaller" ref="jaxbMarshaller"/>
    <property name="supportedMediaTypes">
        <list>
            <!-- This handles browsers like FF that add the charset
                 to the XHR request. -->
            <bean class="org.springframework.http.MediaType">
                <constructor-arg index="0" value="application"/>
                <constructor-arg index="1" value="xml"/>
                <constructor-arg index="2" value="UTF-8"/>
            </bean>
        </list>
    </property>
</bean>

<bean id="stringHttpMessageConverter"
      class="org.springframework.http.converter.StringHttpMessageConverter"/>

<oxm:jaxb2-marshaller id="jaxbMarshaller"
                      contextPath="org.altaregomelb.sync.domain"/>

Given that the framework for converting data from HTTP requests already contains the logic to parse the Content-Type header, it's a shame the default beans created by <mvc:annotation-driven/> don't parse the charset properly instead of throwing an exception, given that including a charset in XHR requests is part of the standard. Not all XML/HTTP requests will be XHR requests, it's true, but given the prevalence of AJAX apps out there, the framework should account for it.

References
Spring 3 API

Thursday, February 24, 2011

Technological justice

In The Age today, it's reported that the Federal Court backed iiNet. Finally, some sane and reasonable justice from the courts in technological matters.

Well done iiNet.

Wednesday, February 2, 2011

How to prepopulate your GAE dev_server for testing

This post assumes that you know how the Google App Engine Datastore basically works, and how to perform local unit testing of your code on the dev_server provided in the GAE SDK.

There is a massive gaping hole in the GAE SDK, both in functionality and in documentation (if there is documentation on the matter, please post it in the comments), regarding populating your local datastore (which is persisted to a file) for testing. Why does this matter? For integration or acceptance testing. Not all testing, my Google Overlords, wants to be done in a memory-only datastore, or within the one process/JVM/however Eclipse runs my dev_server and my JUnit tests.

Even if you set up your test code to point to the same file your dev_server is reading from, your application won't see your entities. To say it's a frustrating problem is an understatement. It turns out there is a combination of fields that have to be set in your test code for the underlying datastore code to populate the file in such a way that this works.

Rather than repeat the solution here, it can be found on the ever-helpful Stack Overflow.
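To give a flavour of the shape of the fix (see the Stack Overflow answer for the definitive version; the backing-store path here is an assumption, and the exact fields varied between SDK versions):

import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig;
import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

public class DatastorePrepopulator {

    // Point the test datastore at the same backing file the dev_server uses,
    // and turn off the default in-memory-only behaviour.
    private final LocalServiceTestHelper helper = new LocalServiceTestHelper(
            new LocalDatastoreServiceTestConfig()
                    .setNoStorage(false)
                    .setBackingStoreLocation("war/WEB-INF/appengine-generated/local_db.bin"));

    public void setUp() {
        helper.setUp();
        // ... put test entities here; they end up in the file for the dev_server to read
    }

    public void tearDown() {
        helper.tearDown(); // flushes the datastore state to the backing file
    }
}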

This solution was found through a lot of pain, trial, and error. I hope this post saves someone from agonising over the amount of blood lost when their head hits the desk as they scream "why, Google, why!!!!"

Friday, January 21, 2011

How to beat the competition by being more agile

Over at Forbes there's a discussion about how Facebook beat MySpace. The simple takeaway I got from it is that Facebook is more agile, and that they also prioritise customer input. If enough users want something, Facebook gives it to them (and, one would assume, as fast as possible). That's responding to change, obviously a key Agile principle.

It makes me think about how doing business might continue to change over the years. The suits and the bean counters may want to do one thing (lots of PowerPoint presos with forecasts), while the techs on the ground might only be thinking three months ahead: getting the latest feature out the door. There's something to be said for long-term vision, but do short-term sprints take precedence? After all, the company has to make money to keep the suits in a job.

One thing Facebook does need to do is focus on quality. Too much of it breaks (and often non-deterministically). The fact that I haven't been able to invite friends to an event for three days now means something's going dreadfully wrong. Since you listen to customers, Facebook, can you please fix your defective parts as well as pushing out the latest and greatest features?