Thursday 20 December 2012

Demystifying Apache CXF: A RESTful Hello World App

The first post of this how-to series showed you what it takes to expose a Hello World application as a SOAP over HTTP Web Service using CXF. For this post, I'll show you how to expose the same app as a RESTful service.

In the Java world, we use JAX-RS for mapping a class to a RESTful service. Giving a RESTful interface to our Hello World app is just a matter of adding JAX-RS annotations to HelloWorldImpl:
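Something along these lines (a sketch; the complete code is on GitHub):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    @Path("/helloWorld")
    @Produces("text/html")
    public class HelloWorldImpl {

        // Invoked on GET /helloWorld/sayHi/[text]; the URL variable is bound to the argument
        @GET
        @Path("sayHi/{text}")
        public String sayHi(@PathParam("text") String text) {
            return "Hello " + text;
        }
    }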

In the class, I tell the JAX-RS provider (i.e., CXF):
  • HelloWorldImpl is a resource available on the URL relative path "/helloWorld" (@Path("/helloWorld")).
  • the HTTP reply sent back to the client should have the Content-Type set to "text/html" (@Produces).
  • sayHi is to be called when the HTTP request is a GET and the relative path is "/helloWorld/sayHi/" + [variable] (@Path("sayHi/{text}")).
  • to bind the URL parameter with the method argument text (@PathParam).

As in the previous how-to, I'm going to deploy the app onto an embedded Jetty server instead of deploying it onto a standalone web container:
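A sketch of Server.java (HelloWorldApplication, shown a little further down, is an assumed name):

    import javax.ws.rs.ext.RuntimeDelegate;

    import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;

    public class Server {

        public static void main(String[] args) {
            // Ask the JAX-RS provider (CXF) for an unpublished endpoint
            JAXRSServerFactoryBean endpoint = RuntimeDelegate.getInstance()
                    .createEndpoint(new HelloWorldApplication(), JAXRSServerFactoryBean.class);

            // Bind the server to this URL
            endpoint.setAddress("http://localhost:9000");

            // Create the underlying server (a configured Jetty) and publish the service
            org.apache.cxf.endpoint.Server server = endpoint.create();
            server.start();
        }
    }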

RuntimeDelegate.getInstance().createEndpoint(...) is a JAX-RS method that returns an unpublished endpoint. It takes in:
  • a class responsible for configuring and launching the web server. This class differs across JAX-RS providers. CXF expects this class to be JAXRSServerFactoryBean.
  • an object that extends Application. This user-defined class must return JAX-RS annotated classes responsible for processing client requests. For us, this means returning HelloWorldImpl:
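A sketch of that Application subclass:

    import java.util.HashSet;
    import java.util.Set;

    import javax.ws.rs.core.Application;

    public class HelloWorldApplication extends Application {

        // Return the JAX-RS annotated classes that will process client requests
        @Override
        public Set<Class<?>> getClasses() {
            Set<Class<?>> classes = new HashSet<Class<?>>();
            classes.add(HelloWorldImpl.class);
            return classes;
        }
    }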

Back to our Server.java file, I tell the endpoint to bind the server to the URL http://localhost:9000. Then, from the endpoint, I create an org.apache.cxf.endpoint.Server object and invoke start(...) to publish the service. Note that, underneath, org.apache.cxf.endpoint.Server is a configured Jetty server.

Before testing the service, I add the required CXF libraries to the Java classpath by declaring them as dependencies in the project's POM:
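The relevant dependencies look like this (2.7.0 is an assumption; use whatever CXF version is current):

    <!-- JAX-RS frontend -->
    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-frontend-jaxrs</artifactId>
        <version>2.7.0</version>
    </dependency>
    <!-- embedded Jetty HTTP transport -->
    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-transports-http-jetty</artifactId>
        <version>2.7.0</version>
    </dependency>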

If you compare this POM with the POM of the first how-to, you'll note that now I've swapped the JAX-WS frontend with the JAX-RS one.

All that is left is to run the server with the following Maven commands:
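Assuming the POM wires up the exec-maven-plugin with our Server class as the main class (an assumption; see the GitHub project for the exact setup), that boils down to:

    mvn clean compile
    mvn exec:java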

Once the server is up, accessing the URL http://localhost:9000/helloWorld/sayHi/Ricston from your browser should give you "Hello Ricston".

Wednesday 19 December 2012

Demystifying Apache CXF: A Hello World App

"CXF scares the sh**t out of me!". This was a client's comment during our discussion on Apache CXF and I think it's a feeling shared by many. This feeling usually arises from a complaint I hear on CXF: the lack of documentation. However, I suspect that CXF's flexibility confuses numerous newbies and therefore contributes to the negative feelings towards it. The bad news is that, for some, it's hard to live without CXF. Popular open source software such as Apache Camel, Mule, and JBoss AS rely on CXF for their out-of-the-box web service support.

With the hope of increasing the understanding of Apache CXF, I've decided to publish the first of a series of how-tos. Each how-to demonstrates how to accomplish a particular task in CXF. For example, in this one I'll show how to publish a SOAP over HTTP Web Service. Throughout the how-tos, I will assume you have knowledge of basic Java and web services (e.g., SOAP, WSDL, REST). I will also assume you have knowledge of Maven, the tool I use for setting up dependencies, building, and running the apps described in the how-tos. Don't worry, I'll make the apps available on GitHub :-).

Before getting our hands dirty with this how-to, I'll give a 10,000-foot view of Apache CXF for those who are new to the framework. The following is its definition, taken from CXF's official website:

"Apache CXF is an open source services [Java] framework. CXF helps you build and develop services using frontend programming APIs, like JAX-WS and JAX-RS. These services can speak a variety of protocols such as SOAP, XML/HTTP, RESTful HTTP, or CORBA and work over a variety of transports such as HTTP, JMS or JBI."

In other words, with CXF, we can mix and match frontend APIs [1], protocols and transports to implement a service. For instance, we can instruct CXF to expose a Java application's interface as a WSDL and have the app talk with clients using CORBA over IIOP. But CXF is not just used to "build and develop services". We can also use CXF to build and develop service clients such as a SOAP consumer [2].

Now that we have an idea of what CXF is, let's follow the first how-to: Publish a SOAP over HTTP Web Service that takes in a string and returns "Hello" with the input string appended to it.

I have the following class:
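Something like this (a sketch; the complete code is on GitHub):

    public class HelloWorldImpl {

        public String sayHi(String text) {
            return "Hello " + text;
        }
    }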

I want the class to be exposed to the outside world as a Web Service. That is, I want to have:
  • a WSDL that reflects the class interface and is accessible to clients
  • SOAP requests translated into object method invocations and method results translated into SOAP replies.

With CXF, different approaches exist for doing the above. In this post, I'll cover the standard approach and leverage CXF's support for JAX-WS. In the Java world, JAX-WS is a spec for mapping a class to a SOAP Web Service. I map HelloWorldImpl by adding annotations to it:
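A sketch of the annotated class:

    import javax.jws.WebParam;
    import javax.jws.WebService;

    @WebService
    public class HelloWorldImpl {

        public String sayHi(@WebParam(name = "text") String text) {
            return "Hello " + text;
        }
    }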

The JAX-WS annotations in the class instruct the JAX-WS provider (i.e., CXF) to:
  • Expose HelloWorldImpl as a Web Service (@WebService)
  • Set the parameter name in the WSDL to "text" (@WebParam(name = "text")) [3].

The next step is to launch the Web Service so I can process SOAP requests. One strategy for launching the Web Service is to deploy the app onto a web container such as Tomcat. But this requires quite a bit of setup. I'd like to keep things simple for this first how-to. JAX-WS provides an easy way for launching the Web Service from within your application [4]:
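A sketch of the launcher:

    import javax.xml.ws.Endpoint;

    public class Server {

        public static void main(String[] args) {
            // Starts an embedded web server (Jetty, when CXF is the provider) on a separate thread
            Endpoint.publish("http://localhost:9000/helloWorld", new HelloWorldImpl());
        }
    }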

Endpoint.publish(...) tells the JAX-WS provider to start an embedded web server (Jetty for CXF) on a different thread. I give the method the URL on which the Web Service will operate and the class to be published as a Web Service.

What happens if I try to run the app without including the CXF libraries in the Java classpath? It will actually run without a hitch, that is, the Web Service will start up. The problem is I won't be using CXF as the JAX-WS provider. If you're using the Oracle JDK, JAX-WS RI is the default JAX-WS provider since it comes bundled with the JDK. Changing from JAX-WS RI to CXF is just a matter of declaring two CXF dependencies in the POM file:
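For example (the version is an assumption):

    <!-- JAX-WS frontend -->
    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-frontend-jaxws</artifactId>
        <version>2.7.0</version>
    </dependency>
    <!-- embedded Jetty HTTP transport -->
    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-transports-http-jetty</artifactId>
        <version>2.7.0</version>
    </dependency>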

To test the Web Service, from your console, go to the project's root directory and type:
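For example, if the POM wires up the exec-maven-plugin to run the Server class (an assumption; see the GitHub project):

    mvn compile exec:java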

Accessing http://localhost:9000/helloWorld?wsdl from your browser should bring up the Web Service's WSDL.

1: A frontend programming API is an API for mapping your application interface (e.g., Java interface) to a different interface (e.g., RESTful).
2: Funnily enough, this isn't mentioned in the definition.
3: @WebParam is required because a JAX-WS provider cannot deduce the parameter name from the compiled code in some situations.
4: Note that I'm not saying this is the suggested approach for a production environment. Even though it takes more time to setup, a web container offers several advantages over an embedded approach.

Wednesday 14 November 2012

A Practical Solution for XML-to-Relational Mapping

A challenge in the field of integration is persisting the contents of an XML document to a relational database [1]. At first glance, this might seem like a no-brainer. Create a database table where each column maps to an element defined in the XML document's XSD. Then write code that iterates through each element of the XML document and INSERTs its value into the corresponding column.

But what if an XML element has attributes? What about elements containing other elements? How will we deal with elements that may occur multiple times within an element? What if element order is important? And what if the XML schema is recursive? Yikes! As other people have pointed out, XML-to-relational mapping is not a trivial problem, and the solution mentioned above will only work for the simplest of XSDs.

In addition to a complex XML schema, we have another issue to consider: the size of the XSD. We could be tempted to write the mappings (i.e., the SQL) by hand for an XSD consisting of a small number of element definitions. On the other hand, writing the mappings for a 50 KB XSD is suicidal.

What we need is a library that hides from us the intricacies of persisting XML documents. In the Java space, attempts have been made at producing an XML-to-relational mapper. For instance, Hibernate 3 has a little-known experimental feature where EntityMode.DOM4J is used to persist and retrieve XML, much in the same way you would do with POJOs. Unfortunately, this feature was considered to have too many "holes" and was removed from Hibernate 4.

An alternative to Hibernate 3's EntityMode.DOM4J feature is Shrex: an open source prototype mapper originating from a research paper. I think Shrex has potential. With a few code changes to the prototype, I generated a database schema from a non-trivial XSD and "shredded" an XML document into relations. Having said that, I don't think Shrex is a practical solution. First off, Shrex's code base is unmaintained. To make things worse, it's devoid of community support. Therefore, taking the plunge with Shrex means that you'll be the one maintaining it. Secondly, Shrex was developed in an academic environment and, to my knowledge, never used in the wild. You'll probably have to make significant changes to the code base so that Shrex can meet your mapping requirements.

Till now, I've presented software that does the XML-relational conversion in one step. Yet the lack of support makes these mappers impractical for a project that must meet a tight deadline. The Java EE platform provides well-supported standards for:
  • binding XML documents to objects (JAXB), and
  • persisting objects to a relational store (JPA).

Leveraging JAXB and JPA, we could use objects as a go-between for persisting XML documents [2]. However, JAXB and JPA require Java classes for holding the XML data. Moreover, these classes must be annotated with mapping instructions. Writing the classes and annotations by hand is infeasible for a large or complex XSD. But generating them is feasible, and we can do it using an awesome XJC extension developed by Aleksei Valikov: Hyperjaxb3.



Given an XSD, Hyperjaxb3 generates POJO classes annotated with JAXB and JPA. Let me demonstrate how to use Hyperjaxb3 by walking you through a Maven project I've developed and made available on GitHub.

The following is the XSD of the XML documents I want to persist:
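It looks something like this (a simplified sketch; apart from the clashing "from" element and attribute discussed next, the element names are assumptions, and the full schema is in the GitHub project):

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

        <xs:element name="mails">
            <xs:complexType>
                <xs:sequence>
                    <xs:element name="envelope" type="envelopeType"
                                minOccurs="0" maxOccurs="unbounded"/>
                </xs:sequence>
            </xs:complexType>
        </xs:element>

        <!-- Note: both an element and an attribute named "from" -->
        <xs:complexType name="envelopeType">
            <xs:sequence>
                <xs:element name="from" type="xs:string"/>
                <xs:element name="to" type="xs:string"/>
                <xs:element name="subject" type="xs:string"/>
                <xs:element name="body" type="xs:string"/>
            </xs:sequence>
            <xs:attribute name="from" type="xs:string"/>
        </xs:complexType>

    </xs:schema>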

If you look carefully, you'll note that the complex type envelopeType has an element and an attribute with identical names: "from". Running Hyperjaxb3 on this schema would generate an EnvelopeType class containing two getters and setters with the same method names, which is illegal under the rules of Java. To prevent a name collision, I wrote a JAXB binding file:
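A sketch of the binding file (the schema file name and the renamed property are assumptions; the Hyperjaxb3-specific customisation that selects the JPA variant is left out of this sketch, as its syntax depends on the Hyperjaxb3 version):

    <jaxb:bindings version="2.1"
                   xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   schemaLocation="mails.xsd" node="/xs:schema">

        <!-- Generated classes go in this package -->
        <jaxb:schemaBindings>
            <jaxb:package name="org.opensourcesoftwareandme"/>
        </jaxb:schemaBindings>

        <!-- Rename the "from" attribute's property to avoid colliding with the "from" element -->
        <jaxb:bindings node="//xs:complexType[@name='envelopeType']/xs:attribute[@name='from']">
            <jaxb:property name="fromAttribute"/>
        </jaxb:bindings>

    </jaxb:bindings>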

In the binding file, aside from the name customisation, I instruct Hyperjaxb3 to generate classes:
  1. that follow the JPA spec, and
  2. have the package name org.opensourcesoftwareandme.
In the project's POM, I configure Maven to execute the Hyperjaxb3 plugin during the build:
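The plugin declaration looks roughly like this (version omitted; check the GitHub project for the exact configuration):

    <plugin>
        <groupId>org.jvnet.hyperjaxb3</groupId>
        <artifactId>maven-hyperjaxb3-plugin</artifactId>
        <executions>
            <execution>
                <goals>
                    <goal>generate</goal>
                </goals>
            </execution>
        </executions>
    </plugin>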

The plugin, by default, scans "src/main/resources" for any XSDs and generates classes from them.

In addition to the plugin, I declare the dependencies for the project:
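Roughly (the versions are assumptions; use whatever is current):

    <dependencies>
        <dependency>
            <groupId>org.jvnet.hyperjaxb3</groupId>
            <artifactId>hyperjaxb3-ejb-runtime</artifactId>
            <version>0.5.6</version>
        </dependency>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-entitymanager</artifactId>
            <version>4.1.7.Final</version>
        </dependency>
        <dependency>
            <groupId>org.apache.derby</groupId>
            <artifactId>derby</artifactId>
            <version>10.9.1.0</version>
            <scope>test</scope>
        </dependency>
    </dependencies>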

The dependencies to note are:
  1. hyperjaxb3-ejb-runtime: provides utility methods for (de)serialization, retrieving the persisted ID, etc... 
  2. hibernate-entitymanager: the JPA provider which in this case is Hibernate
  3. derby: the database used for testing
To test the mapping, I wrote a JUnit test case:
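A rough sketch of the test (the Mails class, file paths, and persistence unit name are assumptions; the real test uses Hyperjaxb3's JAXBContextUtils and JAXBElementUtils helpers where this sketch falls back on raw JAXB):

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBIntrospector;

    import org.junit.Test;

    import static org.junit.Assert.assertNotNull;

    public class MailsPersistenceTest {

        @Test
        public void testPersistingMails() throws Exception {
            // 1. Settings telling Hibernate which database to use and how to set it up
            Map<String, String> settings = new HashMap<String, String>();
            settings.put("hibernate.connection.driver_class", "org.apache.derby.jdbc.EmbeddedDriver");
            settings.put("hibernate.connection.url", "jdbc:derby:target/mailsdb;create=true");
            settings.put("hibernate.dialect", "org.hibernate.dialect.DerbyTenSevenDialect");
            settings.put("hibernate.hbm2ddl.auto", "create");

            // 2. Initialise Hibernate with the configuration settings
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("org.opensourcesoftwareandme", settings);

            // 3. Deserialize the XML test document
            JAXBContext context = JAXBContext.newInstance("org.opensourcesoftwareandme");
            Object unmarshalled = context.createUnmarshaller()
                    .unmarshal(new File("src/test/resources/mails.xml"));
            Mails mails = (Mails) JAXBIntrospector.getValue(unmarshalled);

            // 4. Persist the object
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();
            em.persist(mails);
            em.getTransaction().commit();

            // 5. Retrieve the persisted object (getHjid() is the Hyperjaxb3-generated ID)
            Mails persisted = em.find(Mails.class, mails.getHjid());
            assertNotNull(persisted);
            em.close();
        }
    }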

In a few words, the test case:
  1. Creates a map and populates it with configuration settings that tell Hibernate which database to use for persisting the XML document and how it should be set up.

  2. Initialises Hibernate with the configuration settings

  3. Uses Hyperjaxb3's utility classes, JAXBContextUtils and JAXBElementUtils, to deserialize the XML test document mails.xml.

  4. Tells Hibernate to persist the object

  5. Retrieves the persisted object and asserts that the data content is the same as in the XML doc.
You can run Hyperjaxb3 by entering "mvn test" from your console, which will also run the test. The generated classes are found in the target folder.

I'm very interested in seeing other alternatives for persisting XML documents to a relational database. I'd also love to know if persisting XML documents with Hyperjaxb3 would work for your project, and if not, why.

1: The wisdom of employing a relational database instead of a document store is debatable.
2: Note that this isn't a general solution for persisting XML documents.

Tuesday 6 November 2012

Mule ZeroMQ Transport: Lightweight RPC with ØMQ and Protocol Buffers

Check out John D'Emic's awesome blog post demonstrating how ØMQ and Mule could be used for exchanging stock quote data between a front-end and back-end application.

Monday 22 October 2012

Testing Web Services from JUnit using SoapUI

There is no doubt that SoapUI is a superb tool for testing Web Services. Yet some people seem to think that SoapUI is just for manual testing. In addition to manual tests, SoapUI supports automated tests, that is, test suites. Test suites are a step in the right direction towards automation, but running them still requires human intervention. In other words, the user still has to launch SoapUI and click on the "Run..." button. This doesn't go down well in a CI environment.

Since SoapUI is a Java application, test suites can be invoked directly from JUnit, removing the need for human intervention. On my GitHub page, I've uploaded a project that demonstrates how to do this. The project has a JUnit test case that launches a SoapUI mock Web Service and then invokes the test suite for it. Let me give a brief overview of how I went about developing the project.

The first step was to set up a SoapUI project and test suite based on a WSDL. There's plenty of documentation online on how to do this. In the test suite, I created a test case for each WSDL operation.


For every test case, I asserted that the SOAP response isn't a SOAP fault, complies with the schema, is a valid SOAP response, and that the outcome of the operation is correct.


The next step was to create a mock Web Service in the same project so I have something to run the test suite against. Again, there is documentation out there on this.


Now it was time to start developing the JUnit test case. I could have imported the SoapUI libraries into my Java project from the SoapUI distribution. But being a masochist, I decided to use Maven :P. In the project's POM, I declared the repo where to find SoapUI and its dependencies:
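The repository declaration (the URL is from memory and may have changed since):

    <repositories>
        <repository>
            <id>eviware</id>
            <url>http://www.eviware.com/repository/maven2/</url>
        </repository>
    </repositories>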

I then declared the dependencies that I want to be retrieved and included in the classpath:
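The dependencies (the coordinates and versions are assumptions based on what the eviware repo published at the time):

    <dependencies>
        <dependency>
            <groupId>eviware</groupId>
            <artifactId>soapui</artifactId>
            <version>4.5.1</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
        </dependency>
    </dependencies>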

The last step was to write the test for my Web Service:
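A sketch of the test (the project file path and the test suite and mock service names are assumptions):

    import java.util.List;

    import org.junit.Test;

    import com.eviware.soapui.impl.wsdl.WsdlProject;
    import com.eviware.soapui.impl.wsdl.WsdlTestSuite;
    import com.eviware.soapui.impl.wsdl.mock.WsdlMockRunner;
    import com.eviware.soapui.impl.wsdl.mock.WsdlMockService;
    import com.eviware.soapui.model.testsuite.TestCase;
    import com.eviware.soapui.model.testsuite.TestRunner;
    import com.eviware.soapui.support.types.StringToObjectMap;

    import static org.junit.Assert.assertEquals;

    public class CalculatorServiceTest {

        @Test
        public void testCalculatorService() throws Exception {
            // 1. Load the test suite and mock service configurations from the SoapUI project
            WsdlProject project = new WsdlProject("calculator-soapui-project.xml");

            // 2. Launch the mock Web Service
            WsdlMockService mockService = project.getMockServiceByName("CalculatorMockService");
            WsdlMockRunner mockRunner = mockService.start();

            // 3. Execute every test case in the test suite and check it finished cleanly
            WsdlTestSuite testSuite = project.getTestSuiteByName("CalculatorTestSuite");
            List<TestCase> testCases = testSuite.getTestCaseList();
            for (TestCase testCase : testCases) {
                TestRunner runner = testCase.run(new StringToObjectMap(), false);
                assertEquals(TestRunner.Status.FINISHED, runner.getStatus());
            }

            mockRunner.stop();
        }
    }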

testCalculatorService is annotated with @Test, which instructs the JUnit runner to execute the test. What the test does is simple:
  1. Loads the test suite and mock Web Service configurations from the SoapUI project calculator-soapui-project.xml
  2. Launches the mock Web Service
  3. Executes the test suite
You can run the test by typing "mvn test" in your console from the project's root directory.

Saturday 18 August 2012

Attaching a file in Cordova & uploading it with jQuery Mobile

This post is for those unlucky souls who are tasked with developing an Android solution that attaches a file to an HTML form using Cordova and uploads the form via jQuery Mobile. Documentation on these topics is scarce, so I've created a minimal example on my GitHub page demonstrating how to do this. Instructions to run the example are found inside the project's README file.


Sunday 12 August 2012

Building ZeroMQ for Android

This weekend I got ZeroMQ running on the Android platform. The ZeroMQ website gives instructions on how to build ZeroMQ for Android and (surprise, surprise) I got errors following them. At least I wasn't alone: fellow blogger Victor was just as lucky as I was. He solved these errors and, what's even better, created a set of scripts which correctly build ZeroMQ for Android. Alas, running "make all" on the project gave me the following error:

The compiler complained because it couldn't find some Android NDK header files whose location changed in the latest release of the NDK (at the time of writing, r8b). This was an easy fix: I just passed some flags to the compiler specifying the new header file locations. On my GitHub page, you'll find a fork of Victor's project which applies the fix. Enjoy!

Friday 3 August 2012

Sharing an FTP service between un-clustered Mule instances

One notorious problem with having Mule instances consume files from the same FTP server is that it's possible for two or more Mule instances to concurrently download the same file. This leads to duplicate messages: Mule messages representing the same file in different Mule nodes.


The clustering solution offered by Mule EE 3.2 addresses this issue by having the Mule instances coordinate with each other on which one will download files from the FTP server.


Clustering introduces an in-memory data grid to your Mule nodes. An in-memory data grid adds significant complexity, which is fine if you intend to use it for load balancing VM queues, sharing router state, etc. But for me, all that complexity is unnecessary if you intend to use clustering just for the sake of preventing Mule instances from grabbing the same file on an FTP server. It's like hitting a nail with a sledgehammer.

What's the alternative to using a cluster? The alternatives my colleagues and I came up with weren't exactly pretty, and all of them involved some playing around inside Mule. These days I've been travelling, which gave me time to think more clearly about the problem. It turns out that a potentially simple solution (at the expense of a small increase in latency) lies outside of Mule:


In the above setup, the FTP proxy between the set of Mule instances and the FTP server is responsible for ensuring that a Mule node gets served a 'free' file. Such a solution should be relatively easy to develop and add little complexity compared to the others, right? To be sure, I set about implementing the FTP proxy. Luckily, most of the work was already done for me. James Healey took the trouble of developing an event-driven FTP framework in Ruby: em-ftpd. All I had to do was add callbacks corresponding to FTP actions (e.g., ls, get, size, etc.). After a couple of hours I had a naive implementation up and running:

MuleFtpProxyDriver represents a Mule FTP connection. This means that it's instantiated every time Mule attempts an FTP login. It declares two class variables:

  • @@hidden_files is a list containing the files Mule instances are currently downloading. As you will see, a Mule instance isn't permitted to start a download on a file contained in this list. This removes the chance of having duplicate messages.

  • @@mutex is the lock used by MuleFtpProxyDriver instances to safely access @@hidden_files.

The details of the FTP server which the proxy relays requests to are passed to MuleFtpProxyDriver's constructor. The constructor generates a unique ID that the MuleFtpProxyDriver instance uses to keep track of the files it's downloading.

The dir_contents method returns the list of files in a directory. The proxy lists only files which are 'free'. Therefore, Mule doesn't attempt to download a file that is already being downloaded, since it's hidden from view.

In the get_file method, the requested file is compared against the list of hidden files. If there is a match, the proxy tells Mule that the file is unavailable. Otherwise, it transfers the file from the FTP server to Mule. 

The delete_file method, in addition to deleting the file from disk, removes the file's entry from @@hidden_files. This allows a new file with the same name as the deleted file to be downloaded.

The unbind method is invoked whenever the FTP connection is broken or closed. It makes visible again those files which weren't downloaded completely.

As usual, instructions on configuring and running the FTP proxy are found in the repo. Try it out and let me know your thoughts about using an FTP proxy as an alternative to clustering.

Friday 6 July 2012

A WebSocket Cordova plugin for Android

This week I've investigated the feasibility of leveraging NullMQ inside a Cordova/PhoneGap application on Android. NullMQ sends and receives messages via WebSocket and, unfortunately, Android's browser doesn't support the standard yet. I found Cordova plugins on GitHub that implement WebSocket. The problem is that none of them functioned properly against the latest version of the WebSocket spec, that is, RFC 6455. So I decided to fork the Java-WebSocket project and expose it as a Cordova plugin. The cool thing about the Java-WebSocket project is that it supports, in addition to RFC 6455, earlier releases of the spec, although by default the plugin is configured to use RFC 6455.

The plugin's API is identical to the WebSocket API, so you don't need to learn a new API or change existing WebSocket calls. To install:
  1. Include websocket.js in your HTML file (see the first snippet after this list).

  2. From your console, enter 'mvn package' in the Java-WebSocket project root directory. Move the produced JAR to your Cordova application classpath.

  3. Add the plugin entry to the plugins.xml file (see the second snippet after this list).
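For reference, step 1's include and step 3's plugin entry would look something like this (the plugin's Java class name here is hypothetical; check the fork's README for the real one):

    <script type="text/javascript" src="websocket.js"></script>

    <!-- value must point at the plugin's Java class (hypothetical name shown) -->
    <plugin name="WebSocket" value="org.java_websocket.cordova.WebSocketPlugin"/>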

Sunday 24 June 2012

Mule meet ZeroMQ. ZeroMQ meet Mule.

These past few weeks I've been experimenting with ZeroMQ (a.k.a. ØMQ or ZMQ). I like the idea of a library that provides you with the building blocks for constructing your own messaging architecture. For example, we may want to send messages to an application that might be unavailable at times. In our system, losing some messages isn't the end of the world, so we don't need a high degree of message reliability. A common solution for this type of problem is to introduce a message broker, such as ActiveMQ, and let the broker worry about making sure that messages are eventually delivered to the recipient.


The main disadvantage of this approach is that a broker adds another hop to the path of the message. Also, a broker is another point in your distributed system where things could go wrong.

Another option is to let the sender ensure that the message gets delivered to the recipient. Since guaranteed message delivery isn't a requirement, we can afford to lose messages should the sender go down.


Using ØMQ's asynchronous socket-like API, we can implement this scenario in a few minutes. All we need to do is submit the message to ØMQ, and ØMQ will deliver it to the recipient on our behalf. However, because ØMQ has its own protocol, the library must be embedded within the sender and receiver applications.
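For example, using the Java binding (jzmq), a fire-and-forget sender boils down to a handful of lines (the address and payload here are made up):

    import org.zeromq.ZMQ;

    public class Sender {

        public static void main(String[] args) {
            ZMQ.Context context = ZMQ.context(1);
            ZMQ.Socket socket = context.socket(ZMQ.PUSH);

            // Connect to the receiver; ØMQ queues and delivers the message on our behalf
            socket.connect("tcp://192.168.34.10:9090");
            socket.send("a stock quote".getBytes(), 0);

            socket.close();
            context.term();
        }
    }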



This simple example just scratches the surface of what you can do with ØMQ. Real-world applications require the architecture to scale and ØMQ offers building blocks to tackle this challenge. Consult the ØMQ guide to learn more.

How does Mule, an integration framework, fit in with ØMQ? Well, one simple use case is that even though the back-end talks ØMQ, the outside world speaks a standard language such as HTTP. Mule can serve as the bridge between the outside world and the ØMQ back-end, as shown below:


On my GitHub page, you'll find the first release of the Mule 3 ZeroMQ Transport for ZeroMQ 2.2. This transport frees you from having to know all the details of ØMQ's API. Furthermore, you don't need to worry about processing messages concurrently: the transport takes care of that.
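Here's a hedged sketch of such a bridge (namespace declarations trimmed; the exact endpoint attribute names may differ from the transport's README):

    <flow name="http-to-zeromq-bridge">
        <http:inbound-endpoint host="localhost" port="8080" path="quotes"
                               exchange-pattern="request-response"/>
        <zeromq:outbound-endpoint address="tcp://192.168.34.10:9090"
                                  socket-operation="connect"
                                  exchange-pattern="request-response"/>
    </flow>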

This snippet is all it takes to implement the bridge in Mule. Those of you who are familiar with the ØMQ API should find the zeromq:outbound-endpoint easy to understand. The outbound-endpoint tells ØMQ to "connect" to the receiver 192.168.34.10 listening on port 9090. All of this happens once, at initialisation time. At run-time, the outbound-endpoint transforms messages to arrays of bytes and sends them off to the receiver. Replies are returned by the outbound-endpoint as arrays of bytes.

The exchange pattern request-response is just one of several exchange patterns the ØMQ transport supports. What if we don't want to wait for the reply from the back-end? Just set the exchange pattern to one-way. ØMQ folks, you may specify push instead of one-way. The same code is executed underneath.

Of course, it's quite possible that we have another Mule instance on the other end accepting ØMQ messages as shown below:

Apart from different exchange patterns (see the project's README for the complete list of supported exchange patterns) and socket operations (i.e., connect and bind), the transport also supports multi-part messages. In the event a ØMQ inbound-endpoint receives a multi-part message, a java.util.List is spit out. Each element in the list is a byte array representing a message part. 

The outbound-endpoint works a little differently when it comes to multi-part messages. To send a multi-part message, the multipart attribute must be set to true on the outbound-endpoint. Additionally, the message must be a java.util.List where each element represents a message part.

Let me know how you find this transport and how it may be improved. Code contributions are more than welcome :-). Future releases of the transport will support missing features such as durable sockets, so keep an eye on the repo.

Wednesday 13 June 2012

Mule ISO 8583 Module

Check out my latest post on the Ricston blog: Mule ISO 8583 Module

Monday 28 May 2012

Revisiting Dynamic Ports in Mule 3

Daniel Zapata wrote an interesting post about using dynamic ports when testing your Mule 3 application. Since then, subsequent releases of Mule have included support for JUnit 4, which means improved flexibility in terms of dynamic ports.

Before JUnit 4, an annoying problem with dynamic ports was that you were limited to property placeholder names following the pattern 'port' + n, where n is an integer. For example:
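For instance, an endpoint in your Mule config had to look something like this for the test framework to pick up the port:

    <http:inbound-endpoint host="localhost" port="${port1}" path="echo"/>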

Using JUnit 4, this problem is solved by leveraging the Rule annotation and the DynamicPort class. We'll see how this is done. Let's create a simple Mule config for testing dynamic ports out:
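Something like this (namespace declarations trimmed):

    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:http="http://www.mulesoft.org/schema/mule/http">

        <flow name="dynamicPortFlow">
            <http:inbound-endpoint host="localhost" port="${foo}" path="echo"
                                   exchange-pattern="request-response"/>
            <echo-component/>
        </flow>

    </mule>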

Notice the property placeholder "${foo}" in the port attribute. The next step is to create a test case for the config. The test case must extend org.mule.tck.junit4.FunctionalTestCase for this to work:
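A sketch of the test case (the config file name is an assumption):

    import org.junit.Rule;
    import org.junit.Test;
    import org.mule.api.MuleMessage;
    import org.mule.api.client.MuleClient;
    import org.mule.tck.junit4.FunctionalTestCase;
    import org.mule.tck.junit4.rule.DynamicPort;

    import static org.junit.Assert.assertEquals;

    public class DynamicPortTest extends FunctionalTestCase {

        // Before the test runs, finds a free port and substitutes it for ${foo} in the config
        @Rule
        public DynamicPort port = new DynamicPort("foo");

        @Override
        protected String getConfigResources() {
            return "dynamic-port-config.xml";
        }

        @Test
        public void testDynamicPort() throws Exception {
            MuleClient client = muleContext.getClient();
            MuleMessage response = client.send(
                    "http://localhost:" + port.getNumber() + "/echo", "Hello", null);
            assertEquals("Hello", response.getPayloadAsString());
        }
    }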

The port variable declaration is what we're interested in. @Rule instructs JUnit to execute code in the DynamicPort object before running the test. The code will:
  1. Find an unused port,
  2. Search the Mule config for a placeholder with the name 'foo', and then
  3. Replace the placeholder with the unused port number.
The newly assigned port number is retrieved using the getNumber() method of the DynamicPort class. The complete example can be found on GitHub.

Saturday 26 May 2012

Conventions, Conventions, Conventions

I've been working with open source projects for a couple of years now and I've come to expect two things from such projects:
  • a set of working examples that comes with the distribution, and
  • a quick guide on getting started.
Mule has them. ServiceMix has them. So why not Spring Integration? It's a shame, because I've heard a lot of good things about the software. I couldn't find a reason why SpringSource doesn't post an official getting-started guide on their main page. However, they do have a reason for leaving examples out of the distribution, although for me it's not a good enough reason.

As a frequent user of open source projects, I'm accustomed to certain conventions. For instance, I expect to find a "README.txt" in the distribution's root folder telling me how to launch the program. When you decide to stay away from conventions, new users have to rely on Google to find what they're looking for. This is great for Google but not for your open source project since the user might get fed up searching and just try another project. I spent about 5 minutes searching for a simple Spring Integration example. This could have been because I chose a poor search term or I'm a slow reader, but still, for me it was a waste time. Doing the same thing in Mule or ServiceMix took me a second. Why? Not because I used Mule or ServiceMix before but because I expected there would be example folders in their distributions. Which project do you think the impatient user will have a better experience learning?