Archive for the ‘Java’ Category

Spring Configuration – Selecting An Alternate Implementation

November 9, 2011

A common recurring pattern in software development is the need to select a specific instance of an interface at runtime. This instance can either be a distinct implementation class of the interface or the same class instantiated with different properties. Spring provides unparalleled abilities to define different bean instances. These can be categorized as follows:

  • Each bean is a different implementation class of the interface.
  • Each bean is the same implementation class but has different configuration.
  • A mixture of the two above.

The canonical example is selecting a mock implementation for testing instead of the actual target production implementation. However there are often business use cases where alternate providers need to be selectively activated.

The goal is to externalize the selection mechanism by providing a way to toggle the desired bean name. We want to avoid manually commenting and uncommenting bean names inside a Spring XML configuration file. In other words, the key question is: how do we toggle the particular implementation?

A brief disclaimer note: this pattern is most applicable to Spring 3.0.x and lower. Spring 3.1 introduces some exciting new features such as bean definition profiles that depend on the environment. See the Spring 3.1 documentation for in-depth discussions.

There are two variants of this pattern:

  • Single Implementation – We only need one active implementation at runtime.
  • Multiple Implementations – We need several implementations at runtime so the application can dynamically select the desired one.

Assume we have the following interface:

  public interface NoSqlDao<T extends NoSqlEntity> {
     public void put(T o) throws Exception;
     public T get(String id) throws Exception;
     public void delete(String id) throws Exception;
  }

  public interface UserProfileDao extends NoSqlDao<UserProfile> {
  }

Assume two implementations of the interface:

  public class CassandraUserProfileDao<T extends UserProfile>
    implements UserProfileDao

  public class MongodbUserProfileDao<T extends UserProfile>
    implements UserProfileDao

Single Loaded Implementation

In this variant of the pattern, you only need one implementation at runtime. Let’s assume that the name of the bean we wish to load is userProfileDao.

  ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
  UserProfileDao userProfileDao = context.getBean("userProfileDao",UserProfileDao.class);

The top-level applicationContext.xml file contains common global beans and an import statement for the desired provider. The value of the imported file is externalized as a property called providerConfigFile. Since each provider file is mutually exclusive, the bean name is the same in each file.

    <bean id="propertyConfigurer"
      <property name="location" value="" />
      <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
    <import resource="${providerConfigFile}"/>

The provider-specific configuration files (for example, applicationContext-cassandra.xml and applicationContext-mongodb.xml) each define the same bean name userProfileDao.

The Cassandra provider file:

    <bean id="userProfileDao" class="com.amm.nosql.dao.cassandra.CassandraUserProfileDao" >
      <constructor-arg ref="keyspace.userProfile"/>
      <constructor-arg value="${cassandra.columnFamily.userProfile}"/>
      <constructor-arg ref="userProfileObjectMapper" />
    </bean>

The MongoDB provider file:

    <bean id="userProfileDao" class="com.amm.nosql.dao.mongodb.MongodbUserProfileDao">
      <constructor-arg ref="userProfile.collectionFactory" />
      <constructor-arg ref="mongoObjectMapper" />
    </bean>

At runtime you need to specify the value of the providerConfigFile property. Unfortunately, with Spring 3.0 this has to be a system property and cannot be specified inside a properties file! This means it will work for a stand-alone Java application but not for a WAR unless you pass the value to the web server externally as a system property. This problem has allegedly been fixed in Spring 3.1 (I didn’t notice it working for 3.1.0.RC1). For example:
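One way, sketched here, is to set the system property programmatically before creating the context – equivalent to passing it with -D on the java command line. The file name assumes the Cassandra provider file shown above.

  System.setProperty("providerConfigFile", "applicationContext-cassandra.xml");
  ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
  UserProfileDao userProfileDao = context.getBean("userProfileDao", UserProfileDao.class);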


Multiple Loaded Implementations 

With this variant of the pattern, you need all implementations loaded into your application context so you can later decide which one to choose. Instead of one import statement, applicationContext.xml imports all implementations.

  <import resource="applicationContextContext-cassandra.xml />
  <import resource="applicationContextContext-mongodb.xml />
  <import resource="applicationContextContext-redis.xml />
  <import resource="applicationContextContext-riak.xml />
  <import resource="applicationContextContext-membase.xml />
  <import resource="applicationContextContext-oracle.xml />

Since you have one namespace, each implementation has to have a unique bean name for the UserProfileDao implementation. Using our previous example:


  <bean id="cassandra.userProfileDao" class="com.amm.nosql.dao.cassandra.CassandraUserProfileDao" >
    <constructor-arg ref="keyspace.userProfile"/>
    <constructor-arg value="${cassandra.columnFamily.userProfile}"/>
    <constructor-arg ref="userProfileObjectMapper" />


  <bean id="mongodb.userProfileDao" class="com.amm.nosql.dao.mongodb.MongodbUserProfileDao">
    <constructor-arg ref="userProfile.collectionFactory" />
    <constructor-arg ref="mongoObjectMapper" />

Then inside your Java code you need a mechanism to select the desired bean, e.g. load either cassandra.userProfileDao or mongodb.userProfileDao. For example, you could have a test UI containing a dropdown list of all implementations, or you might even need to access two different NoSQL stores through the UserProfileDao interface at the same time.
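As a minimal sketch of such a selection mechanism (the nosql.provider property name is invented purely for illustration):

  // e.g. launched with -Dnosql.provider=cassandra or -Dnosql.provider=mongodb
  String provider = System.getProperty("nosql.provider", "cassandra");
  UserProfileDao userProfileDao =
      context.getBean(provider + ".userProfileDao", UserProfileDao.class);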

MongoDB and Cassandra Cluster Failover

October 21, 2011

One of the most important features of a scalable clustered NoSQL store is how to handle failover. The basic question is: is failover seamless from the client perspective? This is not always immediately apparent from vendor documentation especially for open-source packages where documentation is often wanting. The only way to really know is to run your own tests – to both verify vendor claims and to fully understand them. Caveat emptor – especially if the product is free!

Mongo and Cassandra use fundamentally different approaches to clustering. Mongo is based on the classical master/slave model (similar to MySQL) whereas Cassandra is a peer-to-peer system modeled on the eventual consistency paradigm pioneered by Amazon’s Dynamo system. This difference has specific ramifications regarding failover capabilities. The design trade-offs regarding the CAP theorem are described very well in the dbmusings blog post Overview of the Oracle NoSQL Database (Section CAP).

For Mongo, the client can only write to the master and therefore it is a single point of failure. Until a new master is elected from the secondaries, the cluster will not be reachable for writes.

For Mongo reads, you have two options: either talk to the master or to the secondaries. The default is to read from the master, and you will be subject to the same semantics as for writes. To enable reading from secondaries, you can call Mongo.slaveOk() on your client driver. For details, see the MongoDB driver documentation.
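A sketch for the 2.x Java driver of that era; the seed list mirrors the three-node replica set launched by the script below:

  Mongo mongo = new Mongo(Arrays.asList(
      new ServerAddress("localhost", 27017),
      new ServerAddress("localhost", 27018),
      new ServerAddress("localhost", 27019)));
  mongo.slaveOk(); // reads may now be served by secondaries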

For Cassandra, as long  as your selected consistency policy is satisfied, failing nodes will not prevent client access.

Mongo Setup

I’ll illustrate the issue with a simple three node cluster. For Mongo, you’ll need to create a three-node replica set which is well described on the Replica Set Tutorial page.

Here’s a convenience shell script to launch a Mongo replica set.

  # assumes $dir points at a data directory containing node0, node1 and node2 subdirectories
  mv nohup.out old-nohup.out
  OPTS="--rest --nojournal --replSet myReplSet"
  nohup mongod $OPTS --port 27017 --dbpath $dir/node0  &
  nohup mongod $OPTS --port 27018 --dbpath $dir/node1  &
  nohup mongod $OPTS --port 27019 --dbpath $dir/node2  &

Cassandra Setup

For Cassandra, create a keyspace with replication factor of 3 in your schema definition file.

  create keyspace UserProfileKeyspace
    with strategy_options=[{replication_factor:3}]
    and placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy';

Since I was using the standard Java Hector client along with Spring, I’ll highlight some of the key Spring bean configurations. The important point to note is the consistency policy must be quorum which means that for an operation (read or write) to succeed, two out of the three nodes must respond.

Properties File
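A representative sketch of the properties this configuration expects; the host and cluster values are illustrative, and the policy value points at one of the two policy beans defined below:

  cassandra.cluster=TestCluster
  cassandra.hosts=localhost:9160,localhost:9161,localhost:9162
  cassandra.columnFamily.userProfile=UserProfile
  cassandra.consistencyLevelPolicy=quorumAllConsistencyLevelPolicy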


Application Context File

  <bean id="userProfileKeyspace"
    <constructor-arg value="UserProfileKeyspace" />
    <constructor-arg ref="cluster"/>
    <property name="consistencyLevelPolicy" ref="${cassandra.consistencyLevelPolicy}" />

  <bean id="cluster"
    <constructor-arg value="${cassandra.cluster}"/>
    <constructor-arg ref="cassandraHostConfigurator"/>

  <bean id="cassandraHostConfigurator"
     <constructor-arg value="${cassandra.hosts}"/>

  <bean id="quorumAllConsistencyLevelPolicy"
        class="me.prettyprint.cassandra.model.QuorumAllConsistencyLevelPolicy" />

  <bean id="allOneConsistencyLevelPolicy"
        class="me.prettyprint.cassandra.model.AllOneConsistencyLevelPolicy" />

Test Scenario

The basic steps of the test are as follows. Note the Mongo example assumes you do not have slaveOk turned on.

  • Launch cluster
  • Execute N requests where N is a large number such as 100,000 to give your cluster time to fail over. The request is either a read or write.
  • While your N requests are running, kill one of the nodes. For Cassandra this can be any node since it is peer-to-peer. For Mongo, kill the master node.
  • For Cassandra, there will be no exceptions. If your requests are inserts, you will be able to subsequently retrieve them.
  • For Mongo, your requests will fail until a secondary is promoted to master. This happens for both writes and reads. The time window is “small”, but depending upon your client request rate, the number of failed requests can run into the thousands! See the sample exceptions below.

With Mongo, the client can directly access only the master node. The secondaries can only be accessed by the master and never directly by the client. With Cassandra, as long as the minimum number of nodes required by the quorum can be reached, your operation will succeed.
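A minimal sketch of such a request loop, using the UserProfileDao from the Spring configuration post above (the UserProfile constructor shown is an assumption):

  int failures = 0;
  for (int i = 0; i < 100000; i++) {
      try {
          userProfileDao.put(new UserProfile("user-" + i)); // constructor is an assumption
      } catch (Exception e) {
          failures++; // Mongo: these accumulate until a new master is elected; Cassandra: none expected
      }
  }
  System.out.println("failed requests: " + failures);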

Mongo Put Exception Example

I really like the message “can’t say something” – sort of cute!

com.mongodb.MongoException$Network: can't say something
    at com.mongodb.DBTCPConnector.say(
    at com.mongodb.DBTCPConnector.say(
    at com.mongodb.DBApiLayer$MyCollection.update(
    at com.amm.nosql.dao.mongodb.MongodbDao.put(

Mongo Get Exception Example

com.mongodb.MongoException$Network: can't call something
    at com.mongodb.DBApiLayer$MyCollection.__find(
    at com.mongodb.DBCursor._check(
    at com.mongodb.DBCursor._next(
    at com.amm.nosql.dao.mongodb.MongodbDao.get(

Mongo Get with slaveOk() 

If you invoke Mongo.slaveOk() for your client driver, then your reads will not fail if a node goes down. You will get the following warning.

Oct 30, 2011 8:12:22 PM com.mongodb.ReplicaSetStatus$Node update
WARNING: Server seen down: localhost:27019 Connection reset by peer: socket write error
        at Method)
        at com.mongodb.OutMessage.pipe(
        at com.mongodb.DBPort.go(
        at com.mongodb.DBPort.go(
        at com.mongodb.DBPort.findOne(
        at com.mongodb.DBPort.runCommand(
        at com.mongodb.ReplicaSetStatus$Node.update(
        at com.mongodb.ReplicaSetStatus.updateAll(
        at com.mongodb.ReplicaSetStatus$
Oct 30, 2011 8:12:22 PM com.mongodb.DBTCPConnector _set

Cassandra Java Annotations

August 30, 2010


Cassandra has a unique column-oriented data model which does not easily map to an entity-based Java model. Furthermore, the Java Thrift client implementation is very low-level and presents the developer with a rather difficult API to work with on a daily basis. This situation  is a good candidate for an adapter to shield the business code from mundane plumbing details.

I recently did some intensive Cassandra (version 0.6.5) work to load millions of geographical positions for ships at sea. Locations were already being stored in MySQL/InnoDB using JPA/Hibernate, so I already had a ready-made model based on JPA entity beans. After some analysis, I created a mini-framework based on custom annotations and a substantial adapter to encapsulate all the “ugly” Thrift boilerplate code. Naturally everything was wired together with Spring.


The very first step was to investigate existing Cassandra Java client toolkits. As usual in a startup environment time was at a premium, but I quickly checked out a few key clients. Firstly, I looked at Hector, but its API still exposed too much of the Thrift cruft for my needs. It did have nice features for failover and connection pooling, and I will definitely look at it in more detail in the future. Pelops looked really cool with its Mutators and Selectors, but it too dealt with columns – see the description.  What I was looking for was an object-oriented way to load and query Java beans. Note that this OO entity-like paradigm might not be applicable to other Cassandra data models, e.g. sparse matrices.

And then there was DataNucleus, which advertises JPA/JDO implementations for a large variety of non-SQL persistence stores: LDAP, Hadoop HBase, Google App Engine, etc. There was mention of a Cassandra solution, but it wasn’t yet ready for prime time. How they manage to address the massive semantic mismatch between JPA and the Cassandra data model is beyond me – unfortunately I didn’t have time to drill down. Seems fishy – but I’ll definitely check this out in the future. Even though I’m a big fan of using existing frameworks/tools, there are times when “rolling your own” is the best course of action.

The following collaborating classes comprised the framework:

  • CassandraDao – High-level class that understands annotated entity beans
  • ColumnFamily – An adapter for common column family operations – hides the Thrift gore
  • AnnotationManager – Manages the annotated beans
  • TypeMapper – Maps Java data types into bytes and vice versa

Since we already had a JPA-annotated Location bean, my first thought was to reuse this class and simply process the JPA annotations into their equivalent Cassandra concepts. Upon further examination this proved ugly – the semantic mismatch was too great. I certainly did not want to be importing JPA/Hibernate packages into a Cassandra application! Furthermore, many annotations (such as collections) were not applicable, and I needed annotations for Cassandra concepts that did not exist in JPA. In “set theoretic” terms, there are JPA-specific features, Cassandra-specific features and an intersection of the two.

The first-pass implementation required only three annotations: Entity, Column and Key. The Entity annotation is a class-level annotation with keyspace and columnFamily attributes. The Column annotation closely corresponded to its JPA equivalent. The Key annotation specifies the row key. The Entity defines the column family/keyspace  that the entity belongs to and its constituent columns. The CassandraDao class corresponds to a single column family and accepts an entity and type mapper.
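As an illustration, the three annotations could be declared roughly as follows; the names and attributes come from this post, but the retention and target choices are my assumptions (note that the entity beans below place Column on getter methods):

  import java.lang.annotation.*;

  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface Entity {
      String keyspace();
      String columnFamily();
  }

  @Retention(RetentionPolicy.RUNTIME)
  @Target({ElementType.METHOD, ElementType.FIELD})
  public @interface Column {
      String name();
  }

  @Retention(RetentionPolicy.RUNTIME)
  @Target({ElementType.METHOD, ElementType.FIELD})
  public @interface Key {
  }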

Two column families were created: a column family for ship definitions and a super column family for ship locations. The Ship CF was a simple collection of ship details keyed by each ship’s MMSI (a unique ID for a ship which is typically engraved on the keel). The Location CF represented a one-to-many relationship for all the possible locations of a ship. The key was the ship’s MMSI, and the column names were Long types representing the millisecond timestamp of the location. The value of the column was a super column – it contained the columns defined in the ShipLocation bean – latitude, longitude, course over ground, speed over ground, etc. The number of locations for a given ship could possibly range in the millions!

From an implementation perspective, I was rather surprised to find that there are no standard reusable classes to map basic Java data types to bytes. Sure, String has getBytes(), but I had to do some non-trivial, distracting detective work to get doubles, longs, BigInteger, BigDecimal and Dates converted – all the shifting magic, etc. I also made sure to run some performance tests to choose the best alternative!
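To give a flavor of the conversions involved, here is a sketch along the lines of the DefaultTypeMapper wired up in the Spring configuration below (the interface and method names are assumptions):

  import java.nio.ByteBuffer;

  public interface TypeMapper {
      byte[] toBytes(long value);
      byte[] toBytes(double value);
      long toLong(byte[] bytes);
      double toDouble(byte[] bytes);
  }

  public class DefaultTypeMapper implements TypeMapper {
      public byte[] toBytes(long value) {
          return ByteBuffer.allocate(8).putLong(value).array();
      }
      public byte[] toBytes(double value) {
          return ByteBuffer.allocate(8).putDouble(value).array();
      }
      public long toLong(byte[] bytes) {
          return ByteBuffer.wrap(bytes).getLong();
      }
      public double toDouble(byte[] bytes) {
          return ByteBuffer.wrap(bytes).getDouble();
      }
  }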


The DAO is based on the standard concept of a genericized DAO, of which many versions are floating around.

The initial version of the DAO with basic CRUD functionality is shown below:

public class CassandraDao<T> {
  public CassandraDao(Class<T> clazz, CassandraClient client, TypeMapper mapper)
  public T get(String key)
  public void insert(T entity)
  public T getSuperColumn(String key, byte[] superColumnName)
  public List<T> getSuperColumns(String key, List<byte[]> superColumnNames)
  public void insertSuperColumn(String key, T entity)
  public void insertSuperColumns(String key, List<T> entities)
}

Of course more complex batch and range operations that reflect advanced Cassandra API methods are needed.

Usage Sample

  import java.util.Date;
  import java.util.List;
  import org.springframework.context.ApplicationContext;
  import org.springframework.context.support.ClassPathXmlApplicationContext;
  import com.google.common.collect.ImmutableList;

  // initialization
  ApplicationContext context = new ClassPathXmlApplicationContext("config.xml");
  CassandraDao<Ship> shipDao = (CassandraDao<Ship>) context.getBean("shipDao");
  CassandraDao<ShipLocation> shipLocationDao =
      (CassandraDao<ShipLocation>) context.getBean("shipLocationDao");
  TypeMapper typeMapper = (DefaultTypeMapper) context.getBean("typeMapper");

  // get ship
  Ship ship = shipDao.get("1975");

  // insert ship
  Ship newShip = new Ship();
  newShip.setMmsi(1975); // note: row key - framework insert() converts to required String
  shipDao.insert(newShip);

  // get ship location (super column)
  byte[] superColumn = typeMapper.toBytes(1283116367653L);
  ShipLocation location = shipLocationDao.getSuperColumn("1975", superColumn);

  // get ship locations (super columns)
  ImmutableList<byte[]> superColumns =
      ImmutableList.of(typeMapper.toBytes(1283116367653L)); // Until Java 7, Google rocks!
  List<ShipLocation> locations = shipLocationDao.getSuperColumns("1975", superColumns);

  // insert ship location (super column)
  ShipLocation newLocation = new ShipLocation();
  newLocation.setTimestamp(new Date());
  shipLocationDao.insertSuperColumn("1975", newLocation);

Java Entity Beans


@Entity( keyspace="Marine", columnFamily="Ship")
public class Ship {
  private Integer mmsi;
  private String name;
  private Integer length;
  private Integer width;

  @Column(name = "mmsi")
  public Integer getMmsi() {return this.mmsi;}
  public void setMmsi(Integer mmsi) {this.mmsi= mmsi;}

  @Column(name = "name")
  public String getName() { return name; }
  public void setName(String name) { = name; }


@Entity( keyspace="Marine", columnFamily="ShipLocation")
public class ShipLocation {
  private Integer mmsi;
  private Date timestamp;
  private Double lat;
  private Double lon;

  @Column(name = "mmsi")
  public Integer getMmsi() {return this.mmsi;}
  public void setMmsi(Integer mmsi) {this.mmsi= mmsi;}

  @Column(name = "timestamp")
  public Date getTimestamp() {return this.timestamp;}
  public void setTimestamp(Date timestamp) {this.timestamp = msgTimestamp;}

  @Column(name = "lat")
  public Double getLat() {return;}
  public void setLat(Double lat) { = lat;}

  @Column(name = "lon")
  public Double getLon() {return this.lon;}
  public void setLon(Double lon) {this.lon = lon;}

Spring Configuration

 <bean id="propertyConfigurer">
   <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
   <property name="location" value="</value>
 <bean id="shipDao" class="com.andre.cassandra.dao.CassandraDao" scope="prototype" >
   <constructor-arg value="" />
   <constructor-arg ref="cassandraClient" />
   <constructor-arg ref="typeMapper" />
 <bean id="shipLocationDao" scope="prototype" >
   <constructor-arg value="" />
   <constructor-arg ref="cassandraClient" />
   <constructor-arg ref="typeMapper" />

<bean id="cassandraClient" class="com.andre.cassandra.util.CassandraClient" scope="prototype" >
  <constructor-arg value="${}" />
  <constructor-arg value="${cassandra.port}" />

<bean id="typeMapper" class="com.andre.cassandra.util.DefaultTypeMapper" scope="prototype" />

Annotation Documentation


Annotation  Class/Field  Description
Entity      Class        Defines the keyspace and column family
Column      Field        Column name
Key         Field        Row key

Entity Attributes

Attribute     Type    Description
keyspace      String  Keyspace
columnFamily  String  Column family

Initial Cassandra Impressions

August 30, 2010

Recently I’ve been doing some intensive work with the popular NoSQL framework Cassandra. In this post I describe some of my first impressions of working with Cassandra Thrift Java stubs and some comparisons with Voldemort – another NoSQL framework that I am familiar with.

Cassandra Issues

Data Model

The Cassandra data model – with its columns and super columns – is radically different from the traditional SQL data model. Most Cassandra descriptions are example-based, and though rich in detail they lack generality. While examples are necessary, they are not sufficient. What is missing is some formalism to capture the essential qualities of the model which no example fully captures. I recently came across a very good article about the “NoSQL data model” from a “relational purist” that strongly resonates with me – see The Cassandra Data Model – highly recommended!

One day soon, I’ll try to write a new post summarizing some of my thoughts on NoSQL data modeling. In short, as the field matures there is going to be a need to create some types of standards out of the wide variety of implementations. There are distinct NoSQL categories: key/value stores, column-oriented stores, document-oriented stores –  but even within these categories there is much unnecessary overlap.

Regarding Cassandra columns, here’s a bit of clarification that may help. There are essentially two kinds of column families:

  • Those that have a fixed finite set of columns. The columns represent the attributes of single objects. Each row has the same number of columns, and the column names are fixed metadata.
  • Those that have an infinite set of columns that represent a collection for the key. The confusing part is that the column name is not really metadata – it is actually a value in its own right!

Thrift Client Limitations

Let me be frank – working with Cassandra’s Java Thrift client is a real pain. In part this is due to the auto-generated, cross-platform nature of the beast, but there are many pain points that reflect accidental rather than inherent complexity. As Cassandra/Thrift matures, I hope more attention will be paid to ameliorating the life of poor programmers.

No class hierarchy for Thrift exceptions

Not deriving your exceptions from a base class is truly a disappointment. Interestingly, neither does Google ProtoBuf! The developer is forced to either catch up to five exceptions for each call or resort to the ugly catch Exception workaround. How much nicer it would have been to catch one Thrift base exception!

For example, just look at all the exceptions thrown by the get method of Cassandra.client!

  • org.apache.cassandra.thrift.InvalidRequestException
  • org.apache.cassandra.thrift.UnavailableException
  • org.apache.cassandra.thrift.TimedOutException
  • org.apache.thrift.TException
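A sketch of what a single 0.6-era call ends up looking like; client is the Thrift-generated Cassandra.Client, and get additionally throws NotFoundException, which brings the count to five:

  try {
      ColumnOrSuperColumn result =
          client.get(keyspace, rowKey, columnPath, ConsistencyLevel.QUORUM);
  } catch (InvalidRequestException e) {
      // malformed request
  } catch (NotFoundException e) {
      // no such column - yet another checked exception on this one call
  } catch (UnavailableException e) {
      // not enough live replicas to satisfy the consistency level
  } catch (TimedOutException e) {
      // a replica did not answer in time
  } catch (TException e) {
      // low-level Thrift/transport failure
  }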

No class hierarchy for Column and SuperColumn

The core Thrift concepts Column and SuperColumn lack a base class for “implementation” reasons due to the “cross-platform” limitations of Thrift. Instead there is a ColumnOrSuperColumn class that encapsulates return results where either a Column or a SuperColumn could be returned. For example, see get_slice. This leads to horrible, non-OO, onerous and problematic switch statements – if isSetColumn() is true then call getColumn(), or if isSetSuperColumn() is true then call getSuperColumn(). Aargh!
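A sketch of the switch logic described above, iterating over the List<ColumnOrSuperColumn> returned by get_slice (the accessor names are the Thrift-generated Java forms):

  List<ColumnOrSuperColumn> results =
      client.get_slice(keyspace, rowKey, columnParent, predicate, ConsistencyLevel.QUORUM);
  for (ColumnOrSuperColumn cosc : results) {
      if (cosc.isSetColumn()) {
          Column column = cosc.getColumn();
          // handle a plain column
      } else if (cosc.isSetSuperColumn()) {
          SuperColumn superColumn = cosc.getSuperColumn();
          // handle a super column and its sub-columns
      }
  }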


Documentation

Neither Voldemort nor Cassandra provides satisfactory documentation. If you are going to bet your company’s future on one of these products, you definitely have a right to expect better documentation. Interestingly, other open-source NoSQL products such as MongoDB and Riak do have better documentation.

Documentation of Voldemort configuration properties was truly a disaster (at least in version 60.1). Parameters responsible for key system performance or even basic functionality were either cryptically documented or not documented at all. I counted a total of sixty properties. For the majority we were forced to scour the source code to get some basic understanding. Totally unnecessary! Some examples: client.max.threads, client.max.connections.per.node, client.max.queued.requests, enable.redirect.routing, socket.listen.queue.length, nio.parallel.processing.threshold, max.threads, scheduler.threads, etc.

Comparison of Cassandra with Voldemort

On the basic level, both Cassandra and Voldemort are sharded key value stores modeled on Dynamo. Cassandra can be regarded as a superset in that it also provides a data model on top of the base K/V store.

Some comparison points with Voldemort:

  • Cluster node failover
  • Quorum policies
  • Read or write optimized?
  • Can nodes be added to the cluster dynamically?
  • Pluggable store engines: Voldemort supports pluggable engines, Cassandra does not.
  • Dynamically adding column families
  • Hinted Hand-off
  • Read Repair
  • Vector Clocks

Cluster Node Failover

A Voldemort client can specify one or more cluster nodes to connect to. The first node that the client connects to will return to the client a list of all nodes. The client stubs will then account for failover and load balancing. In fact, you can plug in your custom strategies. The third-party Cassandra Java client Hector claims to support node failover.

Read/Write Optimization

Read or write optimized? Cassandra is write-optimized whereas Voldemort reads are faster. Cassandra uses a journaling and compacting paradigm. Writes are nearly instantaneous in that they simply append an entry to the current log file. Reads are more expensive since they potentially have to look at more than one SSTable file to find the latest version of a key. If you are lucky you will find it cached in memory – otherwise one or more disk accesses will have to be performed. In a way the comparison is not truly apples-to-apples since Voldemort is simply storing blobs while Cassandra has to deal with its accompanying data model overhead. However, it is curious to see how two basically K/V products have such different performance profiles on this vital issue.

Pluggable store engines

Voldemort supports pluggable engines, Cassandra does not. This is a big plus for Voldemort! Out of the box, Voldemort already provides Berkeley DB and MySQL engines and allows you to easily plug in your own custom engine. Being able to implement your own backing store is an important concern for many shops. In fact, on my recent project for a large telecom this was a crucial, deal-breaking feature that played a large role in selecting Voldemort. We had in-house MySQL expertise and spent inordinate resources writing our own “highly optimized” MySQL engine. By the way, Riak also has pluggable engines – seven in total!

Dynamically adding column families

Neither Voldemort nor Cassandra supports this (though Cassandra should soon). In order to add a new “database” or “table” you need to update the configuration file and recycle all servers. Obviously this is not a viable production strategy. Riak does support this with buckets.

Quorum Policies

Quorum policies – Voldemort has one, whereas Cassandra has several consistency levels:

  • Zero – Ensure nothing. A write happens asynchronously in the background
  • Any – Ensure that the write has been written to at least 1 node
  • One – Ensure that the write has been written to at least 1 replica’s commit log and memory table before responding to the client
  • Quorum – Ensure that the write has been written to N / 2 + 1 replicas before responding to the client
  • DCQuorum – As above, but takes into account the rack-aware placement strategy
  • All – Ensure that the write is written to all N replicas before responding to the client

Hinted Hand-off

Cassandra and Voldemort both support hinted handoff. Riak also has support.


Cassandra Hinted Handoff

If a node which should receive a write is down, Cassandra will write a hint to a live replica node indicating that the write needs to be replayed to the unavailable node. If no live replica nodes exist for this key, and ConsistencyLevel.ANY was specified, the coordinating node will write the hint locally. Cassandra uses hinted handoff as a way to (1) reduce the time required for a temporarily failed node to become consistent again with live ones and (2) provide extreme write availability when consistency is not required.


Hinted Handoff is extremely useful when dealing with a multiple datacenter environment. However, work remains to make this feasible.


Riak Hinted Handoff

Hinted handoff is a technique for dealing with node failure in the Riak cluster in which neighboring nodes temporarily take over storage operations for the failed node. When the failed node returns to the cluster, the updates received by the neighboring nodes are handed off to it.

Hinted handoff allows Riak to ensure database availability. When a node fails, Riak can continue to handle requests as if the node were still there

Read Repair

Cassandra Read Repair

Read repair means that when a query is made against a given key, we perform that query against all the replicas of the key. If a low ConsistencyLevel was specified, this is done in the background after returning the data from the closest replica to the client; otherwise, it is done before returning the data.

This means that in almost all cases, at most the first instance of a query will return old data.


Voldemort Read Repair

There are several methods for reaching consistency with different guarantees and performance tradeoffs.

Two-Phase Commit — This is a locking protocol that involves two rounds of co-ordination between machines. It is perfectly consistent, but not failure tolerant, and very slow.

Paxos-style consensus — This is a protocol for coming to agreement on a value that is more failure tolerant.

Read-repair — The first two approaches prevent permanent inconsistency. This approach involves writing all inconsistent versions, and then at read-time detecting the conflict, and resolving the problems. This involves little co-ordination and is completely failure tolerant, but may require additional application logic to resolve conflicts.


Riak Read Repair

Read repair occurs when a successful read occurs – that is, the quorum was met – but not all replicas from which the object was requested agreed on the value. There are two possibilities here for the errant nodes:

  1. The node responded with a “not found” for the object, meaning it doesn’t have a copy.
  2. The node responded with a vector clock that is an ancestor of the vector clock of the successful read.

When this situation occurs, Riak will force the errant nodes to update their object values based on the value of the successful read.

Version Conflict Resolution – Vector Clocks


Cassandra departs from the Dynamo paper by omitting vector clocks and moving from partition-based consistent hashing to key ranges, while adding functionality like order-preserving partitioners and range queries.  Source.


Voldemort uses Dynamo-style vector clocks for versioning.


Riak utilizes vector clocks (short: vclock) to handle version control. Since any node in a Riak cluster is able to handle a request, and not all nodes need to participate, data versioning is required to keep track of a current value. When a value is stored in Riak, it is tagged with a vector clock and establishes the initial version. When it is updated, the client provides the vector clock of the object being modified so that this vector clock can be extended to reflect the update. Riak can then compare vector clocks on different versions of the object and determine certain attributes of the data.


Synchronous/asynchronous writes

For Voldemort, inserts of a key’s replicas are synchronous. Cassandra allows you to choose which policy best suits you. For cross-data center replication, synchronous updates can be extremely slow.


Caching

Cassandra caches data in-memory, periodically flushing to disk. Voldemort does not cache.

Twitter REST API

May 13, 2010

I recently finished a mini-project to calculate the similarity of two Twitter users. Being a REST fan and having implemented a substantial REST service, I’m always eager for an excuse to get my hands dirty with a bit o’ REST. I find it fascinating that no two REST APIs (web APIs) are the same, especially in terms of documentation.

As we all know, the term “REST” refers to a style and not a standard, so it is not surprising that actual implementations vary quite a bit. This lack of specificity is most pronounced on the client side. With SOAP, client access is almost always mediated by platform-specific client stubs automatically generated from the WSDL contract. With REST there is no such standard contract (despite WADL, which has not become a dominant force), and therefore no fully automated way to build client stubs. You can partially automate the process if you define your payload as an XML schema, but this still leaves the other vital part of specifying the resource model. Section 1.3 of the JAX-RS specification explicitly states: “The specification will not define client-side APIs.” See my comments on the current non-standard situation of client stubs in some JAX-RS providers.

API Data Formats

In the absence of standard client stub generation mechanisms, documentation plays an increasingly important role. The fidelity to genuine REST precepts and the terminology used to describe resources and their HTTP methods becomes of prime importance to effective client usage.

How do we unambiguously describe  the different resources and methods? The number and types of payload formats influence the decision. Do we support only one format, JSON or XML? If XML, do we have a schema? If so, what schema do we use? XSD or RelaxNG? Multiple XML formats such as Atom, RSS and/or proprietary XML? By the way, the former two do not have a defined schema. Do we support multiple formats? If so, do we use prescribed REST content negotiation?

Considering the strong presence of the Twitter REST API and my short albeit intense usage of it, I am a bit reluctant to “criticize”. So upfront I issue a disclaimer that my knowledge is partial and subject to change. One very interesting fact I recently read in the book Building Social Web Applications is that over 80% of Twitter’s usage comes from its API and not from its web site! Caramba, that’s quite an ecosystem that has evolved around Twitter! All the more reason to invest in API contract specification and documentation.

General API Documentation

Professional, high-quality API documentation is obviously a vital need, especially as API usage increases. With an internet consumer-facing API, clients can access resources using any language of choice, so it is important to be as precise as possible. Having worked with many different APIs and services, I have come to appreciate the importance of good documentation. I regard documentation not as a separate add-on to the executable code, but rather as an integral part of the experience. It is a first-order concern.

The metaphor I would suggest is DDD – Documentation Driven Development. In fact, on my last big REST project where I took on the responsibility of documenting the API, I soon found it more efficient to update the documentation as soon as any API change was made. This was especially true when data formats were modified! The document format was Atlassian Wiki which unfortunately didn’t allow for global cross-page changes, so I had to keep code and its corresponding documentation closely synchronized; otherwise the documentation would’ve quickly diverged and become unmanageable.

Deductive and Inductive Documentation

In general, documentation can be divided into deductive and inductive. Deductive documentation is based on the top-down approach. If you have an XML schema all the better – you can use this as your basic axiom, and derive all further documentation fragments in progressive refinement steps. Even in the absence of a schema, you can still leverage this principle.

Inductive documentation is solely based on examples – there is no general definition, and it is up to the client to infer commonalities. You practically have to do parallel diffs on many different XML examples to separate the common from the specific.

Theorem: all good documentation must have examples but it cannot rely only on examples! In other words, examples are necessary but not sufficient.

Google API as a Model

Google has done a great job in extracting a common subset of its many public APIs into Google Data Protocol. All Google APIs share this definition: common data formats, common errors mechanisms, header specification, collection counts, etc. Google has standardized on AtomPub and JSON as its two primary data formats (with some RSS too). It does an excellent job on having an unambiguous and clear specification of its entire protocol across all its API instances.

Take the YouTube API for example. Although neither Google nor Atom use an XML XSD schema, the precise details of the format are clearly described. Atom leverages the concept of extensions where you can insert external namespaces (vocabularies) into the base Atom XML. Google Atom does not have to reinvent the wheel for cross-cutting extensions, and can reuse common XML vocabularies in a standard way. See the Data API Protocol Page – XML element definitions page for details. Some namespaces are openSearch (Open Search Schema) for collection counts and paging, media for MRSS (yes, you can insert RSS into Atom – cool!).

Twitter Data Format Documentation

The Twitter General API documentation page and the FAQ do not do a good job of giving a high-level description of the Twitter data formats for requests and responses. There is only one basic mention of this on the Things Every Developer Should Know page:

The API presently supports the following data formats: XML, JSON, and the RSS and Atom syndication formats, with some methods only accepting a subset of these formats.

No clear indication is given as to which kinds of resources accept which formats. Common sense would lead us to believe that JSON and proprietary XML are isomorphic and supported for both requests and responses. Being feed formats, RSS and Atom would be supported only for responses. It is unfortunate that this is not explicitly stated anywhere.

More disturbing is the lack of an XML schema or any attempt to formally define the XML vocabulary! It seems that only XML examples are provided for each resource. Googling for “Twitter API XSD” confirms my suspicion in that it returns many mentions of “inferring XML schemas” from instances – a scary proposition indeed! What is the cardinality of XML elements? Which ones are repeatable or not? What about the data types? The DRY (don’t repeat yourself) principle is violated since you have the same XML example redundantly repeated on many pages. You can maybe get away with this for a small API, but for a widely used API such as Twitter I would have thought Twitter would have invested more resources in API contract specification.

Twitter Content Negotiation

Another concern is the way Twitter handles content negotiation. Instead of using the REST convention/standard of the Accept header or a content query parameter, Twitter appends the format type to the URL (.json, .xml), which in effect creates a new resource. For example, Google GData uses a query parameter such as alt=json or alt=rss to indicate the data format.

Twitter API Versioning

This lack of explicit contract specification leads to problems regarding versioning. Versioning is one of those very difficult API problems that has no ideal satisfactory answer. Instead, there are partial solutions depending on the use case. Without a contract, it is difficult to even know what new changes have been implemented.

Let’s say a change is made to an XML snippet that is shared across many resource representations. Twitter would have to make changes to each resource documentation page. Even worse, it would then have to have some mechanism to inform clients of the contract change. Having a schema, or at least some common way of describing the format, would be a much better idea. The Twitter FAQ weakly states that clients have to proactively monitor the following:

Google uses HTTP headers to indicate API versions as well as standard namespace naming conventions.

API Client Packages

Typically REST APIs will leave it to third parties to provide language-specific client stubs. This is understandable because of the large number of languages out there – it would be prohibitive for a small(er) company  to implement and test all these packages! However the downside is that these packages are by definition non-standard (caveat emptor), and it is an open question as to how reliably they implement the current service definition.

Documentation varies wildly. Firstly, it is not always clear which API to use if several choices exist. Being mostly a Java guy, I focus here on Java clients. For example, Twitter has four Java clients. You most often find minimal Javadoc with no further explanation. API coverage is incomplete – features are missing. For example, for the user-timeline “method” (sidebar: misuse of the REST term method!), Twitter4J supports the count query parameter whereas JTwitter apparently does not. The problem here is one of client fidelity to the API. When the Twitter API changes, what is the guarantee that the “mom and pop” client will sync up?

Speaking of the devil, a rather interesting development happened just the other day with Amazon’s AWS Java client. On March 22, 2010 Amazon announced the rollout of a brand new AWS SDK for Java! Until then, they too had depended on third parties – the venerable JetS3t (which only supports S3 and CloudFront) and typica. Undoubtedly this was due to client pressure for precisely those reasons enumerated above! See Mr. JetS3t’s comment on the news.


One of the wonders of the human condition is how people manage to work around major obstacles when there is an overriding need. The plethora of Twitter clients in the real world is truly a testimony to the ubiquity of the Twitter API.

However, there is still considerable room for improvement to remove accidental complexity and maximize client productivity. Twitter is still a young company and there is an obvious maturation process ahead.

After all, Google has many more resources to fine-tune the API experience. But if I were the chief Twitter API architect, I would certainly take a long, hard look at the strategic direction of the API. Obviously there is major momentum in this direction, especially with the June deprecation of Basic Auth in favor of OAuth and the realignment of the REST and Search APIs. There is no reason to blindly mimic someone else’s API documentation style (think branding), but there is even less reason not to learn from others (prior knowledge) and to minimize client cognitive overhead.

VTest Testing Framework

April 12, 2010

In order to test basic Voldemort API methods under specified realistic load scenarios, I leveraged the “VTest” framework that I had previously written for load testing. VTest is a light-weight Spring-based framework that separates the execution strategy from the business tasks and provides cross-cutting features such as statistics gathering and reporting.

The main features of VTest are:

  • Declarative workflow-based testing framework based on Spring
  • Separation of concerns: framework, executor, job, task, key and value generation strategies
  • Implementations of these concerns are all pluggable and configurable via Spring dependency injection and bean wiring
  • Framework handles cross-cutting concerns: error handling, call statistics, result reporting, and result persistence
  • Conceptually inspired by java.util.concurrent’s Executor
  • Executors: SequentialExecutor, FixedThreadPoolExecutor, ScheduledThreadPoolExecutor
  • Executor invokes a job or an individual task
  • A task is a unit of work – a job is a collection of tasks
  • Tasks are implemented as Java classes (see the sketch after this list)
  • Jobs are specified as lists of tasks in Spring XML configuration file
  • VTest configuration acts as a high-level testing DSL (Domain Specific Language)
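The framework classes themselves are not listed in this post; purely as an illustration, the task contract could be as small as this (the name and signature are hypothetical):

  // hypothetical shape of a VTest task
  public interface Task {
      String getName();                 // e.g. "PutCreate", "Get", "PutUpdate", "Delete"
      void execute() throws Exception;  // one request against the store under test
  }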

Sample Result

Here is a sample output for a CRUD job that puts one million key/value pairs, gets them, updates them and finally deletes them. Each task is executed for N requests – N being one million – with a thread pool of 200. The pool acts as a Leaky Bucket (thanks to Joe for this handy reference). The job is executed for five iterations and both the details of each individual run and the aggregated result are displayed.

Description of columns:

  • Req/Sec – requests per second or throughput
  • Ratio – the fraction of total time for the task. The ratio is an inverse of the throughput – the higher the ratio, the lower the throughput.
  • The five % columns represent standard latency percentiles. For example, for the first PutCreate the 99th percentile value of 384 means that 99% of the requests took 384 milliseconds or less.
  • Max – maximum latency. It is instructive to see that for large request sets, the 99.9 percentile doesn’t accurately portray the slowest requests. Notice that for the first PutCreate the Max is over five seconds whereas the 99.9 percentile is only 610 milliseconds. There’s a lot going on in this 0.01% of requests! In fact, Vogels makes the point that Amazon doesn’t focus so much on averages but on reducing these extreme “outliers”.
  • Errors – number of exceptions thrown by the server. There is an example in the third PutUpdate.
  • Fails – number of failures. A failure is when the server does not throw an exception but the business logic deems the result incorrect. For example, if the retrieved value does not match its expected value, a failure is noted. Observe that there are 28,932 failures for the third Get – a rather worrisome occurrence.
  • StdDev – standard deviation

==== DETAIL STATUS ============

Test         Req/Sec    50%    90%    99%  99.5%  99.9%    Max  Errors  Fails  StdDev
PutCreate       9921      7     29    384    454    610   5022       0      0   61.31
PutCreate       9790      7     31    358    427    516    707       0      0   55.23
PutCreate       8727      7     32    398    457    558    980       0      0   63.98
PutCreate      14354      7     26    122    213    375    613       0      0   27.51
PutCreate       8862      7     31    402    461    577    876       0      0   63.65
Total           9639      7     30    376    442    547   5022       0      0   58.03  

Test         Req/Sec    50%    90%    99%  99.5%  99.9%    Max  Errors  Fails  StdDev
Get            24364      6     10     78     88    114    440       0      0   11.35
Get            23568      6     11     81     89    159    320       0      0   12.31
Get            22769      7     11     81     89    109    381       0  28932   11.93
Get            23174      7     10     80     87     99    372       0      0   11.78
Get            22919      7     10     80     89    216    369       0      0   13.33
Total          23264      7     10     80     88    110    440       0  28932   12.15  

Test         Req/Sec    50%    90%    99%  99.5%  99.9%    Max  Errors  Fails  StdDev
PutUpdate       6555     11     32    554    943   1115   2272       0      0  101.49
PutUpdate       6412     11     32    574    900   1083   2040       0      0  101.99
PutUpdate       2945      3     10   4007   4009   4020   6010       1      0  494.14
PutUpdate       6365     11     35    537    746   1101   2118       0      0   97.55
PutUpdate       6634     11     32    537    853   1095   1293       0      0   98.18
Total           5668     10     31    554    978   4008   6010       1      0  197.87  

Count  Exception
1      class  

Test         Req/Sec    50%    90%    99%  99.5%  99.9%    Max  Errors  Fails  StdDev
Delete          6888     17     46    266    342    442    860       0      0   44.37
Delete          7649     17     43    176    263    395    619       0      0   34.11
Delete          8156     17     43    133    153    244    423       0   8544   25.03
Delete          7539     17     44    180    276    447    759       0      0   36.53
Delete          7457     17     43    218    285    420    714       0      0   38.02
Total           7494     17     44    203    280    410    860       0   8544   36.44  

=== SUMMARY STATUS ============
Test         Req/Sec  Ratio    50%    90%    99%  99.5%  99.9%    Max  Errors  Fails  StdDev
DeleteTable   307456  0.01       0      0      0      0      0      2       0      0    0.01
StoreCreate     9639  0.23       7     30    376    442    547   5022       0      0   58.03
Retrieve       23264  0.09       7     10     80     88    110    440       0  28932   12.15
StoreUpdate     5668  0.38      10     31    554    978   4008   6010       1      0  197.87
Delete          7494  0.29      17     44    203    280    410    860       0   8544   36.44
Total                                                                       1  37476         

Count  Exception

Config Parameters:
  requests           : 1000000
  threadPoolSize     : 200
  valueSize          : 1000

Sample Chart

Since call statistics are persisted in a structured XML file, the results can be post-processed and charts can be generated. The example below compares the throughput for four different record sizes: 1k, 2k, 3k and 5k. It is implemented using the popular open-source JFreeChart package.

VTest Job Configuration File

The jobs and tasks are defined and configured in a standard Spring configuration file. For ease of use, the dynamically varying properties are externalized in a separate properties file.

  <bean id="propertyConfigurer"
    <property name="locations" value="" />
    <property name="systemPropertiesMode" value="2" />

<!-- ** Jobs/Tasks ************************ -->

  <util:list id="crud.job"  >
    <ref bean="putCreate.task" />
    <ref bean="get.task" />
    <ref bean="putUpdate.task" />
    <ref bean="delete.task" />

  <bean id="putCreate.task" class="com.amm.vtest.tasks.voldemort.PutTask" scope="prototype" >
    <constructor-arg ref="taskConfig" />
    <constructor-arg value="PutCreate" />

  <bean id="putUpdate.task" class="com.amm.vtest.tasks.voldemort.PutTask" scope="prototype" >
    <constructor-arg ref="taskConfig" />
    <constructor-arg value="PutUpdate" />

  <bean id="get.task" class="com.amm.vtest.tasks.voldemort.GetTask" scope="prototype" >
    <constructor-arg ref="taskConfig" />

  <bean id="delete.task" class="com.amm.vtest.tasks.voldemort.DeleteTask" scope="prototype" >
    <constructor-arg ref="taskConfig" />

  <bean id="taskConfig" class="com.amm.vtest.tasks.voldemort.VoldemortTaskConfig" scope="prototype" >
    <constructor-arg value="${}" />
    <constructor-arg value="${cfg.urls}" />
    <constructor-arg value="${cfg.clientConfigFile}" />
    <property name="valueSize"      value="${cfg.valueSize}" />
    <property name="valueGenerator" ref="valueGenerator" />
    <property name="keyGenerator"   ref="keyGenerator" />
    <property name="checkValue"     value="${cfg.checkRetrieveValue}" />

<!-- ** VTest **************** -->

  <bean id="vtestProcessor"
        class="com.amm.vtest.VTestProcessor" scope="prototype">
    <constructor-arg ref="executor" />
    <constructor-arg ref="callStatsReporter" />
    <property name="warmup"          value="${cfg.warmup}" />
    <property name="logDetails"      value="true" />
    <property name="logDetailsAsXml" value="true" />

  <bean id="callStatsReporter"
        class="" scope="prototype">
    <property name="properties" ref="configProperties" />

  <util:map id="configProperties">
    <entry key="requests" value="${cfg.requests}" />
    <entry key="threadPoolSize" value="${cfg.threadPoolSize}" />
    <entry key="valueSize" value="${cfg.valueSize}" />
  </util:map >

<!-- ** Executors **************** -->

  <alias alias="executor" name="fixedThreadPool.executor" />

  <bean id="sequential.executor"
        class="com.amm.vtest.SequentialExecutor" scope="prototype">
    <property name="numRequests" value="${cfg.requests}" />

  <bean id="fixedThreadPool.executor"
        class="com.amm.vtest.FixedThreadPoolExecutor" scope="prototype">
    <property name="numRequests"     value="${cfg.requests}" />
    <property name="threadPoolSize"  value="${cfg.threadPoolSize}" />
    <property name="logModulo"       value="${cfg.logModulo}" />


VTest Properties
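A representative sketch of the cfg.* properties referenced by the Spring configuration above; the requests, thread pool and value size values match the run shown earlier, while the store URL and file names are illustrative:

  cfg.requests=1000000
  cfg.threadPoolSize=200
  cfg.valueSize=1000
  cfg.urls=tcp://localhost:6666
  cfg.clientConfigFile=voldemort-client.properties
  cfg.checkRetrieveValue=false
  cfg.warmup=false
  cfg.logModulo=1000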


Run Script

. common.env



while getopts $opts opt
do
  case $opt in
    r) requests=$OPTARG ;;
    t) threadPoolSize=$OPTARG ;;
    v) valueSize=$OPTARG ;;
    i) iterations=$OPTARG ;;
    \?) echo $USAGE " Error" ; exit 1 ;;
  esac
done
shift `expr $OPTIND - 1`
if [ $# -gt 0 ] ; then
  job=$1
fi

tstamp=`date "+%F_%H-%M"` ; logdir=logs-$job-$tstamp ; mkdir $logdir

PROPS="$PROPS -Dcfg.requests=$requests"
PROPS="$PROPS -Dcfg.threadPoolSize=$threadPoolSize"
PROPS="$PROPS -Dcfg.valueSize=$valueSize"

time -p java $PROPS -cp $CPATH $PGM $* \
  --config $CONFIG --iterations $iterations --job $job \
  | tee log.txt

cp -p log.txt log-*.xml times-*.txt *.log $logdir

XML Logging Output

The call statistics for each task run are stored in an XML file for future reference and possible post-processing, e.g. charts, database persistence, or cross-run aggregation. JAXB and an XSD schema are used to process the XML.

    <calls failures="0" errors="0" all="1000000"/>

Comparison of JAX-RS Client Proxies

November 18, 2009

Though JAX-RS has done wonders for standardizing server-side implementations of Java REST services, there is definitely a need for some client-side standardization. Considering that for every REST implementation there are orders of magnitude more clients, it is rather puzzling that more movement hasn’t occurred in this space.

Section 1.3 Non Goals of the JAX-RS 1.0 (Sep. 2008) spec states:

Client APIs The specification will not define client-side APIs. Other specifications are expected to provide such functionality.

Nevertheless, vendors have not been idle and have used their particular client-side frameworks as a value-add selling point. In this blog, I report some first impressions on CXF and RESTEasy‘s client APIs. Next report will be on the Jersey and Restlet versions.

Perhaps the reluctance to tackle a standard client API has something to do with the complexity associated with SOAP client proxies, but in the absence of REST client proxies, every single customer has to reinvent the wheel and implement rather mundane low-level plumbing with httpclient. A chore best avoided.

Both CXF and RESTEasy support a nearly equivalent client proxy API that mirrors the server-side annotated resource implementation. They differ in two ways:

  • Bootstrap proxy creation
  • Exceptions thrown for errors

“Naturally”, each provider has a different way to create a proxy. Both are rather simple, and since they are a one-time action, their impact on the rest of the client code is minimal.

The “happy path” behavior for both implementations is the same – differences arise when exceptions are encountered. RESTEasy uses its own proprietary org.jboss.resteasy.client.ClientResponseFailure exception while CXF manages to use the standard JAX-RS exception javax.ws.rs.WebApplicationException. Therefore, round one goes to CXF since we can write all our client tests using standard JAX-RS packages. In addition, this enables us to leverage these same tests for testing the server implementation – an absolute win-win.

Note that the test examples below use the outstanding testng.

Proxy Interface

Here’s the client proxy VideoServiceProxy, which is the same for both CXF and RESTEasy. Very nice, guys!

public interface VideoServiceProxy {

// JAX-RS resource annotations (@Path, @GET, @POST, @PUT, @DELETE) are omitted from this listing

public GenreList getGenres();

public Genre getGenre(@PathParam("id") String id);

public Response createGenre(Genre genre);

public void updateGenre(@PathParam("id") String id, Genre genre);

public void deleteGenre(@PathParam("id") String id);

}


Proxy Bootstrapping

The client side bootstrapping for the proxy is shown below.

CXF Bootstrapping

import org.apache.cxf.jaxrs.client.JAXRSClientFactory;
import org.apache.cxf.jaxrs.client.JAXRSClientFactoryBean;

public class GenreTest {
    static String url = "http://localhost/vapp/vservice/genre";
    static VideoServiceProxy rservice ;

    public void initSuite() {
        rservice = JAXRSClientFactory.create(url, VideoServiceProxy.class);
    }
}
RESTEasy Bootstrapping

import org.apache.commons.httpclient.HttpClient;
import org.jboss.resteasy.client.ProxyFactory;
import org.jboss.resteasy.plugins.providers.RegisterBuiltin;
import org.jboss.resteasy.spi.ResteasyProviderFactory;

public class GenreTest {
    private static VideoServiceProxy rservice;
    private static String url = "http://localhost/vapp/vservice/genre";

    static public void beforeClass() {
        // Register RESTEasy's built-in providers before creating the proxy
        // (this is what the RegisterBuiltin/ResteasyProviderFactory imports are for)
        RegisterBuiltin.register(ResteasyProviderFactory.getInstance());
        rservice = ProxyFactory.create(VideoServiceProxy.class, url, new HttpClient());
    }
}

As you can see, the CXF version is slightly less verbose and simpler in that it has fewer imports.

Exception Handling

Happy Path – No Errors

For the happy path, the test code is the same for both CXF and RESTEasy.

public void createGenre()  {
    Genre genre = new Genre();
    genre.setDescription("All animals");
    Response response = rservice.createGenre(genre);
    int status = response.getStatus();
    String createdId = ResponseUtils.getCreatedId(response); // get ID from standard Location header
}
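
ResponseUtils here is not a JAX-RS class but a small convenience helper. A hypothetical sketch of what getCreatedId could look like, assuming the created resource’s ID is the last path segment of the Location header:

import javax.ws.rs.core.Response;

// Hypothetical sketch of the getCreatedId helper: pull the Location header off
// the JAX-RS Response and return its last path segment as the created ID.
public class ResponseUtils {
    public static String getCreatedId(Response response) {
        Object location = response.getMetadata().getFirst("Location");
        if (location == null) {
            return null; // no Location header - nothing was created
        }
        String value = location.toString();
        return value.substring(value.lastIndexOf('/') + 1);
    }
}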

CXF Unhappy Path – Error

    public void getGenreNonExistent() {
        try {
            Genre genre = rservice.getGenre(nonExistentId);
            Assert.fail("Expected WebApplicationException");
        } catch (WebApplicationException e) {
            Response response = e.getResponse();
            int status = response.getStatus();
            Assert.assertEquals(status, Response.Status.INTERNAL_SERVER_ERROR.getStatusCode()); // TODO: fix 404 not being thrown?
            //Assert.assertEquals(status, Response.Status.NOT_FOUND.getStatusCode());
        }
    }

RESTEasy Unhappy Path – Error

    public void getGenreNonExistent() {
        try {
            Genre obj = rservice.getGenre(nonExistentId);
            Assert.fail("Expected ClientResponseFailure");
        } catch (ClientResponseFailure e) {
            ClientResponse response = e.getResponse();
            Response.Status status = response.getResponseStatus();
            Assert.assertEquals(status, Response.Status.INTERNAL_SERVER_ERROR); // TODO: fix 404 not being thrown
            //Assert.assertEquals(status, Response.Status.NOT_FOUND);
        }
    }

Note that there is a problem with correctly throwing a 404 via WebApplicationException on the server side. Though my current server implementation is CXF, I have also verified that this problem exists for RESTEasy and Jersey. The framework always returns a 500 even though I specify a 404. This is definitely not OK. It’s a TBD for me to investigate further.

    throw new WebApplicationException(Response.Status.NOT_FOUND);
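
For context, here is a minimal sketch of the kind of server-side resource method where that throw lives (the resource path and the in-memory lookup are illustrative, not my actual service):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;

@Path("genre")
public class GenreResource {
    // Illustrative in-memory store standing in for the real DAO
    private final Map<String, Genre> genres = new ConcurrentHashMap<String, Genre>();

    @GET
    @Path("{id}")
    public Genre getGenre(@PathParam("id") String id) {
        Genre genre = genres.get(id);
        if (genre == null) {
            // Intent: return 404 - but as noted above, the frameworks surface a 500 instead
            throw new WebApplicationException(Response.Status.NOT_FOUND);
        }
        return genre;
    }
}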

I definitely plan to check out Jersey and Restlet in more detail, so stay tuned!

Reference: JAX-RS: Java™ API for RESTful Web Services, Version 1.0, September 8, 2008.

Foray into Flex AIR

November 3, 2009

Adobe AIR has really excited me since I’ve been a long-time fan of desktop applications. There is a time and place for web apps as well as for desktop apps. This post describes some of my reflections on a partial port of a Swing application to Flex.

Globesight is an economic modeling package geared toward scenario playing, with a complex visualization metaphor as well as some heavy-duty binary floating-point storage. It was my Swing/UI magnum opus that I crafted a few years back – lots of UI navigation, charting, plugins, XML and binary I/O crunching and transfers. Too bad I implemented this before I came across Spring!

Although I won’t have time to port the whole application from Java/Swing, I want to see how some of the key Java technologies are handled by Flex/AIR.

  • Binary file IO for floating point numbers – I have concerns regarding Flex’s performance
  • Charting – I used the versatile JFreeChart and I expect Flex will excel here
  • Multiple windows, dialog boxes, popups
  • General screen navigation
  • How well do the Java code and classes translate into ActionScript?


Globesight Swing


Globesight Flex


Initial Reaction

Designing the Main Workbench screen and populating it from the XML project definition files was easy enough. I can’t imagine doing that so quickly in Java! So far so good. Reading in the XML is a charm.

I’ve also been able to leverage the elaborate MVC Java class hierarchy representing the domain model and visual components into ActionScript. The package hierarchy and class names match almost one for one. This leads me to wonder: might a Java-to-ActionScript translator help here? I’ve already found some info on the topic. Too bad I can’t simply reuse the Java classes directly from Flex, since Flex doesn’t run on the JVM. I guess I got spoiled using Groovy, which can be seamlessly (and confusingly) mixed with Java.

I’m already running into major hiccups regarding binary I/O. I’ve got two-dimensional arrays stored in binary floating-point format (and some longs and ints in the header), and I see that ActionScript doesn’t have a long or FP data type! There’s Number and ByteArray, and ultimately I assume I’ll be able to read the data, but how efficiently?

Adventures in the Cloud with AWS

June 27, 2009

Recently I’ve been engaged in an exciting AWS venture – mostly S3 but a bit of EC2. It started with a short contract, but I subsequently rode the momentum wave and continued hacking away. The following observations are merely my first impressions – nothing more and nothing less.

Time-Sharing is Back!

Firstly, using AWS services seems to be a throwback to the old time-sharing paradigm where every compute cycle is directly billed to you. Ugh! To qualify, this really only makes a difference depending on who is paying and how high the charges are. If your boss is paying for it, then this might not be a big deal for you, though it might be a big deal for him. It also depends on what service you are using. S3 seems ridiculously cheap – I’ve racked up 16,000 calls that have cost me only $0.20 – yes, that’s 20 centavos! On the other hand, my smallest EC2 instance was running me over a dollar a day, so this was quickly adding up to be a budget buster for me. At that price, I’d rather go buy a book to read.

Testing Cost

This whole pay-per-cycle model also has interesting ramifications for best practices such as continuous integration test cycles. Assuming you’re constantly running an automated build cycle and your rather extensive integration tests are hitting your AWS infrastructure, you could potentially run up quite a bill. It’s a constant recurring cost. Imagine a web site with millions of images that need to be served – what would your tests look like? If you own your own dedicated infrastructure (servers, disks), running tests has no apparent extra cost. With the AWS cloud model, you get billed for anything you do against AWS. It could be OK if you do your math, but it is certainly a brave new world.

Language Client Toolkits

One of the things I’ve noticed is the disparate language-specific toolkits available for AWS. Although I’m mainly a Java guy, I’ve also delved into the Perl (Eric Wagner’s S3:: and Leon Brocard’s Net::Amazon::S3) and Python (boto) libraries. It’s most interesting how the canonical REST/XML model is lossily translated into each different client binding. This can be disconcerting and frustrating if you need to work with multiple toolkits and have access to the full functionality of the underlying AWS service.

Java Toolkit

The “standard” Java client library is James Murty’s JetS3t, which only covers S3 and, most recently, CloudFront – AWS’s new CDN offering. My question is: considering James wrote the O’Reilly book Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB on all the AWS services, why does JetS3t only support S3? Hmm… That means I’ve got to pull in another client library, which detracts from an integrated solution. If I need to access SQS or SimpleDB, I have to use a completely different library, namely typica. It would have been better to have one unified, consistent approach. No comprendo, no capisco.
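
For reference, a minimal sketch of the kind of S3 access I mean with JetS3t (the credentials, bucket, and key names are made up, and error handling is elided):

import org.jets3t.service.S3Service;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.model.S3Bucket;
import org.jets3t.service.model.S3Object;
import org.jets3t.service.security.AWSCredentials;

public class JetS3tExample {
    public static void main(String[] args) throws Exception {
        // Credentials read from the environment; variable names are illustrative
        AWSCredentials credentials = new AWSCredentials(
            System.getenv("AWS_ACCESS_KEY"), System.getenv("AWS_SECRET_KEY"));
        S3Service s3 = new RestS3Service(credentials);

        // Write a small text object, then read it back
        S3Bucket bucket = s3.getBucket("my-test-bucket");
        s3.putObject(bucket, new S3Object("notes/hello.txt", "hello s3"));
        S3Object fetched = s3.getObject(bucket, "notes/hello.txt");
        System.out.println(fetched.getContentLength());
    }
}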

AWS Testing Toolkit?

All this leads me to wonder why Amazon hasn’t released the in-house client libraries they use to test their AWS services. They do test them, don’t they? Obviously those tests have to be written in some language, and they have to be quite extensive. I’m puzzled why these haven’t been made available to the community. They could easily release them “as is” with no guarantees if they want to avoid the cost of officially supporting disparate client bindings. That would certainly be in the “open source” spirit.

Hierarchical Buckets

S3 seems rather primitive to me insofar as it doesn’t support hierarchical buckets. This leads everyone to implement “hacky” virtual directories by using keys such as “mydir/mykey1” to emulate directories. Is it really that hard to implement nested buckets? Since there is obviously a huge need/demand for this, does AWS have a roadmap for it?
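
To make the emulation concrete, here is a hedged sketch of listing one such “virtual directory” with JetS3t, using a key prefix plus a “/” delimiter (the bucket and key names are made up):

import org.jets3t.service.S3Service;
import org.jets3t.service.model.S3Bucket;
import org.jets3t.service.model.S3Object;

// Sketch: "list the directory" mydir/ by asking S3 for keys with that prefix,
// using '/' as the delimiter so deeper "subdirectories" are not expanded.
public class VirtualDirListing {
    public static void listDir(S3Service s3, S3Bucket bucket) throws Exception {
        S3Object[] objects = s3.listObjects(bucket, "mydir/", "/");
        for (S3Object object : objects) {
            System.out.println(object.getKey()); // e.g. mydir/mykey1
        }
    }
}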

REST-ian Batching

Another issue relates to batching, which leads to the more general topic of the difficulty of batching in a REST-ian approach. Say I want to delete 100 objects in a bucket. Wouldn’t it be so much more efficient to issue this in one call instead of 100 calls? This would certainly make AWS more scalable and performant, n’est-ce pas?


All the above comments are preliminary and subject to future revision. I certainly do find the AWS model fascinating, especially the new Public Data Sets, in particular genome databases such as Ensembl Annotated Human Genome Data. There is a real synergy going on here. Keep on trucking, guys!