Vert.x for World Domination


In this article I am going to show how I put Vert.x on my Raspberry Pis and my Mindstorms EV3.

The whole thing is the product of a long night of hacking so expect some updates to this article.

The associated project is on GitHub.


The core of my presentation Vert.x for World Domination was the setup I used to demonstrate the distribution capabilities of Vert.x.

What I wanted to do was a demo using “real life” conditions: no VMs or other tricks. I got the basic idea from the setup Michael uses to demo Hazelcast (see some photos here).

The idea of a portable Raspberry Pi cluster stuck with me and I built my somewhat smaller version.

The Brain

And here it comes, the portable brain:

IMG_2113

Parts list

  • 2 x Raspberry Pi Model B (inside the red lego-thing)
  • 2 x LogiLink WL0084A Wifi adapters
  • TP-Link TL-WR702N (the small white/blue box on the left, a very compact Wifi-router)
  • EasyAcc 5A USB power adapter (the white box on the right; make sure to use the iPad connectors to power the Pis)

Setup

Connect the Pis to the Wifi.

Slap Raspbian on the Pis, install the embedded JDK and put Vert.x on them.

The brain is good to go.

Robby

My fearsome battle robot Robby is up next.

It’s a custom EV3 bot. Go crazy with how you want it to look, but follow my instructions on how to get Vert.x up and running. That part took me a while.

IMG_2115

Parts list

  • The EV3 base package
  • TP-Link TL-WN725N USB-Wifi (Michael reported some issues on his EV3, works fine on mine)
  • A micro SD card

LejOS

With the EV3, Lego provided everything to make it as easy as possible to get a custom OS onto your Mindstorms brick. The LejOS project is one of those: it is aimed at Java developers and provides everything needed to get going.

Simply follow the instructions from their Wiki to get everything set up. It’s that easy.

After connecting it to Wifi you should see something like this on the display:

IMG_2116

Vert.x

Here comes the tricky part. First of all: you will have to use a fat jar, as LejOS doesn’t provide a Bash shell, which renders the provided Vert.x startup scripts useless. Copy the generated jar to your EV3 (ahem, the root password is “”, so just hit enter when asked):

scp <fatjar>.jar root@<ip-of-ev3>:~

Copy the cluster.xml found in your local Vert.x-installation:

scp <vertx-dir>/conf/cluster.xml root@<ip-of-ev3>:~

Afterwards log into the EV3.

There are two issues we will be solving:

  1. Vert.x won’t bind to the Wifi-address using the -cluster-host parameter
  2. Multicast is not working

Get a Vert.x instance running on your local machine, note down the IP-address and log into your EV3:

ssh root@<ip-of-ev3>

Now open the cluster.xml for editing (vi is installed and my weapon of choice).
Disable multicast and enable tcp-ip. Add the address of your local machine (192.168.2.20 in the example).
Uncomment the interfaces block and add the address of your EV3’s Wifi interface (the one you used to log into it).

Afterwards the config should look something like this:


<join>
  <multicast enabled="false">
    <multicast-group>224.2.2.3</multicast-group>
    <multicast-port>54327</multicast-port>
  </multicast>
  <tcp-ip enabled="true">
    <interface>192.168.2.20</interface>
  </tcp-ip>
  <aws enabled="false">
    <access-key>my-access-key</access-key>
    <secret-key>my-secret-key</secret-key>
    <region>us-east-1</region>
  </aws>
</join>
<interfaces enabled="true">
  <interface>192.168.2.101</interface>
</interfaces>

Now we need to modify the start script. Add a new file called run.sh with the following content:

#! /bin/sh
. /etc/default/lejos
export LD_LIBRARY_PATH=${LEJOS_HOME}/libjna/usr/lib/arm-linux-gnueabi/:${LEJOS_HOME}/libjna/usr/lib/jni/
/home/root/lejos/ejre1.7.0_51/bin/java -jar <nameof-fat>.jar -cp . -cluster -cluster-host 192.168.2.101

Replace the IP address with that of your EV3’s Wifi interface (the one you used to log into it) and replace <nameof-fat> with the name of your fat jar.

Do a chmod +x run.sh

You are ready to go.

Use ./run.sh to start Vert.x and wait. It takes around 3 minutes for everything to come up so be patient.

This is the log output I get when running everything (note the final “Succeeded in deploying module”):


root@EV3:~# ./run.sh
Apr 07, 2014 4:53:29 PM org.vertx.java.core.logging.impl.JULLogDelegate info
INFO: Starting clustering...
Apr 07, 2014 4:54:11 PM com.hazelcast.impl.AddressPicker
INFO: Interfaces is enabled, trying to pick one address matching to one of: [192.168.2.101]
Apr 07, 2014 4:54:12 PM com.hazelcast.impl.AddressPicker
INFO: Prefer IPv4 stack is true.
Apr 07, 2014 4:54:13 PM com.hazelcast.impl.AddressPicker
INFO: Picked Address[192.168.2.101]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Apr 07, 2014 4:54:35 PM com.hazelcast.system
INFO: [192.168.2.101]:5701 [dev] Hazelcast Community Edition 2.6.7 (20140210) starting at Address[192.168.2.101]:5701
Apr 07, 2014 4:54:35 PM com.hazelcast.system
INFO: [192.168.2.101]:5701 [dev] Copyright (C) 2008-2013 Hazelcast.com
Apr 07, 2014 4:54:36 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [192.168.2.101]:5701 [dev] Address[192.168.2.101]:5701 is STARTING
Apr 07, 2014 4:54:38 PM com.hazelcast.impl.TcpIpJoiner
INFO: [192.168.2.101]:5701 [dev] Connecting to possible member: Address[192.168.2.20]:5701
Apr 07, 2014 4:54:38 PM com.hazelcast.impl.TcpIpJoiner
INFO: [192.168.2.101]:5701 [dev] Connecting to possible member: Address[192.168.2.20]:5702
Apr 07, 2014 4:54:39 PM com.hazelcast.impl.TcpIpJoiner
INFO: [192.168.2.101]:5701 [dev] Connecting to possible member: Address[192.168.2.20]:5703
Apr 07, 2014 4:54:40 PM com.hazelcast.nio.ConnectionManager
INFO: [192.168.2.101]:5701 [dev] 59397 accepted socket connection from /192.168.2.20:5701
Apr 07, 2014 4:54:40 PM com.hazelcast.nio.ConnectionManager
INFO: [192.168.2.101]:5701 [dev] 44524 accepted socket connection from /192.168.2.20:5702
Apr 07, 2014 4:54:40 PM com.hazelcast.nio.ConnectionManager
INFO: [192.168.2.101]:5701 [dev] 33806 accepted socket connection from /192.168.2.20:5703
Apr 07, 2014 4:54:44 PM com.hazelcast.cluster.ClusterManager
INFO: [192.168.2.101]:5701 [dev]

Members [4] {
Member [192.168.2.20]:5701
Member [192.168.2.20]:5702
Member [192.168.2.20]:5703
Member [192.168.2.101]:5701 this
}

Apr 07, 2014 4:55:20 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [192.168.2.101]:5701 [dev] Address[192.168.2.101]:5701 is STARTED
Apr 07, 2014 4:55:36 PM com.hazelcast.util.HealthMonitor
INFO: [192.168.2.101]:5701 [dev] memory.used=5.6M, memory.free=3.3M, memory.total=8.9M, memory.max=29.0M, memory.used/total=63.02% memory.used/max=19.29% load.process=92.00%, load.system=100.00%, load.systemAverage=250.00% q.packet.size=2, q.processable.size=1, q.processablePriority.size=0, thread.count=19, thread.peakCount=19, q.query.size=0, q.mapLoader.size=0, q.defaultExecutor.size=0, q.asyncExecutor.size=0, q.eventExecutor.size=0, q.mapStoreExecutor.size=0
Apr 07, 2014 4:56:06 PM com.hazelcast.util.HealthMonitor
INFO: [192.168.2.101]:5701 [dev] memory.used=8.0M, memory.free=4.2M, memory.total=12.2M, memory.max=29.0M, memory.used/total=65.18% memory.used/max=27.42% load.process=93.00%, load.system=100.00%, load.systemAverage=281.00% q.packet.size=1, q.processable.size=1, q.processablePriority.size=0, thread.count=21, thread.peakCount=21, q.query.size=0, q.mapLoader.size=0, q.defaultExecutor.size=0, q.asyncExecutor.size=0, q.eventExecutor.size=0, q.mapStoreExecutor.size=0
Apr 07, 2014 4:56:37 PM com.hazelcast.util.HealthMonitor
INFO: [192.168.2.101]:5701 [dev] memory.used=9.9M, memory.free=2.3M, memory.total=12.2M, memory.max=29.0M, memory.used/total=81.41% memory.used/max=34.25% load.process=94.00%, load.system=100.00%, load.systemAverage=288.00% q.packet.size=0, q.processable.size=1, q.processablePriority.size=1, thread.count=21, thread.peakCount=21, q.query.size=0, q.mapLoader.size=0, q.defaultExecutor.size=0, q.asyncExecutor.size=0, q.eventExecutor.size=0, q.mapStoreExecutor.size=0
Apr 07, 2014 4:57:04 PM org.vertx.java.core.logging.impl.JULLogDelegate info
INFO: Succeeded in deploying module

 


Vert.X in IntelliJ IDEA

Well, that was fast. Outdated within minutes of posting 🙂
This post is for vert.x 1.3.1. 

I just wrote an update on how to do this with vert.x 2.0.0.final


I took a look at Disruptor back when it was released. It was an interesting piece of software but I never got beyond playing around. Then came Node.js. I had been fighting with myself for quite a while over using it, but I am kind of biased when it comes to JavaScript… Well, then came vert.x and I finally had no excuse left to get into this single-threaded-thingy-stuff. Playing around with vert.x was quite a fun experience I will write about later. Today I want to show you how to get it running in my favorite IDE.

The Problem

Being polyglot apparently also means abandoning tried and tested deployment strategies. So instead of dumping a JAR/WAR or whatever else into the vert.x container you will have to do some special magic. And this special magic also involves some tinkering with Idea.

Basic Project

The following (very basic) build.gradle gives you a simple vert.x project.

apply plugin:'java'

configurations {
    provided
    provided.extendsFrom(compile)
}

repositories {
    mavenCentral()
    mavenLocal()
    mavenRepo url: "https://repository.apache.org/content/repositories/snapshots/"
    mavenRepo url: "http://source.mysema.com/maven2/releases/"
    mavenRepo url: "http://repo.maven.apache.org/maven2"
}

dependencies {
    compile "org.vert-x:vertx-core:1.3.1.final"
    compile "org.vert-x:vertx-lang-java:1.3.1.final"
    compile "org.vert-x:vertx-platform:1.3.1.final"
}

Import it and add a verticle like the following:

package de.codepitbull.vertx;

import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.deploy.Verticle;

import java.util.Map;

/**
 * @author Jochen Mader
 */
public class HttpVerticle extends Verticle {
    @Override
    public void start() throws Exception {
        vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
            public void handle(HttpServerRequest req) {
                StringBuilder sb = new StringBuilder();
                for (Map.Entry<String, String> header : req.headers().entrySet()) {
                    sb.append(header.getKey()).append(": ").append(header.getValue()).append("\n");
                }
                req.response.putHeader("content-type", "text/plain");
                req.response.end(sb.toString());
            }
        }).listen(8087);
    }
}

And, due to a bug in 1.3.1, you also need to add a langs.properties to your project with the following content:

java=org.vertx.java.deploy.impl.java.JavaVerticleFactory
class=org.vertx.java.deploy.impl.java.JavaVerticleFactory
js=org.vertx.java.deploy.impl.rhino.RhinoVerticleFactory
coffee=org.vertx.java.deploy.impl.rhino.RhinoVerticleFactory
rb=org.vertx.java.deploy.impl.jruby.JRubyVerticleFactory
groovy=org.vertx.groovy.deploy.impl.groovy.GroovyVerticleFactory
py=org.vertx.java.deploy.impl.jython.JythonVerticleFactory
default=org.vertx.java.deploy.impl.java.JavaVerticleFactory

Setup

To get this whole thing running you will need to download the vert.x tar and untar it to your file system. Next select File > New Module and create a new Java module using the wizard. After you are done, choose Open Module Settings and select your newly created module. Go to the Dependencies tab and add the contents of your vert.x installation’s lib directory. Next add the folder containing langs.properties. Almost there.

Shows the dependency screen in idea with all required deps added.

Start Config

Now that we got our little dummy project up and running we need to create a run configuration. So create a new Application run configuration:

  • Set Main class to org.vertx.java.deploy.impl.cli.Starter
  • Set Program arguments to run de.codepitbull.vertx.HttpVerticle -cp <path>, where the path is the place where Idea puts the compiled classes of your gradle project
  • Set Use classpath of module to the module we just configured

That’s it.

Shows the filled fields as described in the text.

Hit run and point your browser at http://localhost:8087 for home sweet home.

Serving static content with Jetty 9

I am currently (again) doing a lot of JavaScript related stuff. My favorite IDE (IntelliJ IDEA) has stellar support for this crappy language. The only thing I needed was a SMALL web server to play around with static HTML and some JavaScript. The quickest way was to use Jetty 9 as it can be found on all of my development machines.

Just add a Jetty9-RunConfig in IDEA (or Eclipse, if you have to) and create static.xml with the following content in $JETTY_HOME/webapps.

<?xml version="1.0"  encoding="ISO-8859-1"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">

<Configure class="org.eclipse.jetty.server.handler.ContextHandler">
  <Set name="contextPath">/static</Set>
  <Set name="resourceBase">/path/to/your/document/root/</Set>
  <Set name="handler">
    <New class="org.eclipse.jetty.server.handler.ResourceHandler">
      <Set name="cacheControl">no-cache</Set>
    </New>
  </Set>
</Configure>

Don’t forget to adjust resourceBase to your liking.

Note: I also disabled caching in this example.

Bootstrapping Neo4j With Spring-Data – Without XML

With the maturing of Spring Data I started porting all my personal projects to use Spring Data for bootstrapping. I also wanted to get rid of XML, which proved a little more tricky than I expected.

Dependencies

Let’s start with the required dependencies:

Update: Added a missing validator-dependency

<properties>
  <spring-core.version>3.2.0.RELEASE</spring-core.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-neo4j</artifactId>
    <version>2.2.1.RELEASE</version>
    <exclusions>
      <exclusion>
        <groupId>org.springframework</groupId>
        <artifactId>spring-asm</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>org.neo4j.app</groupId>
    <artifactId>neo4j-server</artifactId>
    <version>1.8.1</version>
  </dependency>
  <dependency>
    <groupId>org.neo4j.app</groupId>
    <artifactId>neo4j-server</artifactId>
    <classifier>static-web</classifier>
    <version>1.8.1</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>${spring-core.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring-core.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aop</artifactId>
    <version>${spring-core.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
    <version>${spring-core.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-beans</artifactId>
    <version>${spring-core.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-expression</artifactId>
    <version>${spring-core.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
    <version>${spring-core.version}</version>
  </dependency>
  <dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib</artifactId>
    <version>2.2.2</version>
  </dependency>
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>4.3.0.Final</version>
  </dependency>
</dependencies>

We need/want to override the Spring dependencies pulled in via Spring Data with newer ones. We also need to explicitly exclude spring-asm as newer versions of Spring don’t need it.

The neo4j-dependencies are required to give us the Neo4j web server.

Repositories

I created a small demo entity based on the Neo4j annotations provided by Spring Data.

@NodeEntity
public class Component {
    @GraphId Long id;
    @Indexed(unique = true) String simpleName;
    @RelatedTo(type = "Component") Set<Component> relatedTo = new HashSet<Component>();
    …
}

Next I added a repository to handle the class (oh, the beauty of Spring-Data-Repositories).

public interface ComponentRepository extends GraphRepository<Component>{
}

That’s it.

Configuration

To get it all up and running, the only thing required is a @Configuration-annotated class which extends Neo4jConfiguration.

@EnableTransactionManagement
@Configuration
@EnableNeo4jRepositories(basePackages = "de.codepitbull.neo4j")
public class CustomNeo4jConfig extends Neo4jConfiguration {
    private static final String DB_PATH = "target/neo4j";
    @Bean
    public EmbeddedGraphDatabase graphDatabaseService() {
        return new EmbeddedGraphDatabase(DB_PATH);
    }
    @Bean
    public WrappingNeoServerBootstrapper neo4jWebServer() {
        WrappingNeoServerBootstrapper server = new WrappingNeoServerBootstrapper(graphDatabaseService());
        server.start();
        return server;
    }
}

Using this config we are getting not only an embedded Neo4J instance but also a nice query-interface (including the Neo4J-Shell) running at localhost:7474.
Nice, that’s it 🙂

Create a main-Class, fire up the context and see the magic happen.

public class Main {
     public static void main(String[] args) {
         AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(CustomNeo4jConfig.class);
         context.start();
         context.getBean(WrappingNeoServerBootstrapper.class);
         context.registerShutdownHook();
     }
}

Using Converters

After playing around with Neo4j you will run into the need for converters (everything that goes into the graph has to be converted to a String). This proved to be a little more tricky than I had expected.
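To see why converters matter, here is a stdlib-only sketch of the round-trip every non-String property has to survive before it can live in the graph. No Spring or Neo4j is involved; the LocalDate property and the method names are made up for illustration:

```java
import java.time.LocalDate;

// Hypothetical round-trip converters: everything stored in the graph is a String,
// so a LocalDate property needs an explicit to-String and from-String conversion.
public class DateConverterSketch {

    static String toGraph(LocalDate date) {       // entity -> node property
        return date.toString();                   // ISO-8601, e.g. "2013-05-01"
    }

    static LocalDate fromGraph(String stored) {   // node property -> entity
        return LocalDate.parse(stored);
    }

    public static void main(String[] args) {
        LocalDate original = LocalDate.of(2013, 5, 1);
        String stored = toGraph(original);        // what would actually be persisted
        LocalDate restored = fromGraph(stored);
        System.out.println(stored + " -> " + restored.equals(original));
    }
}
```

The pair of Spring Converters you register with the ConversionServiceFactoryBean does exactly this job, once for each direction.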

Normally you only have to register a new ConversionServiceFactoryBean to provide additional back-and-forth conversions. Registering this bean in the same @Configuration we just created leads to a “circular dependency” problem.

I am not sure why this is happening, but I simply moved it into a separate @Configuration which uses @Import to include the Neo4j configuration, and things started working.

Obviously you will also have to replace CustomNeo4jConfig.class in your Main-class with MainConfiguration.class.

@Configuration
@Import(CustomNeo4jConfig.class)
public class MainConfiguration {

    @Bean
    public ConversionServiceFactoryBean conversionService() {
        Set converters = Sets.newHashSet();
        converters.add(new ConstructorListConverter());

        ConversionServiceFactoryBean bean = new ConversionServiceFactoryBean();
        bean.setConverters(converters);
        return bean;
    }
}

Conclusion

I love it, nothing more to say.

Wicket: A slightly better “Open Session in View”

We recently participated in the Plat_forms contest (more on that in a few days). Coding a full application in less than 30 hours is quite the task and there’s no room for wasting time.

Sadly, we wasted time. A lot. On the persistence layer.

Wasted may be too hard of a word but we spent too much time building the actual business services needed to populate the view layer.

You might wonder why it took so much time.

Well, because each use case needs a different service method which provides an entity and all its relations, initialized as deeply as the view layer will use them.

Now you might give me a confused look and ask “Why the hell did you bother building those and didn’t use Open Session in View?”, and I would have answered with a tirade on why I consider it an anti-pattern.

But sometimes you have to rethink opinions you held dear for a long time.

Sometimes verbose is better

Before I continue to dive into Open Session in View (OSiV) a little deeper, let’s take a look why I still prefer the verbose approach. Building a relational persistence layer using an ORM can be a tricky thing.

There’s a lot of things that can go wrong. Especially if JOINs are involved. Any access to a collection or a referenced entity can cause havoc on your application performance.

By avoiding Open Session in View this problem is easily avoided as the developer has to think about every single JOIN as he will have to build the queries to resolve them.

There will also be integration tests covering individual queries and the possibility to do load tests based on those.

The moment you start using OSiV all these advantages disappear.

Sometimes terse is better

A contest (or building a prototype) has different rules. The only thing that counts is “getting it done”.

That’s where the verbose approach starts to fail as it demands a lot of code being written and tested.

Enter Open Session in View


Using OSiV the database session is opened and closed through a Servlet Filter.

As the session stays open in the view layer you are free to navigate the entity-tree as you please.

This convenience comes at a price and, aside from the problems I already mentioned, there are two key disadvantages:

N+1 Select problem

The most dreaded problem is the N+1 select problem: accessing a collection without the appropriate annotations will cause the ORM to issue a separate select for each entry in the collection.

So you will get 1 initial select plus N (= the size of the collection) subsequent selects. They are easily introduced and hard to spot during development, although today’s query-statistics tools (every good ORM ships a set of these) make them easy to find once they occur. With a good project setup they should be discovered during load testing.

If you are actually doing load tests …
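The effect can be demonstrated without any ORM at all. In the following plain-Java sketch the FakeDb class, the order/customer naming, and the query counter are all made up; the counter simply stands in for the database to make the 1 + N selects visible:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a database: every "query" increments a counter.
class FakeDb {
    final AtomicInteger queries = new AtomicInteger();

    List<Long> loadOrderIds() {            // SELECT id FROM orders
        queries.incrementAndGet();
        return List.of(1L, 2L, 3L);
    }

    String loadCustomerFor(long orderId) { // SELECT ... WHERE order_id = ? (one per order!)
        queries.incrementAndGet();
        return "customer-" + orderId;
    }
}

public class NPlusOneDemo {
    public static void main(String[] args) {
        FakeDb db = new FakeDb();
        List<String> customers = new ArrayList<>();
        // Navigating the collection lazily: 1 query for the list,
        // plus N queries for the N referenced entities.
        for (long id : db.loadOrderIds()) {
            customers.add(db.loadCustomerFor(id));
        }
        System.out.println("queries=" + db.queries.get()); // 1 + N = 4
    }
}
```

With N = 3 orders the counter ends at 4; a proper fetch join would deliver the same data in a single select.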

Exception in the filter

ORMs do a lot of things when a database session is closed. Doing a lot of things also means that a lot of things can go wrong.

With OSiV the session is closed after the web request has been processed by the web framework. There is no way to react to these exceptions in a meaningful way without putting significant logic into the ServletFilter. Not really a good idea.

Wicket

There is no way to solve N+1 generically. But the exception problem is easily solved. Well, if Wicket is your view layer: Wicket provides hooks into every step of request processing through IRequestCycleListener.

The following OpenEntityManagerInRequestCycleListener shows how to build a JPA-based OSiV for Wicket.

import org.apache.wicket.MetaDataKey;
import org.apache.wicket.request.IRequestHandler;
import org.apache.wicket.request.cycle.AbstractRequestCycleListener;
import org.apache.wicket.request.cycle.RequestCycle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Configurable;
import org.springframework.dao.DataAccessResourceFailureException;
import org.springframework.orm.jpa.EntityManagerFactoryUtils;
import org.springframework.orm.jpa.EntityManagerHolder;
import org.springframework.transaction.support.TransactionSynchronizationManager;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceException;
import javax.persistence.PersistenceUnit;

@Configurable
public final class OpenEntityManagerInRequestCycleListener extends AbstractRequestCycleListener {

    private static final Logger LOG = LoggerFactory.getLogger(OpenEntityManagerInRequestCycleListener.class);

    @SuppressWarnings("serial")
    static final MetaDataKey<Boolean> PARTICIPATE = new MetaDataKey<Boolean>() {};

    @PersistenceUnit
    private EntityManagerFactory emf;

    @Override
    public void onBeginRequest(RequestCycle cycle) {
        cycle.setMetaData(PARTICIPATE, TransactionSynchronizationManager.hasResource(emf));
        if (!cycle.getMetaData(PARTICIPATE)) {
            try {
                LOG.debug("OPENING NEW ENTITY MANAGER FOR THIS REQUEST.");
                EntityManager em = emf.createEntityManager();
                TransactionSynchronizationManager.bindResource(emf, new EntityManagerHolder(em));
            } catch (PersistenceException ex) {
                throw new DataAccessResourceFailureException("Could not create JPA EntityManager", ex);
            }
        }
    }

    @Override
    public void onRequestHandlerExecuted(RequestCycle cycle, IRequestHandler handler) {
        if (!cycle.getMetaData(PARTICIPATE)) {
            try {
                LOG.debug("CLOSING ENTITY MANAGER FOR THIS REQUEST.");
                EntityManagerHolder emHolder = (EntityManagerHolder)
                        TransactionSynchronizationManager.unbindResource(emf);
                EntityManagerFactoryUtils.closeEntityManager(emHolder.getEntityManager());
            } catch (WhateverExceptionYouAreInterestedIn e) {
                // DO STUFF
            }
        }
    }

    @Override
    public IRequestHandler onException(RequestCycle cycle, Exception ex) {
        return super.onException(cycle, ex);
    }
}

And the same using plain Hibernate:

import org.apache.wicket.MetaDataKey;
import org.apache.wicket.request.IRequestHandler;
import org.apache.wicket.request.cycle.AbstractRequestCycleListener;
import org.apache.wicket.request.cycle.RequestCycle;
import org.hibernate.FlushMode;
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;
import org.springframework.dao.DataAccessResourceFailureException;
import org.springframework.orm.hibernate4.SessionFactoryUtils;
import org.springframework.orm.hibernate4.SessionHolder;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Configurable
final class OpenSessionInRequestCycleListener extends AbstractRequestCycleListener {

    @SuppressWarnings("serial")
    static final MetaDataKey<Boolean> PARTICIPATE = new MetaDataKey<Boolean>() {};

    @Autowired
    private SessionFactory sessionFactory;

    @Override
    public void onBeginRequest(RequestCycle cycle) {
        cycle.setMetaData(PARTICIPATE, TransactionSynchronizationManager.hasResource(sessionFactory));
        if (!cycle.getMetaData(PARTICIPATE)) {
            Session session = openSession(sessionFactory);
            TransactionSynchronizationManager.bindResource(sessionFactory, new SessionHolder(session));
        }
    }

    @Override
    public void onRequestHandlerExecuted(RequestCycle cycle, IRequestHandler handler) {
        if (!cycle.getMetaData(PARTICIPATE)) {
            try {
                SessionHolder sessionHolder =
                        (SessionHolder) TransactionSynchronizationManager.unbindResource(sessionFactory);
                SessionFactoryUtils.closeSession(sessionHolder.getSession());
            } catch (WhateverExceptionYouAreInterestedIn e) {
                // DO STUFF
            }
        }
    }

    protected Session openSession(SessionFactory sessionFactory) throws DataAccessResourceFailureException {
        try {
            Session session = SessionFactoryUtils.openSession(sessionFactory);
            session.setFlushMode(FlushMode.MANUAL);
            return session;
        } catch (HibernateException ex) {
            throw new DataAccessResourceFailureException("Could not open Hibernate Session", ex);
        }
    }
}

Both use the same mechanism: override onRequestHandlerExecuted and go wild. Throw a RestartResponseException or recover gracefully, it’s up to you.

End

There’s only one question left: Would I use it?

Well, to be honest: It depends.

If I ever had to do something like Plat_forms again or a project prototype I would definitely go for it.

For a mission critical application I still prefer to go the safe route and know each JOIN by its first name.

Tomcat 7 with full JTA

Every time I clean my home directory I purge something important.

Just like a week ago when I realized my Tomcat 7 installation was gone, and with it all the things I had done to get JOTM running in there.

Once again I spent a while to get everything back together.
But THIS time I am writing it down.

Tomcat Setup

So I want Tomcat 7 with full JTA (this should also work on older Tomcat versions).
I will be using JOTM as it provides everything I need.

After downloading the JOTM-distribution copy the following jars to <tomcat-home>/lib.

  • commons-logging-api.jar
  • jotm-core.jar
  • log4j.jar
  • ow2-connector-1.5-spec.jar
  • ow2-jta-1.1-spec.jar

I will also add the Tomcat connection pool to show a full example later on.
Download the distribution and add the following jars to <tomcat-home>/lib.
  • tomcat-jdbc.jar
  • tomcat-juli.jar

JNDI

Now we need to tell Tomcat how to create the required JNDI-resources.

Note:
Everything created using the Resource-tag will end up in java:comp/env/. Tomcat doesn’t provide functionality to put it somewhere else (or I was simply too blind to find it).

To make resources available we need to edit <tomcat-home>/conf/context.xml.
The first thing we need to add is a connection pool to showcase the usage of JTA later on.

<ResourceLink global="jdbc/myDB" name="jdbc/myDB" type="javax.sql.DataSource"/>
<Resource
	driverClassName="org.h2.Driver"
	factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
	name="jdbc/myDB"
	password=""
	type="javax.sql.DataSource"
	url="jdbc:h2:tcp://localhost/~/test"
	username="sa"/>

Now we need to add the actual JTA-related resources, starting with the TransactionSynchronizationRegistry.
Remembering what I wrote above, we have to be aware that the TransactionSynchronizationRegistry will end up at java:comp/env/TransactionSynchronizationRegistry and not at java:comp/TransactionSynchronizationRegistry as the JEE spec requires.
More on that later.

<Resource
	name="TransactionSynchronizationRegistry"
	auth="Container"
	type="javax.transaction.TransactionSynchronizationRegistry"
	factory="org.objectweb.jotm.TransactionSynchronizationRegistryFactory"/>

The final step is to add the actual transaction manager. In older versions of Tomcat one had to register the JTA factory as a resource;
today we have a specialized tag for that.

<Transaction
	factory="org.objectweb.jotm.UserTransactionFactory"
	jotm.timeout="60"/>

The main difference between the Transaction tag and the Resource tag is that the transaction manager will end up at java:comp/UserTransaction, which is the name required by JEE.
That’s it for installing the JOTM as JTA-provider in Tomcat.

To make things available in the web application we need to add a couple of lines to its web.xml.

<resource-env-ref>
	<description>DB Connection </description>
	<resource-env-ref-name>jdbc/myDB</resource-env-ref-name>
	<resource-env-ref-type>javax.sql.DataSource</resource-env-ref-type>
</resource-env-ref>

<resource-env-ref>
	<description>JTA transaction manager</description>
	<resource-env-ref-name>jta/UserTransaction</resource-env-ref-name>
	<resource-env-ref-type>javax.transaction.UserTransaction</resource-env-ref-type>
</resource-env-ref>

<resource-env-ref>
	<description>JTA Transaction Synchronization Registry</description>
	<resource-env-ref-name>TransactionSynchronizationRegistry</resource-env-ref-name>
	<resource-env-ref-type>javax.transaction.TransactionSynchronizationRegistry</resource-env-ref-type>
</resource-env-ref>

Putting it all together

A little example on how to use JTA in your webapplication via spring:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
	<property name="dataSource" ref="dataSource"/>
	<property name="jpaVendorAdapter">
		<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
			<property name="showSql" value="false"/>
		</bean>
	</property>
	<property name="jpaProperties">
		<props>
			<prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop>
		</props>
	</property>
</bean>

<jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/myDB" resource-ref="true"/>

<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
	<property name="transactionSynchronizationRegistryName" value="java:comp/env/TransactionSynchronizationRegistry"/>
	<property name="transactionManagerName" value="java:comp/UserTransaction"/>
</bean>

The only thing I don’t really like about this configuration is this line:

<property name="transactionSynchronizationRegistryName" value="java:comp/env/TransactionSynchronizationRegistry"/>

This is no problem as long as your only deployment target is Tomcat.

If Tomcat is only used for local development deployments and the production system is some sort of JEE server, you will have to change the JNDI location to the JEE default.

This is easily achieved using Maven profiles.
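A sketch of what such a profile pair could look like — the property name jndi.tsr is made up for this example, and the Spring config would reference ${jndi.tsr} via Maven resource filtering:

<profiles>
	<!-- Default: Tomcat's non-standard JNDI location -->
	<profile>
		<id>tomcat</id>
		<activation>
			<activeByDefault>true</activeByDefault>
		</activation>
		<properties>
			<jndi.tsr>java:comp/env/TransactionSynchronizationRegistry</jndi.tsr>
		</properties>
	</profile>
	<!-- Production: the JEE-default location -->
	<profile>
		<id>jee</id>
		<properties>
			<jndi.tsr>java:comp/TransactionSynchronizationRegistry</jndi.tsr>
		</properties>
	</profile>
</profiles>

Building with mvn package -Pjee then puts the JEE-default name into the filtered Spring config.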

OpenMBeans, rocket science from the 70s

(Sorry for the messed-up layout, ScribeFire destroyed the post and I had to fix it manually)

Nope, no Scala in here. This time it is good old Java.
I was working on a small project of mine to get some size information about Wicket pages (for the
interested: it’s here).
One would think things like AspectJ or figuring out the best strategy for estimating object sizes would be the hard parts of such a project.
I can still hear the distant laughter of Murphy.

Up to now I never had to (or wanted to) display table data in JMX, so this was the first time I tackled OpenMBeans.
Finding a decent tutorial for these is a frickin’ nightmare. People (developers are people, too, just to make sure …) seem to insist on demonstrating things using a highly intricate use case to show off their coding skills.
In case it didn’t come across what I wanted to say:

IF YOU WANT TO SHOW SOME TECHNOLOGY DON’T REQUIRE ME TO UNDERSTAND YOUR F…ING USECASE.

A SIMPLE OpenMBean example.

OpenMBeans feel weird as the way you build them is different from anything you normally do in Java (at least as of today).
Looking at the amount of imports required for a very small OpenMBean might give you an idea of what lies ahead. That’s why I am going to go through this step by step.

import javax.management.Attribute;
import javax.management.AttributeList;
import javax.management.AttributeNotFoundException;
import javax.management.DynamicMBean;
import javax.management.InvalidAttributeValueException;
import javax.management.MBeanException;
import javax.management.MBeanInfo;
import javax.management.MBeanNotificationInfo;
import javax.management.ReflectionException;
import javax.management.RuntimeOperationsException;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.OpenDataException;
import javax.management.openmbean.OpenMBeanAttributeInfoSupport;
import javax.management.openmbean.OpenMBeanConstructorInfoSupport;
import javax.management.openmbean.OpenMBeanInfoSupport;
import javax.management.openmbean.OpenMBeanOperationInfoSupport;
import javax.management.openmbean.OpenType;
import javax.management.openmbean.SimpleType;
import javax.management.openmbean.TabularData;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;

To get started we need to implement DynamicMBean

public class PageSizeResultOpenMBean implements DynamicMBean {

The next step is to initialize all the parameters you are going to need for this simple example:

itemNames contains the names of properties representing a row in your tabular data.

private static String[] itemNames = { "page", "before", "after" };

itemDescriptions will contain the table headers for display in a JMX tool.

private static String[] itemDescriptions = { "Page class", "Before detach", "After detach" };

itemTypes defines the type of each row.

private static OpenType[] itemTypes = { SimpleType.STRING, SimpleType.LONG, SimpleType.LONG };

indexNames defines the itemName used to guarantee the uniqueness of a row.

private static String[] indexNames = { "page" };

We are going to define the actual tabular data in the following static block:

private static TabularType pageTabularType = null;
private static CompositeType pageType = null;
static {
   try {
      pageType = new CompositeType("page", "Page size info", itemNames,
         itemDescriptions, itemTypes);
      pageTabularType = new TabularType("pages",
         "List of Page Size results", pageType, indexNames);
   } catch (OpenDataException e) {
      throw new RuntimeException(e);
   }
}

In this case pageType uses the definitions from above to describe a row in the table, and pageTabularType defines the table itself.
In the constructor we create a container for the content of the table, called pageData, and the OpenMBeanInfoSupport object that holds the information required by the MBean server.

 private TabularDataSupport pageData;
 private OpenMBeanInfoSupport PSOMBInfo;
 public PageSizeResultOpenMBean() throws OpenDataException {
    OpenMBeanAttributeInfoSupport[] attributes =
          new OpenMBeanAttributeInfoSupport[] {
             new OpenMBeanAttributeInfoSupport( "PageInfos",
                "Page Infos sorted by class name", pageTabularType,
                true, false, false) };
    PSOMBInfo = new OpenMBeanInfoSupport(this.getClass().getName(),
          "Page Size OMB", attributes,
          new OpenMBeanConstructorInfoSupport[0],
          new OpenMBeanOperationInfoSupport[0],
          new MBeanNotificationInfo[0]);
    pageData = new TabularDataSupport(pageTabularType);
 }

In this case we are only defining one attribute, holding the tabular data, and no operations or constructors. We don’t need constructors as we are going to register an instance of the OpenMBean manually. Just a few methods left to implement. This method is going to be called from getAttribute and provides a cloned table to the client for display purposes:

public TabularData getPageInfos() {
   return (TabularData) pageData.clone();
}

getAttribute is called with an attribute name. In the constructor we defined an attribute named ‘PageInfos’. We simply check if that’s the attribute the client was asking for and return it.

public Object getAttribute(String attribute_name) throws
      AttributeNotFoundException, MBeanException, ReflectionException {

    if (attribute_name == null) {
    throw new RuntimeOperationsException(
       new IllegalArgumentException("Attribute name cannot be null"),
          "Cannot call getAttributeInfo with null attribute name");
    }
    if (attribute_name.equals("PageInfos")) {
       return getPageInfos();
    }
    throw new AttributeNotFoundException("Cannot find " +
       attribute_name + " attribute ");
 }

We don’t allow setting the attribute. Let it crash’n’burn.

public void setAttribute(Attribute attribute) throws
   AttributeNotFoundException,
   InvalidAttributeValueException,
   MBeanException,
   ReflectionException {
    throw new AttributeNotFoundException(
       "No attribute can be set in this MBean");
 }

A shortcut method used by clients to get several attributes at once. In our case we only return one attribute and this might never get called, but it doesn’t hurt to do a clean implementation 😉


public AttributeList getAttributes(String[] attributeNames) {
    if (attributeNames == null) {
       throw new RuntimeOperationsException(
          new IllegalArgumentException("attributeNames[] cannot be null"),
          "Cannot call getAttributes with null attribute names");
    }
    AttributeList resultList = new AttributeList();
    if (attributeNames.length == 0)
       return resultList;
    for (int i = 0; i < attributeNames.length; i++) {
       try {
          Object value = getAttribute(attributeNames[i]);
          resultList.add(new Attribute(attributeNames[i], value));
       } catch (Exception e) {
          e.printStackTrace();
       }
    }
    return (resultList);
 }
 public AttributeList setAttributes(AttributeList attributes) {
    return new AttributeList();
 }

We are not providing any operations, so this method is going to throw an exception if somebody tries to invoke one.

public Object invoke(String operationName, Object[] params, String[] signature)
   throws MBeanException, ReflectionException {
      throw new RuntimeOperationsException(
         new IllegalArgumentException(
            "No operations defined for this OpenMBean"),
         "No operations defined for this OpenMBean");
}

Used by JMX clients to get to know what we’ve got here.

public MBeanInfo getMBeanInfo() {
    return PSOMBInfo;
}

A little internal method my aspect uses to actually add some data to the table. This is just an example of how to modify the data stored by the MBean.

   public void addPageSizeResult(PageSizeResult pageSizeResult) {
      Object[] itemValues = {
         pageSizeResult.pageClass.getName(),
         pageSizeResult.sizeBeforeDetach,
         pageSizeResult.sizeAfterDetach
      };
      try {
         pageData.put(new CompositeDataSupport(pageType, itemNames,
            itemValues));
      } catch (OpenDataException e) {
         e.printStackTrace();
      }
   }

That’s it.
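As a sanity check, the open-type plumbing from above can be exercised in isolation, without an MBean server. Here is a minimal, self-contained sketch — the class name and the sample values are made up for the demo:

```java
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.OpenDataException;
import javax.management.openmbean.OpenType;
import javax.management.openmbean.SimpleType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;

public class TabularDemo {

    // Build a one-row table using the same shape as pageTabularType above.
    static TabularDataSupport buildTable() throws OpenDataException {
        String[] itemNames = { "page", "before", "after" };
        CompositeType rowType = new CompositeType("page", "Page size info",
            itemNames,
            new String[] { "Page class", "Before detach", "After detach" },
            new OpenType<?>[] { SimpleType.STRING, SimpleType.LONG, SimpleType.LONG });
        TabularType tableType = new TabularType("pages",
            "List of Page Size results", rowType, new String[] { "page" });

        TabularDataSupport table = new TabularDataSupport(tableType);
        // Sample row; the index column "page" guarantees uniqueness.
        table.put(new CompositeDataSupport(rowType, itemNames,
            new Object[] { "HomePage", 1024L, 512L }));
        return table;
    }

    public static void main(String[] args) throws OpenDataException {
        System.out.println(buildTable().size()); // prints 1
    }
}
```

Putting a second row with the same "page" value would replace the first one, which is exactly the behavior addPageSizeResult relies on.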

Source code available on github.

IntelliJ IDEA, Scala and Continuous Builds (with some Mac OS X details)

My last post focused on how and why I picked IntelliJ IDEA as my Scala IDE.
This short post illustrates how I am using it.
The only things required are a working installation of Maven (2.x, or 3.x for added flavor) and IntelliJ IDEA 10. I expect readers of this blog to be able to install both, so I won’t spend time on this.

Note for Mac OS X and Maven:

For some reason IDEA wouldn’t pick up my Maven installation (even when using the overrides). Looks like this is a known problem. The fix is a little invasive but pretty simple:

  • Do ‘sudo su’ to become root
  • edit/create the file ‘/etc/launchd.conf’
  • insert ‘setenv M2_HOME <path-to-maven-install>’
  • restart

After starting the IDE we need to add the Scala plugin. Go to Preferences -> Plugins and select the Scala plugin, not the Scala Power Pack.
Scala plugin selected, not Scala Tools
After installing we can now continue to create a new Scala project. As we want to use Maven we will use the Maven archetype.
Create a new project with ‘Create project from scratch’, then select ‘Maven Module’. In the following screen select the org.scala-tools.archetypes:scala-simple-archetype and continue. The resulting project is a Maven-Java project and we need two things to turn it into the Scala project we wanted.
First thing to do is to change the Scala version in the pom.xml.

<properties>
	<scala.version>2.6.1</scala.version>
</properties>

Simply replace 2.6.1 with 2.8.1.
Now we need to turn the project into a Scala project. Right-click on the project and select ‘Add Framework Support…’
And adjust the following screen to look like this:

Select Scala and fill in the text fields

Add Scala Framework support

We are almost there.

The last remaining step in Idea is to turn off compilation from the IDE.
Create a run configuration and open it. Uncheck ‘Make’ in ‘Before Launch’.

Uncheck Make

Adjust build settings.

The only thing remaining is to start the continuous compile.
Open a terminal, cd to the directory with your new project (the one containing the pom.xml) and execute ‘mvn scala:cc’.
Maven will now watch the project for changes and compile everything that changes.

Now back to the 99.

Scala, SBT, Maven and a little Idea

After playing around with a lot of IDEs I settled with IntelliJ Idea for Scala development.

This was during the time of Scala 2.8 development where every week brought a new version.

I just wanted to play around with Scala and didn’t mind to stick with 2.7.

Eclipse was way too fragile at that time, NetBeans just didn’t work at all, and IDEA at least had working code completion and syntax highlighting.

The cool thing about Idea is its capability to let an outside tool do the building without breaking completely (curse you, Eclipse workspace). So the next thing was to look for a cool build system. I have to use Maven a lot in other projects so I was looking for something that would allow me to reuse the Maven repository infrastructure.

SBT was exactly what I had been looking for.

SBT is a very convenient replacement for Maven as it supports Maven repositories and uses Scala as its scripting language.

  • Download SBT
  • Run it in an empty directory
  • Answer the questions
  • Get to see the glorious SBT-shell

This is the SBT-shell which allows interacting with SBT. From here you just type “run” or “test” or “update” or some other commands to interact with SBT.

Now let’s take a look at the directory layout:

  • lib
  • project
    • boot
    • build.properties
  • src
  • target

src and target serve the same purpose as in a Maven build, and the layout beneath them is the same.

The next step is to create a Project class in project/build:

import sbt._
class MyProject(info: ProjectInfo) extends DefaultProject(info) {
      val derby = "org.apache.derby" % "derby" % "10.4.1.3"
}

The above source code defines a dependency on Apache Derby. So this project file is used to configure dependencies and a lot of other things.
In my case I wanted to be able to also build the project from Maven, to allow non-Scala developers to use my stuff.
The docs told me about SBT’s POM support, so I went for it and built my POM. I put it into the root directory and Maven was happy.
SBT wasn’t. It complained about not being able to locate artifacts.
Turns out that setting a property in the project file does the trick:

import sbt._
class MyProject(info: ProjectInfo) extends DefaultProject(info) {
      val mavenLocal = "Local Maven Repository" at "file://"+Path.userHome+"/.m2/repository"
}

Now that SBT is actually using the local Maven repository, everything works fine.
Start SBT in your project dir, start IDEA and import the POM.
Happy Coding 😀

Sonatype Nexus on JBoss

After our main Nexus broke through an update from 1.3.6 to 1.5.0 I had to create a temporary Nexus.
I already had a working JBoss (5.1.0.GA) installation on my machine and wanted to deploy the Nexus webapp there.
After installing and trying to access it I got:

javax.servlet.ServletException: non-HTTP request for response …

After digging around for a while I discovered a servlet-api.jar inside the nexus.war.
Removed it and now everything is working fine.
Thought this one was worth sharing.