Web app with H2 console to access your database via browser

Here’s a simple war file that deploys the H2 database web console, along with H2, PostgreSQL and MySQL drivers, on your web container.

You can access it by simply going to the /h2console URL and manage your database remotely with nothing but your browser.

No dependencies and no manual steps required: simply deploy and connect to your database.

It’s based on the configuration from the JBoss quickstart; I’ve only added the H2 and other DB jars, so it’s ready to be used in any web app container.
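For the curious, the wiring inside such a war is minimal: H2 ships its console as a servlet (org.h2.server.web.WebServlet). Here is a sketch of the web.xml; the servlet class and the webAllowOthers init-param come from H2, while the mapping and values are my assumptions:

```xml
<web-app>
    <servlet>
        <servlet-name>H2Console</servlet-name>
        <servlet-class>org.h2.server.web.WebServlet</servlet-class>
        <!-- Allows connections from other machines (assumption: drop this to restrict access) -->
        <init-param>
            <param-name>webAllowOthers</param-name>
            <param-value>true</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>H2Console</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
</web-app>
```

With the war named h2console.war, the container serves the console under the /h2console context path.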

Here’s a download link: h2console.war

How to control relay module with BeagleBone Black

I’ve finally received my 5V relay module and started messing around with my BeagleBone Black, the tiny ARM board I won from Touk.pl at the Confitura conference.

The idea is to build a flower-watering system that calculates the water amount from the current temperature and the weather forecast, and keeps watering statistics.

Here’s how it works:

So, first I connected my relay module to the power (P8_7) and ground (P8_1) connectors on the BeagleBone, and its inputs to some GPIO pins (e.g. P9_8, P9_10). My relay module has only 2 relays, but you can connect as many as you like, just use more GPIO pins.
Here you can get a nice pinout image to check which pin is where:

Next we have to export our GPIOs through sysfs so we can control them:

if [ ! -d /sys/class/gpio/gpio67 ]; then echo 67 > /sys/class/gpio/export; fi

You have to do this as root to have access. Remember that GPIO numbers are different from pin numbers; check the pinout if needed.

Next we can drive our GPIO as an output, which should enable the relay channel:

echo out > /sys/class/gpio/gpio67/direction

and to disable it:

echo in > /sys/class/gpio/gpio67/direction

That’s all. Now you can use your BeagleBone to power almost any device on and off. Here’s the full script used in the demo if you’d like to check it out:


#!/bin/sh
# GPIO numbers for the two relay channels (matching the export example above)
gpio1=67
gpio2=68

# export the GPIOs if they aren't exported yet
if [ ! -d /sys/class/gpio/gpio$gpio1 ]; then echo $gpio1 > /sys/class/gpio/export; fi
if [ ! -d /sys/class/gpio/gpio$gpio2 ]; then echo $gpio2 > /sys/class/gpio/export; fi

# toggle a relay channel: "out" enables it, "in" disables it
click() {
    state="`cat /sys/class/gpio/gpio$1/direction`"
    if [ "$state" = "in" ] ; then
        state=out
    else
        state=in
    fi
    echo $state > /sys/class/gpio/gpio$1/direction
}

# click both relays in a loop
while : ; do
    click $gpio1
    sleep .2
    click $gpio2
    sleep .4
done

How to run Solr in Maven for tests that need to do some searches

Here’s a Maven pom section that launches a Jetty instance with Apache Solr deployed just before the tests and stops it after the integration tests are finished.
You can select which war will be deployed and where the Solr home will be located (of course you need to prepare it first).
Hope you find it useful:

        <!--Need empty war tag so contextHandlers are loaded with Solr on proper path-->                               
            <connector implementation="org.eclipse.jetty.server.nio.SelectChannelConnector">                           
            <contextHandler implementation="org.mortbay.jetty.plugin.JettyWebAppContext">                              
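The fragments above come from a jetty-maven-plugin section. Here is a hedged sketch of what the full plugin configuration might look like; the ports, paths, stop key and the name of the empty war are my assumptions:

```xml
<plugin>
    <groupId>org.mortbay.jetty</groupId>
    <artifactId>jetty-maven-plugin</artifactId>
    <configuration>
        <!-- Need empty war tag so contextHandlers are loaded with Solr on proper path -->
        <war>${project.build.directory}/empty.war</war>
        <stopKey>stop</stopKey>
        <stopPort>9999</stopPort>
        <connectors>
            <connector implementation="org.eclipse.jetty.server.nio.SelectChannelConnector">
                <port>8983</port>
            </connector>
        </connectors>
        <contextHandlers>
            <contextHandler implementation="org.mortbay.jetty.plugin.JettyWebAppContext">
                <war>${basedir}/solr/solr.war</war>
                <contextPath>/solr</contextPath>
            </contextHandler>
        </contextHandlers>
        <systemProperties>
            <!-- points Solr at a prepared home directory -->
            <systemProperty>
                <name>solr.solr.home</name>
                <value>${basedir}/solr/home</value>
            </systemProperty>
        </systemProperties>
    </configuration>
    <executions>
        <execution>
            <id>start-jetty</id>
            <phase>pre-integration-test</phase>
            <goals><goal>deploy-war</goal></goals>
            <configuration><daemon>true</daemon></configuration>
        </execution>
        <execution>
            <id>stop-jetty</id>
            <phase>post-integration-test</phase>
            <goals><goal>stop</goal></goals>
        </execution>
    </executions>
</plugin>
```

The daemon flag makes the start-jetty execution return control to Maven so the integration tests can run against the live instance.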

Playframework Groovy plugin 0.1.99 release

Just uploaded a compiled and packaged release of the Playframework module that allows you to develop Play! applications in Groovy.
It’s based on Dave Clark’s code (https://github.com/clarkdave/play-groovy) with some nice features added here and there.

Check it out if you work with Play! in a Java team and want to try a scripting language:

JSON endpoint in Playframework with one annotation

Playframework 1.2 is really nice for getting things done quickly. Today I had to add JSON endpoints
to my application that would render exactly the same models as the HTML pages, on the same URLs.
I could have done it manually, but Play has some AOP mechanisms that may be useful here.

So I implemented a simple aspect that cuts through requests and returns JSON instead
of rendering HTML templates:

import java.util.HashMap;
import java.util.Map;
import play.db.jpa.JPABase;
import play.mvc.*;

public class JsonPointcut extends Controller {

    @After
    static void renderModelsJson() {
        if (isJsonRequest()) {
            Scope.RenderArgs renderArgs = Scope.RenderArgs.current();
            Map<String, Object> outputObjects = new HashMap<String, Object>();
            for (Map.Entry<String, Object> entry : renderArgs.data.entrySet()) {
                if (entry.getValue() instanceof JPABase) {
                    outputObjects.put(entry.getKey(), entry.getValue());
                }
            }
            renderJSON(outputObjects);
        }
    }

    static boolean isJsonRequest() {
        Http.Header accepts = request.headers.get("accept");
        return accepts != null && "application/json".contains(accepts.value());
    }
}

It will simply override the render statements in controllers annotated with @With(JsonPointcut.class)
and render, as a JSON map, all parameters that would usually go into the HTML template and that extend JPABase (the base class of Play entities).
Just remember to send Accept: application/json in the request HTTP header.
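One subtlety in the isJsonRequest check: it tests whether the Accept header value is a substring of "application/json", not the other way around, so a composite Accept header will not trigger JSON rendering. A tiny standalone sketch (the class and method names are mine) showing which values match:

```java
public class AcceptCheckDemo {

    // Mirrors the pointcut's check: the header value must be a substring of "application/json"
    static boolean isJsonAccept(String accept) {
        return accept != null && "application/json".contains(accept);
    }

    public static void main(String[] args) {
        System.out.println(isJsonAccept("application/json"));            // true
        System.out.println(isJsonAccept("application/json, text/html")); // false: composite header won't match
        System.out.println(isJsonAccept(null));                          // false
    }
}
```

So a browser sending its usual long Accept list will still get HTML; only a client asking for exactly application/json gets JSON.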

Simple and beautiful.


Just uploaded a bit more fail-safe and powerful version as a Playframework module. Check my company’s GitHub account:

No documentation yet, but the javadocs on the controller.JsonRenderer class describe pretty much everything. Will add some later on.

How to use Apache Solr SQL integration and not get hurt

Recently I’ve spent more than a day fixing crazy issues with Solr’s SQL database integration on my project. You can set everything up using the Solr documentation here: http://wiki.apache.org/solr/DataImportHandler. It’s not that difficult and is probably enough to handle data loading for many applications.

We have quite a complex query feeding Solr with data. It has a few sub-selects, group concats, etc. We also use the SQL database to store the original content of the documents we feed to Solr (so we can recreate the index whenever needed). Everything worked fine with simple varchars or integers, but when we wanted to process CLOB/LONGTEXT fields, it didn’t. For CLOB data, Solr was indexing not the content but the class name and address of the database’s CLOB handler (I think it was something like org.h2.jdbc.Clob@1c341a for H2, oracle.sql.CLOB@24d12a for Oracle). It was an Object.toString() call, as you probably already guessed: the database API was not returning a String for the CLOB but some internal representation that Solr should read the data from.

Everything should be fixed by using ClobTransformer. Just a few changes in data-config.xml and it should be fine… but it wasn’t. I spent quite a few hours finding out that it won’t work if the data column you’re feeding to ClobTransformer is not written all in uppercase. Yes, it was just that. Adding an alias that makes the column name uppercase fixed everything.

select col as COLUMN from table

This, plus sourceColName="COLUMN" in the entity mapping, helped. So the first piece of advice on how not to get hurt is:

Use only uppercase names in the query result table. Use aliases when needed.
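Put together, the entity in data-config.xml ends up looking something like this (a sketch: the table and field names are made up for illustration; clob="true" is what routes the column through ClobTransformer):

```xml
<entity name="document"
        transformer="ClobTransformer"
        query="select id, content as CONTENT from documents">
    <!-- sourceColName must match the uppercase alias from the query -->
    <field column="content" sourceColName="CONTENT" clob="true"/>
</entity>
```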

The second issue we had was that after switching the database to MySQL, CLOBs (named LONGTEXT in MySQL) stopped working again, and again it was some crazy issue documented nowhere. After some time spent in the debugger and the Solr source, I found out that it was not using the field name from data-config.xml to map to the schema field, but sourceColName. It also has some logic that tries to resolve the field using sourceColName.toLowerCase() when it can’t match the name. I have no idea why it works this way for CLOBs, as other fields worked fine. Switching back to the H2 database also worked fine. So the next advice is:

Use the same name as the schema field name for the query result table columns, but still in uppercase. It will keep you safe from the first issue described here and will still work thanks to the toLowerCase logic in Solr.

Hope it will save you a few hours of searching. I’ll keep posting new crazy things about Solr as I find them. That’s all I have found for now.


OK, this is not entirely true. I’ve done some live debugging in the Solr transformers, and what you really have to do is use exactly the same naming as your database returns.
Solr transformers use a Map with the table column names (as Strings, in the case returned by the database) as keys and the data as values. So make sure you declare
sourceColNames in the mapping in exactly the same case as your database returns, or map.get() won’t match.

Dream of fully bookmarkable system

Operating systems are not very usable right now. Of course you can create some files and launch a few applications, but it’s rather a sack of totally independent parts. You always have to think in terms of individual applications. They are seldom connected, and usually not in a very smart way (MS Office kind of integration).

You cannot reference one file in another. You cannot, for example, attach the 13th minute of a movie to a text file with your favorite movie scenes and make it play from that place when you click on it. It was just not designed this way. Current OSes are a slow evolution of concepts born in the ’60s and ’70s, when there were only text files and batch scripts. No gigabytes of multimedia, no thousands of emails. It’s nearly impossible to keep all this organized with hierarchical filesystems and nearly no help at the application level.

But there is one system that is going in the direction of fully interconnected data, and that is even capable of doing so. It’s the World Wide Web, with the simply beautiful concept of the URL. It’s even getting more and more appealing with the recent ideas of making URLs fully bookmarkable and meaningful to the user, and of RESTful design. What if we could apply the same concept to operating systems? What if applications could declare URLs pointing directly to the part of a resource you have opened and would like to reference somewhere else? For example, if I wanted to reference the 3rd page of an opened pdf file, I could get a URL like:


I could store it as a link in my note and go to the 3rd page of the pdf file each time I want to see it. A note-taking application could also use some pdf rendering capabilities, if it can handle the pdf protocol, and show it inline.

What if we also decided to leave the concept of hierarchical filesystems and changed to checksums and versioning? Maybe we could send someone a link to a specific file in a specific version (checksum), and he could then open it in exactly the same state as we see it, even if he changed it in the meantime. Even if he got it from someone else, it is the same file (a checksum uniquely identifies content, so there is no chance of a mistake; mathematicians have already made sure it works). Then we could also stop having multiple copies of the same file. If the only thing identifying a file is a checksum and some metadata, then it’s enough to have a single instance with many names attached to different versions. A URL to such content might look like:


Think about it for a second. Sending such a link to anybody, really anybody, who has a file with that checksum in its history will result in exactly the same content being opened. No misunderstanding. You could even bookmark and send a character number or a selection, and it would be exactly the same selection as you see. There could also be a base checksum of the file (assigned when it was created) that could be encoded in the URL and used in case you don’t have the exact version.

There are also some other cases where you might find it useful. I think it’s even possible to get something like this working in the near future. Maybe we could start with filesystems like ZFS and Unix extended attributes? Maybe with a simple filesystem and some graph database to keep version references and metadata? No idea yet. But maybe someone will try, and we’ll see if it makes sense.

Web Services – Java client

Robert Mac asked some time ago for a Java client for my old post: Web Services in Ruby, Python and Java. So here it is (sorry for the delay). The simplest possible solution, no jars or IDE needed. Just the plain Java 6 JDK.

First we have to generate proxy classes for our Web Service (you need to pass the WSDL location, as a URL or a path to a file):

wsimport http://localhost:8080/WSServer/Music?wsdl

wsimport is in the /bin folder of your JDK. Add the -keep flag if you also want to keep the generated .java sources.

Now let’s use them:

public class WSClient {

    public static void main(String[] args) {
        Music music = new Music();
        String[] artists = music.listArtists();
        for (String artist : artists) {
            Song[] songs = music.listSongs(artist);
            for (Song song : songs) {
                System.out.format("\t%s : %s : %d%s\n", song.getFileName(), song.getArtist(), song.getSize(), "MB");
            }
        }
    }
}

Now compile it and execute it with the classes generated by wsimport on the classpath.
That’s all. Simple, isn’t it?

Java annotations – little disappointment

I must say I’m a little disappointed with Java annotations. There is no way to introduce dependencies between annotation parameters, so you lose part of the static error checking at compile time. Example?

Let’s create an annotation that generates some field in a class. What you might want is to declare some interface as the reference type and some implementation type to be assigned. But there is no way to make one type dependent on the other, so the user of your annotation may declare the reference as List and the implementation as HashMap. There is nothing you can do about it.
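Here’s a runnable sketch of what I mean; the annotation and its parameter names are hypothetical, but the problem is real: the compiler happily accepts a List/HashMap mismatch that can only be caught at runtime via reflection:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.HashMap;
import java.util.List;

public class AnnotationDemo {

    // Hypothetical annotation: ideally impl() would be constrained to subtypes of iface(),
    // but annotation parameters cannot depend on each other.
    @Retention(RetentionPolicy.RUNTIME)
    @interface GenerateField {
        Class<?> iface();
        Class<?> impl();
    }

    // Compiles without complaint even though HashMap is not a List.
    @GenerateField(iface = List.class, impl = HashMap.class)
    static class Holder {}

    public static void main(String[] args) {
        GenerateField g = Holder.class.getAnnotation(GenerateField.class);
        // The mismatch is only detectable at runtime:
        System.out.println(g.iface().isAssignableFrom(g.impl())); // false
    }
}
```

An annotation processor could perform this check at build time, but the language itself gives you no way to express the constraint in the annotation’s declaration.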

If only we had some generics there.

Log4j logger.error(Object) trap

Today I fell into an ugly Log4j logger trap. Everybody knows there are error, debug, info… methods on the Logger object. But what you might not know is that there is no error(Throwable) method there, only error(Object). What’s the difference, you ask? It’s quite big. There’s error(String, Throwable) that will log your message (the String param), build the Throwable’s stack trace and log it along, but error(Object) will treat the exception just like every other object. There will be NO stack trace in your logs, only the exception message: Throwable.toString() will be called to generate it.
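The difference boils down to how the Throwable is rendered. Here’s a plain-Java illustration (no Log4j required) of what ends up in the log in each case:

```java
public class ThrowableRenderDemo {
    public static void main(String[] args) {
        Exception e = new IllegalStateException("boom");

        // What logger.error(e) effectively logs: Object.toString(), no stack trace.
        System.out.println(e.toString()); // prints "java.lang.IllegalStateException: boom"

        // What logger.error("Something failed", e) logs: the message plus the full stack trace.
        System.out.println("Something failed");
        e.printStackTrace(System.out);
    }
}
```

So if you only have the exception, call logger.error(e.getMessage(), e) (or pass any message string) to keep the stack trace.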

