Undo/Redo in Java using Protostuff serialization and binary diffs

Many applications need Undo/Redo functionality. Commonly used implementation patterns are:

  • Command Pattern
  • Memento Pattern (state snapshots)
  • State diffs

When using the Command Pattern, one encapsulates both the change logic and its reversal in command objects, and implements Undo/Redo by managing stacks of those objects. This approach has its limitations, for example for changes that are unidirectional in nature, such as anything involving randomness or encryption.
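
As a minimal sketch (illustrative only, not code from the library presented below), the pattern boils down to this:

import java.util.ArrayDeque;
import java.util.Deque;

interface Command {
    void execute();
    void undo();
}

class CommandHistory {

    private final Deque<Command> undoStack = new ArrayDeque<>();
    private final Deque<Command> redoStack = new ArrayDeque<>();

    void perform(Command command) {
        command.execute();
        undoStack.push(command);
        redoStack.clear(); // a new change invalidates the redo chain
    }

    void undo() {
        if (!undoStack.isEmpty()) {
            final Command command = undoStack.pop();
            command.undo();
            redoStack.push(command);
        }
    }

    void redo() {
        if (!redoStack.isEmpty()) {
            final Command command = redoStack.pop();
            command.execute();
            undoStack.push(command);
        }
    }
}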

State snapshots save the full state of the edited data as object graphs or some representation thereof. This is also called the Memento Pattern. It often uses serialization and typically compression of the object graph to reduce memory use and ensure immutable snapshots that can also be stored out-of-process, if desired.
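
As a minimal sketch of this idea (using plain JDK serialization plus gzip; the library presented below uses Protostuff instead), a snapshot method could look like this:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.zip.GZIPOutputStream;

public class Snapshots {

    /** Creates an immutable, compressed memento of the given state */
    public static byte[] snapshot(Serializable state) throws IOException {
        final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out =
                     new ObjectOutputStream(new GZIPOutputStream(bytes))) {
            out.writeObject(state);
        }
        return bytes.toByteArray();
    }
}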

State diffs are based on the idea of state snapshots, but only store the difference between states. This can vastly reduce the memory consumption of your Undo/Redo history. It is based on diffing algorithms that compute the delta between two states (or their mementos) and allow Undo/Redo by applying the deltas as patches against a given state. A disadvantage is that jumping to a distant state involves a whole chain of patch applications. But it is a good approach when the user mainly navigates the Undo/Redo history sequentially.
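
Conceptually, the undo stack then holds deltas instead of full snapshots. Here is a rough sketch; the BinaryDiff interface is a hypothetical stand-in for a binary diff/patch library like JavaxDelta, not its actual API:

import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical abstraction over a binary diff/patch library */
interface BinaryDiff {
    byte[] diff(byte[] from, byte[] to);    // delta that turns 'from' into 'to'
    byte[] patch(byte[] from, byte[] delta);
}

class DiffingUndoStack {

    private final BinaryDiff differ;
    private final Deque<byte[]> undoDeltas = new ArrayDeque<>();
    private byte[] current;

    DiffingUndoStack(BinaryDiff differ, byte[] initialState) {
        this.differ = differ;
        this.current = initialState;
    }

    void push(byte[] newState) {
        // Store only the delta needed to get back to the previous state
        undoDeltas.push(differ.diff(newState, current));
        current = newState;
    }

    byte[] undo() {
        current = differ.patch(current, undoDeltas.pop());
        return current;
    }
}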

A highly reusable implementation of Undo/Redo using State Diffs is available at my github account: https://github.com/odoepner/diffing-history

It uses the following Open Source libraries:

  • Protostuff for object graph serialization using runtime schema
  • JavaxDelta for binary diffing and patching

It provides the following features:

  • Unlimited Undo and Redo
  • Can handle any type of Java objects
  • Low memory footprint
  • Straightforward type-safe API
  • Supports stack size listeners
  • Gzip compression for the serialized current state

It is Open Source under the Unlicense.

Usage

The main API is the History interface.
Create an instance of DiffingHistory to get started.
The DiffingHistoryTest calls all History methods and illustrates the API.
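
For illustration, basic usage might look roughly like this. This is only a sketch: the Document type is made up and the method names are assumptions based on the feature list above, so consult DiffingHistoryTest for the actual API.

// All names below are assumptions, not the verified API of diffing-history
History<Document> history = new DiffingHistory<>(document);

document.setText("Hello");
history.add(document);   // record the changed state (assumed method name)

history.undo();          // back to the previous state
history.redo();          // forward again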

Continuous delivery using github, travis-ci and bintray

[Diagram: continuous delivery schema]

Let’s say you work on a Java application and want to frequently make it available for download so that users can easily try the latest version.

Let’s say you work primarily on your laptop or personal computer using a Java IDE and commit code changes, but you don’t want to spend time manually building jars, packaging war or zip files, testing your application or uploading files to a website, etc.

Instead you want to have a fully automated process that compiles your source code, runs automated tests and other quality control mechanisms, builds your application and uploads the result to a public website.

But you don’t want to install any infrastructure for this or run anything besides Java and your IDE on your own machine(s).

Basically you want to use developer-friendly, reliable cloud services, but you don’t want to pay a single cent.

All of this is possible, as long as your code is Open Source:

  • Host your source code on github
  • Let travis-ci run your build process
  • Let travis-ci upload the build result to bintray

For details, you can take a look at one of my github projects.

The relevant config files are the project’s .travis.yml and the Bintray descriptor it references; a sketch of what the deploy section can look like follows below.
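
For illustration only (not copied from my actual project), a .travis.yml using the Travis CI Bintray deploy provider might look roughly like this; the user name and descriptor file name are placeholders:

language: java
jdk: oraclejdk8

deploy:
  provider: bintray
  file: bintray-descriptor.json  # describes which build artifacts to upload
  user: yourbintrayuser          # placeholder
  key:
    secure: "<Bintray API key, encrypted with 'travis encrypt'>"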

JBoss Undertow is pulling me in … :o)

I am very impressed as I am trying out the various code examples for Undertow, a kick-ass, light-weight yet powerful, ultra-easy-to-embed HTTP and Java Servlet engine.

One of my side projects requires an embeddable yet feature-complete Java HTTP engine with a low memory footprint and a simple, straightforward API. I dismissed Tomcat, briefly considered Jetty, found Winstone too old and unmaintained, simpleframework not well enough documented, and vert.x and netty a little too much for my purposes and/or too complicated. So a few weeks ago I had actually started to clone and refactor NanoHttpd.

The NanoHttpd refactoring was a great learning experience, but it certainly felt like I was reinventing the wheel in the form of a cute and mobile but slightly rusty foldable unicycle. ;o) – no offense please, nanohttpd developers

Then I found out about Undertow. The author Stuart Douglas is now officially my hero. What an awesome job he is doing! The server meets all of the above mentioned requirements and is apparently also comparatively fast. No wonder it is the HTTP engine used by Wildfly, the new JBoss AS.

Anyway, if you want to try it yourself, I’d go with version 1.1 final at this time, i.e. this in your Maven pom.xml:

<dependency>
    <groupId>io.undertow</groupId>
    <artifactId>undertow-core</artifactId>
    <version>1.1.0.Final</version>
</dependency>

I decided to pretty much ignore the documentation section of the undertow.io website for now, as it is still for version 1.0 and the API has changed – improved, I guess – since then. It seems to me that at this point the core code itself and the usage examples are the best documentation for version 1.1. Both are Maven modules of the undertow github project.
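
Just to give a taste, here is a minimal embedded hello-world server, along the lines of the examples module (the lambda handler requires Java 8):

import io.undertow.Undertow;
import io.undertow.util.Headers;

public class HelloServer {

    public static void main(String[] args) {
        // Build a server with one HTTP listener and one handler
        final Undertow server = Undertow.builder()
                .addHttpListener(8080, "localhost")
                .setHandler(exchange -> {
                    exchange.getResponseHeaders()
                            .put(Headers.CONTENT_TYPE, "text/plain");
                    exchange.getResponseSender().send("Hello World");
                })
                .build();
        server.start();
    }
}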

By the way, if you are wondering why the project has no issues section on github: The issue tracking is done in the JBoss Jira.

Build Java Maven github project on travis-ci

Update 14/Aug/2018: I no longer use an FTP space for the build artifacts, instead I use bintray.

I used to use Cloudbees’ buildhive for continuous builds of my Java/Maven based github projects. But buildhive currently does not offer JDK 8. That hadn’t been a problem until I recently started using lambdas, default methods in interfaces and other Java 8 goodness. Now buildhive does not work for me anymore.

So I looked for alternatives and tried travis-ci.org. It was easy enough to set up a free account: You just authorize their service through your github login. Then all your projects will be listed on travis and you just click a switch to enable a build.

.travis.yml with FTP upload

To actually activate a build, you have to add a .travis.yml file at the root of your project.

The builds then happen automatically whenever you commit changes to github. This is the build list for one of my projects.

Using the Maven assembly plugin, my build produces a distributable zip file that contains all the jars and start scripts of my application. I want to make the latest stable version of that zip file available for public download. With buildhive I used the permanent URL of the build artifact within the workspace of the last stable Jenkins build. But travis does not store anything after the build.

To make travis-ci build artifacts available, a deploy step is required. Many cloud storage systems are supported, but I opted for a custom deploy via FTP to my web space at dev.doepner.net.

So to build my Java Maven project with JDK 8 and do the FTP upload of the zip artifact, I ended up with these lines:

language: java
jdk: oraclejdk8

env:
  global:
  - secure: L2lr/F0gIvyVUl0nJ7w9saGV7wZkL6nO61IxilDY/76iTlnhrFXn5Q8vATGbiRYdDW/tG1kyDUbKaWSkYrpV2Agm4wV/KmMg2CWRiIcQPPqwSEENx/1UZ/dBnCQGcRkkYApu5ayjGnX3Srg3ty1zvdud/O8tiKtWkkBDipJSpfY=
  - secure: OekVM5ZyLGHpqurOUWJcq0kKBA78WKZdXaA9aylwrjjQFeVoZxyxeZTYbhLajN4Ggg4Th58QwjUHpwcgZlnsxx4heDo1wyHxXojJd0H1LWKXJwet82IXaFJbl+Yz/htr7uWSFTUF6Szx70cpMxlGe3qsIFlgViEo9UGhHHdrjdY=

after_success:
  ./.travis/artifact-upload.sh

The env – global – secure entries are the encrypted username and password for my FTP server. Details about the encryption steps are at the end of this blog post.

Artifact upload script

The .travis/artifact-upload.sh script performs the actual upload. The .travis directory is in the root of my github project. The script looks like this:

#!/bin/bash

# Upload the zip artifact produced by the Maven assembly plugin
local_file="$(ls "$TRAVIS_BUILD_DIR"/typepad-dist/target/*.zip | head -n 1)"
target_url='ftp://doepner.net/~/public_html/dev/dist/ci-builds/typepad.zip'

echo "Uploading $local_file to $target_url"
curl -u "$FTP_USER:$FTP_PASSWORD" -T "$local_file" "$target_url"

I am only interested in the latest zip and I want the URL to be permanent, that’s why the filename is hardcoded as typepad.zip.

Build status and download links

Similar to buildhive, travis-ci provides nice build status icons that automatically show the current status of your build.

The README.adoc of my github project now contains these build status and download links:

== Build status

image:https://travis-ci.org/odoepner/typepad.svg?branch=master[
link="https://travis-ci.org/odoepner/typepad"]

http://dev.doepner.net/dist/ci-builds/typepad.zip[Download latest build]

If you are not used to this syntax: It is AsciiDoc, not the default Markdown format of github READMEs.

Encryption of FTP credentials

Travis supports encryption of environment variables. This makes sense, because you probably don’t want to expose your FTP username/password to the world.

To perform the encryption, a local travis command-line installation is required. On Debian it can be set up like this:

1) Install JRuby (but not Rails)
2) gem install travis
3) cd to local working copy of your project
4) travis encrypt FTP_USER=yourusername --add
5) travis encrypt FTP_PASSWORD=yourpassword --add

The --add option tells the travis command to add the resulting config directly to the .travis.yml file in your project directory. That’s why you first need to cd to the base dir of your project.

Jenkins Maven builds on OpenShift

Short version: If you want proper Maven builds with Jenkins on OpenShift, please vote for change request JENKINS-19844.

Full story:

Today I installed Jenkins on my OpenShift account to use it as Maven release build server for some of my Java based github projects. I ran into various obstacles and partially misleading information.

Installing the Jenkins “cartridge” on the OpenShift web console was the easiest part.

Then I logged into my new Jenkins using the auto-generated “admin” login. I created a “New Item” to “Build a maven2/3 project”, i.e. a new Maven build job, and configured it: Selected “Git” SCM and pasted the github URL of the project I want to build.

At first all “Build Now” attempts failed silently, until I realized I had to go into “Manage Jenkins” – “Configure System” page to change the “# of executors” from 0 to 1.

Next, the Maven installation was not found. I set up ssh access to my OpenShift Jenkins (pasted the contents of ~/.ssh/id_rsa.pub from my Linux laptop into the web console, then found the ssh hostname to connect to) and ran “find /usr -name mvn” on the host, which located a Maven installation at /usr/share/java/apache-maven-3.0.4. I entered this in the “Maven installation” section on the Jenkins “Configure System” page.

Now I got at least some “Console output” when I clicked “Build Now” and navigated to the page of that build. The next error, however, has so far been a blocker for me. It is described here and seems to be a limitation of the Maven agent binding address in Jenkins.

I found several blogs recommending the “free-style” Jenkins job type as a workaround, instead of “maven2/3 project”. But that has many limitations and is not an acceptable solution for me.

Finally I noticed that the issue has already been reported in 2013 as JENKINS-19844 “Maven agent socket bind too inflexible (allow Jenkins in virtualized environment)”, but was closed by mistake due to a mix-up of JIRA issue numbers (19844 vs 19884).

I used my account at jenkins-ci.org and reopened the Jenkins issue. Now I can only hope that someone from the Jenkins committer team will care enough about this and apply the suggested code changes. Then we have to wait until OpenShift provides a Jenkins version that contains the fix.

Additional Note: I also read about other issues with Maven on OpenShift, e.g. Jenkins having no write access to ~/.m2/repository. I could not verify those problems but they seem to be fixable in ~/.m2/settings.xml, using $OPENSHIFT_DATA_DIR. Via ssh, I was able to create and edit ~/.m2/settings.xml.
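
For illustration, a minimal ~/.m2/settings.xml along these lines should redirect the local repository to the writable data directory (a sketch assuming the standard ${env.*} property syntax; the exact path layout on OpenShift may differ):

<settings>
    <!-- Sketch: point Maven's local repository at the writable
         OpenShift data directory instead of the default ~/.m2 -->
    <localRepository>${env.OPENSHIFT_DATA_DIR}/m2/repository</localRepository>
</settings>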

Java software engineering – reference resources

Official Java and JEE

  • Java Technology Reference
  • Java Standard Edition (Java SE)
  • Java Enterprise Edition (JEE)
  • The official Java tutorials
  • The official JEE 7 tutorial
  • JEE 7 Technologies index
  • Java language spec and JVM spec

Java community

  • Oracle Java community
  • OpenJDK
  • Java Community Process (JCP)
  • Apache Commons
  • Apache.org Java projects
  • JBoss.org
  • Spring
  • Google Guava
  • Trending Java projects on github

JEE and Java web servers

  • Apache Tomcat
  • JBoss Wildfly
  • Glassfish

Build and test automation

  • Sonatype Maven books
  • Jenkins documentation (wiki)
  • JUnit reference documentation

Source and version control

  • The SVN reference book
  • Git reference documentation

Java IDEs

  • IntelliJ IDEA documentation
  • Eclipse documentation
  • Netbeans knowledge base
  • Vim configuration for Java coding

Using libgdx for cross-platform app development

I am looking for a framework that allows me to develop modern apps (mobile, web, desktop) all from one Java codebase. I prefer Java because I know it very well, it is already cross-platform, and as a statically typed language it lets IDEs like IntelliJ, Eclipse and Netbeans offer far better tooling than any editor for a dynamically typed scripting language ever could.

Currently my favorite is libgdx. I am planning to use it with IntelliJ Community Edition and with Maven.

By using RoboVM, libgdx even supports iOS.

For user input (forms) libgdx provides the scene2d.ui widgets (sketched below). I hope that will be sufficient for most of my UIs. Now I just have to get OpenGL to work on my Linux box …
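
To give an idea of what a scene2d.ui form involves, here is a rough sketch (assuming a uiskin.json skin file like the one shipped with the libgdx tests; API details may vary between versions):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.ui.Skin;
import com.badlogic.gdx.scenes.scene2d.ui.Table;
import com.badlogic.gdx.scenes.scene2d.ui.TextButton;
import com.badlogic.gdx.scenes.scene2d.ui.TextField;

public class FormApp extends ApplicationAdapter {

    private Stage stage;

    @Override
    public void create() {
        stage = new Stage();
        Gdx.input.setInputProcessor(stage); // route input events to the UI

        final Skin skin = new Skin(Gdx.files.internal("uiskin.json"));

        // Lay out a text field and a button in a table
        final Table table = new Table();
        table.setFillParent(true);
        table.add(new TextField("", skin)).width(200f).row();
        table.add(new TextButton("OK", skin));

        stage.addActor(table);
    }

    @Override
    public void render() {
        stage.act(Gdx.graphics.getDeltaTime());
        stage.draw();
    }
}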

Java map comparison with detailed error messages using Guava

When you compare two Java maps that are supposed to be equal, i.e. contain the same name/value pairs, you might want to give some details about potential mismatches, for example in your log output.

The Guava library from Google provides a convenient tool for that, namely the class com.google.common.collect.MapDifference.

In the sample code below I have implemented a simple utility method that compares two maps and logs detailed error messages if they are not equal.

This code is also the first item in my guava-based github project.

Maven dependencies

Put this into pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>16.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.5</version>
</dependency>

You will also need an slf4j implementation, like logback. For testing, we can use this:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.5</version>
</dependency>

And for the JUnit tests further below you will need this:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.10</version>
    <scope>test</scope>
</dependency>

Java class

Save this as src/main/java/net/doepner/util/MapDiffUtil.java:

package net.doepner.util;

import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.common.collect.MapDifference;
import com.google.common.collect.Maps;

import static com.google.common.collect.MapDifference.ValueDifference;

/**
 * Map comparison with detailed log messages
 */
public class MapDiffUtil {

    private static final Logger log =
        LoggerFactory.getLogger(MapDiffUtil.class);

    public static <K, V> boolean validateEqual(
               Map<K, V> map1, Map<K, V> map2,
               String map1Name, String map2Name) {

        final MapDifference<K, V> diff = Maps.difference(map1, map2);

        if (diff.areEqual()) {
            log.info("Maps '{}' and '{}' contain exactly the same "
                   + "name/value pairs", map1Name, map2Name);
            return true;

        } else {
            logKeys(diff.entriesOnlyOnLeft(), map1Name, map2Name);
            logKeys(diff.entriesOnlyOnRight(), map2Name, map1Name);
            logEntries(diff.entriesDiffering(), map1Name, map2Name);
            return false;
        }
    }

    private static <K, V> void logKeys(
                Map<K, V> mapSubset, String n1, String n2) {
        if (not(mapSubset.isEmpty())) {
            log.error("Keys found in {} but not in {}: {}",
                n1, n2, mapSubset.keySet());
        }
    }

    private static <K, V> void logEntries(
                Map<K, ValueDifference<V>> differing, 
                String n1, String n2) {
        if (not(differing.isEmpty())) {
            log.error("Differing values found {key={}-value,{}-value}: {}",
                        n1, n2, differing);
        }
    }

    private static boolean not(boolean b) {
        return !b;
    }
}

Unit tests

Save this as src/test/java/net/doepner/util/MapDiffUtilTest.java:

package net.doepner.util;

import java.util.HashMap;
import java.util.Map;

import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

/**
 * Tests MapDiffUtil
 */
public class MapDiffUtilTest {

    private final Logger log = LoggerFactory.getLogger(getClass());

    @Rule
    public TestName testName = new TestName();

    @Before
    public void logTestName() {
        log.info("Executing {}", testName.getMethodName());
    }

    @Test
    public void testEqual() {
        final Map<String, Integer> map1 = new HashMap<String, Integer>();
        map1.put("A", 1);
        map1.put("B", 2);

        final Map<String, Integer> map2 = new HashMap<String, Integer>();
        map2.put("B", 2);
        map2.put("A", 1);

        assertTrue("Maps should be equal", MapDiffUtil.validateEqual(
            map1, map2, "map1", "map2"));
    }

    @Test
    public void testSubset() {
        final Map<String, Integer> map1 = new HashMap<String, Integer>();
        map1.put("A", 1);

        final Map<String, Integer> map2 = new HashMap<String, Integer>();
        map2.put("B", 2);
        map2.put("A", 1);

        assertFalse("Maps should be unequal", MapDiffUtil.validateEqual(
            map1, map2, "map1", "map2"));

    }

    @Test
    public void testSeparate() {
        final Map<String, Integer> map1 = new HashMap<String, Integer>();
        map1.put("A", 1);

        final Map<String, Integer> map2 = new HashMap<String, Integer>();
        map2.put("B", 2);

        assertFalse("Maps should be unequal", MapDiffUtil.validateEqual(
            map1, map2, "map1", "map2"));
    }

    @Test
    public void testMismatches() {
        final Map<String, Integer> map1 = new HashMap<String, Integer>();
        map1.put("A", 1);
        map1.put("B", 2);

        final Map<String, Integer> map2 = new HashMap<String, Integer>();
        map2.put("B", 20);
        map2.put("C", 3);

        assertFalse("Maps should be unequal", MapDiffUtil.validateEqual(
            map1, map2, "map1", "map2"));

    }
}

Transparently improve Java 7 mime-type recognition with Apache Tika

Java 7 comes with the method java.nio.file.Files#probeContentType(path) to determine the content type of a file at the given path. It returns a mime type identifier. The implementation actually looks at the file content and inspects so-called “magic” byte sequences, which is more reliable than just trusting filename extensions.

However, the default implementation included in Java 7 seems to be platform dependent and not very complete. For example, for me it did not even recognize an mp3 file as audio/mpeg. Fortunately, the Open Source library Apache Tika provides more comprehensive mime type detection and seems to be platform independent.

As shown below, you can register a simple Tika-based FileTypeDetector implementation with the Java Service Provider Interface (SPI) to transparently enhance the behaviour of java.nio.file.Files#probeContentType(path). As soon as the resulting jar is on your classpath, the SPI mechanism will pick up the implementation class and Files.probeContentType(..) will automatically use it behind the scenes.

Maven dependency

<dependency>
    <groupId>org.apache.tika</groupId>
    <artifactId>tika-core</artifactId>
    <version>1.4</version>
</dependency>

FileTypeDetector.java

package net.doepner.file;

import java.io.IOException;
import java.nio.file.Path;

import org.apache.tika.Tika;

/**
 * Detects the mime type of files (ideally based on marker in file content)
 */
public class FileTypeDetector extends java.nio.file.spi.FileTypeDetector {

    private final Tika tika = new Tika();

    @Override
    public String probeContentType(Path path) throws IOException {
        return tika.detect(path.toFile());
    }
}

Service Provider registration

To register the implementation with the Java Service Provider Interface (SPI), you need to have a plaintext file /META-INF/services/java.nio.file.spi.FileTypeDetector in the same jar that contains the class net.doepner.file.FileTypeDetector. The text file contains just one line with the fully qualified name of the implementing class:

net.doepner.file.FileTypeDetector

With Maven, you simply create the file src/main/resources/META-INF/services/java.nio.file.spi.FileTypeDetector containing the line shown above.

See the ServiceLoader documentation for details about Java SPI.
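
To see the effect, a quick check might look like this (the mp3 path is just an example):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ProbeContentTypeDemo {

    public static void main(String[] args) throws Exception {
        // With the Tika-based detector jar on the classpath, this should
        // now print "audio/mpeg" instead of null or a wrong guess
        final Path path = Paths.get("song.mp3");
        System.out.println(Files.probeContentType(path));
    }
}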