OpenJDK builds for Windows now available from Red Hat

As I mentioned in an earlier post, officially supported OpenJDK builds for non-Linux platforms have been notoriously hard to come by in the past, at least until Azul started their Zulu builds in 2013. Unofficial community builds are also available from the ojdkbuild project on GitHub.

Today Red Hat announced that their OpenJDK offerings now include builds for the Windows platform as well.

After Google decided to use OpenJDK in Android N, I guess this is another strong indicator of OpenJDK’s value and increasingly wide adoption.

JEE Guardians petition Oracle to actively work on Enterprise Java standards again

Over the last 6 months or so, development on the Oracle-led Java EE 8 JSRs has nearly come to a standstill. Even some spec leads working for Oracle have privately admitted that they cannot do their part because Oracle has given them other priorities.

That is why the JEE Guardians group was formed by the community, and that’s why I just signed this petition: “Larry Ellison: Tell Oracle to Move Forward Java EE as a Critical Part of the Global IT Industry”.

If you care about the future of Enterprise Java, please get involved and sign the petition, too.

JVM tips – The G1 Garbage Collector

An old adage says that software can be optimized for latency, throughput, or footprint. The same is true for the JVM and its garbage collector(s).

Roughly speaking, the classic GC implementations each optimize for one aspect: the Serial GC optimizes for footprint, the Parallel GC for throughput, and Concurrent Mark Sweep (CMS) for response times, i.e. minimal GC-induced latency.

But since JDK 7u4, we officially have the “Garbage-First” (G1) GC. It is still new enough to not even have its own Wikipedia article, but there are good introductory tutorials, articles and tuning guides.

In several ways, G1 is a step up from the conventional GC approaches: It uses non-contiguous heap regions instead of contiguous young and old generations and does most of its reclamation through copying of the live data, thus achieving compaction.

It is based on the principle of collecting the most garbage first and designed with scalability in mind, without compromising throughput.

The benefits of G1 have led to a proposal and lively debate about Defaulting to G1 Garbage Collector in Java 9.

In conclusion, you can either take the easy path and use the default JVM settings or take some time to learn about modern GC choices and tuning options.
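If you want to experiment, G1 is enabled with the -XX:+UseG1GC flag, and a pause time goal can be requested via -XX:MaxGCPauseMillis. As a quick, purely illustrative sketch (not taken from any of the linked guides), the following prints which collectors the running JVM actually uses, along with their basic statistics:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInfo {
    public static void main(String[] args) {
        // Lists the collectors of the running JVM, e.g. "G1 Young Generation"
        // and "G1 Old Generation" when started with -XX:+UseG1GC
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", time=" + gc.getCollectionTime() + "ms");
        }
    }
}

Run it once with the default settings and once with -XX:+UseG1GC to see the difference.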

And if you get it all right, you might be rewarded with your Java-based services performing better than ever before … :)

DBeaver – My new favorite DB tool

I have used Toad for Oracle and Oracle SQL Developer. Those are both good for working with Oracle databases.

However, I generally prefer Open Source tools and ideally something that works with other databases as well.

So I looked around and tried TOra, but found it buggy and too limited. Its development also seems quite slow (see the commit history).

Then I came across DBeaver and liked it a lot. It is actively developed; the latest version, 3.5.1, was released just 3 days ago.

It is a cross-platform tool (Windows, Linux, MacOS, other Unixes) written in Java, uses the Eclipse framework for a lot of great out-of-the-box features and is overall quite polished.

It supports many databases via JDBC. More details and some comparison with similar tools are mentioned on its About page.

[Screenshot: DBeaver splash screen]

Configure IntelliJ to use the default Eclipse Java import layout

Eclipse and IntelliJ use different default layouts for Java imports. If both are used on the same project, Eclipse’s “Organize Imports” will compete with IntelliJ’s “Optimize Imports”.

To avoid distracting back-and-forth code changes, IntelliJ can be configured to match the default Eclipse behavior:

Go to File – Settings – Editor – Code Style – Java – Imports tab

Prevent on-demand imports (i.e. wildcards) by setting high count limits:
[Screenshot: IntelliJ import settings with high class count and names count limits]

Define the imports layout (i.e. grouping and order) like this:
[Screenshot: IntelliJ import layout configured to match the Eclipse defaults]
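For reference, with that layout the imports of a typical class end up grouped by the Eclipse default prefixes java, javax, org and com (anything else goes last), separated by blank lines – roughly like this (the com.example class is just a made-up placeholder):

import java.util.List;

import javax.sql.DataSource;

import org.slf4j.Logger;

import com.example.myapp.UserService; // placeholder for a class from your own project

public class UserRepository {
    // ...
}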

Zulu – Certified OpenJDK 8 builds for all operating systems

You might have heard that Java is Open Source. And then you noticed that the Java SE downloads from the Oracle website are not actually Open Source. Maybe you also heard about OpenJDK.

So how does this fit together?

OpenJDK is an Open Source implementation of Java, and Oracle’s Java engineers do their work on Java within the OpenJDK community and the OpenJDK projects.

But source code needs to be compiled into executable binaries to be useful for end users. And that’s where things get dicey …

Where to find OpenJDK builds

For a long time there has been no reliable source for certified, well-supported builds of OpenJDK for all platforms.

The various GNU/Linux distributions, like Fedora, Debian, etc., have provided OpenJDK builds for quite a while now, but for Windows and MacOS there were only some unofficial, often outdated hobby projects without reliable security updates.

Zulu – OpenJDK builds

[Image: Zulu Duke logo]

This changed within the last 2 years: JVM vendor Azul Systems first released their “Zulu” line of free OpenJDK builds in September 2013, mainly targeting Windows Servers and the Microsoft Azure cloud. In 2014 they added support for Linux, MacOS and Java 8, as well as Docker images. All Zulu builds are certified against the official Java SE TCK. The focus is on the JDK and servers, without browser plugin or webstart.

The Azul website does not clearly state the security update policy for their free builds, but they offer deb and rpm package repositories that seem to contain the latest OpenJDK builds matching the current Oracle JDK update versions. Also, their engineers participate in the community and allegedly contribute back to OpenJDK.

Zulu – OpenJDK 8 for Debian stable

For Debian stable (Wheezy or Jessie), Azul is a convenient way to install OpenJDK 8, since the Debian openjdk-8 package is currently only available in Debian unstable and hasn’t even made it into Debian testing yet.

Here is how I set up the Azul deb repo and installed their OpenJDK 8:

# Import Azul's package signing key
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0x219BD9C9

# Add the Zulu apt repository
apt_source='deb http://repos.azulsystems.com/debian stable main'
apt_list='/etc/apt/sources.list.d/zulu.list'
echo "$apt_source" | sudo tee "$apt_list" > /dev/null

# Install the Zulu build of OpenJDK 8
sudo apt-get update
sudo apt-get install zulu-8

Please note that the package installation automatically sets the Java-related system alternatives to the Zulu ones. So right after installing zulu-8, the java version on your system path will be something like this:

oliver@basement:~$ java -version
openjdk version "1.8.0_45"
OpenJDK Runtime Environment (Zulu 8.7.0.5-linux64) (build 1.8.0_45-b14)
OpenJDK 64-Bit Server VM (Zulu 8.7.0.5-linux64) (build 25.45-b02, mixed mode)

JBoss Undertow is pulling me in … :o)

I am very impressed as I am trying out the various code examples for Undertow, a kick-ass, light-weight yet powerful, ultra-easy-to-embed HTTP and Java Servlet engine.

One of my side projects requires an embeddable yet feature-complete Java HTTP engine with a low memory footprint and a simple, straightforward API. I dismissed Tomcat, briefly considered Jetty, found Winstone too old and unmaintained, simpleframework not documented well enough, and vert.x and netty a little too much for my purposes and/or too complicated. So a few weeks ago I had actually started to clone and refactor NanoHttpd.

The NanoHttpd refactoring was a great learning experience, but it certainly felt like I was reinventing the wheel in the form of a cute and mobile but slightly rusty foldable unicycle. ;o) – no offense please, nanohttpd developers

Then I found out about Undertow. The author, Stuart Douglas, is now officially my hero. What an awesome job he is doing! The server meets all of the above-mentioned requirements and is apparently also comparatively fast. No wonder it is the HTTP engine used by WildFly, the new JBoss AS.

Anyway, if you want to try it yourself, I’d go with version 1.1 Final at this time, i.e. put this in your Maven pom.xml:

<dependency>
    <groupId>io.undertow</groupId>
    <artifactId>undertow-core</artifactId>
    <version>1.1.0.Final</version>
</dependency>

I decided to pretty much ignore the documentation section of the undertow.io website for now, as it is still for version 1.0 and the API has changed – improved, I guess – since then. It seems to me that at this point the core code itself and the usage examples are the best documentation for version 1.1. Both are Maven modules of the undertow GitHub project.
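To give an idea of how little code is needed, this is roughly what a minimal embedded server looks like with the 1.1 core API (port and response text are of course arbitrary):

import io.undertow.Undertow;
import io.undertow.util.Headers;

public class HelloServer {

    public static void main(String[] args) {
        // One HTTP listener plus a trivial handler, written as a Java 8 lambda
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "localhost")
                .setHandler(exchange -> {
                    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                    exchange.getResponseSender().send("Hello from Undertow");
                })
                .build();
        server.start();
    }
}

Running the main method starts the server right away, and http://localhost:8080 then answers with the plain text response.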

By the way, if you are wondering why the project has no issues section on GitHub: the issue tracking is done in the JBoss Jira.

Is JSP an unsupported, deprecated part of JEE?

Recently, someone claimed that JavaServer Pages (JSP) is an “unsupported”, kind of “deprecated” technology and that JavaServer Faces (JSF) is the superior current standard.

I responded that JSP is a solid base standard, not deprecated, just not much advertised anymore. In combination with JSTL core logic tags it is still a reasonably powerful option, suitable for straightforward request-oriented web applications.

For example, JSP tag files (pure JSP-based tags, no Java coding required) are a great templating feature for easy reuse of view structure that many Java web developers don’t even know about, because they were only added in JSP 2.0 – as part of JEE 1.4, when the newly released JSF was getting all the hype.

JSP is “stable” in the sense that no significant features have been added in recent years. It is hard to find information on what was actually new in JSP 2.3 versus JSP 2.2. Some might say that it is borderline unmaintained because of this lack of changes. Others might just like it as is.

I think JSP is for web applications sort of what the JDBC API is for persistence, whereas JSF is sort of what the JPA standard is: More elaborate, higher-level features, multiple implementations of the standard by different vendors, added (sometimes nerve-wracking) complexity.

On the other hand, the JEE documentation side of things looks pretty bad for JSP (and ironically also for JDBC):

The JEE 5 tutorial still had a long, comprehensive chapter about all aspects of JSP.

The JEE 6 and JEE 7 tutorials don’t have those chapters anymore and mention JSP only in passing. The documentation focus is clearly on JSF.

The omission of JSP from those JEE tutorials led some folks on Stack Overflow to ask “Where’s the official JSP tutorial?”, with some interesting answers.

Regarding non-“deprecation” of JSP, please note that the JEE 7 technologies page clearly lists JSP 2.3 as part of the JEE standard.

The reference documentation for JSP 2.3 is the detailed spec of JSR 245, version 2.3, maintenance release 2, available as PDF from the JSP site.

The JSTL 1.2 spec is also available as PDF from its JCP page.

Side note: Maven Central has a javax.servlet.jsp-api artifact, with the most recent version strangely marked as 2.3.2-b01, as if it were a “beta” version.

So in conclusion: Certainly there is no buzz around JSP, rather complete silence. You can call that silence a symptom of death or see it as a sign that there is no need or interest to change this still widely used technology anymore.

How does Java’s Object.wait() really work?

Recently, I came across the question of whether Object.wait() uses an “infinite loop”.

I didn’t know the answer, but since the JDK is Open Source, I thought I could at least find out where the code is and look at it.

So, taking JDK 7 as an example, these are the steps through the OpenJDK sources:

  1. In Object.java, wait() calls the native method wait(long timeout) via wait(0).
  2. The native code for wait in Object.c actually uses JVM_MonitorWait.
  3. The JVM wrapper method JVM_MonitorWait calls ObjectSynchronizer::wait.
  4. ObjectSynchronizer::wait calls wait() on an ObjectMonitor.
  5. Finally ObjectMonitor::wait does the real work.

I don’t fully understand that code, but I certainly don’t see an infinite loop in there. :o)
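As a side note on the Java level (as opposed to the VM internals above): the only loop you normally see is the one you write yourself, i.e. the guarded wait idiom, because wait() can return due to spurious wakeups or a notifyAll() for a condition that does not apply to your thread. A minimal sketch:

class ReadyFlag {

    private final Object lock = new Object();
    private boolean ready; // the condition we are waiting for

    void awaitReady() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {   // loop, not if: re-check the condition after every wakeup
                lock.wait();   // releases the monitor and blocks until notified or interrupted
            }
        }
    }

    void markReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();  // wake up all threads waiting on this monitor
        }
    }
}

Both wait() and notifyAll() must be called while holding the monitor of the same object, otherwise an IllegalMonitorStateException is thrown.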

Build a Java Maven GitHub project on travis-ci

Update 14/Aug/2018: I no longer use an FTP space for the build artifacts; instead I use Bintray.

I used to use CloudBees’ buildhive for continuous builds of my Java/Maven based GitHub projects. But buildhive currently does not offer JDK 8. For a while that wasn’t a problem, but I recently started using lambdas, default methods in interfaces and other Java 8 goodness. And now buildhive does not work for me anymore.

So I looked for alternatives and tried travis-ci.org. It was easy enough to set up a free account: you just authorize their service through your GitHub login. Then all your projects are listed on Travis and you just click a switch to enable a build.

.travis.yml with FTP upload

To actually activate a build, you have to add a .travis.yml file at the root of your project.

The builds then happen automatically whenever you push changes to GitHub. This is the build list for one of my projects.

My build uses the Maven assembly plugin to produce a distributable zip file that contains all the jars and start scripts of my application. I want to make the latest stable version of that zip file available for public download. With buildhive I used the permanent URL of the build artifact within the workspace of the last stable Jenkins build. But Travis does not store anything after the build.

To make travis-ci build artifacts available, a deploy step is required. Many cloud storage systems are supported, but I opted for a custom deploy via FTP to my web space at dev.doepner.net.

So to build my Java Maven project with JDK 8 and do the FTP upload of the zip artifact, I ended up with these lines:

language: java
jdk: oraclejdk8

env:
  global:
  - secure: L2lr/F0gIvyVUl0nJ7w9saGV7wZkL6nO61IxilDY/76iTlnhrFXn5Q8vATGbiRYdDW/tG1kyDUbKaWSkYrpV2Agm4wV/KmMg2CWRiIcQPPqwSEENx/1UZ/dBnCQGcRkkYApu5ayjGnX3Srg3ty1zvdud/O8tiKtWkkBDipJSpfY=
  - secure: OekVM5ZyLGHpqurOUWJcq0kKBA78WKZdXaA9aylwrjjQFeVoZxyxeZTYbhLajN4Ggg4Th58QwjUHpwcgZlnsxx4heDo1wyHxXojJd0H1LWKXJwet82IXaFJbl+Yz/htr7uWSFTUF6Szx70cpMxlGe3qsIFlgViEo9UGhHHdrjdY=

after_success:
  ./.travis/artifact-upload.sh

The env – global – secure entries are the encrypted username and password for my FTP server. Details about the encryption steps are at the end of this blog post.

Artifact upload script

The .travis/artifact-upload.sh script performs the actual upload. The .travis directory is in the root of my GitHub project. The script looks like this:

#!/bin/bash

# Pick the first (and only) zip produced by the Maven assembly plugin
local_file="$(ls "$TRAVIS_BUILD_DIR"/typepad-dist/target/*.zip | head -n 1)"
target_url='ftp://doepner.net/~/public_html/dev/dist/ci-builds/typepad.zip'

# FTP_USER and FTP_PASSWORD are decrypted by travis-ci from the secure env entries
echo "Uploading $local_file to $target_url"
curl -u "$FTP_USER:$FTP_PASSWORD" -T "$local_file" "$target_url"

I am only interested in the latest zip and I want the URL to be permanent; that’s why the filename is hardcoded as typepad.zip.

Build status and download links

Similar to buildhive, travis-ci provides nice build status icons that automatically show the current status of your build.

The README.adoc of my GitHub project now contains these build status and download links:

== Build status

image:https://travis-ci.org/odoepner/typepad.svg?branch=master[
link="https://travis-ci.org/odoepner/typepad"]

http://dev.doepner.net/dist/ci-builds/typepad.zip[Download latest build]

If you are not used to this syntax: it is AsciiDoc, not the default Markdown format of GitHub READMEs.

Encryption of FTP credentials

Travis supports encryption of environment variables. This makes sense, because you probably don’t want to expose your FTP username/password to the world.

To perform the encryption, a local travis command-line installation is required. On Debian it can be set up like this:

1) Install JRuby (but not Rails)
2) gem install travis
3) cd to local working copy of your project
4) travis encrypt FTP_USER=yourusername --add
5) travis encrypt FTP_PASSWORD=yourpassword --add

The --add option tells the travis command to add the resulting config directly to the .travis.yml file in your project directory. That’s why you first need to cd to the base dir of your project.