Controlling Java from Node.js – a simple JSON based RPC protocol

This is an introductory example of creating a simple JSON based protocol for communicating with a Java program from a Node.js JavaScript application. You might want to do this if, for example, there is a Java library that provides some sort of fantastic benefit that you’d have a problem replicating in a Node.js script by itself. In this example, we will only be sending some basic messages back and forth. In a more real-world scenario, and in a future article I plan on writing, you could have messages that do whatever you want on the Java side, so that those actions become controllable from the Node side.

Development Environment

I think it’s worth discussing the development environment I used here for anyone who isn’t familiar with these tools and technologies. Getting set up is simple. You will need to download and install a Java Development Kit for compiling the Java code and Node.js for running your JavaScript code, and you’ll likely want an IDE like NetBeans. I use NetBeans for this sort of development, so you may find it easier to work with this example if you’re also using it. This was written with NetBeans 7.2. I also make use of GitHub for Windows for dealing with the source code repository.

Creating the Java program

The run loop

The application will stay in a loop where it reads the standard input stream for request messages. As it receives messages, it will call the process method on the received request and then go back to reading for more. In my code, this loop is found in the run method of my GsonRpc class. The only message that breaks this pattern is the Goodbye message, which is used to exit the run loop.

        while ((message = RequestMessage.fetchRequest()) != null) {
            if (message instanceof Goodbye) {
                break;
            }
            message.process();
        }

Messaging classes

All messages will inherit from the base Message class. It is the simplest of them as it only has the messageName field that will be the same as the class name, and the Gson instance that will be used for serializing and deserializing the JSON representation of the classes. Inheriting from it are the RequestMessage and ResponseMessage subclasses. These are fairly obvious as to their purpose, and each one adds functionality that assists with making customized requests and responses behave the way we need them to.

Request messages

The RequestMessage class contains two very important functions needed for both receiving JSON strings and converting them into actual Java object instances. Instead of just hard coding it to read from standard input, I’ve gone ahead and made a DataInputStream local variable that we will read from. This will normally be standard input, but we can easily change it to some other source. This is immediately beneficial for writing unit tests, where it makes it far easier to set and reset what is being pumped in.
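To make the reading side concrete, here is a self-contained sketch of pulling one message off a DataInputStream using the wire format covered later in this article (a 4-byte length prefix followed by the UTF-8 JSON bytes). The class and method names here are illustrative, not the article's actual fetchRequest code:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class FrameReader {
    // Reads one message framed as: 4-byte big-endian length + UTF-8 JSON bytes.
    // Illustrative stand-in for the article's real fetchRequest logic.
    static String readFrame(DataInputStream in) {
        try {
            int size = in.readInt();        // DataInputStream reads big-endian
            byte[] payload = new byte[size];
            in.readFully(payload);          // block until the whole message arrives
            return new String(payload, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because the stream is swappable, a unit test can simply wrap a byte array in a ByteArrayInputStream and feed it in, which is exactly the testing benefit described above.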

The most interesting part of the RequestMessage code is where we use the Gson object to actually deserialize the JSON strings. I use a HashMap that contains all the different types of request messages that we support converting to. I could have gone to much more trouble to create some sort of dynamic system that used reflection to figure out the classes, but the HashMap solution was so fast and simple that I think it’s worth leaving it this way for simplicity’s sake.

    public static Message jsonToMessage(String jsonString, HashMap<String, Type> messageMap) {
        // Parse the JSON string to a JSON object
        JsonParser parser = new JsonParser();
        JsonElement element = parser.parse(jsonString);
        JsonObject object = element.getAsJsonObject();
        // Get the messageName from the JSON object
        String messageName = object.get("messageName").getAsString();
        // Convert the name into a class type for Gson to use
        Type messageType = messageMap.get(messageName);
        // If the type name isn't listed then we get a null and can't convert
        if (null == messageType) {
            return null;
        }
        // Convert to a message object and return it
        return _gson.fromJson(element, messageType);
    }
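As a sketch of how that HashMap might be populated, something like the following builds the lookup table. The nested classes here are trivial stand-ins for the article's real request types, just so the example compiles on its own:

```java
import java.lang.reflect.Type;
import java.util.HashMap;

class MessageRegistry {
    // Trivial stand-ins for the article's real request classes
    static class Hello {}
    static class Goodbye {}

    // Map each messageName string to the Type that Gson should deserialize to
    static HashMap<String, Type> buildMap() {
        HashMap<String, Type> map = new HashMap<String, Type>();
        map.put("Hello", Hello.class);
        map.put("Goodbye", Goodbye.class);
        return map;
    }
}
```

An unrecognized messageName simply misses the map and yields null, which is why jsonToMessage can bail out early without any reflection tricks.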

Each request message subclass must override the abstract process method. You can implement whatever sort of logic you want in the request processor. Your logic will likely need to generate one or more response message objects and send them; at a minimum, I recommend some sort of informational acknowledgement. You’ll see how basic the processing is in the example request message objects that I have provided. In a follow-up article, you will see it doing much more useful processing.

The Hello class is the only request class in this example whose code is worth looking at.

public class Hello extends RequestMessage {
    private String version;

    @Override
    public void process() {
        ResponseMessage response;
        if (!GsonRpc.APP_VERSION.equals(version)) {
            response = new Error(new Exception("Client does not match expected version " + GsonRpc.APP_VERSION));
        } else {
            response = new Info("Ready to receive requests.");
        }
        response.send();
    }
}
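To illustrate what another request subclass could look like, here is a hypothetical Echo message. The abstract base below is a minimal stand-in for the article's RequestMessage so the sketch is self-contained; in the real code, process would build an Info response and call send() rather than return a string:

```java
// Minimal stand-in for the article's abstract RequestMessage base class
abstract class RequestStub {
    abstract String process();
}

// Hypothetical Echo request: sends the received text straight back
class Echo extends RequestStub {
    private final String text;

    Echo(String text) {
        this.text = text;
    }

    @Override
    String process() {
        // The real version would do something like: new Info("Echo: " + text).send();
        return "Echo: " + text;
    }
}
```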

Response messages

The ResponseMessage class is so simple that it is hardly worth talking about. It also makes use of the Gson library, but it just calls the toJson method to convert whatever message is being sent into a JSON string, and then outputs the string’s byte length followed by the string bytes. That’s really all that is needed, and it gives a handy send method to each response message object so that they can convert and send themselves easily.
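That byte-length-then-bytes output can be sketched like this. This is an illustrative stand-in for what ResponseMessage's send does, not the actual code; the convenient part is that DataOutputStream writes the 4-byte length big-endian, matching what the Node side later reads with readInt32BE:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class FrameWriter {
    // Frames a JSON string as: 4-byte big-endian length + UTF-8 JSON bytes.
    // Illustrative stand-in for the send logic described above.
    static byte[] frame(String json) {
        try {
            byte[] payload = json.getBytes(StandardCharsets.UTF_8);
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(payload.length); // writeInt is big-endian by contract
            out.write(payload);
            out.flush();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

In the real class you would write to System.out (or whatever output stream is configured) rather than to a byte array.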

As you will also see in the code, the response message objects have little need for anything more than the information that they are sending back. You can put more complex object types in there and Gson can typically handle them. The one really important thing to remember is that if you have something in your response classes that you do not want Gson to encode then you will need to mark it as transient.

The Error class, in my example code, is a good example of how simple they can be while still providing a bit of info. You will see that the full stack trace and all comes across on the Node.js side. Thanks to inheriting the send method, it is little more than a constructor that loads up the data we will send back from an Exception object.

public class Error extends ResponseMessage {
    private String errorMessage;
    private String errorType;
    private StackTraceElement[] stackTrace;

    public Error(Exception error) {
        this.errorMessage = error.getMessage();
        this.errorType = error.getClass().getName();
        this.stackTrace = error.getStackTrace();
    }
}

Using the code from Node.js scripts

For the JavaScript side of things, I have decided to implement the messaging logic in a module that can be require-d in to other scripts so that they don’t have to do the parsing and creating of message objects. Instead, they can just receive messages as events and send messages via function calls. This frees the rest of the application up so that it can just process business logic.

The JavaRpcClient module wraps up the code that invokes the Java application and handles the messaging protocol. It uses the child_process module to spawn the java process and wires in events to listen for standard output and standard error data. When it receives data through standard output, it expects the first 4 bytes of the buffer to give the length of the byte array that follows, which is the JSON string. It then converts the JSON to an object and raises an event that uses the messageName as the name of the event. This allows us to wire up events on the JavaRpcClient object that are named the same as the response messages we will receive.

    function onData(data) {

        // Attach or extend receive buffer
        _receiveBuffer = (null == _receiveBuffer) ? data : Buffer.concat([_receiveBuffer, data]);

        // Pop all messages until the buffer is exhausted
        while (null != _receiveBuffer && _receiveBuffer.length > 3) {
            var size = _receiveBuffer.readInt32BE(0);

            // Early exit processing if we don't have enough data yet
            if ((size + 4) > _receiveBuffer.length) {
                break;
            }

            // Pull out the message
            var json = _receiveBuffer.toString('utf8', 4, (size + 4));

            // Resize the receive buffer
            _receiveBuffer = ((size + 4) == _receiveBuffer.length) ? null : _receiveBuffer.slice(size + 4);

            // Parse the message as a JSON object
            try {
                var msgObj = JSON.parse(json);

                // Emit the generic message received event
                _self.emit('message', msgObj);

                // Emit an object-type specific event
                if ((typeof msgObj.messageName) == 'undefined') {
                    _self.emit('unknown', msgObj);
                } else {
                    _self.emit(msgObj.messageName, msgObj);
                }
            } catch (ex) {
                _self.emit('exception', ex);
            }
        }
    }

This module also exposes functions for sending request messages to the Java program. Each function creates an object and then passes it to the send function to be encoded and written to standard input on the Java child process. Luckily, Node has handy functions on the Buffer class that make this very easy to do.

    function sendMessage(msg) {
        // Convert to json and prepare buffer
        var json = JSON.stringify(msg);
        var byteLen = Buffer.byteLength(json);
        var msgBuffer = new Buffer(4 + byteLen);

        // Write 4-byte length, followed by json, to buffer
        // (note: the length argument must be the byte length, not json.length,
        // or multi-byte UTF-8 characters would be truncated)
        msgBuffer.writeUInt32BE(byteLen, 0);
        msgBuffer.write(json, 4, byteLen, 'utf8');

        // Send buffer to standard input on the java child process
        _child.stdin.write(msgBuffer);

        _self.emit('sent', msg);
    }

The only thing to do then is to make use of the JavaRpcClient module to actually implement some “business” logic. I do this in the test-client.js script. You’ll find where I include the module and create a basic instance of the JavaRpcClient to use. I wire up some basic event handlers so that you can see some things going on. Then at the bottom, there is this little bit of code that shows an example of how it could be used.

// Here's the full client logic for the test
    // Start it up (Hello exchanges happen)
    // Receive acknowledgement of hello
    instance.once('Info', function () {
        // Try echoing something
        // Receive back the echo and quit
        instance.once('Info', function () {
        });
    });

And that’s about it for this example. Give it a try and you should see something like the image below. Let me know, in the comments below, if you do anything interesting with it. I’ll have a follow-up article coming soon where I’ll show an example of doing something much more useful.

[Screenshot: test run of the Node-to-Java RPC client]


4 thoughts on “Controlling Java from Node.js – a simple JSON based RPC protocol”

  1. mohanradhakrishnan

    Interesting, because we want to remove a small RMI server that services print requests from a JBoss JEE application. We saw many RMI DGC threads clogging up the JBoss VM. RMI is not needed for this. Node.js seemed like a good choice, but I found there is no access to printer drivers. Do you think your method is useful here?


    1. Brent Spotswood Post author

      I’ve only been using this for simple tools, scripts, and web apps written in JS/CoffeeScript that run on Node and need access to some Java functionality that just isn’t readily available to me in Node. This might help you out, but perhaps you’re better off looking at an MQ solution. It’s hard to say without testing and considering what kind of factors you’re dealing with. If you’re talking about an internal application that isn’t mission critical, then it wouldn’t hurt to test it out. It should be simple to implement a test with it.

      If you’re wanting to hook this in for some sort of production ticketing system, then you really need to consider volume and frequency and how much you could potentially back up on the pipeline of requests going to the Java component side. Node would be able to drop all the requests it wants on the output stream to the Java process, but that isn’t free of cost and won’t come with any kind of fault tolerance or built-in recovery mechanism like you would get with MQ and a broker service. If that’s too complex, maybe just use a single-threaded print server process that keeps checking a database where you write requests into a table.

      I’m a huge fan of looking for creative solutions where you get to play with new toys (technology), but you really need to study what the requirements are and ensure that you’re meeting everything when it comes to performance, stability, scalability, disaster recovery, maintenance, backup, etc. It’s a matter of finding the right fit for the need, and it sounds like there could be good existing alternatives for your issue if you don’t already have Node in the technology stack.

  2. renexdon

    Hi Brent, I’ve just read your post and it seems to fit what I was looking for. I want to implement a user interface based on Node.js for my app in Java. My Java app communicates with my experimental setup (voltmeters, current sources) over a serial port, and since I have thousands of lines of code invested, I don’t want to recode what I did in any other language. My problem is that I need to implement some user interface in order to control my setup, and Node.js seems to be the best option, with everything on the same localhost machine (a desktop computer). I have the feeling that Swing is not so lightweight, and I don’t like the type of coding it uses with JFrames. Because I’m quite new to the Node.js framework, I would like to ask how I could run your example. Should I run both Node and Java, or does just running the Node file control the whole communication between Java and Node.js? My system environment is Ubuntu 14.04, Node.js v0.10.32, and Java(TM) SE Runtime Environment (build 1.7.0_67-b01) / Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode).
    Thanks in advance for any help,

    1. Brent Spotswood Post author

      The example is a Node script which runs a Java child process itself. It then communicates over the standard IO streams using the simple communication protocol covered in the code and described in the article above. The point of the article was to show how you could wrap up desired Java functionality and then make use of it from the Node side. I’m not sure if this is what you want for your scenario, but the key thing is really the communication and not which process starts the other.

      It might make more sense for your case to have Java run Node as the child process. You’d basically just need to change the Node script side so that it uses the main process’ IO streams (process.stdin and process.stdout) instead of starting up a java child and using the IO streams of the child. You can use something like SailsJS to load up and handle the HTTP side of things in the Node child at that point and you’ll have everything happening in one child.

      There are of course other options as well (use a database, use TCP/IP, use files, etc).

