Implementation of Microkernel architecture using Java OSGI

I would like to share my experience of implementing a microkernel architecture in Java using OSGI (Open Service Gateway Initiative). This approach is an intermediate option between microservice and monolithic architectures: on the one hand, components are separated from each other inside the VM; on the other, inter-component interaction happens without involving the network, which speeds up requests.

Introduction

Image source: O’Reilly

The microkernel architecture divides the program’s functionality into many plugins, each of which provides extensibility, isolation, and separation of functionality. Components are divided into two types: the core and the plugins. The core contains the minimum functionality necessary for the system to work, while the program’s logic is distributed among the plugins. Interaction between plugins is expected to be minimal, which improves the isolation of each component and, in turn, testability and maintainability.

At the same time, the system core needs information about the running modules and how to interact with them. The most common way to solve this is to organize a plugin registry that stores each plugin’s name and its available interfaces.
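For illustration, a minimal registry might look like the sketch below; the Plugin interface and its single method are hypothetical, not taken from any specific project (OSGI provides this role out of the box via its service registry):

import java.util.HashMap;
import java.util.Map;

// Hypothetical plugin contract; a real project would expose richer interfaces.
interface Plugin {
    String name();
}

// A minimal sketch of a plugin registry: the core looks plugins up by name.
class PluginRegistry {
    private final Map<String, Plugin> plugins = new HashMap<>();

    void register(Plugin plugin) {
        plugins.put(plugin.name(), plugin);
    }

    Plugin lookup(String name) {
        return plugins.get(name);
    }
}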

This pattern can be implemented with very different technologies. For example, we can isolate the core and connect plugins by dynamically loading jar files, without any additional isolation.

OSGI approaches plugin isolation by separating each plugin’s code at the classloader level. Each plugin can be loaded by a separate classloader, thereby providing additional isolation. The downside of this solution is potential class conflicts: classes with the same name loaded by different classloaders are treated by the JVM as distinct types and cannot interact.
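This effect is easy to reproduce outside OSGI with plain URLClassLoaders; the jar path and class name below are hypothetical:

import java.net.URL;
import java.net.URLClassLoader;

public class ClassIdentityDemo {
    public static void main(String[] args) throws Exception {
        URL[] path = { new URL("file:/tmp/plugin.jar") }; // hypothetical jar
        // Two loaders with no parent delegation, loading the same jar.
        ClassLoader first = new URLClassLoader(path, null);
        ClassLoader second = new URLClassLoader(path, null);
        Class<?> a = first.loadClass("com.example.Api");  // hypothetical class
        Class<?> b = second.loadClass("com.example.Api");
        // Same bytecode, different loaders: the JVM sees two distinct types,
        // so an instance of one cannot be cast to the other.
        System.out.println(a == b); // false
    }
}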

As a high-level solution, you can consider Apache Karaf, which positions itself as a Modulith Runtime and provides integration with major frameworks such as JAX-RS and Spring Boot. It makes OSGI easier to work with by providing high-level abstractions.

Alternative options include direct OSGI implementations: Apache Felix, Eclipse Equinox, and Knopflerfish. Using low-level solutions will give us more freedom in the design process.

Plug-in architecture based on Apache Felix

Context

For integration with various customer data sources, we used a solution based on Apache Camel which, driven by the user’s configuration, connected to an arbitrary data source (from FTP to OPC UA) and applied user-defined transformations to the received data. This solution proved reliable and easy to extend for protocols that Apache Camel already supports. Its disadvantage was the difficulty of connecting new protocols that are not present in Apache Camel: we ran into dependency hell in the form of incompatible transitive dependencies.

This became the main driver for studying other approaches to building an integration service. In addition, I suspected that program initialization could be made more efficient by removing Spring from the project and wiring services manually. This was feasible thanks to the small number of dependencies between components.

As a solution, it was proposed to use Apache Felix, define our own interface for the data processing component, and dynamically connect plugins at program startup. It is worth emphasizing that we needed to implement two data processing pipelines:

  • READ FLOW: reading from the customer’s system → conversion → writing to our system

  • WRITE FLOW: reading from our system → conversion → writing to the customer’s system

It is important to note the context of the task: interactions between the data processing stages were simple. The value object format was unified, and the pipeline contained no logical branches or one-to-many connections in the data flow. This greatly simplified data processing; a minimal sketch of such a linear pipeline is shown below.
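The contracts in this sketch show what a strictly linear pipeline could look like; the names (DataRecord, Source, Transformation, Sink, Pipeline) are illustrative, not the project’s actual api:

import java.util.List;
import java.util.Map;

// A unified value object: the format is the same at every stage.
record DataRecord(Map<String, Object> fields) {}

interface Source { DataRecord read(); }
interface Transformation { DataRecord apply(DataRecord input); }
interface Sink { void write(DataRecord output); }

// A strictly linear pipeline: read -> transform steps -> write,
// with no branching and no one-to-many connections.
class Pipeline {
    private final Source source;
    private final List<Transformation> steps;
    private final Sink sink;

    Pipeline(Source source, List<Transformation> steps, Sink sink) {
        this.source = source;
        this.steps = steps;
        this.sink = sink;
    }

    void runOnce() {
        DataRecord record = source.read();
        for (Transformation step : steps) {
            record = step.apply(record);
        }
        sink.write(record);
    }
}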

Project structure

Launcher. A separate project, launcher, served as the system core. Its area of responsibility was limited to starting the OSGI framework, reading the configuration, dynamically connecting the plugins specified in the configuration, and linking all plugins into a single pipeline based on the user configuration.

While implementing the core and connecting the basic plugin, it turned out that the documentation was not sufficient to configure the program correctly. GitHub code search proved very useful for comparing our solution against someone else’s, presumably working, one.
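Put together, the launcher’s job can be sketched roughly as follows. This is a simplified sketch, not the project’s actual code: the storage path and bundle locations are placeholders, and real configuration handling is omitted.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.ServiceLoader;
import org.osgi.framework.Bundle;
import org.osgi.framework.Constants;
import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class Launcher {
    public static void main(String[] args) throws Exception {
        // Framework settings; see the "non-obvious things" section below
        // about FRAMEWORK_STORAGE being mandatory.
        Map<String, String> config = new HashMap<>();
        config.put(Constants.FRAMEWORK_STORAGE, "/tmp/felix-cache"); // placeholder path
        config.put(Constants.FRAMEWORK_STORAGE_CLEAN,
                   Constants.FRAMEWORK_STORAGE_CLEAN_ONFIRSTINIT);

        // Felix exposes its FrameworkFactory through the standard ServiceLoader.
        FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class)
                                                .iterator().next();
        Framework framework = factory.newFramework(config);
        framework.start();

        // In the real launcher the bundle list comes from the user configuration.
        List<String> locations = List.of("file:plugins/reader.jar"); // placeholder
        for (String location : locations) {
            Bundle bundle = framework.getBundleContext().installBundle(location);
            bundle.start(); // triggers the plugin's Activator
        }

        framework.waitForStop(0);
    }
}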

Shared Code. The shared code was split into two projects: api – a set of interfaces for implementing pipeline data processing, and parent – a common parent for all projects, which declares api as a dependency and configures the maven plugin that produces a jar file with the plugin code.

Plugins. Each plugin was placed in a separate maven project and packaged into a jar file with a special structure (a bundle in OSGI terms). The maven plugin org.apache.felix:maven-bundle-plugin is responsible for generating the correct structure; it accepts the project name, the activator (entrypoint), and lists of private/export/import/embed dependencies as settings.
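A typical configuration of this plugin looks roughly like the following; the package names are placeholders, and the exact instructions in a real project may differ (the project must also use bundle packaging):

<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
      <Bundle-Activator>com.example.plugin.Activator</Bundle-Activator>
      <Export-Package>com.example.plugin.api</Export-Package>
      <Import-Package>com.example.api,*</Import-Package>
      <Private-Package>com.example.plugin.internal</Private-Package>
      <Embed-Dependency>*;scope=compile</Embed-Dependency>
    </instructions>
  </configuration>
</plugin>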

The structure of the plugin (bundle)

Each plugin contains an activator – a class that is invoked when the plugin is connected. At this point the plugin is expected to register its services in the context. Each service can carry meta-information, written into a Dictionary that is passed during registration.

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
  @Override
  public void start(final BundleContext bundleContext) {
    // Register the service along with its metadata dictionary.
    Dictionary<String, Object> dictionary = new Hashtable<>();
    dictionary.put("CustomField", "API_IMPL_V1");
    bundleContext.registerService(ApiService.class, new ApiServiceImpl(), dictionary);
  }

  @Override
  public void stop(final BundleContext bundleContext) {
    // Services registered in start() are unregistered automatically on stop.
  }
}

The core of the application (the Host in OSGI terms) can then ask the context for registered services, filtering by metadata fields:

// getServiceReferences() throws InvalidSyntaxException if the filter is malformed.
var references =
        context.getServiceReferences(ApiService.class, "(CustomField=*)");

// Index each implementation by the metadata it was registered with.
Map<String, ApiService> index = new HashMap<>();
for (ServiceReference<ApiService> reference : references) {
    var service = context.getService(reference);
    index.put(reference.getProperty("CustomField").toString(), service);
}

At the same time, a plugin’s dependencies marked as Private remain unavailable to other plugins.

Non-obvious things you would like to know about

No. 1. The specification does not allow classes in the default package. This requirement applies not only to your project but also to all of its dependencies. The error displayed when the requirement is violated is not informative:

[ERROR] Bundle {groupId}:{artifactId}:bundle:{version} : The default package ‘.’ is not permitted by the Import-Package syntax.
This may be caused by compile errors in Eclipse because Eclipse creates
valid class files regardless of compile errors.
following package(s) import from the default package null
[ERROR] Error(s) found in bundle configuration

To track this down, you have to place a conditional breakpoint in the code of the org.apache.felix:maven-bundle-plugin plugin and find the dependency with the wrong class structure yourself.

I posted a detailed solution to this problem in a separate article: https://medium.com/@mark.andreev/how-to-fix-the-default-package-is-not-permitted-by-the-import-package-syntax-in-osgi-3b59a6c18e71

No. 2. Non-obvious required settings for org.osgi.framework.launch.Framework. You will not be able to run Apache Felix without specifying the storage directory via Constants.FRAMEWORK_STORAGE, and when this goes wrong the error is, again, not informative.
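In other words, something like the following fragment must appear in the framework configuration before newFramework() is called (the path is an arbitrary writable directory; factory is the FrameworkFactory from the launcher sketch above):

Map<String, String> config = new HashMap<>();
// Without this property Felix fails at startup with an unhelpful error.
config.put(Constants.FRAMEWORK_STORAGE, "/tmp/felix-cache");
Framework framework = factory.newFramework(config);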

No. 3. No error is raised when a bundle fails to load. The only way to find out that a bundle failed to load is to check whether its SymbolicName is null:

Bundle addition = bundleContext.installBundle(location);
if (addition.getSymbolicName() == null) {
    // installBundle() did not throw, but the bundle failed to load.
    throw new IllegalStateException("Failed to load bundle from " + location);
}

No. 4. Difficulties passing library classes between plugins. The solution was to consolidate the shared interfaces in the api library and use only those classes for communication between plugins.

Conclusion

The solution based on Apache Felix demonstrated the difficulty of adopting a not-very-popular technology: knowledge on Stack Overflow is scarce, and most problems had to be investigated with a debugger, which complicates incident analysis. On the other hand, this technology gave us low coupling between system components, plugin isolation at the classloader level, a simpler project structure thanks to placing each pipeline component in a separate project, and a significant startup speedup.

It is important to note that this positive experience is directly related to the loose coupling between plugins and the absence of shared dependencies other than the api library.

If you need tighter interaction between plugins, you should look at Apache Karaf; most likely it will be more convenient than implementing low-level OSGI interaction like the one described in this project.

Epilogue

Have you had experience implementing a microkernel architecture? How did you solve this problem?

Mark Andreev

Senior Software Engineer
