Intro to Integration

Integration is one of those concepts that is easy to “know” but becomes less obvious the more you try to define it. A basic, casual definition is making different things work together. The complexity, though, comes from the fact that every single part of that definition has to be broken down: what are the “things,” what are they doing that makes them “work together,” how are they working, and what is the goal or purpose of them working together? All of those elements can be answered differently for different organizations, or even within the same organization at different times.

An understanding of integration comes from looking at the different potential integration patterns and then defining the logic behind the integration so that you can select the right patterns for your environment.

Integration Patterns

Integration itself is an architectural structure within your infrastructure, rather than an action or specific process. While getting various systems to work together has long been an IT (and organizational) responsibility, integration as a practice became more of a focus in the early 2000s. With the emergence of large-scale enterprise applications, there was a growing need to get those applications working together without having to redesign or redeploy the applications themselves. That push became integration.

Integration is subdefined by what is being integrated; these are the integration patterns.

There are different types of patterns, depending on perspective. There are patterns based on what is being integrated and then there are patterns based on the topology or design of the integration. Basically, it’s the what and the how.

Continue reading “Intro to Integration”

Upcoming Webinar: Migrating to Open Source Integration and Automation Technologies

Balaji Rajam (principal architect) and Ushnash Shukla (senior consultant) from Red Hat will be conducting a webinar about integrating data from disparate sources with people and processes, a crucial part of any data integration strategy.

Data is increasingly moving from being an asset within an organization to being one of the key business drivers and products, regardless of industry. The ability to integrate data from disparate sources is a crucial part of a digital business strategy. Many organizations have been locked into proprietary and closed software solutions like TIBCO, but as IT environments transform again into microservices-based, agile, and cloud-based infrastructures, those proprietary systems may not be able to keep up, or it may be too cost-prohibitive to try. Open source offers standards-based approaches for application interoperability with potentially lower costs and faster development times. This webinar looks at three key aspects of effectively moving from proprietary to open source solutions:

  • Recommendations for migrating from TIBCO to open source applications
  • Performing data integrations
  • Defining automated business processes and logic

Registration is open. The webinar is August 9 at 11:00am Eastern Time (US).


Fun Follow Up: Webinar Q&A

I will collect any questions asked during the webinar, and I’ll do a follow-up post on Friday, August 12, to try to capture the most interesting questions that arise.

Announcing Red Hat JBoss Data Grid 7.0

Red Hat JBoss Data Grid 7.0 is here.

This significant new release offers a variety of features designed for complex, distributed, high-volume, and high-velocity data requirements. JBoss Data Grid can support both on-premise and cloud-based infrastructures, and it can handle the near-instantaneous, complex demands of Internet of Things and big data environments.

What It Brings

JBoss Data Grid can be deployed in different architectures, depending on the needs of your environment. In addition to the traditional usage as a distributed cache, in-memory data grids can function as the primary data store for applications or as a compute grid for distributed computing.

Data Storage

There are two use cases for using JBoss Data Grid as a data store:

  • Data caching and transient storage. As an in-memory data store for frequently accessed application data or for transient data, such as shopping cart information or session data. This avoids hitting transactional backend systems as frequently, which reduces operating costs.
  • Primary data store. Data Grid can function as a key-value store similar to a NoSQL database. This can be the primary data source for applications for rapid retrieval of in-memory data and to persist data for recovery and archiving. Applications can run data-intensive operations like queries, transaction management, and distributed workloads. (See the sketch after this list.)
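
Here is a minimal sketch of that primary data store usage through the Java Hot Rod client. The hostname, port, cache name, and data below are illustrative assumptions, not product defaults you should rely on:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class CartStoreSketch {
        public static void main(String[] args) {
            // Connect to a remote JBoss Data Grid server over the Hot Rod protocol.
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("datagrid.example.com").port(11222);
            RemoteCacheManager manager = new RemoteCacheManager(builder.build());

            // The cache behaves like a key-value store, similar to a NoSQL database.
            // "shopping-carts" is a hypothetical cache configured on the server.
            RemoteCache<String, String> carts = manager.getCache("shopping-carts");

            // Store transient session data instead of hitting the backend database.
            carts.put("session-42:cart", "{\"items\":[\"sku-1001\",\"sku-2002\"]}");
            System.out.println("Cached cart: " + carts.get("session-42:cart"));

            manager.stop();
        }
    }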

Computing Grid

Modern architectures require flexible, distributed, and scalable memory and data storage. Using JBoss Data Grid as a distributed computing grid can help support the most demanding architectures:

  • Scale-out compute grid and event-driven computing. Through storage node clusters, JBoss Data Grid can provide a distributed architecture with application logic at each node for faster data processing, lower latency, and less traffic. This architecture also supports event-driven computing by executing application logic at the node as data are updated (see the listener sketch after this list).
  • Big data and the Internet of Things. JBoss Data Grid can support massive data streams — hundreds of thousands of updates per second. The Internet of Things can have data streams from thousands of connected devices, updating frequently. Clustering and scale, application logic and processing, and both in-memory and persistent storage in JBoss Data Grid enable those big data architectures by managing those massive data streams.
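
As a sketch of the event-driven case, the embedded cache API lets you attach application logic that runs as entries are written. The listener below is illustrative (a hypothetical cache of sensor readings), not code from the release itself:

    import org.infinispan.notifications.Listener;
    import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
    import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;

    // Application logic executed at the node as data are updated.
    @Listener
    public class ReadingListener {
        @CacheEntryCreated
        public void onNewReading(CacheEntryCreatedEvent<String, Double> event) {
            // The event fires before and after the write; react only after commit.
            if (!event.isPre()) {
                System.out.println("New reading " + event.getKey() + " = " + event.getValue());
            }
        }
    }

The listener is attached to an embedded cache with cache.addListener(new ReadingListener()).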

Real-Time Analytics and Performance for Digital Business

Digital transformation means that organizations are pushing into a new intersection between their physical goods or services and online, on-demand applications. This digital environment is reliant on data, and unlike previous generations, this technology uses near-live data streams rather than historical data collections.

JBoss Data Grid is a leading high-performance, highly scalable, in-memory data grid. In-memory data grids provide scalable memory so that even rapidly changing application data can be processed. Better data processing and management enables organizations to make fast, accurate decisions using large data streams. JBoss Data Grid 7.0 offers a data foundation for real-time analytics:

  • Low latency data processing through memory and distributed parallel execution
  • Data partitioning and distribution across cluster nodes for horizontal scalability
  • High availability through data replication
  • Shared data services for real-time and in-memory analytics and event processing

A Short List of Major Features

The release notes cover the full features and enhancements for JBoss Data Grid 7.0. There are a number of features for improved ease of use, real-time analytics, and language support:

  • Distributed streams, which use the Java 8 Stream API to run defined analytics operations over complex collections of data (see the sketch after this list).
  • Resilient distributed dataset (RDD) and DStream integration with Apache Spark 1.6, allowing Data Grid to be a data source for Spark and to execute Spark and Spark Streaming operations on data in Data Grid.
  • Hadoop InputFormat/OutputFormat integration, so that Hadoop tooling and operations can be used with data stored in Data Grid.
  • New administrative consoles for cluster management to simplify common tasks for managing the cache, nodes, and remote tasks.
  • Control operations for clusters, including graceful shutdown, startup, and restore from persistent storage.
  • A new Node.js Hot Rod client to support using Data Grid as a NoSQL database with Node.js applications.
  • Running remote tasks (business logic) on a Data Grid server from the Java Hot Rod client.
  • Support for a Cassandra cache store, which persists the entries of a distributed cache on a shared Apache Cassandra instance.
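
As a brief sketch of the distributed streams feature, the snippet below runs a Java 8 Stream pipeline over an embedded cache. The cache contents and threshold are illustrative assumptions, and in a real cluster the lambdas must be serializable so they can be shipped to the nodes that own the data:

    import org.infinispan.Cache;

    public class StreamAnalyticsSketch {
        // Counts the orders above a threshold in a hypothetical cache that
        // maps order IDs to order totals.
        public static long countLargeOrders(Cache<String, Integer> orders) {
            return orders.entrySet().stream()         // a distributed CacheStream
                    .filter(e -> e.getValue() > 1000) // evaluated on each owning node
                    .count();                         // results aggregated for the caller
        }
    }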

Additional Resources

New styles of integration are the hallmark of Digital Transformation


Shake up your integration strategy to enable digital transformation, says VP & Gartner Fellow Massimo Pezzini. Pezzini asserts that it is not just about transforming and modernizing the infrastructure and the applications concerned. Some of the fundamental concepts of integration need to be revisited and transformed as well. Such systemic transformations punctuate the migration of legacy environments to microservices and the cloud. What may have worked in the past will no longer be applicable going forward. “Integration is dead. Long live integration,” proclaimed the title of one of the sessions at Red Hat Summit 2016. The session was making a point: integration, as we knew it a few years back, is dead, but integration in the digital world has a long life in the decades ahead. Join me as I walk through the new styles of integration that are the hallmark of digital transformation.

Continue reading “New styles of integration are the hallmark of Digital Transformation”

Announcing Integrated Web Single Sign-On and Identity Federation

Red Hat recently released a new web single sign-on (SSO) server, based on the upstream Keycloak project. Now you have an out-of-the-box SAML 2.0 or OpenID Connect-based identity provider, fully supported, which mediates between your enterprise user directory or third-party identity provider and your applications, exchanging identity information through standards-based tokens. Keycloak is the next-generation replacement for PicketLink in the JBoss middleware technologies. Eventually, Keycloak will also provide single sign-on for Red Hat Cloud Suite and management products like Red Hat Satellite.

Feature Overview

At its core, Keycloak is a SAML 2.0 or OpenID Connect-based identity provider.
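
To make that concrete, here is a minimal, hypothetical sketch of a client requesting an OpenID Connect token from the server’s standard token endpoint. The hostname, realm name, client ID, and credentials are all illustrative assumptions:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;

    public class TokenRequestSketch {
        public static void main(String[] args) throws Exception {
            // The OpenID Connect token endpoint for a hypothetical realm "demo".
            URL tokenEndpoint = new URL(
                "https://sso.example.com/auth/realms/demo/protocol/openid-connect/token");
            HttpURLConnection conn = (HttpURLConnection) tokenEndpoint.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

            // Resource Owner Password Credentials grant with placeholder values.
            String form = "grant_type=password&client_id=my-client"
                        + "&username=jdoe&password=change-me";
            try (OutputStream out = conn.getOutputStream()) {
                out.write(form.getBytes(StandardCharsets.UTF_8));
            }

            // The JSON response carries the access_token that the application
            // presents to secured services as a standards-based token.
            try (InputStream in = conn.getInputStream();
                 Scanner scanner = new Scanner(in, StandardCharsets.UTF_8.name())) {
                System.out.println(scanner.useDelimiter("\\A").next());
            }
        }
    }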

There is more information on the Customer Portal to go in-depth into features and configuration.

Client Support

Keycloak has a central identity server, and clients connect to it through their identity management configuration, assuming they have the appropriate adapter or module.

Keycloak supports a number of different clients:

  • Red Hat JBoss Enterprise Application Platform 6.4 and 7.0
  • Red Hat JBoss Fuse 6.2 (as tech preview)
  • Red Hat Enterprise Linux 7.2, through the mod_auth_mellon module for SAML 2.0

Identity Federation

Keycloak can be used for user federation with LDAP-based directory services, including:

  • Microsoft Active Directory
  • RHEL Identity Management

Additionally, Keycloak supports SPNEGO-based Kerberos authentication with both Microsoft Active Directory and RHEL Identity Management.

Identity Brokering

Keycloak integrates with social login providers for user authentication, including:

  • Facebook
  • Google
  • Twitter

Administrative Interfaces

The Keycloak server, identity realms, and clients can be administered through a web-based GUI or through REST APIs. This allows you to completely design the single sign-on environment, including users and role mapping, client registration, user federation, and identity brokering operations.

Subscriptions and Support Lifecycle

Single sign-on is currently available via the JBoss Core Services Collection, on a 3-year support lifecycle. We anticipate offering Keycloak-based SSO as a service on Red Hat OpenShift Container Platform and Red Hat Mobile Application Platform, and as a federated identity provider for Red Hat OpenStack Platform.

The long-term vision is that Keycloak can be used to centralize user and client identities and to federate identity providers. This will stretch across existing infrastructure such as internal user directories or external cloud-based identity providers, such as social networks, and will provide SSO and identity federation across Red Hat products.

Upcoming Webinar: Reimagine Your Java Applications

Bilge Ozpeynirci (senior product manager for Red Hat) and Thomas Qvarnstrom (JBoss technology evangelist) will be conducting a webinar about trends in application development and how these changes can influence your Java EE infrastructure, applications, and architectures.

IT is changing rapidly and in a lot of different areas: processes like DevOps, architectures like microservices, and technologies like containers. This not only affects upcoming changes in your IT infrastructure; it can also affect how you maintain, migrate, or update existing applications and infrastructure.

This webinar looks at how Red Hat JBoss Enterprise Application Platform 7 and Red Hat OpenShift 3 can be used together to effectively manage existing Java applications and begin moving into a cloud- and container-based environment.

Registration is open. The webinar is July 26 at 11:00am Eastern Time (US).


Fun Follow Up: Webinar Q&A

I will collect any questions asked during the webinar, and I’ll do a follow-up post on Friday, July 29, to try to capture the most interesting questions that arise.

Visualizing Integration Applications

Since I changed roles and started performing architect duties, I have to draw more boxes and arrows than write code. There are ways to fight that, like contributing to open source projects during sleepless nights, POCs, and demos, but drawing boxes to express architectures and designs is still a big part of the job. This post is about visualizing distributed messaging, SOA, and microservices applications in agile environments (the term has lost its meaning, but there is no better one in this case).

What I like about the software industry in recent years is that the majority of organizations I’ve worked with value the principles behind lean and agile software development methodologies. As long as it is practical, everyone strives to deliver working software (rather than documentation), deliver fast (rather than plan for a long time), eliminate waste, and respond to change. There are management practices such as scrum and kanban, technical practices from the extreme programming (XP) methodology such as unit testing and pair programming, and other practices such as CI/CD and DevOps to help implement those principles. In this line of thinking, I decided to put together a summary of the design tools and diagrams I find useful in my day-to-day job while working with distributed systems.

Issues with 4+1 View Model and Death by UML

Every project kicks off with big ambitions, but there is never enough time to do things perfectly, and in the end we have to deliver whatever works. And that is a good thing: it is the way the environment helps us avoid gold plating and supports principles like YAGNI and KISS, so we do just enough and adapt to changes.

Looking back, I can say that most of the diagrams I’ve seen around are inspired by the 4+1 view model of Philippe Kruchten, which has logical, development, process, and physical views.

4+1 View Model. From A Practical Guide to Enterprise Architecture by James McGovern, Scott W. Ambler, Michael E. Stevens, James Linn, Vikas Sharan, Elias K. Jo, 2003.

I quite like the ideas and the motivation behind this framework: using separate views and perspectives to address a specific set of constraints and to target the different stakeholders. That is a great way of describing complex software architectures. But I have two issues with using this model for integration applications.

Diagram Applicability

Typically these views are expressed through the Unified Modeling Language (UML), and for each view, you have to use one or more UML diagrams. If I have to use 15 types of UML diagrams to communicate and express a system architecture in an accessible way, that defeats the purpose of UML.

Death by UML. From Wikipedia, derived from a diagram by Paulo Merson.

With such complexity, the chances are that only one or two people in the whole organization have the tools to create these diagrams and the ability to understand and maintain them. And having hard-to-interpret, out-of-date diagrams is as useful as having out-of-date documentation. These diagrams are too complex and of limited value, and very quickly they turn into a liability that you have to maintain rather than an asset expressing the state of a constantly changing system.

Another big drawback is that the existing UML diagram types are primarily focused on describing object-oriented architectures rather than pipes-and-filters architectures. The essence of messaging applications is interaction styles, routing, and data flow rather than structure. Class, object, component, package, and other diagrams are of less value for describing pipes-and-filters processing flows. Behavioral UML diagrams such as activity and sequence diagrams get closer, but still cannot easily express concepts such as filtering and content-based routing, which are a fundamental part of integration applications.

View Applicability

Having a different set of views for a system to address different concerns is a great way of expressing intent. But the existing views of the 4+1 model don’t reflect the way we develop and deploy software nowadays. The idea of that directional flow, in which you have a logical view first, which then leads to development and process views, and those lead to a physical view, is not always the case. The systems development life cycle does not follow the traditional (waterfall) sequence of requirements gathering, designing, implementing, and maintaining.

Software Development Lifecycle. Derived from an image by Web Serv.

Instead, other development methodologies such as agile, prototyping, synchronize and stabilize, and spike and stabilize are used too. In addition to the process, the stakeholders are changing too. With practices such as DevOps, developers have to know about the final physical deployment model, and the operations team has to know about the application processing flows.

Modern architectures such as microservices affect the views too. Knowing one microservice in a plethora of services is not very useful. Knowing too much about all the services is not practical either. Having the right abstraction level for a system-wide view with just enough details becomes vital.

Practical Visualization for Integration Applications

The closest thing that has been working for me is described by Simon Brown as the C4 model. (You should also get a free copy of Simon’s awesome book, The Art of Visualising Software Architecture.) In his model, Simon talks about the importance of a common set of abstractions rather than a common notation (such as UML), and then about using a simple set of diagrams for different levels of abstraction: system context, container, component, and class. I quite like this “outside-in” approach, where you first have a 10,000-foot view and each subsequent level goes deeper with more detailed views.

C4 is not an exact match for middleware/integration applications either, but it is getting closer. If we were to use the C4 model, then the system context diagram would be one box that says ESB (or middleware, MOM, or microservices) with tens of arrows from north to south. Not very useful. The container diagram is quite close, but the term container is so overloaded (VM, application container, Docker container) that it is less useful for communication. Component and class diagrams are also not a good fit, as pipes-and-filters architectures are focused around enterprise integration patterns rather than classes and packages.

So in the end, what is it that worked for me? It is the following three types of diagrams, which abbreviate as SSD (not as cool as C4): system context, service design, and deployment.

System Context Diagram

The aim of this diagram is to show all the services (whether SOA or microservices) with their inputs and outputs, ideally with the external systems to the north, the services in the middle section, and the internal services to the south. Or you could put both external and internal services on both sides of the middleware layer, as shown below. Also, having the protocol (such as HTTP, JMS, or file) on the arrows, along with the data format (XML, JSON, CSV), gives useful context too, but it is not mandatory. If there are too many services, you can leave the protocol and the data format for the service-level diagrams. I use the direction of the arrow to indicate which service is initiating the call, rather than the data flow direction.

System Context Diagram

Having such a diagram gives a good overview of the scope of a distributed system. We can see all the services, the internal and external dependencies, the types of interaction (with protocol and data format), and the call initiator.

Service Design Diagram

The aim of this diagram is to show what is going on in each box representing a middleware service in the system context diagram. The best fit for this is to use EIP icons and connect them as message flows. A service may have a number of flows, support a number of protocols, and implement real-time or batch behaviour.

Service Design Diagram

At this level, we want to show all possible data flows implemented by a specific service, from any source to any destination.
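
A service design diagram has a close code-level counterpart in integration frameworks. As an illustrative sketch (assuming Apache Camel and hypothetical endpoint URIs), the route below is exactly the kind of EIP message flow such a diagram depicts, with a message source, a content-based router, and two destinations:

    import org.apache.camel.builder.RouteBuilder;

    public class OrderRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("jms:queue:incomingOrders")                  // message endpoint (source)
                .choice()                                     // EIP: content-based router
                    .when(xpath("/order/@priority = 'high'")) // route on message content
                        .to("http://fulfillment.example.com/express")
                    .otherwise()
                        .to("file://orders/batch");           // message endpoint (sink)
        }
    }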

Deployment Diagram

The previous two diagrams are the logical views of the system as a whole and each service separately. With the deployment diagram, we want to show where each service is going to be deployed. Maybe there will be multiple instances of the same service running on multiple hosts. Maybe some services will be active on one host, and passive on the other. Maybe there will be a load balancer fronting the services.

Deployment Diagram

The deployment diagram is supposed to show how the individual services, and the system as a whole, relate to the host systems (regardless of whether those hosts are physical or virtual).

What Tools Do I Use?

The system context and the deployment diagrams are composed only of boxes and arrows and do not require any special tools. For the service design diagram, you will need a tool that has the enterprise integration pattern icons installed. So far, I have seen the following tools with EIP icon support:

Other development tools that could also be used for creating EIP diagrams are:

A system context diagram is useful for showing the system-wide scope and reach of the services, a service design diagram is good for describing what a service does, and a deployment diagram is useful for mapping all of that onto something physical.

In IT, we can expand work to fill up all the available time with things to do. I’m sure that, given more time, we could invent ten more useful views. But without those basic three, I cannot imagine describing an integration application. As Antoine de Saint-Exupéry put it long ago: “Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away.”

How To Import Any JBoss BRMS Example Project

This tips & tricks post comes to you after I have been asked the following repeatedly over the last few weeks by users of the JBoss BRMS demos:

“How can I import the projects associated with the various JBoss BRMS demo projects into my own existing installation?”

What this means is that users want to have an example project in their personal installation of the product without using the project’s installation process. This is certainly possible, but not totally obvious to everyone.

Below I will walk you through how the various example projects for JBoss BRMS are set up, how the actual rules projects are loaded into JBoss BRMS when you set them up, and why. After this I will show you how to extract any of the available rules projects for importing into any previously installed JBoss BRMS server.

Figure 1: In JBoss BRMS, open the Administration perspective with menu options Authoring -> Administration.

Background on how it works

The normal installation of a JBoss BRMS demo project that I have provided uses a template. This template ensures that the process is always the same: download, unzip, add products, and run the installation script. After doing this, you are done; just fire up JBoss BRMS for the adjusted experience, where you open the Authoring perspective to a pretty process designer with the demo project displayed, ready for you to kick off a demo run.

These projects have a demo template that provides some consistency; you can read about how it works in a previous article. For the initial installation run of any of these demo projects, a folder is copied from support/brms-demo-niogit to the installation at the location target/jboss-eap-{version}/bin/.niogit.

Figure 2: To import a new project, open Clone repository from the menu Repositories. This will allow you to bring in any rules project to your JBoss BRMS.

This folder contains all of the project and system Git repositories, formatted for the version of the project you have downloaded. By installing this directory, or the complete repository, when JBoss BRMS starts up the first time, it will pick up the state I left it in when designing the experience around you using this demo project.

Get your hands on a specific rules project

The problem I want to help you with in this article is to show you how to extract only the rules project from one of these examples and import this into your own installation of JBoss BRMS.

Figure 3: Cloning a repository is how you import an existing project, which requires the information shown.

The following list shows the order in which to do the tasks; after the list, I will explain each one:

  1. Download any JBoss BRMS demo project and unzip it (or clone it if you like).
  2. Log in to your own JBoss BRMS and open the Administration perspective via the menu: Authoring -> Administration.
  3. Set up the new rules project you want to import: Repositories -> Clone repository -> fill in the details, including the import project URL.
  4. Explore the new project in the Authoring perspective: Authoring -> Project Authoring.

I am going to assume you can find a JBoss BRMS demo project of your liking from the link provided in step 1 and download or clone it to your local machine.

I will be using the JBoss BRMS Cool Store Demo as the example project you want to import into your current JBoss BRMS installation, instead of leveraging the standalone demo project.

In your current installation, where you are logged in, open the Administration perspective as shown in figure 1 via menu options Authoring -> Administration. This allows you to start importing any existing rules project. We will be importing the Cool Store rules project by using the feature to clone existing projects, found in menu options Repositories -> Clone repository, as shown in figure 2.

Figure 4: Once the project has been imported (cloned), you will receive this message in a pop-up.

This will produce a pop-up that asks for some information about the project to be imported, which you can fill in as listed below and shown in figure 3:
  • Repository Name: retail
  • Organizational Unit: Demos (select whatever org you want to use from your system)
  • Git URL: file:///[path-to-project-you-downloaded]/brms-coolstore-demo/support/brms-demo-niogit/coolstore-demo.git

Figure 5: Explore your newly imported rules project in the Authoring perspective within your JBoss BRMS installation.

The most interesting bit here is the Git URL, which is normally something hosted online; but the project we want to import sits locally in our filesystem, so we use a file-based URL to point to it. Click the Clone button to import the project, and you should see a pop-up that looks like figure 4, stating that you have successfully imported your project.

Now you can explore the newly imported project in your Authoring perspective and proceed as you desire, as shown in figure 5. This will work for any project I have put together for the field that is based on the standard template I use.

I hope this tips & tricks post helps you explore and enjoy the many existing rules examples offered in the current collection of demo projects.


See more by Eric D. Schabell, contact him on Twitter for comments or visit his home site.

Upcoming: Webinar on Design Approaches for Business Automation

Justin Holmes is a business automation practice lead with Red Hat. It is essentially his job to come up with practical solutions to business problems. He is conducting a webinar next week to go over design practices that help teams more effectively develop and deliver software products, with a heavy emphasis on using business rules and process automation to make testing and deploying software more controlled and easier. This webinar will look at two historically separate development processes — engineering-driven development and business rules-driven automation — and how it is possible to develop a design model using the strengths of both.

He has more details on the Services Speaks blog.


This event is free. It will be Tuesday, July 12, at 11am Eastern time.

And the Winner Is…

The comparison between a bag of cash representing a MINI Cooper S and Red Hat JBoss Enterprise Application Platform 7 is kind of fun. JBoss EAP 7 — like a MINI Cooper S — is small, agile, fast, and fits easily in appropriately sized containers.


As part of this year’s Red Hat Summit — and to celebrate the release of JBoss EAP 7 — the Red Hat Middleware group held a drawing for a (metaphorical) bagful of cash equal to the value of a 2016 MINI Cooper S ($24,950 as of June 1, 2016). Anyone at Summit (who is not a Red Hat employee or relative) could enter the drawing.

And the drawing was last night! The winner is … drumroll ….

RYAN THAMES.


Congratulations, Ryan!

Thank you to everyone who participated and who has visited the booth so far during Summit. It has been quite a ride this week.


Left to right, Craig Muzilla, senior vice president of Application Platforms Business; Ryan Thames (winner); and Mark Little, CTO of JBoss middleware.

For reference….

The terms and conditions for this contest are available at https://www.redhat.com/files/resources/car-giveaway-contest-terms-conditions.pdf.