Process management and business logic for responsive cloud-native applications: Red Hat Process Automation Manager is released

Today, Red Hat announced the latest major release of its business process suite, with a new name and several major changes that pivot the focus of the product itself. Red Hat Process Automation Manager is about more than providing a business process modeler or optimizing resource allocation. This is the first generation (at Red Hat) of a digital automation platform — a hub where business users and technical developers can collaborate to create strategically relevant, intelligent applications.

Red Hat Process Automation Manager has two core conceptual areas:

  • The first is based on decision management (the “intelligent” part of intelligent or event-driven applications). This includes the decision engine of Red Hat Decision Manager and allows automated, immediate responses to interactions, from event processing to resource optimization.
  • Second, Process Automation Manager provides the means of modeling and applying business logic within an application. In combination with a graphical UI, this creates a platform where business users can design business logic in collaboration with the technical teams.

New feature: Process management + case management

The heart of a BPM platform is the “BP” — business process modeling. The previous BPM Suite supported BPMN, the notation specification for business process models, and DMN, the notation specification for decision models. The assumption behind a lot of these specs is that the workflows or processes being modeled are relatively static or sequential. For certain types of business processes, that is an accurate assumption (things like resource optimization or scheduling). However, in many organizations, there are also processes which are not linear or which may follow different steps in a dynamic sequence or may be interrupted or require human intervention at certain points. These are generally defined within a related notation specification, Case Management Model Notation (CMMN).

While there are differences, there is also a lot of conceptual overlap between business processes / BPMN and case management processes / CMMN. Process Automation Manager combines the functionality of both process models and case management models within a single digital automation platform. (This is covered in more detail in the blog post here.)

Supporting both linear process / task models and dynamic or unpredictable case management models within the same platform allows developers to have a simpler development process (and, combined with other features like Process Automation Manager’s new graphical UI, makes collaboration with business users easier).

Process Automation Manager also supports other types of modeling and visualizing data and workflows:

  • Data modeling
  • Decision modeling
  • Custom data dashboards
  • Process simulations

New Feature: An easier way for business users to collaborate (graphical UI)

Previous versions of Red Hat JBoss BPM Suite were designed around business process logic, but were intended to be used by Java developers within the application development process. Beginning with this Process Automation Manager 7.0 release, there is a new Entando UI included with the platform. This provides an easier, graphical interface where business users can just drag and drop elements into their models — ultimately using the same platform that the developers are using to create the application. Business processes, rules, and logic can be built into the application essentially without writing a single line of code.

This also effectively changes the workflow for creating event- and process-driven applications. Previously, developers did all the work within their development environment. Now, business users can work in parallel (using the Process Automation Manager UI) to create artifacts which can be pulled into the developer’s IDE and code. Everything can then be packaged up and deployed in containers or other environments.

New feature: Cloud (and container) native applications

With more distributed, hybrid infrastructures, it is imperative that applications be able to function exactly the same regardless of the underlying platform. And those applications need to be designed, natively, to work in a distributed, dynamic environment so that they can be rapidly deployed, updated, or scaled.

Process Automation Manager can itself run in Red Hat OpenShift containers, in public or private clouds, on-premise, or in all environments — depending on the needs of your development and infrastructure teams. Additionally, the models and applications created using Process Automation Manager as a platform can be deployed into cloud instances, OpenShift containers, or local instances. This allows truly hybrid development, testing, and production environments.

Process Automation Manager components, applications, and models can all be exposed and accessed using REST APIs, allowing integration with other software applications or management tools.
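As a sketch of what that REST integration can look like, the snippet below builds (but does not send) a JSON request to start a process instance on a process server. The host, endpoint path, container ID, process ID, and variables are all hypothetical placeholders; check your deployment’s API documentation for the actual routes:

```python
# Illustrative sketch: preparing a REST call to a process server.
# The base URL, endpoint path, and payload are hypothetical placeholders.
import json
import urllib.request

def build_start_process_request(base_url, container_id, process_id, variables):
    """Build (but do not send) a JSON POST request to start a process instance."""
    url = f"{base_url}/containers/{container_id}/processes/{process_id}/instances"
    body = json.dumps(variables).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )

req = build_start_process_request(
    "http://localhost:8080/kie-server/services/rest/server",  # placeholder host
    "orders_1.0.0",        # placeholder container
    "orders.approval",     # placeholder process ID
    {"amount": 1200, "customer": "ACME"},
)
# Sending req with urllib.request.urlopen(req) would issue the actual call.
```

The same request could just as easily come from a management tool or another application, which is the point of exposing everything over REST.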

Additional Resources

  • Dive a little deeper into process automation technology with our tech overview.
  • For general information about the Process Automation Manager, check out the datasheet.
  • There are different use cases for process automation and a business decision engine. The FAQ runs through some things to consider.
  • Get started by actually using the Process Automation Manager. Red Hat Developers has a whole “hello world” example waiting for you.

Meet application integration in the times of hybrid cloud

The concept of agile integration, depending on whom you ask, may appear as a contradiction in terms. Integration is a concept that used to be associated with “slow,” “monolithic,” “only to be touched by the expert team,” etc. Big and complex legacy enterprise service buses connected to your applications were the technology of choice at a time when agility was not a requirement, when the cloud was barely an idea, when containers were associated with maritime shipping and not with application packaging and delivery.

Can the principles of agile development be combined with those of modern integration? Our response is yes, and we call it agile integration. Let me show you what it is, why it is important, and what we at Red Hat are doing about it.

Software development methodologies have evolved rapidly in the last few years to incorporate innovative concepts that result in faster development cycles, agility to react to change, and immediate business value. Development now takes place in small teams, changes can be approved and incorporated quickly to keep pace with the changing demands of the business, and each iteration of the code results in a shippable product. There is no more need for long development cycles and never-ending change approvals. And, importantly, business and technical users join forces and collaborate to optimize the end result.

In addition, modern integration requires agility, cloud-readiness, and support for modern integration approaches. In contrast with legacy, monolithic ESBs, modern integration is lightweight, pattern-based, scalable, and able to manage complex, distributed environments. It has to be cloud-ready and support modern architectures and deployment models like containers. It also has to provide integration services with new, popular technologies, like API management, which is becoming the preferred way to integrate applications and is at the core of microservices architectures. And it must support innovative and fast-evolving use cases such as the Internet of Things (IoT).

Continue reading “Meet application integration in the times of hybrid cloud”

Red Hat present at EclipseCon France 2018

EclipseCon France is taking place this week in Toulouse, France (June 13-14, 2018) and it’s offering a great lineup of top-notch sessions on nine different tracks, from IoT to cloud and modeling technologies. This year, there is even a dedicated track for “Microservices, MicroProfile, EE4J and Jakarta EE,” which is covering topics such as Istio, 12-factor apps, geoscience, machine learning, noSQL database integration, cloud-native application development, security, resilience, scalability, and the latest statuses of the Jakarta EE and MicroProfile open source specification projects. Under this track, we are hosting two sessions:

But we are also delivering other interesting sessions under the “Reactive Programming” track:

Under the “IoT” track:

Under the “Eclipse IDE and RCP in Practice” track:

And, under the “Cloud & DevOps” and “Other Cool Stuff” tracks:

For those of you that will be at the conference, we invite you to attend the sessions above and to stop by the Red Hat booth to learn how Red Hat can help your organization solve your IT challenges (and get your swag too!). And for those of you that would like to learn more about Red Hat offerings in relation to the topics above, please visit the following links:

Digital Automation Platforms: Injecting speed into application development

Red Hat has just published a new study by Carl Lehmann of the 451 Group, “Intelligent Process Automation and the Emergence of Digital Automation Platforms,” that examines the increasing importance of business automation technologies in modern business, and the ways that converged solutions (digital automation platforms) are bringing value to organizations engaged in digital transformation projects.

Carl writes that competitive advantage is enabled when an organization either does the same things as its rivals, but differently, or it does different things that are acknowledged as superior by customers. In today’s competitive markets, businesses are turning to next-generation digital automation platforms (DAP) to enable greater automation of key business functions and greater flexibility in responding to their customers’ needs.

A DAP is a set of tools and resources structured within a uniform framework to enable developers to rapidly design, prototype, develop, deploy, manage, and monitor process-oriented applications – from simple task-related workflows to dynamic unstructured collaborative activity streams and even highly structured cross-functional enterprise applications. To do so, DAPs are equipped with a range of new capabilities that go beyond those of their BPM and application development predecessors.

Continue reading “Digital Automation Platforms: Injecting speed into application development”

Announcing: Red Hat Fuse 7 is now available

After several technical previews over the last few months, Red Hat Fuse 7 is officially available. This is a significant release, both for Fuse itself and for integration platforms, because it represents a shift from a more traditional, on-premise, centralized integration architecture to a distributed, hybrid-environment integration architecture.

Integration itself has historically been a bottleneck for infrastructure design and changes. The integration points were largely centralized and controlled by a central team in an attempt to manage dependencies and standardize data management between applications. However, that centralization also made change difficult, and it was governed more by procedure and bureaucracy than business innovation. As with traditional infrastructure architecture more generally, integration has not historically been an agile or adaptive architecture.

Red Hat Fuse (and related community projects) is the beginning of a departure from traditional, rigid integration platforms to more agile, distributed integration design. Fuse introduces three major features in the latest release:

  • Fuse Online, fully hosted Fuse applications and integrations. Fuse Online provides immediate access to the functionality of Fuse, without having to install and configure it on-premise. Developers can begin testing and customizing integrations immediately. Connectors can be uploaded to the online development area to allow even more integrations.
  • Fuse container images for Red Hat OpenShift. Fuse runs natively on OpenShift, allowing local, containerized integration points to be created in development teams and to be designed, tested, and updated within DevOps workflows as part of the overall application development cycle.
  • A drag-and-drop UI for integration pattern design. While integration development is typically done within IT teams, integration design relies on business knowledge. Business managers and analysts need to be able to collaborate effectively with their development teams. The new Fuse Ignite UI (based on the Syndesis.io project) is a low-code way to develop integration — business users can use design elements to create integration architectures and to work with their development teams, within the same tool set.

These three features allow more agile integration development. Fuse installations can span online, on-premise, or container-based environments without losing functionality, creating an integration platform that crosses environments and can be as lightweight and decentralized as a single development team or as broad as an enterprise-wide platform. The low-code UI brings business users directly into the application development cycle, enabling business logic to be incorporated into the integration application design from the beginning.

Additionally, Fuse 7 contains these new features:

  • Support for Spring Boot deployment for Fuse applications
  • 50 new application connectors (with a total of over 200 included connectors)
  • A new monitoring subsystem
  • Updated component versions, including new versions of Red Hat JBoss Enterprise Application Platform and Apache Camel
  • A new name (Red Hat Fuse, rather than Red Hat JBoss Fuse)

 

Additional Resources

An Introduction to Red Hat Application Migration Toolkit

Application migration and modernization can be a daunting task. Not only do you have to update legacy applications with newer libraries and APIs, but you often must also address new frameworks, infrastructures, and architectures, all while keeping resources dedicated to new features and versions.

Red Hat Application Migration Toolkit (RHAMT), formerly known as Windup, provides a set of utilities for easing this process. Applications can be analyzed through a command-line interface (CLI), a web-based interface, or directly inside Eclipse, allowing immediate modification of the source code.

These utilities let you quickly gain insight into thousands of applications simultaneously. They identify migration challenges and code or dependencies shared between applications, and they accelerate the code changes needed to run your applications on the latest middleware platforms.

Choosing the Right Distribution

You’ve read the introduction, possibly seen a video, and are eager to run your first application through the process. Where do you begin?

RHAMT provides a number of different distributions to meet your needs, and all include detailed reports that highlight migration issues with effort estimation. Each of these is summarized below.

CLI

CLI Download | Product Documentation

The CLI is a command-line tool that provides access to the reports without the overhead of the other tools. It includes a wide array of customization options, and allows you to finely tune the RHAMT analysis options or integrate with external automation tools.

Web Console

Web Console Download | Product Documentation

The web console is a web-based system that allows a team of users to assess and prioritize migration and modernization efforts. In addition, applications can be grouped into projects for analysis.

Eclipse Plug-in

Eclipse Plug-in Download | Product Documentation

The Eclipse plug-in provides assistance directly in Eclipse and Red Hat JBoss Developer Studio (JBDS) and allows developers to see migration issues directly in the source code. The Eclipse plug-in also provides guidance on resolving issues and offers automatic code replacement where possible.

Start by Choosing a Distribution

  • If you’re working on a team that needs concurrent access to the reports, or have a large number of applications to analyze, then choose the web console.
  • If you’re a developer familiar with Eclipse or JBDS and want live feedback, then start with the Eclipse plug-in.
  • Otherwise, we recommend starting with the CLI.

Follow the download link for the chosen distribution, and then examine the first few chapters in the appropriate guide to install and run the tool.

Analyzing an Application

You have a local installation of RHAMT, located at RHAMT_HOME, and an application you want to analyze. For the purposes of this blog, we’ll assume that you chose the CLI. With that out of the way, let’s get started.

The analysis is performed by calling `rhamt-cli` and passing it the application path along with any desired options, as in the following example.

$ bin/rhamt-cli --sourceMode --input /path/to/source_folder/ --output /path/to/output_folder/ --target eap7

The options are straightforward:

  • --sourceMode – indicates the input files are source files instead of compiled binaries
  • --input – path to the file or directory containing the files to be analyzed
  • --output – path to the directory that will contain the reports
  • --target – the technology to migrate to; used to determine the rules for the analysis
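When there are many applications to analyze, the same invocation can be scripted. Below is a minimal Python sketch; the RHAMT_HOME location, the application directories, and the report root are placeholders for your own paths:

```python
# Sketch: batch-building rhamt-cli invocations for several applications.
# All paths are placeholders; adjust RHAMT_HOME and the app list for your setup.
from pathlib import Path

RHAMT_HOME = Path("/opt/rhamt")  # placeholder install location

def rhamt_command(app_dir, report_root, target="eap7"):
    """Return the rhamt-cli argument list for one source-mode analysis."""
    app_dir = Path(app_dir)
    return [
        str(RHAMT_HOME / "bin" / "rhamt-cli"),
        "--sourceMode",
        "--input", str(app_dir),
        "--output", str(Path(report_root) / app_dir.name),  # one report dir per app
        "--target", target,
    ]

for app in ["/src/orders-app", "/src/billing-app"]:  # placeholder projects
    cmd = rhamt_command(app, "/reports")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment (and import subprocess) to run
```

Each application gets its own output directory, so the generated reports do not overwrite one another.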

Once the analysis finishes, the console displays the path to the report:

Report created: /path/to/output_folder/index.html
Access it at this URL: file:///path/to/output_folder/index.html

Rules

All of RHAMT’s distributions utilize the same rules engine to analyze the APIs, technologies, and architectures used by the application you plan to migrate. This engine extracts files from archives, decompiles classes, scans and classifies file types, analyzes XML and other file content, analyzes application code, and then generates the reports.

Each of these actions is handled by defined rules, which consist of a set of actions to perform once conditions are met. We’ll look more in-depth at how rules work, and creating your own custom rules, in a subsequent post, but for now know that RHAMT includes a comprehensive set of standard migration rules to get you started.
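The condition/action idea behind those rules can be illustrated in a few lines. To be clear, the toy sketch below is not the RHAMT rule format (that is covered in the subsequent post); it only mirrors the concept of an action firing when a condition matches:

```python
# Toy illustration of the condition -> action idea behind migration rules.
# This is NOT the RHAMT rule syntax; it only mirrors the concept.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[str], bool]   # fires when this matches a source line
    action: Callable[[str], str]       # what to report (or change)

rules = [
    Rule(
        condition=lambda line: "javax.ejb" in line,
        action=lambda line: f"migration hint: review EJB usage in: {line.strip()}",
    ),
]

source = ["import javax.ejb.Stateless;", "import java.util.List;"]
hints = [r.action(line) for line in source for r in rules if r.condition(line)]
# hints holds one migration hint, for the javax.ejb import only
```

A real engine applies many such rules across decompiled classes, XML, and other file content, then aggregates the results into the report.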

Just Lifting and Shifting?

Lifting and shifting, or rehosting, an application is one possible first step in migrating it. This process involves moving the application onto a different target runtime or infrastructure. A common end goal of this stage is to make the smallest number of changes to have the application running successfully in a cloud environment.

Once the application is successfully running in the cloud, the next step is to modernize the application so that it’s natively designed for a cloud environment. Instead of simply rehosting the application, this step involves redesigning it, moving unnecessary dependencies and libraries outside the application.

Whichever step you’re at, RHAMT assists by providing a set of cloud-ready rules. Once these are executed against the application, a detailed report indicates what changes should be made. For anyone familiar with using RHAMT to migrate middleware platforms, the process is similar: examine the report and adjust your application based on the feedback.

It’s that simple.

Summary

Wherever you are in the migration process, I’d recommend looking at RHAMT. It’s extremely simple to set up, and comes with a number of default rules to assist in any part of the migration and modernization process. In addition, RHAMT facilitates solving unique problems once: after a given solution has been identified, a custom rule can be created to capture it, vastly simplifying the migration process.

Stay tuned for our next update, where we discuss how to create custom rules to better utilize RHAMT in your environment.

References

https://developers.redhat.com/products/rhamt/overview/

https://access.redhat.com/documentation/en-us/red_hat_application_migration_toolkit/

Announcing AMQ Streams: Apache Kafka on OpenShift

Cross-posted from the Developers Blog. See the session at Red Hat Summit on Apache Kafka and AMQ data streams on Thursday, May 10, at 11:15.

We are excited to announce a Developer Preview of Red Hat AMQ Streams, a new addition to Red Hat AMQ, focused on running Apache Kafka on OpenShift.

Apache Kafka is a leading real-time, distributed messaging platform for building data pipelines and streaming applications.

Using Kafka, applications can:

  • Publish and subscribe to streams of records.
  • Store streams of records.
  • Process records as they occur.

Kafka makes all of this possible while being fast, horizontally scalable, and fault-tolerant. This makes Kafka suitable for a wide range of use cases, including website activity tracking, metrics and log aggregation, stream processing, event sourcing, and IoT telemetry. The forthcoming AMQ Streams product will provide Red Hat customers with a supported offering for running Apache Kafka on Red Hat Enterprise Linux and on Red Hat OpenShift Container Platform.

As more and more applications move to Kubernetes and OpenShift, it is increasingly important to be able to run the communication infrastructure on the same platform. OpenShift, as a highly scalable platform, is a natural fit for messaging technologies such as Kafka. But the goal of AMQ Streams is not just to run Apache Kafka on OpenShift; it is to make running and managing Apache Kafka “OpenShift native.”

Bringing the massive scalability of Kafka to an elastic platform like OpenShift involves resolving a number of technical challenges:

  • Kafka brokers are inherently stateful, because each has its own identity and data logs that must be preserved in the event of restarts.
  • Updating and scaling a Kafka cluster requires careful orchestration to ensure that messaging clients are unaffected and no records are lost.
  • By design, Kafka clients connect to all the brokers in the cluster. This is part of what gives Kafka its horizontal scaling and high availability, but when running on OpenShift, it means the Kafka cluster cannot simply be put behind a load-balanced service like other services. Instead, services have to be orchestrated in parallel with cluster scaling.
  • Running Kafka also requires running a Zookeeper cluster, which has many of the same challenges as running the Kafka cluster.

AMQ Streams simplifies the deployment, configuration, management, and use of Apache Kafka on OpenShift using the Operator concept, thereby enabling the inherent benefits of OpenShift, such as elastic scaling. An Operator is an application-specific controller that extends the Kubernetes API with domain-specific knowledge, making it easy to run and manage complex applications. Developers and administrators used to OpenShift’s declarative approach to resource provisioning can now enjoy those same benefits when working with Kafka, Kafka Connect, and Kafka topics.

AMQ Streams makes it easy to:

  • Deploy a complete Kafka cluster, at the scale that suits you, with the click of a button or with a single oc create command.
  • Deploy the Kafka topic right alongside the microservice that uses it.
  • Scale up the partitions of that topic.
  • Trivially scale the Kafka cluster up and down according to load.
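With the Operator in place, “a single oc create command” means applying a declarative resource describing the cluster you want. The resource schema has evolved across Strimzi and AMQ Streams releases (early previews used ConfigMaps; later versions use a Kafka custom resource), so treat the following as an illustrative sketch rather than a copy-paste manifest:

```yaml
# Illustrative Kafka cluster resource for the Strimzi/AMQ Streams Operator.
# Field names and apiVersion vary by release; check your version's examples.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3              # broker count; scale by editing and re-applying
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim # stateful brokers keep their logs across restarts
      size: 10Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
```

Applying a file like this with `oc create -f kafka-cluster.yaml` asks the Operator to create and then continuously reconcile the brokers and the ZooKeeper ensemble.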


AMQ Streams is optimized for running on OpenShift (as opposed to regular Kubernetes). Not only does it benefit from Red Hat’s years of experience and in-depth knowledge gained from developing and running OpenShift, but there is, for example, special support for building Kafka Connect clusters with the user’s own Kafka Connect plugins. Further, OpenShift-specific features and Red Hat product integrations are anticipated, with the overall aim being a seamless experience with full support from the OpenShift fabric up.

Unsurprisingly, because Red Hat is the world’s leading provider of open source technologies for the enterprise, AMQ Streams is fully open source and based on the Strimzi project.

The Developer Preview, which is being made available to interested customers this week, provides the foundation for running Kafka on OpenShift. Customers and other interested parties are invited to try it out, give us feedback, and, if desired, collaborate on the open source Strimzi project to help shape the future direction of AMQ Streams on OpenShift.

If you’re lucky enough to be in San Francisco this week for Red Hat Summit, then you can hear a lot more about AMQ Streams (and the broader Red Hat AMQ product) at the following sessions:

  • Running data-streaming applications with Kafka on OpenShift
    Tue May 8, 1:00 PM–3:00 PM, Moscone South 156
    Marius Bogoevici, Paolo Patierno, Gunnar Morling [L1099]
  • Red Hat AMQ overview and roadmap
    Wed May 9, 11:45 AM–12:30 PM, Moscone West 2011
    David Ingham, Jack Britton [S2802]
  • Introducing AMQ Streams—data streaming with Apache Kafka
    Thu May 10, 11:15 AM–12:00 PM, Moscone West 2014
    Paolo Patierno, David Ingham [S1775]
  • Red Hat AMQ Online—Messaging-as-a-Service
    Thu May 10, 1:45 PM–3:45 PM, Moscone South 214
    Ulf Lilleengen, Paolo Patierno [W1098]

We expect to release further previews as we iterate towards the general availability release, which is planned for later this year.

Please give it a try and let us know what you think.

 

Why are our Application Platform Partners succeeding in Digital Transformation?

Last year we set out to start the Application Platform Partner Initiative with the objective to enable deeper collaboration with partners focused on application platform and emerging technologies. We planned to create a collaborative go-to-market strategy between Red Hat and participating partner organizations focused on optimizing the value chain for application development and integration projects.

The Application Platform Partner Initiative focuses on application development and other emerging technology offerings, whose revenue increased 42% in our last fiscal year, to $624 million. Partners like the APPs are contributing to this growth, and we are happy to see the momentum continue and their trust in Red Hat as a strategic partner. What started as a pilot has developed into a fully fledged initiative with 28 partners across North America, who are as committed as we are to the role open source plays at the core of digital transformation.

As part of the success of this initiative, for the first time this year we have created the Application Platform Partner Pavilion at Red Hat Summit. Arctiq, Crossvale, Kovarus, Levvel, Li9, Lighthouse, OSI, Shadow-Soft, VeriStor, and Vizuri will join us this year in the pavilion. Don’t miss a chance to get to know the advanced solutions they have created on top of OpenShift and Red Hat Middleware products, which they will be showcasing at Red Hat Summit. Check out, for example, Arctiq Value Stream Mapping (VSM), Crossvale CloudBalancer for Red Hat® OpenShift, or Vizuri log aggregation solutions.

These partners are delivering a strong investment in enablement and commitment to their go-to-market alliance with Red Hat, including co-marketing and sales collaboration. As examples of planned activities, Arctiq is running a Modern Mobile App Development event and Crossvale an OpenShift roadshow.

Levvel has been an active participant in the APP program, doing joint webinars, customer workshops, and panel discussions to promote Red Hat emerging technologies. As a result, they have influenced and closed quite a few customers and have a long list of potential opportunities. Don’t forget to attend their upcoming event “App Transformation Workshop: Monoliths to Microservices”!

Shadow-Soft has been particularly focused on growing the customer base for our OpenShift and JBoss product families, with innovative sales and marketing strategies that are turning into a growing pipeline of opportunities, and on running events around digital transformation.

VeriStor recently joined the APP program and is rapidly growing its practices around OpenShift and Red Hat Middleware, including its DevOps and Agile consulting, services, and software development practices.

OSI, an international company with long experience with JBoss, is also growing in the US and has built an Agile Integration demo environment focused on the JBoss Fuse integration platform to support its customer engagements, including integration with cloud and on-premise systems. Try to attend their “Monoliths to Microservices: App Transformation Workshop” right after Summit.

Vizuri has been a Red Hat partner for over 10 years. Having delivered more than 120 JBoss-related engagements, their JBoss experience and expertise helps customers reduce risk and improve time-to-value, while avoiding project delays and unplanned downtime. You can’t miss their take on How To Manage Business Rules In A Microservices Architecture using OpenShift and JBoss BRMS.

Having recently joined the APP program, Astellent has heavily invested in enablement and marketing, while achieving exciting customer success. Read their views on the newly launched Red Hat Decision Manager 7.

Lighthouse has been helping businesses with the right mix of Red Hat’s public, on-premises, and hybrid cloud technologies, customizing them to fit their unique business needs. They have also been active with unique marketing events like the one with the Red Sox coming in May.

As you can see, APP partners are working closely with Red Hat to establish a sales, marketing, and delivery practice around Red Hat technologies, including Red Hat JBoss Middleware, Red Hat OpenShift, and Red Hat Mobile Application Platform.

In the words of John Bleuer, VP, Strategic Partners, North America, “I am thrilled that as year one of the program ends, the sophistication of our partner solutioning and delivery abilities has increased dramatically; many partners are working with us in industry and line of business (including healthcare, payments, and e-commerce); other partners are adding sophistication to their DevOps/automation practices with OpenShift, Jenkins, and Ansible, while others are honing their skills delivering app modernization and integration & BPM solutions in a cloud-native environment, containerized in OpenShift. It’s an exciting time at Red Hat.”

The market is looking to digital transformation initiatives to grow and maintain competitive advantage. Challenges range from confined platforms to complex architectures, from rigid processes to lack of agility. Together with our partners, we can play a critical role to help our customers overcome those to become growing, competitive organizations.

We hope to see you at Red Hat Summit checking them out, as well as at the Red Hat Summit Ecosystem Expo!

Red Hat OpenShift Application Runtimes: Delivering new productivity, performance, and stronger standards support with its latest sprint release

Red Hat OpenShift Application Runtimes is a collection of cloud-native application runtimes that are optimized to run on OpenShift, including Eclipse Vert.x, Node.js, Spring Boot, and WildFly Swarm. In addition, OpenShift Application Runtimes includes the Launch Service, which helps developers get up and running quickly in the cloud through a number of ready-to-run examples — or missions — that streamline developer productivity.

New Cache Booster with JBoss Data Grid integration

In our latest continuous delivery release, we have added a new cache mission that demonstrates how to use a cache to improve application response times. This mission shows you how to:

  1. Deploy a cache to OpenShift.
  2. Use a cache within an application.

The common use case for this booster is caching service result sets to decrease the latency associated with data access and to reduce the workload on backend services. Another very common use case is reducing the volume of messages sent across a distributed system.
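The caching pattern itself (serve repeated reads from a local store so they never reach the backend) can be sketched in a few lines. In the booster the cache is a shared JBoss Data Grid instance; the in-process decorator below is only an illustration of the idea, with `slow_lookup` standing in for a real backend call:

```python
# Sketch of the caching pattern: serve repeated reads from a cache
# instead of hitting the backend service every time.
import time
from functools import lru_cache

CALLS = {"backend": 0}  # counts how often the "backend" is actually hit

def slow_lookup(customer_id):
    """Stand-in for a real backend/data-access call."""
    CALLS["backend"] += 1
    time.sleep(0.05)  # simulated network + query latency
    return {"id": customer_id, "tier": "gold"}

@lru_cache(maxsize=1024)
def cached_lookup(customer_id):
    # Only cache misses fall through to the backend.
    return slow_lookup(customer_id)

cached_lookup(42)   # first read: goes to the backend
cached_lookup(42)   # repeat read: answered from the cache, no backend call
```

The latency win comes entirely from the second call skipping `slow_lookup`; a shared cache like Data Grid extends the same effect across multiple service instances.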

Continue reading “Red Hat OpenShift Application Runtimes: Delivering new productivity, performance, and stronger standards support with its latest sprint release”

#RHSummit: A Random Sampling of Awesome Sessions and Events Throughout the Week

There are around 500 sessions crammed into a speedy three-day schedule — so it is impossible to catch everything. (That’s one reason I’m promoting things like theCube streaming channel and recorded sessions on our YouTube channel — it’s a way to catch all the things you miss, even if you attend something every hour.)

If you haven’t already mapped out everything to see and do, the trailmaps are a great place to start to get the cream of every topic area.

I have created my own, unscientific list of the app dev and middleware-related sessions that caught my eye in the session catalog.

Stuff to Do

There are after-parties most nights, some on site at the Moscone Center and some at the conference hotels. Keep an eye on the signs in the lobbies — there are lists there. For those passionate about app development, middleware, and application architecture:

  • There will be a press panel including Mike Piech (VP of middleware) and Harry Mower (Red Hat Developers) in the Intercontinental Hotel Ballroom A. Space is limited, so it will also be broadcast live on theCube at 11am.
  • There is a rockstar cocktail hour on Wednesday evening, starting at 5:30 in Moscone West.
  • Also on Wednesday, Mike Piech and Mark Little will do an interview with theCube. Along with streaming live online, you can see it in person in the Moscone West lobby.
  • The Red Hat Summit wrap party is Thursday night at the Armory, starting at 7pm.

Integration

Trailmap: Integration

Distributed API management in a hybrid cloud environment
Tuesday, 10:30am, Moscone West 2003
Why it’s cool:
This is a real customer story on how they used API management with 3scale to manage thousands of services across a hybrid environment.

Practitioner’s guide to API strategy
Wednesday, 3:30pm, Moscone South 207
Why it’s cool:
Anything with “strategy” in it catches my eye. This session goes over why and how an API initiative should be structured to be successful.

Introducing AMQ streams – data streaming with Apache Kafka
Thursday, 11:15am, Moscone West 2014
Why it’s cool:
Microservices — like any kind of distributed computing system — come down to a question of managing data. This looks at some new technologies in AMQ so that the messaging platform can span a variety of data architectures, from IoT to enterprise integration to data streaming.

Decompose a monolith with microservices
Thursday, 3:00pm, Moscone West Level 2, Discovery Zone
Why it’s cool:
Another session hitting the same point — distributed architectures are complex. You need a clear understanding of interdependencies, integration points, and data (among many other things), and this session breaks down what you need to know and best practices for addressing it.

Future Technologies

There are a lot of separate, and separately interesting, technologies on the horizon. The ones that seem to stick out at this Summit revolve around serverless computing or Istio Service Mesh.

Containers, microservices, serverless: On being serverless or serverful
Tuesday, 10:30am, Moscone South 207
Why it’s cool:
Burr Sutter presenting plus serverless and microservices in the title.

Istio: Solving the challenges of hybrid cloud
Tuesday, 3:30pm, Moscone South 208
Why it’s cool:
This goes over how Istio can be used in an infrastructure that spans OpenShift containers, Kubernetes, and virtual machines. Managing data across environments effectively is a major challenge as applications and services need to be able to scale.

Low-risk mono to microservices: Istio, Teiid, and Spring Boot
Tuesday, 4:30pm, Moscone South 207
Why it’s cool:
This looks at how to break a monolith — fully recognizing that there are no clear-cut boundaries in a monolith and the interdependencies get messy.

An eventful tour from enterprise integration to serverless computing
Wednesday, 10:30am, Moscone South 207
Why it’s cool:
This looks at the different architectural designs and choices for event-driven computing, microservices, messaging, and data management. There isn’t a single perfect solution that works for everyone — each infrastructure has its own priorities and needs, and those have to be reflected in the architecture.

Internet of Things

Trailmap: IoT

Making IoT real across industries
Tuesday, 11:45am, Moscone West 2007
Why it’s cool:
Tell me a story. IoT is essentially a highly complex integration story, integrating not only applications but physical devices. Three different industries — technology, petroleum, and transportation — highlight different aspects of IoT as it was done in real life.

Internet of Things: Open, integrated, managed, and secure
Thursday, 3:00pm, Moscone West 2016
Why it’s cool:
How do you take a cool idea (or a business necessity) and make it happen in real life? This session includes common reference architectures for industrial IoT deployments.

Cloud-native and App Dev

Trailmap: Cloud-native apps

Containerizing applications — existing and new
Wednesday, 1:00pm, Moscone South 155
Why it’s cool:
Anything practical is immediately appealing. Most organizations aren’t dealing with a greenfield of applications, and this looks at how to move both cloud-native and legacy applications into a container.

Using machine learning, Red Hat BPM, and reactive microservices
Thursday, 11:15am, Moscone West 2004
Why it’s cool:
Business process automation, decision management, event processing — these tend to be treated as commodity actions, the things you have to do to make an application more responsive with less intervention. I like the approach of adding machine learning to process management, giving more intelligence to the overall architecture.

Java Awesomeness

Eclipse Microprofile and Wildfly Swarm
Tuesday, 11:45am, Moscone West 2011
Why it’s cool:
There isn’t a ton of Java on this list (I don’t really know why), but this is definitely a don’t-miss session for Java developers. WildFly Swarm is a way to create cloud-native, container-native Java applications. So … all your Java expertise, in a tiny container.

EE4J, MicroProfile, and the future of enterprise Java
Wednesday, 3:30pm, Moscone South 215
Why it’s cool:
There are probably a dozen think-pieces a year on the imminent death of Java — yet it continues to evolve across new architectures and to take on new technologies. This session takes a more optimistic (realistic?) view of the future of Java.

Microservices data patterns: CQRS and event sourcing
Thursday, 11:15am, Moscone South 208
Why it’s cool:
Microservices (as Christian Posta is fond of saying) represent a data challenge. The more distributed the data is, the more structured and clear the data architecture needs to be.

Crossing the chasm between traditional and agile planning
Tuesday, 1:45pm, Moscone West 2103
Why it’s cool:
Teams are people. Technology has to be developed and executed and maintained by people. Making any kind of shift, whether changing the planning structure or the infrastructure architecture or something else, requires an understanding of how to manage and inspire teams.