An Introduction to Red Hat Application Migration Toolkit

Application migration and modernization can be a daunting task. Not only do you have to update legacy applications with newer libraries and APIs, but you often must also address new frameworks, infrastructures, and architectures, all while keeping resources dedicated to new features and versions.

Red Hat Application Migration Toolkit (RHAMT), formerly known as Windup, provides a set of utilities for easing this process. Applications can be analyzed through a command-line interface (CLI), a web-based interface, or directly inside Eclipse, allowing immediate modification of the source code.

These utilities let you quickly gain insight into thousands of applications simultaneously. They identify migration challenges and code or dependencies shared between applications, and they accelerate the code changes needed to run your applications on the latest middleware platforms.

Choosing the Right Distribution

You’ve read the introduction, possibly seen a video, and are eager to run your first application through the process. Where do you begin?

RHAMT provides a number of different distributions to meet your needs, and all include detailed reports that highlight migration issues with effort estimation. Each of these is summarized below.

CLI

CLI Download | Product Documentation

The CLI is a command-line tool that provides access to the reports without the overhead of the other tools. It includes a wide array of customization options and allows you to finely tune the RHAMT analysis or integrate with external automation tools.

Web Console

Web Console Download | Product Documentation

The web console is a web-based system that allows a team of users to assess and prioritize migration and modernization efforts. In addition, applications can be grouped into projects for analysis.

Eclipse Plug-in

Eclipse Plug-in Download | Product Documentation

The Eclipse plug-in provides assistance directly in Eclipse and Red Hat JBoss Developer Studio (JBDS) and allows developers to see migration issues directly in the source code. The Eclipse plug-in also provides guidance on resolving issues and offers automatic code replacement where possible.

Start by Choosing a Distribution

  • If you’re working on a team that needs concurrent access to the reports, or you have a large number of applications to analyze, then choose the web console.
  • If you’re a developer familiar with Eclipse or JBDS and want live feedback, then start with the Eclipse plug-in.
  • Otherwise, we recommend starting with the CLI.

Follow the download link for the chosen distribution, and then examine the first few chapters in the appropriate guide to install and run the tool.

Analyzing an Application

You have a local installation of RHAMT, located at RHAMT_HOME, and an application you want to analyze. For the purposes of this blog, we’ll assume that you chose the CLI. With that out of the way, let’s get started.

The analysis is performed by calling `rhamt-cli` and passing in the application along with any desired options, as shown in the following example.

$ bin/rhamt-cli --sourceMode --input /path/to/source_folder/ --output /path/to/output_folder/ --target eap7

The options are straightforward:

  • --sourceMode – indicates the input files are source files instead of compiled binaries
  • --input – path to the file or directory containing the files to be analyzed
  • --output – path to the directory to contain the reports
  • --target – technology to migrate to; used to determine the rules for the analysis
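The same command works for compiled applications: drop --sourceMode and point --input at the archive instead. The paths below are placeholders for your own application and output locations.

$ bin/rhamt-cli --input /path/to/application.ear --output /path/to/output_folder/ --target eap7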

Once the analysis finishes, a message will be seen in the console indicating the path to the report.

Report created: /path/to/output_folder/index.html
Access it at this URL: file:///path/to/output_folder/index.html
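Since the report is plain HTML, you can open it in any browser; on a Linux desktop, for example:

$ xdg-open file:///path/to/output_folder/index.html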

Rules

All of RHAMT’s distributions utilize the same rules engine to analyze the APIs, technologies, and architectures used by the application you plan to migrate. This engine extracts files from archives, decompiles classes, scans and classifies file types, analyzes XML and other file content, analyzes application code, and then generates the reports.

Each of these actions is handled by defined rules, which consist of a set of actions to perform once certain conditions are met. We’ll take a more in-depth look at how rules work, and at creating your own custom rules, in a subsequent post, but for now know that RHAMT includes a comprehensive set of standard migration rules to get you started.
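As a small preview, custom rules can be supplied to the CLI alongside the standard set. The flag shown below comes from RHAMT’s Windup heritage; verify it against bin/rhamt-cli --help for your version, and treat the rules path as a placeholder.

$ bin/rhamt-cli --sourceMode --input /path/to/source_folder/ --output /path/to/output_folder/ --target eap7 --userRulesDirectory /path/to/custom_rules/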

Just Lifting and Shifting?

Lifting and shifting, or rehosting, an application is one possible first step in migrating it. This process involves moving the application onto a different target runtime or infrastructure. A common end goal of this stage is to make the smallest number of changes to have the application running successfully in a cloud environment.

Once the application is successfully running in the cloud, the next step is to modernize the application so that it’s natively designed for a cloud environment. Instead of simply rehosting the application, this step involves redesigning it, moving unnecessary dependencies and libraries outside the application.

Whichever step you’re at, RHAMT assists with both by providing a set of cloud-ready rules. Once these rules are executed against the application, a detailed report is created that indicates what changes should be made. For anyone familiar with using RHAMT to migrate middleware platforms, the process is similar – examine the report and adjust your application based on the feedback.
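From the CLI, the cloud-ready rules are selected like any other target. Assuming the target identifier is cloud-readiness (check the documentation or the CLI help for the exact name), the command looks like this, with placeholder paths:

$ bin/rhamt-cli --input /path/to/application.ear --output /path/to/output_folder/ --target cloud-readiness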

It’s that simple.

Summary

Wherever you are in the migration process, I’d recommend looking at RHAMT. It’s extremely simple to set up, and it comes with a number of default rules to assist in any part of the migration and modernization process. In addition, RHAMT lets you solve a unique problem only once: after a solution has been identified, a custom rule can be created to capture it, vastly simplifying the rest of the migration.

Stay tuned for our next update, where we discuss how to create custom rules to better utilize RHAMT in your environment.

References

https://developers.redhat.com/products/rhamt/overview/

https://access.redhat.com/documentation/en-us/red_hat_application_migration_toolkit/

Announcing AMQ Streams: Apache Kafka on OpenShift

Cross-posted from the Developers Blog. See the session at Red Hat Summit on Apache Kafka and AMQ data streams on Thursday, May 10, at 11:15.

We are excited to announce a Developer Preview of Red Hat AMQ Streams, a new addition to Red Hat AMQ, focused on running Apache Kafka on OpenShift.

Apache Kafka is a leading real-time, distributed messaging platform for building data pipelines and streaming applications.

Using Kafka, applications can:

  • Publish and subscribe to streams of records.
  • Store streams of records.
  • Process records as they occur.

Kafka makes all of this possible while being fast, horizontally scalable, and fault tolerant. This makes Kafka suitable for a large range of use cases, including website activity tracking, metrics and log aggregation, stream processing, event sourcing, and IoT telemetry. The forthcoming AMQ Streams product will provide Red Hat customers with a supported offering for running Apache Kafka on Red Hat Enterprise Linux and on Red Hat OpenShift Container Platform.

As more and more applications move to Kubernetes and OpenShift, it is increasingly important to be able to run the communication infrastructure on the same platform. OpenShift, as a highly scalable platform, is a natural fit for messaging technologies such as Kafka. But the goal of AMQ Streams is not simply to run Apache Kafka on OpenShift; it is to make running and managing Apache Kafka “OpenShift native.”

Uniting the massive scalability of Kafka with an elastic platform like OpenShift involves resolving a number of technology challenges:

  • Kafka brokers are inherently stateful, because each has its own identity and data logs that must be preserved in the event of restarts.
  • Updating and scaling a Kafka cluster requires careful orchestration to ensure that messaging clients are unaffected and no records are lost.
  • By design, Kafka clients connect to all the brokers in the cluster. This is part of what gives Kafka its horizontal scaling and high availability, but when running on OpenShift, this means the Kafka cluster cannot simply be put behind a load-balanced service like other services. Instead, services have to be orchestrated in parallel with cluster scaling.
  • Running Kafka also requires running a Zookeeper cluster, which has many of the same challenges as running the Kafka cluster.

AMQ Streams simplifies the deployment, configuration, management, and use of Apache Kafka on OpenShift using the Operator concept, thereby enabling the inherent benefits of OpenShift, such as elastic scaling. An Operator is an application-specific controller that extends the Kubernetes API with domain-specific knowledge, making it easy to run and manage complex applications. Developers and administrators used to OpenShift’s declarative approach to resource provisioning can now enjoy those same benefits when working with Kafka, Kafka Connect, and Kafka topics.

AMQ Streams makes it easy to:

  • Deploy a complete Kafka cluster, at the scale that suits you, with the click of a button or with a single oc create command (see the example after this list).
  • Deploy the Kafka topic right alongside the microservice that uses it.
  • Scale up the partitions of that topic.
  • Trivially scale the Kafka cluster up and down according to load.
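As a rough sketch of that single command, assuming a cluster definition file shipped with (or adapted from) the Developer Preview distribution; the file name here is a placeholder:

$ oc create -f my-kafka-cluster.yaml

The Operator watches for that resource and then creates and maintains the brokers, and the Zookeeper ensemble they depend on, to match the declared configuration.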


AMQ Streams is optimized for running on OpenShift (as opposed to regular Kubernetes). Not only does it benefit from Red Hat’s years of experience and in-depth knowledge gained from developing and running OpenShift, but there is, for example, special support for building Kafka Connect clusters with the user’s own Kafka Connect plugins. Further, OpenShift-specific features and Red Hat product integrations are anticipated, with the overall aim being a seamless experience with full support from the OpenShift fabric up.

Unsurprisingly, because Red Hat is the world’s leading provider of open source technologies for the enterprise, AMQ Streams is fully open source and based on the Strimzi project.

The Developer Preview, which is being made available to interested customers this week, provides the foundation for running Kafka on OpenShift. Customers and other interested parties are invited to try it out, give us their feedback, and, if desired, collaborate on the open source Strimzi project to help shape the future direction of AMQ Streams on OpenShift.

If you’re lucky enough to be in San Francisco this week for Red Hat Summit, then you can hear a lot more about AMQ Streams (and the broader Red Hat AMQ product) at the following sessions:

  • Running data-streaming applications with Kafka on OpenShift
    Tue May 8, 1:00 PM–3:00 PM, Moscone South 156
    Marius Bogoevici, Paolo Patierno, Gunnar Morling [L1099]
  • Red Hat AMQ overview and roadmap
    Wed May 9, 11:45 AM–12:30 PM, Moscone West 2011
    David Ingham, Jack Britton [S2802]
  • Introducing AMQ Streams—data streaming with Apache Kafka
    Thu May 10, 11:15 AM–12:00 PM, Moscone West 2014
    Paolo Patierno, David Ingham [S1775]
  • Red Hat AMQ Online—Messaging-as-a-Service
    Thu May 10, 1:45 PM–3:45 PM, Moscone South 214
    Ulf Lilleengen, Paolo Patierno [W1098]

We expect to release further previews as we iterate towards the general availability release, which is planned for later this year.

Please give it a try and let us know what you think.

 

Why are our Application Platform Partners succeeding in Digital Transformation?

Last year we set out to start the Application Platform Partner Initiative with the objective of enabling deeper collaboration with partners focused on application platform and emerging technologies. We planned to create a collaborative go-to-market strategy between Red Hat and participating partner organizations, focused on optimizing the value chain for application development and integration projects.

The Application Platform Partner Initiative focuses on application development-related and other emerging technology offerings, revenue from which increased 42% in our last fiscal year, to $624 million. Partners like the APPs are contributing to this growth, and we are happy to see the momentum continuing and their trust in Red Hat as a strategic partner. What started out as a pilot has developed into a fully fledged initiative with 28 partners across North America, who are as committed as we are to the role open source plays at the core of digital transformation.

As part of the success of this initiative, for the first time this year we have created the Application Platform Partner Pavilion at Red Hat Summit. Arctiq, Crossvale, Kovarus, Levvel, Li9, Lighthouse, OSI, Shadow-Soft, VeriStor and Vizuri will join us this year in the pavilion. Don’t miss a chance to get to know the advanced solutions they have created on top of OpenShift and Red Hat Middleware products, which they will be showcasing at Red Hat Summit. Check out, for example, Arctiq Value Stream Mapping (VSM), Crossvale CloudBalancer for Red Hat® OpenShift, or Vizuri’s log aggregation solutions.

These partners are making a strong investment in enablement and a commitment to their go-to-market alliance with Red Hat, including co-marketing and sales collaboration. As some examples of planned activities, Arctiq is running a Modern Mobile App Development event and Crossvale an OpenShift roadshow.

Levvel has been an active participant in the APP program, doing joint webinars, customer workshops, and panel discussions to promote Red Hat emerging technologies. As a result, they have influenced and closed quite a few customers and have a long list of potential opportunities. Don’t forget to attend their upcoming event, “App Transformation Workshop: Monoliths to Microservices”!

Shadow-Soft has been particularly focused on growing the customer base with our OpenShift and JBoss product family with innovative sales and marketing strategies that are turning into a growing pipeline of opportunities, and running events around digital transformation.

VeriStor recently joined the APP program and is rapidly growing its practices around OpenShift and Red Hat Middleware, such as its DevOps and Agile consulting, services, and software development practices.

OSI, an international company with long experience with JBoss, is also growing in the US and has worked on an Agile Integration demo environment focused on the JBoss Fuse integration platform to support their customer engagements, including integration with cloud and on-premise systems. Try to attend their “Monoliths to Microservices: App Transformation Workshop” right after Summit.

Vizuri has been a Red Hat partner for over 10 years. Having delivered more than 120 JBoss-related engagements, their JBoss experience and expertise helps customers reduce risk and improve time-to-value, while avoiding project delays and unplanned downtime. You can’t miss their take on How To Manage Business Rules In A Microservices Architecture using OpenShift and JBoss BRMS.

Having recently joined the APP program, Astellent has heavily invested in enablement and marketing, while achieving exciting customer success. Read their views on the newly launched Red Hat Decision Manager 7.

Lighthouse has been helping businesses with the right mix of Red Hat’s public, on-premises, and hybrid cloud technologies, customizing them to fit their unique business needs. They have also been active with unique marketing events like the one with the Red Sox coming in May.

As you can see, APP partners are working closely with Red Hat to establish a sales, marketing, and delivery practice around Red Hat technologies, including Red Hat JBoss Middleware, Red Hat OpenShift, and Red Hat Mobile Application Platform.

In the words of John Bleuer, VP, Strategic Partners, North America: “I am thrilled that as year one of the program ends, the sophistication of our partner solutioning and delivery abilities has increased dramatically; many partners are working with us in industry and line of business (including healthcare, payments, and e-commerce); other partners are adding sophistication into the DevOps / automation practices with OpenShift, Jenkins, and Ansible, while others are honing their skills delivering app modernization and integration & BPM solutions in a cloud native environment, containerized in OpenShift. It’s an exciting time at Red Hat.”

The market is looking to digital transformation initiatives to grow and maintain competitive advantage. Challenges range from confined platforms to complex architectures, from rigid processes to lack of agility. Together with our partners, we can play a critical role in helping our customers overcome these challenges and become growing, competitive organizations.

We hope to see you at Red Hat Summit checking them out, as well as at the Red Hat Summit Ecosystem Expo!

Red Hat OpenShift Application Runtimes: Delivering new productivity, performance, and stronger standards support with its latest sprint release

Red Hat OpenShift Application Runtimes is a collection of cloud-native application runtimes that are optimized to run on OpenShift, including Eclipse Vert.x, Node.js, Spring Boot, and WildFly Swarm. In addition, OpenShift Application Runtimes includes the Launch Service, which helps developers get up and running quickly in the cloud through a number of ready-to-run examples — or missions — that streamline developer productivity.

New Cache Booster with JBoss Data Grid integration

In our latest continuous delivery release, we have added a new cache mission that demonstrates how to use a cache to improve the response time of applications. This mission shows you how to:

  1. Deploy a cache to OpenShift.
  2. Use a cache within an application.

The common use case for this booster is to cache service result sets to decrease the latency associated with data access, as well as to reduce the workload on backend services. Another very common use case is to reduce the volume of data sent across a distributed system.

Continue reading “Red Hat OpenShift Application Runtimes: Delivering new productivity, performance, and stronger standards support with its latest sprint release”

#RHSummit: A Random Sampling of Awesome Sessions and Events Throughout the Week

There are around 500 sessions crammed into a speedy three-day schedule — so it is impossible to catch everything. (That’s one reason that I’m promoting things like theCube streaming channel and the recorded sessions on our YouTube channel — it’s a way to catch all the things you might miss, even if you attend something every hour.)

If you haven’t already mapped out everything to see and do, the trailmaps are a great place to start to get the cream of every topic area.

I have created my own, unscientific list of the app dev and middleware-related sessions that caught my eye in the session catalog.

Stuff to Do

There are after-parties most nights, some on site at the Moscone Center and some at the conference hotels. Keep an eye on the signs in the lobbies — there are lists there. For those passionate about app development, middleware, and application architecture:

  • There will be a press panel including Mike Piech (VP of middleware) and Harry Mower (Red Hat Developers) in the Intercontinental Hotel Ballroom A. Space is limited, so it will also be broadcast live on theCube at 11am.
  • There is a rockstar cocktail hour on Wednesday evening, starting at 5:30 in Moscone West.
  • Also on Wednesday, Mike Piech and Mark Little will do an interview with theCube. Along with streaming live online, you can see it in person in the Moscone West lobby.
  • The Red Hat Summit wrap party is Thursday night at the Armory, starting at 7pm.

Integration

Trailmap: Integration

Distributed API management in a hybrid cloud environment
Tuesday, 10:30am, Moscone West 2003
Why it’s cool:
This is a real customer story on how they used API management with 3scale to manage thousands of services across a hybrid environment.

Practitioner’s guide to API strategy
Wednesday, 3:30pm, Moscone South 207
Why it’s cool:
Anything with “strategy” in it catches my eye. This session goes over why and how an API initiative should be structured to be successful.

Introducing AMQ streams – data streaming with Apache Kafka
Thursday, 11:15am, Moscone West 2014
Why it’s cool:
Microservices, like any kind of distributed computing system, come down to a question of managing data. This looks at some new technologies in AMQ that let the messaging platform span a variety of data architectures, from IoT to enterprise integration to data streaming.

Decompose a monolith with microservices
Thursday, 3:00pm, Moscone West Level 2, Discovery Zone
Why it’s cool:
 Another session hitting the same point — distributed architectures are complex. You need a clear understanding of interdependencies, integration points, and data (among many other things), and this session breaks down what you need to know and best practices for addressing it.

Future Technologies

There are a lot of separate, and separately interesting, technologies on the horizon. The ones that seem to stick out at this Summit revolve around serverless computing or Istio Service Mesh.

Containers, microservices, serverless: On being serverless or serverful
Tuesday, 10:30am, Moscone South 207
Why it’s cool:
Burr Sutter presenting plus serverless and microservices in the title.

Istio: Solving the challenges of hybrid cloud
Tuesday, 3:30pm, Moscone South 208
Why it’s cool:
 This goes over how Istio can be used in an infrastructure that spans OpenShift containers, Kubernetes, and virtual machines. Managing data across environments effectively is a major challenge as applications and services need to be able to scale.

Low-risk mono to microservices: Istio, Teiid, and Spring Boot
Tuesday, 4:30pm, Moscone South 207
Why it’s cool:
This looks at how to break a monolith — fully recognizing that there are no clear-cut boundaries in a monolith and the interdependencies get messy.

An eventful tour from enterprise integration to serverless computing
Wednesday, 10:30am, Moscone South 207
Why it’s cool:
This looks at the different architectural designs and choices for event-driven computing, microservices, messaging, and data management. There isn’t a single perfect solution that works for everyone — each infrastructure has its own priorities and needs, and those have to be reflected in the architecture.

Internet of Things

Trailmap: IoT

Making IoT real across industries
Tuesday, 11:45am, Moscone West 2007
Why it’s cool:
Tell me a story. IoT is essentially a highly complex integration story, integrating not only applications but physical devices. Three different industries — technology, petroleum, and transportation — highlight different aspects of IoT as it was done in real life.

Internet of Things: Open, integrated, managed, and secure
Thursday, 3:00pm, Moscone West 2016
Why it’s cool:
How do you take a cool idea (or a business necessity) and make it happen in real life? This session includes common reference architectures for industrial IoT deployments.

Cloud-native and App Dev

Trailmap: Cloud-native apps

Containerizing applications — existing and new
Wednesday, 1:00pm, Moscone South 155
Why it’s cool:
Anything practical is immediately appealing. Most organizations aren’t dealing with a greenfield of applications, and this looks at how to move both cloud-native and legacy applications into a container.

Using machine learning, Red Hat BPM, and reactive microservices
Thursday, 11:15am, Moscone West 2004
Why it’s cool:
Business process automation, decision management, event processing — these tend to be treated as commodity actions. The things you have to do to get an application to be more responsive with less intervention. I like the approach of adding machine learning to process management, giving more intelligence to the overall architecture.

Java Awesomeness

Eclipse Microprofile and Wildfly Swarm
Tuesday, 11:45am, Moscone West 2011
Why it’s cool:
There isn’t a ton of Java on this list (I don’t really know why), but this is definitely a don’t-miss session for Java developers. WildFly Swarm is a way to create cloud-native, container-native Java applications. So … all your Java expertise, in a tiny container.

EE4J, MicroProfile, and the future of enterprise Java
Wednesday, 3:30pm, Moscone South 215
Why it’s cool:
 There are probably a dozen think-pieces a year on the imminent death of Java — yet it continues to evolve across new architectures and to take on new technologies. This session takes a more optimistic (realistic?) view of the future of Java.

Microservices data patterns: CQRS and event sourcing
Thursday, 11:15am, Moscone South 208
Why it’s cool:
Microservices (as Christian Posta is fond of saying) represent a data challenge. The more distributed the data is, the more structured and clear the data architecture needs to be.

Crossing the chasm between traditional and agile planning
Tuesday, 1:45pm, Moscone West 2103
Why it’s cool:
Teams are people. Technology has to be developed and executed and maintained by people. Making any kind of shift, whether changing the planning structure or the infrastructure architecture or something else, requires an understanding of how to manage and inspire teams.

 

#RHSummit: Plug in (whether you’re here or not)

Red Hat Summit — the unofficial start of summer technology discussions and the official conference for all things open source — begins tomorrow. For attendees, there are a handful of links to keep handy so that you can hit the sessions, booths, demos, and after-hours events that make Summit so awesome.

Even if you can’t attend this year (or since, realistically, no human being can attend everything going on at Summit), here is a round-up of social media channels and people to follow so you can dip your toe into the Summit experience.

**NEW** Interesting Personal Accounts

A lot of the people presenting at Summit or working in the DevZone and Partner booths have a social media presence all their own. It’s definitely worth tracking what they’re doing at Summit, and after. A handful:

Live-streaming Summit

All of the Red Hat Summit general sessions will be live-streamed on theCube, along with interviews and round tables throughout the day. Previous years’ Summits are also available in theCube archives or on our Red Hat Summit YouTube channel.

Live streams of note:

  • Morning general sessions, Tuesday and Wednesday, 8:30am
  • Press conference, with live Q&A, Tuesday, 11am
  • Afternoon general sessions, Tuesday and Wednesday, 1:45pm
  • The Future of Java interview with Mike Piech and Mark Little, Wednesday
  • Closing general session, Thursday, 8:30

Summit-Specific Accounts

Follow the #RHSummit hashtag — use it to be part of the conversation.

Middleware-Related Social Media

Main Red Hat Sites

Jakarta EE is officially out

Jakarta EE is officially out! OK, given the amount of publicity and evangelising we and others have done around EE4J and Jakarta EE over the past few months, you would be forgiven for thinking it was already the case, but it wasn’t … until today!

I cannot stress enough how important this is to our industry. The number of Java™ developers globally is estimated at over 14 million. The Java EE market is estimated at a high multi-billion dollar value to the industry. Yes, there are other languages and other frameworks out there, but none of them have yet made the impact Java™ and Java EE have made over the years. Of course, Java EE was not perfect for a variety of reasons, but if you consider how much of an impact it has had on the industry despite known and debated limitations, just imagine how much it can bring in the years ahead if it is improved.

With the release of Jakarta EE, we all have a chance to collaborate and build on the good things it inherits, whilst at the same time working to evolve those pieces which are no longer relevant, or perhaps never were quite what was needed. Working within the open processes of the Eclipse Foundation, vendors, Java™ communities, individuals, and others are all able to interact as peers, with no one vendor holding a higher role than another. We’ve seen this exact same process work extremely well in a relatively short period of time with Eclipse MicroProfile, and I believe Jakarta EE can do at least as well.

When talking about Java EE, and now Jakarta EE, some people focus only on the technologies. Fortunately, those of us who have been in the open source world long enough appreciate that the community is just as important. With Jakarta EE, all of us involved in working towards the release hope that we can use it as a catalyst to bring together often disparate Java™ communities under a single banner. Too often, Java EE has been a divisive topic for some vendors and some communities, resulting in fractures, with groups often working on the same problems but pulling in different directions. If Jakarta EE does only one thing, and that is bringing everyone together to collaborate, then I would still deem it a success!

I’ll finish by discussing why Red Hat® has been helping to lead this effort along with others. I can summarise this pretty easily: enterprise Java™ remains critical to our customers and communities, and we believe that despite the increase of other languages and frameworks, it should remain so for many years to come. Red Hat, and JBoss® before it, has contributed to J2EE™, Java EE, and Eclipse MicroProfile for years, and we believe that sharing our experiences and working on open source implementations is important for the industry as a whole, no matter what language you may be using. We believe it’s important to leverage Jakarta EE in the cloud and to a wider range of communities than in the past. We’re here to stay and will continue to help lead!

Onward!

To learn more, join these upcoming live sessions:

Red Hat makes Node.js a first-class citizen on OpenShift with RHOAR, by Conor O’Neill, nearForm

Red Hat’s offering in cloud-native application development has just taken another step forward with the announcement of supported Node.js. Conor O’Neill from our partner nearForm shares his thoughts on the role that Node.js and Red Hat OpenShift Application Runtimes (RHOAR) will play in Red Hat’s market leadership in cloud-native application development, modernization, and migration.

Read more here: Red Hat makes Node.js a first-class citizen on OpenShift with RHOAR, by Conor O’Neill, nearForm

Luis I. Cortes. Senior Manager, Middleware Partner Strategy – @licortes_redhat

Learning Process Driven Application Development with JBoss BPM

Are you interested in an introduction to the concepts of business process management (BPM)?

Do you want to learn how your business can leverage process-driven application delivery?

Are you looking for an easy-to-understand guide to mastering Red Hat JBoss BPM Suite tooling?

Do you want a step-by-step introduction to setting up JBoss BPM Suite, followed by coverage of practical and important topics like data modeling, designing business rules and processes, detailed real-world examples, and tips for testing?

For the last few years I’ve been working on putting my years of experience with JBoss BPM Suite and the community projects Drools and jBPM together in one easy-to-understand book.

In 2017, Red Hat put the first chapter online for free, and literally thousands downloaded it, starting their journey on the road to delivering process-driven applications with JBoss BPM Suite. Many of you have reached out over the years to ask about the completion of this book and where you can get it.

The good news is that the book is now available, and Red Hat is providing ebook downloads for free!

Let’s look at how this works, shall we?

Continue reading “Learning Process Driven Application Development with JBoss BPM”

Cloud Native Application Development – Adopt or Fail

In today’s digital world, software strategy is central to business strategy. To stay competitive, organizations need customized software applications to meet their unique needs — from customer engagements to new product and services development. Drawn-out development projects are no longer acceptable, given business demands. Therefore, the need to speed up application development, testing, delivery, and deployment is no longer optional but a must-have competency.   

At the same time that developers are confronting this challenge to deliver solutions more quickly, they are also facing the most diverse technology ecosystem in the history of computing.  To address this challenge, development teams must modernize architecture, infrastructure, and processes to deliver higher-quality applications with greater agility.

Cloud native development is an approach to building and running applications that fully exploits the advantages of the cloud computing model. Cloud native development is multidimensional, involving architecture, infrastructure, and processes based upon four key tenets:

  1. Services-based architecture: microservices or any modular, loosely coupled model that allows independent scalability, flexibility of maintenance, and polyglot language runtimes.
  2. Containers and Docker images: the deployment unit and self-contained execution environment, providing consistency and portability across cloud infrastructures.
  3. DevOps automation: implementing the processes, practices, and instrumentation that carry applications from development through test to deployment.
  4. API-based design: the only communication allowed is via service interface calls over the network; no direct linking, no direct reads of another team’s data store, and no shared-memory model, taking an outside-in perspective.

Continue reading “Cloud Native Application Development – Adopt or Fail”