Five features of JBoss EAP that will help get you production ready

JBoss Enterprise Application Platform (EAP) 7 has been out since June, and if you build and deliver using a Java EE environment and haven’t yet upgraded to EAP 7, it’s time to make the jump.

Here’s a look at what’s new in JBoss EAP 7, what has changed since JBoss EAP 6, and how to get the most out of JBoss EAP 7 as your Java EE 7 server.

Overview

JBoss EAP 7 is based on WildFly Application Server 10, which provides a complete implementation of the Java EE 7 Full and Web Profile standards. WildFly 10 does much to simplify modern application delivery based on containers and microservices.

JBoss EAP 7 features certified support for Java EE 7 and Java SE 8. The WildFly integration also brings experimental support for Java 9, which is currently available as development snapshots and is expected for release this fall.

The JBoss EAP 7 release is available for download from JBoss.org.

Announcing Red Hat JBoss Data Virtualization 6.3

We are excited to announce the release of Red Hat JBoss Data Virtualization 6.3.

Data integration has always been challenging. Modern technology trends like big data and cloud, coupled with the need for more real-time data access and analysis, are adding to the complexity enterprises face today.

JBoss Data Virtualization is a data access and integration solution that offers an alternative to physical data consolidation and data delivery by allowing flexibility and agility in data access. Data virtualization creates a single logical view of data from varied data sources, including transactional systems, relational databases, cloud data stores, and big data stores.
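
To make the idea concrete, here is a minimal sketch of what consuming a virtual view can look like from Java. JBoss Data Virtualization exposes virtual databases over standard JDBC via the underlying Teiid driver; the VDB name, host, credentials, and the CustomerSummary view below are hypothetical, and the details of your deployment will differ.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CustomerViewQuery {
        public static void main(String[] args) throws Exception {
            // Hypothetical VDB "CustomerVDB" whose CustomerSummary view joins a CRM
            // database with a cloud data store behind the scenes; requires the Teiid
            // JDBC driver on the classpath.
            String url = "jdbc:teiid:CustomerVDB@mm://dv-host.example.com:31000";
            try (Connection conn = DriverManager.getConnection(url, "dvuser", "dvpass");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT name, total_orders FROM CustomerSummary WHERE region = 'EMEA'")) {
                while (rs.next()) {
                    // The client sees one logical table, wherever the underlying data lives.
                    System.out.println(rs.getString("name") + " -> " + rs.getInt("total_orders"));
                }
            }
        }
    }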

What’s New in JDV 6.3

JBoss Data Virtualization 6.3 adds capabilities to help organizations integrate big data and provide high-performance data access in real time. The release notes cover the full set of features and enhancements for JBoss Data Virtualization 6.3; among them are new features for expanded connectivity, enhanced security, and improved developer productivity.

Winner! Best Data Virtualization Solution

Database Trends and Applications (DBTA) announced its data solutions winners earlier this August, and one of our middleware products was honored! Red Hat JBoss Data Virtualization won best data virtualization solution.

DBTA’s awards were readers’ choice, meaning that it was the community of data virtualization users who voted for JBoss Data Virtualization. According to DBTA’s announcement, the hallmarks of a winning data virtualization solution include three characteristics:

  • Agile development
  • A secure virtual data layer
  • Real-time data access and provisioning

In other words, it’s a combination of agility, security, and speed.

Data virtualization provides a layer over existing, separate data sources that integrates the data in those sources without having to manually copy or convert it. That can deliver a number of business benefits, including reduced data duplication, improved data consistency, and lower architectural complexity. Data virtualization provides a comprehensive view of, and access to, data without requiring you to replace existing applications.
Find out more about Red Hat JBoss Data Virtualization here.

Upcoming Webinar: Highly Available and Horizontally Scalable Complex Event Processing

What if you could take the streams of information coming into your business and use them to recognize potential opportunities or issues almost immediately? Fabio Marinelli (senior architect) and Syed Rasheed (product marketing manager) will be conducting a webinar on complex event processing, which helps you recognize important patterns within your data streams in near real time.

There are two parts to understanding complex event processing. First is looking at the data itself, from a variety of sources (such as social media, devices, web or mobile applications, monitoring applications). Being able to take different types of information from unrelated sources and get a holistic view is important. The second part is designing an architectural framework that supports that level of data and processing. This webinar looks at an in-memory data grid as a complex event processing engine and using a distributed architecture for dynamic scalability.
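
As a much smaller illustration of the first part, the pattern-recognition idea, here is a plain-Java sketch (no specific CEP engine) of one classic pattern: flagging a burst of events inside a sliding time window. The class name and threshold are made up for illustration; a real engine such as the one covered in the webinar correlates events across many sources and scales the processing out across a distributed architecture.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Toy detector: flags a burst of error events inside a sliding time window. */
    public class ErrorBurstDetector {
        private final Deque<Instant> recentErrors = new ArrayDeque<>();
        private final Duration window;
        private final int threshold;

        public ErrorBurstDetector(Duration window, int threshold) {
            this.window = window;
            this.threshold = threshold;
        }

        /** Feed one error event; returns true when the pattern is recognized. */
        public boolean onErrorEvent(Instant timestamp) {
            recentErrors.addLast(timestamp);
            // Drop events that have slid out of the time window.
            Instant cutoff = timestamp.minus(window);
            while (!recentErrors.isEmpty() && recentErrors.peekFirst().isBefore(cutoff)) {
                recentErrors.removeFirst();
            }
            return recentErrors.size() >= threshold;
        }
    }

For example, new ErrorBurstDetector(Duration.ofSeconds(60), 5) flags a source that produces five errors within a minute.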

Registration is open. The webinar is August 23 at 11:00am Eastern Time (US).

Fun Follow Up: Webinar Q&A

I will collect any questions asked during the webinar, and I’ll do a follow-up post on Friday, August 26, to try to capture the most interesting questions that arise.

What Are You Getting from (Big) Data?

Gartner has a term for information that is routinely gathered but not really used: dark data. This is information collected for a direct purpose (like processing an online transaction) but never really used for anything else. By IDC estimates, dark data represents about 90% of the data collected and stored by organizations.

The Internet of Things (specifically) and digital transformation (more generally) are business initiatives that try to harness that dark data by incorporating new or previously untapped data streams into larger business processes.

Big data refers to that new influx of data. The “big” adjective can be a bit misleading; it doesn’t necessarily mean massive amounts of data. Some organizations may be dealing with petabytes, while others may be dealing with only gigabytes. What makes it “big” is not a given amount of data, but rather the scale of the increase over previous data streams.

How to get started with JBoss BPM

If you are evaluating, exploring, or just plain interested in learning more about Business Process Management (BPM), then read on, because this is what you have been waiting for.
While there are quite a few resources online, they are often focused on constantly changing community project code, or are so disjointed that it is very difficult to find a coherent learning path.
No more.
Just a few months back, in June, the early access program for Effective Business Process Management with JBoss BPM kicked off. The book lays out a coherent learning path to get you started with BPM, with JBoss BPM Suite as the open source BPM solution of choice.
The first chapters are already online, so you can read along as the book is written and interact with the author in the online forums.

Deal of the Day

Today only, Effective Business Process Management with JBoss BPM is half price, so head on over and grab yourself a copy using the code dotd081716au to get started with JBoss BPM Suite.

The deal goes live at midnight US ET and will stay active for about 48 hours, running longer than a single day to account for time zone differences.

If you would like to help out with socializing this news, here is a tweet you can cut and paste into your social networks:

Intro to In-Memory Data Grids

Some of the biggest technology trends aren’t necessarily about doing something new. Things like cloud computing (as an environment) and design patterns for the Internet of Things and mobile applications (as business drivers) are building on existing conceptual foundations — virtualization, centralized databases, client-based applications. What is new is the scale of these applications and the performance expected from them.

That demand for performance and scalability has inspired an architectural design called distributed computing. Technologies within that larger umbrella use distributed physical resources to create a shared pool for a given service.

One of those technologies is the subject of this post: in-memory data grids. An in-memory data grid takes the concept of a centralized, single database and breaks it into numerous individual nodes that work together as a grid. Gartner defines an in-memory data grid as “a distributed, reliable, scalable and … consistent in-memory NoSQL data store[,] shareable across multiple and distributed applications.” That nails the purpose of distributed computing services: scalable, reliable, and shareable across multiple applications.
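
For a feel of what “numerous individual nodes working together as a grid” looks like in code, here is a minimal sketch using Infinispan, the community project behind Red Hat’s data grid offering. The cache name and values are made up; every JVM that runs this code with the same configuration joins the same grid, and entries are distributed across the members.

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class GridNode {
        public static void main(String[] args) {
            // Start (or join) a clustered cache manager; each JVM running this becomes a node.
            DefaultCacheManager manager = new DefaultCacheManager(
                    GlobalConfigurationBuilder.defaultClusteredBuilder().build());

            // Define a cache whose entries are distributed across the cluster members.
            manager.defineConfiguration("quotes",
                    new ConfigurationBuilder().clustering().cacheMode(CacheMode.DIST_SYNC).build());

            Cache<String, Double> quotes = manager.getCache("quotes");
            quotes.put("RHT", 74.25);              // stored on whichever node(s) own the key
            System.out.println(quotes.get("RHT")); // readable from any node in the grid
            manager.stop();
        }
    }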

Intro to Integration

Integration is one of those concepts that is easy to “know,” but becomes less obvious the more you try to define it. A basic, casual definition is making different things work together. The complexity, though, comes from the fact that every single part of that definition has to be broken down: what are the “things,” what are they doing that makes them “work together,” how are they working, and what is the goal or purpose of them working together? All of those elements can be answered differently for different organizations, or even within the same organization at different times.

An understanding of integration comes from looking at the different potential integration patterns and then defining the logic behind the integration so you can select the right patterns for your environment.

Integration Patterns

Integration itself is an architectural structure within your infrastructure, rather than an action or specific process. While getting various systems to work together has long been an IT (and organizational) responsibility, integration as a practice became more of a focus in the early 2000s. With the emergence of large-scale enterprise applications, there was a growing need to get those applications working together without having to redesign or redeploy the applications themselves. That push became integration.

Integration is further defined by what is being integrated; these categories are the integration patterns.

There are different types of patterns, depending on perspective: patterns based on what is being integrated, and patterns based on the topology or design of the integration. Basically, it’s the what and the how.
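
As one concrete example of the “how,” here is a sketch of a well-known integration pattern, the content-based router, written with the Apache Camel Java DSL (a technology not covered in this post; the directory names and the order XML format are hypothetical). Each incoming order is inspected and routed to the system that should handle it.

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class OrderRouter {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Content-based router: inspect each incoming order file and
                    // hand it to the system that should process it.
                    from("file:orders/incoming")
                        .choice()
                            .when(xpath("/order[@type = 'priority']"))
                                .to("file:orders/priority")
                            .otherwise()
                                .to("file:orders/standard");
                }
            });
            context.start();
            Thread.sleep(10_000); // let the route poll for a few seconds in this toy example
            context.stop();
        }
    }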

Upcoming Webinar: Migrating to Open Source Integration and Automation Technologies

Balaji Rajam (principal architect) and Ushnash Shukla (senior consultant) from Red Hat will be conducting a webinar on integrating data from disparate sources with people and processes, and on migrating that integration and automation work to open source technologies.

Data is increasingly moving from being an asset within an organization to one of the key business drivers and products, regardless of industry. The ability to integrate data from disparate sources is a crucial part of a business’s digital strategy. Many organizations have been locked into proprietary, closed software solutions like TIBCO, but as IT environments transform again into microservices-based, agile, and cloud-based infrastructures, those proprietary systems may not be able to keep up, or it may be cost-prohibitive to try. Open source offers standards-based approaches for application interoperability with potentially lower costs and faster development times. This webinar looks at three key aspects of effectively moving from proprietary to open source solutions:

  • Recommendations for migrating from TIBCO to open source applications
  • Performing data integrations
  • Defining automated business processes and logic

Registration is open. The webinar is August 9 at 11:00am Eastern Time (US).

Fun Follow Up: Webinar Q&A

I will collect any questions asked during the webinar, and I’ll do a follow-up post on Friday, August 12, to try to capture the most interesting questions that arise.

Announcing Red Hat JBoss Data Grid 7.0

Red Hat JBoss Data Grid 7.0 is here.

This significant new release offers a variety of features designed for complex, distributed, high-volume, and high-velocity data requirements. JBoss Data Grid can support both on-premise and cloud-based infrastructures, and it can handle the near-instantaneous, complex demands of Internet of Things and big data environments.

What It Brings

JBoss Data Grid can be deployed in different architectures, depending on the needs of your environment. In addition to its traditional use as a distributed cache, an in-memory data grid can function as the primary data store for applications or as a compute grid for distributed computing.

Data Storage

There are two main use cases for JBoss Data Grid as a data store:

  • Data caching and transient storage. As an in-memory data store for frequently accessed application data or for transient data, such as shopping cart information or session data. This avoids hitting transactional backend systems as frequently, which reduces operating costs (a minimal sketch of this pattern follows this list).
  • Primary data store. Data Grid can function as a key-value store similar to a NoSQL database. This can be the primary data source for applications for rapid retrieval of in-memory data and to persist data for recovery and archiving. Applications can run data-intensive operations like queries, transaction management, and distributed workloads.
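
Here is the minimal sketch promised above for the first use case, transient storage, assuming an embedded Infinispan/Data Grid cache manager; the cache name, key, and expiration below are hypothetical. The grid expires the entry itself, so the backend never has to store or clean up the transient data.

    import java.util.concurrent.TimeUnit;
    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class SessionCacheExample {
        public static void main(String[] args) {
            DefaultCacheManager manager = new DefaultCacheManager();
            manager.defineConfiguration("web-sessions", new ConfigurationBuilder().build());
            Cache<String, String> sessions = manager.getCache("web-sessions");

            // Transient data: keep the cart for 30 minutes, then let the grid expire it,
            // instead of writing it to (and later cleaning up in) a backend database.
            sessions.put("session-42:cart", "{\"items\":[\"sku-1001\",\"sku-2002\"]}",
                         30, TimeUnit.MINUTES);

            System.out.println(sessions.get("session-42:cart")); // served from memory
            manager.stop();
        }
    }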

Computing Grid

Modern architectures require flexible, distributed, and scalable memory and data storage. Using JBoss Data Grid as a distributed computing grid can help support the most demanding architectures:

  • Scale-out compute grid and event-driven computing. Through storage node clusters, JBoss Data Grid can provide a distributed architecture that runs application logic at each node for faster data processing and lower latency and traffic (see the sketch after this list). This architecture also supports event-driven computing by executing application logic at the node as data is updated.
  • Big data and the Internet of Things. JBoss Data Grid can support massive data streams — hundreds of thousands of updates per second. The Internet of Things can have data streams from thousands of connected devices, updating frequently. Clustering and scale, application logic and processing, and both in-memory and persistent storage in JBoss Data Grid enable those big data architectures by managing those massive data streams.
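
The sketch referenced in the first item above: with a distributed cache, Java 8 stream operations are pushed to the nodes that own the data, so the filter below runs next to the data rather than pulling every entry back to the caller. The sensor-reading cache and threshold are hypothetical, and the Serializable cast is the pattern Infinispan uses to ship lambdas across the cluster.

    import java.io.Serializable;
    import java.util.Map;
    import java.util.function.Predicate;
    import org.infinispan.Cache;

    public class SensorAnalytics {
        /** Counts readings above a threshold; the filter runs on the nodes that own the data. */
        public static long countHotReadings(Cache<String, Double> readings, double threshold) {
            return readings.entrySet().stream()
                    .filter((Serializable & Predicate<Map.Entry<String, Double>>)
                            e -> e.getValue() > threshold)
                    .count();
        }
    }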

Real-Time Analytics and Performance for Digital Business

Digital transformation means that organizations are pushing into a new intersection between their physical goods or services and online, on-demand applications. This digital environment is reliant on data, and unlike previous generations, it uses near-live data streams rather than historical data collections.

JBoss Data Grid is a leading high-performance, highly scalable in-memory data grid. In-memory data grids provide scalable memory so that even rapidly changing application data can be processed. Better data processing and management enables organizations to make fast, accurate decisions using large data streams. JBoss Data Grid 7.0 offers a data foundation for real-time analytics:

  • Low latency data processing through memory and distributed parallel execution
  • Data partitioning and distribution across cluster nodes for horizontal scalability
  • High availability through data replication
  • Shared data services for real-time and in-memory analytics and event processing

A Short List of Major Features

The release notes cover the full features and enhancements for JBoss Data Grid 7.0. There are a number of features for improved ease of use, real-time analytics, and language support:

  • Distributed streams, which use the Java 8 Stream API to run defined analytics operations over complex collections of data.
  • Resilient distributed dataset (RDD) and DStream integration with Apache Spark 1.6, allowing Data Grid to be a data source for Spark and to execute Spark and Spark Streaming operations on data in Data Grid.
  • Hadoop InputFormat/OutputFormat integration, so that Hadoop tooling and operations can be used with data stored in Data Grid.
  • New administrative consoles for cluster management to simplify common tasks for managing the cache, nodes, and remote tasks.
  • Control operations for clusters, including graceful shutdown, startup, and restore from persistent storage.
  • A new Node.js Hot Rod client to support using Data Grid as a NoSQL database with Node.js applications.
  • Running remote tasks (business logic) on a Data Grid server from the Java Hot Rod client (a basic Hot Rod client connection is sketched after this list).
  • Support for a Cassandra cache store, which persists the entries of a distributed cache on a shared Apache Cassandra instance.
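
As a rough sketch of the Hot Rod client items above, this is roughly what connecting to a Data Grid server from Java looks like; the host name, cache name, and stored JSON string are hypothetical, and 11222 is the default Hot Rod port.

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class RemoteGridClient {
        public static void main(String[] args) {
            // Connect to a Data Grid server over the Hot Rod protocol.
            RemoteCacheManager manager = new RemoteCacheManager(
                    new ConfigurationBuilder()
                            .addServer().host("datagrid.example.com").port(11222)
                            .build());

            RemoteCache<String, String> profiles = manager.getCache("profiles");
            profiles.put("user:1001", "{\"name\":\"Ada\",\"tier\":\"gold\"}"); // key-value, NoSQL-style
            System.out.println(profiles.get("user:1001"));
            manager.stop();
        }
    }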

Additional Resources