Digital transformation is a hot topic in enterprises these days, and like any such topic it’s associated with a wide range of use, overuse, and misuse. But the phrase does get at something that we can all sense is really going on, a truly profound change. As different businesses undergo or undertake variants of digital transformation, we see a number of common characteristics of the more digital world:
- More things happen (or are expected to happen) in real time
- More different sources and kinds of data are brought together
- Activities are more decentralized and ad hoc
- There is a broadening of participation in both the building and the use of I.T.
- There is a shift from analysis and planning to trial-and-error experimentation
Each of those ideas deserves elaboration–topics for future blogs–but for now, taking those bullets as a rough characterization of digital transformation, let’s explore the interplay of architecture, process, and platform in helping enterprises compete and succeed in this emerging digital world.
A key requirement for most businesses is the ability to change quickly, both in a proactive way to outmaneuver competitors and in a reactive way to keep up with market dynamics. This is usually what is meant by the term “agility”–it’s not just about speed, it’s about the combination of speed and adaptability.
In order to be changed quickly, the things that must change–the processes, the organizational structures, the software, the value chain, etc.–must be structured with a granularity commensurate with the required kinds of change. If the addition or removal of a step in a process requires a reorg and weeks of training, the change could come too late or be put off. If adding a new product type means taking the order entry system offline for an extended period, business could be needlessly lost, and there will be lots of resistance to such launches.
This is where architecture, specifically software architecture, is critical. Enterprise software must be structured with the right granularity of modules to accommodate the kinds of changes needed by the business. Modularity has always been an important basic tool in the engineer’s toolbox: any engineered solution involves breaking down a problem into subcomponents and architecting a system based on that understanding. Even the old systems we now refer to as “monolithic” typically had modular internal structures; they were just much more cumbersome to change. The better the architecture, the better the ability to swap modules and adapt without having to change other modules. What’s at stake here is the granularity.
Which brings us to microservices. The point and power of microservices is to give the architect more flexibility in sizing modules for the needs at hand. The goal is not to decompose every last part of the application estate into the tiniest possible modules just because small is cool; the opportunity is to architect with fine granularity where that helps the business change and adapt with greater agility, and in areas less pertinent to such agility, group functionality into modules in whatever way makes the most sense.
What makes this possible now, when it wasn’t before, is containers and orchestration. Before containers, SOA services were fairly heavyweight entities, each typically sitting on top of a full VM. With that much overhead, it didn’t make sense to put just a few lines of code in a service–you wanted to get the biggest bang possible for that VM buck. Containers are much lighter weight and are commensurate with truly “micro”-scaled services, potentially containing just a few lines of code. In addition, managing a much larger number of much smaller entities would have been a stretch for the first-generation orchestration technologies we had with SOA. Microservice-oriented orchestration such as Kubernetes is the other foundational piece needed to make fine-grained architecture viable at enterprise scale.
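To make the container-plus-orchestration idea concrete, here is a minimal sketch of how one small service looks to Kubernetes: a Deployment that keeps several replicas of a single containerized module running. The service name, labels, and image reference are hypothetical placeholders, not real offerings:

```yaml
# A minimal Kubernetes Deployment for one fine-grained service.
# "order-status" and the image reference are hypothetical examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-status
  labels:
    app: order-status
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: order-status
  template:
    metadata:
      labels:
        app: order-status
    spec:
      containers:
      - name: order-status
        image: registry.example.com/shop/order-status:1.4.2
        ports:
        - containerPort: 8080
```

Because each service is declared independently like this, swapping, rescaling, or rolling back one module is a small, local change rather than a redeployment of a monolith.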
So a fine-grained architecture enabled by microservices and containers can significantly help with the fast, trial-and-error iteration demanded by digital transformation. Fully realizing the benefit of this architecture depends on the right process, which is likely something along the lines of the “DevOps” bandied about in enterprise circles these days. A while back DevOps mostly meant self-service for developers, relieving them of the need to wait for Ops to provision capacity or make changes and also relieving Ops because developers did more of the work to get their code deployed.
The notion of DevOps has expanded (in a good way) to comprise more of the overall application lifecycle: a closer collaboration among all application stakeholders as they work in tighter trial-and-error loops with supporting technologies such as continuous integration and continuous deployment (CI/CD). Even more powerful is to think of it really as “BizDevOps” where not just the I.T. folks but also the business application stakeholders are part of the fast iteration. This is both a consequence and a driver of digital transformation: the business is carrying out trial-and-error, not just I.T.
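As one concrete illustration of the tight loop CI/CD supports, here is a sketch of a pipeline in GitHub Actions syntax; the repository layout, the `make test` target, and the registry are all hypothetical, and OpenShift offers its own analogous pipeline tooling:

```yaml
# A hypothetical CI/CD workflow for one microservice: every push and pull
# request triggers build and tests, and pushes to main also publish the
# container image that the orchestrator will roll out.
name: order-status-ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test          # project-specific test target (assumption)
      - name: Build container image
        run: docker build -t registry.example.com/shop/order-status:${{ github.sha }} .
      - name: Push image (main branch only)
        if: github.ref == 'refs/heads/main'
        run: docker push registry.example.com/shop/order-status:${{ github.sha }}
```

The point is not the particular tool: any pipeline that automatically builds, tests, and deploys each small service on every change shortens the trial-and-error loop for both I.T. and business stakeholders.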
Working in a (Biz)DevOps process is not something that happens overnight and not simply as an inevitable result of adopting certain software. It likely depends on a deep mindset and culture change in many organizations, something that takes sustained commitment and senior-level support. The good news is that while still early, the approach has matured in many ways, and there are numerous success stories and best practices to start with.
However, with the right mindset and culture, adopting a DevOps process and containerized microservice architecture can very much benefit from the right supporting technologies, which brings us to platform. For a long time, “platform” most often meant hardware and operating system (platform, of course, is relative: nearly any technology in a layered stack views all the layers below it as the “platform”). We see platform in the digital transformation context as all the software that the application developer builds with and runs on: operating system, cloud technology, workload runtimes and frameworks such as application servers and other middleware, plus storage and management.
The operating system is still very much at the core; it’s both what containers run on and what runs in containers. Skimp on the O.S. and you compromise security, performance, reliability, and other critical enterprise concerns. Red Hat Enterprise Linux (RHEL) is a compelling foundation.
The next layer up is a container platform that leverages the strengths of RHEL and brings together the industry-leading Docker container format and Kubernetes orchestration technology; this is Red Hat OpenShift Container Platform. Emerging from what the industry used to think of as “Platform as a Service” (PaaS), OpenShift enables next-generation DevOps with CI/CD, a wide array of both Red Hat and third-party middleware services, and strong support for hybrid approaches spanning a range from on-premises to public cloud environments.
At the upper layers of the platform for digital transformation is middleware, manifested in this environment as services. We’ve taken the traditional middleware offerings such as the app server, integration, rules engine, and process automation and containerized them on OpenShift to make their use by developers much easier and less error-prone since so much of the tedious setup and configuration is automated within OpenShift. We continue to refine the user experience of middleware on OpenShift, and we’re also driving new runtime approaches that carry the best of the traditional app server forward into the world of microservices.
There’s certainly plenty here to expand on in future blogs, but we hope this is a compelling and stimulating introduction. We welcome the opportunity to engage with you on how best to use our current and future offerings in your digital transformation projects, and we look forward to the journey!
See more on architecture, process, and platform in living color. The Feb 9 webinar recording is available on-demand here.