The Good, the Bad and the...?
Today, there are many choices regarding application and container management platforms. Next to open source Kubernetes, there are commercial offerings such as Pivotal Cloud Foundry and Red Hat OpenShift. While both commercial platforms have their charm, let me share my personal experience with both in the context of delivering containerized applications within the corporate banking sector.
We started our journey writing applications for the cloud nearly two years ago. Since then a lot has changed — this write-up summarizes how we have progressed in supporting both deployment platforms today and how we experience the main differences between these platforms.
First, there are a couple of areas which I believe are key when developing software:
- Keep constant focus on writing code addressing your business needs and processes
- Automate everything — from build to deployment (CI/CD)
- Reach operational excellence together with your customers after delivery of the software
The majority of time and energy any developer spends writing code should be directed towards the business problems you are trying to solve within the product.
This is what clients pay for. While all the other aspects of running code (the non-functional requirements, or NFRs) are equally important, clients consider them simply "part of the product."
From the beginning we chose to optimize for writing business-logic-centric code; alongside a service-based architecture, we adopted a deployment platform as a cornerstone of the architecture blueprint.
The latter takes away the majority of NFR concerns. Think of deployability, configurability, log aggregation, auto-healing, service monitoring, health-checking, service call tracing, alerting, auto-scaling, etc. We have not written a single line of code to (re)implement any of these management capabilities. What we did do is implement our code according to the 12-factor app principles, which enables us to seamlessly take advantage of these (deployment platform) capabilities.
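As a minimal illustration of what 12-factor compliance looks like in code — a sketch, with hypothetical variable names and defaults, not taken from our actual services — the Config factor says settings come from the environment, so the platform (not the artifact) decides per-environment values:

```python
import os

def load_config(env=os.environ):
    """Read settings from the environment (12-factor factor III: Config).

    The variable names and defaults below are hypothetical examples;
    the point is that the same build runs unchanged in every environment
    because the deployment platform injects the differences.
    """
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }
```

The same style of discipline (logs to stdout, stateless processes, port binding) is what lets either platform take over log aggregation, health-checking, and scaling without any platform-specific code in the application.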
At the time (two years ago) there was only one platform mature enough to provide us with these required (NFR) capabilities: Cloud Foundry. With a couple of hours of training, any developer could seamlessly deploy, trace, and analyze his/her code on this platform.
While OpenShift has evolved over the last couple of years, it is not yet at the same level of maturity as (Pivotal) Cloud Foundry today; having developers deploy/test/debug directly against OpenShift requires significant additional effort — and, mostly, additional skills, including but not limited to a thorough understanding of infrastructural concepts. This would skew the balance — in my opinion — between delivering business logic and "getting it to work." In the past, I worked on projects where this led to a time/effort split between business logic and NFRs of nearly 50/50 (!).
Similar to writing code, we also treat deploying code “as business components.”
With Cloud Foundry this is a basic concept; the developer does not need to be concerned with "how" code is deployed — this is handled within the platform (by use of buildpacks). Just throw your deployable at Cloud Foundry and the platform figures out how to deploy the code.
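For illustration, a minimal Cloud Foundry `manifest.yml` might look like this (the application name, paths, and values are hypothetical):

```yaml
---
applications:
- name: payments-service        # hypothetical service name
  memory: 512M
  instances: 2
  path: target/payments-service.jar
  env:
    SPRING_PROFILES_ACTIVE: cloud
```

A single `cf push` then lets the buildpack detect the artifact type, assemble a runnable droplet, and start it — the developer never writes a Dockerfile or touches an image registry.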
With OpenShift — while offering a lot more flexibility — the developer needs to understand, define, and implement everything: writing Dockerfile definitions, building the Docker images and releasing them to a remote registry, assuring Docker security policies are met (as required in our sector), and then deploying these images to OpenShift. This requires the developer to have a much deeper understanding of the deployment platform before he/she becomes proficient.
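To make the contrast concrete, the OpenShift route means the developer now owns an artifact like the following (a sketch with hypothetical image, path, and user ID choices — base image selection and security hardening are the team's responsibility):

```dockerfile
# Hypothetical Dockerfile the team must now write and maintain.
# Base image choice (and keeping it patched) is on the team.
FROM registry.access.redhat.com/ubi8/openjdk-11

COPY target/payments-service.jar /deployments/app.jar

# Run as a non-root user to satisfy typical OpenShift and
# banking-sector container security policies.
USER 1001

CMD ["java", "-jar", "/deployments/app.jar"]
```

On top of this come the build, scan, registry push, and deployment steps — each a pipeline stage that Cloud Foundry's buildpack model simply absorbs.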
As with developers, DevOps processes are impacted too by the additional complexity, implementation, and maintenance of the steps described above — next to the added governance that must be in place to assure all deployments still follow cloud-native application principles. OpenShift is flexible, but it also opens doors you might want to keep closed.
Once the software is deployed to a development (or client-hosted) environment, we need to manage and monitor the µservices.
As a systems operator I want to be able to:
- Monitor the health of all services deployed in the platform
- Have an aggregated view across all log streams of a µservice type
- Trace µservice to µservice calls
- Manage costs and limit resources used by a certain project/group of µservices
- Specify auto-scale and auto-healing parameters for each µservice type
- Perform updates to both µservices as well as the platform without downtime
While both platforms can in principle support all of these features needed to reach operational excellence, the main difference is that Pivotal Cloud Foundry delivers all of them out-of-the-box at installation, while OpenShift requires add-ons, configuration, and often home-built/maintained solutions to reach the same level.
In other words… with Pivotal Cloud Foundry an IT organization is off-loaded of all these operational concerns (which are delivered tested, verified, and supported by Pivotal) and can focus on… yes… running business logic, while with Red Hat OpenShift the IT organization needs to satisfy, integrate, test, and maintain the majority of these NFRs next to the software delivered by us.
From a product and support point of view, Cloud Foundry is developed and maintained by a core team assuring coherence across the solution. OpenShift consists of a set of different open source technologies and maintainers; there are several moving parts (Docker, Kubernetes, Ansible, Prometheus, Kibana, Elasticsearch, Fluentd, etc.) and I believe it is hard for Red Hat to keep up with the changes made across all these different software components to form a coherent product.
There is no one-size-fits-all. Both platforms have their pros and cons and differ significantly in what you get (and pay for) out-of-the-box — keep in mind the license is only one part of the total cost of ownership (TCO) of a platform.
Cloud Foundry is stricter in how applications are deployed, which sometimes feels like a limitation; OpenShift offers incredible flexibility but requires discipline and additional skills from your designers and developers.
To successfully manage and monitor an OpenShift platform, the IT organization responsible for it also needs skilled resources to satisfy, implement, and maintain the additional NFRs.
While I am impressed with the progress and increased maturity over recent releases, OpenShift is still undergoing significant change today, and each version differs from the previous one — which likely means rework on the monitoring side and significant testing to achieve zero-downtime production upgrades of the platform itself.