CF Summit Europe 2017

The European edition of Cloud Foundry Summit is just around the corner.

Next week, an ITQ delegation will visit Basel, Switzerland for the conference about the fastest-growing cloud native platform. After Berlin in 2015 and Frankfurt in 2016, Basel will for a couple of days be the venue where around 1000 Cloud Foundry developers, vendors, consultants and users meet to discuss the direction of the platform and its ecosystem.

Cloud Foundry has proven over the last few years that it's not just another cool new technology. It has been adopted as the platform of choice by organisations with a combined market cap of over $12 trillion. Together with its vibrant community, this means CF is here to stay.

However, CF is not just about technology. In fact, it’s quite the opposite: the core idea is to help organizations enter into the digital age, and change the way in which they work and build their core products. CF is just the technology that makes it possible.

If the last few years were any indication, the keynotes will be all about organizations making fundamental transformative changes and becoming much more efficient by embracing Cloud Foundry. The remainder of the schedule is split into six tracks, each focussed on a different facet:

CF tracks

From die-hard technology to real-world use cases – there's something for everyone: the core project updates, extension projects and experiments tracks will have a strong technical focus aimed at platform engineers and specialists. The cloud-native microservices track won't be any less technical, but rather focusses on how to develop software that runs well on CF and leverages all the goodness of the cloud. The Cloud Foundry at scale track focusses on heavy users (IoT, service providers, multi-cloud). Finally, the Cloud Foundry in the enterprise track focusses on real-world use cases.

There is obviously a wealth of information in the sessions, but most importantly this is the place to meet and engage with the Cloud Foundry community at large.

Hope to see and meet you there!

Pivotal Container Service

This week at VMworld US, a couple of guys in suits and Sam Ramji – together representing VMware, Pivotal, and Google – introduced Pivotal Container Service. Exciting stuff! But what does it mean?

Applications on modern platforms run in containers. And while containers themselves are increasingly standardized around the Open Container Initiative (OCI), there are huge differences in how platforms build and run these containerized workloads. This is a direct reflection of the intended platform use cases and corresponding design choices.

Cloud Foundry takes an application-centric approach: developers push the source code of their app, and the platform will build a container and run it. As a dev you never have to deal with the creation and orchestration of the container – they are platform intrinsics which can be tweaked by the Operations team. So in CF everything is focussed on developer productivity and DevOps enablement: an ideal platform for reliable and fast modern software development.
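
As a rough illustration (the app name and instance count below are just placeholders), the whole interaction for a developer typically boils down to:

cf push my-app
cf scale my-app -i 3

The platform detects a suitable buildpack, builds the container and schedules the requested number of instances behind a route.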

Other use cases do exist in which you do want to bring your own container (BYOC), e.g. containerized legacy apps, applications already containerized by ISVs, stateful apps and databases, or cases in which dev teams already build containers as part of their build process. Although I would recommend that the dev teams in the last example check out Cloud Foundry, these are all valid use cases – use cases best served by a container-centric platform.

Kubernetes is such a container-centric platform. In fact, it's the most mature and battle-tested platform out there, being the open source spin-off of Google's internal container platform. However, it's also notoriously hard to deploy and manage properly. Google Cloud Platform introduced the managed Google Container Engine (GKE) to solve this problem in the public cloud.

Pivotal Container Service (PKS) is the answer for the private cloud. Pivotal solved the problem of deploying and managing distributed systems some years ago with BOSH – an infrastructure as code tool for deploying (day 1) and managing (day 2) distributed systems. Not coincidentally BOSH is the foundation and secret ingredient of Cloud Foundry.
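
To give an idea of what that looks like in practice, a BOSH deployment is driven by a declarative manifest and a handful of CLI commands – the environment alias, deployment name and manifest file below are made up for illustration:

bosh -e lab -d kubo deploy kubo-manifest.yml
bosh -e lab -d kubo instances

BOSH then creates the VMs, places the jobs, and keeps the deployment healthy on day 2.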

PKS is Kubernetes on BOSH (Kubo), with tons of extras to make it enterprise friendly:

  • deep integration with VMware tooling on vSphere (vRealize Operations, Orchestrator, Automation)
  • integration with VMware NSX virtual networking
  • access to Google Cloud APIs from everywhere through a GCP Service Broker
  • production ready – enterprise scaled
  • supported

Be prepared for Q4 availability! In the meantime I can't wait for beta access to test drive it myself.

.NET on Cloud Foundry

As a consultant on the Cloud Foundry platform I regularly get asked whether CF can host .NET applications. The answer is yes. However, how much work we as platform engineers have to do to make that possible depends on the application. Chances are you don't have to do anything special at all – that chance is, however, quite low, as I'll explain below.

Note that I wrote a post on the same topic some two years ago. Now that Diego, .NET Core, and Concourse have all reached production status, it's time to see how the dust has settled.

The old and the new

The .NET Framework we have become used to over the last 16 years or so is at version 4.6.x. It is essentially single-platform (Windows) and closed source, is installed and upgraded as part of the OS, has a large footprint, and is not especially fast.
Microsoft realized at some point that this just wouldn't do anymore in the modern cloud era, in which frameworks are developed as open source and without explicit OS dependencies, and applications are typically deployed as a (containerized) set of lightweight services packaged together with their versioned dependencies (libraries and application runtime). Some time later, the world saw the first alphas and betas of .NET Core, and on June 27th 2016 it reached GA with version 1.0.0. This made lots of people very happy and was generally seen as a good move (albeit quite a late one).
While Microsoft is still actively developing both the legacy .NET Framework and .NET Core, it has been made pretty clear that .NET Core is the future.

So what about apps?

ASP.NET Core in a Nutshell

An application written as an (ASP).NET Core app will run on the old and the new – although sometimes it needs some community convincing to keep it that way. The opposite is not the case: many Windows-specific/Win32 APIs are, for obvious reasons, not available on the cross-platform .NET Core runtime, so legacy .NET apps taking a dependency on these APIs cannot run on .NET Core without refactoring.
Note that this dependency doesn't have to be explicit: it's about the whole dependency chain. For instance, the popular Entity Framework ORM library takes a dependency on ADO.NET, which is highly dependent on Win32 and so cannot be used. Instead, applications using it should be rewritten to use the new EF Core library.

New is easy – .NET Core

From a platform engineering perspective, supporting .NET Core is easy. As .NET Core can run in a container on Linux, it follows the default hosting model of CF. So you just install/accept the dotnetcore buildpack and off you go.
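
In practice that is a single push – the app name below is a placeholder, and the exact buildpack name depends on how it is registered in your foundation (typically something like dotnet_core_buildpack):

cf push my-netcore-api -b dotnet_core_buildpack

If the buildpack is installed with detection enabled, you can even omit the -b flag and let CF figure it out.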

Old is hard – .NET Framework

Of course you can attempt to convince your developers that they have to port their code to .NET Core. However, your mileage may vary. Since legacy is what makes you money today, a large existing .NET codebase that's the result of years of engineering can't be expected to be rewritten overnight. And if it is rarely updated, it will be very hard to make a business case for a rewrite, even if you do have the resources.
Instead, a more realistic scenario is a minimal refactoring in such a way that the vast majority of the never-touched cold code can stay on .NET Framework, while all the new code, together with often-changed hot code, is written in .NET Core.

It needs Windows – Garden has your back

Cloud Foundry before 2015 used Warden containers, which took a hard dependency on Linux. The rewrite of the DEA component of Cloud Foundry in Go – resulting in DEA-Go, a.k.a. Diego – was covered quite a lot online. For .NET support, the accompanying rewrite of Warden in Go – resulting in Garden *badum tish* – is much more interesting, since Garden is a platform-independent API for containerization. So what we need for a Windows Diego Cell is:

  • a Windows Garden backend – to make CF provision workloads on the VM
  • the BOSH agent for Windows – to manage the VM in the same way all of Cloud Foundry is managed on an infrastructure level

We need to package all this in a template VM – a stemcell – so BOSH can use it. You can find the recipe for doing this, and some automation scripts, here. Even with the scripts it's a lot of cumbersome, time-consuming, and error-prone work, so you'd best automate it. In my next post I'll discuss how I did that using a pipeline in Concourse CI.

If you are on a large public cloud like Azure, GCP or AWS, and use Pivotal Cloud Foundry, Pivotal has supported stemcells ready for download. If you are on a private cloud, or not using PCF, you have to roll your own. I'm not sure why Pivotal doesn't offer vSphere or OpenStack Windows stemcells, but I can imagine it has something to do with legal issues (think Microsoft, licensing and redistribution).

Final steps

PCF Runtime for Windows

Once you have the stemcell you need to do a few things:

  • upload the Windows stemcell to BOSH
  • deploy the Windows Diego cells (the garden-windows backend plus its supporting services) on top of it

Again, if you are using PCF, Pivotal has you covered: you can download and install the PCF Windows Runtime tile, which takes care of both of the above. If you are on vanilla CF, you have to do some CLI magic yourself.
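
On vanilla CF that CLI magic roughly comes down to something like the following – the environment alias, file name and deployment name below are purely illustrative:

bosh -e lab upload-stemcell bosh-stemcell-windows2012R2-go_agent.tgz
bosh -e lab -d cf deploy cf-with-windows-cells.yml

where the deployment manifest adds the Windows cell instance group on top of the regular Cloud Foundry deployment.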

Container confusion

These days I'm working at a client, creating workflows for their state-of-the-art private cloud platform. It's really quite nice: internal clients can use a web portal to request machines, which are then customized with Puppet runs and workflows that install additional software and perform custom tasks like registering the machine in a CMDB. All of this is ideal for running legacy workloads like SQL databases.

Other offerings include 'PaaS' workloads for running business applications; e.g. developers can request 'Scale Out' application servers, meaning two Linux VMs with Tomcat installed behind a load balancer.

The most popular offering by far is the large VM with a preinstalled Docker engine. In fact, they are so popular you might wonder why.

Is it because Developers have an intrinsic desire to create and run Docker containers? Naively, the current hype around containerization in general and Docker as a specific technology could indeed be explained as such.

However, if you know Developers a bit you know what they really want is to push their code into production every day.

To get to this ideal state, modern development teams adopt Agile, Scrum, and Continuous Delivery. Sadly, especially the latter usually fails to deliver to production in enterprise IT, giving rise to the waterscrumfall phenomenon: Continuous Delivery fails to break through the massive ITIL wall constructed by IT Ops to make sure no changes come through and uptime is guaranteed.

So guess what’s happening when your Dev/Business teams request the largest possible deployment of a Docker blueprint?

Yep, you just created a massive hole in your precious wall. You’ll have your CMDB filled with ‘Docker machine’ entries, and have just lost all visibility of what really runs where on your infrastructure.

Docker in production is a problem masquerading as a solution.

Does this mean containers are evil? Not at all. Containers are the ideal shipping vehicles for code. You just don’t want anyone to schedule them manually and directly. In fact, you don’t even want to create or expose the raw containers, but rather keep them internal to your platform.

So how do you use the benefits of containers, stay in control of your infrastructure, and satisfy the needs of your Developers and Business all at the same time? With a real DevOps enablement platform: a platform which makes it clear who is responsible for what – Ops: platform availability, Dev: application availability – and which enables Developers to just push their code.

DevOps stage at VMworld

VMworld 2015: beyond virtualization

What do you base your selection on when buying some piece of technology? Is it the core functionality, or the added features?

As Kit Colbert aptly stated in his VMworld DevOps program Keynote, customers at this point implicitly assume the core functionality of almost any given product will be alright, and base their choices on the extras:

  • when selecting a new home audio set, you select it based on, for instance, connectivity, wireless options and ease of use. Actual audio quality is perhaps the #10 item on the list
  • a lot of companies make decent tractors, but some (e.g. John Deere) set themselves apart and do great by adding integrated options such as support for GPS (driving in straight lines)
  • the hypervisor was once the unique selling point of virtualization, whereas people now buy virtualization suites based on the supporting functionalities, e.g. High Availability, virtualized layer 2 networking (NSX), Dynamic Resource Scheduling

Smart existing companies have recognized this trend of commoditization of core functionality, which results in a huge drive from the business to add extra value fast while staying safe and reliable – all in order to stay competitive with the army of disruptive startups coming for (a piece of) the cake.

Developers have been used to working with the short iteration cycles intrinsic to Agile development for years now, since apart from adding value to the business quickly it has the additional benefits of risk reduction and adaptability:

Agile value proposition

However, this mode of operation asks a lot from traditional IT departments, as historically IT operations has focused on reliability of infrastructure: a characteristic seemingly best served by no changes ever – diametrically opposed to adding new features on a daily basis.

This has given rise to the waterscrumfall phenomenon: new features developed with short iteration cycles (scrum/agile) will still have to wait for the biannual release weekend to hit production (waterfall), thereby eliminating most of the advantages gained by adopting agile methods in development.

It goes without saying that waterscrumfall is not a desirable situation to be in, and therefore people have been experimenting with the logical extension of Agile development to the whole pipeline: the DevOps movement.

DevOps

DevOps has perhaps over 9000 alternative definitions. The most important thing to note though, is that DevOps is a mix of culture, process and supporting technology. You can’t buy DevOps, and you can’t implement it.

Adopting DevOps requires a permanent push towards a different mindset which enables you to bring changes to production fast, at scale, and in a reliable way. There are, however, some technologies that can help you enforce and enable DevOps principles. It's here where the most exciting developments took place at VMworld 2015.

Overview of the VMware Cloud Native stack

Unified platform: vSphere integrated containers

Interaction between Operations and Development runs most smoothly if Developers don't have to file tickets for virtual machines, but can instead use an API to request compute resources to run their code. This is where containers come in: originally devised as an operating system (OS) level virtualization technology, their main popularity these days is not the result of OS overprovisioning capabilities but rather of their ability to serve as shipping vehicles for code, enabling reproducible deployment.

The output of a Continuous Integration (CI) process is known as a build artifact. Where this is usually a .war/binary/.zip file, more modern approaches use containers. Ideally, the next stage of the process – Continuous Deployment (CD) – would subsequently push the container to a container engine (e.g. Docker) which can schedule it. vSphere integrated containers allow exactly this mechanism, which nicely separates Operations and Development concerns:

  • Ops can define special resource pools – Virtual container hosts (VCH) – to keep tabs on the resources available to containerized workloads
  • vSphere exposes a Docker Engine API, which Devs can use to schedule container workloads to a VCH. When a container is scheduled, a Virtual Machine (VM) is forked (instant cloned) to run this workload

vSphere integrated containers

Note that since the container is running on a VM in a 1:1 relation, the VM is not important here. It just provides the isolation and scheduling features to the container: the first class citizen of the data center – from the perspective of the developer – is the container itself. At the same time, because of the 1:1 mapping, Ops can monitor and manage the just enough VM (jeVM) in the same ways they would legacy workloads.
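
From the Developer's side this just looks like talking to any other Docker endpoint – roughly along these lines, where the VCH address and port are made up for illustration and the TLS options are omitted:

export DOCKER_HOST=tcp://vch01.lab.local:2376
docker run -d --name web nginx

Ops, meanwhile, see a jeVM per container appear inside the Virtual Container Host resource pool.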

Continuous Delivery: vRealize Code Stream

Most development teams have some kind of Continuous Integration set up by now, which generates automated builds on a clean system, tests the build, and stores the build artifact. The next phase – pushing the artifact to test, user acceptance test, and ultimately production – is usually not automated in the traditional enterprise environment, as this phase requires Ops cooperation to set up, and – as described above – this is where traditionally two worlds collide.

However, reproducible and therefore automated deployment is essential if you want to push new features into production fast as well as safely. Therefore, companies today can only survive the onslaught of disruptive newcomers if they set up some sort of Continuous Delivery practice.

This is where vRealize Code Stream comes in: when a build artifact in the form of a container is output from the Continuous Integration phase of the pipeline, vRealize Code Stream pulls it in and takes care of the Continuous Delivery part in an automated way based on (user defined) rules, checks and tests.

vRealize Code Stream Continuous Delivery Automation

Integration with cloud native platforms: Photon platform

Scheduling a container directly on vSphere using integrated containers is a great start, but it will not be the typical use case for new applications in production environments. Problems such as scaling, scheduling, dynamic routing and load balancing are universal, so unless you want to reinvent the wheel (a very common developer pastime), it's much more convenient to use a cloud native application platform to deploy applications. Platforms such as Kubernetes, Mesos, Docker Swarm and Pivotal Cloud Foundry take care of scheduling, scaling and dynamic routing automatically.

Photon Platform architecture

At VMworld, VMware announced the missing link for landing cloud native platforms on vSphere – Photon platform – a multi-tenant control plane for provisioning next generation (cloud native) application platforms.

Integrated containers or Photon platform?

Integrated containers vs. Photon platform

Cloud native architecture is the future, but applications need to be designed to be cloud native (the twelve factors), and most existing applications are just not ready. So basically it comes down to this:

  • cloud native applications ⇒ cloud native platform using Photon platform
  • ‘legacy’ applications ⇒ vSphere, with packaging as container if possible

Note that for large applications, it doesn’t have to be one or the other: realistic migrations of existing applications will likely keep a core monolithic/legacy part hosted on the traditional platform, with new extensions or refactored bits – for which a business case can be made – as cloud native applications.

Pivotal Cloud Foundry partnership

Pivotal Cloud Foundry (PCF) is just one of the cloud platforms that can be provisioned on vSphere with Photon controller, so why the special attention for PCF? From the VMware perspective this seems obvious: VMware owns Pivotal Software, so sure, they like to see them do well – there's $$$ in it.

However, from the impartial enterprise perspective there is a very good case to make for PCF as well:

  • it’s the only platform that has support for enterprise concepts like organisations, projects, spaces, user management
  • it’s the only platform which strongly drives home the distinction between platform operations (managing and monitoring the cloud platform itself) and application operations (managing and monitoring the apps)
  • it's a structured/opinionated platform which enforces DevOps principles – as opposed to unstructured (more freedom, a.k.a. more chaos/management hell) platforms such as Kubernetes and Mesos

Pivotal: enabling DevOps

Ergo: it’s the only platform right now that’s good enough for general purpose enterprise production use, and it’s the only platform that ‘just works’ on vSphere.

VMware and the commoditization of virtualization

Technology aside, VMworld 2015 was interesting because VMware is in somewhat of a bind: the hypervisor – once the sole reason for buying VMware – has become a commodity. The reason for choosing vSphere nowadays is the management, monitoring and automation suite around it. Meanwhile, disruptive newcomers are using DevOps and cloud native architectures; coming from a development background myself, I can see why these are the future, and I was sure there were enough intelligent people at VMware to recognize this as well.

So VMware had to move, and after following the DevOps and cloud native tracks and talking to Kit Colbert privately it became very obvious they are in fact moving.

However, VMware has a strong customer base in Ops, in organizations which aren't known for their aptitude for change; perhaps the willingness to change technology is there, but real change is needed, especially on the culture and process fronts, in order to keep up.

So it's pretty clear: VMware realizes exactly what needs to happen; the difficulty is in determining the right pace of change. If they go too fast they alienate their current customer base, and if they go too slow they become legacy themselves. A real balancing act, but the proposition is strong.

.NET on Pivotal Cloud Foundry

In my latest post, I tested Lattice.cf, the single-VM smaller brother of Pivotal Cloud Foundry (PCF). Considering that a full installation of PCF has a footprint of about 25 virtual machines (VMs) requiring a total of over 33 GB RAM, 500+ GB storage, and some serious CPU power, it's not hard to see why Lattice is more developer friendly. However, that wasn't my real motivation to try it out: more important was Lattice's incorporation of the new elastic runtime architecture codenamed 'Diego', which will replace the current Droplet Execution Agent (DEA) based runtime in due time.

For me, the main reasons to get excited about Diego are two-fold:

  • Diego can run Docker containers: I demoed this in my latest post
  • Diego can run Windows (.NET) based workloads

In this post, I'll demo the Windows-based workloads by deploying an ASP.NET MVC application which uses the full-fledged, production-ready .NET stack and requires a Windows kernel – as opposed to .NET Core, which is cross-platform, runs on a multitude of OSes, and is very exciting, but not production ready yet.

Diego on PCF

At this point we have to resort to PCF, as Lattice can't run Windows workloads. This is because Lattice's strong point (all services in one VM) is also its weakness: since Lattice incorporates all required services in a single Linux VM, it quite obviously loses the ability to schedule Windows workloads.

Let’s take a quick look at the Diego architecture overview:

Diego architecture overview

Diego consists of a ‘brain’ and a number of ‘cells’. These cells run a container backend implementing Garden – a platform-neutral API for containerization (and the Go version of Warden). The default backend – garden-linux – is Linux based and compatible with various container technologies found on Linux (including Docker).

As soon as we run the various services in the overview on separate VMs (as we do on PCF), it becomes possible to provision a Windows cell. 'All' we need is a Windows backend for Garden and the various supporting services, and we should be good to go. Preferably in the form of a convenient installer.

One problem remains: we still need the Diego runtime in Pivotal Cloud Foundry. Kaushik Bhattacharya at Pivotal supplied me with an internal beta of ‘Diego for PCF’ (version 0.2) which I could install in my Pivotal on VMware vSphere lab environment. Installed, the Ops Manager tile looks like this:

Diego for PCF tile

And the default VMs and resources that come with Diego for PCF (notice the single default cell which is a garden-linux cell):

Diego for PCF default resources

garden-windows cell

To provision a Windows cell, we have to manually create a new VM, install a Windows Server OS, and run a setup PowerShell script as well as the installer with the correct configuration. I'll skip the details here, but when all is done you can hit the receptor service URL (in my case https://receptor.system.cf.lab.local/v1/cells), and it should show two cells: one installed as part of Diego for PCF, and the one we just created.
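
Querying that endpoint is a plain HTTPS GET, for example with curl (the -k flag is only there because my lab uses self-signed certificates; depending on your setup the receptor may also require credentials):

curl -k https://receptor.system.cf.lab.local/v1/cells

which returns something like: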

[
    {
        "cell_id": "DiegoWinCell",
        "zone": "d48224618511a8ac44df",
        "capacity": {
            "memory_mb": 16383,
            "disk_mb": 40607,
            "containers": 256
        }
    },
    {
        "cell_id": "cell-partition-d48224618511a8ac44df-0",
        "zone": "d48224618511a8ac44df",
        "capacity": {
            "memory_mb": 16049,
            "disk_mb": 16325,
            "containers": 256
        }
    }
]

Intermezzo: Containerization on Windows?

There must be some readers who are intrigued by the garden-windows implementation by now. After all, since when does Windows have support for containers? Microsoft has indeed announced container support for its next Server OS, and Windows Server 2016 Technical Preview 3 with container support was just released. However, this is not the 'containerization' used in the current version of garden-windows.

So how does it work?

Some of you may know Hostable Web Core: an API with which you can load a copy of IIS inside your own process. So what happens when you push an app to Cloud Foundry and select the Windows stack is that the app is hosted inside a new process on the cell, which gets its own copy of IIS, after which the app is started.

I know what you're thinking by now: "but that's not containerization at all". Indeed, strictly speaking it isn't: it doesn't use things like the cgroups and namespaces that Linux (and future Windows) container technologies use to guarantee container isolation. However, from the perspective of containers as 'shipping vehicles for code' it is very much containerization, as long as you understand the security implications.

Deploying to the Windows cell

Deployment to the Windows cell isn't any harder than deployment to the default cells; however, there are a couple of things to keep in mind:

  • as the Windows stack isn't the default, you have to specify it explicitly
  • as the DEA mode of running tasks is for now still the default, you have to enable Diego support explicitly

Using the Diego Beta CLI, the commands to push a full .NET stack demo MVC app are as follows (assuming you cloned it from GitHub):

cf push diegoMVC -m 2g -s windows2012R2 -b https://github.com/ryandotsmith/null-buildpack.git --no-start -p ./
cf enable-diego diegoMVC
cf start diegoMVC
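
Scaling out afterwards is just a matter of adding instances (the instance count here is arbitrary):

cf scale diegoMVC -i 4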

After pushing and scaling, the Pivotal Apps Manager looks like this:

Pivotal CF Apps Manager with DiegoMVC .NET app

And the app itself:

DiegoMVC .NET application on Windows cell on Pivotal CF

Summary

Diego for PCF is still in internal beta, but soon Pivotal Cloud Foundry will have support for running applications using the full .NET stack.