Today, high rates of innovation are achieved not only by start-ups but also by established companies such as Amazon and Zalando. One key enabler is extremely short software development cycles, and this is where agile software development, DevOps and Continuous Delivery come into play.
For conventional software projects, two to four releases are often scheduled each year, and it can take months for new requirements to reach production. Until 2013, this was also the case for EVI, the digital phone book of the DB Group. But these slow release cycles made the product unviable in the long run. “We’ve modernised our work across the entire software lifecycle in four stages,” says Dr Stephan Pflume from the CIO area of the DB Group. “First, we switched to agile software development. Then we migrated operation of the application to the DB Enterprise Cloud, before expanding the development team into a DevOps team and, finally, setting up a Continuous Delivery Pipeline.” A small team now combines the development and the operation of the software. New features are put into operation continuously, rather than in a fixed release cycle.
Explanation of terms
- DevOps: the aim of this approach is to shorten the intervals between releases and to improve quality. This relies on standardised processes and tools in a CD pipeline as well as close collaboration between the Development and Operations (Dev and Ops) units for more effective and efficient work.
- Continuous Integration (CI): the practice of continuously merging software changes into a shared, version-controlled code base.
- Continuous Testing (CT): a process of performing automated tests to obtain immediate feedback on business risks that are associated with a release candidate.
- Continuous Delivery (CD): the automated delivery of release candidates to the development, test, integration and production environments. CD builds on CI and CT.
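How the three practices build on one another can be illustrated with a short sketch. Everything below is a simplified, hypothetical model for illustration only, not the API of any real CI/CD tool:

```python
# Hypothetical sketch: CD relies on CT, and CT tests what CI has integrated.

def continuous_integration(changes):
    """CI: merge changes into the shared, version-controlled code base."""
    code_base = []
    for change in changes:
        code_base.append(change)           # each change is integrated immediately
    return code_base

def continuous_testing(code_base):
    """CT: run automated tests for immediate feedback on the release candidate."""
    return all(change.get("tests_pass", False) for change in code_base)

def continuous_delivery(code_base, environments):
    """CD: deliver the release candidate to each target environment in order."""
    if not continuous_testing(code_base):  # negative feedback stops the pipeline
        return []
    return [(env, len(code_base)) for env in environments]

changes = [{"id": 1, "tests_pass": True}, {"id": 2, "tests_pass": True}]
deployed = continuous_delivery(
    continuous_integration(changes),
    ["development", "test", "integration", "production"],
)
print(deployed)  # every environment receives the same release candidate
```

The ordering is the point: CD never promotes a release candidate that CT has not approved, and CT only tests what CI has actually integrated.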
Previously, development and operations were strictly separated from one another, each with different aims: developers want to evolve systems through frequent releases, while Operations concentrates on keeping the systems running. The two sides typically meet under time pressure, for example when a new release is launched or when a system fails. This kicks off the familiar “blame game”, in which each unit blames the other for the situation. DevOps instead promotes the automation of processes in IT operations and the shared use of tools between Development and Operations. Developers can hand the infrastructure configuration they use over to Operations and coordinate it with them; alternatively, Development and Operations develop the infrastructure code together and thus preclude incompatibilities between the development, test and production environments from the outset.
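The idea behind jointly developed infrastructure code can be sketched in a few lines: one shared base definition plus small, explicit per-environment overrides, so the environments cannot drift apart unnoticed. The parameter names and values below are purely illustrative assumptions:

```python
# Illustrative "infrastructure as code" sketch shared between Dev and Ops.
# One base definition, explicit per-environment overrides, no hidden drift.
# The keys (os_image, runtime, replicas) are assumptions, not a real schema.

BASE = {"os_image": "linux-lts", "runtime": "java-17", "replicas": 2}

OVERRIDES = {
    "development": {"replicas": 1},
    "test":        {"replicas": 1},
    "production":  {"replicas": 4},
}

def render(environment):
    """Merge the shared base with one environment's declared overrides."""
    config = dict(BASE)
    config.update(OVERRIDES.get(environment, {}))
    return config

# Every environment runs the same image and runtime, which is what precludes
# incompatibilities between development, test and production from the outset.
assert render("test")["runtime"] == render("production")["runtime"]
print(render("production"))
```

Only the declared differences (here, the number of replicas) distinguish the environments; everything else is guaranteed identical by construction.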
We’re now able to work in the way that we’d once dreamed of.
In development, agile working methods are already well established, and this agility is now being extended into IT operations. Functions developed in agile sprints can thus be transferred to production quickly and automatically, so they benefit the customer sooner. Thanks to this method, the team now achieves up to 18 releases per year, meaning new requirements reach production faster. Smaller releases have another advantage: it’s easier for users to adapt to the updated software, and user feedback can be fed back into development sooner. “Now everything is software,” says Pflume. “We can now work in the way that we’d once dreamed of doing.” Virtualisation in the cloud allows all applications to be provisioned automatically from a single system definition. However, the automation processes must first be defined and programmed, so the team needs additional skills in software operation and test automation.
Two factors motivated the switch to the cloud: firstly, the pay-per-use model yields a considerable economic advantage; secondly, the “DB Enterprise Cloud” meets the Group’s data protection requirements. EVI was the first Group application in the cloud, and costs were reduced from the start: test systems, for example, are shut down overnight and at weekends, as are some of the production servers. After a two-week evaluation phase, production had already been migrated to smaller servers. The software architecture is currently being modified so that even more cost-efficient servers will suffice during off-peak periods.
The Continuous Delivery Pipeline
The software, the servers and all other components are connected automatically into a processing chain: the Continuous Delivery Pipeline. As soon as a new piece of software code is completed, it is automatically tested and then integrated into the production system. Before these processes could be developed and executed, new tools had to be introduced to the team. DB Systel is helping with the selection of automation tools. “We take on an integrator role. The market offers a range of tools for implementing Continuous Delivery. We search for the ideal modules and connect them into a Continuous Delivery Pipeline,” says Dr Natascha Brosche, DevOps Community Manager at DB Systel. A developer herself, she is familiar with her customers’ requirements. Methods and (open-source) tools are selected as needed and used in both development and operations.
This entirely seamless automation of processes is something like a revolution and a massive step towards reducing the time-to-market.
In the pipeline, the entire process of delivering the software package, from development to testing, and subsequently to acceptance and even production, is automated. “This entirely seamless automation of processes is something like a revolution and a massive step towards reducing the time-to-market,” says Brosche. DevOps team members now receive a tool set that allows them to push the entire release to production at the touch of a button. “For me, the innovation factor is that the release process now actually happens almost incidentally.” Stephan Pflume is also impressed: “With these methods, we’re achieving speeds that allow us to make innovative things reality. Everything else is too slow.” Other projects within the Group are also exploiting the advantages of DevOps.
It’s clear that we’ve made a good start – now let’s see what the future has in store for us.