Jenkins is a software system developed to support collaboration, quality management and agility. Modern IT projects require high-quality delivery. At the same time, the process must be fast enough to respond to the ever-changing demands of operational business. Misconceptions about the coexistence of these two concerns, quality management and agility, often cause irritation or even frustration for development and operations managers. To improve collaboration between development and operations, new tools for software distribution and deployment are often introduced. Jenkins is a software project developed for this purpose. The goal of this series is to evaluate how the use of Jenkins facilitates the distribution and set-up of software and thus improves collaboration between development and operations. This includes the cost of introducing Jenkins and its possible consequences, as well as its benefits.
Here, information on the subject areas of Jenkins and DevOps is summarized under the sub-header Basics. Based on this basic knowledge, requirements will be set in the following sub-headers, followed by an actual analysis against these requirements. Jenkins is examined systematically and without judgment. This is Part I of our multi-part analysis of Jenkins for DevOps.
Analysis of Jenkins for DevOps : Basics
The term DevOps consists of two words: Development and Operations. We discussed the basics of DevOps in some detail in a separate five-part article. Development and Operations are two areas of software development with different goals. Development aims to incorporate new features into the software quickly, while Operations focuses on stable software operation, so that existing functions continue to work without problems. The change in software development leads to ever more agile methods with ever-shorter release cycles. These release cycles are the result of constantly evolving requirements on the software and its environment. Software operation, in contrast, has historically been guided by fixed dates for software updates and a small number of such updates, to ensure the stability and functionality of the application. DevOps is a strategy which tries to unify these different goals. The interests of both areas are supported with a high degree of communication and coordination as well as increased automation of the processes between the departments. Through software that supports the departments and their joint processes, these targets can be achieved.
Jenkins is a continuous integration (CI) platform, originally conceived to automate quality assurance testing and the build process for the development department. The project uses an extensible, plug-in-based architecture which allows it to be adapted to different use cases. With this solution, Jenkins breaks with the traditional software model in the form of the waterfall model, which requires long development cycles because of the many stages that are run through one after another. The project is written in Java, works across operating systems and was initially developed under the name Hudson, under the sponsorship of Sun Microsystems.
Hudson was originally developed by Kohsuke Kawaguchi as a test automation tool; later it was positioned as an alternative to the CruiseControl program. It quickly gained popularity, especially in the Java community: due to this growing popularity, integration into almost all tools of the software development process followed, and Hudson won an award at JavaOne 2008. As a result of Sun's acquisition by Oracle, as well as differences of opinion within the Hudson community, Jenkins was created as a fork of the Hudson project. Due to its far too late integration into the Eclipse Foundation, Hudson finally played only a negligible role. In the last 10 years, Jenkins has become the standard tool used by millions of people to automate software development. There are now more than 1,000 plugins and 100,000 installations of the project.
Integration of Jenkins into the project cycle
The classic software life cycle consists of six phases. DevOps tries to support phases 4, 5 and 6 – implementation, test and integration, and operation and maintenance – with its approaches and to build up a strategy to accelerate the process. The quality of the software always remains in the foreground. The DevOps-supported phases involve both departments, development and operations.
The implementation phase is about developing the software; it mainly involves the development department. For operations, this phase is less important than the subsequent test and integration phase. In that phase, the various components of the software are tested for their functionality, and their interaction is tested in a test environment. This phase has the most interfaces between operations and development, as developers can perform their developer tests there while operations continues testing in the test environment. Both departments are notified about failing tests. After the test run, the application is built and is then ready for delivery. After delivery of the software, the operation and maintenance phase is reached. In it, operations takes care of commissioning the software at the customer and collecting feedback on features of the application. This feedback is forwarded to the development department so that appropriate adjustments can be made to meet the customer's wishes for the software.
To support the software project, Jenkins must be available as a shared tool for both departments. Integration into the project cycle involves two key points – the initial installation and integration of Jenkins, and the changes in the workflow that the Jenkins project entails. The former is a one-time step: the less effort it requires, the better. The latter has a continuous influence on software projects, which is why a greater focus is placed on it. When the workflow within projects is changed for no apparent reason, the tools and the nature of the automation prevent DevOps values from developing. The use of Jenkins should therefore be possible without interfering with the workflow of the developers. Ideally, a developer will only notice a reduced amount of work or the elimination of operations that previously belonged to the routine. In the most unfavorable case, additional work steps are created whose purpose the developer cannot reconstruct, since the benefits of the new software are not apparent. It should also be noted that Jenkins can only be used as part of a larger development infrastructure. In particular, interoperability with an existing versioning system or an established build tool must be ensured.
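This requirement can be illustrated with a minimal declarative Jenkinsfile sketch. It simply checks out from the team's existing version control and reuses the build command developers already run locally, so no workflow changes are imposed; the repository URL, branch and Maven build are assumptions for illustration.

```groovy
// Minimal sketch: Jenkins plugs into the existing infrastructure
// (Git repository and Maven build are assumed examples).
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // the team's existing repository, unchanged
                git url: 'https://example.com/team/app.git', branch: 'main'
            }
        }
        stage('Build') {
            steps {
                // the same command developers already run by hand
                sh 'mvn -B clean package'
            }
        }
    }
}
```

The point of the sketch is that Jenkins only orchestrates steps that already exist; nothing in the developers' routine has to be replaced.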
Operations has similar requirements. Here, too, Jenkins is not intended to create any extra work, but merely to support and automate already existing processes. Jenkins should be involved in work processes without existing processes having to be changed or adapted.
Quality assurance is a key objective for operations in a DevOps environment. Therefore, a standardized approach to quality assurance in software projects is often developed. One of these approaches is the quality assurance (QA) pipeline. It is used in continuous-deployment environments and is essential for feedback to the developer. It begins when software code is pushed into the code repository, which then triggers a sequence of further processes. The pipeline can consist of several stages, each of which can independently report back to the developers. It usually includes unit tests and integration tests. If an error occurs during the execution of these tests, the pipeline is interrupted and aborted, as the tests are essential for the functioning of the application. The pipeline can also be extended by further phases. However, these phases must not be a prerequisite for the operation of the software product; they merely include further tests. Quality assurance thus gets new features as quickly as possible and can still test them in this environment. At the end of the QA pipeline, the application is built and delivered into the test environment. Jenkins must provide a closed-loop procedure for automatically running multiple configurable tests. In addition, there should be a notification for the developers in case of an error, so that they can rectify the situation and react quickly to such error cases. This ensures that operations can quickly obtain an executable version for productive operation.
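The QA pipeline described above can be sketched as a declarative Jenkinsfile: test stages run in sequence, any failing stage aborts the pipeline, a successful run ends with a build delivered to the test environment, and developers are notified on failure. The Maven goals, mail address and delivery script are assumptions for illustration.

```groovy
// Sketch of the QA pipeline: a failing stage aborts the run,
// success ends with delivery into the test environment.
pipeline {
    agent any
    stages {
        stage('Unit tests')        { steps { sh 'mvn -B test' } }
        stage('Integration tests') { steps { sh 'mvn -B verify' } }
        stage('Build and deliver') {
            steps {
                sh 'mvn -B package'
                sh './deploy-to-test-env.sh'  // hypothetical delivery script
            }
        }
    }
    post {
        failure {
            // closed loop: developers learn about the broken build immediately
            mail to: 'dev-team@example.com',
                 subject: "QA pipeline failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for details."
        }
    }
}
```

Further, non-blocking test phases could be added after the delivery stage without becoming a prerequisite for the build itself.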
Optimization of the build process
The agile software methods increasingly used in DevOps environments result in increased requirements for the build process of software projects. The automation of software builds is one of these requirements. Development must be notified if there are problems in the build, so that it can resolve them quickly. In addition, operations must have a way to quickly give feedback on new features to development.
So that the application does not have to be built by hand for every update of the software, the build process should be automated. This shortens the time from a code commit in the code repository to the finished software product. The shorter this time span, the more frequently the application can be updated. Thus, new features are brought into the productive system quickly and offer the user of the application new possibilities. The build process includes the following steps:
- Compilation of the program code
- Generation of documentation
- Carrying out unit tests
- Generation of a deliverable format (exe, zip, tar, etc.)
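The four steps above can be sketched as pipeline stages. The Maven goals used here are assumptions; any build tool with equivalent targets would fit the same structure.

```groovy
// The build process as pipeline stages (Maven goals are assumed examples).
pipeline {
    agent any
    stages {
        stage('Compile')   { steps { sh 'mvn -B compile' } }          // program code
        stage('Docs')      { steps { sh 'mvn -B javadoc:javadoc' } }  // documentation
        stage('Unit test') { steps { sh 'mvn -B test' } }             // unit tests
        stage('Package')   { steps { sh 'mvn -B package' } }          // deliverable (jar, zip, etc.)
    }
}
```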
The continuous integration tool does not perform these steps by itself; rather, it launches them periodically, displays the results, and sends notifications to project members when problems occur. While it is important to get the build server to build the application, it is more important that the server alerts the responsible people if the build fails. A key part of the value proposition of a continuous integration environment is the improvement of the flow of information about the status of the project. This includes failed unit tests, broken integration tests or other quality-related issues. In all cases, the continuous integration server must report the problems that have arisen to the responsible persons.
Continuous integration should not stop as soon as the program code can be compiled without error. Also, running a series of automated tests should not be the end. The next logical step, once the aforementioned has been achieved, is the leap from the automated build process to the distribution phase. This practice is known as automated deployment or continuous deployment.
In its most advanced form, continuous deployment means that any code change, after passing tests and other appropriate verification processes, is automatically distributed to the productive system. The goal is to reduce throughput times and the effort required for the distribution process. This, in turn, is intended to let the development team spend more time on custom functionality and fixes, which increases their throughput. However, systematically pushing the latest software release into the production system is not always appropriate, no matter how good the automated tests are. Many companies are not prepared to use a new, unannounced software version every week. Users need training, the product must be marketed, or similar processes need to be carried out. A more conservative alternative, more suitable for large companies, is to automate the entire distribution process but to initiate the actual distribution manually, at the click of a mouse. This principle is known as continuous delivery and includes all the advantages of continuous deployment without its disadvantages. The decision on the timing of distributing a new software version is thus in the hands of operations rather than IT. The automatic deployment should also be followed by a series of automatic tests – for example, automated acceptance tests that can be performed after deployment on the production server without generating heavy load. One of the fundamental principles of automatic deployment is the reuse of binary files: it is inefficient and unreliable to rebuild the application during the deployment process.
It is common practice to perform a series of unit and integration tests on a specific software version before it is loaded into the test environment. If the application were rebuilt before distribution, the code could have changed, and all previous tests would no longer be reliable. Another requirement is the possibility of rolling back if an error occurs after roll-out.
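A continuous-delivery pipeline combining these ideas might look like the following sketch: the binary is built and tested once, the actual release waits for a manual approval, and the deploy stage reuses the tested artifact instead of rebuilding it. The build command, stash name and deployment script are assumptions for illustration.

```groovy
// Continuous-delivery sketch: build once, gate the release manually,
// and reuse the tested binary instead of rebuilding it.
pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                sh 'mvn -B verify'
                // keep the exact artifact that passed the tests
                stash name: 'binary', includes: 'target/*.jar'
            }
        }
        stage('Release approval') {
            steps {
                // manual "click of a mouse" gate: operations decides the timing
                input message: 'Deploy this tested build to production?'
            }
        }
        stage('Deploy') {
            steps {
                unstash 'binary'                 // reuse, do not rebuild
                sh './deploy.sh target/*.jar'    // hypothetical deployment script
            }
        }
    }
}
```

Keeping the previously deployed artifact available alongside the new one is also what makes a quick rollback possible if an error appears after roll-out.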
Conclusion of Part I of Analysis of Jenkins for DevOps
Since Jenkins is being investigated for DevOps in the context of this project, fast and error-free delivery of the software is part of the requirements. For this reason, implementing or supporting continuous deployment or continuous delivery is part of the requirements for Jenkins.
In this part, we have discussed the bare minimum of theoretical basics. Travis CI is a similar tool, closely integrated with GitHub, and a good public example of how continuous integration (CI) works in practice. In Part II of this series, we will discuss the practical procedure of integration into the project cycle.