Test automation is the automation of activities in testing, in both software and hardware testing. In software development, it is particularly important to establish a fixed, well-defined status of the software, for example in order to answer the question: is the current, new software version better than the old one?
Automated tests that check for undesirable effects on other functions after a change has been applied are called regression tests. They make software measurable in terms of its quality and reveal possible side effects of changes directly and recognizably. They serve as immediate feedback for developers and testers, who may not be able to survey the overall software system at once, and help to detect side effects and consequential errors.
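As a minimal sketch, a regression test pins the known-good behavior of a piece of code so that a later change cannot alter it unnoticed. The `apply_discount` function below is a hypothetical example, written with Python's standard `unittest` module:

```python
import unittest

# Hypothetical function under test: after any change to this pricing
# logic, the regression suite below must still pass.
def apply_discount(price, rate):
    """Return the price reduced by the given discount rate (0.0-1.0)."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)

class ApplyDiscountRegressionTest(unittest.TestCase):
    def test_known_good_values(self):
        # Pin existing behavior so later changes cannot silently alter it.
        self.assertEqual(apply_discount(100.0, 0.25), 75.0)
        self.assertEqual(apply_discount(19.99, 0.0), 19.99)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 1.5)

# Run the suite programmatically (normally: python -m unittest ...)
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountRegressionTest)
)
```

After every change to `apply_discount`, rerunning this suite immediately shows whether existing behavior was broken.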
Test automation therefore provides a metric: the number of successful test cases per test run. This metric can help answer the following questions:
- When is a new requirement completely fulfilled by the software?
- When is a bug fixed?
- When is the developer’s work finished?
- Who is responsible for what and when?
- What is the quality of a new software version?
- Is the quality of the new software version better than the previous version?
- Does a fixed bug or a new requirement have an impact on existing software (change in the behavior of the software)?
- Is it ensured that live operation with the new software will be successful and secure?
- What does the software actually contain in terms of new functionality and bug fixes, and is this traceable?
- Can the delivery date of the software still be met if its current quality cannot be assessed?
For the example question "When is a bug fixed?", the answer here is: exactly when all existing test cases, as well as the test cases written specifically for the bug, have been completed successfully.
Such feedback is only provided by constant testing, and constant testing is only possible and feasible through automation.
Another benefit of test automation is the acceleration of the development process. Whereas in software projects without automation, build, installation and testing are carried out manually one after the other, in fully automated projects (i.e. where build and installation can be automated in addition to testing) these three steps can be triggered automatically in sequence, for example in a nightly run. Depending on the scope of the project, this process can be started in the evening and the test result is available the next morning.

Activities that Can be Automated
Test Case Creation
Depending on the format used to describe a test case, test case creation can be automated by transforming higher-level descriptions (test specifications) into this format. Languages at different levels of abstraction are used for test specification: simple table-like notations for test data and function calls, scripting languages (e.g. Tcl, Perl, Python), imperative languages, object-oriented approaches, declarative and logical formalisms, as well as model-based approaches. The aim is a far-reaching and, as far as possible, fully automatic translation of artifacts at a technical language level far removed from the machine into artifacts at a machine-near language level. Another approach is to generate test cases dynamically from declared business objects. If a test specification is not available in executable form but in a non-executable language (e.g. UML, an Excel spreadsheet, or similar), suitable tools may be able to translate it automatically into executable test cases.
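A simple table-like notation can be translated into executable test cases almost mechanically. The following sketch assumes a spec row of the form (function, arguments, expected result); the rows and the `add` function are illustrative, not a standard format:

```python
# Hypothetical function under test.
def add(a, b):
    return a + b

# Table-like test specification: each row is one test case.
SPEC = [
    # (function, args, expected)
    (add, (1, 2), 3),
    (add, (-1, 1), 0),
    (add, (0, 0), 0),
]

def run_spec(spec):
    """Execute each row of the tabular spec; return (passed, failed) counts."""
    passed = failed = 0
    for func, args, expected in spec:
        if func(*args) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed
```

`run_spec(SPEC)` yields the per-run metric directly; adding a test case is just adding a table row, with no new code.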
Test Data Creation and Test Scripting
Since the number of possible input values and input sequences of a program is often very large, input data and sequences must be selected from the test specifications according to the test coverage to be achieved when generating test cases. The data model of the software can often be used for test data creation, while in model-based testing behavioral models of the software are used for test scripting. Script-free solutions are also available on the commercial market.
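The effect of the chosen coverage criterion on the amount of test data can be illustrated with a small sketch. The parameter domains below are assumptions for illustration; exhaustive combination covers everything but grows multiplicatively, while a weaker criterion ("each value appears at least once") needs far fewer cases:

```python
from itertools import product

# Illustrative value domains, as they might be derived from a data model.
browsers = ["Firefox", "Chrome"]
locales = ["de", "en"]
user_roles = ["guest", "admin"]

# Exhaustive combination: full coverage of all value combinations.
all_combinations = list(product(browsers, locales, user_roles))  # 2*2*2 = 8 cases

def each_value_once(*domains):
    """Weaker coverage criterion: every value occurs in at least one case.

    Note: much weaker than pairwise (all-pairs) coverage.
    """
    longest = max(len(d) for d in domains)
    return [tuple(d[i % len(d)] for d in domains) for i in range(longest)]
```

With three two-valued parameters the exhaustive set has 8 cases, while `each_value_once` needs only 2; this gap widens rapidly as domains grow, which is why coverage-driven selection matters.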
Test Execution
Today, tests are largely carried out using fully automated test tools. Depending on the target system, unit test tools, test systems for graphical user interfaces, load test systems, hardware-in-the-loop test benches or other tools are used.
Test Evaluation
For test evaluation, the test result obtained must be compared with the expected value. In the simplest case, only a tabular comparison is required; however, if the target behavior is defined by logical constraints or involves extremely complex calculations, the so-called oracle problem can occur. If two software versions, or two test cycles and thus two test results, are compared against the target result, trend statements and quality statistics can be derived.
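The simple tabular case, plus a trend statement between two cycles, can be sketched in a few lines. The result tables are illustrative:

```python
def evaluate(expected, actual):
    """Tabular comparison: return (matching, deviating) entry counts."""
    matches = sum(1 for k, v in expected.items() if actual.get(k) == v)
    return matches, len(expected) - matches

# Illustrative target result and two test cycles.
expected = {"tc1": "ok", "tc2": "ok", "tc3": "ok"}
cycle_1 = {"tc1": "ok", "tc2": "fail", "tc3": "ok"}
cycle_2 = {"tc1": "ok", "tc2": "ok", "tc3": "ok"}

# Trend: did the newer cycle pass more test cases than the previous one?
trend = evaluate(expected, cycle_2)[0] - evaluate(expected, cycle_1)[0]
```

A positive `trend` indicates the new version passes more test cases against the same target result; the oracle problem arises precisely where no such `expected` table can be stated in advance.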
Test Documentation
In test documentation, a comprehensible and traceable test report is generated from the test results received. Document generators and template tools can be used for this purpose.
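As a minimal sketch of template-based report generation, Python's standard `string.Template` can fill a fixed report skeleton from the test results; the field names and verdict rule are illustrative assumptions:

```python
from string import Template

# Report skeleton; $-placeholders are filled from the test results.
REPORT = Template(
    "Test report for build $build\n"
    "Passed: $passed / $total\n"
    "Result: $verdict\n"
)

def render_report(build, passed, total):
    """Generate a plain-text report from one test run's results."""
    verdict = "PASS" if passed == total else "FAIL"
    return REPORT.substitute(
        build=build, passed=passed, total=total, verdict=verdict
    )
```

Real document generators work the same way at larger scale, merging results into HTML or PDF templates instead of a plain-text skeleton.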
Test Administration
The task of test administration is the management and versioning of test suites as well as the provision of an adequate user environment. In addition to standard tools (e.g. CVS, Eclipse), there are a number of special tools specifically tailored to the needs of software testing.