In computer graphics and digital image processing, scaling refers to resizing a digital image. In video technology, the enlargement of digital material is also referred to as upscaling or resolution enhancement. When scaling a vector graphic, the graphic primitives that make up the vector graphic are stretched by a geometric transformation before rasterization, which does not cause any loss of image quality.
When scaling raster graphics, their image resolution is changed. This means that a new image with a higher or lower number of pixels is created from a given raster graphic. In the case of increasing the number of pixels (upscaling), this is usually associated with a visible loss of quality. From a digital signal processing standpoint, raster graphics scaling is an example of sample rate conversion, the conversion of a discrete signal from one sample rate (in this case, the local sample rate) to another.
Image scaling is used in web browsers, image editing programs, image and file viewers, software magnifiers, digital zoom, crop enlargement and the generation of preview images, as well as in the output of images from screens or printers.
---
The enlargement of images is also important for the home cinema sector, where HDTV-capable output devices are operated with material in PAL resolution, e.g. from a DVD player. The upscaling is carried out in real time by special chips (video scalers), and the output signal is not stored. Upscaling thus contrasts with upconverting material, where the output signal need not be created in real time but is instead stored.

Scaling Methods for Raster Graphics
Image editing programs usually offer several scaling methods. The most commonly supported methods – pixel repetition, bilinear and bicubic interpolation – scale the image using a reconstruction filter.
When scaling, the given image grid must be mapped onto an output grid of a different size. The scaling can therefore be visualized by overlaying the pixel grid of the output image to be calculated on the pixel grid of the input image. Each pixel of the output image is assigned a color value that is calculated from the nearby pixels of the input image. The reconstruction filter used determines which pixels of the input image enter the calculation and how their color values are weighted.
During scaling, a two-dimensional reconstruction filter is placed over each pixel of the output image. The color value is calculated as the sum of the color values of the pixels of the input image overlapped by the carrier of the reconstruction filter, weighted by the value of the reconstruction filter on these pixels.
Usually, reconstruction filters fall off with increasing distance from the center. As a result, color values closer to the output pixel are weighted more strongly, and those further away more weakly. The size of a reconstruction filter is measured relative to the pixel grid of the input image when enlarging and, in the case of reduction, relative to the grid of the output image.
Some reconstruction filters have negative lobes; such filters sharpen the image in a way similar to unsharp masking. This can produce color values outside the permitted value range, which are then usually clamped to the minimum or maximum value. It must also be taken into account that at the edges of the image the reconstruction filter overlaps fewer pixels than elsewhere. To prevent darkened pixels at the image edges, the filter must be renormalized there: the calculated color value of the output image is divided by the sum of the values of the reconstruction filter at the overlapped pixels of the input image. Another option is to use the nearest color value at the edge of the image for points that fall outside the image.
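The weighted sum described above, including the renormalization at the image edges, can be sketched for a single image row. This is a minimal sketch, assuming grayscale values stored in a list and a filter radius of at least one pixel; the function name, the pixel-center coordinate mapping and the `support` parameter are illustrative choices, not a standard API.

```python
import math

def resample_1d(pixels, new_len, kernel, support):
    """Resample one row of gray values with a 1-D reconstruction filter.

    kernel(d) gives the filter value at signed distance d (measured in
    input pixels); support is the filter radius. Dividing by the weight
    sum renormalizes the filter where it overlaps fewer pixels at the
    image edges, so edge pixels are not darkened.
    """
    n = len(pixels)
    out = []
    for i in range(new_len):
        # center of output pixel i, mapped into input coordinates
        center = (i + 0.5) * n / new_len - 0.5
        lo = max(0, math.ceil(center - support))
        hi = min(n - 1, math.floor(center + support))
        acc = wsum = 0.0
        for j in range(lo, hi + 1):
            w = kernel(j - center)      # weight from the reconstruction filter
            acc += w * pixels[j]
            wsum += w
        out.append(acc / wsum)          # renormalization step
    return out
```

With the triangle filter (support 1), enlarging the two-pixel row `[0, 100]` to four pixels yields `[0.0, 25.0, 75.0, 100.0]`; note that for reductions the filter would additionally have to be widened by the scale factor, as described above.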
When comparing different reconstruction filters, the one-dimensional filters can first be considered. Reconstruction filters, which are defined as polynomials, are also called splines. Other well-known filters are the Lanczos filter and the Gaussian filter.
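The one-dimensional filters named here can be written down directly. In this sketch the Lanczos support `a = 3` and the Gaussian width `sigma = 0.5` are illustrative defaults, not values fixed by the text; in practice the Gaussian is truncated at a few standard deviations.

```python
import math

def triangle(d):
    """Triangle filter (linear spline), support 1."""
    return max(0.0, 1.0 - abs(d))

def lanczos(d, a=3):
    """Lanczos filter with support a: a sinc windowed by a wider sinc."""
    if d == 0.0:
        return 1.0
    if abs(d) >= a:
        return 0.0
    x = math.pi * d
    return a * math.sin(x) * math.sin(x / a) / (x * x)

def gaussian(d, sigma=0.5):
    """Gaussian filter; has infinite support, so it is truncated in use."""
    return math.exp(-d * d / (2.0 * sigma * sigma))
```

All three peak at the center and fall off with distance; the Lanczos filter additionally has negative lobes, which give it the sharpening behavior mentioned above.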
There are two ways in which a two-dimensional reconstruction filter can be created from a one-dimensional reconstruction filter, namely by radial symmetry and by separation.
A two-dimensional, radially symmetrical reconstruction filter can be created as the rotation surface of a one-dimensional filter. The filter value depends solely on the distance from the center. In order to apply radially symmetrical reconstruction filters, the Euclidean distance to the pixels of the input image must therefore be calculated. Radially symmetrical filters result in sampling frequency ripple: when zooming in on a solid color area, the color values can vary from pixel to pixel unless the filter is renormalized for each pixel.
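Evaluating a radially symmetric filter amounts to feeding the Euclidean distance from the filter center into the one-dimensional profile; the helper name below is illustrative.

```python
import math

def radial_weight(kernel, dx, dy):
    """Weight of a radially symmetric 2-D filter at offset (dx, dy):
    the 1-D kernel evaluated at the Euclidean distance from the center."""
    return kernel(math.hypot(dx, dy))
```

Because this distance must be computed for every overlapped input pixel, radially symmetric filters are more expensive to apply than separable ones.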
Most scaling methods use separable filters with a square carrier. In the case of separable filters, the calculation using a two-dimensional filter can be replaced by a series of interpolations with a one-dimensional reconstruction filter. In an intermediate step, the interpolated point at the x-coordinate of the output pixel is calculated for each of the image lines overlapped by the filter. The interpolated color value on the output pixel is then calculated from the vertical points created in this way.
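The two-pass procedure can be illustrated with the triangle filter, i.e. bilinear scaling. This is a sketch assuming grayscale images stored as lists of rows; the corner-aligned coordinate mapping and the function names are simplifying choices made for brevity.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b, t in [0, 1]."""
    return a * (1 - t) + b * t

def separable_bilinear(img, new_w, new_h):
    """Separable scaling: a horizontal pass over all rows, then a
    vertical pass over the columns of the intermediate image. Only 1-D
    interpolations are needed; no Euclidean distances are computed."""
    h, w = len(img), len(img[0])
    # horizontal pass: interpolate every row at the output x-coordinates
    tmp = []
    for row in img:
        new_row = []
        for x in range(new_w):
            fx = x * (w - 1) / max(new_w - 1, 1)   # map to input coordinates
            x0 = min(int(fx), w - 2) if w > 1 else 0
            new_row.append(lerp(row[x0], row[min(x0 + 1, w - 1)], fx - x0))
        tmp.append(new_row)
    # vertical pass: interpolate each intermediate column at the output y-coordinates
    out = [[0.0] * new_w for _ in range(new_h)]
    for y in range(new_h):
        fy = y * (h - 1) / max(new_h - 1, 1)
        y0 = min(int(fy), h - 2) if h > 1 else 0
        for x in range(new_w):
            out[y][x] = lerp(tmp[y0][x], tmp[min(y0 + 1, h - 1)][x], fy - y0)
    return out
```

Each output pixel is thus reached through a short chain of one-dimensional interpolations, which is what makes separable filters cheaper than radially symmetric ones.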
Image artifacts created by separable filters are not isotropically distributed (evenly in all directions), but preferably aligned horizontally and vertically. Since separable filters only require a sequence of one-dimensional interpolations and no Euclidean distances are calculated, they are faster to calculate than radially symmetric filters.
The Gaussian filter is the only radially symmetrical reconstruction filter that is also separable. For all other filters, the separable and radially symmetrical generation leads to different results.
In pixel repetition, also known as nearest-neighbor interpolation, each pixel of the output image is assigned the color value of the nearest pixel of the input image. Reducing images with this method can lead to strong aliasing, which manifests itself as image artifacts. Enlarging by pixel repetition produces a blocky, “pixelated” appearance.
When enlarging, pixel repetition corresponds to reconstruction with a box filter of size 1×1 pixel. Such a filter overlaps exactly one pixel of the input image, namely the nearest one.
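A minimal sketch of pixel repetition, assuming a grayscale image stored as a list of rows (the function name is illustrative):

```python
def scale_nearest(img, new_w, new_h):
    """Pixel repetition: each output pixel takes the value of the
    nearest input pixel, found by mapping the output pixel center
    back into the input grid."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        sy = min(h - 1, int((y + 0.5) * h / new_h))
        row = []
        for x in range(new_w):
            sx = min(w - 1, int((x + 0.5) * w / new_w))
            row.append(img[sy][sx])
        out.append(row)
    return out
```

Doubling a 2×2 image simply repeats each input pixel in a 2×2 block, which is exactly the blocky appearance described above.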
In bilinear interpolation, the color value of a pixel of the output image is interpolated from the four adjacent color values of the input image.
This filter is separable and can be calculated as a series of interpolations with a one-dimensional reconstruction filter (the triangle filter). First, an interpolated color value is calculated for each of the two image lines involved, and then interpolation is performed between these two vertical points.
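For a single point at continuous input coordinates, the two horizontal interpolations followed by one vertical interpolation look as follows; this sketch assumes in-range, non-negative coordinates and a grayscale image as a list of rows.

```python
def bilinear_sample(img, x, y):
    """Bilinear interpolation at continuous coordinates (x, y):
    two linear interpolations along the rows, then one between
    the two resulting vertical points."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)                      # assumes 0 <= x <= w-1, 0 <= y <= h-1
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # interpolate along x in the two image lines involved
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    # then interpolate between the two vertical points
    return top * (1 - fy) + bot * fy
```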
In bicubic interpolation, a color value of the output image is interpolated from the neighboring color values of the input image using cubic splines. There are several common cubic splines with different properties; the term “bicubic interpolation” is therefore ambiguous.
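One common choice among these cubic splines is the Catmull-Rom spline, which interpolates between the two middle samples of a four-sample neighborhood; other variants (e.g. the cubic B-spline or the Mitchell-Netravali filter) weight the samples differently, which is the source of the ambiguity mentioned above.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom cubic spline: interpolates between p1 and p2 given
    the four neighboring samples p0..p3, for t in [0, 1]."""
    return 0.5 * (2 * p1
                  + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
```

Bicubic scaling applies this one-dimensional spline separably: first along four image rows, then once to the four intermediate values, so each output pixel draws on a 4×4 neighborhood of input pixels.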
Technical drawings such as construction, design and survey plans, as well as maps, contain symbols, dimensions and explanatory text entries in addition to the object geometry. These drawings are based on pure vector graphics and can therefore be scaled freely. Their design is standardized by general and industry-specific drawing standards and sample sheets, which serve as a means of uniform plan design, to such an extent that users can interpret and implement the plan contents.
In the case of technical drawings, good plan design is important for interpreting the plan content. This includes a balanced relationship between the presentation of the essential plan content and the additional explanatory information, which can only be maintained within a limited scaling range. For output, vector graphics must be converted into raster graphics, so the limits of raster graphics also apply to vector graphics at output time. Enlargements are generally less problematic; here the plan size limits the scaling. With reductions, however, fewer pixels are available for the same information density; the display then quickly appears overloaded and becomes difficult to read.
In the course of this conversion, however, the output can be influenced, for example by hiding details that are too small to be clearly recognizable. For symbols, text entries and dimensions, the importance of an entry can generally be conveyed by the size of its representation: prominent entries provide a good overview, because the plan user gains an overall impression more easily and can better familiarize himself with the plan content, while subordinate details are drawn small but still legible. This method of presentation makes it possible to control the density of information automatically to a certain extent.