AI art, or AI-generated art, refers to works of art created with the help of artificial intelligence. It belongs to the broader field of generative art, a contemporary form of artistic creation in which the focus is not necessarily on the artwork or end product, but on the process of creation and the ideas on which it is based.
The work is produced by executing a procedure: a set of rules devised by the artist, or a program, recorded for example as natural language, musical notation, binary code, or a mechanism.
The process is relatively autonomous and self-organizing, for example through actions carried out according to fixed instructions (as in the score for a happening), through a computer program that processes instructions, image data, or other concepts, or through other media and aids. Under different production conditions the process unfolds differently; the result stays within more or less given limits, but is unpredictable. Generative art therefore often serves artists as a means of avoiding intentionality.
---

Tools and Procedures
There are many mechanisms for creating AI art, including procedural, “rule-based” generation of images based on mathematical patterns, algorithms that simulate brushstrokes and other painted effects, and artificial intelligence or deep learning algorithms such as generative adversarial networks (GANs) and transformers.
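The first of these mechanisms, procedural rule-based generation from mathematical patterns, can be sketched in a few lines. The example below is illustrative, not any particular artist's method: the "rule" is a hypothetical interference pattern of two sine waves, rendered as an 8-bit grayscale image.

```python
import numpy as np

# Procedural, rule-based image generation: the "rule" here is a
# hypothetical interference pattern of two sine waves.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]                       # pixel coordinate grids
pattern = np.sin(xs * 0.3) + np.sin((xs + ys) * 0.2)

# Normalize the pattern to the 0-255 range of an 8-bit grayscale image.
img = ((pattern - pattern.min())
       / (pattern.max() - pattern.min()) * 255).astype(np.uint8)
```

Swapping the formula for any other deterministic rule yields a different texture, which is what makes this family of techniques "procedural".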
One of the first significant AI art systems is AARON, developed by Harold Cohen from the late 1960s onwards. AARON is the most notable example of AI art from the era of GOFAI programming because it uses a symbolic, rule-based approach to generating images. Cohen developed AARON with the goal of encoding the act of drawing. In its earliest form, AARON created simple black-and-white drawings, which Cohen then completed himself by painting them. Over the years, he extended AARON so that it could also paint without his mediation, using special brushes and paints selected by the program itself.
Since their development in 2014, GANs have been widely used by AI artists. A GAN pairs a “generator”, which creates new images, with a “discriminator”, which judges how convincingly they resemble the training data; the two networks are trained against each other. Newer approaches combine a Vector Quantized Generative Adversarial Network with Contrastive Language-Image Pre-training (VQGAN+CLIP) to steer image generation with text prompts.
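The adversarial setup can be sketched on a toy problem. The following minimal, illustrative one-dimensional “GAN” is not how production systems are built: the generator is an affine map of noise, the discriminator a logistic classifier, and both are updated with hand-derived gradients. Real GANs use deep networks, automatic differentiation, and image data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(x) = sigmoid(w*x + c); Generator G(z) = a*z + b.
w, c = 0.1, 0.0                      # discriminator parameters
a, b = 1.0, 0.0                      # generator parameters
lr = 0.05
real_mean, real_std = 4.0, 0.5       # toy "real data" distribution

for step in range(2000):
    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    x = rng.normal(real_mean, real_std, 64)   # real samples
    z = rng.normal(0.0, 1.0, 64)              # latent noise
    g = a * z + b                             # fake samples
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    grad_w = np.mean(-(1 - d_real) * x) + np.mean(d_fake * g)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, 64)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

samples = a * rng.normal(0.0, 1.0, 1000) + b  # draw from the trained generator
```

The alternating loop is the essential idea: each network's loss is defined by the other's current behavior, so the generator improves only as the discriminator does.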
DeepDream, released by Google in 2015, uses a convolutional neural network to find and amplify patterns in images using algorithmic pareidolia, creating a dream-like psychedelic appearance in the intentionally over-processed images.
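DeepDream's core mechanism, gradient ascent on the input image to amplify whatever patterns a network already responds to, can be illustrated without a full convolutional network. The sketch below is a deliberate simplification: it maximizes the response of a single hand-picked filter on a one-dimensional signal, whereas DeepDream itself uses the layer activations of a trained CNN on a two-dimensional image.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.1, 128)        # the "image": a noisy 1-D signal
k = np.array([-1.0, 2.0, -1.0])      # hypothetical stand-in for a learned filter

def activation(signal):
    # Filter response (cross-correlation), analogous to a conv-layer output.
    return np.correlate(signal, k, mode="valid")

start_energy = float(np.sum(activation(x) ** 2))

for _ in range(100):
    y = activation(x)
    # Gradient of 0.5 * sum(y**2) with respect to the input signal.
    grad = np.convolve(y, k, mode="full")
    grad /= np.abs(grad).max() + 1e-8  # normalize the step, as DeepDream does
    x += 0.05 * grad                   # gradient ascent on the input itself

end_energy = float(np.sum(activation(x) ** 2))
```

Each iteration reshapes the input so the filter "sees" more of its pattern, which is exactly the feedback loop that produces DeepDream's hallucinatory textures.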
Several large technology companies have released programs that use AI to generate images from text inputs. These include OpenAI’s DALL-E, which presented a series of generated images in January 2021, Google Brain’s Imagen and Parti, announced in May 2022, and Microsoft’s NUWA-Infinity.
There are many other programs for generating AI art, ranging in complexity from simple consumer mobile applications to Jupyter notebooks that require powerful GPUs to run effectively. Examples include Midjourney and StyleGAN, among many others. On August 22, 2022, Stable Diffusion was released, the first such model whose source code is fully open source.
Stable Diffusion is also used as a tool in 3D animation; for example, a plug-in has been released for the open-source software Blender that generates textures with it. Such textures have the advantage that they can be produced to highly individual specifications and are free to use under copyright law.