Deepfake activity
Example 1 - The Mandalorian
Deepfakes are now widely used in TV shows and films, the example above being Luke Skywalker's surprise appearance in The Mandalorian. The deepfake was created by scanning and capturing data from Mark Hamill's face, which was then overlaid on another actor with similar facial features.
Example 2 - Volodymyr Zelensky Deepfake
Deepfakes are also now used with malicious intent, the example above being Ukrainian president Volodymyr Zelensky appearing to call on Ukrainian soldiers to lay down their weapons. In this case the video was fairly obviously fake, but it opens up a new area in which this technology could be used to create very dangerous misinformation and propaganda for various state actors.
Deepfake-related news articles (click on images)
The Camera Never Lies (Truth Claim)
The latest top-rated series from the BBC is all about deepfakes and how they can be used to deceive the viewer. The series is meant to make the viewer question everything they see on television, which ties back to the truth claim that I have investigated in previous research on this topic. Tom Gunning's 'truth claim' is a term describing the prevalent belief that traditional photographs accurately depict reality; he states that the truth claim relies on both the indexicality and the visual accuracy of photographs.
The Capture (2022)
Unreal: The VFX Revolution (Podcast)
In the podcast Unreal: The VFX Revolution, the narrator Paul Franklin explains the story of how visual effects developed and how they changed cinema. The episode discusses how early versions of the visual effects we see today were used in Star Wars: Return of the Jedi (1983) and Close Encounters of the Third Kind (1977). Anecdotes from George Lucas and many industry insiders explain that Industrial Light and Magic was born out of necessity, as Lucas realised that all the shots he wanted for his new film had to be made using visual effects. Some shots could now be created digitally rather than with a more physical, practical approach, which suited the artists because they could fine-tune every frame instead of hoping the results of the practical approach would not look shabby.
A Long, Long Time Ago... (2021) BBC Radio 4. Date of transmission: 7 October 2022.
The initial ideas I have so far for my essay revolve around the use of AI within visual effects and art in general. I have mostly been looking into the new AI art tool DALL-E, which lets the user enter a prompt describing the image they would like and returns up to four results from the AI. The website is still in beta and is constantly being developed, with the AI producing increasingly realistic results. DALL-E has recently introduced a new tool called 'outpainting', which allows the user to extend and rework images. I find this tool could be helpful when creating a Nuke shot where the artist wants a quick example of what the shot could look like if, say, a helicopter were added to a shot where it wasn't.
DALL-E Generated
Generated using 3D software
DALL-E Outpainting
This week I began designing a proposal and the question I will address when I come to write the essay. I have started looking for sources and literature for my chosen topic. Using Perlego I found the book "AI for Arts". I have also looked for published articles surrounding my topic and found that Lev Manovich has researched this area.
AI has revolutionized the field of visual effects, allowing for the creation of more realistic and impressive effects in movies and other forms of media. Through the automation of tedious and time-consuming tasks, AI algorithms and machine learning techniques are helping visual effects artists to focus on the creative aspects of their work. Additionally, the use of AI in visual effects is improving the accuracy and reliability of computer vision algorithms, allowing for the generation of highly realistic and convincing images and animations. In the future, the impact of AI on visual effects is likely to continue to grow as these technologies evolve and become more sophisticated. This investigative study will discuss the topic 'AI image generators: how can they be used within visual effects?'. The essay will discuss how AI image generators have come about in recent months and where they are heading in the future, including how this may affect future job roles within the VFX industry, as AI image generation is getting better by the day. I intend to test the limitations of this software by giving the AI a set of parameters to complete, providing comparisons between AI-generated work and work that I have created myself, and examining where this new technology falls short of, or exceeds, what a human can create. An example of art created using AI image generators can be seen in Figure 1. This image was created using the prompt “3D render of a cute tropical fish in an aquarium on a dark blue background, digital art”. The results would not make the viewer question whether this was made by AI or an actual 3D artist, which begins to bring into question how far an AI can go before it approaches a human in creating art. It also raises the question of how efficiently and quickly a piece of work like this can be created.
Figure 1. “3D render of a cute tropical fish in an aquarium on a dark blue background, digital art”. Created using DALL-E
As this is a newly emerging trend, I have found different media and articles surrounding this topic, and the book titled “AI for Arts” will help me in my research, as it discusses the implications of using AI to create art and media and how this may affect future job roles and the creative industry. In a paper titled “Defining AI Arts: Three Proposals”, Lev Manovich asks whether, if these AI art generation tools create more human-like results and become more autonomous, “Does this mean that we finally have real “artistic AI,” the true “art intelligence”?” (Manovich, 2019). However, he also argues that it is not quite there yet, as he states that there are at least three points in this process where a human author makes choices and exercises control over what the computer does. First, the human designs the network architecture and the algorithm used to train the network. Second, the human selects the training data, and third, the human chooses which of the generated artifacts or works he or she views as most successful. Manovich does not yet believe that AI is at the point where it can be autonomous in its creation, and he also believes that there is art an AI can make that humans cannot, as he states, “AI art is type of art that we humans are not able to create because of the limitations of our bodies, brains, and other constraints.” (Manovich, 2019).
There are many different methods of creating an AI image, with new approaches appearing every month from different developers. The main image-generating tools I will use are DALL·E 2 and Midjourney, since I have found that these two APIs generate the best results of them all.
Reference list
Hageback, N. and Hedblom, D. (2021) AI for Arts. CRC Press.
Manovich, L. (2019) Defining AI Arts: Three Proposals, pp. 1-9. Available at: http://manovich.net/index.php/projects/defining-ai-arts-three-proposals
Manovich, L. and Arielli, E. (2022) Artificial Aesthetics: A Critical Guide to AI, Media and Design.
Action Plan for Essay Practical
What: I shall test different AI art generators and compare their results with renders that I create in Maya.
Subject:
How: I will use DALL-E and Midjourney to create the AI images and Maya to create my own results.
Goals: I intend to create a 3D render in Maya, modelling and texturing an object, and then use DALL-E and Midjourney to generate various images resembling that render, to see how close I can get to the render I have made. I will also compare the images the different AIs create: these generators produce multiple images from a single prompt, and they vary in art style and look, as DALL-E and Midjourney sometimes create two very different-looking results from the same prompt. This will be explored and examined.
Expected results: DALL-E and Midjourney will create images identical or similar to the render I have created in Maya. There will also be a sharp contrast between DALL-E's and Midjourney's results with regard to the art style and look of the images.
Artificial intelligence (AI) assistance within our daily lives is becoming more prevalent than we could have anticipated 50 years ago, as Niklas Hageback and Daniel Hedblom (2021) articulate. It is often assumed that AI would overtake humans in a multitude of ways and become completely distinct from us in terms of how it is utilized. However, this does not seem to be the case in several ways, since AI can be used to comprehend speech or to identify objects or persons in an image or video through AI upscaling tools. This is a far cry from what humans initially expected AI to be used for at its origin in the 1950s, as its original goal was to imitate the human mind and its problem-solving capabilities, applied to both everyday and abstract situations. It is now evident that AI is going to be employed as a sort of assistance tool for humans, as Niklas Hageback and Daniel Hedblom (2021, p. 10) comment, “machines can be seen as different types of prothesis, extending and enhancing our bodies and minds.” With the advancement of these tools every year, the question arises of when the boundary between humans and machines ceases to exist, or when AI can no longer be seen merely as an enhancing tool, as Niklas Hageback and Daniel Hedblom (2021, p. 10) express, “In short, the boundary between humans and machine is becoming increasingly blurred, but will it eventually altogether cease to exist? What then remains of humanity and human values? Are we humans, or at least some of us, becoming superfluous?” An AI, however, still cannot accurately mimic human reasoning and our conscious and unconscious objectives when creating an art piece: the AI perceives such reasoning as an irrational pattern that would contravene the protocol written by its programmer, which does not allow it to deviate and create unique work. Encoding this remains a major challenge for any programmer.
Two primary reasons why this has proven to be particularly difficult to overcome are the wide variations of human reasoning, which are hard to pinpoint, as well as the inability to comprehend the operations of the human mind. With this in mind, Niklas Hageback and Daniel Hedblom deem this “Our strength in the Battle against Machines”. Ultimately, AI is here to stay and will undoubtedly alter the way people create and work on various forms of art.
AI has already made an entrance into numerous genres of art, such as writing, the composition of music, the production of movies, photography and painting. The biggest conundrum that always seems to arise is whether this form of art will supplant the human, as Niklas Hageback and Daniel Hedblom (2021, p. 11) inquire, “What does humanness entail and to what degree is it replaceable? Might art be the conduit to explore and deliberate on such stupendous contemplations?” In the context of my investigation, the most pertinent area where AI has made a small entrance is the production of movies. AI is not as easy to apply in this area as in others, such as the generation of art, for now at least, because making a movie relies heavily on the skills of the human behind the computer. Providing an algorithm with a short film sequence and asking it to make a full movie would not be possible with the AI available today. Although attempts to complete this task are in development, they are still at a rudimentary stage; therefore, in this area the work relies heavily on the human creating it, for now. On the question of how this novel technology will affect future job roles within the world of arts, even though it is not quite there in certain areas like the production of movies, it is nonetheless foreseeable that it will impact job roles in many areas as the technology progresses. One area where Niklas Hageback and Daniel Hedblom see AI affecting artists is those who make their careers by plagiarising, as they will stand little chance of outmatching the AI, whether in quantity or quality, when it comes to machine-generated art. They also describe how the role of the curator might be challenged: the gathering of artworks has often been done by the curator's gut feel and extensive experience, but an algorithm might in future do this more efficiently than the most experienced curators.
In the end, all the job roles in the creative industry depend on whether someone can create an AI that can imitate how a human thinks and comprehend the concept of ideas.
The use of AI in art is an emergent field of study, and one that requires a clear definition to establish what it is and how it differs from traditional art practices. The paper Defining AI Arts: Three Proposals by Lev Manovich (2019) highlights different aspects of the use of AI in the production of art. The first definition emphasizes the use of AI as a tool, the second the use of AI as a medium, and the third is a hybrid of both. Each definition has its own merits and limitations, and the choice of which to use will depend on the context and the particular aims of the artist or researcher. The first proposed definition of AI Arts is that it is “an emerging field of art practice in which traditional art practices are combined with machine learning and other forms of artificial intelligence. This includes the use of AI as a tool to create artworks, either with the artist’s own hands or through the use of automated processes.” This definition emphasizes the use of AI as a tool, in which the artist’s own hands or automated processes are used to manipulate data or images in order to generate a visual or auditory output. It has the advantage of being relatively straightforward and easy to understand, but it is limited in that it does not consider the use of AI as a medium, in which algorithms and other forms of artificial intelligence are used to create artworks in their own right rather than as a tool to manipulate existing data. The second proposed definition is that AI Arts is “an emerging field of art practice in which the artist uses algorithms and other forms of artificial intelligence as a medium to create artworks.” This definition emphasizes the use of AI as a medium, in which algorithms and other forms of AI are used to create artworks without the need for any human intervention.
This definition has the advantage of being able to encompass a range of artistic practices which do not rely on the use of a human artist, but it is limited in that it does not consider the use of AI as a tool, in which the artist’s own hands or automated processes are used to manipulate data or images in order to generate a visual or auditory output. The third definition of AI Arts proposed in this paper is that it is “an emerging field of art practice in which traditional art practices are combined with machine learning and other forms of artificial intelligence, wherein the artist uses AI as both a tool and a medium to create artworks.” This definition is a hybrid of the first two definitions, in that it takes into account both the use of AI as a tool and as a medium. This definition has the advantage of being able to encompass a wider range of artistic practices, but it is limited in that it does not consider the potential implications of using AI as a tool or as a medium. In conclusion, this paper has proposed three definitions of AI Arts, each of which has its own merits and limitations. The choice of which definition to use will depend on the context and the particular aims of the artist or researcher. It is important to note, however, that regardless of the definition used, AI Arts is an emergent field of study which has the capacity to revolutionize the way art is created, experienced, and understood.
Reference list
Hageback, N. and Hedblom, D. (2021) AI for Arts. CRC Press.
Manovich, L. (2019) Defining AI Arts: Three Proposals, pp. 1-9. Available at: http://manovich.net/index.php/projects/defining-ai-arts-three-proposals
AI has completely changed the visual effects industry, enabling the production of more impressive and realistic effects for use in films and other media. AI algorithms and machine learning techniques are enabling visual effects artists to concentrate on the creative aspects of their work by automating laborious and time-consuming tasks. Furthermore, the application of AI to visual effects is enhancing the precision and dependability of computer vision algorithms, enabling the creation of incredibly realistic and convincing images and animations. As these technologies advance and become more complex, it is likely that the impact of AI on visual effects will only increase in the future. The topic of AI image generators and how they can be used in visual effects will be covered in this research study. The essay will discuss how artificial intelligence image generators have developed recently and where they are headed in the future, including how this may affect future job roles within the VFX industry, as AI image generation is getting better by the day. I intend to examine the capabilities of this software and determine where it falls short of, or excels beyond, what a human can produce. Figure 1 displays a piece of art produced by an AI image generator. The prompt "3D render of a cute tropical fish in an aquarium on a dark blue background, digital art" was used to create this picture. Based on the results, the viewer would not question whether this was created by AI or a real 3D artist, which raises the question of how far AI technology can advance before it is able to produce art on par with that of a human. It also raises the issue of how quickly and effectively a piece of this work can be produced.
Figure 1. “3D render of a cute tropical fish in an aquarium on a dark blue background, digital art”. Created using DALL-E
As this is a newly emerging trend, I have found various media and articles surrounding this topic. The book "AI for Arts" will be helpful to me in my research because it discusses the implications of using AI to create art and media and how this may affect future job roles and the creative industry. In his paper "Defining AI Arts: Three Proposals", Lev Manovich asks whether, if these types of AI art generation tools produce more human-like outcomes and become more autonomous, this means that we have real "artistic AI," the true "art intelligence" (Manovich, 2019). He contends that there are still at least three points in this process where a human author has control over what the computer does, so it is not quite there yet. The first is that a human creates the network's architecture and training algorithm. Second, the human selects the training data, and third, the human chooses from among the outputs of the AI what, in her/his opinion, are the most successful artefacts or works. Manovich does not yet believe that AI is at the point where it can be autonomous in its creation, and he also believes that there is art an AI can make that humans cannot, as he states, "AI art is type of art that we humans are not able to create because of the limitations of our bodies, brains, and other constraints." (Manovich, 2019).
Because different developers come up with new approaches every month, there are many different ways to create an AI image. Nevertheless, the primary image-generating website that I will focus on in this essay is DALL-E, because I have found that this API generates the best results of them all.
The earliest researchers in computer vision were pioneers in the field of artificial intelligence image generation. Marvin Minsky and Seymour Papert of the Massachusetts Institute of Technology (MIT) developed a system in the late 1960s that could generate images based on templates (Minsky and Papert, 1971). Image generation has since become more sophisticated as a result of neural networks and deep learning. Beginning in the early 2000s and continuing through the middle of the decade, researchers developed algorithms and methods for AI-driven image generation. The generation of realistic images from text descriptions was accomplished with the help of GANs (generative adversarial networks) and VAEs (variational autoencoders). Alongside these techniques, a dataset named ImageNet was created, containing more than 14 million images with human-annotated descriptions of their contents. It was the most significant dataset of its time and contributed to the growth of research into computer vision. To get to where we are now with the creation of DALL-E, advancements in both hardware and software throughout the 2000s made it possible to create AI images with higher resolutions and a more natural appearance.
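The adversarial idea behind the GANs mentioned above can be caricatured in a few lines of code. The sketch below is a deliberately toy, one-dimensional version (scalar "images", a logistic-regression discriminator, hand-written gradient steps) and nothing like a production image GAN; all names and numbers here are illustrative assumptions, not anyone's published implementation.

```python
import math

# Toy 1-D "GAN": real data clusters around 4.0. The generator learns to
# shift its samples toward that region, while the discriminator tries to
# tell real values from generated ones.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

real = [3.8, 4.0, 4.2]      # "real" samples
noise = [-0.1, 0.0, 0.1]    # fixed latent noise fed to the generator
theta = 0.0                 # generator parameter: G(z) = theta + z
a, b = 0.0, 0.0             # discriminator: D(x) = sigmoid(a*x + b)
lr = 0.05

for _ in range(300):
    fake = [theta + z for z in noise]
    # Discriminator ascent step: push D(real) up and D(fake) down.
    grad_a = sum((1 - sigmoid(a*x + b)) * x for x in real) / len(real) \
           - sum(sigmoid(a*x + b) * x for x in fake) / len(fake)
    grad_b = sum(1 - sigmoid(a*x + b) for x in real) / len(real) \
           - sum(sigmoid(a*x + b) for x in fake) / len(fake)
    a += lr * grad_a
    b += lr * grad_b
    # Generator step (non-saturating loss): move theta so D(fake) rises.
    grad_theta = sum((1 - sigmoid(a*x + b)) * a for x in fake) / len(fake)
    theta += lr * grad_theta

print(round(theta, 2))  # theta has moved from 0 toward the real data region
```

Even in this toy setting the characteristic dynamic appears: the discriminator's feedback is the only signal the generator receives, yet it is enough to pull the generated samples toward the real data.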
OpenAI has developed a new artificial intelligence system called DALL-E. It makes use of large generative neural networks to produce stunning and imaginative images based on text descriptions. The AI was developed to produce highly detailed, high-quality images from text prompts, giving it the ability to "understand" the meaning of the words provided to it and produce an image that most effectively communicates the concept. The results are very impressive and have the potential to be used in a wide variety of contexts, such as creative artwork, product design and even medical imaging. The system uses two distinct neural networks: the first is a transformer model that takes in a text description and breaks it down into its component parts; the second model then takes these component parts and synthesises them into an image. The system is taught how to interpret different kinds of data, such as colour and shape, by being exposed to millions of different pictures during its training phase. Once training is finished, DALL-E can produce new images based on any text description. For instance, if you tell DALL-E to create an image of "a giraffe eating a slice of pizza," it can come up with something that is eerily similar to a photograph taken in real life.
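The two-stage pipeline described above (one model decomposes the prompt into components, a second synthesises an image from them) can be illustrated with trivial hand-written stand-ins. The "parser" and "renderer" below are toy substitutes invented purely for illustration; they share only the shape of the pipeline, not the mechanics of DALL-E's actual neural networks.

```python
# Stage 1 stands in for the text model: break a prompt into known
# components. Stage 2 stands in for the image model: turn those
# components into a tiny ASCII "image".

VOCAB = {
    "subject": {"fish": "><>", "giraffe": "/^\\"},
    "colour": {"blue": "~", "green": "#"},
}

def parse_prompt(prompt):
    """Stage 1: extract recognised components from the text description."""
    parts = {}
    for role, words in VOCAB.items():
        for word, glyph in words.items():
            if word in prompt.lower():
                parts[role] = glyph
    return parts

def render(parts, width=11, height=3):
    """Stage 2: synthesise an ASCII 'image' from the parsed components."""
    bg = parts.get("colour", ".")
    subject = parts.get("subject", "?")
    rows = [bg * width for _ in range(height)]
    mid = height // 2
    col = (width - len(subject)) // 2
    rows[mid] = rows[mid][:col] + subject + rows[mid][col + len(subject):]
    return "\n".join(rows)

image = render(parse_prompt("a cute fish on a blue background"))
print(image)
```

Running this on the prompt "a cute fish on a blue background" yields a three-row grid with the fish glyph centred on a background built from the recognised colour, which is the pipeline shape in miniature: text is reduced to discrete parts, and the parts determine the picture.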
Because of this level of detail, it is possible to create more accurate representations of concepts and things that are difficult to express using only words. The range of activities that could be accomplished with DALL-E is practically unbounded. The system allows creative professionals to quickly generate visuals for their projects without having to spend time on intricate detailing or complex drawing techniques, which can save them a great deal of time. One person who has already done this is the man who won first place at the Colorado State Fair with the help of a different image-generating API known as "Midjourney" (Vice News, 2022). In general, DALL-E is an impressive piece of software that is capable of bringing about a sea change in the way we produce visuals and engage with technological systems.
Artificial intelligence (AI) assistance within our daily lives is becoming more prevalent than we could have anticipated 50 years ago, as Niklas Hageback and Daniel Hedblom (2021) articulate. It is often assumed that AI would overtake humans in a multitude of ways and become completely distinct from us in terms of how it is utilized. However, this does not seem to be the case in several ways, since AI can be used to comprehend speech or to identify objects or persons in an image or video through AI upscaling tools. This is a far cry from what humans initially expected AI to be used for at its origin in the 1950s, as its original goal was to imitate the human mind and its problem-solving capabilities, applied to both everyday and abstract situations. It is now evident that AI is going to be employed as a sort of assistance tool for humans, as Niklas Hageback and Daniel Hedblom (2021, p. 10) comment, “machines can be seen as different types of prothesis, extending and enhancing our bodies and minds.”
With the advancement of these tools every year, the question arises of when the boundary between humans and machines ceases to exist, or when AI can no longer be seen merely as an enhancing tool, as Niklas Hageback and Daniel Hedblom (2021, p. 10) express,
“In short, the boundary between humans and machine is becoming increasingly blurred, but will it eventually altogether cease to exist? What then remains of humanity and human values? Are we humans, or at least some of us, becoming superfluous?”
An AI, however, still cannot accurately mimic human reasoning and our conscious and unconscious objectives when creating an art piece: the AI perceives such reasoning as an irrational pattern that would contravene the protocol written by its programmer, which does not allow it to deviate and create unique work. Encoding this remains a major challenge for any programmer. Two primary reasons why this has proven particularly difficult to overcome are the wide variations in human reasoning, which are hard to pinpoint, and our inability to comprehend the operations of the human mind. With this in mind, Niklas Hageback and Daniel Hedblom deem this “Our strength in the Battle against Machines”. Ultimately, AI is here to stay and will undoubtedly alter the way people create and work on various forms of art.
AI has already made an entrance into numerous genres of art, such as writing, the composition of music, the production of movies, photography and painting. In music, for instance, AI can help with tasks like music transcription, which involves converting audio recordings of music into written notation. The biggest conundrum that always seems to arise is whether this form of art will supplant the human, as Niklas Hageback and Daniel Hedblom (2021, p. 11) inquire, “What does humanness entail and to what degree is it replaceable? Might art be the conduit to explore and deliberate on such stupendous contemplations?” In the context of my investigation, the most pertinent area where AI has made a small entrance is the production of movies. AI is not as easy to apply in this area as in others, such as the generation of art, for now at least, because making a movie relies heavily on the skills of the human behind the computer. Providing an algorithm with a short film sequence and asking it to make a full movie would not be possible with the AI available today, as the AI would not be able to achieve the level of creativity and storytelling that is typically required for a successful film. Niklas Hageback and Daniel Hedblom discuss how a movie's rhythm, visual tone and mood can potentially be summarized by an AI. However, it is currently not possible for an AI to automatically combine multiple subplots and capture the ambiguities of an actor's language and mannerisms, which are important for the quality of a movie. Although attempts to complete this task are in development, they are still at a rudimentary stage. Therefore, in this area the work relies heavily on the human creating it, for now.
On the question of how this novel technology will affect future job roles within the world of arts, even though it is not quite there in certain areas like the production of movies, it is nonetheless foreseeable that it will impact job roles in many areas as the technology progresses.
One area where Niklas Hageback and Daniel Hedblom see AI affecting artists is those who make their careers by plagiarising, as they will stand little chance of outmatching the AI, whether in quantity or quality, when it comes to machine-generated art. They also describe how the role of the curator might be challenged: the gathering of artworks has often been done by the curator's gut feel and extensive experience, but an algorithm might in future do this more efficiently than the most experienced curators. One way this could happen is if algorithms become better at analysing and understanding large amounts of data and information, allowing them to quickly identify patterns and trends and to make more accurate predictions and recommendations. In the end, all the job roles in the creative industry depend on whether someone can create an AI that can imitate how a human thinks and comprehend the concept of ideas.
The use of AI in art is an emergent field of study, and one that requires a clear definition to establish what it is and how it differs from traditional art practices. The paper Defining AI Arts: Three Proposals by Lev Manovich (2019) highlights different aspects of the use of AI in the production of art. The first definition emphasizes the use of AI as a tool, the second the use of AI as a medium, and the third is a hybrid of both. Each definition has its own merits and limitations, and the choice of which to use will depend on the context and the particular aims of the artist or researcher. The first definition of AI Arts proposed in this paper is that it is
“An emerging field of art practice in which traditional art practices are combined with machine learning and other forms of artificial intelligence. This includes the use of AI as a tool to create artworks, either with the artist’s own hands or through the use of automated processes.”
This definition has the advantage of being relatively straightforward and easy to understand, but it is limited in that it does not consider the use of AI as a medium, in which algorithms and other forms of artificial intelligence are used to create artworks in their own right rather than as a tool to manipulate existing data; as a new medium, AI art may also have its own look and feel compared to other types of visual media. The second definition of AI Arts proposed in this paper is that it is
“An emerging field of art practice in which the artist uses algorithms and other forms of artificial intelligence as a medium to create artworks.”
This definition emphasizes the use of AI as a medium, in which algorithms and other forms of AI are used to create artworks without the need for any human intervention. This definition has the advantage of being able to encompass a range of artistic practices which do not rely on the use of a human artist, but it is limited in that it does not consider the use of AI as a tool, in which the artist’s own hands or automated processes are used to manipulate data or images in order to generate a visual or auditory output. The third definition of AI Arts proposed in this paper is that it is “an emerging field of art practice in which traditional art practices are combined with machine learning and other forms of artificial intelligence, wherein the artist uses AI as both a tool and a medium to create artworks.” This definition is a hybrid of the first two definitions, in that it takes into account both the use of AI as a tool and as a medium. This definition has the advantage of being able to encompass a wider range of artistic practices, but it is limited in that it does not consider the potential implications of using AI as a tool or as a medium. In conclusion, this paper has proposed three definitions of AI Arts, each of which has its own merits and limitations. The choice of which definition to use will depend on the context and the particular aims of the artist or researcher. It is important to note, however, that regardless of the definition used, AI Arts is an emergent field of study which has the capacity to revolutionize the way art is created, experienced, and understood.
Over the last decade, the visual effects industry has experienced tremendous growth thanks to tools and technology that enable higher levels of creative output and production. Recent developments in artificial intelligence have opened a number of new doors for the industry. One example is the already mentioned DALL-E, a tool with the potential to transform visual effects by empowering artists to generate fresh, original visuals at a substantially higher rate than before (Menz, 2022). It eliminates the need for artists to draw or paint imagery manually, a process that can be very time-consuming; this not only saves time and effort but also opens up new opportunities to create images that are one-of-a-kind and imaginative. For instance, rather than manually creating a series of shots following a character through a fantasy world, an artist could use DALL-E to generate, in a matter of seconds, a set of visuals matching the description they provided. Using DALL-E to create effects such as fire or smoke would give filmmakers access to previously unexplored avenues of creativity (ftrack, 2021). DALL-E could also be used in post-production to help create believable settings and backdrops. One illustration of this comes from the digital compositor Josh Parks, who, in response to one of my lectures, mentioned that he uses DALL-E to generate backgrounds for the composites he creates in Nuke.
Instead of manually rendering each element of a scene, such as trees and buildings, an artist could simply provide a description of the desired environment and let DALL-E generate realistic visuals (Corridor Crew, 2022). A news article has discussed these developments and delved into their implications ("What Do AI Image Generators Mean For Visual Artists… And The Rest Of Us?", 2022). There is a broad consensus that the development of DALL-E represents a significant step forward in technological capability. On the other hand, there are several ways in which it could harm the visual effects industry.
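The background-generation workflow described above can be sketched in Python. This is a minimal illustration rather than Parks's actual setup: it assumes the OpenAI Python SDK (the `openai` package), whose `images.generate` call serves DALL-E models, and the helper names are hypothetical.

```python
def build_background_prompt(environment: str, detail: str,
                            style: str = "photorealistic matte painting") -> str:
    """Compose a text description of the backdrop a compositor needs."""
    return f"{style} of {environment}, {detail}, high detail, cinematic lighting"


def generate_background(prompt: str, size: str = "1024x1024") -> str:
    """Ask DALL-E for a backdrop image and return its URL.

    Requires the `openai` package and an OPENAI_API_KEY environment
    variable; the import is deferred so the sketch runs without the SDK.
    """
    from openai import OpenAI
    client = OpenAI()
    response = client.images.generate(model="dall-e-2", prompt=prompt,
                                      n=1, size=size)
    return response.data[0].url


# Describe the environment instead of rendering each element by hand.
prompt = build_background_prompt("a ruined castle on a cliff at dusk",
                                 "fog rolling in")
print(prompt)
```

The returned URL could then be downloaded and brought into Nuke as a background plate; the API call itself is not executed here since it needs network access and an API key.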
To begin, DALL-E's capacity to generate highly realistic images from textual descriptions risks making conventional visual effects techniques obsolete. With its assistance, filmmakers and visual effects artists can now merely describe the image they want and have the model generate it for them. This could reduce demand for skilled visual effects artists, potentially leading to job losses and financial instability in the industry. Additionally, DALL-E's capability to generate images with an extremely high level of realism raises ethical concerns: the image generator does not understand most of the images it produces, which could result in outputs that carry racial and gender bias (Maiti, 2022).
The use of DALL-E in visual effects could also lead to images that are indistinguishable from those created by a human artist, blurring the line between human-made and machine-made work, a point raised earlier by Niklas Hageback and Daniel Hedblom. This in turn raises legal questions about who rightfully owns such a piece of artwork: the person who created it through the prompts, or the owner of the AI image-generation tool? Furthermore, in the wrong hands, DALL-E's ability to generate highly realistic images could be used to create false or misleading content, potentially spreading misinformation. The repercussions could be serious, from undermining public trust in news and information to causing widespread panic among the general population. At the moment DALL-E does include content-moderation tools that filter out inappropriate content, but this raises the question of what would happen if those filters were removed.
In conclusion, the initial question that prompted this investigational study was, "How will DALL-E impact the visual effects industry?" The development of DALL-E and other image-generation software has the potential to affect the industry significantly: it makes it possible to create imagery that is both more realistic and more complex, and to automate certain tasks, as was found to be the case with the compositor Josh Parks. However, the widespread use of these tools may also result in job losses, as discussed earlier with regard to the role of the concept artist, and in a shift in the kinds of skills that are in highest demand in the sector. To ensure its continued success and expansion, the visual effects industry will need to consider the implications of these tools thoughtfully and adapt to the changes they bring about.
Reference list
Minsky, M. & Papert, S. (1971) Minsky-Papert 1971 Progress Report [online]. Available from: https://web.media.mit.edu/~minsky/papers/PR1971.html (Accessed 26 December 2022).
Gault, M. (2022) An AI-Generated Artwork Won First Place at a State Fair Fine Arts Competition, and Artists Are Pissed [online]. Available from: https://www.vice.com/en/article/bvmvqm/an-ai-generated-artwork-won-first-place-at-a-state-fair-fine-arts-competition-and-artists-are-pissed (Accessed 26 December 2022).
Hageback, N. & Hedblom, D. (2021) AI for Arts. CRC Press.
Manovich, L. (2019) Defining AI Arts: Three Proposals, pp. 1-9 [online]. Available from: http://manovich.net/index.php/projects/defining-ai-arts-three-proposals
Menz, L. (2022) This Dall-E 2 Feature is Mindblowing for Animating Vfx (After Effects + A.I.) [online]. Available from: https://www.cinecom.net/after-effects-tutorials/this-dall-e-feature-mindblowing/ (Accessed 9 January 2023).
Manovich, L. & Arielli, E. (2022) Artificial Aesthetics: A Critical Guide to AI, Media and Design.
ftrack (2021) How machine learning is optimizing VFX workflows [online]. Available from: https://www.ftrack.com/en/2021/10/how-machine-learning-is-optimizing-vfx-workflows.html (Accessed 9 January 2023).
Corridor Crew. (2022) Is This The Death of VFX? [online]. Available from: https://www.youtube.com/watch?v=W4Mcuh38wyM (Accessed 9 January 2023).
Anon (2022) What Do AI Image Generators Mean For Visual Artists… And The Rest Of Us? [online]. Available from: https://www.sciencefriday.com/segments/ai-art/ (Accessed 26 December 2022).
Maiti, A. (2022) DALL-E: As Viewed Through an Ethical Perspective [online]. Available from: https://humancentered-ai.eu/dall-e-as-viewed-through-an-ethical-angle/ (Accessed 26 December 2022).