Week 1 - Introduction to the Module. Topic: The Age of the Image and the Trend of the Lens
This week we started by discussing current trends in visual effects.
- How much are visual effects now used in movies, whether subtle or obvious?
- Photorealism - in the context of real life (no cartoons), both fantasy and non-fantasy/realistic, e.g. The Crown vs Avengers.
- Pre-viz
- Green screen - an older technology for putting actors in different locations and for set extension.
- The composite
- Unreal Engine - now used on many films. Virtual filmmaking; it comes from the virtual environments of games and offers real-time rendering.
- Live action - examples: a movie made using Unreal Engine; the ABBA live-action concert.
James Fox's documentary The Age of the Image: Episode 1
BBC iPlayer (2020) Age of the Image. Available at: https://www.bbc.co.uk/iplayer/episodes/m000fzmc/age-of-the-image
In this episode, James Fox explores how humans now want to see everything through cameras, describing at the start how more people now look at the Mona Lisa through a camera than with their own eyes. He goes right back to the beginnings of photography and early film to explain how pioneers like the Lumière brothers developed the moving photograph and revolutionised the way people saw this new medium. He describes this era as a Victorian virtual reality.
Harold Edgerton's Bullet Frozen In Place
James discusses Harold Edgerton, an American engineer who pioneered slow-motion footage and images frozen in time, using a strobe light synced to the rotation of an object such as a fan blade. Above is an example of his work.
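Edgerton's frozen-motion trick can be illustrated with a little arithmetic. The sketch below is a toy model with made-up numbers, not a description of his actual apparatus: when the strobe fires exactly once per revolution, the blade is always lit in the same position and appears frozen.

```python
# Toy model of Edgerton's strobe effect: if the flash fires exactly once
# per revolution, the blade is lit in the same position every time and
# looks frozen; a slight frequency mismatch makes it appear to drift.
def apparent_rotation_per_flash(blade_hz: float, strobe_hz: float) -> float:
    """Degrees the blade appears to advance between flashes."""
    revolutions_per_flash = blade_hz / strobe_hz
    return (revolutions_per_flash * 360.0) % 360.0

print(apparent_rotation_per_flash(60.0, 60.0))  # 0.0 degrees -> frozen
print(apparent_rotation_per_flash(61.0, 60.0))  # ~6 degrees per flash -> slow drift
```

The same synchronisation idea underlies his still photographs: the flash duration is so short relative to the motion that the bullet or droplet barely moves while it is lit.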
The Matrix (bullet scene)
The bullet scene from The Matrix can be traced back to Edgerton's photographs and shows how modern films utilise techniques he created decades earlier. To create this scene, John Gaeta rigged multiple cameras surrounding the actors, all working as one to fire in a timed sequence; the stills were then compiled into a slow-motion shot.
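The timing behind that rig can be sketched in a few lines. This is a hypothetical simplification, not Gaeta's actual setup: still cameras fire at evenly spaced offsets across a short real-time window, and playing the resulting frames back at film rate stretches time.

```python
# Simplified bullet-time arithmetic: N still cameras fire in sequence over a
# short capture window; played back at 24 fps, the action appears slowed down.
def trigger_times(num_cameras: int, capture_window_s: float) -> list[float]:
    """Evenly spaced trigger offsets (seconds) across the capture window."""
    if num_cameras < 2:
        return [0.0]
    step = capture_window_s / (num_cameras - 1)
    return [i * step for i in range(num_cameras)]

def apparent_slowdown(num_cameras: int, capture_window_s: float,
                      playback_fps: float = 24.0) -> float:
    """How many times slower the action appears on screen."""
    screen_time_s = num_cameras / playback_fps
    return screen_time_s / capture_window_s

# 120 cameras covering half a second of action yield 5 s of screen time:
print(apparent_slowdown(120, 0.5))  # 10.0x slower
```

Because each frame also comes from a slightly different position on the arc, the playback simultaneously slows time and sweeps the viewpoint around the actor.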
Task - Match four Harold Edgerton images each with a VFX shot
A comparison between Inception and Harold Edgerton's Milk Drop
A comparison between X-Men: Days of Future Past and Harold Edgerton's Bullet through Balloons
A comparison between one of Harold Edgerton's photos and the Matrix
A comparison between a gun being fired (Hannibal) and one of Harold Edgerton's photos
Blog Post - What do you think Dr James Fox means by his phrase 'The Age of the Image'?
Week 2 - The Photographic Truth Claim: Can we believe what we see?
James Fox explores how humans now want to see everything through cameras, describing at the beginning of his miniseries 'The Age of the Image' how more people now look at the Mona Lisa through a camera than with their own eyes. He goes right back to the start of photography and early film to explain how pioneers like the Lumière brothers developed the moving photograph and revolutionised the way people saw this new medium, an era he describes as a Victorian virtual reality. I believe the phrase 'The Age of the Image' describes how technology has transformed the way people perceive art and the way they go about their daily lives. Everywhere you go, you now look through some sort of device, whether it's an iPhone used to photograph the events of your day or simply a digital camera. Rather than help in a given situation, many people today would rather film the event and post it online for more people to view, and this reflects how firmly we are in 'The Age of the Image'.
James Fox's documentary The Age of the image: EPISODE 4
BBC iPlayer (2020) Age of the Image. Available at: https://www.bbc.co.uk/iplayer/episodes/m000fzmc/age-of-the-image
In this episode of "The Age of the Image", James first introduces us to Plato's cave, a philosophical allegory from Plato's Republic (514a–520a). Plato's theory questions the illusion of reality. In the allegory, Socrates describes a group of people who have lived chained to the wall of a cave all their lives, facing a blank wall. They watch shadows projected on the wall from objects passing in front of a fire behind them and give names to these shadows. The shadows are the prisoners' reality, but they are not accurate representations of the real world.
Plato's Cave
James goes into detail about how the age of the image has changed people's perception of reality through 24/7 TV news and through deepfake-style effects, with the example of Forrest Gump, in which actor Tom Hanks meets President Kennedy. He also notes that 9/11 was a pivotal moment for the age of the image: it was the first time in history that ordinary people could watch such an event unfold in real time, something the attackers knew and exploited.
Forrest Gump (1994)
Blog Post - What do you think is meant by the theory: The Photographic Truth-Claim?
Photography has entered a new digital revolution in which the photographic truth claim is questioned in many ways. The truth claim has remained somewhat believable for traditional photography, as that method is seen as more authentic and original. But traditional photography can also be manipulated, through superimposing multiple negatives, gum printing or solarisation. An example is the infamous photo of Stalin from which the head of his secret police, originally standing beside him on the right, was airbrushed out. Digital photography, however, is increasingly scrutinised with every photo produced in the medium, as it is now easier than ever to manipulate an image using Photoshop and other software. Gunning argues that digital photography puts the truth claim into question: is what we look at daily authentic or manipulated? When people look at news images now, they view each one suspiciously, since many know how easy manipulation is and how readily the line is crossed to create a false narrative serving someone's interests. To conclude, the truth claim is questioned now more than ever, and time will tell how far people can take it.
Gunning, T. (2004) ‘What’s the Point of an Index? or, Faking Photographs’, Plenary Session II, Digital Aesthetics, 1(25), pp. 39-48
This week we focused on image fakes and image manipulation and how they have developed over the years. Below are sets of four examples from different methods of manipulating images.
Four Famous Faked Analogue Photographs
Soviet Leader Josef Stalin Airbrushes Out His Enemies, circa 1930
Mao Zedong, Under the Pagoda Mountain, 1942
Canadian PM William Lyon Mackenzie King Removes King George VI, 1939
Kent State Massacre Fencepost Removal, 1970
Four Famous Faked Digital Photographs
David Cameron with Photoshopped Poppy
Iranian Extra Photoshopped Missile
Shark Attacking Helicopter
Tony Blair in Front of an Airstrike
Four Faked Shots from Visual Effects
Zodiac, Port of San Francisco
Batman The Dark Knight, Truck Flip Scene
Wolf Of Wall Street, Prison Scene
Logan, Stunt Double
Blog Post - Write a Definition of VFX Compositing. What is it and how does it work?
VFX compositing has been around for many years, but only now is it being used so extensively across film. Early compositing can be traced back to Georges Méliès and The Four Troublesome Heads (1898), in which he used a black glass screen to create a matte, leaving a portion of the film unexposed. Compositing has been done in many ways; for example, ILM's matte paintings for Star Wars were made by painting set extensions and combining them with live action so that the perspective of the plate matched the painting. As time went on, computers became more prominent, especially in digital compositing, and from around the 1980s more films used this method, such as Indiana Jones and the Last Crusade (1989), also by ILM. For a composite to create an "impression of reality", it must look as natural as possible; if it does not, viewers will not get that "impression of reality" from it. Modern compositing almost always achieves this, with compositors using software like Nuke, which makes compositing far easier and quicker than matte painting ever was. Films now implement invisible VFX everywhere, using compositors to blend CGI with live-action footage. A great example is The Wolf of Wall Street (2013), in which the prison seen at the end of the film is entirely CGI.
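The core operation a compositor performs can be written down very simply. Below is a minimal sketch of the Porter-Duff "over" operation, the maths underlying the merge step in packages like Nuke; the values are illustrative floats in the 0-1 range.

```python
# Porter-Duff "over": layer a foreground sample over a background,
# weighted by the foreground's matte (alpha). alpha 1.0 = fully opaque.
def over(fg: float, fg_alpha: float, bg: float) -> float:
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

# A half-transparent white element over a black plate reads as mid grey:
print(over(1.0, 0.5, 0.0))  # 0.5
```

In practice this runs per pixel and per channel, often on premultiplied images, but the principle of blending by a matte is the same one Méliès achieved photochemically with his black glass screen.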
Week 4 - Defining Trends of Photorealism
This week we had a look at photorealism, and photorealism in visual effects in particular, since many films now make use of it, for example the latest The Lion King (2019).
The Lion King (2019)
Task - What do you understand Photorealism to be or mean?
- Lifelike
- Natural
- Uncanny Valley
- Detailed
- Hyperrealism
- Truthful
- Authentic
- A way to reproduce reality
- Surrealism
Cat's Eye and the Best of 'Em, 1993 by Charles Bell
Task - Find four composite examples of CGI photorealism
The Lion King (2019)
The Adventures of Tintin: The Secret of the Unicorn (2011)
The Lord Of The Rings (2001)
Elysium (2013)
Task - Find four composite examples of Version Two photorealism
‘Aberfan’ The Crown: Season 3 (2019) Netflix
1917 (2019)
Deepwater Horizon (2016)
Titanic (1997)
Blog Post - Write a post on Photorealism
Photorealism is a genre that emerged in the late 1960s, with artworks like Kandy Kane Rainbow taking on this new style. The genre grew out of artists whose work depended heavily on photographs, which they often traced to achieve precision and accuracy. With the emergence of CGI, photorealism has become ever more relevant, as many films now rely on it to make the stories and universes they create believable. Early CGI used the genre well, for instance Toy Story, the first full-length CG feature, which relied heavily on photorealism in the way its characters behaved, moved and expressed themselves, making the film feel more real to viewers. As CGI developed, new CG features emerged, like The Polar Express, which pushed photorealism so far that many people found the film uncanny because the characters were too lifelike. TV shows and films now use photorealism for everything they do, whether fully CG movies like the new The Lion King or photorealistic composites like those seen in The Crown.
This week we had a look at the capture of movement in VFX and how it is done, via technology like motion capture and other new and emerging techniques.
Task - What types of capture are used in VFX and why? How is this data captured?
- Scanning
- Mocap - the process of recording the movement of objects or people.
- Lidar - a method for determining ranges by targeting an object with a laser and measuring the time for the reflected light to return to the receiver.
Motion Capture Suits
Lidar Scan
Task - List what you think are the advantages and disadvantages of motion capture.
Pros:
- All fine details of human motion are recorded
- Immediate, real-time results
- Hyperrealistic
Cons:
- Can be expensive
- Can sometimes make a film worse
- Not easy to use
Motion Capture, Caesar from the Planet of the Apes Series
Traditional key-frame, Aladdin
Blog Post - Write a post comparing Motion Capture to Key Frame Animation
Motion capture and key-frame animation are similar in aim, but one requires an array of technologies while the other, in most cases, needs only pencil and paper. Key-frame animation has been around since 1914 and Gertie the Dinosaur, and was greatly expanded by Disney in the 1930s with Snow White and the Seven Dwarfs, one of the first full-length animated feature films. Motion capture has been around since the 1980s, when it was used to animate the 2D main character of the video game Vixen. While the two techniques differ in some ways, they are very alike in what they are trying to achieve: a believable picture. Motion capture can turn a person into an animal with realistic facial expressions and movement, while key-framing can achieve the same result using only drawings. An example of where the two meet is The Jungle Book, whose story has been told with both techniques across two films, one in 1967 and the latest in 2016. Both versions tell the same story, whether by motion capture or key-frame animation; they differ only in how they arrive at the finished film.
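The contrast can be made concrete with a toy sketch (illustrative code, not any studio's pipeline): a key-frame system stores only a few poses and computes the in-betweens, whereas motion capture records a dense stream of samples directly from a performer.

```python
# Key-frame animation in miniature: the animator sets a few values and the
# software interpolates the in-between frames. Mocap would instead record a
# value for every frame straight from the performer's body.
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def keyframe_value(keys: dict[int, float], frame: int) -> float:
    """Linearly interpolate one channel (e.g. an elbow angle) between keys."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return lerp(keys[lo], keys[hi], t)

keys = {0: 0.0, 24: 90.0}          # two key poses, one second apart at 24 fps
print(keyframe_value(keys, 12))    # the software fills in the halfway pose
```

Real animation software uses smoother curves than this linear version, but the division of labour is the same: the artist authors sparse keys, the machine fills the gaps; mocap skips the authoring step entirely.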
Week 6 - Reality Capture (LIDAR) and VFX
This week we had a look at capturing reality through 3D scanning, and at where and how indexicality can be found in the technology of 3D scanning. We also looked at Lidar Lounge, a VFX company that specialises in 3D laser scanning services.
Task - Find an example project of each type of 3D scanning.
Depth-based Scanning
Laser Scanning (LIDAR) - used in self-driving cars
Photogrammetry
Above is a video exploring Italy's long-lost history, made possible by laser scanning (LIDAR) and brought to the viewer in a 360-degree format.
Above is a famous LIDAR scan performed a few years ago of Notre-Dame in France. The scan was made so that architects could rebuild the part of the cathedral that was severely damaged by fire; to rebuild it with precision, they chose to work from LIDAR scans. The scan above corresponds directly to a photograph of Notre-Dame taken before the fire.
Blog Post - Write a Case Study post on Reality Capture
Reality capture refers to photogrammetry software for creating 3D models from multiple images of an area or building. It is mostly used in gaming, surveying, mapping and visual effects, and is widely used in virtual reality. A famous example is the reconstruction of Notre-Dame in France, severely damaged by fire, which could not have been done without laser scanning (LIDAR): the cathedral's intricate details could not be captured by ordinary images or blueprints. The laser scan of Notre-Dame was provided in 2015 by Vassar College professor Andrew Tallon, who measured the distance between the device and every point the laser hit. Tallon also took spherical panoramic photographs from the exact position of the laser scanner, which helped him produce a photorealistic 3D replica of Notre-Dame accurate to 5 mm. The data collected amounted to over one billion points, accurate to a fraction of an inch, which allowed the reconstruction to proceed with 5-millimetre precision. Without the laser scans, ordinary blueprints or photographs would not have sufficed for the accuracy needed, as Notre-Dame has been repeatedly modified over almost seven centuries, making the building's architectural history difficult to follow.
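Tallon's "measure the time, get the distance" principle is simple enough to write down. The sketch below shows the general time-of-flight idea behind LIDAR, not his specific scanner: the range is the round-trip time multiplied by the speed of light, halved.

```python
# Time-of-flight ranging, the principle behind LIDAR: time how long a laser
# pulse takes to bounce back; distance = speed of light * time / 2
# (halved because the pulse travels out and back).
SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_echo(round_trip_s: float) -> float:
    """Distance in metres to the surface that reflected the pulse."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A pulse returning after roughly 67 nanoseconds hit something about 10 m away:
print(round(range_from_echo(66.7e-9), 2))
```

A scanner repeats this measurement millions of times while sweeping the beam, which is how a billion-point cloud of a cathedral accumulates.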
This week we looked again at reality capture in VFX, this time with a closer look at photogrammetry, a 3D scanning technique that uses photographs to build 3D models. These models can serve as the basis for incredibly lifelike digital props and characters.
The Veronica Scanner
Above is the Veronica Scanner, a 3D photogrammetry scanner focused on 3D head scanning. It uses eight cameras to capture 96 high-resolution photographs of the human head from every angle, mapping even the finest surface details.
The Veronica Scanner
The Digital Michelangelo Project
The Digital Michelangelo Project was carried out by Stanford University, which set out to digitise Michelangelo's statues using 3D scanning technology. The aim was both to advance 3D scanning technology and to create a long-term digital archive of important cultural artefacts.
An Example of The Digital Michelangelo Project
This week we took a look at the difference between simulacra and simulation, specifically the writings and theories of the philosopher Jean Baudrillard. We looked in particular at the first chapter of Simulacra and Simulation, 'The Precession of Simulacra'. Below are my notes for week 10.
Above is a clip from the TV series Rick and Morty that connects to the topic of Simulacra, Simulation and the Hyperreal. In the clip, the main character Morty enters a video game about how long you can survive a virtual life. Morty is unaware he is in a simulation until he dies and is brought back to the arcade, where he is presented with his results.
Jean Baudrillard Simulacra and Simulation - https://0ducks.files.wordpress.com/2014/12/simulacra-and-simulation-by-jean-baudrillard.pdf
Task - What did Baudrillard think about The Matrix?
The video above discusses what Baudrillard thought of The Matrix, and why he distanced himself from the film despite being invited on set with the actors. The Matrix takes a huge amount of inspiration from Baudrillard, even featuring his book Simulacra and Simulation on screen. Baudrillard, however, said his work and the films are nothing alike, arguing that the movie draws its inspiration from films like Minority Report and The Truman Show rather than from his work.
This week we had a look at virtual filmmaking and how it is used in productions like The Mandalorian and The Jungle Book. We also looked in detail at the webinar on Virtual Production: The Transition to Real-Time Filmmaking.
Unreal Engine (2020) Virtual Production: The Transition to Real-Time Filmmaking | The Pulse | Unreal Engine. 10 September 2020. Available at: https://www.youtube.com/watch?v=p9XUd4TQl2Y&feature=emb_logo
The video above explores how many companies and productions are transitioning to real-time filmmaking. It also discusses how the technology has improved over the years, with faster rendering and the ability to change scenes in real time for directors.
The Virtual Production of The Mandalorian, Season One
The methods used to create visual effects are forever changing, and many assume that the latest technology will make some roles in the industry obsolete. New trends of capture and real-time filmmaking, such as Unreal Engine 5, which expands real-time filmmaking with a new real-time rendering system called Lumen, suggest the industry is moving into an era in which offline rendering may become obsolete, since there is no longer a need to wait for a scene to render, alongside changes to methods such as 3D modelling and animation. The question of this essay recalls a remark made by Paul Delaroche in 1840, who, upon seeing his first photograph, declared that from that moment painting as a medium was dead. That did not turn out to be the case: painting is still a widely used artistic medium today. In this essay I will discuss whether new trends in visual effects leave traditional approaches still needed, covering the new technology that has emerged in the past few years and where the new trends may fall short of traditional approaches to 3D modelling, animation and rendering.
Technology always adapts with the times, and new technology creates new trends. An example is Unreal Engine 5, which changes the way 3D models and animation are made. Its new Lumen system, a real-time global illumination and reflections system, enables real-time rendering and makes render times all but non-existent. This suggests we are moving into an era where traditional approaches such as render farms may no longer be needed, freeing the time once spent on rendering for improving the shot being worked on. Beyond real-time rendering, cloud-based services are also on the horizon: more companies are adopting them because the cloud enables real-time remote collaboration and does not strain a company's own storage devices, since everything is stored online. Unreal Engine 5 also introduces MetaHuman Creator, a tool for creating photorealistic digital humans that are rigged and ready to animate. MetaHuman Creator may support the argument that the traditional approach to creating a 3D character, modelling it in something like ZBrush or Maya, can be eliminated. However, MetaHuman Creator may restrict what an artist can make, since it is a tool of customisable presets, whereas in something like ZBrush you are free to make whatever you want. Tools like MetaHuman follow the trend of computer graphics becoming ever more photorealistic, a trend on the rise within visual effects, even though photorealism has traditionally been associated with artists who use painting and photography as their medium.
For this photorealistic software to work, it needs "at all costs to avoid inconsistencies or breaks in the illusion", as J. David Bolter and Richard A. Grusin put it in Remediation: Understanding New Media. This new photorealistic technology also raises questions of simulacra, simulation and hyperreality as described by the French sociologist Jean Baudrillard in his book Simulacra and Simulation: "The simulacrum is never that which conceals the truth - it is the truth which conceals that there is none. The simulacrum is true." Baudrillard holds that hyperreality goes further than confusing or blending the "real" with the symbol which represents it; it involves creating a symbol or set of signifiers which represent something that does not actually exist. He also describes us as having entered the fourth stage of the hyperreal, and examples from contemporary everyday life include the creation and remoulding of virtual selves, something now visible in all this new 3D modelling technology.
Different 3D scanning techniques, whether depth-based scanning, laser scanning (LIDAR) or photogrammetry, are changing how visual effects are made and perceived. These reality capture techniques work in different ways: LIDAR uses laser pulses that bounce off objects around the scanner and return information about how far away each object is, while photogrammetry takes photographs of an object from different angles and compiles that data into a single model. Many of these methods are now widely used in film and TV, such as Joker and John Wick: Chapter 3 – Parabellum, where LIDAR was used to capture 3D scans of film sets and locations; it is far more feasible to capture a real-life location and use it when creating visual effects than to model sets and locations separately. 3D scanning is also used outside film and TV, most famously in the reconstruction of Notre-Dame in France after it was severely damaged by fire, which could not have been done without laser scanning: the cathedral's intricate details could not be captured by ordinary images or blueprints. The laser scan of Notre-Dame was provided in 2015 by Vassar College professor Andrew Tallon, who measured the distance between the device and every point the laser hit. Without a laser scanner, ordinary blueprints or photographs would not have sufficed for the accuracy the restoration needed, as Notre-Dame has been repeatedly modified over almost seven centuries, making the building's architectural history difficult to follow.
Many companies now focus solely on 3D scanning technology, such as Lidar Lounge. Companies like this are becoming ever more popular as scanning is increasingly used within the visual effects industry; they themselves state that "Lidar scanning has become an essential tool for visual effects and art departments by providing highly accurate data. This enables the design process to be developed quickly and with superior detail". Lidar Lounge provides laser scanning and photogrammetry for film and TV, architecture and design, virtual reality, augmented reality, gaming, consumer products, forensics and archaeology.
However, while 3D scanning technology is on the rise, it does not make traditional approaches like 3D modelling software obsolete. Although 3D scanning offers a high level of realism, it sometimes cannot provide the fine detail films demand, and this is where a 3D modeller can step in to alter the scan or even model the set from scratch. Scans also cannot produce fictional objects or scenes, since scanning means taking something from the real world and converting it into 3D form, whereas a 3D modeller can create whatever they want using software like Maya and Blender. 3D scanning is also very costly: top-of-the-line 3D laser scanners can cost anywhere from $50,000 to $150,000, not to mention the frequent checks and repairs needed in case of breakdowns, whereas 3D modelling is much more cost-effective. Scanning also requires good conditions; scanning an object outdoors introduces variables such as ambient light that affect the scan's accuracy, so it is recommended to scan such parts under cover or in separate rooms. 3D modelling, on the other hand, has no such environmental requirements.
While 3D scanning is an ever-increasing trend, another that should not be overlooked is the rise of virtual filmmaking. This new approach to creating a film or TV show is usually built on software like Unreal Engine, as mentioned previously. Virtual filmmaking allows the visual effects team to work more closely with the director, since filming that would once have happened in front of a blue or green screen can now happen in front of LED screens, as seen in the new series The Mandalorian, which fully utilised virtual filmmaking in Unreal Engine by replacing the traditional green screen with LED walls. Scenes can now easily change in real time, for example swapping the background with some extra rocks or shifting the sky to a different time of day. Software like Unreal Engine updates the whole 3D environment, with attendant changes to perspective, lighting, distortion, depth of field and so on, fast enough that those changes appear on the background instantly. This trend lets companies save time and money while looking just as good as the traditional approach, and it is not staying within the film industry: it is already moving into TV commercials, and many believe that in a few years it will be uncommon to find a production that does not use virtual filmmaking with an LED wall, as this becomes the new standard. Virtual filmmaking is also a huge help for actors, since LED screens can make it feel like filming on location rather than pretending to be in a place that will be composited in post-production. Reflections and lighting benefit too, as Sammy Lamouti, who worked on The Mandalorian, describes: "The Mandalorian, the lead character, wears a highly reflective armour. If they had used green screens, it would have resulted in green hues on his armour, and a lack of realistic reflection from surrounding scenery." Visual effects supervisor Ben Grossmann also states: "This empowers the filmmakers to create shots based on what 'feels' right, rather than looking at a computer and trying to guess where things should go when it's not feeling right. And they can do it quickly without having to spend hours moving heavy equipment."
Although there are many positives to virtual filmmaking, there are still drawbacks to the new trend. Film sets are sometimes reduced to little more than LED screens, which can make actors feel they are in a small room that limits their ability to act; though, as previously mentioned, in most cases the opposite holds, and the LED screens make the actor feel they really are on a fictional planet rather than pretending against a green screen. Cost is another factor to keep in mind: a full set of LED walls can be very expensive for small productions compared with what a bigger production can afford, and in some cases it may be cheaper and more feasible for a smaller production to shoot at a real location and fill in the visual effects in post-production. Lighting can also affect the shot in virtual production, since you still need to blend the studio floor seamlessly with the screen. As cinematographer Eben Bolter describes it: "If we go back to the desert analogy, in reality, we'd have to figure out how to seamlessly blend the studio floor with the screen. This can be done, because in The Mandalorian, the join was protected by sand and rocks, but still, there were certain angles that couldn't be achieved. Like looking up someone's nose, for example, you'd see the seam between the wall and ceiling screen, because they're not joined together. The ceiling screen is really just for lighting."
With all these new trends considered, and with technology changing every day, the question arises: do new trends of capture and real-time filmmaking lead us to consider whether traditional approaches to 3D modelling, animation and rendering are still needed? Paul Delaroche's remark is relevant again here: upon seeing his first photograph, he declared that from that moment painting as a medium was dead. The same could now be said of trends like 3D scanning and virtual filmmaking: some could argue that 3D scanning makes 3D modelling unnecessary, since anyone can take a scanner and capture a model from their surroundings, and that virtual filmmaking makes green screens and post-production pointless when an LED wall can be changed on demand at the director's request on set. However, considering all these trends, I believe the new approaches do not mean the traditional approaches to 3D modelling, animation and rendering will be gone in the future: while the new trends make visual effects look better and easier to produce, they do not always allow the creative freedom that comes from modelling a character or rigging an animation by hand. I also find it interesting to compare these new trends with the previously mentioned French sociologist Jean Baudrillard, and how technologies like 3D scanning and virtual filmmaking are making TV and film increasingly hyperreal. I do believe, however, that traditional rendering approaches will become a thing of the past, since long render times are purely a negative when creating visual effects, and the rise of Unreal Engine 5's real-time rendering allows more time to be spent perfecting the visual effects in a film or TV show.
References
Baudrillard, J. (1981) Simulacres et Simulation. Paris: Éditions Galilée.
Bolter, J.D. and Grusin, R. (2000) Remediation: Understanding New Media. Cambridge, MA: MIT Press.
Art & Object, 'Today Painting Is Dead': Photography's Revolutionary Effect. Available at: https://www.artandobject.com/news/today-painting-dead-photographys-revolutionary-effect