Exploring Deepfakes

By : Bryan Lyon, Matt Tora
Overview of this book

Applying deepfakes will allow you to tackle a wide range of scenarios creatively. Learning from experienced authors will help you to intuitively understand what is going on inside the model. You'll learn what deepfakes are and what makes them different from other machine learning techniques, and you'll follow the entire process from beginning to end: finding faces, preparing them, training the model, and performing the final swap.

We'll discuss various uses for face replacement before we begin building our own pipeline. Spending extra time thinking about how you collect your input data can make a huge difference to the quality of the final video, so we look at the importance of this data and guide you with simple concepts for understanding what your data needs in order to be successful.

No discussion of deepfakes can avoid the controversial, unethical uses for which the technology initially became known. We'll go over some potential issues, then talk about the value deepfakes can bring to a variety of educational and artistic use cases, from video game avatars to filmmaking. By the end of the book, you'll understand what deepfakes are, how they work at a fundamental level, and how to apply those techniques to your own needs.
Table of Contents (15 chapters)

Part 1: Understanding Deepfakes
Part 2: Getting Hands-On with the Deepfake Process
Part 3: Where to Now?

Generating text

Text generation models burst into the public consciousness with OpenAI's success with ChatGPT in 2022. However, text generation was among the first uses of AI. Eliza, the first chatbot ever developed, appeared back in 1966, before all but the most technically inclined people had even seen a computer; the personal computer wouldn't be invented for another five years, in 1971. Still, it's only recently that truly impressive chatbots have been developed.

Recent developments

A type of model called the transformer is responsible for the recent burst of progress in language models. Transformers are neural networks built around a component called the attention layer. Attention layers work rather like a spotlight, focusing on the parts of the data that are most likely to be important. This lets transformers (and other models that use attention layers) be much deeper without losing "focus"...
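To make the spotlight analogy a little more concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside an attention layer, written in plain NumPy. The function names and toy shapes are our own illustration for this book section, not code from any particular library:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention.

    Each query "shines a spotlight" over the keys: keys similar to the
    query get high weights, and the output is the correspondingly
    weighted average of the values.
    """
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)  # similarity of every query to every key
    weights = softmax(scores, axis=-1)        # the "spotlight": each row sums to 1
    return weights @ values                   # weighted mix of the values

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 4))
out = attention(q, k, v)
print(out.shape)  # (2, 4): one output vector per query
```

A real transformer layer wraps this operation with learned projections for the queries, keys, and values, runs several such "heads" in parallel, and follows them with a small feed-forward network, but the spotlight behavior all comes from the weighted average above.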