The Future is Now: How do Biometrics, Deepfakes and AI Affect Us All?

Communications Professor Kevin John Shares His Experiences Studying the Effects of Modern Technological Advancements

Spring 2025 Experiential Learning Trip in Seoul, South Korea
Courtesy of Kevin John

What truths can be revealed with the combined power of deepfakes and biometric technology? This April, communications professor Kevin John traveled to Korea with several BYU communications students to learn more about the ever-changing world of synthetic media.

John himself has conducted biometrics research since he was an undergraduate student. While speaking at the University of Alabama about his biometrics research, John met journalism and creative media professor Jiyoung Lee. The two started collaborating to explore the implications of deepfakes through biometrics. When John and the communications students traveled to Korea, they met with professor Lee at Sungkyunkwan University and began a deep dive into deepfakes.

Q: Could you give us some context on your field of study? What is a deepfake?

John: A deepfake is essentially a digital projection of somebody else's face (most commonly a celebrity’s) onto a model. Some of the earliest applications of deepfakes were in the pornography industry, so this technology has a dark side, one that also extends to misinformation.

We have been dealing with face-altering filters and Photoshop for years; proportions get stretched slightly beyond what is natural. If you are only seeing one or two images like this, it is not a big deal — but when you are talking about thousands of exposures to filtered or manipulated images, then over time, our sense of beauty can shift to something that is literally unattainable. This creates fertile ground for body dysmorphia and other psychological issues to grow; we start to feel like our natural bodies do not quite measure up. A world of synthetic media separates us from the real world and pushes us toward an unattainable, digital ideal.

Comms Students and Professor John at the Gamaksan Chulleong Bridge Near the North Korean Border
Courtesy of Kevin John

Q: What would you say inspired your fascination in this field?

John: As an undergraduate at BYU, I was a public relations student. The entire focus of my program was, “How do you create effective messages?” Then, three-quarters of the way through my time, I took a media effects class from Steve Thompson. That class flipped the script on me. Instead of, “How do you create effective messages?” it was, “How do messages affect people?” I thought it was awesome! I could spend my time researching how the content we engage with affects our psyche; I could look at so many different options through the frame of media effects. That really sparked the fire in me.

I have explored health communications through body image issues, substance use, and skin cancer. This deepfake project is the next step: How does artificially generated content impact humans? Synthetic media is really changing in scope because more and more images, videos and articles are generated with the assistance of AI. With deepfake research, I can see how human beings respond to synthetic media.

Q: What made you want to start researching deepfakes and biometrics with professor Jiyoung Lee?

Jiyoung Lee Presenting at BYU
Courtesy of Kevin John

John: I was fascinated with Lee’s research on deepfakes and began to wonder how I could combine it with my own research on biometrics. She and I began looking at deepfakes and authenticity. We decided to combine my biometrics expertise with her deepfake expertise, and we analyzed original and deepfake videos with the facial recognition algorithm that I use in my biometrics research.

Then, we had individuals watch both original and deepfake videos and rate them on authenticity. We found that the deepfake conversion process actually removes emotional information, both positive and negative. Emotion scores were consistently lower for the deepfakes than for the original videos — reduced emotional content could be a cue to a viewer that a video is not authentic. We know that synthetic media isn't communicating emotionally rich information to the same extent as real media. The next step in this process is going to be comparing our findings with deepfakes over the span of, say, 10 years to see if they have gotten better at retaining emotional information.

Q: At what point did you decide to involve students in this research?

John: I have some connections in Korea, and the School of Communications wants to get undergraduate students involved in higher-level research. Since I was flying to meet with professor Lee at Sungkyunkwan University in Korea, I asked some students if they had any interest in pursuing higher education and getting exposure to what that career path could look like.

We found a group of six students (four media and society students, one public relations student and one advertising student) to accompany me to the university. They met Dr. Lee and talked with her about her research, attended a lecture and participated in a cultural enrichment experience. We had several conversations about how media has facilitated cultural transference between Korean society and really the rest of the world. This trip was a way for students to participate in academically rigorous conversations in preparation for grad school and to learn how media facilitates cultural exchange.

BYU Students and Professor John at Sungkyunkwan University
Courtesy of Kevin John

Q: What hope can you give us for the future, given the fear some people have about AI?

John: There is naturally some fear around artificial intelligence because it is unknown territory. Just as in the Industrial Revolution, when we moved from building cars by hand to using assembly lines, there is always fear that comes with progression. Is some of that fear justified? Probably. But there is a great deal of hope that accompanies this fear. For instance, AI excels at pattern recognition. Over the past 20 years, we have gathered a wealth of data on Google and social media, but we have not had the means to process it all. Now that AI can recognize patterns at scale, we can extract vital meaning from that data.

The absence of a human element is concerning for people, which I can entirely understand, but AI has also opened the doors to creation for so many people. I have even helped my kids use AI to create picture books — it was really exciting for them. We cannot let the fear of this situation drown out the hope; when it comes to innovation, there is always a scary side and there is always a hopeful side. It is up to us to decide what we focus on.