Community Update on Deepfake Project
Hello friends, colleagues and community!
I’d first like to thank all of you who have shown interest in my deepfake and predictive data visualization project. I appreciate all the insight I have received from the crowd and welcome any other input you might be interested in sharing. I’ve put together a great multinational team of volunteers for collaboration. Patrick, our engineering genius on AI and machine learning, works in the insurance industry in the United States. Jacob, from Canada, is pursuing a PhD and is my braintrust on all things psychology. Noam is a young Israeli who shows a lot of promise in the field of data composition; he’ll be playing more of a backseat role and doing research as well. I am especially interested in bringing on another expert on the makeup of the Iraqi government. If you are, or know, someone well versed in the subject, please let me know. This project will offer great exposure and a good opportunity to learn about a burgeoning and important subject.
This project has four basic components:
1. Describing deepfakes and the technology behind them.
2. Describing the factors of susceptibility.
3. Looking at Iraq as a particular threat for vulnerability.
4. Determining best practices in combating against and preparing for deepfakes.
Section 1: Describing deepfakes and the technology behind them.
The development of publicly accessible technology platforms has enabled one of the greatest cultural and economic revolutions of all time. Platforms like Facebook, YouTube and Twitter promote communication throughout the world. The sharing of ideas has led to some truly amazing outcomes, and some rather silly ones as well. Where would our global community be without the Ice Bucket Challenge, podcasting or helpful community pages that welcome new neighbors? Digital communities have spawned a new system of network effects and congregation. On the flip side, these digital communities can be harmful or exploited for harmful purposes. Da’esh (Islamic State) would likely never have formed without social media, or at least would not have reached such destructive international recruitment. The spread of criminal cooperation, teenage bullying mobs and fake news are some of the negative effects of the new socio-digital landscape we all operate in.
This project focuses on the last of these issues, specifically on deepfakes. A deepfake is a computer-generated video that uses image-transference algorithms to superimpose a digitally created likeness of a person onto a screen near you. These algorithms rely on GANs, or Generative Adversarial Networks: a subfield of machine learning in which two models are trained against each other, with a generator producing synthetic data and a discriminator trying to distinguish it from real data. You can learn all about GANs here. While I say video, the same approach can produce any kind of media, including audio.
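To make the adversarial setup concrete, here is a minimal, purely illustrative sketch (not code from the project) of a GAN training loop: a tiny linear generator learns to mimic one-dimensional Gaussian "real" data while a logistic discriminator tries to tell real samples from fakes. All parameter names, hyperparameters and the choice of a linear model are my own placeholders to show the training dynamic, not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# "Real" data: samples from N(4, 1). The generator maps noise z ~ N(0, 1)
# through G(z) = a*z + b, so b should drift toward the real mean of 4.
a, b = 1.0, 0.0   # generator parameters (illustrative starting values)
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    fake = a * z + b

    # Discriminator step: gradient ASCENT on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # Generator step: gradient DESCENT on -log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    dx = -(1 - d_fake) * w          # d(loss)/d(fake sample)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

print(f"generator offset b after training: {b:.2f}")
```

The adversarial pressure is visible in the two opposed updates: the discriminator climbs its objective while the generator descends a loss defined by the discriminator's current judgment, which is the core of the GAN idea.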
Section 2: Describing the factors of susceptibility.
The potentially viral nature of deepfakes across social media and conventional media platforms relates almost directly to the threat level. When I speak about susceptibility or vulnerability, unless stated in a specific context, the assumed reference is the ability to cause civic disruption. Susceptibility to damage is not the same as susceptibility to an initial deepfake being made. When determining susceptibility at large, a number of factors come into play, including but not limited to the technological ease of producing a deepfake. Other pertinent factors are societal relations around trust, the intentions of malicious actors, the saturation of social media usage and many more.
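As a purely illustrative sketch of how such factors might be combined, one could imagine a simple weighted score. The factor names, ratings and weights below are hypothetical placeholders of my own, not findings or a validated model from the project:

```python
# Hypothetical susceptibility score: each factor is rated 0.0-1.0 and
# multiplied by an assumed, illustrative weight.
WEIGHTS = {
    "production_ease":  0.2,  # how easy a convincing deepfake is to produce
    "low_civic_trust":  0.3,  # weak trust in institutions and media
    "actor_intent":     0.3,  # presence of motivated malicious actors
    "media_saturation": 0.2,  # how widely social media is used
}

def susceptibility(ratings: dict) -> float:
    """Return a 0-1 score from per-factor ratings (missing factors count as 0)."""
    return sum(WEIGHTS[f] * ratings.get(f, 0.0) for f in WEIGHTS)

score = susceptibility({
    "production_ease": 0.9,
    "low_civic_trust": 0.8,
    "actor_intent": 0.7,
    "media_saturation": 0.6,
})
print(round(score, 2))  # 0.2*0.9 + 0.3*0.8 + 0.3*0.7 + 0.2*0.6 = 0.75
```

A real assessment would of course need justified weights and measurable indicators per factor; the point here is only that the factors are combinable into a comparable score across countries or institutions.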
Section 3: Looking at Iraq as a particular threat for vulnerability.
Researching the subject has brought me to a number of conclusions about what exactly the factors of susceptibility are. Using these factors, I was able to narrow down how vulnerability appears in institutions, individuals, places and populations. I was surprised and disheartened to realize that one of the most vulnerable areas is the civic structure of Iraq, especially in regard to religious networks and political hierarchies. Iraq is somewhat of a specialty of mine, near and dear to my heart. The tribal makeup of the country, with its diversity of faith, ethnicity, political ideology and language, is one of the reasons I find the country fascinating, but these are also some of the factors that make it susceptible to deepfakes.
Iraq is a place I have visited and conducted research in. Not only am I familiar with the people, I admire their perseverance and grit. Hopefully, focusing a portion of this paper on Iraq in this context may ease the burden of future troubles. Ensuring stability in the country allows for development and prevents chaos. Perhaps outside of Ukraine, Iraq is the country most susceptible to deepfake interference leading to civic duress. Being comfortable in the region, and specifically in Iraq, is a fortunate coincidence that allows me to delve deeper. My prior work, writing and speaking have focused on the intersection of real-time social media and conflict politics in the Middle East. It is possible that this has biased me toward focusing my lens on the region as opposed to elsewhere. Perhaps if my background were in South American affairs, I would determine Colombia to be even more vulnerable. This is not an argument I immediately deem illegitimate.
Experts in the fields of technology and international affairs I have spoken to have asked why I do not focus on countries high on the Human Development Index (HDI). Are the United States, the European Union and the Asian Tigers not at risk? The risk assessment for these countries is not as high in terms of the short-term potential for intense damage. Based mostly on factors of civic trust, collective technological literacy and anti-fraud infrastructure, the vulnerability of highly developed countries is not as great as that of countries still in the process of development. Interestingly, highly developed countries do face unique problems that developing countries do not in relation to deepfake vulnerability.
Section 4: Determining best practices in combating against and preparing for deepfakes.
Deepfakes are a very new invention, but the spreading of propaganda is not. There is some precedent for best practices, even around the use of social media to spread misinformation. Determining best practices can and should initially use prior interference campaigns as a template. Some initial assumptions about how deepfakes will likely spread will be in line with past, similar examples. This is a good starting point for anticipating future incidents and determining best practices for combating deepfakes.
However, deepfakes are still not part of common parlance the way more conventional forms of “fake news” are. The intensity of the ruse is unheard of and requires special precaution. In this fourth section, I suggest a number of best practices, but also simulations to refine them. I lay out a model of repeated Red Team/Blue Team (RTBT) exercises in the fashion best suited to simulating deepfake threats. By recreating chaos and likely routes of entropy, select subject-matter experts can act as players in a realistic setting. The specific modeling of these RTBT exercises requires repetition and moderators who can reasonably determine outcomes.
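The mechanics of a repeated RTBT exercise can be pictured as a simple round loop: the red team introduces a deepfake "inject," the blue team responds, and a moderator rules on the outcome. The sketch below is entirely illustrative; the random inject strengths, response qualities and the scoring rule are placeholder assumptions of mine, not the project's actual exercise design.

```python
import random

def run_rtbt_exercise(rounds: int, seed: int = 0) -> list:
    """Simulate repeated Red Team/Blue Team rounds with a moderator ruling.

    Illustrative only: real exercises would use human players and expert
    moderators, not random draws.
    """
    rng = random.Random(seed)  # fixed seed so repeated runs are comparable
    log = []
    for rnd in range(1, rounds + 1):
        inject = rng.uniform(0.0, 1.0)    # red team: severity of the deepfake inject
        response = rng.uniform(0.0, 1.0)  # blue team: quality of detection/response
        # Moderator rules on the outcome: damage is the unmitigated severity.
        damage = max(0.0, inject - response)
        log.append({"round": rnd, "inject": inject,
                    "response": response, "damage": damage})
    return log

log = run_rtbt_exercise(rounds=5)
worst = max(log, key=lambda r: r["damage"])
print(f"worst round: {worst['round']} (damage {worst['damage']:.2f})")
```

The value of repetition shows up in the log: running many rounds surfaces which inject/response patterns produce the worst outcomes, which is exactly the information the moderators need to tune the exercise.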
This project is still very much ongoing and requires more research, but to the best of my knowledge, what I have explained above will be the basic outline. Please feel free to ask me questions, and I will try to get back to everyone as soon as possible.