Thanks a lot, Brookings.
The Swamp’s favorite think tank kicked off the new year with “an overview of how deepfake technologies will impact security and intelligence operations,” combined with a warning that “officials and policymakers need a far greater understanding of how the technology works and the myriad ways it can be used.”
“Deepfakes and International Conflict,” authored by four Brookings Institution scholars, cites a real-world deployment of the weapon it fears. Not long after the shooting started in Ukraine,
a video message showing … President Volodymyr Zelenskyy briefly appeared on the news website Ukraine 24. Dressed in his iconic olive shirt, Zelenskyy’s tone and attire matched his other messages of the time. Yet the message itself was altogether different: Rather than urging Ukrainians to carry on their fight, Zelenskyy instead implored them to lay down their arms and surrender. Not surprisingly, the video then quickly spread on VKontakte, Telegram, and other social media platforms, where it was picked up and reported on by global media.
Zelenskyy, as you would suspect, issued no such message, but the "incident marked a turning point in information operations." Whereas "[d]eceit and media manipulation have always been a part of wartime communications," now it's possible "for nearly any actor in a conflict to generate realistic audio, video, and text of their opponent's political officials and military leaders."
For those of us who score so-so on the IT-geek scale, “Deepfakes and International Conflict” keeps its technical discussion (e.g., “generative adversarial networks”) mercifully brief. The specifics, for people who make policy and the voters who put them in office, are far less important than the trouble this new type of trickery can cause. False-flag operations are one potentiality. In 1939, Hitler’s goons “dressed in Polish uniforms seized a radio station and broadcast a message condemning Germany” to create a justification for invasion. A deepfake “might foster or legitimate an insurgency.” (In a 2019 Foreign Affairs article, Robert Chesney and Danielle Citron envisioned “a video depicting the Israeli prime minister in private conversation with a colleague, seemingly revealing a plan to carry out a series of political assassinations in Tehran,” an “audio clip of Iranian officials planning a covert operation to kill Sunni leaders in a particular province of Iraq,” and “a video showing an American general in Afghanistan burning a Koran.”)
How about bogus orders, such as a command "to dislodge well-defended troops," or sown confusion that leads troops to ignore "legitimate orders"? A useful divide-the-ranks tactic could be the depiction of "top political or military leaders voicing racist remarks, expressing disdain for their soldiers and political bosses, laughing at the dead and wounded, or otherwise discrediting them." Anti-recruitment deepfakes "might show military forces committing human rights abuses, favoring one community over another, fleeing as cowards rather than fighting bravely, looting and stealing from the local community, or otherwise betraying the cause and the people they claim to be defending." And since alliances can be fractured, given "different security priorities and domestic political concerns," a well-crafted deepfake "can play up these differences." (During the Cold War, for example, "the KGB put out convincing, but false, 'leaked' official U.S. reports that called for using nuclear weapons on the territory of members of the North Atlantic Treaty Organization, creating widespread anger.")
Brookings avers that while “it’s still possible to design and train algorithms today that can identify deep fake images, videos, and texts, in the long-term such an approach is unlikely to work — any advances in the algorithmic detection of deep fakes can be baked into the next generation of algorithms used to generate them.” So in time, “defending against deep fakes [sic] will require robust forms of authenticating and verifying digital content, and greater digital literacy and critical reasoning among the public at large.” The “security and intelligence enterprise” needs “systems capable of assuring the provenance and chain of custody of a given piece of audio, video, or text.” The paper’s best advice, for the feds, average folks, and the media, is strong suspicion regarding “[s]ingle-source information.”
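The "authenticating and verifying digital content" idea is easier to grasp with a toy example. The sketch below (an illustration of the general technique, not anything proposed in the Brookings paper; the key and function names are hypothetical, and a real system would use public-key signatures rather than a shared secret) tags a media file's hash at publication time, so any later tampering with the bytes makes verification fail:

```python
import hashlib
import hmac

# Stand-in for a publisher's private signing key (hypothetical).
SECRET_KEY = b"publisher-signing-key"

def make_provenance_tag(media_bytes: bytes) -> str:
    """Issue a tag over the media's SHA-256 digest at publication time."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_provenance(media_bytes: bytes, tag: str) -> bool:
    """True only if the bytes are unchanged since the tag was issued."""
    return hmac.compare_digest(make_provenance_tag(media_bytes), tag)

original = b"example video bytes"
tag = make_provenance_tag(original)

print(verify_provenance(original, tag))         # True: untouched
print(verify_provenance(original + b"!", tag))  # False: altered after signing
```

The point of such a scheme is exactly what the paper recommends: instead of trying to algorithmically spot fakery in the content itself (an arms race the fakers can win), you verify that a clip is the same bytes a trusted source originally released.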
Since no one at Brookings is capable of thinking outside the Box of Solipsism, District of Columbia, “Deepfakes and International Conflict” commits an unforgivable sin. It ignores the role nongovernmental actors can play in ferreting out high-tech skullduggery. Washington would be wise to enlist skilled individuals, nonprofits, and companies to aid its effort to discern objective reality. And keeping the process transparent must be paramount. For some reason — nothing more than a hunch, really — closely monitoring the feds’ pursuit of “fake media” seems like a good idea….
Since Orson Welles did the "War of the Worlds" broadcast, trickery has been alive and well. Today we have the Marxist Socialist Communists doing their best to fake it to make it. Because the American people are so gullible and stupid, the fake media has been able to do their magic to fool the fools. The Courts and the most corrupt Government in the History of the US have done whatever they've wanted to with our Laws and our Constitution. Unless we vote out the Criminals and say NO to Diversity Politics, we will be the victims of our own ignorance and find ourselves in a 3rd World Country wondering how we got there!!
We can't forget that Brookings was, as Jonathan Turley put it, the "mothership" of the deep-fake Russian collusion scandal. Because of the DC courts, Durham failed to take Brookings gossips like Igor Danchenko down a notch. So now Brookings "experts" are able to preach what they practice so well - treasonous slander. https://www.foxnews.com/opinion/brookings-think-tank-durham-investigation-jonathan-turley