In my June 15, 2020 post, “End Anonymity,” I wrote:
“Anonymity is a plague in our lives, public and private. The web is filled with anonymous material authored by anonymous creators. Facebook is filled with millions of anonymous fake people who are really, it frequently turns out, paid actors for various political and economic actors.”
The world is filled with anonymous people and organizations participating in our social and political lives. By its very nature, this anonymity is dangerous for our political and social institutions, not to mention for us individually. The corporate world is filled with shell corporations that effectively shield the actual owners, the responsible parties, across an enormous array of activities. Lawyers regularly toil to build shields through these entities for their rich clients. Dark money is everywhere in our political and regulatory systems. And it hardly needs pointing out that our media is filled with content supplied by anonymous people and entities.
Now we have the first signs of what will be a tsunami of AI-generated media as standard practice for corporations, politicians, and their hangers-on. Professors Lawrence Lessig and Archon Fung wrote in their piece, “How AI Could Take over Elections – and Undermine Democracy”1:
As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger [their name for an AI political propaganda system] could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.
Given the crass revenue-generating morals of the big social media companies and the demonstrated corruption of our political system, we should take action immediately to prevent this from happening.
In an earlier post, “Further thoughts on making AI a good neighbor,” I suggested some rules that we should put in place with regard to AI in the world:
AI must be visible to us in real-time when it is making judgments or providing information affecting us. No hidden algorithms. We must know that there is an AI actor present in real time during the encounter.
Every product of AI, written, video, picture, or audio, must contain an easily accessible, visible marker that indicates its origins. This marker will include the identifying IDs of the computer, network location, time, AI software name, revision, and license number. This should be fairly easy to accomplish since all of these identifiers already exist. Appropriate penalties for failing to provide or falsifying this information should be developed.
Who owns the AI platform? Who is the responsible person? We must be able to reach another human being directly in real-time to make inquiries about the behaviors of the AI. No shell or anonymous corporate or government structures can own or deploy any AI.
Any advice or decision the AI offers must be answerable to basic questions: Is it legal? Does it conform with best practices? Is it ethical?
In situations where the AI is making decisions about a human being, there must be present in real-time a system to lodge complaints and/or make corrections to AI behaviors.
AI must be able to cite the facts that are the basis for its activities and, further, explain the reasoning that supports its decisions.
AI must maintain a continuous log of human queries and challenges. Appropriate quality measures that reflect the content of AI activities should be present, updated in real-time, and immediately available.
No information or decision by AI that affects a human being’s health, income, family, mobility, or safety can be implemented without prior confirming action by a responsible human being accessible in real-time to the subject.
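To make the marker rule above concrete, here is a minimal sketch in Python of what such a provenance record might look like. This is an illustration only, not an existing standard: every field name, function name, and value below is a hypothetical stand-in for the identifiers listed in the rule (machine, network location, time, software name, revision, license number).

```python
import json
from datetime import datetime, timezone

def make_provenance_marker(machine_id, network_location, software_name,
                           revision, license_number):
    """Build one provenance record for a piece of AI-generated content.

    Field names here simply mirror the identifiers proposed in the
    marker rule; a real scheme would also need signing and verification
    so the record cannot be falsified.
    """
    return {
        "machine_id": machine_id,                # identifying ID of the computer
        "network_location": network_location,    # where the content was generated
        "generated_at": datetime.now(timezone.utc).isoformat(),  # timestamp
        "software_name": software_name,          # AI software name
        "revision": revision,                    # software revision
        "license_number": license_number,        # license number of the deployment
    }

# Example record for a hypothetical system:
marker = make_provenance_marker(
    machine_id="host-0042",
    network_location="203.0.113.7",
    software_name="ExampleAI",
    revision="2.1.0",
    license_number="LIC-12345",
)
print(json.dumps(marker, indent=2))
```

Since all of these identifiers already exist in practice, attaching a record like this to every AI-generated artifact is technically straightforward; the hard part, as the rules suggest, is mandating it and penalizing omission or falsification.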
Unsurprisingly, the European Union is already considering legislation to regulate AI, and even to forbid its use in certain situations.
- Archon Fung and Lawrence Lessig, “How AI Could Take over Elections – and Undermine Democracy,” The Conversation, June 2, 2023, http://theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051.