Further thoughts on making AI a good neighbor

What expectations and constraints should be placed on the development and ongoing deployment of AI systems?

If we think about how we evaluate the merits of human sources of knowledge, we can identify useful practices that may help us develop expectations and constraints to govern AI and keep it a good neighbor.

When meeting a new person who seeks to play the role of a knowledgeable expert in some field of interest, we apply a number of tests.

  1. Does this person have a track record in the community of providing good, reliable guidance? Are references available from people who have previously worked with the expert?
  2. When we ask where they got the knowledge in play, can they explain its sources? We can then determine whether those sources are already known to us or whether we need to ask around to confirm that they are reliable.
  3. We ask the putative expert to explain the thought processes and experiences that form the basis of the expertise. Do they make a persuasive case that there is a clear connection between their knowledge, their thought processes, and the solution they offer for the problem at hand?
  4. Is the advice legal? Does it conform with best practices? Is it ethical?
  5. The expert knows that their credibility and future trustworthiness are at stake. If they render bad advice with any frequency, they will be held accountable in the court of public opinion and lose their status as an expert; in extreme cases, they will be shunned.
  6. The advice should be supplied in a fashion that supports the independent agency of the recipient, and we watch out for conflicts of interest: is our expert likely to give advice that is influenced by personal gain for themselves or some institution?
  7. Our expert is present and visible.
  8. Positive responses to these tests establish the credibility of the expert and increase our sense of trust, which makes the process of asking for expert help more efficient and less costly in time.

Applying these tests to AI suggests a set of constraints that, if honored, keep AI a good neighbor and, if violated, should lead to its banishment.

  1. AI must be visible to us in real time when it is making judgments or providing information that affects us. No hidden algorithms: we must know that an AI actor is present during the encounter.
  2. Every product of AI, whether written, video, picture, or audio, must carry an easily accessible, visible marker that indicates its origins. This marker should include identifiers for the computer, the network location, the time, and the AI software's name, revision, and license number (a minimal sketch of such a marker follows this list). This should be fairly easy to accomplish, since all of these identifiers already exist. Appropriate penalties for failing to provide this information, or for falsifying it, should be developed.
  3. Who owns the AI platform? Who is the responsible person? We must be able to reach another human being directly, in real time, to make inquiries about the behaviors of the AI. No shell or otherwise anonymous corporate or government structure may own or deploy an AI.
  4. As with the human expert: is the advice legal? Does it conform with best practices? Is it ethical?
  5. In situations where the AI is making decisions about a human being, there must be a real-time system for lodging complaints and/or correcting AI behaviors.
  6. AI must be able to cite the facts on which its activities are based and, further, explain the reasoning that supports its decisions.
  7. AI must maintain a continuous log of human queries and challenges. Appropriate quality measures that reflect the content of AI activities should be present, updated in real time, and immediately available (this constraint and the next are sketched together after the list).
  8. No information or decision by AI that affects a human being’s health, income, family, mobility, or safety can be implemented without prior confirming action by a responsible human being who is accessible in real time to the subject.
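
To make the second constraint concrete, here is a minimal sketch of what such a marker might look like as a data structure. Everything here, from the class name down to the field values, is hypothetical and illustrative; the point is only that the required identifiers already exist and can be bundled together and attached to every AI output.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceMarker:
    """Hypothetical origin marker attached to every AI-generated product."""
    machine_id: str         # identifier of the computer that produced the output
    network_location: str   # e.g., an IP address or hostname
    timestamp: str          # when the output was generated (ISO 8601, UTC)
    software_name: str      # name of the AI system
    software_revision: str  # version/revision of the AI system
    license_number: str     # license under which the system is deployed

    def to_json(self) -> str:
        # Serialize so the marker can be embedded in or attached to the output.
        return json.dumps(asdict(self), indent=2)

# Illustrative values only.
marker = ProvenanceMarker(
    machine_id="host-1234",
    network_location="203.0.113.7",
    timestamp=datetime.now(timezone.utc).isoformat(),
    software_name="ExampleAI",
    software_revision="2.1.0",
    license_number="LIC-0042",
)
print(marker.to_json())
```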

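In the same spirit, the following sketch combines constraints 7 and 8: every query and response is appended to a continuous log, and any decision that affects a person is blocked until a named, responsible human confirms it. Again, all names and signatures here are hypothetical, not a reference to any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class LogEntry:
    """One record in the continuous, append-only log of human queries."""
    timestamp: str
    query: str
    response: str

@dataclass
class AuditedAI:
    log: List[LogEntry] = field(default_factory=list)

    def record(self, query: str, response: str) -> None:
        # Constraint 7: every query and response is logged as it happens.
        self.log.append(LogEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            query=query,
            response=response,
        ))

    def implement(self, decision: str, affects_person: bool,
                  human_confirmer: Optional[str]) -> bool:
        # Constraint 8: a decision affecting a person's health, income,
        # family, mobility, or safety requires a responsible human's sign-off.
        if affects_person and human_confirmer is None:
            self.record(f"decision blocked: {decision}",
                        "awaiting human confirmation")
            return False
        self.record(f"decision implemented: {decision}",
                    f"confirmed by {human_confirmer or 'n/a'}")
        return True
```
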
A central problem in implementing these rules of good-neighbor AI behavior is the rampant anonymity present today throughout societies. On the internet, VPNs (Virtual Private Networks) make it very difficult to prevent malicious activities by anonymous actors (individuals, corporations, and governments).1 Further, the world is awash in legal structures, anonymous shell companies2, for example, that make it difficult or impossible to identify responsible parties. Anonymity needs to be banned worldwide. One need only look at the role of anonymous “dark” money in American politics and society to observe the dangers posed by anonymous actors in the public sphere. Anonymity weakens social relations and structures. The success, or failure, of our species is very much tied up in our capacity for cooperation, and cooperation is undermined by anonymous actors because anonymity erodes trust.

Finally, we must take note that much of the development of AI is funded and controlled by huge private enterprises like Facebook (Meta), Google (Alphabet), and Microsoft. They are not doing this for the general benefit of humankind; they are developing AI to enhance their top and bottom lines. As these companies have already demonstrated, they will act to enhance their positions regardless of the costs to end users. Enormous amounts of human attention and energy are now taken up by digital media, whose purpose is to fill the coffers of the tech giants with money from advertisers. These companies continue to foster an environment of chronically malicious and frequently false information. Exactly how they will apply AI to further their search for increased sales and profits is unclear. Nevertheless, you can be certain that their game plan is focused on sales and profits, not the welfare of societies and humankind.

Footnotes

  1. It must be noted that the use of VPNs is, in large part, driven by government censorship and monitoring.
  2. See, for example, J. C. Sharman, “Shopping for Anonymous Shell Companies: An Audit Study of Anonymity and Crime in the International Financial System,” Journal of Economic Perspectives 24, no. 4 (November 1, 2010): 127–40, https://doi.org/10.1257/jep.24.4.127.
