AI and Social Proof Processes

Speaking of AI in 2018, Jason Yosinski said, “To a certain extent, as these networks get more complicated, it is going to be fundamentally difficult to understand why they make decisions. It is kind of like trying to understand why humans make decisions.”1

As the mega-monopolies of the tech world roll out their ever-more-powerful AI machinery, we are going to come face-to-face with the issue Yosinski raises. How do we understand how and why these black boxes of probability do what they do? We cannot be brushed off with a glib line like, “It is kind of like trying to understand why humans make decisions.”

To be sure, the perceptual and thinking machinery of the human brain is a maze. By one count, some 180 cognitive biases are at work in our perception and decision-making.2 Behavioral economics has made a lot of progress in identifying how we make decisions in day-to-day life. And our thinking processes evolved over a very long time, in social environments very different from today’s. So, yes, a bit of a maze.

But any decision more important than whether to order a black coffee or an Ethiopia Yirgacheffe® Chelelektu Clover® Starbucks Reserve grande is not made by an individual alone. Decisions are social. Whether in business, engineering, medicine, or, at its best, government policy, a group demands to see the data and the thinking that support a decision. The same process applies within families and other small social groups. People evaluate the reliability of the data sources. They bat around competing ideas, frequently built on differing worldviews. This does not always lead to an optimum. But everyone involved can see what is going on. That transparency raises our assurance that the decision is based on a reasonable set of facts and was arrived at within a known set of boundaries and conflicts.

AI offers no such transparency. It is a probabilistic machine.

As I’ve noted earlier, there is one bias in the development and rollout of AI that is guaranteed to be trouble: Google, Facebook, Microsoft, and the rest will always have an eye on maximizing their sales and profits. They will focus on retaining as many of our attentive hours as possible. These objectives have already produced a flood of malicious misinformation and disinformation, and you can be sure they are front and center in these companies’ new world of AI.

Footnotes

  1. Jason Yosinski, quoted in Cade Metz, “Google Researchers Are Learning How Machines Learn,” The New York Times, March 6, 2018, https://www.nytimes.com/2018/03/06/technology/google-artificial-intelligence.html
  2. “Every Single Cognitive Bias in One Infographic,” Visual Capitalist, https://www.visualcapitalist.com/every-single-cognitive-bias/