FAQs and Positions


 

1. Stance on the agency versus value perspectives on incorporating ethical criteria into investment decisions: Does adopting AI Human Impact scoring mean lower returns?

George Serafeim writes:

According to the agency perspective, sustainability expenditures benefit corporate managers at the expense of shareholders by allowing corporate managers to enhance their reputations in society with little benefit to the company. Recent research, however, has concluded the opposite.

So it may be that, by providing more information to investors, the inclusion of ethical data supports stronger returns. Regardless, the primary claim of AI Human Impact is that it produces investor freedom: it supplies the information investors need to align their finances with their own values.

 

2. Stance on Society & Technology

The philosophy of AI Human Impact is accelerationist: the only way out is straight through; the response is to flee forward. The answer to technological risks to social wellbeing is not Luddite suppression but more, and faster, AI.

 

3. Stance on “Trustworthy” AI

The European approach to AI ethics centers on the goal of making AI trustworthy. This is an error of anthropomorphism. Machines can bear legal responsibilities, as corporations and natural persons do, but only people have intentions, and so only human beings possess moral agency. The question of whether machines are trustworthy therefore makes no sense; the question of whether they are reliable, however, may reasonably be asked. (Related article: Watson’s Rhetoric and Reality of Anthropomorphism in AI)

 

4. Stance on explainable AI and causal ML

AI is defined as knowledge produced through pattern recognition, while human intelligence is knowledge produced from causality. That difference makes explainable or causal AI an oxymoron. Further, if an AI decision is explainable in human terms, then we do not need the AI: we may as well make the decision directly ourselves. (Related article: Robins’s Misdirected Principle with a Catch: Explicability for AI) In those cases where AI outperforms human beings, however, the aspiration for explainability is futile. Nevertheless, interesting work is being done in the gap, and in some cases causality can be suggested by locating the data points most responsible for an AI decision, as in the sketch below. Here is an example from healthcare AI diagnosing skin cancer from images.
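To illustrate that attribution idea, here is a minimal sketch of a gradient-based saliency map, one common way of ranking input pixels by their influence on a classifier's prediction. It assumes a PyTorch image classifier; the function name and the model it is handed are illustrative placeholders, not the diagnostic system referenced above.

```python
# Minimal sketch: gradient-based saliency for an image classifier.
# `model` is any PyTorch classifier taking a (batch, C, H, W) tensor;
# it is a placeholder, not the skin-cancer system cited above.
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Score each pixel's influence on the model's top predicted class.

    `image`: one preprocessed input of shape (channels, height, width).
    Returns a (height, width) heat map of gradient magnitudes.
    """
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)  # add batch dim, track grads
    logits = model(x)                            # shape (1, num_classes)
    top_class = logits.argmax(dim=1).item()      # predicted class index
    logits[0, top_class].backward()              # d(top score) / d(pixels)
    # A large gradient magnitude means a small change to that pixel would
    # move the decision most; collapse color channels to a single 2-D map.
    return x.grad.abs().squeeze(0).max(dim=0).values
```

Overlaying that heat map on the original image shows which regions (for a dermatology model, perhaps a lesion's border or pigmentation) drove the classification. Note that this locates influence rather than establishing causation, which is precisely the gap this section describes.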