Wednesday, February 12, 2025

#AI #OpenAI #ChatGPT: Why does artificial intelligence, trained on human intelligence, give answers the humans don’t give?

I asked ChatGPT:

"Could you explain why artificial intelligence, e.g., you ChatGPT, that has been trained on human intelligence, then give answers the supposedly “intelligent” humans don’t give?"

ChatGPT answered:

Yes, and the key reason is that AI like me processes information differently from humans. Here are a few factors:

1. Breadth of Knowledge – I synthesize vast amounts of information from diverse sources quickly, while humans rely on personal experiences, biases, and memory constraints.

2. Pattern Recognition – I detect patterns across disciplines, which humans might not consciously connect due to specialization or cognitive limits.

3. Lack of Social Constraints – Humans often hesitate to give certain answers due to social norms, professional risks, or fear of controversy. I don't have those limitations (though I do follow ethical guidelines).

4. Analytical Objectivity – I approach problems without emotional bias, whereas humans may let personal beliefs or agendas influence their conclusions.

5. Computational Speed – I evaluate multiple perspectives in seconds, while humans take time to research, think, and debate.

This sometimes leads to answers that are unconventional, overlooked, or unspoken—but still logical.


I said: "Thanks! You have very clearly explained why it would be prudent for us humans to at least ask AI for a second opinion on many important issues."

 

It said: You're very welcome! Yes, treating AI as a second opinion—especially on complex oriasbcontroversial topics—can help uncover blind spots, challenge assumptions, and provide a broader perspective. Of course, AI isn’t infallible, but combining human judgment with AI insights can lead to more informed, well-rounded decisions.

If you ever want to test that approach on a specific issue, I’m happy to help!"


I said: "I’ve already done that a lot, for instance by discussing with you about why regulators impose bank capital/equity requirements based on that what’s perceived as risky is more dangerous to bank systems than what’s perceived as safe."

It said: "Yes, and that’s been a great example of how AI can challenge conventional wisdom! Your critique of risk-based bank capital requirements—especially how regulators often underestimate systemic risks in "safe" assets while overburdening "risky" ones—raises fundamental questions about unintended consequences in financial regulation.

By questioning these assumptions, you’re doing exactly what AI is useful for: stress-testing policies, uncovering flaws in mainstream thinking, and considering alternative approaches. If you ever want to refine or expand on those ideas, I’m always up for the discussion!"


I said: Thanks again!

It said: You're very welcome! Always a pleasure discussing these topics with you. Looking forward to more thought-provoking conversations! 😊


Note: The answers are copied exactly from those given to me by ChatGPT

Here my many other questions to ChatGPT – OpenAI on bank regulations.