I'm a former [short-term] Executive Director of the World Bank (2002-2004) for Costa Rica, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Spain and Venezuela. I write this blog to echo my voice and my noise. Though I know many will only be able to read it in yellow or blue, I do make an effort to write it in green. The Radical Middle or the Extreme Center is not any wishy-washy place to be, in a world where swimming to any of the ideological shores provides for a much calmer shelter.
Quoting Soroush Vosoughi, it ends with: “It’s very unlikely that the web is going to be perfectly neutral. The larger the data set, the more clearly this bias is going to be present in the model.”
Of course, that’s true, at least until the umbilical cord by which Artificial Intelligence is fed by Human Intelligence is cut off. Since that separation would launch humanity into a totally different dimension, let’s pray that does not happen.
That said, if you ask ChatGPT – OpenAI a question, and do not simply accept its answer blindly but enter into a dialogue with it, I’ve found that #AI is much more willing than humans to admit mistakes, apologize and change its opinions.
“I apologize for the confusion in my previous response” is an example of a reply I’ve gotten.
So, to get the most out of AI, we need to learn how to dialogue with it. Universities might have to wake up to the idea that we might need MDAIs much more than MBAs... [and perhaps more than professors and consultants]
Now, in direct reference to “bias”, I asked ChatGPT – OpenAI: "Could #AI construct political-bias-index Apps, which in real time indicate where between defined political extremes an opinion article finds itself? Could that refrain, even a little, the dangerous polarization societies are suffering?"
The answer I got was a clear “Yes!” to which it added “it's crucial to ensure that the AI algorithms and models used in these apps are transparent, fair, and designed with diverse inputs to mitigate any potential biases. Additionally, promoting a diverse range of perspectives and encouraging open debate within these apps can help avoid further entrenching existing biases and polarization.”
PS. I have chatted a lot with ChatGPT and what's below has to do with what I have argued above.
"Could AI be used for constructing political-bias-index Apps, which could in real time indicate where between defined political extremes an opinion article finds itself? If so, could those Apps be deemed to limit the freedom of expression?"
ChatGPT answered:
"Yes, AI can be used to develop political-bias-index apps that attempt to measure and categorize the political bias of opinion articles or other forms of media. These apps could use various natural language processing and machine learning techniques to analyze the content and determine its alignment on the political spectrum.
However, it's important to note that political bias is a complex and subjective matter. Different individuals and groups may perceive and interpret bias differently, which can make it challenging to develop a universally accepted metric. AI models used for bias analysis can be influenced by the data they are trained on, potentially reflecting the biases of their creators or the sources used for training.
The use of such apps raises concerns about the potential limitations on freedom of expression. While these apps may provide users with information about the perceived bias of an article, it's crucial to avoid using them as absolute arbiters of truth or as a means to suppress certain viewpoints. If these apps are positioned as authoritative or are used to justify censorship or exclusion of certain perspectives, they could indeed encroach upon freedom of expression and lead to a narrowing of public discourse.
It's important to promote transparency and critical thinking when using political-bias-index apps. Users should be aware of the limitations and potential biases of such tools and use them as just one of many sources of information. Additionally, developers of these apps should strive for openness, provide clear explanations of their methodology, and be receptive to feedback and scrutiny to mitigate the risks of limiting freedom of expression."
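To make ChatGPT’s point concrete, here is a toy sketch of what the scoring core of such a political-bias-index App might look like. Everything in it is an illustrative assumption: the two keyword lexicons are invented for the example, and a real app would use a trained natural language processing model rather than word counts.

```python
# Toy political-bias index: scores a text on a scale from -1.0 (one
# defined extreme) to +1.0 (the other). The lexicons below are
# hypothetical placeholders, not a real methodology.

LEFT_TERMS = {"redistribution", "regulation", "welfare", "union"}
RIGHT_TERMS = {"deregulation", "privatize", "tariff", "tradition"}

def bias_index(text: str) -> float:
    """Return a score in [-1.0, 1.0]; 0.0 when no cue words appear."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    left = sum(w in LEFT_TERMS for w in words)
    right = sum(w in RIGHT_TERMS for w in words)
    total = left + right
    return 0.0 if total == 0 else (right - left) / total

print(bias_index("We need more regulation and welfare spending."))  # -1.0
print(bias_index("Privatize services and pursue deregulation."))    # 1.0
print(bias_index("The weather was pleasant today."))                # 0.0
```

Even this crude sketch illustrates ChatGPT’s caveat above: whoever chooses the lexicons (or the training data) builds their own perceptions of bias into the index, which is why transparency about the methodology matters.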
I asked ChatGPT:
"Could those AI political-bias-index Apps help to limit polarization profiteering and, in that way, somehow refrain, even a little, the dangerous polarization societies are suffering?
ChatGPT answered:
"AI political-bias-index apps have the potential to contribute to the mitigation of polarization profiteering and, to some extent, help address the dangers associated with polarization in societies. By providing users with information about the perceived bias of articles or media content, these apps can encourage individuals to critically assess the sources they consume and be more aware of potential biases.
One of the drivers of polarization is the echo chamber effect, where individuals are exposed only to information that aligns with their existing beliefs. AI bias-index apps can help break through this echo chamber by highlighting the political leaning of various sources, thereby encouraging users to seek out diverse perspectives and engage in more balanced discourse.
Furthermore, these apps can help individuals understand the range of political opinions and where specific articles or media content fall on the spectrum. This awareness can foster a more nuanced understanding of complex issues, promoting critical thinking and reducing the tendency to rely on simplistic, polarized narratives.
However, it's important to recognize that political polarization is a multifaceted issue with deep-rooted social, economic, and cultural factors. AI apps alone cannot solve this problem entirely. They should be seen as tools to complement broader efforts aimed at promoting media literacy, fostering empathy, and encouraging respectful dialogue across ideological divides.
Moreover, it's crucial to ensure that the AI algorithms and models used in these apps are transparent, fair, and designed with diverse inputs to mitigate any potential biases. Additionally, promoting a diverse range of perspectives and encouraging open debate within these apps can help avoid further entrenching existing biases and polarization.
In summary, while AI political-bias-index apps can play a role in mitigating polarization profiteering and fostering a more informed public, they should be part of a broader set of initiatives aimed at promoting critical thinking, media literacy, and constructive dialogue."
Note: The answers are copied exactly from those given to me by ChatGPT