Friday, August 18, 2023

We need Masters in Dialogue with Artificial Intelligence; MDAIs

I refer to “ChatGPT’s political bias leans liberal, research shows” by Gerrit De Vynck, Washington Post, August 18, 2023.

It ends by quoting Soroush Vosoughi: “It’s very unlikely that the web is going to be perfectly neutral. The larger the data set, the more clearly this bias is going to be present in the model.”

Of course that’s true, at least until the umbilical cord through which Human Intelligence feeds Artificial Intelligence is cut. Since that separation would launch humanity into a totally different dimension, let’s pray it does not happen.

That said, if you ask ChatGPT – OpenAI a question and, instead of blindly accepting its answer, enter into a dialogue with it, I’ve found that #AI is much more willing than humans to admit mistakes, apologize, and change its opinions.

“I apologize for the confusion in my previous response” is an example of a reply I’ve gotten.

So, to get the most out of AI, we need to learn how to dialogue with it. Universities might have to wake up to the idea that we may need MDAIs much more than MBAs.

Now, in direct reference to “bias”, I asked ChatGPT – OpenAI: “Could #AI construct political-bias-index apps, which in real time indicate where, between defined political extremes, an opinion article finds itself? Could that restrain, even a little, the dangerous polarization societies are suffering?”

The answer I got was a clear “Yes!”, to which it added: “it's crucial to ensure that the AI algorithms and models used in these apps are transparent, fair, and designed with diverse inputs to mitigate any potential biases. Additionally, promoting a diverse range of perspectives and encouraging open debate within these apps can help avoid further entrenching existing biases and polarization.”
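For what it’s worth, here is a minimal sketch of what such a political-bias-index app might look like in code, assuming an off-the-shelf zero-shot classifier from Hugging Face; the model name, labels, and scoring scale are my own illustrative choices, not anything ChatGPT specified.

```python
# Illustrative sketch only: places a piece of text between two defined political
# extremes by comparing zero-shot classification scores. The model and labels
# below are assumptions for illustration, not a vetted bias-measurement method.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def bias_index(article_text: str) -> float:
    """Return a score from -1.0 (left extreme) to +1.0 (right extreme)."""
    labels = ["politically left-leaning", "politically right-leaning"]
    result = classifier(article_text, candidate_labels=labels)
    scores = dict(zip(result["labels"], result["scores"]))
    left, right = scores[labels[0]], scores[labels[1]]
    # Normalize the difference so the output lands between -1 and +1.
    return (right - left) / (right + left)

if __name__ == "__main__":
    sample = "The government should cut taxes and reduce regulation on business."
    print(f"Bias index: {bias_index(sample):+.2f}")
```

Whether the numbers such an app produced meant anything would, of course, hinge on exactly what ChatGPT’s answer warns about: transparent algorithms, diverse inputs, and open debate about how the extremes are defined.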

PS. I have chatted a lot with ChatGPT, and what follows below relates to what I have argued above.

@PerKurowski