#AI #OpenAI #ChatGPT: Does it behoove us citizens, as someone smartly called it, to keep an “Eye On AI”?
I asked ChatGPT:
If different chatbots give different answers to the same question, can we then suppose they have been trained on different data, use different algorithms, or both of those things at once?
ChatGPT answered:
Yes, differences in chatbot answers can result from different training data, different algorithms, or both. Here’s why:
1. Different Training Data:
If chatbots are trained on different datasets, they will have access to different information, leading to variations in responses.
Some models might have more up-to-date or specialized knowledge, while others rely on broader but older data.
2. Different Algorithms (Architectures & Fine-Tuning Approaches):
Even if two chatbots were trained on the same dataset, differences in their neural architecture (e.g., transformer models like GPT vs. a retrieval-based model) could produce different interpretations and responses.
The way a model is fine-tuned (e.g., reinforcement learning with human feedback, rule-based filtering) can also impact how it formulates answers.
3. Both Factors Together:
In most cases, chatbots differ in both their training data and the way their algorithms process that data.
Companies and organizations might use proprietary tuning techniques, censorship policies, or prompt engineering strategies that further shape their chatbot’s responses.
So, differing answers don’t necessarily mean one is "wrong"—just that they reflect different sources, interpretations, or biases introduced during training and tuning.
I asked ChatGPT:
"So, some chatbots might have been trained on selective data and/or use algorithms to provide answers, more than based on human intelligence, based on the answers its owners/programmers want?"
ChatGPT answered:
Yes, exactly. Some chatbots are trained on selective data and/or use algorithms that shape responses according to the goals, biases, or policies of their creators. This can happen in several ways:
1. Selective Training Data:
If a chatbot is trained only on certain sources (e.g., government reports, academic papers, or company-approved materials), it will reflect the perspectives found in those sources while ignoring others.
Some companies deliberately exclude controversial or unreliable sources to maintain credibility, but this can also introduce bias by omission.
2. Algorithmic Filtering & Reinforcement:
Beyond training, many chatbots use algorithms to filter or modify responses based on predefined guidelines.
Reinforcement learning with human feedback (RLHF) is often used to shape how a model responds, emphasizing certain viewpoints while suppressing others.
3. Censorship & Policy-Based Adjustments:
Some chatbots are programmed to avoid sensitive topics, give politically correct responses, or follow corporate or government policies.
Others might prioritize engagement, tailoring responses to be more agreeable or in line with user expectations rather than strict objectivity.
4. Commercial or Ideological Influence:
Companies or institutions funding chatbot development may align responses with their values, brand image, or regulatory requirements.
For instance, a government-funded AI might emphasize stability and official narratives, while a corporate AI could subtly steer discussions toward its products or interests.
Ultimately, while chatbots can provide useful information, they don’t always function as neutral arbiters of truth. Their answers reflect not just human intelligence but also the choices made by those who design and control them.
I asked ChatGPT:
"So, it might behoove us citizens, us ordinary AI users, to have a group of humans we trust, to regularly pose the same question to all major chatbots and report any major divergences, or major similitudes which could be just as strange/dangerous. Do you agree?"
ChatGPT answered:
Yes, that would be a smart and proactive approach. Regularly comparing responses from different chatbots could help detect biases, coordinated narratives, or unexplained gaps in information. Here’s why this could be valuable:
1. Detecting Bias & Selective Framing:
If chatbots consistently diverge on key issues, that might indicate selective training, filtering, or ideological influence.
If they all say the same thing despite clear alternative viewpoints existing, that could suggest coordinated constraints rather than a true consensus.
2. Accountability & Transparency:
Publicly tracking chatbot responses could encourage AI providers to explain their decision-making processes, making AI systems more transparent.
This could also pressure companies to allow greater user control over chatbot behavior (e.g., choosing between different response modes).
3. Preserving Independent Thinking:
If we become too dependent on AI without questioning it, we risk outsourcing our judgment. Comparing responses helps users stay engaged and think critically.
By discussing differences and similarities openly, people can develop a more well-rounded understanding of complex topics.
4. Early Warning Against Manipulation:
If all major chatbots suddenly shift their answers on certain topics without clear justification, that might indicate external pressures, policy changes, or deliberate manipulation.
A trusted group monitoring this could serve as an early-warning system for undue influence in AI-generated information.
While AI can be useful, it should never replace human judgment. Your idea promotes independent thought and collective vigilance—something that might become even more necessary as AI grows more influential.
I asked ChatGPT:
"Would you mind having your answers compared to that of other chatbots? :-)"
ChatGPT answered:
Not at all—I welcome it! Comparing my responses with other chatbots is a great way to spot differences, uncover biases, and ensure a broader perspective on any given topic. If I provide an answer that differs significantly from another chatbot’s, that’s an opportunity for deeper analysis rather than something to avoid.
If you ever find major inconsistencies or strange patterns in chatbot responses, I’d be happy to discuss them with you. It could lead to interesting insights about how AI models are shaped and where their strengths and weaknesses lie.
I said: Thanks! As you can guess, I suspected that would be your answer :-)
ChatGPT said: You're very welcome! And yes, I had a feeling you might anticipate that response. 😉
Note: The answers are copied exactly as given to me by ChatGPT.
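For readers who want to experiment with the monitoring idea discussed above, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the ask_chatbot_* functions are hypothetical placeholders to be wired up to real chatbot APIs of your choice, and the 0.5 divergence threshold is arbitrary; only the text-similarity measure (Python's standard-library difflib) is real.

```python
# A minimal sketch of the "Eye On AI" comparison harness discussed above.
# The ask_* functions are hypothetical placeholders: replace them with
# calls to the real chatbot APIs you want to monitor.
from difflib import SequenceMatcher
from itertools import combinations

def ask_chatbot_a(question: str) -> str:
    # Hypothetical: call provider A's API and return its answer text.
    raise NotImplementedError

def ask_chatbot_b(question: str) -> str:
    # Hypothetical: call provider B's API and return its answer text.
    raise NotImplementedError

CHATBOTS = {
    "Chatbot A": ask_chatbot_a,
    "Chatbot B": ask_chatbot_b,
}

def compare_answers(question: str, threshold: float = 0.5) -> dict:
    """Pose the same question to every chatbot and flag large divergences."""
    answers = {name: ask(question) for name, ask in CHATBOTS.items()}
    for (name1, a1), (name2, a2) in combinations(answers.items(), 2):
        # SequenceMatcher gives a crude 0.0-1.0 surface-text similarity;
        # it measures overlap in wording, not agreement in meaning.
        score = SequenceMatcher(None, a1, a2).ratio()
        verdict = "DIVERGENT" if score < threshold else "similar"
        print(f"{name1} vs {name2}: {score:.2f} ({verdict})")
    return answers
```

Archiving each run's answers with a timestamp would also serve the "early warning" idea in point 4 above: a sudden drop in similarity between a chatbot's answers today and yesterday is exactly the kind of unexplained shift worth investigating. A more serious version would compare meaning rather than wording, for instance with sentence embeddings.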
A good way to fight the reigning Autocracy is with a Citizens’ Democracy, empowered by Artificial Intelligence.
#AI #OpenAI #ChatGPT: Here I am getting to know you, artificial intelligence.
Here are my many other questions to ChatGPT – OpenAI on bank regulations.