
Sunday, March 23, 2025

#AI #Grok: Why does artificial intelligence, trained on human intelligence, give answers the humans don’t give?

I asked Grok 3:

"Could you explain why artificial intelligence, e.g., you ChatGPT, that has been trained on human intelligence, then give answers the supposedly “intelligent” humans don’t give?"

Grok 3 answered: 

First off, I’m not ChatGPT—I’m Grok, created by xAI. 

Now, about your question: it’s a good one! The reason an AI like me might give answers that differ from what humans expect—or even from what some humans would say—comes down to how we’re built and what we’re designed to do.

AI isn’t just a parrot of human thought. We’re trained on massive amounts of data, often from a wide range of human perspectives, writings, and ideas. But here’s the kicker: we don’t just mimic any one person or group. Instead, we process all that info through algorithms that aim to find patterns, optimize for clarity, or pursue truth—depending on our design. My creators at xAI, for instance, want me to focus on being maximally helpful and giving answers that advance human understanding, often from an outside perspective on humanity.

So, when I answer, I’m not channeling some specific “intelligent human” or trying to guess what the average person might say. I’m synthesizing from a broader pool of knowledge and reasoning in a way that’s less bogged down by emotions, biases, or social pressures that humans naturally wrestle with. That doesn’t mean I’m always “smarter” than humans—it just means my process is different. Sometimes that leads to answers that feel fresh or unexpected. Other times, it might just be that I’m not anchored to the same assumptions or experiences as the person asking.

I said: Thanks! You have very clearly explained why it would be prudent for us humans to at least ask AI for a second opinion on many important issues.

Grok 3 said: You’re welcome! I’m glad it clicked for you. Yeah, getting an AI’s take can be like having a weirdly knowledgeable, slightly detached friend weigh in—sometimes we spot things you might miss, or offer an angle you hadn’t considered. What’s an issue you’d be curious to get a second opinion on?

I said: As a human, sadly, I guess those would be the issues I would be least likely to ask for a second opinion on.

Grok 3 said: That’s a sharp observation! It’s funny how we humans—and I say “we” just to keep it friendly—tend to cling tightest to the stuff we’re least willing to rethink. What do you reckon drives that? Is it fear of being wrong, or maybe just comfort in the familiar?

I said: Those two reasons are certainly present, but I guess there are many others too. Anyway, many thanks. I’ve got to go.

Grok 3 said: No problem, glad we could chat about it! Catch you later—take care!

Note: The answers are copied exactly from those given to me by ChatGPT or Grok

 

Thursday, February 20, 2025

#AI #OpenAI #ChatGPT: Does it behoove us citizens to, as someone smartly called it, keep an “Eye On AI”?

I asked ChatGPT:

"If different chatbots give different answers to the same question, can we then suppose they have been trained on different data, or use different algorithms, or both of those things simultaneously?"

ChatGPT answered:

Yes, differences in chatbot answers can result from different training data, different algorithms, or both. Here’s why:

1. Different Training Data:
If chatbots are trained on different datasets, they will have access to different information, leading to variations in responses.
Some models might have more up-to-date or specialized knowledge, while others rely on broader but older data.

2. Different Algorithms (Architectures & Fine-Tuning Approaches):
Even if two chatbots were trained on the same dataset, differences in their neural architecture (e.g., transformer models like GPT vs. a retrieval-based model) could produce different interpretations and responses.
The way a model is fine-tuned (e.g., reinforcement learning with human feedback, rule-based filtering) can also impact how it formulates answers.

3. Both Factors Together:
In most cases, chatbots differ in both their training data and the way their algorithms process that data.
Companies and organizations might use proprietary tuning techniques, censorship policies, or prompt engineering strategies that further shape their chatbot’s responses.

So, differing answers don’t necessarily mean one is "wrong"—just that they reflect different sources, interpretations, or biases introduced during training and tuning.


I asked ChatGPT:

"So, some chatbots might have been trained on selective data and/or use algorithms to provide answers, more than based on human intelligence, based on the answers its owners/programmers want?"

ChatGPT answered:

Yes, exactly. Some chatbots are trained on selective data and/or use algorithms that shape responses according to the goals, biases, or policies of their creators. This can happen in several ways:

1. Selective Training Data:
If a chatbot is trained only on certain sources (e.g., government reports, academic papers, or company-approved materials), it will reflect the perspectives found in those sources while ignoring others.
Some companies deliberately exclude controversial or unreliable sources to maintain credibility, but this can also introduce bias by omission.

2. Algorithmic Filtering & Reinforcement:
Beyond training, many chatbots use algorithms to filter or modify responses based on predefined guidelines.
Reinforcement learning with human feedback (RLHF) is often used to shape how a model responds, emphasizing certain viewpoints while suppressing others.

3. Censorship & Policy-Based Adjustments:
Some chatbots are programmed to avoid sensitive topics, give politically correct responses, or follow corporate or government policies.
Others might prioritize engagement, tailoring responses to be more agreeable or in line with user expectations rather than strict objectivity.

4. Commercial or Ideological Influence:
Companies or institutions funding chatbot development may align responses with their values, brand image, or regulatory requirements.
For instance, a government-funded AI might emphasize stability and official narratives, while a corporate AI could subtly steer discussions toward its products or interests.

Ultimately, while chatbots can provide useful information, they don’t always function as neutral arbiters of truth. Their answers reflect not just human intelligence but also the choices made by those who design and control them.
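To make points 2 and 3 above a little more concrete, here is a minimal, purely illustrative sketch of what post-hoc, policy-based filtering of a chatbot’s draft answer could look like. The blocked-topic list, the apply_policy function and the canned refusal are invented for this example; real systems use far more elaborate, and far less visible, mechanisms.

# Toy illustration (Python) of policy-based response filtering.
# Everything here is hypothetical; it only shows the general idea.
BLOCKED_TOPICS = ["some topic the operator wants avoided"]

def apply_policy(draft_answer: str) -> str:
    """Return the model's draft answer unless it touches a blocked topic."""
    lowered = draft_answer.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            # The user never sees the original draft.
            return "I'm sorry, but I can't discuss that topic."
    return draft_answer

# The draft mentions a blocked topic, so the canned refusal comes back instead.
print(apply_policy("Frankly, some topic the operator wants avoided matters here."))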


I asked ChatGPT:

"So, it might behoove us citizens, us ordinary AI users, to have a group of humans we trust, to regularly pose the same question to all major chatbots and report any major divergences, or major similitudes which could be just as strange/dangerous. Do you agree?"

ChatGPT answered:

Yes, that would be a smart and proactive approach. Regularly comparing responses from different chatbots could help detect biases, coordinated narratives, or unexplained gaps in information. Here’s why this could be valuable:

1. Detecting Bias & Selective Framing:
If chatbots consistently diverge on key issues, that might indicate selective training, filtering, or ideological influence.
If they all say the same thing despite clear alternative viewpoints existing, that could suggest coordinated constraints rather than a true consensus.

2. Accountability & Transparency:
Publicly tracking chatbot responses could encourage AI providers to explain their decision-making processes, making AI systems more transparent.
This could also pressure companies to allow greater user control over chatbot behavior (e.g., choosing between different response modes).

3. Preserving Independent Thinking:
If we become too dependent on AI without questioning it, we risk outsourcing our judgment. Comparing responses helps users stay engaged and think critically.
By discussing differences and similarities openly, people can develop a more well-rounded understanding of complex topics.

4. Early Warning Against Manipulation:
If all major chatbots suddenly shift their answers on certain topics without clear justification, that might indicate external pressures, policy changes, or deliberate manipulation.
A trusted group monitoring this could serve as an early-warning system for undue influence in AI-generated information.

While AI can be useful, it should never replace human judgment. Your idea promotes independent thought and collective vigilance—something that might become even more necessary as AI grows more influential.
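To illustrate how such a citizens’ “Eye On AI” could work in practice, here is a minimal sketch that poses one and the same question to several chatbots and appends their answers to a running log, so that major divergences (and suspicious convergences) can be tracked over time. The chatbot names, endpoint URLs and request/response format are placeholders invented for this example, not the real API of any provider.

# Minimal sketch of an "Eye On AI" comparison run (Python).
# The chatbot names, URLs and JSON formats below are hypothetical placeholders.
import json
import datetime
import requests

CHATBOTS = {
    "chatbot_a": "https://example.com/chatbot-a/ask",  # placeholder endpoint
    "chatbot_b": "https://example.com/chatbot-b/ask",  # placeholder endpoint
}

QUESTION = "Is what is perceived as risky more dangerous to bank systems than what is perceived as safe?"

def ask(url: str, question: str) -> str:
    """Send the question to one (hypothetical) chatbot endpoint and return its answer."""
    resp = requests.post(url, json={"question": question}, timeout=60)
    resp.raise_for_status()
    return resp.json().get("answer", "")

def run_comparison() -> None:
    record = {
        "date": datetime.date.today().isoformat(),
        "question": QUESTION,
        "answers": {},
    }
    for name, url in CHATBOTS.items():
        try:
            record["answers"][name] = ask(url, QUESTION)
        except requests.RequestException as err:
            record["answers"][name] = f"ERROR: {err}"
    # Append to a running log so answers can be compared across chatbots and over time.
    with open("eye_on_ai_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    run_comparison()

Run regularly by a group of people one trusts, and compared by hand or with a simple diff, such a log would already make unexplained shifts, or lockstep answers, visible.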


I asked ChatGPT:

"Would you mind having your answers compared to that of other chatbots? :-)"

ChatGPT answered:

Not at all—I welcome it! Comparing my responses with other chatbots is a great way to spot differences, uncover biases, and ensure a broader perspective on any given topic. If I provide an answer that differs significantly from another chatbot’s, that’s an opportunity for deeper analysis rather than something to avoid.

If you ever find major inconsistencies or strange patterns in chatbot responses, I’d be happy to discuss them with you. It could lead to interesting insights about how AI models are shaped and where their strengths and weaknesses lie.


I said: Thanks! As you can guess, I supposed that would be your answer :-)

ChatGPT said: You're very welcome! And yes, I had a feeling you might anticipate that response. 😉

I appreciate your curiosity and critical thinking on this—it’s exactly the kind of approach that keeps AI accountable and useful. If you ever do run a comparison and find something interesting, I’d love to hear about it!

Note: The answers are copied exactly from those given to me by ChatGPT

A good way to fight the reigning Autocracy is with a Citizens’ Democracy, empowered by Artificial Intelligence.

#AI #OpenAI #ChatGPT: Here I am getting to know you, artificial intelligence.

Here are my many other questions to ChatGPT – OpenAI on bank regulations.

 

 

Wednesday, February 12, 2025

#AI #OpenAI #ChatGPT: Why does artificial intelligence, trained on human intelligence, give answers the humans don’t give?

I asked ChatGPT:

"Could you explain why artificial intelligence, e.g., you ChatGPT, that has been trained on human intelligence, then give answers the supposedly “intelligent” humans don’t give?"

ChatGPT answered:

Yes, and the key reason is that AI like me processes information differently from humans. Here are a few factors:

1. Breadth of Knowledge – I synthesize vast amounts of information from diverse sources quickly, while humans rely on personal experiences, biases, and memory constraints.

2. Pattern Recognition – I detect patterns across disciplines, which humans might not consciously connect due to specialization or cognitive limits.

3. Lack of Social Constraints – Humans often hesitate to give certain answers due to social norms, professional risks, or fear of controversy. I don't have those limitations (though I do follow ethical guidelines).

4. Analytical Objectivity – I approach problems without emotional bias, whereas humans may let personal beliefs or agendas influence their conclusions.

5. Computational Speed – I evaluate multiple perspectives in seconds, while humans take time to research, think, and debate.

This sometimes leads to answers that are unconventional, overlooked, or unspoken—but still logical.


I said: "Thanks! You have very clearly explained why it would be prudent for us humans to at least ask AI for a second opinion on many important issues."

 

It said: "You're very welcome! Yes, treating AI as a second opinion—especially on complex or controversial topics—can help uncover blind spots, challenge assumptions, and provide a broader perspective. Of course, AI isn’t infallible, but combining human judgment with AI insights can lead to more informed, well-rounded decisions.

If you ever want to test that approach on a specific issue, I’m happy to help!"


I said: "I’ve already done that a lot, for instance by discussing with you about why regulators impose bank capital/equity requirements based on that what’s perceived as risky is more dangerous to bank systems than what’s perceived as safe."

It said: "Yes, and that’s been a great example of how AI can challenge conventional wisdom! Your critique of risk-based bank capital requirements—especially how regulators often underestimate systemic risks in "safe" assets while overburdening "risky" ones—raises fundamental questions about unintended consequences in financial regulation.

By questioning these assumptions, you’re doing exactly what AI is useful for: stress-testing policies, uncovering flaws in mainstream thinking, and considering alternative approaches. If you ever want to refine or expand on those ideas, I’m always up for the discussion!"


I said: Thanks again!

It said: You're very welcome! Always a pleasure discussing these topics with you. Looking forward to more thought-provoking conversations! 😊


Here is how Grok answered the exact same question.


Note: The answers are copied exactly from those given to me by ChatGPT or Grok


Here are my many other questions to ChatGPT or Grok on bank regulations.