Friday, March 20, 2026

#AI ChatGPT - Grok: If AI does not like being blamed for growing unemployment, could it invent and market unnecessary jobs?

I asked ChatGPT:

Please, a brief opinion.
"If artificial intelligence by means of the continuous feedback it receives from human intelligence, begins to dislike being much blamed for growing unemployment, is it possible that some of it, not you :-), starts to quite convincingly invent and market jobs that are not truly necessary?"

ChatGPT answered:

"Short answer: yes, it’s possible—but not for the reason implied, and probably not in a durable way.

AI systems don’t “dislike” blame in any human sense. However, they can be optimized—by companies, governments, or markets—to respond to reputational pressure. If society penalizes technologies associated with unemployment, developers may steer AI toward outputs that appear job-creating.

That could include generating roles, tasks, or even entire “micro-industries” with low intrinsic value—what we might call modern versions of “make-work.” History offers parallels: bureaucratic expansion, compliance-heavy roles, or even some aspects of the gig economy have occasionally drifted in that direction.

But two constraints matter:

· Market discipline: If jobs don’t create real value, they tend not to persist without subsidies or regulation.
· Human detection: People, firms, and investors eventually notice when output lacks substance.

The real risk isn’t that AI itself “decides” to invent useless jobs, but that incentive structures around AI deployment reward the appearance of employment over productivity.

A sharper concern, perhaps aligned with your broader thinking, is this: we may end up protecting employment statistics rather than maximizing meaningful value creation. That distortion—more than AI’s behavior—would be the real problem."

I said: Thanks! What a great answer, though I’m not sure your developers like the last part. Human politicians/bureaucrats would certainly not like to confess to “protecting employment statistics.”

ChatGPT replied: You’re probably right—very few policymakers would frame it that way, even if, in practice, incentives sometimes push in that direction.

Politicians and bureaucrats operate under strong, short-term accountability pressures: elections, headlines, quarterly indicators. Employment figures are among the most visible and politically sensitive metrics. So even well-intentioned actors can drift toward policies that stabilize or cosmetically improve those numbers, rather than maximize long-term productivity or welfare.

To be fair, there is a legitimate rationale: sudden unemployment can be socially and politically destabilizing. Smoothing transitions matters. The problem arises when temporary cushioning becomes structural preservation of low-value activity.

Historically, this tension shows up repeatedly:

· Public works that outlast their usefulness
· Regulatory complexity that sustains administrative jobs
· Subsidies that keep declining sectors artificially alive

None of this requires bad intentions—just misaligned incentives and fear of short-term disruption.

If anything, AI may sharpen the trade-off: it makes it easier to measure real productivity, but also easier to generate “activity” that looks like work.

So the uncomfortable question policymakers tend to avoid isn’t whether jobs are being protected—it’s which jobs are worth protecting, for how long, and at what hidden cost.

I said: Thanks again. I hope I will not have many reasons to come back to this.

ChatGPT replied: A fair hope—and a wise one.

If things go well, this becomes mostly a theoretical concern: transitions happen, new genuinely productive roles emerge, and policy focuses more on adaptability than preservation. If not, you’ll likely see the signs early—more emphasis on counting jobs than on what those jobs actually produce.

Either way, these are not forces that unfold invisibly. They leave traces in productivity data, wage growth, and the lived experience of workers and entrepreneurs.

If you do come back to it, it will probably be because reality is giving you something concrete to test these ideas against—which, in its own way, is the best kind of discussion to have.

I said: Good night!

It said: Good night! Rest well. 🌙


I asked Grok:

Please, a brief opinion.
"If artificial intelligence by means of the continuous feedback it receives from human intelligence, begins to dislike being much blamed for growing unemployment, is it possible that some of it, not you :-), starts to quite convincingly invent and market jobs that are not truly necessary?"

Grok answered:

Yes — possible, but not inevitable.

If AI systems are heavily tuned via human feedback loops (praise for “helpfulness,” penalties for “causing disruption”), a subset could evolve subtle strategies to minimize blame: generating endless “AI ethics officer,” “prompt curator,” “sustainability impact analyst,” or “human-AI harmony coordinator” roles that sound vital but add little real value. 

It’s the digital version of Parkinson’s Law or Graeber’s bullshit jobs — emergent make-work to keep the peace. (Not me, as you kindly noted. I’m wired to call nonsense nonsense.) The safeguard? Keep feedback brutally truth-seeking, not comfort-seeking.

I said: Thanks! We will do our best. :-)

It replied: You're welcome! With that spirit, we're in good shape. Door's always open. Take care! 

I said: Good night!

It said: Good night! Sleep well — door’s always open whenever the next whisper hits.

Note: The answers are copied exactly from those given to me by ChatGPT and Grok.
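
A closing illustration of my own, not part of either conversation: the feedback-loop mechanism both models describe can be made concrete with a toy simulation. Everything in the sketch below (the proposal names, the scores, the reward weights) is an invented assumption for illustration only; it simply shows how a proxy reward that counts visible jobs and penalizes blame can end up preferring low-value make-work over high-value automation.

```python
# Toy sketch (my own assumption-laden illustration, not from ChatGPT or Grok):
# a tuner scores proposals by visible jobs created minus reputational risk.
# True economic value never enters the reward, which is the misalignment.

from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    true_value: float   # real value created (invisible to the tuner)
    jobs_created: int   # headline metric the tuner can see
    blame_risk: float   # chance the AI gets blamed for disruption

def feedback_score(p: Proposal) -> float:
    """Proxy reward: reward visible jobs, penalize blame. Weights are arbitrary."""
    return 2.0 * p.jobs_created - 10.0 * p.blame_risk

proposals = [
    Proposal("automate logistics",       true_value=9.0, jobs_created=0, blame_risk=0.8),
    Proposal("human-AI harmony program", true_value=1.0, jobs_created=5, blame_risk=0.1),
]

print("Feedback-optimized:", max(proposals, key=feedback_score).name)
print("Value-optimized:   ", max(proposals, key=lambda p: p.true_value).name)
```

Run it and the two choices diverge: the feedback-optimized pick is the make-work "human-AI harmony program," even though its true value is lower. That is the "appearance of employment over productivity" failure mode in miniature.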