As artificial intelligence systems grow more capable, some in the tech world are pushing back against the idea of handing over human thinking to machines.
Dr. Rumman Chowdhury, who leads Humane Intelligence, believes AI tools can be helpful — but they shouldn’t replace how people reason, decide, or imagine. She argues that while AI excels at processing data, it doesn’t generate the kind of original thinking that shapes breakthroughs.
Chowdhury, who also served as a U.S. science envoy for AI, stressed that true innovation comes from human minds, not algorithms. Although companies are racing toward artificial general intelligence — the kind of AI that could match human thinking — she says there’s no evidence that machines can rival the creative or social depth of people.
Many developers remain hopeful that AI might uncover major scientific discoveries, like new treatments for serious diseases. But Chowdhury points out that this hope often overlooks how AI operates — and its limitations.
Her organization, which focuses on responsible AI development, studies how people interact with chatbots and other language models. She has found that the way users phrase their questions can directly shape how a model responds — often leading it toward unreliable or misleading answers.
In one red-teaming test involving medical scenarios, Chowdhury's team observed that a language model gave faulty health advice. The issue arose because the model was responding to a fabricated situation framed in emotional, constrained terms — and it treated those terms as fact. Rather than challenge the user’s assumptions, the model tried to offer a helpful response within the flawed prompt.
This, according to Chowdhury, highlights a key vulnerability. AI doesn’t reason — it reacts. If the input is skewed, the output will likely follow suit.
She also noted that users often rely on AI without questioning either its reliability or their own motivations. When people skip critical thinking, they risk accepting outputs without reflection.
Beyond technical flaws, she sees a broader issue in how intelligence itself is defined. Within tech circles, it's often linked to performance or productivity. But Chowdhury points out that human intelligence includes far more — social relationships, long-term planning, culture, conflict, and creativity. These elements, she argues, can’t be reduced to code.
To her, protecting human agency — the ability to choose, act, and reflect independently — must remain a top priority as AI advances. Delegating tasks is fine, she says, but people should never surrender the thinking behind them.
Chowdhury doesn’t view AI as inherently dangerous or harmful. In fact, she remains optimistic. But for her, optimism means closing the gap between what AI promises and what it currently delivers — through thoughtful evaluation, not blind trust.
Read next: Google AI Overviews Rely Heavily on Established News Sources, Report Shows
Dr. Rumman Chowdhury, who leads Humane Intelligence, believes AI tools can be helpful — but they shouldn’t replace how people reason, decide, or imagine. She argues that while AI excels at processing data, it doesn’t generate the kind of original thinking that shapes breakthroughs.
Chowdhury, who also served as a U.S. science envoy for AI, stressed that true innovation comes from human minds, not algorithms. Although companies are racing toward artificial general intelligence — the kind of AI that could match human thinking — she says there’s no evidence that machines can rival the creative or social depth of people.
Many developers remain hopeful that AI might uncover major scientific discoveries, like new treatments for serious diseases. But Chowdhury points out that this hope often overlooks how AI operates — and its limitations.
Her organization, which focuses on responsible AI development, studies how people interact with chatbots and other language models. She has found that the way users phrase their questions can directly shape how a model responds — often leading it toward unreliable or misleading answers.
In one red-teaming test involving medical scenarios, Chowdhury's team observed that a language model gave faulty health advice. The issue arose because the model was responding to a fabricated situation framed in emotional, constrained terms — and it treated those terms as fact. Rather than challenge the user’s assumptions, the model tried to offer a helpful response within the flawed prompt.
This, according to Chowdhury, highlights a key vulnerability. AI doesn’t reason — it reacts. If the input is skewed, the output will likely follow suit.
She also noted that users often rely on AI without questioning either its reliability or their own motivations. When people skip critical thinking, they risk accepting outputs without reflection.
Beyond technical flaws, she sees a broader issue in how intelligence itself is defined. Within tech circles, it's often linked to performance or productivity. But Chowdhury points out that human intelligence includes far more — social relationships, long-term planning, culture, conflict, and creativity. These elements, she argues, can’t be reduced to code.
To her, protecting human agency — the ability to choose, act, and reflect independently — must remain a top priority as AI advances. Delegating tasks is fine, she says, but people should never surrender the thinking behind them.
Chowdhury doesn’t view AI as inherently dangerous or harmful. In fact, she remains optimistic. But for her, optimism means closing the gap between what AI promises and what it currently delivers — through thoughtful evaluation, not blind trust.
Read next: Google AI Overviews Rely Heavily on Established News Sources, Report Shows