
What happens when scientists trust AI more than colleagues?

Artificial intelligence has crossed a threshold in the modern workplace. It is being used for everything from helping employees manage schedules to supporting financial forecasts. A similar shift is now unfolding inside research laboratories.

There is currently a boom in national initiatives to accelerate the integration of AI into science. These include the US Genesis Mission and South Korea’s AI Co-Scientist Challenge. But despite clear benefits, we believe these institutional drives are neglecting important issues that carry immense risks for scientific research.

Today, more than half of researchers use AI for work tasks, including carrying out reviews for academic journals and designing experiments.

AlphaFold is an AI tool developed to predict the structures of proteins for scientific research. Working out protein structures was incredibly time-consuming before its release – taking years in some cases. The same tasks now take hours. AlphaFold’s development was recognised with the 2024 Nobel Prize in Chemistry.

AI tools for use in medicine now assist with everything from the interpretation of results from X-rays and MRIs to supporting doctors’ decisions on the diagnosis and treatment of disease.

Our key concern is that hasty adoption of AI may gradually erode the scientific culture and human relationships that sustain rigorous research. It starts with the erosion of core thinking skills, as researchers increasingly rely on AI to do that thinking for them. This can alienate researchers from the deeper reasoning behind their work.

Loss of independent thinking

Early-career scientists are particularly vulnerable, because they are still developing their scientific reasoning. Troubleshooting skills and the critical evaluation of ideas may be outsourced to AI systems.

AI’s fluent, confident and immediate responses can easily be mistaken for authoritative information. Once researchers begin to treat AI outputs as implicitly correct, the responsibility for judgment calls may gradually shift from them to their machines.

AI’s persuasive arguments, probably drawn from mainstream ideas in its training data, could replace more rigorous, time-consuming and creative research approaches. These approaches are traditionally shaped through critical back-and-forth discussions between researchers.

Demis Hassabis of DeepMind (left, pictured with King Carl Gustaf of Sweden) was a recipient of the 2024 Nobel Prize in Chemistry for the development of AlphaFold, an AI-based scientific tool. Pontus Lundahl / EPA Images

This can evolve into over-dependence. As reasoning is delegated to AI, researchers become less confident at working unaided. Unfortunately, modern scientific labs are full of conditions that reinforce this dependence, such as intense competition, long hours and frequent isolation.

Limited mentorship, and feedback from colleagues that is delayed, critical or politically influenced, can exacerbate the problem. In contrast, AI provides an immediate, patient and nonjudgmental alternative.

Scientists interact with AI systems daily in order to check computer code, revise illustrations or charts, draft the language for grant applications, clarify scientific concepts, and at times, ask for personal advice.

As researchers begin to trust the AI assistant, it can start to function less like a tool and more like a companion. This carries the risk of emotional dependency, too. When OpenAI retired its GPT-4o model, many users expressed a form of grief.

Replacing relationships

Another important concern is the potential for replacement of human relationships in the office or research lab. AI is always available, nonjudgmental, noncompeting – and indifferent to office politics, with no ego to defend. It remembers context, adapts to individual working styles, and offers reassurance without social cost.

Human scientific relationships are more complicated, involving nuance, criticism, time constraints, hierarchy – and sometimes, ulterior motives. For early-career researchers especially, these interactions can feel risky.

Early-career researchers may be particularly at risk of over-reliance on AI systems for advice. PeopleImages / Shutterstock

Critical feedback from humans can feel adversarial, while AI responses feel supportive. So, early-career scientists might have good reason to prefer testing ideas or seeking validation through AI, rather than their peers or superiors.

The scientific community cannot thrive without opposing ideas, deep scepticism of consensus, vigorous debate and rigorous mentoring. If AI begins to replace these, it threatens the foundations on which scientific progress has always been made.

The current debate on AI safety mostly focuses on errors in models’ responses, or on AI systems circumventing the restrictions imposed on the way they work, known as “jailbreaking”. But such safeguards have little bearing on AI models’ societal and cultural impact.

Given the recent drives to get scientists to work more closely with AI assistants, we should educate our young scientists on the risks of AI dependence. We also need benchmarks to rigorously test AI models for their ability to establish boundaries with users, to prevent overdependence and other unhealthy interactions.

Finally, all of us – but especially institutional leaders – should understand the capabilities and permanence of AI companionship. AI companions are here to stay, and we should learn to make our relationships with them as healthy as possible.
