10 Groundbreaking Ways AI is Revolutionizing Scientific Research

I was wondering, is artificial intelligence really revolutionizing scientific research?

Every day, new tools appear that speed up scientific discovery, and it’s worth pausing on that, because we often wonder whether we could have done this or that 10, 20 or 50 years ago.

Seriously, do you think Generation X could have imagined that a game like Cyberpunk 2077 would exist? (Personally, it’s my favorite game, I love it far too much!) Or that we’d get answers on command from artificial intelligence? Of course not!

That’s why today we’re going to tell you what AI does at every stage of the research process, from hypothesis formulation to data analysis. It’s going to be fascinating!

Accelerating scientific discovery

Scientists examining yellow chemicals in glassware in a laboratory
Credits: Image by jcomp on Freepik

There’s one thing that matters in every scientific discipline if we want to use AI in scientific research: its ability to process astronomical quantities of data and to identify patterns within them.

3D representation of DNA
Credits: Image by freepik

Take genomics as an example. According to the dictionary, genomics is the branch of genetics that studies genomes, a genome being the full set of hereditary material, composed of nucleic acids (DNA or RNA), of a cellular organelle, organism or species.

In genomics, then, AI is very useful for analyzing huge datasets to discover which diseases might be associated with which genes, and vice versa.
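To make that concrete, here’s a minimal sketch of the idea, using nothing but synthetic data: a toy classifier is trained on made-up expression values, and the genes with the largest weights become candidate disease associations. It illustrates the pattern-finding step only, not a real genomics pipeline.

```python
# Toy sketch: rank genes by how well their (synthetic) expression levels
# separate patients from controls. Not a real genomics pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50
X = rng.normal(size=(n_samples, n_genes))   # fake expression matrix
y = rng.integers(0, 2, size=n_samples)      # 0 = healthy, 1 = disease

# Make three "causal" genes actually shift with the disease label.
causal = [3, 17, 42]
X[:, causal] += y[:, None] * 1.5

model = LogisticRegression(max_iter=1000).fit(X, y)

# Genes with the largest absolute weights are candidate associations.
ranked = np.argsort(-np.abs(model.coef_[0]))
print("Top candidate genes:", ranked[:5].tolist())
```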

A researcher working with a plant in a biotechnology laboratory
Credits: image by freepik

If we now take the environmental sciences, AI can process data streaming in from sensors and satellites, which lets it monitor climate change and give advance warning of natural disasters. It won’t be very accurate at first, but it will keep getting better.
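To give a feel for what “processing sensor data” can mean in practice, here’s a minimal sketch that flags anomalies in a synthetic temperature stream with a rolling z-score. The readings are invented; operational systems fuse satellite and ground data and use far richer models than this.

```python
# Toy sketch: flag unusual readings in a sensor stream with a rolling z-score.
import numpy as np

rng = np.random.default_rng(1)
temps = rng.normal(loc=15.0, scale=1.0, size=500)  # synthetic daily readings
temps[350] = 24.0                                   # inject one extreme event

window = 30
for i in range(window, len(temps)):
    recent = temps[i - window:i]
    z = (temps[i] - recent.mean()) / recent.std()
    if abs(z) > 4:
        print(f"Day {i}: reading {temps[i]:.1f} looks anomalous (z = {z:.1f})")
```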

Then there’s the discovery and development of medicines. The way drugs are currently discovered is insanely time-consuming and costly, but with artificial intelligence we could analyze databases of chemical compounds in no time at all, getting an early read on whether candidates are likely to be effective, not to mention safe.
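As a flavor of that first pass over a compound database, here’s a small sketch using RDKit and Lipinski’s rule of five, a classic drug-likeness filter. To be clear, this rule-based screen isn’t the AI step itself, just the kind of filtering that learned models then build on, and the two molecules are ordinary examples.

```python
# Toy sketch: filter candidate compounds with Lipinski's rule of five,
# a rough first-pass test of oral drug-likeness.
from rdkit import Chem
from rdkit.Chem import Descriptors

candidates = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

def passes_rule_of_five(mol):
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    print(name, "drug-like" if passes_rule_of_five(mol) else "filtered out")
```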

Engineers developing a robot.
Credits: image on pexels

Robotics and automation play an important part in this too. Robots are designed to repeat the same tasks over and over, so they can take on exactly that kind of work while scientists concentrate on other things. Materials science is a case in point: robots can synthesize and test new materials in a fraction of the usual time.

We can also improve data analysis and modeling.

A businessman interacting with data analytics dashboards and KPI charts on a virtual screen
Credits: stock photo by vecteezy

It matters for today’s scientists to have AI models that can predict and simulate well. This is particularly useful in climate science, for example: if we need to know what impact different global weather patterns might have, AI is a great asset for running those simulations.

We can even use such models to probe the behavior of subatomic particles, and if that means nothing to you, just know that it’s the kind of thing that’s effectively impossible to study through direct physical experiment alone.

On the one hand, if researchers use natural language processing and knowledge graphs, they can blend different datasets and pull important information out of the scientific literature.

On the other hand, the same tools can be used in biomedicine: since analyzing data is AI’s specialty, it can comb through published research to find potential drugs or even suggest personalized therapeutic approaches.
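Here’s a toy sketch of that knowledge-graph idea: link entities that are co-mentioned in abstracts and see what clusters around a drug. The abstracts and entity list are invented, and real pipelines use trained biomedical named-entity recognizers rather than simple keyword matching.

```python
# Toy sketch: build a tiny co-mention graph from (invented) abstracts.
import itertools
import networkx as nx

abstracts = [
    "Metformin shows promise beyond type 2 diabetes, with signals in ageing studies.",
    "A trial of metformin in colorectal cancer prevention reported mixed results.",
    "Statins remain first-line therapy for hypercholesterolemia.",
]
entities = ["metformin", "statins", "diabetes", "cancer", "hypercholesterolemia", "ageing"]

graph = nx.Graph()
for text in abstracts:
    found = [e for e in entities if e in text.lower()]
    for a, b in itertools.combinations(found, 2):
        # Edge weight counts how often two entities share an abstract.
        weight = graph.get_edge_data(a, b, {}).get("weight", 0) + 1
        graph.add_edge(a, b, weight=weight)

print("Entities linked to metformin:", list(graph.neighbors("metformin")))
```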

 A warm welcome to the scientific research manager! 

An interesting study cited by TechXplore, published in Research Policy by Maximilian Koehler and Henry Sauermann, examines a new role for artificial intelligence in scientific research. Guess what it is! Well, as you saw in the header, it’s the role of manager supervising human workers.

Image of Maximilian Koehler
Credits: Maximilian Koehler | ESMT Berlin

Image of Henry Sauermann
Credits: Henry Sauermann (@HSauermann) on X.com

This concept of algorithmic management (AM) represents a change in the way research projects are conducted, and could enable us to think bigger and operate on a larger scale and with greater efficiency.

Koehler and Sauermann’s research shows that AI can indeed replicate what human managers do, and in certain parts of research management it can even supervise human workers.

They identify five key managerial functions that AI can perform effectively (a toy task-allocation sketch follows the list):

1. Task allocation and assignment

2. Leadership

3. Coordination

4. Motivation

5. Learning support
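As promised above, here’s a toy sketch of function 1, task allocation: a greedy matcher that assigns open research tasks to contributors by skill overlap. The names, tasks and skills are invented; real algorithmic-management systems also weigh workload, track record and availability.

```python
# Toy sketch: greedy task allocation by skill overlap.
tasks = {
    "annotate galaxy images": {"astronomy", "labeling"},
    "clean survey data": {"statistics", "python"},
    "review protocol draft": {"biology", "writing"},
}
contributors = {
    "ana": {"python", "statistics"},
    "ben": {"astronomy", "labeling", "writing"},
    "chloe": {"biology", "writing"},
}

assignments = {}
for task, needed in tasks.items():
    # Pick the contributor whose skills overlap most with what the task needs.
    best = max(contributors, key=lambda person: len(contributors[person] & needed))
    assignments[task] = best

print(assignments)
```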

The researchers studied various projects using online documents, interviews with organizers, AI developers and project participants, and even participated in some projects themselves.

This approach let them identify which projects actually use algorithmic management and understand how the AI manages to do all this.

In fact, artificial intelligence is showing up in this kind of algorithmic-management role more and more often, which raises the stakes: used carelessly, it could end up dragging research productivity down instead of lifting it.

 

As Koehler states, quoted by TechXplore,

“The capabilities of artificial intelligence have reached a point where AI can now significantly enhance the scope and efficiency of scientific research by managing complex, large-scale projects”.

So we’re all asking the same question: what exactly are the

Key benefits of AI in research and education 

 

According to the National Institutes of Health (NIH), AI could dramatically transform research and education through several key benefits:

1. Data processing:

As I mentioned above, AI’s specialty is processing huge amounts of data, which is a big advantage for researchers working with elaborate datasets who want to derive worthwhile insights from them (NIH, 2024).

2. Task automation:

Since AI can automate tasks, it can take over chores such as formatting and citation; that saves researchers time and energy, so they can concern themselves with more difficult and innovative work (NIH, 2024). A small citation-formatting sketch follows this list.

3. Personalized learning 

AI can create personalized learning paths for students, tailoring the experience to their unique needs and learning preferences (NIH, 2024).
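Here’s the citation-formatting sketch promised in point 2. It’s a plain rule-based formatter rather than an AI model, but it shows the kind of rote chore that automation takes off researchers’ plates; the reference details are made up.

```python
# Toy sketch: rule-based reference formatting (rough APA style).
def format_apa(authors, year, title, journal, volume, pages):
    """Return a rough APA-style reference string."""
    if len(authors) > 1:
        author_str = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        author_str = authors[0]
    return f"{author_str} ({year}). {title}. {journal}, {volume}, {pages}."

print(format_apa(
    authors=["Doe, J.", "Smith, A."],
    year=2024,
    title="A made-up study of algorithmic management",
    journal="Journal of Illustrative Examples",
    volume="12",
    pages="101-115",
))
```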

As usual, all is not so rosy 

I hope you already know that even in scientific research, all is not so rosy when it comes to ethics and challenges. Remember that AI’s specialty is analyzing data; so, as the NIH makes clear, if it keeps analyzing the same data over and over, or keeps finding the same things in that data, we can end up with predictions that are wrong, which leads to results that are downright bad and harmful.

It’s the same as when we use AI to write an entire article: the AI draws on the same data, and that’s why we end up with articles that bring no value to the reader, lack personal experience and plagiarize other pieces.

The same goes for AI used to write film scripts: the more you use it, the more you’ll realize that the scripts are all the same, so there’s no originality left.

Scientific research works much the same way, except that here we’re talking about sensitive data, especially in the fields of health and medical research.

Let’s not forget, too, that these biases can appear at any stage, from data collection to model evaluation, and they can lead to results that aren’t valid, results that may then influence clinical guidance or medical interventions.

Recent studies agree with this point, saying that these biases can lead to significant health disparities. If researchers are vigilant in identifying and reducing these biases, no problem!

It’s always important to make sure that the information generated by AI is fair and accurate, and not a hallucination. You don’t want to be the guinea pig in a scientific experiment that’s guaranteed to kill you, do you?

The rise of AI-generated content in scientific publications is yet another dilemma to solve. Why are we talking about this?

Because the Cornell Daily Sun reported that AI-generated articles containing, we must remember, totally absurd or fabricated information have already been submitted to, and even published in, scientific journals.

A perfect example occurred just recently, in February 2024, when Frontiers in Cell and Developmental Biology published an article entitled “Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway”.

A day after publication, readers noted that the figures were undoubtedly AI-generated and contained spelling mistakes, diagrams that represented nonsense and anatomically incorrect illustrations. The journal withdrew the article within three days. 

It’s because of stuff like this that we need robust peer-review processes and clear guidelines on how AI is used and disclosed in research publications. And at the same time, isn’t AI already being abused in academic publishing? It is! Maintaining scientific integrity is hard when technology is advancing this rapidly.

Don’t tell me that artificial intelligence is being used in paper mills!

I don’t know if you knew this, but according to the NIH, AI is even being abused in “paper mills” to produce fraudulent articles on a massive scale, and you wouldn’t believe how much this has inflated the volume of fake publications.

And with all this, can we still believe in scientific research? I wonder. The fact that these factories use AI to generate text and images makes it increasingly difficult to know whether research is genuine, and that’s bad news for a scientific literature that’s supposed to have integrity.

Also according to the NIH, Gianluca Grimaldi and Bruno Ehrler address this issue in their article “AI et al: Machines Are About To Change Scientific Publishing Forever”. They warn that

“A text-generation system combining speed of implementation with eloquent and structured language could enable a leap forward for the serialized production of scientific-looking papers devoid of scientific content, increasing the throughput of paper factories and making detection of fake research more time-consuming”.

So it’s hard to detect AI-generated content?

It’s true that publishers and editors have developed various software tools to detect similar texts and plagiarism, but that doesn’t mean that AI-generated texts can be easily identified.

However, various players in the academic and publishing world, such as publishers, reviewers and editors, increasingly want to use AI content detectors. If you’re wondering what those do, they basically try to tell texts written by humans apart from texts generated by AI, and even though such tools exist, they’re not 100% reliable.
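To see roughly how such detectors work, here’s a toy version: a tiny supervised classifier over word statistics. The “human” and “AI” example sentences are invented, and real detectors are trained on enormous corpora yet, as noted, still aren’t 100% reliable.

```python
# Toy sketch: a minimal human-vs-AI text classifier on word statistics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = [
    "We struggled with the assay for weeks before the controls finally behaved.",
    "Honestly, the second experiment failed because I mislabeled the samples.",
]
ai_texts = [
    "In conclusion, the findings underscore the importance of further research.",
    "Overall, these results highlight significant implications for the field.",
]

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)   # 0 = human, 1 = AI

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["These results underscore the importance of the field."]))
```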

Advantages of AI in scientific publishing 

Leaving aside the challenges, let’s think about what artificial intelligence has to offer in terms of advantages in the scientific publishing process.

According to Technology Networks, Dmytro Shevchenko (not the footballer, but a PhD student in computer science and data scientist at Aimprosoft) highlights several positive applications of generative AI (GAI) in publishing:

1. Creating abstracts and summaries: large language models (LLMs) can be used to generate abstracts of research articles, making it much easier for readers to grasp the conclusions and implications of the research (a minimal summarization sketch follows this list).

2. Linguistic translation: LLMs can also make it easy to translate research articles into several languages, making research results more accessible and far-reaching.

3. Text checking and correction: LLMs trained on large datasets can generate consistent and grammatically correct texts, which can improve the overall quality and readability of research articles (Technology Networks, 2024).
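And here’s the summarization sketch promised in point 1. It assumes an OpenAI-style Python client with an API key in the environment; the model name, prompt and abstract are purely illustrative, not a recommendation.

```python
# Toy sketch: ask an LLM to turn a dense abstract into a lay summary.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

abstract = (
    "We report a genome-wide association study of 14,000 participants "
    "identifying three novel loci linked to early-onset hypertension."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Summarize research abstracts for a lay audience in two sentences."},
        {"role": "user", "content": abstract},
    ],
)
print(response.choices[0].message.content)
```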

Andrew Stapleton, former chemistry researcher and current content creator for academics, agrees:

“AI is a fantastic tool to streamline and speed up the publishing process. So much of the boring and procedural can be written faster (abstracts, literature reviews, summaries and keywords etc.)”

 

AI policy developments in scientific publishing

According to Technology Networks, the scientific publishing community has been debating how AI should be used in scientific research and writing. In early 2023, many publishers adopted restrictive positions, with some, such as Science, banning the use of AI tools altogether. Herbert Holden Thorp, editor-in-chief of Science, said:

“The scientific record is ultimately one of the human endeavor of struggling with important questions. Machines play an important role, but as tools for the people posing the hypotheses, designing the experiments and making sense of the results. Ultimately the product must come from – and be expressed by – the wonderful computer in our heads” (Technology Networks, 2024).

However, given the rapid evolution of the technology, many journals have seen fit to change their policy.

Science, for example, changed its stance later in the year, now allowing authors to declare how AI has been used in their work. 

Other major journals have done the same: they require you to disclose whether you’ve used AI, but remain firmly against using AI to generate or modify research images. (Well played, Science, well played!)

Policies vary from publisher to publisher:

  • JAMA wants detailed information on any AI software used, including name, version, manufacturer and dates of use (a minimal disclosure-record sketch follows this list).
  • Springer Nature has specific policies for peer reviewers, who are asked not to upload manuscripts into generative AI tools, since the safety of those tools can’t be guaranteed.
  • Elsevier accepts the use of AI in writing manuscripts to improve readability and language, but still requires authors to declare that they have used AI when they submit (Technology Networks, 2024).
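Here’s the disclosure-record sketch mentioned in the JAMA bullet: the required fields (name, version, manufacturer, dates of use) boiled down to a small data structure you could attach to a submission. The tool name, dates and purpose are hypothetical.

```python
# Toy sketch: a JAMA-style AI-use disclosure as a simple record.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIUseDisclosure:
    software_name: str
    version: str
    manufacturer: str
    dates_of_use: tuple[date, date]
    purpose: str

disclosure = AIUseDisclosure(
    software_name="ExampleLLM",   # hypothetical tool
    version="1.2",
    manufacturer="Example Labs",
    dates_of_use=(date(2024, 3, 1), date(2024, 3, 20)),
    purpose="Language editing of the Methods section",
)
print(asdict(disclosure))
```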

More policy implementation challenges? It gets boring in the end!

Despite these efforts, implementation and enforcement of AI policies in scientific publishing remain problematic.

A recent incident involving an Elsevier journal put these difficulties in a new light: it published a peer-reviewed paper whose introduction, you guessed it, had been generated by artificial intelligence.

This particularly upset the public, who wondered whether the guidelines were really being followed (Technology Networks, 2024).

A study by Ganjavi et al. explored the extent and content of guidelines for AI use among the top 100 academic publishers and scientific journals.

They found that only 24% of publishers provide guidelines, with only 15% among the top 25 publishers analyzed.

The authors concluded that the guidelines of some leading publishers were “deficient” and noted substantial variations in the permitted uses of generative AI and in disclosure requirements (Technology Networks, 2024).

Towards a robust framework for AI in scientific publishing

To meet these challenges, experts call for a comprehensive approach to managing the use of AI in scientific research and publishing.

Nazrul Islam and Mihaela van der Schaar suggest a multi-faceted strategy that includes:

1. Developing comprehensive guidelines for the acceptable use of AI in research.

2. Implementing suitable peer-review processes to identify and scrutinize AI-generated content.

3. Fostering collaboration between clinicians, editorial boards, AI developers and researchers to understand the capabilities and limitations of AI.

4. Creating a strong framework for transparency and accountability in the disclosure of AI use.

5. Conducting ongoing research into the impact of AI on scientific integrity (Technology Networks, 2024).

Encouragingly, progress is already being made in developing these frameworks. The “ChatGPT and Generative Artificial Intelligence Natural Large Language Models for Accountable Reporting and Use” (CANGARU) project, led by Giovanni Cacciamani and his colleagues, aims to establish consensus guidelines for the use of AI in academia.

It’s a global, multi-disciplinary project involving more than 3,000 academics from a wide range of fields, and has the following main stages:

1. A systematic review of GAI/GPT/LLM applications in university research.

2. A bibliometric analysis of existing author guidelines mentioning GAI/GPT/LLM.

3. A Delphi survey to establish agreement on the elements of the guidelines.

4. Development and dissemination of the finalized guidelines and complementary documents (Technology Networks, 2024).

The future of AI in scientific research

It’s quite certain that AI technologies will continue to evolve, and this may expand their role in scientific research: more advanced models could even generate hypotheses, design experiments and make scientific discoveries with little human intervention.

But… is this such a good idea? As you’ll recall, there are guidelines to follow, so we still need humans on hand to guide these systems, interpret the results and ensure that scientific progress respects ethical principles and the needs of society.

On the other hand, as AI becomes an integral part of scientific research, we have an opportunity to tackle long-standing problems in academia.

As Mr. Stapleton, quoted by Technology Networks, points out,

“AI has just highlighted the issues with an already broken and gamified academic system. It is the breaking point where we now need to address the underlying issues of the publish or perish culture that has eaten away the foundations of academia. It’s easy to blame AI as the cause of the issues, when in fact it is the magnifier that shows just how bad things have gotten” (Technology Networks, 2024).

 Conclusion

AI is undoubtedly transforming scientific research, offering unprecedented opportunities to accelerate discovery and tackle complex global challenges.

From improving data analysis and modeling to revolutionizing research management and scientific publication, AI is reshaping the way science is conducted and communicated.

However, realizing the full potential of AI in science will require ongoing collaboration between researchers, AI developers, ethicists and policymakers to ensure that these powerful tools are used responsibly and effectively.

The development of comprehensive guidelines, robust peer review processes and transparent reporting mechanisms will be crucial to preserving the integrity of scientific research in the AI era.

It’s clear that there’s no chance of AI replacing researchers, at least for the time being; as usual, it will mostly augment what humans can do. Researchers will focus on more complex problems, and we’ll no doubt gain insights we could never have achieved without its help, and we thank it for that.

On the other hand, if we use it properly, i.e. if we commit ourselves to scientific rigor and ethical principles, we’ll be able to make sick progress and better understand where we come from, what we’re doing and where we’re going. Remember this: don’t let AI lead you to your downfall.
