Researchers are creating AI scientists, and the results are better than expected
Researchers are developing artificial intelligence capable of independently formulating hypotheses, conducting experiments and writing scientific articles.
The difference between scientists who use AI, AI scientists, and “AI scientists”
There are many examples of scientists using AI. One comes from a 2019 MIT study, in which an AI was trained on 1,700 FDA-approved drugs and 800 natural substances, many of them antibiotics or compounds with antibacterial properties. The AI then analyzed a library of 6,000 compounds to find a new substance with similar properties. This is how the antibiotic halicin was discovered. In 2023, the same team found a second antibiotic that could be effective against MRSA, an antibiotic-resistant bacterium.
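The underlying approach is a standard supervised screen: train a model on compounds with known antibacterial activity, then rank an unscreened library by predicted activity. Below is a minimal sketch in Python, assuming precomputed molecular feature vectors and using a simple random-forest classifier; the actual study used a deep neural network over molecular graphs, so this is an illustration of the workflow, not the team's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in data: each row is a molecular feature vector
# (e.g. a fingerprint); labels mark compounds known to inhibit growth.
rng = np.random.default_rng(0)
train_features = rng.random((2500, 128))    # ~1,700 drugs + ~800 natural products
train_labels = rng.integers(0, 2, 2500)     # 1 = inhibits bacterial growth

library_features = rng.random((6000, 128))  # unscreened compound library

# Train on the labeled set, then rank the library by predicted activity.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_features, train_labels)

scores = model.predict_proba(library_features)[:, 1]
top_candidates = np.argsort(scores)[::-1][:50]  # best 50 for lab validation
print(top_candidates[:10], scores[top_candidates[:10]])
```

The ranked shortlist is what goes to the wet lab; the model only narrows 6,000 candidates down to a testable few.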
Usually, the term “AI scientist” refers to a human specialist with a deep understanding of AI, such as large language models or artificial neural networks. Increasingly, however, the term is also used for a new type of AI that itself works as a scientist. So far, this AI scientist studies only AI models, making it both an AI expert and an “AI scientist” in the literal sense.
The AI scientist
Japanese company Sakana AI is funding a lab at the University of British Columbia and the University of Oxford to develop an AI that can carry out the entire scientific process on its own. This AI must study the scientific literature, formulate hypotheses, conduct experiments, write scientific papers, and check its own work against the existing literature.
To reduce the chance of errors, the team at the University of British Columbia developed a step-by-step process for the AI to follow. The AI takes data about an AI model and generates several hypotheses about how that model could be improved, evaluating the ideas on their “interest, novelty and feasibility.” After choosing a hypothesis, the AI queries literature databases to make sure the idea is truly new and original. It then uses a coding assistant to write the code and test the hypothesis, keeping research notes as it goes. If necessary, it conducts additional experiments and finally writes a scientific paper.
At the final stage, the AI reviews the paper and can reject it if it detects fabricated data or hallucinations, the false statements typical of some AI models. One of the researchers said the team had managed to reduce the rate of hallucinations to only ten percent: “We believe that ten percent is still unacceptable.”
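Taken together, the workflow reads like an agent loop with a final review gate. Here is a minimal sketch, with hypothetical scoring, novelty-check, and review helpers stubbed out; in the real system these steps are delegated to large language models and a coding assistant.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    description: str
    interest: float     # how promising the direction seems
    novelty: float      # how different it is from prior work
    feasibility: float  # how cheaply it can be tested

def novelty_check(idea: Idea, known_titles: list[str]) -> bool:
    # Placeholder for a literature-database query: accept only ideas
    # that do not match existing work.
    return all(idea.description.lower() not in t.lower() for t in known_titles)

def run_pipeline(ideas: list[Idea], known_titles: list[str],
                 hallucination_rate: float) -> str | None:
    # 1. Rank candidate hypotheses on interest, novelty and feasibility.
    ranked = sorted(ideas, key=lambda i: i.interest + i.novelty + i.feasibility,
                    reverse=True)
    for idea in ranked:
        # 2. Confirm the idea is genuinely new before spending compute on it.
        if not novelty_check(idea, known_titles):
            continue
        # 3. Run experiments and draft the paper (stubbed out here).
        draft = f"Paper: {idea.description}"
        # 4. Review gate: reject drafts with too many unsupported claims.
        if hallucination_rate > 0.10:
            return None  # reject the draft outright
        return draft
    return None

ideas = [Idea("lower learning rate with warmup", 0.8, 0.4, 0.9),
         Idea("novel weight-sharing scheme", 0.7, 0.9, 0.6)]
print(run_pipeline(ideas, known_titles=["Warmup schedules revisited"],
                   hallucination_rate=0.10))
```

The threshold in step 4 mirrors the researchers' own framing: even a ten-percent hallucination rate is treated as grounds for rejection rather than an acceptable error bar.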
Possible disadvantages
One of the most common problems in AI is popularity bias. Researchers have observed this problem in the AI scientist as well: it may favor areas that are already well studied, or overrate theories for which more data has been collected.
On the other hand, AI can seem more creative precisely because it has no intuition or prior experience to constrain it. For example, a physicist studying quantum particles of light needed to devise, within a few weeks, a method for observing a particular phenomenon. Suspecting that his own intuition was getting in the way, he decided to enlist AI. Within a few hours, the AI proposed an experiment that proved successful.
However, the lack of intuition and experience can also keep an AI from interpreting results correctly. Researchers at the University of British Columbia compared its findings to those of novice graduate students, who tend to draw hasty or inaccurate conclusions.
In addition, the development of AI scientists raises ethical questions. Who will receive credit for the work of an AI scientist? And who will be responsible for errors, plagiarism or data falsification?