Challenges Facing AI in Science and Engineering
An exciting possibility offered by artificial intelligence (AI) is its potential to solve some of the most difficult and important problems facing the fields of science and engineering. AI and science complement each other very well, with the former looking for patterns in data and the latter focusing on uncovering the fundamentals that give rise to those patterns.
As a result, AI and science are expected to massively unleash scientific research productivity and the pace of engineering innovation. For instance:
- Biology: AI models such as DeepMind’s AlphaFold can predict and catalog protein structures, helping researchers unlock countless new drugs and treatments.
- Physics: AI models are emerging as leading candidates for addressing crucial challenges in achieving nuclear fusion, such as predicting future plasma states in real time during experiments and improving equipment calibration.
- Medicine: AI models are also excellent medical imaging and diagnostic tools, with the potential to diagnose conditions such as dementia or Alzheimer’s disease much earlier than any other known method.
- Materials science: AI models are highly effective at predicting the properties of new materials, discovering new ways to synthesize them, and modeling their performance under extreme conditions.
These profound technological innovations have the potential to change the world. To achieve them, however, data scientists and machine learning engineers face significant challenges in ensuring their models and infrastructure deliver the change they want to see.
A key part of the scientific method is the ability to interpret both the workings and the results of an experiment, and to explain them. This is essential for allowing other teams to repeat the experiment and verify the results. It also allows non-experts and members of the public to understand the nature and potential of those results. If an experiment cannot be readily interpreted or explained, it becomes far harder to test a discovery further, let alone to popularize or commercialize it.
When it comes to AI models based on neural networks, we should treat inferences as experiments, too. Even though a model technically generates an inference based on patterns it has observed in data, there is often a degree of randomness and variance to be expected in its output. This means that understanding a model’s inferences requires understanding its intermediate steps and logic.
This is a problem for many AI models built on neural networks, as many currently operate as “black boxes”: the steps between the data going in and the output coming out are not labeled, and there is no way to explain why the model gravitates toward a particular inference. As you can imagine, this is a major obstacle to making a model’s inferences explainable.
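One common partial remedy is post-hoc explainability: probing a trained model from the outside to estimate which inputs drive its predictions. Below is a minimal sketch of one such model-agnostic technique, permutation feature importance, written in plain Python. The toy model, dataset, and function names are illustrative assumptions, not from any particular library or from the research discussed above.

```python
import random

def predict(weights, x):
    # Toy "black box": outputs 1 if the weighted sum of features is positive.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def accuracy(weights, X, y):
    return sum(predict(weights, x) == yi for x, yi in zip(X, y)) / len(y)

def permutation_importance(weights, X, y, feature, seed=0):
    # Shuffle a single feature column and measure how much accuracy drops:
    # the bigger the drop, the more the model relies on that feature.
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return accuracy(weights, X, y) - accuracy(weights, X_perm, y)

# Feature 0 determines the label; feature 1 is pure noise.
X = [(1, 5), (-1, 3), (2, -4), (-2, 1), (3, 2), (-3, -2)]
y = [1, 0, 1, 0, 1, 0]
weights = (1.0, 0.0)  # this toy model ignores feature 1 entirely

imp0 = permutation_importance(weights, X, y, feature=0)
imp1 = permutation_importance(weights, X, y, feature=1)
```

Permuting the noise feature leaves accuracy unchanged, while permuting the informative feature can only degrade it, giving a rough, model-agnostic ranking of what the “black box” is paying attention to, without opening the model itself.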
Indeed, this risks limiting the understanding of what a model does to the data scientists who develop it and the DevOps engineers responsible for deploying it on compute and storage infrastructure. That, in turn, creates a barrier to the scientific community’s ability to verify and peer review a discovery.
But it is also a problem when trying to develop, commercialize, or apply the fruits of research beyond the laboratory. Researchers seeking buy-in from regulators or customers will struggle to win it if they cannot clearly explain and justify their findings in layman’s terms. And then there is the question of ensuring that an innovation can be used safely by the public, especially when it comes to biological or medical innovations.
Another fundamental principle of the scientific method is the ability to reproduce the results of an experiment. The ability to replicate an experiment allows scientists to verify that a result is not a falsification or a fluke, and that a putative explanation for a phenomenon is correct. This provides a way to “recheck” the results of an experiment, ensuring that the wider academic community and the public can have confidence in the accuracy of an experiment.
However, AI has a major problem in this regard. Minor adjustments to a model’s code and structure, slight variations in the training data it is fed, or differences in the infrastructure on which it is deployed can all cause models to produce markedly different outputs. This can make it difficult to trust a model’s results.
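A minimal illustration of one piece of this problem, hidden randomness, sketched in plain Python (the toy “training” loop below is hypothetical): two runs of the same code agree only when every source of randomness is pinned to an explicit seed.

```python
import random

def train_toy_model(seed):
    # Hypothetical "training" run: random weight init plus noisy updates.
    rng = random.Random(seed)  # one explicit, pinned source of randomness
    w = rng.uniform(-1.0, 1.0)
    for _ in range(100):
        w -= 0.01 * (w - 0.5 + rng.gauss(0.0, 0.1))
    return w

# Pinning the seed makes the run bit-for-bit reproducible;
# different seeds (or no seed at all) yield different weights.
assert train_toy_model(seed=42) == train_toy_model(seed=42)
```

In a real stack, the same principle extends further: framework random seeds, data ordering, library versions, and even hardware-level nondeterminism all have to be controlled before a training run can be reproduced exactly.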
But the problem of reproducibility can also make it extremely difficult to scale a model. If a model is inflexible in its code, infrastructure, or inputs, then it is very difficult to deploy it outside of the research environment in which it was created. This is a huge problem for moving innovations from the lab to industry and society at large.
Escaping theoretical hold
The next question is less existential – the embryonic nature of the field. Articles are continuously published on the use of AI in science and engineering, but many of them are still extremely theoretical and not too concerned with translating developments in the laboratory into practical use cases in the real world.
This is an inevitable and important phase for most new technologies, but it illustrates the state of AI in science and engineering. AI is currently on the cusp of making huge discoveries, but most researchers still treat it as a tool to be used only in a lab setting, rather than generating transformative innovations to be used beyond the desktops of researchers.
This is ultimately a passing issue, but a shift in mindset away from theoretical concerns and toward operational and implementation concerns will be essential to realizing AI’s potential in this area and to meeting major challenges such as explainability and reproducibility. AI promises to help us make major breakthroughs in science and engineering, if we take seriously the question of extending it beyond the laboratory.
Rick Hao is the Lead Deep Technology Partner at Speedinvest.