
In brief

Researchers have made transfer Bayesian optimization algorithms more broadly applicable with the help of a neural network that transforms features from one problem to another.


Teaching machines transferable skills

30 Jul 2021

By giving algorithms the ability to generalize, researchers are expanding the range of problems that can be tackled with artificial intelligence.

With the ability to analyze large amounts of data and discern patterns that humans can’t, artificial intelligence (AI) has taken the world by storm. While AI algorithms have advanced fields like computer vision and natural language processing, certain tasks remain the preserve of human experts, who can seamlessly transfer domain knowledge from one field to similar, though not identical, situations.

Can computers transfer learn in the same way that humans do? This question is at the heart of a subfield in machine learning aptly named transfer learning. In particular, an emerging technique known as transfer Bayesian optimization (TBO) can cut the time needed to solve computationally expensive decision problems from many days to a matter of hours, by building on the solution to a related problem instead of starting from scratch.
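To make the idea concrete, here is a minimal sketch, assuming a Gaussian-process surrogate from scikit-learn and an expected-improvement rule, of how a Bayesian optimization loop can be warm-started with data from a previously solved problem. The toy objectives, sample sizes and simple data-pooling strategy are illustrative assumptions, not the algorithm described in the study.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expected_improvement(gp, X_cand, y_best):
        # Standard expected-improvement acquisition (for minimization).
        mu, sigma = gp.predict(X_cand, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        z = (y_best - mu) / sigma
        return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def target(X):
        # Stand-in for an expensive evaluation of the new problem.
        return np.sum((X - 0.5) ** 2, axis=1)

    rng = np.random.default_rng(0)

    # Archived evaluations of a related, already-solved source problem
    # (assumed here to share the same two input features as the target).
    X_source = rng.uniform(0, 1, size=(30, 2))
    y_source = np.sum((X_source - 0.4) ** 2, axis=1)

    # Only a handful of expensive evaluations of the new target problem.
    X_target = rng.uniform(0, 1, size=(3, 2))
    y_target = target(X_target)

    for _ in range(10):
        # Warm-start the surrogate with the pooled source and target data.
        gp = GaussianProcessRegressor(normalize_y=True)
        gp.fit(np.vstack([X_source, X_target]),
               np.concatenate([y_source, y_target]))
        # Pick the most promising next point to evaluate on the target problem.
        X_cand = rng.uniform(0, 1, size=(500, 2))
        x_next = X_cand[np.argmax(expected_improvement(gp, X_cand, y_target.min()))]
        X_target = np.vstack([X_target, x_next])
        y_target = np.append(y_target, target(x_next[None, :]))

    print("Best target value found:", y_target.min())

Because the surrogate starts with dozens of relevant observations rather than none, far fewer costly evaluations of the new problem are needed to home in on a good solution.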

“TBO algorithms have exhibited human-like ability to leverage experiential priors to rapidly solve new problems,” explained Abhishek Gupta, a Scientist at A*STAR’s Singapore Institute of Manufacturing Technology (SIMTech). “However, the applicability of existing TBO algorithms was limited to scenarios where new problems bore exactly the same input features as the experiential priors.”

To extend the benefits of TBO algorithms to a wider range of decision problems, Gupta and his collaborators Yew-Soon Ong, A*STAR’s Chief Artificial Intelligence Scientist, and Nanyang Technological University graduate student Alan Tan sought to generalize the algorithm to work across problems with dissimilar features. They used a feature transformation function in the form of a multi-layer neural network, which allowed features from previous examples to be automatically aligned to new problems without the need for human intervention.
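As a rough illustration of such a feature transformation, the sketch below (assuming PyTorch) defines a small multi-layer network that maps inputs of a four-parameter source problem into a six-parameter target feature space. The class name, layer sizes and usage are assumptions for illustration, not the team's published implementation, and the training of the aligner is not shown.

    import torch
    import torch.nn as nn

    class FeatureAligner(nn.Module):
        # Hypothetical multi-layer network mapping source features to the
        # target problem's feature space.
        def __init__(self, source_dim=4, target_dim=6, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(source_dim, hidden),
                nn.Tanh(),
                nn.Linear(hidden, target_dim),
            )

        def forward(self, x_source):
            # Express source-problem inputs in the target feature space, so
            # archived observations can inform the target surrogate model.
            return self.net(x_source)

    aligner = FeatureAligner()
    x_source = torch.rand(100, 4)   # e.g. archived runs of a 4-parameter process
    x_aligned = aligner(x_source)   # now expressed in the 6-parameter space
    print(x_aligned.shape)          # torch.Size([100, 6])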

The team tested their method on several case studies. In one, the algorithm was able to adapt knowledge from a turbojet engine to accurately learn the behavior of a turbofan engine with different features. Similarly, an algorithm trained to optimize a composite manufacturing process with four control parameters was able to quickly extend its results to a different process with six parameters.

“Our generalized TBO algorithm can learn from diverse experiential priors, thus boosting the productivity of optimization and decision-making processes,” Gupta said. “In addition, TBO algorithms could offer a path towards greater AI democratization, mimicking an expert to build optimized AI models—such as deep neural networks—with limited computational resources.”

Follow-up studies aim to make the method more scalable and flexible so that it can tackle real-world problems, from the geometric design of aerodynamic structures to the rapid personalization of 3D-printable products such as face masks.

The A*STAR-affiliated researchers contributing to this research are from the Singapore Institute of Manufacturing Technology (SIMTech).


References

Tan, A.W.M., Gupta, A. and Ong, Y.S. Generalizing transfer Bayesian optimization to source-target heterogeneity. IEEE Transactions on Automation Science and Engineering (2020).

About the Researcher

Abhishek Gupta is a Scientist at A*STAR’s Singapore Institute of Manufacturing Technology (SIMTech). He holds a PhD in Engineering Science from the University of Auckland, New Zealand. His current research focuses on developing algorithms at the intersection of optimization and machine learning, with particular application to cyber-physical production systems.

This article was made for A*STAR Research by Wildtype Media Group