9 April 2025

‘AI’s future lies in making it more efficient, accessible, and sustainable’

Asst Prof Wang’s research focuses on improving the efficiency of AI models

Assistant Professor Wang Xinchao (Electrical and Computer Engineering), a leading researcher in the AI field at CDE, has been honoured with a prestigious spot on the IEEE AI’s 10 to Watch list for 2024.

Awarded by the IEEE Computer Society, the 10 to Watch list recognises young researchers making groundbreaking contributions to AI. Asst Prof Wang’s inclusion highlights his exceptional work in the area of Machine Learning and his significant role in shaping the future of AI.

Asst Prof Wang, who is also a Presidential Young Professor, said, “I am truly honoured to receive this recognition and grateful for the community’s acknowledgement of my work in AI. While this is given to individuals, it belongs just as much to my outstanding research team, whose commitment and talent have been vital. I also sincerely appreciate NUS and Singapore's generous support, without which this achievement would not have been possible.”

AI has become an integral part of our lives, from voice assistants to healthcare innovations, but the most advanced AI systems require massive computational resources. Asst Prof Wang’s research focuses on improving the efficiency of AI models, making them smaller, faster, and less reliant on vast amounts of data. His innovative work has the potential to make AI more accessible, environmentally friendly, and impactful across a wide range of industries.

In this Q&A, Asst Prof Wang shares his insights on the biggest challenges facing AI today, thoughts on its future evolution, and how his work is pushing the boundaries of what’s possible with AI technology.

Q: Your research focuses on Efficient Machine Learning. Could you explain in simple terms what that means and why it is important?

AI is transforming the world, but modern AI models are often large, resource-intensive, and costly to train. This brings several challenges:

  • Such models are too large to run on devices with limited computing power.
  • Individuals and smaller organisations are often unable to participate in AI development due to the high computational barriers.
  • The training process itself is environmentally unsustainable, consuming vast amounts of energy.

My current research focuses on efficient machine learning — developing methods to make AI models smaller, faster, and more accessible. The goal is to broaden the reach of AI, enabling its deployment across a wider range of applications and democratising AI development for a more inclusive and sustainable future.

Q: AI models are getting more powerful but also require a lot of computing power. How does your work help make AI more efficient?

My research spans three key sub-domains: efficient strategies, efficient models, and efficient data.

Efficient strategies focus on optimising existing pre-trained models — making them smaller, faster, and more adaptable to specific use cases. The idea is to take models already trained by others and enhance their efficiency without compromising performance.

Efficient models involve designing novel neural architectures that are inherently lightweight and computationally efficient yet still deliver strong performance across tasks.

Efficient data explores how to train models using significantly less data while maintaining accuracy close to that achieved with large-scale datasets.
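One common technique under the "efficient strategies" umbrella is magnitude-based weight pruning: removing the smallest weights from an already-trained layer so the model becomes sparser and cheaper to run. The sketch below is a minimal, hypothetical illustration using NumPy, not code from Asst Prof Wang's own research:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude; keep only weights above it.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# A toy 4x4 "trained" weight matrix with random continuous values
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.5)
print(np.count_nonzero(pruned))  # 8 of the 16 weights survive
```

In practice, pruning is usually followed by a brief fine-tuning step to recover any lost accuracy; the point here is simply that a large fraction of a trained model's parameters can often be removed with little effect on its outputs.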

"One of the most urgent concerns is the potential misuse of AI," says Asst Prof Wang.

Q: AI is sometimes described as a ‘black box’ because it can be difficult to understand what influences the results AI produces and how it makes decisions. How does your research contribute to making AI more transparent and trustworthy?

Efficiency and trustworthiness are interconnected. For instance, a key aspect of trustworthy AI is “attribution”, which aims to identify which parts of the input data or model architecture are most influential in decision-making. This naturally ties into efficiency: if we can determine which components are less critical, we can remove or simplify them to create more compact models.

Conversely, when we create a smaller model — for instance, by pruning a larger one — the components that are retained can offer insight into attribution. In a sense, what remains after pruning reveals the essence of the original model's reasoning.
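The attribution idea can be sketched with a toy occlusion test: zero out one input feature at a time and measure how much the model's output changes. Features whose removal barely moves the output are natural candidates for pruning, which is the connection described above. The model, weights, and numbers below are purely illustrative, assuming a simple one-output linear model:

```python
import numpy as np

def occlusion_attribution(weights, x):
    """Score each input feature by how much zeroing it shifts the model output."""
    baseline = weights @ x
    scores = []
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = 0.0          # remove feature i
        scores.append(np.abs(baseline - weights @ occluded).sum())
    return np.array(scores)

w = np.array([[2.0, 0.1, -1.5]])   # toy linear "model" with one output
x = np.array([1.0, 1.0, 1.0])
scores = occlusion_attribution(w, x)
print(scores)  # [2.0, 0.1, 1.5]: large-magnitude weights dominate
```

Here the middle feature contributes almost nothing, so pruning it would change the output least; conversely, the features that survive pruning are exactly the ones with high attribution scores.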

Q: What do you see as the biggest challenge in AI research today?

Safety.

For me, one of the most urgent concerns is the potential misuse of AI — and whether meaningful regulation is even possible, especially as the technology becomes increasingly open and accessible. Right now, the risks might still seem manageable. Training powerful AI models still requires vast computational resources and technical expertise, which creates natural barriers.

But those barriers are eroding. As hardware becomes more affordable and the tools to build and deploy AI models become more user-friendly, we could reach a point where almost anyone can fine-tune or develop high-performing models with very little oversight. This raises serious safety questions. Who ensures these models are used responsibly? How do we prevent misuse if powerful models are freely available and hard to trace?

We need to think ahead and act now — because if we wait until these risks materialise at scale, it may already be too late to do much about them.

Q: The field of AI is evolving rapidly. What excites you most about where AI is heading in the next five years?

AI for science — and vice versa. I’m incredibly excited about the potential for AI to accelerate discovery in fields like biology, chemistry, and physics, as well as for ideas from biology to inspire new forms of AI beyond traditional silicon-based systems.

One fascinating direction is using AI to study genetics and trace deep connections between species. For example, we've explored how AI can help identify possible links between the genes of fish and those of humans — such as whether structures like fins and human limbs share a common origin in our evolutionary history. These are complex problems involving billions of data points, and AI can speed up the analysis by orders of magnitude.

Another exciting area is using AI in engineering design — like asking generative models to propose circuit layouts or materials with specific properties, based on just a short prompt. AI doesn’t just analyse existing data — increasingly, it can help create new designs, new hypotheses, and even new tools. That’s a game-changer for how we do science.

Q: There’s a lot of talk about ethical AI and responsible AI development. What role does efficiency play in making AI more ethical or sustainable?

Efficient AI per se, as discussed, contributes to sustainability by enabling greener approaches to both training and inference. As for ethical and responsible AI, that is precisely the next direction I aim to pursue — integrating efficiency and responsibility within a unified framework, hoping that the two can mutually reinforce and enhance one another.

Q: How do you see Singapore’s role in the global AI landscape?

It is true that Singapore might not be able to compete with the US and China in building extremely large models — but that is not, and should not be, the only goal of AI. In fact, Singapore is doing very well and, in my opinion, is leading in several areas of AI.

I believe Singapore has unique advantages for AI: a forward-thinking approach to AI governance, AI- and startup-friendly policies, and its geopolitical position as a bridge between East and West. In addition, our high density of top-level talent across various domains makes interdisciplinary research both feasible and convenient. All of these create a vibrant ecosystem where impactful, responsible, and innovative AI can thrive.

Q: What advice would you give to young researchers or students who want to contribute to the future of AI?

Regarding research directions in AI, I’d like to quote the advice from my postdoctoral advisor, the late Professor Thomas S. Huang: “Just be yourself.” To that, I would add: make your AI responsible.
