Features

In brief

Published as the cover feature of A*STAR Research volume 61, this article highlights contemporary issues around research integrity, covering common pitfalls and best practices for good and honest science, as well as related A*STAR efforts in the scientific ecosystem.

© A*STAR Research

A lodestar for integrity

11 Nov 2025

From biomedical research to generative AI models, scientific progress depends on strong ethical standards that maintain integrity, transparency and accountability.

While it seems straightforward to expect researchers to conduct responsible and rigorous science, they face many complex considerations and potential pitfalls throughout the process. These include adhering to ethical standards for animal welfare; devising data management structures for accessibility and reliability; navigating the many methods for statistical analysis; and managing the pressure to publish results in high-impact journals.

“These considerations can sometimes cause researchers to overlook research ethics, leading to data falsification, selective result reporting, insufficient checks on the reliable reproduction of findings, or other actions that undermine trust in science,” said Huck Hui Ng, Assistant Chief Executive for Research and Talent Development at A*STAR.

“The rise of generative artificial intelligence (AI) brings a new set of opportunities in science, but also fresh ethical considerations,” Ng added. “Used carelessly, AI may introduce inaccuracies and blur the boundaries of ownership of generated materials, raising questions about the credibility, originality and authenticity of research works.”

Issues such as these need clear guidelines and open conversations, with policies that can evolve in tandem with research questions and technological changes. Systemic checks and balances are also vital to sustaining scientific progress while upholding standards of integrity.

As Singapore’s lead public sector R&D agency, A*STAR has a duty to establish standards for research integrity. “It is crucial to foster a research culture that values rigour and responsible conduct alongside impact and innovation,” said Ng.


“A*STAR continually refines and updates its research integrity policies, procedures and guidelines to provide researchers with clear steps and actionables in this area.”
Huck Hui Ng, Assistant Chief Executive for Research and Talent Development at A*STAR


A culture of responsible research

Broadly speaking, research integrity entails a commitment to honesty, accuracy, accountability and transparency. However, turning words into actions requires a steady ‘North Star’ by which to judge a fair and right direction. Research ethics provides that compass: a guiding set of values that sets boundaries and clarifies definitions so that all parties are aligned when distinguishing responsible from potentially harmful practices.

With ethics in place, research integrity can then be codified through compliance frameworks overseen by institutional bodies and regulatory authorities. Adherence to these guidelines helps researchers better navigate grey areas in day-to-day situations, uphold accountability and strengthen the credibility of their work. By monitoring research projects throughout their lifecycle, institutions can also identify gaps and potential risks early, enabling timely interventions.

“A*STAR continually refines and updates its research integrity policies, procedures and guidelines to provide researchers with clear steps and actionables in this area,” said Ng.

He added that A*STAR proactively empowers its researchers through new tools and resources, citing the recent rollout of mandatory image integrity checks, ongoing research integrity-related workshops and internal newsletters with real-life case studies of responsible science in action.

Within A*STAR, additional guidelines are implemented at a domain-specific level, with Institutional Review Boards (IRBs) providing support tailored to each research project’s contextualised needs. IRBs are heavily involved in reviewing proposed projects, taking into account all relevant ethical, regulatory, legal and institutional standards before any study is greenlit to commence. When animal models are involved, research teams must also obtain approval from the relevant Institutional Animal Care and Use Committee (IACUC).

Ethics evaluations from institutional committees are also aligned with guidelines established by national advisory committees, such as the Bioethics Advisory Committee (BAC), the Genetic Modification Advisory Committee (GMAC) and the National Advisory Committee on Laboratory Animal Research (NACLAR). In turn, these national guidelines are anchored in governmental policies; for example, GMAC’s Biosafety Guidelines reference the Ministry of Health’s Biological Agents and Toxins Act, which emphasises containment measures and the protection of both human and environmental health when handling genetically modified samples.

“Innovation and responsibility must go hand in hand,” said Yan Hong, GMAC Chair. “Our Biosafety Guidelines provide a structured way for researchers to address biosafety concerns while pursuing new scientific frontiers in genetic modification (GM). With a commitment to risk assessment, safety measures, and transparency, such safeguards can help researchers maintain both scientific rigour and public confidence in research outcomes.”


In practice: A*STAR Skin Research Labs (A*STAR SRL)

At A*STAR SRL, every human biomedical research (HBR) project undergoes thorough evaluation to ensure that the involvement of human subjects or the use of human-derived material is justified. This entails a well-designed study with clearly stated objectives that uses well-established methods and is expected to yield relevant and reliable results, explained María del Mar Álvarez Villamandos, an A*STAR SRL Senior Research Officer (Human Biomedical Research) and Clinical Research Coordinator.

Moreover, A*STAR SRL researchers must carry out an equitable recruitment process and obtain truly informed consent from all participants. Under the A*STAR SRL HBR Committee’s guidance, research staff participate in training sessions and workshops to understand their responsibilities and keep biomedical ethical standards top of mind when conducting HBR.

“We consistently promote the attitude that the Committee is a useful asset to assist with study design and compliance, and that it acts as a helpful colleague to guide researchers in designing and executing better scientific studies,” said Álvarez Villamandos.


The AI conundrum

In an ideal world, societies would adopt new technologies only after ethical standards of use have been established. AI is no exception, especially with generative AI models becoming more advanced and publicly accessible. Yet the growing contention around AI governance in science long predates ChatGPT’s rise to fame—from models that deal with sensitive clinical information for disease diagnostics, to those that can influence supply chain logistics and other critical infrastructure systems.

Partly due to the nature of the data gathered and fed to algorithms, one persistent issue in the field is bias in AI models, whether in the form of reinforcing cultural stereotypes or underrepresenting certain identities and societal groups.

“Previous studies have observed that around 26 percent of GPT-4 outputs echo gender stereotypes, such as associating nursing with women and leadership roles with men,” said Jie Zhang, Research Scientist at the A*STAR Centre for Frontier AI Research (A*STAR CFAR) and A*STAR Institute of High Performance Computing (A*STAR IHPC). “We also noted that the limited representation of diverse gender identities could contribute to recognition gaps in AI-generated content.”

While diversity and inclusion have become part and parcel of good scientific practice, it takes a conscious effort to unlearn biases. In much the same way, counteracting biases in AI models may require additional training or data augmentation, added Zhang.

In an effort to debias AI models, Zhang and team recently developed the GenderCARE framework in collaboration with Nanyang Technological University, Singapore and the University of Science and Technology of China. After identifying the limitations of existing AI benchmarks for bias, they established six criteria—diversity, inclusivity, explainability, objectivity, robustness and realisticness—that aligned with global AI governance and gender equality standards. The framework draws inspiration from the US National Institute of Standards and Technology’s criteria on trustworthy AI and the White House’s National Strategy on Gender Equity and Equality. These principles were implemented through a combination of counterfactual data augmentation and low-rank adaptation fine-tuning strategies, with clear metrics to quantify the extent of bias.
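To make the first of those two strategies concrete, the sketch below shows a minimal form of counterfactual data augmentation: each training sentence containing a gendered term is paired with a copy in which those terms are swapped, so the model sees both variants equally often. The term pairs and function names here are illustrative assumptions for exposition, not taken from the GenderCARE implementation itself; a real system would use a curated lexicon and handle ambiguous forms such as “her” (object vs possessive).

```python
import re

# Illustrative gendered term pairs (an assumption; GenderCARE's actual
# lexicon is far more extensive and covers diverse gender identities).
PAIRS = [("he", "she"), ("man", "woman"), ("men", "women"),
         ("male", "female"), ("father", "mother")]

# Build a bidirectional swap table, then a single regex so every term is
# replaced in one pass (avoiding double-swapping "he" -> "she" -> "he").
SWAP = {}
for a, b in PAIRS:
    SWAP[a], SWAP[b] = b, a
PATTERN = re.compile(r"\b(" + "|".join(SWAP) + r")\b", re.IGNORECASE)

def counterfactual(sentence: str) -> str:
    """Return the sentence with gendered terms swapped, preserving case."""
    def repl(match):
        word = match.group(0)
        swapped = SWAP[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, sentence)

def augment(corpus):
    """Pair each sentence with its counterfactual to balance the data."""
    out = []
    for sentence in corpus:
        out.append(sentence)
        cf = counterfactual(sentence)
        if cf != sentence:  # only add if a gendered term was present
            out.append(cf)
    return out

print(augment(["He is a doctor.", "The results were clear."]))
# → ['He is a doctor.', 'She is a doctor.', 'The results were clear.']
```

The augmented corpus would then feed into the second strategy the article mentions, low-rank adaptation fine-tuning, which updates only a small set of adapter weights rather than the full model.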

“Our experiments show that these two approaches reduce gender bias while preserving overall model performance, proving that debiasing and computing capability can be balanced,” Zhang said. Through transparent, benchmark-driven evaluations and compliance with AI governance frameworks, the team believes that more inclusive models can be designed that challenge stereotypes rather than echo them.

The ethical impacts of generative AI have been especially palpable in scientific publishing. Although issues of plagiarism and data manipulation are longstanding, generative AI has the potential to change the speed, scale and sophistication by which such issues arise, commented Ng.

“It is vital to raise awareness among the research community about AI ethics and the importance of transparency and ownership over content generation,” said Ng. “Moreover, proactive safeguards and digital tools are key to preventing AI misuse and conducting integrity checks. Responsible AI adoption with robust safeguards will help us stay ahead of the curve while embracing the opportunities it presents.”

Internationally established journals have also issued guidelines on the use of AI tools, including ruling them ineligible as co-authors, according to Magdalena Skipper, Editor in Chief of Nature.

“AI cannot fulfil authorship criteria; it cannot be held accountable for what has been done or written, which is a key author responsibility,” explained Skipper, who added that AI-generated images are also prohibited for similar reasons. “The appropriate use of AI tools, and the disclosure thereof—whether for hypothesis generation, creation of synthetic data, paper compilation or anything else—lies at the heart of doing research with integrity.”


“Artificial intelligence cannot fulfil authorship criteria; it cannot be held accountable for what has been done or written, which is a key author responsibility.”
Magdalena Skipper, Editor in Chief of Nature
Photo credit: Ian Alderman


Transparency breeds trust

While digital tools are now helping to detect falsified or manipulated materials, the primary defence against breaches of integrity remains human judgement, noted Ng.

Skipper added that this is especially evident in the publishing process, where rigorous peer review serves to evaluate whether a study’s conclusions are backed by the data presented, and to offer new ideas that solidify its research story.

“Prior to a paper’s acceptance for publishing, we work with authors to ensure that the work they’ve done is transparently described, and the relevant data and code are shared in an appropriate way,” said Skipper. “Over the years, we’ve developed extensive guidelines on sharing materials, data and code; as well as ethical codes of conduct, including ethical co-authorship.”

A critical examination of science goes beyond publication; the strongest tests of a paper’s integrity may come after its public release. Corrections and retractions are not uncommon, with Springer Nature reporting over a thousand retractions in 2024 for papers published after January 2023.

“It’s worth remembering that these cases aren’t always tied to misconduct,” said Skipper. “We’ve retracted papers as a result of the authors themselves contacting us because they can no longer repeat their own experiments for reasons they don’t understand, and now wish to retract. This, in fact, is an example of appropriate scientific conduct and should be acknowledged as such.”

Ng echoed this sentiment, highlighting that openness around errors and failures is key to advancing science and maintaining accountability.

“Rather than seeing retractions and corrections as stains on one’s records, they should be recognised as opportunities for collective learning,” said Ng. “Approached constructively, they serve as reminders that science is a self-correcting endeavour. Transparency is central to that process.”

Such transparency is critical to maintaining trust, not only within the scientific community but also from the public at large. For research to function as a self-correcting engine, Skipper believes that failures, or so-called ‘negative’ results, are as important to talk about as success stories.

Public-facing communication about how science is done can also strengthen transparency and encourage societal acceptance of emerging technologies such as gene therapies and AI. Engaging with the broader community offers researchers and regulators a chance to highlight the ethical standards and safety frameworks that govern their work, improving their credibility and reinforcing the scientific ecosystem’s accountability toward the public good.

“Outreach makes science more accessible to the general public and conveys why it’s important, how risks are managed and what safeguards are in place,” noted Hong.

At GMAC, for instance, various initiatives are underway to promote a greater understanding of GM and dispel misconceptions around it. Educational materials and guidelines are regularly published on the GMAC website; the GMAC Students Challenge gives younger generations a chance to brainstorm creative GM applications; and regular seminars and interviews allow GMAC to discuss its work and the importance of a ‘safety first’ mindset in GM-related research. Such efforts also facilitate dialogue between the public and regulatory authorities regarding the adoption of innovations into existing systems, such as healthcare or manufacturing, as well as policy evaluations for research and associated compliance frameworks.

“At its core, science is a human endeavour, and so we must constantly remind ourselves why it is that we do research and want to communicate its results,” said Skipper.


“Outreach makes science more accessible to the general public and conveys why it’s important, how risks are managed and what safeguards are in place.”
Yan Hong, Chair of the Genetic Modification Advisory Committee (GMAC)


Integrity in collaboration

As research integrity is indispensable throughout the scientific life cycle, it must be coordinated at multiple levels, from the individual to the international. Given the increasingly global and interdisciplinary nature of research today, opportunities for meaningful exchange and cooperation are needed to keep all parties aligned throughout a study and in active compliance with established ethical guidelines.

International coalitions such as the World Conferences on Research Integrity Foundation (WCRIF) and the Asia Pacific Research Integrity (APRI) Network are the foremost drivers of these global, regional and local synergies. Within Singapore, A*STAR collaborates with universities to provide similar opportunities through events such as the Singapore Institutional Research Integrity Offices Network (SIRION) Research Integrity Conference.

Within these associations and conferences, attendees from academia, industry, publishing and government set aside titles and disciplines to participate as individuals with common ground: advocating for good scientific practice, discussing issues observed in their own work and sharing perspectives with those who have tackled similar cases.

“WCRIF has emerged as a very important platform in sharing best practices for tackling research integrity issues,” said Mai Har Sham, Pro-Vice-Chancellor of The Chinese University of Hong Kong and a governing board member of WCRIF. “It also generates a voice, because each conference often leads to a statement or set of guidelines that can help the scientific ecosystem as a whole.”

While training is key to preventing incidents of misconduct, Sham noted that a culture of responsible research also needs suitable monitoring measures, such as periodic reviews and spot-checks to help encourage compliance with research ethics approvals. This can be particularly relevant in work involving animal models or human patients; for example, external regulatory authorities or internally designated safety officers might visit labs occasionally to ensure that all experiments are well-documented and following approved protocols.

“Any institute must have systems in place so that researchers know that the institute cares about integrity,” said Sham. “It’s not about policing colleagues, but enforcing a sense of accountability towards doing good, safe and trustworthy science.”

In a similar vein, Ng suggested that research bodies could also appoint institutional stewards of research integrity. Such a practice is already in place at A*STAR, where scientists appointed as Research Integrity Ambassadors offer peer support and insight, especially on complex issues.

“As a Research Integrity Ambassador, I help to distribute relevant information to scientists within my research institute, thereby providing a link between research groups and the central A*STAR Research Office,” explained Jonathan Göke, a Principal Scientist at the A*STAR Genome Institute of Singapore (A*STAR GIS). “If any concerns are raised, I can provide an independent opinion.”

Such support can build more open and credible environments within research groups and institutions, as well as in collaborations across sectors. The insights provided can be particularly helpful for trainees and junior scientists, as they may not always have the confidence—or power—to decide how best to approach an issue, or to speak up when potential ethical breaches arise.

“For science to be truly transformative, it must be rigorous and trustworthy,” remarked Ng. “The foundation for a culture of excellent science rests on our ability to equip researchers with the knowledge and tools to responsibly conduct research with integrity.”

Want to stay up to date with breakthroughs from A*STAR? Follow us on Twitter and LinkedIn!

This article was made for A*STAR Research by Wildtype Media Group