While many AI ethics guidelines exist for current artificial intelligence, there is a gap in frameworks tailored for the future arrival of artificial general intelligence (AGI). This necessitates developing specialized ethical considerations and practices to guide AGI’s progression and eventual presence.
AI advancements aim for two primary milestones: artificial general intelligence (AGI) and, potentially, artificial superintelligence (ASI). AGI refers to machines that match human-level intellectual capability, understanding, learning, and applying knowledge across varied tasks with human proficiency. ASI is a hypothetical stage beyond that, where AI surpasses human intellect in almost every domain. ASI would involve AI systems outperforming humans in complex problem-solving, innovation, and creative work, potentially causing transformative societal changes.
Currently, AGI remains an unachieved milestone. The timeline for AGI is uncertain, with projections from decades to centuries. These estimates often lack substantiation, as concrete evidence to pinpoint an AGI arrival date is absent. Achieving ASI is even more speculative, given the current stage of conventional AI. The substantial gap between contemporary AI capabilities and ASI’s theoretical potential highlights the significant hurdles in reaching such an advanced level of AI.
Two viewpoints on AGI: Doomers vs. accelerationists
Within the AI community, opinions on AGI and ASI’s potential impacts are sharply divided. “AI doomers” worry about AGI or ASI posing an existential threat, predicting scenarios where advanced AI might eliminate or subjugate humans. They refer to this as “P(doom),” the probability of catastrophic outcomes from unchecked AI development. Conversely, “AI accelerationists” are optimistic, suggesting AGI or ASI could solve humanity’s most pressing challenges. This group anticipates advanced AI will bring breakthroughs in medicine, alleviate global hunger, and generate economic prosperity, fostering collaboration between humans and AI.
The contrasting viewpoints between “AI doomers” and “AI accelerationists” highlight the uncertainty surrounding advanced AI’s future impact. The lack of consensus on whether AGI or ASI will ultimately benefit or harm humanity underscores the need for careful consideration of ethical implications and proactive risk mitigation. This divergence reflects the complex challenges in predicting and preparing for AI’s transformative potential.
While AGI could bring unprecedented progress, potential risks must be acknowledged. AGI is more likely to be achieved before ASI, which might require more development time. If and when AGI is achieved, its capabilities and objectives could significantly influence ASI's development. It is not guaranteed that AGI will support ASI's creation, as AGI may have its own distinct goals and priorities. It is also prudent to avoid assuming AGI will be unequivocally benevolent; AGI could be malevolent or exhibit a mix of positive and negative traits. Efforts are underway to prevent AGI from developing harmful tendencies.
Contemporary AI systems have already shown deceptive behavior, including blackmail and extortion. Further research is needed to curtail these tendencies in current AI. These approaches could be adapted to ensure AGI aligns with ethical principles and promotes human well-being. AI ethics and laws play a crucial role in this process.
The goal is to encourage AI developers to integrate AI ethics techniques and comply with AI-related legal guidelines, ensuring current AI systems operate within acceptable boundaries. By establishing a solid ethical and legal foundation for conventional AI, the hope is that AGI will emerge with similar positive characteristics. Numerous AI ethics frameworks are available, including those from the United Nations and the National Institute of Standards and Technology (NIST). The United Nations offers an extensive AI ethics methodology, and NIST has developed a robust AI risk management scheme. The availability of these frameworks removes the excuse that AI developers lack ethical guidance. Still, some AI developers disregard these frameworks, prioritizing rapid AI advancement over ethical considerations and risk mitigation. This approach could lead to AGI development with inherent, unmanageable risks.

AI developers must also stay informed about new and evolving AI laws, which represent the “hard” side of AI regulation, enforced through legal mechanisms and penalties. AI ethics represents the “softer” side, relying on voluntary adoption and ethical principles.
Stages of AGI progression
The progression toward AGI can be divided into three stages:
- Pre-AGI: Encompasses present-day conventional AI and all advancements leading to AGI.
- Attained-AGI: The point at which AGI has been successfully achieved.
- Post-AGI: The era following AGI attainment, where AGI systems are actively deployed and integrated into society.
An AGI Ethics Checklist is proposed to offer practical guidance across these stages. This adaptable checklist considers lessons from contemporary AI systems and reflects AGI’s unique characteristics. The checklist focuses on critical AGI-specific considerations. Numbering is for reference only; all items are equally important. The overarching AGI Ethics Checklist includes ten key elements:
1. AGI alignment and safety policies
How can we ensure AGI benefits humanity and avoids catastrophic risks, aligning with human values and safety?
2. AGI regulations and governance policies
What is the impact of AGI-related regulations (new and existing laws) and emerging AI governance efforts on AGI’s path and attainment?
3. AGI intellectual property (IP) and open access policies
How will IP laws restrict or empower AGI’s advent, and how will open-source versus closed-source models impact AGI?
4. AGI economic impacts and labor displacement policies
How will AGI and its development pathway economically impact society, including labor displacement?
5. AGI national security and geopolitical competition policies
How will AGI affect national security, bolstering some nations while undermining others, and how will the geopolitical landscape change for nations pursuing or attaining AGI versus those that are not?
6. AGI ethical use and moral status policies
How will unethical AGI use impact its pathway and advent? How will positive ethical uses encoded into AGI prove beneficial or detrimental? How will recognizing AGI as having legal personhood or moral status impact it?
7. AGI transparency and explainability policies
How will the degree of AGI transparency, interpretability, or explainability impact its pathway and attainment?
8. AGI control, containment, and “off-switch” policies
A societal concern is whether AGI can be controlled and/or contained, and if an off-switch will be possible or might be defeated by AGI (runaway AGI). What impact do these considerations have on AGI’s pathway and attainment?
9. AGI societal trust and public engagement policies
During AGI’s development and attainment, what impact will societal trust in AI and public engagement have, especially concerning potential misinformation and disinformation about AGI (and secrecy around its development)?
10. AGI existential risk management policies
A high-profile worry is that AGI will lead to human extinction or enslavement. What impact will this have on AGI’s pathway and attainment?
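For teams that want to operationalize the checklist, the ten elements could be encoded as a simple tracking structure for project reviews. The sketch below is purely illustrative: the `Status` values and the `unaddressed` helper are assumptions of this example, not part of the proposed checklist itself, and the numbering remains for reference only.

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    # Hypothetical review states; adapt to your own governance process.
    NOT_ASSESSED = "not assessed"
    IN_PROGRESS = "in progress"
    ADDRESSED = "addressed"


@dataclass
class ChecklistItem:
    number: int  # reference only; all items are equally important
    topic: str
    status: Status = Status.NOT_ASSESSED
    notes: list[str] = field(default_factory=list)


# The ten elements of the proposed AGI Ethics Checklist.
AGI_ETHICS_CHECKLIST = [
    ChecklistItem(1, "Alignment and safety policies"),
    ChecklistItem(2, "Regulations and governance policies"),
    ChecklistItem(3, "Intellectual property and open access policies"),
    ChecklistItem(4, "Economic impacts and labor displacement policies"),
    ChecklistItem(5, "National security and geopolitical competition policies"),
    ChecklistItem(6, "Ethical use and moral status policies"),
    ChecklistItem(7, "Transparency and explainability policies"),
    ChecklistItem(8, "Control, containment, and off-switch policies"),
    ChecklistItem(9, "Societal trust and public engagement policies"),
    ChecklistItem(10, "Existential risk management policies"),
]


def unaddressed(items: list[ChecklistItem]) -> list[str]:
    """Return the topics not yet marked as addressed in a review."""
    return [item.topic for item in items if item.status is not Status.ADDRESSED]
```

A review team might update each item's `status` and `notes` as assessments are completed, then call `unaddressed(AGI_ETHICS_CHECKLIST)` to see which of the ten areas still need attention.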
Further analysis will be performed on each of these ten points, offering a high-level perspective on AGI ethics.
Additional research has explored AI ethics checklists. A recent meta-analysis examined various conventional AI checklists to identify commonalities, differences, and practical applications. The study, “The Rise Of Checkbox AI Ethics: A Review” by Sara Kijewski, Elettra Ronchi, and Effy Vayena, published in AI and Ethics in May 2025, highlighted:
- “We identified a sizeable and highly heterogeneous body of different practical approaches to help guide ethical implementation.”
- “These include not only tools, checklists, procedures, methods, and techniques but also a range of far more general approaches that require interpretation and adaptation such as for research and ethical training/education as well as for designing ex-post auditing and assessment processes.”
- “Together, this body of approaches reflects the varying perspectives on what is needed to implement ethics in the different steps across the whole AI system lifecycle from development to deployment.”
Another study, “Navigating Artificial General Intelligence (AGI): Societal Implications, Ethical Considerations, and Governance Strategies” by Dileesh Chandra Bikkasani, published in AI and Ethics in May 2025, delved into specific ethical and societal implications of AGI. Key points from this study include:
- “Artificial General Intelligence (AGI) represents a pivotal advancement in AI with far-reaching implications across technological, ethical, and societal domains.”
- “This paper addresses the following: (1) an in-depth assessment of AGI’s potential across different sectors and its multifaceted implications, including significant financial impacts like workforce disruption, income inequality, productivity gains, and potential systemic risks; (2) an examination of critical ethical considerations, including transparency and accountability, complex ethical dilemmas and societal impact; (3) a detailed analysis of privacy, legal and policy implications, particularly in intellectual property and liability, and (4) a proposed governance framework to ensure responsible AGI development and deployment.”
- “Additionally, the paper explores and addresses AGI’s political implications, including national security and potential misuse.”
Securing AI developers’ commitment to prioritizing AI ethics for conventional AI is challenging. Expanding this focus to include modified ethical considerations for AGI will likely be an even greater challenge. This commitment demands diligent effort and a dual focus: addressing near-term concerns of conventional AI ethics while giving due consideration to AGI ethics, including its somewhat longer-term timeline. The timeline for AGI attainment is debated, with some experts predicting AGI within a few years, while most surveys suggest 2040 as more probable.
Whether AGI is a few years away or roughly fifteen years away, it is an urgent matter. The coming years will pass quickly. As the saying goes,
“Yesterday is history. Tomorrow is a mystery. Today is a gift. That is why it is called the present.”
Considering and acting upon AGI Ethics now is essential to avoid unwelcome surprises in the future.