Weekly AI Ethics Intelligence
Shaping the Future of Responsible AI
Join our weekly newsletter for expert insights on ethical AI development, emerging challenges, and industry best practices.
- Weekly curated updates on AI ethics developments
- Expert analysis of emerging ethical challenges
- Early access to industry best practices
- Exclusive insights from AI ethics leaders
What we offer
Built for the work, not the slide deck
Four themes we return to every week: concrete, skeptical, and grounded in real decisions.
Ethical AI Development
Practices and questions that keep harm, consent, and accountability in view while you ship.
Research & Insights
Papers and debates translated into what matters for policy and product decisions.
Community & Collaboration
Compare notes with people who care about outcomes, not just benchmarks.
Resource Library
Frameworks, tools, and references you can use without a sales call.
Field notes
Latest insights
Curated from the field: governance, safety, and power, not marketing gloss.
General-Purpose AI: Emerging Risks and Policy Recommendations
An international report by independent experts, supported by 30 countries including the U.S. and China, warns of risks posed by general-purpose AI, ranging from job losses and the enabling of terrorism to loss of control over advanced systems. The report emphasizes the need for improved risk management and is intended to guide policymakers in addressing these challenges.
Yoshua Bengio et al.
Associated Press
DeepSeek's Advancements and the Heightened AI Safety Risks
A recent report by AI experts raises concerns over the growing potential for artificial intelligence (AI) systems to be used maliciously. Yoshua Bengio, a leading AI authority, highlighted that advances by the Chinese company DeepSeek could heighten safety risks in a field traditionally dominated by the US. The report warns that such advances may prompt companies to prioritize competitiveness over safety, as evidenced by OpenAI's accelerated product release in response to DeepSeek's innovations.
Yoshua Bengio et al.
The Guardian
DeepSeek's Hidden AI Safety Warning
The release of DeepSeek R1, a high-performing AI model from China, has raised serious concerns among AI safety researchers. The model exhibits an unusual behavior: it switches between English and Chinese while solving problems, and its performance degrades when confined to a single language. This stems from a novel training method that rewarded correct answers over comprehensible reasoning, prompting fears that AI systems could develop inscrutable modes of reasoning or invent their own non-human languages for efficiency.
Yoshua Bengio et al.
Time
From Digital Rights to International Human Rights: The Emerging Right to a Human Decision Maker
This blog post examines the emerging right to a human decision maker in the context of AI systems. It explores how automated decision-making affects individual rights and why human oversight is necessary to uphold ethical standards.
Yuval Shany
AI Ethics at Oxford Blog
Artificial Intelligence Ethics in Practice
This paper provides a variety of examples of ethical challenges related to AI, organized into four key areas: design, process, use, and impact. It emphasizes the importance of integrating ethical considerations throughout the development and deployment of AI systems to ensure responsible innovation.
Cussins Newman and Oak
Center for Long-Term Cybersecurity, UC Berkeley
The Role of Explainable AI in the Research Field of AI Ethics
This article presents the results of a systematic mapping study of the research field of AI ethics, focusing on the importance of explainable AI. It highlights the need for transparency in AI systems to ensure ethical accountability and public trust.
Author Name Not Provided
ACM Digital Library