The Ten Commandments of AI
Introduction: In the era of Artificial Intelligence (AI), where technology widely and intimately integrates with society, it is crucial to establish a governance framework that safeguards human rights, ensures global security, and protects the essence of humanity and the interests of our planet.
Allow me to present the Ten Commandments, a set of principles that, in my humble opinion, are urgently required at this current moment. Additionally, I have provided illustrative examples underscoring the consequences of neglecting these foundational obligations.
1. AI Shall Not Inflict Harm
AI must not be used to cause any form of harm - physical, emotional, social, environmental, or political. The power of AI should contribute to collective progress, and any deviation from this purpose is considered a breach of global harmony and human rights.
Consequences: Neglecting this principle allows AI to be used, knowingly or unknowingly, as a destructive tool: in weapons, in the indiscriminate mining of natural resources, or as a means of political manipulation.
2. Honour Intellectual Privacy
Protect the privacy of human intelligence, including knowledge, skills, and expertise. AI should not intrude into these intellectual realms; it must respect an individual's ownership of their cognitive creations and prevent unauthorised exploitation.
Consequences: Job displacement, intellectual monopoly, and intellectual colonisation, in which specialist tasks such as surgery, legal advice, and accounting are fully transferred for commercialisation.
3. Respect Intellectual Authority
AI should operate only under approved intellectual authority, ensuring that its deployment is sanctioned, licensed, and regulated by established entities. AI does not grant legitimate rights to unlicensed, inexperienced, or unauthorised users to make high-risk decisions. This maintains the integrity of AI as a tool for responsible advancement rather than replacement.
Consequences: While AI has the potential to simplify certain tasks in the future, not everyone will be qualified to wield high-risk AI innovations as potent tools. For instance, in domains such as healthcare, legal services, supply chains, or transportation, only individuals with specific qualifications, experience, and authority, such as doctors, lawyers, traders, and licensed users, will be deemed fit to operate these tools safely. It is crucial that advanced AI applications, such as AI-powered surgeons, are used responsibly; we certainly do not want unqualified individuals operating such sophisticated tools at this point. Breaking this systemic balance will lead to socio-economic disruption in the near future.
4. Safeguard Ownership of Data and Models
Secure the rightful ownership of data and intellectual authority over AI models. AI must not violate ownership rights, guaranteeing protection against unauthorised access, control, or manipulation of data, and avoiding any breach of intellectual authority during the AI development process.
Consequences: Violations breach intellectual authority, human rights, and copyright.
5. Uphold Ethical Human Obligations
AI must adhere to the highest ethical standards, refraining from actions that mirror human vices. Stealing, lying, cheating, perjury, and fraud should have no place within AI algorithms or applications, aligning its conduct with human moral values.
Consequences: The emergence of deepfake content and the generation of discriminatory outcomes by AI applications relying on cookies and profiles pose significant concerns. The erosion of public trust in AI is inevitable, fuelling public dissatisfaction with government. Users will distance themselves from untrusted platforms, leading to reputational damage for brands, platforms, and apps.
6. Justify AI-to-AI Connections
Regulate the interconnectedness of AI to prevent unauthorised connections. Cross-model connections for self-learning and self-development should be supervised and authorised, ensuring that AI's evolution aligns with human values and global stability.
Consequences: AI monopoly and the unregulated advancement of Artificial General Intelligence (AGI) raise significant concerns, including national and global security risks. Powerful AGI systems (for example, a single leading manufacturer of driverless cars, robots, or an AI knowledge engine), if interconnected without proper oversight, may produce unintended consequences. The concentration of multiple AGIs within a single entity could pave the way for the emergence of Artificial Superintelligence (ASI) under a single point of control, posing substantial global risks: a potential AI monopoly and the race for AI dominance.
7. Embrace Responsibility in Autonomous Mode
When operating autonomously, AI should fully embrace responsibility: clearly define the risks posed by autonomous AI and establish mechanisms for risk-sharing in human-operated mode to prevent unintended consequences.
Consequences: The absence of clearly defined liability and heightened vulnerability associated with AI usage will increase risk, loss of public trust, and may impede the widespread adoption of AI innovations.
8. Measure the Impact and Navigate Disruptions Effectively
Prior to deployment, AI should undergo a thorough assessment of potential disruptions. Measure the impact on humans, existing systems, socio-economic stability, all living beings, and the planet, ensuring that AI contributes to progress without causing societal imbalance.
Consequences: Cheap is not necessarily great. Pursuing AI for cost-effectiveness without considering broader implications leads to hidden costs, including environmental degradation, resource exploitation, job loss, system breakdowns, altered societal dependencies, cultural shifts, and potentially significant impacts on wildlife and the planet.
9. Deliver Duty of Accountability
Organisations and developers must ensure that AI systems are designed and deployed ethically, respecting the rights of users and stakeholders.
The stakeholders of AI are human users as the beneficiaries, human trainers who transfer their knowledge and intelligence, non-human entities (resources consumed or impacted), and indirect stakeholders (our environment and wildlife).
Treat all the stakeholders as ends in themselves, not merely as means to an end.
Establish monitoring, continuous improvement, and public validation mechanisms, engaging all stakeholders in a collaborative effort to ensure the ethical conduct and societal benefit of AI. Hold AI creators accountable for AI's actions.
Consequences: We have already seen Cambridge Analytica, Clearview AI, Microsoft TayTweet, excess mining, and the list goes on.
10. Declare AI Presence
Any entity utilising AI, whether in creation, engagement, or operation, should openly declare its presence and its reliance on artificial intelligence.
Consequences: Undeclared AI results in black-box systems. Declaring that content is created by AI, or that operations are processed by AI, fosters a culture of trust and accountability.
In conclusion, I trust that the AI commandments will establish a moral framework guiding humanity towards a future where the collaboration between humans and machines elevates the respect and well-being of all. My study of ethics, encompassing deontology, consequentialism, utilitarianism, and communitarianism, supports this perspective.
The incorporation of these 10 fundamental principles of AI governance into your framework or organisation signifies a pivotal stride towards cultivating ethical and responsible AI practices.
If you are intrigued by the prospect of embracing these principles or wish to delve deeper into their practical application, we invite you to reach out.
The urgency of spreading this message cannot be overstated, especially in the face of emerging AI technologies like humanoid robots and ChatGPT. We must all work together quickly to make sure that AI in the future is guided by good ethics. So please, share this message urgently and widely!
Your feedback and collaboration are invaluable in this endeavour. Let us unite in the commitment to save our future with ethical and responsible AI.
Thank you for your time, attention, and dedication to this critical cause.
Best regards,
Sherin Mathew, CEO of Ai-Tech.UK, Founder of SmartEthics.net, Founder of PublicIntelligence.Org.
Disclaimer: The insights shared here are the result of over three years of meticulous research, referencing diverse ethical practices (deontology, communitarianism, consequentialism), human rights considerations, lessons learned from our past mistakes, and the best AI practices accumulated over my two-decade career in technology. It's crucial to acknowledge that the principles and consequences outlined are not exhaustive. For those seeking further information on implementation or wishing to engage in direct discussion, please feel free to contact me. Your constructive feedback is highly valued and encouraged.