Future of Law

The Ethics of Using AI: Navigating Biases, Dangers, and Solutions

Artificial Intelligence (AI) has rapidly integrated into various sectors, promising transformative potential. From enhancing efficiency in industries to aiding decision-making processes, AI holds immense promise. However, alongside its advancements come ethical concerns that must be addressed. 

Today, AI development proceeds with little oversight from governments and regulatory bodies. As AI becomes a key part of how we live and work, it's important to acknowledge and confront the fundamental biases that may be ingrained within it.

The Ethical Dilemma of AI

1.1 Biases Embedded in AI Systems

AI's integration into society risks embedding biases deeper into various facets of life. The source data upon which AI algorithms are built often mirrors societal biases, leading to discriminatory outcomes. 

ChatGPT, the leading language model, achieved a remarkable milestone by amassing 1 million users within just five days of its launch. This widespread adoption underscores the prevalence of AI tools in today's world. 

However, like many other AI systems, the breadth of the source material used to train ChatGPT is unknown. As individuals worldwide increasingly rely on ChatGPT for information, idea generation, and content creation, it becomes challenging to gauge the potential impact of unintentional biases embedded within its training data. In the long term, these biases could significantly influence our collective societal perspective and outlook.

A recent Harvard article showcases one of the most compelling illustrations of how biased AI could affect our daily lives: the imminent integration of these algorithms into hiring practices and lending decisions.

Given that AI frequently relies on pre-existing data, there is a significant risk that these algorithms will perpetuate existing biases.

1.2 AI within the Realm of Law

UNESCO’s second forum on AI delved deeper into the ethical dilemmas facing the future of algorithms within justice and law.

These include:

  • Lack of oversight: The lack of transparency in AI tools raises significant concerns. AI-driven decisions often remain opaque to human comprehension, which raises significant questions about how a decision made by AI can be judged ‘right’ or ‘wrong’.
  • Lack of neutrality: The assumption that AI is inherently neutral has been debunked; algorithmic decisions have proven prone to inaccuracies and to perpetuating the existing biases of their source data.
  • Lack of privacy: AI algorithms require extensive data on individuals and their behaviours to make informed decisions within legal contexts, raising doubts about the ethics of surveilling people to this extent for algorithm-training purposes.

Potential Dangers of Unchecked AI Development

2.1 Risks to Privacy and Surveillance

Unchecked AI development poses significant risks, particularly regarding privacy, surveillance, and biased decision-making. In particular, it is crucial that the data used to train AI is safeguarded to prevent misuse.

The huge amounts of data used to train AI algorithms could cause catastrophic damage if they fall into the hands of bad actors. Without robust regulations and oversight mechanisms, there is a risk that sensitive personal information could be exploited for unethical purposes, such as surveillance or discriminatory targeting.

Presently, AI is being harnessed by cybercriminals to craft more sophisticated phishing emails and generate AI-driven voice recordings. With the proliferation of data accessible to these malicious actors, users confront escalating threats and heightened difficulty in distinguishing between authentic and fraudulent videos, emails, photos, and other content.

2.2 Biases in Healthcare and Justice Systems

Integrating AI into healthcare and justice systems requires confronting existing biases within these domains. One striking illustration, as highlighted by The University of San Diego, concerns the use of convolutional neural networks (CNNs) in dermatology. 

These AI tools have proven remarkably effective, matching or even surpassing the accuracy of trained dermatologists in identifying skin lesions, including melanoma. However, a significant issue arises from the fact that CNNs are predominantly trained on images of skin lesions from white patients, with only a minimal percentage of datasets representing black patients (approximately 5% to 10%). (Source.)

Consequently, when tested on black patients, CNNs exhibit only half the accuracy compared to their performance on white patients. This disparity is particularly concerning given that black patients have the highest mortality rate for melanoma, with an estimated 5-year survival rate of 70%, compared to 94% for white individuals.
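The kind of disparity described above can be surfaced with a straightforward per-group accuracy audit. The sketch below is purely illustrative, using toy data rather than the dermatology study's actual figures:

```python
# Illustrative sketch: auditing a classifier's accuracy per subgroup.
# All names and numbers here are hypothetical, not from a real dataset.

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy for each subgroup found in `groups`."""
    stats = {}
    for pred, label, group in zip(predictions, labels, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (pred == label), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Toy predictions for two groups of four patients each:
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
labels = [1, 1, 0, 1, 1, 0, 1, 0]
group  = ["light", "light", "light", "light", "dark", "dark", "dark", "dark"]

print(accuracy_by_group(preds, labels, group))
# → {'light': 1.0, 'dark': 0.25}
```

Running this kind of breakdown before deployment, rather than reporting a single aggregate accuracy figure, is exactly what would expose the gap the CNN studies found.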

Similarly, AI-driven predictive policing tools have faced criticism for disproportionately targeting marginalised communities. Data from the US reveals that black individuals are more frequently reported for crimes compared to their white counterparts, irrespective of the race of the reporter. Consequently, black neighbourhoods are disproportionately labelled as "high risk," exacerbating entrenched patterns of discrimination within the criminal justice system. 

Increased police presence in these communities may result in a higher number of reported crimes, further perpetuating biased perceptions. When machine learning algorithms are trained on this skewed data, they inadvertently reinforce false notions about which neighbourhoods are deemed "high risk." (Source.)

Towards Ethical AI Solutions

3.1 UNESCO's Framework for Ethical AI Development

UNESCO’s ‘Recommendation on the Ethics of Artificial Intelligence’ serves as a foundational framework for promoting ethical AI development and marks a crucial step forward. 

This framework underscores four key values essential for shaping responsible AI practices:

  • Human Rights and Human Dignity: Prioritising the respect, protection, and promotion of human rights and fundamental freedoms is paramount. Ensuring that AI technologies uphold human dignity is fundamental to ethical development.
  • Living in Peaceful, Just, and Interconnected Societies: Ethical AI should contribute to the advancement of societies characterised by peace, justice, and interconnectedness, fostering inclusive growth and collaboration.
  • Ensuring Diversity and Inclusiveness: AI governance must be inclusive, ensuring that diverse perspectives and voices are heard and represented in decision-making processes. Embracing diversity fosters innovation and mitigates the risk of bias.
  • Environment and Ecosystem Flourishing: Ethical AI development should prioritise environmental sustainability, safeguarding the health and resilience of ecosystems for present and future generations.

To achieve these goals and ensure ethical AI practices, better oversight and multi-stakeholder collaboration are imperative. UNESCO emphasises the importance of respecting international law and national sovereignty in data usage while advocating for the participation of diverse stakeholders in AI governance processes.

AI systems should also be auditable and traceable. This means putting robust oversight mechanisms in place, such as impact assessments and audits, to prevent conflicts with human rights norms and environmental well-being.
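As an illustration of what traceability can mean in practice, the sketch below records each model decision with a timestamp, a model version, and a hash of the input rather than the raw personal data. All names here (`predict_with_audit`, `ExampleModel`) are hypothetical, not part of any particular framework:

```python
# Hypothetical sketch of an audit trail for AI-assisted decisions.
import datetime
import hashlib
import json

audit_log = []

def predict_with_audit(model, model_version, features):
    """Run a prediction and record an auditable, traceable log entry."""
    prediction = model.predict(features)
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input rather than storing raw personal data (privacy).
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    })
    return prediction

class ExampleModel:
    """Stand-in for a real model, for demonstration only."""
    def predict(self, features):
        return "approve" if features.get("income", 0) > 50 else "reject"

decision = predict_with_audit(ExampleModel(), "risk-model-1.0", {"income": 80})
```

An auditor can later replay any logged entry against a given model version and verify, via the input hash, that the decision under review corresponds to the data actually submitted.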

3.2 Government Level Oversight

Aside from the implementation of a more ethical framework surrounding AI development, it is also crucial that governments take a more proactive and comprehensive approach to address the ethical challenges posed by AI deployment. Industry-specific oversight and regulatory measures, as advocated by Harvard, are essential for ensuring that AI technologies are deployed responsibly in various sectors.

UNESCO's Readiness Assessment Methodology (RAM) provides a valuable tool for evaluating AI projects and ensuring adherence to ethical principles. By incorporating RAM into AI development processes, organisations and governments can assess risks and promote ethical practices from the outset.

3.3 Technological Solutions

In addition to regulatory measures, technological solutions can also play a role in mitigating the ethical risks associated with AI. 

For example, researchers are exploring techniques such as algorithmic transparency and fairness-aware machine learning to improve the accountability and fairness of AI systems. By incorporating ethical considerations into the design and development process, it is possible to create AI technologies that align with societal values and respect fundamental rights.
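One simple fairness-aware measure is the demographic parity difference: the gap in positive-outcome rates between groups. The sketch below is a minimal illustration with hypothetical loan-approval data, not a production fairness toolkit:

```python
# Illustrative fairness check: demographic parity difference.
# Data and group labels are hypothetical.

def positive_rate(decisions, groups, group):
    """Share of positive (1) decisions received by one group."""
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_difference(decisions, groups):
    """Gap between the highest and lowest per-group positive rates."""
    rates = {g: positive_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy loan decisions (1 = approved) for two applicant groups:
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(decisions, groups))
# → 0.5  (group A approved at 0.75, group B at 0.25)
```

A gap near zero suggests the model treats the groups similarly on this metric; a large gap is a signal to investigate the training data and features before deployment.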


Ethical concerns surrounding AI, including biases, privacy, and the need for oversight, demand immediate attention. The swift, exponential growth of the technology makes effective regulation harder to establish: as with many recent technological advances, governments struggle to keep pace, leaving a significant gap in oversight and safeguarding.
Prioritising ethics and fairness in AI development is paramount to ensuring equitable benefits for all. As organisations embark on AI initiatives, it's essential to embrace ethical frameworks and collaborate with experts to navigate the complexities of AI development responsibly.

If you're not sure how to get your AI initiatives off the ground, or are struggling to find the right solution for your AI use cases, Deazy's AI sprint can help. Run by our AI product specialists, the sprint is designed to take you from idea to minimum viable product within a couple of weeks.