Ethics in Artificial Intelligence


Why does AI ethics matter?

Artificial intelligence is a powerful tool – and like any powerful tool, it can be used for good or harm. As AI becomes more central to our lives, ethical questions become more urgent.

⚠️ Important: There are no single "right" or "wrong" answers to most ethical questions. The goal is to be aware of the issues and think about them carefully.

🎯 Bias

AI learns from data – and if the data is biased, the AI will be too.

Example: Hiring

Amazon developed an AI to screen résumés. It learned from the company’s hiring history – and because historically most hires were men, the AI favored men and discriminated against women. Result: The project was scrapped.

Example: Face recognition

Many face recognition systems are less accurate for people with darker skin – because they were trained mainly on images of lighter-skinned faces.

💡 Solutions: More diverse data, rigorous testing, and human oversight.
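The mechanism behind the hiring example can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration: the "model" is just the historical hire rate per group, but the core point is the same for real systems – patterns in the training data become the model's predictions.

```python
# Hypothetical historical hiring data: (group, was_hired).
# The imbalance below is invented for illustration.
historical_hires = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def hire_rate(group):
    """A naive 'model': predict based on each group's historical hire rate."""
    outcomes = [hired for g, hired in historical_hires if g == group]
    return sum(outcomes) / len(outcomes)

# The 'learned' rates simply mirror the historical imbalance:
print(hire_rate("male"))    # 0.75
print(hire_rate("female"))  # 0.25
```

A real résumé screener learns far subtler proxies (word choices, school names, hobbies), which is why biased outcomes can appear even when gender is never an explicit input.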

🔒 Privacy

AI needs data to work – but how much of our privacy are we willing to give up?

Hard questions:

  • Is it okay for Google to read our emails to filter spam?
  • Is it okay for Netflix to know exactly what we like to watch?
  • Is it okay for employers to monitor productivity with AI?
  • What about when governments use AI for face recognition in public spaces?

China: "Social credit" system

AI tracks citizens’ behavior and assigns a "social score". Low score = restrictions on flights, trains, and more.

💼 Employment and work

A major concern: Will AI replace our jobs?

Higher-risk jobs:

  • Drivers (autonomous vehicles)
  • Cashiers (self-checkout)
  • Call center workers (chatbots)
  • Translators (machine translation)
  • Basic content editors

Jobs that are likely safer (for now):

  • Work requiring high creativity
  • Work with deep human interaction
  • Complex physical work
  • Work requiring ethical judgment

💡 Upside: AI also creates new jobs – AI engineers, data annotators, tech ethicists, and more.

🎭 Deepfakes and misinformation

AI can create fake images, videos, and voice recordings that look completely real.

Risks:

  • Politics: Fake videos of politicians saying things they never said
  • Fraud: Fake phone calls in a family member’s voice
  • Reputation: Fake embarrassing images or videos
  • Fake news: Harder to know what’s real

⚠️ Rule of thumb: If something seems too shocking or too good to be true – check the source before sharing.

⚖️ Accountability – who is responsible?

When AI makes a mistake – who is responsible?

Scenario: Autonomous car hits a pedestrian

Who is at fault? The manufacturer? The programmers? The driver who was supposed to supervise? The AI itself? (But you can’t put software on trial…)

Scenario: Medical AI misdiagnoses

Who is responsible? The doctor who relied on the AI? The company that built the system? The hospital that adopted it?

🎮 Control and autonomy

How many decisions do we want to give to AI?

The autonomous car dilemma

An autonomous car must decide: hit a pedestrian or risk the passengers? Who should decide how the car is programmed – the manufacturer? The government? The buyer?

Autonomous weapons

Should AI be allowed to decide on its own whether to fire at a person? Most experts say no. There should always be a "human in the loop".
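The "human in the loop" idea can be made concrete with a minimal sketch: the AI may recommend an action, but an irreversible one is never executed without explicit human approval. The function names here are hypothetical, not from any real system.

```python
# Minimal 'human in the loop' pattern (hypothetical names):
# the AI recommends, but a human must approve before anything happens.
def execute_action(ai_recommendation, human_approves):
    if not human_approves:
        return "blocked: no human approval"
    return f"executed: {ai_recommendation}"

print(execute_action("engage target", human_approves=False))
# → blocked: no human approval
print(execute_action("engage target", human_approves=True))
# → executed: engage target
```

The design choice is that the default path is *blocked*: the system fails safe when the human is absent, rather than letting the AI act on its own.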

✨ The positive side

AI can also do a lot of good:

Positive uses:

  • Medicine: Early cancer detection, drug development
  • Environment: Predicting natural disasters, energy efficiency
  • Accessibility: Image descriptions for the blind, sign language translation
  • Education: Personalized learning
  • Science: Discoveries that would take humanity decades

📋 Principles for responsible use

1. Transparency – Knowing when AI is making decisions that affect us

2. Fairness – AI should not discriminate on the basis of gender, race, age, etc.

3. Accountability – There must always be someone responsible for decisions

4. Privacy – Protecting people’s personal information

5. Safety – AI must not harm people
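The fairness principle can actually be measured. One common check (called "demographic parity" in the fairness literature) compares how often a system makes a positive decision for each group. The data and numbers below are hypothetical, just to show the idea.

```python
# Hypothetical model decisions: (group, positive_decision).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def positive_rate(group):
    """Fraction of positive decisions the model gave this group."""
    outs = [d for g, d in decisions if g == group]
    return sum(outs) / len(outs)

# A large gap between groups is a warning sign worth investigating.
gap = abs(positive_rate("group_a") - positive_rate("group_b"))
print(round(gap, 2))  # 0.33
```

A gap alone does not prove discrimination (groups may genuinely differ on relevant criteria), but it is exactly the kind of transparency check auditors run on deployed systems.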

Summary 📝

  • AI is a tool – how we use it is up to us
  • Key issues: bias, privacy, employment, misinformation
  • It’s important to demand transparency from tech companies
  • We need smart regulation that balances innovation and protection
  • We’re all responsible – as consumers, citizens, and humans

📝 Test yourself

Answer 10 questions on AI ethics.

1. Why did Amazon’s AI discriminate against women in hiring?
2. Why are face recognition systems less accurate for people with darker skin?
3. What is "social credit" in China?
4. What is a deepfake?
5. When a medical AI misdiagnoses, who is responsible?
6. What does "human in the loop" mean?
7. Which jobs are at higher risk of being replaced by AI?
8. What is the "transparency" principle in AI ethics?
9. What can AI do for good?
10. What is the right thing to do when you see shocking content online?

