AI Ethics Framework

Ethical questions, risks and issues to think about to help create responsible AI products and services


A general ethics framework to help organisations learn about AI issues, used as a tool by www.ethicalby.design. For context around this framework, please read the README.md document.

Governments

-> General public, policy makers, heads of state

[general technology consideration]

  • Legislation for new technologies and their uses

  • Education / knowledge and awareness

  • Appropriate measurements of technologically influenced social flourishing

    • What do we mean by flourishing in the digital age?
  • Values

    • What values are you promoting that will inform decision making?

[specific AI technology consideration]

  • Appropriate AI usage (guidance)

    • Where is it, or isn’t it, appropriate to use AI/ML? (see roboethics)
  • Warfare and direct harm

    • When, if ever, is it appropriate to use AI and automated technologies in warfare?
    • Who is accountable / responsible for an AI’s operation and outcomes?
  • Surveillance

    • For what reasons is AI-based surveillance appropriate?

Businesses / organisations / brand

-> Director, CEO, Manager, CTO

[general technology consideration]

  • Product alignment / intentions

    • Does the service meet a meaningful human need, address an injustice, or create a positive change in behaviour?
  • Diversity

    • Is the team a reflection of the values and cultures found in your market, and of anyone else orthogonal to your service?
  • Business Values

    • What values are you promoting that will inform decision making during the project lifecycle?
  • Organisational culture / ethos

  • Human rights and measurements of human flourishing

  • Democratising

  • Political exploitation

  • Business model

[specific AI technology consideration]

  • Accountability

  • Job loss or augmentation

    • Have you considered how the system will impact your workforce, how they do their work, what upskilling they may need, or whether it will hollow out their jobs?
  • Dogma

    • Have you considered the dogmatic nature of your systems, which may reinforce or exacerbate unfairness found in the data or in the team’s implementation?

Design

-> Designer, system architect, UX, UI, product manager

[general technology consideration]

  • System Transparency

    • People should be aware that they are interacting with an AI system, and should be informed of the system’s capabilities and limitations.
  • Fairness

    • What are the measures of fairness in your particular area?
    • What compromises are you making between the fairness and equality of different groups?
  • Impersonation (Gender or otherwise)

    • How is the system being portrayed? (anthropomorphising)
  • Environmental impact

    • What is the direct negative environmental impact of your design? Is there a way to mitigate it?
  • Unintended consequences

    • If the product were to be wildly successful, what would the good and bad consequences be for the world?

[specific AI technology consideration]

  • Explainable

    • Is there any way to look into how a decision has been made?
    • Can a decision be explained to, and understood by, different stakeholders?
  • Upholding norms of wellness and human flourishing

    • Does the system promote exploitation, negative behaviours (like addiction), racism, and discrimination?

Data

-> data scientist, analyst, researcher, developer, programmer, sociologist, anthropologist, social scientist

[general technology consideration]

  • Data protection

    • Do you hold sensitive data securely?
    • Do you seek consent for a person’s sensitive data?
    • Do you value the person’s time?

[specific AI technology consideration]

  • Unwanted Bias

    • Fairness of participants in an AI system should be upheld and, where possible, be demonstrated.
  • Data relevance

    • If the data is old, unreliable, or dirty (a data science term for data with many problems that must be resolved before it can be used for inference), it may not be appropriate for use with Machine Learning techniques.
  • Data manipulation

    • Do you manipulate data to the detriment of your users to best fit your own goals?
  • Obfuscation and duplication of personal data

    • Machine Learning techniques can ingest / transform input data into weights and biases; even after the original data is deleted, the model may be able to reproduce it closely.
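
The memorisation risk above can be illustrated with a toy sketch (hypothetical data, not from this framework): a model with enough capacity can encode its training records in its parameters, so deleting the raw data does not delete the information.

```python
# Toy illustration with hypothetical data: a model with enough
# capacity memorises its training records in its parameters.
points = [(1.0, 5.0), (3.0, 11.0)]  # "sensitive" (x, y) records

# "Train" by fitting a straight line exactly through both points.
(x0, y0), (x1, y1) = points
slope = (y1 - y0) / (x1 - x0)
intercept = y0 - slope * x0

# Delete the original records...
del points, y0, y1

# ...yet the model parameters alone reproduce the "deleted" values.
print(slope * x0 + intercept)  # 5.0
print(slope * x1 + intercept)  # 11.0
```

Real models rarely interpolate this perfectly, but the same effect appears in practice as training-data memorisation, which is why deletion requests may need to cover trained models as well as raw data.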

Technology

-> data scientist, developer, programmer, coder

[general technology consideration]

  • Security

    • Do you hold sensitive data securely? [refer: #Data-protection]
    • Do you have an adequate security access-logging system?
  • Safety

    • Is your definition of safety in line with current norms, or best in class?
    • What safeguarding considerations have you made for stakeholders, users, citizens and animals when your system is operational?
    • In what ways can your system fail gracefully?

[specific AI technology consideration]

  • Monitoring

    • Definition of expected behaviour / error handling / unintended behaviour / introducing bias
  • Explainability

    • Is there any way to explain or demonstrate the process of getting to a decision? (see above)
  • Perversion / system exploitation

    • Is the model open to being perverted / exploited by external actors, now or in the future?
  • Procurement

    • Have you applied considerations from this framework to other suppliers’ work, e.g. third-party models?

Research

-> researcher, data scientist, sociologist, anthropologist, social scientist

[specific AI technology consideration]

  • Robust & appropriate

    • Is the research considered robust, appropriate and ethically sound to conduct?