The OpenAI board crisis illustrates how contentious AI ethics debates can be. These debates cover issues such as privacy, regulation, and technology-driven social inequality, all of which predate modern AI.
Westin’s privacy preferences
Westin studied American attitudes to privacy, identified three categories of views (privacy fundamentalists, privacy pragmatists, and the privacy unconcerned), and tracked how their proportions changed over time. AI assessment will need to balance its data collection needs against these widely varying privacy expectations.
Forsyth’s moral taxonomy
Forsyth identified four types of views about the importance of moral rules: absolutists want strict rules; exceptionists favour rules but allow edge-case exceptions; situationists favour rules but adapt them to context; and subjectivists adapt as they see fit. These groups are likely to react quite differently to AI assessment policies.
Winner’s technology as politics
Winner argued that technology is political: a technology might require a particular power structure, be consistent with a particular political orientation, or be used to settle social issues. In the strongest form of the claim, AI is itself a power structure. AI assessment systems are likely to concentrate power in the hands of system designers and administrators.
AI challenges historical frameworks
Many issues cut across these categories, and different groups start from different premises. AI ethics solutions will therefore need to be partial rather than universal, interim rather than lasting, and command broad rather than complete agreement (cf. Lindblom's incrementalism from the 1950s).
AI also strains these historical frameworks with its extreme capabilities, but the roots of the issues are not new. More recent work (e.g., Cathy O'Neil, Safiya Noble) focuses specifically on AI, asking, for example, whose ethics get implemented.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).