Historical roots of AI assessment debates

  • Westin’s privacy preferences
  • Forsyth’s moral taxonomy
  • Winner’s technology as politics
  • AI challenges historical frameworks
  • References

The OpenAI board crisis shows how contentious AI ethics debates can be. These debates, spanning issues such as privacy, regulation, and technology-driven social inequality, have roots that long predate modern AI.

Westin’s privacy preferences

Westin studied American attitudes to privacy and identified three categories of views: privacy fundamentalists, privacy pragmatists, and the privacy unconcerned, tracking how their proportions shifted over time. AI assessment will need to balance its data collection needs against these widely varying privacy expectations.

Forsyth’s moral taxonomy

Forsyth identified four ethical ideologies defined by attitudes towards moral rules: absolutists insist on strict universal rules; exceptionists favour rules but permit exceptions in edge cases; situationists accept rules yet adapt them to context; and subjectivists judge by personal values as circumstances require. These groups are likely to react quite differently to AI assessment policies.

Winner’s technology as politics

Winner argued that technologies are political: a technology may require a particular power structure to operate, be strongly compatible with a particular political orientation, or settle social questions through its design. In its strongest form, AI is itself a power structure, and AI assessment systems will concentrate power in the hands of system designers and administrators.

AI challenges historical frameworks

Many issues cut across these categories, and different groups approach them from different starting points. Ethical solutions for AI will therefore need to be partial rather than universal, interim rather than permanent, and command wide rather than complete agreement, in the spirit of Lindblom's incrementalism of the 1950s.

AI also challenges these historical frameworks with its unprecedented capabilities, but the roots of the issues are not modern. More recent work (e.g., Emily Bender, Margaret Mitchell, Timnit Gebru, Meredith Whittaker, Cathy O'Neil, Safiya Noble) focuses specifically on AI, asking, for instance, whose ethics get implemented.

References

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).

Forsyth, D. R. (1980). A taxonomy of ethical ideologies. Journal of Personality and Social Psychology, 39(1), 175–184.

Lindblom, C. (2018). The science of "muddling through". In Classic readings in urban planning (pp. 31–40). Routledge.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

O'Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

Westin, A. F. (1968). Privacy and freedom. Washington and Lee Law Review, 25(1), 166.

Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S. M., Richardson, R., Schultz, J., & Schwartz, O. (2018). AI Now Report 2018 (pp. 1–62). AI Now Institute at New York University.

Winner, L. (2017). Do artifacts have politics? In Computer ethics (pp. 177-192). Routledge.
