Queering AI: a queer epistemology for AI with a Global Majority perspective


Umut Pajaro Velasquez

volume 5 / number 1 / jul 2024


Abstract

The way we define “intelligence” and its epistemology1 responds to a particular way of knowing. AI, as a set of new technologies, reproduces this; queer theory, alongside critical race theory, has shown how it excludes other ways of knowing, including those traditionally considered feminine2, and has also shown that this “intelligence” is treated as equivalent to white men’s knowledge3. AI thus engages in a broader sociotechnical exclusion or repression of women’s knowledge and reifies a gender- and race-based conceptualization of “intelligence.”

On the other hand, AI offers an opportunity to change assumptions about gendered epistemology4. For example, narratives of “hard” and “soft” intelligence are often classified as male and female, respectively. Adrian Weller5 points out that this “hard” intelligence, which encompasses logic and rationality, is easier to reproduce in technological form, reinforcing the idea that it is the whole of “intelligence”. “Soft” intelligence, by contrast (creative problem-solving, empathy, negotiation, and persuasion, qualities that have historically been identified with and fostered in women), may come to be privileged precisely because it is difficult to codify6.

Whether AI is thought to depend on embodying a male epistemology, or whether AI promises to give a feminist epistemology a head start, AI is perpetuating and reinforcing binary gender stereotypes, leaving behind those who sit outside normative and binary concepts of gender. Hence, we propose a queer epistemology for AI with a Global Majority perspective, in order to create a framework that allows a common language across the whole AI life-cycle.

1. Introduction: the foundation of the AI life-cycle: the 4Ds, design, development, deployment, and detection of biases

To have an effective queer epistemological framework, we need a methodology with a queer perspective, and we need to answer some questions: how will queer and trans visions and narratives be included in the way we design, develop, deploy, and detect biases in AI? More importantly, what is being done to make these AI systems inclusive for queer and BIPOC people from the Majority World13, to mention just a few? How are we taking into consideration all the biases they face every day simply for living in a cis-male, white society?

If this framework is going to be used by technologists, final users, policymakers, sociologists, activists, and everyone involved in making the practical aspects of Artificial Intelligence real, we must consider using it in all stages of the foundation of any AI, the 4Ds:

  • Design14: the first stage, defining what kind of AI we want to build and who is going to benefit from or use it.
  • Development15: coding and modelling the technology we want to build, articulating it with an ethical and regulatory framework, and running a first stage of tests with the possible beneficiaries or users.
  • Deployment16: testing the technology and putting it in contact with its final users at small, medium, or large scale, under real-life conditions, according to necessities.
  • Detection of biases17: addressing and auditing the biases found once the technology has been deployed, and creating the necessary amendments in terms of regulations, code, and other matters, so as to make the AI less harmful and more beneficial.

Whatever technology we are going to bring to people, we must take into account the following aspects: first, theoretical aspects, which respond to the questions of which theories are part of it, and why; second, methodological aspects, that is, how to apply these theories to practices and actions; and finally, methods and tools, in other words, which ones to use, and how, according to the stage.
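To make the fourth D more concrete, the sketch below shows one minimal way a post-deployment bias audit could disaggregate a model’s outcomes by self-identified gender without collapsing identities into a binary. It is a hypothetical illustration only: the function names, the categories in the sample data, the selection-rate metric, and the tolerance threshold are our assumptions, not a prescribed implementation.

```python
# A minimal sketch of the "detection of biases" stage (illustrative only).
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (self_identified_gender, decision) pairs,
    where decision is 1 (favourable) or 0 (unfavourable)."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for gender, decision in records:
        totals[gender] += 1
        favourable[gender] += decision
    return {g: favourable[g] / totals[g] for g in totals}

def audit(records, tolerance=0.1):
    """Flag every group whose favourable-outcome rate falls more than
    `tolerance` below the best-served group; each flag should feed back
    into the design, development, and deployment stages of the 4Ds."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if best - rate > tolerance}

# Example with outcomes disaggregated beyond the binary (made-up data):
sample = [("woman", 1), ("woman", 0), ("man", 1), ("man", 1),
          ("non-binary", 0), ("non-binary", 0), ("travesti", 1), ("travesti", 0)]
print(audit(sample))  # groups under-served relative to the best-served one
```

The point of such a sketch is not the particular metric but the feedback loop: whatever a community flags at this stage should reopen decisions made in the three earlier Ds.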

As we are asking designers, developers, technologists, and policymakers to be more fair, accountable, transparent, and ethical in setting the foundation for AI and making it less harmful for queer people, these aspects must include non-traditional theories to feed the datasets that will make it work18, new approaches to traditional social science methods and tools such as interviews, focus groups, and participatory design, and new combinations of methodological approaches from the social sciences, humanities, art, mathematics, natural sciences, engineering, and others19.

This will also demand that all of us involved in this process think outside the box and bring in queer20, decolonial21, Abya Yala22, MLP23, NLP24, neuroscience25, Global Majority trans26, body27, and embodiment28 theories, together with cultural studies29; and, as methodology, interviews, focus groups, participatory design, documentary or case analysis, field research, law-making and regulatory processes30, and others. Altogether, this creates an epistemological framework that matches the current expectation that AI respect human rights, and by this we mean all human rights, including those of queer people no matter where they come from or live.

This is probably not yet a definitive solution for making these technologies more beneficial for queer communities. It is also understandable that approaching this issue only from a theoretical framework could produce biased results; that is why, in the practice of bias detection, we also need to include language that can transform these specific harmful stereotypes into regulations, formal definitions, and actions that can benefit not only queer people but all of us. After all, as someone once said: “everything exists in the law, and nothing stands outside of it.”

Methodology: Making AI frameworks with a queer perspective

There is a need for research that analyses how AI will impact gender equality, gender diversity, and/or queerness. So far, little attention has been paid to interpreting existing laws, policies, and theories through a gender and queer lens, or indeed to researching how these structures could be leveraged to strive for gender equality and diversity.

Research could explore existing and emerging frameworks concerning AI, gender, and queerness. Specifically, such exploration could include, but would by no means be restricted to, policies, laws, research, ongoing frameworks, and social aspects surrounding two particular areas that are key in the 4Ds stages of an AI: data and privacy, and technological design.

As demonstrated in the research context, these areas are already being considered for AI more generally, but they would benefit from additional gender-based and queer-based perspectives. They could be analysed through two mechanisms: (1) gender, race, queer, and decolonial theories; and (2) a series of interviews with technologists, experts, and policymakers.

Firstly, a theoretical analysis could be used to consider how policy, legislation, and the use of queer theories from the Global Majority can help AI work for gender equality, and for social equality more broadly31. Secondly, the interviews would function as a way to gain mutual understanding between policymakers and technologists regarding definitions of gender, and regarding how vulnerable gender groups would be impacted by certain structural changes. Whittlestone et al.32 outline that knowledge of technological capabilities should inform our understanding of the ethical tensions, which will be “crucial for policymakers and regulators working on the governance of AI-based technologies”. Collaboration between experts, policymakers, and technologists would enable the formation of frameworks that tackle the main issues in a thorough, accurate, and realistic manner.

Additionally, a set of guidelines for ongoing developments would outline certain standards to be upheld when designing and implementing systems that, directly or indirectly, impact issues surrounding AI and gender diversity. The practical element of these standards is of the utmost importance. They are not an ethical framework that cannot be directly applied; rather, they would be specific, context-related, and therefore straightforward for policymakers, technologists, and others to implement.

Overall, there is a need for research that assesses how emerging and future epistemological frameworks (Villani, 2018) are failing to establish gender equality, and how they could be altered to strive for social justice. Briefly, this points to the following: first, analysing current and emerging legal, policy, ethical, and academic frameworks that impact the intersection of AI and gender diversity, especially those from the Majority World; second, outlining specific recommendations for alterations to the laws, policies, and academic frameworks surrounding AI and gender diversity, as well as a set of research-based guidelines for ongoing developments. These would rigorously promote the enhancement of social justice and of gender equality and diversity.

After that, we need to collaborate with existing research projects, initiatives, policymakers, experts, users, designers, developers, and technologists. This research aims to contribute to understanding what it means to be inclusive, especially concerning questions of gender diversity and queerness beyond the perspective of the Global North.

And finally, it is also important to harness an intersectional approach and consider how these technologies impact and shape gender, as well as race, ethnicity, sexuality, social class, disability, and so on. This will aid the pursuit of shaping structures in a way that considers not just one aspect of identity that could be detrimentally impacted by AI but multiple; in a few words, it means creating mechanisms, embedded into the different frameworks, that allow anybody who detects a bias to share that information so as to improve the technology already designed, developed, and deployed, as sketched below.
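As a purely illustrative sketch of such a mechanism (the record fields, stage names, and routing rule are our assumptions, not an established standard), a bias report could be a small structured record that anyone can file and that is routed back to the stage of the 4Ds where the fix belongs:

```python
# A hypothetical, minimal bias-report record and routing rule.
from dataclasses import dataclass, field
from datetime import datetime, timezone

STAGES = ("design", "development", "deployment", "detection")

@dataclass
class BiasReport:
    system: str                 # which AI system the report concerns
    description: str            # what the reporter observed, in their own words
    affected_groups: list[str]  # self-described, never forced into fixed categories
    suspected_stage: str        # which of the 4Ds the reporter thinks is at fault
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def route(report: BiasReport) -> str:
    """Send the report to the team owning the suspected stage; anything
    unclear defaults to the detection (auditing) stage."""
    return report.suspected_stage if report.suspected_stage in STAGES else "detection"

# Example: a user reports misgendering behaviour in a hypothetical system.
report = BiasReport(
    system="chat-assistant",
    description="Assumes binary pronouns for names it does not recognise.",
    affected_groups=["non-binary users"],
    suspected_stage="design",
)
print(route(report))  # -> "design"
```

The design choice that matters here is that affected groups are self-described free text rather than a fixed taxonomy, keeping the mechanism open to identities the original designers did not anticipate.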

Methods for research

Interviews

Interviews would be conducted with researchers, designers, developers, technologists, and policymakers working in the relevant field, as well as with users who self-identify as queer, gender-diverse, or as women from the Global Majority, with some or no expertise on the topic, but who are exposed to these technologies in their everyday lives. The interviews would be useful for understanding how AI functions in discriminatory or inclusive ways in two areas:

  • Data. In addition to understanding how data is used in ways that are both visible and invisible to the public eye, and how this could be abused, interviews would cover how viable it would be to regulate, or to address within an epistemological framework, the large amounts of data used by diverse AI systems.
  • Technological design. Interviews would focus on the process of design, seeking insight into decision-making and which processes, theories, laws, and policies influence these design decisions.

Interviews with policymakers would allow the research to understand the processes, definitions, tensions, and trade-offs employed in current policy documents. Overall, interviews would enable the recommendations to be as specific and realistic as possible, especially those with users, who could bring a set of expectations that would allow a better understanding of the ideal AI when it comes to gender and queer issues.

Theoretical Analysis

Some principles of feminist theory have been used in the past to analyse AI and to shape the ethics surrounding these technologies34. Feminist legal theory, for example, has been employed to analyse technical issues such as privacy, surveillance, and cyberstalking35.

Feminist, decolonial, and Majority World theories would be employed to analyse gendered and queer aspects of AI. Mary Hawkesworth36 outlines how feminist scholarship seeks to reshape the dominant paradigms so that women’s and gender-expansive people’s needs, interests, and concerns can be understood and considered in this process. Canada, Norway, and Sweden have all adopted gender-, queer- and feminist-informed approaches to their foreign policies, for example. Aggestam, Bergman-Rosamond, and Kronsell37 draw upon feminist theory and the ethics of care to theorize feminist foreign policy. This use of gender theory could be replicated to shape practices and theories surrounding AI.

In addition, anti-essentialist theories could be harnessed and used for analysis. In Feminist Legal Theory, Levit and Verchick38 outline how, during the mid to late 1980s, several legal theorists complained about the essentialist nature of feminist legal theory. In ‘Race and Essentialism in Feminist Legal Theory’39, Angela P. Harris argues that feminist legal theory relies on gender essentialism: the notion that a unitary, essential women’s experience can be isolated and described independently of race, socio-economic class, sexual orientation, and other realities of experience. The result of this is:

“[…] Not only that some voices are silenced to privilege others… but that the voices that are silenced turn out to be the same voices silenced by the mainstream legal voice of ‘we the people’ – among them, the voices of black women.” (Harris, 1990).

This research would draw on relevant theories relating to race, gender, ethnicity, disability, sexuality, and so on to analyse existing and emerging frameworks from an intersectional perspective. This will help to ensure that a broad range of standpoints is considered when it comes to shaping AI with a more inclusive queer perspective. For example, this could include the use of critical race feminist theory, which looks at how traditional power relationships are maintained, as well as postmodern gender theory and the work of scholars who apply queer or transgender theory to technology and law. Such research could also consider how narrative analysis might enhance traditional methodologies when perspectives from the Global Majority are included, perspectives that are usually excluded and considered only as users, not designers, in the development and deployment of these kinds of technologies.

Proposed analysis

Harnessing this theoretical work alongside the interviews, this kind of research would examine relevant theories, narratives, laws, and policies to assess their impact on issues of gender equality, gender diversity, and queerness from a Global Majority perspective. It would especially be concerned with the theory, social needs, legislation, and policy surrounding the key areas identified here: data and privacy, and technological design.

Broadly speaking, this would isolate any content or wording which relates to how technology can facilitate inequality of power, discrimination, or social injustice. Within this analysis, the focus could be on: (1) how these structures impact gender, racial, and ethnic minorities; (2) how these theories, narratives, and pieces of legislation use language and terminology that assumes essentialist views of gender in intersection with race and ethnicity; (3) the loopholes which could allow for potential inequality of power or discrimination; (4) the subtext or sub-narrative in these pieces of legislation or ethical frameworks, including their assumptions about what is meant by gender, race, and ethnicity; and, finally, (5) how these structures could be altered to better endorse social justice and equality.

We will face some challenges when using this approach to research AI from a gender and queer perspective, also keeping critical race and decolonial theories in mind. One is ensuring that technical and legal definitions of bias, equality, and fairness match up with what is valued more broadly in society, especially in the Majority World. Another is that developments, deployments, and laws and policies on AI are still at an embryonic stage, which could make the process slightly staggered. However, this could also be an opportunity, especially as many policies and technologies are not yet ossified. Such research will need to keep abreast of emerging developments, and work to create access to, and inform, policies and technologies in development. It will be important for researchers to consider how they will address the trade-offs in terms of moral and ethical guidelines40.

This work hopes to contribute to the ongoing development of theories, policies, and law, and more generally to create a common language, an epistemological framework, surrounding AI, gender diversity, and queerness that includes the Global Majority, and thereby to be influential in shaping their content and impact. It has been established that the concepts we use and the way we embed them into technologies affect our behaviour40. The structural changes implemented could contribute to shifting behaviour surrounding gender equality, and their intersectional nature would enable us to consider many different standpoints, working for widespread social justice and a redistribution of power.

F.A.T.E: a complementary analysis

Another approach addresses the detection of biases more obliquely, with accountability measures designed to identify discrimination in the processing of personal data. Numerous organizations and companies, as well as several researchers, propose such accountability. Given the difficulties of foreseeing the outcomes of AI technologies, as well as of reverse-engineering algorithmic decisions, no single measure can be completely effective in avoiding perverse effects. Thus, where algorithmic decisions are consequential, it makes sense to combine measures designed to work together. Advance (ex-ante) measures such as fairness, accountability, transparency, and ethics (F.A.T.E), combined with the retrospective checks of audits and human review of decisions (detection of biases), could help identify and address unfair results. A combination of these measures can complement each other and add up to more than the sum of the parts. It would also strengthen existing remedies for actionable discrimination by providing documentary evidence that could be used in litigation, in creating new laws, policies, and frameworks, and in developing a deeper understanding of the social implications of the different AI technologies, so that we can use those results to improve them or stop using them altogether.
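To illustrate how advance F.A.T.E measures and retrospective checks could add up to more than the sum of their parts, here is a minimal, hypothetical sketch; the checklist items, the error-rate comparison, and the tolerance are our assumptions rather than an established standard:

```python
# Hypothetical combination of ex-ante F.A.T.E checks and a retrospective
# audit that queues decisions for human review (illustrative only).

FATE_CHECKLIST = {
    "fairness": "disparity metrics chosen with affected communities",
    "accountability": "a named owner for each of the 4Ds stages",
    "transparency": "public documentation of data sources and limits",
    "ethics": "review against human-rights commitments",
}

def ex_ante_gaps(provided):
    """Return the F.A.T.E items still missing before deployment."""
    return set(FATE_CHECKLIST) - set(provided)

def retrospective_review(decisions, error_rate_by_group, tolerance=0.05):
    """Queue decisions for human review when their group's observed error
    rate exceeds the overall average by more than `tolerance`."""
    overall = sum(error_rate_by_group.values()) / len(error_rate_by_group)
    flagged = {g for g, e in error_rate_by_group.items() if e - overall > tolerance}
    return [d for d in decisions if d["group"] in flagged]

# Example usage with made-up documentation and error rates:
print(ex_ante_gaps({"fairness", "transparency"}))  # missing items (order may vary)
queue = retrospective_review(
    decisions=[{"id": 1, "group": "trans women"}, {"id": 2, "group": "men"}],
    error_rate_by_group={"trans women": 0.30, "men": 0.10, "women": 0.12},
)
print([d["id"] for d in queue])  # -> [1], routed to human reviewers
```

Audit outputs of this kind are one possible form of the documentary evidence for litigation and policy-making that this section describes.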

Conclusions

We think this proposal could at first seem problematic and hard to achieve, but it is exactly there that this transdisciplinary approach can offer a more holistic way to understand, embody, and code the experiences of queer, trans, and marginalized people into AI and other new technologies where data is the main source.

We argue that any AI design, development, deployment, and bias-detection framework that aspires to be fair, accountable, transparent, and ethical must incorporate queer, decolonial, trans, and other theories from the Global Majority into its 4Ds. We additionally underline the importance of justice and enfranchisement, shifting power to the disempowered, as core values of any accountable and responsible AI system. Creating such AI requires, first, funding, supporting, and empowering grassroots work and advocacy, to discuss whether and how gender, sexuality, and other aspects of queer identity should be used in datasets and AI systems, and how risks should be addressed so as to cause less harm, or none, along these lines. It also requires not forgetting the final users, so that the field of AI can draw on diversity and inclusion to credibly and effectively develop reliable AI. On this basis, we want to create an epistemological framework with a queer perspective for AI, as explained above, and start analysing the benefits it can bring in its social, mathematical, technical, and practical regulatory aspects.

References

  1. ADAM, A. Artificial Knowing: Gender and the Thinking Machine. London: Routledge. 1998.
    ADAM, A. Gender, Ethics and Information Technology. London: Palgrave Macmillan. 2005.
  2. AGGESTAM, K., BERGMAN-ROSAMOND, A. and KRONSELL, A. Theorising feminist foreign policy. International Relations [online], 32(4), pp. 1-17. 2018. Available at: https://journals.sagepub.com/doi/pdf/10.1177/0047117818811892
  3. ASARO, P. Robots and Responsibility from a Legal Perspective. In: Proceedings of the IEEE International Conference on Robotics and Automation. Rome: IEEE. 2007.
  4. BUTLER, Judith. Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge. 1990.
    BUTLER, Judith. Critically queer, pp. 11-29. Routledge. 2020.
  5. COLLETT, Clementine and DILLON, Sarah. AI and Gender: Four Proposals for Future Research. Cambridge: The Leverhulme Centre for the Future of Intelligence. 2019.
  6. COLLINS, P. H. Black Sexual Politics: African Americans, Gender, and the New Racism. New York: Routledge. 2005.
  7. COWLS, Josh, KING, Thomas, TADDEO, Mariarosaria and FLORIDI, Luciano. Designing AI for Social Good: Seven Essential Factors. 2019. Available at SSRN: https://ssrn.com/abstract=3388669 or http://dx.doi.org/10.2139/ssrn.3388669
  8. DIGNUM, V. Responsible Artificial Intelligence: Designing AI for Human Values. ITU Journal: ICT Discoveries, Special Issue No. 1, pp. 1-8. 2017.
  9. ELLIOT, P. and ROEN, K. Transgenderism and the Question of Embodiment: Promising Queer Politics? GLQ: A Journal of Lesbian and Gay Studies, 4(2), pp. 231-261. 1998.
  10. EPSTEIN, S. A queer encounter: Sociology and the study of sexuality. Sociological Theory, pp. 188-202. 1994.
  11. ERDÉLYI, O. and GOLDSMITH, J. Regulating Artificial Intelligence: Proposal for a Global Solution. In: AIES ’18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA. 2018. Available at: http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_13.pdf
  12. FOMICHOV, V. Semantics-Oriented Natural Language Processing: Mathematical Models and Algorithms (Vol. 27). Springer Science & Business Media. 2009.
  13. FOUCAULT, Michel. Discipline and Punish: The Birth of the Prison. London: Penguin Books. 1991.
  14. GROSSBERG, Lawrence. Cultural Studies and Deleuze-Guattari, Part 1. Cultural Studies, 28(1), pp. 1-28. 2014.
  15. GROSZ, E. Volatile Bodies: Toward a Corporeal Feminism. Bloomington: Indiana University Press. 1994.
  16. HARAWAY, Donna. Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s. Socialist Review, 80, pp. 65-108. 1985.
    HARAWAY, Donna. Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™: Feminism and Technoscience. Journal of the History of Biology, 30(3), pp. 494-497. 1997.
  17. HARRIS, A. Race and Essentialism in Feminist Legal Theory. Stanford Law Review, 42(3), pp. 581-616. 1990.
  18. HAWKESWORTH, M. Policy studies within a feminist frame. Policy Sciences, 27, pp. 97-118. 1994.
  19. JAGOSE, A. Queer Theory. New York: NYU Press. 1997.
  20. LEVIT, N. and VERCHICK, R. Feminist Legal Theory: A Primer. New York: New York University Press. 2006.
  21. FINDLAY, M. and SEAH, J. An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections. In: 2020 IEEE/ITU International Conference on Artificial Intelligence for Good (AI4G), pp. 192-197. 2020. DOI: 10.1109/AI4G50087.2020.9311069.
  22. MONASTERIOS, G. Abya Yala en Internet: políticas comunicativas y representaciones de identidad de organizaciones indígenas en el ciberespacio. In: Políticas de identidades y diferencias sociales en tiempos de globalización, pp. 303-330. 2003.
  23. MUSTAFA, A. “White Crisis” and/as “Existential Risk”, or The Entangled Apocalypticism of Artificial Intelligence. Zygon: Journal of Religion and Science, 54(1), pp. 207-224. 2019.
  24. O’CONNOR, S. The robot-proof skills that give women an edge in the age of AI. Financial Times. 2019. Available at: https://www.ft.com/content/06afd24a-2dfb-11e9-ba00-0251022
  25. OYĚWÙMÍ, O. The Invention of Women: Making an African Sense of Western Gender Discourses. Minneapolis; London: University of Minnesota Press. 1997.
  26. SALES, L. Algorithms, Artificial Intelligence and the Law. Judicial Review, 25(1), pp. 46-66. 2020.
  27. SAVAGE, N. How AI and neuroscience drive each other forwards. Nature, 571(7766), S15+. 2019. Available at: https://link.gale.com/apps/doc/A594456957/AONE?u=anon~18f351ba&sid=googleScholar&xid=719bedf7
  28. ALAM, Shahidul. Majority World: Challenging the West’s Rhetoric of Democracy. Amerasia Journal, 34(1), pp. 88-98. 2008. DOI: 10.17953/amer.34.1.l3176027k4q614v5
  29. SHIRAEV, E. B. and LEVY, D. A. Cross-Cultural Psychology: Critical Thinking and Contemporary Applications. Routledge. 2020.
  30. SILVERSTONE, R. Domesticating domestication: Reflections on the life of a concept. In: Domestication of Media and Technology, p. 229. 2005.
  31. SMITH, C. J. Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Development. arXiv, abs/1910.03515. 2019.
  32. SPIVAK, G. Can the Subaltern Speak? In: Nelson, C. and Grossberg, L. (eds.), Marxism and the Interpretation of Culture. Urbana: University of Illinois Press, pp. 271-313. 1988.
    SPIVAK, G. In Other Worlds: Essays in Cultural Politics. New York: Routledge. 1988.
  33. STANLEY, E. and SMITH, N. Captive Genders: Trans Embodiment and the Prison Industrial Complex. Edinburgh: AK Press. 2011.
  34. STONE, S. The Empire Strikes Back: A Posttranssexual Manifesto. Camera Obscura, 10(2(29)), pp. 150-176. 1992.
  35. STRYKER, S. Transgender History. Berkeley, CA: Seal Press. 2008.
  36. THEODOROU, A. and DIGNUM, V. Towards ethical and socio-legal governance in AI. Nature Machine Intelligence, 2, pp. 10-12. 2020. Available at: https://doi.org/10.1038/s42256-019-0136-y
  37. VILLANI, C. For a Meaningful Artificial Intelligence: Towards a French and European Strategy. Mission assigned by Prime Minister Édouard Philippe. 2018. Available at: https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf
  38. WAJCMAN, J. The Feminisation of Work in the Information Age? In: Fox, M. F., Johnson, D. and Rosser, S. (eds.), Women, Gender, and Technology. Champaign, IL: University of Illinois Press, pp. 80-97. 2006.
  39. WARNER, M. (ed.). Fear of a Queer Planet: Queer Politics and Social Theory (Vol. 6). Minneapolis: University of Minnesota Press. 1993.
  40. WAYAR, Marlene. Travesti: una teoría lo suficientemente buena. Photographs by Lina M. Etchesuri; illustrated by Nina Kunan. 2nd reprint. CABA: Muchas Nueces. 2019.
  41. WELLER, Adrian. Transparency: Motivations and Challenges. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L. and Müller, K.-R. (eds.), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Lecture Notes in Computer Science, vol. 11700. Cham: Springer. 2019. Available at: https://doi.org/10.1007/978-3-030-28954-6_2
  42. WHITTLESTONE, J. et al. Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation. 2019.
  43. XI, Zhexu. How Can Humans Drive the Development of Ethical Artificial Intelligence? DOI: 10.1007/978-3-030-73103-8_66. 2021.
Umut Pajaro Velasquez umutpajaro@gmail.com

PhD candidate in Information Systems, Malmö Universitet. Master in Cultural Studies. YouthLACIGF Ambassador.