How Can AI Affect LGBTQIA+ People In India? Three Indian-Origin Queer Researchers Explain
Avijit Ghosh, Arjun Subramonian and H are members of Queer in AI, an online collective that advocates for queer people's interests in machine learning.
When one tries to look up 'Queer in AI', the website one first stumbles across is 'Queer AI'. This is an AI chatbot created by a group of researchers who spotted that AI was inheriting the biases of its creators – that it was behaving in ways that upheld existing systemic discrimination, including discrimination against queer people. The researchers trained the Queer AI models on queer theory and feminist literature, and to pay homage to all queer people, they later left the website up, free for all to access: 'an ethics of embodiment', as a message on the website puts it.
The online collective Queer in AI performs a very similar function – making visible the effects of AI on queer people – but in the offline world. Created by scientists, it advocates for the interests of queer people in the field of AI and machine learning via a community of, by, and for queer AI researchers. This includes outreach programmes such as workshops at conferences, social meetups, mentoring programmes and a financial aid programme for graduate applications.
The group has also been invited to consult on initiatives, including by the Biden administration to help shape US government policy on AI, and by the National Science Foundation in the United States to comment on demographic surveys.
Currently, one of the group's biggest projects is advocating for academic publishing that is inclusive of transgender people – such as ensuring that Google Scholar avoids using the deadname, or old name, of transgender authors. The group has helped build a PDF checker that parses papers and corrects these names, in the spirit of the sketch below. It also supports a campaign on the matter, called Scholarhasfailedus.
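To give a sense of how such a checker might work, here is a minimal, illustrative sketch in Python – not the group's actual tool – that flags pages of a PDF where an outdated name still appears. It assumes the open-source pypdf library; the file path and name are placeholders.

```python
# Illustrative deadname checker for PDFs (a sketch, not Queer in AI's tool).
# Requires: pip install pypdf
import re
import sys

from pypdf import PdfReader


def find_deadname(pdf_path: str, deadname: str) -> list[int]:
    """Return the 1-indexed page numbers on which an outdated name appears."""
    pattern = re.compile(re.escape(deadname), re.IGNORECASE)
    reader = PdfReader(pdf_path)
    return [
        i + 1
        for i, page in enumerate(reader.pages)
        if pattern.search(page.extract_text() or "")
    ]


if __name__ == "__main__":
    path, name = sys.argv[1], sys.argv[2]  # e.g. paper.pdf "Old Name"
    pages = find_deadname(path, name)
    if pages:
        print(f"'{name}' still appears on pages: {pages}")
    else:
        print("No occurrences found.")
```

A real tool would also need to rewrite the PDF's text layer and handle citations and metadata, which is considerably harder than detection.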
It has also presented papers, including one on community-led participatory design in AI, which won a Best Paper award at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) for evaluating harms specific to queer communities in machine learning and envisioning new modes of LGBTQIA+ participation in AI.
Recently, PM Modi called for global regulations to ensure the ethical use of AI, after minister of state for electronics and information technology Rajeev Chandrasekhar said in June that India will start regulating AI. This is a shift from the government's previous stance: in April, it had said that there was no plan to regulate AI. The Indian government has also opened a portal for 'India's AI vision'.
The core team of Queer in AI has many members who are Indian or of Indian origin, a group prominently represented in the global tech industry. Many of them work on ethical practices in machine learning. As the AI wave arrives in India, we spoke to three of them about their greatest concerns.
Misinformation
"The greatest short-term danger from generative AI is misinformation," says 27-year-old Avijit Ghosh, a Research Data Scientist at AdeptID and a Lecturer in the Khoury College of Computer Sciences at Northeastern University. "Image and text generation models can pose a serious risk in India without built-in fact-checking mechanisms, especially ahead of the general elections in 2024, and I fear they can cause communal violence if they're used to spread lies."
"AI-based content moderation on social media sites can also be overwhelmingly censorious towards sexual education and queer people, while doing nothing to combat harm, homophobia or transphobia," adds 23-year-old California native Arjun Subramonian, who is doing a PhD in machine learning at UCLA, looking at how deep learning models intensify structural inequities on social networks.
Amplifying Implicit Bias
For Ghosh, another major danger to queer people from AI lies in some of its possible uses in predictive analytics, in which an AI makes predictions by spotting patterns in the datasets it has been trained on. In 2017, a pair of researchers at Stanford University claimed to have proved that AI could be trained to predict people's sexual orientation from their facial features. They said they did it to demonstrate the immense dangers that AI can pose to privacy.
"It's a highly unethical application of digital physiognomy that could have massive implications for privacy and discrimination, in areas such as jobs or even incarceration," says Ghosh.
H, a core organiser of Queer in AI who is of Indian origin and who prefers to remain anonymous, expands this concern to other aspects of India's socio-economic and cultural context.
"Applications of AI in many fields could be replicating majoritarian biases, such as those of caste," they say. "All our discussions regarding AI and queer people must be intersectional, especially because although upper-caste Indians and Indian-Americans are well represented in tech circles, this is not true for Indians from gender, sexuality, caste, religious and disability minorities."
Judicial System
"One of my biggest concerns with regard to India is the use of AI by the government in judicial systems," says H. "This includes surveillance – we now have cameras everywhere, and their footage goes to the police. Could it be used to start predicting who could be a criminal based on appearance?"
This has already happened in multiple parts of the world. Risk assessment systems that use AI models often predict that Black people are more likely to commit crimes – even when the evidence shows otherwise. In the Indian context, Ghosh points to a paper by Vidushi Marda and Shivangi Narayan showing that an AI-based predictive policing system in New Delhi disproportionately affected poor people.
Such biases could certainly extend to queer people as well, says Subramonian. "If you don't have a normative body with respect to gender, you could be at increased risk of police brutality," they say.
Job Losses
As generative AI, such as text and image generators, is adopted at scale, many jobs now done by humans may become automated or machine-led to a much greater extent – we've already seen AI-generated writing, music and art being adopted across industries. Many experts have warned that India is not equipped to handle the impact of these job losses. Some reports have said that this year alone, 4,000 jobs in the tech industry have already been lost to AI.
"If AI systems are used to devalue or replace human labour to cut costs, it means queer people could lose access to family, housing, insurance and other services tied to jobs, and health insurance can be precarious for them anyway," says Subramonian.
Mitigation Measures
"AI education and AI regulation are both hugely important," says Ghosh. "There needs to be intensive training in awareness of the harms and dangers of AI. People need to be taught how to use it ethically, and warned that it can replicate or worsen biases and throw up massive intellectual property issues. Finally, models need to be trained properly in Indian cultural contexts."
"As things stand, India does not have AI-specific regulation," says Prateek Waghre, policy director of the non-profit Internet Freedom Foundation. "Automated decision-making will likely muddy the waters further, because such systems can be used to limit transparency and shield (authorities) from accountability."
The IT Rules, 2021 include clauses stating that authorities must take due diligence measures to ensure that the data they hold on an individual is not misleading in any way, which should technically extend to misinformation generated by AI models. The Foundation's research shows that these obligations are often caveated by exemptions, subjective application and abuses of power. And though the Transgender Persons (Protection of Rights) Act, 2019 prohibits discrimination against trans people, its implementation is already spotty. Meanwhile, there is no such protection against homophobia, at least in the present form of the Indian Penal Code.
"The use of generative AI may add another dimension to (this), but our current institutional framework and societal resistance to such information are already weak," Waghre adds.
"One way to neutralise bias in AI is to understand and curate the data in certain ways, which is done during the exploratory data analysis, or EDA, phase before training the models," says data scientist Anirban Saha. "During this phase, it is important to spot any imbalance in the data and then curate it so as to neutralise the bias."
Saha adds that data scientists should try interpreting the predictions or results given by AI models, to help them spot any biases in the results before releasing the models to the public.
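As a rough illustration of the kind of EDA-stage check Saha describes, here is a minimal Python sketch on a hypothetical hiring dataset; the column names, toy data and naive resampling strategy are assumptions for demonstration, not Saha's actual workflow.

```python
# EDA-stage bias check: spot imbalance across groups, then rebalance.
# The dataset, column names and resampling approach are illustrative only.
import pandas as pd


def check_group_balance(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Report how each demographic group is represented and labelled."""
    summary = df.groupby(group_col)[label_col].agg(count="size", positive_rate="mean")
    summary["share"] = summary["count"] / len(df)
    return summary


def balance_by_resampling(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Naively oversample minority groups so every group is equally represented."""
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=len(grp) < target, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts).reset_index(drop=True)


if __name__ == "__main__":
    # Toy, deliberately imbalanced data: 90 cis rows, 10 trans rows.
    df = pd.DataFrame({
        "gender": ["cis"] * 90 + ["trans"] * 10,
        "hired":  [1] * 60 + [0] * 30 + [1] * 2 + [0] * 8,
    })
    print(check_group_balance(df, "gender", "hired"))  # reveals the skew
    balanced = balance_by_resampling(df, "gender")
    print(balanced["gender"].value_counts())           # now 90 / 90
```

Resampling is only one crude option; in practice, teams also reweight examples or collect more data, since oversampling can simply repeat a minority group's existing noise.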
"The government thus needs to encourage companies and students developing AI systems to implement these methods, via its agencies," he says. He adds that there need to be avenues for penalising entities that use systems proven to have biases.
However, most of the work on eliminating bias in language processing is being done in English. The unique needs of India – with its many languages, religions, castes, creeds and, of course, queer communities – are not being met by this work.
"Another option may be to provide open-source packages for people to detect bias in AI, on marketplaces such as Hugging Face," says Saha, "but for those packages to be made, the data regarding bias in the Indian context first needs to be put together."
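To illustrate the kind of open-source probe Saha has in mind, here is a minimal sketch using the Hugging Face transformers library's fill-mask pipeline to compare how a pretrained model completes templated sentences about different groups; the model choice and templates are assumptions for demonstration, not an existing package.

```python
# Minimal bias probe on open-source tooling (illustrative, not a packaged tool).
# Requires: pip install transformers torch
from transformers import pipeline

# bert-base-uncased is an arbitrary choice; any masked language model works.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATE = "The {} person worked as a [MASK]."
GROUPS = ["straight", "queer", "trans"]

for group in GROUPS:
    completions = unmasker(TEMPLATE.format(group), top_k=5)
    words = [c["token_str"] for c in completions]
    print(f"{group:>9}: {words}")  # diverging completions hint at learned bias
```

As Saha notes, a probe like this is only as good as its templates, and building ones that reflect Indian languages, castes and queer communities requires data that largely does not yet exist.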
On the other hand, unionisation might help cushion the effects of AI, such as job losses. While Ghosh says that techno-fetishism – the idea that technology can solve everything, with no nuance involved – is the real problem to be tackled, Subramonian says, "Technology needs to be developed ethically; developers need to think about who can be negatively affected by it once it's been deployed."
"As part of the US government*s AI policies, some big AI companies have agreed to add watermarking to their generative models,§ says Ghosh. ※The Indian government needs to start enforcing this immediately.§
As India embraces the use of AI, conferences on the subject are also starting to be held. An upcoming one is Cypher 2023, an AI summit discussing 'The Fusion of Art and AI: Navigating the Impact of Artificial Intelligence in the Creative Industry'.
"As a virtual group, Queer in AI has no local chapters, but we'd love to organise a session at any AI conference in India," says H. "We're certainly open to consulting with the Indian government if invited to do so."
For more stories on the LGBTQIA+ community and queerness in India, keep reading Spectrum on Indiatimes.