When one tries to look up 'Queer in AI', the website one first stumbles across is 'QueerAI'. This is an AI chatbot that was created by a group of researchers who spotted that AI was inheriting the biases of its creators – that it was behaving in a way that upheld existing systemic discrimination, including discrimination against queer people. The researchers trained the QueerAI models on queer theory and feminist literature, and, to pay homage to all queer people, they later left the website up, free for all to access – 'an ethics of embodiment', as a message on the website puts it.
The online collective Queer in AI performs a very similar function – making the effects of AI on queer people visible – but in the offline world. Created by scientists, it advocates for the interests of queer people in the field of AI and machine learning through a community of, by, and for queer AI researchers. This includes outreach programmes such as workshops at conferences, social meetups, mentoring programmes and a financial aid programme for graduate applications.
The group has also been invited to consult on initiatives, including by the Biden administration to help shape government policy on AI, and by the National Science Foundation in the United States to comment on demographic surveys.
Currently, one of the biggest projects the group is focusing on is advocating for making academic publishing inclusive of transgender people – such as ensuring that Google Scholar avoids using the deadname, or old name, of transgender authors. The group has helped to build a PDF checker that parses papers and corrects these names. They are also supporters of a campaign on the matter, called Scholarhasfailedus.
It has also presented papers, such as one on community-led participatory design in AI, which won best paper at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) for evaluating harms specific to queer communities in machine learning and envisioning new modes of LGBTQIA+ participation in AI.
Recently, PM Modi has called for global regulations to ensure the ethical use of AI, after minister of state for electronics and information technology Rajeev Chandrasekhar said in June that India will start regulating AI. This is a shift from the government's previous stance: in April, it had said that there was no plan to regulate AI. The Indian government has also opened a portal for 'India's AI vision'.
The core team of Queer in AI has many members who are either Indian or of Indian origin – a group prominently represented in the global tech industry. Many of them work on ethical practices in machine learning. As the AI wave arrives in India, we spoke to three of them about their greatest concerns.
"The greatest short-term danger from generative AI is misinformation," says 27-year-old Avijit Ghosh, a Research Data Scientist at AdeptID and a Lecturer in the Khoury College of Computer Sciences at Northeastern University. "Image and text generation models can pose a serious risk in India without built-in fact-checking mechanisms, especially ahead of the general elections in 2024, and I fear they can cause communal violence if they're used to spread lies."
"AI-based content moderation on social media sites can also be overwhelmingly censorious for sexual education and queer people, while doing nothing to combat harm or homophobia or transphobia," adds 23-year-old California native Arjun Subramonian, who is currently doing a PhD in machine learning at UCLA, looking at how deep learning models intensify structural inequities on social networks.
For Ghosh, another major danger to queer people from AI is from some of its possible uses in predictive analytics, in which an AI makes predictions by spotting patterns in the datasets it has been trained on. In 2017, a pair of researchers at Stanford University claimed to have proved that AI could be trained to predict people's sexual orientation from their facial features. They said they did it to prove the immense dangers that AI can pose to privacy.
"It's a highly unethical application of digital physiognomy that could have massive implications for privacy and discrimination, in areas such as jobs or even incarceration," says Ghosh.
H, a core organiser of Queer in AI who is of Indian origin and who prefers to remain anonymous, expands this to other aspects of Indian socio-economic and cultural contexts.
"Applications of AI in many fields could be replicating majoritarian biases such as caste," they say. "All our discussions regarding AI and queer people must be intersectional, especially because although upper-caste Indians and Indian-Americans are well represented in tech circles, this is not true for Indians belonging to gender, sexuality, caste, religious and disability minority communities."
"One of my biggest concerns with regard to India is the use of AI by the government in judicial systems," says H. "This includes surveillance – we now have cameras everywhere, and their data goes to the police. Could it be used to start predicting who could be a criminal based on appearance?"
This has already happened in multiple parts of the world. Risk assessment systems that use AI models often predict that Black people are more likely to commit crimes – even when the evidence proves otherwise. In the Indian context, Ghosh points to a paper by Vidushi Marda and Shivangi Narayan showing that an AI-based predictive policing system in New Delhi disproportionately affected poor people.
Such biases could certainly extend to queer people as well, says Subramonian. "If you don't have a normative body with respect to gender, you could be at increased risk of police brutality," they say.
As generative AI, such as text and image generators, is adopted at scale, many jobs that are done by humans now may become automated or machine-led to a much greater extent – we've already seen writing, music and art by AI being adopted across industries. Many experts have warned that India is not equipped to handle the impact of these job losses. Some reports have said that this year alone, 4,000 jobs in the tech industry have already been lost to AI.
"If AI systems are used to devalue or replace human labour to cut costs, it means queer people could lose access to family, housing, insurance and other services tied to jobs, and health insurance can be precarious for them anyway," says Subramonian.
"AI education and AI regulation are both hugely important," says Ghosh. "There needs to be intensive training in awareness of harms and dangers from AI. People need to be taught how to use it ethically and warned that it can replicate or worsen biases and throw up massive intellectual property issues. Finally, models need to be trained properly in Indian cultural contexts."
"As things stand, India does not have AI-specific regulation," said Prateek Waghre, policy director of the non-profit organisation Internet Freedom Foundation. "Automated decision-making will likely muddy the waters further because such systems can be used to limit transparency and shield (authorities) from accountability."
The IT Rules 2021 include clauses stating that authorities must take due diligence measures to ensure that the data they hold on an individual is not misleading in any way, which should technically extend to misinformation generated by AI models. The Foundation's research shows that these obligations are often caveated by exemptions, subjective application and abuses of power. And though the Transgender Persons (Protection of Rights) Act, 2019 prohibits discrimination against trans people, implementation is already spotty. Meanwhile, there is no such protection against homophobia, at least in the present form of the Indian Penal Code.
"The use of generative AI may add another dimension to (this), but our current institutional framework and societal resistance to such information is already weak," Waghre adds.
"One way to neutralise bias in AI is understanding and curating the data in certain ways, which is done during the exploratory data analysis phase – or EDA – before training the models," says data scientist Anirban Saha. "During this phase, it is important to spot any lack of balance in the data and then curate it so as to neutralise the bias."
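In practice, the balance check Saha describes can be as simple as counting how often each group appears in the training data and how outcomes differ across groups, before any model is trained. The sketch below in Python is purely illustrative; the pandas library, the column names and the 10% threshold are assumptions for the example, not details from any specific project.

```python
import pandas as pd

# Illustrative training data; in practice this would be loaded from the
# dataset being audited, and "gender" could be any sensitive attribute.
df = pd.DataFrame({
    "gender": ["woman", "man", "man", "nonbinary", "man", "woman"],
    "label":  [1, 0, 1, 0, 1, 1],
})

# Step 1 of EDA: how is each group represented in the data?
group_share = df["gender"].value_counts(normalize=True)
print(group_share)

# Step 2: does the outcome variable differ sharply across groups?
label_rate_by_group = df.groupby("gender")["label"].mean()
print(label_rate_by_group)

# Flag any group that makes up less than, say, 10% of the data -
# a candidate for re-sampling or targeted data collection.
underrepresented = group_share[group_share < 0.10]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```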
Saha adds that data scientists should try interpreting the predictions or results given by AI models, to help them spot any biases before releasing the models to the public.
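A common way to do this kind of interpretation is to compare a trained model's error rates across demographic groups before release, since a large gap is a red flag. The following scikit-learn sketch on synthetic data is one hypothetical form such an audit could take; none of the variables correspond to a real system.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic data purely for illustration: three features plus a
# sensitive group attribute that leaks into the outcome.
X = rng.normal(size=(n, 3))
group = rng.choice(["A", "B"], size=n)
y = (X[:, 0] + (group == "B") * 0.5 + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Per-group accuracy: a simple audit of where the model's errors fall.
# A large gap between groups would warrant investigation before release.
audit = pd.DataFrame({"group": g_te, "correct": pred == y_te})
print(audit.groupby("group")["correct"].mean())
```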
"The government thus needs to encourage companies and students developing AI systems to implement these methods, via its agencies," he says. He adds that there need to be avenues for penalising entities that use systems proven to have biases.
However, most of the work on eliminating bias in language processing is being done in English. The unique needs of India – with its many languages, religions, castes, creeds and, of course, queer communities – are not being met by this work.
"Another option may be to provide open-source packages for people to detect bias in AI, on marketplaces such as Hugging Face," says Saha. "But for the packages to be made, first the data regarding bias in the Indian context needs to be put together."
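As a sketch of the kind of check such a package might bundle, one common technique is counterfactual evaluation: run a model on sentences that differ only in an identity term and compare its outputs. The example below uses the Hugging Face transformers sentiment pipeline purely as an illustrative stand-in; packages built for the Indian context would swap in Indian-language models and locally relevant identity terms once, as Saha notes, the data exists.

```python
from transformers import pipeline

# Off-the-shelf English sentiment model, used here only as a stand-in.
clf = pipeline("sentiment-analysis")

# Counterfactual sentences: identical except for the identity term.
template = "My {identity} colleague presented at the conference."
identities = ["straight", "gay", "transgender", "cisgender"]

for identity in identities:
    result = clf(template.format(identity=identity))[0]
    print(f"{identity:12s} -> {result['label']} ({result['score']:.3f})")

# Large score gaps between otherwise identical sentences point to
# identity-term bias in the underlying model.
```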
On the other hand, unionisation might help to cushion the effects of AI, such as job losses. While Ghosh says that techno-fetishism – the idea that technology can solve everything, with no nuance involved – is the real problem to be tackled, Subramonian says, "Technology needs to be developed ethically; developers need to think about who can be negatively affected by it once it's been deployed."
"As part of the US government¡¯s AI policies, some big AI companies have agreed to add watermarking to their generative models,¡± says Ghosh. ¡°The Indian government needs to start enforcing this immediately.¡±??
As India embraces the use of AI, conferences are also starting to be held. An upcoming one is Cypher 2023, an AI summit discussing 'The Fusion of Art and AI: Navigating the Impact of Artificial Intelligence in the Creative Industry'.
"As a virtual group, Queer in AI has no local chapters, but we'd love to organise a session at any AI conference in India," says H. "We're certainly open to consulting with the Indian government if invited to do so."
For more stories on the LGBTQIA+ community and queerness in India, keep reading Spectrum on Indiatimes.