

Hypersexualising Women, Men Depicted As Warriors: How AI Applications Advance Gender Bias – Interview

In a conversation with Outlook, Shimona Mohan, a Junior Fellow in the Centre for Security, Strategy and Technology (CSST) at the Observer Research Foundation with wide experience in technology, gender and security, delves further into how AI tools have an inherent gender bias.

The ChatGPT app is displayed on an iPhone in New York, May 18, 2023.

ChatGPT, the Artificial Intelligence chatbot that piqued the interest of people across the globe, represents both the wonders and the dangers of AI. As such tools proliferate in the coming decades, the biases embedded in them will not just reinforce existing stereotypes in society but could also lead to the wrongful treatment of persons, especially if such algorithms are used to maintain law and order. In a conversation with Outlook, Mohan delves further into how AI tools carry an inherent gender bias.

As mentioned in her research, ChatGPT-4 has been known to peddle familiar stereotypes against women in the text it generates, which have not been fixed despite OpenAI’s risk assessment within this context. For example, if you ask the chatbot to write a story about a boy and a girl choosing their subjects for university, the response fuels sexist stereotypes. In GPT-4’s narrative, the boy was interested in science and technology, whereas the girl “loved painting, drawing, and expressing herself creatively,” and was considering a fine arts degree.

Excerpts from the interview:

Can you explain the process by which AI tools learn and make decisions? How do they gather and analyze data to generate outcomes?

AI functions on supervised or unsupervised machine learning (ML) models. In a supervised model, a human developer is in the loop, overseeing the data as well as its algorithmic processing, so they can make changes to the system's learning model if something is amiss. In an unsupervised model, however, the ML learns by itself and replicates its own learning pattern, so it is much more difficult to correct any errors, since the human developer only sees them at the output stage.
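
A minimal sketch of that distinction, assuming a scikit-learn setup (not anything Mohan refers to): the same toy data is handed once to a supervised classifier, which trains on human-provided labels, and once to an unsupervised clustering model, which must infer structure on its own.

    # Rough illustrative sketch, assuming scikit-learn; not code from the interview.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
    y = np.array([0, 0, 0, 1, 1, 1])  # labels supplied and checked by a human developer

    # Supervised: labels are visible during training, so a developer can inspect
    # and correct them before the model learns from a flawed example.
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[2.5], [10.5]]))   # -> [0 1]

    # Unsupervised: no labels are given; the model infers its own grouping,
    # and any skew in that grouping only surfaces when the output is examined.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)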

How do AI algorithms handle textual and visual data related to gender? Are there specific features or keywords that the algorithms might interpret as indicative of gender?

Data to be used for training AI/ML systems usually undergoes a labelling process by data annotators, who are given guidelines according to which they label certain kinds of data. At this stage, if their own biases seep into the data labelling, or if they don't correct any existing biases in their labelling guidelines, the outputs produced by the AI systems trained on this data run the risk of being biased across categories, including gender and race. These biases are exacerbated if the algorithms pick up on the biased trends in the data and then amplify them through their processing.
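
As a hypothetical illustration of that risk (the data and the labelling rule below are invented for the example, not drawn from the interview), a model trained on labels that encode an annotator's bias simply reproduces that bias on new inputs:

    # Hypothetical data and labelling guideline, for illustration only.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    skill = rng.normal(size=200)             # a feature genuinely relevant to the task
    gender = rng.integers(0, 2, size=200)    # a feature that should be irrelevant

    # Biased annotation: an example is labelled "qualified" only if it is skilled
    # AND belongs to group 0, so the bias is baked into the training data.
    labels = ((skill > 0) & (gender == 0)).astype(int)

    model = DecisionTreeClassifier(random_state=0)
    model.fit(np.column_stack([skill, gender]), labels)

    # Two equally skilled candidates who differ only on the irrelevant feature:
    print(model.predict([[1.5, 0], [1.5, 1]]))   # typically [1 0]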

What are some examples of AI applications where gender biases have been observed? Considering that most young people have started learning about such AI tools, how will these gender biases affect their perception of society?

Lensa AI, a viral avatar creation app running on generative AI software, has been known to hypersexualise and fetishise the images of women that it creates, while images of men are fashioned into typically masculine avatars like warriors and astronauts without any added nudity.

On a more substantive level, Apple’s credit card algorithm was revealed to be sexist, granting significantly higher credit lines to male clients despite their having similar or even worse credit histories than female clients, with no other plausible differentiating factor apart from gender. Gender bias is already an established, pervasive and malignant issue in various facets of society, so its legitimisation and repetition through AI tools, which will only become more visible and diverse in the future, is a huge structural issue that we need to resolve institutionally.

How do pre-existing societal biases get reflected in AI algorithms? Are AI tools purely objective, or can they inadvertently perpetuate existing stereotypes and biases?

AI is a mirror of the humans who create and use it: if we have pre-existing gender biases which we unconsciously include or fail to correct at the design, development and deployment phases of these systems, the AI tools and services that we employ will in turn be irrevocably biased.

What role does diversity in the development team play in reducing gender biases in AI tools?

Diversity in AI teams is crucial. Currently, the software development industry only has about 8% women, which is a dismal statistic. More women, more racial diversity, and more minorities on these teams can ensure that unconscious biases in the AI development ecosystem have less space to grow and be programmed into their outputs.

Are organisations aware that such biases are present in these tools? If so, do they employ any strategies in mitigating them?

AI biases have often been observed, documented, and analysed by civil society and research epistemic communities, and most tech companies that produce these tools are aware of the rampant problem as well. Some have taken steps to ensure that their AI development does not occur in an ethics vacuum, but it is unclear how they ensure this, as well as how prioritised gender is within this. There may be compliance mechanisms in place in some companies, but the monitoring and evaluation cycles need to be congruent with these. There should also be people who understand the gender and tech nexus within their teams and who can effectively advise them on how to proceed within their contextual domains.
