Exploring the Role of AI in Cultural Sensitivity and Content Diversity
With tools like OpenAI’s ChatGPT and Google’s Gemini, generative artificial intelligence (AI) has developed in leaps and bounds since 2015. Today, AI is widely used for data processing and analytics. By analysing large amounts of data, AI learns to identify trends and patterns, simplifying data for users to comprehend and track, while also predicting future outcomes from past data. At Winyama, data is essential in everything we do. In some cases, the data we have is culturally sensitive, emphasising the need for utmost caution and respect in its handling and analysis, as discussed in our previous data sovereignty blog. Given AI’s ability to analyse data, particularly at scale, some important questions to consider are: how does AI play a role in cultural sensitivity and content diversity? What role could AI play in eliminating human bias from data? And how can we ensure that AI’s impact on culturally sensitive data will be positive?
How AI Analyses Data and Content
AI is ‘taught’ how to complete tasks and commands through machine learning (ML) algorithms: methods that discover underlying patterns in data. The more (and more varied) data a machine learning model is trained on, the more accurate its answers tend to be, as it has more examples from which to learn pattern recognition, classification and future prediction.
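As a rough illustration of the pattern-recognition idea above, here is a minimal sketch (in Python, with made-up data) of a nearest-centroid classifier: it ‘learns’ the average of each labelled group of examples and classifies new points by whichever learned pattern they sit closest to. This is not how large models like ChatGPT work internally; it only illustrates the general idea of learning patterns from examples.

```python
# Illustrative sketch only (not any specific product's algorithm):
# a tiny nearest-centroid classifier that "learns" patterns from
# labelled examples, then classifies new data points.

def train(examples):
    """Compute the average (centroid) of each label's feature vectors."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Predict the label whose centroid is closest to the new point."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: distance(centroids[label], features))

# Two made-up "patterns" in the training data: low values vs high values.
data = [([1.0, 1.2], "A"), ([0.9, 1.1], "A"), ([5.0, 5.2], "B"), ([5.1, 4.9], "B")]
model = train(data)
print(classify(model, [1.1, 1.0]))  # closest to the "A" pattern
```

The key point carries over to real systems: the model can only recognise patterns that were present in its training examples, which is why the breadth of the training data matters so much.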
For example, ChatGPT was originally trained on data gathered by ‘web scraping’ (collecting information from websites) up to January 2022, so any response it gives won’t be able to take into account anything that has happened since then.
How Does AI Play a Role in Cultural Sensitivity and Content Diversity?
Whilst AI is trained on the data it’s provided, it also relies on user feedback. It will try to curate the content it provides to match the results that you, as a user, want to see. Similar algorithms are used across social media as well, for example Instagram Reels, TikTok’s For You page and YouTube Shorts. These algorithms gather feedback from what you engage with most, then surface similar content you might like to see, keeping you on the platform for longer. This is great for the platforms; however, it tends to limit your exposure to different cultures, perspectives and ideas. These effects are known as echo chambers and filter bubbles. Echo chambers occur when someone is only exposed to the beliefs and perspectives they already hold, while filter bubbles refer to algorithms that only show users content similar to what they have previously engaged with. Both can lead to a lack of content diversity.
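To make the filter-bubble mechanism concrete, here is a deliberately naive sketch of an engagement-driven recommender. The catalogue and categories are made up, and real platform algorithms are far more complex, but the narrowing effect is the same in spirit: the user’s single most-engaged category crowds out everything else.

```python
# Hedged sketch of an engagement-driven recommender (not any real
# platform's algorithm): it keeps recommending whatever category the
# user engaged with most, illustrating how a filter bubble forms.

from collections import Counter

CATALOGUE = {
    "music": ["clip1", "clip2"],
    "cooking": ["clip3", "clip4"],
    "travel": ["clip5", "clip6"],
}

def recommend(engagement_history, n=3):
    """Return items drawn only from the user's most-engaged category."""
    if not engagement_history:
        # Cold start: fall back to a mix of categories.
        return [items[0] for items in CATALOGUE.values()][:n]
    top_category, _ = Counter(engagement_history).most_common(1)[0]
    return (CATALOGUE[top_category] * n)[:n]

# A user who mostly watched cooking clips now only ever sees cooking clips,
# even though they also engaged with travel content.
print(recommend(["cooking", "cooking", "travel"]))
```

Notice that the travel engagement is discarded entirely: optimising purely for the dominant signal is exactly what produces the lack of content diversity described above.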
The data an AI system is trained on plays a major role in whether it reproduces human bias. If the training data contains human bias, the system will go on to recommend content that perpetuates those same biases. This, alongside filter bubbles and echo chambers, can have negative implications such as excluding marginalised voices or favouring particular cultural narratives. If an AI system is trained only on data from a specific culture or perspective, it will disproportionately recommend content from that same culture or perspective.
For example, some music streaming platforms’ algorithms tend to favour well-established songs, artists and genres, potentially overshadowing the diverse preferences of individual listeners. This bias can inadvertently exclude voices from Indigenous communities, exacerbating the obstacles they face in gaining visibility and breaking into the highly competitive music industry. Consequently, Indigenous musicians may encounter significant hurdles in sharing their cultural expressions and narratives with a wider audience, perpetuating inequalities within the music industry. To increase the diversity of an AI system’s responses, the system needs to be trained on diverse data. Achieving this requires more transparency and accountability from current and future AI developers. To foster that transparency and accountability, we could start by implementing stricter regulations and guidelines, encouraging open dialogue and collaboration within the AI community, and promoting ethical training and education programs focused on responsible AI development practices.
Whilst there is a long way to go to ensure that AI increases its ability to provide users with diverse content, there are ways that AI is already being utilised to impact cultural diversity in a positive way.
Language is a significant part of culture but also one of the hardest aspects to understand as a new learner. AI-powered translation tools have helped to bridge this divide. For example, every time you use Google Translate, AI is helping to foster understanding between languages and enable cross-cultural communication.
AI can also aid cultural preservation, helping to safeguard and promote cultural heritage. Using techniques such as image recognition, AI can help digitise artefacts, documents and languages and, when culturally appropriate, make them more accessible to a wider audience while ensuring their cultural significance is not forgotten.
What Role Could AI Play in Eliminating Human Bias from Data?
We mentioned the risk of human bias in AI training before, however, there is the potential for AI to play a positive role in eliminating human bias within vast datasets.
In theory, human bias could be reduced by training AI to focus solely on predetermined variables within a dataset. For instance, a company employing AI to sift through job applications could instruct it to disregard factors like gender or race, giving all applicants an equal opportunity by prioritising experience, encouraging diversity and limiting unconscious human bias. However, this concept hinges on the AI’s creation also being bias-free.
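A minimal sketch of the screening idea above, with hypothetical field names: protected attributes are stripped before any scoring happens, so the score depends only on predetermined, job-relevant variables. The scoring formula here is invented purely for illustration.

```python
# Hedged sketch of bias-aware screening: drop protected attributes,
# then score only on predetermined variables. Field names and the
# scoring formula are hypothetical, not a real hiring system.

PROTECTED_FIELDS = {"gender", "race", "age", "name"}

def strip_protected(application):
    """Remove protected attributes before any scoring takes place."""
    return {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}

def score(application):
    """Score on job-relevant variables only (illustrative weights)."""
    cleaned = strip_protected(application)
    return cleaned.get("years_experience", 0) + 2 * cleaned.get("relevant_skills", 0)

applicant = {"name": "X", "gender": "F", "years_experience": 4, "relevant_skills": 3}
print(score(applicant))  # 4 + 2*3 = 10
```

It’s worth noting that dropping protected fields is not sufficient on its own: proxy variables (a postcode, a school name) can still carry the same bias indirectly, which is exactly why the approach hinges on the rest of the system and its training data also being bias-free.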
While progress is ongoing, ensuring bias-free training data remains a critical challenge that must be addressed to realise this potential. Suggested approaches include implementing diverse and inclusive data sampling strategies, as well as ongoing monitoring and adjustment of AI models to mitigate bias.
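One hedged example of what that ‘ongoing monitoring’ could look like in practice is a simple demographic-parity check: compare the rate at which a model selects candidates from each group, and flag large gaps for review. The group labels and data here are invented for illustration, and this is one simple fairness measure among many, not a standard.

```python
# Illustrative monitoring sketch: compare selection rates across
# groups (a simple demographic-parity check). Groups and outcomes
# below are made up for the example.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)

outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(parity_gap(outcomes))  # 2/3 vs 1/3 selected, so the gap is about 1/3
```

A check like this would run repeatedly as the model and its data change, with large gaps triggering the ‘adjustment’ step: re-sampling the training data, re-weighting, or reviewing the model’s features.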
How Can We Ensure That the Impact of AI on Culturally Sensitive Data Will Be Positive?
Despite the current shortcomings of AI and its influence on data cultural sensitivity, there is the potential for positive utilisation. By implementing measures to safeguard against adverse effects, AI has the potential to play a beneficial role in ensuring data is culturally diverse and accurate. Some suggested measures include:
Collecting and using datasets that encompass a wide range of cultural perspectives and experiences. This means AI developers actively seeking out diverse sources of data and ensuring equitable and accurate representation.
Establishing clear ethical guidelines and standards for AI development, ensuring responsible and respectful use of AI technologies.
Engaging with communities and involving them in the development process can help identify potential issues as well as ensuring that AI systems are culturally sensitive and relevant.
Providing ongoing education and training for AI developers on the importance of cultural sensitivity and how to integrate it into AI systems.
Developing and implementing regulatory frameworks that address the ethical and cultural implications of AI technologies with accountability measures.
By addressing these considerations collectively, we can strive to build more inclusive and equitable digital environments for all users.
In order to enhance cultural sensitivity and diversity online, we can all play a role in shaping the evolution of AI. This starts with actively advocating for diverse training datasets that accurately represent different cultures and perspectives. You can take proactive steps within your organisation by engaging with supervisors and stakeholders to advocate for the inclusion of diverse perspectives in data selection and the reduction of data bias. Additionally, we can contribute by promoting clear ethical guidelines for AI development, engaging with our communities to raise awareness about the importance of cultural sensitivity, and supporting ongoing education initiatives for AI developers. Regulatory frameworks can also be encouraged to ensure that AI systems prioritise cultural diversity and sensitivity in online content. By implementing these strategies collectively, we can work towards ensuring that the impact of AI on culturally sensitive data is positive, equitable and respectful of diverse perspectives and experiences.