Women in AI: Anika Collier Navaroli is working to shift the power imbalance
To highlight the contributions of women in the AI field, TechCrunch is launching a series of interviews focusing on remarkable women who have been part of the AI revolution.
Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, in collaboration with the MacArthur Foundation.
She is recognized for her research and advocacy work in technology. Previously, she was a race and technology practitioner fellow at the Stanford Center on Philanthropy and Civil Society, and she led Trust & Safety teams at Twitch and Twitter. Navaroli gained attention for her congressional testimony about the warnings of violence that circulated on social media before the January 6 attack on the Capitol.
How did you start in AI and what drew you to the field?
About two decades ago, while studying journalism as an undergrad, I became fascinated with how laws were evolving in the digital era. That curiosity led me to law school, where I delved into the intersection of technology and society. Working at various organizations, I examined how early AI systems perpetuated bias and produced unintended consequences for marginalized communities.
What work in the AI field are you most proud of?
I take pride in using policy to shift power dynamics within technology companies and address bias in algorithmic systems. For example, while at Twitter, I spearheaded campaigns to verify underrepresented individuals so that their voices would be recognized in the tech industry.
How do you navigate the challenges of the male-dominated tech and AI industries?
As a Black queer woman, I have faced challenges in male-dominated spaces. I coined the term 'compelled identity labor' to describe situations where individuals with marginalized identities carry the burden of representing entire communities. Setting boundaries and choosing which issues to engage with has been crucial to navigating these spaces.
What are some pressing issues facing the evolution of AI?
The use of synthetic data to train AI models raises ethical concerns: because synthetic data is itself machine-generated, training on it can create a feedback loop that amplifies existing biases and misinformation.
What should AI users be aware of?
AI users have the power to advocate for ethical guidelines and regulations. Organizing for a 'People Pause on AI' can help create meaningful boundaries for the responsible use of AI technologies.
How can AI be responsibly built?
Including diverse voices in policy-making and decision-making processes is crucial to responsible AI development. I also advocate for external regulation to establish and enforce safety and privacy standards across the AI industry.