We began by talking with Joana about the realities of these systems. Artificial intelligence has given us plenty of fun, but problems arose quickly: concerns about security, surveillance, and also authorship. The technologies we call intelligent and creative do not actually create anything by themselves; they simply draw on the work of the artists and people who created things before them. Another problem is the reproduction of oppression. We may think machines are neutral, but in reality the data they are trained on comes from systems as oppressive as those we know outside the online sphere.
Complete recording of the May 2023 Tertulia session
We talk about AI and ChatGPT with Joana Varón, Founder of Coding Rights, and Cristina Vélez, Co-Founder of Linterna Verde
There are numerous questions we must ask of artificial intelligence in relation to security, surveillance, and the reproduction of sexist, racist, and transphobic systems. Furthermore, with the creation of content that appears real, misinformation and the risk of disinformation scale up dramatically. This raises questions about the future of creative work: what is its fate?
At the same time, for these “magic machines” to work, they require countless humans doing very hard work, without any protection and without fair pay. For Joana, the key to the future lies in the ancestral: we must look back to find and map territories on the Internet that better match the worlds we live in and that are respectful of our needs in this time.
Furthermore, it is crucial to participate in the debates around these systems. Not only are white men from the Global North the ones feeding and training these machines; they are also the ones leading the debates and proposing solutions. We need to decolonize our imaginaries about technology and hack systems that, after all, were not designed for us.
And that was precisely what Cristina helped us do, exploring various ways to use artificial-intelligence applications to support civil society in its narrative and storytelling work. It is very easy to get lost and use these tools poorly. However, organizations can ask specific questions and train the system to help them communicate more effectively, expanding their toolset and better fulfilling their mission, something previously out of reach due to resource limitations.
How can we translate an organization's theory of change into smaller pieces of communication so they scale better? How can we reduce costs? Social networks have solved distribution, but not content production. These tools help us translate the language of human rights and feminism so it travels further. It is always advisable to use these tools in community, within solidarity fabrics and networks: it is the human aspect and the connection between us that will make the real difference.
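One way to picture "translating a theory of change into smaller pieces of communication" is a simple prompt template that asks an AI tool to rewrite one pillar of an organization's mission as short posts. This is only a hedged sketch: the prompt wording, the `build_prompt` helper, and the example pillar are illustrative assumptions, not the actual method used in the session.

```python
# Illustrative sketch: turning one pillar of a theory of change into a
# prompt that asks an AI assistant for several short, plain-language posts.
# The template text and function name are hypothetical examples.

PROMPT_TEMPLATE = (
    "Our organization works on: {pillar}.\n"
    "Rewrite this goal as {n} short posts for a general audience, "
    "in plain language, avoiding specialist jargon."
)

def build_prompt(pillar: str, n: int = 3) -> str:
    """Fill the template with a mission pillar and a post count."""
    return PROMPT_TEMPLATE.format(pillar=pillar, n=n)

print(build_prompt("digital security training for women journalists"))
```

The resulting string would then be pasted into, or sent to, whichever AI tool the organization uses; the value of the template is that it keeps the framing consistent across campaigns.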
The objective, then, is to create strategies for security, privacy, advocacy, defense, and counterpower. We must think about manuals, guides, and strategies with special tricks that human rights organizations can use safely. For example, Cristina uses techniques to protect the data of the organizations she works with, deleting personal data and hiding chat history. And that, without a doubt, is only the beginning: the process of learning and discovery, as well as of protection and hacking, is just taking its first steps.
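The idea of deleting personal data before using these tools can be sketched as a small pre-prompt redaction step. This is a minimal illustration, not Cristina's actual technique: the regular expressions and placeholder labels below are assumptions, and a real deployment would cover many more identifier types.

```python
import re

# Minimal sketch of pre-prompt redaction: scrub obvious personal data
# (emails, phone numbers) from text before it is sent to an AI service.
# Patterns and placeholder labels are illustrative assumptions only.

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace personal identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Ana at ana@example.org or +57 301 555 1234."))
# The email and phone number are replaced with [EMAIL] and [PHONE].
```

Running the redaction locally, before anything leaves the organization's machines, is the point: the AI service only ever sees the placeholders.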