So, our theory is that if we want to arrive at a different type of technology, we have to start with the structure and the incentives. To us, that means being selective with funding, working slowly on research, working with an interdisciplinary team of researchers around the world who also bring the expertise of their lived experiences, and having a research process and incentive structure that doesn’t drive us to publish every few months.
What is your greatest fear over how AI is being developed and what is a positive development you foresee? Related: Who are key players now in the space and how do you assess their efforts?
The biggest issue with AI is that it is being developed by a very homogeneous, privileged group of people, and being driven by one of the two forces I mentioned earlier: warfare or Big Tech.
On the warfare side, there are some truly terrifying developments. For instance, borders surveilled by drones. Warfare and tech companies are intersecting with companies like Palantir, which works with ICE. And people like Peter Thiel, an OpenAI funder, and Sam Altman, the current OpenAI C.E.O., are investing in companies like Brinc, whose earlier demos literally had a drone that talks to a man named Jose and then automatically tases him. While it’s easier to imagine how things like this are scary, my former colleague El Mahdi wrote that social media companies have more blood on their hands than any rogue drone. The recent Tek Fog investigation from The Wire shows just how much disinformation, harassment and hate speech can be disseminated, and its impacts. Our paper on large language models discusses how they can be used for this purpose — this is cyberwarfare.
The positive developments I see are grassroots movements that are the antithesis to this. For instance, organizations like Masakhane NLP, EthioNLP and Ghana NLP working on natural language processing tools for African languages. Te Hiku Media working on language technology for the Māori language and rejecting an offer to be bought by an American company. And independent institutions like AI Now, Data & Society and the Algorithmic Justice League, as well as movements like Data for Black Lives, Radical AI, Black in AI, Queer in AI, Indigenous in AI and LatinX in AI. These are organizations resisting the move toward centralization and homogeneity in AI.
You told me recently that you never spoke directly to Alphabet C.E.O. Sundar Pichai about what happened. What would you want to say to him now?
If it were years ago, I would schedule so many meetings with him, I would try to reason with him, I would write documents explaining the issues, send him articles, send him citations, have other people meet with him so that he understands their experiences and can empathize. That’s what I did with my leaders, such as Jeff Dean, who reports to Sundar Pichai, over and over and over again. And look where I am now. I’ve learned that it’s a better use of my time to try to figure out how to galvanize those without power rather than appeal to the good will of those in power. So, what I would say to Sundar Pichai now is this: You will not escape regulation, and you will not stop the tech worker movement.