Authors: Robert Braun, Elisabeth Frankus and Shauna Stack
While many non-human species communicate, language has long been what characterizes humans. Not anymore. Many herald this as a major achievement. We are cautious and skeptical. States and supra-national organizations must bring permissionless innovation to an end.
GPT-3 is a third-generation Generative Pre-trained Transformer (GPT) language model, developed by OpenAI and (partly) owned by Microsoft. It contains 175 billion parameters (GPT-2 had 1.5 billion) and was trained on tokens gathered from various internet sources. As a commercial product, GPT-3 is being embedded in dozens of real-world applications, including chatbots, report-writing applications, and software development tools. Despite not having been designed to deal with mathematical, semantic, and ethical questions, GPT-3 writes (though does not necessarily know) better than many people. Its availability marks the dawn of an age in which semantic artefacts can be produced at low cost and at speed: translations, summaries, minutes, newspaper articles, manuals; but also online comments, reviews, university assignments, bachelor theses, and books. Semantic artefacts so produced are not plagiarized. They would likely pass the Turing test.
The 2020 paper introducing GPT-3 itself warned of concerns and biases: ‘advanced persistent threats’ by highly skilled, well-resourced groups with long-term agendas seeking to influence AI training, and biases relating to gender, race, and religion. Training consumed enormous amounts of energy, and content generation will consume millions of kilowatt-hours more. Such concerns have only grown since GPT-3 was released for open use by millions of users across the globe. Microsoft intends to build it into its commercial products: Word, PowerPoint, and its search engine, Bing. Google and Facebook are working on similar models.
Researchers and policy makers have called for responsible and trustworthy AI systems by design: developing and deploying AI to empower stakeholders and to impact society at large fairly. Yet good policy intentions, such as responsibility and trustworthiness initiatives, easily go astray. As we have learned from social media, monetization and political intervention through surveillance targeting pose enormous threats to privacy, democracy, and social wellbeing; AI-driven ‘algorithms-for-hire’ present a constant threat of the same kind. Privatizing regulation, as in the case of Facebook and Twitter, does more harm than good.
GPTs must be regulated. At a minimum, all applications and semantic artefacts must indicate that they rely on ChatGPT (e.g., by making it transparent that “this text has been created with the help of ChatGPT”). States should intervene and limit AI development and deployment, especially to enforce human rights standards and the right to freedom of opinion. Global institutional and sovereignty logics, such as those of the UN, must be combined with new competences and practices, and with the awareness and knowledge bases of activists and researchers. New, hybrid, and global regulatory models need to be developed: ground-up activist and academic initiatives supported by the legitimacy and legal enforceability of global institutions and states.
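The minimal labelling requirement proposed above can be sketched in a few lines: any application embedding a GPT model would attach a standard provenance notice to every semantic artefact before releasing it. This is an illustrative sketch, not any real API; the function name and exact notice wording are our own assumptions.

```python
# Illustrative sketch of the proposed transparency requirement.
# The notice wording and function name are hypothetical examples.

DISCLOSURE = "This text has been created with the help of ChatGPT."


def label_generated_text(text: str) -> str:
    """Append the transparency notice unless it is already present."""
    if DISCLOSURE in text:
        return text  # avoid duplicating the notice on re-publication
    return f"{text}\n\n[{DISCLOSURE}]"


# Example: labelling a chatbot reply before it is shown to a user.
reply = "Here is a summary of the meeting minutes ..."
print(label_generated_text(reply))
```

The point of the idempotence check is practical: artefacts are copied, quoted, and re-posted, and the notice should travel with the text without multiplying.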
GPT-3 is a powerful artificial intelligence language model developed by OpenAI that has generated significant interest and excitement in the technology industry. The model has the potential to revolutionize a wide range of fields, from natural language processing to content generation and machine translation. However, it also has the potential to create negative social impacts, including the spread of disinformation and fake news.
One important question regarding GPT-3 is whether, given its social impacts, it should be regulated or left to permissionless innovation. On one hand, supporters of permissionless innovation argue that the model has the potential to create significant benefits, and that excessive regulation could limit its potential to transform industries and solve problems. They also argue that the model was developed by a private company, and that it is up to that company to decide how to manage and regulate its use.
However, there are also arguments in favor of regulation. Some have expressed concerns about the potential for GPT-3 to be used in ways that could cause harm, including generating fake news or creating malicious content. They argue that regulation is necessary to ensure that the model is used in a responsible and ethical manner and to mitigate any potential negative social impacts.
One approach to regulating GPT-3 could be to require that companies using the model adhere to certain guidelines or ethical principles. For example, guidelines could be put in place to prevent the use of GPT-3 for spreading disinformation or generating malicious content. Additionally, regulations could be implemented to ensure that companies using the model are transparent about how they are using it, and to require that they take responsibility for any negative impacts that result from its use.
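The guidelines described above combine two enforceable obligations: outputs must carry a provenance notice, and the deploying company must remain identifiable and accountable for what it publishes. A minimal sketch of such a compliance gate might look as follows; every name here (the function, the log structure, the notice phrase it checks for) is a hypothetical illustration, not an existing system.

```python
# Illustrative compliance gate: refuse to publish model output that
# lacks a transparency notice, and record the responsible deployer
# for accepted output. All identifiers are hypothetical.

import datetime

audit_log: list[dict] = []  # stand-in for a regulator-accessible record


def publish(text: str, deployer: str) -> bool:
    """Publish only transparent output; log who is responsible for it."""
    if "created with the help of" not in text.lower():
        return False  # guideline violated: no provenance notice
    audit_log.append({
        "deployer": deployer,
        "published_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "excerpt": text[:80],
    })
    return True
```

In use, `publish("An unlabelled AI-written news item.", "ExampleCorp")` would be rejected, while the same text carrying the notice would be accepted and logged, leaving a trail that ties each artefact to the company answerable for it.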
Ultimately, the decision of whether to regulate GPT-3 or leave it to permissionless innovation is a complex one, and will require careful weighing of the model's potential benefits and risks. While permissionless innovation can encourage creativity and experimentation, it is important also to consider the potential social impacts of new technologies and to ensure that they are developed and used responsibly and ethically. A balanced approach to regulation is therefore likely the best way to ensure that GPT-3 is used in a way that maximizes its benefits and minimizes its potential harms.