Unleashing Transparency: The Emergence of Open Alternatives to ChatGPT

A team of linguists and language technology researchers from Radboud University has highlighted the emergence of several open-source alternatives to OpenAI's popular text generator, ChatGPT. These alternatives offer greater transparency, allowing users to learn more about the training data and algorithms behind the models. Such transparency is crucial for promoting the ethical use of generative AI.

In their paper and live-updated website, the researchers map out the evolving landscape of open-source text generators. While these alternatives offer greater transparency, they also come with varying levels of openness and legal restrictions. Nevertheless, the researchers express cautious optimism about the growing availability of open alternatives.

Andreas Liesenfeld, the lead researcher, emphasizes the importance of open alternatives, stating that they enable critical research and a better understanding of models like ChatGPT. Because so little is known about ChatGPT's training data and underlying mechanisms, open alternatives create opportunities for fundamental research and for building applications on a clearer foundation.

The researchers challenge the notion, put forth by corporations like OpenAI, that AI should be shrouded in secrecy because of potential "existential risks." They argue that secrecy lets companies conceal exploitative labor practices and diverts attention from real and present problems such as confabulation, biased output, and misinformation. Openness, by contrast, promotes accountability and responsibility, and enables closer examination of the models, their training data (often copyrighted), and the texts they generate.

The study reveals that models exhibit varying degrees of openness: some release only the language model itself, while others provide comprehensive documentation of the training data. Mark Dingemanse, a senior researcher, points out the limitations of ChatGPT, noting that it offers no insight into intent, ownership, or rightful attribution, which makes it unsuitable for ethical use in research and education. With open models, users can examine the inner workings and make informed choices about the technology.

The paper also highlights the need to track the openness and transparency of new models systematically. With new models appearing every month, the researchers stress the importance of monitoring their openness; to facilitate this, they have created an accompanying website that tracks these developments.

Furthermore, the researchers highlight the legal complexities that arise when models borrow elements from one another. For example, the Falcon 40B-instruct model builds on the Baize dataset, which is intended for research purposes only, yet Falcon's creators encourage commercial use, leading to potential legal ambiguities.

One of the factors contributing to ChatGPT's fluency is the human labor involved in the instruction-tuning step, known as RLHF (reinforcement learning from human feedback), which refines the model's output to make it more conversational. Open models provide an avenue for investigating what makes people so receptive to this kind of conversational feedback.
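To make the idea concrete, here is a minimal, hypothetical Python sketch of the preference signal behind RLHF. This is not the researchers' code or OpenAI's actual pipeline: full RLHF fine-tunes the model with reinforcement learning, whereas this toy uses a stand-in reward function to pick the best of several candidate replies (best-of-n sampling), a common simplified proxy for the same goal. All function names here are illustrative.

```python
# A minimal, hypothetical sketch of the preference idea behind RLHF.
from typing import Callable, List
import random

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 4) -> str:
    """Sample n candidate replies and keep the one the reward
    function (standing in for a human-trained reward model) scores
    highest."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda reply: reward(prompt, reply))

# Toy stand-ins so the sketch runs on its own; a real system would
# call a language model and a learned reward model here.
def toy_generate(prompt: str) -> str:
    return random.choice([
        "Open models document their training data.",
        "I'm not sure.",
        "Open models let researchers inspect weights and data.",
    ])

def toy_reward(prompt: str, reply: str) -> float:
    # Real reward models are trained on human preference rankings;
    # this toy just prefers longer, more informative-looking replies.
    return float(len(reply))

print(best_of_n("How do open models differ from ChatGPT?",
                toy_generate, toy_reward))
```

In a full RLHF pipeline, the reward model's scores drive a reinforcement-learning update of the generator itself rather than a simple reranking, but the underlying signal, human preference judgments, is the same.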

The researchers will present their findings at the international conference on Conversational User Interfaces, taking place from July 19 to 21 in Eindhoven, the Netherlands. The full paper is available on the arXiv preprint server.

Overall, the emergence of transparent alternatives to ChatGPT signals a positive development, allowing for increased scrutiny, research, and responsible use of generative AI. The availability of open models empowers users to make informed decisions about technology and promotes accountability within the AI community.
