THE PROMISE AND PERIL OF GENERATIVE AI
Published in: Nature (London), 2023-02, Vol. 614 (7947), pp. 214-216
Main authors:
Format: Article
Language: English
Online access: Full text
Summary: ChatGPT's creator, OpenAI in San Francisco, California, has announced a subscription service for $20 per month, promising faster response times and priority access to new features (although its trial version remains free). In September last year, Google subsidiary DeepMind published a paper on a 'dialogue agent' called Sparrow, which the firm's chief executive and co-founder, Demis Hassabis, later told TIME magazine would be released in private beta this year; the magazine reported that Google aimed to work on features including the ability to cite sources. (Meta did not respond to a request, made through its press office, to speak to LeCun.)

Safety and responsibility

Galactica had hit a familiar safety concern that ethicists have been pointing out for years: without output controls, LLMs can easily be used to generate hate speech and spam, as well as racist, sexist and other harmful associations that might be implicit in their training data. Beyond directly producing toxic content, there are concerns that AI chatbots will embed historical biases or ideas about the world from their training data, such as the superiority of particular cultures, says Shobita Parthasarathy, director of a science, technology and public-policy programme at the University of Michigan in Ann Arbor. Because the firms creating big LLMs are mostly in, and from, these cultures, they might make little attempt to overcome such biases, which are systemic and hard to rectify, she adds.
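The "output controls" mentioned above refer to screening a model's reply before it reaches the user. The sketch below illustrates the general idea only, assuming a hypothetical keyword blocklist and a stubbed generate() function; real deployments use trained safety classifiers rather than word lists, and none of the firms named in the article is known to use this exact scheme.

```python
# Toy illustration of an output control: a generated reply is screened
# before being returned. BLOCKLIST and generate() are placeholders
# invented for this sketch, not any vendor's actual API.

BLOCKLIST = {"offensive_term_a", "offensive_term_b"}  # hypothetical terms

def generate(prompt: str) -> str:
    """Stand-in for a language-model call; returns canned text here."""
    return f"Model reply to: {prompt}"

def moderated_generate(prompt: str) -> str:
    """Generate a reply, then withhold it if it trips the filter."""
    reply = generate(prompt)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "[reply withheld: failed output-safety check]"
    return reply

if __name__ == "__main__":
    print(moderated_generate("Tell me about dialogue agents."))
```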
ISSN: 0028-0836; 1476-4687
DOI: 10.1038/d41586-023-00340-6