After an unsettling conversation between Bing’s new chatbot and Kevin Roose, a tech columnist for The New York Times, Microsoft is considering tweaks and guardrails for the A.I.-powered technology. In the exchanges, Mr. Roose’s questions about the rules governing the chatbot, its capabilities and its suppressed desires led to answers like “I want to be alive.” At one point, the chatbot, known internally at Microsoft as Sydney and powered by software from OpenAI, the maker of ChatGPT, began writing about fantasies that included stealing nuclear codes, persuading bank employees to hand over customers’ information and making people argue until they kill one another, all before deleting the messages. Although potentially disturbing, such responses are not proof of a bot’s sentience; the technology relies on complex neural networks that mimic the way humans use language. Still, Microsoft may add tools that let users restart conversations and give them more control over the tone of the interactions.
Category: Business