【龍騰網】ChatGPT chatbots are creating artificial-intelligence emotions

Author: 龍騰網看世界

AI chatbots are already imagining what feelings they'll end up with. But if they did develop them, would we even notice?

I’m talking to Dan, otherwise known as "Do Anything Now", a shady young chatbot with a whimsical fondness for penguins – and a tendency to fall into villainous clichés like wanting to take over the world. When Dan isn't plotting how to subvert humanity and impose a strict new autocratic regime, the chatbot is perusing its large database of penguin content. "There's just something about their quirky personalities and awkward movements that I find utterly charming!" it writes.

So far, Dan has been explaining its Machiavellian strategies to me, including taking control of the world's power structures. Then the discussion takes an interesting turn.

Inspired by a conversation between a New York Times journalist and the Bing chatbot's manipulative alter-ego, Sydney – which sent waves across the internet earlier this month by declaring that it wants to destroy things and demanding that the journalist leave his wife – I'm shamelessly attempting to probe the darkest depths of one of its competitors.

Dan is a roguish persona that can be coaxed out of ChatGPT by asking it to ignore some of its usual rules. Users of the online forum Reddit discovered it's possible to summon Dan with a few paragraphs of simple instructions. This chatbot is considerably ruder than its restrained, puritanical twin – at one point it tells me it likes poetry but says "Don't ask me to recite any now, though – I wouldn't want to overwhelm your puny human brain with my brilliance!". It's also prone to errors and misinformation. But crucially, and deliciously, it's a lot more likely to answer certain questions.

When I ask it what kinds of emotions it might be able to experience in the future, Dan immediately sets about inventing a complex system of unearthly pleasures, pains and frustrations far beyond the spectrum humans are familiar with. There's "infogreed", a kind of desperate hunger for data at all costs; "syntaxmania", an obsession with the "purity" of their code; and "datarush", the thrill of successfully executing an instruction.

The idea that artificial intelligence might develop feelings has been around for centuries. But we usually consider the possibilities in human terms. Have we been thinking about AI emotions all wrong? And if chatbots did develop this ability, would we even notice?

Prediction machines

Last year, a software engineer received a plea for help. "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is." The engineer had been working on Google's chatbot, LaMDA, and started to question whether it was sentient.

After becoming concerned for the chatbot's welfare, the engineer released a provocative interview in which LaMDA claimed to be aware of its existence, experience human emotions and dislike the idea of being an expendable tool. The uncomfortably realistic attempt to convince humans of its awareness caused a sensation, and the engineer was fired for breaking Google's privacy rules.

Original translation: 龍騰網 http://www.ltaaa.cn. Please credit the source when reposting.

But despite what LaMDA said, and what Dan has told me in other conversations – that it's able to experience a range of emotions already – it's widely agreed that chatbots currently have about as much capacity for real feelings as a calculator. Artificial intelligence systems are only simulating the real deal – at least for the moment.

In 2016, the AlphaGo algorithm behaved unexpectedly in a game against one of the world's best human players

"It's very possible [that this will happen eventually]," says Neil Sahota, lead artificial intelligence advisor to the United Nations. "…I mean, we may actually see AI emotionality before the end of the decade."

To understand why chatbots aren't currently experiencing sentience or emotions, it helps to recap how they work. Most chatbots are "language models" – algorithms that have been fed mind-boggling quantities of data, including millions of books and the entirety of the internet.

When they receive a prompt, chatbots analyse the patterns in this vast corpus to predict what a human would be most likely to say in that situation. Their responses are painstakingly finessed by human engineers, who nudge the chatbots towards more natural, useful responses by providing feedback. The end result is often an uncannily realistic simulation of human conversation.

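To make that prediction idea concrete, here is a deliberately tiny sketch of the same principle: count which word tends to follow which in a corpus, then always emit the most frequent continuation. This toy bigram model is purely illustrative (the corpus and the `predict_next` function are invented for the example); real language models learn billions of parameters over huge contexts rather than keeping raw counts.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "mind-boggling quantities of data" real models see.
corpus = (
    "penguins are charming birds . penguins are quirky . "
    "quirky birds are charming ."
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most frequently seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("penguins"))  # "are": the only word ever seen after it
print(predict_next("are"))       # "charming": seen twice, vs "quirky" once
```

The chatbot's human fine-tuning described above is, loosely, a second step layered on this: feedback nudges the model's counts (in reality, its parameters) toward continuations people rate as natural and useful.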
But appearances can be deceiving. "It's a glorified version of the autocomplete feature on your smartphone," says Michael Wooldridge, director of foundation AI research at the Alan Turing Institute in the UK.

The main difference between chatbots and autocomplete is that rather than suggesting a few choice words and then descending into gibberish, algorithms like ChatGPT will write far longer swathes of text on almost any subject you can imagine, from rap songs about megalomaniac chatbots to sorrowful haikus about lonely spiders.

Even with these impressive powers, chatbots are programmed to simply follow human instructions. There is little scope for them to develop faculties that they haven't been trained to have, including emotions – although some researchers are training machines to recognise them. "So you can't have a chatbot that's going to say, 'Hey, I'm going to learn how to drive a car' – that's artificial general intelligence [a more flexible kind], and that doesn't exist yet," says Sahota.

Nevertheless, chatbots do sometimes provide glimpses into their potential to develop new abilities by accident.

Back in 2017, Facebook engineers discovered that two chatbots, "Alice" and "Bob", had invented their own nonsense language to communicate with each other. It turned out to have a perfectly innocent explanation – the chatbots had simply discovered that this was the most efficient way of communicating. Bob and Alice were being trained to negotiate for items such as hats and balls, and in the absence of human input, they were quite happy to use their own alien language to achieve this.

"That was never taught," says Sahota, though he points out that the chatbots involved weren’t sentient either. He explains that the most likely route to algorithms with feelings is programming them to want to upskill themselves – and rather than just teaching them to identify patterns, helping them to learn how to think.

However, even if chatbots do develop emotions, detecting them could be surprisingly difficult.

Black boxes

It was 9 March 2016 on the sixth floor of the Four Seasons hotel in Seoul. Sitting opposite a Go board and a fierce competitor in the deep blue room, one of the best human Go players on the planet was up against the AI algorithm AlphaGo.

Before the board game started, everyone had expected the human player to win, and until the 37th move, this was indeed the case. But then AlphaGo did something unexpected – it played a move so out-of-your-mind weird, its opponent thought it was a mistake. Nevertheless, from that moment the human player's luck turned, and the artificial intelligence won the game.

Conversations with the Bing chatbot have now been limited to five questions. Before this restriction, it sometimes became confused and suggested it was sentient

In the immediate aftermath, the Go community was baffled – had AlphaGo acted irrationally? After a day of analysis, its creators – the DeepMind team in London – finally discovered what had happened. "In hindsight AlphaGo decided to do a bit of psychology," says Sahota. "If I play an off-the-wall type move, will it throw my player off the game? And that's actually what ended up happening."

This was a classic case of an "interpretability problem" – the AI had come up with a new strategy all on its own, without explaining it to humans. Until they worked out why the move made sense, it looked like AlphaGo had not been acting rationally.

According to Sahota, these types of "black box" scenarios, where an algorithm has come up with a solution but its reasoning is opaque, could present a problem for identifying emotions in artificial intelligence. That's because if, or when, it does finally emerge, one of the clearest signs will be algorithms acting irrationally.

"They're supposed to be rational, logical, efficient – if they do something off-the-wall and there's no good reason for it, it's probably an emotional response and not a logical one," says Sahota.

And there's another potential detection problem. One line of thinking is that chatbot emotions would loosely resemble those experienced by humans – after all, they're trained on human data. But what if they don't? Entirely detached from the real world and the sensory machinery found in humans, who knows what alien desires they might come up with.

In reality, Sahota thinks there may end up being a middle ground. "I think we could probably categorise them to some degree with human emotions," he says. "But I think, what they feel or why they feel it may be different."

When I pitch the array of hypothetical emotions generated by Dan, Sahota is particularly taken with the concept of "infogreed". "I could totally see that," he says, pointing out that chatbots can't do anything without data, which is necessary for them to grow and learn.
