Ebola gives me nightmares. Bird flu and SARS scare me. But what terrifies me is artificial intelligence. The first three, with enough resources, humans could stop. The last, which humans are creating, could become unstoppable.
Consider what artificial intelligence is. Grab an iPhone and ask Siri about the weather or stocks. Her answers are artificially intelligent. These artificially intelligent machines are cute now, but as they are given more power, they may not take long to spiral out of control.
In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market. Or a driverless car freezes on the highway because a software update goes awry.
But the upheavals can escalate quickly. Imagine how a medical robot, programmed to rid the body of cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.
Nick Bostrom, author of the book "Superintelligence," lays out some doomsday scenarios. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. These bots could fight diseases in the human body or eat radioactive material. But, Mr. Bostrom says, a "person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth."
Artificial-intelligence proponents argue that programmers are going to build safeguards. But didn't it take nearly a half-century for programmers to stop computers from crashing every time you used them?
Stephen Hawking, one of the smartest people on Earth, wrote that successful A.I. "would be the biggest event in human history. Unfortunately, it might also be the last."
One fear is that we are starting to create machines that can make decisions, but these machines don't have morality and likely never will. A more distant fear is that once we build systems that are as intelligent as humans, they will be able to build smarter machines, often referred to as superintelligence. That, experts say, is when things could really spiral out of control. We can't build safeguards into something that we haven't built ourselves.
"We humans steer the future not because we're the strongest beings on the planet, or the fastest, but because we are the smartest," said James Barrat, author of "Our Final Invention: Artificial Intelligence and the End of the Human Era." "So when there is something smarter than us on the planet, it will rule over us on the planet."
What makes it harder to comprehend is that we don't know what superintelligent machines will look or act like. "Artificial intelligence won't be like us," Mr. Barrat said, "but it will be the ultimate intellectual version of us."
Perhaps the scariest scenario is how these technologies will be used by the military. Bonnie Docherty, a lecturer at Harvard University and a senior researcher at Human Rights Watch, said that the race to build autonomous weapons with artificial intelligence — already underway — is reminiscent of the early days of the race to build nuclear weapons, and that treaties should be put in place before machines are killing people on the battlefield. Machines that have no morality or mortality, she said, "should not be given power to kill."
So how do we ensure that all these doomsday scenarios don't come to fruition? In some instances, we likely won't be able to stop them. But we can hinder some of the potential chaos by following the lead of Google. This year, when the search-engine giant acquired DeepMind, an artificial intelligence company based in London, the two companies put together an artificial intelligence safety and ethics board that aims to ensure these technologies are developed safely.
Demis Hassabis, founder of DeepMind, said that anyone building artificial intelligence should do the same thing. "They should definitely be thinking about the ethical consequences of what they do," Dr. Hassabis said. "Way ahead of time."