1 00:00:01,000 --> 00:00:05,466 I want to talk about a failure of intuition that many of us have experienced.
2 00:00:05,480 --> 00:00:06,920 When this illusion takes hold,
3 00:00:06,920 --> 00:00:09,360 we fail to notice a certain kind of danger.
4 00:00:09,360 --> 00:00:14,320 I also want to describe a scenario that I think is both terrifying
5 00:00:14,320 --> 00:00:16,850 and quite likely to occur,
6 00:00:16,850 --> 00:00:19,666 and if it does occur, that will not be a good thing.
7 00:00:20,080 --> 00:00:22,536 Yet you may not find what I'm describing frightening —
8 00:00:22,560 --> 00:00:24,640 you may even find it cool.
9 00:00:25,200 --> 00:00:29,996 So I want to describe how the gains we humans are making in artificial intelligence
10 00:00:30,000 --> 00:00:31,776 could ultimately destroy us.
11 00:00:31,800 --> 00:00:33,580 In fact, I think it's very hard to see
12 00:00:33,580 --> 00:00:36,960 how they won't destroy us, or lead us to destroy ourselves.
13 00:00:37,400 --> 00:00:39,256 And yet, if you're anything like me,
14 00:00:39,280 --> 00:00:41,936 you'll find it fun to think about these things.
15 00:00:41,960 --> 00:00:43,500 And the very fact that we find it fun
16 00:00:43,500 --> 00:00:45,610 is itself part of the problem.
17 00:00:45,610 --> 00:00:48,010 That reaction is what you should really be worried about!
18 00:00:48,010 --> 00:00:51,220 If, in this talk, I were to tell you
19 00:00:51,220 --> 00:00:54,106 that because of climate change or some other catastrophe
20 00:00:54,106 --> 00:00:56,800 we were going to suffer a famine,
21 00:00:56,800 --> 00:01:02,606 and that your grandchildren, or their grandchildren, were likely to live like this,
22 00:01:02,606 --> 00:01:04,170 you wouldn't think,
23 00:01:04,170 --> 00:01:08,220 "Interesting. I like this TED Talk."
24 00:01:09,230 --> 00:01:10,840 Famine isn't the least bit fun.
25 00:01:11,650 --> 00:01:14,970 Death as depicted in science fiction, on the other hand, is fun.
26 00:01:14,980 --> 00:01:18,546 And what troubles me most about the development of AI at this moment
27 00:01:18,546 --> 00:01:25,020 is that we seem unmoved in the face of a danger that is right in front of us.
28 00:01:25,036 --> 00:01:27,006 Even though I'm the one standing here giving this talk,
29 00:01:27,006 --> 00:01:30,006 I'm just as unresponsive as you are.
30 00:01:30,006 --> 00:01:32,670 It's as though we are standing before two doors.
31 00:01:32,670 --> 00:01:36,876 Behind door number one, we stop making progress in building intelligent machines.
32 00:01:36,876 --> 00:01:38,490 For some reason,
33 00:01:38,490 --> 00:01:41,040 our computer hardware and software simply stop getting better.
34 00:01:41,040 --> 00:01:44,926 Now take a moment to consider why that might happen.
35 00:01:44,926 --> 00:01:48,910 I mean, given how valuable intelligence and automation are,
36 00:01:48,910 --> 00:01:52,406 we will keep improving our technology if we are at all able to.
37 00:01:53,086 --> 00:01:55,520 So what exactly could stop us?
38 00:01:55,533 --> 00:01:57,620 A full-scale nuclear war?
39 00:01:58,960 --> 00:02:00,860 A global pandemic?
40 00:02:02,160 --> 00:02:03,890 An asteroid impact?
41 00:02:05,400 --> 00:02:08,020 Justin Bieber becoming president of the United States?
42 00:02:08,020 --> 00:02:09,936 (Laughter)
43 00:02:12,810 --> 00:02:15,330 But the point is, as we know,
44 00:02:15,330 --> 00:02:17,480 something would have to destroy civilization.
45 00:02:17,480 --> 00:02:19,360 You have to imagine
46 00:02:19,360 --> 00:02:25,876 how bad things would have to be for us, generation after generation, to be permanently unable to improve our technology —
47 00:02:25,876 --> 00:02:28,086 just how severe that would be.
48 00:02:28,086 --> 00:02:29,610 Almost by definition,
49 00:02:29,610 --> 00:02:32,186 that would be the worst thing ever to happen in human history.
50 00:02:32,186 --> 00:02:36,130 So the only alternative is what lies behind door number two:
51 00:02:36,130 --> 00:02:41,066 we go on improving our intelligent machines, year after year.
52 00:02:41,066 --> 00:02:42,440 At a certain point,
53 00:02:42,440 --> 00:02:45,950 we will build machines that are smarter than we are,
54 00:02:45,950 --> 00:02:48,440 and once we have machines smarter than ourselves,
55 00:02:48,440 --> 00:02:50,646 they will begin to improve themselves.
56 00:02:50,646 --> 00:02:55,006 Then we risk what the mathematician I. J. Good called an "intelligence explosion":
57 00:02:55,006 --> 00:02:57,966 that is, the process of improvement will no longer need human beings.
58 00:02:57,966 --> 00:03:01,130 Now, this is often caricatured, as in cartoons like this one,
59 00:03:01,130 --> 00:03:05,170 with rebellious robots rising up to attack us.
60 00:03:05,170 --> 00:03:08,636 But that isn't the most likely scenario.
61 00:03:08,636 --> 00:03:12,926 It's not that our machines will spontaneously turn evil.
62 00:03:12,926 --> 00:03:18,076 The real concern is that when we build machines so much more competent than we are,
63 00:03:18,086 --> 00:03:22,436 the slightest divergence between their goals and ours could destroy us.
64 00:03:24,030 --> 00:03:26,690 Just think about how we relate to ants:
65 00:03:26,690 --> 00:03:28,080 We don't hate them.
66 00:03:28,080 --> 00:03:30,674 We don't go out of our way to harm them.
67 00:03:30,674 --> 00:03:32,780 In fact, we sometimes take pains not to harm them —
68 00:03:32,780 --> 00:03:34,514 we step over them, for example, rather than tread on them.
69 00:03:34,514 --> 00:03:39,240 But whenever their presence seriously conflicts with one of our goals —
70 00:03:39,240 --> 00:03:41,707 say, when we're putting up a building like this one —
71 00:03:41,707 --> 00:03:44,927 we kill them without a second thought.
72 00:03:44,927 --> 00:03:48,101 The concern is that one day we will build machines that —
73 00:03:48,101 --> 00:03:50,176 whether they are conscious or not —
74 00:03:50,176 --> 00:03:52,280 could treat us with the same indifference.
75 00:03:53,250 --> 00:03:57,030 Now, I suspect that to most of you this scenario seems far-fetched.
76 00:03:57,430 --> 00:04:03,240 I'd bet some of you doubt that superintelligence is even possible,
77 00:04:03,240 --> 00:04:05,440 let alone something humanity needs to avoid.
78 00:04:05,440 --> 00:04:08,990 But then you must find something wrong with one of the following assumptions.
79 00:04:08,990 --> 00:04:10,694 There are three of them.
80 00:04:11,944 --> 00:04:16,820 In physical systems, intelligence is a matter of information processing.
81 00:04:17,460 --> 00:04:19,370 Actually, this is slightly more than an assumption,
82 00:04:19,370 --> 00:04:23,449 because we have already built narrow intelligence into our machines,
83 00:04:23,449 --> 00:04:28,846 and many of these machines already perform at a superhuman level.
84 00:04:28,846 --> 00:04:33,940 And we know that mere matter can give rise to what is called "general intelligence,"
85 00:04:33,940 --> 00:04:37,406 an ability to think flexibly across multiple domains,
86 00:04:37,406 --> 00:04:40,576 because our brains have already managed it. Right?
87 00:04:40,576 --> 00:04:44,776 I mean, the brain is just made of atoms.
88 00:04:44,776 --> 00:04:49,436 As long as we keep on building systems of atoms,
89 00:04:49,436 --> 00:04:52,036 our machines will display more and more intelligent behavior,
90 00:04:52,036 --> 00:04:55,026 and unless progress somehow comes to a halt,
91 00:04:55,026 --> 00:04:59,136 we will eventually build general intelligence into our machines.
92 00:04:59,146 --> 00:05:02,556 It's important to understand that the rate of progress doesn't matter,
93 00:05:02,556 --> 00:05:06,206 because any progress at all is enough to take us past the point of no return.
94 00:05:06,206 --> 00:05:08,106 We don't need Moore's law to continue.
95 00:05:08,106 --> 00:05:10,126 We don't need exponential growth.
96 00:05:10,126 --> 00:05:11,686 We just need to keep going.
97 00:05:13,280 --> 00:05:16,630 The second assumption is that we will keep going.
98 00:05:16,630 --> 00:05:20,030 We will continue to improve our intelligent machines.
99 00:05:21,440 --> 00:05:25,090 And given the value of intelligence...
100 00:05:25,090 --> 00:05:29,240 I mean, it is intelligence that allows us to value anything at all,
101 00:05:29,240 --> 00:05:31,916 and we need intelligence to protect everything we value.
102 00:05:31,916 --> 00:05:34,116 Intelligence is our most precious resource.
103 00:05:34,116 --> 00:05:36,200 So we want to keep developing it.
104 00:05:36,200 --> 00:05:38,876 We have problems we desperately need to solve.
105 00:05:38,876 --> 00:05:42,546 We want to cure diseases like Alzheimer's and cancer.
106 00:05:42,546 --> 00:05:44,610 We want to understand economic systems.
107 00:05:44,610 --> 00:05:46,780 We want to improve our climate science.
108 00:05:46,780 --> 00:05:49,366 So if we can do this, we will keep developing intelligence.
109 00:05:49,366 --> 00:05:52,656 To put it another way: the train has already left the station, and there's no brake to pull.
110 00:05:53,836 --> 00:06:01,086 Finally, we do not stand at the peak of intelligence, or anywhere near it.
111 00:06:01,726 --> 00:06:03,576 And this really is the crucial observation.
112 00:06:03,576 --> 00:06:06,116 It is what makes our situation so precarious,
113 00:06:06,116 --> 00:06:10,926 and it is what makes our intuitions about the risk so unreliable.
114 00:06:10,926 --> 00:06:13,900 Now, just consider the smartest person who has ever lived.
115 00:06:13,900 --> 00:06:18,310 On almost everyone's shortlist here is John von Neumann.
116 00:06:18,310 --> 00:06:20,916 I mean, the impression John von Neumann made on the people around him,
117 00:06:20,916 --> 00:06:25,326 including the sharpest mathematicians and physicists of his time,
118 00:06:25,326 --> 00:06:27,300 is well documented.
119 00:06:27,300 --> 00:06:31,266 If even half the stories about him are half true,
120 00:06:31,266 --> 00:06:32,356 then there's no question
121 00:06:32,356 --> 00:06:34,366 he was one of the smartest people who has ever lived.
122 00:06:34,366 --> 00:06:37,646 So when we draw a chart comparing intelligence,
123 00:06:37,646 --> 00:06:41,460 over on the right, at the high end, we have John von Neumann.
124 00:06:41,460 --> 00:06:43,239 In the middle, we have you and me.
125 00:06:44,269 --> 00:06:45,640 And way over on the left, we have a chicken.
126 00:06:45,640 --> 00:06:46,816 (Laughter)
127 00:06:46,816 --> 00:06:48,906 That's right — a chicken.
128 00:06:48,906 --> 00:06:49,796 (Laughter)
129 00:06:49,796 --> 00:06:52,916 There's no reason for me to make this talk any bleaker than it needs to be.
130 00:06:52,916 --> 00:06:56,560 (Laughter)
131 00:06:56,560 --> 00:07:03,550 But it is very likely that the spectrum of intelligence extends much further than we currently conceive,
132 00:07:03,550 --> 00:07:06,750 and if we build machines more intelligent than ourselves,
133 00:07:06,750 --> 00:07:11,566 their intelligence may well exceed the highest intelligence we know of,
134 00:07:11,566 --> 00:07:14,490 and exceed us in ways we cannot imagine.
135 00:07:15,140 --> 00:07:19,170 Just as important, they could surpass us by sheer speed of computation alone.
136 00:07:19,170 --> 00:07:21,246 Right? Imagine we built a
137 00:07:21,246 --> 00:07:29,886 superintelligent AI that was no smarter than a team of researchers at Harvard or MIT,
138 00:07:29,886 --> 00:07:34,326 but whose circuits ran about a million times faster than biochemical ones.
139 00:07:34,326 --> 00:07:39,446 This machine should therefore think about a million times faster than the minds that built it.
140 00:07:39,446 --> 00:07:41,296 So if it runs for one week,
141 00:07:41,296 --> 00:07:48,376 it can complete work that would take human beings twenty thousand years to finish.
142 00:07:49,410 --> 00:07:50,724 And how could we even understand
143 00:07:50,724 --> 00:07:55,360 how an AI accomplishes computation on that scale?
144 00:07:56,610 --> 00:08:01,730 The other thing that worries me, frankly,
145 00:08:01,730 --> 00:08:04,486 is this... just imagine the best-case scenario for a moment.
146 00:08:04,486 --> 00:08:09,560 Imagine we designed a superintelligent AI with no safety problems whatsoever.
147 00:08:09,560 --> 00:08:12,946 We got the design perfect the very first time.
148 00:08:12,946 --> 00:08:17,216 It would be as though we had been handed an oracle that behaves exactly as intended.
149 00:08:18,200 --> 00:08:21,580 This machine would also turn out to be the perfect labor-saving device,
150 00:08:21,580 --> 00:08:23,880 because it could produce other machines
151 00:08:23,880 --> 00:08:25,683 to do any physical work,
152 00:08:25,683 --> 00:08:27,396 powered by sunlight,
153 00:08:27,396 --> 00:08:29,826 for roughly the cost of the raw materials.
154 00:08:29,826 --> 00:08:33,306 So we're not just talking about the end of human drudgery;
155 00:08:33,306 --> 00:08:36,616 we're also talking about the end of most intellectual work.
156 00:08:37,196 --> 00:08:40,260 So what would we humans do when faced with that loss of work?
157 00:08:40,260 --> 00:08:44,946 Well, we'd be free to throw Frisbees and give each other massages,
158 00:08:44,946 --> 00:08:48,690 take some LSD and put on some strange outfits,
159 00:08:48,690 --> 00:08:50,896 and the whole world could end up looking like Burning Man.
160 00:08:50,896 --> 00:08:53,026 (Laughter)
161 00:08:54,596 --> 00:08:56,700 Now, that might sound pretty good,
162 00:08:57,220 --> 00:08:58,550 but ask yourself:
163 00:08:58,550 --> 00:09:02,186 what would happen under our current economic and political order?
164 00:09:02,606 --> 00:09:04,816 It seems likely that we would witness
165 00:09:04,816 --> 00:09:10,336 wealth inequality and unemployment on a scale we have never seen before.
166 00:09:10,336 --> 00:09:15,446 If this new wealth were not immediately put to the service of all humanity,
167 00:09:15,446 --> 00:09:19,170 then even as a few billionaires spent fortunes gracing the covers of our business magazines,
168 00:09:19,170 --> 00:09:21,896 the rest of the world would be left to starve.
169 00:09:21,896 --> 00:09:23,920 And if the Russians or the Chinese
170 00:09:23,920 --> 00:09:29,170 heard that some companies in Silicon Valley were about to deploy a superintelligent AI,
171 00:09:29,170 --> 00:09:30,330 what would they think?
172 00:09:30,330 --> 00:09:35,216 This machine would be capable of waging war, on the ground or in cyberspace, with unprecedented force.
173 00:09:35,216 --> 00:09:37,160 This is a winner-take-all scenario.
174 00:09:38,120 --> 00:09:43,006 A six-month lead in this AI contest
175 00:09:43,006 --> 00:09:47,366 would amount to at least 500,000 years of human-level work done in advance.
176 00:09:47,366 --> 00:09:54,166 Even a mere rumor of a breakthrough in AI could be enough to send our species into a frenzy.
177 00:09:54,920 --> 00:10:00,000 Now, one of the most alarming things, I think,
178 00:10:00,000 --> 00:10:05,986 is what AI researchers say when they are trying to be reassuring.
179 00:10:06,856 --> 00:10:10,250 They keep telling us we don't need to worry because there's still time:
180 00:10:10,250 --> 00:10:12,636 "Don't you know it's a long way off?
181 00:10:12,636 --> 00:10:15,000 It's another fifty or a hundred years away."
182 00:10:15,590 --> 00:10:17,010 One researcher once put it this way:
183 00:10:17,010 --> 00:10:21,000 "Worrying about AI safety is like worrying about overpopulation on Mars."
184 00:10:22,038 --> 00:10:23,984 That's the equivalent of Silicon Valley telling you,
185 00:10:23,984 --> 00:10:26,194 "You're barely in your twenties and you're already worried the sky is falling!"
186 00:10:26,194 --> 00:10:27,150 (Laughter)
187 00:10:27,150 --> 00:10:33,586 No one seems to realize that invoking the timeline here is a complete non sequitur.
188 00:10:34,086 --> 00:10:37,736 If intelligence is just a matter of information processing,
189 00:10:37,736 --> 00:10:40,626 and we continue to improve our machines,
190 00:10:40,626 --> 00:10:44,116 we will eventually produce some form of superintelligence.
191 00:10:44,116 --> 00:10:46,760 But we have no idea how long it will take us
192 00:10:46,760 --> 00:10:50,896 to produce a superintelligence that is safe.
193 00:10:52,256 --> 00:10:53,420 Let me say that again.
194 00:10:53,420 --> 00:10:56,516 We have no idea how long it will take us
195 00:10:56,516 --> 00:10:59,496 to produce a superintelligence that is safe.
196 00:11:01,176 --> 00:11:02,290 And if you haven't noticed,
197 00:11:02,290 --> 00:11:04,700 fifty years is not what it used to be.
198 00:11:04,700 --> 00:11:06,984 This chart shows fifty years in months.
199 00:11:06,984 --> 00:11:09,370 This is how long we've had the iPhone.
200 00:11:09,370 --> 00:11:12,480 This is how long The Simpsons has been on television.
201 00:11:12,480 --> 00:11:18,390 Fifty years is not much time for our species to meet one of its greatest challenges.
202 00:11:19,560 --> 00:11:23,240 Once again, to something we have good reason to believe is coming,
203 00:11:23,240 --> 00:11:26,246 we are failing to mount an appropriate emotional response.
204 00:11:26,246 --> 00:11:30,366 The computer scientist Stuart Russell has a nice analogy for this.
205 00:11:30,366 --> 00:11:35,146 He said: imagine that we received a message from an alien civilization
206 00:11:35,146 --> 00:11:36,206 which read:
207 00:11:36,206 --> 00:11:38,196 "People of Earth,
208 00:11:38,196 --> 00:11:41,666 we will arrive on your planet in fifty years.
209 00:11:41,666 --> 00:11:42,600 Get ready."
210 00:11:43,490 --> 00:11:46,636 Would we really just count down the months until the aliens arrive?
211 00:11:47,556 --> 00:11:50,606 We would feel a little more urgency than we do.
212 00:11:52,626 --> 00:11:55,400 Another reason we are told not to worry is that
213 00:11:55,400 --> 00:11:57,636 these machines will simply carry our values forward,
214 00:11:57,636 --> 00:11:59,956 because they will be, in effect, extensions of ourselves.
215 00:11:59,956 --> 00:12:02,336 They will be grafted onto our brains,
216 00:12:02,336 --> 00:12:04,696 and we will essentially become their limbic systems.
217 00:12:04,696 --> 00:12:09,360 Now take a moment to consider that the safest and only prudent path forward,
218 00:12:09,360 --> 00:12:14,130 the recommended path, is to implant this technology directly into our brains.
219 00:12:14,680 --> 00:12:17,940 That may indeed be the safest and only prudent path forward,
220 00:12:17,940 --> 00:12:21,246 but before you go sticking it inside your head,
221 00:12:21,246 --> 00:12:24,886 the safety concerns about the technology would need to be worked out.
222 00:12:24,886 --> 00:12:26,706 (Laughter)
223 00:12:26,706 --> 00:12:28,340 The deeper problem is that
224 00:12:28,340 --> 00:12:32,060 building superintelligent AI on its own
225 00:12:32,060 --> 00:12:35,876 seems easier than building a superintelligent AI that, at the level of neuroscience,
226 00:12:35,876 --> 00:12:39,620 can be seamlessly integrated with our brains.
227 00:12:41,120 --> 00:12:47,340 Given that the companies and governments doing this work are likely to be racing against one another,
228 00:12:47,340 --> 00:12:50,720 and given that to win this race is to win the world,
229 00:12:50,720 --> 00:12:53,574 provided you don't squander the prize the moment after,
230 00:12:53,574 --> 00:12:57,850 it seems likely that whatever is easier to do will get done first.
231 00:12:58,490 --> 00:13:01,120 Unfortunately, apart from urging all of us to think this problem through,
232 00:13:01,120 --> 00:13:03,476 I have no solution to offer.
233 00:13:03,476 --> 00:13:05,716 I do think that on the subject of AI
234 00:13:05,716 --> 00:13:07,906 we need something like a Manhattan Project —
235 00:13:08,416 --> 00:13:12,076 not on how to build AI, because I think we will inevitably build it one day,
236 00:13:12,076 --> 00:13:14,416 but on working out how to avoid an arms race
237 00:13:14,416 --> 00:13:17,690 and how to develop AI in a way that serves our interests.
238 00:13:18,140 --> 00:13:22,626 When you are talking about superintelligent AI that can modify itself,
239 00:13:22,626 --> 00:13:27,126 it seems we will only have one chance to get its development safe,
240 00:13:27,126 --> 00:13:28,266 and even if we get that right,
241 00:13:28,266 --> 00:13:32,946 we will still have to absorb the economic and political consequences it brings.
242 00:13:33,806 --> 00:13:40,360 But the moment we agree that information processing is the starting point of intelligence,
243 00:13:40,360 --> 00:13:46,160 that some appropriate computational system is the basis of intelligence,
244 00:13:46,160 --> 00:13:51,030 that we will keep improving these systems without end,
245 00:13:51,030 --> 00:13:57,850 and that much of what is coming will exceed our current understanding,
246 00:13:57,850 --> 00:14:03,130 then we have to admit that we are in the process of building some sort of god.
247 00:14:03,130 --> 00:14:07,470 Now would be a good time to make sure it is a god we can live with.
248 00:14:08,030 --> 00:14:09,253 Thank you very much.
249 00:14:09,253 --> 00:14:11,296 (Applause)