02.26 These Are the Pentagon’s New Ethics “Principles” for Military AI

On Monday, the Pentagon announced the official adoption of a series of new principles for ethical use of artificial intelligence in warfare, the Associated Press reports.

The principles grew out of a commission with the (darkly Newspeak-y) name of the Defense Innovation Board, which released its recommendations (title: “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense”) to the Pentagon last October.

The board was fronted by former Google CEO Eric Schmidt, an interesting twist (as pointed out by the AP) given the way Google seemed to (or: pretended to) drop out of a defense department project involving A.I. in 2018 after internal protests from Google staffers (to say nothing of the way Google’s involvement was handled by the Pentagon).

Per the late 2019 report, the principles are (with our paraphrasing in parentheses):

If this all sounds broad, harmless, ineffectual, myopic, painfully obvious, and toothless, well…

The Next Web called the principles “hazy” and “toothless.” And Dave Gershgorn of OneZero noted that these supposed ethics are missing “‘don’t kill somebody with a robot.’” You can decide for yourself just how effective they are by reading them here.
