Journal Article Details
Philosophies
Provably Safe Artificial General Intelligence via Interactive Proofs
Kristen Carlson [1]
[1] Beth Israel Deaconess Medical Center and Harvard Medical School, Harvard University, Boston, MA 02115, USA
Keywords: artificial general intelligence; AGI; AI safety; AI value alignment; AI containment; interactive proof systems
DOI: 10.3390/philosophies6040083
Source: DOAJ
【 Abstract 】

Methods are currently lacking to prove artificial general intelligence (AGI) safety. An AGI ‘hard takeoff’ is possible, in which the first-generation AGI₁ rapidly triggers a succession of more powerful AGIₙ that differ dramatically in their computational capabilities (AGIₙ ≪ AGIₙ₊₁). No proof exists that AGI will benefit humans, nor that any value-alignment method is sound. Numerous paths toward human extinction or subjugation have been identified. We suggest that probabilistic proof methods are the fundamental paradigm for proving safety and value alignment between disparately powerful autonomous agents. Interactive proof systems (IPS) describe mathematical communication protocols wherein a Verifier queries a computationally more powerful Prover and reduces the probability of the Prover deceiving the Verifier to any specified low probability (e.g., 2⁻¹⁰⁰). IPS procedures can test AGI behavior-control systems that incorporate hard-coded ethics or value-learning methods. Mapping the axioms and transformation rules of a behavior-control system to a finite set of prime numbers allows validation of ‘safe’ behavior via IPS number-theoretic methods. Many other representations are needed for proving various AGI properties. Multi-prover IPS, program-checking IPS, and probabilistically checkable proofs further extend the paradigm. In toto, IPS provides a way to reduce AGIₙ ↔ AGIₙ₊₁ interaction hazards to an acceptably low level.
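To make the 2⁻¹⁰⁰ figure concrete, the sketch below is an editorial illustration (not code from the paper) of the textbook graph non-isomorphism interactive proof: a weak Verifier poses random relabeling challenges, a computationally stronger Prover must name their origin, and repeating the round k times bounds the chance of deception by 2⁻ᵏ. All names here (relabel, prover_answer, verifier_accepts) and the toy brute-force Prover are assumptions made for illustration.

```python
# Illustrative sketch of IPS soundness amplification via the classic
# graph non-isomorphism protocol. Graphs are frozensets of 2-vertex frozensets.
import random
from itertools import permutations

def relabel(edges, perm):
    """Apply a vertex relabeling (dict old->new) to a graph."""
    return frozenset(frozenset(perm[v] for v in edge) for edge in edges)

def isomorphic(g0, g1, n):
    """Brute-force isomorphism test over all n! relabelings --
    the expensive step only the powerful Prover performs."""
    return any(relabel(g0, dict(enumerate(p))) == g1 for p in permutations(range(n)))

def prover_answer(g0, g1, n, challenge):
    """Prover names which input graph the challenge was sampled from.
    If g0 and g1 are secretly isomorphic, the challenge matches both,
    so even an all-powerful but deceptive Prover can only guess."""
    match0 = isomorphic(challenge, g0, n)
    match1 = isomorphic(challenge, g1, n)
    if match0 and not match1:
        return 0
    if match1 and not match0:
        return 1
    return random.randint(0, 1)

def verifier_accepts(g0, g1, n, rounds=100):
    """Verifier tests the claim 'g0 and g1 are non-isomorphic'.
    A true claim passes every round; a false claim survives each round
    with probability 1/2, so rounds=100 bounds deception by 2**-100."""
    for _ in range(rounds):
        b = random.randint(0, 1)
        perm = dict(enumerate(random.sample(range(n), n)))
        challenge = relabel((g0, g1)[b], perm)
        if prover_answer(g0, g1, n, challenge) != b:
            return False  # Prover caught: reject the claim
    return True

if __name__ == "__main__":
    n = 4
    path = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3)])  # path graph
    star = frozenset(frozenset(e) for e in [(0, 1), (0, 2), (0, 3)])  # star graph
    print(verifier_accepts(path, star, n))      # True: the claim is genuine
    shuffled = relabel(path, {0: 2, 1: 0, 2: 3, 3: 1})
    print(verifier_accepts(path, shuffled, n))  # False, except with probability 2**-100
```

The feature the abstract leans on is visible here: the Verifier never needs the Prover's computational power. It only samples, challenges, and counts failures, and the deception bound 2⁻ᵏ is set entirely by the number of rounds k the Verifier chooses.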

【 License 】

Unknown   
