Citation: Li Guojie. A Safety Risk Taxonomy of AI Systems Based on Decidability Theory[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202660032

    A Safety Risk Taxonomy of AI Systems Based on Decidability Theory

This paper reconceptualizes AI safety through the lens of logical complexity and classifies safety problems into three levels: the R1 level (decidable propositions, amenable to formal verification before deployment), the R2 level (semi-decidable propositions, for which only post-hoc evidence of violations can be obtained), and the R3 level (propositions that are not recursively enumerable, where even the detection of unsafe behavior cannot be guaranteed). The R1/R2 boundary is critical: all engineering-tractable safety problems lie at the R1 level, so achieving safety requires a dual-track effort combining correctness verification with institutional safety nets. In the high-profile field of AI safety, current risks have not yet escalated to the R3 level, but the governance trajectory must shift urgently from "pre-verification" to "runtime governance", prioritizing external monitoring and control: gating, rollback, isolation, human-in-the-loop oversight, and tiered permissions. This in turn calls for a dual-sovereignty arrangement that combines embedded technical safeguards with external institutions, preserving human corrective authority and civilizational safety within a logically incomplete reality.
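
The R1/R2 boundary tracks the classic gap between decidable and semi-decidable properties. As a minimal sketch, not taken from the paper, the following Python contrasts the two regimes on a toy transition system: r1_verify_finite certifies a finite-state model safe or unsafe before deployment (decidable), while r2_monitor can only ever witness a violation at runtime and can never certify safety (semi-decidable). All function names and the toy model are hypothetical.

    from itertools import count

    def r1_verify_finite(transitions, start, bad_states):
        # Decidable (R1): the state space is finite, so exhaustive
        # reachability search terminates with a definite verdict.
        seen, frontier = {start}, [start]
        while frontier:
            s = frontier.pop()
            if s in bad_states:
                return False  # counterexample reached: provably unsafe
            for t in transitions.get(s, []):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return True  # search exhausted: provably safe before deployment

    def r2_monitor(step, is_violation, budget=None):
        # Semi-decidable (R2): halt with evidence if a violation occurs;
        # on a safe run this loops forever (or times out inconclusively),
        # so safety itself is never certified, only violations witnessed.
        steps = count() if budget is None else range(budget)
        for i in steps:
            if is_violation(step(i)):
                return i  # post-hoc evidence of unsafe behavior
        return None  # budget exhausted: "safe so far" is all we learn

    # Toy system: states {0, 1, 2}, where state 2 is unsafe but unreachable.
    transitions = {0: [1], 1: [0]}
    assert r1_verify_finite(transitions, 0, bad_states={2})  # certified safe

    # Runtime trace that first misbehaves at step 3.
    trace = [0, 1, 0, 2]
    assert r2_monitor(lambda i: trace[i % 4], lambda s: s == 2) == 3

At the R3 level not even such a monitor exists: no procedure is guaranteed to eventually flag every violation, which is why the abstract's prescription leans on external safeguards and institutions rather than detection alone.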