
    Identity-Based Threshold Decryption Scheme from Lattices under the Standard Model

    • Abstract: The identity-based threshold decryption (IBTD) system combines secret sharing with identity-based encryption. In a (t, N) threshold decryption scheme, N decryption servers share the private key corresponding to a user's identity; to decrypt, at least t servers must participate and compute their decryption shares in order to recover the plaintext, while any fewer than t servers can learn no information about the plaintext. Existing lattice-based IBTD schemes are proven secure only in the random oracle model, and their main technique is to directly split the private key, which is statistically close to a Gaussian distribution. To address this, a non-interactive IBTD scheme is constructed. A public vector is split using Lagrange secret sharing, so that each decryption server obtains its own characteristic vector; each private key share is then produced by preimage sampling on that characteristic vector using the user's private trapdoor. This effectively hides the user's complete private key and improves the security of the scheme. Decryption shares are made publicly verifiable via the hardness of the discrete logarithm problem. When decryption shares are combined, the correctness of decryption is guaranteed by the homomorphism between the operations of splitting and combining the public vector and those of splitting and combining the decryption shares. Under the standard model, the security of the scheme is reduced to the decisional learning with errors (LWE) hardness assumption, proving that it is IND-sID-CPA secure.
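The (t, N) threshold mechanism described in the abstract rests on Lagrange interpolation: any t shares of a degree-(t-1) polynomial determine its value at 0, while fewer reveal nothing. A minimal sketch of this Shamir-style sharing over a prime field follows; the modulus and parameters are toy values for illustration, not the paper's construction (which applies the split to a public vector rather than a scalar secret).

```python
# Illustrative (t, N) secret sharing via Lagrange interpolation.
# Toy parameters only -- not the paper's lattice construction.
import random

Q = 2**31 - 1  # a prime modulus (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(Q) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, Q) for k, c in enumerate(coeffs)) % Q
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(Q)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % Q
                den = den * (xi - xj) % Q
        # pow(den, -1, Q) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, Q)) % Q
    return secret
```

Any t-subset of the shares recovers the secret; a subset of size t-1 is consistent with every possible secret, which is what lets each decryption server hold a share without ever seeing the whole key.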

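The security reduction targets the decisional LWE assumption: samples (A, As + e) with small noise e are indistinguishable from uniform. A toy Regev-style scheme makes the role of this assumption concrete; all parameters (n, m, q) here are small demonstration values chosen so the sketch runs, not the carefully sized parameters a real lattice scheme requires.

```python
# Toy Regev-style bit encryption illustrating the LWE assumption.
# Demo parameters only; real schemes need much larger, carefully
# chosen n, m, q and Gaussian (not uniform {-1,0,1}) noise.
import random

n, m, q = 8, 16, 257  # dimension, number of samples, prime modulus

def keygen():
    s = [random.randrange(q) for _ in range(n)]        # secret key
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]  # small noise
    # b = A*s + e (mod q); LWE says (A, b) looks uniform without s.
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m)]
    return s, (A, b)

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m) if random.random() < 0.5]  # random subset
    c1 = [sum(A[i][j] for i in S) % q for j in range(n)]
    c2 = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    # d = (subset noise) + bit * q/2; round to the nearer of 0, q/2.
    d = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0
```

Decryption subtracts the inner product with s, leaving only the small accumulated noise plus bit * q/2; as long as the noise stays below q/4, rounding recovers the bit, which mirrors the correctness argument threshold schemes must preserve when shares are combined.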

