
Cryptography and Data Security

Description

This cluster of papers covers advanced cryptographic schemes and protocols, including homomorphic encryption, identity-based encryption, attribute-based encryption, lattice-based cryptography, secure multi-party computation, searchable encryption, pairing-based cryptography, privacy-preserving computation, zero-knowledge proofs, and trapdoor functions.

Keywords

Homomorphic Encryption; Identity-Based Encryption; Attribute-Based Encryption; Lattice-Based Cryptography; Secure Multi-Party Computation; Searchable Encryption; Pairing-Based Cryptography; Privacy-Preserving Computation; Zero-Knowledge Proofs; Trapdoor Functions

We propose a novel paradigm for defining security of cryptographic protocols, called universally composable security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed with an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system. This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more. We show how to formulate universally composable definitions of security for practically any cryptographic task. Furthermore, we demonstrate that practically any such definition can be realized using known techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero knowledge and more. We also make initial steps towards studying the realizability of the proposed definitions in various settings.
Usually, a proof of a theorem contains more knowledge than the mere fact that the theorem is true. For instance, to prove that a graph is Hamiltonian it suffices to exhibit a Hamiltonian tour in it; however, this seems to contain more knowledge than the single bit Hamiltonian/non-Hamiltonian. In this paper a computational complexity theory of the “knowledge” contained in a proof is developed. Zero-knowledge proofs are defined as those proofs that convey no additional knowledge other than the correctness of the proposition in question. Examples of zero-knowledge proof systems are given for the languages of quadratic residuosity and quadratic nonresiduosity. These are the first examples of zero-knowledge proofs for languages not known to be efficiently recognizable.
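A minimal sketch of the classic interactive proof for quadratic residuosity described above, in Python with toy parameters (the moduli, round count, and helper names are illustrative, not taken from the paper): the prover convinces the verifier that x is a square mod N without revealing a square root.

```python
import secrets
from math import gcd

def zk_qr_round(N, x, w):
    """One round of the interactive proof that x is a quadratic residue mod N.
    The prover knows w with w^2 = x (mod N). Returns True iff the verifier's
    check passes."""
    # Prover: commit to a random square a = r^2 mod N.
    while True:
        r = secrets.randbelow(N)
        if r > 1 and gcd(r, N) == 1:
            break
    a = (r * r) % N
    # Verifier: random challenge bit.
    b = secrets.randbelow(2)
    # Prover: respond with r or r*w depending on the challenge.
    z = (r * pow(w, b, N)) % N
    # Verifier: accept iff z^2 = a * x^b (mod N).
    return (z * z) % N == (a * pow(x, b, N)) % N

# Toy parameters (real instances use large random primes).
p, q = 1009, 1013
N = p * q
w = 123456 % N
x = (w * w) % N
assert all(zk_qr_round(N, x, w) for _ in range(40))  # 40 rounds: cheating prover passes with prob <= 2^-40
```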
Software protection is one of the most important issues concerning computer practice. There exist many heuristics and ad-hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves. In this paper, we provide theoretical treatment of software protection. We reduce the problem of software protection to the problem of efficient simulation on oblivious RAM. A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two inputs with the same running time. For example, an oblivious Turing Machine is one for which the movement of the heads on the tapes is identical for each computation. (Thus, the movement is independent of the actual input.) What is the slowdown in the running time of a machine, if it is required to be oblivious? In 1979, Pippenger and Fischer showed how a two-tape oblivious Turing Machine can simulate, on-line, a one-tape Turing Machine, with a logarithmic slowdown in the running time. We show an analogous result for the random-access machine (RAM) model of computation. In particular, we show how to do an on-line simulation of an arbitrary RAM by a probabilistic oblivious RAM with a polylogarithmic slowdown in the running time. On the other hand, we show that a logarithmic slowdown is a lower bound.
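For intuition, the sketch below shows the trivial "scan everything" oblivious RAM: every logical access touches every physical cell, so the observable access pattern leaks nothing. This is only the linear-overhead baseline that the paper improves to polylogarithmic; the class and parameter names are illustrative.

```python
class LinearScanORAM:
    """Trivial oblivious RAM: every logical read/write touches every physical
    cell in the same fixed order, so the server-visible access pattern is
    independent of the addresses actually used. Overhead is O(n) per access;
    the paper's construction brings this down to polylogarithmic."""

    def __init__(self, n):
        # A real construction would also re-encrypt each cell with fresh randomness.
        self.cells = [0] * n

    def access(self, op, addr, value=None):
        result = None
        for i in range(len(self.cells)):       # scan ALL cells, always
            v = self.cells[i]
            if i == addr:
                result = v
                if op == "write":
                    v = value
            self.cells[i] = v                  # rewrite every cell
        return result

oram = LinearScanORAM(8)
oram.access("write", 3, 42)
assert oram.access("read", 3) == 42
```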
Every function of n inputs can be efficiently computed by a complete network of n processors in such a way that: …
Our main result is a reduction from worst-case lattice problems such as GapSVP and SIVP to a certain learning problem. This learning problem is a natural extension of the “learning from parity with error” problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for GapSVP and SIVP. A main open question is whether this reduction can be made classical (i.e., nonquantum). We also present a (classical) public-key cryptosystem whose security is based on the hardness of the learning problem. By the main result, its security is also based on the worst-case quantum hardness of GapSVP and SIVP. The new cryptosystem is much more efficient than previous lattice-based cryptosystems: the public key is of size Õ(n²) and encrypting a message increases its size by a factor of Õ(n) (in previous cryptosystems these values are Õ(n⁴) and Õ(n²), respectively). In fact, under the assumption that all parties share a random bit string of length Õ(n²), the size of the public key can be reduced to Õ(n).
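A toy Python sketch of LWE-based public-key bit encryption in the style described above (the parameters and helper names are illustrative and far too small to be secure; they are chosen only so the arithmetic is visible):

```python
import secrets

# Toy parameters for LWE encryption; real parameters are dictated by the
# worst-case-to-average-case reduction and are much larger.
n, m, q = 16, 64, 3329
chi = lambda: secrets.randbelow(5) - 2            # small "error" in {-2,...,2}

def keygen():
    s = [secrets.randbelow(q) for _ in range(n)]
    A = [[secrets.randbelow(q) for _ in range(n)] for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + chi()) % q for i in range(m)]
    return (A, b), s                              # public key (A, As+e), secret s

def encrypt(pk, bit):
    A, b = pk
    r = [secrets.randbelow(2) for _ in range(m)]  # random subset of samples
    u = [sum(A[i][j] * r[i] for i in range(m)) % q for j in range(n)]
    v = (sum(b[i] * r[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(sk, ct):
    u, v = ct
    d = (v - sum(u[j] * sk[j] for j in range(n))) % q
    return 0 if min(d, q - d) < q // 4 else 1     # is d closer to 0 or to q/2?

pk, sk = keygen()
assert all(decrypt(sk, encrypt(pk, b)) == b for b in (0, 1, 1, 0))
```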
Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show how to construct a pseudorandom generator from any one-way function. Since it is easy to construct a one-way function from a pseudorandom generator, this result shows that there is a pseudorandom generator if and only if there is a one-way function.
Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
We propose a fully homomorphic encryption scheme -- i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result -- that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable.
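As a hedged illustration of what "evaluating circuits over encrypted data" means, the toy scheme below follows the later "somewhat homomorphic encryption over the integers" style rather than the lattice-based construction of this paper; it shows homomorphic XOR and AND, and the hidden noise that grows with each operation is exactly what bootstrapping is meant to control.

```python
import secrets

# Toy symmetric somewhat-homomorphic scheme over the integers (illustrative
# only, NOT the paper's construction, and not secure at these sizes).
p = (1 << 63) | secrets.randbits(63) | 1   # secret odd modulus

def enc(m):                                # m is a single bit
    q = secrets.randbits(96)
    r = secrets.randbelow(2**8)            # small noise term
    return q * p + 2 * r + m

def dec(c):
    return (c % p) % 2                     # works while the noise stays < p/2

a, b = 1, 1
ca, cb = enc(a), enc(b)
assert dec(ca + cb) == (a ^ b)             # homomorphic XOR (addition mod 2)
assert dec(ca * cb) == (a & b)             # homomorphic AND (multiplication)
```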
We show how to construct a variety of "trapdoor" cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the length of the shortest nonzero vector to within certain polynomial factors). Our contributions include a new notion of trapdoor function with preimage sampling, simple and efficient "hash-and-sign" digital signature schemes, and identity-based encryption. A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a discrete Gaussian probability distribution whose standard deviation is essentially the length of the longest Gram-Schmidt vector of the basis. A crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis.
We argue that the random oracle model—where all parties have access to a public random oracle—provides a bridge between cryptographic theory and cryptographic practice. In the paradigm we suggest, a practical protocol P is produced by first devising and proving correct a protocol PR for the random oracle model, and then replacing oracle accesses by the computation of an "appropriately chosen" function h. This paradigm yields protocols much more efficient than standard ones while retaining many of the advantages of provable security. We illustrate these gains for problems including encryption, signatures, and zero-knowledge proofs.
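A small illustration of the paradigm, assuming SHA-256 as the "appropriately chosen" function h: a folklore hash-based commitment designed as if H were a random oracle (this particular example is not taken from the paper).

```python
import hashlib, secrets

def H(data: bytes) -> bytes:          # the concrete instantiation of the oracle
    return hashlib.sha256(data).digest()

def commit(message: bytes):
    r = secrets.token_bytes(32)       # fresh randomness hides the message
    return H(r + message), (r, message)

def verify(commitment: bytes, opening) -> bool:
    r, message = opening
    return H(r + message) == commitment

c, opening = commit(b"bid: 42")
assert verify(c, opening)
```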
Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the 'learning from parity with error' problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. Using the main result, we obtain a public-key cryptosystem whose hardness is based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems such as the one by Ajtai and Dwork were only based on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size Õ(n²) and encrypting a message increases its size by Õ(n) (in previous cryptosystems these values are Õ(n⁴) and Õ(n²), respectively). In fact, under the assumption that all parties share a random bit string of length Õ(n²), the size of the public key can be reduced to Õ(n).
Publicly accessible databases are an indispensable resource for retrieving up-to-date information. But they also pose a significant risk to the privacy of the user, since a curious database operator can follow the user's queries and infer what the user is after. Indeed, in cases where the users' intentions are to be kept secret, users are often cautious about accessing the database. It can be shown that when accessing a single database, to completely guarantee the privacy of the user, the whole database should be downloaded; namely n bits should be communicated (where n is the number of bits in the database). In this work, we investigate whether by replicating the database, more efficient solutions to the private retrieval problem can be obtained. We describe schemes that enable a user to access k replicated copies of a database (k ≥ 2) and privately retrieve information stored in the database. This means that each individual server (holding a replicated copy of the database) gets no information on the identity of the item retrieved by the user. Our schemes use the replication to gain substantial saving. In particular, we present a two-server scheme with communication complexity O(n^{1/3}).
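For intuition, here is the simplest two-server scheme based on replication, with illustrative function names: each server sees only a uniformly random subset of indices and learns nothing about the queried position. Its communication is O(n), whereas the paper's two-server construction achieves O(n^{1/3}).

```python
import secrets

def query(n, i):
    s1 = {j for j in range(n) if secrets.randbelow(2)}   # uniformly random set
    s2 = s1 ^ {i}                                        # differs only at index i
    return s1, s2

def answer(database, subset):
    bit = 0
    for j in subset:                                     # XOR of the selected bits
        bit ^= database[j]
    return bit

db = [secrets.randbelow(2) for _ in range(32)]
i = 13
q1, q2 = query(len(db), i)
assert answer(db, q1) ^ answer(db, q2) == db[i]          # XOR of answers = db[i]
```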
We propose a fully functional identity-based encryption (IBE) scheme. The scheme has chosen ciphertext security in the random oracle model assuming a variant of the computational Diffie--Hellman problem. Our system is based on bilinear maps between groups. The Weil pairing on elliptic curves is an example of such a map. We give precise definitions for secure IBE schemes and give several applications for such systems.
Two millionaires wish to know who is richer; however, they do not want to find out inadvertently any additional information about each other's wealth. How can they carry out such a conversation? This is a special case of the following general problem. Suppose m people wish to compute the value of a function f(x1, x2, x3, ..., xm), which is an integer-valued function of m integer variables xi of bounded range. Assume initially person Pi knows the value of xi and no other x's. Is it possible for them to compute the value of f, by communicating among themselves, without unduly giving away any information about the values of their own variables? The millionaires' problem corresponds to the case when m = 2 and f(x1, x2) = 1 if x1 < x2, and 0 otherwise. In this paper, we will give precise formulation of this general problem and describe three ways of solving it by use of one-way functions (i.e., functions which are easy to evaluate but hard to invert). These results have applications to secret voting, private querying of database, oblivious negotiation, playing mental poker, etc. We will also discuss the complexity question “How many bits need to be exchanged for the computation”, and describe methods to prevent participants from cheating. Finally, we study the question “What cannot be accomplished with one-way functions”. Before describing these results, we would like to put this work in perspective by first considering a unified view of secure computation in the next section.
We introduce a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking supports large data sets in widely-distributed storage systems.
Untraceable electronic mail, return addresses, and digital pseudonyms. David L. Chaum (University of California, Berkeley). Communications of the ACM, 24(2), February 1981, pp. 84–90. https://doi.org/10.1145/358549.358563
In several distributed systems a user should only be able to access data if the user possesses a certain set of credentials or attributes. Currently, the only method for enforcing such policies is to employ a trusted server to store the data and mediate access control. However, if any server storing the data is compromised, then the confidentiality of the data will be compromised. In this paper we present a system for realizing complex access control on encrypted data that we call ciphertext-policy attribute-based encryption. By using our techniques encrypted data can be kept confidential even if the storage server is untrusted; moreover, our methods are secure against collusion attacks. Previous attribute-based encryption systems used attributes to describe the encrypted data and built policies into users' keys; while in our system attributes are used to describe a user's credentials, and a party encrypting data determines a policy for who can decrypt. Thus, our methods are conceptually closer to traditional access control methods such as role-based access control (RBAC). In addition, we provide an implementation of our system and give performance measurements.
A new signature scheme is proposed, together with an implementation of the Diffie-Hellman key distribution scheme that achieves a public key cryptosystem. The security of both systems relies on the difficulty of computing discrete logarithms over finite fields.
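A textbook-style sketch of the Diffie-Hellman-based public-key cryptosystem part of this proposal (commonly known as ElGamal encryption), with a small illustrative prime; real deployments use standardized large groups and hybrid encryption.

```python
import secrets

p = 2**61 - 1                           # toy prime modulus (M61); far too small for real use
g = 2                                   # public base element

def keygen():
    x = secrets.randbelow(p - 2) + 1    # private exponent
    return pow(g, x, p), x              # public key y = g^x mod p

def encrypt(y, m):                      # m encoded as an integer < p
    k = secrets.randbelow(p - 2) + 1    # fresh per-message randomness
    return pow(g, k, p), (m * pow(y, k, p)) % p

def decrypt(x, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 / c1^x via Fermat's little theorem

y, x = keygen()
m = int.from_bytes(b"hello", "big")
assert decrypt(x, encrypt(y, m)) == m
```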
Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.
The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumes Hierarchical Identity-Based Encryption (HIBE).
In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k − 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
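A short Python sketch of the (k, n) threshold scheme described above, over a prime field with illustrative parameters: each share is an evaluation of a random polynomial whose constant term is the secret, and any k shares interpolate it back at x = 0.

```python
import secrets

P = 2**127 - 1                                   # prime field modulus (illustrative)

def split(secret, n, k):
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]  # n shares (x, f(x))

def reconstruct(shares):                         # any k shares
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P            # product of (0 - xj)
                den = (den * (xi - xj)) % P      # product of (xi - xj)
        secret = (secret + yi * num * pow(den, -1, P)) % P   # Lagrange at x = 0
    return secret

shares = split(secret=123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```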
It is desirable to store data on data storage servers such as mail servers and file servers in encrypted form to reduce security and privacy risks. But this usually implies that one has to sacrifice functionality for security. For example, if a client wishes to retrieve only documents containing certain words, it was not previously known how to let the data storage server perform the search and answer the query, without loss of data confidentiality. We describe our cryptographic schemes for the problem of searching on encrypted data and provide proofs of security for the resulting crypto systems. Our techniques have a number of crucial advantages. They are provably secure: they provide provable secrecy for encryption, in the sense that the untrusted server cannot learn anything about the plaintext when only given the ciphertext; they provide query isolation for searches, meaning that the untrusted server cannot learn anything more about the plaintext than the search result; they provide controlled searching, so that the untrusted server cannot search for an arbitrary word without the user's authorization; they also support hidden queries, so that the user may ask the untrusted server to search for a secret word without revealing the word to the server. The algorithms presented are simple, fast (for a document of length n, the encryption and search algorithms only need O(n) stream cipher and block cipher operations), and introduce almost no space and communication overhead, and hence are practical to use today.
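A deliberately simplified, index-based sketch of searching on encrypted data, not the stream-cipher construction of this paper (the PRF choice and helper names are assumptions); it still shows controlled searching via per-word trapdoors and hidden queries.

```python
import hashlib, hmac, secrets

def prf(key: bytes, data: bytes) -> bytes:
    # HMAC-SHA256 used as a pseudorandom function.
    return hmac.new(key, data, hashlib.sha256).digest()

def build_index(key, documents):          # documents: {doc_id: set of words}
    index = set()
    for doc_id, words in documents.items():
        for w in words:
            token = prf(key, w.encode())                  # per-word trapdoor
            index.add(prf(token, doc_id.encode()))        # tag binding (word, doc)
    return index

def trapdoor(key, word):                  # client authorizes a single word
    return prf(key, word.encode())

def search(index, token, doc_ids):        # run by the untrusted server
    return [d for d in doc_ids if prf(token, d.encode()) in index]

key = secrets.token_bytes(32)
docs = {"doc1": {"budget", "crypto"}, "doc2": {"crypto"}, "doc3": {"memo"}}
index = build_index(key, docs)
assert sorted(search(index, trapdoor(key, "crypto"), docs.keys())) == ["doc1", "doc2"]
```

Without the trapdoor for a word, the server only sees opaque PRF tags, which is the controlled-searching property in its simplest form.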
We present a digital signature scheme based on the computational difficulty of integer factorization. The scheme possesses the novel property of being robust against an adaptive chosen-message attack: an adversary who receives signatures for messages of his choice (where each message may be chosen in a way that depends on the signatures of previously chosen messages) cannot later forge the signature of even a single additional message. This may be somewhat surprising, since in the folklore the properties of having forgery being equivalent to factoring and being invulnerable to an adaptive chosen-message attack were considered to be contradictory. More generally, we show how to construct a signature scheme with such properties based on the existence of a “claw-free” pair of permutations—a potentially weaker assumption than the intractability of integer factorization. The new scheme is potentially practical: signing and verifying signatures are reasonably fast, and signatures are compact.
The Byzantine Generals Problem. Leslie Lamport, Robert Shostak, and Marshall Pease (Computer Science Laboratory, SRI International). ACM Transactions on Programming Languages and Systems, 4(3), July 1982, pp. 382–401. https://doi.org/10.1145/357172.357176
An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: (1) Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. (2) A message can be “signed” using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in “electronic mail” and “electronic funds transfer” systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q. Decryption is similar; only a different, secret, power d is used, where e · d ≡ 1 (mod (p − 1) · (q − 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n.
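A worked toy instance of the scheme just described, using standard small textbook primes; real keys use primes of at least 1024 bits each and randomized padding.

```python
# Toy RSA walkthrough; every value here is tiny and for illustration only.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # 2753, since e*d = 1 (mod phi)

m = 65                         # message, an integer smaller than n
c = pow(m, e, n)               # encrypt: c = m^e mod n
assert pow(c, d, n) == m       # decrypt: c^d mod n recovers m

# Signing swaps the roles of the exponents:
s = pow(m, d, n)               # sign with the private exponent
assert pow(s, e, n) == m       # anyone can verify with the public exponent
```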
Traditional fully homomorphic encryption (FHE) schemes allow computation only on data encrypted under the same public key. Multi-Key Fully Homomorphic Encryption (MKFHE) enables arbitrary operations on data encrypted with different public keys, allowing all participating users to jointly decrypt the final ciphertext. The multi-key BFV FHE scheme inherits BFV's advantages in ring element encryption and scale invariance. Nonetheless, it also has some disadvantages, such as additional noise generated during the relinearization process, the need for costly transformations during the external product process, and the requirement for a Common Reference String (CRS). In this paper, we investigate the MKFHE scheme for RLWE-based BFV. Firstly, we improve the modulus size of the evaluation key and the public key to construct a modulus enhancement relinearization method, which can significantly reduce the noise generated during the relinearization process. Secondly, we propose to use an inner product via Gadget decomposition in the relinearization based on the MK-BFV scheme instead of the original outer product operation, which can reduce the complexity of the NTT operation to $(d + 2\tilde{d})r'/(d + 2)\tilde{l}$ of the original one. Finally, we propose an MK-BFV scheme without CRS on the basis of the previous ones, which enhances the user's control over his own key.
Weak coin flipping is a cryptographic primitive in which two mutually distrustful parties generate a shared random bit to agree on a winner via remote communication. While a stand-alone secure weak coin flipping protocol can be constructed from noiseless quantum communication channels, its composability remains unexplored. In this work, we demonstrate that no weak coin flipping protocol can be abstracted as a simple black-box resource with composable security. Despite this, we also establish the overall stand-alone security of quantum weak coin flipping protocols under composition in sequential order.
Although digital signature technology ensures data integrity and signer non-repudiation, its public verification feature may lead to the disclosure of sensitive information. Existing designated verifier signature (DVS) schemes face a trade-off between privacy protection and dispute arbitration, and they require pre-binding of the designated verifier's identity during signature generation. Additionally, some designated verifier digital signature schemes rely on centralized traceability authorities for traceability, posing risks of key leakage and trust distribution. To address these issues, this paper proposes a Traceable Conditional Privacy-Preserving Multi-Designated Verifier Signature (TCP-MDVS) scheme. The scheme integrates the advantages of DVS and Universal Designated Verifier Signature (UDVS) by constructing a group of coprime moduli and embedding signature components, enabling a single signature to support independent verification by multiple verifiers without requiring pre-registration of verifier keys. The scheme supports dynamic authorization, eliminating the need for signature reconstruction or the introduction of third-party intermediaries. In terms of dispute arbitration, by selectively disclosing specific prime parameters, a publicly verifiable evidence chain can be generated, which not only overcomes the non-transferability limitation of UDVSP but also avoids the dependence on centralized traceability authorities inherent in Traceable Universal Designated Verifier Signature Proof (TUDVSP). Furthermore, the TCP-MDVS scheme hides the signature's structure, thus resisting cross-component correlation attacks, providing an efficient, secure, and privacy-controllable signature framework for dynamic multi-verifier scenarios.
M. Naveena | International Journal for Research in Applied Science and Engineering Technology
Cloud computing provides efficient data storage and sharing with low-cost resource utilization, but security issues are still a major challenge, especially data confidentiality and access control. Conventional approaches depend on encrypting data prior to uploading it to the cloud, but dynamic group management and secure key distribution are still challenging problems. To overcome these issues, this study suggests a secure and decentralized data-sharing model using blockchain technology and Role-Based Access Control (RBAC) combined with Elliptic Curve Cryptography (ECC). The system supports effective key management so that encrypted data can be accessed by authorized users only, without any direct intervention of the data owner. A hybrid cloud model is implemented, where sensitive role structures and user mappings are kept in a private cloud and encrypted data is handled in a public cloud, for better security and scalability. The suggested method guarantees smooth revocation of group members by updating group keys automatically without re-encrypting the initial data, thus eliminating unauthorized access. Blockchain technology is also utilized to offer tamper evidence and solve issues of data modification. This new combination of blockchain, ECC encryption, and RBAC provides strong data privacy, secure key distribution, and effective role-based access, and hence is a very secure and scalable solution for cloud-based data sharing.
Attribute-Based Signatures (ABS) provide a cryptographic mechanism enabling signers to authenticate messages while preserving their privacy, based on a set of attributes. This paper introduces an Aggregate Attribute-Based Signature (AABS) scheme that leverages bilinear pairings to support efficient aggregation of multiple signatures into a compact form. The proposed scheme is designed to address scalability challenges in systems requiring numerous signatures, such as healthcare data sharing, vehicular networks, and blockchain-based applications. The scheme ensures essential security properties, including unforgeability, anonymity, and collusion resistance, under the Bilinear Diffie-Hellman (BDH) assumption. Each signer generates a signature based on their attributes, which can then be aggregated into a single signature for a given message by an authorized entity. Verification is efficient, requiring only a constant number of pairing operations, regardless of the number of signers or attributes involved. A rigorous security analysis demonstrates that the scheme resists forgery attempts and unauthorized collusion, ensuring reliable message authentication even in adversarial settings. Furthermore, the compactness of the aggregated signatures reduces storage and computational overhead, making it ideal for resource-constrained environments. This work paves the way for more scalable, privacy-preserving, and efficient attribute-based cryptographic solutions.
Mobile agents are a new type of computing that is replacing the client-server approach. Mobile agents are small pieces of code that function autonomously on behalf of their owner. Many applications, such as e-commerce, parallel computing, network management, and health care, use mobile agents. The healthcare industry is one of the fastest-growing fields in any country. As the population grows, the demand for medical resources grows proportionally. Due to high patient demand and a severe lack of medical resources, a remote medical healthcare system is required. However, the deployment of remote healthcare systems over the Internet introduces a new set of challenges, including interoperability among heterogeneous networks and the need to navigate through multiple public systems dispersed over insecure networks. This paper explores how mobile agents can effectively tackle these challenges, especially in heterogeneous and potentially malicious environments. A key focus of this research is the development of a mathematical model for secure medical information retrieval. This model incorporates a variable threshold secret-sharing mechanism, employing the Chinese remainder theorem and multiplicative inverse with modular arithmetic at different levels. By integrating these cryptographic techniques, the proposed approach ensures the confidentiality and integrity of medical information during its retrieval, contributing to the overall safety and robustness of mobile agent computing in healthcare scenarios.
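A hedged sketch of threshold secret sharing built from the Chinese remainder theorem and modular inverses (a Mignotte-style scheme), to make the CRT machinery mentioned above concrete; the moduli, threshold, and secret below are illustrative choices, not parameters taken from the paper.

```python
from math import prod

moduli = [101, 103, 107, 109, 113]      # pairwise coprime share moduli (illustrative)
k = 3                                    # threshold
# Mignotte condition: product of the (k-1) largest moduli < secret
#                     < product of the k smallest moduli.
secret = 500_000
assert 109 * 113 < secret < 101 * 103 * 107

shares = [(m, secret % m) for m in moduli]   # share i is (m_i, secret mod m_i)

def reconstruct(some_shares):            # any k shares recover the secret
    M = prod(m for m, _ in some_shares)
    x = 0
    for m, r in some_shares:
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # CRT term using a modular inverse
    return x % M

assert reconstruct(shares[:3]) == secret
assert reconstruct(shares[2:]) == secret
```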
Massive migration and accelerated cloud computing adoption have exacerbated data protection concerns. Most data hosted on the cloud are sensitive, and unauthorized disclosure can lead to identity theft and privacy violations. Traditional access control systems struggle to meet these challenges in the cloud environment, leaving data vulnerable to unauthorized access. Attribute-Based Access Control (ABAC) models, particularly those defined by NIST, offer enhanced flexibility in access management. However, their implementation often has shortcomings, such as a lack of effective real-time encryption mechanisms and dynamic policies, making them vulnerable to advanced threats. Attribute-based encryption, specifically Ciphertext-Policy Attribute-Based Encryption (CP-ABE), emerges as a promising solution to ensure precise access control and secure data sharing. Nevertheless, CP-ABE must be protected against attacks such as user collusion and attempts at concealment. Furthermore, in the dynamic context of a cloud environment where users are regularly added or removed, it is crucial to ensure the revocation of access rights when users leave the system, preventing them from accessing cloud resources with previously assigned attributes and secret keys. This article presents Chaos-ABAC, a new framework based on a chaotic system that enhances the security and reliability of ABAC models. By integrating dynamic policy authorization and real-time encryption of attribute credentials, Chaos-ABAC ensures existential unforgeability against attacks involving attributes and nonces. A comprehensive security analysis demonstrates that Chaos-ABAC effectively mitigates selective attribute plaintext attacks and preserves user confidentiality. The experimental results further validate its reliability, ease of implementation, and superior performance compared to existing schemes.
International Journal of Computer Applications Technology and Research
Chaum and van Antwerpen first introduced the Undeniable Signature Scheme (USS) in 1989. A significant advantage of the USS scheme is that the signer participates in the verification process. In this study, we present the Discrete Logarithm Conjugacy Search Factor Problem (DLCSFP) and provide an evaluation of its security and complexity. We then propose an undeniable signature method based on DLCSFP and evaluate the security and complexity of this new scheme.
With increasing demands for privacy, it becomes necessary to protect sensitive user query data when accessing public key-value databases. Existing Private Information Retrieval (PIR) schemes provide full security but suffer from poor scalability, limiting their applicability in large-scale deployment. We argue that in many real-world scenarios, a more practical solution should allow users to flexibly determine the privacy levels of their queries in a theoretically guided way, balancing security and performance based on specific needs. To formally provide provable guarantees, we introduce a novel concept of distance-based indistinguishability, which allows users to comfortably relax their security requirements. We then design Femur, an efficient framework to securely query public key-value stores with flexible security and performance trade-offs. It uses a space-efficient learned index to convert query keys into storage locations, obfuscates these locations with extra noise provably derived by the distance-based indistinguishability theory, and sends the expanded range to the server. The server then adaptively utilizes the best scheme to retrieve data. We also propose a novel variable-range PIR scheme optimized for bandwidth-constrained environments. Experiments show that Femur outperforms the state-of-the-art designs even when ensuring the same full security level. When users are willing to relax their privacy requirements, Femur can further improve the performance gains to up to 163.9X, demonstrating an effective trade-off between security and performance.
Machine Learning as a Service (MLaaS) platforms facilitate collaborative machine learning, but trust issues necessitate privacy-preserving methods. For tree ensembles, prior Fully Homomorphic Encryption (FHE)-based approaches suffer from slow inference speed and high memory usage. This work proposes the VESTA system, which leverages compile-time precomputation and parallelizable model partitioning to enhance performance. VESTA achieves a 2.1x speedup and reduces memory consumption by 59.4% compared to state-of-the-art solutions.
We study the problem of answering conjunctive queries with free access patterns (CQAPs) under updates. A free access pattern is a partition of the free variables of the query into input and output. The query returns tuples over the output variables given a tuple of values over the input variables. We introduce a fully dynamic evaluation approach that works for all CQAPs and is optimal for two classes of CQAPs. This approach recovers prior work on the dynamic evaluation of conjunctive queries without access patterns. We first give a syntactic characterisation of all CQAPs that admit constant time per single-tuple update and whose output tuples can be enumerated with constant delay given a tuple of values over the input variables. We further chart the complexity trade-off between the preprocessing time, update time and enumeration delay for a class of CQAPs. For some of these CQAPs, our approach achieves optimal, albeit non-constant, update time and delay. This optimality is predicated on the Online Matrix-Vector Multiplication conjecture. We finally adapt our approach to the dynamic evaluation of tractable CQAPs over probabilistic databases under updates.
Among searchable encryption techniques, multi-user dynamic searchable encryption (MUDSE) schemes are an important research direction. After the data owner transfers data to the cloud, it may be necessary to authorize different users to access some or all of the data while allowing for dynamic updates. Enabling dynamic data sharing in cloud storage while preserving users' ability to search the data is crucial for promoting data flow and maximizing its value. This approach is particularly significant in addressing the data silo problem. However, existing security mechanisms remain imperfect, and most current scenarios assume that cloud servers are merely "curious but honest". In reality, cloud servers may exhibit malicious behavior, such as returning incorrect or incomplete search results. Similarly, malicious users might falsify search results, for example to avoid payment, or collude with cloud servers to steal other users' search privacy. To address these challenges, this paper proposes an updatable multi-user dynamic searchable encryption scheme with bidirectional verification. The scheme enables secure dynamic data sharing in multi-user scenarios by constructing an index structure using homomorphic message authentication codes and bitmaps. This ensures secure updates to encrypted data without revealing the relationship between files and keyword search keys while providing forward and backward security. Regarding privilege management, the scheme employs updatable keys, ensuring that users can only generate valid search commands if they possess the latest encryption key. Additionally, blockchain technology is introduced to assist in verifying user honesty. Through actual testing and security analysis, the proposed solution demonstrates improved search speed over traditional methods while maintaining security. It also exhibits high adaptability for handling frequently changing cloud data.
Nishant Kumar Rathi | International Journal of Scientific Research in Engineering and Management
The exponential growth of data-centric applications in cloud and distributed environments has intensified the demand for robust privacy-preserving mechanisms in database systems. While encryption techniques, such as homomorphic encryption and secure multiparty computation, offer foundational security, they often incur significant computational overhead and fail to address broader privacy concerns such as inference attacks, access pattern leakage, and insider threats. This study presents a comprehensive, layered framework for privacy-preserving query processing that extends beyond traditional encryption paradigms. Integrating fine-grained access control, differential privacy, secure hardware enclaves, and privacy-aware query rewriting, the proposed architecture balances query expressiveness, performance, and privacy guarantees. A prototype implementation on PostgreSQL was evaluated using standard workloads (TPC-H and synthetic sensitive datasets) to assess system latency, accuracy trade-offs, and privacy leakage. Results indicate a substantial reduction in leakage exposure with minimal performance degradation, demonstrating the framework's practicality for real-world deployment. This research contributes to the evolving discourse on database privacy by advocating a shift from encryption-centric approaches to holistic privacy engineering, paving the way for secure, trustworthy, and regulation-compliant data systems. Keywords: privacy-preserving query processing, differential privacy, encrypted databases, trusted execution environments, data confidentiality, query optimization, homomorphic encryption, secure multi-party computation, access control, privacy-aware data management.
We propose a new framework for compile-time ciphertext synthesis in fully homomorphic encryption (FHE) systems. Instead of invoking encryption algorithms at runtime, our method synthesizes ciphertexts from precomputed encrypted basis vectors using only homomorphic additions, scalar multiplications, and randomized encryptions of zero. This decouples ciphertext generation from encryption and enables efficient batch encoding through algebraic reuse. We formalize this technique as a randomized module morphism and prove that it satisfies IND-CPA security. Our proof uses a hybrid game framework that interpolates between encrypted vector instances and reduces the adversarial advantage to the indistinguishability advantage of the underlying FHE scheme. This reduction structure captures the security implications of ciphertext basis reuse and structured noise injection. The proposed synthesis primitive supports fast, encryption-free ingestion in outsourced database systems and other high-throughput FHE pipelines. It is compatible with standard FHE APIs and preserves layout semantics for downstream homomorphic operations.
Mobile cloud computing (MCC) has become increasingly popular among users of high-speed mobile devices. Users face risks to their privacy and security due to several factors, including the growing number of MCC subscribers and the possibility of unauthorized access to user data. Conventional research attempts to construct a safe model to protect data in MCC, yet a number of difficulties and gaps remain that pose severe dangers to security policy, even though it provides modeling for a variety of architectures. The purpose of this research article is to offer an attribute-based encryption (ABE) system as a means of mitigating such problems. By enabling data separation and storage across cloud servers, the ABE solution supports white-box traceability and direct refurbishment.