
Dynamic and Public Auditing with Fair Arbitration for Cloud Data

ABSTRACT
          Cloud users no longer physically possess their data, so ensuring the integrity of outsourced data becomes a challenging task. Recently proposed schemes such as “provable data possession” and “proofs of retrievability” address this problem, but they are designed to audit static archive data and therefore lack support for data dynamics. Moreover, the threat models in these schemes usually assume an honest data owner and focus on detecting a dishonest cloud service provider, despite the fact that clients may also misbehave. This paper proposes a public auditing scheme with support for data dynamics and fair arbitration of potential disputes. In particular, we design an index switcher to eliminate the limitation of index usage in tag computation in current schemes and to achieve efficient handling of data dynamics. To address the fairness problem, so that no party can misbehave without being detected, we further extend existing threat models and adopt the idea of signature exchange to design fair arbitration protocols, so that any possible dispute can be fairly settled. The security analysis shows our scheme is provably secure, and the performance evaluation demonstrates that the overheads of data dynamics and dispute arbitration are reasonable.




SYSTEM ANALYSIS
EXISTING SYSTEM
          We extend the threat model in current research to provide fair arbitration for resolving disputes between clients and the CSP, which is of vital significance for the deployment and promotion of auditing schemes in the cloud environment. Most existing auditing schemes embed a block’s index into its tag computation, which serves to authenticate challenged blocks. However, if a block is inserted or deleted, the indices of all subsequent blocks change, and the tags of those blocks must be re-computed. Moreover, most existing schemes assume an honest data owner in their threat models, and mainly focus on delegating auditing tasks to a third-party auditor (TPA) so that the overhead on clients can be offloaded as much as possible.
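As a minimal sketch (all names hypothetical, not the paper's actual construction), an index switcher can be modeled as a table mapping each block's logical index to the tag index actually embedded in its tag, so that insertions and deletions never force re-computation of existing tags:

```python
# Toy index switcher: logical index -> tag index. Inserting a block
# assigns a fresh, never-used tag index instead of shifting the tag
# indices of all subsequent blocks, so existing tags remain valid.

class IndexSwitcher:
    def __init__(self, num_blocks):
        # Initially, logical index i maps to tag index i.
        self.table = list(range(num_blocks))
        self.next_tag_index = num_blocks

    def tag_index(self, logical_index):
        return self.table[logical_index]

    def insert(self, logical_index):
        # New block gets a fresh tag index; no existing tag changes.
        self.table.insert(logical_index, self.next_tag_index)
        self.next_tag_index += 1
        return self.table[logical_index]

    def delete(self, logical_index):
        # Deletion only removes the mapping; other tags are untouched.
        self.table.pop(logical_index)

sw = IndexSwitcher(4)   # tag indices [0, 1, 2, 3]
sw.insert(2)            # new block at logical position 2
assert sw.table == [0, 1, 4, 2, 3]
```

The point of the indirection is that tags are computed over tag indices, which never shift; only the small mapping table is updated on each dynamic operation.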
PROPOSED SYSTEM
          Remote integrity checking can be traced back to memory-checking schemes that verify read and write operations on a remote memory. Recently, many auditing schemes have been proposed for checking the integrity of outsourced data. We achieve fair arbitration by designing protocols based on the idea of exchanging metadata signatures upon each update operation. Earlier work proposed general arbitration protocols with automated payments built on fair signature exchange. Our work also adopts the idea of signature exchange to ensure metadata correctness and protocol fairness, and we concentrate on combining efficient data dynamics support and fair dispute arbitration in a single auditing scheme. Our experiments demonstrate the efficiency of the proposed scheme, whose overheads for dynamic updates and dispute arbitration are reasonable.
Algorithm:
Proof verification  algorithm:
                 Verification is one aspect of testing a product’s fitness for purpose; validation is the complementary aspect, and the overall checking process is often referred to as verification and validation.
                           A proof verification algorithm checks that, for any valid input, the prover’s response matches the result required by the algorithm’s specification.
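A hash-based toy (not the homomorphic tags used in real PDP/PoR constructions) can illustrate the challenge-response shape of proof verification: the verifier samples random block indices and checks the prover's answers against stored per-block tags:

```python
import hashlib
import secrets

# Toy proof verification: the verifier keeps tags H(key || index || block)
# and checks that the prover returns blocks matching the tags for a
# random sample of challenged indices.

def tag(key, i, block):
    return hashlib.sha256(key + i.to_bytes(8, "big") + block).digest()

def verify_proof(key, tags, challenged, proof_blocks):
    # Accept only if every challenged block re-hashes to its stored tag.
    return all(tag(key, i, b) == tags[i]
               for i, b in zip(challenged, proof_blocks))

key = secrets.token_bytes(16)
blocks = [b"block-%d" % i for i in range(8)]
tags = [tag(key, i, b) for i, b in enumerate(blocks)]

challenged = [1, 5, 6]
proof = [blocks[i] for i in challenged]   # honest prover
assert verify_proof(key, tags, challenged, proof)

proof[1] = b"corrupted"                   # misbehaving prover
assert not verify_proof(key, tags, challenged, proof)
```

Note that this toy embeds the block index directly in the tag, which is exactly the practice that makes data dynamics expensive and motivates the index switcher described earlier.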
Asymmetric signature algorithm:
                    Asymmetric algorithms use different keys for encryption and decryption, and the decryption key cannot be derived from the encryption key. Asymmetric algorithms are important because they can be used to transmit encryption keys or other data securely even when the parties have had no opportunity to agree on a secret key in private.
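The asymmetry can be illustrated with textbook RSA signing using tiny, insecure parameters (for exposition only): producing a signature requires the private exponent d, while verification needs only the public pair (n, e):

```python
import hashlib

# Textbook RSA signing with toy parameters (never use in practice).
# Signing uses the private exponent d; verification uses only (n, e),
# and d cannot be recovered from (n, e) without factoring n.

p, q = 61, 53
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse (Python 3.8+)

def sign(message, d, n):
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message, signature, e, n):
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"metadata update #7", d, n)
assert verify(b"metadata update #7", sig, e, n)
```

Because anyone holding the public key can check the signature, such signatures give the non-repudiation that an arbitrator needs when settling client-vs-CSP disputes.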
Attacks:
Network Attack:
              A network attack is any method, process, or means used to maliciously attempt to compromise network security. The individuals performing network attacks are commonly referred to as network attackers, hackers, or crackers.


Replay Attack:
                  A replay attack (also known as a playback attack) is a form of network attack in which a valid data transmission is maliciously or fraudulently repeated or delayed.
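A common defence, sketched here with a hypothetical nonce cache, is to bind each request to a fresh nonce and a MAC, so that a replayed copy of a previously valid message is rejected:

```python
import hashlib
import hmac
import secrets

# Toy replay protection: each request carries a fresh nonce and a MAC
# over (nonce || payload). The receiver rejects any nonce it has
# already seen, so capturing and resending a valid request fails.

key = secrets.token_bytes(16)
seen_nonces = set()

def make_request(payload):
    nonce = secrets.token_bytes(8)
    mac = hmac.new(key, nonce + payload, hashlib.sha256).digest()
    return nonce, payload, mac

def accept(nonce, payload, mac):
    if nonce in seen_nonces:
        return False                     # replay detected
    expected = hmac.new(key, nonce + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return False                     # forged or corrupted message
    seen_nonces.add(nonce)
    return True

req = make_request(b"transfer 10")
assert accept(*req)        # first delivery accepted
assert not accept(*req)    # identical replay rejected
```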

MODULE DESCRIPTION

MODULE
Ø  Data Upload and Encryption.
Ø  Data Sharing.
Ø  Auditing.
Ø  Join Group.
MODULE DESCRIPTION
Data Upload and Encryption:
Users upload their data to the cloud in encrypted form. A semi-trusted proxy can transform an encryption of a message into another encryption of the same message without learning the message itself. Each user uploads files to a selected location in the database, and every file is stored in encrypted format. When a user wants to use a file, it is downloaded and viewed in decrypted form using the user’s secret key.
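A minimal sketch of client-side encryption before upload (a toy XOR stream cipher built from a hash, not the scheme's actual cryptography) could look like:

```python
import hashlib
import secrets

# Toy encrypt-before-upload flow: the client derives a keystream from a
# secret key and a per-file nonce, XORs it with the file before upload,
# and decrypts with the same key after download. The server only ever
# stores ciphertext.

def keystream(key, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big"))
        out += block.digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(8)
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key, nonce, ciphertext):
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

key = secrets.token_bytes(16)
data = b"quarterly-report contents"
nonce, stored = encrypt(key, data)       # what the server stores
assert stored != data                    # server sees only ciphertext
assert decrypt(key, nonce, stored) == data
```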
Data Sharing:
The shared data are signed by a group of users. Disputes between the two parties are therefore unavoidable to a certain degree, so an arbitrator for dispute settlement is indispensable for a fair auditing scheme. We extend the threat model in existing public schemes by differentiating between the auditor (TPAU) and the arbitrator (TPAR) and placing different trust assumptions on them. Because the TPAU is mainly a delegated party that checks the client’s data integrity, and a potential dispute may occur between the TPAU and the CSP, the arbitrator should be an unbiased third party distinct from the TPAU.
As for the TPAR, we consider it honest-but-curious: it behaves honestly most of the time but is curious about the content of the audited data, so privacy protection of that data should be considered. While privacy protection is beyond the scope of this paper, our scheme can adopt the random mask technique proposed in prior work to preserve the privacy of audited data, or ring signatures to protect the identity privacy of signers when data are shared among a group of users.
Auditing:
Public auditing schemes mainly focus on delegating auditing tasks to a third-party auditor (TPA) so that the overhead on clients can be offloaded as much as possible. However, such models have not seriously considered the fairness problem, as they usually assume an honest owner against an untrusted CSP. Since the TPA acts on behalf of the owner, to what extent can the CSP trust the auditing result? What if the owner and the TPA collude against an honest CSP for financial compensation? In this sense, such models reduce the practicality and applicability of auditing schemes.


Join Group:
                   Compared to these schemes, our work is the first to combine public verifiability, data dynamics support and dispute arbitration simultaneously. There are other extensions to both PDP and PoR schemes: one line of work introduces a mechanism for data integrity auditing in the multi-server scenario, where data are encoded with network coding; another ensures data possession of multiple replicas across a distributed storage scenario; others integrate forward error-correcting codes into PDP to provide robust data possession, or utilize proxy re-signatures to provide efficient user revocation, where the shared data are signed by a group of users.


SYSTEM SPECIFICATION

Hardware Requirements:

         System                 :   Pentium IV 2.4 GHz.
         Hard Disk             :   40 GB.
         Floppy Drive       :   1.44 MB.
         Monitor                :   14" Colour Monitor.
         Mouse                  :   Optical Mouse.
         RAM                     :   512 MB.


Software Requirements:

         Operating system           :   Windows 7 Ultimate.
         Coding Language           :   ASP.Net with C#
         Front-End                      :   Visual Studio 2010 Professional.
         Data Base                      :   SQL Server 2008.




