IDENTITY-BASED PROXY-ORIENTED DATA UPLOADING AND REMOTE DATA INTEGRITY CHECKING IN PUBLIC CLOUD
ABSTRACT
With the rapid development of cloud computing, more and more clients would like to store their data on PCS (public cloud servers). New security problems have to be solved in order to help more clients process their data in the public cloud. When a client is restricted from accessing the PCS, it will delegate its proxy to process its data and upload them. On the other hand, remote data integrity checking is also an important security problem in public cloud storage. It enables clients to check whether their outsourced data are kept intact without downloading the whole data. Motivated by these security problems, we propose a novel proxy-oriented data uploading and remote data integrity checking model in identity-based public key cryptography: ID-PUIC (identity-based proxy-oriented data uploading and remote data integrity checking in public cloud). We give the formal definition, system model and security model. Then, a concrete ID-PUIC protocol is designed using bilinear pairings. The proposed ID-PUIC protocol is provably secure based on the hardness of the CDH (computational Diffie-Hellman) problem. Our ID-PUIC protocol is also efficient and flexible. Based on the original client's authorization, the proposed ID-PUIC protocol can realize private remote data integrity checking, delegated remote data integrity checking and public remote data integrity checking.

CHAPTER 1
INTRODUCTION
1.1 CLOUD COMPUTING
Cloud storage has emerged as a promising solution for providing ubiquitous, convenient, and on-demand accesses to large amounts of data shared over the Internet. Today, millions of users are sharing personal data, such as photos and videos, with their friends through social network applications based on cloud storage on a daily basis. Business users are also being attracted by cloud storage due to its numerous benefits, including lower cost, greater agility, and better resource utilization.
Cloud computing is a recently evolved computing terminology or metaphor based on utility and consumption of computing resources. Cloud computing involves deploying groups of remote servers and software networks that allow centralized data storage and online access to computer services or resources. Clouds can be classified as public, private or hybrid.
Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services. Cloud computing, or in simpler shorthand just "the cloud", also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically reallocated per demand. This works well for allocating resources among users whose demands peak at different times.
For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America's business hours with a different application (e.g., a web server). This approach maximizes the use of computing power and reduces environmental damage as well, since less power, air conditioning, rack space, etc. are required for a variety of functions. With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications.
The term "moving to cloud" also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as one uses it). Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of on infrastructure.  
Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a "pay as you go" model. This can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.
The present availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing have led to a growth in cloud computing. Cloud storage offers an on-demand data outsourcing service model, and is gaining popularity due to its elasticity and low maintenance cost. However, security concerns arise when data storage is outsourced to third-party cloud storage providers. It is desirable to enable cloud clients to verify the integrity of their outsourced data, in case their data have been accidentally corrupted or maliciously compromised by insider/outsider attacks.
One major use of cloud storage is long-term archival, which represents a workload that is written once and rarely read. While the stored data are rarely read, it remains necessary to ensure their integrity for disaster recovery or compliance with legal requirements. Since it is typical to have a huge amount of archived data, whole-file checking becomes prohibitive. Proof of retrievability (POR) and proof of data possession (PDP) have thus been proposed to verify the integrity of a large file by spot-checking only a fraction of the file via various cryptographic primitives.
Suppose that we outsource storage to a server, which could be a storage site or a cloud-storage provider. If we detect corruption in our outsourced data (e.g., when a server crashes or is compromised), then we should repair the corrupted data and restore the original data. However, putting all data in a single server is susceptible to the single-point-of-failure problem and vendor lock-in. A plausible solution is to stripe data across multiple servers. Thus, to repair a failed server, we can:
1. Read data from the other surviving servers.
2. Reconstruct the corrupted data of the failed server.
3. Write the reconstructed data to a new server.
POR and PDP were originally proposed for the single-server case. MR-PDP and HAIL extend integrity checks to a multi-server setting using replication and erasure coding, respectively. In particular, erasure coding has a lower storage overhead than replication under the same fault-tolerance level.
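The spot-checking idea behind POR/PDP can be illustrated with a toy C# program. Note that this naive hash-tag check is for intuition only and is not a real PDP/POR scheme: actual schemes use homomorphic tags so that the server can answer challenges with a compact proof instead of shipping back whole blocks.

// Minimal sketch of block-level spot-checking (illustration only).
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class SpotCheckDemo
{
    // The client keeps one small tag per block instead of the whole file.
    static byte[] Tag(byte[] block) => SHA256.HashData(block);

    static void Main()
    {
        // Split a "file" into 4 KB blocks (toy data).
        byte[][] blocks = Enumerable.Range(0, 1000)
            .Select(i => Encoding.UTF8.GetBytes($"block-{i}".PadRight(4096)))
            .ToArray();
        byte[][] tags = blocks.Select(Tag).ToArray();   // stored by the client

        // Challenge: sample a small random fraction of block indices.
        var rng = new Random();
        int[] challenge = Enumerable.Range(0, 10)
            .Select(_ => rng.Next(blocks.Length)).ToArray();

        // The server returns the challenged blocks; the client re-checks the tags.
        bool ok = challenge.All(i => Tag(blocks[i]).SequenceEqual(tags[i]));
        Console.WriteLine(ok ? "integrity proof accepted" : "corruption detected");
    }
}

Checking only a handful of random blocks per challenge keeps the cost far below whole-file checking, while repeated challenges catch large-scale corruption with high probability.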
1.2 CHARACTERISTICS:
Cloud computing exhibits the following key characteristics:
Agility improves with users' ability to re-provision technological infrastructure resources.
Cost reductions are claimed by cloud providers. A public-cloud delivery model converts capital expenditure to operational expenditure. This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are required for implementation. The e-FISCAL project's state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
Device and location independence enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
·        Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
·        Peak-load capacity increases (users need not engineer for highest possible load-levels)
·        Utilisation and efficiency improvements for systems that are often only 10–20% utilised.
Performance is monitored, and consistent, loosely coupled architectures are constructed using web services as the system interface.
Productivity may be increased when multiple users can work on the same data simultaneously, rather than waiting for it to be saved and emailed. Time may be saved as information does not need to be re-entered when fields are matched, nor do users need to install application software upgrades to their computer.
Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time (note that VM startup time varies by VM type, location, OS and cloud provider), without users having to engineer for peak loads.
Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle. However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
1.2.1 Five Essential Characteristics of Cloud Computing:
ON-DEMAND SELF-SERVICE: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
BROAD NETWORK ACCESS: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
RESOURCE POOLING: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. 
RAPID ELASTICITY: Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.
MEASURED SERVICE: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
1.3 SERVICE MODELS
Though service-oriented architecture advocates "everything as a service" (with the acronyms EaaS or XaaS or simply aas), cloud-computing providers offer their "services" according to different models, which happen to form a stack: infrastructure-, platform- and software-as-a-service.
https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Cloud_computing_layers.png/300px-Cloud_computing_layers.png
Fig 1: Cloud-computing layers accessible within a stack
1.3.1 INFRASTRUCTURE AS A SERVICE (IAAS)
In the most basic cloud-service model - and according to the IETF (Internet Engineering Task Force) - providers of IaaS offer computers – physical or (more often) virtual machines – and other resources. IaaS refers to online services that abstract the user from the details of the infrastructure, such as physical computing resources, location, data partitioning, scaling, security, backup, etc. A hypervisor, such as Xen, Oracle VirtualBox, KVM, VMware ESX/ESXi, or Hyper-V, runs the virtual machines as guests.
Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.  IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds.
To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.
1.3.2 PLATFORM AS A SERVICE (PAAS)
PaaS vendors offer a development environment to application developers. The provider typically develops toolkits and standards for development, and channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers.
With some PaaS offerings, such as Microsoft Azure and Google App Engine, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually. Such automatic scaling has also been proposed in architectures aiming to facilitate real-time applications in cloud environments. Even more specific application types can be provided via PaaS, such as media encoding offered by specialized encoding services.
Some integration and data management providers have also embraced specialized applications of PaaS as delivery models for data solutions. Examples include iPaaS and dPaaS. iPaaS (Integration Platform as a Service) enables customers to develop, execute and govern integration flows. Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware. dPaaS (Data Platform as a Service) delivers integration and data-management products as a fully managed service. Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of data solutions by building tailored data applications for the customer. dPaaS users retain transparency and control over data through data-visualization tools.
1.3.3 SOFTWARE AS A SERVICE (SAAS)
In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee.
In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support.
Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so prices become scalable and adjustable if users are added or removed at any point. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and from personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS comes with storing the users' data on the cloud provider's server. As a result, there could be unauthorized access to the data. For this reason, users are increasingly adopting intelligent third-party key-management systems to help secure their data.
1.4 DEPLOYMENT MODELS:
1.4.1 PRIVATE CLOUD
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third-party, and hosted either internally or externally. Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Self-run data centers are generally capital intensive.
They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".
1.4.2 PUBLIC CLOUD
A cloud is called a "public cloud" when the services are rendered over a network that is open for public use. Public cloud services may be free. Technically there may be little or no difference between public and private cloud architecture; however, security considerations may be substantially different for services (applications, storage, and other resources) that are made available by a service provider for a public audience and when communication is effected over a non-trusted network.
Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure at their data centers, and access is generally via the Internet. AWS and Microsoft also offer direct connect services called "AWS Direct Connect" and "Azure ExpressRoute" respectively; such connections require customers to purchase or lease a private connection to a peering point offered by the cloud provider.
1.4.3 HYBRID CLOUD
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources.
A hybrid cloud service is a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers. A hybrid cloud service crosses isolation and provider boundaries, so it cannot simply be put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.
Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service. This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as data security and compliance requirements, level of control needed over data, and the applications an organization uses.
Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that can not be met by the private cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases.
A primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for extra compute resources when they are needed. Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and to use cloud resources from public or private clouds during spikes in processing demands. The specialized model of hybrid cloud, which is built atop heterogeneous hardware, is called "Cross-platform Hybrid Cloud". A cross-platform hybrid cloud is usually powered by different CPU architectures, for example, x86-64 and ARM, underneath. Users can transparently deploy applications without knowledge of the cloud's hardware diversity. This kind of cloud emerges from the rise of ARM-based system-on-chip designs for server-class computing.
1.5 ARCHITECTURE
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.
https://upload.wikimedia.org/wikipedia/commons/thumb/7/79/CloudComputingSampleArchitecture.svg/325px-CloudComputingSampleArchitecture.svg.png
Fig 2: Cloud computing sample architecture
CHAPTER 3
SYSTEM ANALYSIS
In this phase a detailed appraisal of the existing system is presented. This appraisal covers how the system works and what it does. It also includes finding out, in more detail, what the problems with the existing system are and what users require from the new system or from any change to it. The output of this phase is a detailed model of the system. The model describes the system functions and data, and the system information flow. This phase also produces the detailed set of user requirements, and these requirements are used to set objectives for the new system.
3.1 CURRENT SYSTEM:
In public cloud computing, clients store their massive data on remote public cloud servers. Since the stored data is outside of the control of the clients, it entails security risks in terms of confidentiality, integrity and availability of data and service. In some cases the client (for example, a manager restricted from accessing the network) cannot process and upload the data personally, and hence has to delegate a proxy to process the data on its behalf. Furthermore, public checking incurs some danger of leaking privacy. Existing remote data integrity checking protocols based on traditional public key infrastructure require certificate management, and considerable overheads come from the heavy certificate verification, generation, delivery, revocation, renewal, etc.
3.2 SHORTCOMINGS OF THE CURRENT SYSTEM:
·        Security is at risk in terms of confidentiality, integrity and availability of data when the data is outsourced to cloud storage.
·        Certificate management causes heavy computational overheads.
·        User privacy is not preserved.



3.3 PROPOSED SYSTEM:
In the public cloud, this project focuses on identity-based proxy-oriented data uploading and remote data integrity checking. By using identity-based public key cryptography, our proposed ID-PUIC protocol is efficient since certificate management is eliminated. ID-PUIC is a novel proxy-oriented data uploading and remote data integrity checking model in the public cloud. We give the formal system model and security model for the ID-PUIC protocol. Then, based on bilinear pairings, we design the first concrete ID-PUIC protocol. In the random oracle model, our ID-PUIC protocol is provably secure. Based on the original client's authorization, our protocol can realize private checking, delegated checking and public checking.
3.4 ADVANTAGES OF PROPOSED SYSTEM:
·        Low computation and communication overhead is achieved
·        Our proposed protocol supports private checking, delegated checking and public checking
·        Security is provably based on the hardness of the CDH problem
·        Outsourced data security is achieved
·        Time efficiency is achieved

CHAPTER 4
IMPLEMENTATION
Implementation is the stage of the project when the theoretical design is turned into a working system. Thus it can be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective.
The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of changeover methods.
4.1 MODULES:
A module is a part of a program. Programs are composed of one or more independently developed modules that are not combined until the program is linked. A single module can contain one or several routines.
Our project modules are given below:
1) Client
2) Public Cloud Server
3) Proxy
4) Key Generation Center
4.1.1 CLIENT
Client is an entity that has massive data to be uploaded to PCS by the delegated proxy and can perform remote data integrity checking. The client can also run the ID-PUIC protocol without a local copy of the files to be checked, provided the proxy is authorized.


4.1.2 PUBLIC CLOUD SERVER
Public Cloud Server (PCS) is an entity that is managed by the cloud service provider and has significant storage space and computation resources to maintain the client's data. If some challenged blocks have been modified or deleted, a malicious PCS cannot generate a valid remote data integrity proof. On the other hand, a practical ID-PUIC protocol also needs to convince the client that all of his outsourced data is kept intact with high probability.
4.1.3 PROXY
Proxy is an entity that is selected and authorized by the Client to process the Client's data and upload them. When the Proxy satisfies the warrant which is signed and issued by the Client, it can process and upload the original client's data; otherwise, it cannot perform the procedure.
4.1.4 KEY GENERATION CENTER
Key Generation Center (KGC) is an entity that, on receiving an identity, generates the private key corresponding to that identity.
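The responsibilities of these four modules can be summarized as a set of interfaces. The C# sketch below is illustrative only: the interface names, method names and byte-array parameter types are our own placeholders, not APIs defined by the ID-PUIC paper.

// Hypothetical interfaces sketching the four modules' roles.
public interface IKeyGenerationCenter
{
    // Extract phase: derive a private key from an entity's identity.
    byte[] ExtractPrivateKey(string identity);
}

public interface IClient
{
    // Sign a warrant that authorizes a specific proxy.
    byte[] SignWarrant(string proxyIdentity);

    // Proof phase: challenge the PCS and verify its response.
    bool CheckRemoteIntegrity(IPublicCloudServer pcs);
}

public interface IProxy
{
    // Process and upload block-tag pairs only if the warrant verifies.
    void ProcessAndUpload(byte[] warrant, byte[][] blocks, IPublicCloudServer pcs);
}

public interface IPublicCloudServer
{
    void StoreBlockTagPair(int index, byte[] block, byte[] tag);

    // Produce an integrity proof over the challenged block indices.
    byte[] GenerateIntegrityProof(int[] challengedIndices);
}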

CHAPTER 5
LITERATURE SURVEY
5.1 OVERVIEW:
A literature review is an account of what has been published on a topic by accredited scholars and researchers. Occasionally you will be asked to write one as a separate assignment, but more often it is part of the introduction to an essay, research report, or thesis. In writing the literature review, your purpose is to convey to your reader what knowledge and ideas have been established on a topic, and what their strengths and weaknesses are. As a piece of writing, the literature review must be defined by a guiding concept (e.g., your research objective, the problem or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the material available, or a set of summaries.
Besides enlarging your knowledge about the topic, writing a literature review lets you gain and demonstrate skills in two areas:
1. INFORMATION SEEKING: the ability to scan the literature efficiently, using manual or computerized methods, to identify a set of useful articles and books.
2. CRITICAL APPRAISAL: the ability to apply principles of analysis to identify unbiased and valid studies.
5.2 ACHIEVING EFFICIENT CLOUD SEARCH SERVICES: MULTI-KEYWORD RANKED SEARCH OVER ENCRYPTED CLOUD DATA SUPPORTING PARALLEL COMPUTING
ABSTRACT
In recent years, the consumer-centric cloud computing paradigm has emerged with the development of smart electronic devices combined with emerging cloud computing technologies. A variety of cloud services are delivered to consumers on the premise that an effective and efficient cloud search service is available. Consumers want to find the most relevant products or data, which is highly desirable in the "pay-as-you-use" cloud computing paradigm. As sensitive data (such as photo albums, emails, personal health records, financial records, etc.) are encrypted before outsourcing to the cloud, traditional keyword search techniques are useless. Meanwhile, existing search approaches over encrypted cloud data support only exact or fuzzy keyword search, but not semantics-based multi-keyword ranked search. Therefore, how to enable an effective searchable system with support for ranked search remains a very challenging problem. This paper proposes an effective approach to solve the problem of multi-keyword ranked search over encrypted cloud data supporting synonym queries. The main contribution of this paper is summarized in two aspects: multi-keyword ranked search to achieve more accurate search results and synonym-based search to support synonym queries. Extensive experiments on a real-world dataset were performed to validate the approach, showing that the proposed solution is very effective and efficient for multi-keyword ranked searching in a cloud environment.
DISADVANTAGES OF EXISTING SYSTEM
·        Unauthorized operation on the outsourced data on account of curiosity or profit.
·        Sensitive data are encrypted before outsourcing to the cloud.
·        The existing searchable encryption schemes support only exact or fuzzy keyword search.
ADVANTAGES OF PROPOSED SYSTEM:
·        To improve search efficiency, a tree-based index structure which is a balanced binary tree is used.
·        Multi-keyword ranked search to achieve more accurate search results and synonym-based search to support synonym queries
·        Improves the accuracy of search results.
ALGORITHM:
E-TFIDF ALGORITHM (Extract Term Frequency-Inverse Document Frequency)
The E-TFIDF algorithm, which can extract the most representative keywords from outsourced text documents, improves the accuracy of search results.
Rank function:
In information retrieval, a ranking function is usually used to evaluate the relevance scores of matching files to a request. Among many ranking functions, the "TF×IDF" rule is most widely used, where TF (term frequency) denotes the occurrence of the term appearing in the document, and IDF (inverse document frequency) is often obtained by dividing the total number of documents by the number of files containing the term. That means TF represents the importance of the term in the document and IDF indicates the importance or degree of distinction in the whole document collection.
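As an illustration, the following minimal C# program computes TF×IDF scores exactly as described above; the three toy documents and the length-normalized TF are our own illustrative choices, not part of the paper's scheme.

// Minimal TF×IDF computation over toy documents.
using System;
using System.Linq;

class TfIdfDemo
{
    static void Main()
    {
        string[][] docs =
        {
            new[] { "cloud", "storage", "cloud" },
            new[] { "cloud", "security" },
            new[] { "keyword", "search", "security" },
        };

        string term = "cloud";
        int docsWithTerm = docs.Count(d => d.Contains(term));
        // IDF: total number of documents divided by documents containing the term.
        double idf = Math.Log((double)docs.Length / docsWithTerm);

        for (int i = 0; i < docs.Length; i++)
        {
            // TF: occurrences of the term in this document, length-normalized.
            double tf = docs[i].Count(w => w == term) / (double)docs[i].Length;
            Console.WriteLine($"doc {i}: TF*IDF({term}) = {tf * idf:F4}");
        }
    }
}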

5.3 A PROXY RE-ENCRYPTION SCHEME WITH THE UNFORGEABILITY OF RE-ENCRYPTION KEYS AGAINST COLLUSION ATTACKS
ABSTRACT:
Proxy re-encryption (PRE) schemes are cryptosystems which allow a proxy who has a re-encryption key to convert a ciphertext originally encrypted for one party into a ciphertext which can be decrypted by another party. Hayashi et al. proposed a new security notion for PRE called "unforgeability of re-encryption keys against collusion attacks," UFReKey-CA for short. They proposed PRE schemes and claimed that the schemes meet UFReKey-CA. However, Isshiki et al. pointed out at IWSEC 2013 that the schemes do not meet UFReKey-CA, leaving the construction of a scheme that meets UFReKey-CA as an open problem. In this paper, we propose new PRE schemes which meet confidentiality (RCCA security) assuming that the q-wDBDHI problem is hard, and which meet UFReKey-CA assuming that the 2-DHI problem is hard.
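To make the PRE idea concrete, here is a toy C# sketch of the classic BBS98-style ElGamal proxy re-encryption (a different, simpler scheme than the one proposed in this paper). The tiny group parameters are illustrative only; real deployments require large, safely generated groups.

// BBS98-style proxy re-encryption over a toy group.
using System;
using System.Numerics;

class PreDemo
{
    // Toy group: subgroup of order q = 11 in Z_23*, generator g = 2.
    static readonly BigInteger p = 23, q = 11, g = 2;

    static BigInteger InvModQ(BigInteger x) => BigInteger.ModPow(x, q - 2, q); // q prime
    static BigInteger InvModP(BigInteger x) => BigInteger.ModPow(x, p - 2, p); // p prime

    static void Main()
    {
        BigInteger a = 3, b = 7;                   // Alice's and Bob's secret keys
        BigInteger A = BigInteger.ModPow(g, a, p); // Alice's public key g^a

        // Encrypt m under Alice's key: c = (m*g^r, A^r).
        BigInteger m = 4, r = 5;
        BigInteger c1 = m * BigInteger.ModPow(g, r, p) % p;
        BigInteger c2 = BigInteger.ModPow(A, r, p);          // = g^(a*r)

        // Re-encryption key rk = b/a mod q; the proxy transforms the
        // ciphertext without ever seeing the plaintext m.
        BigInteger rk = b * InvModQ(a) % q;
        BigInteger c2Bob = BigInteger.ModPow(c2, rk, p);     // = g^(b*r)

        // Bob decrypts: g^r = c2Bob^(1/b), then m = c1 / g^r.
        BigInteger gr = BigInteger.ModPow(c2Bob, InvModQ(b), p);
        BigInteger recovered = c1 * InvModP(gr) % p;
        Console.WriteLine($"recovered m = {recovered}");     // prints 4
    }
}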
DISADVANTAGES OF EXISTING SYSTEM:
·        The proxy must convert ciphertexts into ciphertexts for another party without learning any information about the underlying plaintexts.
·        Security problems occur when proxies are corrupted.
·        A re-encryption key can be forged when a proxy and two or more delegatees collude.
ADVANTAGES OF PROPOSED SYSTEM:
·        To prevent re-encryption key forgery, we change the form of the re-encryption key.
·        The proposed scheme satisfies RCCA security and the strong unforgeability of re-encryption keys.
·        Security proofs are necessary to establish the confidentiality of each type of ciphertext.
ALGORITHM:
Strong One-Time Signature:
A one-time signature (OTS) scheme is a digital signature scheme that can be used to sign one message per key pair. More generally, we consider w-time signatures, which allow w signatures to be signed securely with each key pair (signing more than w messages breaks the security of the scheme). One-time signatures are an old idea: the first digital signature scheme invented was an OTS (Rabin/Lamport). The two main advantages of OTS are that they may be constructed from any one-way function, and that the signing and verification algorithms are very fast and cheap to compute compared to regular public-key signatures. Common drawbacks, aside from the signature limit, are the signature length and the size of the public and private keys. Despite these limitations, OTS have found many applications. On the practical side, OTS can be used to authenticate messages in sensor networks and to provide source authentication for multicast (also called broadcast) authentication. One-time signatures are also used in the construction of other primitives, such as online/offline signatures and CCA-secure public-key encryption.
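As a concrete illustration, the following C# program implements a minimal Lamport one-time signature over SHA-256, the textbook OTS built from a one-way function as described above. It is a sketch, not hardened code, and each key pair must sign at most one message.

// Lamport one-time signature: sign the 256-bit hash of a message.
using System;
using System.Security.Cryptography;
using System.Text;

class LamportOtsDemo
{
    const int Bits = 256;

    static int Bit(byte[] h, int i) => (h[i / 8] >> (i % 8)) & 1;

    static void Main()
    {
        // Key generation: two random 32-byte secrets per message bit;
        // the public key is their hashes.
        var sk = new byte[2, Bits][];
        var pk = new byte[2, Bits][];
        for (int b = 0; b < 2; b++)
            for (int i = 0; i < Bits; i++)
            {
                sk[b, i] = RandomNumberGenerator.GetBytes(32);
                pk[b, i] = SHA256.HashData(sk[b, i]);
            }

        // Sign: hash the message, then reveal one secret per digest bit.
        byte[] digest = SHA256.HashData(Encoding.UTF8.GetBytes("hello"));
        var sig = new byte[Bits][];
        for (int i = 0; i < Bits; i++)
            sig[i] = sk[Bit(digest, i), i];

        // Verify: hash each revealed secret and compare with the public key.
        bool ok = true;
        for (int i = 0; i < Bits; i++)
            ok &= SHA256.HashData(sig[i]).AsSpan().SequenceEqual(pk[Bit(digest, i), i]);
        Console.WriteLine(ok ? "signature valid" : "invalid");
    }
}

Revealing a second signature under the same key pair discloses further secrets, which is exactly why signing more than one message breaks the scheme.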

5.4 PROXY RE-ENCRYPTION FROM LATTICES
ABSTRACT:
We propose a new unidirectional proxy re-encryption scheme based on the hardness of the LWE problem. Our construction is collusion-safe and does not require any trusted authority for the re-encryption key generation. We extend a recent trapdoor definition for a lattice of Micciancio and Peikert. Our proxy re-encryption scheme is provably CCA-1 secure in the selective model under the LWE assumption.
DISADVANTAGES:
·        Constructing a CCA-2 secure lattice-based scheme in the adaptive setting remains an open problem.
·        The security of the construction is proved in the selective model only.
ADVANTAGES:
·        A proxy re-encryption scheme based on the hardness of lattice problems.
·        A lattice-based construction that achieves collusion resilience and non-interactivity.
·        The generalization might prove useful for functionalities other than proxy re-encryption.
ALGORITHM:
TRAPDOOR GENERATION:
A trapdoor function is a function that is easy to compute in one direction, yet difficult to compute in the opposite direction (finding its inverse) without special information, called the "trapdoor". Trapdoor functions are widely used in cryptography.
In mathematical terms, if f is a trapdoor function, then there exists some secret information y, such that given f(x) and y, it is easy to compute x. Consider a padlock and its key. It is trivial to change the padlock from open to closed without using the key, by pushing the shackle into the lock mechanism. Opening the padlock easily, however, requires the key to be used. Here the key is the trapdoor.
A trapdoor in cryptography has the very specific aforementioned meaning and is not to be confused with a backdoor (these are frequently used interchangeably, which is incorrect). A backdoor is a deliberate mechanism that is added to a cryptographic algorithm (e.g., a key pair generation algorithm, digital signing algorithm, etc.) or operating system, for example, that permits one or more unauthorized parties to bypass or subvert the security of the system in some fashion.
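The padlock analogy maps directly onto RSA, the most familiar trapdoor function. The short C# sketch below uses the standard System.Security.Cryptography.RSA API: anyone can compute the forward direction with public parameters alone, while inverting it requires the private key, the "trapdoor".

// RSA as a trapdoor function: easy forward, hard to invert without the key.
using System;
using System.Security.Cryptography;
using System.Text;

class TrapdoorDemo
{
    static void Main()
    {
        using RSA rsa = RSA.Create(2048);
        byte[] x = Encoding.UTF8.GetBytes("secret");

        // Easy direction: compute f(x) using only the public key.
        byte[] fx = rsa.Encrypt(x, RSAEncryptionPadding.OaepSHA256);

        // Inverse: easy only with the trapdoor (the private key).
        byte[] back = rsa.Decrypt(fx, RSAEncryptionPadding.OaepSHA256);
        Console.WriteLine(Encoding.UTF8.GetString(back)); // "secret"
    }
}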
5.5 MUTUAL VERIFIABLE PROVABLE DATA AUDITING IN PUBLIC CLOUD STORAGE
ABSTRACT:
Cloud storage is now a hot research topic in information technology. In cloud storage, data security properties such as data confidentiality, integrity and availability become more and more important in many commercial applications. Recently, many provable data possession (PDP) schemes have been proposed to protect data integrity. In some cases, the remote data possession checking task has to be delegated to some proxy. However, these PDP schemes are not secure since the proxy stores some state information in cloud storage servers. Hence, in this paper, we propose an efficient mutual verifiable provable data possession scheme, which utilizes a Diffie-Hellman shared key to construct the homomorphic authenticator. In particular, the verifier in our scheme is stateless and independent of the cloud storage service. It is worth noting that the presented scheme is very efficient compared with previous PDP schemes, since no bilinear operation is required.

DISADVANTAGES:
·        Provable data possession (PDP) schemes are not secure
·        Security becomes one of the major concerns for all entities in cloud services
·        Data owners would worry that their data could be misused or accessed by unauthorized users.
·        Data loss could happen in any infrastructure.
ADVANTAGES:
·        Data storage auditing service to assure data are correctly stored in the Cloud.
·        An efficient mutual verifiable provable data possession (MV-PDP) scheme solves the problem that the verifier can be optionally specified by a malicious CSS.
·        The same data blocks can be signed and checked by the client and the verifier in turn.
ALGORITHM:
Computational Diffie–Hellman Algorithm:
We present some computational problems related to CDH, and prove reductions among them. The main result is to prove that CDH and Fixed-CDH are equivalent. Most of the results in this section apply to both algebraic groups (AG) and algebraic group quotients (AGQ) of prime order r. For the algebraic group quotients G considered here, one can obtain all the results by lifting from the quotient to the covering group G′ and applying the results there. A subtle distinction is whether the base element g ∈ G is considered fixed or variable in a CDH instance. To a cryptographer it is most natural to assume the generator is fixed, since that corresponds to the usage of cryptosystems in the real world (the group G and element g ∈ G are fixed for all users). Hence, an adversary against a cryptosystem leads to an oracle for a fixed-generator problem. To a computational number theorist it is most natural to assume the generator is variable, since algorithms in computational number theory usually apply to all problem instances. Hence both problems are studied in the literature, and when an author writes CDH it is sometimes not explicit which of the variants is meant. Definition 20.2.1 was for the case when g varies.
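The following C# fragment shows the fixed-generator CDH setting in code, using System.Numerics.BigInteger. The prime, generator and exponents are toy values chosen for illustration; real systems use groups of 2048 bits or more.

// CDH setting: given g^a and g^b, computing g^(ab) is believed hard
// without knowing a or b; with a (or b) it is easy.
using System;
using System.Numerics;

class CdhDemo
{
    static void Main()
    {
        BigInteger p = 2147483647;        // toy prime (2^31 - 1)
        BigInteger gGen = 5;              // fixed base element g
        BigInteger a = 123456, b = 654321;

        BigInteger ga = BigInteger.ModPow(gGen, a, p);   // public: g^a
        BigInteger gb = BigInteger.ModPow(gGen, b, p);   // public: g^b

        // A party knowing a computes the CDH value g^(ab) easily:
        BigInteger gab = BigInteger.ModPow(gb, a, p);
        // The CDH assumption: from (g, g^a, g^b) alone this is infeasible.
        Console.WriteLine($"g^a={ga}, g^b={gb}, g^ab={gab}");
    }
}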

5.6 FINE-GRAINED AND HETEROGENEOUS PROXY RE-ENCRYPTION FOR SECURE CLOUD STORAGE
ABSTRACT:
Cloud is an emerging computing paradigm that has drawn extensive attention from both academia and industry, but its security issues have been considered a critical obstacle to its rapid development. When data owners store their data as plaintext in the cloud, they lose the security of their cloud data due to its arbitrary accessibility, especially access by the untrusted cloud. In order to protect the confidentiality of data owners' cloud data, a promising idea is to encrypt data before storing it in the cloud. However, the straightforward employment of traditional encryption algorithms cannot solve the problem well, since it is hard for data owners to manage their private keys if they want to securely share their cloud data with others in a fine-grained manner. In this paper, we propose a fine-grained and heterogeneous proxy re-encryption (FH-PRE) system to protect the confidentiality of data owners' cloud data. By applying the FH-PRE system in the cloud, data owners' cloud data can be securely stored in the cloud and shared in a fine-grained manner.

DISADVANTAGES:
·        Security becomes one of the major concerns for all entities in cloud services
·        Data loss could happen in any infrastructure.
·        Cloud data is accessible without any authorization, especially accessed by the un-trusted cloud
ADVANTAGES:
·        A promising idea is to encrypt data by data owners before storing them in the cloud.
·        Proxy re-encryption allows data owners to securely share their cloud data with authorized data consumers.
·        Re-encryption keys are generated and re-encrypted ciphertexts are decrypted efficiently, supporting more flexible access control.
ALGORITHM:
Fine-grained and heterogeneous proxy re-encryption (FH-PRE):
In this system, data owners and data consumers belong to different cryptographic primitives. A data owner owns a pair of IBE public and private keys, and a data consumer owns a pair of ElGamal public and private keys. An IBE ciphertext of any data owner can be re-encrypted by a proxy (or cloud) to generate a new ciphertext. This new ciphertext can be decrypted by the ElGamal private keys of data consumers, so data consumers can share data owners' cloud data without additional registrations. In addition, the FH-PRE system generates a re-encryption key and decrypts a re-encrypted ciphertext more efficiently, and supports more flexible access control, compared with the ITHJ08 system. As mentioned above, a type parameter is taken as input to generate ciphertexts and re-encryption keys in the ITHJ08 system, which allows fine-grained sharing of data owners' cloud data. We find that if a data owner can update his cloud data from an old type to a new type, he can realize more flexible access control on his cloud data; the ITHJ08 system does not describe how to do this. We realize this function in our FH-PRE system, which is also a novel contribution of this paper.

CHAPTER 6
6.1 METHODOLOGY
ID-PUIC is a novel proxy-oriented data uploading and remote data integrity checking model in the public cloud. The concrete ID-PUIC protocol comprises five procedures: Setup, Extract, Proxy-key generation, TagGen, and Proof. In order to show the intuition of our construction, the concrete protocol's architecture is depicted. First, Setup is performed and the system parameters are generated. Based on the generated system parameters, the other procedures are performed as follows:
(1) In the phase Extract, when an entity's identity is input, KGC generates the entity's private key. In particular, it can generate the private keys for the client and the proxy.
(2) In the phase Proxy-key generation, the original client creates the warrant and helps the proxy generate the proxy key.
(3) In the phase TagGen, when a data block is input, the proxy generates the block's tag and uploads block-tag pairs to PCS.
(4) In the phase Proof, the original client O interacts with PCS. Through the interaction, O checks its remote data integrity.
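The five procedures can be laid out schematically as C# method signatures. This is only a structural sketch: the names, byte-array types and NotImplementedException bodies are placeholders for the paper's pairing-based operations, not a working cryptographic implementation.

// Schematic of the five ID-PUIC procedures.
using System;

public static class IdPuicSchematic
{
    // Setup: generate system parameters and the KGC master secret.
    public static (byte[] sysParams, byte[] masterKey) Setup()
        => throw new NotImplementedException("pairing-based setup");

    // Extract: KGC derives an entity's private key from its identity.
    public static byte[] Extract(byte[] masterKey, string identity)
        => throw new NotImplementedException();

    // Proxy-key generation: the client signs a warrant; the proxy
    // combines it with its own private key to form the proxy key.
    public static byte[] ProxyKeyGen(byte[] clientKey, byte[] proxyKey, byte[] warrant)
        => throw new NotImplementedException();

    // TagGen: the proxy tags each block and uploads block-tag pairs to PCS.
    public static (byte[] block, byte[] tag) TagGen(byte[] proxySigningKey, byte[] block)
        => throw new NotImplementedException();

    // Proof: challenge-response between the original client O and PCS.
    public static bool Proof(byte[] clientKey, byte[] challenge, byte[] pcsResponse)
        => throw new NotImplementedException();
}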

6.2 OBJECTIVE AND MOTIVATION
OBJECTIVE:
In public cloud computing, clients store their massive data on remote public cloud servers. Since the stored data is outside of the control of the clients, it entails security risks in terms of confidentiality, integrity and availability of data and service. Remote data integrity checking is a primitive which can be used to convince cloud clients that their data are kept intact. In some special cases, the data owner may be restricted from accessing the public cloud server, and will delegate the task of data processing and uploading to a third party, for example a proxy. On the other hand, the remote data integrity checking protocol must be efficient in order to make it suitable for capacity-limited end devices. Thus, based on identity-based public key cryptography and proxy public key cryptography, we study the ID-PUIC protocol.
MOTIVATION:
In the public cloud environment, most clients upload their data to PCS and check their remote data's integrity over the Internet. When the client is an individual manager, some practical problems arise. If the manager is suspected of being involved in commercial fraud, he may be taken away by the police. During the period of investigation, the manager will be restricted from accessing the network in order to guard against collusion.
In the public cloud, remote data integrity checking is an important security problem. Since the clients' massive data is outside of their control, the clients' data may be corrupted by the malicious cloud server, whether intentionally or unintentionally. In order to address this security problem, some efficient models have been presented.


CHAPTER 7
SYSTEM SPECIFICATION
The purpose of system requirement specification is to produce the specification analysis of the task and also to establish complete information about the requirement, behavior and other constraints such as functional performance and so on. The goal of system requirement specification is to completely specify the technical requirements for the product in a concise and unambiguous manner.
7.1 HARDWARE REQUIREMENTS
      Processor        -  Pentium III
      Speed            -  1.1 GHz
      RAM              -  256 MB (min)
      Hard Disk        -  20 GB
      Floppy Drive     -  1.44 MB
      Keyboard         -  Standard Windows Keyboard
      Mouse            -  Two or Three Button Mouse
      Monitor          -  SVGA
7.2 SOFTWARE REQUIREMENTS
·        Operating System   :  Windows 8
·        Coding Language    :  ASP.NET, C#.NET
·        Tool               :  Visual Studio 2010
·        Database           :  SQL Server 2008

CHAPTER 8
SOFTWARE ENVIRONMENT
DOTNET
.NET Framework
The Microsoft .NET Framework (pronounced dot net) is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a large class library known as Framework Class Library (FCL) and provides language interoperability (each language can use code written in other languages) across several programming languages. Programs written for .NET Framework execute in a software environment (as contrasted to hardware environment), known as Common Language Runtime (CLR), an application virtual machine that provides services such as security, memory management, and exception handling. FCL and CLR together constitute .NET Framework.
FCL provides user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. Programmers produce software by combining their own source code with .NET Framework and other libraries. .NET Framework is intended to be used by most new applications created for the Windows platform. Microsoft also produces an integrated development environment largely for .NET software called Visual Studio.
HISTORY
Microsoft started development of .NET Framework in the late 1990s, originally under the name of Next Generation Windows Services (NGWS). By late 2000, the first beta versions of .NET 1.0 were released.
OVERVIEW OF .NET FRAMEWORK RELEASE HISTORY
Version   CLR version   Release date   Replaces
1.0       1.0           2002-02-13     N/A
1.1       1.1           2003-04-24     1.0
2.0       2.0           2005-11-07     N/A
3.0       2.0           2006-11-06     2.0
3.5       2.0           2007-11-19     2.0, 3.0
4.0       4             2010-04-12     N/A
4.5       4             2012-08-15     4.0
4.5.1     4             2013-10-17     4.0, 4.5
4.5.2     4             2014-05-05     4.0, 4.5, 4.5.1
(.NET Framework 4.5 was also included in Windows 8.)

.NET Framework family also includes two versions for mobile or embedded device use. A reduced version of the framework, .NET Compact Framework, is available on Windows CE platforms, including Windows Mobile devices such as smart phones. Additionally, .NET Micro Framework is targeted at severely resource-constrained devices.
ARCHITECTURE
COMMON LANGUAGE INFRASTRUCTURE:
The Common Language Infrastructure (CLI) provides a language-neutral platform for application development and execution, including functions for exception handling, garbage collection, security, and interoperability. By implementing the core aspects of .NET Framework within the scope of CLI, this functionality is not tied to a single language but is available across the many languages supported by the framework. Microsoft's implementation of CLI is the Common Language Runtime (CLR). It serves as the execution engine of .NET Framework. All .NET programs execute under the supervision of CLR, guaranteeing certain properties and behaviors in the areas of memory management, security, and exception handling.
For computer programs to run on CLI, they need to be compiled into Common Intermediate Language (CIL) – as opposed to being compiled into machine code. Upon execution, an architecture-specific Just-in-time compiler (JIT) turns the CIL code into machine code. To improve performance, however, .NET Framework comes with Native Image Generator (NGEN) that performs ahead-of-time compilation.
Fig 3: Visual overview of the Common Language Infrastructure (CLI)

CLASS LIBRARY
.NET Framework includes a set of standard class libraries. The class library is organized in a hierarchy of namespaces. Most of the built-in APIs are part of either System.* or Microsoft.* namespaces. These class libraries implement a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation, among others. .NET class libraries are available to all CLI compliant languages. .NET Framework class library is divided into two parts: Framework Class Library (FCL) and Base Class Library (BCL).
BCL includes a small subset of the entire class library and is the core set of classes that serve as the basic API of CLR. Classes in mscorlib.dll and some classes in System.dll and System.Core.dll are part of BCL. BCL classes are available in .NET Framework as well as in its alternative implementations, including .NET Compact Framework, Microsoft Silverlight and Mono.
FCL is a superset of BCL and refers to the entire class library that ships with .NET Framework. It includes an expanded set of libraries, including Windows Forms, ADO.NET, ASP.NET, Language Integrated Query (LINQ), Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF) and Workflow Foundation (WF). FCL is much larger in scope than the standard libraries of languages like C++, and comparable in scope to the standard library of Java.
.NET CORE
.NET Core is a free and open-source partial implementation of the .NET Framework. It consists of CoreCLR and CoreFX, which are partial forks of CLR and BCL respectively. .NET Core comes with an improved JIT compiler, called RyuJIT.
ASSEMBLIES
Compiled CIL code is stored in CLI assemblies. As mandated by the specification, assemblies are stored in the Portable Executable (PE) file format, common on the Windows platform for all DLL and EXE files. Each assembly consists of one or more files, one of which must contain a manifest bearing the metadata for the assembly. The complete name of an assembly (not to be confused with the file name on disk) contains its simple text name, version number, culture, and public key token. Assemblies are considered equivalent if they share the same complete name, excluding the revision of the version number. A private key can also be used by the creator of the assembly for strong naming. The public key token identifies which private key an assembly is signed with. Only the creator of the key pair (typically the .NET developer signing the assembly) can sign assemblies that have the same strong name as a previous version assembly, since the creator is in possession of the private key. Strong naming is required to add assemblies to the Global Assembly Cache.
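The components of a complete assembly name can be inspected at run time with reflection. The short C# snippet below prints the full name and public key token of the core library; the exact output varies by .NET version.

// Inspecting an assembly's complete name via reflection.
using System;
using System.Reflection;

class AssemblyNameDemo
{
    static void Main()
    {
        AssemblyName name = typeof(object).Assembly.GetName();
        // Full name: simple name, version, culture, public key token.
        Console.WriteLine(name.FullName);
        Console.WriteLine(BitConverter.ToString(
            name.GetPublicKeyToken() ?? Array.Empty<byte>()));
    }
}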
DESIGN TENETS
INTEROPERABILITY
Because computer systems commonly require interaction between newer and older applications, .NET Framework provides means to access functionality implemented in newer and older programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is achieved using the P/Invoke feature.
LANGUAGE INDEPENDENCE
.NET Framework introduces a Common Type System (CTS) that defines all possible datatypes and programming constructs supported by CLR and how they may or may not interact with each other conforming to CLI specification. Because of this feature, .NET Framework supports the exchange of types and object instances between libraries and applications written using any conforming .NET language.
PORTABILITY
While Microsoft has never implemented the full framework on any system except Microsoft Windows, it has engineered the framework to be platform-agnostic, and cross-platform implementations are available for other operating systems. Microsoft submitted the specifications for CLI (which includes the core class libraries, CTS, and CIL), and C++/CLI to both ECMA and ISO, making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.

SECURITY
.NET Framework has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. CAS is based on evidence that is associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the intranet or Internet). CAS uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. The demand causes CLR to perform a call stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission a security exception is thrown.
MEMORY MANAGEMENT
CLR frees the developer from the burden of managing memory (allocating and freeing up when done); it handles memory management itself by detecting when memory can be safely freed. Instantiations of .NET types (objects) are allocated from the managed heap, a pool of memory managed by CLR. As long as there exists a reference to an object, which might be either a direct reference or via a graph of objects, the object is considered to be in use. When there is no reference to an object and it cannot be reached or used, it becomes garbage, eligible for collection. .NET Framework includes a garbage collector which runs periodically, on a separate thread from the application's thread, that enumerates all the unusable objects and reclaims the memory allocated to them.
.NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep garbage collector. GC runs only when a certain amount of memory has been used or there is enough pressure for memory on the system. Since it is not guaranteed when the conditions to reclaim memory are reached, GC runs are non-deterministic. Each .NET application has a set of roots, which are pointers to objects on the managed heap (managed objects). These include references to static objects, objects defined as local variables or method parameters currently in scope, and objects referred to by CPU registers. When GC runs, it pauses the application, and for each object referred to in the roots, it recursively enumerates all the objects reachable from the root objects and marks them as reachable. It uses CLI metadata and reflection to discover the objects encapsulated by an object, and then recursively walks them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection. All objects not marked as reachable are garbage. This is the mark phase. Since the memory held by garbage is not of any consequence, it is considered free space. However, this leaves chunks of free space between objects which were initially contiguous. The objects are then compacted together to make used memory contiguous again. Any reference to an object invalidated by moving the object is updated by GC to reflect the new location. The application is resumed after the garbage collection is over.
GC used by .NET Framework is also generational. Objects are assigned a generation; newly created objects belong to Generation 0. The objects that survive a garbage collection are tagged as Generation 1, and the Generation 1 objects that survive another collection are Generation 2 objects. .NET Framework uses up to Generation 2 objects. Higher generation objects are garbage collected less frequently than lower generation objects. This helps increase the efficiency of garbage collection, as older objects tend to have a longer lifetime than newer objects.  Thus, by eliminating older (and thus more likely to survive a collection) objects from the scope of a collection run, fewer objects need to be checked and compacted.
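The reachability rule described above is language-generic. As a rough illustration, the following minimal Java sketch (Java being the language of this project's source code; the class name GcDemo is hypothetical) shows an object becoming eligible for collection once its last strong reference is dropped. Since collection is non-deterministic, System.gc() is only a request and the output may vary between runs.

import java.lang.ref.WeakReference;

public class GcDemo {
    public static void main(String[] args) {
        Object obj = new Object();                            // reachable via a local variable (a root)
        WeakReference<Object> ref = new WeakReference<>(obj); // weak reference: does not keep obj alive
        obj = null;                                           // no strong reference remains, so obj is garbage
        System.gc();                                          // request (not force) a collection
        System.out.println(ref.get() == null
                ? "object was collected"
                : "object not collected yet");                // non-deterministic, as noted above
    }
}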
SIMPLIFIED DEPLOYMENT
.NET Framework includes design features and tools which help manage the installation of computer software to ensure that it does not interfere with previously installed software, and that it conforms to security requirements.



STANDARDIZATION AND LICENSING
TIMELINE
In August 2000, Microsoft, Hewlett-Packard, and Intel worked to standardize CLI and C#. By December 2001, both were ratified ECMA standards. The current versions of the ISO standards are ISO/IEC 23271:2012 and ISO/IEC 23270:2006.
While Microsoft and their partners hold patents for CLI and C#, ECMA and ISO require that all patents essential to implementation be made available under "reasonable and non-discriminatory terms". In addition to meeting these terms, the companies have agreed to make the patents available royalty-free. However, this did not apply to the part of .NET Framework not covered by ECMA/ISO standards, which included Windows Forms, ADO.NET, and ASP.NET. Patents that Microsoft holds in these areas may have deterred non-Microsoft implementations of the full framework.
On 3 October 2007, Microsoft announced that the source code for .NET Framework 3.5 libraries was to become available under the Microsoft Reference License (Ms-RSL). The source code repository became available online on 16 January 2008 and included BCL, ASP.NET, ADO.NET, Windows Forms, WPF and XML. Scott Guthrie of Microsoft promised that the LINQ, WCF and WF libraries were in the process of being added.
On 12 November 2014, Microsoft announced .NET Core, in an effort to include cross-platform support for .NET: the source release of Microsoft's CoreCLR implementation, source for the "entire [...] library stack" for .NET Core, and the adoption of a conventional ("bazaar"-like) open source development model under the stewardship of the .NET Foundation. Miguel de Icaza described .NET Core as a "redesigned version of .NET that is based on the simplified version of the class libraries", and Microsoft's Immo Landwerth explained that .NET Core would be "the foundation of all future .NET platforms".
At the time of the announcement, the initial release of the .NET Core project had been seeded with a subset of the libraries' source code and coincided with the relicensing of Microsoft's existing .NET reference source away from the restrictions of the Microsoft Reference License. Both projects are made available under the MIT License. Landwerth acknowledged the disadvantages of the previously selected shared source license, explaining that it "made Rotor [the Microsoft reference implementation] a non-starter" as a community-developed open source project because it did not meet the criteria of an OSI-approved license.
Microsoft also produced an update to its patent grants, which further extends the scope beyond its previous pledges. Whereas before, projects like Mono existed in a legal grey area because Microsoft's earlier grants applied only to the technology in "covered specifications", covering strictly the 4th editions of ECMA-334 and ECMA-335, the new patent promise places no ceiling on the specification version and even extends to any .NET runtime technologies documented on MSDN that have not been formally specified by the ECMA group, if a project chooses to implement them. This permits Mono and other projects to maintain feature parity with modern .NET features introduced since the 4th edition was published, without being at risk of patent litigation over the implementation of those features. The new grant does maintain the restriction that any implementation must maintain minimum compliance with the mandatory parts of the CLI specification.
Microsoft's press release highlights that the cross-platform commitment now allows for a fully open source, modern server-side .NET stack. However, Microsoft does not plan to release the source for WPF or Windows Forms.




LICENSING DETAILS

Component                                       License
.NET Core (CoreFX and CoreCLR)                  MIT License
.NET Compiler Platform (codename "Roslyn")      Apache License 2.0
ASP.NET Web Stack                               Apache License 2.0
ASP.NET Ajax Control Toolkit                    BSD License
ASP.NET SignalR                                 Apache License 2.0
.NET Framework reference source code            MIT License
.NET Framework redistributable package          Proprietary (EULA)

Table: Components of .NET Framework and their licenses (the license values for CoreFX/CoreCLR and the reference source follow the MIT relicensing described above; the others are as published for the open source projects at the time)

ALTERNATIVE IMPLEMENTATIONS
.NET Framework is the predominant implementation of .NET technologies. Other implementations for parts of the framework exist. Although the runtime engine is described by an ECMA/ISO specification, other implementations of it may be encumbered by patent issues; ISO standards may include the disclaimer, "Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO shall not be held responsible for identifying any or all such patent rights." It is more difficult to develop alternatives to FCL, which is not described by an open standard and may be subject to copyright restrictions. Additionally, parts of FCL have Windows-specific functionality and behavior, so implementation on non-Windows platforms can be problematic. Some alternative implementations of parts of the framework are listed here.
.NET Micro Framework is a .NET platform for extremely resource-constrained devices. It includes a small version of CLR and supports development in C# (though some developers were able to use VB.NET, albeit with an amount of hacking, and with limited functionalities) and debugging (in an emulator or on hardware), both using Microsoft Visual Studio. It also features a subset of .NET Framework Class Library (about 70 classes with about 420 methods), a GUI framework loosely based on WPF, and additional libraries specific to embedded applications.
Mono is an implementation of CLI and FCL, and provides additional functionality. It is dual-licensed under free software and proprietary software licenses. It includes support for ASP.NET, ADO.NET, and Windows Forms libraries for a wide range of architectures and operating systems. It also includes C# and VB.NET compilers. Portable.NET (part of DotGNU) provides an implementation of CLI, portions of FCL, and a C# compiler. It supports a variety of CPUs and operating systems. Microsoft Shared Source Common Language Infrastructure is a non-free implementation of CLR. However, the last version runs only on Microsoft Windows XP SP2 and has not been updated since 2006, so it does not contain all features of version 2.0 of .NET Framework. CrossNet is an implementation of CLI and portions of FCL. It is free software using the open source MIT License.
PERFORMANCE
The garbage collector, which is integrated into the environment, can introduce unanticipated delays of execution over which the developer has little direct control. "In large applications, the number of objects that the garbage collector needs to deal with can become very large, which means it can take a very long time to visit and rearrange all of them."
.NET Framework provides support for calling Streaming SIMD Extensions (SSE) via managed code as of April 2014 in Visual Studio 2013 Update 2. Mono, however, had already provided support for SIMD extensions as of version 2.2 within the Mono.Simd namespace. Mono's lead developer Miguel de Icaza has expressed hope that this SIMD support will be adopted by CLR's ECMA standard. Streaming SIMD Extensions have been available in x86 CPUs since the introduction of the Pentium III. Some other architectures such as ARM and MIPS also have SIMD extensions. If the CPU lacks support for those extensions, the instructions are simulated in software.
SECURITY
Unobfuscated managed CIL byte code can often be easier to reverse-engineer than native code. .NET decompiler programs enable developers with no reverse-engineering skills to view the source code behind unobfuscated .NET assemblies (DLL/EXE). In contrast, applications built with Visual C++ are much harder to reverse-engineer, and source code is almost never produced successfully, mainly due to compiler optimizations and lack of reflection. One concern is the possible loss of trade secrets and the bypassing of license control mechanisms. To mitigate this, Microsoft has included Dotfuscator Community Edition with Visual Studio .NET since 2002. Third-party obfuscation tools are also available from vendors such as VMware, V.i. Labs, Xenocode, and Red Gate Software. Method-level encryption tools for .NET code are available from vendors such as SafeNet.
FEATURES OF .NET
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There’s no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic and JScript. The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).
THE .NET FRAMEWORK
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are
Ø  Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
Ø  Memory management, notably including garbage collection.
Ø  Checking and enforcing security restrictions on the running code.
Ø  Loading and executing programs, with version control and other such features.
The following features of the .NET Framework are also worth describing:
Managed Code:
Managed code is code that targets .NET and contains certain extra information ("metadata") to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With managed code comes managed data. CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting CLR can, depending on the language you’re using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications - data that doesn’t get garbage collected but instead is looked after by unmanaged code.

Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.

Common Language Specification
 The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.

THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
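The value-to-object conversion mentioned above is known as boxing. Java (the language of this project's source code) has a close analogue in autoboxing; a minimal sketch under that analogy (the class name BoxingDemo is hypothetical):

public class BoxingDemo {
    public static void main(String[] args) {
        int value = 42;                 // value type: stored directly, no heap object needed
        Object boxed = value;           // boxing: the int is wrapped into a heap Integer object
        int unboxed = (Integer) boxed;  // unboxing: the value is extracted back out
        System.out.println(boxed + " / " + unboxed);
    }
}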
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.
LANGUAGES SUPPORTED BY .NET
The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic also now supports structured exception handling, custom attributes and also supports multi-threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState’s Perl Dev Kit. Other languages for which .NET compilers are available include
1.      FORTRAN
2.      COBOL
3.      Eiffel
C#.NET is also compliant with CLS (Common Language Specification) and supports structured exception handling. CLS is set of rules and constructs that are supported by the CLR (Common Language Runtime). CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.   
C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of CLS ensures complete interoperability among applications, regardless of the languages used to create them.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET the Finalize procedure is available. The Finalize procedure is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the Finalize procedure can be called only from the class it belongs to or from derived classes.
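By analogy, the acquire-in-constructor, release-on-destruction pairing described above can be sketched in Java, where finalize() once played the role of the Finalize procedure but is now deprecated in favour of explicit cleanup. The class name FileHolder and the resource used are hypothetical; this is a sketch of the pattern, not code from the project:

class FileHolder implements AutoCloseable {
    private final java.io.FileInputStream in;

    FileHolder(String path) throws java.io.IOException {
        in = new java.io.FileInputStream(path);  // constructor: acquire the resource
    }

    @Override
    public void close() throws java.io.IOException {
        in.close();                              // destructor-style release of the resource
    }
}

Used as try (FileHolder f = new FileHolder("data.txt")) { ... }, the close() method is guaranteed to run when the block exits, giving deterministic cleanup that a garbage-collected finalizer cannot promise.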
GARBAGE COLLECTION
      Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
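A minimal Java sketch of the idea (the Calculator class is hypothetical): three procedures share one name but differ in their parameter lists, and the compiler selects the right one from the argument types.

class Calculator {
    int add(int a, int b)          { return a + b; }      // two-integer version
    double add(double a, double b) { return a + b; }      // floating-point version
    int add(int a, int b, int c)   { return a + b + c; }  // three-argument version
}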
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
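A minimal Java sketch of the idea (the class name ThreadDemo is hypothetical): a background task runs concurrently with the main thread, so the main thread remains free to respond to the user.

public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println("background task running"));
        worker.start();                                   // runs concurrently with the main thread
        System.out.println("main thread stays responsive");
        worker.join();                                    // wait for the background task to finish
    }
}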
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use Try…Catch…Finally statements to create exception handlers. Using Try…Catch…Finally statements, we can create robust and effective exception handlers to improve the performance of our application.
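The Try…Catch…Finally construct mirrors Java's try/catch/finally; a minimal Java sketch (the class name ExceptionDemo is hypothetical): the catch block handles the runtime error and the finally block always runs, so cleanup is never skipped.

public class ExceptionDemo {
    public static void main(String[] args) {
        int numerator = 10, denominator = 0;
        try {
            int result = numerator / denominator;          // raises ArithmeticException at runtime
            System.out.println(result);
        } catch (ArithmeticException e) {
            System.out.println("handled: " + e.getMessage());
        } finally {
            System.out.println("finally always executes"); // runs whether or not an exception occurred
        }
    }
}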
THE .NET FRAMEWORK
     The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.
     OBJECTIVES OF. NET FRAMEWORK
1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.
3. To eliminate the performance problems of scripted or interpreted environments.
There are different types of application, such as Windows-based applications and Web-based applications.
MICROSOFT SQL SERVER
Microsoft SQL Server is a relational database management system developed by Microsoft. As a database, it is a software product whose primary function is to store and retrieve data as requested by other software applications, be it those on the same computer or those running on another computer across a network (including the Internet). There are at least a dozen different editions of Microsoft SQL Server aimed at different audiences and for workloads ranging from small single-machine applications to large Internet-facing applications with many concurrent users. Its primary query languages are T-SQL and ANSI SQL.
HISTORY:
GENESIS
Prior to version 7.0 the code base for MS SQL Server originated in Sybase SQL Server; it was Microsoft's entry to the enterprise-level database market, competing against Oracle, IBM, and, later, Sybase itself. Microsoft, Sybase and Ashton-Tate originally worked together to create and market the first version, named SQL Server 1.0, for OS/2 (about 1989), which was essentially the same as Sybase SQL Server 3.0 on Unix, VMS, etc. Microsoft SQL Server 4.2 was shipped around 1992 (available bundled with IBM OS/2 version 1.3). Later, Microsoft SQL Server 4.21 for Windows NT was released at the same time as Windows NT 3.1. Microsoft SQL Server v6.0 was the first version designed for NT and did not include any direction from Sybase.
About the time Windows NT was released in July 1993, Sybase and Microsoft parted ways and each pursued its own design and marketing schemes. Microsoft negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Until 1994, Microsoft's SQL Server carried three Sybase copyright notices as an indication of its origin.
SQL Server 7.0 and SQL Server 2000 included modifications and extensions to the Sybase code base, adding support for the IA-64 architecture. By SQL Server 2005 the legacy Sybase code had been completely rewritten.
Since the release of SQL Server 2000, advances have been made in performance, the client IDE tools, and several complementary systems that are packaged with SQL Server 2005. These include:
·         an extract-transform-load (ETL) tool (SQL Server Integration Services or SSIS)
·         Reporting Server
·         an OLAP and data mining server (Analysis Services)
·         several messaging technologies, specifically Service Broker and Notification Services
SQL SERVER RELEASE HISTORY
Version        Year   Release Name                     Codename                             Internal Version
1.0 (OS/2)     1989   SQL Server 1.0 (16 bit)          Ashton-Tate / Microsoft SQL Server   -
1.1 (OS/2)     1991   SQL Server 1.1 (16 bit)          -                                    -
4.21 (WinNT)   1993   SQL Server 4.21                  SQLNT                                -
6.0            1995   SQL Server 6.0                   SQL95                                -
6.5            1996   SQL Server 6.5                   Hydra                                -
7.0            1998   SQL Server 7.0                   Sphinx                               515
-              1999   SQL Server 7.0 OLAP Tools        Palato mania                         -
8.0            2000   SQL Server 2000                  Shiloh                               539
8.0            2003   SQL Server 2000 64-bit Edition   Liberty                              539
9.0            2005   SQL Server 2005                  Yukon                                611/612
10.0           2008   SQL Server 2008                  Katmai                               661
10.25          2010   Azure SQL DB                     Matrix (aka CloudDatabase/CloudDB)   -
10.50          2010   SQL Server 2008 R2               Kilimanjaro (aka KJ)                 665
11.0           2012   SQL Server 2012                  Denali                               706
12.0           2014   SQL Server 2014                  SQL14                                782

SQL SERVER 2005:
SQL Server 2005 (formerly codenamed "Yukon") was released in October 2005. It included native support for managing XML data, in addition to relational data. For this purpose, it defined an xml data type that could be used either as a data type in database columns or as literals in queries. XML columns can be associated with XSD schemas; XML data being stored is verified against the schema. XML is converted to an internal binary data type before being stored in the database. Specialized indexing methods were made available for XML data. XML data is queried using XQuery; SQL Server 2005 added some extensions to the T-SQL language to allow embedding XQuery queries in T-SQL. In addition, it also defined a new extension to XQuery, called XML DML, that allows query-based modifications to XML data. SQL Server 2005 also allows a database server to be exposed over web services using Tabular Data Stream (TDS) packets encapsulated within SOAP requests. When the data is accessed over web services, results are returned as XML.
Common Language Runtime (CLR) integration was introduced with this version, enabling one to write SQL code as Managed Code by the CLR. For relational data, T-SQL has been augmented with error handling features (try/catch) and support for recursive queries with CTEs (Common Table Expressions). SQL Server 2005 has also been enhanced with new indexing algorithms, syntax and better error recovery systems. Data pages are checksummed for better error resiliency, and optimistic concurrency support has been added for better performance. Permissions and access control have been made more granular and the query processor handles concurrent execution of queries in a more efficient way. Partitions on tables and indexes are supported natively, so scaling out a database onto a cluster is easier. SQL CLR was introduced with SQL Server 2005 to let it integrate with the .NET Framework.
SQL Server 2005 introduced Multi-Version Concurrency Control. User-facing features include a new transaction isolation level called SNAPSHOT and a variation of the READ COMMITTED isolation level based on statement-level data snapshots. SQL Server 2005 also introduced "MARS" (Multiple Active Result Sets), a method of allowing usage of database connections for multiple purposes. SQL Server 2005 introduced DMVs (Dynamic Management Views), which are specialized views and functions that return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. Service Pack 1 (SP1) of SQL Server 2005 introduced Database Mirroring, a high availability option that provides redundancy and failover capabilities at the database level. Failover can be performed manually or can be configured for automatic failover. Automatic failover requires a witness partner and a synchronous operating mode (also known as high-safety or full safety).
ARCHITECTURE:
The protocol layer implements the external interface to SQL Server. All operations that can be invoked on SQL Server are communicated to it via a Microsoft-defined format called Tabular Data Stream (TDS). TDS is an application layer protocol used to transfer data between a database server and a client; it was initially designed and developed by Sybase Inc. for their Sybase SQL Server relational database engine in 1984, and later by Microsoft for Microsoft SQL Server. TDS packets can be encased in other physical-transport-dependent protocols, including TCP/IP, named pipes, and shared memory; consequently, access to SQL Server is available over these protocols. In addition, the SQL Server API is also exposed over web services.
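This project's source code (Chapter 14) connects to MySQL via JDBC; for comparison, the following is a minimal sketch of connecting to SQL Server over TCP/IP from the same Java codebase using Microsoft's JDBC driver. The driver class name and URL scheme are as documented for that driver; the database name, login and password here are hypothetical placeholders.

public class SqlServerDemo {
    public static void main(String[] args) throws Exception {
        // Load Microsoft's JDBC driver (hypothetical connection details below)
        Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        java.sql.Connection con = java.sql.DriverManager.getConnection(
                "jdbc:sqlserver://localhost:1433;databaseName=integrity", "sa", "password");
        java.sql.Statement st = con.createStatement();
        java.sql.ResultSet rs = st.executeQuery("select @@version");  // T-SQL: report the server version
        if (rs.next()) {
            System.out.println(rs.getString(1));
        }
        st.close();
        con.close();
    }
}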
FEATURES SQL SERVER:
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services.
A SQL Server database consists of several types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
 A database is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View


Design View
To build or modify the structure of a table, we work in the table design view. There we can specify what kind of data each field will hold.

Datasheet View
To add, edit or analyse the data itself, we work in the table's datasheet view mode.


QUERY:
A query is a question asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.

















CHAPTER 9
SYSTEM DESIGN
9.1 USE CASE DIAGRAM:
To model a system, the most important aspect is to capture its dynamic behaviour: the behaviour of the system when it is running/operating. Static behaviour alone is not sufficient to model a system; dynamic behaviour is more important.
In UML there are five diagrams available to model dynamic behaviour, and the use case diagram is one of them. Since the use case diagram is dynamic in nature, there must be some internal or external factors to make the interactions happen. These internal and external agents are known as actors. Use case diagrams therefore consist of actors, use cases and their relationships.
The diagram is used to model the system/subsystem of an application. A single use case diagram captures a particular functionality of a system, so several use case diagrams are used to model the entire system. A use case diagram at its simplest is a representation of a user's interaction with the system, depicting the specifications of a use case. A use case diagram can portray the different types of users of a system and their use cases, and will often be accompanied by other types of diagrams as well.






9.2 CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains information.






9.3 SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.



9.4 COLLABORATION DIAGRAM












9.5ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.

9.6 TABLE DESIGN:
CLIENT REGISTER
END USER REGISTER
PROXY SERVER





FILE UPLOAD
ATTACKED FILE
HACKER FILE





FILE REQUEST
FILE TRANSFER









CHAPTER 10
INPUT DESIGN AND OUTPUT DESIGN
INPUT DESIGN
The input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation and the steps necessary to put transaction data into a usable form for processing, whether by having the computer read data from a written or printed document or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed to provide security and ease of use while retaining privacy. Input design considered the following points:
Ø What data should be given as input?
Ø How should the data be arranged or coded?
Ø The dialog to guide the operating personnel in providing input.
Ø Methods for preparing input validations and the steps to follow when errors occur.

OBJECTIVES
1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the management the correct direction for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record viewing facilities.
3. When data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is never left confused. Thus the objective of input design is to create an input layout that is easy to follow.

OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, as well as the hard copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and supports decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use and effective. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create the document, report, or other formats that contain information produced by the system.
The output form of an information system should accomplish one or more of the following objectives.

v Convey information about past activities, current status or projections of the future.
v Signal important events, opportunities, problems, or warnings.
v Trigger an action.
v Confirm an action.



























CHAPTER 11
SYSTEM STUDY
FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
¨     Economical feasibility
¨     Technical feasibility
¨     Social feasibility
ECONOMICAL FEASIBILITY:                 
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget; this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

TECHNICAL FEASIBILITY:            

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

SOCIAL FEASIBILITY:      
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make constructive criticism, which is welcomed, as he is the final user of the system.















CHAPTER 12
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.
TYPES OF TESTS:
The different types of testing are given below:
UNIT TESTING:
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit, before integration.
This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
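As an illustration in Java, a minimal JUnit 4 sketch of a unit test is given below; the class KeyCheckTest and the keyMatches helper are hypothetical stand-ins for a real unit, such as the secret-key comparison performed in the download page of Chapter 14.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class KeyCheckTest {
    // Hypothetical unit under test: does the supplied key match the stored one?
    static boolean keyMatches(String expected, String given) {
        return expected != null && expected.equals(given);
    }

    @Test
    public void acceptsMatchingKey() { assertTrue(keyMatches("k1", "k1")); }  // valid input accepted

    @Test
    public void rejectsWrongKey() { assertFalse(keyMatches("k1", "k2")); }    // invalid input rejected
}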

INTEGRATION TESTING:
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
FUNCTIONAL TEST:
        Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage of identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
SYSTEM TEST:
     System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

WHITE BOX TESTING:
White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
BLACK BOX TESTING:
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
UNIT TESTING:
          Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
Test strategy and approach
          Field testing will be performed manually and functional tests will be written in detail.
Test objectives
·        All field entries must work properly.
·        Pages must be activated from the identified link.
·        The entry screen, messages and responses must not be delayed.

Features to be tested
·        Verify that the entries are of the correct format
·        No duplicate entries should be allowed
·        All links should take the user to the correct page.


INTEGRATION TESTING:
          Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.
          The task of the integration test is to check that components or software applications, e.g. components in a software system or – one step up – software applications at the company level – interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.

ACCEPTANCE TESTING:
          User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered






CHAPTER 13

FUTURE WORK

Searchable encryption is a technique that enables secure search over encrypted data stored on remote servers. In future work, we propose a novel, secure and efficient multi-keyword similarity searchable encryption scheme that returns the matching data items from the cloud server. Our analysis demonstrates that the scheme is secure against adaptive chosen-keyword attacks.




















CHAPTER 14
SOURCE CODE
CLIENT REGISTER:
<%@page import="com.oreilly.servlet.*,java.sql.*,java.lang.*,databaseconnection.*,java.text.SimpleDateFormat,java.util.*,java.io.*,javax.servlet.*, javax.servlet.http.*"  errorPage="Error.jsp"%>
<%@ page import="java.sql.*,databaseconnection.*"%>
<%
 try
 {
          //int n=(Integer)(session.getAttribute( "id" ));
          String id = request.getParameter("id");
           String user = request.getParameter("user");
            String pass = request.getParameter("pass");
            String gender = request.getParameter("gender");
            String dob = request.getParameter("date");
            String mail = request.getParameter("mail");
            String location = request.getParameter("location");
            String contactno = request.getParameter("contactno");
            session.setAttribute("user", user);
            //session.setAttribute("mail", mail);
            Connection con=databasecon.getconnection();
        Statement st=con.createStatement();
       
            int i = st.executeUpdate("insert into client (id,user,pass,gender,dob,mail,location,contactno,status)values('"+id+"','"+user+"','"+pass+"','"+gender+"','"+dob+"','"+mail+"','"+location+"','"+contactno+"','Waiting')");
            if(i!=0){
                response.sendRedirect("Client.jsp?message=Success");
                               }
                       else{
                  response.sendRedirect("Client.jsp?message=error");
                       }
             st.close();
        con.close();
        }
        catch(Exception e)
                {
        out.println(e);
               }
%>
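The insert above builds its SQL by string concatenation, which is open to SQL injection through the request parameters. As a sketch (not part of the original project), the same insert can be written with a parameterized JDBC PreparedStatement, reusing the con and the request parameters already defined in the scriptlet above; the same pattern applies to the other concatenated queries in this chapter, including the login queries.

<%
    // Sketch: parameterized version of the client insert (same table and columns as above).
    String sql = "insert into client (id,user,pass,gender,dob,mail,location,contactno,status)"
               + " values (?,?,?,?,?,?,?,?,?)";
    PreparedStatement ps = con.prepareStatement(sql);
    ps.setString(1, id);
    ps.setString(2, user);
    ps.setString(3, pass);
    ps.setString(4, gender);
    ps.setString(5, dob);
    ps.setString(6, mail);
    ps.setString(7, location);
    ps.setString(8, contactno);
    ps.setString(9, "Waiting");
    int i = ps.executeUpdate();   // same success check as above: i != 0
    ps.close();
%>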
CLIENT LOGIN
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="java.sql.Connection"%>
<%
    String username = request.getParameter("user");
    String password = request.getParameter("pass");
    String stat = "Authorized";
    //System.out.println(" user name" + username);
    //System.out.println(" user password" + password);
    Class.forName("com.mysql.jdbc.Driver");
    Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity", "root", "root");
    Statement st = con.createStatement();
    String Q = "select * from client where user= '" + username + "' and pass='" + password + "' ";
    ResultSet rs = st.executeQuery(Q);
    if (rs.next()) {
        if ((rs.getString("user").equals(username)) && (rs.getString("status").equals(stat)))
        {
            session.setAttribute("me", username);
            response.sendRedirect("ClientHome.jsp?msg=success");

        } else {
            response.sendRedirect("Client.jsp?msg=NotActivated");
        }
    } else {
        response.sendRedirect("Client.jsp?msg=error");
    }
%>
PROXY
<%
String username = request.getParameter("username");
String password = request.getParameter("password");   
 if(((username).equalsIgnoreCase("proxy"))&&((password).equalsIgnoreCase("proxy"))){
     response.sendRedirect("proxyhome.jsp?msg=sucess");
 }
else{
 out.println("Error Found..!!");
 }
%>
UPLOAD
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="java.sql.Connection"%>
<%@ page import="java.sql.*" import="databaseconnection.*"%>
<%
         String me =session.getAttribute("me").toString();
            String fm =session.getAttribute("nn").toString();
            //String user = request.getParameter("user");
            
            Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity", "root", "root");
            Statement st = con.createStatement();
            int i = st.executeUpdate("update files set user='" + me + "' where name='" + fm + "'");
                                    int j = st.executeUpdate("update transaction set user='" + me + "' where filename='" + fm + "'");
            if (i != 0) {
               response.sendRedirect("Uploads.jsp?msgfns=Register success"); 
            } else {
               response.sendRedirect("Uploads.jsp?msgfnf=Register fails");
            }
            %>
CHECK INTEGRITY
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="java.sql.Connection"%>
<%
    String filename = request.getParameter("file");
            System.out.println(" filename" + filename);
             session.setAttribute("filename", filename);
            String me =session.getAttribute("me").toString();
    //String password = request.getParameter("pass");
    String stat = "attack";        // moved out of the comment line: it is used in the status check below
    String stat1 = "Key Mismatch";
    Class.forName("com.mysql.jdbc.Driver");
    Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity", "root", "root");
    Statement st = con.createStatement();
    String Q = "select * from attack where filename= '" + filename + "'";
    ResultSet rs = st.executeQuery(Q);
   if (rs.next()) {
if ((rs.getString("filename").equals(filename)) && (rs.getString("status").equals(stat)))
        {
         session.setAttribute("filename", filename);           
         response.sendRedirect("Recover.jsp?msg=NotSecure");
      } else {
    response.sendRedirect("ClientHome.jsp?msg=secure");
       }
            }
    else
                    {
                        response.sendRedirect("ClientHome.jsp?msg=secure");
                        }
%>
END USER LOGIN
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="java.sql.Connection"%>
<%
    String username = request.getParameter("user");
    String password = request.getParameter("pass");
    String stat = "Authorized";
    //System.out.println(" user name" + username);
    //System.out.println(" user password" + password);
    Class.forName("com.mysql.jdbc.Driver");
    Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity", "root", "root");
    Statement st = con.createStatement();
    String Q = "select * from enduser where user= '" + username + "' and pass='" + password + "' ";
   ResultSet rs = st.executeQuery(Q);

    if (rs.next()) {
        if ((rs.getString("user").equals(username)))// && (rs.getString("status").equals(stat)))
        {
            session.setAttribute("me", username);
            response.sendRedirect("EndUserHome.jsp?msg=Login");

        } else {
            response.sendRedirect("EndUSer.jsp?msgrr= User Not Activated..!");
        }
    } else {
        response.sendRedirect("EndUSer.jsp?msg=error");
    }
%>
END USER REGISTER
<%@page import="com.oreilly.servlet.*,java.sql.*,java.lang.*,databaseconnection.*,java.text.SimpleDateFormat,java.util.*,java.io.*,javax.servlet.*, javax.servlet.http.*"  errorPage="Error.jsp"%>
<%@ page import="java.sql.*,databaseconnection.*"%>
<%
 try
 {
          //int n=(Integer)(session.getAttribute( "id" ));
          String id = request.getParameter("id");
           String user = request.getParameter("user");
            String pass = request.getParameter("pass");
            String gender = request.getParameter("gender");
            String dob = request.getParameter("date");
            String mail = request.getParameter("mail");
            String location = request.getParameter("location");
            String contactno = request.getParameter("contactno");

            session.setAttribute("user", user);
            //session.setAttribute("mail", mail);
            Connection con=databasecon.getconnection();
        Statement st=con.createStatement();
         int i = st.executeUpdate("insert into enduser (id,user,pass,gender,dob,mail,location,contactno)values('"+id+"','"+user+"','"+pass+"','"+gender+"','"+dob+"','"+mail+"','"+location+"','"+contactno+"')");
            if(i!=0){
          response.sendRedirect("EndUSer.jsp?message=Success");
                               }
                       else{
            response.sendRedirect("EndUser.jsp?message=error");
                       }   con.close();
        st.close();
        }
        catch(Exception e)
                {
        out.println(e);
               }
%>
FILE REQUEST
<%@ page import="java.text.ParseException.*"%>
<%@ page import="java.text.SimpleDateFormat,java.util.*,java.io.*,javax.servlet.*, javax.servlet.http.*" %>
<%@ page import ="java.util.Date,java.text.SimpleDateFormat,java.text.ParseException"%>
<%@ page import="java.sql.*,databaseconnection.*"%>

<%
            Statement st = null;
            ResultSet rs1=null;
            int n=0;
            try{
            Class.forName("com.mysql.jdbc.Driver");       Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity","root","root");
st=con.createStatement();
String sql1="select max(id) from request";
rs1=st.executeQuery(sql1);
if(rs1.next())
{
            if(rs1.getInt(1)==0)
            n=1;
            else
            n=rs1.getInt(1)+1;
}
      }catch (Exception e) {
e.printStackTrace();
out.print(e.getMessage());
}
%>
<%
  //String id = request.getParameter("id");
 //String id=(String)session.getAttribute("id");
  String fname=(String)session.getAttribute("fname");
       System.out.println("filename :"+fname); 
      String user=(String)session.getAttribute("user");
       System.out.println(user); 
       String status=(String)session.getAttribute("status");
        System.out.println(status); 
                   //String username=(String)session.getAttribute("uname");
                     // System.out.println(username); 
                         String me =(String)session.getAttribute("me");
                         String id = request.getParameter("id");
try
{
       Class.forName("com.mysql.jdbc.Driver");
    Connection con2 = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity", "root", "root");
           Statement st11 = con2.createStatement();
            ResultSet rs = st11.executeQuery("select * from enduser where user='"+me+"'");
            if(rs.next()){
String sql="insert into request values('"+n+"','"+me+"','"+user+"','"+fname+"','"+status+"','Pending')";
int j=st11.executeUpdate(sql);
String sql1="insert into transaction values('"+me+"','"+fname+"','Request',now(),'"+status+"')";
  int x=st11.executeUpdate(sql1);
 response.sendRedirect("Search.jsp?msg=success");
}
else
       {
    response.sendRedirect("Search.jsp?msg=error");
       }
}catch(Exception e)
{
            System.out.println(e);
}
%>

DOWNLOAD
<%@page import="design.download"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="java.sql.Connection"%>

<% 
            Statement st1 = null;
            ResultSet rs1=null;
            int n=0;
            try{
            Class.forName("com.mysql.jdbc.Driver");     Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity","root","root");
                        st1=con.createStatement();
                        String sql1="select max(id) from attack";
                       
                        rs1=st1.executeQuery(sql1);
                        if(rs1.next())
                        {
                        if(rs1.getInt(1)==0)
                        n=1;
                        else
                        n=rs1.getInt(1)+1;
                       
                        }
                             }catch (Exception e) {
                        e.printStackTrace();
                        out.print(e.getMessage());
            }
%>     
<%
String namee=request.getParameter("file");
String skey=request.getParameter("key");
String me =session.getAttribute("me").toString();
//System.out.println(" user name"+ username);
//System.out.println(" user password"+ password);
Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity","root","root");
            Statement st = con.createStatement();
            String Q = "select * from files where name= '"+namee+"'";                   
                ResultSet rs = st.executeQuery(Q);         
               if(rs.next()){                       
         if((rs.getString("name").equals(namee))&&(rs.getString("skey").equals(skey))){
                     session.setAttribute("ff", namee);
                    response.sendRedirect("userdwn.jsp?ms= sucess..!");
                    }
                    else{
                   Class.forName("com.mysql.jdbc.Driver");
            con = DriverManager.getConnection("jdbc:mysql://localhost:3306/integrity", "root", "root");
        Statement st2 = con.createStatement();
         int i = st2.executeUpdate("insert into attack (id,filename,status,time)values('"+n+"','"+namee+"','attack',now())");
              // String sql = "insert into hackers (user,filename)values('"+me+"',m'"+namee+"')";
            if (i != 0) {
int j = st2.executeUpdate("insert into transaction (user,filename,status,date)values('"+me+"','"+namee+"','Key Mismatch',now())");
               response.sendRedirect("EndUserHome.jsp?msghack=hacker"); 
            }
        }     }
                    else{
                      response.sendRedirect("EndUserHome.jsp?msgrrr= Error found..!");
                         //out.println("<script>alert('wrong again')</script>");
                    }
            %>
        
                   
                   
SCREENSHOTS
HOME PAGE:






CLIENT LOGIN
PROXY SERVER





PROXY HOME
CLIENT INFORMATION
VIEW CLIENT DETAILS
CLIENT REGISTER




CLIENT HOME
UPLOAD FILE
UPLOAD FILE CLOUD:
CHECK DATA INTEGRITY





ENDUSER REGISTER




END USER LOGIN
ENDUSER HOME

SEARCH FILE

FILE INFORMATION






KEY GENERATION LOGIN
KEY GENERATION HOME

GENERATE SECRET KEY
VIEW END USER REQUEST





VIEW FILE TRANSACTION
PUBLIC CLOUD SERVER  LOGIN
PUBLIC CLOUD SERVER  HOME
VIEW ALL CLOUD FILE






VIEW ALL TRANSACTION






VIEW ALL ATTACKER







VIEW KEY AUTHORITY RESPONSE








DOWNLOAD FILE













CHAPTER 15
CONCLUSION

Motivated by application needs, this work proposes the novel security concept of ID-PUIC in public cloud and formalizes ID-PUIC's system model and security model. Then, the first concrete ID-PUIC protocol is designed by using the bilinear pairings technique. The concrete ID-PUIC protocol is shown to be provably secure and efficient through a formal security proof and an efficiency analysis. Moreover, based on the original client's authorization, the proposed ID-PUIC protocol can also realize private remote data integrity checking, delegated remote data integrity checking and public remote data integrity checking.




ABBREVIATIONS
ID-PUIC - Identity-based Proxy-oriented data Uploading and remote data Integrity Checking in public cloud
PKI - Public Key Infrastructure
CPA - Chosen-Plaintext Attack
PDP - Provable Data Possession
PCS - Public Cloud Server
KGC - Key Generation Center
CDH - Computational Diffie-Hellman






