2DCRYPT: IMAGE SCALING AND CROPPING IN ENCRYPTED DOMAINS
ABSTRACT
The evolution of cloud computing and a drastic increase in image size are making the outsourcing of image storage and processing an attractive business model. Although this outsourcing has many advantages, ensuring data confidentiality in the cloud is one of the main concerns. There are state-of-the-art encryption schemes for ensuring confidentiality in the cloud. However, such schemes do not allow cloud datacenters to perform operations over encrypted images. In this paper, we address this concern by proposing 2DCrypt, a modified Paillier cryptosystem-based image scaling and cropping scheme for multi-user settings that allows cloud datacenters to scale and crop an image in the encrypted domain. To avoid the high storage overhead that would result from naive per-pixel encryption, we propose a space-efficient tiling scheme that allows tile-level image scaling and cropping operations: instead of encrypting each pixel individually, we encrypt a tile of pixels. 2DCrypt is designed so that multiple users can view or process the images without sharing any encryption keys – a requirement desirable for practical deployments in real organizations. Our analysis and results show that 2DCrypt is IND-CPA secure and incurs an acceptable overhead. When scaling a 512 x 512 image by a factor of two, 2DCrypt requires an image user to download approximately 5.3 times more data than un-encrypted scaling and to spend approximately 2.3 seconds more to obtain the scaled image in plaintext.



CHAPTER 1
INTRODUCTION
NETWORK FORENSICS
Network forensics is a sub-branch of digital forensics relating to the monitoring and analysis of computer network traffic for the purposes of information gathering, legal evidence, or intrusion detection.[1] Unlike other areas of digital forensics, network investigations deal with volatile and dynamic information. Network traffic is transmitted and then lost, so network forensics is often a pro-active investigation.
Network forensics generally has two uses. The first, relating to security, involves monitoring a network for anomalous traffic and identifying intrusions. An attacker might be able to erase all log files on a compromised host; network-based evidence might therefore be the only evidence available for forensic analysis. The second form relates to law enforcement. In this case analysis of captured network traffic can include tasks such as reassembling transferred files, searching for keywords and parsing human communication such as emails or chat sessions.
Two systems are commonly used to collect network data: a brute-force "catch it as you can" approach and a more intelligent "stop, look and listen" method.

Overview

Network forensics is a comparatively new field of forensic science. The growing popularity of the Internet in homes means that computing has become network-centric and data is now available outside of disk-based digital evidence. Network forensics can be performed as a standalone investigation or alongside a computer forensics analysis (where it is often used to reveal links between digital devices or reconstruct how a crime was committed).
Marcus Ranum is credited with defining network forensics as “the capture, recording, and analysis of network events in order to discover the source of security attacks or other problem incidents.”
Compared to computer forensics, where evidence is usually preserved on disk, network data is more volatile and unpredictable. Investigators often only have material to examine if packet filters, firewalls, and intrusion detection systems were set up to anticipate breaches of security.
Systems used to collect network data for forensics use usually come in two forms:
·         "Catch-it-as-you-can" - This is where all packets passing through a certain traffic point are captured and written to storage with analysis being done subsequently in batch mode. This approach requires large amounts of storage.
·         "Stop, look and listen" - This is where each packet is analyzed in a rudimentary way in memory and only certain information saved for future analysis. This approach requires a faster processor to keep up with incoming traffic.

Types

Ethernet

Applying forensic methods on the Ethernet layer is done by eavesdropping on bit streams with tools called monitoring tools or sniffers. The most common tools on this layer are Wireshark (formerly known as Ethereal) and tcpdump, where tcpdump works mostly on Unix-like operating systems. These tools collect all data on this layer and allow the user to filter for different events. With these tools, website pages, email attachments, and other network traffic can be reconstructed only if they are transmitted or received unencrypted. An advantage of collecting this data is that it is directly connected to a host. If, for example, the IP address or the MAC address of a host at a certain time is known, all data sent to or from this IP or MAC address can be filtered.
To establish the connection between IP and MAC address, it is useful to take a closer look at auxiliary network protocols. The Address Resolution Protocol (ARP) tables list the MAC addresses with the corresponding IP addresses.
To collect data on this layer, the network interface card (NIC) of a host can be put into "promiscuous mode". In so doing, all traffic will be passed to the CPU, not only the traffic meant for the host.
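As an illustration, the sketch below opens an interface in promiscuous mode using the third-party pcap4j library (an assumed dependency; this report does not prescribe a capture library). The interface name "eth0" and the MAC address in the filter are placeholders.

// A minimal promiscuous-mode capture sketch using pcap4j (assumed library).
// "eth0" and the MAC filter below are placeholders.
import org.pcap4j.core.BpfProgram;
import org.pcap4j.core.PcapHandle;
import org.pcap4j.core.PcapNetworkInterface;
import org.pcap4j.core.Pcaps;
import org.pcap4j.packet.Packet;

public class PromiscuousCapture {
    public static void main(String[] args) throws Exception {
        PcapNetworkInterface nif = Pcaps.getDevByName("eth0");
        // PROMISCUOUS makes the NIC pass all traffic to the CPU,
        // not only the frames addressed to this host.
        PcapHandle handle = nif.openLive(65536,
                PcapNetworkInterface.PromiscuousMode.PROMISCUOUS, 10);
        // Filter for all traffic to or from a known MAC address.
        handle.setFilter("ether host 00:11:22:33:44:55",
                BpfProgram.BpfCompileMode.OPTIMIZE);
        for (int i = 0; i < 10; i++) {
            Packet packet = handle.getNextPacketEx();
            System.out.println(packet);
        }
        handle.close();
    }
}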
However, if an intruder or attacker is aware that his connection might be eavesdropped on, he might use encryption to secure his connection. It is almost impossible nowadays to break encryption, but the fact that a suspect's connection to another host is encrypted all the time might indicate that the other host is an accomplice of the suspect.

TCP/IP

On the network layer, the Internet Protocol (IP) is responsible for directing the packets generated by TCP through the network (e.g., the Internet) by adding source and destination information which can be interpreted by routers all over the network. Cellular digital packet networks, like GPRS, use protocols similar to IP, so the methods described for IP work with them as well.
For correct routing, every intermediate router must have a routing table to know where to send the packet next. These routing tables are one of the best sources of information when investigating a digital crime and trying to track down an attacker. To do this, it is necessary to follow the packets of the attacker, reverse the sending route and find the computer the packet came from (i.e., the attacker).

The Internet

The Internet can be a rich source of digital evidence, including web browsing, email, newsgroup, synchronous chat and peer-to-peer traffic. For example, web server logs can be used to show when (or if) a suspect accessed information related to criminal activity. Email accounts can often contain useful evidence, but email headers are easily faked, so network forensics may be used to prove the exact origin of incriminating material. Network forensics can also be used to find out who is using a particular computer[6] by extracting user account information from the network traffic.

WIRELESS FORENSICS

Wireless forensics is a sub-discipline of network forensics. The main goal of wireless forensics is to provide the methodology and tools required to collect and analyze (wireless) network traffic that can be presented as valid digital evidence in a court of law. The evidence collected can correspond to plain data or, with the broad usage of Voice-over-IP (VoIP) technologies, especially over wireless, can include voice conversations.
Analysis of wireless network traffic is similar to that on wired networks; however, there may be the added consideration of wireless security measures.

Computer forensics

Computer forensics (sometimes known as computer forensic science) is a branch of digital forensic science pertaining to evidence found in computers and digital storage media. The goal of computer forensics is to examine digital media in a forensically sound manner with the aim of identifying, preserving, recovering, analyzing and presenting facts and opinions about the digital information.
Although it is most often associated with the investigation of a wide variety of computer crime, computer forensics may also be used in civil proceedings. The discipline involves similar techniques and principles to data recovery, but with additional guidelines and practices designed to create a legal audit trail.
Evidence from computer forensics investigations is usually subjected to the same guidelines and practices of other digital evidence. It has been used in a number of high-profile cases and is becoming widely accepted as reliable within U.S. and European court systems.

Overview

In the early 1980s personal computers became more accessible to consumers, leading to their increased use in criminal activity (for example, to help commit fraud). At the same time, several new "computer crimes" were recognized (such as hacking). The discipline of computer forensics emerged during this time as a method to recover and investigate digital evidence for use in court. Since then computer crime and computer-related crime has grown, jumping 67% between 2002 and 2003. Today the discipline is used to investigate a wide variety of crime, including child pornography, fraud, espionage, cyberstalking, murder and rape. It also features in civil proceedings as a form of information gathering (for example, electronic discovery).
Forensic techniques and expert knowledge are used to explain the current state of a digital artifact, such as a computer system, a storage medium (e.g., hard disk or CD-ROM) or an electronic document (e.g., an email message or JPEG image). The scope of a forensic analysis can vary from simple information retrieval to reconstructing a series of events. In their 2002 book Computer Forensics, authors Kruse and Heiser define computer forensics as involving "the preservation, identification, extraction, documentation and interpretation of computer data". They go on to describe the discipline as "more of an art than a science", indicating that forensic methodology is backed by flexibility and extensive domain knowledge. However, while several methods can be used to extract evidence from a given computer, the strategies used by law enforcement are fairly rigid and lack the flexibility found in the civilian world.

Use as evidence

In court, computer forensic evidence is subject to the usual requirements for digital evidence. This requires that information be authentic, reliably obtained, and admissible. Different countries have specific guidelines and practices for evidence recovery. In the United Kingdom, examiners often follow Association of Chief Police Officers guidelines that help ensure the authenticity and integrity of evidence. While voluntary, the guidelines are widely accepted in British courts.
Computer forensics has been used as evidence in criminal law since the mid-1980s; some notable examples include:
·         BTK Killer: Dennis Rader was convicted of a string of serial killings that occurred over a period of sixteen years. Towards the end of this period, Rader sent letters to the police on a floppy disk. Metadata within the documents implicated an author named "Dennis" at "Christ Lutheran Church"; this evidence helped lead to Rader's arrest.
·         Joseph E. Duncan III: A spreadsheet recovered from Duncan's computer contained evidence that showed him planning his crimes. Prosecutors used this to show premeditation and secure the death penalty.
·         Sharon Lopatka: Hundreds of emails on Lopatka's computer led investigators to her killer, Robert Glass.
·         Corcoran Group: This case confirmed parties' duties to preserve digital evidence when litigation has commenced or is reasonably anticipated. Hard drives were analyzed by a computer forensics expert who could not find relevant emails the defendants should have had. Though the expert found no evidence of deletion on the hard drives, the court found that the defendants had intentionally destroyed emails, and had misled and failed to disclose material facts to the plaintiffs and the court.
·         Dr. Conrad Murray: Dr. Conrad Murray, the doctor of the deceased Michael Jackson, was convicted partially by digital evidence on his computer. This evidence included medical documentation showing lethal amounts of propofol.

Forensic process

Computer forensic investigations usually follow the standard digital forensic process or phases: acquisition, examination, analysis and reporting. Investigations are performed on static data (i.e. acquired images) rather than "live" systems. This is a change from early forensic practices where a lack of specialist tools led to investigators commonly working on live data.

Techniques

A number of techniques are used during computer forensics investigations and much has been written on the many techniques used by law enforcement in particular. See, e.g., "Defending Child Pornography Cases".



Cross-drive analysis
A forensic technique that correlates information found on multiple hard drives. The process, still being researched, can be used to identify social networks and to perform anomaly detection.
Live analysis
The examination of computers from within the operating system using custom forensic tools or existing sysadmin tools to extract evidence. The practice is useful when dealing with encrypting file systems, for example, where the encryption keys may be collected and, in some instances, the logical hard drive volume may be imaged (known as a live acquisition) before the computer is shut down.
Deleted files
A common technique used in computer forensics is the recovery of deleted files. Modern forensic software has its own tools for recovering or carving out deleted data. Most operating systems and file systems do not always erase physical file data, allowing investigators to reconstruct it from the physical disk sectors. File carving involves searching for known file headers within the disk image and reconstructing deleted materials.
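As a sketch of the idea, the following scans a raw disk image for the JPEG header bytes FF D8 FF and trailer FF D9 ("disk.img" is a placeholder path; a real carver must also handle fragmented and partially overwritten files):

// A minimal JPEG-carving sketch: scan a raw image for header/trailer
// signatures. "disk.img" is a placeholder path.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class JpegCarver {
    public static void main(String[] args) throws IOException {
        byte[] img = Files.readAllBytes(Paths.get("disk.img"));
        for (int i = 0; i + 2 < img.length; i++) {
            // JPEG files start with FF D8 FF ...
            if ((img[i] & 0xFF) == 0xFF && (img[i + 1] & 0xFF) == 0xD8
                    && (img[i + 2] & 0xFF) == 0xFF) {
                // ... and end with FF D9.
                for (int j = i + 2; j + 1 < img.length; j++) {
                    if ((img[j] & 0xFF) == 0xFF && (img[j + 1] & 0xFF) == 0xD9) {
                        System.out.println("Candidate JPEG at offset " + i
                                + ", length " + (j + 2 - i));
                        break;
                    }
                }
            }
        }
    }
}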
Stochastic forensics
A method which uses stochastic properties of the computer system to investigate activities lacking digital artifacts. Its chief use is to investigate data theft.
Steganography
One of the techniques used to hide data is steganography, the process of hiding data inside of a picture or digital image. An example would be to hide pornographic images of children or other information that a given criminal does not want to have discovered. Computer forensics professionals can fight this by looking at the hash of the file and comparing it to the original image (if available). While the image appears exactly the same, the hash changes as the data changes.
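The hash comparison itself is straightforward with the standard java.security API; in this sketch the two file names are placeholders:

// Compare file digests to detect a hidden payload; file names are placeholders.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class HashCompare {
    static String sha256Hex(String path) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(Files.readAllBytes(Paths.get(path)))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String original = sha256Hex("original.jpg");
        String suspect = sha256Hex("suspect.jpg");
        // Any change to the embedded data changes the digest, even though
        // the two images look identical on screen.
        System.out.println(original.equals(suspect)
                ? "Hashes match" : "Hashes differ: possible hidden payload");
    }
}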
Volatile data
When seizing evidence, if the machine is still active, any information stored solely in RAM that is not recovered before powering down may be lost. One application of "live analysis" is to recover RAM data (for example, using Microsoft's COFEE tool, windd or WindowsSCOPE) prior to removing an exhibit. CaptureGUARD Gateway bypasses Windows login for locked computers, allowing for the analysis and acquisition of physical memory on a locked computer.
RAM can be analyzed for prior content after power loss, because the electrical charge stored in the memory cells takes time to dissipate, an effect exploited by the cold boot attack. The length of time that data is recoverable is increased by low temperatures and higher cell voltages. Holding unpowered RAM below −60 °C helps preserve residual data by an order of magnitude, improving the chances of successful recovery. However, it can be impractical to do this during a field examination.
Some of the tools needed to extract volatile data, however, require that a computer be in a forensic lab, both to maintain a legitimate chain of evidence, and to facilitate work on the machine. If necessary, law enforcement applies techniques to move a live, running desktop computer. These include a mouse jiggler, which moves the mouse rapidly in small movements and prevents the computer from going to sleep accidentally. Usually, an uninterruptible power supply (UPS) provides power during transit.
However, one of the easiest ways to capture data is by actually saving the RAM data to disk. Various file systems that have journaling features, such as NTFS and ReiserFS, keep a large portion of the RAM data on the main storage media during operation, and these page files can be reassembled to reconstruct what was in RAM at that time.
Analysis tools
A number of open source and commercial tools exist for computer forensics investigation. Typical forensic analysis includes a manual review of material on the media, reviewing the Windows registry for suspect information, discovering and cracking passwords, keyword searches for topics related to the crime, and extracting e-mail and pictures for review.









CHAPTER 2
SYSTEM ANALYSIS
In this phase a detailed appraisal of the existing system is given. This appraisal covers how the system works and what it does. It also includes finding out, in more detail, what the problems with the existing system are and what users require from the new system, or any new changes to the system. The output of this phase is a detailed model of the system. The model describes the system functions and data, and the system information flow. The phase also produces a detailed set of user requirements, and these requirements are used to set objectives for the new system.
2.1 CURRENT SYSTEM:
The cloud providers are honest-but-curious: we assume they do not tamper with the applications deployed in the infrastructure, but data might be collected or leaked. A fully homomorphic encryption scheme could perform any type of computation over encrypted data; however, currently available fully homomorphic encryption schemes are not computationally practical.
Thus, partial homomorphic encryption schemes, which support certain operations over encrypted data, are typically used for practical solutions. Extending seminal work on secret sharing, existing schemes create multiple shares of the secret image and distribute the noise-like shared images among multiple cloud providers.
To recover the original image, k out of n shared images have to be retrieved. The images are shared in such a way that scaling and cropping operations can be performed on encrypted images.


2.2 SHORTCOMINGS OF THE CURRENT SYSTEM:
However, these approaches suffer from two main drawbacks:
(i) For each image, n shares are created and uploaded to the cloud, which increases the amount of storage required as well as the processing power (all the share images are processed and updated when an operation is performed); and
(ii) There is no protection against collusion: if k datacenters collude, then the original image can be retrieved.
2.3 PROPOSED SYSTEM:
In this project, we present 2DCrypt, a practical cloud-based multi-user encrypted domain image scaling and cropping framework based on the modified Paillier cryptosystem. For practical deployment, we propose a novel space-efficient tiling scheme for tile-level encrypted domain scaling and cropping operations.
Unlike the state-of-the-art Shamir's secret sharing-based schemes, the modified Paillier cryptosystem-based scheme neither requires more than one datacenter nor assumes that an adversary cannot access more than a certain number of datacenters at any time.
To overcome the high overheads resulting from encrypting an image pixel by pixel, we propose a novel space-efficient tiling scheme that allows tile-level scaling and cropping operations. Using this scheme, we can encrypt a tile of pixels rather than encrypting each pixel independently.


2.4 ADVANTAGE OF PROPOSED SYSTEM:
·        2DCrypt is therefore more suitable for practical scenarios and provides a stronger defense against colluding attacks. We also optimize the cryptosystem to further limit its storage requirement.
·        As a result, 2DCrypt requires approximately 40 times less storage than naive per-pixel encryption.














CHAPTER 3
LITERATURE SURVEY
3.1 OVERVIEW:
A literature review is an account of what has been published on a topic by accredited scholars and researchers. Occasionally you will be asked to write one as a separate assignment, but more often it is part of the introduction to an essay, research report, or thesis. In writing the literature review, your purpose is to convey to your reader what knowledge and ideas have been established on a topic, and what their strengths and weaknesses are. As a piece of writing, the literature review must be defined by a guiding concept (e.g., your research objective, the problem or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the material available, or a set of summaries.
Besides enlarging your knowledge about the topic, writing a literature review lets you gain and demonstrate skills in two areas:
1.     INFORMATION SEEKING: the ability to scan the literature efficiently, using manual or computerized methods, to identify a set of useful articles and books
2.     CRITICAL APPRAISAL: the ability to apply principles of analysis to identify unbiased and valid studies.


3.2 Scale me, crop me, know me not: Supporting scaling and cropping in secret image sharing
Abstract:
Secret image sharing is a method for distributing a secret image amongst n data stores, each storing a shadow image of the secret, such that the original secret image can be recovered only if any k out of the n shares are available. Existing secret image sharing schemes, however, do not support scaling and cropping operations on the shadow images, which are useful for zooming on large images. In this paper, we propose an image sharing scheme that allows the user to retrieve a scaled or cropped version of the secret image by operating directly on the shadow images, therefore reducing the amount of data sent from the data stores to the user. Results and analyses show that our scheme is highly secure, requires low computational cost, and supports a large number of scale factors with arbitrary crop.
3.3 Scaling and Cropping of Wavelet-Based Compressed Images in Hidden Domain
Authors - Kshitij Kansal, Manoranjan Mohanty

Abstract

With the rapid advancement of cloud computing, the use of third-party cloud datacenters for storing and processing (e.g., scaling and cropping) personal and critical images is becoming more common. For storage and bandwidth efficiency, the images are almost always compressed. Although cloud-based imaging has many advantages, security and privacy remain major issues. One way to address these two issues is to use Shamir's (k, n) secret sharing-based secret image sharing schemes, which can distribute the secret image among n participants in such a way that no fewer than k (where k ≤ n) participants can know the image content. Existing secret image sharing schemes do not allow processing of a compressed image in the hidden domain. In this paper, we propose a scheme that can scale and crop a CDF (Cohen-Daubechies-Feauveau) wavelet-based compressed image (such as JPEG2000) in the encrypted domain by smartly applying secret sharing on the wavelet coefficients. Results and analyses show that our scheme is highly secure and has acceptable computational and data overheads.

3.4 Encrypted Domain DCT Based on Homomorphic Cryptosystems
Authors - Tiziano Bianchi, Alessandro Piva
Abstract
Signal processing in the encrypted domain (s.p.e.d.) appears to be an elegant solution in application scenarios where valuable signals must be protected from a possibly malicious processing device. In this paper, we consider the application of the Discrete Cosine Transform (DCT) to images encrypted by using an appropriate homomorphic cryptosystem. An s.p.e.d. 1-dimensional DCT is obtained by defining a convenient signal model and is extended to the 2-dimensional case by using separable processing of rows and columns. The bounds imposed by the cryptosystem on the size of the DCT and the arithmetic precision are derived, considering both the direct DCT algorithm and its fast version. Particular attention is given to the block-based DCT (BDCT), with emphasis on the possibility of lowering the computational burden by parallel application of the s.p.e.d. DCT to different image blocks. The application of the s.p.e.d. 2D-DCT and 2D-BDCT to 8-bit greyscale images is analyzed, and a case study demonstrates the feasibility of the s.p.e.d. DCT in a practical scenario.

3.5 Confidentiality-Preserving Image Search: A Comparative Study Between Homomorphic Encryption and Distance-Preserving Randomization

Abstract:
Recent years have seen increasing popularity of storing and managing personal multimedia data using online services. Preserving confidentiality of online personal data while offering efficient functionalities thus becomes an important and pressing research issue. In this paper, we study the problem of content-based search of image data archived online while preserving content confidentiality. The problem has different settings from those typically considered in the secure computation literature, as it deals with data in rank-ordered search, and has a different security-efficiency requirement. Secure computation techniques, such as homomorphic encryption, can potentially be used in this application, at a cost of high computational and communication complexity. Alternatively, efficient techniques based on randomizing visual feature and search indexes have been proposed recently to enable similarity comparison between encrypted images. This paper focuses on comparing these two major paradigms of techniques, namely, homomorphic encryption-based techniques and feature/index randomization-based techniques, for confidentiality-preserving image search. We develop novel and systematic metrics to quantitatively evaluate security strength in this unique type of data and applications. We compare these two paradigms of techniques in terms of their search performance, security strength, and computational efficiency. The insights obtained through this paper and comparison will help design practical algorithms appropriate for privacy-aware cloud multimedia systems.

3.6 A blind digital watermarking for color medical images based on PCA

Abstract:
In this paper, we propose a new robust digital image blind watermark scheme that is used to protect color medical images. In this scheme, K-L transform is applied to an RGB medical image and the binary watermark is embedded into low frequency sub-band of DWT of the principal component of medical images. The embedding positions are chosen according to the human visual system (HVS). The embedding method is based on the relationship between center coefficients and the mean values of the nearest neighborhood coefficients. The watermark is extracted from the watermarked image only according to the relationship. The experimental results show that the proposed algorithm is robust, imperceptible and practicable.










CHAPTER 4
IMPLEMENTATION
Implementation is the stage of the project when the theoretical design is turned into a working system. Thus it can be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective.
The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of changeover methods.
4.1 SYSTEM ARCHITECTURE
Fig. 1: The architecture of 2DCrypt: a cloud-based secure image scaling and cropping system

4.2 MODULES:
A module is a part of a program. Programs are composed of one or more independently developed modules that are not combined until the program is linked. A single module can contain one or several routines.
Our project modules are given below:
(i) Image Outsourcer
(ii) Cloud Server
(iii) Image User
(iv) Key Management Authority (KMA)
4.2.1 Image Outsourcer:
This entity outsources the storing and processing (i.e., scaling and cropping) of images to a third-party cloud provider. It could be an individual or an organization, such as a hospital. In the latter case, several users can act as Image Outsourcers. Typically, this entity owns the image. An Image Outsourcer is responsible for addressing the security and privacy concerns attached to image outsourcing. To achieve this, the Image Outsourcer encrypts the image before sending it to the cloud datacenter. Further, the Image Outsourcer can store new images on a cloud server, delete/modify existing ones, and manage access control policies (such as read/write access rights) to regulate access to the images stored on the cloud server.
4.2.2 Cloud Server:
It is the part of the infrastructure provided by a cloud service provider, such as Amazon S3, for storing and processing images. It stores encrypted images and the access policies used to regulate access to the images. After making authorization checks, it retrieves a requested image from its image store. If the access request satisfies the access policies, it scales and/or crops images in an encrypted manner, i.e., without decrypting them.
4.2.3 Image User:
An Image User is authorized by the Image Outsourcer to access the requested image stored in an encrypted form on the Cloud Server. Depending on authorization, an Image User can issue either a read request or a process request (i.e., scaling and cropping operations). In both cases, the Image User decrypts the image returned by the request. Note that in a multi-user setting, (i) an Image User can modify an image that will be accessible by other Image Users, or (ii) an Image User can access images processed by other Image Users. In both cases, Image Users do not need to share any keying material.
4.2.4 Key Management Authority (KMA):
The KMA generates and revokes keys. It generates a client and server key pair for each user, be it an Image Outsourcer or an Image User. The client-side and server-side keys are securely transmitted to the user and the Cloud Server, respectively. Whenever required (say, when keys are lost or stolen), the KMA revokes the keys from the system with the support of the Cloud Server.





CHAPTER 5
5.1 METHODOLOGY
2DCrypt: the Paillier cryptosystem-based proxy encryption
·        Init(1^k). The KMA runs the initialization algorithm in order to generate public parameters Params and a master secret key set MSK. It takes as input a security parameter k and generates two prime numbers p and q of bit-length k. It computes n = pq. The secret key is x ∈ [1, n²/2].
·        KeyGen(MSK, i). The KMA runs the key generation algorithm to generate keying material for users in the system. For each user i, this algorithm generates two key sets KUi and KSi by choosing a random xi1 from [1, n²/2]. Then it calculates xi2 = x − xi1, and transmits KUi = xi1 securely to user i and KSi = (i, xi2) to the server.
·        ClientEnc(D, KUi). A user i runs the data encryption algorithm to encrypt the data D using her key KUi. To encrypt the data D ∈ Zn, the user client chooses a random r ∈ [1, n/4].
·        UserDec(Ej(D), KUj). The user runs this algorithm to decrypt the data.
·        Revoke(i). The server runs this algorithm to revoke user i's access to the data. Given the user i, the server removes KSi from the Key Store as follows: KS ← KS \ KSi.
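For illustration only, the following is a minimal Java sketch of the textbook single-key Paillier cryptosystem, not the modified multi-user variant with split keys described above. It shows the two homomorphic properties that 2DCrypt relies on: multiplying ciphertexts adds the plaintexts, and exponentiating a ciphertext multiplies the plaintext by a scalar.

// A minimal sketch of the textbook (single-key) Paillier cryptosystem;
// the class name and parameters are illustrative, not the 2DCrypt scheme.
import java.math.BigInteger;
import java.security.SecureRandom;

public class Paillier {
    private static final SecureRandom RND = new SecureRandom();
    private final BigInteger n, nSquared, lambda, mu;

    public Paillier(int bits) {
        BigInteger p = BigInteger.probablePrime(bits, RND);
        BigInteger q = BigInteger.probablePrime(bits, RND);
        n = p.multiply(q);
        nSquared = n.multiply(n);
        BigInteger pm1 = p.subtract(BigInteger.ONE);
        BigInteger qm1 = q.subtract(BigInteger.ONE);
        lambda = pm1.multiply(qm1).divide(pm1.gcd(qm1)); // lcm(p-1, q-1)
        mu = lambda.modInverse(n);
    }

    // E(m) = (1+n)^m * r^n mod n^2, using (1+n)^m = 1 + m*n (mod n^2)
    public BigInteger encrypt(BigInteger m) {
        BigInteger r = new BigInteger(n.bitLength() - 1, RND).add(BigInteger.ONE);
        BigInteger gm = BigInteger.ONE.add(m.multiply(n)).mod(nSquared);
        return gm.multiply(r.modPow(n, nSquared)).mod(nSquared);
    }

    // D(c) = L(c^lambda mod n^2) * mu mod n, where L(u) = (u-1)/n
    public BigInteger decrypt(BigInteger c) {
        BigInteger u = c.modPow(lambda, nSquared);
        return u.subtract(BigInteger.ONE).divide(n).multiply(mu).mod(n);
    }

    // E(m1) * E(m2) = E(m1 + m2): homomorphic addition
    public BigInteger add(BigInteger c1, BigInteger c2) {
        return c1.multiply(c2).mod(nSquared);
    }

    // E(m)^k = E(k * m): homomorphic scalar multiplication
    public BigInteger scale(BigInteger c, BigInteger k) {
        return c.modPow(k, nSquared);
    }
}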



5.2 OBJECTIVE AND MOTIVATION
OBJECTIVE
In this work, we focus on dynamic scaling and cropping operations on encrypted images. These two operations can be combined to implement zooming and panning operations, which are necessary to navigate through large images (such as maps). In this way, no information contained in the images can be leaked to the cloud servers, and at the same time, users can fully exploit the cloud model by delegating most of the computation to the cloud.
MOTIVATION
The main idea behind 2DCrypt is to employ the Paillier cryptosystem-based proxy encryption to encrypt images before storing them in the cloud. This version of the Paillier cryptosystem supports re-encryption, and is homomorphic with respect to addition and scalar multiplication. Therefore, we can apply this cryptosystem to encrypt an image that will be bilinearly scaled, since bilinear scaling requires only addition and scalar multiplication operations. Cropping of the encrypted image is easy, since this cryptosystem does not disturb the pixel positions, i.e., it allows us to obtain the corresponding pixel position after decryption. To provide multi-user support, we extend the modified Paillier cryptosystem such that each user has her own key to encrypt or decrypt the images. Thus, adding a new user or removing an existing one does not require re-encryption of existing images stored in the cloud.
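To make the bilinear-scaling argument concrete, the snippet below, using the Paillier sketch from the methodology section, computes one bilinearly interpolated pixel entirely on ciphertexts. The pixel values and the fixed-point weights (scaled by 16 so that they are integers) are illustrative only.

// Bilinear interpolation on encrypted pixels, built only from homomorphic
// additions and scalar multiplications; all concrete values are examples.
import java.math.BigInteger;

public class EncryptedBilinearDemo {
    public static void main(String[] args) {
        Paillier ph = new Paillier(512);
        // Four neighbouring pixel values, encrypted individually for clarity.
        BigInteger c11 = ph.encrypt(BigInteger.valueOf(100));
        BigInteger c12 = ph.encrypt(BigInteger.valueOf(120));
        BigInteger c21 = ph.encrypt(BigInteger.valueOf(140));
        BigInteger c22 = ph.encrypt(BigInteger.valueOf(160));
        // Fixed-point bilinear weights for the sample point (0.25, 0.5),
        // scaled by 16 so that they are integers: 6, 2, 6, 2 (sum = 16).
        BigInteger enc = ph.add(
                ph.add(ph.scale(c11, BigInteger.valueOf(6)),
                       ph.scale(c12, BigInteger.valueOf(2))),
                ph.add(ph.scale(c21, BigInteger.valueOf(6)),
                       ph.scale(c22, BigInteger.valueOf(2))));
        // The user decrypts and divides by the weight sum to get the pixel:
        // (6*100 + 2*120 + 6*140 + 2*160) / 16 = 125.
        System.out.println(ph.decrypt(enc).intValue() / 16);
    }
}

The server performs only the homomorphic operations; the user decrypts and divides by the weight sum, so no plaintext pixel is ever exposed to the cloud.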



CHAPTER 6
SYSTEM SPECIFICATION
The purpose of system requirement specification is to produce the specification analysis of the task and also to establish complete information about the requirement, behavior and other constraints such as functional performance and so on. The goal of system requirement specification is to completely specify the technical requirements for the product in a concise and unambiguous manner.
6.1 HARDWARE REQUIREMENTS
      Processor        -  Pentium III
      Speed            -  1.1 GHz
      RAM              -  256 MB (min)
      Hard Disk        -  20 GB
      Floppy Drive     -  1.44 MB
      Keyboard         -  Standard Windows Keyboard
      Mouse            -  Two or Three Button Mouse
      Monitor          -  SVGA
6.2 SOFTWARE REQUIREMENTS
      Operating System  :  Windows 8
      Front End         :  Java
      Database          :  MySQL


CHAPTER 7
SOFTWARE ENVIRONMENT
 JAVA:
Java is a programming language created by James Gosling at Sun Microsystems (Sun) in 1991. The goal of Java is to write a program once and then run it on multiple operating systems. The first publicly available version of Java (Java 1.0) was released in 1995. Sun Microsystems was acquired by the Oracle Corporation in 2010, and Oracle now has stewardship of Java. In 2006 Sun started to make Java available under the GNU General Public License (GPL); Oracle continues this project, called OpenJDK.
PLATFORM INDEPENDENT
Unlike many other programming languages, including C and C++, Java is not compiled into platform-specific machine code but rather into platform-independent byte code. This byte code is distributed over the web and interpreted by the Java Virtual Machine (JVM) on whichever platform it is being run.
 JAVA VIRTUAL MACHINE
Java was designed with the concept of 'write once, run everywhere', and the Java Virtual Machine plays the central role in this concept. The JVM is the environment in which Java programs execute. It is software implemented on top of the real hardware and operating system. When source code (.java files) is compiled, it is translated into byte codes, which are placed into .class files. The JVM executes these byte codes, so Java byte code can be thought of as the machine language of the JVM. A JVM can either interpret the byte code one instruction at a time, or the byte code can be compiled further for the real microprocessor using what is called a just-in-time compiler. The JVM must be implemented on a particular platform before compiled programs can run on that platform.
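For example, a single source file is compiled once to bytecode, and the resulting .class file runs unchanged on any platform with a JVM:

// HelloWorld.java - compile with "javac HelloWorld.java" to produce the
// bytecode file HelloWorld.class, then run it on any JVM with "java HelloWorld".
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the JVM");
    }
}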
 JAVA DEVELOPMENT KIT
The Java Development Kit (JDK) is a Sun product aimed at Java developers. Since the introduction of Java, it has been by far the most widely used Java software development kit (SDK). It contains a Java compiler, a full copy of the Java Runtime Environment (JRE), and many other important development tools.
TOOLS
You will need a Pentium 200-MHz computer with a minimum of 64 MB of RAM (128 MB of RAM recommended).
You will also need the following software:
·        Linux 7.1 or Windows XP/7/8 operating system
·        Java JDK 8
·        Microsoft Notepad or any other text editor
FEATURES
·        Reusability of Code
·        Emphasis on data rather than procedure
·        Data is hidden and cannot be accessed by external functions
·        Objects can communicate with each other through functions
·        New data and functions can be easily added      
    
 What is a Java Web Application?
A Java web application generates interactive web pages containing various types of markup language (HTML, XML, and so on) and dynamic content. It typically comprises web components such as JavaServer Pages (JSP), servlets and JavaBeans to modify and temporarily store data, interact with databases and web services, and render content in response to client requests.
Because many of the tasks involved in web application development can be repetitive or require a surplus of boilerplate code, web frameworks can be applied to alleviate the overhead associated with common activities. For example, many frameworks, such as JavaServer Faces, provide libraries for templating pages and session management, and often promote code reuse.

 What is Java EE?
Java EE (Enterprise Edition) is a widely used platform containing a set of coordinated technologies that significantly reduce the cost and complexity of developing, deploying, and managing multi-tier, server-centric applications. Java EE builds upon the Java SE platform and provides a set of APIs (application programming interfaces) for developing and running portable, robust, scalable, reliable and secure server-side applications.
Some of the fundamental components of Java EE include:
  • Enterprise JavaBeans (EJB): a managed, server-side component architecture used to encapsulate the business logic of an application. EJB technology enables rapid and simplified development of distributed, transactional, secure and portable applications based on Java technology.
  • Java Persistence API (JPA): a framework that allows developers to manage data using object-relational mapping (ORM) in applications built on the Java Platform.

 JavaScript and Ajax Development
JavaScript is an object-oriented scripting language primarily used in client-side interfaces for web applications. Ajax (Asynchronous JavaScript and XML) is a Web 2.0 technique that allows changes to occur in a web page without the need to perform a page refresh. JavaScript toolkits can be leveraged to implement Ajax-enabled components and functionality in web pages.

 Web Server and Client
A web server is software that can process a client request and send the response back to the client. For example, Apache is one of the most widely used web servers. A web server runs on some physical machine and listens for client requests on a specific port.
A web client is software that helps in communicating with the server. Some of the most widely used web clients are Firefox, Google Chrome, Safari, etc. When we request something from a server (through a URL), the web client takes care of creating the request, sending it to the server, and then parsing the server response and presenting it to the user.

 HTML and HTTP
The web server and web client are two separate pieces of software, so there must be some common language for communication. HTML, which stands for HyperText Markup Language, is the common language between server and client.
The web server and client also need a common communication protocol; HTTP (HyperText Transfer Protocol) is the communication protocol between server and client. HTTP runs on top of the TCP/IP communication protocol.
Some of the important parts of HTTP Request are:
  • HTTP Method – action to be performed, usually GET, POST, PUT etc.
  • URL – Page to access
  • Form Parameters – similar to arguments in a Java method; for example, user and password details from a login page.
Sample HTTP Request:
GET /FirstServletProject/jsps/hello.jsp HTTP/1.1
Host: localhost:8080
Cache-Control: no-cache
Some of the important parts of an HTTP response are (a sample follows this list):
  • Status Code – an integer indicating whether the request was successful or not. Some of the well-known status codes are 200 for success, 404 for Not Found and 403 for Access Forbidden.
  • Content Type – text, html, image, pdf, etc.; also known as the MIME type.
  • Content – the actual data that is rendered by the client and shown to the user.
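Sample HTTP Response, mirroring the request above (the body and length are illustrative):
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 55

<html><body><h1>Hello from hello.jsp</h1></body></html>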

 MIME Type or Content Type: If you look at the sample HTTP response header above, it contains the tag "Content-Type". This is also called the MIME type; the server sends it to the client to indicate the kind of data it is sending, which helps the client render the data for the user. Some of the most used MIME types are text/html, text/xml, application/xml, etc.

 

Understanding URL

URL is an acronym for Uniform Resource Locator; it is used to locate the server and resource. Every resource on the web has its own unique address. Let's see the parts of a URL with an example.
http://localhost:8080/FirstServletProject/jsps/hello.jsp

http:// – This is the first part of the URL and specifies the communication protocol to be used in server-client communication.

localhost – The unique address of the server; most of the time it is the hostname of the server, which maps to a unique IP address. Sometimes multiple hostnames point to the same IP address, and the web server's virtual host configuration takes care of sending the request to the particular server instance.

8080 – This is the port on which the server is listening; it is optional, and if we don't provide it in the URL then the request goes to the default port of the protocol. Port numbers 0 to 1023 are reserved for well-known services, for example 80 for HTTP, 443 for HTTPS, 21 for FTP, etc.

FirstServletProject/jsps/hello.jsp – The resource requested from the server. It can be static HTML, a PDF, a JSP, a servlet, PHP, etc.

 

 Why do we need Servlets and JSPs?

Web servers are good at serving static content such as HTML pages, but they don't know how to generate dynamic content or how to save data into databases, so we need another tool to generate dynamic content. There are several programming languages and frameworks for dynamic content, such as PHP, Python, Ruby on Rails, Java Servlets and JSPs.
Java Servlet and JSPs are server side technologies to extend the capability of web servers by providing support for dynamic response and data persistence.

 

Web Container

Tomcat is a web container. When a request is made from a client to the web server, the web server passes the request to the web container, and it is the web container's job to find the correct resource to handle the request (a servlet or JSP), use the response from that resource to generate the response, and provide it to the web server. The web server then sends the response back to the client.
When the web container gets a request for a servlet, it creates two objects, HttpServletRequest and HttpServletResponse. It then finds the correct servlet based on the URL and creates a thread for the request. It then invokes the servlet's service() method and, based on the HTTP method, service() invokes the doGet() or doPost() method. The servlet method generates the dynamic page and writes it to the response. Once the servlet thread is complete, the container converts the response to an HTTP response and sends it back to the client.
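As a sketch of this flow, here is a minimal servlet using the standard javax.servlet API (the class name is illustrative, and a URL mapping via web.xml or @WebServlet is assumed):

// A minimal servlet: the container creates the request/response objects,
// selects this servlet by URL, and service() dispatches to doGet().
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/html");  // the MIME type sent to the client
        resp.getWriter().println("<h1>Hello from HelloServlet</h1>");
    }
}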
Some of the important work done by a web container is:
  • Communication Support – The container provides an easy way of communication between the web server and the servlets and JSPs. Because of the container, we don't need to build a server socket to listen for requests from the web server, parse the request and generate the response. All these important and complex tasks are done by the container, and all we need to focus on is the business logic of our application.
  • Lifecycle and Resource Management – The container manages the life cycle of a servlet. It takes care of loading servlets into memory, initializing them, invoking their methods and destroying them. The container also provides utilities such as JNDI for resource pooling and management.
  • Multithreading Support – The container creates a new thread for every request to the servlet, and when the request is processed the thread dies. Servlets are therefore not re-initialized for each request, which saves time and memory.
  • JSP Support – JSPs don't look like normal Java classes, so the web container provides support for JSP. Every JSP in the application is compiled by the container and converted to a servlet, and the container then manages it like other servlets.
  • Miscellaneous Tasks – The web container manages the resource pool, performs memory optimizations, runs the garbage collector, provides security configurations, and supports multiple applications, hot deployment and several other tasks behind the scenes that make our life easier.
             










CHAPTER 8
SYSTEM DESIGN
8.1 USE CASE DIAGRAM:
To model a system, the most important aspect is to capture its dynamic behaviour, that is, the behaviour of the system when it is running/operating. Static behaviour alone is not sufficient to model a system; dynamic behaviour is more important than static behaviour.
In UML there are five diagrams available to model the dynamic nature of a system, and the use case diagram is one of them. Since use case diagrams are dynamic in nature, there must be some internal or external factors for making the interactions; these internal and external agents are known as actors. Use case diagrams thus consist of actors, use cases and their relationships.
The diagram is used to model the system/subsystem of an application. A single use case diagram captures a particular functionality of a system, so to model the entire system a number of use case diagrams are used. A use case diagram at its simplest is a representation of a user's interaction with the system, depicting the specifications of a use case. A use case diagram can portray the different types of users of a system and the cases, and will often be accompanied by other types of diagrams as well.



8.2 CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains what information.



8.3 SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.









8.4 COLLABORATION DIAGRAM

8.5 ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.






8.6 TABLE DESIGN:
Register


Upload



Key Request
Hacker









CHAPTER 9
INPUT DESIGN AND OUTPUT DESIGN
INPUT DESIGN
The input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing, which can be achieved either by having the computer read data from a written or printed document or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
Ø What data should be given as input?
Ø How should the data be arranged or coded?
Ø The dialog to guide the operating personnel in providing input.
Ø Methods for preparing input validations and steps to follow when errors occur.
OBJECTIVES
1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the management the correct direction for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with the help of screens, and appropriate messages are provided as needed so that the user is never left confused. Thus the objective of input design is to create an input layout that is easy to follow.


OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, as well as the hard copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.
1. Designing computer output should proceed in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create the document, report, or other format that contains information produced by the system.
The output form of an information system should accomplish one or more of the following objectives.
v Convey information about past activities, current status or projections of the future.
v Signal important events, opportunities, problems, or warnings.
v Trigger an action.
v Confirm an action.






















CHAPTER 10
SYSTEM STUDY
FEASIBILITY STUDY:
 The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
¨     Economical feasibility
¨     Technical feasibility
¨     Social feasibility
ECONOMICAL FEASIBILITY:                 
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

TECHNICAL FEASIBILITY:            

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.

SOCIAL FEASIBILITY:      
           The aspect of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.













CHAPTER 11
SYSTEM TESTING
            The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.
TYPES OF TESTS:
          Testing is the process of trying to discover every conceivable fault or weakness in a work product. The different types of testing are given below:
UNIT TESTING:
          Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and is done after the completion of an individual unit and before integration.
This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

INTEGRATION TESTING:
             Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
FUNCTIONAL TEST:
        Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:

Valid Input           :  identified classes of valid input must be accepted.
Invalid Input         :  identified classes of invalid input must be rejected.
Functions             :  identified functions must be exercised.
Output                :  identified classes of application outputs must be exercised.
Systems/Procedures    :  interfacing systems or procedures must be invoked.

     Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage of identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
SYSTEM TEST:
     System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

WHITE BOX TESTING:
        White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
BLACK BOX TESTING:
        Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
UNIT TESTING:
          Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
Test strategy and approach
          Field testing will be performed manually and functional tests will be written in detail.
Test objectives
·        All field entries must work properly.
·        Pages must be activated from the identified link.
·        The entry screen, messages and responses must not be delayed.
Features to be tested
·        Verify that the entries are of the correct format (a validation sketch follows this list).
·        No duplicate entries should be allowed.
·        All links should take the user to the correct page.
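A minimal sketch of the first two checks is given below; the EntryValidator class name and the e-mail pattern are illustrative assumptions, not part of the project code.

import java.util.HashSet;
import java.util.Set;
import java.util.regex.Pattern;

// Hypothetical helper backing the first two checks: format validation
// and duplicate rejection. Not part of the project code.
public class EntryValidator {

    private static final Pattern EMAIL =
            Pattern.compile("^[\\w.+-]+@[\\w-]+\\.[\\w.]+$");

    private final Set<String> seen = new HashSet<String>();

    // True if the e-mail id matches the expected format.
    public boolean isValidEmail(String eid) {
        return eid != null && EMAIL.matcher(eid).matches();
    }

    // True the first time an id is seen; false for a duplicate entry.
    public boolean isNewEntry(String eid) {
        return seen.add(eid);
    }
}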
INTEGRATION TESTING:
          Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.
          The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.
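As a hedged sketch of such a check, the following JUnit 4 test exercises the registration insert and the session-key lookup against the same MySQL instance through the project's databaseconnection helper. It assumes the remaining registration columns accept NULL or defaults; the test data values are hypothetical.

import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.Test;

import databaseconnection.databaseconnection;

public class RegistrationIntegrationTest {

    // Drives the same insert the Register page performs and the same
    // lookup the Login page performs, so a mismatch between the two
    // (wrong column name, wrong key) surfaces as a failure here.
    @Test
    public void insertedUserCanBeFoundBySessionKey() throws Exception {
        Connection con = databaseconnection.getconnection();

        PreparedStatement ins = con.prepareStatement(
                "insert into registration (name, eid, pwd, sessionkey) values (?,?,?,?)");
        ins.setString(1, "it-test-user");
        ins.setString(2, "it@test.example");
        ins.setString(3, "it-test-pwd");
        ins.setString(4, "it-test-key");
        ins.executeUpdate();

        PreparedStatement sel = con.prepareStatement(
                "select id from registration where name=? and sessionkey=?");
        sel.setString(1, "it-test-user");
        sel.setString(2, "it-test-key");
        ResultSet rs = sel.executeQuery();

        assertTrue("registered user should be retrievable", rs.next());
    }
}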
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
ACCEPTANCE TESTING:
          User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.

CHAPTER 12

FUTURE WORK

We believe that 2DCrypt can be extended in multiple ways. An obvious direction is to extend this work to compressed images. Another direction is to apply our idea to security issues in more specialized images, such as histopathology images and GIS maps; it would be interesting to investigate whether the properties of these specialized images can be exploited to further decrease overheads. A further possible extension is to video processing in encrypted domains.











CHAPTER 13
SCREENSHOTS

[Screenshot: HomePage]

[Screenshot: Register]

[Screenshot: Mail send]

[Screenshot: Login]

[Screenshot: Share Message]

[Screenshot: View Encrypted Message]

[Screenshot: Decrypt Message]

Source code
Register
<%@page import="java.util.UUID"%>
<%@page import="java.net.HttpURLConnection"%>
<%@page import="java.net.URL"%>
<%@page import="java.net.URLEncoder"%>
<%@page import="design.mailsession"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.util.Random"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="java.net.InetAddress"%>
<%@page import="java.sql.Connection"%>
<%@page import="java.sql.*" import="databaseconnection.*"%>
<%
String name1=null,username1=null,emailid1=null,password1=null,confirmpassword1=null,dob1=null,loc=null;
String country1=null,gender1=null,mobilenum1=null,utype=null;
name1=request.getParameter("name");
session.setAttribute("name",name1);
emailid1=request.getParameter("eid");
session.setAttribute("eid",emailid1);
password1=request.getParameter("pwd");
loc=request.getParameter("k");
country1=request.getParameter("mno");
gender1=request.getParameter("gender");
String click="NOTHING";
String status="REGISTER";
String secretkey = UUID.randomUUID().toString();
System.out.println("SESS = " + secretkey);
System.out.println("WEBSITE KEY: "+secretkey);
PreparedStatement ps=null;
Connection conn=null;
try
{
mailsession m=new mailsession();
m.mailsend(secretkey, emailid1);
Connection con=databaseconnection.getconnection();
ps=con.prepareStatement ("insert into registration (name,eid,pwd,mno,gender,location,click,status,sessionkey) values (?,?,?,?,?,?,?,?,?)");
ps.setString(1,name1);
ps.setString(2,emailid1);
ps.setString(3,password1);
ps.setString(4,country1);
ps.setString(5,gender1);
ps.setString(6,loc);
ps.setString(7,click);
ps.setString(8,status);
ps.setString(9,secretkey);
ps.executeUpdate();
}
catch(Exception e)
{
System.out.println(e.getMessage());
}
%>
<%
String mno=request.getParameter("mno");
System.out.println(" mno"+ mno);
System.out.println(" secretkey"+ secretkey);
try {
String recipient = "91"+mno;
String message = secretkey;
String requestUrl  = "http://bulksms.mysmsmantra.com:8080/WebSMS/SMSAPI.jsp?username=micinfsms&password=1049242150&sendername=micinf&mobileno="+URLEncoder.encode(recipient)+"&message="+URLEncoder.encode(message)+"";
URL url = new URL(requestUrl);
HttpURLConnection uc = (HttpURLConnection)url.openConnection();
out.println(uc.getResponseMessage());
out.println("message send");
uc.disconnect();
response.sendRedirect("regsuccess.jsp");
} catch(Exception ex) {
out.println(ex.getMessage());
}
%>
Login
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="java.sql.Connection"%>
<%
String id=null,emailid=null;
String user = request.getParameter("user");
String email = request.getParameter("email");
String pass = request.getParameter("pass");
String sessionkey = request.getParameter("sessionkey");
session.setAttribute("consumerlog", user);
System.out.println(user);
Class.forName("com.mysql.jdbc.Driver");
Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/scaling", "root", "root");
Statement st = con.createStatement();
String Q = "select * from registration where name= '"+user+"' and pwd= '"+pass+"' and sessionkey='"+sessionkey+"'  ";
ResultSet rs = st.executeQuery(Q);
if(rs.next())
{
id=rs.getString("id");
session.setAttribute("id", id);
System.out.println(id);
emailid=rs.getString("eid");
session.setAttribute("emailid", emailid);
System.out.println(emailid);
response.sendRedirect("home.jsp");
}
else{
response.sendRedirect("loginhome.jsp");
}

%>
Upload
<%@page import="design.mailsession"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="secureoutsourced.util.EmailFinder"%>
<%@page import="secureoutsourced.util.MailSender"%>
<%@page import="secureoutsourced.DB.DbConnector"%>
<%@page import="secureoutsourced.util.TrippleDes"%>
<%@page import="java.io.InputStream"%>
<%@page import="java.sql.Connection"%>
<%@page import="java.sql.PreparedStatement"%>
<%@page import="javax.print.attribute.standard.Fidelity"%>
<%@page import="com.sun.crypto.provider.RSACipher"%>
<%@page import="org.apache.struts2.components.Else"%>
<%@page import="java.sql.Statement"%>
<%@page import="databaseconnection.databaseconnection"%>
<%@ page import="java.sql.*" import="databaseconnection.*"%>
<%
String name = request.getParameter("name");
session.setAttribute("name",name);
String eid=request.getParameter("eid");
session.setAttribute("eid",eid);
System.out.println(eid);
System.out.println(name);
String sessionKey = request.getParameter("sessionKey");
String title = new TrippleDes(sessionKey).encrypt(request.getParameter("title"));
session.setAttribute("title",title);
String tag = new TrippleDes(sessionKey).encrypt(request.getParameter("tag"));
session.setAttribute("tag",tag);
String about = new TrippleDes(sessionKey).encrypt(request.getParameter("about"));
session.setAttribute("about",about);
String content = new TrippleDes(sessionKey).encrypt(request.getParameter("content"));
session.setAttribute("content",content);
//String receiver = request.getParameter("select1");
System.out.println("key is"+sessionKey);
System.out.println("title is"+title);
System.out.println("tag is"+tag);
System.out.println("about is"+about);
System.out.println("content is" +content);
try{
mailsession m=new mailsession();
m.mailsend(sessionKey, eid);
Connection con = databaseconnection.getconnection();
Statement st = con.createStatement();
String sql="SELECT * FROM registration where sessionKey='"+sessionKey+"'";
ResultSet rs=st.executeQuery(sql);
while(rs.next()){PreparedStatement ps=con.prepareStatement("Update upload1 set title ='"+title+"',tag ='"+tag+"',about ='"+about+"', content ='"+content+"',eid ='"+eid+"' where name='"+name+"' ");
//ps.setInt(1,hit);
int x=ps.executeUpdate();
response.sendRedirect("shareview.jsp?");
}
}
catch (Exception ex)
{
out.println(ex.getMessage());
}
Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/scaling","root","root");
PreparedStatement ps=con.prepareStatement("Update upload1 set title ='"+title+"',tag ='"+tag+"',about ='"+about+"', content ='"+content+"' where name='"+name+"' ");
%>
Decrypt
<%@page import="databaseconnection.databaseconnection"%>
<%@page import="secureoutsourced.DB.DbConnector"%>
<%@page import="secureoutsourced.util.TrippleDes"%>
<%@page import="java.io.OutputStream"%>
<%@page import="java.io.FileOutputStream"%>
<%@page import="java.io.InputStream"%>
<%@page import="java.io.File"%>
<%@page import="java.sql.Connection"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Blob"%>
<%
String name=(String)session.getAttribute("name");
System.out.println(name);
String decryptionkey=(String)session.getAttribute("decryptionkey");
System.out.println(decryptionkey);
//String title=(String)session.getAttribute("title");
//System.out.println(tile);
// String key =  request.getParameter("decryptionkey");
String title = new TrippleDes(decryptionkey).decrypt((String)session.getAttribute("title"));
String tag = new TrippleDes(decryptionkey).decrypt((String)session.getAttribute("tag"));
String about = new TrippleDes(decryptionkey).decrypt((String)session.getAttribute("about"));
String content = new TrippleDes(decryptionkey).decrypt((String)session.getAttribute("content"));
//  String email = request.getParameter("email");
//String name = request.getParameter("name");
//session.setAttribute("name",name);
%>
<%

String image=request.getParameter("image");
session.setAttribute("image", image);
int count=0,rank;                          String s1="",s2="",s3="",s4="",s5="",s6="",s7="",s8="",s9,s10,s11,s12,s13="";
int i=0,j=0;
String ii="";
try{
// String connectionURL = "jdbc:mysql://localhost:3306/upload1?user=root&password=root";
Connection con = databaseconnection.getconnection();
Statement st = con.createStatement();
String sql="SELECT * FROM upload1 where name='"+name+"'";
ResultSet rs=st.executeQuery(sql);
while(rs.next())
{
ii=rs.getString("id");
//s2=rs.getString("name");
//s3=rs.getString("email");
//s4=rs.getString("mobile");
//s5=rs.getString("addr");
//s6=rs.getString("feature");
//s7=rs.getString("filename");
//s9=rs.getString("tag");
//s10=rs.getString("st");
//count=rs.getInt("count");
i=Integer.parseInt(ii);
session.setAttribute("id",i);
%>
<%
}
//connection.close();
}

catch(Exception e)
{
out.println(e.getMessage());
}
%>
Download
<%@page import="secureoutsourced.DB.DbConnector"%>
<%@page import="java.io.InputStream"%>
<%@page import="java.util.logging.Logger"%>
<%@page import="java.util.logging.Level"%>
<%@page import="java.sql.SQLException"%>
<%@page import="java.io.OutputStream"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.Connection"%>
<%@page import="java.sql.Blob"%>
<%
try {
String name = (String) session.getAttribute("name");
System.out.println(name);
Blob blob = null;
// String name = request.getParameter("name");
//String name=(String) session.getAttribute("name");

Connection con1 = DbConnector.getConnection();
Statement st1 = con1.createStatement();
String select = "select *from upload1 where name = '" + name + "'";
ResultSet rs1 = st1.executeQuery(select);
if (rs1.next()) {
blob = rs1.getBlob("image");
}
if (blob != null) {
// String name =name;
byte a[] = blob.getBytes(1, (int) blob.length());
response.setContentType("text/plain");
response.setHeader("Content-Disposition", "attachment; name=\"" + name + "\"");
OutputStream os = response.getOutputStream();
os.write(a);
os.close();
a = null;
}else
{
response.sendRedirect("index.jsp");
}
} catch (SQLException ex) {
ex.printStackTrace();
}
%>












CHAPTER 14
CONCLUSION
We addressed the problem of processing outsourced images without exposing their content by proposing 2DCrypt, a modified Paillier cryptosystem-based scheme that allows a cloud server to perform scaling and cropping operations without learning the image content. In 2DCrypt, users do not need to share keys for accessing the images stored in the cloud; 2DCrypt is therefore suitable for scenarios where it is not desirable for an image user to maintain per-image keys. Furthermore, 2DCrypt is more practical than existing schemes based on Shamir's secret sharing because it requires only one datacenter and does not assume that adversaries cannot collude by compromising a certain number of datacenters.
To make 2DCrypt practical, we proposed improvements that decrease the overheads resulting from the application of the modified Paillier cryptosystem. First, we proposed a space-efficient tiling scheme that allows the cloud to perform per-tile operations: instead of encrypting each pixel independently, we place a number of pixels in a tile and encrypt the tile as a whole. Second, we optimized the modified Paillier scheme to limit its storage requirement.
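To make the tiling idea concrete, the following sketch packs a small tile of 8-bit pixels into a single plaintext integer and encrypts it once with textbook Paillier (c = g^m * r^n mod n^2). It is a toy illustration only: 2DCrypt uses a modified multi-user variant of Paillier, and its actual packing reserves guard bits per pixel so that homomorphic operations cannot overflow into a neighbouring pixel.

import java.math.BigInteger;
import java.security.SecureRandom;

// Toy illustration of tile-level encryption: one Paillier ciphertext
// covers a whole tile of pixels instead of one ciphertext per pixel.
public class TilePaillierSketch {

    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();

        // Toy 512-bit modulus; a real deployment would use 2048+ bits.
        BigInteger p = BigInteger.probablePrime(256, rnd);
        BigInteger q = BigInteger.probablePrime(256, rnd);
        BigInteger n = p.multiply(q);
        BigInteger nsq = n.multiply(n);
        BigInteger g = n.add(BigInteger.ONE); // standard choice g = n + 1

        // Pack a 2x2 tile of 8-bit pixels into a single plaintext.
        // (2DCrypt additionally reserves guard bits per pixel, omitted here.)
        int[] tile = {120, 64, 200, 15};
        BigInteger m = BigInteger.ZERO;
        for (int pixel : tile) {
            m = m.shiftLeft(8).or(BigInteger.valueOf(pixel));
        }

        // Textbook Paillier encryption: c = g^m * r^n mod n^2,
        // with r random in [1, n-1] and coprime to n.
        BigInteger r;
        do {
            r = new BigInteger(n.bitLength(), rnd);
        } while (r.signum() == 0 || r.compareTo(n) >= 0
                || !r.gcd(n).equals(BigInteger.ONE));
        BigInteger c = g.modPow(m, nsq).multiply(r.modPow(n, nsq)).mod(nsq);

        System.out.println("one ciphertext for the whole tile, "
                + c.bitLength() + " bits");
    }
}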








REFERENCES
[1] C. Gentry, "A fully homomorphic encryption scheme," Ph.D. dissertation, Stanford University, Stanford, USA, 2009.
[2] M. Naehrig, K. Lauter, and V. Vaikuntanathan, “Can homomorphic encryption be practical?” in Proceedings of the 3rd ACM Workshop on Cloud Computing Security Workshop, 2011, pp. 113–124.
[3] A. Shamir, “How to share a secret,” Communications of the ACM, vol. 22, pp. 612–613, November 1979.
[4] M. Mohanty, W. T. Ooi, and P. K. Atrey, "Scale me, crop me, know me not: supporting scaling and cropping in secret image sharing," in Proceedings of the 2013 IEEE International Conference on Multimedia & Expo, San Jose, USA, 2013.
[5] K. Kansal, M. Mohanty, and P. K. Atrey, “Scaling and cropping of wavelet-based compressed images in hidden domain,” in MultiMedia Modeling, ser. Lecture Notes in Computer Science, 2015, vol. 8935, pp. 430–441.
[6] C.-C. Thien and J.-C. Lin, “Secret image sharing,” Computers and Graphics, vol. 26, pp. 765–770, October 2002.
[7] T. Bianchi, A. Piva, and M. Barni, “Encrypted domain DCT based on homomorphic cryptosystems,” EURASIP Journal on Multimedia and Information Security, vol. 2009, pp. 1:1–1:12, January 2009.
[8] X. Sun, “A blind digital watermarking for color medical images based on PCA,” in Proceedings of the IEEE International Conference on Wireless Communications, Networking and Information Security, Beijing, China, August 2010, pp. 421–427.
[9] N. K. Pareek, V. Patidar, and K. K. Sud, “Image encryption using chaotic logistic map,” Image and Vision Computing, vol. 24, pp. 926–934, September 2006.
[10] W. Lu, A. L. Varna, and M. Wu, “Confidentiality-preserving image search: A comparative study between homomorphic encryption and distance-preserving randomization,” IEEE Access, vol. 2, pp. 125–141, February 2014.
[11] C.-Y. Hsu, C.-S. Lu, and S.-C. Pei, “Image feature extraction in encrypted domain with privacy-preserving SIFT,” IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4593–4607, 2012.
[12] J. Yuan, S. Yu, and L. Guo, “SEISA: Secure and efficient encrypted image search with access control,” in IEEE Conference on Computer Communications, 2015, pp. 2083–2091.
[13] P. Paillier, “Public-key cryptosystems based on composite degree residuosity classes,” in Advances in Cryptology EUROCRYPT, 1999, vol. 1592, pp. 223–238.
[14] S. Goldwasser and S. Micali, “Probabilistic encryption,” Journal of Computer and System Sciences, vol. 28, no. 2, pp. 270–299, 1984.
[15] J. Benaloh and D. Tuinstra, “Receipt-free secret-ballot elections (Extended Abstract),” in Proceedings of the Twenty-sixth Annual ACM Symposium on Theory of Computing, 1994, pp. 544–553.
[16] R. L. Rivest, A. Shamir, and L. Adleman, “A method for obtaining digital signatures and public-key cryptosystems,” Communications of the ACM, vol. 21, pp. 120–126, February 1978.
[17] T. ElGamal, “A public key cryptosystem and a signature scheme based on discrete logarithms,” in Advances in Cryptology, ser. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1985, vol. 196, pp. 10–18.
[18] D. X. Song, D. Wagner, and A. Perrig, “Practical techniques for searches on encrypted data,” in IEEE Symposium on Security and Privacy, 2000, pp. 44–55.
[19] D. Boneh, G. D. Crescenzo, R. Ostrovsky, and G. Persiano, “Public key encryption with keyword search,” in Advances in Cryptology-Eurocrypt, 2004, pp. 506–522.
[20] P. Golle, J. Staddon, and B. Waters, “Secure conjunctive keyword search over encrypted data,” in Applied Cryptography and Network Security, 2004, pp. 31–45.


