A LOCALITY SENSITIVE LOW-RANK
MODEL
FOR IMAGE TAG COMPLETION
ABSTRACT
Many visual applications have benefited from the
outburst of web images, yet the imprecise and incomplete tags arbitrarily
provided by users, like the thorns of a rose, may hamper the performance of
retrieval or indexing systems relying on such data. In this paper, we propose a
novel locality sensitive low-rank model for image tag completion, which
approximates the global nonlinear model with a collection of local linear
models. To effectively infuse the idea of locality sensitivity, a simple and
effective pre-processing module is designed to learn a suitable representation
for data partition, and a global consensus regularizer is introduced to
mitigate the risk of overfitting. Meanwhile, low-rank matrix factorization is
employed for the local models, where the local geometry structures are preserved in
the low-dimensional representations of both tags and samples. Extensive
empirical evaluations conducted on three datasets demonstrate the effectiveness
and efficiency of the proposed method, which outperforms previous
ones by a large margin.
CHAPTER 1
INTRODUCTION
1.1
MULTIMEDIA
Multimedia is content that uses a combination of different content forms such as text, audio, images, animations, video and interactive content. Multimedia contrasts with media that use only
rudimentary computer displays such as text-only or traditional forms of printed
or hand-produced material. Multimedia can be recorded and played, displayed,
interacted with or accessed by information content processing devices, such as computerized and
electronic devices, but can also be part of a live performance. Multimedia
devices are electronic
media devices used to store and experience
multimedia content. Multimedia is distinguished from media in fine art; by including audio, for
example, it has a broader scope. The term "rich media" is synonymous with interactive multimedia. Hypermedia scales up the amount of media content in a multimedia
application.
Multimedia may be broadly divided
into linear and non-linear categories.
Linear content progresses, often without any navigational control for the
viewer, such as a cinema
presentation. Non-linear content uses interactivity to control progress, as with a video game or self-paced computer-based
training. Hypermedia is an example of non-linear content.
Multimedia presentations can be live or recorded. A recorded presentation may allow interactivity via a navigation
system. A live multimedia presentation may allow interactivity via an
interaction with the presenter or performer.
“Face it. Butlers cannot be blind.
Secretaries cannot be deaf. But somehow we take it for granted that computers
can be both.” This is a quote by Nicholas Negroponte, author of the popular book
Being Digital. The purpose of multimedia computing is exactly to deal with this problem.
Before we go on, it is important to define the term media. It refers to the
storage, transmission, interchange, presentation, representation and perception
of different information types (data types) such as text, graphics, voice,
audio and video, where
Storage - refers to the type of
physical means to store data, such as magnetic tape, hard disk, optical disk,
DVDs, CD-ROMs, etc.
Transmission - refers to the type of
physical means of transmitting data, such as coaxial cable, twisted pair,
fibre, etc.
Interchange - refers to the means of
interchanging data; this can be by storage media, transmission media, or a
combination of both.
Presentation - is used to
describe the type of physical means to reproduce information to the user or to
acquire information from the user; for example, speakers, video windows,
immersive environment etc.
Representation - is
related to how information is described in an abstract form for use within an
electronic system. For example, to present text to the user, the text can be
coded in raster graphics, in graphics primitives or in simple ASCII characters.
Thus the presentation can be the same but with different representations. Other
examples of representation media are ASN.1 and SGML.
Perception - is used to
describe the nature of information as perceived by the user; for example
speech, music and film. We use the term multimedia to denote the property of
handling a variety of representation media in an integrated manner. The more
means of transmission used and the greater the interactivity, the more
‘multimedia’ a system is.
The development of
multimedia technology aims to provide a digital experience that closely resembles
that of the physical world. Biological forms provide what are, perhaps, the
most sophisticated multimedia systems. Visual - Each eye has 127m rods and
cones, with 1m nerve fibres connected to the brain. Neurons can fire every
millisecond.
Eye-brain link data
transmission rate is about 1 Gbit/sec.
Aural - Each ear has
23,500 hairs in the cochlea and around 30,000 nerve fibres to the brain, which
would transmit approximately 30 Mbit to the brain if all were firing. In practice, the
data rate is around 100 Kbit/sec. Humans have a hearing range of 20 Hz - 20 kHz,
whales 2 Hz - 20 kHz, dolphins 20 Hz - 200 kHz, and bats 16 Hz - 200 kHz.
Tactile - A human has
approximately 5 million sensors on the skin (compared with 127m in each eye),
sensing at about a few Mbit/sec. Fingertips have nerve endings 1 mm apart.
Smell and taste - Humans can detect
around 10,000 different smells at about a few Kbit/sec. Maximum range for
smell: human ~1 m, dog ~100 m, moth ~5 km, elephant ~3 km. The tongue can only
sense four tastes: sweet, sour (H+ hydrogen ions), salt (Na+ sodium ions), and
bitter. A few tens of bits/sec are sent to the brain. A snake can taste its way
around.
1.2
CHARACTERISTICS:
Multimedia
presentations may be viewed in person on stage, projected, transmitted, or played locally with a media
player. A broadcast may be a live or recorded multimedia presentation. Broadcasts
and recordings can use either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed. Streaming
multimedia may be live or on-demand.
Multimedia
games and simulations may be used in a physical environment
with special effects, with multiple users in an online network, or locally with
an offline computer, game system, or simulator.
The various formats of technological
or digital multimedia may be intended to enhance the users' experience, for
example to make it easier and faster to convey information, or, in entertainment
or art, to transcend everyday experience.
CHAPTER 2
SYSTEM ANALYSIS
In
this phase a detailed appraisal of the existing system is presented. This
appraisal covers how the system works and what it does; it also involves
finding out, in more detail, what the problems with the system are and what
users require from the new system or from any change to it. The output of
this phase is a detailed model of the system, describing the
system functions, data, and information flow. The phase also produces
the detailed set of user requirements, which are used to set
objectives for the new system.
2.1 EXISTING SYSTEM:
Existing
completion methods are usually founded on linear assumptions; hence the
obtained models are limited due to their incapability to capture complex
correlation patterns. The first issue involved in such a locality sensitive
framework is how to conduct meaningful data partition, which is nontrivial in
the tag completion scenario, since the distance between samples, which is
essential to most partition methods, is extremely unreliable when measured by
low-level features and incomplete user-provided tags.
The
second problem concerns the construction of the local models, that is, how to
effectively model the local correlations between similar samples and related
tags. Specifically, each initial tag sub-matrix is decomposed into a low-rank
basis matrix and a sparse coefficient matrix, and the compressed representations
for both the tags and the samples are learnt, respectively. However, it is not preferable to learn the local
models independently, since the output of data partition is typically far from
satisfactory, even with the help of the pre-processing module.
2.2 DISADVANTAGES OF THE EXISTING SYSTEM:
· Obtained models are limited due to their incapability to capture complex correlation patterns.
· User-labelled visual data, such as images uploaded and shared on Flickr, are usually associated with imprecise and incomplete tags.
· This poses threats to the retrieval or indexing of these images, making them difficult for users to access.
· Missing labels are inevitable in the manual labeling phase, since it is infeasible for users to label every related word and avoid all possible confusions, due to the existence of synonyms and user preference.
2.3 PROPOSED SYSTEM:
To the best of our knowledge, this is the first work to infuse the idea of locality
sensitivity into the scenario of image tag completion. Our
main contributions are summarized as follows.
1) We propose a locality sensitive low-rank model for image tag completion, which
approximates the global nonlinear model with a collection of local linear
models, by which complex correlation structures can be captured.
2) Several adaptations are introduced to enable the fusion of locality sensitivity
and low-rank factorization, including a simple and effective pre-processing
module and a global consensus regularizer to mitigate the risk of overfitting.
2.4 ADVANTAGES OF THE PROPOSED SYSTEM:
· In the proposed LSLR method, a global consensus model is introduced to regularize the local models.
· To effectively integrate locality sensitivity and low-rank factorization, several adaptations are introduced, including the design of a pre-processing module and a global consensus regularizer.
CHAPTER 3
IMPLEMENTATION
Implementation is the stage
of the project when the theoretical design is turned into a working system.
Thus it can be considered the most critical stage in achieving a
successful new system and in giving the user confidence that the new system
will work and be effective.
The implementation stage
involves careful planning, investigation of the existing system and its
constraints on implementation, design of methods to achieve the changeover, and
evaluation of the changeover methods.
3.1 MODULES:
· Pre-Processing and Data Partition
· Low-Rank Model
· Global Consistency Among Local Models
3.1.1 Pre-Processing and Data Partition
This module involves two closely related steps: pre-processing and data partition. The goal of
data partition is to divide the entire sample space into a collection of local
neighbourhoods or groups, such that samples within each group are semantically
related. However, as we observed in our experiments, direct partitions usually fail
to generate meaningful groups, regardless of whether visual features or
incomplete initial tags are used. The reason is easy to understand. For instance,
images depicting people may be divided into clusters concerning beach
or building according to their backgrounds, especially when the tag people
is missing. On the other hand, despite actually describing different
contents such as bear, fox or mountain, samples initially labeled as snow may be grouped into the same cluster about snow, since distance is distorted
when their foreground tags are absent.
3.1.2 Low-Rank Model
This module constructs the local models for the individual groups. The j-th column of
the sparse coefficient matrix Hi can be interpreted as a compressed k-dimensional
representation of the j-th tag; symmetrically, the l-th row of the basis matrix Wi can
be considered a compressed representation of the l-th sample in the k-dimensional
subspace. This perspective reveals the inherent relations between the
original spaces and the low-dimensional representations obtained by (1), and
allows us to employ information obtained in the original spaces to improve the
learning process of Wi and Hi.
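As an illustration of this factorization, the following Java sketch fits a small binary tag sub-matrix T with a basis matrix W and a coefficient matrix H by plain gradient descent on the squared reconstruction error. This is a simplified stand-in for objective (1), not the exact solver of the proposed method; the rank k, learning rate and iteration count are assumptions for the demo.

```java
import java.util.Random;

// Illustrative sketch (not the authors' exact solver): factorize a small
// binary tag sub-matrix T (samples x tags) into a basis matrix W (samples x k)
// and a coefficient matrix H (k x tags) by gradient descent on ||T - W H||_F^2.
public class LowRankDemo {
    static double[][] W, H;

    // Squared Frobenius reconstruction error ||T - W H||_F^2
    static double error(double[][] T) {
        double e = 0;
        for (int i = 0; i < T.length; i++)
            for (int j = 0; j < T[0].length; j++) {
                double pred = 0;
                for (int f = 0; f < W[0].length; f++) pred += W[i][f] * H[f][j];
                double d = T[i][j] - pred;
                e += d * d;
            }
        return e;
    }

    static void factorize(double[][] T, int k, int iters, double lr) {
        int n = T.length, m = T[0].length;
        Random rnd = new Random(0);
        W = new double[n][k];
        H = new double[k][m];
        for (double[] row : W) for (int c = 0; c < k; c++) row[c] = rnd.nextDouble() * 0.1;
        for (double[] row : H) for (int c = 0; c < m; c++) row[c] = rnd.nextDouble() * 0.1;
        for (int it = 0; it < iters; it++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < m; j++) {
                    double pred = 0;
                    for (int f = 0; f < k; f++) pred += W[i][f] * H[f][j];
                    double err = T[i][j] - pred;
                    for (int f = 0; f < k; f++) {       // gradient step on both factors
                        double wOld = W[i][f];
                        W[i][f] += lr * err * H[f][j];
                        H[f][j] += lr * err * wOld;
                    }
                }
    }

    public static void main(String[] args) {
        // Toy rank-2 tag sub-matrix: two groups of samples sharing two tag patterns
        double[][] T = { {1, 1, 0, 0}, {1, 1, 0, 0}, {0, 0, 1, 1}, {0, 0, 1, 1} };
        factorize(T, 2, 500, 0.1);
        System.out.println("reconstruction error: " + error(T));
    }
}
```

On an exactly rank-2 toy matrix the reconstruction error drops close to zero, mirroring how the rows of Wi and the columns of Hi compress samples and tags.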
3.1.3 Global Consistency Among Local Models
Optimizing
each Wi and Hi independently for each cluster is not preferable
due to potential overfitting, especially for the aforementioned cluttered
clusters. Under such circumstances, images depicting the same concept may be
partitioned into multiple clusters, whereas the samples available for learning a
specific model may be insufficient. Therefore, the obtained local model is very
likely to overfit the training data visible to the current cluster. To fix
this problem, a global consensus regularizer is introduced by assuming that
each Hi is consistent with a global reference matrix H.
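The regularized local objective described above can be sketched as a per-cluster loss of the form ||Ti - Wi Hi||_F^2 + lambda * ||Hi - H||_F^2. The snippet below merely evaluates this loss for toy matrices; the value of lambda and the matrix sizes are illustrative assumptions, and the exact weighting in the proposed model may differ.

```java
// Sketch of the consensus-regularized local objective:
// loss_i = ||T_i - W_i H_i||_F^2 + lambda * ||H_i - H||_F^2,
// where H is the global reference matrix. lambda is an assumed hyperparameter.
public class ConsensusLoss {
    // Squared Frobenius distance ||A - B||_F^2
    static double frob2(double[][] A, double[][] B) {
        double s = 0;
        for (int i = 0; i < A.length; i++)
            for (int j = 0; j < A[0].length; j++) {
                double d = A[i][j] - B[i][j];
                s += d * d;
            }
        return s;
    }

    // Plain matrix product W H
    static double[][] mul(double[][] W, double[][] H) {
        double[][] P = new double[W.length][H[0].length];
        for (int i = 0; i < W.length; i++)
            for (int k = 0; k < H.length; k++)
                for (int j = 0; j < H[0].length; j++)
                    P[i][j] += W[i][k] * H[k][j];
        return P;
    }

    static double localLoss(double[][] Ti, double[][] Wi, double[][] Hi,
                            double[][] Href, double lambda) {
        return frob2(Ti, mul(Wi, Hi)) + lambda * frob2(Hi, Href);
    }

    public static void main(String[] args) {
        double[][] Ti = {{1, 0}, {0, 1}};
        double[][] Wi = {{1, 0}, {0, 1}};
        double[][] Hi = {{1, 0}, {0, 1}};
        double[][] Href = {{1, 0}, {0, 1}};
        // Perfect reconstruction and perfect consensus -> loss 0.0
        System.out.println(localLoss(Ti, Wi, Hi, Href, 0.5)); // 0.0
    }
}
```

The second term pulls every cluster's Hi toward the shared reference H, which is what discourages a sparsely populated cluster from overfitting its few visible samples.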
CHAPTER 4
LITERATURE SURVEY
4.1
OVERVIEW:
A
literature review is an account of what has been published on a topic by
accredited scholars and researchers. Occasionally you will be asked to write
one as a separate assignment, but more often it is part of the introduction to
an essay, research report, or thesis. In writing the literature review, your
purpose is to convey to your reader what knowledge and ideas have been
established on a topic, and what their strengths and weaknesses are. As a piece
of writing, the literature review must be defined by a guiding concept (e.g.,
your research objective, the problem or issue you are discussing or your
argumentative thesis). It is not just a descriptive list of the material
available, or a set of summaries.
Besides
enlarging your knowledge about the topic, writing a literature review lets you
gain and demonstrate skills in two areas:
1. INFORMATION SEEKING:
the ability to scan the literature efficiently, using manual or computerized
methods, to identify a set of useful articles and books
2. CRITICAL APPRAISAL:
the ability to apply principles of analysis to identify unbiased and valid
studies.
4.2 TAG COMPLETION FOR IMAGE RETRIEVAL
Many social image
search engines are based on keyword/tag matching. This is because tag-based
image retrieval (TBIR) is not only efficient but also effective. The
performance of TBIR is highly dependent on the availability and quality of
manual tags. Recent studies have shown that manual tags are often unreliable
and inconsistent. In addition, since many users tend to choose general and
ambiguous tags in order to minimize their efforts in choosing appropriate
words, tags that are specific to the visual content of images tend to be
missing or noisy, leading to a limited performance of TBIR. To address this
challenge, we study the problem of tag completion, where the goal is to
automatically fill in the missing tags as well as correct noisy tags for given
images. We represent the image-tag relation by a tag matrix, and search for the
optimal tag matrix consistent with both the observed tags and the visual
similarity. We propose a new algorithm for solving this optimization problem.
Extensive empirical studies show that the proposed algorithm is significantly
more effective than the state-of-the-art algorithms. Our studies also verify
that the proposed algorithm is computationally efficient and scales well to
large databases.
4.3 IMAGE TAG COMPLETION BY LOW-RANK FACTORIZATION WITH DUAL RECONSTRUCTION STRUCTURE PRESERVED
A novel tag completion algorithm is proposed in
this paper, which is designed with the following features: 1) Low-rank and
error sparsity: the incomplete initial tagging matrix D is decomposed into the
complete tagging matrix A and a sparse error matrix E. However, instead of
minimizing its nuclear norm, A is further factorized into a basis matrix U and
a sparse coefficient matrix V, i.e. D = UV + E. This low-rank formulation
encapsulating sparse coding enables our algorithm to recover latent structures
from noisy initial data while avoiding excessive denoising; 2) Local
reconstruction structure consistency: to steer the completion of D, the local
linear reconstruction structures in the feature space and the tag space are obtained
and preserved by U and V respectively. Such a scheme alleviates the
negative effect of distances measured by low-level features and incomplete
tags. Thus, we can seek a balance between exploiting as much information as possible and
not being misled into suboptimal performance. Experiments conducted on the Corel5k
dataset and the newly issued Flickr30Concepts dataset demonstrate the
effectiveness and efficiency of the proposed method.
4.4 TOPIC REGRESSION MULTI-MODAL LATENT DIRICHLET ALLOCATION FOR IMAGE ANNOTATION
We present topic-regression multi-modal Latent Dirichlet
Allocation (tr-mmLDA), a novel statistical topic model for the task of image
and video annotation. At the heart of our new annotation model lies a novel
latent variable regression approach to capture correlations between image or
video features and annotation texts. Instead of sharing a set of latent topics
between the two data modalities, as in the formulation of correspondence LDA,
our approach introduces a regression module to correlate the two sets of topics,
which captures more general forms of association and allows the number of
topics in the two data modalities to be different. We demonstrate the power of
tr-mmLDA on two standard annotation datasets: a 5000-image subset of COREL and a
2687-image LabelMe dataset. The proposed association model shows improved
performance over correspondence LDA as measured by caption perplexity.
4.5 SUPERVISED LEARNING OF SEMANTIC CLASSES FOR IMAGE ANNOTATION AND RETRIEVAL
A probabilistic formulation for semantic image annotation and
retrieval is proposed. Annotation and retrieval are posed as classification
problems where each class is defined as the group of database images labeled
with a common semantic label. It is shown that, by establishing this one-to-one
correspondence between semantic labels and semantic classes, a minimum
probability of error annotation and retrieval are feasible with algorithms that
are 1) conceptually simple, 2) computationally efficient, and 3) do not require
prior semantic segmentation of training images. In particular, images are
represented as bags of localized feature vectors, a mixture density estimated
for each image, and the mixtures associated with all images annotated with a
common semantic label pooled into a density estimate for the corresponding
semantic class. This pooling is justified by a multiple instance learning
argument and performed efficiently with a hierarchical extension of
expectation-maximization. The benefits of the supervised formulation over the
more complex, and currently popular, joint modeling of semantic label and
visual feature distributions are illustrated through theoretical arguments and
extensive experiments. The supervised formulation is shown to achieve higher
accuracy than various previously published methods at a fraction of their
computational cost. Finally, the proposed method is shown to be fairly robust
to parameter tuning.
4.6 FREQUENCY BASED LOCALITY SENSITIVE HASHING
Nearest Neighbor (NN) search is of major importance to many
applications, such as information retrieval, data mining and so on. However,
finding the NN in high-dimensional space has been proved to be time-consuming.
In recent years, Locality Sensitive Hashing (LSH) has been proposed to solve the Approximate
Nearest Neighbor (ANN) problem. The main drawback of LSH is that it requires
quite a lot of memory to achieve good performance, which makes it ill-suited
to today's applications involving massive data. We analyze the generic LSH scheme as well
as the properties of LSH hash functions based on p-stable distributions, and
propose a new LSH scheme called Frequency Based Locality Sensitive Hashing
(FBLSH). FBLSH uses just one function based on p-stable distributions as the hash
function of a hash table, and it sets a frequency threshold m: only those
points which collide with the query point more than m times can be candidate ANNs.
FBLSH is easy to implement, and through experiments we show that FBLSH can
reduce the extra space cost by several orders of magnitude with less (or
similar) time cost, while achieving better search quality compared with LSH
based on p-stable distributions.
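To make the p-stable scheme concrete, the sketch below implements the standard hash h(v) = floor((a.v + b) / w) with a Gaussian (2-stable) projection, and counts how often a query collides with another point across several independent functions, mimicking FBLSH's frequency threshold m. The dimensionality, bucket width w and number of functions are assumptions for the demo, not values from the paper.

```java
import java.util.Random;

// Illustrative sketch of a p-stable LSH function h(v) = floor((a.v + b) / w)
// with a Gaussian (2-stable) projection vector a. The frequency-threshold idea
// of FBLSH is mimicked by counting collisions over several such functions.
public class PStableLsh {
    final double[] a;
    final double b, w;

    PStableLsh(int dim, double w, Random rnd) {
        this.w = w;
        this.a = new double[dim];
        for (int i = 0; i < dim; i++) a[i] = rnd.nextGaussian(); // 2-stable draws
        this.b = rnd.nextDouble() * w;                           // uniform offset in [0, w)
    }

    long hash(double[] v) {
        double dot = 0;
        for (int i = 0; i < v.length; i++) dot += a[i] * v[i];
        return (long) Math.floor((dot + b) / w);
    }

    // Count collisions of p and q over several independent hash functions;
    // FBLSH would accept q as a candidate ANN only if this count exceeds m.
    static int collisions(double[] p, double[] q, int functions, double w, long seed) {
        Random rnd = new Random(seed);
        int c = 0;
        for (int i = 0; i < functions; i++) {
            PStableLsh h = new PStableLsh(p.length, w, rnd);
            if (h.hash(p) == h.hash(q)) c++;
        }
        return c;
    }

    public static void main(String[] args) {
        double[] query = {1.0, 2.0, 3.0};
        double[] near  = {1.1, 2.0, 3.1};
        double[] far   = {9.0, -4.0, 0.5};
        System.out.println("near collisions: " + collisions(query, near, 20, 4.0, 42));
        System.out.println("far  collisions: " + collisions(query, far, 20, 4.0, 42));
    }
}
```

The locality-sensitive property shows up directly in the counts: the nearby point collides with the query far more often than the distant one, which is exactly the signal FBLSH thresholds on.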
CHAPTER 5
5.1 METHODOLOGY
The proposed methodology comprises three stages: pre-processing and data
partition, which divide the entire sample space into local groups whose samples
are semantically related; local low-rank factorization within each group; and a
global consensus regularizer that couples the local models. As noted earlier,
direct partitions usually fail to generate meaningful groups, regardless of
whether visual features or incomplete initial tags are used, since distance is
distorted when foreground tags are absent; the pre-processing module is
therefore designed to learn a more suitable representation before partitioning.
We
evaluate the proposed method on three datasets: the well-established benchmark
datasets Corel5K and IAPR TC12, as well as a real-world dataset,
Flickr30Concepts. For each dataset, three types of features are used for
evaluation, including a 1000-d SIFT BoW feature, a 400-d composite feature
obtained by merging 10 types of basic features, and a 4096-d CNN feature
extracted with a pre-trained 16-layer VGGNet [42], [43].
The
second step is to learn the low-dimensional representation for each image.
Recall that the basis matrix in (1) can be interpreted as row-wise
low-dimensional representation for each sample, thus it can be readily adapted
to fit our demand. Specifically, we solve (3) for the entire dataset, and
utilize the basis matrix W0 as the novel representation and feed it into
the data partition module, with the subscript “0” denoting the entire dataset.
CHAPTER 6
SYSTEM SPECIFICATION
The purpose of system
requirement specification is to produce the specification analysis of the task
and also to establish complete information about the requirement, behavior and
other constraints such as functional performance and so on. The goal of system
requirement specification is to completely specify the technical requirements
for the product in a concise and unambiguous manner.
6.1 HARDWARE REQUIREMENTS
• Processor - Pentium III
• Speed - 1.1 GHz
• RAM - 256 MB (min)
• Hard Disk - 20 GB
• Floppy Drive - 1.44 MB
• Keyboard - Standard Windows Keyboard
• Mouse - Two or Three Button Mouse
• Monitor - SVGA
6.2 SOFTWARE REQUIREMENTS
• Operating System : Windows 8
• Front End : Java
• Database : MySQL
CHAPTER 7
SOFTWARE ENVIRONMENT
JAVA:
Java
is a programming language created by James Gosling at Sun Microsystems (Sun),
with work beginning in 1991. The goal of Java is to write a program once and then run that
program on multiple operating systems. The first publicly available version of
Java (Java 1.0) was released in 1995. Sun Microsystems was acquired by the
Oracle Corporation in 2010, and Oracle now has stewardship of Java. In 2006
Sun started to make Java available under the GNU General Public License (GPL);
Oracle continues this project, called OpenJDK.
7.1 PLATFORM INDEPENDENT
Unlike
many other programming languages, including C and C++, when Java is compiled it
is not compiled into platform-specific machine code, but rather into platform-independent
byte code. This byte code is distributed over the web and
interpreted by the Java Virtual Machine (JVM) on whichever platform it is being run.
JAVA VIRTUAL MACHINE
Java
was designed with the concept of ‘write once, run everywhere’, and the Java Virtual
Machine plays the central role in this concept. The JVM is the environment in
which Java programs execute. It is software implemented on top of the
real hardware and operating system. When the source code (.java files) is
compiled, it is translated into byte codes and then placed into (.class) files.
The JVM executes these bytecodes, so Java byte codes can be thought of as the
machine language of the JVM. A JVM can either interpret the bytecode one
instruction at a time, or the bytecode can be compiled further for the real
microprocessor using what is called a just-in-time compiler. The JVM
must be implemented on a particular platform before compiled programs can run
on that platform.
JAVA DEVELOPMENT KIT
The Java Development Kit (JDK) is a Sun product
aimed at Java developers. Since the introduction of Java, it has been by far
the most widely used Java software
development kit (SDK). It
contains a Java compiler, a full copy of the Java
Runtime Environment (JRE), and many other important development
tools.
TOOLS
You will need a
Pentium 200-MHz computer with a minimum of 64 MB of RAM (128 MB of RAM
recommended).
You will also need the following software:
· Linux 7.1 or Windows XP/7/8 operating system
· Java JDK 8
· Microsoft Notepad or any other text editor
FEATURES
· Reusability of code
· Emphasis on data rather than procedure
· Data is hidden and cannot be accessed by external functions
· Objects can communicate with each other through functions
· New data and functions can be easily added
What is a Java Web Application?
A Java web application generates
interactive web pages containing various types of markup language (HTML, XML,
and so on) and dynamic content. It is typically composed of web components
such as JavaServer Pages (JSP), servlets and JavaBeans, which modify and temporarily
store data, interact with databases and web services, and render content in
response to client requests.
Because many of the tasks involved
in web application development can be repetitive or require a surplus of
boilerplate code, web frameworks can be applied to alleviate the overhead
associated with common activities. For example, many frameworks, such as
JavaServer Faces, provide libraries for templating pages and session
management, and often promote code reuse.
What is Java EE?
Java EE (Enterprise Edition) is a
widely used platform containing a set of coordinated technologies that
significantly reduce the cost and complexity of developing, deploying, and
managing multi-tier, server-centric applications. Java EE builds upon the Java
SE platform and provides a set of APIs (application programming interfaces) for
developing and running portable, robust, scalable, reliable and secure
server-side applications.
Some of the fundamental components
of Java EE include:
- Enterprise
JavaBeans (EJB): a managed, server-side component architecture used to
encapsulate the business logic of an application. EJB technology enables
rapid and simplified development of distributed, transactional, secure and
portable applications based on Java technology.
- Java
Persistence API (JPA): a framework that allows developers to manage data
using object-relational mapping (ORM) in applications built on the Java
Platform.
JavaScript and Ajax Development
JavaScript is an object-oriented
scripting language primarily used in client-side interfaces for web
applications. Ajax (Asynchronous JavaScript and XML) is a Web 2.0 technique
that allows changes to occur in a web page without the need to perform a page
refresh. JavaScript toolkits can be leveraged to implement Ajax-enabled components
and functionality in web pages.
Web Server and Client
A web server is software that processes
client requests and sends responses back to the client. For
example, Apache is one of the most widely used web servers. A web server runs on
some physical machine and listens for client requests on a specific port.
A web client is software that helps
in communicating with the server. Some of the most widely used web clients are
Firefox, Google Chrome, Safari etc. When we request something from a server
(through a URL), the web client takes care of creating the request, sending it to the
server, and then parsing the server's response and presenting it to the user.
HTML and HTTP
The web server and web client are two
separate pieces of software, so there must be a common language for communication.
HTML (HyperText Markup Language) is that common language between server and client.
The web server and client also need a
common communication protocol; HTTP (HyperText Transfer Protocol)
is the communication protocol between server and client. HTTP runs on top of
the TCP/IP communication protocol.
Some of the important parts of HTTP
Request are:
- HTTP Method – the action to be performed, usually GET, POST, PUT etc.
- URL – the page to access.
- Form Parameters – similar to arguments in a Java method, for example the user and password details from a login page.
Sample HTTP Request:

GET /FirstServletProject/jsps/hello.jsp HTTP/1.1
Host: localhost:8080
Cache-Control: no-cache
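A request like the one above can be taken apart with plain Java string handling; the following minimal sketch (real servers perform far more validation) extracts the method, path and headers.

```java
// Minimal sketch of parsing a raw HTTP request with plain Java string
// handling. Real servers validate much more (versions, folding, encodings).
public class HttpRequestParser {
    static String method, path, version;
    static java.util.Map<String, String> headers = new java.util.LinkedHashMap<>();

    static void parse(String raw) {
        headers.clear();
        String[] lines = raw.split("\r?\n");
        // Request line: METHOD SP PATH SP VERSION
        String[] requestLine = lines[0].split(" ");
        method = requestLine[0];
        path = requestLine[1];
        version = requestLine[2];
        // Header lines: "Name: value" until the blank line
        for (int i = 1; i < lines.length && !lines[i].isEmpty(); i++) {
            int colon = lines[i].indexOf(':');
            headers.put(lines[i].substring(0, colon).trim(),
                        lines[i].substring(colon + 1).trim());
        }
    }

    public static void main(String[] args) {
        parse("GET /FirstServletProject/jsps/hello.jsp HTTP/1.1\n"
            + "Host: localhost:8080\n"
            + "Cache-Control: no-cache\n");
        System.out.println(method + " " + path); // GET /FirstServletProject/jsps/hello.jsp
        System.out.println(headers.get("Host")); // localhost:8080
    }
}
```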
Some of the important parts of HTTP
Response are:
- Status Code – an integer indicating whether the request was successful. Some well-known status codes are 200 for success, 404 for Not Found and 403 for Access Forbidden.
- Content Type – text, html, image, pdf etc.; also known as the MIME type.
- Content – the actual data that is rendered by the client and shown to the user.
MIME Type or Content Type: An HTTP response header contains the tag “Content-Type”. It’s also called the MIME
type, and the server sends it to the client to let it know the kind of data being
sent. It helps the client render the data for the user. Some of the most commonly
used MIME types are text/html, text/xml, application/xml etc.
Understanding URL
URL is an acronym for
Uniform Resource Locator, and it’s used to locate the server and resource.
Every resource on the web has its own unique address. Let’s see the parts of a URL
with an example.
http://localhost:8080/FirstServletProject/jsps/hello.jsp
http:// – This is the first part
of the URL, and it specifies the communication protocol to be used in server-client
communication.
localhost – The unique address of
the server; most of the time it’s the hostname of the server, which maps to a
unique IP address. Sometimes multiple hostnames point to the same IP address, and the web
server’s virtual host configuration takes care of sending the request to the particular server
instance.
8080 – This is the port on
which the server is listening. It’s optional; if we don’t provide it in the URL, the
request goes to the default port of the protocol. Port numbers 0 to 1023 are
reserved for well-known services, for example 80 for HTTP, 443 for HTTPS,
21 for FTP etc.
FirstServletProject/jsps/hello.jsp – The resource requested from the
server. It can be static HTML, a PDF, a JSP, a servlet, PHP etc.
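These URL components can be extracted programmatically with the standard java.net.URL class, as the short example below shows.

```java
import java.net.URL;

// Extracting the URL components described above with java.net.URL.
public class UrlParts {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/FirstServletProject/jsps/hello.jsp");
        System.out.println("protocol: " + url.getProtocol()); // http
        System.out.println("host:     " + url.getHost());     // localhost
        System.out.println("port:     " + url.getPort());     // 8080
        System.out.println("path:     " + url.getPath());     // /FirstServletProject/jsps/hello.jsp
        // getPort() returns -1 when the URL omits the port; the protocol's
        // default (80 for HTTP) is then available via getDefaultPort().
        System.out.println(new URL("http://localhost/a.jsp").getPort());        // -1
        System.out.println(new URL("http://localhost/a.jsp").getDefaultPort()); // 80
    }
}
```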
Why do we need Servlets and JSPs?
Web servers are good for
static content such as HTML pages, but they don’t know how to generate dynamic content
or how to save data into databases, so we need another tool to
generate dynamic content. There are several programming languages for dynamic
content, such as PHP, Python, Ruby on Rails, Java Servlets and JSPs.
Java Servlets and JSPs are
server-side technologies that extend the capability of web servers by providing
support for dynamic responses and data persistence.
Web Container
Tomcat is a web container.
When a request is made from the client to the web server, the server passes the request to the web
container, and it’s the web container’s job to find the correct resource to handle the
request (a servlet or JSP), then use the response from that resource to
generate the response and provide it to the web server. The web server then sends the
response back to the client.
When the web container gets
a request for a servlet, it creates two objects,
HttpServletRequest and HttpServletResponse. It then finds the correct servlet
based on the URL and creates a thread for the request. It invokes the
servlet’s service() method, and based on the HTTP method, service() invokes
doGet() or doPost(). The servlet methods generate the dynamic page and
write it to the response. Once the servlet thread is complete, the container converts the
response to an HTTP response and sends it back to the client.
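The dispatch logic described above can be modeled in miniature. The sketch below uses hypothetical simplified types rather than the real Servlet API (which defines HttpServletRequest/HttpServletResponse); it only illustrates how service() routes to doGet() or doPost() based on the HTTP method.

```java
// Toy model of the container's dispatch logic, with hypothetical simplified
// types (the real container uses the Servlet API): service() routes to
// doGet() or doPost() based on the HTTP method.
public class MiniContainer {
    static abstract class MiniServlet {
        String service(String httpMethod) {
            if ("GET".equals(httpMethod)) return doGet();
            if ("POST".equals(httpMethod)) return doPost();
            return "405 Method Not Allowed"; // unsupported method
        }
        String doGet()  { return "405 Method Not Allowed"; }
        String doPost() { return "405 Method Not Allowed"; }
    }

    static class HelloServlet extends MiniServlet {
        @Override String doGet()  { return "<h1>Hello GET</h1>"; }
        @Override String doPost() { return "<h1>Hello POST</h1>"; }
    }

    public static void main(String[] args) {
        MiniServlet servlet = new HelloServlet(); // the container picks the servlet by URL
        System.out.println(servlet.service("GET")); // <h1>Hello GET</h1>
        System.out.println(servlet.service("PUT")); // 405 Method Not Allowed
    }
}
```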
Some of the important work
done by web container are:
- Communication Support
– The container provides an easy way of communication between the web server and the servlets and JSPs. Because of the container, we don't need to build a server socket to listen for requests from the web server, parse each request and generate the response. All these important and complex tasks are done by the container, and all we need to focus on is the business logic of our applications.
- Lifecycle and
Resource Management – The container takes care of managing the life cycle of a servlet: loading it into memory, initializing it, invoking its methods and destroying it. The container also provides utilities like JNDI for resource pooling and management.
- Multithreading
Support – The container creates a new thread for every request to the servlet, and when the request has been processed the thread dies. So servlets are not re-initialized for each request, which saves time and memory.
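The thread-per-request idea can be sketched with a plain thread pool; the request and response strings here are illustrative stand-ins, the point being that one handler object is reused across many concurrently dispatched requests:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadPerRequest {
    // One pooled worker thread per incoming "request"; the single handler
    // (standing in for a loaded servlet) is reused rather than re-created.
    private static final ExecutorService pool = Executors.newCachedThreadPool();

    public static List<String> handleAll(List<String> requests) throws Exception {
        List<Future<String>> futures = new ArrayList<>();
        for (String req : requests) {
            // Each request is handled on its own thread from the pool.
            futures.add(pool.submit(() -> "response for " + req));
        }
        List<String> responses = new ArrayList<>();
        for (Future<String> f : futures) responses.add(f.get());
        return responses;
    }
}
```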
- JSP Support
– JSPs don't look like normal Java classes, and the web container provides support for them. Every JSP in the application is compiled by the container and converted into a servlet, which the container then manages like any other servlet.
- Miscellaneous Tasks
– The web container manages the resource pool, performs memory optimizations, runs the garbage collector, provides security configurations, supports multiple applications and hot deployment, and performs several other tasks behind the scenes that make our lives easier.
CHAPTER 8
SYSTEM DESIGN
ARCHITECTURE:
Our first step is to eliminate the side effects of both high-frequency and rare tags by removing their corresponding columns from the initial tag matrix, since such tags hardly appear as the main content of the images. For instance, sky usually relates to the background rather than the foreground, but the learning process may consider it an intrinsic pattern due to its high frequency, thereby preserving its information in the low-dimensional representation. To identify the tags that need to be removed, thresholds are manually set based on the counts of the initial tags.
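The pruning step can be sketched as follows; the integer tag-matrix layout (rows = images, columns = tags) and the threshold values are assumptions for illustration, since the paper sets its thresholds manually per dataset:

```java
import java.util.ArrayList;
import java.util.List;

public class TagPruning {
    // Keep only tag columns whose total count lies inside [minCount, maxCount];
    // columns that are too frequent or too rare are dropped from the tag matrix.
    public static List<Integer> keptColumns(int[][] tagMatrix, int minCount, int maxCount) {
        int m = tagMatrix[0].length;
        List<Integer> kept = new ArrayList<>();
        for (int j = 0; j < m; j++) {
            int count = 0;
            for (int[] row : tagMatrix) count += row[j];  // column sum = tag frequency
            if (count >= minCount && count <= maxCount) kept.add(j);
        }
        return kept;
    }
}
```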
The second step is to learn the low-dimensional representation for each image. Recall that the basis matrix in (1) can be interpreted as a row-wise low-dimensional representation of each sample, so it can readily be adapted to fit our demand. Specifically, we solve (3) for the entire dataset and utilize the basis matrix W0 as the novel representation, feeding it into the data partition module, with the subscript "0" denoting the entire dataset. It is worth noting that we prefer W0 over typical label transformation methods such as CPLST [32] for the following reasons:
1) the proposed method does not rely on the true label matrix Y, as the formulation of CPLST does, and
2) sample correlation can be explicitly embedded, which is suitable for data partition.
The data partition module takes W0 as input and assigns a cluster label to each sample. According to this assignment, the visual feature matrix X and the initial tag matrix D are reorganized into c sub-matrices, denoted {Xi}i=1..c with Xi ∈ R^(ni×d) and {Di}i=1..c with Di ∈ R^(ni×m) respectively, which are adopted for the establishment of the local models later; see Section III-C. As mentioned earlier, ni represents the number of samples contained in the i-th cluster, and thus we have Σi ni = n. Our approach makes no particular assumptions about the choice of partition algorithm, so various methods can be considered, including k-means clustering, locality sensitive hashing (LSH), and adaptive methods such as Affinity Propagation clustering or ISODATA, if sufficient prior knowledge is available. In our implementation, we use k-means clustering for its simplicity and efficiency.
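The partition step can be sketched as plain k-means on the rows of W0, assumed here to be a double[][] with one row per sample; the initialization (spreading the initial centers over the rows) and the fixed iteration count are deliberate simplifications of the usual randomized seeding:

```java
public class KMeansPartition {
    // Plain k-means on the rows of w0; returns one cluster label per sample.
    public static int[] cluster(double[][] w0, int c, int iters) {
        int n = w0.length, d = w0[0].length;
        double[][] centers = new double[c][];
        // Deterministic init: pick rows spread evenly across the data.
        for (int i = 0; i < c; i++) centers[i] = w0[(i * n) / c].clone();
        int[] labels = new int[n];
        for (int it = 0; it < iters; it++) {
            // Assignment step: nearest center by squared Euclidean distance.
            for (int i = 0; i < n; i++) {
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int k = 0; k < c; k++) {
                    double dist = 0;
                    for (int j = 0; j < d; j++) {
                        double diff = w0[i][j] - centers[k][j];
                        dist += diff * diff;
                    }
                    if (dist < bestDist) { bestDist = dist; best = k; }
                }
                labels[i] = best;
            }
            // Update step: recompute each center as the mean of its cluster.
            double[][] sums = new double[c][d];
            int[] counts = new int[c];
            for (int i = 0; i < n; i++) {
                counts[labels[i]]++;
                for (int j = 0; j < d; j++) sums[labels[i]][j] += w0[i][j];
            }
            for (int k = 0; k < c; k++)
                if (counts[k] > 0)
                    for (int j = 0; j < d; j++) centers[k][j] = sums[k][j] / counts[k];
        }
        return labels;
    }
}
```

The labels returned here are what drive the reorganization of X and D into the per-cluster sub-matrices described above.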
8.1
USE CASE DIAGRAM:
The most important aspect of modeling a system is to capture its dynamic behaviour, that is, the behaviour of the system when it is running or operating. Static behaviour alone is not sufficient to model a system; dynamic behaviour is more important. In UML there are five diagrams available to model the dynamic nature of a system, and the use case diagram is one of them. Since the use case diagram is dynamic in nature, there must be some internal or external factors to create the interaction. These internal and external agents are known as actors. Use case diagrams thus consist of actors, use cases and their relationships.
The diagram is used to model a system or subsystem of an application. A single use case diagram captures one particular functionality of a system, so a number of use case diagrams are used to model the entire system. At its simplest, a use case diagram is a representation of a user's interaction with the system, depicting the specification of a use case. A use case diagram can portray the different types of users of a system and their use cases, and will often be accompanied by other types of diagrams as well.
8.2 CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It shows which class holds which information.
8.3
SEQUENCE DIAGRAM:
A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or timing diagrams.
8.4 COLLABORATION DIAGRAM:
8.5 ACTIVITY DIAGRAM:
Activity diagrams are
graphical representations of workflows of stepwise activities and actions with
support for choice, iteration and concurrency. In the Unified Modeling
Language, activity diagrams can be used to describe the business and
operational step-by-step workflows of components in a system. An activity
diagram shows the overall flow of control.
8.6
TABLE DESIGN
CHAPTER 9
INPUT
DESIGN AND OUTPUT DESIGN
INPUT DESIGN
The input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. This can be achieved by instructing the computer to read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
Ø What
data should be given as input?
Ø How the data should be
arranged or coded?
Ø The dialog to guide the
operating personnel in providing input.
Ø Methods
for preparing input validations and the steps to follow when errors occur.
OBJECTIVES
1. Input Design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the management the correct direction for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free of errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.
3. When data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is never left confused. Thus the objective of input design is to create an input layout that is easy to follow.
OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, as well as the hard copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create documents, reports, or other formats that contain information produced by the system.
The output form of an information
system should accomplish one or more of the following objectives.
v Convey information about past activities, current status or projections of the future.
v Signal important events, opportunities, problems, or warnings.
v Trigger an action.
v Confirm an action.
CHAPTER 10
SYSTEM STUDY
FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility
analysis are
¨
Economical feasibility
¨
Technical feasibility
¨
Social feasibility
ECONOMICAL
FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY:
This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system; instead, the user must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.
CHAPTER 11
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, each addressing a specific testing requirement.
TYPES OF TESTS:
The different types of testing are given below:
UNIT
TESTING:
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
INTEGRATION TESTING:
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
FUNCTIONAL TEST:
Functional tests provide systematic demonstrations that functions tested
are available as specified by the business and technical requirements, system
documentation, and user manuals.
Functional testing is centered on
the following items:
Valid Input : identified
classes of valid input must be accepted.
Invalid Input : identified
classes of invalid input must be rejected.
Functions :
identified functions must be exercised.
Output : identified
classes of application outputs must be exercised.
Systems/ Procedures: interfacing
systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, and special test cases. In addition, systematic coverage of identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.
SYSTEM TEST:
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results.
An example of system testing is the configuration oriented system integration
test. System testing is based on process descriptions and flows, emphasizing
pre-driven process links and integration points.
WHITE BOX TESTING:
White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
BLACK BOX TESTING:
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
UNIT TESTING:
Unit
testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing
to be conducted as two distinct phases.
Test strategy and approach
Field
testing will be performed manually and functional tests will be written in
detail.
Test objectives
·
All field entries must
work properly.
·
Pages must be activated
from the identified link.
·
The entry screen,
messages and responses must not be delayed.
Features to be tested
·
Verify that the entries
are of the correct format
·
No duplicate entries
should be allowed
·
All links should take
the user to the correct page.
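The "correct format" and "no duplicate entries" objectives above can be captured by a small check like the following; the field name and the format rule are illustrative assumptions, not the project's actual validation() logic:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Pattern;

public class EntryChecks {
    // Assumed rule for illustration: an image name is non-empty letters,
    // digits, or underscores, and repeated names (ignoring case) are rejected.
    private static final Pattern NAME = Pattern.compile("[A-Za-z0-9_]+");
    private final Set<String> seen = new HashSet<>();

    public boolean accept(String imageName) {
        if (imageName == null || !NAME.matcher(imageName).matches()) {
            return false;                       // fails the format check
        }
        return seen.add(imageName.toLowerCase()); // false when a duplicate is entered
    }
}
```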
INTEGRATION TESTING:
Software
integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused
by interface defects.
The
task of the integration test is to check that components or software
applications, e.g. components in a software system or – one step up – software
applications at the company level – interact without error.
Test Results: All
the test cases mentioned above passed successfully. No defects encountered.
ACCEPTANCE TESTING:
User
Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the
functional requirements.
Test Results: All
the test cases mentioned above passed successfully. No defects encountered
CHAPTER 12
FUTURE
WORK
The proposed method can capture complex correlations by approximating a nonlinear model with a collection of local linear models. To effectively integrate locality sensitivity and low-rank factorization, several adaptations are introduced, including the design of a pre-processing module and a global consensus regularizer. Our method achieves superior results on three datasets and outperforms previous methods by a large margin. In future work, the data partition step could be further refined so that semantically related images fall into the same cluster; this would enhance the capability of our method, since recovering a tag such as beach within the first cluster is much easier than recovering it from the entire dataset.
CHAPTER 13
SOURCE CODE
UPLOAD
<form
action="uploadaction.jsp" name="form"
method="post" enctype="multipart/form-data"
onSubmit="return validation();">
<table width="674">
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">CATEGORIES</td>
<td
width="365"><input type="text"
name="categories" id="s" size="25"
value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">SUB CATEGORIES</td>
<td><input
type="text" name="subcategories" id="s" size="25"
value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">IMAGE NAME</td>
<td><input
type="text" name="imagename" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">CLASS</td>
<td><input
type="text" name="classs" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">FAMILY</td>
<td><input
type="text" name="family" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">PHYLUM</td>
<td><input type="text"
name="phylum" id="s" size="25" value=""
/></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">NATIVE PLACE</td>
<td><input
type="text" name="nativeplace" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">HABITAT</td>
<td><input
type="text" name="habitat" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">GENUS</td>
<td><input
type="text" name="genus" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">SCIENTIFIC NAME</td>
<td><input
type="text" name="scientificname" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">PHYSICAL CHARACTERISTICS</td>
<td><input
type="text" name="physicalcharacteristics" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td width="327"
height="31" style="font-size: 20px; font-weight: bold;
font-family:console">COLOR</td>
<td><input
type="text" name="color" id="s" size="25"
value="" /></td>
</tr>
<tr>
<td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">TYPES</td>
<td><input
type="text" name="types" id="s"
size="25" value="" />
</td>
</tr>
<tr><td width="327"
height="37" style="font-size: 20px; font-weight: bold;
font-family:console">IMAGE</td>
<td><input
type="file" name="images" id="s"
size="25" value="" /></td>
</tr>
<tr>
<td align="center"><input type="submit" name="submit" value="submit" /></td>
<td width="365"><input type="reset" name="reset" value="reset" /></td>
</tr></table>
</form>
UPLOAD ACTION
<%@page import="com.oreilly.servlet.*,java.sql.*,java.lang.*,databaseconnection.*,java.text.SimpleDateFormat,java.util.*,java.io.*,javax.servlet.*,javax.servlet.http.*" errorPage="Error.jsp"%>
<%@page import="javax.crypto.*"%>
<%@page import="java.net.InetAddress"%>
<%@page import="java.io.File"%>
<%@page import="java.security.MessageDigest"%>
<html>
<body>
<%
ArrayList list = new ArrayList();
ServletContext context = getServletContext();
String dirName = context.getRealPath("\\Gallery");
String paramname = null;
//int appid = (Integer)(session.getAttribute("appid"));
String categories=null, subcategories=null, imagename=null, classs=null, family=null, phylum=null, nativeplace=null, habitat=null, genus=null, scientificname=null, physicalcharacteristics=null, color=null, types=null, images=null;
java.util.Date now = new java.util.Date();
String DATE_FORMAT1 = "dd/MM/yyyy hh:mm:ss a";
SimpleDateFormat sdf1 = new SimpleDateFormat(DATE_FORMAT1);
String date = sdf1.format(now);
Random r = new Random();
int ii = r.nextInt(100000-50000)+50000;
String k = Integer.toString(ii);
String key = "D0SKzyt?="+ii;
//String k1 = ii+"";
//System.out.println("DATAOWNER KEY: "+key);
File file1 = null;
try
{
MultipartRequest multi = new MultipartRequest(request, dirName, 10 * 1024 * 1024); // 10MB upload limit
Enumeration params = multi.getParameterNames();
while (params.hasMoreElements())
{
paramname = (String) params.nextElement();
if(paramname.equalsIgnoreCase("categories"))
{
categories=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("subcategories"))
{
subcategories=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("imagename"))
{
imagename=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("classs"))
{
classs=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("family"))
{
family=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("phylum"))
{
phylum=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("nativeplace"))
{
nativeplace=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("habitat"))
{
habitat=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("genus"))
{
genus=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("scientificname"))
{
scientificname=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("physicalcharacteristics"))
{
physicalcharacteristics=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("color"))
{
color=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("types"))
{
types=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("images"))
{
images=multi.getParameter(paramname);
}
}
int f = 0;
Enumeration files = multi.getFileNames();
while (files.hasMoreElements())
{
paramname = (String) files.nextElement();
if(paramname.equals("d1"))
{
paramname = null;
}
if(paramname != null)
{
f = 1;
images = multi.getFilesystemName(paramname);
String fPath = context.getRealPath("\\Gallery\\"+images);
file1 = new File(fPath);
FileInputStream fs = new FileInputStream(file1);
list.add(fs);
}
}
FileInputStream fs1 = null;
int count = 0;
Class.forName("com.mysql.jdbc.Driver");
Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/locality","root","root");
PreparedStatement ps = con.prepareStatement("INSERT INTO upload(categories,subcategories,imagename,classs,family,phylum,nativeplace,habitat,genus,scientificname,physicalcharacteristics,color,types,images,count) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)");
ps.setString(1,categories);
session.setAttribute(categories, categories);
ps.setString(2,subcategories);
session.setAttribute(subcategories, subcategories);
ps.setString(3,imagename);
ps.setString(4,classs);
ps.setString(5,family);
ps.setString(6,phylum);
ps.setString(7,nativeplace);
ps.setString(8,habitat);
ps.setString(9,genus);
ps.setString(10,scientificname);
ps.setString(11,physicalcharacteristics);
ps.setString(12,color);
ps.setString(13,types);
ps.setString(14,images);
ps.setInt(15,count+1);
//ps.setInt(8,count);
//ps.setInt(9,result);
if(f == 0)
ps.setObject(14, null); // no file uploaded: store NULL in the images column (parameter 14, not 1)
else if(f == 1)
{
fs1 = (FileInputStream)list.get(0);
ps.setBinaryStream(14, fs1, fs1.available());
}
int x = ps.executeUpdate();
if(x != 0)
{
response.sendRedirect("success.jsp?msg=success");
}
else
{
response.sendRedirect("upload.jsp?msg=fails");
}
}
catch (Exception e)
{
out.println(e.getMessage());
}
%>
</body>
</html>
VIEW
<form>
<table border="2" width="861">
<tr><th width="235" height="33" align="center"
font style="align-items:Rockwell Extra Bold; font-size:20px;
color:red;">categories</th>
<th width="325" align="center" font
style="align-items:Rockwell Extra Bold; font-size:20px;
color:red;">subcategories</th>
<th width="325" align="center" font
style="align-items:Rockwell Extra Bold; font-size:20px;
color:red;">imagename</th>
</tr>
<%
String categories=null,subcategories=null,imagename=null;
try
{
Connection con = databasecon.getconnection();
Statement st = con.createStatement();
String sss = "select * from upload";
ResultSet rs=st.executeQuery(sss);
while(rs.next())
{
categories=rs.getString(2);
subcategories=rs.getString(3);
imagename=rs.getString(4);
%>
<form
action="subview.jsp" method="get" >
<tr align="center">
<td align="center" font
style="align-items:Rockwell Extra Bold; font-size:18px;
color:black;"><%=categories%></td>
<td align="center" font
style="align-items:Rockwell Extra Bold; font-size:18px;
color:black;"><%=subcategories%></td>
<td align="center" font
style="align-items:Rockwell Extra Bold; font-size:18px;
color:black;"><%=imagename%></td>
</tr>
<%
}
}
catch(Exception e)
{
System.out.println(e);
}
%>
</table>
</form>
<form> <center>
<p align="center" font
style="align-items:Rockwell Extra Bold; font-size:28px;
color:#ff0066;">View graph</p></center>
<table>
<%
int s = 0;
int i = 0;
Connection con = null, con1 = null;
Statement st = null, st1 = null;
ResultSet rs = null, rs1 = null;
String count = null;
categories = request.getParameter("categories"); // String declared in the scriptlet above
System.out.println(categories);
//request.getParameter("loguser");
String cnt = null, cnt1 = null;
try{
con=databasecon.getconnection();
st=con.createStatement();
String sql="select * from upload";
rs=st.executeQuery(sql);
while(rs.next())
{
cnt=rs.getString("count");
i=Integer.parseInt(cnt);
s=s+i;
}
}
catch(Exception e2)
{
out.println(e2.getMessage());
}
try{
con1=databasecon.getconnection();
st1=con1.createStatement();
String sql1="select * from upload";
rs1=st1.executeQuery(sql1);
while(rs1.next())
{
categories=rs1.getString("imagename");
cnt1=rs1.getString("count");
int cnt2=Integer.parseInt(cnt1);
int d=cnt2*s/100*10;
%>
<tr><td width="339" align="left"><img src="images/blue.jpg" width="<%=d%>" height="35"> <%=categories%>(<%=cnt1%>)</td></tr>
<%
}
}
catch(Exception e1)
{
out.println(e1.getMessage());
}
%>
</table>
</form>
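The graph scriptlet above computes the bar width as cnt2*s/100*10, which performs integer division before scaling and so truncates small shares. A sketch of the same scaling done in floating point, with maxWidth as an assumed display constant:

```java
public class BarWidths {
    // Scale each count to a pixel width proportional to its share of the total;
    // the math stays in double until the final rounding, so small shares are
    // not truncated to zero.
    public static int[] widths(int[] counts, int maxWidth) {
        int total = 0;
        for (int c : counts) total += c;
        int[] out = new int[counts.length];
        if (total == 0) return out; // no data: all bars have zero width
        for (int i = 0; i < counts.length; i++) {
            out[i] = (int) Math.round((double) counts[i] / total * maxWidth);
        }
        return out;
    }
}
```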
<div
class="footer">
</div>
<div align="center"></div></body>
</html>
SCREENSHOTS
HOME
PAGE:
UPLOAD
FILE:
SEARCH:
RESULT
CATEGORIES
SUBCATEGORIES SEARCH:
RESULT:
RESULT
CHAPTER 14
CONCLUSION
In this paper we propose a locality sensitive low-rank model for image tag completion. The proposed method can capture complex correlations by approximating a nonlinear model with a collection of local linear models. To effectively integrate locality sensitivity and low-rank factorization, several adaptations are introduced, including the design of a pre-processing module and a global consensus regularizer. Our method achieves superior results on three datasets and outperforms previous methods by a large margin.
ABBREVIATIONS
LSLR – Locality Sensitive Low-Rank Reconstruction
MTL – Multi-Task Learning
MAP – Maximum A Posteriori
REFERENCES
[1] H.-F. Yu, P. Jain, and I. S. Dhillon, "Large-scale multi-label learning with missing labels," in Proc. 31st Int. Conf. Mach. Learn., 2014, pp. 593–601.
[2] M. M. Kalayeh, H. Idrees, and M. Shah, "NMF-KNN: Image annotation using weighted multi-view non-negative matrix factorization," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2014, pp. 184–191.
[3] S. Feng, R. Manmatha, and V. Lavrenko, "Multiple Bernoulli relevance models for image and video annotation," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2004, vol. 2, pp. 1002–1009.
[4] G. Carneiro, A. B. Chan, P. J. Moreno, and N. Vasconcelos, "Supervised learning of semantic classes for image annotation and retrieval," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 3, pp. 394–410, Mar. 2007.
[5] D. M. Blei and M. I. Jordan, "Modeling annotated data," in Proc. Int. ACM SIGIR Conf. Res. Develop. Inform. Retrieval, 2003, pp. 127–134.
[6] D. Putthividhy, H. T. Attias, and S. S. Nagarajan, "Topic regression multi-modal latent Dirichlet allocation for image annotation," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2010, pp. 3408–3415.
[7] C. Yang, M. Dong, and J. Hua, "Region-based image annotation using asymmetrical support vector machine-based multiple-instance learning," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2006, vol. 2, pp. 2057–2063.
[8] A. Makadia, V. Pavlovic, and S. Kumar, "A new baseline for image annotation," in Proc. Eur. Conf. Comput. Vis., 2008, vol. 5304, pp. 316–329.
[9] M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid, "TagProp: Discriminative metric learning in nearest neighbor models for image auto-annotation," in Proc. IEEE Int. Conf. Comput. Vis., Sep.–Oct. 2009, pp. 309–316.
[10] Y. Verma and C. Jawahar, "Image annotation using metric learning in semantic neighbourhoods," in Proc. Eur. Conf. Comput. Vis., 2012, pp. 836–849.
[11] K. Q. Weinberger and L. K. Saul, "Distance metric learning for large margin nearest neighbor classification," J. Mach. Learn. Res., vol. 10, pp. 207–244, 2009.
[12] S. S. Bucak, R. Jin, and A. K. Jain, "Multi-label learning with incomplete class assignments," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2011, pp. 2801–2808.
[13] Y. Verma and C. Jawahar, "Exploring SVM for image annotation in presence of confusing labels," in Proc. Brit. Mach. Vis. Conf., 2013.
[14] H.-F. Yu, P. Jain, P. Kar, and I. S. Dhillon, "Large-scale multi-label learning with missing labels," CoRR, 2013. [Online]. Available: http://arxiv.org/abs/1307.5101
[15] M. Chen, A. Zheng, and K. Weinberger, "Fast image tagging," in Proc. Int. Conf. Mach. Learn., 2013, pp. 1274–1282.