Can You Trust Online Ratings? A Mutual Reinforcement Model for Trustworthy Online Rating Systems
Abstract:
The average
of customer ratings on a product, which we call a reputation, is one of the key
factors in online purchasing decisions. There is, however, no guarantee of the
trustworthiness of a reputation since it can be manipulated rather easily. In
this paper, we define false reputation as the problem of a reputation being
manipulated by unfair ratings and design a general framework that provides
trustworthy reputations. For this purpose, we propose TRUE-REPUTATION, an
algorithm that iteratively adjusts a reputation based on the confidence of
customer ratings. We also show the effectiveness of TRUE-REPUTATION through
extensive experiments in comparison with state-of-the-art approaches.
Existing system:
The most
common way to aggregate ratings is to use the average (i.e., to assign the same
weight to each rating), which may result in a false reputation. For example, a
group of abusers may inflate or deflate the overall rating of a targeted
product. The existing strategies avoid a false reputation by detecting and
eliminating abusers. However, abusers cannot always be detected, and it is
possible that normal users may be regarded as abusers. Consequently, existing
strategies can exclude the ratings of normal users or allow the ratings of
abusers to be included in the calculation of a reputation.
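As a hypothetical illustration of this vulnerability (the ratings and the 1-5 scale are invented for the example), a handful of unfair ratings shifts a plain average noticeably:

```python
# Plain averaging weights every rating equally, so injected ratings move it.
honest = [4, 5, 4, 4, 5]      # ratings from normal users on a 1-5 scale
unfair = [1, 1, 1]            # deflating ratings injected by abusers

plain_avg = sum(honest) / len(honest)
manipulated_avg = sum(honest + unfair) / len(honest + unfair)

print(plain_avg)              # 4.4
print(manipulated_avg)        # 3.125: three unfair ratings drop it by ~1.3
```

Three abusers against five honest raters are enough to pull the product more than a full point down the scale, which is exactly the false-reputation effect described above.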
There are several existing strategies against shilling attacks; all of them
try to prevent the manipulation of ratings by abusers. The classification
algorithms for detecting shilling attacks, however, may face situations where
malicious users cannot be detected and/or where normal users are classified as
malicious. As a result, a reputation may be calculated without the ratings of
normal users or with the ratings of malicious users included.
Proposed system:
The proposed
framework does not require clustering or classification, both of which
necessitate considerable learning time. Though TRUE-REPUTATION does not require
any learning steps when solving a false reputation, extensive experiments show
that TRUE-REPUTATION provides more trustworthy reputations than do algorithms
based on clustering or classification. The contributions of this paper are as
follows. First, we have defined false reputation and categorized various
real-life scenarios in which a false reputation can occur. The categorization
of the false-reputation scenarios helps us design experimental scenarios
similar to real-life situations. Second, we have proposed a general framework
to address a false reputation by quantifying the level of confidence of a
rating. The framework includes TRUE-REPUTATION, an algorithm that iteratively
adjusts the reputation based on the confidence of customer ratings. Third, we
have verified the superiority of TRUE-REPUTATION by comparing it with
machine-learning-based algorithms through extensive experiments.
Problem statement:
This paper addresses the false reputation problem: a product's reputation,
typically the average of its customer ratings, can be manipulated rather
easily by unfair ratings. Abuser-detection approaches may miss malicious users
or misclassify normal ones, so a trustworthy reputation calls for a method
that keeps every rating but weights each by how much it can be trusted. To
this end, we quantify the confidence of a rating based on activity,
objectivity, and consistency, and iteratively adjust the reputation according
to those confidence scores, reducing the influence of unfair ratings without
discarding the ratings of normal users.
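As one way to make this concrete, here is a minimal sketch of scoring a rating's confidence from the three signals named above (activity, objectivity, consistency). The specific formulas, the function name, and the 1-5 rating scale are illustrative assumptions, not the paper's definitions:

```python
def confidence(user_ratings, item_reputations, n_ratings_max):
    """Score one user's rating confidence in (0, 1].

    user_ratings: {item_id: rating on a 1-5 scale}
    item_reputations: {item_id: current reputation of that item}
    n_ratings_max: most ratings given by any user (normalizer)
    """
    n = len(user_ratings)
    # Activity: users who rate more items earn more weight.
    activity = n / n_ratings_max
    # Objectivity: agreement with current reputations (small deviation -> high).
    devs = [abs(r - item_reputations[i]) for i, r in user_ratings.items()]
    mean_dev = sum(devs) / n
    objectivity = 1.0 - mean_dev / 4.0      # 4 = largest deviation on 1-5
    # Consistency: low variance of the user's deviations.
    variance = sum((d - mean_dev) ** 2 for d in devs) / n
    consistency = 1.0 / (1.0 + variance)
    return (activity + objectivity + consistency) / 3.0

c = confidence({"i1": 4, "i2": 5}, {"i1": 4.2, "i2": 4.0}, n_ratings_max=10)
print(0.0 < c <= 1.0)   # True
```

Under this sketch, a user who rates often, stays near the consensus, and deviates in a stable way scores high; an erratic or far-off-consensus user scores low, so the subsequent weighted average discounts that user without excluding anyone outright.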
Future Work:
In a further
study, we plan to develop an approach to accurately separate an item score and
a seller score from a user rating. Separating the true reputation of items and
that of sellers would enable customers to judge items and sellers
independently.
Implementation Of Modules:
False-Reputation Module:
In an online
rating system, it is almost impossible to obtain the ground-truth data because
there is no way of knowing which users have caused a false reputation in a
real-life database. We artificially establish various situations in which a
false reputation may occur and test the performance of the proposed algorithm
in these situations. In order to claim that the generated situations are likely
to occur in real-life online rating systems, we list various scenarios
involving a false reputation and categorize them according to the types of user
and situations. In this section, we define dangerous users who cause a false
reputation and dangerous situations leading to a false reputation. Using the
definitions of dangerous users and dangerous situations, we construct the
false-reputation scenarios summarized in Table I.

TABLE I. FALSE-REPUTATION SCENARIOS

Robustness:
In order to
enhance the robustness of recommendation systems, it is imperative to develop
detection methods against shilling attacks. Major research in shilling attack
detection falls into three categories:
1) classifying shilling attacks according to different types of attacks;
2) extracting attributes that represent the characteristics of the shilling
attacks and quantifying those attributes; and
3) developing robust classification algorithms, based on the quantified
attributes, to detect shilling attacks.
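A minimal sketch of steps 2) and 3), using a single illustrative attribute (a user's mean deviation from each item's average rating) and a fixed threshold in place of a trained classifier; the names and the threshold are assumptions, and real detectors use richer attribute sets and learned models:

```python
def mean_deviation(user_ratings, item_means):
    """Average absolute deviation of a user's ratings from the item means."""
    devs = [abs(r - item_means[i]) for i, r in user_ratings.items()]
    return sum(devs) / len(devs)

def flag_suspects(all_users, item_means, threshold=2.0):
    """Flag users whose quantified attribute exceeds a fixed threshold."""
    return {u for u, ratings in all_users.items()
            if mean_deviation(ratings, item_means) > threshold}

users = {
    "honest": {"i1": 4, "i2": 5},   # close to the item means
    "abuser": {"i1": 1, "i2": 1},   # far below both item means
}
item_means = {"i1": 4.0, "i2": 4.5}
print(flag_suspects(users, item_means))   # {'abuser'}
```

The sketch also exposes the weakness discussed earlier: a strict threshold can flag an unusual-but-honest user or miss a subtle abuser, which is exactly why the proposed framework weights ratings by confidence instead of excluding users.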
Strategies
for improving the robustness of multiagent systems can be classified into two
categories. The first group of strategies is based on the principle of majority
rule. Considering the collection of majority opinions (more than half the
opinions) as fair, this group of strategies excludes the collection of minority
opinions, viewed as biased, when calculating the reputation.
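A sketch of the majority-rule idea, assuming a 1-5 scale with 3 as the midpoint (the midpoint and tie-breaking are illustrative choices):

```python
def majority_rule_reputation(ratings, midpoint=3.0):
    """Average only the majority side of the midpoint; drop the minority."""
    high = [r for r in ratings if r >= midpoint]
    low = [r for r in ratings if r < midpoint]
    majority = high if len(high) >= len(low) else low
    return sum(majority) / len(majority)

print(majority_rule_reputation([5, 4, 5, 1, 1]))   # about 4.67: the two 1s are dropped
```

The drawback follows directly: if abusers ever form the majority, it is the honest ratings that get excluded, e.g. `majority_rule_reputation([1, 1, 1, 5])` returns 1.0.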
Unfair Ratings:
The trustworthiness of a reputation can be achieved when a large number of
buyers take part in ratings honestly. If some users intentionally give unfair
ratings to a product, especially when few users have participated, the
reputation of the product can easily be manipulated. In this paper, we define
false reputation as the problem of a reputation being manipulated by unfair
ratings. In the case of a newly-launched product, for example, a company may
hire people in the early stages of promotion to provide high ratings for the
product. In this case, a false reputation adversely affects the decision making
of potential buyers of the product.
By calculating a reputation based on the confidence scores of all ratings,
the proposed algorithm avoids the risk of omitting ratings by normal users
while reducing the influence of unfair ratings by abusers. We call this
algorithm, which solves the false reputation problem by computing the true
reputation, TRUE-REPUTATION. Our framework for online rating systems and the
existing strategies in multiagent systems serve the same purpose in that both
try to address unfair ratings by abusers.
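The iterative adjustment described here can be sketched as a simple fixed-point loop; the confidence update below (a weight that decays with deviation from the current reputation) is an illustrative assumption, not the paper's exact rule:

```python
def iterate_reputation(ratings, n_iters=20):
    """Confidence-weighted average, with confidences refined each round."""
    rep = sum(ratings) / len(ratings)      # start from the plain average
    for _ in range(n_iters):
        # Confidence decays with each rating's deviation from the reputation.
        conf = [1.0 / (1.0 + abs(r - rep)) for r in ratings]
        rep = sum(c * r for c, r in zip(conf, ratings)) / sum(conf)
    return rep

ratings = [4, 5, 4, 4, 5, 1, 1, 1]          # three deflating ratings injected
plain = sum(ratings) / len(ratings)         # 3.125
print(iterate_reputation(ratings) > plain)  # True: pulled back toward honest ratings
```

Note that no rating is discarded: the deflating 1s keep a nonzero weight, but each round of the loop shrinks their influence as the reputation settles near the dense cluster of honest ratings.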
Buyer Modules:
Numerous
studies have been conducted to improve the trustworthiness of online shopping
malls by detecting abusers who have participated in the rating system for the
sole purpose of manipulating the information provided to potential buyers
(e.g., reputations of sellers and recommended items). Especially in the fields
of multiagent and recommendation systems, various strategies have been proposed
to handle abusers who attack the vulnerability of the system. In online rating
systems, on the other hand, a buyer can give only a single rating per item.
Thus, the relationship between buyers and items is significantly different from
the relationship between buyers and sellers; as such, the graph structure of an
online rating system is very different from that of a multiagent system. This
paper uses an approach that considers the relation between buyers and items.
Conclusion:
This paper
defines the false reputation problem in online rating systems and categorizes
various real-life situations in which a false reputation may occur. The
understanding of why and when a false reputation occurs helps us establish
experimental situations. In order to solve the false reputation problem, we
proposed a general framework that quantifies the confidence of a rating based
on activity, objectivity, and consistency. The framework includes
TRUE-REPUTATION, an algorithm that iteratively adjusts the reputation based on
the confidence of user ratings. Through extensive experiments, we showed that
TRUE-REPUTATION can reduce the influence of various RAs. We also showed that
TRUE-REPUTATION is superior to the existing approaches that use
machine-learning algorithms such as clustering and classification to solve the
false reputation problem. There are more factors (other than those addressed in
this paper) known to be elemental in assessing the trust of users in the field
of social and behavioral sciences. We plan to study how to incorporate them
into our model to compute the reputation of items more accurately. In
e-marketplaces such as Amazon.com and eBay.com, buyers give ratings on items
they have purchased. We note, however, that the rating given by a buyer
indicates the degree of his or her satisfaction not only with the item (e.g.,
its quality) but also with the seller (e.g., the promptness of delivery).