
Making Digital Artifacts on the Web Verifiable and Reliable

Abstract:
The current Web has no general mechanisms to make digital artifacts such as datasets, code, texts, and images verifiable and permanent. For digital artifacts that are supposed to be immutable, there is moreover no commonly accepted method to enforce this immutability. These shortcomings have a serious negative impact on the ability to reproduce the results of processes that rely on Web resources, which in turn heavily impacts areas such as science, where reproducibility is important. To solve this problem, we propose trusty URIs containing cryptographic hash values. We show how trusty URIs can be used for the verification of digital artifacts, in a manner that is independent of the serialization format in the case of structured data files such as nanopublications. We demonstrate how the contents of these files become immutable, including dependencies on external digital artifacts, thereby extending the range of verifiability to the entire reference tree. Our approach sticks to the core principles of the Web, namely openness and decentralized architecture, and is fully compatible with existing standards and protocols. Evaluation of our reference implementations shows that these design goals are indeed accomplished by our approach, and that it remains practical even for very large files.
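The core idea can be illustrated with a short sketch: the hash of an artifact's bytes becomes part of its URI, so the identifier itself certifies the content. This is a simplified illustration only, not the official trusty URI algorithm (which defines its own module identifiers and Base64-style hash encoding); the base URI and the dot-suffix convention below are made up for the example.

```python
import hashlib

def make_hashed_uri(base_uri, content):
    # Hash the artifact's bytes; the digest becomes the URI suffix.
    # (Illustrative convention only; trusty URIs define their own encoding.)
    digest = hashlib.sha256(content).hexdigest()
    return base_uri + "." + digest

# The same bytes always yield the same URI, so the URI pins the content.
uri = make_hashed_uri("http://example.org/np1", b"example artifact content")
print(uri)
```

Anyone who later retrieves the artifact can recompute the hash and compare it against the URI, without contacting the original publisher.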





EXISTING SYSTEMS:

· Our approach sticks to the core principles of the Web, namely openness and decentralized architecture, and is fully compatible with existing standards and protocols.
· There are a number of existing approaches that include hash values in URIs for verifiability purposes, e.g. for legal documents.
· This reversibility is needed whenever an existing trusty URI resource containing self-references has to be verified.
· We transformed these nanopublications into the formats N-Quads and TriX using existing off-the-shelf converters.


DISADVANTAGE:
· The same input always leads to exactly the same hash value, whereas even a minimally modified input returns a completely different value.
· The downside of such custom-made solutions is that custom-made software is required to generate, resolve, and check the hash references.
· We present a general approach that could replace such specific ones, thereby establishing interoperability of systems and a standard infrastructure for creating, resolving, and checking hash references.
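The two hash properties listed above can be demonstrated directly with a standard library; the example inputs are made up, and SHA-256 stands in for whichever hash function an implementation chooses.

```python
import hashlib

# Two inputs differing in a single character:
digest_a = hashlib.sha256(b"nanopublication v1").hexdigest()
digest_b = hashlib.sha256(b"nanopublication v2").hexdigest()

# Identical input always reproduces the identical digest ...
assert digest_a == hashlib.sha256(b"nanopublication v1").hexdigest()
# ... while the one-character change yields an unrelated-looking digest.
assert digest_a != digest_b

print(digest_a)
print(digest_b)
```

This "avalanche" behavior is exactly what makes a hash embedded in a URI useful: any tampering with the artifact, however small, produces a mismatch.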




PROPOSED SYSTEMS

· We propose trusty URIs containing cryptographic hash values, and show how they can be used for the verification of digital artifacts, in a manner that is independent of the serialization format in the case of structured data files such as nanopublications.
· We propose an approach to make items on the (Semantic) Web verifiable, immutable, and permanent.
· This approach includes cryptographic hash values in Uniform Resource Identifiers (URIs) and adheres to the core principles of the Web, namely openness and decentralized architecture.
· Nanopublications have been proposed as a new way of scientific publishing.
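The verification step the proposal enables can be sketched as follows: extract the hash from the URI and compare it against a fresh hash of the retrieved bytes. Again, the suffix-after-the-last-dot convention is an assumption for this sketch, not the actual trusty URI layout, and the URI is invented.

```python
import hashlib

def verify(uri, content):
    # Compare the hash claimed in the URI with a fresh hash of the
    # retrieved content. (Assumed convention: hash after the final ".")
    claimed = uri.rsplit(".", 1)[-1]
    actual = hashlib.sha256(content).hexdigest()
    return claimed == actual

data = b"immutable dataset"
uri = "http://example.org/dataset." + hashlib.sha256(data).hexdigest()

print(verify(uri, data))         # unchanged content verifies
print(verify(uri, data + b"!"))  # any modification is detected
```

Because verification needs only the URI and the content, any client can perform it independently, with no trusted third party involved.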

ADVANTAGE:
· Nanopublications can cite other nanopublications via their URIs, thereby creating complex citation networks.
· Published nanopublications are supposed to be immutable, but there is currently no mechanism to enforce this.
· It is well-known that even artifacts that are supposed to be immutable tend to change over time, while often keeping the same URI reference.
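Because cited URIs themselves carry hashes, verifying one artifact can be extended to its whole reference tree, as the abstract notes. The sketch below models this with an explicit citation map; in real nanopublications the citations are embedded in the (hashed) RDF content itself, and all names and URIs here are invented for illustration.

```python
import hashlib

def hashed_uri(base, content):
    # Same illustrative convention as before: digest as URI suffix.
    return base + "." + hashlib.sha256(content).hexdigest()

def verify_tree(uri, store, refs):
    """Verify an artifact and, recursively, every artifact it cites.

    store maps URI -> content bytes; refs maps URI -> list of cited URIs.
    (Sketch only: assumes the citation graph is acyclic.)
    """
    content = store[uri]
    if uri.rsplit(".", 1)[-1] != hashlib.sha256(content).hexdigest():
        return False
    return all(verify_tree(r, store, refs) for r in refs.get(uri, []))

leaf = hashed_uri("http://example.org/np-leaf", b"cited result")
root = hashed_uri("http://example.org/np-root", b"citing publication")
store = {leaf: b"cited result", root: b"citing publication"}
refs = {root: [leaf]}

print(verify_tree(root, store, refs))  # True: whole reference tree verifies
```

If any nanopublication anywhere in the tree were altered after publication, its hash would no longer match its URI and the whole chain of citations leading to it would fail verification.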



CONCLUSION:
We have presented a proposal for unambiguous URI references to make digital artifacts on the (Semantic) Web verifiable, immutable, and permanent. If adopted, it could have a considerable impact on the structure and functioning of the Web, could improve the efficiency and reliability of tools using Web resources, and could become an important technical pillar for the Semantic Web, in particular for scientific data, where provenance and verifiability are important. Scientific data analyses, for example, might in the future be conducted in a fully reproducible manner within "data projects" analogous to today's software projects. The dependencies in the form of datasets could be automatically fetched from the Web, similar to what Apache Maven does for software projects, but in a decentralized and verifiable manner.
