Free download of the English article "The Semantic Web: From Representation to Realization" with Persian translation
Persian article title: | وب معنایی: از نمایش تا تحقق |
English article title: | The Semantic Web: From Representation to Realization |
Related fields: | Computer Engineering and Information Technology, Artificial Intelligence, Internet and Wide-Area Networks |
Format of free files: | The free English article and Persian translation are provided as PDF files |
Translation quality: | The quality of this article's translation is low |
Publisher: | Springer |
Product code: | f263 |
Excerpt from the English article:

1 Introduction

Intelligent automated retrieval, manipulation and presentation of information defines the speed of progress in much of today's high-technology work. In a world where information is at the center, any improvement is welcomed that can help automate even more of the massive amounts of data manipulation necessary. In many people's vision of the Semantic Web, machines take center stage, based on a deeper knowledge of the data they manipulate than is currently possible. Doing so calls for metadata – data about the data. Making machines smarter at tasks such as automatically retrieving relevant information at relevant times from the vast collection, even on today's average laptop hard drive, requires much more meta-information than is available at present for this data. Accurate metadata can only be derived from an understanding of content; classifying photographs according to what they depict, for example, is best done by recognizing the entities in them, lighting conditions, weather, film stock, lens type used, etc. Authoring metadata for images by hand, to continue with this example, would be an impossible undertaking, even if we limited the metadata to surface phenomena such as the basic objects included in the picture, as the number of photographs generated and shared by people is increasing exponentially. Power tools designed for manual metadata creation would only improve the situation incrementally, not exponentially, as needed. Although text analysis has come quite a long way and is much further advanced than image analysis, artificial intelligence techniques for analyzing text and images have a long way to go before they can reliably decipher the complex content of such data. The falling price of computing power could help in this respect, as image analysis is resource-intensive.
This will not be sufficient, however, as general-purpose image analysis (read: software with "common sense") is needed to analyze and classify, based on content, the full range of images produced by people. On the one hand, achieving the full potential of a Semantic Web by leaving metadata creation to current AI technologies will not be possible, as these technologies are simply not powerful enough. This state of affairs may very possibly extend well beyond the next decade. On the other hand, because the growth of data available online is exponential, and can be expected to remain so, manual metadata entry will never catch up to the extent necessary for significant effect. Creating by hand the full set of ontologies required for adequate machine manipulation would be a Herculean effort; waiting for adequate machine intelligence could delay the Semantic Web for decades. Does this mean the Semantic Web is unrealizable until machines become significantly smarter? Not necessarily. While we believe that neither hand-crafted ontologies nor current (or next-wave) artificial intelligence techniques alone can achieve a giant leap towards the Semantic Web, a clever combination of the two could potentially achieve more than a notable improvement. The idea is that if online manual labor could somehow be augmented in such a way that it supported automatic classification, making up for its weak points, this could help move the total amount of semantically-tagged data closer to the 100% mark and help automatic processes get over the well-known "90% accuracy brick wall". For us, the question of how to achieve the vision of the Semantic Web has been: What kind of collaborative framework will best address the building of the Semantic Web? Most tools and methodologies designed for automating data handling are not suitable for human usage – the underlying data representations are designed for machines in ways that are not meant for human consumption.
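This contrast between machine-oriented and human-oriented representations can be made concrete with a small sketch (the fact, identifiers and markup below are our own illustration, not taken from the paper):

```python
# The same fact as machine-oriented structured data (subject-predicate-object
# triples) versus human-oriented markup. All identifiers are hypothetical.
machine_form = [
    ("photo42", "depicts", "Alice"),
    ("photo42", "takenOn", "2008-07-15"),
]

human_form = "<p>A photo of <b>Alice</b>, taken on July 15, 2008.</p>"

# A program can answer "what does photo42 depict?" directly from the triples:
depicts = [o for (s, p, o) in machine_form if s == "photo42" and p == "depicts"]
print(depicts)  # ['Alice']

# Answering the same question from the HTML string would require parsing
# free-form text -- its structure carries no machine-readable meaning.
```

The triples are trivial for a machine to query but tedious for a person to author, while the HTML reads naturally but gives software nothing to work with; this is the gap the paper sets out to bridge.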
Data formats designed exclusively for human usage, such as HTML, are not suitable for machine manipulation – the data is unstructured, the process is slow and error-prone and ultimately, to make it work, calls for massive amounts of machine intelligence that are well beyond today's reach. This line of reasoning has resulted in our two-pronged approach to the creation of the Semantic Web: First, we develop a system that helps people take a more structured approach to their data creation, management and manipulation; second, we develop automatic analysis mechanisms that use the human-provided structured data and framework to expand the semantic classification beyond what is possible to do by hand. We have already achieved significant progress on the first part of this approach; the second part is also well under way. Our method facilitates an iterative interaction loop between the user's information input, the automated extension of this work, and subsequent monitoring of the user's feedback on those extensions. Semantic Cards, or SemCards, is what we call the underlying representation of our approach. It is a technology that combines ontology creation, management and usage with the user interface in a way that simultaneously supports (a) human metadata creation, manipulation and consumption, (b) expert-user creation and maintenance of ontologies, and (c) automation services that are augmented by human-created, meaningful examples of metadata and semantic relationship links, which greatly enhance their functionality and accuracy. SemCards provide an intermediate ontological representational level that allows end-users to create rich semantic networks for their information sphere. One of the big problems with automation is the low quality of its results. While statistics may work reasonably well in some cases as a solution to this, for any single individual the "average user" is all too often too different on too many dimensions for such an approach to be useful.
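As a rough illustration of the kind of representation just described (the class, field and method names are our own sketch under stated assumptions, not the actual SemCard schema), each card can be pictured as a typed node carrying metadata fields and user-authored semantic links:

```python
from dataclasses import dataclass, field

@dataclass
class SemCard:
    """Hypothetical sketch of a SemCard: a typed node in a user's
    semantic network, carrying metadata fields and semantic links."""
    card_id: str
    card_type: str                   # ontology type, e.g. "Photo" or "Person"
    fields: dict = field(default_factory=dict)
    links: list = field(default_factory=list)   # (relation, target card_id)

    def link(self, relation: str, target: "SemCard") -> None:
        """Record a user-authored semantic relationship to another card."""
        self.links.append((relation, target.card_id))

alice = SemCard("c1", "Person", {"name": "Alice"})
photo = SemCard("c2", "Photo", {"title": "Beach trip"})
photo.link("depicts", alice)   # a human-created, machine-usable example
print(photo.links)  # [('depicts', 'c1')]
```

Links like `("depicts", "c1")` are exactly the kind of specific, user-motivated examples that the paper argues can feed downstream automation.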
The SemCard intermediate layer encourages users to create metadata and semantic links, which provides the underlying automation with highly specific, user-motivated examples. The net effect is an increase in the possible collaboration between the user and the machine: semi-intelligent processes can be usefully employed without requiring significant or immediate leaps in AI research. From the users' perspective, what we have developed is a network portal where they can organize their own information for personal use, publish any of that information to any group – be it "emails" addressed to a single individual or photo albums shared with the world – and manage the information shared with them by others, whether it is documents, books, music, etc. Under the hood are powerful ontology-driven technologies for organizing all categories of data, including access management, relational (semantic) links and display policies, in a way that is relatively transparent to the user. The result is a system that offers improved automation and control over access management, information organization and display features. Here we describe the ideas behind our approach and give a short overview of a use case on the Semantic Web site Twine.com. The paper is organized as follows: First we review related work, then we describe the technology underlying SemCards and explain how they are used. We then describe our Web portal Twine.com, where we have implemented various user interfaces enabling the use of SemCards in a number of ways, including making semantically rich Web bookmarks, notes, blogs and semantically-annotated uploads.
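To make the "under the hood" description concrete, a minimal sketch of how a per-item record might combine data, access management, semantic links and display policy (the field names and the access rule below are our own illustration, not Twine.com's actual data model):

```python
# Hypothetical per-item record: the item's data sits alongside its
# access policy, semantic links and display policy. Names are illustrative.
item = {
    "id": "bookmark-42",
    "type": "WebBookmark",
    "data": {"url": "http://example.org", "title": "Example page"},
    "access": {"owner": "alice", "shared_with": ["photo-club"]},
    "links": [("about", "topic-semantic-web")],
    "display": {"view": "card", "summary_fields": ["title", "url"]},
}

def can_view(item: dict, user: str, groups: set) -> bool:
    """Illustrative access check: the owner, or any member of a group
    the item is shared with, may view it."""
    acl = item["access"]
    return user == acl["owner"] or bool(groups & set(acl["shared_with"]))

print(can_view(item, "bob", {"photo-club"}))   # True: bob is in photo-club
print(can_view(item, "carol", {"book-club"}))  # False: no shared group
```

Keeping the policies on the item itself is one way such a portal could enforce sharing rules uniformly across documents, photos, bookmarks and other data categories.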