Free download of the English article "Multimedia Hashing and Networking" with its Persian translation
Persian article title | در هم سازی چند رسانه ای و شبکه بندی (Multimedia Hashing and Networking) |
English article title | Multimedia Hashing and Networking |
Related fields | Information Technology and Computer Engineering, Multimedia Systems, Cloud Computing, Internet and Wide-Area Networks, Computer Networks |
Format of the free files | The English article and its free Persian translation are available for download as PDF; the translation can also be purchased in Word format |
Translation quality | The translation quality of this article is average |
Publisher | IEEE |
Journal | IEEE MultiMedia |
Year of publication | 2016 |
Product code | F543 |
Article contents: Multimedia Hashing; Hashing via Deep Learning; Multimedia Networking; Multimedia Information Networks (associations between images and keywords are determined by users); Multimedia Conferences
Excerpt from the Persian translation of the article: Multimedia Hashing
Excerpt from the English article:

Multimedia Hashing

We explore two different methodologies related to multimedia hashing, shallow-learning-based hashing and deep-learning-based hashing, demonstrating state-of-the-art techniques for enabling efficient multimedia storage, indexing, and retrieval.

Hashing by Shallow Learning

Hashing [1] has attracted considerable attention from researchers and practitioners in computer vision, machine learning, data mining, information retrieval, and other related areas. A variety of hashing techniques have been developed to encode documents, images, videos, or other types of data into a set of binary codes (used as hash keys), while preserving certain similarities among the original data. With such binary codes, similarity searches can be rapidly performed over massive datasets, thanks to the high efficiency of pairwise comparison using the Hamming distance.

Early endeavors in hashing concentrated on employing random permutations or projections to construct hash functions. Well-known representatives include Min-wise Hashing (MinHash) [2] and Locality-Sensitive Hashing (LSH) [3]. MinHash estimates the Jaccard set similarity, while LSH accommodates various distance or similarity metrics, such as the ℓp distance for p ∈ (0, 2], cosine similarity, and kernel similarity. Because the hash functions are randomized, more bits per hash table are required to achieve high precision. This typically reduces recall, and multiple hash tables are thus required to achieve satisfactory accuracy of retrieved nearest neighbors. The overall number of hash bits used in one application can easily run into the thousands.
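As a concrete illustration of randomized hashing, the following sketch (not the paper's code) implements random-hyperplane LSH, often called SimHash, for cosine similarity; the dimensions, bit counts, and helper names are hypothetical. MinHash would follow the same pattern, using random permutations over set elements instead of random projections.

```python
import numpy as np

# Minimal sketch of random-hyperplane LSH (SimHash) for cosine similarity.
# Each bit is the sign of a projection onto a random Gaussian direction, so the
# Hamming distance between two codes approximates the angle between the vectors.
# All names and sizes below are illustrative, not taken from the paper.

rng = np.random.default_rng(0)

def make_hyperplanes(dim, n_bits):
    # One random Gaussian direction per hash bit.
    return rng.standard_normal((dim, n_bits))

def lsh_code(x, planes):
    # Binary code: bit is 1 where the projection is positive, 0 otherwise.
    return (x @ planes > 0).astype(np.uint8)

def hamming(a, b):
    # Number of differing bits between two codes.
    return int(np.count_nonzero(a != b))

# Toy usage: a near-duplicate vector collides on far more bits than an unrelated one.
planes = make_hyperplanes(dim=64, n_bits=32)
x = rng.standard_normal(64)
y = x + 0.1 * rng.standard_normal(64)   # slightly perturbed copy of x
z = rng.standard_normal(64)             # unrelated vector
print(hamming(lsh_code(x, planes), lsh_code(y, planes)))  # small distance
print(hamming(lsh_code(x, planes), lsh_code(z, planes)))  # roughly n_bits / 2
```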
Beyond data-independent randomized hashing schemes, a recent trend in machine learning is to develop data-dependent hashing techniques that learn a set of compact hash codes from a training dataset (a multimedia database, for example). Binary codes have been popular in this scenario because of their simplicity and computational efficiency. The compact hashing scheme can accomplish an almost constant-time nearest neighbor search, after encoding the entire dataset into short binary codes and then aggregating them into a hash table. Additionally, compact hashing is particularly beneficial for storing massive-scale data. For example, saving one hundred million samples, each with 100 binary bits, costs less than 1.5 Gbytes, which can easily fit in memory.

To create effective compact hash codes, numerous methods have been presented, both unsupervised and supervised. The state-of-the-art unsupervised hashing method, Discrete Graph Hashing (DGH) [4], leverages the concept of "anchor graphs" to capture the neighborhood structure inherent in a given massive dataset, and then formulates a graph-based hashing model over the entire dataset. This model hinges on a novel discrete optimization procedure to achieve nearly balanced and uncorrelated hash bits, where the binary constraints are explicitly imposed and handled. The DGH technique has been demonstrated to outperform conventional unsupervised hashing methods, such as Iterative Quantization, Spectral Hashing, and Anchor Graph Hashing [1], which fail to sufficiently capture local neighborhoods of the raw data in the discrete code space.

The state-of-the-art supervised hashing method, Supervised Discrete Hashing (SDH) [5], incorporates supervised label information and formulates hashing in terms of linear classification, where the learned binary codes are expected to be optimal for classification. SDH applies a joint optimization procedure that learns a binary embedding and a linear classifier simultaneously. The SDH technique has also been demonstrated to outperform previous supervised hashing methods [1].

There exist many other interesting hashing techniques, such as document hashing [6], video hashing [7], structured data hashing [8], and intermedia hashing [9]. Note that all of the techniques we have mentioned depend on shallow-learning algorithms. Nonetheless, owing to the high speed of shallow-learning-based hashing, these state-of-the-art techniques have been widely used for high-efficiency multimedia storage, indexing, and retrieval, especially in multimedia search applications on smartphones. Several well-known startups, such as Snapchat, Pinterest, SenseTime, and Face++, use appropriate hashing techniques to manage and search through millions or even billions of images.
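To make the excerpt's storage and lookup claims concrete, here is a minimal sketch (not from the paper) of searching packed binary codes by Hamming distance with NumPy. The random codes are hypothetical stand-ins for codes that a learned method such as DGH or SDH would produce; at 100 bits per sample, one hundred million codes occupy about 1.25 GB, consistent with the "less than 1.5 Gbytes" figure above.

```python
import numpy as np

# Minimal sketch of retrieval over compact binary codes. The codes below are
# random placeholders for learned hash codes; the point is the packed storage
# and the XOR-plus-popcount Hamming search.

rng = np.random.default_rng(0)

# Precomputed popcounts for all 256 byte values.
POPCOUNT = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def pack_codes(bits):
    # Pack an (n, n_bits) array of 0/1 values into (n, n_bits/8) bytes.
    return np.packbits(bits, axis=1)

def hamming_search(query_packed, db_packed, k=5):
    # Indices of the k database codes closest to the query in Hamming distance.
    dists = POPCOUNT[np.bitwise_xor(db_packed, query_packed)].sum(axis=1)
    return np.argsort(dists)[:k]

# Toy usage: 10,000 samples with 96-bit codes (12 bytes each). Scaling the same
# layout to 100 million samples with 100-bit codes needs roughly 1.25 GB.
db_bits = rng.integers(0, 2, size=(10_000, 96), dtype=np.uint8)
db_packed = pack_codes(db_bits)
query_packed = pack_codes(db_bits[42:43])          # query with sample 42's own code
print(hamming_search(query_packed, db_packed))     # sample 42 ranks first (distance 0)
```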