Free download of the English paper "A Naïve Bayesian Classifier for Educational Qualification" with Persian translation
Article title (Persian) | A Naïve Bayesian Classification for Academic Qualifications |
Article title (English) | A Naïve Bayesian Classifier for Educational Qualification |
Related fields | Computer engineering, industrial engineering, data mining, algorithms and computation engineering, and cloud computing |
Keywords | Classification, data mining, educational qualification, kappa, Naïve Bayes |
Format of free papers |
The English paper and the free Persian translation are available for free download in PDF format; the translation can also be purchased and downloaded in Word format |
Translation quality | The quality of this translation is average |
Journal | Indian Journal of Science and Technology (Indjst)
Publication year | 2015
Product code | F542
Free English paper (PDF) |
Download the English paper for free |
Free Persian translation (PDF) |
Download the translated paper for free |
Buy the translation in Word format |
Purchase the translated paper in Word format |
Search translated papers | Search translated computer engineering papers |
Table of contents: Abstract; 1. Introduction; 2. Proposed System; 2.1 Naïve Bayes Overview; 2.2 Dataset Description; 2.3 Training and Testing the Sample Dataset; 2.4 Classification Method; 2.5 Algorithm; 3. Experimental Results and Analysis; 3.1 Performance Measures; 3.1.1 Sensitivity; 3.1.2 Specificity; 3.1.3 Accuracy; 3.1.4 Kappa; 3.1.5 Distribution; 3.1.6 Notation Used; 3.2 Analysis; 4. Conclusion |
Excerpt from the Persian translation: 1. Introduction |
Excerpt from the English paper: 1. Introduction There are quite a large number of instances where a person is initially judged or analyzed by the educational qualification he/she has gained. In such cases, categorizing persons according to their educational qualification would be of much help, and a decision made with technical assistance would be free from bias and hence universally applicable. This paper proposes a method to categorize educational qualification using the benchmark Naïve Bayesian classification algorithm. The method can be used in a variety of applications, such as segregation based on educational relevance, shortlisting a candidate for recruitment based on his/her degree of education, etc. The organization of this paper is as follows: Section 2 contains the literature survey, Section 3 explains the Naïve Bayesian algorithm and the proposed classification method, Section 4 analyses the experimental results based on the listed tabulations, and Section 5 concludes the paper. The Naïve Bayesian algorithm is a classical classification algorithm that has proved its simplicity and efficiency in various applications; a few articles exhibiting the efficiency of the classifier are discussed here. Mauricio A. Valle et al.10 discuss a method of predicting the determining attributes for a Naïve Bayesian classification algorithm, with a testing method based on cross-validation. It is verified experimentally that socio-demographic attributes do not contribute to predicting the future performance of sales agents in a call center. Dunja Mladenic et al.7 deal with choosing the features that contribute to classification using certain specifications, and with the learning ability of the classifier over text data whose distribution is uneven.
It is found that when the domain and the characteristics of the classification algorithm are taken into account, the performance of the classifier increases. Dong Tao et al.2 propose an improved Naïve Bayesian algorithm that combines the classical method with a feature selection method based on the Gini index; this hybrid method improves the performance of text categorization. Kabir Md Faisal et al.4 combine the k-means clustering method with the Naïve Bayesian classification algorithm to increase accuracy: the clustering method groups the training samples into similar categories, after which all the groups are trained under the Naïve Bayesian classifier. This method is verified to improve accuracy. Santra A.K. et al.8 prove that, in web usage mining, the time taken for classification and the memory utilized are reduced when a Naïve Bayesian classifier is used rather than decision trees. Liangxiao Jiang et al.5 suggest that the conditional-independence assumption on attributes in the original Naïve Bayesian algorithm is weak in certain cases, and propose a local weighting method that outperforms the classical algorithm in terms of accuracy. Pradeepta K. Sarangi et al.12 describe feature extraction using LU factorization followed by a Naïve Bayesian classifier for pattern recognition, demonstrating the universal applicability of the classifier. Yildrim P. and Birant D.11 discuss experimental verification of the effect of various distributions on the attributes; it is found that applying distributions chosen according to the nature of each attribute increases accuracy, compared with using a single distribution across all attributes. Abeer Badr El Din Ahmed and Ibrahim Sayed Elarab1 discuss the application of classification algorithms to predict the final grade of students.
Ron Kohavi6 proposes a hybrid classifier combining Naïve Bayesian and decision-tree methods, termed NBTree, to increase the accuracy of the classifier. It is also found that the class-conditional independence assumption is benign for small data sets, but for large data sets it leads to misclassification and reduced accuracy. Shasha Wang et al.9 propose an upgraded version of the NBTree hybrid classifier named multinomial NBTree (MNBTree), in which a multinomial Naïve Bayesian classifier is applied to the leaf nodes of a decision tree. To increase performance further, another improvement adds multiclass classification, and this system is called the multiclass version of MNBTree (MMNBTree). With reference to the above research articles, the advantages of the Naïve Bayesian classification algorithm are studied thoroughly, and it is found that this algorithm best suits the nature of the data used for the experiment, comprising both numerical and text data that contribute independently to the classification. |
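As a rough illustration of the kind of classifier the paper applies (not the authors' actual implementation; their dataset and attributes are not reproduced here), the following is a minimal categorical Naïve Bayes sketch with Laplace smoothing. The qualification features and labels are invented purely for the example:

```python
from collections import Counter
import math

class NaiveBayes:
    """Minimal categorical Naive Bayes classifier with Laplace smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.class_totals = Counter(y)
        # Log prior for each class: log(count(class) / total samples)
        self.priors = {c: math.log(n / len(y)) for c, n in self.class_totals.items()}
        # Per-class, per-feature counts of attribute values
        self.counts = {c: [Counter() for _ in X[0]] for c in self.classes}
        for row, label in zip(X, y):
            for i, v in enumerate(row):
                self.counts[label][i][v] += 1
        # Distinct values seen per feature, used as the smoothing denominator
        self.vocab = [set(row[i] for row in X) for i in range(len(X[0]))]
        return self

    def predict(self, row):
        best, best_lp = None, float("-inf")
        for c in self.classes:
            lp = self.priors[c]
            for i, v in enumerate(row):
                num = self.counts[c][i][v] + 1          # Laplace smoothing
                den = self.class_totals[c] + len(self.vocab[i])
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Hypothetical training data: (degree, field) -> role category
X = [["BSc", "CS"], ["MSc", "CS"], ["PhD", "EE"], ["BSc", "EE"], ["PhD", "CS"]]
y = ["junior", "senior", "senior", "junior", "senior"]
clf = NaiveBayes().fit(X, y)
print(clf.predict(["MSc", "EE"]))
```

The class-conditional independence assumption discussed above shows up directly in the inner loop: each attribute's likelihood is multiplied in (added in log space) without modeling any interaction between attributes.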