Volume 14, Issue 1 (6-2017)                   JSDP 2017, 14(1): 53-70 | Back to browse issues page



Zare Bidoki T, Sadeghi M T, Abutalebi H R. Semi Supervised Multiple Kernel Learning using Distance Metric Learning Techniques. JSDP. 2017; 14 (1) :53-70
URL: http://jsdp.rcisp.ac.ir/article-1-362-en.html
Ph.D. student, Yazd University
Abstract:

The distance metric plays a key role in many machine learning and computer vision algorithms, so choosing an appropriate metric has a direct effect on their performance. Recently, distance metric learning from labeled data or other available supervisory information has become a very active research area in machine learning. Studies in this area have shown that metric-learning-based algorithms considerably outperform commonly used metrics such as the Euclidean distance. In the kernelized version of metric learning algorithms, the data points are implicitly mapped into a new feature space by a non-linear kernel function, and the distance metric is then learned in that space. Using a kernel function improves the performance of pattern recognition algorithms; however, choosing a proper kernel and tuning its parameter(s) are the main difficulties of such methods. Using an appropriate composite kernel instead of a single kernel is one of the best solutions to this problem. In this study, a multiple kernel is constructed as a weighted sum of a set of basis kernels, and we propose different learning approaches for determining the kernel weights. The proposed learning techniques arise from distance metric learning concepts. The methods operate within a semi-supervised framework: different cost functions are considered, and learning is performed using a limited amount of supervisory information, given as a small set of similarity and/or dissimilarity pairs. We define four distance-metric-based cost functions for optimizing the multiple kernel weights. In the first structure, the average distance between the similarity pairs is taken as the cost function and minimized subject to maximizing the average distance between the dissimilarity pairs.
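The composite kernel described above, a weighted sum of basis kernels, and the first pair-based cost can be sketched as follows. This is a minimal illustration assuming RBF basis kernels and a simple difference-of-averages cost; the function names and the trade-off parameter `lam` are hypothetical, not taken from the paper:

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of one RBF basis kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def composite_kernel(X, gammas, weights):
    """Multiple kernel as a weighted sum of basis kernels: K = sum_m w_m * K_m."""
    return sum(w * rbf_kernel(X, g) for w, g in zip(weights, gammas))

def pair_cost(K, similar, dissimilar, lam=1.0):
    """Average squared feature-space distance over similarity pairs minus
    lam times the average over dissimilarity pairs.  The squared distance in
    the kernel-induced space is d^2(i, j) = K_ii + K_jj - 2 * K_ij."""
    d2 = lambda i, j: K[i, i] + K[j, j] - 2 * K[i, j]
    s = np.mean([d2(i, j) for i, j in similar])
    d = np.mean([d2(i, j) for i, j in dissimilar])
    return s - lam * d
```

A weight-learning procedure would then search for non-negative weights that make `pair_cost` small, i.e. similar pairs close and dissimilar pairs far apart in the induced feature space.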
This is, in fact, a common goal in distance metric learning. In the second structure, we try to preserve the topological structure of the data using the idea of the graph Laplacian: a penalty term that preserves this structure is added to the cost function. The same penalty term is also used in the remaining two structures. In the third structure, the effect of each dissimilarity pair is treated as an independent constraint. Finally, in the fourth structure, maximization of the distance between the dissimilarity pairs is included in the cost function rather than imposed as a constraint. The proposed methods are examined in a clustering application using the kernel k-means algorithm. Both a synthetic data set (an XOR data set) and real data sets (from the UCI repository) are used in the experiments, and the performance of the clustering algorithm with single kernels serves as the baseline. Our experimental results confirm that using the multiple kernel not only improves the clustering results but also makes the algorithm independent of the choice of the best single kernel. The results also show that increasing the number of constraints, as in the third structure, leads to instability of the algorithm, which is expected.
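As a sketch of how a learned multiple kernel could be evaluated, the kernel k-means step can be run directly from a Gram matrix. This is a minimal version based on the standard kernel k-means distance expansion; the function and its defaults are illustrative, not the authors' implementation:

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    """Kernel k-means: cluster points given only a Gram matrix K.
    Squared distance from phi(x_i) to the centroid of cluster C:
      ||phi(x_i) - mu_C||^2 = K_ii - (2/|C|) * sum_{j in C} K_ij
                              + (1/|C|^2) * sum_{j,l in C} K_jl
    """
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            if not mask.any():                 # revive an empty cluster
                mask[rng.integers(n)] = True
            nc = mask.sum()
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

Passing the Gram matrix of the learned composite kernel (instead of a single kernel's) to such a routine is what the baseline comparison in the experiments amounts to.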

Full-Text [PDF 6834 kb]
Type of Study: Fundamental | Subject: Paper
Received: 2015/04/23 | Accepted: 2016/12/18 | Published: 2017/07/18 | ePublished: 2017/07/18


© 2015 All Rights Reserved | Signal and Data Processing