A regularization loss function for improving the accuracy of deep learning classification models
Submitted: 2019-10-28  Revised: 2019-10-28
DOI:
Keywords: deep learning  regularization loss function  over-fitting  label smoothing
Funding: Natural Science Foundation of Hubei Province (2017CFB784); Fundamental Research Funds for the Central Universities (CZY17001)
Author  Affiliation  E-mail
Li Chenghua  College of Electronic and Information Engineering, South-Central University for Nationalities  mdlich@mail.scuec.edu.cn
Yang Bin  College of Electronic and Information Engineering, South-Central University for Nationalities  yangbin_ai@sina.com
Jiang Xiaoping  College of Electronic and Information Engineering, South-Central University for Nationalities
Shi Hongling  College of Electronic and Information Engineering, South-Central University for Nationalities
Abstract:
      To mitigate the over-fitting caused by the label marginalization effect in deep learning classification models, a new regularization loss function, the score clustering loss, is proposed. It learns a score center for each class and pulls the score vectors of same-class samples toward that center. The probability vector obtained by normalizing a score center with the softmax function serves as the optimal smoothed label. The score clustering loss thus smooths labels automatically, avoiding the manual, experience-based choice of a label smoothing coefficient and reducing the risk of over-fitting. This paper gives the definition and derivation of the score clustering loss function and compares it experimentally with other regularization loss functions on rigid and non-rigid image classification tasks. The results show that the score clustering loss significantly improves the accuracy of classification models.
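The abstract does not give the exact formula for the score clustering loss, but the idea it describes can be sketched as a center-loss-style squared-distance penalty applied to the logit (score) vectors, with each class's learned score center passed through softmax to yield its smoothed label. The function and variable names below (`score_clustering_loss`, `softmax`, `centers`) are hypothetical, chosen for illustration; this is a minimal NumPy sketch under those assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def score_clustering_loss(scores, labels, centers):
    """Assumed formulation: mean squared distance between each sample's
    score (logit) vector and the learned score center of its class."""
    diffs = scores - centers[labels]          # (N, C): sample score minus its class center
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

# Toy example: 4 samples, 3 classes.
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 3))              # model's score vectors (logits)
labels = np.array([0, 1, 2, 0])               # ground-truth class indices
centers = rng.normal(size=(3, 3))             # one learned score center per class

loss = score_clustering_loss(scores, labels, centers)
# Per the abstract, softmax of each score center gives that class's smoothed label.
smoothed_labels = softmax(centers)            # (3, 3), each row sums to 1
```

In training, this term would be added to the usual cross-entropy loss, with the centers updated by gradient descent alongside the network weights, so no hand-tuned smoothing coefficient is needed.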