
HCMSL: Hybrid cross-modal similarity learning for cross-modal retrieval

Type:
Journal article
Authors:
Zhang, Chengyuan;Song, Jiayu;Zhu, Xiaofeng;Zhu, Lei;Zhang, Shichao
Corresponding authors:
Zhang, Shichao(zhangsc@csu.edu.cn);Zhu, Lei(leizhu@hunau.edu.cn)
Author affiliations:
[Zhang, Chengyuan] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Hunan, Peoples R China.
[Song, Jiayu; Zhang, Shichao] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Hunan, Peoples R China.
[Zhu, Xiaofeng] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 610054, Sichuan, Peoples R China.
[Zhu, Lei] Hunan Agr Univ, Coll Informat & Intelligence, Changsha 410128, Hunan, Peoples R China.
Corresponding institutions:
[Zhang, S.] School of Computer Science and Engineering, China
[Zhu, L.] College of Information and Intelligence, China
Language:
English
Keywords:
Cross-modal retrieval;deep learning;hybrid cross-modal similarity;intra-modal semantic correlation
Journal:
ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS
ISSN:
1551-6857
Year:
2021
Volume:
17
Issue:
1s
Pages:
1–22
Funding:
This work was supported in part by the National Natural Science Foundation of China (61702560, 62072166, 61836016, 61672177) and by projects (2018JJ3691, 2016JC2011) of the Science and Technology Plan of Hunan Province. Authors' addresses: C. Zhang, College of Computer Science and Electronic Engineering, Hunan University, Changsha, Hunan, 410082; email: cyzhangcse@hnu.edu.cn; J. Song and S. Zhang (corresponding author), School of Computer Science and Engineering, Central South University, Changsha, Hunan, 410083; emails: {jiayusong, zhangsc}@csu.edu.cn; X. Zhu, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054; email: xfzhu0011@hotmail.com; L. Zhu (corresponding author), College of Information and Intelligence, Hunan Agricultural University, Changsha, Hunan, 410128; email: leizhu@hunau.edu.cn. © 2021 Association for Computing Machinery. https://doi.org/10.1145/3412847
Institutional attribution:
This institution is listed as a non-primary (other) institution.
Abstract:
The purpose of cross-modal retrieval is to find the relationships between samples of different modalities and, given a sample in one modality, to retrieve samples with similar semantics from other modalities. Because data of different modalities present heterogeneous low-level features and semantically related high-level features, the central problem of cross-modal retrieval is how to measure the similarity between different modalities. In this article, we present a novel cross-modal retrieval method, named the Hybrid Cross-Modal Similarity Learning model (HCMSL for short). I...
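The HCMSL model itself is not reproduced here. As a minimal illustration of the retrieval step the abstract describes — ranking samples of one modality by similarity to a query from another — the sketch below assumes both modalities have already been projected into a shared embedding space (the projection is exactly what such models learn) and uses plain cosine similarity; all vectors and names are illustrative, not from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, gallery: list, top_k: int = 2) -> list:
    """Return indices of the top-k gallery items (other modality)
    ranked by similarity to the query embedding."""
    scores = [cosine_similarity(query_vec, g) for g in gallery]
    return sorted(range(len(gallery)), key=lambda i: -scores[i])[:top_k]

# Toy example: one text-query embedding against three image embeddings,
# all already mapped into a common 4-d space (values are made up).
text_query = np.array([1.0, 0.0, 1.0, 0.0])
image_gallery = [
    np.array([0.9, 0.1, 0.8, 0.0]),  # semantically close to the query
    np.array([0.0, 1.0, 0.0, 1.0]),  # unrelated
    np.array([1.0, 0.2, 0.9, 0.1]),  # semantically close to the query
]
ranking = retrieve(text_query, image_gallery)  # → [0, 2]
```

In a learned model the embeddings would come from modality-specific networks trained so that semantically matching pairs score high under exactly this kind of similarity; the ranking step itself stays this simple.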
