Node Document

An efficient deep learning-assisted person re-identification solution for intelligent video surveillance in smart cities


【Authors】 Muazzam MAQSOOD; Sadaf YASMIN; Saira GILLANI; Maryam BUKHARI; Seungmin RHO; Sang-Soo YEO

【Author】 Muazzam MAQSOOD; Sadaf YASMIN; Saira GILLANI; Maryam BUKHARI; Seungmin RHO; Sang-Soo YEO; Department of Computer Science, COMSATS University Islamabad, Attock Campus; Department of Computer Science, Bahria University; Department of Industrial Security, Chung-Ang University; Department of Computer Engineering, Mokwon University

【Corresponding Authors】 Seungmin RHO; Sang-Soo YEO

【Affiliations】 Department of Computer Science, COMSATS University Islamabad, Attock Campus; Department of Computer Science, Bahria University; Department of Industrial Security, Chung-Ang University; Department of Computer Engineering, Mokwon University

【Abstract】 Innovations in Internet of Everything (IoE)-enabled systems are changing the settings in which we interact in smart units, recognized globally as smart city environments. Intelligent video-surveillance systems are critical to increasing the security of these smart cities. In today's world of smart video surveillance, person re-identification (Re-ID) in particular has gained increased attention from researchers. Many researchers have designed deep learning-based algorithms for person Re-ID because such methods have achieved substantial breakthroughs in computer vision. In this line of research, we designed an adaptive feature refinement-based deep learning architecture for person Re-ID. In the proposed architecture, spatial and channel attention are learned from the inter-channel and inter-spatial relationships of features between images of the same individual taken from non-identical camera viewpoints. In addition, a spatial pyramid pooling layer is inserted to extract multiscale, fixed-dimension feature vectors irrespective of the size of the feature maps. The model's effectiveness is validated on the CUHK01 and CUHK02 datasets. Compared with existing approaches, the approach presented in this paper achieves encouraging Rank-1 and Rank-5 scores of 24.6% and 54.8%, respectively.
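The abstract's fixed-dimension property of spatial pyramid pooling (a constant-length output vector regardless of feature-map size) can be illustrated with a minimal NumPy sketch. This is a generic SPP illustration, not the authors' implementation; the function name and the pyramid levels (1, 2, 4) are assumptions for demonstration.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length vector.

    For any H and W (with H, W >= max(levels)), the output has
    C * sum(l * l for l in levels) elements, which is the
    fixed-dimension behavior the abstract describes.
    """
    c, h, w = fmap.shape
    parts = []
    for l in levels:
        # Bin edges that cover the whole map even when H or W
        # is not evenly divisible by the pyramid level.
        hs = np.linspace(0, h, l + 1).astype(int)
        ws = np.linspace(0, w, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                bin_ = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                parts.append(bin_.max(axis=(1, 2)))  # per-channel max
    return np.concatenate(parts)

# Two feature maps of different spatial sizes yield vectors of
# identical length: 8 channels * (1 + 4 + 16) bins = 168 elements.
v1 = spatial_pyramid_pool(np.random.rand(8, 13, 17))
v2 = spatial_pyramid_pool(np.random.rand(8, 24, 24))
```

Because the bin grid scales with the input, no resizing of the feature maps is needed before the pooled vector is handed to fixed-size fully connected layers.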

【Funding】 Supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0008703, The Competency Development Program for Industry Specialist), and by the MSIT (Ministry of Science and ICT), Republic of Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01799) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation)
  • 【Source】 Frontiers of Computer Science, 2023, Issue 04
  • 【CLC Classification】 TP391.41; TP18