![Object-Level Scene Understanding in Computer Vision and Its Applications](https://wfqqreader-1252317822.image.myqcloud.com/cover/275/47755275/b_47755275.jpg)
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/2_01.jpg?sign=1739316710-HuDJD8obxxouHbQzU4A7IBiAA1pzrAK5-0-11e54845f760fab9a45b24f895b04001)
Figure 1-1 Mares and Foals Beneath a Large Oak Tree (George Stubbs) [1]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/2_02.jpg?sign=1739316710-p8LmEMN8rgnACRaE0ezYJZOp5ndPBzk9-0-04b7299c016c54e7bb2aa5b36f8ec513)
Figure 1-2 The goal of image scene semantic segmentation
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/2_03.jpg?sign=1739316710-IywIdel8kCpaVQUzCJDH6UvUBJProwFP-0-70b4a2e00a5e825cd25ecfc599e069f9)
Figure 1-3 Low-level image segmentation results [3]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/3_01.jpg?sign=1739316710-jsu4hCopmUIU7uy4D63IZfhOVWS39gGC-0-16d875ddd7597bf9eadc53e5ba6f60fb)
Figure 1-4 Interactive object extraction and region segmentation [7-9]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/3_02.jpg?sign=1739316710-7QlQeb94flKpO3F4Srlz3Ku1o3rPjLEG-0-1287ff3f9602d0f7443c3dfeccea59f5)
Figure 1-5 TextonBoost semantic segmentation and labeling of image scenes [13,14]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/4_01.jpg?sign=1739316710-YgRpLqehypzh1q6Z8MVqrjuG2rIckvlK-0-604a1eb2e035bcc0ba3583c62551bc29)
Figure 1-6 Semantic segmentation of street-scene images from multiple views [15]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/4_02.jpg?sign=1739316710-NgxvrJv5dh28CZOQ37Pps5eS5uWryllN-0-bc4213bb2e5648d1503b1aa3e111dde4)
Figure 1-7 Label Transfer results for image scene semantic transfer [17]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/4_03.jpg?sign=1739316710-6mSywngodat8C0MZrAgLdDD45Ed0AYgJ-0-2ed8b2e68f44953276a6f0ae437153ee)
Figure 1-8 Semantic transfer results on street-scene images [19]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/5_01.jpg?sign=1739316710-oKU4ycjFOdBiLMm8t9Oi7zfMiSE0Gyno-0-64b291d2c4dad7811e1876f6e78513f2)
Figure 1-9 Foreground object co-segmentation results across multiple images [25]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/5_02.jpg?sign=1739316710-GV3k3aSEJt64nvxx7IK8hxoXq2M4PYt2-0-5b87f8f1f9a8371f4622e9e6609588b4)
Figure 1-10 A context-driven scene parsing method focusing on rare classes [26]. Common classes are shown in blue rectangles and rare classes in yellow rectangles. The class-distribution bar chart on the right shows that the rare-class samples after augmentation (yellow) are distributed more evenly than before augmentation (blue).
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/5_03.jpg?sign=1739316710-ek8bSXwdXyHeATiGqZVcTM4ssZMXbwGU-0-32f223566e9e08e8e7b4442dc9af110f)
Figure 1-11 The fully convolutional network (FCN) for scene semantic segmentation [33]. Converting the fully connected layers into convolutional layers allows the classification network to output a heatmap of the same size as the input image.
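The core FCN observation above can be made concrete with a minimal sketch (not the authors' code): a fully connected classifier layer, reinterpreted as a 1x1 convolution, is applied at every spatial position of a feature map, so the output is a per-class score map rather than a single vector. All array shapes and names here are illustrative assumptions.

```python
import numpy as np

def fc_as_1x1_conv(features, W, b):
    """Apply an FC layer (W: C_out x C_in, b: C_out) at every pixel of a
    C_in x H x W feature map, yielding a C_out x H x W class heatmap."""
    c_in, h, w = features.shape
    flat = features.reshape(c_in, h * w)   # each column is one pixel's feature vector
    out = W @ flat + b[:, None]            # the same FC weights score every pixel
    return out.reshape(-1, h, w)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 5))      # hypothetical 8-channel feature map
W = rng.standard_normal((3, 8))            # hypothetical 3-class classifier weights
b = rng.standard_normal(3)

heatmap = fc_as_1x1_conv(feat, W, b)
print(heatmap.shape)                       # spatial size is preserved: one map per class
```

At any single pixel the result equals the ordinary fully connected layer applied to that pixel's features, which is exactly why the conversion preserves the classifier while recovering spatial output.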
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/6_01.jpg?sign=1739316710-h1e2dQYIiT1895CDKbTpQ1B0yIjb5PzO-0-7331006ab6cdb8b1688485b62506a1a6)
Figure 1-12 Occlusion boundary recovery from a single image [41]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/6_02.jpg?sign=1739316710-33zUa101Ndp9n0Xn3CxyHA4lvo38PaSg-0-2e487e1124980fa843f5d8a89622891d)
Figure 1-13 Optical-flow-based occlusion boundary detection and foreground/background separation [49]. The left image is the input; the right image shows the detected occlusion boundaries, where green boundaries indicate foreground regions and red boundaries indicate background regions.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/6_03.jpg?sign=1739316710-nQfGIapZ72RJ6edtIPZJPcGHZys6KIVR-0-e8c8f81b8d7fd98a4b60a20a6cc2ebdc)
Figure 1-14 The four-neighborhood features of the single-image depth estimation method [42]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/7_01.jpg?sign=1739316710-faF8wHNT101WNADDn8DxSsJqZPAPPugV-0-79ccf6e30dd4cbb1b843dae392ee06a8)
Figure 1-15 Results of a single-image depth estimation method [43]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/7_02.jpg?sign=1739316710-U7JBLoMBQM6cIucclMafbQ2j1UB6Pxuw-0-14344f27f4d180d5f315d75499abf2b7)
Figure 1-16 Single-image depth estimation based on semantic label prediction [44]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/7_03.jpg?sign=1739316710-6K09BNjSO28e2r0p1ofMB12V13gv6pRj-0-9deae3040f5ec22a2cff165438d17a61)
Figure 1-17 Discrete-continuous single-image depth estimation [50]. The left image is the input; the right image shows the corresponding discrete-continuous depth estimation result.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/8_01.jpg?sign=1739316710-4mw8yIDjuHnx7xAEPiRr5hRlJ6yEB0sN-0-958519334853415fc20b25907a39818c)
Figure 1-18 Multi-scale deep network for single-image depth estimation [51]. The global coarse-scale network consists of five feature-extraction layers of convolution and max pooling followed by two fully connected layers; the local fine-scale network consists of convolutional layers.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/8_02.jpg?sign=1739316710-tnp6Ol5CqBJFAJpxcFjfXYq2NxxdTIJH-0-c6b5d3950f546a12a430b0902b166bd3)
Figure 1-19 Deep convolutional neural field model for depth estimation, combining a CNN framework with a continuous CRF structure [54]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/9_01.jpg?sign=1739316710-CxIqZu4Sp8mI4qOdcXVnVxulOHtNjs4U-0-c57d703a1a35a941142a8a8556097123)
Figure 1-20 3D parse graph of a single image guided by physical rules [45]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/9_02.jpg?sign=1739316710-VeHSrLUfHHeEIw2zuMLvMxBqwzdka1GO-0-5f1b4f4f4a69c9f4b0ac561490d64b84)
Figure 1-21 Hierarchical structure estimation for image segmentation [46]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/10_01.jpg?sign=1739316710-4lKUtbMRNveHEhqbyX6TxQPY8Bf2QdHl-0-7b4711bbc5afb76f3ea017dcc4e6f2a2)
Figure 1-22 Joint image segmentation and occlusion boundary estimation based on embedding angles [47]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/10_02.jpg?sign=1739316710-rS1dWXnuqisgI4avuCHOBvyAmZQrSz5z-0-a4e41f7ea50c46d1c899e9ddfd6f64a1)
Figure 1-23 Unfolding an indoor origami world. Given the input image (first row, left), the method estimates the orientation of each plane (first row, middle) and the convexity of the boundaries between planes (first row, right), where "+" denotes convex and "-" denotes concave
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/10_03.jpg?sign=1739316710-Oz6xoGhUPNSQVw2pDWC062i74YpAaK1N-0-914fe0c4e2536c85c370be9016027fbc)
Figure 1-24 Region-level image parsing based on exemplar detection [66]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/11_01.jpg?sign=1739316710-X8KY8f2UMr9l6m1oAXbbKWUeMxYu3l38-0-0b27415cc3cb9f15bfd5b45facd2df43)
Figure 1-25 Instance-level labeling of a single image based on a densely connected MRF model for autonomous driving [70]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/11_02.jpg?sign=1739316710-tUUKcjV4wLmBZjTTDRTAWNL6DwJwpJLq-0-870d7eb40a83ff015c3d6e5ade7164b6)
Figure 1-26 Research on relative attributes [95]: relative attributes describe image content better than absolute attributes. An absolute attribute can state whether a face is smiling or not smiling, but is hard to apply to b); a relative attribute can state that b) is smiling more than c) but less than a). The same holds for understanding natural scenes.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/12_01.jpg?sign=1739316710-CMlNmBAItbdcS4iCPuyB0GNVyoXzhJY4-0-cb87a236fa654761b4af25a0f6379334)
Figure 1-27 Attribute-assisted object segmentation [99]. Because of object occlusion, small object scale, or viewpoint, category-centric methods have difficulty describing object attributes, whereas this object-centric method describes them well.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/12_02.jpg?sign=1739316710-6Ey4uYt1quT3Ue206td3pKkOrxTzZ8PO-0-4681727d3cd3e452b3f8f4dd80ff259a)
Figure 1-28 A dense semantic segmentation method for image objects and attributes [102]
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/13_01.jpg?sign=1739316710-cAJ41yb2FuLqobBVzXD1sMWm8eJhrKYv-0-d7f20f1a62a2c49b6b2ca945aa2519b2)
Figure 1-29 Example of an interactive scene generation process [115]. First row: schematic panels of the user interface, in which the user arranges the desired objects; different colors indicate objects being added or adjusted. Second row: the scene graph structure automatically inferred from the user-provided layout. Third and fourth rows: the semantic scene map generated from the graph structure and the final scene image.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/13_02.jpg?sign=1739316710-suXCY4IQJppVbk3Mc7q3JNjwFg4qwT4M-0-0033c23287e116f7aaaf086913a0f78c)
Figure 1-30 Voxel-based 3D scene structure understanding [119]. The left image shows the 3D scene structure reconstructed with the Voxel-CRF model together with the semantic label of each voxel; the right images show insufficient and missing depth information, e.g., the missing depth of the wall behind the television.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/13_03.jpg?sign=1739316710-jbhvKwxu8V6LxGOUGYwvimm36WjGv98h-0-c18d5134f4576f6ddacd87f9b660193d)
Figure 1-31 Global scene parsing from RGBD information [121]. Left: the input image and corresponding depth; middle: 3D object detection and recognition results, represented as oriented cuboids; right: the CRF model embedding the contextual relations between the scene and objects.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/14_01.jpg?sign=1739316710-LbOWW9OWIxzEITNTTPf4Whii5x6TUTIp-0-0fc1c339daf4cbf23b3a88be0c3bc960)
Figure 1-32 Manhattan junction detection for indoor scene layout estimation [123]. The figure shows junctions of types Y, W, T, L, and X, together with the estimated spatial layout of the image scene.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/14_02.jpg?sign=1739316710-hdL3yxNr2eM1OVu17OyNSY9KmLWu2dL1-0-8ac6cdee407a27b7ed89bb59bb193ab6)
Figure 2-1 Architecture of the scene semantic segmentation method guided by scene-content context
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/15_01.jpg?sign=1739316710-WXdGYiTsl5X6Yz68IL3kycllIhipTq04-0-6952ab8759064378625afc74b70f0d78)
Figure 2-2 Illustration of multi-class geodesic distance
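The geodesic distance behind this figure can be sketched as a shortest-path computation on the pixel grid: the distance of a pixel from a class's seed pixels is the cheapest accumulated cost along any path, where crossing strong intensity edges is expensive. This is a minimal illustration under assumed conventions (4-connected grid, edge weight = absolute intensity difference), not the book's implementation.

```python
import heapq

def geodesic_distance(img, seeds):
    """Dijkstra-style geodesic distance from a set of seed pixels
    over a 2D intensity grid (list of lists)."""
    h, w = len(img), len(img[0])
    dist = [[float("inf")] * w for _ in range(h)]
    pq = []
    for (r, c) in seeds:
        dist[r][c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + abs(img[nr][nc] - img[r][c])  # edge cost across pixels
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return dist

img = [[0, 0, 9],
       [0, 0, 9],
       [9, 9, 9]]
dist = geodesic_distance(img, [(0, 0)])
print(dist[1][1], dist[0][2])  # cheap within the flat region, costly across the edge
```

In the multi-class setting, one such distance map would be computed per class from that class's seed pixels, and each pixel can then be compared against all classes' distances.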
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/15_02.jpg?sign=1739316710-yYtz7ca8dlCaL0uXAYOljdWHcaHQghFg-0-d56123d3be0d0e39f273e2a67257fcfb)
Figure 2-3 Seed point selection based on coarse semantic probability
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/15_03.jpg?sign=1739316710-2m4nAMq5e4aweOU3dkGRfsKJBJQTTFAb-0-95ccb3bcb371751d5bcec479fa2ae0bc)
Figure 2-4 Training samples for the propagation indicator
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/16_01.jpg?sign=1739316710-kjSwv7PF46nsbnu2F3oHrxiDWlSRYxL3-0-3a3dd5abf40a130dab4d7eaf290b317d)
Figure 2-5 Effect of the propagation indicator
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/16_02.jpg?sign=1739316710-413a76r7USX5WCONu4H2IkEyY2wL5sbP-0-46e9de062a6028ad2fe8c7a812aa3d9c)
Figure 2-6 Per-class accuracy comparison on the CamVid dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/17_01.jpg?sign=1739316710-iBmUMXZn75SORCUuzYsqMG1u2tH3IzDK-0-bee9c489c86700cc1e1154abbd9314e5)
Figure 2-7 Selected experimental results of our method on the CamVid dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/17_02.jpg?sign=1739316710-6bzKWDxksIVxULyGJosWMoYvb9TnqmHM-0-fec260c3c36df57db282c97d4d597983)
Figure 2-8 Per-class accuracy comparison on the MSRC dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/18_01.jpg?sign=1739316710-x0CObLzjsb7zPfztlvSUJCFrcpLoHve6-0-f1374e06f519226c7a194c42cb7f6194)
Figure 2-9 Selected experimental results of our method on the MSRC dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/18_02.jpg?sign=1739316710-wnMTmsoWJpzTqygfedmFYovdKw5TIU3X-0-0ec0dfe36a71e68b2c86ab73da14b2fe)
Figure 2-10 Per-class accuracy comparison on the CBCL dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/19_01.jpg?sign=1739316710-7iJiqj416QG1yUeHCKbkfpidrN4mAYH4-0-5b6fa14f48671f9b60adfc893c678a57)
Figure 2-11 Selected experimental results of our method on the CBCL dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/19_02.jpg?sign=1739316710-fxV2PZeGNtpZOnbeg90DMoAavezQFkLj-0-5bb9c0110fae2259fe8d10c5e677d356)
Figure 2-12 Selected experimental results of our method on the LHI dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/19_03.jpg?sign=1739316710-lSDCVL9S3SFSePXjhoclsoNYbuuvLJb9-0-8a11e70b6f0bb105d2805e95cc0c2acb)
Figure 2-13 Framework of video scene semantic segmentation
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/20_01.jpg?sign=1739316710-CM0VZP2FABYy2BrdcrwOjRZMvKA3wg4T-0-4d87886e8b2cf9cd6ad4af3ef06a61cf)
Figure 2-14 Illustration of the geodesic-distance-based MRF model
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/20_02.jpg?sign=1739316710-uwWuQPE1jeS4MvSM5xDRHJgYUvfVLQ2H-0-4618ba8fe4e955720c382445381f3166)
Figure 2-15 Semantic segmentation results on CamVid video sequences. The first three rows show results on the Seq05VD sequence; the last three rows show results on the Seq06R0 sequence.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/21_01.jpg?sign=1739316710-RfbTew8sCbin15stqT8lQnQdW1ZimU8b-0-d03f03e317ec0f5db110f386d8caaf50)
Figure 3-1 Ambiguity in understanding spatial relations in an image: a) the input image; b) and c) two different interpretations of it
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/21_02.jpg?sign=1739316710-np5rmWwxkjsxVdZj88qI0U8LDaY5COga-0-0b2449c1aa2c0ce5cdf503b00b077f2e)
Figure 3-2 Framework of scene layering based on hierarchical cues
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/21_03.jpg?sign=1739316710-oKtXRWk7EIQUmaapZnU8LITgKFYdlRew-0-6e429176cad421baff4a0aa58a7253ee)
Figure 3-3 Illustration of the semantic cue
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/22_01.jpg?sign=1739316710-XtVKE2kQqbNtySveERfIrwkc71gocgjP-0-2e955bb6d0291ab77b72514bf2cd1576)
Figure 3-4 Illustration of the position cue
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/22_02.jpg?sign=1739316710-oGXTneUVa8HwP4kJaGPH4w9d0s2vYqBn-0-7ec0bf5e06e079ad2f30254e98ad62bf)
Figure 3-5 Illustration of the contour cue
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/22_03.jpg?sign=1739316710-nbLes5drIs402UtE4JrxVlUY0rt0BC2q-0-da5a190b922c1658d8e9b4232d194b3f)
Figure 3-6 Illustration of the shared-boundary cue
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/22_04.jpg?sign=1739316710-GkuYxfQFCH7gNdozyFF3iYNxrjOdCKDn-0-2f0c0307eb3d12a5c2fadb05e60de4e2)
Figure 3-7 Illustration of the junction cue
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/23_01.jpg?sign=1739316710-Z4jddCQHIBhwWQXpRwRc9mHIwSEVVUM7-0-d0d12ad2a2f5daa13b1c56a05df77b13)
Figure 3-8 Illustration of image content representation
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/23_02.jpg?sign=1739316710-i7VnSR9b3EppkvADrWgb5rsFHkYtzARE-0-da435c444e95f0cf94309144c62e5347)
Figure 3-9 Hierarchically ordered directed graph
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/23_03.jpg?sign=1739316710-rdWC1Dj8qCwc7ErQBVNWDzZr7k3GCnEw-0-c3b0f755d2a8f1ae5d2aa5fc8198cfed)
Figure 3-10 Comparison of occlusion discrimination accuracy for feature combinations of different sizes
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/24_01.jpg?sign=1739316710-1kKwzZUQgg85yUbMYLvjMvdSeGb7AR8I-0-994f7794e84ec919d9f2ec4a1801cfdb)
Figure 3-11 Differences in occlusion discrimination accuracy between adjacent and non-adjacent regions for the 31 feature combinations
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/24_02.jpg?sign=1739316710-wVjkyn5s5ZiKvtiufUddmFujwBmG5OrZ-0-a8dfca661f013d077a729f25fb5bf7b3)
Figure 3-12 Recall of occlusion determination on the three datasets
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/25_01.jpg?sign=1739316710-yXbSIdAgv7tHjlUyB4hEy4KezyVUf1Gr-0-39c3cc19f92612cad2019ede02696cf3)
Figure 3-13 Scene layering results on the LHI natural scene dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/25_02.jpg?sign=1739316710-RDkFs8UrprnuvDsnPgdKDMI0EXbeU8W4-0-6a6e4f7646319f92bd0e12e6a33b49b2)
Figure 3-14 Scene layering results on the LHI man-made indoor scene dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/26_01.jpg?sign=1739316710-CZHYDSwMONVVnBCymXnEp0t74fzUvkam-0-9837da4284bee9e9dbad4e1565c66566)
Figure 3-15 Scene layering results on the outdoor scene dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/27_01.jpg?sign=1739316710-uUv1kQlYKjWFPWPjGhJBymaw6iFMTQDV-0-96850c7429af38676f72580c4593aba4)
Figure 3-16 Comparative experiment on occlusion relation determination against Hoiem et al.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/27_02.jpg?sign=1739316710-SvVN9MbO5p8qvpvcDt971vdGb3I0In5J-0-788bd43ee4af11e0a2ec4fcb7a17cfc7)
Figure 4-1 "Object-level" semantic labeling of image content and object-based scene layout transfer. The left image is the input; the right shows 3D scene layout generation, in which the scene layout of the input image is automatically transferred to a 3D scene composed of 3D models.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/28_01.jpg?sign=1739316710-Xh2Taq0DR8S9SSvKkdFauT4oJximYg0k-0-dc8c379157130efc65f71d0736eb44dc)
Figure 4-2 Goals of our method: a) input image; b) semantic segmentation goal, where different colors denote different semantic classes (only the "horse" class, in green, is shown here); c) object segmentation goal, where different colors denote different objects
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/28_02.jpg?sign=1739316710-lyZ8MYg4TACFz0eBdY2Qcp9YGKeXr9iQ-0-84adaced84ec1a8ec778c2a21e853b4b)
Figure 4-3 Overall pipeline of the method
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/29_01.jpg?sign=1739316710-pipGgUQGmqy0i6eL8alIX4v95yGLtcUx-0-f919da9b555e26b94e0652b424606b68)
Figure 4-4 Multi-scale object saliency detection. Lighter colors indicate higher object saliency; darker colors indicate lower object saliency.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/29_02.jpg?sign=1739316710-5UqrkxGqf77WOPP5bGRGs7n6tFTRxe9v-0-82eaaea7bdf400daf55ead9a1ae9c447)
Figure 4-5 Pipeline of the multi-instance object segmentation method based on a deep recognition framework
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/29_03.jpg?sign=1739316710-t4stoYhvKfm2V5z3hkMxSI1Vd0gLMuYq-0-82cdcd30541ef020fbdd73e773dfa501)
Figure 4-6 Annotation of the training images
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/30_01.jpg?sign=1739316710-ZzcB66MIzrlBssAcRDMThZy1Pmbe9JjF-0-f861b6f1b94a7905201299e34c780a2b)
Figure 4-7 Experimental results, taking the "horse" class as an example. Other semantic classes are rendered as black background; different colors denote different "horse" objects.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/30_02.jpg?sign=1739316710-bG9sZJuWfL9xWmXj6cpe6qyjSSZKYxbO-0-ba083a29d6b0ae7c5fa27247e580f879)
Figure 4-8 Results of the multi-object segmentation method based on the deep recognition framework (DRF) on the Polo dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/30_03.jpg?sign=1739316710-D8c4OLrmj2PhG0gIkd5pJmAeM3XCRS6r-0-a8e6468d3cacac5f7cb128e283469b69)
Figure 4-9 Results of the multi-object segmentation method based on the deep recognition framework (DRF) on the TUD dataset
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/31_01.jpg?sign=1739316710-cxHqqj6rQAyjWhmqVTmcE1owsaWSSyN5-0-aa5f11a57738883a0cefcba2f5c3d9a3)
Figure 4-10 Architecture of the image-content-driven indoor scene layout transfer method
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/31_02.jpg?sign=1739316710-PGMwJ5qEHN2vWjCXo0IMPzfI9rdXjCHP-0-40d6080292184c4305a87cc8773a1ab1)
Figure 4-11 Visualization of position distributions for different object classes; from left to right: bed, nightstand, cabinet, table
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/31_03.jpg?sign=1739316710-N0W6s27DMHc9heo6JByhZxaM1FqsiQGR-0-9d32bddb08536a96855046a2d1969893)
Figure 4-12 Illustration of the object distance space. The dashed lines denote the bounding box; d denotes the distance from the center O to a corner.
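The quantity d in this figure is straightforward to compute; as a minimal sketch (the bounding-box parameterization and function name are illustrative assumptions, not taken from the book), it is half the length of the box diagonal:

```python
import math

def center_and_half_diagonal(x_min, y_min, x_max, y_max):
    """Return the bounding-box center O and the center-to-corner distance d."""
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    d = math.hypot(x_max - cx, y_max - cy)  # distance from O to a corner
    return (cx, cy), d

(cx, cy), d = center_and_half_diagonal(0, 0, 4, 3)
print((cx, cy), d)  # (2.0, 1.5) 2.5
```

Normalizing object-to-object distances by such a per-object d makes them comparable across objects of different sizes.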
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/32_01.jpg?sign=1739316710-M1doDoICjKU7Sx1xknnD4fGqA8HA3d2N-0-371457f368cff0e40f795879dc9c054a)
Figure 4-13 Image scene semantic segmentation and layout estimation based on user interaction
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/32_02.jpg?sign=1739316710-tzaLpOgzm8IHOzLwypMDB12dlioWdd90-0-5b6b300ba04e42e28c1faca053c3fe28)
Figure 4-14 Graph-model representation of indoor scene layout. The three edge types denote three kinds of relations; dashed lines denote missing parts.
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/33_01.jpg?sign=1739316710-npu0DnhKSIfx8AjuVR37iGsrlrqLCEoZ-0-adb106bd4ef510878fd69b9501656b6a)
Figure 4-15 Layout similarity measurement based on graph-model structure
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/33_02.jpg?sign=1739316710-UWPG20VbtZsRFTD2rkNgEFoRx0sNlKOL-0-c58ca297f3dcd2d2992653243e6c8d53)
Figure 4-16 Experiment on the importance of layout rules
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/34_01.jpg?sign=1739316710-5gUpWNJPE8IUNTXwQv72KLYKIKCYI7VT-0-e809fd607bb0e39e6bf39fbe82e676a9)
Figure 4-17 Scene layout transfer results from a single image
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/34_02.jpg?sign=1739316710-AHNk0G3VE9yiieTm5qltAx1Kl0Yt4Ggq-0-48ae006409fa83f55fb05990d8225ddf)
Figure 4-18 Bedroom scene layout transfer results from a single image
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/35_01.jpg?sign=1739316710-tbRwm0dzBUFIkMbQHw1jlFew5E575mrl-0-955773bf9562905f2417ba1d1a44d5ff)
Figure 4-19 Living-room scene layout transfer results from a single image
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/35_02.jpg?sign=1739316710-wHi3nk8UPYmdNQZ3f45V8Jz1XOwAqHhg-0-0b0aebfb956b57621964087c2b388258)
Figure 4-20 Bedroom scene layout transfer on an image sequence with gradually varying layouts
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/36_01.jpg?sign=1739316710-357OP8IFJbW7mGy4YOMIYs1uzbjiYf9H-0-11198f0117c9bdc30598db8718c6a9b2)
Figure 4-21 Living-room scene layout transfer on an image sequence with gradually varying layouts
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/36_02.jpg?sign=1739316710-rt33wqquwvWVUOpsKLMzBdHcdm8Gwavl-0-6292806420b91b6e9d8f3fbb2733734d)
Figure 4-22 Completeness test experiment
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/36_03.jpg?sign=1739316710-MzTqiIsw7reZQCt9x58r0NJKJbPXTgin-0-493c5224f839794d2694c24558b9cebf)
Figure 4-23 Layout transfer comparison experiment
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/37_01.jpg?sign=1739316710-f8J3uFvaGAy6e6G9roG6MlKGvkTKirp4-0-4c1a91720f87edac46b8fc41513c7597)
Figure 5-1 Human-object interaction triplet &lt;girl, fly, kite&gt;
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/37_02.jpg?sign=1739316710-daeB5r9k4guZyl3t0YomdVCcWF2BHI7p-0-0a319036cccd491624ac47898fb63252)
Figure 5-2 A human-object interaction detection method based on a deep contextual attention mechanism
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/37_03.jpg?sign=1739316710-GegCzjRSUG5lHJvvNk9dWEsuQNF7H35Q-0-73aabac745f11f19714ee12838529697)
Figure 5-3 Estimating target object density from human-body features
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/37_04.jpg?sign=1739316710-zmoFibCSz0r5UZ9V1Cym5HfmpvEJd62g-0-7e0746d6b06c11689982f7e8165a26b1)
Figure 5-4 Cascaded human-object interaction recognition and relation segmentation
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/38_01.jpg?sign=1739316710-TLsEBlcWqwFY8VWv1mdIEGzPd5afdN4f-0-b4a7e95a5a83292074a34c743a60503e)
Figure 5-5 Public benchmark datasets for autonomous-driving technologies
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/38_02.jpg?sign=1739316710-oGoxIY0JYhlCHZkGLdemQmjVnSvBjQlx-0-a360d9691152923ff01f21672116d777)
Figure 5-6 An instance-level labeling method for single images based on a densely connected MRF model in autonomous driving
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/38_03.jpg?sign=1739316710-Wj5bEigm3agrC2HzAhltIT4pXr54EKOF-0-55261fe135affeed33250d127cb839db)
Figure 5-7 Object distance estimation with an end-to-end learning model; from top to bottom: urban scene, highway scene, curved-road scene
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/39_01.jpg?sign=1739316710-FoNjKAIoJ3rxjaGAFd4jqgF337so6zbs-0-e353a810c8714e188710c090136a2460)
Figure 5-8 Pedestrian attribute heatmaps from an attribute attention network
![](https://epubservercos.yuewen.com/D70E0B/27167441507759506/epubprivate/OEBPS/Images/39_02.jpg?sign=1739316710-qdQc5fbETUlukTk9foeoXsGoDaKtWAp9-0-a0affe7795cb4384ad843224f4a4d7ce)
Figure 5-9 Pose-aware multi-task learning framework for vehicle re-identification: segmented parts, keypoints, and synthetic data overlaid with pose information