2018-10-04 Thursday

A high level of English is one hallmark of a top student.

1. If you reveal your secrets to the wind, you should not blame the wind for revealing them to the trees.

2. Either life entails courage, or it ceases to be life.

3. No one can stay pure and simple to the very end. However, don't forget your true nature.

4. All the things you are accustomed to can suddenly happen for the last time. The scenery you see every day may never be seen again afterwards. You, I, all of us are going to leave.

5. Life will get better once it has gone bad to a certain extent, because it cannot get any worse than this. We should fill our hearts with sunshine.

6. When the bustle fades into sad memories of the past, do not despair; the plainest things can be the most soul-stirring.

7. Everyone is a genius. But if you judge a fish by its ability to climb a tree, it will spend its whole life believing it is stupid. 

8. Actually, whether you feel you can achieve something or not is just a matter of a single thought.

9. Sometimes you have to make sacrifices to do the right thing.

10. Life takes on meaning when you become motivated, set goals and charge after them in an unstoppable manner. 

 

Papers

1. A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars

(http://www.cnki.com.cn/Article/CJFDTotal-JZUS201701010.htm)

Abstract: Most state-of-the-art robotic cars’ perception systems are quite different from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while the machine perception of traffic environments needs to fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for its autonomous driving, while an experienced human driver works well with dynamic traffic environments, in which machine perception could easily produce noisy perception results. In this paper, we propose a vision-centered multi-sensor fusing framework for a traffic environment perception approach to autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework and address multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively in actual urban scenes with our self-developed robotic cars for eight years. The empirical results validate its robustness and efficiency.
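The "geometrical constraints" between camera and LIDAR essentially amount to a calibrated projection of LIDAR points into the image plane. Here is a minimal sketch of that projection step in Python/NumPy; the intrinsics, the extrinsics, and the assumption that the LIDAR frame is already aligned with the camera's optical frame (z forward) are illustrative guesses, not values from the paper.

```python
import numpy as np

# Illustrative calibration values; a real system obtains K, R, t from calibration.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])   # pinhole intrinsics
R = np.eye(3)                           # LIDAR-to-camera rotation (assumed aligned)
t = np.array([0.0, -0.1, 0.2])          # LIDAR-to-camera translation in metres

def project_lidar_to_image(points_lidar, K, R, t):
    """Project (N, 3) LIDAR points into pixel coordinates.

    Returns (N, 2) pixel positions and a mask of points in front of the camera.
    """
    points_cam = points_lidar @ R.T + t      # transform into the camera frame
    in_front = points_cam[:, 2] > 0.1        # discard points behind / too close
    uvw = points_cam @ K.T                   # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]            # divide by depth
    return uv, in_front

# Usage on a few synthetic points (z is the forward axis in this toy convention).
pts = np.array([[0.5, 0.0, 5.0],
                [-1.0, 0.2, 10.0],
                [0.0, 0.0, -2.0]])
uv, ok = project_lidar_to_image(pts, K, R, t)
print(uv[ok])
```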

Key words: Visual perception; Self-localization; Mapping; Motion planning; Robotic car

Key knowledge points:

1. A robotic car must answer three questions throughout its driving: where it is, where it is going, and how to get there (localization, destination, path planning).

2. SLAM: simultaneous localization and mapping.

3. The driving system comprises three stages: self-localization, decision making and motion planning, and motion control (a minimal pipeline sketch follows this list).

4. The robotic car's performance heavily depends on the accuracy and reliability of its environment perception technologies, including self-localization and obstacle perception.
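Point 3 names the three stages that sit on top of perception. A purely illustrative pipeline skeleton in Python (not the paper's architecture; all class names and the toy planner/controller logic are made up) might look like this:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # east position in the map frame (m)
    y: float      # north position in the map frame (m)
    yaw: float    # heading (rad)

class Localizer:
    def estimate_pose(self, sensor_frame) -> Pose:
        # The paper fuses camera, LIDAR and GIS data here; this stub just
        # pretends the sensor frame already carries a pose estimate.
        return sensor_frame["pose"]

class Planner:
    def plan(self, pose: Pose, goal: Pose) -> list:
        # Placeholder: a straight-line "path" from the current pose to the goal.
        return [pose, goal]

class Controller:
    def command(self, pose: Pose, path: list) -> tuple:
        # Placeholder: steer toward the last waypoint at a constant speed.
        target = path[-1]
        steer = math.atan2(target.y - pose.y, target.x - pose.x) - pose.yaw
        return steer, 2.0   # (steering angle in rad, speed in m/s)

def drive_step(frame, goal, localizer, planner, controller):
    pose = localizer.estimate_pose(frame)    # where am I?
    path = planner.plan(pose, goal)          # where am I going, and how?
    return controller.command(pose, path)    # act on the plan

# Usage with a fabricated sensor frame:
frame = {"pose": Pose(0.0, 0.0, 0.0)}
print(drive_step(frame, Pose(10.0, 5.0, 0.0), Localizer(), Planner(), Controller()))
```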

 

2. Elaborate Scene Reconstruction with a Consumer Depth Camera

(https://sci-hub.tw/10.1007/s11633-018-1114-2)

Abstract: A robust approach to elaborately reconstruct the indoor scene with a consumer depth camera is proposed in this paper. In order to ensure the accuracy and completeness of 3D scene model reconstructed from a freely moving camera, this paper proposes new 3D reconstruction methods, as follows: 1) Depth images are processed with a depth adaptive bilateral filter to effectively improve the image quality; 2) A local-to-global registration with the content-based segmentation is performed, which is more reliable and robust to reduce the visual odometry drifts and registration errors; 3) An adaptive weighted volumetric method is used to fuse the registered data into a global model with sufficient geometrical details. Experimental results demonstrate that our approach increases the robustness and accuracy of the geometric models which were reconstructed from a consumer-grade depth camera.
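Step 1) of the abstract, the depth-adaptive bilateral filter, is easy to sketch. The version below widens the range kernel as depth grows, on the assumption that consumer depth sensors get noisier with distance; the function name, the linear noise model, and all parameter values are illustrative guesses rather than the paper's exact formulation.

```python
import numpy as np

def depth_adaptive_bilateral_filter(depth, radius=2, sigma_s=2.0, noise_scale=0.01):
    """Bilateral-filter a depth image, widening the range kernel with depth.

    depth: (H, W) array of depths in metres; zeros mark invalid pixels.
    noise_scale: assumed linear growth of depth noise with distance.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))   # spatial kernel

    padded = np.pad(depth, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            d = depth[i, j]
            if d <= 0:                 # leave invalid pixels marked as zero
                continue
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            sigma_r = max(noise_scale * d, 1e-6)   # range sigma grows with depth
            range_w = np.exp(-((window - d) ** 2) / (2.0 * sigma_r**2))
            weights = spatial * range_w * (window > 0)
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

# Tiny usage example on a synthetic 5x5 depth patch with one noisy pixel.
patch = np.full((5, 5), 1.0)
patch[2, 2] = 1.05
print(depth_adaptive_bilateral_filter(patch)[2, 2])
```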

Keywords: 3D reconstruction, image processing, geometry registration, simultaneous localization and mapping (SLAM), volumetric integration.

Key knowledge points:

1. Reconstructing real-world scenes is known as a particularly challenging problem in the computer vision field. Many tools have been applied to perceive an accurate 3D world, including stereo cameras, laser range finders, monocular cameras, and depth cameras.

2. In this paper, we present an elaborate and robust scene reconstruction method, which can be applied to real-world scenes and has high reconstruction quality. The main contributions of our work cover three aspects: First, in order to increase the accuracy of the 3D model, we smooth the depth images with a depth adaptive bilateral filter according to the depth camera's noise model. Second, to reduce the visual odometry drift and improve the geometric registration accuracy, we propose a content-based segmentation to partition the depth image sequence into fragments, and perform geometric registration from local to global. Third, we fuse the data with an adaptive weighting TSDF, by which the details of areas with high accuracy and regions of interest (ROI) can be preserved.

3. We presented a robust approach to elaborate scene reconstruction from a consumer depth camera. The main contribution of our research is using local-to-global registration to obtain a complete scene reconstruction; the accuracy of the 3D scene models is further improved through depth image filtering and weighted volumetric integration (see the TSDF fusion sketch after this list). The experimental results demonstrated that the proposed approach improves the robustness of reconstruction and enhances the fidelity of the 3D models produced from a consumer depth camera.
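For the weighted volumetric integration mentioned above, here is a minimal sketch of fusing one depth image into a TSDF volume with a running weighted average. The 1/depth observation weight merely stands in for the paper's adaptive weighting, and every parameter value is an assumption for illustration.

```python
import numpy as np

def integrate_depth_into_tsdf(tsdf, weights, depth, pose, K, voxel_size=0.02,
                              origin=np.zeros(3), trunc=0.06, max_weight=50.0):
    """Fuse one depth image into a TSDF volume with weighted running averages.

    tsdf, weights: (X, Y, Z) arrays holding current signed distances and weights.
    pose: 4x4 camera-to-world matrix; K: 3x3 pinhole intrinsics.
    """
    # World coordinates of every voxel centre.
    grid = np.stack(np.meshgrid(*[np.arange(n) for n in tsdf.shape],
                                indexing="ij"), axis=-1)
    world = origin + (grid + 0.5) * voxel_size

    # Transform voxel centres into the camera frame and project them.
    world_h = np.concatenate([world, np.ones(world.shape[:-1] + (1,))], axis=-1)
    cam = world_h @ np.linalg.inv(pose).T
    z = cam[..., 2]
    z_safe = np.where(z > 1e-6, z, 1e-6)
    uvw = cam[..., :3] @ K.T
    u = np.round(uvw[..., 0] / z_safe).astype(int)
    v = np.round(uvw[..., 1] / z_safe).astype(int)

    h, w = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0.0)
    valid &= d > 0

    # Truncated signed distance of each voxel to the observed surface.
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    valid &= (d - z) > -trunc            # ignore voxels far behind the surface

    # Per-observation weight; this 1/depth choice is an illustrative assumption.
    obs_w = np.where(valid, 1.0 / np.maximum(d, 1e-3), 0.0)

    new_w = weights + obs_w
    tsdf[:] = np.where(valid,
                       (tsdf * weights + sdf * obs_w) / np.maximum(new_w, 1e-6),
                       tsdf)
    weights[:] = np.minimum(new_w, max_weight)

# Tiny usage example with a flat synthetic depth image.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 24.0], [0.0, 0.0, 1.0]])
vol = np.zeros((32, 32, 32)); w = np.zeros_like(vol)
integrate_depth_into_tsdf(vol, w, np.full((48, 64), 0.5), np.eye(4), K)
print(np.count_nonzero(w))   # number of voxels that received an observation
```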

 

 
