Basic operations with pyrealsense

Start the stream

import pyrealsense2 as rs
import numpy as np
import cv2 as cv

pipeline = rs.pipeline()

config = rs.config()

# Option 1: stream live from a camera (e.g. the SR300)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Option 2: replay a recorded .bag file (use this instead of the enable_stream calls above)
config.enable_device_from_file("/path/to/your/file/my_record_file.bag")

profile = pipeline.start(config)
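
If pipeline.start(config) fails, it helps to check which RealSense devices are actually visible. A small sketch (not part of the original post; it only uses the standard context API):

# List the connected RealSense devices before starting the pipeline
ctx = rs.context()
for dev in ctx.query_devices():
    print(dev.get_info(rs.camera_info.name), dev.get_info(rs.camera_info.serial_number))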

Fetch the images

OpenCV expects the color image in BGR order (convert from RGB if the stream delivers RGB), and the depth image needs to be mapped to a color scale for visualization.

frames = pipeline.wait_for_frames()      # blocks until a matched set of depth + color frames arrives

color_frame = frames.get_color_frame()
color_image = np.asanyarray(color_frame.get_data())
# The color stream above is configured as rs.format.bgr8, so the data is already in
# OpenCV's BGR order. If the stream (e.g. from a .bag recording) delivers rgb8 instead,
# convert it first:
# color_image = cv.cvtColor(color_image, cv.COLOR_RGB2BGR)

depth_frame = frames.get_depth_frame()
depth_image = np.asanyarray(depth_frame.get_data())      # raw 16-bit depth, one value per pixel
depth_colormap = cv.applyColorMap(cv.convertScaleAbs(depth_image, alpha=0.03), cv.COLORMAP_JET)
# Simply scaling the raw values, e.g. depth_image * 6, also works as a quick visualization

image = np.hstack((color_image, depth_colormap))
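
In practice the snippet above runs inside a loop. A minimal display-loop sketch built from the same calls (the window name and the q exit key are my own choices):

try:
    while True:
        frames = pipeline.wait_for_frames()
        color_image = np.asanyarray(frames.get_color_frame().get_data())
        depth_image = np.asanyarray(frames.get_depth_frame().get_data())
        depth_colormap = cv.applyColorMap(cv.convertScaleAbs(depth_image, alpha=0.03), cv.COLORMAP_JET)
        cv.imshow("RealSense", np.hstack((color_image, depth_colormap)))
        if cv.waitKey(1) & 0xFF == ord('q'):   # press q to quit
            break
finally:
    pipeline.stop()
    cv.destroyAllWindows()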

Align the images

Align the two streams; this is recommended whenever you grab both kinds of image at the same time.

align = rs.align(rs.stream.color)      # align depth to color; use rs.stream.depth to align the other way
aligned_frames = align.process(frames)
# Overwriting the original frameset is also fine: frames = align.process(frames)
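
The aligned frameset is used exactly like the original one. A minimal sketch of pulling the depth and color frames back out of it (variable names here are mine):

aligned_depth_frame = aligned_frames.get_depth_frame()
aligned_color_frame = aligned_frames.get_color_frame()
if aligned_depth_frame and aligned_color_frame:
    aligned_depth_image = np.asanyarray(aligned_depth_frame.get_data())
    aligned_color_image = np.asanyarray(aligned_color_frame.get_data())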

Get Intrinsics & Extrinsics

Get the camera intrinsics, and the extrinsics between the two sensors.

color_intrin = color_frame.profile.as_video_stream_profile().intrinsics
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics

depth_to_color_extrin = depth_frame.profile.get_extrinsics_to(color_frame.profile)
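
The returned objects expose the usual pinhole-camera fields. A quick sketch of what they contain (the print statements are only for illustration):

# fx/fy are the focal lengths in pixels, ppx/ppy the principal point,
# model/coeffs describe the distortion model
print(color_intrin.width, color_intrin.height)
print(color_intrin.fx, color_intrin.fy, color_intrin.ppx, color_intrin.ppy)
print(color_intrin.model, color_intrin.coeffs)

# The extrinsics hold a 3x3 rotation (row-major list of 9 values) and a translation in meters
print(depth_to_color_extrin.rotation)
print(depth_to_color_extrin.translation)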

Get a 3D coordinate

Deproject a pixel into a 3D coordinate.

distance = depth_frame.get_distance(320, 240)     # distance at pixel (320, 240), already in meters
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
# Depth scale: the unit of the raw values inside a depth frame,
# i.e. raw_value * depth_scale gives the distance in meters

depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, [320, 240], distance)
# -> [X, Y, Z] in meters, in the depth camera's coordinate system
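
Two common follow-ups, sketched under the assumption that the raw 16-bit depth_image and depth_to_color_extrin from the earlier sections are still in scope:

# Convert the whole raw depth array to meters in one go
depth_in_meters = depth_image * depth_scale

# Move the deprojected point into the color camera's coordinate system
color_point = rs.rs2_transform_point_to_point(depth_to_color_extrin, depth_point)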

Reference: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples

posted @ 2020-08-01 22:09  penway