This post records my notes from studying the OriginBot camera driver and vision code; detailed annotations are added directly in the code files.
The documentation provides two ways to drive the camera: one starts a web page that displays the camera feed and the human-body detection results in real time; the other publishes the image data on a topic after startup.
Startup mode viewable through a browser
The documentation states that you can start it with the following command:
ros2 launch originbot_bringup camera_websoket_display.launch.py
After startup, open http://IP:8000 in a browser.
The file this command ultimately executes is originbot.originbot_bringup.launch.camera_websoket_display.launch.py, whose content is as follows:
import os
from launch import LaunchDescription
from launch_ros.actions import Node
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from ament_index_python import get_package_share_directory
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
def generate_launch_description():
    mipi_cam_device_arg = DeclareLaunchArgument(
        'device',
        default_value='GC4663',
        description='mipi camera device')

    # This is the Node that actually starts the camera; what ultimately runs
    # is mipi_cam.launch.py, which is explained separately below
    mipi_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('mipi_cam'),
                'launch/mipi_cam.launch.py')),
        launch_arguments={
            'mipi_image_width': '960',
            'mipi_image_height': '544',
            'mipi_io_method': 'shared_mem',
            'mipi_video_device': LaunchConfiguration('device')
        }.items()
    )

    # nv12->jpeg
    # This pulls in the image codec module of TogetheROS.Bot to improve
    # performance; for details see:
    # https://developer.horizon.cc/documents_tros/quick_demo/hobot_codec
    jpeg_codec_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('hobot_codec'),
                'launch/hobot_codec_encode.launch.py')),
        launch_arguments={
            'codec_in_mode': 'shared_mem',
            'codec_out_mode': 'ros',
            'codec_sub_topic': '/hbmem_img',
            'codec_pub_topic': '/image'
        }.items()
    )

    # web
    # This starts the web display; behind it is an Nginx static server that
    # subscribes to image to show the pictures, and to smart_topic for the
    # human-body detection results.
    # It ultimately executes websocket.launch.py, explained in detail below
    web_smart_topic_arg = DeclareLaunchArgument(
        'smart_topic',
        default_value='/hobot_mono2d_body_detection',
        description='websocket smart topic')
    web_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('websocket'),
                'launch/websocket.launch.py')),
        launch_arguments={
            'websocket_image_topic': '/image',
            'websocket_smart_topic': LaunchConfiguration('smart_topic')
        }.items()
    )

    # mono2d body detection
    # TogetheROS.Bot's human-body detection feature: it subscribes to image
    # data on /image_raw or /hbmem_img, runs detection, and publishes the
    # results to hobot_mono2d_body_detection.
    # I used this module in https://www.guyuehome.com/45835, which also gives
    # a fairly detailed introduction.
    # Source code and official docs:
    # https://developer.horizon.cc/documents_tros/quick_demo/hobot_codec
    mono2d_body_pub_topic_arg = DeclareLaunchArgument(
        'mono2d_body_pub_topic',
        default_value='/hobot_mono2d_body_detection',
        description='mono2d body ai message publish topic')
    mono2d_body_det_node = Node(
        package='mono2d_body_detection',
        executable='mono2d_body_detection',
        output='screen',
        parameters=[
            {"ai_msg_pub_topic_name": LaunchConfiguration(
                'mono2d_body_pub_topic')}
        ],
        arguments=['--ros-args', '--log-level', 'warn']
    )

    return LaunchDescription([
        mipi_cam_device_arg,
        # image publish
        mipi_node,
        # image codec
        jpeg_codec_node,
        # body detection
        mono2d_body_pub_topic_arg,
        mono2d_body_det_node,
        # web display
        web_smart_topic_arg,
        web_node
    ])
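Taken together, the launch file wires four components into one pipeline. The sketch below summarizes that wiring in plain Python; the node and topic names come from the launch arguments above, but the graph itself is my reading of the file, not something ROS validates for you:

```python
# Sketch of the data flow assembled by camera_websoket_display.launch.py.
# (node, role, topic) triples; "pub" = publishes, "sub" = subscribes.
pipeline = [
    ("mipi_cam",              "pub", "/hbmem_img"),  # nv12 frames via shared_mem
    ("hobot_codec",           "sub", "/hbmem_img"),
    ("hobot_codec",           "pub", "/image"),      # jpeg over ROS transport
    ("websocket",             "sub", "/image"),
    ("mono2d_body_detection", "sub", "/hbmem_img"),
    ("mono2d_body_detection", "pub", "/hobot_mono2d_body_detection"),
    ("websocket",             "sub", "/hobot_mono2d_body_detection"),
]

def consumers(topic):
    """Return the nodes that subscribe to a given topic."""
    return sorted(n for n, role, t in pipeline if role == "sub" and t == topic)

# In this graph, every published topic has at least one consumer.
published = {t for _, role, t in pipeline if role == "pub"}
orphans = [t for t in published if not consumers(t)]
```

So the camera frames reach the browser only after passing through hobot_codec, and the websocket node merges the jpeg stream with the detection results for display.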
The code above includes mipi_cam.launch.py and websocket.launch.py; let's look at each in turn.
Here is the content of originbot.mipi_cam.launch.mipi_cam.launch.py:
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
def generate_launch_description():
    return LaunchDescription([
        DeclareLaunchArgument(
            'mipi_camera_calibration_file_path',
            default_value='/userdata/dev_ws/src/origineye/mipi_cam/config/SC132GS_calibration.yaml',
            description='mipi camera calibration file path'),
        DeclareLaunchArgument(
            'mipi_out_format',
            default_value='nv12',
            description='mipi camera out format'),
        DeclareLaunchArgument(
            'mipi_image_width',
            default_value='1088',
            description='mipi camera out image width'),
        DeclareLaunchArgument(
            'mipi_image_height',
            default_value='1280',
            description='mipi camera out image height'),
        DeclareLaunchArgument(
            'mipi_io_method',
            default_value='shared_mem',
            description='mipi camera out io_method'),
        DeclareLaunchArgument(
            'mipi_video_device',
            default_value='F37',
            description='mipi camera device'),
        # Start the image-publishing node
        Node(
            package='mipi_cam',
            executable='mipi_cam',
            output='screen',
            parameters=[
                {"camera_calibration_file_path": LaunchConfiguration(
                    'mipi_camera_calibration_file_path')},
                {"out_format": LaunchConfiguration('mipi_out_format')},
                {"image_width": LaunchConfiguration('mipi_image_width')},
                {"image_height": LaunchConfiguration('mipi_image_height')},
                {"io_method": LaunchConfiguration('mipi_io_method')},
                {"video_device": LaunchConfiguration('mipi_video_device')},
                {"rotate_degree": 90},
            ],
            arguments=['--ros-args', '--log-level', 'error']
        )
    ])
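Note that the defaults declared here (1088x1280, device F37) are overridden by the launch_arguments that camera_websoket_display.launch.py passes in (960x544, device GC4663): an argument supplied by the including launch file wins over the DeclareLaunchArgument default. A minimal sketch of that precedence, in plain Python rather than the launch API (the resolve() helper is illustrative only):

```python
# Defaults declared by mipi_cam.launch.py via DeclareLaunchArgument
defaults = {
    "mipi_image_width": "1088",
    "mipi_image_height": "1280",
    "mipi_io_method": "shared_mem",
    "mipi_video_device": "F37",
}

# Values passed in by camera_websoket_display.launch.py as launch_arguments
overrides = {
    "mipi_image_width": "960",
    "mipi_image_height": "544",
    "mipi_io_method": "shared_mem",
    "mipi_video_device": "GC4663",
}

def resolve(defaults, overrides):
    """An argument passed by the including launch file wins over the default."""
    resolved = dict(defaults)
    resolved.update(overrides)
    return resolved

config = resolve(defaults, overrides)
```

So when started through the browser-view launch file, the camera actually runs at 960x544 with the GC4663 sensor, not at the defaults listed in mipi_cam.launch.py.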
This code is actually very simple, just some parameter declarations. But if you have used OriginBot for a while, you will remember that after the robot starts the camera it publishes image data on a topic called /image_raw, and that topic is not mentioned anywhere here.
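A likely explanation is the io_method parameter: with shared_mem, the driver publishes zero-copy frames on /hbmem_img instead of /image_raw (which is consistent with the body-detection comment above saying it subscribes to "/image_raw or /hbmem_img"). A hedged sketch of that selection logic, based on my reading of the behavior rather than mipi_cam's actual C++ code:

```python
def image_topic(io_method: str) -> str:
    """Assumed topic selection: 'shared_mem' publishes zero-copy frames on
    /hbmem_img; any other io_method publishes on the regular /image_raw topic.
    This mirrors the behavior described in this post, not mipi_cam's source."""
    return "/hbmem_img" if io_method == "shared_mem" else "/image_raw"
```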
That part is implemented around line 236 of originbot.mipi_cam.src.mipi_cam_node.cpp.