RabbitMQ

Installing RabbitMQ

1. Install RabbitMQ

Installing RabbitMQ on a Mac is straightforward. Erlang, which the RabbitMQ server depends on, is normally pulled in automatically, so with Homebrew already installed the following two commands complete the installation:

brew update
brew install rabbitmq

After installation, add the RabbitMQ scripts to your PATH:

open -t ~/.bash_profile

Copy and paste:

# RabbitMQ Config
export PATH=$PATH:/usr/local/sbin

To make the change take effect immediately, run:

source ~/.bash_profile

2. Start the RabbitMQ server

With the configuration above in place, close the terminal window, reopen it, and start the RabbitMQ server with:

rabbitmq-server

3. Log in to the web management UI

Open localhost:15672 in a browser and log in with guest as both username and password.

4. Stop the server

sudo rabbitmqctl stop

Enabling remote login to RabbitMQ

Because the default guest account holds every permission, for security it may only log in from localhost; it is advisable to change guest's password and create separate accounts for administering RabbitMQ.
As an example, here is how to create an account test with password 123456 that supports access from remote IPs.

  • Create the account
rabbitmqctl add_user test 123456
  • Set the user's role
rabbitmqctl  set_user_tags  test  administrator
  • Set the user's permissions
rabbitmqctl set_permissions -p "/" test ".*" ".*" ".*"


  • Restart the RabbitMQ service (with Homebrew on a Mac: brew services restart rabbitmq)
sudo service rabbitmq-server restart
  • Once set up, list the current users and their roles (the server must be running)
rabbitmqctl list_users

Now other hosts can reach the RabbitMQ web management UI: open serverip:15672 in a browser, where serverip is the IP of the host running RabbitMQ-Server.

Common RabbitMQ operations

1. User management
User management covers adding users, deleting users, listing users, and changing a user's password.
(1) Add a user

rabbitmqctl  add_user  Username  Password
(2) Delete a user

rabbitmqctl  delete_user  Username
(3) Change a user's password

rabbitmqctl  change_password  Username  Newpassword
(4) List current users

rabbitmqctl  list_users
2. User roles
Users can be grouped into five roles: administrator, monitoring, policymaker, management, and other.
(1) Administrator (administrator)
Can log in to the management console (when the management plugin is enabled), view all information, and manage users and policies.
(2) Monitoring (monitoring)
Can log in to the management console (when the management plugin is enabled) and view information about the rabbitmq node (process count, memory usage, disk usage, etc.).
(3) Policymaker (policymaker)
Can log in to the management console (when the management plugin is enabled) and manage policies, but cannot view the node information that an administrator can see.
(4) Management (management)
Can only log in to the management console (when the management plugin is enabled); cannot view node information or manage policies.
(5) Other
Cannot log in to the management console; these are ordinary producers and consumers.

With these roles understood, assign each user the role its tasks require.
The command to set a user's role is:

rabbitmqctl  set_user_tags  User  Tag
User is the username; Tag is the role name (administrator, monitoring, policymaker, management, or another custom tag).

The same user can also hold several roles, for example

rabbitmqctl  set_user_tags  username  monitoring  policymaker
3. User permissions
Permissions govern a user's operations on exchanges and queues: a configure permission plus read and write permissions. Configure permission controls declaring and deleting exchanges and queues. Read/write permissions control consuming from a queue, publishing to an exchange, and binding queues to exchanges.

For example: binding a queue to an exchange requires write permission on the queue and read permission on the exchange; publishing to an exchange requires write permission on it; consuming from a queue requires read permission on it. See the "How permissions work" section of the official documentation for details.
The relevant commands:
(1) Set a user's permissions

rabbitmqctl  set_permissions  -p  VHostPath  User  ConfP  WriteP  ReadP
(2) List all users' permissions (for the given vhost)

rabbitmqctl  list_permissions  [-p  VHostPath]
(3) List a specific user's permissions

rabbitmqctl  list_user_permissions  User
(4) Clear a user's permissions

rabbitmqctl  clear_permissions  [-p VHostPath]  User
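The three permission arguments (ConfP, WriteP, ReadP) are regular expressions that the broker matches against exchange and queue names in the vhost. A rough pure-Python sketch of that check (using `re.fullmatch` as an approximation of the broker's matching; `allowed` is an illustrative helper, not a RabbitMQ API):

```python
import re

def allowed(pattern: str, resource_name: str) -> bool:
    """Approximate RabbitMQ's permission check: the permission string
    is a regex matched against the resource (queue/exchange) name.
    An empty pattern grants access to nothing."""
    if pattern == "":
        return False
    return re.fullmatch(pattern, resource_name) is not None

# ".*" (as used for the test user above) matches every resource:
assert allowed(".*", "task_queue")
# A narrower pattern limits a user to its own queues:
assert allowed("^task_.*", "task_queue")
assert not allowed("^task_.*", "rpc_queue")
```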

Installing the Python RabbitMQ module

The third-party pika module lets Python send and receive queue messages through MQ:

pip3 install pika

Python modules for connecting to RabbitMQ include pika, Celery (a distributed task queue), and haigha, all capable of managing many queues.

 


Introduction

  Python has two built-in kinds of queues: the thread queue and the process queue. Both only carry messages between threads of the same process, or between a parent process and its children; they cannot exchange information between independent programs. That is where a middleware broker comes in, enabling program-to-program communication.
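The in-process queues mentioned above can be shown with nothing but the standard library; note that both endpoints must live in the same Python process, which is exactly the limitation a broker removes:

```python
import queue
import threading

def worker(q: queue.Queue, results: list) -> None:
    # consume until the sentinel None arrives
    while True:
        msg = q.get()
        if msg is None:
            break
        results.append(msg.upper())

q = queue.Queue()          # a thread queue: visible only inside this process
results = []
t = threading.Thread(target=worker, args=(q, results))
t.start()
for msg in ("hello", "world"):
    q.put(msg)
q.put(None)                # tell the worker to stop
t.join()
print(results)             # ['HELLO', 'WORLD']
```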

  Message services excel at data exchange (notifications/communication) between multiple, heterogeneous systems, and can also be used for calling services across systems (RPC, Remote Procedure Call). RabbitMQ is one of the most popular message brokers today.

  AMQP (Advanced Message Queuing Protocol) is an open standard application-layer protocol designed for message-oriented middleware. Message middleware decouples components: a message's sender need not know its consumer exists, and vice versa. AMQP's main features are message orientation, queuing, routing (both point-to-point and publish/subscribe), reliability, and security.

  RabbitMQ is a complete, reusable enterprise messaging system built on AMQP, released under the Mozilla Public License. Its server is written in Erlang; it supports many clients, such as Python, Ruby, .NET, Java, JMS, C, PHP, ActionScript, XMPP, and STOMP, plus AJAX. It stores and forwards messages in distributed systems and performs well on usability, scalability, and high availability. Used fully, these features cover the vast majority of asynchronous workloads.

  MQ is short for Message Queue. A message queue is a method of application-to-application communication: applications communicate by reading and writing messages (data destined for an application) to queues, with no dedicated connection linking them. Messaging means programs communicate by sending data in messages rather than by calling each other directly (direct calls are typically used by technologies such as remote procedure call). Queuing means applications communicate through queues, which removes the requirement that the sending and receiving applications run at the same time.

  A websocket is a protocol between server and page: one handshake, many exchanges. RabbitMQ is more like a socket between servers: one server connects to MQ and listens, and whatever another server sends through MQ is received by the listening server.

  Still, MQ differs from a socket. With a socket, the page effectively talks to the server directly; MQ is a relay station between servers, like a mailbox: one person drops a letter in, another collects it, and the two never deal with each other directly, so the coupling is far lower than with a socket.

  

  Benefits of MQ

  MQ is an intermediate warehouse for server-to-server communication.

  Benefit 1: it lowers the coupling between the two servers. Even if one server dies, the other neither errors out nor stalls; it is listening to MQ, and as soon as the failed server recovers, reconnects, and sends again, the listening server receives the messages.

  Benefit 2: as a warehouse, MQ itself offers powerful features: not just one-to-one, but also one-to-many and many-to-one. Like a safe-deposit box, anyone with the right credentials can store or retrieve, which enables broadcast messages and everything derived from them.

  Benefit 3: persistence is now commonplace: if MQ goes down, messages can be kept on disk and recovered on restart.

 

  Some terminology:

Broker: the message-queue server entity itself.
Exchange: the message router; it decides by which rules a message is routed and to which queue.
Queue: the message carrier; every message is delivered into one or more queues.
Binding: binds an exchange and a queue together according to routing rules.
Routing Key: the key the exchange uses when delivering a message.
vhost: virtual host; one broker can host multiple vhosts to separate different users' permissions.
producer: the program that publishes messages.
consumer: the program that receives messages.
channel: a message channel; each client connection can open many channels, each representing one session.

  Flow: the producer sends a message to an exchange; the exchange matches the routing key against its bindings to find the queue; the message is stored in the queue; the consumer takes it from the queue and uses it.
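The flow above can be condensed into a toy model (purely illustrative; `MiniBroker` is not a real RabbitMQ class, and only direct-style routing is sketched):

```python
from collections import defaultdict

class MiniBroker:
    """Toy producer -> exchange -> queue -> consumer flow."""
    def __init__(self):
        self.bindings = defaultdict(list)  # (exchange, routing_key) -> queues
        self.queues = defaultdict(list)    # queue name -> stored messages

    def bind(self, exchange, queue, routing_key):
        self.bindings[(exchange, routing_key)].append(queue)

    def publish(self, exchange, routing_key, body):
        # the exchange matches the routing key and stores a copy
        # of the message in every bound queue
        for q in self.bindings[(exchange, routing_key)]:
            self.queues[q].append(body)

    def consume(self, queue):
        return self.queues[queue].pop(0) if self.queues[queue] else None

broker = MiniBroker()
broker.bind("logs", "error_queue", "error")
broker.publish("logs", "error", "disk full")
broker.publish("logs", "debug", "ignored")  # no binding -> message dropped
print(broker.consume("error_queue"))        # disk full
```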

 


RabbitMQ's simplest queue communication (round-robin consumption)

In this mode, once the sending side stores a message in the named queue, any consumer attached to that queue receives it, and the message is consumed (removed from the queue).

If several consumers are attached to the queue at the same time, the queued messages are consumed in round-robin fashion.

Producer (Sender)

import pika

# 建立实例
credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
# connection = pika.BlockingConnection(pika.ConnectionParameters('localhost')) # 默认端口5672,可不写
# 声明管道
channel = connection.channel()

# Declare the queue.
# If it were certainly declared already this could be skipped, but since we
# don't know which program runs first, both sides declare it.
channel.queue_declare(queue='hello')

# In RabbitMQ a message can never be sent directly to the queue; it always needs to go through an exchange.
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()

After publishing, the queue's state can be checked on the MQ server (e.g. with rabbitmqctl list_queues).

If the Receiver has not started yet, each run of the Sender makes the pending count of hello keep growing.

Consumer (Receiver)
import pika

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
# connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# You may ask why we declare the queue again ‒ we have already declared it in our previous code.
# We could avoid that if we were sure that the queue already exists. For example if send.py program
# was run before. But we're not yet sure which program to run first. In such cases it's a good
# practice to repeat declaring the queue in both programs.

# declare the queue on the consumer side too
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

# Called whenever a message arrives.
# (pika < 1.0 signature; pika >= 1.0 uses
#  basic_consume(queue='hello', on_message_callback=callback, auto_ack=True).)
channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()   # start consuming: blocks in a loop, listening for messages

After receiving, check the queue state again.

When the Receiver starts, it collects the accumulated messages and the backlog count drops to zero.

With two receivers open, the two take turns receiving messages.

  


Preventing message loss

In this mode RabbitMQ by default hands the producer's messages to the consumers (c) in turn, much like load balancing.

Acknowledgments are how to make sure that even if the consumer dies, the task isn't lost (they are enabled by default; pass no_ack=True to disable them).

Producer(Sender)

import pika
import sys
import time

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.queue_declare(queue='task_queue')  # declare the queue

message = ' '.join(sys.argv[1:]) or "Hello World! %s" % time.time()
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      )
print(" [x] Sent %r" % message)
connection.close()

Consumer(Receiver)

import pika, time

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(5)
    print(" [x] Done")
    print("method.delivery_tag", method.delivery_tag)
    ch.basic_ack(delivery_tag=method.delivery_tag)  #!!!!message ack

channel.basic_consume(callback,
                      queue='task_queue',
                      # no_ack stays at its default (False): acknowledgments
                      # are on, so messages aren't lost
                      )

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

If a Receiver's connection drops some time after it starts, with only part of the messages received, the undelivered remainder would otherwise be wiped from memory. To prevent this, when one Receiver's connection drops, its messages should be handed to another Receiver.

To prevent message loss, RabbitMQ supports message acks (acknowledgments). An ack is sent back to tell RabbitMQ the message has been received and processed and may now be deleted.

If a Receiver dies (its channel closes, its connection closes, or the TCP connection is lost) before sending an ack, RabbitMQ knows the message was not fully processed and puts it back in the queue. If another Receiver is connected at the time, the message is quickly redelivered to it. This guarantees no message is lost when a Receiver's connection drops.

Message acks are on by default (that is, just remove no_ack=True).
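The ack/requeue cycle can be modelled in a few lines of plain Python (a toy model of the broker's bookkeeping, not pika code):

```python
class AckQueue:
    """Toy model: a delivered message stays 'unacked' until the consumer
    confirms it; a dead consumer's unacked messages are requeued."""
    def __init__(self):
        self.ready = []     # messages waiting for delivery
        self.unacked = {}   # delivery_tag -> delivered-but-unconfirmed message
        self._tag = 0

    def publish(self, body):
        self.ready.append(body)

    def deliver(self):
        self._tag += 1
        self.unacked[self._tag] = self.ready.pop(0)
        return self._tag, self.unacked[self._tag]

    def ack(self, tag):
        del self.unacked[tag]   # broker may now forget the message

    def consumer_died(self):
        # everything delivered but never acked goes back to the queue
        self.ready = list(self.unacked.values()) + self.ready
        self.unacked.clear()

q = AckQueue()
q.publish("task-1")
q.deliver()                # delivered, not yet acked
q.consumer_died()          # consumer dies -> message requeued, not lost
tag, body = q.deliver()    # redelivered to the next consumer
q.ack(tag)
print(body, q.ready, q.unacked)   # task-1 [] {}
```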


 


Message and queue persistence

For messages to survive a RabbitMQ server crash or shutdown, two steps are needed:

Queue durability:
every time the queue is declared, both the Producer (Sender) and the Consumer (Receiver) must pass durable:

# declare the queue as durable
channel.queue_declare(queue='hello2', durable=True)

Testing shows this persists only the queue; the messages in it are still lost.
durable only makes the queue itself survive. Note that RabbitMQ will not let an existing queue be redeclared with different parameters; since a queue like hello already exists as non-durable, use a new name (e.g. hello2 or task_queue) in both programs.

Message persistence:
when sending, the Producer (Sender) adds properties:

channel.basic_publish(exchange='',
                      routing_key="task_queue",
                      body=message,
                      properties=pika.BasicProperties(
                         delivery_mode = 2,  # make message persistent
                      ))

 


Fair dispatch

If RabbitMQ simply dealt out messages in order, ignoring consumer load, a consumer on a weak machine could pile up messages it cannot finish while a well-provisioned consumer stays idle. To solve this, set prefetch_count=1 on each consumer, which tells RabbitMQ: don't send me a new message until I've finished processing the current one.
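The effect can be simulated: prefetch_count=1 effectively means the next message goes to whichever worker is free, instead of blindly alternating (a simplified model; `finish_times` is an illustrative helper, not broker code):

```python
def finish_times(task_costs, worker_speed, fair):
    """Return each worker's finish time.
    fair=False: strict round-robin, ignoring load.
    fair=True:  each task goes to the worker that frees up first,
                approximating prefetch_count=1."""
    finish = [0.0] * len(worker_speed)
    for i, cost in enumerate(task_costs):
        w = min(range(len(finish)), key=finish.__getitem__) if fair \
            else i % len(worker_speed)
        finish[w] += cost * worker_speed[w]
    return finish

tasks = [1, 1, 1, 1]
speeds = [5.0, 1.0]   # worker 0 takes five times longer per task
print(finish_times(tasks, speeds, fair=False))  # [10.0, 2.0]
print(finish_times(tasks, speeds, fair=True))   # [5.0, 3.0]
```

With fair dispatch the slow worker only ever holds one task, so the batch finishes in 5 time units instead of 10.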

Producer(Sender)

import pika
import sys
credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True)

message = ' '.join(sys.argv[1:]) or "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent %r" % message)
connection.close()

Consumer(Receiver)

import pika
import time

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True)
print(' [*] Waiting for messages. To exit press CTRL+C')


def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_qos(prefetch_count=1)  # fair dispatch: at most one unacked message per consumer
channel.basic_consume(callback,
                      queue='task_queue')

channel.start_consuming()

 


Publish\Subscribe (message publish/subscribe)

An Exchange receives messages from the Producer (Sender/Publisher) on one side and pushes them into queues on the other. The exchange knows how to handle each message and which queue to add it to; those rules are defined by the exchange's type.

An exchange is given a type when declared, which decides which queues qualify to receive messages:

  • fanout: every queue bound to this exchange receives the message (broadcast to everyone)
  • direct: the routing key and the exchange determine which queues receive the message (send to a specific set of queues)
  • topic: every queue whose binding key (which may contain wildcards) matches the routing key receives the message (send to subscribers of a topic)
    • Wildcard rules: # matches zero or more words; * matches exactly one word (words are separated by dots)
    • E.g. #.a matches a.a, aa.a, aaa.a, and also plain a
    • *.a matches a.a, b.a, c.a
    • Note: binding with routing key # on a topic exchange is equivalent to fanout
  • headers: the message headers decide which queues receive the message (routing by header values)
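The wildcard rules can be captured in a small matcher (an illustrative function, not part of pika; RabbitMQ performs this matching inside the broker):

```python
def topic_matches(binding_key: str, routing_key: str) -> bool:
    """AMQP topic matching: '*' matches exactly one word,
    '#' matches zero or more words; words are dot-separated."""
    def match(b, r):
        if not b:
            return not r
        if b[0] == '#':
            # '#' may absorb 0..len(r) words
            return any(match(b[1:], r[i:]) for i in range(len(r) + 1))
        if not r:
            return False
        if b[0] == '*' or b[0] == r[0]:
            return match(b[1:], r[1:])
        return False
    return match(binding_key.split('.'), routing_key.split('.'))

assert topic_matches('#.a', 'aaa.a')        # '#' swallows one word...
assert topic_matches('#.a', 'a')            # ...or zero words
assert topic_matches('*.a', 'b.a')          # '*' is exactly one word
assert not topic_matches('*.a', 'b.c.a')
assert topic_matches('#', 'kern.critical')  # '#' alone behaves like fanout
```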

 

1. fanout

Typical uses: live video streams, Sina Weibo-style broadcast feeds

fanout_sender.py (Sender)

import pika
import sys

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.exchange_declare(exchange='logs', exchange_type='fanout')  # declare the exchange

message = ' '.join(sys.argv[1:]) or "info: Hello World!"
channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)
print(" [x] Sent %r" % message)
connection.close()

fanout_receiver.py (Receiver)

import pika

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.exchange_declare(exchange='logs',exchange_type='fanout')

result = channel.queue_declare(exclusive=True)  # with no queue name given, RabbitMQ assigns a random one; exclusive=True deletes the queue once its consumer disconnects
queue_name = result.method.queue

channel.queue_bind(exchange='logs',queue=queue_name)
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

channel.basic_consume(callback,queue=queue_name,no_ack=True)

channel.start_consuming()
 

2. direct (receive messages selectively)

RabbitMQ also supports routing by keyword: queues are bound with a key, the sender publishes data to the exchange with a key, and the exchange uses the key to decide which queues the data should go to.

direct_sender.py (Sender)

import pika
import sys

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs', exchange_type='direct')

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'  # default severity when none is given
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='direct_logs',
                      routing_key=severity,  # must match a binding key on the receiving side
                      body=message)
print(" [x] Sent %r:%r" % (severity, message))
connection.close()

direct_receiver.py (Receiver)

import pika
import sys

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',exchange_type='direct')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)

for severity in severities:
    channel.queue_bind(exchange='direct_logs',
                       queue=queue_name,
                       routing_key=severity)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,queue=queue_name)

channel.start_consuming()

 

3. topic (finer-grained message filtering)

topic_sender.py (Sender)

import pika
import sys

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs',
                         exchange_type='topic')

routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='topic_logs',
                      routing_key=routing_key,
                      body=message)
print(" [x] Sent %r:%r" % (routing_key, message))
connection.close()

topic_receiver.py (Receiver)

import pika
import sys

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs', exchange_type='topic')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='topic_logs',
                       queue=queue_name,
                       routing_key=binding_key)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,queue=queue_name,)

channel.start_consuming()
To receive all the logs run:
python receive_logs_topic.py "#"

To receive all logs from the facility "kern":
python receive_logs_topic.py "kern.*"

Or if you want to hear only about "critical" logs:
python receive_logs_topic.py "*.critical"

You can create multiple bindings:
python receive_logs_topic.py "kern.*" "*.critical"

And to emit a log with a routing key "kern.critical" type:
python emit_log_topic.py "kern.critical" "A critical kernel error"
 

Remote procedure call (RPC) (bidirectional)

The flows above are all one-way; they cannot return the result once the remote machine finishes executing.
A pattern that does return results is called RPC (remote procedure call); SNMP is a classic RPC example.
With RabbitMQ, each side is both a sender and a receiver.
But the receiving side cannot send its reply back through the queue the request arrived on.


For the reply, a second queue is created and the result is sent into that new queue.
To avoid hard-coding the server's reply queue, the client, when sending the command, also tells the server which queue to return the result to.
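The client-side bookkeeping (reply_to plus correlation_id) boils down to remembering which ids are in flight and ignoring everything else; a minimal sketch of the idea, independent of pika (`PendingCalls` is an illustrative name):

```python
import uuid

class PendingCalls:
    """Match RPC replies back to requests via correlation_id."""
    def __init__(self):
        self.pending = {}   # correlation_id -> reply body (None while waiting)

    def new_request(self):
        corr_id = str(uuid.uuid4())   # unique id sent along with the request
        self.pending[corr_id] = None
        return corr_id

    def on_response(self, corr_id, body):
        # a reply whose id we never issued (e.g. stale) is dropped
        if corr_id in self.pending:
            self.pending[corr_id] = body

calls = PendingCalls()
cid = calls.new_request()
calls.on_response("stale-id", b"ignored")   # unknown id: dropped
calls.on_response(cid, b"832040")           # the reply to our request
print(calls.pending[cid])                   # b'832040'
```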

 
rpc_client.py
import pika
import uuid

class FibonacciRpcClient(object):
    def __init__(self):
        credentials = pika.PlainCredentials('admin', '123456')
        self.connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
        self.channel = self.connection.channel()

        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue  # a randomly named reply queue

        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None  # on_response stores the reply body here
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id,
                                   ),
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()  # non-blocking check for new messages; receive any that arrived
        return int(self.response)


fibonacci_rpc = FibonacciRpcClient()

print(" [x] Requesting fib(30)")
response = fibonacci_rpc.call(30)
print(" [.] Got %r" % response)

rpc_server.py

import pika,sys

credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()

channel.queue_declare(queue='rpc_queue')

def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def on_request(ch, method, props, body):
    n = int(body)

    print(" [.] fib(%s)" % n)
    response = fib(n)

    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id= \
                                                         props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue='rpc_queue')

print(" [x] Awaiting RPC requests")
channel.start_consuming()

 

import pika
import uuid


class CMDRpcClient(object):
    def __init__(self):
        credentials = pika.PlainCredentials('admin', '123456')
        self.connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
        self.channel = self.connection.channel()

        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue  # a randomly named reply queue

        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None  # on_response stores the reply body here
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id,
                                   ),
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()  # non-blocking check for new messages; receive any that arrived
        return self.response


cmd_rpc = CMDRpcClient()

print(" [x] Requesting cmd 'df -h'")
response = cmd_rpc.call("df -h")
print(" [.] Got cmd result")
print(response.decode("utf-8"))
cmd_rpc_client.py (the client above sends a shell command instead of a number)
import pika,sys
import subprocess


credentials = pika.PlainCredentials('admin', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1', 5672, '/', credentials))
channel = connection.channel()


channel.queue_declare(queue='rpc_queue')


def CMD(cmd):
    # shell=True executes arbitrary input -- fine for a demo, dangerous in production
    cmd_obj = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    cmd_result = cmd_obj.stdout.read() + cmd_obj.stderr.read()

    return cmd_result

def on_request(ch, method, props, body):

    print(" [.] recv cmd(%s)" % body)
    response = CMD(body)

    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id= props.correlation_id),
                     body= response)
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue='rpc_queue')

print(" [x] Awaiting RPC requests")
channel.start_consuming()
cmd_rpc_server.py (the server above executes the received command with subprocess and returns the output)

 

 
OpenStack uses RabbitMQ by default.
posted @ 2018-05-20 00:12  Charonnnnn