Python Basics 11 - RabbitMQ Queues

RabbitMQ Queues

Install RabbitMQ:

https://www.cnblogs.com/jehuzzh/p/12560038.html

Install the Python RabbitMQ module:

pip install pika   (Python communicates with RabbitMQ through the pika module)

 

The simplest queue communication

 

 producer:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))  # establish a connection (socket) to RabbitMQ
channel = connection.channel()
# declare a queue
channel.queue_declare(queue='hello')
# publish a message; a message cannot be sent to a queue directly, it goes through an exchange
channel.basic_publish(exchange='',
                      routing_key='hello',   # queue name
                      body='Hello World!')   # message body

print("send Hello World!")

connection.close()

consumer:

import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost'))
channel = connection.channel()

# You may ask why we declare the queue again - we already declared it in the producer.
# If we were sure the queue already existed, we could skip the declaration.
# But we don't know which program will run first, so declaring the queue again in the consumer avoids an error.
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print("-->", ch, method, properties)
    time.sleep(10)   # simulate processing the message
    print("Received %r" % body)  # body is the message content

channel.basic_consume('hello',
                      callback,        # run callback when a message arrives
                      auto_ack=True)   # auto-acknowledge: RabbitMQ treats the message as acknowledged, and deletes it, as soon as it is delivered

print('Waiting for messages. To exit press CTRL+C')
channel.start_consuming()  # start the consumer; it blocks and waits for messages

Note the acknowledgment behavior. With auto_ack=True, as above, RabbitMQ considers a message acknowledged the moment it is delivered, so if the consumer dies mid-processing, the message is lost. With manual acknowledgments (auto_ack=False, pika's default), suppose one producer sends messages and several consumers receive them: if the first consumer gets a message and crashes or disconnects while processing it (say processing takes 10 seconds), RabbitMQ redelivers the message to the next consumer. Only when a consumer finishes and returns an ack does RabbitMQ delete the message from the queue; until then the message stays in RabbitMQ.
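A minimal sketch of a consumer with manual acknowledgments (same 'hello' queue as above):

import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    time.sleep(10)   # simulate slow processing
    print("Received %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # only now does RabbitMQ delete the message

channel.basic_consume('hello', callback)   # auto_ack defaults to False, so acks are manual
channel.start_consuming()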

 

On Windows, run rabbitmqctl.bat list_queues from the sbin directory of the RabbitMQ installation to see which queues currently exist and how many messages each holds.

 

Message persistence

Without persistence, messages in RabbitMQ are held only in memory: if the RabbitMQ service stops or the machine goes down, any messages not yet consumed are lost.

Making things persistent is simple and takes two steps.

When declaring the queue, add durable=True to persist the queue itself:

channel.queue_declare(queue='task_queue', durable=True)

To persist the messages in the queue, publish them with delivery_mode=2:

channel.basic_publish(exchange='',
                      routing_key="task_queue",
                      body=message,
                      properties=pika.BasicProperties(
                         delivery_mode = 2, # make message persistent
                      ))

Example producer:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))  # establish a connection (socket) to RabbitMQ
channel = connection.channel()
# declare a queue
channel.queue_declare(queue='jehuzzh', durable=True)  # durable=True: persist the queue

# publish a message; a message cannot be sent to a queue directly, it goes through an exchange
channel.basic_publish(exchange='',
                      routing_key='jehuzzh',   # queue name
                      body='I miss you!',      # message body
                      properties=pika.BasicProperties(delivery_mode=2))   # persist the message

print("send I miss you! done")

connection.close()

 

Fair dispatch (load balancing across consumers)

If RabbitMQ simply dispatched messages to consumers in order, without considering each consumer's load, a consumer on a weak machine could pile up messages it cannot finish while a consumer on a powerful machine sits idle. To solve this, set prefetch_count=1 on each consumer, which tells RabbitMQ not to send this consumer a new message until it has finished processing the current one.

channel.basic_qos(prefetch_count=1)

consumer:

import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='jehuzzh', durable=True)

def callback(ch, method, properties, body):
    print("-->", ch, method, properties)
    time.sleep(30)   # simulate processing the message
    print("Received %r" % body)  # body is the message content
    ch.basic_ack(delivery_tag=method.delivery_tag)  # manual ack (auto_ack defaults to False); without it the message is never removed and, with prefetch_count=1, no further messages arrive

channel.basic_qos(prefetch_count=1)  # handle only one message at a time; accept the next only after the current one is done
channel.basic_consume('jehuzzh',   # queue name
                      callback)    # run callback when a message arrives

print('Waiting for messages...')
channel.start_consuming()  # start the consumer; it blocks and waits for messages

In this example, as long as the consumer has not finished its current message, it receives no new ones; messages from the producer go to other consumers instead.

 

Publish/Subscribe

The examples so far have been essentially one-to-one: each message is sent to one specific queue. Sometimes, though, you want a message to reach every queue, broadcast-style, and that is what exchanges are for.

An exchange receives messages from producers on one side and pushes them into queues on the other. The exchange must know exactly what to do with each message it receives: append it to one particular queue? to many queues? or discard it? The rules are defined by the exchange type.

An exchange is declared with a type, which determines which queues qualify to receive a given message.

Exchange types:
fanout: every queue bound to this exchange receives the message
direct: only the queue(s) bound with the exact routing key receive the message
topic: every queue whose binding key (which may contain wildcards) matches the routing key receives the message

   Wildcard notes: # matches zero or more words, * matches exactly one word (words are separated by dots)
      e.g. #.a matches a.a, aa.a, aaa.a, ...
           *.a matches a.a, b.a, c.a, ...
      Note: binding with the routing key # on a topic exchange behaves like fanout

headers: routes messages to queues based on message headers rather than the routing key (see the sketch below)
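The fanout, direct, and topic types are all demonstrated later in this post; headers is not, so here is a minimal sketch of a headers exchange (the exchange name header_logs and the header keys are made up for illustration). The queue is bound with an x-match argument, and routing is decided by the message's headers property instead of the routing key:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='header_logs', exchange_type='headers')

result = channel.queue_declare('', exclusive=True)
queue_name = result.method.queue

# x-match=all: every header listed here must match ('any' would mean at least one)
channel.queue_bind(exchange='header_logs',
                   queue=queue_name,
                   arguments={'x-match': 'all', 'source': 'cron', 'level': 'error'})

# the routing key is ignored; the headers decide which queues get the message
channel.basic_publish(exchange='header_logs',
                      routing_key='',
                      body='cron error!',
                      properties=pika.BasicProperties(headers={'source': 'cron', 'level': 'error'}))

# a consumer would then basic_consume(queue_name, ...) exactly as in the fanout example below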

 

fanout exchange example (every queue bound to the exchange receives the message):

producer:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# declare an exchange
channel.exchange_declare(exchange='zzh',   # exchange name
                         exchange_type='fanout')     # exchange type

message = "info: I love you!"  # message body
channel.basic_publish(exchange='zzh',
                      routing_key='',
                      body=message)  # message body
print("Sent %r" % message)
connection.close()

consumer:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='zzh',
                         exchange_type='fanout')

result = channel.queue_declare('', exclusive=True)  # no queue name given, so RabbitMQ assigns a random one; exclusive=True deletes the queue once its consumer disconnects
queue_name = result.method.queue   # the randomly generated queue name
print("queue_name:", queue_name)

channel.queue_bind(exchange='zzh',
                   queue=queue_name)    # bind the queue to the exchange: every message published through this exchange lands in this queue and is delivered to the consumer

print('Waiting for zzh...')

def callback(ch, method, properties, body):
    print(" consume %r" % body)

channel.basic_consume(queue_name,
                      callback,
                      auto_ack=True)

channel.start_consuming()

If multiple consumers each bind their own queue to the "zzh" exchange, every one of them receives the message.

 

Receiving messages selectively (exchange type = direct)

RabbitMQ also supports routing by keyword: queues are bound to the exchange with a routing key, the producer sends each message to the exchange with a routing key, and the exchange uses that key to decide which queue(s) should receive the message.

 

 direct_producer:

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         exchange_type='direct')

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'  # first command-line argument if given, otherwise 'info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'   # remaining arguments joined as the message, or a default
channel.basic_publish(exchange='direct_logs',   # exchange name
                      routing_key=severity,
                      body=message)
print(" [x] Sent %r:%r" % (severity, message))
connection.close()

direct_consumer:

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         exchange_type='direct')

result = channel.queue_declare('',exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)

for severity in severities:
    channel.queue_bind(exchange='direct_logs',
                       queue=queue_name,
                       routing_key=severity)

print(' [*] Waiting for logs. To exit press CTRL+C')


def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))


channel.basic_consume(queue_name,
                      callback,
                      auto_ack=True)

channel.start_consuming()

For example, if a consumer is started with the argument info, it only receives the messages that direct_producer publishes with the routing key info.
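For example (assuming the two scripts are saved as direct_producer.py and direct_consumer.py):

To receive only info and warning messages:

python direct_consumer.py info warning

To emit an error message, which the consumer above would not receive:

python direct_producer.py error "Something went wrong"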

 

Finer-grained message filtering

Although the direct exchange improves our system, it still has a limitation: it cannot route on multiple criteria.

In our logging system we might want to subscribe not only by severity, but also by the source that emitted the log - routing on both severity (info/warning/critical...) and facility (auth/cron/kern...).

That would give us much more flexibility: we could listen only for critical errors coming from 'cron', but for all logs from 'kern'.

 

 producer:

import pika
import sys
 
connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()
 
channel.exchange_declare(exchange='topic_logs',
                         exchange_type='topic')
 
routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='topic_logs',
                      routing_key=routing_key,
                      body=message)
print(" [x] Sent %r:%r" % (routing_key, message))
connection.close()

consumer:

import pika
import sys
 
connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()
 
channel.exchange_declare(exchange='topic_logs',
                         exchange_type='topic')

result = channel.queue_declare('', exclusive=True)
queue_name = result.method.queue
 
binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)
 
for binding_key in binding_keys:
    channel.queue_bind(exchange='topic_logs',
                       queue=queue_name,
                       routing_key=binding_key)
 
print(' [*] Waiting for logs. To exit press CTRL+C')
 
def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))
 
channel.basic_consume(queue_name,
                      callback,
                      auto_ack=True)
 
channel.start_consuming()

To receive all the logs run:

python receive_logs_topic.py "#"

To receive all logs from the facility "kern":

python receive_logs_topic.py "kern.*"

Or if you want to hear only about "critical" logs:

python receive_logs_topic.py "*.critical"

You can create multiple bindings:

python receive_logs_topic.py "kern.*" "*.critical"

And to emit a log with a routing key "kern.critical" type:

python emit_log_topic.py "kern.critical" "A critical kernel error"

 

Remote procedure call (RPC)

 

To illustrate how an RPC service can be used, we'll create a simple client class. It exposes a method named call, which sends an RPC request and blocks until the answer arrives:

fibonacci_rpc = FibonacciRpcClient()
result = fibonacci_rpc.call(4)
print("fib(4) is %r" % result)

 

 RPC server

import pika
import time
connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
 
channel = connection.channel()
 
channel.queue_declare(queue='rpc_queue')
 
def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)
 
def on_request(ch, method, props, body):
    n = int(body)

    print(" [.] fib(%s)" % n)
    response = fib(n)

    # send the answer back to the client's callback queue, echoing its correlation_id
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id = \
                                                         props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag = method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume('rpc_queue', on_request)
 
print(" [x] Awaiting RPC requests")
channel.start_consuming()

RPC client

import pika
import uuid
 
class FibonacciRpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(
                host='localhost'))
 
        self.channel = self.connection.channel()
 
        result = self.channel.queue_declare('', exclusive=True)
        self.callback_queue = result.method.queue

        self.channel.basic_consume(self.callback_queue,
                                   self.on_response,
                                   auto_ack=True)
 
    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body
 
    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                         reply_to = self.callback_queue,
                                         correlation_id = self.corr_id,
                                         ),
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()
        return int(self.response)
 
fibonacci_rpc = FibonacciRpcClient()
 
print(" [x] Requesting fib(30)")
response = fibonacci_rpc.call(30)
print(" [.] Got %r" % response)
posted @ 2020-03-25 00:55  jehuzzh  阅读(193)  评论(0编辑  收藏  举报