Day Day Up ---- Python Day 11
RabbitMQ
Installation:
wget http://www.rabbitmq.com/releases/rabbitmq-server/v3.5.0/rabbitmq-server-3.5.0-1.noarch.rpm
rpm -ivh rabbitmq-server-3.5.0-1.noarch.rpm
Enable the management plugin: rabbitmq-plugins enable rabbitmq_management
Edit the configuration file:
cp /usr/share/doc/rabbitmq-server-3.6.2/rabbitmq.config.example /etc/rabbitmq/rabbitmq.config
In the config file, change %% {loopback_users, [<<"guest">>]}, to {loopback_users, []}
Without this change, connecting with pika fails with pika.exceptions.ProbableAuthenticationError
service rabbitmq-server start
RabbitMQ listens on port 5672
Example:
send side
# --*--coding:utf-8--*--
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="172.16.19.221", port=5672))  # create a connection
channel = connection.channel()  # open a channel
channel.queue_declare(queue='hello')  # declare the queue
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')  # publish the message
print("[x] Sent 'hello world!' ")
connection.close()
receive side
# --*--coding:utf-8--*--
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('172.16.19.221'))  # create a connection
channel = connection.channel()  # open a channel
channel.queue_declare(queue='hello')  # declare the queue (safe to re-declare)

def callback(ch, method, properties, body):  # called once per delivered message
    print("[x] Received %r" % body)

channel.basic_consume(callback, queue='hello', no_ack=True)  # no_ack=True: the broker does not wait for an acknowledgment
print('[*] waiting for messages. To exit press CTRL+C')
channel.start_consuming()  # start receiving messages
RabbitMQ dispatches messages to consumers in round-robin fashion, handing one message to each consumer in turn (in the order the consumers started).
If a consumer's socket drops, the broker detects it and can redeliver the message to another consumer, much like load balancing.
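The round-robin behavior is easy to picture with a broker-free sketch (the consumer names here are invented for illustration):

```python
from itertools import cycle

# Simulate the broker handing messages to consumers in start-up order.
consumers = ["consumer-1", "consumer-2", "consumer-3"]
dispatch = cycle(consumers)

assignments = [("msg-%d" % i, next(dispatch)) for i in range(6)]
for msg, consumer in assignments:
    print(msg, "->", consumer)
```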
With no_ack enabled, the broker does not care whether a message was processed: if a consumer loses power halfway through a message, that message is not passed on to another consumer, and the work is simply lost.
To make the broker detect a dead consumer and redeliver its message to another one, disable no_ack (just comment it out) and acknowledge each message inside the callback with ch.basic_ack(delivery_tag=method.delivery_tag):
# --*--coding:utf-8--*--
import pika, time

connection = pika.BlockingConnection(pika.ConnectionParameters('172.16.19.221'))  # create a connection
channel = connection.channel()  # open a channel
channel.queue_declare(queue='hello')  # declare the queue

def callback(ch, method, properties, body):  # callback
    print("--->", ch, method, properties)
    time.sleep(10)
    print("[x] Received %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # explicitly acknowledge the message

channel.basic_consume(callback, queue='hello',
                      # no_ack=True
                      )  # no_ack commented out: the broker now waits for an ack
print('[*] waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Message persistence:
By default, if the rabbitmq-server host suddenly loses power, all messages in its queues are lost.
For example: if queue hello holds 2 messages, they are gone after a restart.
How do we make things persistent?
Declare the queue with durable=True (if the send side declares it durable, the consumer side must declare it the same way):
send side
# --*--coding:utf-8--*--
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="172.16.19.221", port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello-1', durable=True)  # durable=True makes the queue itself persistent
channel.basic_publish(exchange='',
                      routing_key='hello-1',
                      body="hello world!")
print("[x] Sent 'hello world!' ")
connection.close()
consumer side
# --*--coding:utf-8--*--
import pika, time

connection = pika.BlockingConnection(pika.ConnectionParameters(host="172.16.19.221", port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello-1', durable=True)  # durable=True makes the queue itself persistent

def callback(ch, method, properties, body):
    print("--->", ch, method, properties)
    # time.sleep(10)
    print("[x] Received %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(callback, queue='hello-1')
channel.start_consuming()
The queue itself now survives a restart, but its messages do not; for that you also need message persistence:
Message persistence only requires a change on the send side:
# --*--coding:utf-8--*--
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello-2', durable=True)
channel.basic_publish(exchange='',
                      routing_key='hello-2',
                      body='hello world!',
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # 2 makes the message persistent; 1 means transient
                      ))
print("[x] Sent 'hello world!' ")
connection.close()
Fair dispatch
If RabbitMQ simply hands out messages in order, ignoring each consumer's load, a consumer on a weak machine can pile up messages it cannot finish while a consumer on a strong machine stays idle. To solve this, configure prefetch_count=1 on each consumer: it tells RabbitMQ not to send that consumer a new message until its current one has been acknowledged.
# --*--coding:utf-8--*--
import pika, time

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello-3', durable=True)

def callback(ch, method, properties, body):
    print("-->:", ch, method, properties)
    time.sleep(10)
    print("[x] received %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)  # only deliver the next message once the current one is acknowledged; set on the consumer side only
channel.basic_consume(callback, queue='hello-3')  # call callback whenever a message arrives
channel.start_consuming()
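Why prefetch_count=1 matters can be simulated without a broker (a toy model, not pika): give one fast and one slow worker 8 messages; because each worker only takes a new message when idle, the fast worker ends up handling most of them instead of both getting an equal, round-robin share.

```python
def dispatch_prefetch_1(messages, speeds):
    """Simulate prefetch_count=1: at each time step, any idle worker
    takes the next pending message; speed = time units per message."""
    busy_until = [0] * len(speeds)        # when each worker becomes free
    handled = [[] for _ in speeds]
    t = 0
    queue = list(messages)
    while queue:
        for w, s in enumerate(speeds):
            if queue and busy_until[w] <= t:
                msg = queue.pop(0)
                handled[w].append(msg)
                busy_until[w] = t + s
        t += 1
    return handled

# fast worker needs 1 time unit per message, slow worker needs 4
fast, slow = dispatch_prefetch_1(range(8), speeds=[1, 4])
print("fast handled:", len(fast), "slow handled:", len(slow))
```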
Publish/subscribe (broadcasting to queues)
The examples so far were all one-to-one: a message is sent to one named queue. Sometimes you want a message to reach every queue, like a broadcast; that is what exchanges are for.
An exchange is declared with a type, which decides which queues qualify to receive a message:
fanout: every queue bound to the exchange receives the message
direct: only the queue(s) selected by the routing key on this exchange receive the message
topic: every queue whose binding key (which may contain wildcards) matches the routing key receives the message
Broadcasts are real-time: a subscriber that is not running when the message is sent never receives it.
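The fanout and direct routing decisions can be sketched as a pure function (no broker involved; the binding table below is invented for illustration):

```python
def route(exchange_type, bindings, routing_key):
    """Return the queues a message with routing_key is delivered to.
    bindings: list of (queue_name, binding_key) pairs on the exchange."""
    if exchange_type == 'fanout':
        return [q for q, _ in bindings]                         # every bound queue
    if exchange_type == 'direct':
        return [q for q, bk in bindings if bk == routing_key]   # exact key match only
    raise ValueError("topic exchanges need wildcard matching")

bindings = [('q_all', ''), ('q_err', 'error'), ('q_info', 'info')]
print(route('fanout', bindings, 'error'))   # every bound queue receives it
print(route('direct', bindings, 'error'))   # only the queue bound with 'error'
```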
1. fanout
publisher
# --*--coding:utf-8--*--
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
channel.exchange_declare(exchange='logs',
                         type='fanout')  # exchange type: fanout
message = ' '.join(sys.argv[1:]) or 'info: hello world!'
channel.basic_publish(exchange='logs',
                      routing_key='',  # no queue name needed
                      body=message,
                      )
print("sent 'hello world' ")
connection.close()
subscriber
# --*--coding:utf-8--*--
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
# channel.queue_declare(queue='hello-4', durable=True)
channel.exchange_declare(exchange='logs',
                         type='fanout')
result = channel.queue_declare(exclusive=True)
# exclusive=True: no queue name is given, so RabbitMQ assigns a random one,
# and the queue is deleted automatically once its consumer disconnects
queue_name = result.method.queue
channel.queue_bind(exchange='logs',
                   queue=queue_name)
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print("-->", ch, method, properties)
    print("received:", body)

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()
2. Receiving messages selectively (type=direct)
RabbitMQ also supports routing by keyword: queues are bound with a key, the sender publishes a message to the exchange with a routing key, and the exchange uses that key to decide which queue(s) the message goes to.
publisher_direct
# --*--coding:utf-8--*--
import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
channel.exchange_declare(exchange="direct_logs",
                         type='direct')  # declare the exchange type
severity = sys.argv[1] if len(sys.argv) > 1 else 'info'  # first command-line argument is the log level
message = ' '.join(sys.argv[2:]) or 'hello world !'  # the rest is the log message
channel.basic_publish(exchange='direct_logs',
                      routing_key=severity,
                      body=message)  # publish the message
print("[x] Sent %r:%r" % (severity, message))
connection.close()
subscriber_direct
# --*--coding:utf-8--*--
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
channel.exchange_declare(exchange='direct_logs',
                         type='direct')  # declare the exchange type
result = channel.queue_declare(exclusive=True)
# exclusive=True: no queue name is given, so RabbitMQ assigns a random one,
# and the queue is deleted automatically once its consumer disconnects
queue_name = result.method.queue  # the generated queue name
severities = sys.argv[1:]  # log levels to subscribe to
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error] \n" % sys.argv[0])
    sys.exit(1)
for severity in severities:
    channel.queue_bind(exchange='direct_logs',
                       queue=queue_name,
                       routing_key=severity)
print('[*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):  # callback
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)  # receive messages
channel.start_consuming()  # start consuming
Finer-grained message filtering (type=topic)
publisher_topic
# --*--coding:utf-8--*--
import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
channel.exchange_declare(exchange='topic_logs',
                         type='topic')  # the topic type allows finer-grained filtering
routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
message = ' '.join(sys.argv[2:]) or 'hello world'
channel.basic_publish(exchange='topic_logs',
                      routing_key=routing_key,
                      body=message)
print(" [x] Sent %r:%r" % (routing_key, message))
connection.close()
subscriber_topic
# --*--coding:utf-8--*--
import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
channel.exchange_declare(exchange='topic_logs',
                         type='topic')
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)
for binding_key in binding_keys:
    channel.queue_bind(exchange='topic_logs',
                       queue=queue_name,
                       routing_key=binding_key)
print("[*] waiting for logs. To exit press CTRL+C")

def callback(ch, method, properties, body):
    print("[x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()
Wildcard rules: # matches zero or more dot-separated words; * matches exactly one word.
e.g. #.a matches a.a, aa.a, aaa.a, ...
*.a matches a.a, b.a, c.a, ...
Note: binding with routing key # on a topic exchange is equivalent to using fanout.
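These rules can be checked with a small translation of binding keys into regular expressions (a sketch of the matching semantics, not RabbitMQ's actual implementation):

```python
import re

def binding_to_regex(binding_key):
    """Translate an AMQP binding key into a regex:
    '*' -> exactly one word, '#' -> zero or more words."""
    p = re.escape(binding_key)
    p = p.replace(r'\*', r'[^.]+')                # one dot-separated word
    p = p.replace(r'\#\.', r'(?:[^.]+\.)*')       # '#.' at the start or middle
    p = p.replace(r'\.\#', r'(?:\.[^.]+)*')       # '.#' at the end
    p = p.replace(r'\#', r'[^.]+(?:\.[^.]+)*')    # a bare '#'
    return re.compile('^' + p + '$')

def matches(binding_key, routing_key):
    return bool(binding_to_regex(binding_key).match(routing_key))

print(matches('#.a', 'aa.a'))              # True
print(matches('*.a', 'b.a'))               # True
print(matches('*.a', 'x.y.a'))             # False: '*' is exactly one word
print(matches('kern.*', 'kern.critical'))  # True
```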
To receive all the logs run:
python receive_logs_topic.py "#"
To receive all logs from the facility "kern":
python receive_logs_topic.py "kern.*"
Or if you want to hear only about "critical" logs:
python receive_logs_topic.py "*.critical"
You can create multiple bindings:
python receive_logs_topic.py "kern.*" "*.critical"
And to emit a log with a routing key "kern.critical" type:
python emit_log_topic.py "kern.critical" "A critical kernel error"
RPC
Remote procedure call (RPC)
To illustrate how an RPC service could be used we're going to create a simple client class. It's going to expose a method named call which sends an RPC request and blocks until the answer is received:
fibonacci_rpc = FibonacciRpcClient()
result = fibonacci_rpc.call(4)
print("fib(4) is %r" % result)
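The core of the pattern, a reply queue plus a correlation_id, can be demonstrated in-process; the two queue.Queue objects below merely stand in for RabbitMQ queues:

```python
import queue, uuid

rpc_queue = queue.Queue()    # stands in for the server's 'rpc_queue'
reply_queue = queue.Queue()  # stands in for the client's exclusive callback queue

def server_step():
    """Take one request, compute fib(n), reply with the same correlation_id."""
    corr_id, n = rpc_queue.get()
    fib = [0, 1]
    for _ in range(n - 1):
        fib.append(fib[-1] + fib[-2])
    reply_queue.put((corr_id, fib[n]))

def call(n):
    corr_id = str(uuid.uuid4())        # tag the request
    rpc_queue.put((corr_id, n))
    server_step()                      # in real code the server runs in another process
    got_id, result = reply_queue.get()
    assert got_id == corr_id           # verify the reply matches our request
    return result

print("fib(4) is", call(4))
```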
rpc_server
# --*--coding:utf-8--*--
import pika, paramiko
import time, sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221', port=5672))
channel = connection.channel()
channel.queue_declare(queue='rpc_queue')

class cmd_server(object):
    def __init__(self, hostname, port, username, password, cmd):
        self.h = hostname
        self.p = port
        self.u = username
        self.pa = password
        self.c = cmd

    def exec_cmd(self):  # run the command remotely (the real SSH calls are commented out)
        ssh = paramiko.SSHClient()
        # ssh.connect(hostname=self.h, port=self.p, username=self.u, password=self.pa)
        # stdin, stdout, stderr = ssh.exec_command(self.c)
        # result = stdout.read().decode()
        ssh.close()
        return "hello world"

def on_request(ch, method, props, body):  # compute and publish the reply
    hostname, port, username, password, cmd = body.split()  # command sent by the client
    print(hostname, port, username, password, cmd)
    server_1 = cmd_server(hostname, port, username, password, cmd)
    response = server_1.exec_cmd()  # the result returned to the client
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue='rpc_queue')
print("[x] Awaiting RPC requests")
channel.start_consuming()
rpc_client
# --*--coding:utf-8--*--
import pika
import uuid, sys

class Server_cmd_Client(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host='172.16.19.221'))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):  # send the request to the server
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id,
                                   ),
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()
        return self.response.decode()

client_cmd_rpc = Server_cmd_Client()
print("executing command")
response = client_cmd_rpc.call(' '.join(sys.argv[1:]))  # joined into one string so the server's body.split() sees plain words
print(" [.] Got %r" % response)
Redis
Redis is a key-value store. It is similar to Memcached but supports more value types: string, list, set, zset (sorted set) and hash. These types support push/pop, add/remove, set intersection/union/difference and many richer operations, all of them atomic. On top of this, Redis supports several kinds of sorting. As with Memcached, data is cached in memory for speed; the difference is that Redis periodically writes updated data to disk (or appends modifications to a log file), and implements master-slave replication on top of that.
Install the redis module:
pip install redis
Usage:
API overview
The redis-py API falls into these groups:
- connections
- connection pools
- operations
- String operations
- Hash operations
- List operations
- Set operations
- Sorted Set operations
- pipelines
- publish/subscribe
1. Basic usage
redis-py provides two classes, Redis and StrictRedis, that implement the Redis commands. StrictRedis implements most of the official commands using the official syntax; Redis is a subclass of StrictRedis kept for backward compatibility with older versions of redis-py.
# --*--coding:utf-8--*--
import redis
r = redis.Redis(host='172.16.19.221',port=6379)
r.set('day',11)
print(r.get('day'))
print(r.keys())
Result:
b'11'
[b'day', b'foo']
2. Connection pool
redis-py uses a connection pool to manage all connections to one Redis server, avoiding the overhead of establishing and releasing a connection for every command. By default every Redis instance maintains its own pool. You can also create a pool yourself and pass it to Redis as a parameter, so that several Redis instances share one pool.
# --*--coding:utf-8--*--
import redis
pool = redis.ConnectionPool(host='172.16.19.221', port=6379)
r = redis.Redis( connection_pool=pool )
r.set('day2', 'lala')
print(r.get('day2'))
3. Redis operations
String operations: a Redis string maps one name to one value in memory.
set(name, value, ex=None, px=None, nx=False, xx=False)
Set a value. By default the key is created if missing and overwritten if present.
Parameters:
ex: expiry in seconds
px: expiry in milliseconds
nx: if True, set only when name does not already exist
xx: if True, set only when name already exists
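The nx/xx flags are easy to picture with a plain dict standing in for the server (a simulation of the semantics, not redis-py's implementation):

```python
def set_like(store, name, value, nx=False, xx=False):
    """Mimic redis SET with the NX/XX flags on a plain dict.
    Returns True if the value was written, None otherwise (like redis-py)."""
    if nx and name in store:
        return None          # NX: only set when the key does not exist
    if xx and name not in store:
        return None          # XX: only set when the key already exists
    store[name] = value
    return True

db = {}
print(set_like(db, 'day', 11, nx=True))   # True  - key was absent
print(set_like(db, 'day', 12, nx=True))   # None  - key exists, NX refuses
print(set_like(db, 'day', 12, xx=True))   # True  - key exists, XX allows
print(db['day'])                          # 12
```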
setnx(name, value)
Set the value only if name does not already exist (i.e. add).
setex(name, value, time)
Set a value with an expiry.
Parameters: time, the expiry (seconds as a number, or a timedelta)
psetex(name, time_ms, value)
Set a value with an expiry.
Parameters: time_ms, the expiry (milliseconds as a number, or a timedelta)
mset(*args, **kwargs)
Set several values at once,
e.g. mset(k1='v1', k2='v2') or mset({'k1': 'v1', 'k2': 'v2'})
get(name)
Get a value.
mget(keys, *args)
Get several values at once,
e.g. mget('ylr', 'wupeiqi') or r.mget(['ylr', 'wupeiqi'])
getset(name, value)
Set a new value and return the old one.
