Celery Distributed Task Queue: Quick Start
In this section:
Celery introduction and basic usage
Using Celery in a project
Starting multiple workers
Celery periodic tasks
Integrating with Django
Configuring Celery periodic tasks through Django
1. Celery Introduction and Basic Usage
Celery is a distributed asynchronous message/task queue written in Python. It makes it easy to process tasks asynchronously; if your business scenario needs asynchronous (or scheduled) work, Celery is worth considering. A few real-world examples (asynchronous and scheduled):
- You want to run a batch command on 100 machines. It may take a long time, but you don't want your program to block waiting for the result; instead you get back a task ID, and some time later you use that task ID to fetch the result. While the task is running you can carry on with other things.
- You want a scheduled task, for example checking all your customers' records every day and sending an SMS greeting to anyone whose birthday is today.
When executing tasks, Celery needs a message broker (middleware) to send and receive task messages and to store task results. RabbitMQ or Redis is normally used; more on this below.
1.1 Advantages of Celery:
- Simple: once you are familiar with Celery's workflow, configuring and using it is fairly straightforward
- Highly available: if a task fails or the connection drops mid-execution, Celery automatically retries the task
- Fast: a single Celery process can handle millions of tasks per minute
- Flexible: almost every Celery component can be extended or customized
Basic Celery workflow diagram:

1.2 Installing and Using Celery
Celery's default broker (middleware) is RabbitMQ, which takes only one line to configure:
broker_url = 'amqp://guest:guest@localhost:5672//'
If RabbitMQ is not installed yet, install it first; see http://docs.celeryproject.org/en/latest/getting-started/brokers/rabbitmq.html#id3
Using Redis as the broker works too.
Install the Redis extras:
$ pip install -U "celery[redis]"
Configuration
Configuration is easy, just configure the location of your Redis database:
app.conf.broker_url = 'redis://localhost:6379/0'
Where the URL is in the format of:
redis://:password@hostname:port/db_number   # include the password only if authentication is required
all fields after the scheme are optional, and will default to localhost on port 6379, using database 0.
If you want to retrieve each task's result, you also need to configure where the results are stored.
If you also want to store the state and return values of tasks in Redis, you should configure these settings:
app.conf.result_backend = 'redis://localhost:6379/0'
1.3 Getting Started with Celery
Install the celery module:
$ pip install celery
Create a Celery application to define your list of tasks.
Create a task file, call it tasks.py:
from celery import Celery

app = Celery('tasks',
             broker='redis://localhost',
             backend='redis://localhost')

@app.task
def add(x, y):
    print("running...", x, y)
    return x + y
Start a Celery worker to begin listening for and executing tasks:
$ celery -A tasks worker --loglevel=info
Calling the task
Open another terminal, start an interactive Python shell, and call the task:
>>> from tasks import add
>>> add.delay(4, 4)
Your worker terminal will show that it received a task. If you want to inspect the task result, assign the return value to a variable when you call the task:
>>> result = add.delay(4, 4)

Redis stores this queue message, including the execution result:

The ready() method returns whether the task has finished processing or not:
>>> result.ready()   # has the result come back yet?
False
You can wait for the result to complete, but this is rarely used since it turns the asynchronous call into a synchronous one:
>>> result.get(timeout=1)   # set a timeout; an error is raised if no result arrives within 1 second
8

In case the task raised an exception, get() will re-raise the exception, but you can override this by specifying the propagate argument:
>>> result.get(propagate=False)   # return the exception as a value instead of re-raising it

If the task raised an exception you can also gain access to the original traceback:
>>> result.traceback   # the traceback from the worker, giving the detailed error message
…
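Putting these pieces together, here is a minimal polling sketch, assuming the tasks.py app above and a running worker:
# polling sketch: check readiness, then fetch the result (assumes tasks.py from above and a running worker)
import time
from tasks import add

result = add.delay(4, 4)          # returns an AsyncResult immediately
while not result.ready():         # has the worker finished yet?
    print("still waiting, state:", result.state)
    time.sleep(0.5)
print("result:", result.get())    # safe to call get() now; it won't block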
2. Using Celery in a Project
Celery can be configured as a proper application; in a real project the file that wires up Celery and the task files should not live in the same module.
The directory layout looks like this:
proj/__init__.py   # the __init__ makes this a package, not just a folder
    /celery.py     # configuration
    /tasks.py      # tasks
Contents of proj/celery.py (the configuration file):
from __future__ import absolute_import, unicode_literals  # force absolute imports, so a local module with the same name as a library won't shadow it when using "from .. import xx"
from celery import Celery

app = Celery('proj',
             broker='amqp://',
             backend='amqp://',          # where results are stored
             include=['proj.tasks'])     # tell the app explicitly where its tasks live; needed because config and tasks are in separate files

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,  # results expire after one hour (in seconds) if not fetched; without this they are kept forever. Applies only to this app.
)

if __name__ == '__main__':
    app.start()
Contents of proj/tasks.py:
from __future__ import absolute_import, unicode_literals
from .celery import app  # relative import


@app.task
def add(x, y):
    return x + y


@app.task
def mul(x, y):
    return x * y


@app.task
def xsum(numbers):
    return sum(numbers)
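With this layout, any client that can import the proj package can queue work. A minimal client-side sketch (assuming a worker has been started for proj as shown below, and that the broker and backend in celery.py point at a reachable server):
# client-side sketch: queue individual tasks and a group of tasks
from celery import group
from proj.tasks import add, mul

print(add.delay(2, 3).get(timeout=10))          # 5
print(mul.apply_async((4, 5)).get(timeout=10))  # 20; apply_async is delay with more options
job = group(add.s(i, i) for i in range(5))      # fan several adds out in parallel
print(job().get(timeout=10))                    # [0, 2, 4, 6, 8]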
Upload the project to the server (dragging it over or using the rz command both work), then unpack it there.
Start the worker:
$ celery -A proj worker -l info   # proj is the project (package) name
Output:
 -------------- celery@Alexs-MacBook-Pro.local v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Darwin-15.6.0-x86_64-i386-64bit 2017-01-26 21:50:24
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         proj:0x103a020f0
- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost/
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

One of them is the worker (child) process and the other is the supervising (master) process, which watches for commands sent to it.

Distributed: when there is a lot of concurrent work (think of many people grabbing red envelopes at once), the tasks can be spread across many machines.

Starting workers in the background
In the background
In production you’ll want to run the worker in the background, this is described in detail in the daemonization tutorial.
The daemonization script uses the celery multi command to start one or more workers in the background (you can start several workers this way; in principle there is no limit to how many):
$ celery multi start w1 -A proj -l info   # starts the worker in the background; this is what production setups use, so a dropped SSH connection doesn't kill the process
# note that you do not write 'worker' in this command
celery multi v4.0.0 (latentcall)
> Starting nodes...
    > w1.halcyon.local: OK
You can restart it too:
$ celery multi restart w1 -A proj -l info   # restart
celery multi v4.0.0 (latentcall)
> Stopping nodes...
> w1.halcyon.local: TERM -> 64024
> Waiting for 1 node.....
> w1.halcyon.local: OK
> Restarting node w1.halcyon.local: OK
celery multi v4.0.0 (latentcall)
> Stopping nodes...
> w1.halcyon.local: TERM -> 64052
or stop it:
$ celery multi stop w1 -A proj -l info   # stop the worker; even with no worker running you can still send tasks, and they will be executed once a worker comes back up

The stop command is asynchronous, so it won't wait for the worker to shut down. You'll probably want to use the stopwait command instead, which ensures all currently executing tasks are completed before exiting:
$ celery multi stopwait w1 -A proj -l info

If everything runs in the foreground, SSH is usually configured to disconnect after, say, 30 minutes of inactivity, and the foreground program dies with it. To keep things running even when the connection drops, run the program in the background; every service has its own way of being daemonized.
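As a sketch of a slightly more production-like invocation, celery multi also accepts pid-file and log-file options (the paths below are just example locations):
$ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n%I.log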
3. Celery Periodic Tasks
Celery supports scheduled tasks: set when a task should run and Celery will run it for you automatically. The scheduling component is called celery beat.
from celery import Celery
from celery.schedules import crontab

app = Celery(
    'task',
    broker='redis://:123456@192.168.14.10',
    backend='redis://:123456@192.168.14.10',
)

@app.on_after_configure.connect  # fires once the app is configured and connected; the function below then registers the periodic tasks
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds; the task name is 'add every 10'.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')

    # Calls test('world') every 30 seconds
    sender.add_periodic_task(30.0, test.s('world'), expires=10)

    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )

@app.task
def test(arg):
    print(arg)

add_periodic_task registers a single periodic task.
The above adds periodic tasks by calling a function; you can also declare them in configuration style. The following runs a task every 30 seconds:
app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
app.conf.timezone = 'UTC'
With the tasks registered, Celery needs a separate, dedicated process to launch them on schedule. Note that this process launches tasks, it does not execute them: it keeps checking your schedule and, whenever a task is due, sends a task message for a celery worker to execute.
Start the task scheduler, celery beat (the scheduling component is simply called beat):
$ celery -A periodic_task beat
The output looks like below:
celery beat v4.0.2 (latentcall) is starting.
__    -    ... __   -        _
LocalTime -> 2017-02-08 18:39:31
Configuration ->
    . broker -> redis://localhost:6379//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> celery.beat.PersistentScheduler
    . db -> celerybeat-schedule
    . logfile -> [stderr]@%WARNING
    . maxinterval -> 5.00 minutes (300s)
One step is still missing: you need a worker to execute the tasks that celery beat launches.
Start a celery worker to execute the tasks:
$ celery -A periodic_task worker

 -------------- celery@Alexs-MacBook-Pro.local v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Darwin-15.6.0-x86_64-i386-64bit 2017-02-08 18:42:08
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x104d420b8
- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost/
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
That's it: watch the worker's output and you'll see the periodic task being executed every little while.
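For quick local experiments you can also embed beat inside the worker with the -B option, so a single command runs both (convenient for development; in production beat is normally run as its own process):
$ celery -A periodic_task worker -B -l info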
Note: Beat needs to store the last run times of the tasks in a local database file (named celerybeat-schedule by default; it is binary, so you can't read it directly), so it needs write access to the current directory, or alternatively you can specify a custom location for this file. Because the last run time of each task is recorded there, beat resumes from that timestamp after a restart instead of starting the count over; if a run was missed entirely while beat was down, that run is lost, and timing starts from the next run, which is launched straight away.
$ celery -A periodic_task beat -s /home/celery/var/run/celerybeat-schedule

More complex schedules
The periodic tasks above are simple: run something every N seconds. But what if you want an email every Monday, Wednesday and Friday at 8 a.m.? That is easy too: use crontab, which works just like the crontab built into Linux and lets you customize execution times.
Linux crontab reference: http://www.cnblogs.com/peida/archive/2013/01/08/2850483.html
from celery.schedules import crontab

app.conf.beat_schedule = {
    # Executes every Monday morning at 7:30 a.m.
    'add-every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}
The entry above runs tasks.add every Monday at 7:30 a.m.

More scheduling options:
| Example | Meaning |
| --- | --- |
| crontab() | Execute every minute. |
| crontab(minute=0, hour=0) | Execute daily at midnight. |
| crontab(minute=0, hour='*/3') | Execute every three hours: midnight, 3am, 6am, 9am, noon, 3pm, 6pm, 9pm. |
| crontab(minute=0, hour='0,3,6,9,12,15,18,21') | Same as previous. |
| crontab(minute='*/15') | Execute every 15 minutes. |
| crontab(day_of_week='sunday') | Execute every minute (!) at Sundays. |
| crontab(minute='*', hour='*', day_of_week='sun') | Same as previous. |
| crontab(minute='*/10', hour='3,17,22', day_of_week='thu,fri') | Execute every ten minutes, but only between 3-4 am, 5-6 pm, and 10-11 pm on Thursdays or Fridays. |
| crontab(minute=0, hour='*/2,*/3') | Execute every even hour, and every hour divisible by three. This means: at every hour except: 1am, 5am, 7am, 11am, 1pm, 5pm, 7pm, 11pm. |
| crontab(minute=0, hour='*/5') | Execute hour divisible by 5. This means that it is triggered at 3pm, not 5pm (since 3pm equals the 24-hour clock value of "15", which is divisible by 5). |
| crontab(minute=0, hour='*/3,8-17') | Execute every hour divisible by 3, and every hour during office hours (8am-5pm). |
| crontab(0, 0, day_of_month='2') | Execute on the second day of every month. |
| crontab(0, 0, day_of_month='2-30/2') | Execute on every even numbered day. |
| crontab(0, 0, day_of_month='1-7,15-21') | Execute on the first and third weeks of the month. |
| crontab(0, 0, day_of_month='11', month_of_year='5') | Execute on the eleventh of May every year. |
| crontab(0, 0, month_of_year='*/3') | Execute on the first month of every quarter. |
The options above cover the vast majority of scheduling needs; you can even schedule tasks according to sunrise and sunset (solar schedules), see http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#solar-schedules
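As a sketch, a solar schedule entry looks like the following (the coordinates are arbitrary example values):
# sketch of a solar schedule: run tasks.add at every sunset at the given latitude/longitude
from celery import Celery
from celery.schedules import solar

app = Celery('task', broker='redis://localhost')

app.conf.beat_schedule = {
    'add-at-sunset': {
        'task': 'tasks.add',
        'schedule': solar('sunset', -37.81753, 144.96715),  # event, latitude, longitude
        'args': (16, 16),
    },
}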

The crontab edit page, where the scheduled jobs are written:
Below is how it is written on Linux:


Output:


In Python and in the shell it can be written like this:

4. Best Practice: Integrating with Django
Django can easily be combined with Celery to run asynchronous tasks; only a little configuration is needed.
If you have a modern Django project layout like:
- proj/
- proj/__init__.py
- proj/settings.py
- proj/urls.py
- manage.py
then the recommended way is to create a new proj/proj/celery.py module that defines the Celery instance:
file: proj/proj/celery.py (create this celery.py at the same level as settings.py)
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')  # declare Django's settings environment variable

app = Celery('proj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   (for example the Redis connection settings placed in settings.py)
#   should have a 'CELERY_' prefix; the names must start with CELERY_.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()  # if there are multiple apps, Celery discovers and loads their tasks automatically


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Add the following to the settings file:
CELERY_BROKER_URL = 'redis://:111111@192.168.14.10'
CELERY_RESULT_BACKEND = 'redis://:111111@192.168.14.10'
Then you need to import this app in your proj/proj/__init__.py module. This ensures that the app is loaded when Django starts so that the @shared_task decorator (mentioned later) will use it:
proj/proj/__init__.py:   # with this in place, Django automatically picks up celery.py
from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ['celery_app']
Note that this example project layout is suitable for larger projects, for simple projects you may use a single contained module that defines both the app and tasks, like in the First Steps with Celery tutorial.
Let's break down what happens in the first module. First we import absolute imports from the future, so that our celery.py module won't clash with the library:
from __future__ import absolute_import
Then we set the default DJANGO_SETTINGS_MODULE environment variable for the celery command-line program:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
You don’t need this line, but it saves you from always passing in the settings module to the celery program. It must always come before creating the app instances, as is what we do next:
app = Celery('proj')
This is our instance of the library.
We also add the Django settings module as a configuration source for Celery. This means that you don’t have to use multiple configuration files, and instead configure Celery directly from the Django settings; but you can also separate them if wanted.
The uppercase namespace means that all Celery configuration options must be specified in uppercase instead of lowercase, and start with CELERY_, so for example the task_always_eager setting becomes CELERY_TASK_ALWAYS_EAGER, and the broker_url setting becomes CELERY_BROKER_URL.
You can pass the object directly here, but using a string is better since then the worker doesn’t have to serialize the object.
app.config_from_object('django.conf:settings', namespace='CELERY')
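With namespace='CELERY', each lowercase Celery option is read from an uppercase, CELERY_-prefixed Django setting. A short settings.py sketch (the values are examples only):
# settings.py sketch: lowercase celery options become CELERY_* keys under namespace='CELERY'
CELERY_BROKER_URL = 'redis://localhost:6379/0'        # broker_url
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'    # result_backend
CELERY_RESULT_EXPIRES = 3600                          # result_expires
CELERY_TASK_ALWAYS_EAGER = False                      # task_always_eager (True runs tasks inline, handy for tests)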
Next, a common practice for reusable apps is to define all tasks in a separate tasks.py module, and Celery does have a way to auto-discover these modules:
app.autodiscover_tasks()
With the line above Celery will automatically discover tasks from all of your installed apps, following the tasks.py convention:
- app1/
    - tasks.py   # lives in each app's directory at the same level as models.py; the filename must be tasks.py
    - models.py
- app2/
    - tasks.py
    - models.py
Finally, the debug_task example is a task that dumps its own request information. This is using the new bind=True task option introduced in Celery 3.1 to easily refer to the current task instance.
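bind=True passes the task instance itself as the first argument (self), which is what makes self.request and self.retry available. A hedged sketch of a retrying task (the task name, the URL handling and the use of the requests library are illustrative assumptions, not part of the original project):
# sketch: bind=True gives access to self.request and self.retry (task name and URL are hypothetical)
import requests
from celery import shared_task

@shared_task(bind=True, max_retries=3)
def http_get(self, url):
    try:
        return requests.get(url, timeout=5).text
    except requests.RequestException as exc:
        # retry up to max_retries times, waiting 5 seconds between attempts
        raise self.retry(exc=exc, countdown=5)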
Then write your tasks in each app's own tasks.py:
# Create your tasks here
from __future__ import absolute_import, unicode_literals
from celery import shared_task  # shared_task lets any app import and use these tasks, as long as the decorator is applied


@shared_task
def add(x, y):
    return x + y


@shared_task
def mul(x, y):
    return x * y


@shared_task
def xsum(numbers):
    return sum(numbers)
Calling the Celery task from your Django views:
from django.shortcuts import render, HttpResponse

# Create your views here.

from bernard import tasks

def task_test(request):

    res = tasks.add.delay(228, 24)
    print("start running task")
    print("async task res", res.get())

    return HttpResponse('res %s' % res.get())
Then add a time.sleep to the add function in tasks.py to simulate a slow result. Make this change in the project unpacked on the Linux box, because Linux is the execution side: it holds the project and runs the worker, while the Django site we started with Python is only the task-initiating side; it just passes the task function names across and, through the URL mapping and the view, kicks the task off.


The overall flow: note that you should not call get() on the result directly in the view (in PyCharm), because that turns the call synchronous. Instead, return the task_id to the caller so the result can be fetched asynchronously later.

The worker on Linux and Django on Windows share the same Redis; Celery is the bridge that routes the task to the worker that executes it.
Then the result is fetched asynchronously:
def index(request):

    res = tasks.add.delay(5, 999)

    print(res)
    return HttpResponse(res.task_id)  # return the task id to the caller
The frontend then takes the generated task_id and keeps polling (for example via Ajax) the result URL, and the view below uses that id to fetch the result:
from celery.result import AsyncResult  # needed to look results up by task id

def task_res(request, task_id):
    # fetch the result by task_id; this works because the result backend is configured in settings
    r = AsyncResult(id=task_id)  # r.status shows whether a result is back yet (SUCCESS once done, PENDING otherwise)
    return HttpResponse(r.get())
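Because r.get() blocks until the result arrives, a polling endpoint is usually better off checking the state first. A minimal sketch of a non-blocking variant (the view name is made up):
# sketch: non-blocking variant, report the state while pending and return the value once ready
from celery.result import AsyncResult
from django.http import HttpResponse

def task_res_nonblocking(request, task_id):
    r = AsyncResult(id=task_id)
    if r.ready():                          # SUCCESS or FAILURE
        return HttpResponse(r.get(propagate=False))
    return HttpResponse(r.status)          # PENDING / STARTED / RETRY ...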
URLs:
from django.conf.urls import url
from django.contrib import admin
from app01 import views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^index/', views.index),
    url(r'^task_res/(?P<task_id>[\w-]+)/', views.task_res),  # capture the task id (a UUID); the original pattern (\.+) would not match one
]
The Ajax part is not shown here.
One note about Redis: if you stop Redis manually, all data is saved automatically, but if the machine crashes, any data not yet persisted is lost. The command that forces all keys to be flushed to disk is SAVE.
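For reference, the same flush can also be triggered from Python with the redis-py client, assuming it is installed and pointed at your server:
# sketch: force Redis to write its dataset to disk using redis-py
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
r.bgsave()   # asynchronous background save (non-blocking)
# r.save()   # synchronous save; blocks the server until the RDB file is written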
5. Using Periodic Tasks (Scheduled Tasks) in Django
There’s the django-celery-beat extension that stores the schedule in the Django database, and presents a convenient admin interface to manage periodic tasks at runtime.
To install and use this extension:
- Use pip to install the package:
  $ pip install django-celery-beat
- Add the django_celery_beat module to INSTALLED_APPS in your Django project's settings.py:
  INSTALLED_APPS = (
      ...,
      'django_celery_beat',
  )
  Note that there is no dash in the module name, only underscores.
- Apply Django database migrations so that the necessary tables are created (django_celery_beat creates several tables of its own, which is why the migrate is needed):
  $ python manage.py migrate   # from now on, periodic task definitions live in these database tables
- Start the celery beat service using the django scheduler:
  $ celery -A proj beat -l info -S django   # start beat
- Visit the Django-Admin interface to set up some periodic tasks (a programmatic sketch follows below).
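Besides the admin pages, the same schedules can be created programmatically through the django_celery_beat models. A minimal sketch (the task path and interval are example values):
# sketch: create a periodic task in the database from code (e.g. a Django shell or a data migration)
import json
from django_celery_beat.models import IntervalSchedule, PeriodicTask

schedule, _ = IntervalSchedule.objects.get_or_create(
    every=30,
    period=IntervalSchedule.SECONDS,
)
PeriodicTask.objects.get_or_create(
    name='add every 30 seconds',       # must be unique
    task='app01.tasks.add',            # dotted path of a registered task (example name)
    interval=schedule,
    defaults={'args': json.dumps([16, 16])},
)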



In the admin pages there are three tables:


After configuration it looks like this:

If the task is a command:


This cmd task is a new task, so the worker must be restarted before it can pick it up.


What you create in the admin is the task schedule:


Then start the celery beat scheduler: it reads the tasks from Django's database and sends them to the workers.
Now start your celery beat and worker; every 2 minutes, beat will launch a task message telling the worker to run the scp_task task.
Note: in testing, celery beat had to be restarted every time a task was added or modified, otherwise the new configuration was not read by the beat process.