Post category - python_爬虫 (Python web scraping)

Abstract: # -*- coding: UTF-8 -*- import requests import time from urllib.parse import quote import pymongo # get the departure cities def from_point(): # client=pymongo.MongoClien …
posted @ 2021-02-23 10:54 山药牛肉 Views(199) Comments(0) Recommend(0)
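
The gist of this post, sketched below under stated assumptions: the MongoDB host/port and field names are placeholders, and the short city list stands in for whatever the original code actually crawls.

# -*- coding: UTF-8 -*-
import time
from urllib.parse import quote
import pymongo

def from_point():
    # connect to a local MongoDB instance (default host/port are assumptions)
    client = pymongo.MongoClient('localhost', 27017)
    db = client['qunar']
    collection = db['departures']
    # placeholder departure cities; the original post builds this list from a crawled page
    for city in ['北京', '上海', '广州']:
        encoded = quote(city)   # URL-encode the Chinese city name for use in request URLs
        collection.insert_one({'departure': city, 'encoded': encoded})
        time.sleep(1)           # pause between iterations, as a polite crawler should

if __name__ == '__main__':
    from_point()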
Abstract: # -*- coding: UTF-8 -*- import pymongo # connect to the database client = pymongo.MongoClient('localhost',27017) # create the connection client db=client['qunar'] # create the database connection collection=db['dep …
posted @ 2021-02-23 10:05 山药牛肉 Views(236) Comments(0) Recommend(0)
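
Completing the abstract's connection boilerplate into a self-contained snippet; the inserted document is only illustrative and its field names are assumptions, not taken from the post.

# -*- coding: UTF-8 -*-
import pymongo

# connect to the database server
client = pymongo.MongoClient('localhost', 27017)
# select (or lazily create) the database and the collection
db = client['qunar']
collection = db['departures']

# insert one illustrative document
collection.insert_one({'departure': '北京', 'crawled_at': '2021-02-23'})
print(collection.count_documents({}))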
Abstract: # -*- coding: UTF-8 -*- import pymongo # connect to the database client = pymongo.MongoClient('localhost',27017) db=client['qunar'] collection=db['departures'] # read the data d …
posted @ 2021-02-22 09:05 山药牛肉 Views(627) Comments(0) Recommend(0)
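
A short sketch of reading the stored documents back, using the same connection setup as the previous entry:

# -*- coding: UTF-8 -*-
import pymongo

client = pymongo.MongoClient('localhost', 27017)
db = client['qunar']
collection = db['departures']

# read the data: iterate over every document in the collection
for doc in collection.find():
    print(doc)

# or read everything into a list for further processing, dropping the _id field
data = list(collection.find({}, {'_id': 0}))
print(len(data))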
Abstract: Reposted from https://www.cnblogs.com/whx2008/p/12633661.html from urllib.parse import quote def convert(content): return quote(content) if __name__ == "__main__": …
posted @ 2021-02-19 16:31 山药牛肉 Views(200) Comments(0) Recommend(0)
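
The quote helper from the reposted snippet, rounded out so it runs end to end; unquote is added here only to show the round trip.

from urllib.parse import quote, unquote

def convert(content):
    # percent-encode the string so it is safe to embed in a URL
    return quote(content)

if __name__ == "__main__":
    encoded = convert('北京')
    print(encoded)            # %E5%8C%97%E4%BA%AC
    print(unquote(encoded))   # back to 北京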
Abstract: Using an independent-travel (自由行) trip as the example: 1. Open the website touch.qunar.com in Google Chrome. 2. Press F12 in Chrome to open the developer tools.
posted @ 2021-02-18 15:53 山药牛肉 Views(37) Comments(0) Recommend(0)
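
Once the target request shows up in the DevTools Network panel, it can be replayed from Python. The snippet below only shows the general pattern; the URL and headers are placeholders, not the real interface used in the post.

import requests

# placeholder URL: copy the real request URL and parameters from the Network panel
url = 'https://touch.qunar.com/'
headers = {
    # a mobile User-Agent so the touch (mobile) site serves its normal pages
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 13_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148',
}
resp = requests.get(url, headers=headers, timeout=10)
print(resp.status_code)
print(resp.text[:200])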
Abstract: # -*- encoding: utf-8 -*- import requests import pandas as pd import time import pymongo # import the MongoDB library # establish the connection client = pymongo.MongoClient('localhost',2 …
posted @ 2021-02-09 15:54 山药牛肉 Views(207) Comments(0) Recommend(0)
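
A sketch of the flow this abstract suggests (fetch, shape with pandas, insert into MongoDB); the records, database name, and collection name below are placeholders rather than the post's actual data.

# -*- encoding: utf-8 -*-
import pandas as pd
import pymongo

# establish the connection (local MongoDB; database/collection names are assumptions)
client = pymongo.MongoClient('localhost', 27017)
db = client['qunar']
collection = db['weather']

# placeholder records standing in for a parsed API response
records = [{'city': '北京', 'tmp': '3'}, {'city': '上海', 'tmp': '8'}]
df = pd.DataFrame(records)

# insert the DataFrame rows as individual documents
collection.insert_many(df.to_dict('records'))
print(collection.count_documents({}))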
Abstract: # -*- encoding: utf-8 -*- import requests import pandas as pd import time import json url='https://cdn.heweather.com/china-city-list.txt' strhtml= req …
posted @ 2021-02-07 16:05 山药牛肉 Views(178) Comments(0) Recommend(0)
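
A minimal sketch of downloading the city list the abstract points to; the file is plain text, so the parsing here is deliberately simple.

# -*- encoding: utf-8 -*-
import requests

url = 'https://cdn.heweather.com/china-city-list.txt'
strhtml = requests.get(url, timeout=10)
strhtml.encoding = 'utf-8'            # the list is UTF-8 plain text

# split the text into lines and show the first few rows
lines = strhtml.text.strip().split('\n')
for line in lines[:5]:
    print(line)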
Abstract: from bs4 import BeautifulSoup import requests import re url='http://www.cntour.cn/' strhtml=requests.get(url) soup= BeautifulSoup(strhtml.text,'lxml') …
posted @ 2021-02-04 13:39 山药牛肉 Views(43) Comments(0) Recommend(0)
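
In the spirit of this abstract, a sketch that combines BeautifulSoup with a regular expression to pull a numeric id out of each link; the CSS selector and the href pattern are assumptions about cntour.cn's markup, not copied from the post.

from bs4 import BeautifulSoup
import requests
import re

url = 'http://www.cntour.cn/'
strhtml = requests.get(url)
soup = BeautifulSoup(strhtml.text, 'lxml')

# the selector is a stand-in; the real one is copied from DevTools in the post
data = soup.select('ul > li > a')
for item in data[:10]:
    href = item.get('href', '')
    # pull a trailing numeric id out of links such as .../12345.html
    match = re.search(r'\d+', href)
    print(item.get_text().strip(), match.group() if match else None)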
Abstract: from bs4 import BeautifulSoup import requests url='http://www.cntour.cn/' strhtml=requests.get(url) soup= BeautifulSoup(strhtml.text,'lxml') data= sou …
posted @ 2021-02-04 09:55 山药牛肉 Views(93) Comments(1) Recommend(0)
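
The same page parsed without the regular-expression step, again with a stand-in selector:

from bs4 import BeautifulSoup
import requests

url = 'http://www.cntour.cn/'
strhtml = requests.get(url)
soup = BeautifulSoup(strhtml.text, 'lxml')

# selector is an assumption; "Copy selector" in DevTools gives the exact path
data = soup.select('ul > li > a')
for item in data[:10]:
    print({'title': item.get_text().strip(), 'link': item.get('href')})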
Abstract: Based on https://www.cnblogs.com/Irvingcode/p/12544584.html with some improvements: the salt is generated differently (ts = str(time.time() * 1000)) instead of int(time.mktime(date_time_obj.timetuple() …
posted @ 2021-02-03 15:22 山药牛肉 Views(50) Comments(1) Recommend(0)
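
A minimal sketch of the timestamp change the abstract describes; only the two ts expressions come from the posts, while the salt line is an assumption about the surrounding code.

import time
import random
from datetime import datetime

# this post's way: a millisecond timestamp taken straight from time.time()
ts = str(int(time.time() * 1000))

# the referenced post's way: build the timestamp from a parsed datetime object
date_time_obj = datetime.now()
ts_old = str(int(time.mktime(date_time_obj.timetuple())))

# salt = timestamp plus a random digit (an assumption, not confirmed by the abstract)
salt = ts + str(random.randint(0, 9))
print(ts, ts_old, salt)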