Blog category - python_爬虫
Summary: # -*- coding: UTF-8 -*- import requests import time from urllib.parse import quote import pymongo # get the departure cities def from_point(): # client=pymongo.MongoClien
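The snippet above is truncated by the listing; a minimal sketch of the same idea is shown below, fetching a departure-city list and writing it to MongoDB. The endpoint URL, the response layout, and the collection name are assumptions for illustration, not values taken from the post.

# -*- coding: UTF-8 -*-
import requests
import pymongo
from urllib.parse import quote

def from_point():
    # hypothetical departure-city endpoint; the real post reads its URL from the F12 Network panel
    url = 'https://touch.dujia.qunar.com/depCities.qunar'
    resp = requests.get(url, timeout=10)
    data = resp.json().get('data', {})          # assumed shape: {"data": {"A": ["city", ...], ...}}

    client = pymongo.MongoClient('localhost', 27017)
    collection = client['qunar']['departures']  # assumed database/collection names
    for letter, cities in data.items():
        for city in cities:
            collection.insert_one({'departure': city})
            print(quote(city))                  # URL-encoded form for building later requests

if __name__ == '__main__':
    from_point()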
Summary: # -*- coding: UTF-8 -*- import pymongo # connect to the database client = pymongo.MongoClient('localhost',27017) # create the connection client db=client['qunar'] # create the database connection collection=db['dep
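A hedged sketch of what this truncated snippet appears to set up: connect to a local MongoDB, open the qunar database, and write a document into a collection whose name starts with dep (the full name 'departures' is an assumption).

# -*- coding: UTF-8 -*-
import pymongo

# connect to the database
client = pymongo.MongoClient('localhost', 27017)   # create the connection client
db = client['qunar']                               # create the database connection
collection = db['departures']                      # assumed full name of the 'dep...' collection

# write one sample document; insert_many would be used for a batch
collection.insert_one({'departure': '北京'})
print(collection.count_documents({}))              # quick check that the write landed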
Summary: # -*- coding: UTF-8 -*- import pymongo # connect to the database client = pymongo.MongoClient('localhost',27017) db=client['qunar'] collection=db['departures'] # read the data d
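The matching read side, sketched under the same assumptions: find() returns a cursor over every document in the collection, and each document behaves like a plain dict.

# -*- coding: UTF-8 -*-
import pymongo

client = pymongo.MongoClient('localhost', 27017)
db = client['qunar']
collection = db['departures']

# read the data: iterate over all documents in the collection
for doc in collection.find():
    print(doc.get('departure'))   # 'departure' is the field name assumed in the write sketch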
Summary: Reposted from https://www.cnblogs.com/whx2008/p/12633661.html from urllib.parse import quote def convert(content): return quote(content) if __name__ == "__main__":
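The reposted helper just percent-encodes a string with urllib.parse.quote; a runnable version with the reverse direction added for reference:

from urllib.parse import quote, unquote

def convert(content):
    # percent-encode non-ASCII characters (UTF-8 by default)
    return quote(content)

if __name__ == "__main__":
    encoded = convert('北京')
    print(encoded)            # %E5%8C%97%E4%BA%AC
    print(unquote(encoded))   # 北京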
Summary: Taking free-travel packages as the example: 1. Enter the website touch.qunar.com in Google Chrome. 2. Press F12 in Chrome to open the developer tools.
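Once an interesting XHR shows up in the F12 Network panel, it can be replayed from Python. The sketch below only shows the general pattern; the URL and headers are placeholders rather than the request the post actually captures.

import requests

# placeholder for the request URL copied from the Network panel
url = 'https://touch.qunar.com/'

# a mobile User-Agent so the touch site returns the same content the browser saw
headers = {
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 13_0 like Mac OS X) '
                  'AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148',
}

resp = requests.get(url, headers=headers, timeout=10)
print(resp.status_code, len(resp.text))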
Summary: # -*- encoding: utf-8 -*- import requests import pandas as pd import time import pymongo # import the MongoDB library # establish the connection client = pymongo.MongoClient('localhost',2
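A sketch of the overall loop this snippet sets up, with heavy assumptions: the weather endpoint below is a placeholder (a real API needs a key and proper parameters), and the database/collection names are invented for illustration.

# -*- encoding: utf-8 -*-
import time
import requests
import pandas as pd
import pymongo   # import the MongoDB library

# establish the connection to a local MongoDB instance
client = pymongo.MongoClient('localhost', 27017)
collection = client['weather']['data']   # assumed database/collection names

# tiny stand-in for the real city list built elsewhere in the post
cities = pd.DataFrame({'city_en': ['beijing', 'shanghai']})

for name in cities['city_en']:
    url = 'https://example.com/weather?city=' + name   # placeholder endpoint
    resp = requests.get(url, timeout=10)
    collection.insert_one({'city': name, 'raw': resp.text})
    time.sleep(1)   # pause between requests to stay polite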
Summary: # -*- encoding: utf-8 -*- import requests import pandas as pd import time import json url='https://cdn.heweather.com/china-city-list.txt' strhtml= req
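The snippet downloads HeWeather's plain-text city list; one way to turn the response into a pandas DataFrame, assuming the file is a whitespace-separated table with a couple of header/separator lines at the top:

# -*- encoding: utf-8 -*-
import requests
import pandas as pd

url = 'https://cdn.heweather.com/china-city-list.txt'
strhtml = requests.get(url, timeout=10)
strhtml.encoding = 'utf-8'

# assumption: skip the header/separator lines, then split each row on whitespace
lines = strhtml.text.splitlines()
rows = [line.split() for line in lines[2:] if line.strip()]

df = pd.DataFrame(rows)   # column names could be taken from the first line if they match
print(df.head())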
Summary: from bs4 import BeautifulSoup import requests import re url='http://www.cntour.cn/' strhtml=requests.get(url) soup= BeautifulSoup(strhtml.text,'lxml')
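A runnable sketch of the pattern the snippet starts: fetch the page, pick out links with a CSS selector, then use re to pull a numeric ID out of each href. The selector here is a generic placeholder, not the one used in the post.

from bs4 import BeautifulSoup
import requests
import re

url = 'http://www.cntour.cn/'
strhtml = requests.get(url, timeout=10)
soup = BeautifulSoup(strhtml.text, 'lxml')

data = soup.select('a')   # placeholder selector; the post targets a specific list on the page

for item in data:
    href = item.get('href') or ''
    ids = re.findall(r'\d+', href)   # any run of digits in the link, e.g. an article ID
    if ids:
        print(item.get_text(strip=True), ids[0])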
Summary: from bs4 import BeautifulSoup import requests url='http://www.cntour.cn/' strhtml=requests.get(url) soup= BeautifulSoup(strhtml.text,'lxml') data= sou
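The simpler variant without re, collecting each matched element's text and link into a small dict; again the selector is a placeholder (in practice it is copied from the browser's "Copy selector" menu).

from bs4 import BeautifulSoup
import requests

url = 'http://www.cntour.cn/'
strhtml = requests.get(url, timeout=10)
soup = BeautifulSoup(strhtml.text, 'lxml')

data = soup.select('a')   # placeholder selector

results = [{'title': item.get_text(strip=True), 'link': item.get('href')} for item in data]
print(results[:5])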
Summary: Based on https://www.cnblogs.com/Irvingcode/p/12544584.html, with a small change: the salt is generated differently here (ts = str(time.time() * 1000)) versus int(time.mktime(date_time_obj.timetuple()
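The two timestamp/salt variants the note compares, side by side. time.mktime() works at second resolution, so the two values differ in scale and precision; the datetime below is just an example input, while the referenced post builds it from a parsed string.

import time
from datetime import datetime

# variant used in this post: current time in milliseconds, kept as a string
ts = str(time.time() * 1000)

# variant from the referenced post: build a datetime, then convert it to a Unix timestamp
date_time_obj = datetime.now()   # example input
ts_ref = int(time.mktime(date_time_obj.timetuple()))

print(ts, ts_ref)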
