1. Basic usage

import urllib.request

url = 'https://www.baidu.com/'           # target address
response = urllib.request.urlopen(url)
print(response.read().decode('utf-8'))   # response body, decoded as UTF-8
print(type(response))                    # <class 'http.client.HTTPResponse'>
print(response.status)                   # HTTP status code, e.g. 200
print(response.getheaders())             # list of (name, value) header tuples
  • HTTPResponse methods:

response.read() reads the body as bytes (read(n) reads at most n bytes)
response.readlines() reads the body line by line
response.geturl() returns the URL that was fetched
response.getheaders() returns the response headers
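These methods can be tried offline by pointing urlopen at a throwaway local server instead of a real site; a minimal sketch (the handler class, the 'hello' body, and the loopback port choice are illustrative, not part of the original example):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tiny local server so the HTTPResponse methods can be exercised offline
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = 'hello'.encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; charset=utf-8')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(('127.0.0.1', 0), Hello)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/' % server.server_port
response = urllib.request.urlopen(url)
print(response.status)         # HTTP status code
print(response.geturl())       # the URL that was actually fetched
print(response.getheaders())   # (name, value) tuples, including our Content-Type
content = response.read().decode('utf-8')
print(content)
server.shutdown()
```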

  • Customizing the request headers
url = 'https://www.baidu.com/'
headers = {'User-Agent': ''}  # put a real User-Agent string here (see the proxy section for an example)
request = urllib.request.Request(url, headers=headers)
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))
print(response.status)

2. Three ways to encode request parameters
import urllib.parse
# quote percent-encodes non-ASCII text (e.g. Chinese) as UTF-8 %XX escapes
# Single parameter

  • urllib.parse.quote method
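For instance, quote turns each non-ASCII character into its UTF-8 bytes written as %XX escapes, which is what a URL query string expects:

```python
import urllib.parse

# Each Chinese character becomes three %XX escapes (its three UTF-8 bytes)
word = urllib.parse.quote('周杰伦')
print(word)  # %E5%91%A8%E6%9D%B0%E4%BC%A6
print('https://www.baidu.com/s?wd=' + word)

# unquote reverses the encoding
print(urllib.parse.unquote(word))  # 周杰伦
```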

# Multiple parameters

  • urllib.parse.urlencode method
data = {
    'wd': '周杰伦',
    'sex': '男'
}
a = urllib.parse.urlencode(data)  # 'wd=%E5%91%A8%E6%9D%B0%E4%BC%A6&sex=%E7%94%B7'
base_url = 'https://www.baidu.com/s?'
url = base_url + a
print(url)
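The reverse direction is available too: parse_qs turns an encoded query string back into a dict, with each key mapping to a list of values (since a key may repeat in a query string):

```python
import urllib.parse

data = {'wd': '周杰伦', 'sex': '男'}
query = urllib.parse.urlencode(data)
# parse_qs decodes the query string back into a dict of lists
parsed = urllib.parse.parse_qs(query)
print(parsed)  # {'wd': ['周杰伦'], 'sex': ['男']}
```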
  • POST requests
url = 'https://fanyi.baidu.com/sug'
data = {
    'kw': 'spider'
}
# POST parameters must be urlencoded and then converted to bytes with encode
data = urllib.parse.urlencode(data).encode('utf-8')
# POST parameters go in the data argument of Request, not in the URL
request = urllib.request.Request(url, data, headers)  # headers as defined above
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))
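The sug endpoint replies with JSON, so the decoded body is normally passed through json.loads before use; a small sketch with a canned payload (the sample JSON below is illustrative, not a captured response):

```python
import json

# Illustrative payload shaped like a suggestion response (not a real capture)
body = '{"errno": 0, "data": [{"k": "spider", "v": "n. 蜘蛛"}]}'
result = json.loads(body)
print(result['data'][0]['k'])  # spider
```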

3. IP proxies

import random
import urllib.request

# Example proxy pool (real proxy entries usually also need a port, e.g. 'http://host:port')
proxies_pool = [
    {'http': '122.239.137.179'},
    {'http': '42.203.39.44'},
]
proxies = random.choice(proxies_pool)
print(proxies)
url = ''  # fill in the page to fetch
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
}
request = urllib.request.Request(url=url, headers=headers)
# 1. Build a handler for the chosen proxy
handler = urllib.request.ProxyHandler(proxies=proxies)
# 2. Build an opener from the handler
opener = urllib.request.build_opener(handler)
# 3. Open the request through the opener
response = opener.open(request)
content = response.read().decode('utf-8')
print(content)
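Since any single free proxy may be dead, a common refinement is to rotate through the pool until a request succeeds; a sketch assuming pool entries carry full 'http://host:port' URLs (the helper name fetch_via_pool and the retry logic are illustrative, not from the original post):

```python
import random
import urllib.error
import urllib.request

def fetch_via_pool(request, pool, timeout=5):
    """Try the proxies in random order; return the first successful body."""
    last_error = None
    for proxies in random.sample(pool, len(pool)):
        handler = urllib.request.ProxyHandler(proxies=proxies)
        opener = urllib.request.build_opener(handler)
        try:
            with opener.open(request, timeout=timeout) as response:
                return response.read().decode('utf-8')
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this proxy failed; try the next one
    raise last_error  # every proxy in the pool failed
```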
posted on 2024-01-18 10:23 by HelloJacker