美拍网 Image Downloader

Difficulty is low. The site blocks F12 and right-click, but that is not a real obstacle.

The site has no real anti-scraping measures; at the very least the images are not hotlink-protected. The downloaded images are not the highest resolution available, but the quality is acceptable.

  • The multithreading code is copied directly from an earlier script; the rest mostly follows the usual routine
  • All images on the site are JPEGs, so there is no need to handle PNG downloads
  • The only thing to watch for: some image src links are incomplete relative paths and must be detected and completed (see the sketch after this list)
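
A minimal sketch of that completion step (BASE and complete_src are illustrative names; the full script below inlines the same check):

from urllib.parse import urljoin

BASE = "https://4zipai.net"

def complete_src(src: str) -> str:
    # urljoin returns absolute URLs unchanged and resolves
    # site-relative paths like "/d/file/....jpg" against BASE
    return urljoin(BASE, src)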

Site: https://4zipai.net

Usage:

First: enter the save folder.

Then: enter the page URL, e.g. https://4zipai.net/selfies/202207/139004.html
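
A sample run might look like this (the save folder and printed names are illustrative):

Enter the folder to save into: D:\pics
Enter the page URL: https://4zipai.net/selfies/202207/139004.html
Downloading <image name>
Downloading <image name>
...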

Code

import os
import threading
import time

import requests
from bs4 import BeautifulSoup

headers = {
    'Cookie': '_ga=GA1.2.87052855.1662906879; _gid=GA1.2.106606571.1662906879; twk_idm_key=Szy-fmwxLJBDQNQQ_hKZE; TawkConnectionTime=0; twk_uuid_5e23d8f08e78b86ed8aa0035=%7B%22uuid%22%3A%221.101H94883vBguY180oYHfz0VN3Yrx0pdi2oaeD50URIjcHT13XZdZReDZEMwzt5gW4NEYVHRIUmMAPKTQXzgo0tbdNL6fRa2f2JnkKEdjUC5Me7ZTzLZlaEgUmdlaJJk9PBSm4ORF3UQSw%22%2C%22version%22%3A3%2C%22domain%22%3A%224zipai.net%22%2C%22ts%22%3A1662906990942%7D; __cf_bm=v0FGBMppZPUweg7R0uBuFPrQlE71b0ptig4q4MkaeBU-1662906991-0-AcVALr7cJKi1sMQpzf8Zs1DEJ1PojPDd9mLT8fncCrdyiEBznfws9/awsYksUmTA0dbcUfgPxplYWbTz7LfBSmLvl1dQAD4RU0ni6jxBgdSIvn8SxmBZSJkJCI00EuzjOw==',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

class myThread(threading.Thread):
    """Downloads a single image on its own thread."""
    def __init__(self, url, fileName, file_path):
        threading.Thread.__init__(self)
        self.url = url
        self.file_path = file_path
        self.fileName = fileName

    def run(self):
        file = os.path.join(self.file_path, self.fileName + ".jpg")
        if not os.path.exists(file):
            print("Downloading %s" % self.fileName)
            img = requests.get(self.url, headers=headers)
            with open(file, 'wb') as f:
                f.write(img.content)
        else:
            print(file + " exists")

# Example URL: 'https://4zipai.net/selfies/202207/139004.html'
save_path = input("Enter the folder to save into: ")
url = input("Enter the page URL: ")
rsp = requests.get(url=url, headers=headers)
rsp.encoding = "UTF-8"
soup = BeautifulSoup(rsp.text, 'lxml')
# Use the album title (the h1 inside div.item_title) as the folder name
title = soup.find('div', class_="item_title")
save_path = os.path.join(save_path, title.find("h1").text.strip())
# Create the album directory
if not os.path.exists(save_path):
    os.makedirs(save_path)
li = soup.find(class_='content_left')
for i in li.find_all('img'):
    # Grab the image link; some src values are site-relative ("/d/...")
    each_url = str(i.get('src'))
    if each_url.startswith("/d"):
        each_url = "https://4zipai.net" + each_url
    # Photo filenames are long numbers; short names are icons/ads, skip them
    name = each_url.split('/')[-1].split('.')[0]
    if len(name) >= 9:
        thread = myThread(each_url, name, save_path)
        thread.start()
        time.sleep(0.1)  # brief pause so requests are not fired all at once
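
One raw thread per image plus a 0.1 s sleep works, but the thread count is unbounded for large albums. A bounded-pool variant is sketched below; download_one is a hypothetical helper that reuses the same headers dict and the skip-if-exists logic from myThread.run:

from concurrent.futures import ThreadPoolExecutor
import os
import requests

def download_one(img_url, name, folder):
    # Same skip-if-exists logic as myThread.run above
    file = os.path.join(folder, name + ".jpg")
    if os.path.exists(file):
        return
    img = requests.get(img_url, headers=headers)
    with open(file, 'wb') as f:
        f.write(img.content)

# In the main loop, submit tasks to a pool instead of starting raw threads:
# with ThreadPoolExecutor(max_workers=8) as pool:
#     pool.submit(download_one, each_url, name, save_path)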

Recommended

/selfies/202208/140912.html

/selfies/202209/142694.html

/selfies/201808/70879.html

/selfies/201804/63980.html

/selfies/201708/48223.html

/selfies/201903/80767.html

/selfies/201804/63878.html

/selfies/202207/139004.html

/selfies/202209/142301.html

/selfies/202209/142402.html

/selfies/202209/142391.html

/selfies/202209/142416.html

/selfies/202208/142269.html

/selfies/202209/142317.html