In this post I walk through my approach to scraping a specific set of information from a target site, and share the code along the way.
1. First, work out how the URL of the target site changes from page to page
The URL structure of this site is simple: the only thing that changes is the value of page in https://mm.taobao.com/json/request_top_list.htm?page=
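For example (a minimal sketch of my own, not part of the final script), every listing-page URL can be produced just by filling in the page number:

# -*- coding:utf-8 -*-
# The only part of the listing URL that changes is the page number
BASE_URL = 'http://mm.taobao.com/json/request_top_list.htm?page=%d'

for num in range(1, 4):
    # prints the URLs for page=1, page=2, page=3
    print BASE_URL % num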
2. Analyze the DOM structure of the page so we can locate the information we want
I wrote a simple page-analysis script, analyze.py, which prints the DOM tree so I can work out the selectors later.
# -*- coding:utf-8 -*-
# Import the modules we need
import requests
from bs4 import BeautifulSoup
# The page we want to analyze
url = "http://mm.taobao.com/json/request_top_list.htm?page=1"
response = requests.get(url)
response.encoding = 'gb2312'
html = response.text
# Parse the HTML with the lxml parser
soup = BeautifulSoup(html, 'lxml')
# Print the prettified DOM tree
print soup.prettify()
<html>
<body>
<div class="list-item">
<div class="personal-info">
<div class="pic-word">
<div class="pic s60">
<a class="lady-avatar" href="//mm.taobao.com/687471686.htm" target="_blank">
<img alt="" height="60" src="//gtd.alicdn.com/sns_logo/i2/TB1XZ1PQVXXXXaJXpXXSutbFXXX.jpg_60x60.jpg" width="60"/>
</a>
</div>
<p class="top">
<a class="lady-name" href="//mm.taobao.com/self/model_card.htm?user_id=687471686" target="_blank">
田媛媛
</a>
<em>
<strong>
27
</strong>
岁
</em>
<span>
广州市
</span>
<span class="friend-follow J_FriendFollow" data-custom="type=14&app_id=12052609" data-group="" data-userid="687471686">
加关注
</span>
</p>
<p>
<em>
平面模特 设计师 T台、展模特
</em>
<em>
<strong>
164433
</strong>
粉丝
</em>
</p>
</div>
<div class="pic w610">
<a href="//mm.taobao.com/photo-687471686-10000854046.htm?pic_id=10003369435" target="_blank">
<img data-ks-lazyload="//img.alicdn.com/imgextra/i4/687471686/TB1TORaKFXXXXc0aXXXXXXXXXXX_!!2-tstar.png" src="//assets.alicdn.com/kissy/1.0.0/build/imglazyload/spaceball.gif"/>
</a>
</div>
</div>
<div class="list-info">
<div class="popularity">
<dl>
<dt>
1
</dt>
<dd>
<span>
总积分:
</span>
60742
</dd>
</dl>
</div>
<ul class="info-detail">
<li>
新增积分:
<strong>
529
</strong>
</li>
<li>
好评率:
<strong>
90.0
</strong>
%
</li>
<li>
导购照片:
<strong>
888
</strong>
张
</li>
<li>
签约数量:
<strong>
406
</strong>
次
</li>
</ul>
<p class="description">
你还在为上下衣物搭配而苦恼么..你还在为出门不知道穿什么而烦躁么 ..vvip女神教你一键(_)美美哒 ! 不需要过多的搭配.不需要为不协调而苦恼 ..我们为你选好让你出门美美哒!!
</p>
<div class="J_LikeIt" photo-favor-count="0" photo-id="687471686_10003369435">
<div class="mm-photolike">
<a class="mm-photolike-btn" data-count="0" data-targetid="687471686_10003369435" href="javascript:void(0)">
喜欢
</a>
<var class="mm-photolike-count radius-3">
0
</var>
</div>
</div>
</div>
</div>
<input id="J_Totalpage" type="hidden" value="4316"/>
</body>
</html>
When analyzing the output we really only need to look at one person's block, since the full output is very long. Every entry has the same fixed structure, which makes it easy to analyze.
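Based on that structure, here is a minimal sketch (not part of the final program) that pulls a few fields out of the first .list-item block, just to verify the selectors before writing the full crawler:

# -*- coding:utf-8 -*-
# Quick check that the selectors match the DOM structure above, using only
# the first .list-item block on page 1 (same setup as analyze.py)
import requests
from bs4 import BeautifulSoup

url = "http://mm.taobao.com/json/request_top_list.htm?page=1"
response = requests.get(url)
response.encoding = 'gb2312'
soup = BeautifulSoup(response.text, 'lxml')

model = soup.select(".list-item")[0]
print model.find('a', {'class': 'lady-name'}).string                                  # name
print model.find('p', {'class': 'top'}).em.strong.string                              # age
print model.find('span', {'class': 'friend-follow J_FriendFollow'})['data-userid']    # user id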
3. The program code:
# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup
import sys

# Force utf-8 as the default encoding so the Chinese field values print cleanly under Python 2
reload(sys)
sys.setdefaultencoding('utf-8')

# The hidden J_Totalpage input on the listing page reports roughly 4300 pages in total
for num in range(1, 4300):
    try:
        URL = 'http://mm.taobao.com/json/request_top_list.htm?page=%d' % num
        # print "Currently crawling: " + URL
        response = requests.get(URL)
        response.encoding = 'gb2312'
        text = response.text
        soup = BeautifulSoup(text, 'lxml')
        for model in soup.select(".list-item"):
            try:
                # The user id from the listing is needed to fetch the model's detail page
                model_id = model.find('span', {'class': 'friend-follow J_FriendFollow'})['data-userid']
                json_url = "http://mm.taobao.com/self/info/model_info_show.htm?user_id=%d" % int(model_id)
                response_json = requests.get(json_url)
                response_json.encoding = 'gb2312'
                text_response_json = response_json.text
                soup_json = BeautifulSoup(text_response_json, 'lxml')
                print "***********************************" + model.find('a', {'class': 'lady-name'}).string + "*********************************"
                print "Name: " + model.find('a', {'class': 'lady-name'}).string
                print "Age: " + model.find('p', {'class': 'top'}).em.strong.string
                print "Birthday: " + soup_json.find('li', {'class': 'mm-p-cell-left'}).span.string
                blood = soup_json.find_all('li', {'class': 'mm-p-cell-right'})[1].span.string
                if blood is None:
                    blood = "N/A"
                print "Blood type: " + blood
                print "School/major: " + soup_json.find_all('li')[5].span.string
                print "Height: " + soup_json.find('li', {'class': 'mm-p-small-cell mm-p-height'}).p.string
                print "Weight: " + soup_json.find('li', {'class': 'mm-p-small-cell mm-p-weight'}).p.string
                print "Measurements: " + soup_json.find('li', {'class': 'mm-p-small-cell mm-p-size'}).p.string
                print "Cup size: " + soup_json.find('li', {'class': 'mm-p-small-cell mm-p-bar'}).p.string
                print "Shoe size: " + soup_json.find('li', {'class': 'mm-p-small-cell mm-p-shose'}).p.string
                print "Location: " + model.find('p', {'class': 'top'}).span.string
                print "User id: " + model.find('span', {'class': 'friend-follow J_FriendFollow'})['data-userid']
                print "Tags: " + model.find_all('p')[1].em.string
                print "Followers: " + model.find_all('p')[1].strong.string
                print "Rank: " + [text for text in model.find('div', {'class': 'popularity'}).dl.dt.stripped_strings][0]
                print model.find('ul', {'class': 'info-detail'}).get_text(" ", strip=True)
                print "Profile page: " + "http:" + model.find('a', {'class': 'lady-name'})['href']
                print "Portfolio page: " + "http:" + model.find('a', {'class': 'lady-avatar'})['href']
                print "Avatar: " + "http:" + model.find('img')['src']
                print "***********************************" + model.find('a', {'class': 'lady-name'}).string + "*********************************"
                print "\n"
            except Exception:
                # Skip any entry whose fields do not match the expected structure
                print "error"
    except Exception:
        print "page %d failed" % num
4. There are roughly 30,000 records in total, so I only show part of the output:
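Since printing that many records to the terminal is unwieldy, a simple variation (my own sketch, not part of the original program; the file name models.csv and the field list are only examples) is to append each record to a CSV file instead:

# -*- coding:utf-8 -*-
# Sketch: append one record per model to a CSV file instead of printing it,
# so tens of thousands of rows stay manageable (Python 2 csv module).
import csv

def save_row(row, path='models.csv'):
    # row is a list of unicode strings; encode to utf-8 for Python 2's csv module
    with open(path, 'ab') as f:
        csv.writer(f).writerow([field.encode('utf-8') for field in row])

# Inside the inner loop of the crawler you would build a row and call, for example:
# save_row([name, age, location, model_id])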
Summary: this post lays out the thinking behind the whole program, which uses just two libraries, requests and BeautifulSoup.
I hope it helps anyone who wants to get into crawler development. In my view, the approach is what matters most!