This series of notes is based on the Python course series on China University MOOC, taught by Prof. Song Tian (嵩天) of Beijing Institute of Technology.
7. Introduction to the Re (Regular Expression) Library
regular expression = regex = RE
A regular expression is a general framework for expressing strings: a concise expression that denotes a whole set of strings, and that can also be used to test whether a given string has a certain feature (i.e. belongs to that set).
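As a minimal sketch of the idea (the pattern and test strings are my own illustration, not from the course), one short expression denotes a whole family of strings:

```python
import re

# This single pattern denotes the set {'PN', 'PYN', 'PYTN', 'PYTHN', 'PYTHON'}.
pattern = r'P(Y|YT|YTH|YTHO)?N'
for s in ['PN', 'PYN', 'PYTN', 'PYTHN', 'PYTHON', 'PYTHONN']:
    print(s, bool(re.fullmatch(pattern, s)))
# The first five strings match; 'PYTHONN' does not.
```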
- Syntax of regular expressions
(Slides: tables of common operators, parts 1 and 2, with examples and classic examples)
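To make "classic examples" concrete, here are a few widely quoted patterns (my own selection; not necessarily the exact contents of the slides):

```python
import re

# Illustrative classic patterns (assumed examples, not the slide originals):
# ^[A-Za-z]+$              strings built only from the 26 letters
# ^-?\d+$                  integers, optionally negative
# [1-9]\d{5}               a 6-digit Chinese postal code
# \d{3}-\d{8}|\d{4}-\d{7}  a domestic phone number, e.g. 010-68913344
print(re.match(r'[1-9]\d{5}', '100081').group(0))                    # '100081'
print(re.match(r'\d{3}-\d{8}|\d{4}-\d{7}', '010-68913344').group(0)) # '010-68913344'
```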
- Basic usage of the Re library
- Regular expressions are written as raw strings (the raw string type, in which backslashes are not escape characters), i.e. r'text'
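For instance (my own illustration), the raw string keeps backslashes literal, so the pattern reaches re unescaped:

```python
import re

# Without the r prefix you would have to write '\\d' to hand a literal
# backslash-d to re; the raw string r'\d' says the same thing more readably.
print(re.search('\\d', 'abc123').group(0))   # '1'
print(re.search(r'\d', 'abc123').group(0))   # '1', same match
```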
- Main functions of the Re library
- re.search(pattern, string, flags=0): scan a string and return a match object for the first occurrence of the pattern
- re.match(pattern, string, flags=0): match the pattern only at the beginning of the string.
  Because match anchors at the start, it often returns None; guard the result with an if test before using it, otherwise you will get an error.
- re.findall(pattern, string, flags=0): return all non-overlapping matches as a list of strings
- re.split(pattern, string, maxsplit=0, flags=0): split the string at each match.
  maxsplit is the maximum number of splits; whatever remains is output as the last element.
- re.finditer(pattern, string, flags=0): return an iterator yielding a match object for every match
- re.sub(pattern, repl, string, count=0, flags=0): replace matches of the pattern in the string.
  repl is the replacement string; count is the maximum number of replacements.
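A minimal usage sketch of all six functions (the sample strings are my own, not from the course):

```python
import re

s = 'BIT 100081, TSU 100084'

m = re.search(r'[1-9]\d{5}', s)
if m:                                    # guard before using a possibly-None result
    print(m.group(0))                    # '100081'

m = re.match(r'[1-9]\d{5}', s)           # None: the string starts with 'BIT'
print(m)

print(re.findall(r'[1-9]\d{5}', s))                # ['100081', '100084']
print(re.split(r'[1-9]\d{5}', s, maxsplit=1))      # ['BIT ', ', TSU 100084']
for m in re.finditer(r'[1-9]\d{5}', s):
    print(m.group(0))                              # '100081' then '100084'
print(re.sub(r'[1-9]\d{5}', ':zipcode', s))        # 'BIT :zipcode, TSU :zipcode'
```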
- An equivalent, object-oriented usage of the Re library
The functional usage of the Re library is one-shot; there is also an object-oriented usage, in which the pattern is compiled once and can then be used for many operations:

regex = re.compile(pattern, flags=0)

Only the regex object produced by compile can properly be called the regular expression (the raw string is merely its textual representation).
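A brief sketch of this form (example strings are mine): after compiling, the same functions are called as methods of the object, with the pattern argument dropped:

```python
import re

pat = re.compile(r'[1-9]\d{5}')                  # compile once...
print(pat.search('BIT 100081').group(0))         # '100081'
print(pat.findall('BIT 100081, TSU 100084'))     # ...then reuse many times
print(pat.sub(':zipcode', 'BIT 100081'))         # 'BIT :zipcode'
```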
- The Re library's match object
(Slides: tables of the match object's attributes and methods, plus an example)
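A sketch of the commonly used attributes and methods (my own example string):

```python
import re

m = re.search(r'[1-9]\d{5}', 'BIT100081 TSU100084')

# Attributes: the searched text, the pattern, and the search range.
print(m.string)          # 'BIT100081 TSU100084'
print(m.re)              # re.compile('[1-9]\\d{5}')
print(m.pos, m.endpos)   # 0 19

# Methods: the matched text and its position.
print(m.group(0))        # '100081'
print(m.start(), m.end()) # 3 9
print(m.span())          # (3, 9)
```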
- Greedy matching and minimal matching in the Re library
By default the Re library performs greedy matching, i.e. it returns the longest substring that matches. Appending ? to a quantifier makes it minimal (non-greedy): *?, +?, ??, {m,n}?.
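A sketch contrasting the two behaviors (toy string of my choosing):

```python
import re

s = 'PYANBNCNDN'
print(re.search(r'PY.*N', s).group(0))   # greedy:  'PYANBNCNDN' (longest match)
print(re.search(r'PY.*?N', s).group(0))  # minimal: 'PYAN' (shortest match)
```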
8. Example 2: a focused crawler comparing product prices on Taobao (requests + re)
Step 1: submit a product search request and fetch the result pages in a loop
Step 2: from each page, extract the product names and prices
Step 3: print the collected information
```python
import requests
import re

def getHTMLText(url):
    """Fetch a page; return '' on any failure."""
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""

def parsePage(ilt, html):
    """Pull the "view_price" and "raw_title" fields out of the page's embedded JSON."""
    try:
        plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', html)
        tlt = re.findall(r'\"raw_title\"\:\".*?\"', html)  # minimal match stops at the closing quote
        for i in range(len(plt)):
            price = eval(plt[i].split(':')[1])   # eval strips the surrounding quotes
            title = eval(tlt[i].split(':')[1])
            ilt.append([price, title])
    except:
        print("")

def printGoodsList(ilt):
    tplt = "{:4}\t{:8}\t{:16}"
    print(tplt.format("序号", "价格", "商品名称"))   # headers: index, price, product name
    count = 0
    for g in ilt:
        count = count + 1
        print(tplt.format(count, g[0], g[1]))

def main():
    goods = '书包'      # search keyword ("backpack")
    depth = 3           # number of result pages to crawl
    start_url = 'https://s.taobao.com/search?q=' + goods
    infoList = []
    for i in range(depth):
        try:
            url = start_url + '&s=' + str(44*i)   # Taobao shows 44 items per page
            html = getHTMLText(url)
            parsePage(infoList, html)
        except:
            continue
    printGoodsList(infoList)

main()
```
9. Example 3: a focused crawler for stock data (requests + bs4 + re)
Step 1: get the list of stock codes from Eastmoney (东方财富网)
Step 2: for each stock in the list, fetch the individual stock's page from Baidu Stocks (百度股票)
Step 3: store the results to a file
```python
# CrawBaiduStocksB.py
import requests
from bs4 import BeautifulSoup
import traceback
import re

def getHTMLText(url, code="utf-8"):
    """Fetch a page with a known encoding; return '' on any failure."""
    try:
        r = requests.get(url)
        r.raise_for_status()
        r.encoding = code
        return r.text
    except:
        return ""

def getStockList(lst, stockURL):
    """Collect stock codes (sh/sz + 6 digits) from the hrefs on the list page."""
    html = getHTMLText(stockURL, "GB2312")   # the Eastmoney list page is GB2312-encoded
    soup = BeautifulSoup(html, 'html.parser')
    a = soup.find_all('a')
    for i in a:
        try:
            href = i.attrs['href']
            lst.append(re.findall(r"[s][hz]\d{6}", href)[0])
        except:
            continue

def getStockInfo(lst, stockURL, fpath):
    """Fetch each stock's page, parse its <dt>/<dd> pairs, and append to a file."""
    count = 0
    for stock in lst:
        url = stockURL + stock + ".html"
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})
            name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
            infoDict.update({'股票名称': name.text.split()[0]})   # key: stock name
            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                key = keyList[i].text
                val = valueList[i].text
                infoDict[key] = val
            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
                count = count + 1
                print("\r当前进度: {:.2f}%".format(count*100/len(lst)), end="")  # progress
        except:
            count = count + 1
            print("\r当前进度: {:.2f}%".format(count*100/len(lst)), end="")
            continue

def main():
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'D:/BaiduStockInfo.txt'
    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)

main()
```
Reference: http://www.cnblogs.com/zufezzt/p/6207301.html#3662973
Upgrading all installed pip packages:
```python
# Note: pip.get_installed_distributions() is an internal API that was removed
# in pip 10, so this snippet only works on older pip versions.
import pip
from subprocess import call

for dist in pip.get_installed_distributions():
    call("pip install --upgrade " + dist.project_name, shell=True)
```
A simple Python HTTP server:

```
python -m SimpleHTTPServer    # Python 2
python -m http.server         # the Python 3 equivalent
```