We can inspect HTTP_USER_AGENT to identify who is making the request and thereby block page scrapers.
User-agent strings of some spiders common in China:
$spiders = array(
'Googlebot',
'Baiduspider',
'Sogou web spider',
'360Spider',
'YisouSpider',
'Sosospider',
'YoudaoBot',
'Yahoo! Slurp',
'EasouSpider',
'bingbot'
);
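As a minimal sketch of how the $spiders list above might be used (the helper function is_spider() is my own name, not from the original post), one can test the visitor's user agent against each entry; stripos() makes the match case-insensitive:

```php
<?php
// Hypothetical helper: returns true if $ua contains any entry from $spiders.
function is_spider($ua, $spiders) {
    foreach ($spiders as $spider) {
        // stripos() returns the match position, or false if not found;
        // the strict !== comparison avoids treating position 0 as "not found".
        if (stripos($ua, $spider) !== false) {
            return true;
        }
    }
    return false;
}

$spiders = array('Googlebot', 'Baiduspider', 'Sogou web spider');
var_dump(is_spider('Mozilla/5.0 (compatible; Baiduspider/2.0)', $spiders)); // bool(true)
var_dump(is_spider('Mozilla/5.0 (Windows NT 10.0; Win64; x64)', $spiders)); // bool(false)
```

The same loop works equally well as an allow-list (serve these spiders) or a deny-list (block them), which is how the reference code below uses it.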
模板兔 offers the following rough code for reference:
$ua = $_SERVER['HTTP_USER_AGENT'];
// Store malicious USER_AGENT strings in an array
$now_ua = array('FeedDemon ','BOT/0.1 (BOT for JCE)','CrawlDaddy ','Java','Feedly','UniversalFeedParser','ApacheBench','Swiftbot','ZmEu','Indy Library','oBot','jaunty','YandexBot','AhrefsBot','MJ12bot','WinHttp','EasouSpider','HttpClient','Microsoft URL Control','YYSpider','Python-urllib','lightDeckReports Bot');
// Block empty USER_AGENT: mainstream scrapers such as dedecms send an empty
// USER_AGENT, as do some SQL injection tools
if (!$ua) {
    header("Content-type: text/html; charset=utf-8");
    die('Crawling forbidden');
} else {
    foreach ($now_ua as $value) {
        // Check whether the UA contains one of the blacklisted strings
        if (strpos($ua, $value) !== false) {
            header("Content-type: text/html; charset=utf-8");
            die('Crawling forbidden');
        }
    }
}