Development/Python

3 Web Scraping Methods in Python

juniz 2020. 8. 6. 17:07

BeautifulSoup

- Easy to learn

- Requires a separate HTTP library (e.g., requests) to download pages; BeautifulSoup itself only parses HTML

- Relatively slow

- Doesn't work well with some pages (e.g., Amazon, which tends to block plain scripted requests; see the note below)
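
A common reason for such blocks is that requests announces itself with a default User-Agent like python-requests/2.x, which many sites reject. Sending a browser-like User-Agent header sometimes helps, though heavily protected sites will still detect simple HTTP clients. A quick sketch (the header value here is just an illustrative example):

import requests

headers = {
    # Illustrative UA string; any recent browser's User-Agent works here.
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
}
page = requests.get('https://www.amazon.com/', headers=headers)
print(page.status_code)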

 

Examples

# In terminal
pip install beautifulsoup4
pip install requests

# Extract
from bs4 import BeautifulSoup
import requests

URL = 'https://blog.scrapinghub.com/'

# Fetch the raw HTML with requests; BeautifulSoup only parses, it doesn't download.
page = requests.get(URL)

# Parse the page with Python's built-in html.parser backend.
soup = BeautifulSoup(page.text, 'html.parser')

# Each blog post title sits inside a <div class="post-header">.
titles = soup.find_all('div', {'class': 'post-header'})

for title in titles:
    # The post title itself is the <h2> inside the header div.
    result = title.find('h2').text.strip()
    print(result)


'''
result
Blog Comments API (BETA): Extract Blog Comment DATA At Scale
Your Price Intelligence Questions Answered
Data Center Proxies vs. Residential Proxies
How to Get High Success Rates With Proxies: 3 Steps to Scale Up
Job Postings API: Stable release
Web Scraping Basics: A Developer’s Guide To Reliably Extract Data
Extracting Article & News Data: The Importance of Data Quality
Price Gouging or Economics at Work: Price Intelligence to Track Consumer Sentiment
A Practical Guide to Web Data QA Part III: Holistic Data Valid
'''
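
BeautifulSoup also accepts CSS selectors via select() (backed by the soupsieve package, installed automatically with beautifulsoup4), which reads much like the Scrapy selectors below:

# Equivalent extraction with a CSS selector instead of find_all/find
for h2 in soup.select('div.post-header h2'):
    print(h2.get_text(strip=True))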

 

Scrapy

- Fast (requests run asynchronously, with crawling and pagination support built in)

- Steeper learning curve; less beginner-friendly than BeautifulSoup

 

Examples

# Install
pip install scrapy

# Start a new project (replace <project_name> with your own name)
scrapy startproject <project_name>

# Run the spider (after saving the spider below under <project_name>/spiders/)
scrapy crawl posts

# Extract
import scrapy


class PostsSpider(scrapy.Spider):
    name = 'posts'

    start_urls = [
        'https://blog.scrapinghub.com/',
    ]

    def parse(self, response):
        # Each post on the page is wrapped in <div class="post-item">.
        for post in response.css('div.post-item'):
            yield {
                # .get() returns the first match (or None), so indexing with [0] is unnecessary.
                'title': post.css('.post-header h2 a::text').get(),
            }
        # Follow the pagination link, if any, and parse the next page the same way.
        next_page = response.css('a.next-posts-link::attr(href)').get()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
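
Scrapy's built-in feed exports can write the yielded items straight to a file, so no extra output code is needed:

# Save the results as JSON
scrapy crawl posts -o posts.json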

 

Selenium WebDriver

- Versatile (drives a real browser, so it can click, type, scroll, and wait for content)

- Works well with JavaScript-heavy pages

- Relatively slow (a full browser runs behind every request)

 

A full Selenium example will be added later.
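
Until then, here is a minimal sketch of the same blog-title extraction, assuming Selenium 4 (which resolves a matching ChromeDriver automatically via Selenium Manager) and a local Chrome installation:

# In terminal
pip install selenium

# Extract
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # launches Chrome; assumes Selenium 4 + local Chrome
try:
    driver.get('https://blog.scrapinghub.com/')
    # Same selector idea as the BeautifulSoup example: titles sit in div.post-header > h2
    for title in driver.find_elements(By.CSS_SELECTOR, 'div.post-header h2'):
        print(title.text.strip())
finally:
    driver.quit()  # always shut the browser down, even if extraction fails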

 

 

 
