Requestium

Requestium is a Python library that merges the power of Requests, Selenium, and Parsel into a single integrated tool for automating web actions.

The library was created for writing web automation scripts that are written mostly in Requests but can seamlessly switch to Selenium for the JavaScript-heavy parts of a website, all while maintaining the session.

Requestium adds independent improvements to both Requests and Selenium, and every new feature is lazily evaluated, so it's useful even when writing scripts that use only Requests or Selenium.

Read more about the motivation behind creating this library in this blog post.

Features

  • Enables switching between a Requests Session and a Selenium webdriver while maintaining the current web session.
  • Integrates Parsel's parser into the library, making xpath, css, and regex much cleaner to write.
  • Improves Selenium's handling of dynamically loading elements.
  • Makes cookie handling more flexible in Selenium.
  • Makes clicking elements in Selenium more reliable.
  • Supports Chromedriver natively, plus lets you add a custom webdriver.

Installation

pip install requestium

If you plan to use the Selenium part of Requestium, you should then download your preferred Selenium webdriver, such as Chromedriver.

Usage

First create a session as you would in Requests, and optionally add arguments for the webdriver if you plan to use one.

from requestium import Session, Keys

options = {'arguments': ['headless']}
s = Session(webdriver_path='./chromedriver', default_timeout=15, webdriver_options=options)

Since headless mode is common, there's a shortcut for it: just specify headless=True.

from requestium import Session, Keys

s = Session(webdriver_path='./chromedriver', headless=True)

You can also create a Selenium webdriver outside Requestium and have Requestium use it instead:

from selenium import webdriver
from requestium import Session, Keys

firefox_driver = webdriver.Firefox()

s = Session(driver=firefox_driver)

You can also use a third-party Chrome webdriver by passing it in as the driver argument. This allows you, for example, to use Selenium-Wire to capture the XHR requests of a web page:

from seleniumwire import webdriver
from requestium import Session, Keys

seleniumwire_driver = webdriver.Chrome()

s = Session(driver=seleniumwire_driver)

You don't need to parse the response; it is done automatically when you call xpath, css, or re.

title = s.get('http://samplesite.com').xpath('//title/text()').extract_first(default='Default Title')

Regular expressions require less boilerplate than they do with Python's standard re module.

response = s.get('http://samplesite.com/sample_path')

# Extracts the first match
identifier = response.re_first(r'ID_\d\w\d', default='ID_1A1')

# Extracts all matches as a list
users = response.re(r'user_\d\d\d')

The Session object is just a regular Requests session object, so you can use all of its methods.

s.post('http://www.samplesite.com/sample', data={'field1': 'data1'})
s.proxies.update({'http': 'http://10.11.4.254:3128', 'https': 'https://10.11.4.252:3128'})

And you can switch to using the Selenium webdriver to run any JavaScript code.

s.transfer_session_cookies_to_driver()  # You can maintain the session if needed
s.driver.get('http://www.samplesite.com/sample/process')

The driver object is a Selenium webdriver object, so you can use any of the normal Selenium methods, plus the new methods added by Requestium.

s.driver.find_element_by_xpath("//input[@class='user_name']").send_keys('James Bond', Keys.ENTER)

# New method which waits for element to load instead of failing, useful for single page web apps
s.driver.ensure_element_by_xpath("//div[@attribute='button']").click()

Requestium also adds xpath, css, and re methods to the Selenium driver object.

if s.driver.re(r'ID_\d\w\d some_pattern'):
    print('Found it!')

And finally you can switch back to using Requests.

s.transfer_driver_cookies_to_session()
s.post('http://www.samplesite.com/sample2', data={'key1': 'value1'})

Selenium workarounds

Requestium adds several 'ensure' methods to the driver object, as Selenium is known to be very finicky about element selection and cookie handling.

Wait for element

The ensure_element_by_ methods wait for the element to be loaded in the browser and return it as soon as it loads. They're named after Selenium's find_element_by_ methods (which immediately raise an exception if they can't find the element).

Requestium can wait for an element to be in any of the following states:

  • present (default)
  • clickable
  • visible
  • invisible (useful for things like waiting for loading... gifs to disappear)

These methods are very useful for single page web apps where the site is dynamically changing its elements. We usually end up completely replacing our find_element_by_ calls with ensure_element_by_ calls as they are more flexible.
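
For instance, a minimal sketch of waiting for a loading animation to disappear (the class name here is hypothetical):

# Blocks until the element becomes invisible, or raises after the 30 second timeout
s.driver.ensure_element_by_class_name('loading-spinner', state='invisible', timeout=30)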

Elements you get using these methods have the new ensure_click method, which makes the click less prone to failure. This helps get around many of the common problems with clicking in Selenium.

s.driver.ensure_element_by_xpath("//li[@class='b1']", state='clickable', timeout=5).ensure_click()

# === We also added these methods named in accordance to Selenium's api design ===
# ensure_element_by_id
# ensure_element_by_name
# ensure_element_by_link_text
# ensure_element_by_partial_link_text
# ensure_element_by_tag_name
# ensure_element_by_class_name
# ensure_element_by_css_selector

Add cookie

The ensure_add_cookie method makes adding cookies much more robust. Selenium needs the browser to be at the cookie's domain before it can add the cookie, so this method offers several workarounds. If the browser is not in the cookie's domain, it GETs the domain before adding the cookie. It also allows you to override the domain before adding the cookie, avoiding that GET. The domain can be overridden to '', which sets the cookie's domain to whatever domain the driver is currently in.

If it can't add the cookie, it tries to add it with a less restrictive domain (e.g. home.site.com -> site.com) before failing.

cookie = {"domain": "www.site.com",
          "secure": false,
          "value": "sd2451dgd13",
          "expiry": 1516824855.759154,
          "path": "/",
          "httpOnly": true,
          "name": "sessionid"}
s.driver.ensure_add_cookie(cookie, override_domain='')
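
If you omit override_domain, a sketch of the default behavior described above (cookie values are hypothetical):

# The browser may be on any page; Requestium GETs www.site.com first if necessary,
# and retries with the less restrictive domain 'site.com' if the add fails
s.driver.ensure_add_cookie({'name': 'sessionid', 'value': 'sd2451dgd13',
                            'domain': 'www.site.com', 'path': '/'})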

Considerations

New features are lazily evaluated, meaning:

  • The Selenium webdriver process is only started if you access the driver object. So if you don't need to use the webdriver, you can use the library with no overhead. Very useful if you just want to use the library for its integration with Parsel.
  • Parsing of the responses is only done if you call the xpath, css, or re methods of the response, so again there is no overhead if you don't need this feature (see the sketch after this list).
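
A minimal sketch of both points (the URL is hypothetical):

from requestium import Session

s = Session(webdriver_path='./chromedriver')  # no browser process is launched here
response = s.get('http://samplesite.com')     # plain Requests; response not parsed yet
title = response.xpath('//title/text()').extract_first()  # parsing happens on this call
# Accessing s.driver for the first time is what starts the webdriver process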

A byproduct of this is that the Selenium webdriver can be used simply as a tool to ease the development of regular Requests code: you can start writing your script using just the Requests session, and at the last step of the script (the one you are currently working on) transfer the session to the Chrome webdriver. This way, a Chrome process starts on your machine and acts as a real-time "visor" for the last step of your code. You can see what state your session is currently in, inspect it with Chrome's excellent inspect tools, and decide what the next step of your session object should be. Very useful for trying out code in an IPython interpreter and seeing how the site reacts in real time.
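
A sketch of this workflow (URLs and form data are hypothetical):

from requestium import Session

s = Session(webdriver_path='./chromedriver')

# Steps that already work, written with plain Requests
s.post('http://samplesite.com/login', data={'user': 'me', 'password': 'secret'})

# The step you are currently working on: hand the session to Chrome and watch it live
s.transfer_session_cookies_to_driver()
s.driver.get('http://samplesite.com/account')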

When transfer_driver_cookies_to_session is called, Requestium automatically updates your Requests session's user-agent to match that of the browser used in Selenium. This doesn't happen if you run Requests without first switching from a Selenium session, though. So if you just want to run Requests but want it to use your browser's user agent instead of the default one (which sites love to block), just run:

s.copy_user_agent_from_driver()

Take into account that doing this will launch a browser process.

Note: The Selenium Chrome webdriver doesn't support automatic transfer of proxies from the Session to the Webdriver at the moment.

Comparison with Requests + Selenium + lxml

Below is a silly but working example of a script that runs on Reddit, followed by the equivalent script written using Requests + Selenium + lxml instead of Requestium.

Using Requestium

from requestium import Session, Keys

# If you want Requestium to type your username in the browser for you, write it in here:
reddit_user_name = ''

s = Session('./chromedriver', default_timeout=15)
s.driver.get('http://reddit.com')
s.driver.find_element_by_xpath("//a[@href='https://www.reddit.com/login']").click()

print('Waiting for elements to load...')
s.driver.ensure_element_by_class_name("desktop-onboarding-sign-up__form-toggler",
                                      state='visible').click()

if reddit_user_name:
    s.driver.ensure_element_by_id('user_login').send_keys(reddit_user_name)
    s.driver.ensure_element_by_id('passwd_login').send_keys(Keys.BACKSPACE)
print('Please log in using the Chrome browser')

s.driver.ensure_element_by_class_name("desktop-onboarding__title", timeout=60, state='invisible')
print('Thanks!')

if not reddit_user_name:
    reddit_user_name = s.driver.xpath("//span[@class='user']//text()").extract_first()

if reddit_user_name:
    s.transfer_driver_cookies_to_session()
    response = s.get("https://www.reddit.com/user/{}/".format(reddit_user_name))
    cmnt_karma = response.xpath("//span[@class='karma comment-karma']//text()").extract_first()
    reddit_golds_given = response.re_first(r"(\d+) gildings given out")
    print("Comment karma: {}".format(cmnt_karma))
    print("Reddit golds given: {}".format(reddit_golds_given))
else:
    print("Couldn't get user name")

Using Requests + Selenium + lxml

import re
from lxml import etree
from requests import Session
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# If you want the script to type your username in the browser for you, write it in here:
reddit_user_name = ''

driver = webdriver.Chrome('./chromedriver')
driver.get('http://reddit.com')
driver.find_element_by_xpath("//a[@href='https://www.reddit.com/login']").click()

print('Waiting for elements to load...')
WebDriverWait(driver, 15).until(
    EC.visibility_of_element_located((By.CLASS_NAME, "desktop-onboarding-sign-up__form-toggler"))
).click()

if reddit_user_name:
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.ID, 'user_login'))
    ).send_keys(reddit_user_name)
    driver.find_element_by_id('passwd_login').send_keys(Keys.BACKSPACE)
print('Please log in using the Chrome browser')

try:
    WebDriverWait(driver, 3).until(
        EC.presence_of_element_located((By.CLASS_NAME, "desktop-onboarding__title"))
    )
except TimeoutException:
    pass
WebDriverWait(driver, 60).until(
    EC.invisibility_of_element_located((By.CLASS_NAME, "desktop-onboarding__title"))
)
print('Thanks!')

if not reddit_user_name:
    tree = etree.HTML(driver.page_source)
    try:
        reddit_user_name = tree.xpath("//span[@class='user']//text()")[0]
    except IndexError:
        reddit_user_name = None

if reddit_user_name:
    s = Session()
    # Reddit will think we are a bot if we have the wrong user agent
    selenium_user_agent = driver.execute_script("return navigator.userAgent;")
    s.headers.update({"user-agent": selenium_user_agent})
    for cookie in driver.get_cookies():
        s.cookies.set(cookie['name'], cookie['value'], domain=cookie['domain'])
    response = s.get("https://www.reddit.com/user/{}/".format(reddit_user_name))
    try:
        cmnt_karma = etree.HTML(response.content).xpath(
            "//span[@class='karma comment-karma']//text()")[0]
    except IndexError:
        cmnt_karma = None
    match = re.search(r"(\d+) gildings given out", str(response.content))
    if match:
        reddit_golds_given = match.group(1)
    else:
        reddit_golds_given = None
    print("Comment karma: {}".format(cmnt_karma))
    print("Reddit golds given: {}".format(reddit_golds_given))
else:
    print("Couldn't get user name")

Similar Projects

This project intends to be a drop-in replacement for Requests' Session object, with added functionality. If your use case calls for a drop-in replacement for a Selenium webdriver that also has some of Requests' functionality, Selenium-Requests does just that.

License

Copyright © 2018, Tryolabs. Released under the BSD 3-Clause License.
