
Scrape job websites into a single spreadsheet with no duplicates.


Automated tool for scraping job postings into a .csv file.

Since this project was developed, CAPTCHA defenses have clamped down hard. Help us rebuild the backend and make this tool useful again!

Benefits over job search sites:

  • Never see the same job twice!
  • No advertising.
  • See jobs from multiple job search websites all in one place.

Example master CSV output (masterlist.csv)

Installation

JobFunnel requires Python 3.8 or later.

pip install git+https://github.com/PaulMcInnis/JobFunnel.git

Usage

By performing regular scraping and reviewing, you can cut through the noise of even the busiest job markets.

Configure

You can search for jobs with YAML configuration files or by passing command arguments.

Download the demo settings.yaml by running the command below:

wget https://git.io/JUWeP -O my_settings.yaml

NOTE:

  • It is recommended to provide as few search keywords as possible (e.g. Python, AI).

  • JobFunnel currently supports CANADA_ENGLISH, USA_ENGLISH, UK_ENGLISH, FRANCE_FRENCH, and GERMANY_GERMAN locales.
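For orientation, a minimal configuration might look like the sketch below. This is modeled on the demo settings.yaml rather than copied from it, so the key names (master_csv_file, cache_folder, the search block, etc.) are assumptions — verify them against your downloaded file before running a scrape.

```yaml
# Hypothetical minimal my_settings.yaml -- check the downloaded demo
# settings.yaml for the authoritative key names and structure.
master_csv_file: master_list.csv
cache_folder: cache
block_list_file: block_list.json
log_file: log.log

search:
  locale: CANADA_ENGLISH      # one of the supported locales listed above
  keywords:                   # keep this list short, e.g. one or two terms
    - Python
  province_or_state: "ON"
  city: "Toronto"
  max_listing_days: 30        # drop postings older than 30 days
```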

Scrape

Run funnel with your settings YAML to populate your master CSV file with jobs from available providers:

funnel load -s my_settings.yaml

Review

Open the master CSV file and update the per-job status:

  • Set to interested, applied, interview or offer to reflect your progress with the job.

  • Set to archive, rejected or delete to remove a job from this search. You can review 'blocked' jobs within your block_list_file.
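As a sketch of how you might triage the master CSV programmatically, the snippet below filters rows by status using only the Python standard library. The column names (status, title, company) are assumptions based on the layout described above — adjust them to match your actual CSV header.

```python
import csv
import io

# A tiny stand-in for master_list.csv; the real file is produced by
# `funnel load`. Column names are assumptions -- check your header row.
sample = """status,title,company
interested,Backend Developer,Acme
applied,Data Engineer,Globex
rejected,ML Intern,Initech
"""

def jobs_with_status(csv_text, wanted):
    """Return the rows whose status is in the `wanted` set."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["status"] in wanted]

# Show only the jobs still in play.
active = jobs_with_status(sample, {"interested", "applied", "interview", "offer"})
for row in active:
    print(f'{row["status"]:>10}  {row["title"]} @ {row["company"]}')
```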

Advanced Usage

  • Automating Searches
    JobFunnel can be easily automated to run nightly with crontab.
    For more information, see the crontab document.
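As an illustration, a nightly crontab entry might look like the line below. The paths are placeholders — substitute the absolute paths to your own funnel executable, settings file, and log file.

```shell
# Hypothetical crontab entry: run a scrape every night at 02:00.
0 2 * * * /usr/local/bin/funnel load -s /home/me/my_settings.yaml >> /home/me/funnel.log 2>&1
```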

  • Writing your own Scrapers
    If you have a job website you'd like to write a scraper for, you are welcome to implement it. Review the Base Scraper for implementation details.

  • Remote Work
    Bypass a frustrating user experience looking for remote work by setting the search parameter remoteness to match your desired level, e.g. FULLY_REMOTE.

  • Adding Support for X Language / Job Website
    JobFunnel supports scraping jobs from the same job website across locales & domains. If you are interested in adding support, you may only need to define session headers and domain strings. Review the Base Scraper for further implementation details.

  • Blocking Companies
    Filter undesired companies by adding them to your company_block_list in your YAML or pass them by command line as -cbl.
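In YAML form, the block list might look like the fragment below. The key name company_block_list comes from the bullet above, but its exact placement in the file is an assumption — check the demo settings.yaml, and note the company names here are made up.

```yaml
# Hypothetical placement -- verify against the demo settings.yaml.
company_block_list:
  - "Staffing Agency Inc."
  - "Spam Recruiters LLC"
```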

  • Job Age Filter
    You can configure the maximum age of scraped listings (in days) by configuring max_listing_days.

  • Reviewing Jobs in Terminal
    You can review the job list in the command line:

    column -s, -t < master_list.csv | less -#2 -N -S
    
  • Respectful Delaying
    Respectfully scrape your job posts with our built-in delaying algorithms.

    To better understand how to configure delaying, check out this Jupyter Notebook which breaks down the algorithm step by step with code and visualizations.
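To illustrate the general idea (this is NOT JobFunnel's actual algorithm — see the linked notebook for that), a minimal randomized-delay sketch looks like this: each request waits a base interval plus random jitter, so requests are spaced out politely rather than fired in a burst.

```python
import random

def jittered_delay(base=5.0, spread=2.0):
    """Generic randomized inter-request delay (illustration only).

    Returns a wait time drawn uniformly from [base - spread, base + spread],
    clamped to be non-negative. JobFunnel's real delaying algorithm differs;
    this just shows the concept of jittered spacing.
    """
    delay = base + random.uniform(-spread, spread)
    return max(0.0, delay)

# Example: compute waits for three hypothetical requests.
for _ in range(3):
    print(f"would wait {jittered_delay():.2f}s before the next request")
```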

  • Recovering Lost Data
    JobFunnel can re-build your master CSV from your cache_folder where all the historic scrape data is located:

    funnel --recover
    
  • Running by CLI
    You can run JobFunnel using CLI only, review the command structure via:

    funnel inline -h
    

CAPTCHA

JobFunnel does not solve CAPTCHA. If, while scraping, you receive an `Unable to extract jobs from initial search result page:` error, open that URL in your browser and solve the CAPTCHA manually.