Welcome to the documentation of Nesta's skills extractor library.
This page contains information on how to install and use Nesta's skills extraction library. The library allows you to extract skill phrases from job advertisement texts and map them onto a skills taxonomy of your choice.
We currently support mapping onto three taxonomies: the European Commission's European Skills, Competences, and Occupations (ESCO) taxonomy; Lightcast's Open Skills; and a "toy" taxonomy developed internally for testing purposes.
If you'd like to learn more about the models used in the library, please refer to the model card page.
You may also want to read more about the wider project.
You can use pip to install the library:
```
pip install ojd-daps-skills
```
You will also need to install spaCy's English language model:
```
python -m spacy download en_core_web_sm
```
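To confirm that the model download worked, you can load it directly. This is just a quick sanity check, not a step the library requires:

```python
import spacy

# Load the English model downloaded above; this raises an OSError
# if the download step did not complete successfully.
nlp = spacy.load("en_core_web_sm")
print(nlp("Testing the spaCy install")[0].text)  # prints "Testing"
```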
Note that this package was developed on macOS and tested on Ubuntu. Changes have been made for Windows compatibility, but these are untested and cannot be guaranteed.
When the package is first used, it will automatically download a folder of necessary data and models (~1GB).
The library supports three key skills extraction functionalities:
- Extract AND map skills to a taxonomy of your choice;
- Extract skills from job adverts;
- Map a list of skills to a taxonomy of your choice.
The option `local=False` can only be used by those with access to Nesta's S3 bucket.
If you would like to extract AND map skills in one step, you can do so with the `extract_skills` method:
```python
from ojd_daps_skills.pipeline.extract_skills.extract_skills import ExtractSkills  # import the module

es = ExtractSkills(config_name="extract_skills_toy", local=True)  # instantiate with the toy taxonomy configuration file
es.load()  # load the necessary models

job_adverts = [
    "The job involves communication skills and maths skills",
    "The job involves Excel skills. You will also need good presentation skills",
]  # toy job advert examples

job_skills_matched = es.extract_skills(job_adverts)  # extract skills and map them to the toy taxonomy
```
The outputs are as follows:

```
job_skills_matched
>>> [{'SKILL': [('communication skills', ('communication, collaboration and creativity', 'S1')), ('maths skills', ('working with computers', 'S5'))]}, {'SKILL': [('Excel skills', ('working with computers', 'S5')), ('presentation skills', ('communication, collaboration and creativity', 'S1'))]}]
```
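The output contains one dict per advert, and each entry under the `'SKILL'` key pairs the extracted phrase with its matched taxonomy skill and ID. A minimal sketch of unpacking the structure shown above:

```python
# Walk the extract_skills output: one dict per advert, with 'SKILL'
# holding (extracted phrase, (taxonomy skill name, taxonomy ID)) pairs.
for advert_id, advert_result in enumerate(job_skills_matched):
    for phrase, (taxonomy_skill, taxonomy_id) in advert_result["SKILL"]:
        print(f"Advert {advert_id}: '{phrase}' -> '{taxonomy_skill}' [{taxonomy_id}]")
```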
You can also simply extract skills from a job advert or a list of job adverts:
```python
from ojd_daps_skills.pipeline.extract_skills.extract_skills import ExtractSkills  # import the module

es = ExtractSkills(config_name="extract_skills_toy", local=True)  # instantiate with the toy taxonomy configuration file
es.load()  # load the necessary models

job_adverts = [
    "The job involves communication skills and maths skills",
    "The job involves Excel skills. You will also need good presentation skills",
]  # toy job advert examples

predicted_skills = es.get_skills(job_adverts)  # extract skills from the list of job adverts
```
The outputs are as follows:

```
predicted_skills
>>> [{'EXPERIENCE': [], 'SKILL': ['communication skills', 'maths skills'], 'MULTISKILL': []}, {'EXPERIENCE': [], 'SKILL': ['Excel skills', 'presentation skills'], 'MULTISKILL': []}]
```
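Each advert yields a dict with `'EXPERIENCE'`, `'SKILL'` and `'MULTISKILL'` lists. As a small sketch based on the output structure shown above, you could collect every unique skill phrase across the adverts:

```python
# Collect the unique skill phrases predicted across all adverts, following
# the get_skills output structure shown above.
unique_skills = sorted(
    {skill for advert in predicted_skills for skill in advert["SKILL"]}
)
print(unique_skills)
# ['Excel skills', 'communication skills', 'maths skills', 'presentation skills']
```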
You can map either the `predicted_skills` output from `get_skills`, or simply a list of skills, to a taxonomy of your choice. In this instance, we map a list of skills:
```python
from ojd_daps_skills.pipeline.extract_skills.extract_skills import ExtractSkills  # import the module

es = ExtractSkills(config_name="extract_skills_toy", local=True)  # instantiate with the toy taxonomy configuration file
es.load()  # load the necessary models

skills_list = [
    "Communication",
    "Excel skills",
    "working with computers",
]  # list of skills (and/or multiskills) to be matched

skills_list_matched = es.map_skills(skills_list)  # match the formatted skills to the toy taxonomy
```
The outputs are as follows:

```
skills_list_matched
>>> [{'SKILL': [('Excel skills', ('working with computers', 'S5')), ('Communication', ('use communication techniques', 'cdef')), ('working with computers', ('communication, collaboration and creativity', 'S1'))]}]
```
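As with `extract_skills`, the matches come back as (input skill, (taxonomy skill, ID)) pairs under the `'SKILL'` key. A small sketch, based on the output structure shown above, turning them into a lookup table:

```python
# Build a lookup from each input skill to its matched (taxonomy skill, ID)
# pair, following the map_skills output structure shown above.
skill_lookup = {
    phrase: match
    for result in skills_list_matched
    for phrase, match in result["SKILL"]
}
print(skill_lookup["Excel skills"])  # ('working with computers', 'S5')
```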
If you would like to demo the library using a front end, we have also built a Streamlit app that allows you to extract skills from a given text. The app lets you paste a job advert of your choice, then extract and map skills using either of the `extract_skills_lightcast` and `extract_skills_esco` configurations.
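Programmatically, the same workflow shown above should apply with these configurations. A hedged sketch, not taken from the docs, assuming that `local=True` works once the necessary data and models have been downloaded, as in the toy examples:

```python
from ojd_daps_skills.pipeline.extract_skills.extract_skills import ExtractSkills

# Swap in the ESCO configuration named above; local=True assumes the
# necessary data and models have already been downloaded.
es_esco = ExtractSkills(config_name="extract_skills_esco", local=True)
es_esco.load()
esco_matched = es_esco.extract_skills(
    ["The job involves communication skills and maths skills"]
)
```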
If you'd like to modify or develop the source code you can clone it by first running:
```
git clone git@github.com:nestauk/ojd_daps_skills.git
```
- Meet the data science cookiecutter requirements; in brief:
  - Install `direnv` and `conda`
  - Create a blank cookiecutter conda log file:

    ```
    mkdir .cookiecutter/state
    touch .cookiecutter/state/conda-create.log
    ```

- Run `make install` to configure the development environment
- Install spaCy's English language model:

  ```
  python -m spacy download en_core_web_sm
  ```
The project is split into three core pipeline folders:
- `skill_ner` - Training a Named Entity Recognition (NER) model to extract skills from job adverts.
- `skill_ner_mapping` - Matching skills to an existing skills taxonomy using semantic similarity.
- `extract_skills` - User-friendly functionality to extract and map skills from job adverts.
Much more about these steps can be found in each of the pipeline folder READMEs.
*An example of extracting skills and mapping them to the ESCO taxonomy.*
Some functions have tests; these can be run with:

```
pytest
```
Various pieces of analysis are done in the `analysis` folder. These require access to datasets from Nesta's private S3 bucket and are therefore only designed for internal Nesta use.
The technical and working style guidelines can be found here.
If contributing, push your changes to a new branch so that our code checks are triggered.
This project was made possible via funding from the Economic Statistics Centre of Excellence.
The project template is based on Nesta's data science project template (read the docs here).