  • Stars: 1,263
  • Rank: 37,244 (Top 0.8%)
  • Language: Python
  • License: MIT License
  • Created: almost 10 years ago
  • Updated: over 1 year ago

Repository Details

A chat bot for Slack (https://slack.com), inspired by llimllib/limbo and will.

Features

Installation

pip install slackbot

Usage

Generate the Slack API token

First you need to get the Slack API token for your bot. You have two options:

  1. If you use a bot user integration of Slack, you can get the API token on the integration page.
  2. If you use a real Slack user, you can generate an API token on the Slack Web API page.

Configure the bot

First, create a slackbot_settings.py and a run.py in your own slackbot project.

Configure the API token

Then you need to configure API_TOKEN in a Python module named slackbot_settings.py, which must be located on your Python import path. The bot imports it automatically.

slackbot_settings.py:

API_TOKEN = "<your-api-token>"

Alternatively, you can use the environment variable SLACKBOT_API_TOKEN.
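
For local testing, one option (a sketch, not part of slackbot itself) is to set that variable in-process before the bot starts:

import os

# Set the token in the process environment so slackbot can pick it up
# instead of reading API_TOKEN from slackbot_settings.py.
os.environ["SLACKBOT_API_TOKEN"] = "<your-api-token>"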

Run the bot

run.py:

from slackbot.bot import Bot


def main():
    bot = Bot()
    bot.run()

if __name__ == "__main__":
    main()

Configure the default answer

Add a DEFAULT_REPLY to slackbot_settings.py:

DEFAULT_REPLY = "Sorry but I didn't understand you"

Configure the docs answer

The message attribute passed to your custom plugins has a special function, message.docs_reply(), which parses all the available plugins and returns the docstrings found in each of them.
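
For example, a hypothetical "help" command (the pattern and function name here are illustrative, not part of slackbot) could expose those docs:

from slackbot.bot import respond_to
import re


@respond_to('help$', re.IGNORECASE)
def plugin_docs(message):
    # docs_reply() gathers the docstrings of all loaded plugins
    message.reply(message.docs_reply())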

Send all tracebacks directly to a channel, private channel, or user

Set ERRORS_TO in slackbot_settings.py to the desired recipient. It can be any channel, private channel, or user. Note that the bot must already be in the channel. If a user is specified, ensure that they have sent at least one DM to the bot first.

ERRORS_TO = 'some_channel'
# or...
ERRORS_TO = 'username'

Configure the plugins

Add your plugin modules to a PLUGINS list in slackbot_settings.py:

PLUGINS = [
    'slackbot.plugins',
    'mybot.plugins',
]

Now you can talk to your bot in your slack client!
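
Putting the settings together, a minimal slackbot_settings.py could look like this (all values are placeholders):

# slackbot_settings.py -- everything here comes from the sections above
API_TOKEN = "<your-api-token>"  # or set SLACKBOT_API_TOKEN in the environment
DEFAULT_REPLY = "Sorry but I didn't understand you"
ERRORS_TO = 'some_channel'  # channel or user that receives tracebacks
PLUGINS = [
    'slackbot.plugins',
    'mybot.plugins',
]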

Attachment Support

To send a message with rich attachments, build a list of attachment dicts and pass it (JSON-encoded) to message.send_webapi():

from slackbot.bot import respond_to
import re
import json


@respond_to('github', re.IGNORECASE)
def github(message):
    attachments = [{
        'fallback': 'Fallback text',
        'author_name': 'Author',
        'author_link': 'http://www.github.com',
        'text': 'Some text',
        'color': '#59afe1',
    }]
    message.send_webapi('', json.dumps(attachments))

Create Plugins

A chat bot is meaningless unless you can extend/customize it to fit your own use cases.

To write a new plugin, simply create a function decorated by slackbot.bot.respond_to or slackbot.bot.listen_to:

  • A function decorated with respond_to is called when a message matching the pattern is sent to the bot (direct message or @botname in a channel/private channel chat)
  • A function decorated with listen_to is called when a message matching the pattern is sent on a channel/private channel chat (not directly sent to the bot)

from slackbot.bot import respond_to
from slackbot.bot import listen_to
import re

@respond_to('hi', re.IGNORECASE)
def hi(message):
    message.reply('I can understand hi or HI!')
    # react with thumb up emoji
    message.react('+1')

@respond_to('I love you')
def love(message):
    message.reply('I love you too!')

@listen_to('Can someone help me?')
def help(message):
    # Message is replied to the sender (prefixed with @user)
    message.reply('Yes, I can!')

    # Message is sent on the channel
    message.send('I can help everybody!')

    # Start a thread on the original message
    message.reply("Here's a threaded reply", in_thread=True)

To extract parameters from the message, you can use a regular expression:

from slackbot.bot import respond_to

@respond_to('Give me (.*)')
def giveme(message, something):
    message.reply('Here is {}'.format(something))

If you would like to have commands like 'stats' and 'stats start_date end_date', you can create regular expressions like so:

from slackbot.bot import respond_to
import re


@respond_to('stats$', re.IGNORECASE)
@respond_to('stats (.*) (.*)', re.IGNORECASE)
def stats(message, start_date=None, end_date=None):
    # Both dates are None when the bare 'stats' command is used.
    message.reply('Stats from {} to {}'.format(start_date, end_date))

Then add your plugins module to the PLUGINS list in your slackbot settings, e.g. slackbot_settings.py:

PLUGINS = [
    'slackbot.plugins',
    'mybot.plugins',
]

The @default_reply decorator

Added in slackbot 0.4.1

Besides specifying DEFAULT_REPLY in slackbot_settings.py, you can also decorate a function with the @default_reply decorator to make it the default reply handler, which is handier in some cases.

from slackbot.bot import default_reply


@default_reply
def my_default_handler(message):
    message.reply('...')

Here is another variant of the decorator:

@default_reply(r'hello.*')
def my_default_handler(message):
    message.reply('...')

The above default handler only handles messages that (1) match the specified pattern and (2) cannot be handled by any other registered handler.

List of third party plugins

You can find a list of the available third party plugins on this page.
