
Faster-than-Requests

Faster requests on Python 3


| Library | Speed | Files | LOC | Dependencies | Developers | WebSockets | Multi-Threaded Web Scraper Built-in |
|---|---|---|---|---|---|---|---|
| PyWGET | 152.39 | 1 | 338 | Wget | >17 | ❎ | ❎ |
| Requests | 15.58 | >20 | 2558 | >=7 | >527 | ❎ | ❎ |
| Requests (cached object) | 5.50 | >20 | 2558 | >=7 | >527 | ❎ | ❎ |
| Urllib | 4.00 | ??? | 1200 | 0 (std lib) | ??? | ❎ | ❎ |
| Urllib3 | 3.55 | >40 | 5242 | 0 (No SSL), >=5 (SSL) | >188 | ❎ | ❎ |
| PyCurl | 0.75 | >15 | 5932 | Curl, LibCurl | >50 | ❎ | ❎ |
| PyCurl (no SSL) | 0.68 | >15 | 5932 | Curl, LibCurl | >50 | ❎ | ❎ |
| Faster_than_requests | 0.40 | 1 | 999 | 0 | 1 | ✔️ | ✔️ (7, One-Liner) |
  • Lines of Code counted using CLOC.
  • Direct dependencies of the package when ready to run.
  • Benchmarks run on Docker, from the Dockerfile on this repo.
  • Developers counted from the Contributors list of Git.
  • Speed is wall-clock time to complete 10,000 local HTTP requests.
  • Stats as of year 2020.
  • Hardware: x86_64 64-bit AMD, SSD, Arch Linux.

Use

import faster_than_requests as requests

requests.get("http://httpbin.org/get")                                      # GET
requests.post("http://httpbin.org/post", "Some Data Here")                  # POST
requests.download("http://example.com/foo.jpg", "out.jpg")                  # Download a file
requests.scraper(["http://foo.io", "http://bar.io"], threads=True)          # Multi-Threaded Web Scraper
requests.scraper5(["http://foo.io"], sqlite_file_path="database.db")        # URL-to-SQLite Web Scraper
requests.scraper6(["http://python.org"], [r"(www|http:|https:)+[^\s]+[\w]"]) # Regex-powered Web Scraper
requests.scraper7("http://python.org", "body > div.someclass a#someid")     # CSS Selector Web Scraper
requests.websocket_send("ws://echo.websocket.org", "data here")             # WebSockets Binary/Text

Table Of Contents

get() post() put() head()
patch() delete() download() download2()
scraper() scraper2() scraper3() scraper4()
scraper5() scraper6() scraper7() get2str()
get2str2() get2dict() get2json()
post2str() post2dict() post2json() post2list()
download3() tuples2json() set_headers() multipartdata2str()
datauri() urlparse() urlencode() urldecode()
encodequery() encodexml() debugs() minifyhtml()
How to set DEBUG mode websocket_send() websocket_ping()
How to Install How to Windows FAQ Get Help
PyPI GitHub Actions / CI Examples Sponsors

get()

Description: Takes a URL string, makes an HTTP GET, and returns the response.

Arguments:

  • url the remote URL, string type, required, must not be an empty string, example https://dev.to.
  • user_agent User Agent, string type, optional, should not be an empty string.
  • max_redirects Maximum Redirects, int type, optional, defaults to 9, examples 5 and 1.
  • proxy_url Proxy URL, string type, optional, defaults to ""; if it is "" then no Proxy is used, example 172.15.256.1:666.
  • proxy_auth Proxy Auth, string type, optional, defaults to ""; ignored if proxy_url is "".
  • timeout Timeout, int type, optional, milliseconds precision, defaults to -1, examples 9999 and 666.
  • http_headers HTTP Headers, List of Tuples type, optional, example [("key", "value")], example [("DNT", "1")].

Examples:

import faster_than_requests as requests
requests.get("http://example.com")

Returns: Response, list type; the values are string type and can be empty strings. The list always has 7 items, in the order [body, type, status, version, url, length, headers]. Use to_json() to get JSON, to_dict() to get a dict, or to_tuples() to get a list of tuples.
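A minimal sketch of consuming the response, assuming to_dict() accepts this response list as its only argument (see to_dict() below); the printed values are illustrative:

import faster_than_requests as requests

response = requests.get("http://example.com")
# Unpack the 7 items in the documented order:
body, kind, status, version, url, length, headers = response
print(status)                      # HTTP status, string type
print(requests.to_dict(response))  # the same data as a dict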

See Also: get2str() and get2str2()

post()

Description: Takes a URL string, makes an HTTP POST, and returns the response.

Arguments:

  • url the remote URL, string type, required, must not be an empty string, example https://dev.to.
  • body the Body data, string type, required, can be an empty string. To POST Files use this too.
  • multipart_data MultiPart data, List of Tuples type, optional, must not be an empty list, example [("key", "value")].
  • user_agent User Agent, string type, optional, should not be an empty string.
  • max_redirects Maximum Redirects, int type, optional, defaults to 9, examples 5 and 1.
  • proxy_url Proxy URL, string type, optional, defaults to ""; if it is "" then no Proxy is used, example 172.15.256.1:666.
  • proxy_auth Proxy Auth, string type, optional, defaults to ""; ignored if proxy_url is "".
  • timeout Timeout, int type, optional, milliseconds precision, defaults to -1, examples 9999 and 666.
  • http_headers HTTP Headers, List of Tuples type, optional, example [("key", "value")], example [("DNT", "1")].

Examples:

import faster_than_requests as requests
requests.post("http://httpbin.org/post", "Some Data Here")

Returns: Response, list type; the values are string type and can be empty strings. The list always has 7 items, in the order [body, type, status, version, url, length, headers]. Use to_json() to get JSON, to_dict() to get a dict, or to_tuples() to get a list of tuples.

put()

Description: Takes a URL string, makes an HTTP PUT, and returns the response.

Arguments:

  • url the remote URL, string type, required, must not be an empty string, example https://nim-lang.org.
  • body the Body data, string type, required, can be an empty string.
  • user_agent User Agent, string type, optional, should not be an empty string.
  • max_redirects Maximum Redirects, int type, optional, defaults to 9, examples 5 and 1.
  • proxy_url Proxy URL, string type, optional, defaults to ""; if it is "" then no Proxy is used, example 172.15.256.1:666.
  • proxy_auth Proxy Auth, string type, optional, defaults to ""; ignored if proxy_url is "".
  • timeout Timeout, int type, optional, milliseconds precision, defaults to -1, examples 9999 and 666.
  • http_headers HTTP Headers, List of Tuples type, optional, example [("key", "value")], example [("DNT", "1")].

Examples:

import faster_than_requests as requests
requests.put("http://httpbin.org/put", "Some Data Here")

Returns: Response, list type; the values are string type and can be empty strings. The list always has 7 items, in the order [body, type, status, version, url, length, headers]. Use to_json() to get JSON, to_dict() to get a dict, or to_tuples() to get a list of tuples.

delete()

Description: Takes a URL string, makes an HTTP DELETE, and returns the response.

Arguments:

  • url the remote URL, string type, required, must not be an empty string, example https://nim-lang.org.
  • user_agent User Agent, string type, optional, should not be an empty string.
  • max_redirects Maximum Redirects, int type, optional, defaults to 9, examples 5 and 1.
  • proxy_url Proxy URL, string type, optional, defaults to ""; if it is "" then no Proxy is used, example 172.15.256.1:666.
  • proxy_auth Proxy Auth, string type, optional, defaults to ""; ignored if proxy_url is "".
  • timeout Timeout, int type, optional, milliseconds precision, defaults to -1, examples 9999 and 666.
  • http_headers HTTP Headers, List of Tuples type, optional, example [("key", "value")], example [("DNT", "1")].

Examples:

import faster_than_requests as requests
requests.delete("http://example.com/api/something")

Returns: Response, list type; the values are string type and can be empty strings. The list always has 7 items, in the order [body, type, status, version, url, length, headers]. Use to_json() to get JSON, to_dict() to get a dict, or to_tuples() to get a list of tuples.

patch()

Description: Takes a URL string, makes an HTTP PATCH, and returns the response.

Arguments:

  • url the remote URL, string type, required, must not be an empty string, example https://archlinux.org.
  • body the Body data, string type, required, can be an empty string.
  • user_agent User Agent, string type, optional, should not be an empty string.
  • max_redirects Maximum Redirects, int type, optional, defaults to 9, examples 5 and 1.
  • proxy_url Proxy URL, string type, optional, defaults to ""; if it is "" then no Proxy is used, example 172.15.256.1:666.
  • proxy_auth Proxy Auth, string type, optional, defaults to ""; ignored if proxy_url is "".
  • timeout Timeout, int type, optional, milliseconds precision, defaults to -1, examples 9999 and 666.
  • http_headers HTTP Headers, List of Tuples type, optional, example [("key", "value")], example [("DNT", "1")].

Examples:

import faster_than_requests as requests
requests.patch("http://example.com", "My Body Data Here")

Returns: Response, list type; the values are string type and can be empty strings. The list always has 7 items, in the order [body, type, status, version, url, length, headers]. Use to_json() to get JSON, to_dict() to get a dict, or to_tuples() to get a list of tuples.

head()

Description: Takes a URL string, makes an HTTP HEAD, and returns the response.

Arguments:

  • url the remote URL, string type, required, must not be an empty string, example https://nim-lang.org.
  • user_agent User Agent, string type, optional, should not be an empty string.
  • max_redirects Maximum Redirects, int type, optional, defaults to 9, examples 5 and 1.
  • proxy_url Proxy URL, string type, optional, defaults to ""; if it is "" then no Proxy is used, example 172.15.256.1:666.
  • proxy_auth Proxy Auth, string type, optional, defaults to ""; ignored if proxy_url is "".
  • timeout Timeout, int type, optional, milliseconds precision, defaults to -1, examples 9999 and 666.
  • http_headers HTTP Headers, List of Tuples type, optional, example [("key", "value")], example [("DNT", "1")].

Examples:

import faster_than_requests as requests
requests.head("http://example.com/api/something")

Returns: Response, list type; the values are string type and can be empty strings. The list always has 7 items, in the order [body, type, status, version, url, length, headers]. Use to_json() to get JSON, to_dict() to get a dict, or to_tuples() to get a list of tuples.

to_dict()

Description: Convert the response to dict.

Arguments:

  • ftr_response Response from any of the functions that return a response.

Returns: Response, dict type.

to_json()

Description: Convert the response to Pretty-Printed JSON.

Arguments:

  • ftr_response Response from any of the functions that return a response.

Returns: Response, Pretty-Printed JSON.

to_tuples()

Description: Convert the response to a list of tuples.

Arguments:

  • ftr_response Response from any of the functions that return a response.

Returns: Response, list of tuples.
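A combined sketch of the three converters above, assuming each one takes the response list returned by get() and friends:

import faster_than_requests as requests

response = requests.get("http://example.com")
as_dict = requests.to_dict(response)       # dict type
as_json = requests.to_json(response)       # Pretty-Printed JSON
as_tuples = requests.to_tuples(response)   # list of tuples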

Extras: Go beyond requests

scraper()

Description: Multi-Threaded Ready-Made URL-Deduplicating Web Scraper from a list of URLs.

All arguments are optional; it only needs the URL to work. The scraper is designed as a 2-step Web Scraper: it makes a first pass collecting all URL links, then a second pass actually fetching those URLs. Requests are processed asynchronously, so it does not wait for one request to finish before processing the next.

Arguments:

  • list_of_urls List of URLs, URL must be string type, required, must not be empty list, example ["http://example.io"].
  • html_tag HTML Tag to parse, string type, optional, defaults to "a" being Links, example "h1".
  • case_insensitive Case Insensitive, True for Case Insensitive, boolean type, optional, defaults to True, example True.
  • deduplicate_urls Deduplicate list_of_urls removing repeated URLs, boolean type, optional, defaults to False, example False.
  • threads Passing threads=True uses Multi-Threading, threads=False does not; boolean type, optional, omitting it will not use Multi-Threading.

Examples:

import faster_than_requests as requests
requests.scraper(["https://nim-lang.org", "http://example.com"], threads=True)

Returns: Scraped Webs.

scraper2()

Description: Multi-Tag Ready-Made URL-Deduplicating Web Scraper from a list of URLs. All arguments are optional; it only needs the URL to work. The scraper is designed as a 2-step Web Scraper: it makes a first pass collecting all URL links, then a second pass actually fetching those URLs. Requests are processed asynchronously, so it does not wait for one request to finish before processing the next. You can think of this scraper as a parallel evolution of the original scraper.

Arguments:

  • list_of_urls List of URLs, URL must be string type, required, must not be empty list, example ["http://example.io"].
  • list_of_tags List of HTML Tags to parse, List type, optional, defaults to ["a"] being Links, example ["h1", "h2"].
  • case_insensitive Case Insensitive, True for Case Insensitive, boolean type, optional, defaults to True, example True.
  • deduplicate_urls Deduplicate list_of_urls removing repeated URLs, boolean type, optional, defaults to False, example False.
  • verbose Verbose, prints progress to the terminal, bool type, optional, defaults to True, example False.
  • delay Delay between one download and the next, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to 0, must be a positive integer value, example 42.
  • threads Passing threads=True uses Multi-Threading, threads=False does not; boolean type, optional, omitting it will not use Multi-Threading.
  • agent User Agent, string type, optional, must not be empty string.
  • redirects Maximum Redirects, integer type, optional, defaults to 5, must be positive integer.
  • timeout Timeout, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to -1 meaning no timeout, otherwise must be a positive integer value, example 42.
  • header HTTP Header, any HTTP Headers can be put here, list type, optional, example [("key", "value")].
  • proxy_url HTTPS Proxy Full URL, string type, optional, must not be empty string.
  • proxy_auth HTTPS Proxy Authentication, string type, optional, defaults to "", empty string is ignored.

Examples:

import faster_than_requests as requests
requests.scraper2(["https://nim-lang.org", "http://example.com"], list_of_tags=["h1", "h2"], case_insensitive=False)

Returns: Scraped Webs.

scraper3()

Description: Multi-Tag Ready-Made URL-Deduplicating Web Scraper from a list of URLs.

This Scraper is designed with lots of extra options in the arguments. All arguments are optional; it only needs the URL to work. The scraper is designed as a 2-step Web Scraper: it makes a first pass collecting all URL links, then a second pass actually fetching those URLs. You can think of this scraper as a parallel evolution of the original scraper.

Arguments:

  • list_of_urls List of URLs, URL must be string type, required, must not be empty list, example ["http://example.io"].
  • list_of_tags List of HTML Tags to parse, List type, optional, defaults to ["a"] being Links, example ["h1", "h2"].
  • case_insensitive Case Insensitive, True for Case Insensitive, boolean type, optional, defaults to True, example True.
  • deduplicate_urls Deduplicate list_of_urls removing repeated URLs, boolean type, optional, defaults to False, example False.
  • start_with Match at the start of the line, similar to str.startswith(), string type, optional, example "<cite ".
  • ends_with Match at the end of the line, similar to str.endswith(), string type, optional, example "</cite>".
  • delay Delay between one download and the next, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to 0, must be a positive integer value, example 42.
  • line_start Slice the line at the start by this index, integer type, optional, defaults to 0 meaning no slicing since strings start at index 0; example: 3 cuts off 3 characters at the start of the line.
  • line_end Slice the line at the end by this reverse index, integer type, optional, defaults to 1 meaning no slicing since strings end at reverse index 1; example: 9 cuts off 9 characters at the end of the line.
  • pre_replacements List of tuples of strings to replace before parsing, replacements are in parallel, List type, optional, example [("old", "new"), ("red", "blue")] will replace "old" with "new" and will replace "red" with "blue".
  • post_replacements List of tuples of strings to replace after parsing, replacements are in parallel, List type, optional, example [("old", "new"), ("red", "blue")] will replace "old" with "new" and will replace "red" with "blue".
  • agent User Agent, string type, optional, must not be empty string.
  • redirects Maximum Redirects, integer type, optional, defaults to 5, must be positive integer.
  • timeout Timeout, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to -1 meaning no timeout, otherwise must be a positive integer value, example 42.
  • header HTTP Header, any HTTP Headers can be put here, list type, optional, example [("key", "value")].
  • proxy_url HTTPS Proxy Full URL, string type, optional, must not be empty string.
  • proxy_auth HTTPS Proxy Authentication, string type, optional, defaults to "", empty string is ignored.
  • verbose Verbose, print to terminal console the progress, bool type, optional, defaults to True, example False.

Examples:

import faster_than_requests as requests
requests.scraper3(["https://nim-lang.org", "http://example.com"], list_of_tags=["h1", "h2"], case_insensitive=False)
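A hedged sketch of the filtering options documented above; the URL, patterns, and replacements are illustrative values, not tested ones:

import faster_than_requests as requests

# Keep only lines that look like <cite>...</cite>, then strip the tags away.
requests.scraper3(
    ["https://example.com"],
    start_with="<cite",
    ends_with="</cite>",
    post_replacements=[("<cite>", ""), ("</cite>", "")],
)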

Returns: Scraped Webs.

scraper4()

Description: Images and Photos Ready-Made Web Scraper from a list of URLs.

The Images and Photos scraped from the first URL will be put into a new sub-folder named 0, Images and Photos scraped from the second URL will be put into a new sub-folder named 1, and so on. All arguments are optional; it only needs the URL to work. You can think of this scraper as a parallel evolution of the original scraper.

Arguments:

  • list_of_urls List of URLs, URL must be string type, required, must not be empty list, example ["https://unsplash.com/s/photos/cat", "https://unsplash.com/s/photos/dog"].
  • case_insensitive Case Insensitive, True for Case Insensitive, boolean type, optional, defaults to True, example True.
  • deduplicate_urls Deduplicate list_of_urls removing repeated URLs, boolean type, optional, defaults to False, example False.
  • visited_urls Do not visit the same URL twice, even if redirected into it; keeps track of visited URLs, bool type, optional, defaults to True.
  • delay Delay between a download and the next one, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to 0, must be a positive integer value, example 42.
  • folder Directory to download Images and Photos, string type, optional, defaults to current folder, must not be empty string, example /tmp.
  • force_extension Force file extension to be this file extension, string type, optional, defaults to ".jpg", must not be empty string, example ".png".
  • https_only Force to download images on Secure HTTPS only ignoring plain HTTP, sometimes HTTPS may redirect to HTTP, bool type, optional, defaults to False, example True.
  • html_output Collect all scraped Images and Photos into 1 HTML file with all elements scraped, bool type, optional, defaults to True, example False.
  • csv_output Collect all scraped URLs into 1 CSV file with all links scraped, bool type, optional, defaults to True, example False.
  • verbose Verbose, print to terminal console the progress, bool type, optional, defaults to True, example False.
  • print_alt print to terminal console the alt attribute of the Images and Photos, bool type, optional, defaults to False, example True.
  • picture Scrape images from the new HTML5 <picture> tags instead of <img> tags; <picture> provides Responsive images for several resolutions, but you also get duplicated images, bool type, optional, defaults to False, example True.
  • agent User Agent, string type, optional, must not be empty string.
  • redirects Maximum Redirects, integer type, optional, defaults to 5, must be positive integer.
  • timeout Timeout, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to -1 meaning no timeout, otherwise must be a positive integer value, example 42.
  • header HTTP Header, any HTTP Headers can be put here, list type, optional, example [("key", "value")].
  • proxy_url HTTPS Proxy Full URL, string type, optional, must not be empty string.
  • proxy_auth HTTPS Proxy Authentication, string type, optional, defaults to "", empty string is ignored.

Examples:

import faster_than_requests as requests
requests.scraper4(["https://unsplash.com/s/photos/cat", "https://unsplash.com/s/photos/dog"])

Returns: None.

scraper5()

Description: Recursive Web Scraper to SQLite Database; you give it a URL, it gives back an SQLite database.

The SQLite database can be visualized with any SQLite WYSIWYG tool, like https://sqlitebrowser.org. If the script gets interrupted, e.g. with CTRL+C, it will try its best to keep the data consistent. Additionally it will create a CSV file with all the scraped URLs. HTTP Headers are stored as Pretty-Printed JSON. Date and Time are stored as Unix Timestamps. All arguments are optional; it only needs the URL and SQLite file path to work. You can think of this scraper as a parallel evolution of the original scraper.

Arguments:

  • list_of_urls List of URLs, URL must be string type, required, must not be empty list, example ["https://unsplash.com/s/photos/cat", "https://unsplash.com/s/photos/dog"].
  • sqlite_file_path Full file path to a new SQLite Database, must be .db file extension, string type, required, must not be empty string, example "scraped_data.db".
  • skip_ends_with Skip the URL if ends with this pattern, list type, optional, must not be empty list, example [".jpg", ".pdf"].
  • case_insensitive Case Insensitive, True for Case Insensitive, boolean type, optional, defaults to True, example True.
  • deduplicate_urls Deduplicate list_of_urls removing repeated URLs, boolean type, optional, defaults to False, example False.
  • visited_urls Do not visit same URL twice, even if redirected into, keeps track of visited URLs, bool type, optional, defaults to True.
  • delay Delay between a download and the next one, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to 0, must be a positive integer value, example 42.
  • https_only Force scraping over Secure HTTPS only, ignoring plain HTTP; sometimes HTTPS may redirect to HTTP, bool type, optional, defaults to False, example True.
  • only200 Only commit successfully scraped pages to the Database, ignoring all errors, bool type, optional, example True.
  • agent User Agent, string type, optional, must not be empty string.
  • redirects Maximum Redirects, integer type, optional, defaults to 5, must be positive integer.
  • timeout Timeout, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to -1 meaning no timeout, otherwise must be a positive integer value, example 42.
  • max_loops Maximum total Loops to do while scraping, like a global guard for infinite redirections, integer type, optional, example 999.
  • max_deep Maximum total scraping Recursive Deep, like a global guard for infinite deep recursivity, integer type, optional, example 999.
  • header HTTP Header, any HTTP Headers can be put here, list type, optional, example [("key", "value")].
  • proxy_url HTTPS Proxy Full URL, string type, optional, must not be empty string.
  • proxy_auth HTTPS Proxy Authentication, string type, optional, defaults to "", empty string is ignored.

Examples:

import faster_than_requests as requests
requests.scraper5(["https://example.com"], "scraped_data.db")
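The database schema is not documented here, so this hedged stdlib sketch just lists whatever tables scraper5() created, with a row count for each:

import sqlite3

con = sqlite3.connect("scraped_data.db")
tables = con.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
for (table,) in tables:
    rows = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    print(table, rows)  # table name and how many rows were scraped into it
con.close()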

Returns: None.

scraper6()

Description: Regex-powered Web Scraper from a list of URLs. Scrape web content using a list of Perl Compatible Regular Expressions (PCRE standard). You can configure the Regular Expressions to be case insensitive, multiline, or extended.

This Scraper is designed for developers that know Regular Expressions. Learn Regular Expressions.

All arguments are optional; it only needs the URL and the Regex to work. You can think of this scraper as a parallel evolution of the original scraper.

Regex Arguments: (Arguments focused on Regular Expression parsing and matching)

  • list_of_regex List of Perl Compatible Regular Expressions (PCRE standard) to match the URL against, List type, required, example ["(www|http:|https:)+[^\s]+[\w]"].
  • case_insensitive Case Insensitive Regular Expressions, do caseless matching, True for Case Insensitive, boolean type, optional, defaults to False, example True.
  • multiline Multi-Line Regular Expressions, ^ and $ match newlines within data, boolean type, optional, defaults to False, example True.
  • extended Extended Regular Expressions, ignore all whitespaces and # comments, boolean type, optional, defaults to False, example True.
  • dot Dot . matches anything, including new lines, boolean type, optional, defaults to False, example True.
  • start_with Perl Compatible Regular Expression to match at the start of the line, similar to str().startswith() but with Regular Expressions, string type, optional.
  • ends_with Perl Compatible Regular Expression to match at the end of the line, similar to str().endswith() but with Regular Expressions, string type, optional.
  • post_replacement_regex Perl Compatible Regular Expressions (PCRE standard) to replace after parsing, string type, optional, this option works with post_replacement_by, this is like a Regex post-processing, this option is for experts on Regular Expressions.
  • post_replacement_by string to replace by after parsing, string type, optional, this option works with post_replacement_regex, this is like a Regex post-processing, this option is for experts on Regular Expressions.
  • re_start Perl Compatible Regular Expression matches start at this index, positive integer type, optional, defaults to 0, this option is for experts on Regular Expressions.

Arguments:

  • list_of_urls List of URLs, URL must be string type, required, must not be empty list, example ["http://example.io"].
  • deduplicate_urls Deduplicate list_of_urls removing repeated URLs, boolean type, optional, defaults to False, example False.
  • delay Delay between a download and the next one, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to 0, must be a positive integer value, example 42.
  • agent User Agent, string type, optional, must not be empty string.
  • redirects Maximum Redirects, integer type, optional, defaults to 5, must be positive integer.
  • timeout Timeout, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to -1 meaning no timeout, otherwise must be a positive integer value, example 42.
  • header HTTP Header, any HTTP Headers can be put here, list type, optional, example [("key", "value")].
  • proxy_url HTTPS Proxy Full URL, string type, optional, must not be empty string.
  • proxy_auth HTTPS Proxy Authentication, string type, optional, defaults to "", empty string is ignored.
  • verbose Verbose, print to terminal console the progress, bool type, optional, defaults to True, example False.

Examples:

import faster_than_requests as requests
requests.scraper6(["http://nim-lang.org", "http://python.org"], [r"(www|http:|https:)+[^\s]+[\w]"])

Returns: Scraped Webs.

scraper7()

Description: CSS Selector-powered Web Scraper. Scrape web content using a CSS Selector. The CSS syntax does NOT take Regex nor Regex-like syntax nor literal tag attribute values.

All arguments are optional; it only needs the URL and CSS Selector to work. You can think of this scraper as a parallel evolution of the original scraper.

Arguments:

  • url The URL, string type, required, must not be empty string, example "http://python.org".
  • css_selector CSS Selector, string type, required, must not be empty string, example "body nav.class ul.menu > li > a".
  • agent User Agent, string type, optional, must not be empty string.
  • redirects Maximum Redirects, integer type, optional, defaults to 9, must be positive integer.
  • timeout Timeout, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to -1 meaning no timeout, otherwise must be a positive integer value, example 42.
  • header HTTP Header, any HTTP Headers can be put here, list type, optional, example [("key", "value")].
  • proxy_url HTTPS Proxy Full URL, string type, optional, must not be empty string.
  • proxy_auth HTTPS Proxy Authentication, string type, optional, defaults to "", empty string is ignored.

Examples:

import faster_than_requests as requests
requests.scraper7("http://python.org", "body > div.class a#someid")
import faster_than_requests as requests
requests.scraper7("https://nim-lang.org", "a.pure-menu-link")

[
  '<a class="pure-menu-link" href="/blog.html">Blog</a>',
  '<a class="pure-menu-link" href="/features.html">Features</a>',
  '<a class="pure-menu-link" href="/install.html">Download</a>',
  '<a class="pure-menu-link" href="/learn.html">Learn</a>',
  '<a class="pure-menu-link" href="/documentation.html">Documentation</a>',
  '<a class="pure-menu-link" href="https://forum.nim-lang.org">Forum</a>',
  '<a class="pure-menu-link" href="https://github.com/nim-lang/Nim">Source</a>'
]

More examples: https://github.com/juancarlospaco/faster-than-requests/blob/master/examples/web_scraper_via_css_selectors.py

Returns: Scraped Webs.

websocket_ping()

Description: WebSocket Ping.

Arguments:

  • url the remote URL, string type, required, must not be empty string, example "ws://echo.websocket.org".
  • data data to send, string type, optional, can be empty string, default is empty string, example "".
  • hangup Close the Socket without sending a close packet, optional, default is False, not sending close packet can be faster.

Examples:

import faster_than_requests as requests
requests.websocket_ping("ws://echo.websocket.org")

Returns: Response, string type, can be empty string.

websocket_send()

Description: WebSocket send data, binary or text.

Arguments:

  • url the remote URL, string type, required, must not be empty string, example "ws://echo.websocket.org".
  • data data to send, string type, optional, can be empty string, default is empty string, example "".
  • is_text if True data is sent as Text else as Binary, optional, default is False.
  • hangup Close the Socket without sending a close packet, optional, default is False, not sending close packet can be faster.

Examples:

import faster_than_requests as requests
requests.websocket_send("ws://echo.websocket.org", "data here")

Returns: Response, string type.

get2str()

Description: Takes a URL string, makes an HTTP GET, and returns a string with the response Body.

Arguments:

  • url the remote URL, string type, required, must not be empty string, example https://archlinux.org.

Examples:

import faster_than_requests as requests
requests.get2str("http://example.com")

Returns: Response body, string type, can be empty string.
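Since speed is measured as the unmodified raw output of the Python timeit module (see the FAQ), a hedged benchmark sketch; the local URL and request count are illustrative, and a local server is assumed to be running:

import timeit

# Time 1000 sequential GETs against an assumed local server.
print(timeit.timeit(
    'requests.get2str("http://localhost:8000/")',
    setup="import faster_than_requests as requests",
    number=1000,
))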

get2str2()

Description: Takes a list of URLs, makes 1 HTTP GET for each URL, and returns a list of strings with the response Bodies. This makes all GETs fully parallel, in a single Thread, in a single Process.

Arguments:

  • list_of_urls A list of the remote URLs, list type, required. Objects inside the list must be string type.

Examples:

import faster_than_requests as requests
requests.get2str2(["http://example.com/foo", "http://example.com/bar"]) # Parallel GET

Returns: List of response bodies, list type, values of the list are string type, values of the list can be empty string, can be empty list.

get2dict()

Description: Takes a URL, makes an HTTP GET, and returns a dict with the response Body.

Arguments:

  • url the remote URL, string type, required, must not be empty string, example https://alpinelinux.org.

Examples:

import faster_than_requests as requests
requests.get2dict("http://example.com")

Returns: Response, dict type, values of the dict are string type, values of the dict can be empty string, but keys are always consistent.

get2json()

Description: Takes a URL, makes an HTTP GET, and returns a minified computer-friendly single-line JSON string with the response Body (pretty-printed if pretty_print=True).

Arguments:

  • url the remote URL, string type, required, must not be empty string, example https://alpinelinux.org.
  • pretty_print Pretty Printed JSON, optional, defaults to False.

Examples:

import faster_than_requests as requests
requests.get2json("http://example.com", pretty_print=True)

Returns: Response Body, JSON; Pretty-Printed when pretty_print=True.

post2str()

Description: Takes a URL, makes an HTTP POST, and returns the response Body as string type.

Arguments:

  • url the remote URL, string type, required, must not be empty string.
  • body the Body data, string type, required, can be empty string.
  • multipart_data MultiPart data, List of Tuples type, optional, must not be an empty list, example [("key", "value")].

Examples:

import faster_than_requests as requests
requests.post2str("http://example.com/api/foo", "My Body Data Here")

Returns: Response body, string type, can be empty string.

post2dict()

Description: Takes a URL, makes an HTTP POST on that URL, and returns a dict with the response.

Arguments:

  • url the remote URL, string type, required, must not be empty string.
  • body the Body data, string type, required, can be empty string.
  • multipart_data MultiPart data, List of Tuples type, optional, must not be an empty list, example [("key", "value")].

Examples:

import faster_than_requests as requests
requests.post2dict("http://example.com/api/foo", "My Body Data Here")

Returns: Response, dict type, values of the dict are string type, values of the dict can be empty string, but keys are always consistent.

post2json()

Description: Takes a URL, makes an HTTP POST, and returns the response as a JSON string.

Arguments:

  • url the remote URL, string type, required, must not be empty string.
  • body the Body data, string type, required, can be empty string.
  • multipart_data MultiPart data, List of Tuples type, optional, must not be an empty list, example [("key", "value")].
  • pretty_print Pretty Printed JSON, optional, defaults to False.

Examples:

import faster_than_requests as requests
requests.post2json("http://example.com/api/foo", "My Body Data Here")

Returns: Response, string type.

post2list()

Description: Takes a list of URLs, makes 1 HTTP POST for each URL, returns a list of responses.

Arguments:

  • list_of_urls the remote URLs, list type, required; the objects inside the list must be string type.
  • body the Body data, string type, required, can be empty string.
  • multipart_data MultiPart data, List of Tuples type, optional, must not be an empty list, example [("key", "value")].

Examples:

import faster_than_requests as requests
requests.post2list(["http://example.com/api/foo", "http://example.com/api/bar"], "My Body Data Here")

Returns: List of response bodies, list type, values of the list are string type, values of the list can be empty string, can be empty list.

download()

Description: Takes a URL and a local filename, makes an HTTP GET, and saves the response Body to that local file.

Arguments:

  • url the remote URL, string type, required, must not be empty string.
  • filename the local filename, string type, required, must not be empty string, full path recommended, can be relative path, includes file extension.

Examples:

import faster_than_requests as requests
requests.download("http://example.com/api/foo", "my_file.ext")

Returns: None.

download2()

Description: Takes a list of (URL, filename) tuples and makes 1 HTTP GET Download for each URL of the list.

Arguments:

  • list_of_files List of tuples; tuples must be 2 items long, the first item is the URL and the second item is the filename. The remote URL is string type, required, must not be an empty string. The local filename is string type, required, must not be an empty string, can be a full or relative path, and must include the file extension.
  • delay Delay between one download and the next, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to 0, must be a positive integer value.
  • threads Passing threads=True uses Multi-Threading, threads=False does not; omitting it will not use Multi-Threading.

Examples:

import faster_than_requests as requests
requests.download2([("http://example.com/cat.jpg", "kitten.jpg"), ("http://example.com/dog.jpg", "doge.jpg")])

Returns: None.

download3()

Description: Takes a list of URLs, makes 1 HTTP GET Download for each URL of the list. It will retry again and again in a loop until the file is downloaded or tries reaches 0, whichever happens first. If all retries have failed and tries is 0, it will error out.

Arguments:

  • list_of_files List of tuples; tuples must be 2 items long, the first item is the URL and the second item is the filename. The remote URL is string type, required, must not be an empty string. The local filename is string type, required, must not be an empty string, can be a full or relative path, and must include the file extension.
  • delay Delay between one download and the next, MicroSeconds precision (1000 = 1 Second), integer type, optional, defaults to 0, must be a positive integer value.
  • tries how many Retries to try, positive integer type, optional, defaults to 9, must be a positive integer value.
  • backoff Back-Off between retries, positive integer type, optional, defaults to 2, must be a positive integer value.
  • jitter Jitter applied to the Back-Off between retries (Modulo math operation), positive integer type, optional, defaults to 2, must be a positive integer value.
  • verbose be Verbose, bool type, optional, defaults to True.

Returns: None.

Examples:

import faster_than_requests as requests
requests.download3(
  [("http://INVALID/cat.jpg", "kitten.jpg"), ("http://INVALID/dog.jpg", "doge.jpg")],
  delay = 1, tries = 9, backoff = 2, jitter = 2, verbose = True,
)

Example of failed download output (intended):

$ python3 example_fail_all_retry.py

Retry: 3 of 3
(url: "http://NONEXISTENT", filename: "a.json")
No such file or directory
Additional info: "Name or service not known"
Retrying in 64 microseconds...
Retry: 2 of 3
(url: "http://NONEXISTENT", filename: "a.json")
No such file or directory
Additional info: "Name or service not known"
Retrying in 128 microseconds (Warning: This is the last Retry!).
Retry: 1 of 3
(url: "http://NONEXISTENT", filename: "a.json")
No such file or directory
Additional info: "Name or service not known"
Retrying in 256 microseconds (Warning: This is the last Retry!).
Traceback (most recent call last):
  File "example_fail_all_retry.py", line 3, in <module>
    downloader.download3()
  ...

$

set_headers()

Description: Set the HTTP Headers from the arguments. This is for the functions that do NOT allow http_headers as an argument.

Arguments:

  • http_headers HTTP Headers, List of Tuples type, required, example [("key", "value")], example [("DNT", "1")]. Tuples must be 2 items long; the list must not be empty and no tuple may be empty. The first item of the tuple is the key and the second item is the value; keys must not be empty strings, values can be empty strings, and both must be stripped.

Examples:

import faster_than_requests as requests
requests.set_headers(http_headers = [("key", "value")])
import faster_than_requests as requests
requests.set_headers([("key0", "value0"), ("key1", "value1")])
import faster_than_requests as requests
requests.set_headers([("content-type", "text/plain"), ("dnt", "1")])

Returns: None.

multipartdata2str()

Description: Takes MultiPart Data and returns a string representation. Converts MultipartData to 1 human-readable string. The human-friendly representation is not machine-friendly, so it is not Serialization nor Stringification, just for humans. It is faster than, and different from, stdlib parse_multipart.

Arguments:

  • multipart_data MultiPart data, List of Tuples type, optional, must not be an empty list, example [("key", "value")].

Examples:

import faster_than_requests as requests
requests.multipartdata2str([("key", "value")])

Returns: string.

datauri()

Description: Takes data and returns a standard Base64 Data URI (RFC-2397). At the time of writing, the Python stdlib does not have a function that returns a Data URI (RFC-2397) in the base64 module. The result can be used as a URL in HTML/CSS/JS. It is faster than, and different from, stdlib base64.

Arguments:

  • data Arbitrary Data, string type, required.
  • mime MIME Type of data, string type, required, example "text/plain".
  • encoding Encoding, string type, optional, defaults to "utf-8", example "utf-8", "utf-8" is recommended.

Examples:

import faster_than_requests as requests
requests.datauri("Nim", "text/plain")

Returns: string.

urlparse()

Description: Parse any URL and return its parsed primitive values, like scheme, username, password, hostname, port, path, query, anchor, opaque, etc. It is faster than, and different from, stdlib urlparse.

Arguments:

  • url The URL, string type, required.

Examples:

import faster_than_requests as requests
requests.urlparse("https://nim-lang.org")

Returns: scheme, username, password, hostname, port, path, query, anchor, opaque, etc.

urlencode()

Description: Encodes a URL according to RFC-3986, string to string. It is faster and different than stdlib urlencode.

Arguments:

  • url The URL, string type, required.
  • use_plus When use_plus is true, spaces are encoded as + instead of %20.

Examples:

import faster_than_requests as requests
requests.urlencode("https://nim-lang.org", use_plus = True)

Returns: string.

urldecode()

Description: Decodes a URL according to RFC-3986, string to string. It is faster and different than stdlib unquote.

Arguments:

  • url The URL, string type, required.
  • use_plus When use_plus is true, + is decoded as a space instead of %20.

Examples:

import faster_than_requests as requests
requests.urldecode(r"https%3A%2F%2Fnim-lang.org", use_plus = False)

Returns: string.

encodequery()

Description: Encode a query from a list of key-value tuples according to RFC-3986, to string. It is faster than, and different from, stdlib quote_plus.

Arguments:

  • query List of Tuples, required, example [("key", "value")], example [("DNT", "1")].
  • omit_eq If a value is an empty string, then the ="" is omitted, unless omit_eq is false.
  • use_plus When use_plus is true, spaces are encoded as + instead of %20.

Examples:

import faster_than_requests as requests
requests.encodequery([("key", "value")], use_plus = True, omit_eq = True)

Returns: string.

encodexml()

Description: Convert the characters &, <, >, " in a string to an HTML-safe string, output is Valid XML. Use this if you need to display text that might contain such characters in HTML, SVG or XML. It is faster and different than stdlib html.escape.

Arguments:

  • s Arbitrary string, required.

Examples:

import faster_than_requests as requests
requests.encodexml("<h1>Hello World</h1>")

Returns: string.

minifyhtml()

Description: Fast HTML and SVG Minifier. Not an Obfuscator.

Arguments:

  • html HTML string, required.

Examples:

import faster_than_requests as requests
requests.minifyhtml("<h1>Hello</h1>          <h1>World</h1>")

Returns: string.

gen_auth_header()

Description: Helper for HTTP Authentication headers.

Returns 1 string kinda like "Basic base64(username:password)", so it can be used like [("Authorization", gen_auth_header("username", "password"))]. See #168 (comment).

Arguments:

  • username Username string, must not be empty string, required.
  • password Password string, must not be empty string, required.
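Examples: (a hedged sketch combining gen_auth_header() with the http_headers argument of get(); the credentials and endpoint are illustrative)

import faster_than_requests as requests
auth = requests.gen_auth_header("username", "password")
requests.get("http://httpbin.org/basic-auth/username/password", http_headers=[("Authorization", auth)])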

Returns: string.

debugs()

Description: Debug the internal Configuration of the library. Takes no arguments, returns nothing, and prints the pretty-printed, human-friendly, multi-line JSON Configuration to the standard output terminal.

Examples:

import faster_than_requests as requests
requests.debugs()

Arguments: None.

Returns: None.

optimizeGC()

Description: This module uses a compile-time deterministic memory management GC (kinda like Rust, but for Python). Python at run-time makes a pause, runs a Garbage Collector, and resumes again after the pause.

gctricks.optimizeGC allows you to temporarily omit the Python GC pauses at run-time inside a context manager block. This is the proper way to use this module for Benchmarks! It is optional but recommended. We did not invent this; it is inspired by work from the Instagram Engineering team and battle-tested by them.

This is NOT a function, it is a context manager; it takes no arguments and returns nothing.

This calls init_client() at the start and close_client() at the end automatically.

Examples:

from gctricks import optimizeGC

with optimizeGC:
    ...  # All your HTTP code here. Chill the GC. Calls init_client() and close_client() automatically.

# GC run-time pauses enabled again.

init_client()

Description: Instantiate the HTTP Client object, for deferred initialization, call it before the start of all HTTP operations.

get(), post(), put(), patch(), delete(), head() do NOT need this, because they auto-init; this exists for performance reasons, to defer the initialization, and was requested by the community.

This is optional but recommended.

Read optimizeGC documentation before using.

Arguments: None.

Examples:

import faster_than_requests as requests
requests.init_client()
# All your HTTP code here.

Returns: None.

close_client()

Description: Tear down the HTTP Client object, for deferred de-initialization, call it after the end of all HTTP operations.

get(), post(), put(), patch(), delete(), head() do NOT need this, because they manage the client automatically; this exists for performance reasons, to defer the de-initialization, and was requested by the community.

This is optional but recommended.

Read optimizeGC documentation before using.

Arguments: None.

Examples:

import faster_than_requests as requests
# All your HTTP code here.
requests.close_client()

Returns: None.
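A hedged sketch of the full client lifecycle using both helpers; the try/finally wrapper is our suggestion, not something the library requires:

import faster_than_requests as requests

requests.init_client()
try:
    print(requests.get2str("http://example.com"))
finally:
    requests.close_client()  # tear down even if a request raised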

For more examples check the Examples and Tests.

Instead of having a pair of functions with a lot of arguments that you must provide to make them work, we have tiny functions with very few arguments that do one thing, and do it as fast as possible.

A lot of functions are oriented to Data Science, Big Data, Open Data, Web Scraping, and working with HTTP REST JSON APIs.

Install

  • pip install faster_than_requests

Docker

  • Take a quick test drive on Docker!
$ ./build-docker.sh
$ ./run-docker.sh
$ ./server4benchmarks &  # Inside Docker.
$ python3 benchmark.py   # Inside Docker.

Dependencies

  • None

Platforms

  • ✅ Linux
  • ✅ Windows
  • ✅ Mac
  • ✅ Android
  • ✅ Raspberry Pi
  • ✅ BSD

Extras

More Faster Libraries...

Requisites

  • Python 3.
  • 64 Bit.

Windows

  • Documentation assumes experience with Git, GitHub, cmd, compiled software, and a PC with Administrator access.
  • If installation fails on Windows, just use the Source Code:


The only software needed is Git for Windows and Nim.

Reboot after installing. Administrator access is required for the install. Everything must be 64-bit.

If that fails too, don't waste time and go directly for Docker for Windows.

For info about how to install Git for Windows, read Git for Windows Documentation.

For info about how to install Nim, read Nim Documentation.

For info about how to install Docker for Windows, read the Docker for Windows Documentation.

GitHub Actions builds everything from zero on each push; use it as guidance too.

nimble install nimpy
nim c -d:ssl -d:danger --app:lib --tlsEmulation:on --out:faster_than_requests.pyd faster_than_requests.nim

Sponsors

  • Become a Sponsor and help improve this library with the features you want! We need Sponsors!
Bitcoin BTC

  • BEP20 Binance Smart Chain Network BSC: 0xb78c4cf63274bb22f83481986157d234105ac17e
  • BTC Bitcoin Network: 1Pnf45MgGgY32X4KDNJbutnpx96E4FxqVi

Ethereum ETH, Dai DAI, Uniswap UNI, Axie Infinity AXS, Smooth Love Potion SLP

  • BEP20 Binance Smart Chain Network BSC: 0xb78c4cf63274bb22f83481986157d234105ac17e
  • ERC20 Ethereum Network: 0xb78c4cf63274bb22f83481986157d234105ac17e

Tether USDT

  • BEP20 Binance Smart Chain Network BSC: 0xb78c4cf63274bb22f83481986157d234105ac17e
  • ERC20 Ethereum Network: 0xb78c4cf63274bb22f83481986157d234105ac17e
  • TRC20 Tron Network: TWGft53WgWvH2mnqR8ZUXq1GD8M4gZ4Yfu

Solana SOL

  • BEP20 Binance Smart Chain Network BSC: 0xb78c4cf63274bb22f83481986157d234105ac17e
  • SOL Solana Network: FKaPSd8kTUpH7Q76d77toy1jjPGpZSxR4xbhQHyCMSGq

Cardano ADA

  • BEP20 Binance Smart Chain Network BSC: 0xb78c4cf63274bb22f83481986157d234105ac17e
  • ADA Cardano Network: DdzFFzCqrht9Y1r4Yx7ouqG9yJNWeXFt69xavLdaeXdu4cQi2yXgNWagzh52o9k9YRh3ussHnBnDrg7v7W2hSXWXfBhbo2ooUKRFMieM

Sandbox SAND, Decentraland MANA

  • ERC20 Ethereum Network: 0xb78c4cf63274bb22f83481986157d234105ac17e

Algorand ALGO

  • ALGO Algorand Network: WM54DHVZQIQDVTHMPOH6FEZ4U2AU3OBPGAFTHSCYWMFE7ETKCUUOYAW24Q

Binance

  • https://pay.binance.com/en/checkout/e92e536210fd4f62b426ea7ee65b49c3

FAQ

  • What's the idea, inspiration, reason, etc.?

Feel free to Fork, Clone, Download, Improve, Reimplement, and play with this Open Source. Make it 10 times faster, 10 times smaller.

  • Does this work with SSL?

Yes.

  • Does this work without SSL?

Yes.

  • Does this require Cython?

No.

  • Does this run on PyPy?

No.

  • Does this run on Python 2?

I dunno. (Not supported)

  • Does this run on 32-bit?

No.

  • Does this run with Clang?

No.

  • Where to get help?

https://github.com/juancarlospaco/faster-than-requests/issues

  • How to set the URL?

url="http://example.com" (1st argument always).

  • How to set the HTTP Body?

body="my body"

  • How to set an HTTP Header key=value?

set_headers()

  • How can it be faster than PyCurl?

I dunno.

  • Why use Tuple instead of Dict for HTTP Headers?

For speed performance reasons: dict is slower, bigger, heavier, and mutable compared to tuple.

  • Why does it need 64-bit?

Maybe it works on 32-bit, but it is not supported; integer sizes are too small, and performance can be worse.

  • Why does it need Python 3?

Maybe it works on Python 2, but it is not supported, and performance can be worse; we suggest migrating to Python 3.

  • Can I wrap the functions in a try: except: block?

Functions do not have internal try: except: blocks, so you can wrap them inside try: except: blocks if you need very resilient code.

  • PIP fails to install or fails to build the wheel?

Add at the end of the PIP install command:

--isolated --disable-pip-version-check --no-cache-dir --no-binary :all:

Not my Bug.

  • How to Build the project?

build.sh or build.nims

  • How to Package the project?

package.sh or package.nims

  • Does this require Nimble?

No.

  • What's the unit of measurement for speed?

Unmodified raw output of the Python timeit module.

Please send a Pull Request to Python to improve the output of timeit.

  • Is the LoC count a lie, since it does not count the lines of code of the Compiler?

Projects that use Cython won't count the whole of Cython in their LoC, so we won't either.

