

Weighted MinHash implementation on CUDA (multi-gpu).

MinHashCuda (DOI: 10.5281/zenodo.286955)

This project is a reimplementation of the Weighted MinHash calculation from ekzhu/datasketch in NVIDIA CUDA, which brings a 600-1000x speedup over numpy with MKL (Titan X 2016 vs. 12-core Xeon E5-1650). It supports running on multiple GPUs for even greater speed; e.g., processing a 10M x 12M matrix with sparsity 0.0014 takes 40 minutes on two Titan Xs. The produced results are bit-to-bit identical to the reference implementation. Read the article.

The input format is a 32-bit float CSR matrix. The code is optimized for low memory consumption and speed.
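For readers unfamiliar with the CSR layout, it stores a sparse matrix as three flat arrays: the non-zero values, their column indices, and row pointers delimiting each row's slice of the first two. A minimal sketch using plain numpy (the variable names are invented for this illustration):

```python
import numpy as np

# A small dense matrix to be stored as CSR (illustration only).
dense = np.array([
    [0.0, 2.5, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 0.0, 3.5],
], dtype=np.float32)

# CSR keeps three arrays: the non-zero values in row-major order,
# their column indices, and row pointers into the first two arrays.
values = dense[dense != 0]                      # [2.5, 1.0, 3.5]
cols = np.nonzero(dense)[1].astype(np.uint32)   # [1, 0, 2]
rows = np.zeros(dense.shape[0] + 1, dtype=np.uint32)
rows[1:] = np.cumsum((dense != 0).sum(axis=1))  # [0, 1, 2, 3]
```

scipy.sparse.csr_matrix builds the same three arrays (as .data, .indices, and .indptr) from a dense input, which is what the Python example below relies on.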

What is Weighted MinHash

MinHash can be used to compress an unweighted set or a binary vector and to estimate the unweighted Jaccard similarity. It is possible to adapt MinHash to weighted Jaccard by expanding each item (or dimension) by its weight; however, this approach does not support real-valued weights and becomes very expensive when the weights are large. Weighted MinHash was created by Sergey Ioffe, and its performance does not depend on the weights, as long as the universe of all possible items (or dimensions, for vectors) is known. This makes it unsuitable for stream processing, when the knowledge of unseen items cannot be assumed.
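To make the expansion trick concrete, here is a hypothetical pure-Python illustration (expand, jaccard, and weighted_jaccard are names invented for this example): replicating each item by its integer weight makes the plain Jaccard of the expanded sets equal the weighted Jaccard of the originals, which is exactly what breaks down for real-valued or very large weights.

```python
def expand(weights):
    # Replicate each item by its integer weight: {"a": 2} -> {("a", 0), ("a", 1)}.
    out = set()
    for item, w in weights.items():
        out.update((item, i) for i in range(w))
    return out

def jaccard(a, b):
    # Plain (unweighted) Jaccard similarity of two sets.
    return len(a & b) / len(a | b)

def weighted_jaccard(wa, wb):
    # sum of minima over sum of maxima, taken over the union of keys.
    keys = set(wa) | set(wb)
    num = sum(min(wa.get(k, 0), wb.get(k, 0)) for k in keys)
    den = sum(max(wa.get(k, 0), wb.get(k, 0)) for k in keys)
    return num / den

wa = {"x": 3, "y": 1}
wb = {"x": 1, "y": 2}
# min-sum = 1 + 1 = 2, max-sum = 3 + 2 = 5, so both give 2/5 = 0.4.
assert jaccard(expand(wa), expand(wb)) == weighted_jaccard(wa, wb) == 0.4
```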

Building

cmake -DCMAKE_BUILD_TYPE=Release . && make

It requires cudart, curand >= 8.0, an OpenMP 4.0 compatible compiler (that is, not gcc <= 4.8) and cmake >= 3.2. If the numpy headers are not found, specify the include path by defining NUMPY_INCLUDES. If you do not want to build the Python native module, add -D DISABLE_PYTHON=y. If CUDA is not found automatically, add -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0 (change the path to the actual one).

If you are building in a Docker container, you may encounter the following error:

Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)

This means you need to install the rest of the CUDA toolkit, e.g. as in the nvidia/cuda:8.0-devel Dockerfile. If you still run into

Could NOT find CUDA (missing: CUDA_INCLUDE_DIRS)

then run:

ln -s /usr/local/cuda/targets/x86_64-linux/include/* /usr/local/cuda/include/

Python users: if you are using Linux x86-64 and CUDA 8.0, then you can install this easily:

pip install libMHCUDA

Otherwise, you'll have to install it from source:

pip install git+https://github.com/src-d/minhashcuda.git

Building inside Python virtual environments such as pyenv or conda is not officially supported. You can still submit patches to fix the related problems.

Testing

test.py contains the unit tests based on unittest. They require datasketch and scipy.

Contributions

...are welcome! See CONTRIBUTING and code of conduct.

License

Apache 2.0

Python example

import libMHCUDA
import numpy
from scipy.sparse import csr_matrix

# Prepare the rows
numpy.random.seed(1)
data = numpy.random.randint(0, 100, (6400, 130))
mask = numpy.random.randint(0, 5, data.shape)
data *= (mask >= 4)
del mask
m = csr_matrix(data, dtype=numpy.float32)
del data

# We've got 80% sparse matrix 6400 x 130
# Initialize the hasher aka "generator" with 128 hash samples for every row
gen = libMHCUDA.minhash_cuda_init(m.shape[-1], 128, seed=1, verbosity=1)

# Calculate the hashes. Can be executed several times with different number of rows
hashes = libMHCUDA.minhash_cuda_calc(gen, m)

# Free the resources
libMHCUDA.minhash_cuda_fini(gen)

The functions can be easily wrapped into a class (not included).

Python API

Import "libMHCUDA".

def minhash_cuda_init(dim, samples, seed=time(), deferred=False, devices=0, verbosity=0)

Creates the hasher.

dim integer, the number of dimensions in the input. In other words, length of each weight vector. Must be less than 2³².

samples integer, the number of hash samples. The larger the value, the more precise the estimates, but the larger the hash size and the longer the calculation (the time is linear in samples). Must not be prime, for performance reasons, and must be less than 2¹⁶.

seed integer, the random generator seed for reproducible results.

deferred boolean, if True, disables the initialization of WMH parameters with random numbers. In that case, the user is expected to call minhash_cuda_assign_random_vars() afterwards.

devices integer, bitwise OR-ed CUDA device indices: 1 means the first device, 2 the second, 3 the first and the second together. The special value 0 enables all available devices. Default value is 0.

verbosity integer, 0 means complete silence, 1 means mere progress logging, 2 means lots of output.

return integer, pointer to generator struct (opaque).
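The devices bitmask above can be composed with bit shifts; a tiny illustration (the variable names are made up here):

```python
# Bit i of the "devices" argument selects CUDA device i.
first = 1 << 0           # 1: first device only
second = 1 << 1          # 2: second device only
both = first | second    # 3: first and second devices together
assert both == 3
```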

def minhash_cuda_calc(gen, matrix, row_start=0, row_finish=0xffffffff)

Calculates the Weighted MinHashes. May reallocate memory on the GPU but does its best to reuse the buffers.

gen integer, pointer to generator struct obtained from init().

matrix scipy.sparse.csr_matrix instance, the number of columns must match dim. The number of rows must be less than 2³¹.

row_start integer, slice start offset (the index of the first row to process). Enables efficient zero-copy sparse matrix slicing.

row_finish integer, slice finish offset (the index of the row after the last one to process). The resulting matrix row slice is [row_start:row_finish].

return numpy.ndarray of shape (number of matrix rows, samples, 2) and dtype uint32.
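Each of the samples entries for a row is a pair of uint32 values, and two rows are considered similar in proportion to the fraction of pairs that match in both components. A sketch of turning two rows' hashes (each of shape (samples, 2)) into a similarity estimate; estimate_jaccard is a helper name invented here, libMHCUDA does not ship it, and the arrays below are synthetic stand-ins for real hashes:

```python
import numpy as np

def estimate_jaccard(h1, h2):
    # Each row of h1/h2 is one (k, t) hash sample; the Weighted Jaccard
    # estimate is the fraction of samples whose two components both match.
    return float(np.mean((h1 == h2).all(axis=1)))

rng = np.random.RandomState(0)
h1 = rng.randint(0, 2**31, (128, 2)).astype(np.uint32)
h2 = h1.copy()
h2[:32] = rng.randint(0, 2**31, (32, 2))  # re-roll a quarter of the samples
sim = estimate_jaccard(h1, h2)  # ~0.75, since 96 of 128 samples still match
```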

def minhash_cuda_fini(gen)

Disposes any resources allocated by init() and subsequent calc()-s. Generator pointer is invalidated.

gen integer, pointer to generator struct obtained from init().

C API

Include "minhashcuda.h".

MinhashCudaGenerator* mhcuda_init(
    uint32_t dim, uint16_t samples, uint32_t seed, int deferred,
    uint32_t devices, int verbosity, MHCUDAResult *status)

Initializes the Weighted MinHash generator.

dim the number of dimensions in the input. In other words, length of each weight vector.

samples the number of hash samples. The larger the value, the more precise the estimates, but the larger the hash size and the longer the calculation (the time is linear in samples). Must not be prime, for performance reasons.

seed the random generator seed for reproducible results.

deferred if set to anything except 0, disables the initialization of WMH parameters with random numbers. In that case, the user is expected to call mhcuda_assign_random_vars() afterwards.

devices bitwise OR-ed CUDA device indices, e.g. 1 means first device, 2 means second device, 3 means using first and second device. Special value 0 enables all available devices.

verbosity 0 means complete silence, 1 means mere progress logging, 2 means lots of output.

status pointer to the reported return code. May be nullptr. In case of any error, the returned result is nullptr and the code is stored into *status (with nullptr check).

return pointer to the allocated generator opaque struct.

MHCUDAResult mhcuda_calc(
    const MinhashCudaGenerator *gen, const float *weights,
    const uint32_t *cols, const uint32_t *rows, uint32_t length,
    uint32_t *output)

Calculates the Weighted MinHash-es for the specified CSR matrix.

gen pointer to the generator opaque struct obtained from mhcuda_init().

weights the sparse matrix's values.

cols the sparse matrix's column indices; must be the same size as weights.

rows the sparse matrix's row pointers. The first element is always 0 and the last is effectively the size of weights and cols.

length the number of rows. The "rows" argument must have size (length + 1) because of the leading 0.

output the resulting hashes, an array of size length x samples x 2.

return the status code.

MHCUDAResult mhcuda_fini(MinhashCudaGenerator *gen);

Frees any resources allocated by mhcuda_init() and mhcuda_calc(), including device buffers. Generator pointer is invalidated.

gen pointer to the generator opaque struct obtained from mhcuda_init().

return the status code.

