HGraphDB - HBase as a TinkerPop Graph Database

HGraphDB is a client layer for using HBase as a graph database. It is an implementation of the Apache TinkerPop 3 interfaces.

Note: For HBase 1.x, use HGraphDB 2.2.2. For HBase 2.x, use HGraphDB 3.0.0.

Installing

Releases of HGraphDB are deployed to Maven Central.

<dependency>
    <groupId>io.hgraphdb</groupId>
    <artifactId>hgraphdb</artifactId>
    <version>3.2.0</version>
</dependency>

Setup

To initialize HGraphDB, create an HBaseGraphConfiguration instance, and then use a static factory method to create an HBaseGraph instance.

Configuration cfg = new HBaseGraphConfiguration()
    .setInstanceType(InstanceType.DISTRIBUTED)
    .setGraphNamespace("mygraph")
    .setCreateTables(true)
    .setRegionCount(numRegionServers)
    .set("hbase.zookeeper.quorum", "127.0.0.1")
    .set("zookeeper.znode.parent", "/hbase-unsecure");
HBaseGraph graph = (HBaseGraph) GraphFactory.open(cfg);

As shown above, HBase-specific configuration parameters can be passed directly; they are used when obtaining the HBase connection.

The resulting graph can be used like any other TinkerPop graph instance.

Vertex v1 = graph.addVertex(T.id, 1L, T.label, "person", "name", "John");
Vertex v2 = graph.addVertex(T.id, 2L, T.label, "person", "name", "Sally");
v1.addEdge("knows", v2, T.id, "edge1", "since", LocalDate.now());

A few things to note from the above example:

  • HGraphDB accepts user-supplied IDs, for both vertices and edges.
  • The following types can be used for both IDs and property values:
    • boolean
    • String
    • numbers (byte, short, int, long, float, double)
    • java.math.BigDecimal
    • java.time.LocalDate
    • java.time.LocalTime
    • java.time.LocalDateTime
    • java.time.Duration
    • java.util.UUID
    • byte arrays
    • Enum instances
    • Kryo-serializable instances
    • Java-serializable instances

Using Indices

Two types of indices are supported by HGraphDB:

  • Vertices can be indexed by label and property.
  • Edges can be indexed by label and property, specific to a vertex.

An index is created as follows:

graph.createIndex(ElementType.VERTEX, "person", "name");
...
graph.createIndex(ElementType.EDGE, "knows", "since");

The above commands should be run before the relevant data is populated. To create an index after data has been populated, first create the index with the following parameters:

graph.createIndex(ElementType.VERTEX, "person", "name", false, /* populate */ true, /* async */ true);

Then run a MapReduce job using the hbase command:

hbase io.hgraphdb.mapreduce.index.PopulateIndex \
    -t vertex -l person -p name -op /tmp -ca gremlin.hbase.namespace=mygraph

Once an index is created and data has been populated, it can be used as follows:

// get persons named John
Iterator<Vertex> it = graph.verticesByLabel("person", "name", "John");
...
// get persons first known by John between 2007-01-01 (inclusive) and 2008-01-01 (exclusive)
Iterator<Edge> it = johnV.edges(Direction.OUT, "knows", "since", 
    LocalDate.parse("2007-01-01"), LocalDate.parse("2008-01-01"));

Note that the indices support range queries, where the start of the range is inclusive and the end of the range is exclusive.

An index can also be specified as a unique index. For a vertex index, this means only one vertex can have a particular property name-value for the given vertex label. For an edge index, this means only one edge of a specific vertex can have a particular property name-value for a given edge label.

graph.createIndex(ElementType.VERTEX, "person", "name", /* unique */ true);

To drop an index, invoke a MapReduce job using the hbase command:

hbase io.hgraphdb.mapreduce.index.DropIndex \
    -t vertex -l person -p name -op /tmp -ca gremlin.hbase.namespace=mygraph

Pagination

Once an index is defined, results can be paginated. HGraphDB supports keyset pagination, for both vertex and edge indices.

// get first page of persons (note that null is passed as start key)
final int pageSize = 20;
Iterator<Vertex> it = graph.verticesWithLimit("person", "name", null, pageSize);
...
// get next page using start key of last person from previous page
// (pass pageSize + 1 since the start key itself is included in the results)
it = graph.verticesWithLimit("person", "name", "John", pageSize + 1);
...
// get first page of persons most recently known by John
Iterator<Edge> it = johnV.edgesWithLimit(Direction.OUT, "knows", "since", 
    null, pageSize, /* reversed */ true);

Also note that indices can be paginated in descending order by passing reversed as true.

Schema Management

By default HGraphDB does not use a schema. Schema management can be enabled by calling HBaseGraphConfiguration.useSchema(true). Once schema management is enabled, the schema for vertex and edge labels can be defined.

graph.createLabel(ElementType.VERTEX, "author", /* id */ ValueType.STRING, "age", ValueType.INT);
graph.createLabel(ElementType.VERTEX, "book", /* id */ ValueType.STRING, "publisher", ValueType.STRING);
graph.createLabel(ElementType.EDGE, "writes", /* id */ ValueType.STRING, "since", ValueType.DATE);   

Edge labels must be explicitly connected to vertex labels before edges are added to the graph.

graph.connectLabels("author", "writes", "book"); 

Additional properties can be added to labels at a later time; otherwise labels cannot be changed.

graph.updateLabel(ElementType.VERTEX, "author", "height", ValueType.DOUBLE);

Whenever vertices or edges are added to the graph, they will first be validated against the schema.
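As a sketch of what validation looks like in practice (the exact exception type is an assumption, not something this README specifies), a write whose property type conflicts with the declared schema would be rejected:

```java
// Sketch only: assumes schema management is enabled and the labels above
// are defined. The exact exception type thrown is an assumption.
try {
    // "publisher" is declared as ValueType.STRING for the "book" label,
    // so a numeric value should fail schema validation.
    graph.addVertex(T.id, "b1", T.label, "book", "publisher", 42);
} catch (HBaseGraphException e) {
    // handle the validation failure
}
```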

Counters

One unique feature of HGraphDB is its support for counters. Using counters requires that schema management be enabled.

graph.createLabel(ElementType.VERTEX, "author", ValueType.STRING, "bookCount", ValueType.COUNTER);

HBaseVertex v = (HBaseVertex) graph.addVertex(T.id, "Kierkegaard", T.label, "author");
v.incrementProperty("bookCount", 1L);

One caveat is that indices on counters are not supported.

Counters can be used by clients to materialize the number of edges on a node, for example, which will be more efficient than retrieving all the edges in order to obtain the count. In this case, whenever an edge is added or removed, the client would either increment or decrement the corresponding counter.
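As an illustrative sketch of the pattern described above, a client could maintain a materialized edge count alongside the edges themselves. The "knowsCount" property and the labels here are hypothetical; the sketch assumes schema management is enabled, that the relevant labels are defined and connected, and that "knowsCount" was declared with ValueType.COUNTER.

```java
// Sketch only: maintain a materialized count of "knows" edges on a vertex.
// Assumes a "knowsCount" property declared with ValueType.COUNTER.
HBaseVertex john = (HBaseVertex) graph.addVertex(T.id, "john", T.label, "person");
Vertex sally = graph.addVertex(T.id, "sally", T.label, "person");

john.addEdge("knows", sally);
john.incrementProperty("knowsCount", 1L);    // edge added: increment atomically

// later, when the edge is removed:
// john.incrementProperty("knowsCount", -1L); // edge removed: decrement atomically
```

Reading "knowsCount" is then a single property lookup rather than a scan over all incident edges.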

Counter updates are atomic as they make use of the underlying support for counters in HBase.

Graph Analytics with Giraph

HGraphDB provides integration with Apache Giraph through two input formats, HBaseVertexInputFormat and HBaseEdgeInputFormat, which read from the vertices table and the edges table, respectively. HGraphDB also provides two abstract output formats, HBaseVertexOutputFormat and HBaseEdgeOutputFormat, that can be used to modify the graph after a Giraph computation.

Finally, HGraphDB provides a testing utility, InternalHBaseVertexRunner, that is similar to InternalVertexRunner in Giraph, and that can be used to run Giraph computations using a local Zookeeper instance running in another thread.

See this blog post for more details on using Giraph with HGraphDB.

Graph Analytics with Spark GraphFrames

Apache Spark GraphFrames can be used to analyze graphs stored in HGraphDB. First the vertices and edges need to be wrapped with Spark DataFrames using the Spark-on-HBase Connector and a custom SHCDataType. Once the vertex and edge DataFrames are available, obtaining a GraphFrame is as simple as the following:

val g = GraphFrame(verticesDataFrame, edgesDataFrame)

See this blog post for more details on using Spark GraphFrames with HGraphDB.

Graph Analytics with Flink Gelly

HGraphDB provides support for analyzing graphs with Apache Flink Gelly. First the vertices and edges need to be wrapped with Flink DataSets by importing graph data with instances of HBaseVertexInputFormat and HBaseEdgeInputFormat. After obtaining the DataSets, a Gelly graph can be created as follows:

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
Graph gelly = Graph.fromTupleDataSet(vertices, edges, env);

See this blog post for more details on using Flink Gelly with HGraphDB.

Support for Google Cloud Bigtable

HGraphDB can be used with Google Cloud Bigtable. Since Bigtable does not support namespaces, we set the name of the graph as the table prefix below.

Configuration cfg = new HBaseGraphConfiguration()
    .setInstanceType(InstanceType.BIGTABLE)
    .setGraphTablePrefix("mygraph")
    .setCreateTables(true)
    .set("hbase.client.connection.impl", "com.google.cloud.bigtable.hbase2_x.BigtableConnection")
    .set("google.bigtable.instance.id", "my-instance-id")
    .set("google.bigtable.project.id", "my-project-id");
HBaseGraph graph = (HBaseGraph) GraphFactory.open(cfg);

Using the Gremlin Console

One benefit of having a TinkerPop layer to HBase is that a number of graph-related tools become available, which are all part of the TinkerPop ecosystem. These tools include the Gremlin DSL and the Gremlin console. To use HGraphDB in the Gremlin console, run the following commands:

         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
plugin activated: tinkerpop.tinkergraph
gremlin> :install org.apache.hbase hbase-client 2.2.1
gremlin> :install org.apache.hbase hbase-common 2.2.1
gremlin> :install org.apache.hadoop hadoop-common 2.7.4
gremlin> :install io.hgraphdb hgraphdb 3.0.0
gremlin> :plugin use io.hgraphdb

Then restart the Gremlin console and run the following:

gremlin> graph = HBaseGraph.open("mygraph", "127.0.0.1", "/hbase-unsecure")

Performance Tuning

Caching

HGraphDB provides two kinds of caches, global caches and relationship caches. Global caches contain both vertices and edges. Relationship caches are specific to a vertex and cache the edges that are incident to the vertex. Both caches can be controlled through HBaseGraphConfiguration by specifying a maximum size for each type of cache as well as a TTL for elements after they have been accessed via the cache. Specifying a maximum size of 0 will disable caching.
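A configuration sketch for the caches described above might look like the following. The setter names here are assumptions based on the description, not confirmed by this README; consult HBaseGraphConfiguration for the exact API.

```java
// Sketch only: setter names are assumptions based on the cache description.
Configuration cfg = new HBaseGraphConfiguration()
    .setInstanceType(InstanceType.DISTRIBUTED)
    .setGraphNamespace("mygraph")
    .setElementCacheMaxSize(10000)       // global cache of vertices and edges
    .setElementCacheTtlSecs(60)          // expire entries 60s after last access
    .setRelationshipCacheMaxSize(1000)   // per-vertex cache of incident edges
    .setRelationshipCacheTtlSecs(60);
// A maximum size of 0 disables the corresponding cache.
```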

Lazy Loading

By default, vertices and edges are eagerly loaded. In some failure conditions, it may be possible for indices to point to vertices or edges which have been deleted. By eagerly loading graph elements, stale data can be filtered out and removed before it reaches the client. However, this incurs a slight performance penalty. As an alternative, lazy loading can be enabled. This can be done by calling HBaseGraphConfiguration.setLazyLoading(true). However, if there are stale indices in the graph, the client will need to handle the exception that is thrown when an attempt is made to access a non-existent vertex or edge.

Bulk Loading

HGraphDB also provides an HBaseBulkLoader class for more performant loading of vertices and edges. The bulk loader will not attempt to check if elements with the same ID already exist when adding new elements.
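Usage of the bulk loader might look like the following sketch; the method names are assumptions based on its description above (it mirrors the normal graph API but skips existence checks).

```java
// Sketch only: method names on HBaseBulkLoader are assumptions.
// No check is made for elements with the same ID, so duplicate IDs
// will silently overwrite earlier rows.
HBaseBulkLoader loader = new HBaseBulkLoader(graph);
Vertex v1 = loader.addVertex(T.id, 1L, T.label, "person", "name", "John");
Vertex v2 = loader.addVertex(T.id, 2L, T.label, "person", "name", "Sally");
loader.addEdge(v1, v2, "knows", "since", LocalDate.now());
loader.close();   // flush any buffered mutations
```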

Implementation Notes

HGraphDB uses a tall table schema. The schema is created in the namespace specified to the HBaseGraphConfiguration. The tables look as follows:

Vertex Table

| Row Key | Column: label | Column: createdAt | Column: [property1 key] | Column: [property2 key] | ... |
| --- | --- | --- | --- | --- | --- |
| [vertex ID] | [label value] | [createdAt value] | [property1 value] | [property2 value] | ... |

Edge Table

| Row Key | Column: label | Column: fromVertex | Column: toVertex | Column: createdAt | Column: [property1 key] | Column: [property2 key] | ... |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [edge ID] | [label value] | [fromVertex ID] | [toVertex ID] | [createdAt value] | [property1 value] | [property2 value] | ... |

Vertex Index Table

| Row Key | Column: createdAt | Column: vertexID |
| --- | --- | --- |
| [vertex label, isUnique, property key, property value, vertex ID (if not unique)] | [createdAt value] | [vertex ID (if unique)] |

Edge Index Table

| Row Key | Column: createdAt | Column: vertexID | Column: edgeID |
| --- | --- | --- | --- |
| [vertex1 ID, direction, isUnique, property key, edge label, property value, vertex2 ID (if not unique), edge ID (if not unique)] | [createdAt value] | [vertex2 ID (if unique)] | [edge ID (if unique)] |

Index Metadata Table

| Row Key | Column: createdAt | Column: isUnique | Column: state |
| --- | --- | --- | --- |
| [label, property key, element type] | [createdAt value] | [isUnique value] | [state value] |

Note that in the index tables, if the index is a unique index, then the indexed IDs are stored in the column values; otherwise they are stored in the row key.

If schema management is enabled, two additional tables are used:

Label Metadata Table

| Row Key | Column: id | Column: createdAt | Column: [property1 key] | Column: [property2 key] | ... |
| --- | --- | --- | --- | --- | --- |
| [label, element type] | [id type] | [createdAt value] | [property1 type] | [property2 type] | ... |

Label Connections Table

| Row Key | Column: createdAt |
| --- | --- |
| [from vertex label, edge label, to vertex label] | [createdAt value] |

HGraphDB was designed to support the features mentioned here.

Future Enhancements

Possible future enhancements include MapReduce jobs for the following:

  • Cleaning up stale indices.
