  • Stars: 623
  • Rank: 69,289 (Top 2%)
  • Language: C#
  • License: Apache License 2.0
  • Created: about 11 years ago
  • Updated: about 2 months ago

Repository Details

DataStax C# Driver for Apache Cassandra

A modern, feature-rich and highly tunable C# client library for Apache Cassandra (2.0+) using Cassandra's binary protocol and Cassandra Query Language v3.

It also provides additional features for DataStax Enterprise:

  • IAuthenticator implementations that use the authentication scheme negotiation in the server-side DseAuthenticator.
  • DSE graph integration.
  • Serializers for geospatial types which integrate seamlessly with the driver.

The driver targets .NET Framework 4.5.2 and .NET Standard 2.0. For more detailed information about platform compatibility, see the Compatibility section below.

Installation

Get it on Nuget

PM> Install-Package CassandraCSharpDriver
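
If you prefer the .NET CLI, the equivalent command (assuming it is run from your project directory) is:

dotnet add package CassandraCSharpDriver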

Features

  • Sync and Async API
  • Simple, Prepared, and Batch statements
  • Asynchronous IO, parallel execution, request pipelining
  • Connection pooling
  • Auto node discovery
  • Automatic reconnection
  • Configurable load balancing and retry policies
  • Works with any cluster size
  • LINQ to CQL and ADO.NET support

Documentation

Getting Help

You can use the project Mailing list or create a ticket on the Jira issue tracker. Additionally, you can ask questions on DataStax Community.

Upgrading from previous versions or from the DSE C# driver

If you are upgrading from previous versions of the driver, visit the Upgrade Guide.

If you are upgrading from the DSE C# driver (which has been unified with this driver), there's also a section related to this on the Upgrade Guide.

Basic Usage

// Configure the builder with your cluster's contact points
var cluster = Cluster.Builder()
                     .AddContactPoints("host1")
                     .Build();

// Connect to the nodes using a keyspace
var session = cluster.Connect("sample_keyspace");

// Execute a query on a connection synchronously
var rs = session.Execute("SELECT * FROM sample_table");

// Iterate through the RowSet
foreach (var row in rs)
{
    var value = row.GetValue<int>("sample_int_column");

    // Do something with the value
}
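
When your application terminates, remember to release the driver resources; a minimal sketch using the cluster instance built above:

// Shut down the cluster when the application terminates; this closes all
// sessions and connection pools created from this cluster instance
cluster.Shutdown();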

If you are using DataStax Astra, you can configure your cluster instance by setting the secure connection bundle and the user credentials:

// Configure the builder with your cluster's cloud secure connection bundle and credentials
var cluster = Cluster.Builder()
                     .WithCloudSecureConnectionBundle("path/to/secure-connect-DATABASE_NAME.zip")
                     .WithCredentials("user_name", "p@ssword1")
                     .Build();

Prepared statements

Prepare your query once and bind different parameters on each execution to obtain the best performance.

// Prepare a statement once
var ps = session.Prepare("UPDATE user_profiles SET birth=? WHERE key=?");

// ...bind different parameters every time you need to execute
var statement = ps.Bind(new DateTime(1942, 11, 27), "hendrix");
// Execute the bound statement with the provided parameters
session.Execute(statement);

Batching statements

You can execute multiple statements (prepared or unprepared) in a batch to update/insert several rows atomically even in different column families.

// Prepare the statements involved in a profile update once
var profileStmt = session.Prepare("UPDATE user_profiles SET email=? WHERE key=?");
var userTrackStmt = session.Prepare("INSERT INTO user_track (key, text, date) VALUES (?, ?, ?)");
// ...you should reuse the prepared statement
// Bind the parameters and add the statements to the batch
var batch = new BatchStatement()
  .Add(profileStmt.Bind(emailAddress, "hendrix"))
  .Add(userTrackStmt.Bind("hendrix", "You changed your email", DateTime.Now));
// Execute the batch
session.Execute(batch);
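
By default a batch is a logged batch; when you do not need atomicity guarantees across partitions, you can switch to an unlogged batch. A brief sketch, reusing the prepared statements above (the choice of batch type depends on your use case):

// Create an unlogged batch when atomicity across partitions is not required
var unloggedBatch = new BatchStatement()
  .SetBatchType(BatchType.Unlogged)
  .Add(profileStmt.Bind(emailAddress, "hendrix"))
  .Add(userTrackStmt.Bind("hendrix", "You changed your email", DateTime.Now));
session.Execute(unloggedBatch);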

Asynchronous API

Session allows asynchronous execution of statements (for any type of statement: simple, bound or batch) by exposing the ExecuteAsync method.

// Execute a statement asynchronously using await
var rs = await session.ExecuteAsync(statement);
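
Because ExecuteAsync returns a Task<RowSet>, you can also launch several requests concurrently and await them together. A minimal sketch, assuming ps is a prepared statement and userIds is a collection of keys (both illustrative):

// Requires System.Linq and System.Threading.Tasks.
// The requests start as soon as ExecuteAsync is invoked...
var tasks = userIds
    .Select(id => session.ExecuteAsync(ps.Bind(id)))
    .ToArray();

// ...then await the whole group; the driver pipelines the requests
// over the pooled connections
RowSet[] results = await Task.WhenAll(tasks);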

Avoid boilerplate mapping code

The driver features built-in Mapper and LINQ components that you can use to avoid boilerplate mapping code between CQL rows and your application entities.

User user = mapper.Single<User>("SELECT name, email FROM users WHERE id = ?", userId);

See the driver components documentation for more information.
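
As a rough sketch of how the Mapper component above is typically wired up (the User class, the users table and the userId variable are assumptions for illustration):

using Cassandra.Mapping;

// Create a mapper from an existing, connected session; it can be created once and reused
IMapper mapper = new Mapper(session);

// Retrieve a single entity or a sequence of entities mapped to your class
User user = mapper.Single<User>("SELECT name, email FROM users WHERE id = ?", userId);
IEnumerable<User> users = mapper.Fetch<User>("SELECT name, email FROM users");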

Automatic pagination of results

You can iterate over the entire RowSet: the driver fetches the rows block by block, transparently requesting the next block whenever the rows already available on the client side are exhausted.

var statement = new SimpleStatement("SELECT * from large_table");
// Set the page size; the RowSet will not hold more than 1000 rows in memory at any time
statement.SetPageSize(1000);
var rs = session.Execute(statement);
foreach (var row in rs)
{
  // The enumerator will yield all the rows from Cassandra,
  // retrieving them behind the scenes in blocks of 1000.
}
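
If you prefer to control paging yourself instead of letting the enumerator fetch pages transparently, the driver also exposes the paging state. A rough sketch under that assumption (the exact members used here may vary slightly between driver versions):

// Disable automatic paging so that Execute() returns a single page of rows
var pagedStatement = new SimpleStatement("SELECT * from large_table")
  .SetPageSize(1000)
  .SetAutoPage(false);

var page = session.Execute(pagedStatement);
// Capture the paging state so the query can be resumed later (e.g. in another request)
byte[] pagingState = page.PagingState;

// Replay the statement with the saved state to retrieve the following page
var nextPage = session.Execute(pagedStatement.SetPagingState(pagingState));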

User defined types mapping

You can map your Cassandra User Defined Types to your application entities.

For a given UDT:

CREATE TYPE address (
  street text,
  city text,
  zip_code int,
  phones set<text>
);

And a corresponding C# class:

public class Address
{
  public string Street { get; set; }
  public string City { get; set; }
  public int ZipCode { get; set; }
  public IEnumerable<string> Phones { get; set; }
}

You can either map the properties by name

// Map the properties by name automatically
session.UserDefinedTypes.Define(
  UdtMap.For<Address>()
);

Or you can define the properties manually

session.UserDefinedTypes.Define(
  UdtMap.For<Address>()
    .Map(a => a.Street, "street")
    .Map(a => a.City, "city")
    .Map(a => a.ZipCode, "zip_code")
    .Map(a => a.Phones, "phones")
);

You only need to define the UDT mapping once; it can then be reused for the entire lifetime of your application.

var rs = session.Execute("SELECT id, name, address FROM users where id = x");
var row = rs.First();
// You can retrieve the field as a value of type Address
var userAddress = row.GetValue<Address>("address");
Console.WriteLine("user lives on {0} Street", userAddress.Street);

Setting cluster and statement execution options

You can configure how the driver connects to the nodes as well as the statement execution options.

// Example at cluster level
var cluster = Cluster
  .Builder()
  .AddContactPoints(hosts)
  .WithCompression(CompressionType.LZ4)
  .WithLoadBalancingPolicy(new DCAwareRoundRobinPolicy("west"))
  .Build();

// Example at statement (simple, bound, batch) level
var statement = new SimpleStatement(query)
  .SetConsistencyLevel(ConsistencyLevel.Quorum)
  .SetRetryPolicy(DowngradingConsistencyRetryPolicy.Instance)
  .SetPageSize(1000);

Authentication

If you are using the PasswordAuthenticator which is included in the default distribution of Apache Cassandra, you can use the Builder.WithCredentials method or you can explicitly create a PlainTextAuthProvider instance.
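
For example, both of the following configure the same plain-text credentials (the contact point and credentials are placeholders):

// Using the builder shortcut...
var cluster = Cluster.Builder()
    .AddContactPoint("127.0.0.1")
    .WithCredentials("cassandra_user", "p@ssword1")
    .Build();

// ...or providing the authentication provider explicitly
var cluster2 = Cluster.Builder()
    .AddContactPoint("127.0.0.1")
    .WithAuthProvider(new PlainTextAuthProvider("cassandra_user", "p@ssword1"))
    .Build();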

For clients connecting to a DSE cluster secured with DseAuthenticator, two authentication providers are included (in the Cassandra.DataStax.Auth namespace):

  • DsePlainTextAuthProvider: plain-text authentication;
  • DseGssapiAuthProvider: GSSAPI authentication.

To configure a provider, pass it when initializing the cluster:

using Cassandra;
using Cassandra.DataStax.Auth;
ICluster cluster = Cluster.Builder()
    .AddContactPoint("127.0.0.1")
    .WithAuthProvider(new DseGssapiAuthProvider())
    .Build();

DataStax Graph

ISession has dedicated methods to execute graph queries:

using Cassandra.DataStax.Graph;
session.ExecuteGraph("system.createGraph('demo').ifNotExist().build()");

GraphStatement s1 = new SimpleGraphStatement("g.addV(label, 'test_vertex')").SetGraphName("demo");
session.ExecuteGraph(s1);

GraphStatement s2 = new SimpleGraphStatement("g.V()").SetGraphName("demo");
GraphResultSet rs = session.ExecuteGraph(s2);

IVertex vertex = rs.First().To<IVertex>();
Console.WriteLine(vertex.Label);

Graph options

You can set default graph options when initializing the cluster. They will be used for all graph statements. For example, to avoid repeating SetGraphName("demo") on each statement:

ICluster cluster = Cluster.Builder()
    .AddContactPoint("127.0.0.1")
    .WithGraphOptions(new GraphOptions().SetName("demo"))
    .Build();

If an option is set manually on a GraphStatement, it always takes precedence; otherwise the default option is used. This might be a problem if a default graph name is set, but you explicitly want to execute a statement targeting system, for which no graph name must be set. In that situation, use GraphStatement.SetSystemQuery():

GraphStatement s = new SimpleGraphStatement("system.createGraph('demo').ifNotExist().build()")
    .SetSystemQuery();
session.ExecuteGraph(s);

Query execution

As explained, graph statements can be executed with the session's ExecuteGraph method. There is also an asynchronous equivalent, ExecuteGraphAsync, that returns a Task which can be awaited.

Handling results

Graph queries return a GraphResultSet, which is a sequence of GraphNode elements:

GraphResultSet rs = session.ExecuteGraph(new SimpleGraphStatement("g.V()"));

// Iterating as IGraphNode
foreach (IGraphNode r in rs)
{
    Console.WriteLine(r);
}

IGraphNode represents a response item returned by the server. Each item can be converted to the expected type:

GraphResultSet rs = session.ExecuteGraph(new SimpleGraphStatement("g.V()"));
IVertex vertex = rs.First().To<IVertex>();
Console.WriteLine(vertex.Label);

Additionally, you can apply the conversion to the whole sequence by using the GraphResultSet.To<T>() method:

foreach (IVertex vertex in rs.To<IVertex>())
{
    Console.WriteLine(vertex.Label);
}

GraphNode provides implicit conversion operators to string, int, long and others in order to improve code readability, allowing the following C# syntax:

var rs = session.ExecuteGraph(new SimpleGraphStatement("g.V().has('name', 'marko').values('location')"));
foreach (string location in rs)
{
    Console.WriteLine(location);
}

GraphNode inherits from DynamicObject, allowing you to consume it using the dynamic keyword and/or as a dictionary.

dynamic r = session.ExecuteGraph(new SimpleGraphStatement("g.V()")).First();

Parameters

Graph query parameters are always named. Parameter bindings are passed as an anonymous type or as an IDictionary<string, object> alongside the query:

session.ExecuteGraph("g.addV(label, vertexLabel)", new { vertexLabel = "test_vertex_2" });

Note that, unlike in CQL, Gremlin placeholders are not prefixed with ":".

Prepared statements with DSE Graph

Prepared graph statements are not supported by DSE Graph.

Geospatial types

DSE 5 comes with a set of additional types to represent geospatial data: PointType, LineStringType and PolygonType:

cqlsh> CREATE TABLE points_of_interest(name text PRIMARY KEY, coords 'PointType');
cqlsh> INSERT INTO points_of_interest (name, coords) VALUES ('Eiffel Tower', 'POINT(48.8582 2.2945)');

The driver includes C# representations of these types, which can be used directly in queries:

using Cassandra.Geometry;
Row row = session.Execute("SELECT coords FROM points_of_interest WHERE name = 'Eiffel Tower'").First();
Point coords = row.GetValue<Point>("coords");

var statement = new SimpleStatement("INSERT INTO points_of_interest (name, coords) VALUES (?, ?)",
    "Washington Monument",
    new Point(38.8895, 77.0352));
session.Execute(statement);

Compatibility

  • Apache Cassandra versions 2.0 and above.
  • DataStax Enterprise versions 4.8 and above.
  • The driver targets .NET Framework 4.5.2 and .NET Standard 2.0

Here is a list of platforms and .NET targets that DataStax uses when testing this driver:

Platform             | net462 | net472 | net481 | net6 | net7 | net8
Windows Server 2019³ | ✓      | ✓      | ✓      | ✓²   | ✓¹   | ✓
Ubuntu 18.04         | ✓      | -      | -      | ✓    | ✓    | ✓

¹ No tests are run for the net7 target on the Windows platform, but net7 is still considered fully supported.

² Only unit tests are run for the net6 target on the Windows platform, but net6 is still considered fully supported.

³ Appveyor's Visual Studio 2022 image is used for these tests.

Mono 6.12.0 is also used to run net462 tests on Ubuntu 18.04, but DataStax can't guarantee that the driver fully supports Mono in a production environment. DataStax recommends the modern cross-platform .NET platform instead.

Note: DataStax products do not support big-endian systems.

Building and running the tests

You can use Visual Studio or msbuild to build the solution.

Check the documentation for building the driver from source and running the tests.

License

© DataStax, Inc.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

More Repositories

1. spark-cassandra-connector: DataStax Connector for Apache Spark to Apache Cassandra (Scala, 1,929 stars)
2. python-driver: DataStax Python Driver for Apache Cassandra (Python, 1,371 stars)
3. nodejs-driver: DataStax Node.js Driver for Apache Cassandra (JavaScript, 1,227 stars)
4. php-driver: [MAINTENANCE ONLY] DataStax PHP Driver for Apache Cassandra (C, 433 stars)
5. cpp-driver: DataStax C/C++ Driver for Apache Cassandra (C++, 390 stars)
6. cass-operator: The DataStax Kubernetes Operator for Apache Cassandra (Go, 256 stars)
7. ruby-driver: [MAINTENANCE ONLY] DataStax Ruby Driver for Apache Cassandra (Ruby, 227 stars)
8. graph-book: The Code Examples and Notebooks for The Practitioners Guide to Graph Data (Shell, 187 stars)
9. cql-proxy: A client-side CQL proxy/sidecar (Go, 170 stars)
10. metric-collector-for-apache-cassandra: Drop-in metrics collection and dashboards for Apache Cassandra (Java, 106 stars)
11. ragstack-ai: RAGStack is an out-of-the-box solution simplifying Retrieval Augmented Generation (RAG) in AI apps (Python, 85 stars)
12. dsbulk: DataStax Bulk Loader (DSBulk) is an open-source, Apache-licensed, unified tool for loading into and unloading from Apache Cassandra(R), DataStax Astra and DataStax Enterprise (DSE) (Java, 76 stars)
13. docker-images: Docker images published by DataStax (Shell, 73 stars)
14. dynamo-cassandra-proxy: Preview version of an open source tool that enables developers to run their AWS DynamoDB™ workloads on Apache Cassandra™. With the proxy, developers can run DynamoDB workloads outside of AWS (including on premises, other clouds, and in hybrid configurations) (Java, 73 stars)
15. cstar_perf: Apache Cassandra performance testing platform (Python, 72 stars)
16. zdm-proxy: An open-source component designed to seamlessly handle the real-time client application activity while a migration is in progress (Go, 71 stars)
17. ai-chatbot-starter: A starter app to build AI powered chat bots with Astra DB and LlamaIndex (Python, 61 stars)
18. zdm-proxy-automation: An Ansible-based automation suite to deploy and manage the Zero Downtime Migration Proxy (Go, 59 stars)
19. graph-examples (Java, 52 stars)
20. fallout: Distributed System Testing as a Service (Java, 51 stars)
21. pulsar-jms: DataStax Starlight for JMS, a JMS API for Apache Pulsar ® (Java, 47 stars)
22. reactive-pulsar: Reactive Streams adapter for Apache Pulsar Java Client (Java, 47 stars)
23. astra-assistants-api: A backend implementation of the OpenAI beta Assistants API (Python, 47 stars)
24. pulsar-helm-chart: Apache Pulsar Helm chart (Mustache, 46 stars)
25. kafka-examples: Examples of using the DataStax Apache Kafka Connector (Java, 45 stars)
26. cassandra-quarkus: An Apache Cassandra(R) extension for Quarkus (Java, 39 stars)
27. wikichat (Python, 38 stars)
28. kaap: KAAP, Kubernetes Autoscaling for Apache Pulsar (Java, 34 stars)
29. sstable-to-arrow (Java, 33 stars)
30. simulacron: Simulacron, an Apache Cassandra® Native Protocol Server Simulator (Java, 32 stars)
31. cdc-apache-cassandra: DataStax CDC for Apache Cassandra (Java, 32 stars)
32. pulsar-admin-console: Pulsar Admin Console is a web-based UI that administrates topics, namespaces, sources, sinks and various aspects of Apache Pulsar features (Vue, 32 stars)
33. ragbot-starter: An Astra DB and OpenAI chatbot (TypeScript, 32 stars)
34. code-samples: Code samples from DataStax (Scala, 31 stars)
35. astra-cli: Command Line Interface for DataStax Astra (Java, 30 stars)
36. diagnostic-collection: Set of scripts for collection of diagnostic information from DSE/Cassandra clusters (Python, 28 stars)
37. starlight-for-rabbitmq: Starlight for RabbitMQ, a proxy layer between RabbitMQ/AMQP 0.9.1 clients and Apache Pulsar (Java, 27 stars)
38. dse-metric-reporter-dashboards: Prometheus & Grafana dashboards for DSE metric collector (Python, 27 stars)
39. SwiftieGPT (TypeScript, 27 stars)
40. spark-cassandra-stress: A tool for testing the DataStax Spark Connector against Apache Cassandra or DSE (Scala, 26 stars)
41. cla-enforcer: A Contributor License Agreement enforcement bot (Ruby, 25 stars)
42. pulsar-heartbeat: Pulsar Heartbeat monitors Pulsar cluster availability, tracks latency of Pulsar message pubsub, and reports failures of the Pulsar cluster. It produces synthetic workloads to measure end-to-end message pubsub latency (Go, 23 stars)
43. cassandra-data-migrator: Cassandra Data Migrator, which migrates and validates data between origin and target Apache Cassandra®-compatible clusters (Java, 22 stars)
44. cassandra-log4j-appender: Cassandra appenders for Log4j (Java, 20 stars)
45. cassandra-data-apis: Data APIs for Apache Cassandra (Go, 19 stars)
46. labs: DataStax Labs preview program (Java, 19 stars)
47. terraform-provider-astra: A project that allows DataStax Astra users to manage their full database lifecycle for Astra Serverless databases (built on Apache Cassandra(TM)) using Terraform (Go, 18 stars)
48. dc-failover-demo: Fault Tolerant Applications with Apache Cassandra™ Demo (HCL, 17 stars)
49. astra-sdk-java: Set of client-side libraries to help with Astra Platform usage (Java, 17 stars)
50. kafka-sink: Apache Kafka® sink for transferring events/messages from Kafka topics to Apache Cassandra®, DataStax Astra and DataStax Enterprise (DSE) (Java, 17 stars)
51. starlight-for-kafka: DataStax Starlight for Kafka (Java, 15 stars)
52. astrajs: A monorepo containing tools for interacting with DataStax Astra and Stargate (JavaScript, 15 stars)
53. native-protocol: An implementation of the Apache Cassandra® native protocol (Java, 14 stars)
54. astrapy: AstraPy is a Pythonic interface for DataStax Astra DB and the Data API (Python, 14 stars)
55. block-explorer (TypeScript, 12 stars)
56. go-cassandra-native-protocol: Cassandra Native Protocol bindings for the Go language (Go, 12 stars)
57. cassandra-reactive-demo: A demo application that interacts with Apache Cassandra(R) using the Java driver 4.4+ and reactive programming (Java, 11 stars)
58. pulsar-sink: An Apache Pulsar® sink for transferring events/messages from Pulsar topics to Apache Cassandra®, DataStax Astra or DataStax Enterprise (DSE) tables (Java, 11 stars)
59. adelphi: Automation tool for testing C* OSS that assembles cassandra-diff, nosqlbench and fqltool (Python, 9 stars)
60. pulsar-transformations (Java, 9 stars)
61. gatling-dse-plugin (Scala, 8 stars)
62. snowflake-connector: DataStax Snowflake Sink Connector for Apache Pulsar (Java, 8 stars)
63. gocql-astra: Support for gocql on Astra (Go, 8 stars)
64. dsbulk-migrator (Java, 8 stars)
65. release-notes: Release Notes for DataStax Products (8 stars)
66. vault-plugin-secrets-datastax-astra: HashiCorp Vault Plugin for DataStax Astra (Go, 8 stars)
67. pulsar-3rdparty-connector: Simple templates and instructions to build Apache Pulsar connectors on the basis of existing Apache Kafka connectors (Shell, 8 stars)
68. dsbench-labs: DSBench, a database testing power tool (7 stars)
69. remote-junit-runner: JUnit runner that executes tests in a remote JVM (Java, 7 stars)
70. cass-config-builder: Configuration builder for Apache Cassandra based on definitions at datastax/cass-config-definitions (Clojure, 7 stars)
71. java-driver-scala-extras: Scala extensions and utilities for the DataStax Java Driver (Scala, 6 stars)
72. burnell: A proxy to a Pulsar cluster (Go, 6 stars)
73. gatling-dse-stress (Scala, 5 stars)
74. astra-client-go (Go, 5 stars)
75. gatling-dse-simcatalog (Scala, 4 stars)
76. java-quotient-filter: A Java quotient filter implementation (Java, 4 stars)
77. pulsar-ansible (Shell, 4 stars)
78. astra-db-ts: TypeScript client for Astra DB Vector (TypeScript, 4 stars)
79. terraform-helm-oci-release (HCL, 3 stars)
80. ds-support-diagnostic-collection: Scripts for collection of diagnostic information from DSE/Cassandra clusters running on various platforms (Shell, 3 stars)
81. go-cassandra-simple-client: A simple Go client for the Cassandra native protocol (3 stars)
82. cass-config-definitions (Shell, 3 stars)
83. astra-ide-plugin (Kotlin, 3 stars)
84. charts: DataStax Helm Charts (Shell, 3 stars)
85. astra-db-chatbot-starter (Python, 2 stars)
86. java-driver-examples-osgi: Examples showing the usage of the DataStax Java driver in OSGi applications (Java, 2 stars)
87. nodejs-driver-graph: DataStax Node.js Driver Extensions for DSE Graph (JavaScript, 2 stars)
88. aws-secrets-manager-integration-astra (Python, 2 stars)
89. starlight-for-grpc (Java, 2 stars)
90. astra-streaming-examples (Java, 2 stars)
91. homebrew-luna-streaming-shell (Shell, 2 stars)
92. astra-block-examples: Various Astra Block Examples (TypeScript, 2 stars)
93. cassandra-drivers-smoke-test: Smoke tests for Apache Cassandra using the DataStax Drivers (Shell, 2 stars)
94. junitpytest: JUnit5 plugin to run pytest via Gradle (Java, 2 stars)
95. migration-docs (JavaScript, 2 stars)
96. venice-helm-chart (Smarty, 2 stars)
97. spark-cassandra-connector-devtools: Extra tooling useful for development of spark-cassandra-connector, e.g. performance tests (2 stars)
98. cpp-dse-driver-examples: Examples for using the DataStax C/C++ Enterprise Driver (C, 2 stars)
99. venice (Java, 1 star)
100. fallout-tests (Python, 1 star)