This module will soon be deprecated, because it is superseded by abstract-level.
- Introduction
- Supported Platforms
- Usage
- API
levelup(db[, options[, callback]])
db.supports
db.open([options][, callback])
db.close([callback])
db.put(key, value[, options][, callback])
db.get(key[, options][, callback])
db.getMany(keys[, options][, callback])
db.del(key[, options][, callback])
db.batch(array[, options][, callback]) (array form)
db.batch() (chained form)
db.status
db.isOperational()
db.createReadStream([options])
db.createKeyStream([options])
db.createValueStream([options])
db.iterator([options])
db.clear([options][, callback])
- What happened to db.createWriteStream?
- Promise Support
- Events
- Multi-process Access
- Contributing
- Big Thanks
- Donate
- License
Fast and simple storage. A Node.js wrapper for abstract-leveldown
compliant stores, which follow the characteristics of LevelDB.
LevelDB is a simple key-value store built by Google. It's used in Google Chrome and many other products. LevelDB supports arbitrary byte arrays as both keys and values, singular get, put and delete operations, batched put and delete, bi-directional iterators and simple compression using the very fast Snappy algorithm.
LevelDB stores entries sorted lexicographically by keys. This makes the streaming interface of levelup
- which exposes LevelDB iterators as Readable Streams - a very powerful query mechanism.
The most common store is leveldown
which provides a pure C++ binding to LevelDB. Many alternative stores are available such as level.js
in the browser or memdown
for an in-memory store. They typically support strings and Buffers for both keys and values. For a richer set of data types you can wrap the store with encoding-down
.
The level
package is the recommended way to get started. It conveniently bundles levelup
, leveldown
and encoding-down
. Its main export is levelup
- i.e. you can do var db = require('level')
.
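For example, a minimal sketch assuming the level package is installed (it exposes the bundled levelup instance directly):
var level = require('level')
// Creates (or opens) a store at ./mydb, backed by leveldown and wrapped
// with encoding-down, and returns a levelup instance
var db = level('./mydb')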
We aim to support Active LTS and Current Node.js releases as well as browsers. For support of the underlying store, please see the respective documentation.
If you are upgrading: please see UPGRADING.md
.
First you need to install levelup
! No stores are included so you must also install leveldown
(for example).
$ npm install levelup leveldown
All operations are asynchronous. If you do not provide a callback, a Promise is returned.
var levelup = require('levelup')
var leveldown = require('leveldown')
// 1) Create our store
var db = levelup(leveldown('./mydb'))
// 2) Put a key & value
db.put('name', 'levelup', function (err) {
if (err) return console.log('Ooops!', err) // some kind of I/O error
// 3) Fetch by key
db.get('name', function (err, value) {
if (err) return console.log('Ooops!', err) // likely the key was not found
// Ta da!
console.log('name=' + value)
})
})
levelup(db[, options[, callback]]) is the main entry point for creating a new levelup instance.
- db must be an abstract-leveldown compliant store.
- options is passed on to the underlying store when opened and is specific to the type of store being used
Calling levelup(db)
will also open the underlying store. This is an asynchronous operation which will trigger your callback if you provide one. The callback should take the form function (err, db) {}
where db
is the levelup
instance. If you don't provide a callback, any read & write operations are simply queued internally until the store is fully opened, unless it fails to open, in which case an error
event will be emitted.
This leads to two alternative ways of managing a levelup
instance:
levelup(leveldown(location), options, function (err, db) {
if (err) throw err
db.get('foo', function (err, value) {
if (err) return console.log('foo does not exist')
console.log('got foo =', value)
})
})
Versus the equivalent:
// Will throw if an error occurs
var db = levelup(leveldown(location), options)
db.get('foo', function (err, value) {
if (err) return console.log('foo does not exist')
console.log('got foo =', value)
})
db.supports is a read-only manifest. It might be used like so:
if (!db.supports.permanence) {
throw new Error('Persistent storage is required')
}
if (db.supports.bufferKeys && db.supports.promises) {
await db.put(Buffer.from('key'), 'value')
}
db.open() opens the underlying store. In general you shouldn't need to call this method directly as it's automatically called by levelup()
. However, it is possible to reopen the store after it has been closed with close()
.
If no callback is passed, a promise is returned.
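For example, a previously closed store could be reopened like so (a sketch assuming the db instance from the examples above):
db.open(function (err) {
  if (err) return console.log('Failed to open', err)
  console.log('Store is open again, status =', db.status)
})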
close()
closes the underlying store. The callback will receive any error encountered during closing as the first argument.
You should always clean up your levelup
instance by calling close()
when you no longer need it to free up resources. A store cannot be opened by multiple instances of levelup
simultaneously.
If no callback is passed, a promise is returned.
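A minimal cleanup sketch using the promise form:
// Close the store when you no longer need it, e.g. before exiting
async function shutdown () {
  await db.close()
  console.log('status =', db.status) // 'closed'
}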
put()
is the primary method for inserting data into the store. Both key
and value
can be of any type as far as levelup
is concerned.
options
is passed on to the underlying store.
If no callback is passed, a promise is returned.
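For example, the same put as in the usage example above, using the promise form (a sketch):
try {
  await db.put('name', 'levelup')
} catch (err) {
  console.log('Ooops!', err) // some kind of I/O error
}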
Get a value from the store by key. The key can be of any type. If it doesn't exist in the store then the callback or promise will receive an error. A not-found err object will be of type 'NotFoundError', so you can check err.type == 'NotFoundError', or you can perform a truthy test on the property err.notFound.
db.get('foo', function (err, value) {
if (err) {
if (err.notFound) {
// handle a 'NotFoundError' here
return
}
// I/O or other error, pass it up the callback chain
return callback(err)
}
// .. handle `value` here
})
The optional options
object is passed on to the underlying store.
If no callback is passed, a promise is returned.
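The same lookup using the promise form might look like this (a sketch):
try {
  const value = await db.get('foo')
  console.log('got foo =', value)
} catch (err) {
  if (err.notFound) {
    console.log('foo does not exist')
  } else {
    throw err // I/O or other error
  }
}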
Get multiple values from the store by an array of keys
. The optional options
object is passed on to the underlying store.
The callback
function will be called with an Error
if the operation failed for any reason. If successful the first argument will be null
and the second argument will be an array of values with the same order as keys
. If a key was not found, the relevant value will be undefined
.
If no callback is provided, a promise is returned.
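For example (a sketch with hypothetical keys, using the promise form):
// Assuming 'name' exists (from the usage example) and 'nickname' does not
const values = await db.getMany(['name', 'nickname'])
console.log(values) // [ 'levelup', undefined ]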
del()
is the primary method for removing data from the store.
db.del('foo', function (err) {
  if (err) {
    // handle I/O or other error
    return console.log('Ooops!', err)
  }
})
options
is passed on to the underlying store.
If no callback is passed, a promise is returned.
batch()
can be used for very fast bulk-write operations (both put and delete). The array
argument should contain a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation inside the underlying store.
Each operation is contained in an object having the following properties: type
, key
, value
, where the type is either 'put'
or 'del'
. In the case of 'del'
the value
property is ignored. Any entries with a key
of null
or undefined
will cause an error to be returned on the callback
and any type: 'put'
entry with a value
of null
or undefined
will return an error.
const ops = [
{ type: 'del', key: 'father' },
{ type: 'put', key: 'name', value: 'Yuri Irsenovich Kim' },
{ type: 'put', key: 'dob', value: '16 February 1941' },
{ type: 'put', key: 'spouse', value: 'Kim Young-sook' },
{ type: 'put', key: 'occupation', value: 'Clown' }
]
db.batch(ops, function (err) {
if (err) return console.log('Ooops!', err)
console.log('Great success dear leader!')
})
options
is passed on to the underlying store.
If no callback is passed, a promise is returned.
batch()
, when called with no arguments will return a Batch
object which can be used to build, and eventually commit, an atomic batch operation. Depending on how it's used, it is possible to obtain greater performance when using the chained form of batch()
over the array form.
db.batch()
.del('father')
.put('name', 'Yuri Irsenovich Kim')
.put('dob', '16 February 1941')
.put('spouse', 'Kim Young-sook')
.put('occupation', 'Clown')
.write(function () { console.log('Done!') })
batch.put(key, value[, options])
Queue a put operation on the current batch, not committed until a write()
is called on the batch. The options
argument, if provided, must be an object and is passed on to the underlying store.
This method may throw
a WriteError
if there is a problem with your put (such as the value
being null
or undefined
).
batch.del(key[, options])
Queue a del operation on the current batch, not committed until a write()
is called on the batch. The options
argument, if provided, must be an object and is passed on to the underlying store.
This method may throw
a WriteError
if there is a problem with your delete.
batch.clear()
Clear all queued operations on the current batch; any previously queued operations will be discarded.
batch.length
The number of queued operations on the current batch.
batch.write([options][, callback])
Commit the queued operations for this batch. All operations not cleared will be written to the underlying store atomically, that is, they will either all succeed or fail with no partial commits.
The optional options
object is passed to the .write()
operation of the underlying batch object.
If no callback is passed, a promise is returned.
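For example, the chained form can also be committed with the promise form of write() (a sketch):
await db.batch()
  .del('father')
  .put('name', 'Yuri Irsenovich Kim')
  .write()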
db.status is a read-only string that is one of:
- new - newly created, not opened or closed
- opening - waiting for the underlying store to be opened
- open - successfully opened the store, available for use
- closing - waiting for the store to be closed
- closed - store has been successfully closed.
db.isOperational() returns true
if the store accepts operations, which in the case of levelup
means that status
is either opening
or open
, because it opens itself and queues up operations until opened.
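For example, as a simple guard before accepting work (a sketch):
if (!db.isOperational()) {
  throw new Error('Store is not accepting operations')
}
await db.put('name', 'levelup')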
db.createReadStream() returns a Readable Stream of key-value pairs. A pair is an object with key
and value
properties. By default it will stream all entries in the underlying store from start to end. Use the options described below to control the range, direction and results.
db.createReadStream()
.on('data', function (data) {
console.log(data.key, '=', data.value)
})
.on('error', function (err) {
console.log('Oh my!', err)
})
.on('close', function () {
console.log('Stream closed')
})
.on('end', function () {
console.log('Stream ended')
})
You can supply an options object as the first parameter to createReadStream() with the following properties (see the example after this list):
- gt (greater than), gte (greater than or equal) define the lower bound of the range to be streamed. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries streamed will be the same.
- lt (less than), lte (less than or equal) define the higher bound of the range to be streamed. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries streamed will be the same.
- reverse (boolean, default: false): stream entries in reverse order. Beware that due to the way that stores like LevelDB work, a reverse seek can be slower than a forward seek.
- limit (number, default: -1): limit the number of entries collected by this stream. This number represents a maximum number of entries and may not be reached if you get to the end of the range first. A value of -1 means there is no limit. When reverse=true the entries with the highest keys will be returned instead of the lowest keys.
- keys (boolean, default: true): whether the results should contain keys. If set to true and values set to false then results will simply be keys, rather than objects with a key property. Used internally by the createKeyStream() method.
- values (boolean, default: true): whether the results should contain values. If set to true and keys set to false then results will simply be values, rather than objects with a value property. Used internally by the createValueStream() method.
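For example, the range options can be combined to stream a bounded slice of entries (a sketch with hypothetical keys):
// Stream at most 10 entries with keys from 'a' (inclusive) to 'n' (exclusive)
db.createReadStream({ gte: 'a', lt: 'n', limit: 10 })
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('error', function (err) {
    console.log('Oh my!', err)
  })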
db.createKeyStream() returns a Readable Stream of keys rather than key-value pairs. Use the same options as described for createReadStream()
to control the range and direction.
You can also obtain this stream by passing an options object to createReadStream()
with keys
set to true
and values
set to false
. The result is equivalent; both streams operate in object mode.
db.createKeyStream()
.on('data', function (data) {
console.log('key=', data)
})
// same as:
db.createReadStream({ keys: true, values: false })
.on('data', function (data) {
console.log('key=', data)
})
db.createValueStream() returns a Readable Stream of values rather than key-value pairs. Use the same options as described for createReadStream()
to control the range and direction.
You can also obtain this stream by passing an options object to createReadStream()
with values
set to true
and keys
set to false
. The result is equivalent; both streams operate in object mode.
db.createValueStream()
.on('data', function (data) {
console.log('value=', data)
})
// same as:
db.createReadStream({ keys: false, values: true })
.on('data', function (data) {
console.log('value=', data)
})
db.iterator() returns an abstract-leveldown
iterator, which is what powers the readable streams above. Options are the same as the range options of createReadStream()
and are passed to the underlying store.
These iterators support for await...of
:
for await (const [key, value] of db.iterator()) {
console.log(value)
}
db.clear() deletes all entries or a range. It is not guaranteed to be atomic. It accepts the following range options (with the same rules as on iterators):
- gt (greater than), gte (greater than or equal) define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries deleted will be the same.
- lt (less than), lte (less than or equal) define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries deleted will be the same.
- reverse (boolean, default: false): delete entries in reverse order. Only effective in combination with limit, to remove the last N records.
- limit (number, default: -1): limit the number of entries to be deleted. This number represents a maximum number of entries and may not be reached if you get to the end of the range first. A value of -1 means there is no limit. When reverse=true the entries with the highest keys will be deleted instead of the lowest keys.
If no options are provided, all entries will be deleted. The callback function will be called with no arguments if the operation was successful or with a WriteError if it failed for any reason.
If no callback is passed, a promise is returned.
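For example, deleting a bounded range using the promise form (a sketch with hypothetical keys):
// Delete all entries with keys from 'a' (inclusive) to 'n' (exclusive)
await db.clear({ gte: 'a', lt: 'n' })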
db.createWriteStream()
has been removed in order to provide a smaller and more maintainable core. It primarily existed to create symmetry with db.createReadStream()
but through much discussion, removing it was the best course of action.
The main driver for this was performance. While db.createReadStream()
performs well under most use cases, db.createWriteStream()
was highly dependent on the application keys and values. Thus we can't provide a standard implementation and encourage more write-stream
implementations to be created to solve the broad spectrum of use cases.
Check out the implementations that the community has produced here.
Each function accepting a callback returns a promise if the callback is omitted. The only exception is the levelup
constructor itself, which if no callback is passed will lazily open the underlying store in the background.
Example:
const db = levelup(leveldown('./my-db'))
await db.put('foo', 'bar')
console.log(await db.get('foo'))
levelup
is an EventEmitter
and emits the following events.
| Event | Description | Arguments |
| --- | --- | --- |
| put | Key has been updated | key, value (any) |
| del | Key has been deleted | key (any) |
| batch | Batch has executed | operations (array) |
| clear | Entries were deleted | options (object) |
| opening | Underlying store is opening | - |
| open | Store has opened | - |
| ready | Alias of open | - |
| closing | Store is closing | - |
| closed | Store has closed | - |
| error | An error occurred | error (Error) |
For example you can do:
db.on('put', function (key, value) {
console.log('inserted', { key, value })
})
Stores like LevelDB are thread-safe but they are not suitable for accessing with multiple processes. You should only ever have a store open from a single Node.js process. Node.js clusters are made up of multiple processes so a levelup
instance cannot be shared between them either.
See Level/awesome
for modules like multileveldown
that may help if you require a single store to be shared across processes.
Level/levelup
is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
Cross-browser Testing Platform and Open Source ♥ Provided by Sauce Labs.
Support us with a monthly donation on Open Collective and help us continue our work.