• Stars: 227
• Rank: 175,900 (Top 4%)
• Language: Go
• License: MIT License
• Created: almost 5 years ago
• Updated: over 1 year ago


Repository Details

Textile hub services and buckets lib

DEPRECATION NOTICE: Textile's hosted Hub infrastructure will be taken off-line on January 9th, 2023. At this time, all ThreadDB and Bucket data will no longer be available, and will subsequently be removed. See #578 for further details.

textile

Made by Textile · Chat on Slack · GitHub license · GitHub action · standard-readme compliant

Textile hub services and buckets lib

Textile connects and extends Libp2p, IPFS, and Filecoin. Three interoperable technologies make up Textile:

  • ThreadDB: A server-less p2p database built on Libp2p
  • Powergate: File storage built on Filecoin and IPFS
  • Buckets: File and dynamic directory storage built on ThreadDB, Powergate, and UnixFS.

Join us on our public Slack channel for news, discussions, and status updates. Check out our blog for the latest posts and announcements.

Table of Contents

  • Security
  • Background
  • Install
  • Getting Started
  • Developing
  • Contributing
  • Changelog
  • License

Security

Textile is still under heavy development and no part of it should be used before a thorough review of the underlying code and an understanding that APIs and protocols may change rapidly. There may be coding mistakes, and the underlying protocols may contain design flaws. Please let us know immediately if you have discovered a security vulnerability.

Please also read the security note for go-ipfs.

Background

Go to the docs for more about the motivations behind Textile.

Install

This repo contains two service daemons with CLIs and a Buckets Library for building local-first apps and services.

The Hub

hubd

git clone https://github.com/textileio/textile
cd textile
go get ./cmd/hubd

hub

git clone https://github.com/textileio/textile
cd textile
go get ./cmd/hub

Note: hub includes buck as a subcommand: hub buck. This is because hubd hosts buckd, along with other services.

hub is built in part on the gRPC client, which can be imported into an existing project:

import "github.com/textileio/textile/v2/api/hub/client"

Buckets

buckd

git clone https://github.com/textileio/textile
cd textile
go get ./cmd/buckd

buck

git clone https://github.com/textileio/textile
cd textile
go get ./cmd/buck

buck is built in part on the gRPC client, which can be imported into an existing project:

import "github.com/textileio/textile/v2/api/buckets/client"

The Buckets Library

import "github.com/textileio/textile/v2/buckets/local"

The full spec is available here.

Getting Started

The Hub

The Hub daemon (hubd), a.k.a. The Hub, is a hosted wrapper around other Textile services that includes developer accounts for individuals and organizations. You are encouraged to run your own, and we strongly discourage the use of the hosted Textile Hub as it will soon be shutting down.

The layout of the hub client CLI mirrors the services wrapped by hubd:

  • hub threads provides limited access to ThreadDB.
  • hub buck provides access to Buckets (buckd) by wrapping the standalone buck CLI.
  • hub buck archive provides limited access to The Hub's hosted Powergate instance, and the Filecoin network.

Try hub --help for more usage.

The Hub Client.

Usage:
  hub [command]

Available Commands:
  billing     Billing management
  buck        Manage an object storage bucket
  destroy     Destroy your account
  fil         Interact with Filecoin related commands.
  help        Help about any command
  init        Initialize account
  keys        API key management
  login       Login
  logout      Logout
  orgs        Org management
  threads     Thread management
  update      Update the hub CLI
  version     Show current version
  whoami      Show current user

Flags:
      --api string        API target (default "api.hub.textile.io:443")
  -h, --help              help for hub
      --identity string   User identity
      --key string        User API key
      --newIdentity       Generate a new user identity
  -o, --org string        Org username
      --secret string     User API secret
  -s, --session string    User session token
      --token string      User identity token

Use "hub [command] --help" for more information about a command.

Read more about The Hub, including how to create an account, in the docs.

Running Buckets

Much like threadsd, the buckd daemon can be run as a server or alongside desktop apps or command-line tools. The easiest way to run buckd is with the provided Docker Compose files. If you're new to Docker and/or Docker Compose, get started here. Once you are set up, you should have docker-compose in your PATH.

Create an .env file and add the following values:

REPO_PATH=~/myrepo
BUCK_LOG_DEBUG=true

Copy this compose file and run it with the following command.

docker-compose -f docker-compose.yml up

Congrats! Now you have Buckets running locally.

The Docker Compose file starts an IPFS node, which is used to pin bucket files and folders. You could point buckd to a different (possibly remote) IPFS node by setting the BUCK_ADDR_IPFS_API variable to a different multiaddress.

By default, this approach does not start Powergate. If you do run one, be sure to set the BUCK_ADDR_POWERGATE_API variable to the multiaddress of your Powergate. buckd must be configured with Powergate to enable Filecoin archiving with buck archive.
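
For example, a .env that points buckd at a remote IPFS node and a local Powergate might look like the following. The multiaddresses and ports here are illustrative placeholders, not documented defaults.

REPO_PATH=~/myrepo
BUCK_LOG_DEBUG=true
BUCK_ADDR_IPFS_API=/ip4/10.0.0.5/tcp/5001
BUCK_ADDR_POWERGATE_API=/ip4/127.0.0.1/tcp/5002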

Creating a bucket

Since hub buck and buck are functionally identical, this section will focus on buck and the Buckets Library using a locally running buckd.

First off, take a look at buck --help.

The Bucket Client.

Manages files and folders in an object storage bucket.

Usage:
  buck [command]

Available Commands:
  add         Add a UnixFs DAG locally at path
  archive     Create a Filecoin archive
  cat         Cat bucket objects at path
  decrypt     Decrypt bucket objects at path with password
  destroy     Destroy bucket and all objects
  encrypt     Encrypt file with a password
  help        Help about any command
  init        Initialize a new or existing bucket
  links       Show links to where this bucket can be accessed
  ls          List top-level or nested bucket objects
  pull        Pull bucket object changes
  push        Push bucket object changes
  root        Show bucket root CIDs
  status      Show bucket object changes
  watch       Watch auto-pushes local changes to the remote

Flags:
      --api string   API target (default "127.0.0.1:3006")
  -h, --help         help for buck

Use "buck [command] --help" for more information about a command.

A Textile bucket functions a bit like an S3 bucket. It's a virtual filesystem where you can push, pull, list, and cat files. You can share them via web links or render the whole thing as a website or web app. They also function a bit like a Git repository. The point of entry is from a folder on your local machine that is synced to a remote.

To get started, initialize a new bucket.

mkdir mybucket && cd mybucket
buck init

When prompted, give your bucket a name and either opt in to or decline bucket encryption (see Creating a private bucket for more about bucket encryption).

You should now see two links for the new bucket on the locally running gateway.

> http://127.0.0.1:8006/thread/bafkq3ocmdkrljadlgybtvocytpdw4hbnzygxecxehdp7pfj32lxp34a/buckets/bafzbeifyzfm3kosie25s5qthvvcjrr42ivd7doqhwvu5m4ks7uqv4j5lyi Thread link
> http://127.0.0.1:8006/ipns/bafzbeifyzfm3kosie25s5qthvvcjrr42ivd7doqhwvu5m4ks7uqv4j5lyi IPNS link (propagation can be slow)
> Success! Initialized /path/to/mybucket as a new empty bucket

The first URL is the link to the ThreadDB instance. Internally, a collection named buckets is created. Each new instance in this collection amounts to a new bucket. However, when you visit this link, you'll notice a custom file browser. This is because the gateway considers the built-in buckets collection a special case. You can still view the raw ThreadDB instance by appending ?json=true to the URL.

The second URL is the bucket's unique IPNS address, which is auto-updated when you add, modify, or delete files.

If you have configured the daemon with DNS settings, you will see a third URL that links to the bucket's WWW address, where it is rendered as a static website / client-side application. See buckd --help for more info.

Important: If your bucket is private (encrypted), an access token (JWT) will be appended to these links. This token represents your identity across all buckets and should not be shared without caution.

buck init created a configuration folder in mybucket called .textile. This folder is somewhat like a .git folder, as it contains information about the bucket's remote address and local state.

.textile/config.yml will look something like,

key: bafzbeifyzfm3kosie25s5qthvvcjrr42ivd7doqhwvu5m4ks7uqv4j5lyi
thread: bafkq3ocmdkrljadlgybtvocytpdw4hbnzygxecxehdp7pfj32lxp34a

Here, key is the bucket's unique key, and thread is its ThreadDB ID.

Additionally, .textile/repo contains a repository describing the current file structure, which is used to stage changes against the remote.

Creating a private bucket

Bucket encryption (AES-CTR + AES-512 HMAC) happens entirely within buckd, meaning your data is encrypted on the way in and decrypted on the way out. This type of encryption has two goals:

  • Obfuscate bucket data / files (the normal goal of encryption)
  • Obfuscate directory structure, which amounts to encrypting IPLD nodes and their links.

As a result of these goals, we refer to encrypted buckets as private buckets. Read more about bucket encryption here.

To create a new private bucket, use the --private flag with buck init or respond y when prompted.

In addition to bucket-level encryption, you can also protect a file with a password.

Adding files and folders to a bucket

Bucket files and folders are content-addressed by Cids. Check out the spec if you're unfamiliar with Cids.

buck stages new files as additions:

echo "hello world" > hello.txt
buck status
> new file:  hello.txt

buck status is powered by DAG-based diffing. Much like git, this allows buck to only push and pull changes. Read more about bucket diffing in the docs, or check out this in-depth blog post.

Use push to sync the change.

buck push
+ hello.txt: bafkreifjjcie6lypi6ny7amxnfftagclbuxndqonfipmb64f2km2devei4
> bafybeihm4zrnrsdroazwsvk3i65ooqzdftaugdkjiedr6ocq65u3ap4wni

The output shows the Cid of the added file and the bucket's new root Cid.

push will sync all types of file changes: Additions, modifications, and deletions.

Recreating an existing bucket

It's often useful to recreate a bucket from the remote. This is somewhat like re-cloning a Git repo. This can be done in a different location on the same machine, or, if buckd has a public IP address, from a completely different machine.

Let's recreate the bucket from the previous step in a new directory outside of the original bucket.

mkdir mybucket2 && cd mybucket2
buck init --existing

The --existing flag allows for interactively selecting an existing bucket to initialize from.

? Which existing bucket do you want to init from?:
  ▸ MyBucket bafzbeifyzfm3kosie25s5qthvvcjrr42ivd7doqhwvu5m4ks7uqv4j5lyi

At this point, there's only one bucket to choose from.

Note: If buckd was running inside The Hub (hubd), you would be able to choose from buckets belonging to your Organizations as well as your individual Developer account by using the --org flag. Read more about Hub Accounts and Organizations here.

> Selected bucket MyBucket
+ hello.txt: bafkreifjjcie6lypi6ny7amxnfftagclbuxndqonfipmb64f2km2devei4
+ .textileseed: bafkreifbdzttoqsch5j66hfmcbsic6qvwrikibgzfbg3tn7rc3j63ukk3u
> Your bucket links:
> http://127.0.0.1:8006/thread/bafkq3ocmdkrljadlgybtvocytpdw4hbnzygxecxehdp7pfj32lxp34a/buckets/bafzbeifyzfm3kosie25s5qthvvcjrr42ivd7doqhwvu5m4ks7uqv4j5lyi Thread link
> http://127.0.0.1:8006/ipns/bafzbeifyzfm3kosie25s5qthvvcjrr42ivd7doqhwvu5m4ks7uqv4j5lyi IPNS link (propagation can be slow)
> Success! Initialized /path/to/mybucket2 from an existing bucket

Just as before, the output shows the bucket's remote links. However, in this case init also pulled down the content.

Note: .textileseed is used to randomize a bucket's top level Cid and cannot be modified.

The --existing flag is really just a helper that sets the --thread and --key flags, which match the config values we saw earlier. We could have used those flags directly to achieve the same result.

buck init --thread bafkq3ocmdkrljadlgybtvocytpdw4hbnzygxecxehdp7pfj32lxp34a --key bafzbeifyzfm3kosie25s5qthvvcjrr42ivd7doqhwvu5m4ks7uqv4j5lyi

Lastly, we could have just copied .textile/config.yml to a new directory and used buck pull to pull down the existing content.

Creating a bucket from an existing Cid

Sometimes it's useful to create a bucket from a UnixFS directory that is already on the IPFS network.

We can simulate this scenario by adding a local folder to IPFS and then using its root Cid to create a bucket with the --cid flag. Here's a local directory.

.
├── a
│   ├── bar.txt
│   ├── foo.txt
│   └── one
│       ├── baz.txt
│       ├── buz.txt
│       └── two
│           ├── boo.txt
│           └── fuz.txt
├── b
│   ├── foo.txt
│   └── one
│       ├── baz.txt
│       ├── muz.txt
│       ├── three
│       │   └── far.txt
│       └── two
│           └── fuz.txt
└── c
    ├── one.jpg
    └── two.jpg

Use the recursive flag -r with ipfs add.

ipfs add -r .
added QmcDkcMJXZsNnExehsE1Yh6SRWucHa9ruVT82gpL83431W mydir/a/bar.txt
added QmYiUq2U6euWnKag23wFppG12hon4EBDswdoe4MwrKzDBn mydir/a/foo.txt
added QmXrd35ja3kknnmgj5kyDM74jfG8GLJJQGtRpEQpXCLTR3 mydir/a/one/baz.txt
added QmSWJvCzotB3CbdxVu8mBvmLqpSuEQgUoJHTFy1azRfwhT mydir/a/one/buz.txt
added QmT6h1eaBV74Sh75upE7ugFLkBnmyGr3WsQ8w8yx5NjgPV mydir/a/one/two/boo.txt
added QmTdg1b5eWEx4zJtrgvew1inkkZ29fp9mbQ4uHyKurW8Ub mydir/a/one/two/fuz.txt
added QmYiQAk1seXrmuQkpGE83AxJyNZDK1RNSaLyp3Z4r1zsrB mydir/b/foo.txt
added QmXrd35ja3kknnmgj5kyDM74jfG8GLJJQGtRpEQpXCLTR3 mydir/b/one/baz.txt
added QmSWJvCzotB3CbdxVu8mBvmLqpSuEQgUoJHTFy1azRfwhT mydir/b/one/muz.txt
added QmYs12A3CGSTHX4QrsvBe2AvLHEThrapXoTFQpyh8AzpFa mydir/b/one/three/far.txt
added QmTdg1b5eWEx4zJtrgvew1inkkZ29fp9mbQ4uHyKurW8Ub mydir/b/one/two/fuz.txt
added QmaLpwNPwftSQY3w4ZtMfZ8k38D5EgK2bcDuU4UwzREJpi mydir/c/one.jpg
added QmYLiWv2WXQd1m8YyHx4dMoj8B3Kuiuu7pCCoYibkqKyVj mydir/c/two.jpg
added QmT5YXeCfbMuVjanbHjQhECUQSACJLecfmjRBZHvmu5FDU mydir/a/one/two
added QmWh2Wx9Lec4wbEvFbsq4HmYjFmgUFtxNJ8wEVwXjhJ2uk mydir/a/one
added QmSujVHvG8Y3Jv21AbMFNQPphjyqNamh6cvdyXSD1jAtSZ mydir/a
added QmUGSorWDy2JiKYvQuJzEb4TnYDuDNLcdFyR6NhMwnwdvy mydir/b/one/three
added QmWvX7UVexbjXJtxKMyMSgGpPesFQD7teNTqUcDsP2mzW6 mydir/b/one/two
added QmPyMD67EgSZS1WpvgudHkxbA5zgjqmse8srPpFb9sVefT mydir/b/one
added QmQdAtg5NkwkvLtTbka3eci58UGj3m9AehC2sbksGSbjPZ mydir/b
added QmcjtVAF9PQfMKTc57vcvZeBrzww3TLxPcQfUQW7cXXLJL mydir/c
added QmcvkGF2t8Z94UqhdtdFRokGoqypbGyKkzRPVF4owmjVrE mydir

After adding the entire directory, we see the root Cid is QmcvkGF2t8Z94UqhdtdFRokGoqypbGyKkzRPVF4owmjVrE. Let's create the bucket using this Cid.

buck init --cid QmcvkGF2t8Z94UqhdtdFRokGoqypbGyKkzRPVF4owmjVrE

The files behind the Cid will be pulled into the new bucket.

+ a/bar.txt: QmcDkcMJXZsNnExehsE1Yh6SRWucHa9ruVT82gpL83431W
+ a/foo.txt: QmYiUq2U6euWnKag23wFppG12hon4EBDswdoe4MwrKzDBn
+ a/one/two/fuz.txt: QmTdg1b5eWEx4zJtrgvew1inkkZ29fp9mbQ4uHyKurW8Ub
+ a/one/baz.txt: QmXrd35ja3kknnmgj5kyDM74jfG8GLJJQGtRpEQpXCLTR3
+ c/two.jpg: QmYLiWv2WXQd1m8YyHx4dMoj8B3Kuiuu7pCCoYibkqKyVj
+ b/foo.txt: QmYiQAk1seXrmuQkpGE83AxJyNZDK1RNSaLyp3Z4r1zsrB
+ a/one/buz.txt: QmSWJvCzotB3CbdxVu8mBvmLqpSuEQgUoJHTFy1azRfwhT
+ a/one/two/boo.txt: QmT6h1eaBV74Sh75upE7ugFLkBnmyGr3WsQ8w8yx5NjgPV
+ b/one/muz.txt: QmSWJvCzotB3CbdxVu8mBvmLqpSuEQgUoJHTFy1azRfwhT
+ b/one/three/far.txt: QmYs12A3CGSTHX4QrsvBe2AvLHEThrapXoTFQpyh8AzpFa
+ b/one/baz.txt: QmXrd35ja3kknnmgj5kyDM74jfG8GLJJQGtRpEQpXCLTR3
+ b/one/two/fuz.txt: QmTdg1b5eWEx4zJtrgvew1inkkZ29fp9mbQ4uHyKurW8Ub
+ c/one.jpg: QmaLpwNPwftSQY3w4ZtMfZ8k38D5EgK2bcDuU4UwzREJpi
> Your bucket links:
> http://127.0.0.1:8006/thread/bafk3k3itq2rsybcvhf6wuvumruw3j6cw7ixhrtx4ek45qgvp3e7u2xa/buckets/bafzbeiawo6ghgsqjlorii4wghdl4tzz54x2kiwtcgtaq7b3h5gta2yok2i Thread link
> http://127.0.0.1:8006/ipns/bafzbeiawo6ghgsqjlorii4wghdl4tzz54x2kiwtcgtaq7b3h5gta2yok2i IPNS link (propagation can be slow)
> Success! Initialized /path/to/mybucket3 as a new bootstrapped bucket

Currently, UnixFS in go-ipfs uses Cid version 0, which is why all of these old-style Cids start with Qm. Of course, you can also use UnixFS directories that use Cid version 1.

Similar to initializing a new bucket from an existing Cid, buck add allows you to add and/or merge an existing UnixFS directory into an existing bucket. Like adding new files locally, this works by pulling the UnixFS content from the IPFS network into the local bucket. Sync the changes with buck push as normal.

Pulling an existing UnixFS directory into a new or existing private bucket is also possible. Just opt-in to encryption during initialization as normal. buckd will recursively encrypt (without duplicating) the Cid's IPLD file and directory nodes as they are pulled into the new bucket.

Exploring bucket contents

Use buck ls [path] to explore bucket contents. Omitting [path] will list the top-level directory.

buck ls

  NAME          SIZE     DIR    OBJECTS  CID
  .textileseed  32       false  n/a      bafkreiezexkrnk7yew6glm6sulhur66bbecc2aeaitf7uz4ymmp442lepu
  a             3726     true   3        QmSujVHvG8Y3Jv21AbMFNQPphjyqNamh6cvdyXSD1jAtSZ
  b             3191     true   2        QmQdAtg5NkwkvLtTbka3eci58UGj3m9AehC2sbksGSbjPZ
  c             1537626  true   2        QmcjtVAF9PQfMKTc57vcvZeBrzww3TLxPcQfUQW7cXXLJL

Use [path] to drill into directories, e.g.,

buck ls a

  NAME     SIZE  DIR    OBJECTS  CID
  bar.txt  517   false  n/a      QmcDkcMJXZsNnExehsE1Yh6SRWucHa9ruVT82gpL83431W
  foo.txt  557   false  n/a      QmYiUq2U6euWnKag23wFppG12hon4EBDswdoe4MwrKzDBn
  one      2502  true   3        QmWh2Wx9Lec4wbEvFbsq4HmYjFmgUFtxNJ8wEVwXjhJ2uk

buck cat functions a lot like ls, but cats file contents to stdout.

Resetting bucket contents

Similar to a git reset --hard, you can use buck pull --hard to discard local changes that have not been pushed.

Continuing with the bucket above, add, modify, and/or delete some files. buck status should show your staged changes.

buck status
> modified:  a/bar.txt
> deleted:   a/one/baz.txt
> new file:  b/one/three/car.txt
> deleted:   b/foo.txt

Normally, buck pull will move your local changes to temporary .buckpatch files, apply the remote / upstream changes, then reapply your local changes. However, the --hard flag will prune all local changes, resetting the local bucket contents to match the remote exactly.

buck pull --hard
+ a/one/baz.txt: QmXrd35ja3kknnmgj5kyDM74jfG8GLJJQGtRpEQpXCLTR3
+ b/foo.txt: QmYiQAk1seXrmuQkpGE83AxJyNZDK1RNSaLyp3Z4r1zsrB
+ a/bar.txt: QmcDkcMJXZsNnExehsE1Yh6SRWucHa9ruVT82gpL83431W
- b/one/three/car.txt
> QmTz6HoC18QQqAEtYhfLc4Fse3LPbSCKV8vouvE88MKjFj

Now buck status will report > Everything up-to-date.

Try buck pull --help for more options when pulling the remote.

Watching a bucket for changes

So far we've seen how a bucket can change locally, but the remote can also change. This could happen for a couple reasons:

  • Changes are pushed from a different bucket copy against the same buckd.
  • Changes are pushed from a different buckd at the ThreadDB layer. This is known as a multi-writer scenario. See Multi-writer buckets for more.

In either case, it is possible to listen for and apply the remote changes using buck watch. This will also watch for local changes and auto-push them to the remote. In this way, multiple copies of the same bucket can be kept in sync.

watch will block until it's cancelled with a Ctrl-C.

buck watch
> Success! Watching /path/to/mybucket for changes...

watch will survive network interruptions, reconnecting when possible.

> Not connected. Trying to connect...
> Not connected. Trying to connect...
> Not connected. Trying to connect...
> Success! Watching /path/to/mybucket for changes...

While watch is active, files and folders dropped into the bucket are automatically pushed.

Protecting a file with a password

Private buckets handle encryption entirely within buckd, but you can use an additional client-side encryption layer with buck encrypt to password protect files. This encryption is also AES-CTR + AES-512 HMAC, which means you can efficiently encrypt large file streams. However, unlike bucket-wide encryption in private buckets, client-side encryption is only available for files, not IPLD directory nodes.

Let's create an encrypted version of the hello.txt file.

buck encrypt hello.txt supersecret > secret.txt

encrypt writes to stdout, so here we redirect the output to a new file called secret.txt. scrypt is used to derive the AES and HMAC keys from a password, which carries the normal tradeoff: the encryption is only as good as the password. And, as with all client-side encryption, you must store or otherwise remember the password!

encrypt only works on local files. You'll have to use push to sync the new file to the remote.

buck push --yes
+ secret.txt: bafkreiayymufgaut3wrfbzfdxiacxn64mxijj54g2osyk7qnco54iftovi
> bafybeidhffwg5ucwktn7iwyvnkhxpz7b2yrh643bo74cjvsbquzpdgpcd4

decrypt, on the other hand, works on remote files. So, after pushing secret.txt, we can decrypt it (if we can remember the password) and write the plaintext to stdout.

buck decrypt secret.txt supersecret
hello world

Looks like it worked!

Sharing bucket files and folders

Bucket contents can be shared with other Hub accounts and users using the buck roles command. Each file and folder in a bucket maintains a set of public-key based access roles: None, Reader, Writer, and Admin. Only the Admin role can add and remove files and folders from a shared path. See hub buck roles grant --help for more about each role. For most applications, access roles only make sense in the context of the Hub.

By default, public buckets have two roles located at the top-level path:

hub buck roles ls

  IDENTITY                                                     ROLE
  *                                                            Reader
  bbaareibzpb44ahd7oieqevvlqajidd4jajcvx2vdvti6bpw5wkqolwwerm  Admin

> Found 2 access roles

Since access roles are inherited down a bucket path, the single Admin role grants the owner full access to all current and future files and folders. The default (*) Reader role indicates that the entire bucket is open to the world. This is merely a reflection of the fact that the underlying UnixFS directory behind a public (non-encrypted) bucket is discoverable on the IPFS network.

Private buckets are not open to the world and are created with only the single Admin role. However, we can still grant default (*) Reader access to individual files, folders, or the entire bucket after the fact.

hub buck roles grant "*" myfolder
Use the arrow keys to navigate: ↓ ↑ → ←
? Select a role:
  None
  ▸ Reader
  Writer
  Admin

We can now see a new role added to myfolder.

 hub buck roles ls myfolder

  IDENTITY  ROLE
  *         Reader

> Found 1 access roles

Similarly, grant the None role to revoke access.

Manipulating access roles for a single Hub account or user (public key) can be cumbersome with the buck CLI. Applications in need of this level of granular access control should manage roles programmatically using the Go client or the JavaScript client.

Creating a Filecoin bucket archive

Bucket archiving requires buckd to be configured with a running Powergate. If you're curious how to do this, take a look at this Docker Compose file.

Let's try archiving the bucket from the Creating a bucket section.

buck archive
> Warning! Archives are Filecoin Mainnet. Use with caution.
? Proceed? [y/N]

Please take note of the warning. Archiving should be considered experimental since Filecoin mainnet has not yet launched, and Powergate will be running against either a localnet or mainnet.

You should see a success message if you proceed.

> Success! Archive queued successfully

This means that archiving has been initiated. It may take some time to complete...

buck archive status
> Archive is currently executing, grab a coffee and be patient...

Use the archive status command with -w to watch the progress of your archive as it moves through the Filecoin market deal stages.

buck archive status -w
> Archive is currently executing, grab a coffee and be patient...
>    Pushing new configuration...
>    Configuration saved successfully
>    Executing job 1006707f-efa8-48c2-98af-a1b320a59780...
>    Ensuring Hot-Storage satisfies the configuration...
>    No actions needed in Hot Storage.
>    Hot-Storage execution ran successfully.
>    Ensuring Cold-Storage satisfies the configuration...
>    Current replication factor is lower than desired, making 10 new deals...
>    Calculating piece size...
>    Estimated piece size is 256 bytes.
>    Proposing deal to miner t01459 with 0 fil per epoch...
>    Proposing deal to miner t0117734 with 500000000 fil per epoch...
>    Proposing deal to miner t0120993 with 500000000 fil per epoch...
>    Proposing deal to miner t0120642 with 500000000 fil per epoch...
>    Proposing deal to miner t0121477 with 500000000 fil per epoch...
>    Proposing deal to miner t0119390 with 500000000 fil per epoch...
>    Proposing deal to miner t0101180 with 10000000 fil per epoch...
>    Proposing deal to miner t0117803 with 500000000 fil per epoch...
>    Proposing deal to miner t0121852 with 500000000 fil per epoch...
>    Proposing deal to miner t0119822 with 500000000 fil per epoch...
>    Watching deals unfold...
>    Deal with miner t0117803 changed state to StorageDealClientFunding
>    Deal with miner t0121852 changed state to StorageDealClientFunding
>    Deal with miner t0121477 changed state to StorageDealClientFunding
>    Deal with miner t0101180 changed state to StorageDealClientFunding
>    Deal with miner t0119822 changed state to StorageDealClientFunding
>    Deal with miner t0119390 changed state to StorageDealClientFunding
>    Deal with miner t0120642 changed state to StorageDealClientFunding
>    Deal with miner t0117734 changed state to StorageDealClientFunding
>    Deal with miner t01459 changed state to StorageDealClientFunding
>    Deal with miner t0120993 changed state to StorageDealClientFunding
>    Deal with miner t0121477 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t0119822 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t0117734 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t0121852 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t01459 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t0120642 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t0120993 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t0117803 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t0101180 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t0119390 changed state to StorageDealWaitingForDataRequest
>    Deal with miner t01459 changed state to StorageDealProposalAccepted
>    Deal with miner t01459 changed state to StorageDealSealing

The output will look something like the above. With a little luck, you will start seeing some successful storage deals.

Bucket archiving allows you to leverage the purely decentralized nature of Filecoin in your buckets. Check out this video from a blog post demonstrating Filecoin bucket recovery using the Lotus client.

Multi-writer buckets

Multi-writer buckets leverage the distributed nature of ThreadDB by allowing multiple identities to write to the same bucket hosted by different Libp2p hosts. Since buckets are ThreadDB collection instances, this is no different than normal ThreadDB peer collaboration.

To-do: Demonstrate joining a bucket from a ThreadDB invite.

Deleting a bucket

Deleting a bucket is easy... and permanent! buck destroy will delete your local bucket as well as the remote, making it unrecoverable with buck init --existing.

Using the Buckets Library

The buckets/local library powers both the buck and hub buck CLIs. Everything possible in buck (bucket diffing, pushing, pulling, watching, archiving, and more) is available in existing projects by importing the Buckets Library.

go get github.com/textileio/textile/v2/buckets/local

Visit the GoDoc for a complete list of methods and more usage descriptions.

Creating a bucket

Create a new bucket by constructing a configuration object. Only Path is required.

// Set up the buckets lib (error checks omitted for brevity)
buckets := local.NewBuckets(cmd.NewClients("api.textile.io:443", false), local.DefaultConfConfig())

// Create a new bucket with config
mybuck, err := buckets.NewBucket(context.Background(), local.Config{
    Path: "path/to/bucket/folder",
})

// Check current status
diff, err := mybuck.DiffLocal() // diff contains staged changes

buckets.NewBucket will write a local config file and data repo.

See local.WithName, local.WithStrategy, local.WithPrivate, local.WithCid, local.WithInitPathEvents for more options when creating buckets.
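
As a sketch, a named private bucket bootstrapped from an existing Cid might be created like the following. The option argument types shown here (a string name, a bool, and a cid.Cid) are assumptions about the option constructors listed above, so check the GoDoc before relying on them.

// Sketch: create a named, private bucket bootstrapped from an existing UnixFS Cid.
// Assumes local.WithName(string), local.WithPrivate(bool), and local.WithCid(cid.Cid);
// cid.Decode comes from github.com/ipfs/go-cid.
c, err := cid.Decode("QmcvkGF2t8Z94UqhdtdFRokGoqypbGyKkzRPVF4owmjVrE")
if err != nil {
    log.Fatal(err)
}
mybuck, err := buckets.NewBucket(
    context.Background(),
    local.Config{Path: "path/to/bucket/folder"},
    local.WithName("MyBucket"),
    local.WithPrivate(true),
    local.WithCid(c),
)
if err != nil {
    log.Fatal(err)
}
// mybuck is now ready to use (e.g., DiffLocal, PushLocal, PullRemote).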

To create a bucket from an existing remote, use its thread ID and instance ID (bucket key) in the config.
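
A minimal sketch, assuming the Config struct exposes Thread and Key fields that mirror the .textile/config.yml values shown earlier (verify the field names and types against the GoDoc):

// Sketch: attach a local folder to an existing remote bucket by thread ID and bucket key.
// thread.Decode comes from github.com/textileio/go-threads/core/thread.
tid, err := thread.Decode("bafkq3ocmdkrljadlgybtvocytpdw4hbnzygxecxehdp7pfj32lxp34a")
if err != nil {
    log.Fatal(err)
}
mybuck, err := buckets.NewBucket(context.Background(), local.Config{
    Path:   "path/to/bucket/folder",
    Thread: tid,
    Key:    "bafzbeifyzfm3kosie25s5qthvvcjrr42ivd7doqhwvu5m4ks7uqv4j5lyi",
})
if err != nil {
    log.Fatal(err)
}
// mybuck now tracks the existing remote; use PullRemote to fetch its contents.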

Getting an existing bucket

GetLocalBucket returns the bucket at path.

mybuck, err := buckets.GetLocalBucket(context.Background(), "path/to/bucket/folder")

Pushing local files

PushLocal pushes all staged changes to the remote and returns the new local and remote root Cids. These roots will only be different if the bucket is private (the remote is encrypted).

newRoots, err := mybuck.PushLocal()

See local.PathOption for more options when pushing.

Pulling remote changes

PullRemote pulls all remote changes locally and returns the new root Cids.

newRoots, err := mybuck.PullRemote()

See local.PathOption for more options when pulling.

Using the Mail Library

The mail/local library provides mechanisms for sending and receiving messages between Hub users. Mailboxes are built on ThreadDB.

go get github.com/textileio/textile/v2/mail/local

Visit the GoDoc for a complete list of methods and more usage descriptions.

Creating a mailbox

Like creating a bucket, create a new mailbox by constructing a configuration object. All fields are required.

// Setup the mail lib
mail := local.NewMail(cmd.NewClients("api.textile.io:443", true), local.DefaultConfConfig())

// Create a libp2p identity (this can be any thread.Identity)
privKey, _, err := crypto.GenerateEd25519Key(rand.Reader)
id := thread.NewLibp2pIdentity(privKey)

// Create a new mailbox with config
mailbox, err := mail.NewMailbox(context.Background(), local.Config{
    Path: "path/to/mail/folder", // Usually a global location like ~/.textile/mail
    Identity: id,
    APIKey: <API_KEY>,
    APISecret: <API_SECRET>,
})

APIKey and APISecret are User Group API Keys. Read more about creating API Keys.

To recreate a user's mailbox, specify the same identity and API Key in the config.
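
Because the identity must be identical across runs, you will typically persist the private key and reload it before calling NewMailbox again. A minimal sketch using the libp2p crypto helpers; the key file location and permissions here are illustrative choices, not part of the library.

// Sketch: persist the libp2p private key so the same identity (and mailbox) can be reused later.
keyBytes, err := crypto.MarshalPrivateKey(privKey)
if err != nil {
    log.Fatal(err)
}
if err := os.WriteFile("path/to/mail/identity.key", keyBytes, 0600); err != nil {
    log.Fatal(err)
}

// Later (e.g., on the next run): reload the key, rebuild the identity, and reopen the mailbox.
restoredBytes, err := os.ReadFile("path/to/mail/identity.key")
if err != nil {
    log.Fatal(err)
}
restoredKey, err := crypto.UnmarshalPrivateKey(restoredBytes)
if err != nil {
    log.Fatal(err)
}
mailbox, err := mail.NewMailbox(context.Background(), local.Config{
    Path:      "path/to/mail/folder",
    Identity:  thread.NewLibp2pIdentity(restoredKey),
    APIKey:    <API_KEY>,
    APISecret: <API_SECRET>,
})
// mailbox now resolves to the same remote mailbox as before.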

Getting an existing mailbox

GetLocalMailbox returns the mailbox at path.

mailbox, err := mail.GetLocalMailbox(context.Background(), "path/to/mailbox/folder")

Sending a message

When a mailbox sends a message to another mailbox, the message is encrypted for the recipient's inbox and for the sender's sentbox. This allows both parties to control the message's lifecycle.

// Create two mailboxes (for most applications, this would not happen on the same machine)
box1, err := mail.NewMailbox(context.Background(), local.Config{...})
box2, err := mail.NewMailbox(context.Background(), local.Config{...})

// Send a message from the first mailbox to the second
message, err := box1.SendMessage(context.Background(), box2.Identity().GetPublic(), []byte("howdy"))

// List the recipient's inbox
inbox, err := box2.ListInboxMessages(context.Background())

// Open decrypts the message body
body, err := inbox[0].Open(context.Background(), box2.Identity())

// Mark the message as read
err = box2.ReadInboxMessage(context.Background(), inbox[0].ID)

Watching for new messages

Applications may watch for mailbox events in the inbox and/or sentbox.

// Handle mailbox events as they arrive (event types are defined in the mail/local package)
events := make(chan local.MailboxEvent)
defer close(events)
go func() {
    for e := range events {
        switch e.Type {
        case local.NewMessage:
            // handle new message
        case local.MessageRead:
            // handle message read (inbox only)
        case local.MessageDeleted:
            // handle message deleted
        }
    }
}()

// Start watching (the third param indicates we want to keep watching when offline)
state, err := mailbox.WatchInbox(context.Background(), events, true)
for s := range state {
    // handle connectivity state
}

Similarly, use WatchSentbox to watch a sentbox.
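
For example, assuming WatchSentbox mirrors the WatchInbox signature shown above:

// Sketch: watch the sentbox with the same event channel and offline behavior.
sentState, err := mailbox.WatchSentbox(context.Background(), events, true)
if err != nil {
    log.Fatal(err)
}
for s := range sentState {
    fmt.Println("sentbox watch state:", s) // handle connectivity state
}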

Developing

The easiest way to develop against hubd or buckd is to use the Docker Compose files found in cmd. The -dev flavored files do not persist repos via Docker Volumes, which may be desirable in some cases.

Contributing

Pull requests and bug reports are very welcome ❤️

This repository falls under the Textile Code of Conduct.

Feel free to get in touch on our public Slack channel or by opening an issue on GitHub.

Changelog

A changelog is published along with each release.

License

MIT
