nim-faststreams

Nearly zero-overhead input/output streams for Nim

FastStreams is a highly efficient library for all your I/O needs.

It offers nearly zero-overhead synchronous and asynchronous streams for handling inputs and outputs of various types:

  • Memory inputs and outputs for serialization frameworks and parsers
  • File inputs and outputs
  • Pipes and Process I/O
  • Networking

The library aims to provide a common interface between all stream types that allows the application code to be easily portable to different back-end event loops. In particular, Chronos and AsyncDispatch are already supported. It's envisioned that the library will also gain support for the Nginx event loop to allow the creation of web applications running as Nginx run-time modules and the SeaStar event loop for the development of extremely low-latency services taking advantage of kernel-bypass networking.

What does zero-overhead mean?

Even though FastStreams supports multiple stream types, the API is designed in a way that allows the read and write operations to be handled without any dynamic dispatch in the majority of cases.

In particular, reading from a memoryInput or writing to a memoryOutput will have similar performance to a loop iterating over an openArray or another loop populating a pre-allocated string. memFileInput offers the same performance characteristics when working with files. The idiomatic use of the APIs with the rest of the stream types will result in highly efficient memory allocation patterns and zero-copy performance in a great variety of real-world use cases such as:

  • Parsers for data formats and protocols employing formal grammars
  • Block ciphers
  • Compressors and decompressors
  • Stream multiplexers

The zero-copy behavior and low memory usage are maintained even when multiple streams are layered on top of each other, while back-pressure is properly accounted for. This makes FastStreams ideal for implementing highly flexible networking stacks such as LibP2P.

The key ideas in the FastStreams design

FastStreams is heavily inspired by the System.IO.Pipelines API which was developed and released by Microsoft in 2018 and is considered the result of multiple years of evolution over similar APIs shipped in previous SDKs.

We highly recommend reading the following two articles which provide an in-depth explanation for the benefits of the design:

Here, we'll only summarize the main insights:

Obtaining data from the input device is not the same as consuming it.

When protocols and formats are layered on top of each other, it's highly inconvenient to handle a read operation that can return an arbitrary amount of data. If not enough data was returned, you may need to copy the available bytes into a local buffer and then repeat the reading operation until enough data is gathered and the local buffer can be processed. On the other hand, if more data was received, you need to complete the current stage of processing and then somehow feed the remaining bytes into the next stage of processing (e.g. this might be a nested format or a different parsing branch in the formal grammar of the protocol). Both of these scenarios require logic that is difficult to write correctly and results in unnecessary copying of the input bytes.

A major difference in the FastStreams design is that the arbitrary-length data obtained from the input device is managed by the stream itself and you are provided with an API allowing you to control precisely how much data is consumed from the stream. Consuming the buffered content does not invoke costly asynchronous calls and you are allowed to peek at the stream contents before deciding which step to take next (something crucial for handling formal grammars). Thus, using the FastStreams API results in code that is both highly efficient and easy to author.

Higher efficiency is possible if we say goodbye to the good old single buffer.

The buffering logic inside the stream divides the data into "pages" which are allocated with a known fast path in the Nim allocator and which can be efficiently transferred between streams and threads in the layered streams scenario or in IPC mechanisms such as AsyncChannel. The consuming code can be aware of this, but doesn't need to be. The most idiomatic usage of the API handles the buffer-switching logic automatically for the user.

Nevertheless, the buffering logic can be configured for unbuffered reads and writes, and it efficiently supports various common real-world patterns such as:

  • Length prefixes

    To handle protocols with length prefixes without any memory overhead, the output streams support "delayed writes" where a portion of the stream content is specified only after the prefixed content is written to the stream.

  • Block compressors and Block ciphers

    These can benefit significantly from a more precise control over the stride of the buffered pages which can be configured to match the block size of the encoder.

  • Content with known length

    Some streams have a known length which allows us to accurately estimate the size of the transformed content. The len and ensureRunway APIs make sure such cases are handled as optimally as possible.

Basic API usage

The FastStreams API consists of only a few major object types:

InputStream

An InputStream manages a particular input device. The library offers out of the box the following input stream types:

  • fileInput

    For reading files through the familiar fread API from the C run-time.

  • memFileInput

    For reading memory-mapped files, which provides the best performance.

  • unsafeMemoryInput

    For handling strings, sequences and openarrays as an input stream.
    You are responsible for ensuring that the backing buffer won't be invalidated while the stream is being used.

  • memoryInput

    Primarily used to consume the contents written to a previously populated output stream, but it can also be used to consume the contents of strings and sequences in a memory-safe way (by creating a copy).

  • pipeInput (async)

    For arbitrary communication between a producer and a consumer.

  • chronosInput (async)

    Enabled by importing faststreams/chronos_adapters.
    It can represent any Chronos Transport as an input stream.

  • asyncSocketInput (async)

    Enabled by importing faststreams/std_adapters.
    Allows using Nim's standard library AsyncSocket type as an input stream.

You can extend the library with new InputStream types without modifying it. Please see the inline code documentation of InputStreamVTable for more details.

All of the above APIs are possible constructors for creating an InputStream. The stream instances will manage their resources through destructors, but you might want to close them explicitly in async context or when you need to handle the possible errors from the closing operation.

Here is an example usage:

import
  faststreams/inputs

var
  jsonString = "[1, 2, 3]"
  jsonNodes = parseJson(unsafeMemoryInput(jsonString))
  moreNodes = parseJson(fileInput("data.json"))

The example above assumes we might have a parseJson function accepting an InputStream. Here is how such a function could be defined:

import
  faststreams/inputs

proc scanString(stream: InputStream): JsonToken {.fsMultiSync.} =
  result = newStringToken()

  advance stream # skip the opening quote

  while stream.readable:
    let nextChar = stream.read.char
    case nextChar
    of '\\':
      if stream.readable:
        let escaped = stream.read.char
        case escaped
        of 'n': result.add '\n'
        of 't': result.add '\t'
        else: result.add escaped
      else:
        error(UnexpectedEndOfFile)
    of '"'
      return
    else:
      result.add nextChar

  error(UnexpectedEndOfFile)

proc nextToken(stream: InputStream): JsonToken {.fsMultiSync.} =
  while stream.readable:
    case stream.peek.char
    of '"':
      result = scanString(stream)
    of '0'..'9':
      result = scanNumber(stream)
    of 'a'..'z', 'A'..'Z', '_':
      result = scanIdentifier(stream)
    of '{':
      advance stream # skip the character
      result = objectStartToken
    ...

  return eofToken

proc parseJson(stream: InputStream): JsonNode {.fsMultiSync.} =
  while (let token = nextToken(stream); token != eofToken):
    case token
    of numberToken:
      result = newJsonNumber(token.num)
    of stringToken:
      result = newJsonString(token.str)
    of objectStartToken:
      result = parseObject(stream)
    ...

The above example is nothing but a toy program, but we can already see many usage patterns of the InputStream type. For a more sophisticated and complete implementation of a JSON parser, please see the nim-json-serialization package.

As we can see from the example above, calling stream.read should always be preceded by a call to stream.readable. When the stream is in the readable state, we can also peek at the next character before we decide how to proceed. Besides calling read, we can also mark the data as consumed by calling stream.advance.

The above APIs demonstrate how you can consume the data one byte at a time. Common wisdom might tell you that this should be inefficient, but that's not the case with FastStreams. The loop while stream.readable: stream.read will compile to very efficient inlined code that performs nothing more than pointer increments and comparisons. This will be true even when working with async streams.
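
For illustration, here is what such a loop might look like in full (a minimal sketch):

import
  faststreams/inputs

proc countBytes(stream: InputStream): int =
  # While data is buffered, this pair of calls performs nothing more
  # than pointer increments and comparisons:
  while stream.readable:
    discard stream.read
    inc result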

The readable check is the only place where our code may block (or await). Only when all the data in the stream buffers has been consumed will the stream invoke a new read operation on the backing input device, which may repopulate the buffers with an arbitrary number of new bytes.

Sometimes, you need to check whether the stream contains at least a specific number of bytes. You can use the stream.readable(N) API to achieve this.

Reading multiple bytes at once is then possible with stream.read(N), but if you need to store the bytes in an object field or another long-term storage location, consider using stream.readInto(destination) which may result in zero-copy operation. It can also be used to implement unbuffered reading.
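
For example, a length-prefixed record reader might be sketched like this (the 8-byte header, the fixed payload size and the assumption that readInto returns a discardable success flag are all illustrative):

import
  faststreams/inputs

proc readRecord(stream: InputStream): seq[byte] =
  # Wait until a complete 8-byte header is buffered:
  if not stream.readable(8):
    return

  var header: array[8, byte]
  discard stream.readInto(header) # guaranteed to succeed after the check above

  # ... interpret the header to learn the payload size (a made-up constant here) ...
  const payloadSize = 128
  if stream.readable(payloadSize):
    result = newSeq[byte](payloadSize)
    discard stream.readInto(result) # may avoid extra copies for buffered data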

AsyncInputStream and fsMultiSync

An astute reader might have wondered about the purpose of the custom fsMultiSync pragma used in the examples above. It is a simple macro generating an additional async copy of our stream processing functions where all the input types are replaced by their async counterparts (e.g. AsyncInputStream) and the return type is wrapped in a Future as usual.

The standard API of InputStream and AsyncInputStream is exactly the same. Operations such as readable will just invoke await behind the scenes, but there is one key difference - the await will be triggered only when there is not enough data already stored in the stream buffers. Thus, in the great majority of cases, we avoid the high cost of instantiating a Future and yielding control to the event loop.

We highly recommend implementing most of your stream processing code through the fsMultiSync pragma. This ensures the best possible performance and makes the code more easily testable (e.g. with inputs stored on disk). FastStreams ships with a set of fuzzing tools that will help you ensure that your code behaves correctly with arbitrary data and/or arbitrary interruption points.
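
For instance, the parseJson proc from the earlier example can be reused from async code roughly like this (a sketch assuming Chronos and the chronosInput constructor from the list above):

import
  chronos, faststreams/[inputs, chronos_adapters]

proc handleConnection(transport: StreamTransport) {.async.} =
  let stream = chronosInput(transport)
  # The async copy generated by fsMultiSync accepts an AsyncInputStream
  # and returns a Future, so it is awaited here:
  let document = await parseJson(stream)
  echo document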

Nevertheless, if you need a more traditional async API, please be aware that all of the functions discussed in this README also have an *Async suffix form that returns a Future (e.g. readableAsync, readAsync, etc.).

One exception to the above rule is the helper stream.timeoutToNextByte(t) which can be used to detect situations where your communicating party is failing to send data in time. It accepts a Duration or an existing deadline Future and it's usually used like this:

proc performHandshake(c: Connection): bool {.async.} =
  if c.inputStream.timeoutToNextByte(HANDSHAKE_TIMEOUT):
    # The other party didn't send us anything in time,
    # so we close the connection:
    close c
    return false

  while c.inputStream.readable:
    ...

It is assumed that in traditional async code, timeouts will be managed more explicitly with sleepAsync and the or operator defined over futures.
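
Such code might look roughly like this (a sketch; readableAsync is the Future-returning form mentioned above, and its exact signature is an assumption):

import
  chronos, faststreams/inputs

proc waitForData(s: AsyncInputStream, timeout: Duration): Future[bool] {.async.} =
  let ready = s.readableAsync()
  # Race the read against the timeout using `or` over futures:
  await ready or sleepAsync(timeout)
  return ready.finished()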

Range-restricted reads

Protocols transmitting serialized payloads often provide information regarding the size of the payload. When you invoke the deserialization routine, it's preferable if the provided boundaries are treated like an "end of file" marker for the deserializer. FastStreams provides an easy way to achieve this without extra copies and memory allocations through the withReadableRange facility. Here is a typical usage:

proc decodeFrame(s: AsyncInputStream, DecodedType: type): Option[DecodedType] =
  if not s.readable(4):
    return

  let lengthPrefix = toInt32 s.read(4)
  if s.readable(lengthPrefix):
    s.withReadableRange(lengthPrefix, range):
      range.readValue(Json, DecodedType)

Please note that the above example uses the nim-serialization library.

Put simply, inside the withReadableRange block, range becomes a stream whose readable calls will return false as soon as the Json parser has consumed the specified number of bytes.

Furthermore, withReadableRange guarantees that all stream operations within the block will be non-blocking, so it will transform the AsyncInputStream into a regular InputStream. Depending on the complexity of the stream processing functions, this will often lead to significant performance gains.

OutputStream and AsyncOutputStream

An OutputStream manages a particular output device. The library offers out of the box the following output stream types:

  • writeFileOutput

    For writing files through the familiar fwrite API from the C run-time.

  • memoryOutput

    For building a string or a seq[byte] result.

  • unsafeMemoryOutput

    For writing to an arbitrary existing buffer.
    You are responsible for ensuring that the backing buffer won't be invalidated while the stream is being used.

  • pipeOutput (async)

    For arbitrary communication between a producer and a consumer.

  • chronosOutput (async)

    Enabled by importing faststreams/chronos_adapters.
    It can represent any Chronos Transport as an output stream.

  • asyncSocketOutput (async)

    Enabled by importing faststreams/std_adapters.
    Allows using Nim's standard library AsyncSocket type as an output stream.

You can extend the library with new OutputStream types without modifying it. Please see the inline code documentation of OutputStreamVTable for more details.

All of the above APIs are possible constructors for creating an OutputStream. The stream instances will manage their resources through destructors, but you might want to close them explicitly in async context or when you need to handle the possible errors from the closing operation.

Here is an example usage:

import
  faststreams/outputs

type
  ABC = object
    a: int
    b: char
    c: string

var stream = memoryOutput()
stream.writeNimRepr(ABC(a: 1, b: 'b', c: "str"))
var repr = stream.getOutput(string)

The writeNimRepr in the above example is not part of the library, but let's see how it can be implemented:

import
  typetraits, faststreams/outputs

proc writeNimRepr*(stream: OutputStream, str: string) =
  stream.write '"'

  for c in str:
    if c == '"':
      stream.write ['\\', '"']
    else:
      stream.write c

  stream.write '"'

proc writeNimRepr*(stream: OutputStream, x: char) =
  stream.write ['\'', x, '\'']

proc writeNimRepr*(stream: OutputStream, x: int) =
  stream.write $x # Making this more optimal has been left
                  # as an exercise for the reader

proc writeNimRepr*[T](stream: OutputStream, obj: T) =
  stream.write typetraits.name(T)
  stream.write '('

  var firstField = true
  for name, val in fieldPairs(obj):
    if not firstField:
      stream.write ", "

    stream.write name
    stream.write ": "
    stream.writeNimRepr val

    firstField = false

  stream.write ')'

When the stream is created, its output buffers will be initialized with a single page of pageSize bytes (specified at stream creation). Calls to write will just populate this page until it becomes full, and only then will it be sent to the output device.

As the example demonstrates, a memoryOutput will continue buffering pages until they can be finally concatenated and returned in stream.getOutput. If the output fits within a single page, it will be efficiently moved to the getOutput result. When the output size is known upfront you can ensure that this optimization is used by calling stream.ensureRunway before any writes, but please note that the library is free to ignore this hint in async context or if a maximum memory usage policy is specified.
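
A minimal sketch of taking advantage of this (the ensureRunway signature is assumed from the description above, and getOutput(seq[byte]) is assumed to mirror getOutput(string)):

import
  faststreams/outputs

proc copyWithKnownSize(data: openArray[byte]): seq[byte] =
  var stream = memoryOutput()
  stream.ensureRunway data.len # hint: the whole output should fit in one page
  stream.write data            # buffered into the pre-allocated runway
  result = stream.getOutput(seq[byte])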

In a non-memory stream, any writes larger than a page or issued through the writeNow API will be sent to the output device immediately.

Please note that even in async context, write will complete immediately. To handle back-pressure properly, use stream.flush or stream.waitForConsumer which will ensure that the buffered data is drained to a specified number of bytes before continuing. The rationale here is that introducing an interruption point at every write produces less optimal code, but if this is desired you can use the stream.writeAndWait API.

If you have existing algorithms that output data to an openArray, you can use the stream.getWritableBytes API to continue using them without introducing any intermediate buffers.

Delayed Writes

Many protocols and formats employ fixed-size and variable-size length prefixes that have traditionally been difficult to handle, because they require you to either measure the size of the content before writing it to the stream or, even worse, serialize it to a memory buffer in order to determine its size.

FastStreams supports handling such length prefixes with a zero-copy mechanism that doesn't require additional memory allocations. stream.delayFixedSizeWrite and stream.delayVarSizeWrite are APIs that return a WriteCursor object that can be used to implement a delayed write to the stream. After obtaining the write cursor, you can take note of the current pos in the stream and then continue issuing stream.write operations normally. Once all of the content is written, you obtain pos again to determine the final value of the length prefix. Throughout the whole time, you are free to call write on the cursor to populate the "hole" left in the stream with bytes, but at the end you must call finalize to unlock the stream for flushing. You can also perform the final write and the finalization in one step with finalWrite (the one-step approach is mandatory for variable-size prefixes).
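
Here is a sketch of a fixed-size length prefix handled with these APIs (toBytesBE from stew/endians2 is used for the byte conversion, and the exact WriteCursor write signature is an assumption):

import
  faststreams/outputs, stew/endians2

proc writeLengthPrefixed(stream: OutputStream, payload: string) =
  var cursor = stream.delayFixedSizeWrite(4) # leave a 4-byte "hole" for the prefix
  let startPos = stream.pos
  stream.write payload
  let contentLen = stream.pos - startPos
  # Fill the hole and unlock the stream for flushing in one step:
  cursor.finalWrite toBytesBE(uint32 contentLen)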

Pipeline

(This section is a stub and it will be expanded with more details in the future)

A Pipeline represents a chain of transformations that should be applied to a stream. It starts with an InputStream, followed by one or more transformation steps, and ends with a result-obtaining operation.

Each transformation step is a function of the kind:

type PipelineStep* = proc (i: InputStream, o: OutputStream)
                          {.gcsafe, raises: [Defect, CatchableError].}

A result obtaining operation is a function of the kind:

type PipelineResultProc*[T] = proc (i: InputStream): T
                                   {.gcsafe, raises: [Defect, CatchableError].}

Please note that stream.getOutput is an example of such a function.
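
For illustration, a trivial transformation step matching the PipelineStep shape might look like this (a sketch; writing single bytes through write is assumed):

import
  faststreams/[inputs, outputs]

proc copyStep(i: InputStream, o: OutputStream)
             {.gcsafe, raises: [Defect, CatchableError].} =
  # Pass the input through unchanged, one byte at a time:
  while i.readable:
    o.write i.read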

Pipelines are executed in place with the executePipeline API. If the first input source is async, the whole pipeline will be executed asynchronously, which can result in much lower memory usage.

The pipeline transformation steps are usually employing the fsMultiSync pragma to make them usable in both synchronous and asynchronous scenarios.

Please note that the higher-level APIs above merely simplify the instantiation of multiple Pipe objects, which can be used to hook input and output streams together in arbitrary ways.

A stream multiplexer, for example, is likely to rely directly on the lower-level Pipe objects and the underlying PageBuffers.

License

Licensed and distributed under either of

  • MIT license: http://opensource.org/licenses/MIT
  • Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

at your option. This file may not be copied, modified, or distributed except according to those terms.
