Extended Memory Semantics - Persistent shared object memory and parallelism for Node.js and Python

OSX | Linux | Node 4.1-14.x, Python 2/3

API Documentation | EMS Website

Extended Memory Semantics (EMS)

EMS makes persistent shared-memory parallelism possible between Node.js, Python, and C/C++.

Extended Memory Semantics (EMS) unifies synchronization and storage primitives to address several challenges of parallel programming:

  • Allows any number or kind of processes to share objects
  • Manages synchronization and object coherency
  • Implements persistence to non-volatile memory and secondary storage
  • Provides dynamic load-balancing between processes
  • May substitute or complement other forms of parallelism

Examples: Parallel web servers, word counting


EMS is targeted at tasks too large for one core or one process but too small for a scalable cluster

A modern multi-core server has 16-32 cores and nearly 1TB of memory, equivalent to an entire rack of systems from a few years ago. As a consequence, jobs formerly requiring a Map-Reduce cluster can now be performed entirely in shared memory on a single server without using distributed programming.

Sharing Persistent Objects Between Python and JavaScript

An inter-language example lives in interlanguage.{js,py}. The animated GIF demonstrates the following steps, sketched in code after this list:

  • Start Node.js REPL, create an EMS memory
  • Store "Hello"
  • Open a second session, begin the Python REPL
  • Connect Python to the EMS shared memory
  • Show the object created by JS is present in Python
  • Modify the object, and show the modification can be seen in JS
  • Exit both REPLs so no programs are running to "own" the EMS memory
  • Restart Python, show the memory is still present
  • Initialize a counter from Python
  • Demonstrate atomic Fetch and Add in JS
  • Start a loop in Python incrementing the counter
  • Simultaneously print and modify the value from JS
  • Try to read "empty" data from Python, the process blocks
  • Write the empty memory, marking it full, Python resumes execution
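
A hedged sketch of the Node.js half of this demo (the Python side in interlanguage.py mirrors these calls); the option names and methods follow the API documentation, and the filename is a placeholder:

// Minimal sketch of the Node.js side of the inter-language demo.
const ems = require('ems')(1);               // one process in this REPL
const shared = ems.new({
    dimensions: [1024],                      // number of elements
    heapSize: 1024 * 64,                     // bytes for string/object data
    useMap: true,                            // index by key, not integer
    useExisting: false,                      // create the memory on first use
    persist: true,                           // survive after both REPLs exit
    dataFill: 0,                             // so counters start at zero
    filename: '/tmp/interlanguage.ems'       // placeholder path
});
shared.write('greeting', 'Hello');           // visible from the Python REPL
shared.faa('counter', 1);                    // atomic Fetch-and-Add
shared.writeEF('handoff', 'data');           // marks the element Full, unblocking
                                             // a Python readFE() waiting on Empty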

Types of Concurrency

EMS extends application capabilities to include transactional memory and other fine-grained synchronization capabilities.

EMS implements several different parallel execution models (a brief sketch follows the list):
  • Fork-Join Multiprocess: execution begins with a single process that creates new processes as needed; those processes then wait for each other to complete.
  • Bulk Synchronous Parallel: execution begins with each process starting the program at the main entry point and executing all of its statements.
  • User Defined: parallelism may include ad-hoc processes and mixed-language applications.
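
A minimal sketch of selecting a model, assuming the initializer signature require('ems')(nProcesses, pinProcesses, model) from the API documentation:

// Fork-Join: one initial process that forks and joins workers on demand.
const ems = require('ems')(8, false, 'fj');
ems.parallel(function () {
    console.log('hello from process ' + ems.myID);  // runs in all 8 processes
});                                                 // implicit join here

// Bulk Synchronous Parallel: require('ems')(8, false, 'bsp') instead starts
// all 8 processes at the program entry point; 'user' leaves process creation
// and coordination entirely to the application.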

Built-in Atomic Operations

EMS operations may be performed on any JSON data type, and read-modify-write operations may use any combination of JSON data types, just like operations on ordinary data.

Atomic read-modify-write operations are available in all concurrency modes; however, collective operations are not available in user-defined mode. A short code sketch follows the list below.

  • Atomic Operations: Read, write, readers-writer lock, read when full and atomically mark empty, write when empty and atomically mark full

  • Primitives: Stacks, queues, transactions

  • Read-Modify-Write: Fetch-and-Add, Compare and Swap

  • Collective Operations: All the basic OpenMP collective operations are implemented in EMS: dynamic, block, guided, and the full complement of static loop scheduling, as well as barriers, and master and single execution regions
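
A hedged sketch exercising the operations above on a small EMS array; the method names follow the API documentation, and the sizes are placeholders:

const ems = require('ems')(1);
const a = ems.new({dimensions: [100], heapSize: 10000,
                   dataFill: 0, setFEtags: 'full'});

a.write(3, 42);               // ordinary write, leaves the tag Full
const x = a.readFE(3);        // read when Full, atomically mark Empty
a.writeEF(3, x + 1);          // write when Empty, atomically mark Full
const prev = a.faa(3, 10);    // Fetch-and-Add, returns the prior value (43)
a.cas(3, 53, 99);             // Compare-and-Swap: store 99 only if value is 53
a.push('job');                // stack primitive
a.enqueue('job');             // queue primitive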

Examples and Benchmarks

API Documentation | EMS Website

Word Counting Using Atomic Operations

Word counting example

Map-Reduce is often demonstrated using word counting because each document can be processed in parallel, with each document's dictionary then reduced into a single dictionary. This EMS implementation also iterates over documents in parallel, but it maintains a single dictionary shared across all processes, atomically incrementing the count of each word found. The final word counts are sorted and the most frequently appearing words are printed with their counts.
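
A condensed, hedged sketch of that approach (the full program is the word counting example in the repository; the file names and sizes here are placeholders):

const fs = require('fs');
const ems = require('ems')(8);                  // 8 processes, one shared dictionary
const counts = ems.new({
    dimensions: [1000000], heapSize: 1000000 * 20,
    useMap: true,                               // key the array by the word itself
    dataFill: 0, setFEtags: 'full'
});

const files = ['doc0.txt', 'doc1.txt'];         // placeholder corpus
ems.parForEach(0, files.length, function (i) {  // documents processed in parallel
    for (const word of fs.readFileSync(files[i], 'utf8').split(/\s+/)) {
        if (word) counts.faa(word, 1);          // atomic increment, no reduce step
    }
});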

The performance of this program was measured using an Amazon EC2 instance:
c4.8xlarge (132 ECUs, 36 vCPUs, 2.9 GHz Intel Xeon E5-2666v3, 60 GiB memory). The leveling off of scaling around 16 cores, despite the presence of ample work, may be related to the use of non-dedicated hardware: half of the 36 vCPUs are presumably HyperThreads or otherwise shared resources. AWS instances are also bandwidth-limited to EBS storage, where our Gutenberg corpus is stored.

Bandwidth Benchmarking

STREAMS Example

A benchmark similar to STREAMS measures the maximum rate at which EMS double-precision floating-point operations can be performed on a c4.8xlarge (132 ECUs, 36 vCPUs, 2.9 GHz Intel Xeon E5-2666v3, 60 GiB memory).
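
A rough sketch of such a measurement, assuming parForEach and barrier from the API documentation and that numeric-only arrays need no heap; the count is one read plus one write per element:

const ems = require('ems')(8);
const n = 10000000;
const a = ems.new({dimensions: [n], heapSize: 0, dataFill: 0.0});

const start = Date.now();
ems.parForEach(0, n, function (i) {
    a.write(i, a.read(i) + 1.0);                // one double-precision read + write
});
ems.barrier();                                  // wait for all processes to finish
if (ems.myID === 0) {
    const secs = (Date.now() - start) / 1000;
    console.log((2 * n / secs / 1e6).toFixed(1) + ' M ops/sec');
}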

Benchmarking of Transactions and Work Queues

Transactions and Work Queues Example

Transactional performance is measured alone, and again with a separate process appending new transactions to the queue as work is removed from it. The experiments were run using an Amazon EC2 instance:
c4.8xlarge (132 ECUs, 36 vCPUs, 2.9 GHz Intel Xeon E5-2666v3, 60 GiB memory)

Experiment Design

Six EMS arrays are created, each holding 1,000,000 numbers. During the benchmark, 1,000,000 transactions are performed; each transaction involves 1-5 randomly selected elements of randomly selected EMS arrays. The transaction reads all the elements and performs a read-modify-write operation involving at least 80% of them. After all the transactions are complete, the array elements are checked to confirm all the operations have occurred.
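
A hedged sketch of a single such transaction, using tmStart/tmEnd as described in the API documentation; the sizes and indices are placeholders:

const ems = require('ems')(8);
const tables = [];
for (let t = 0; t < 6; t++) {                   // six EMS arrays of 1,000,000 numbers
    tables.push(ems.new({dimensions: [1000000], heapSize: 0, dataFill: 0}));
}

// Touch elements in two tables; the trailing true marks an entry
// as read-only within the transaction.
const tx = ems.tmStart([[tables[0], 123], [tables[4], 456, true]]);
const sum = tables[0].read(123) + tables[4].read(456);  // read all elements
tables[0].write(123, sum);                              // read-modify-write
ems.tmEnd(tx, true);                            // true commits, false aborts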

The parallel process scheduling model used is block dynamic (the default), where each process is responsible for successively smaller blocks of iterations. The execution model is bulk synchronous parallel: each process enters the program at the same main entry point and executes all the statements in the program. forEach loops have their normal semantics of performing all iterations in every process, while parForEach loops are distributed across processes, each executing only a portion of the total iteration space.
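
A small sketch of the loop semantics just described; 'dynamic' is assumed to name the block-dynamic default, and the other schedule names follow the collectives list earlier:

const ems = require('ems')(8);
const a = ems.new({dimensions: [1000], heapSize: 0, dataFill: 0});

// Every process executes this statement, but each performs only its
// own share of the 1000 iterations.
ems.parForEach(0, 1000, function (i) {
    a.write(i, i * i);
}, 'dynamic');                                  // or 'block', 'guided', 'static'

// An ordinary forEach, by contrast, runs all iterations in every process.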


Immediate Transactions: Each process generates a transaction on integer data, then immediately performs it.

Transactions from a Queue: One of the processes generates the individual transactions and appends them to a work queue from which the other processes take work. Note: As the number of processes increases, the process generating the transactions and appending them to the work queue is starved out by the processes performing transactions, naturally maximizing the data access rate.

Immediate Transactions on Strings: Each process generates a transaction appending to a string, then immediately performs the transaction.

Measurements

  • Elem. Ref'd: Total number of elements read and/or written
  • Table Updates: Number of different EMS arrays (tables) written to
  • Trans. Performed: Number of transactions performed across all EMS arrays (tables)
  • Trans. Enqueued: Rate at which transactions are added to the work queue (only 1 generator thread in these experiments)

Synchronization as a Property of the Data, Not a Duty for Tasks

API Documentation | EMS Website

EMS internally stores tags that are used for synchronization of user data, allowing synchronization to happen independently of the number or kind of processes accessing the data. The tags can be thought of as being in one of three states, Empty, Full, or Read-Only, and the EMS intrinsic functions enforce atomic access through automatic state transitions.

The EMS array may be indexed directly using an integer, or through a key-index mapping from any primitive type; both styles are sketched below. When a map is used, the key and the data itself are updated atomically.
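
A brief sketch of both indexing styles; the scalar ems.new(nElements) shorthand is assumed from the API documentation:

const ems = require('ems')(1);

const byIndex = ems.new(100);                   // integer-indexed, 100 elements
byIndex.write(42, 3.14);

const byKey = ems.new({
    dimensions: [100], heapSize: 10000,
    useMap: true                                // map any primitive key to an index
});
byKey.write('sensor-7', 98.6);                  // key and data updated atomically
console.log(byKey.read('sensor-7'));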



EMS memory is an array of JSON values (Number, Boolean, String, Undefined, or Object) accessed using atomic operators and/or transactional memory. Safe parallel access is managed by passing through multiple gates: First mapping a key to an index, then accessing user data protected by EMS tags, and completing the whole operation atomically.


EMS Data Tag Transitions & Atomic Operations: F=Full, E=Empty, X=Don't Care, RW=Readers-Writer lock (# of current readers), CAS=Compare-and-Swap, FAA=Fetch-and-Add

More Technical Information

For a more complete description of the principles of operation, contact the author at [email protected]

Complete API reference


Installation

Because modern systems are already multicore, parallel programs require no additional equipment, system permissions, or application services, making it easy to get started. The reduced complexity of lightweight threads communicating through shared memory is reflected in a rapid code-debug cycle for ad-hoc application development.

Quick Start with the Makefile

A makefile automates most build and test tasks for all C, Python 2 and 3, and Node.js targets.

dunlin> make help
         Extended Memory Semantics  --  Build Targets
===========================================================
    all                       Build all targets, run all tests
    node                      Build only Node.js
    py                        Build both Python 2 and 3

    py[2|3]                   Build only Python2 or 3
    test                      Run both Node.js and Py tests
    test[_js|_py|_py2|_py3]   Run only Node.js, or only Py tests, respectively
    clean                     Remove all files that can be regenerated
    clean[_js|_py|_py2|_py3]  Remove Node.js or Py files that can be regenerated

Install via npm

EMS is available as an NPM package. EMS depends on the Node Addon API (N-API) package.

npm install ems

Install via GitHub

Download the source code, then compile the native code:

git clone https://github.com/mogill/ems.git
cd ems
npm install

Installing for Python

Python users should download and install EMS from git (see above). There is no PIP package, but not due to lack of desire or effort. A pull request is most welcome!

Run Some Examples

Click here for Detailed Examples.

On a Mac and most Linux distributions EMS will "just work", but some Linux distributions restrict access to shared memory. The quick workaround is to run jobs as root; a long-term solution will vary with the Linux distribution.

Run the work-queue-driven transaction processing example on 8 processes:

npm run <example>

Or manually via:

cd Examples
node concurrent_Q_and_TM.js 8

Running all the tests with 8 processes:

npm run test      # Alternatively: npm test
cd Tests
rm -f EMSthreadStub.js   # Do not run the machine generated script used by EMS
for test in `ls *js`; do node $test 8; done

Platforms Supported

As of 2016-05-01, Mac/Darwin and Linux are supported. A Windows port pull request is welcome!

Roadmap

EMS 1.0 uses NAN for long-term Node.js support; we continue to develop on OSX and Linux via Vagrant.

EMS 1.3 introduces a C API.

EMS 1.4 introduces a Python API.

EMS 1.4.8 improves the examples and documentation.

EMS 1.5 refactors JS-EMS object conversion temporary storage.

EMS 1.6 [This Release] replaces the deprecated Node.js NAN API with N-API.

EMS 1.7 [Planned] Key deletion that frees all resources. Replace open hashing with chaining.

EMS 1.8 [Planned] Memory allocator based on R. Marotta, M. Ianni, A. Scarselli, A. Pellegrini and F. Quaglia, "NBBS: A Non-Blocking Buddy System for Multi-core Machines," 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), Larnaca, Cyprus, 2019, pp. 11-20, doi: 10.1109/CCGRID.2019.00011.

EMS 1.9 [Planned] Vectorized JSON indexer.

EMS 1.10 [Planned] Support for persistent main system memory (PMEM) when multiple processes are supported.

EMS 2.0 [Planned] New API that integrates more tightly with ES6, Python, and other dynamically typed languages, making atomic operations on persistent memory more transparent.

License

BSD; other commercial and open source licenses are available.

Links

API Documentation

EMS Website

Download the NPM package

Get the source at GitHub

About

Jace A Mogill specializes in hardware/software co-design of resource constrained computing at both the largest and smallest scales. He has over 20 years experience with distributed, multi-core, FPGA, CGRA, GPU, CPU, and custom computer architectures.

Copyright (C)2011-2020 Jace A Mogill