  • Stars: 129
  • Language: Scala
  • License: Apache License 2.0
  • Created about 7 years ago
  • Updated 5 months ago

Repository Details

A framework for rapid reporting API development, with out-of-the-box support for high-cardinality dimension lookups with Druid.

Pipeline Status

Google Group: Maha-Users

Maha Release Notes

Maha Release Pipeline

Maha

A centralised library for building reporting APIs on top of multiple data stores to exploit them for what they do best.

We run millions of queries on multiple data sources for analytics every day, on Hive, Oracle, Druid, etc. We needed a way to utilize the data stores in our architecture for what they do best, which meant easily tuning and identifying the sets of use cases where each data store fits best. Our goal became to build a centralized system that could make these decisions on the fly at query time and take care of end-to-end query execution. The system needed to take in all available heuristics, apply any constraints already defined in the system, and select the best data store to run the query. It would then generate the underlying queries and pass all available information to the query execution layer to facilitate further optimization at that layer.
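As a toy illustration of the idea above (not Maha's actual implementation; engine names, costs, and constraints here are made up), engine selection can be modeled as picking the lowest-cost data store whose constraints admit the query:

```python
# Toy sketch of cost-based engine selection; the heuristics and
# fields below are illustrative only, not Maha's real config or API.
def select_engine(query_days, needs_exact_counts, engines):
    """Pick the cheapest engine whose constraints admit the query."""
    candidates = [
        e for e in engines
        if query_days <= e["max_days_window"]
        and not (needs_exact_counts and e["approximate_only"])
    ]
    if not candidates:
        raise ValueError("no engine can serve this query")
    return min(candidates, key=lambda e: e["cost"])["name"]

engines = [
    {"name": "druid",  "cost": 1, "max_days_window": 31,  "approximate_only": True},
    {"name": "oracle", "cost": 5, "max_days_window": 400, "approximate_only": False},
]

print(select_engine(7, False, engines))   # short window, cheap -> druid
print(select_engine(90, False, engines))  # beyond druid's window -> oracle
```

The real system factors in many more heuristics (grain, weight, cube constraints), but the shape of the decision is the same: filter by constraints, then rank by cost.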

Key Features!

  • Configuration-driven API, making it easy to address multiple reporting use cases
  • Define cubes across multiple data sources (Oracle, Druid, Hive)
  • Dynamic selection of the query data source based on query cost, grain, and weight
  • Dynamic query generation with support for filtering and ordering on every column, pagination, star schema joins, query types, etc.
  • Pluggable partitioning schemes and time providers
  • Access control based on schema/labeling in cube definitions
  • Constraints on max look-back and max days window in cube definitions
  • Easy aliasing of physical column names across tables/engines
  • Query execution for Oracle and Druid out of the box
  • Support for dim-driven queries for entity management alongside metrics
  • API-side joins between Oracle/Druid for fact-driven or dim-driven queries
  • Fault-tolerant APIs: fall back to another data source if configured
  • Support for customizing and tweaking each data source's executor config
  • MahaRequestLog: Kafka logging of API statistics
  • Support for high-cardinality dimension Druid lookups
  • Standard JDBC driver to query Maha (with the Maha dialect), powered by Avatica and Calcite

Maha Architecture


Modules in maha

  • maha-core : responsible for creating the Reporting Request, Request Model (query metadata), Query Generation, and Query Pipeline (engine selection)
  • maha-druid-executor : Druid Query Executor
  • maha-oracle-executor : Oracle Query Executor
  • maha-presto-executor : Presto Query Executor
  • maha-postgres-executor : Postgres Query Executor
  • maha-druid-lookups : Druid lookup extension for high-cardinality dimension lookups
  • maha-par-request : library for parallel execution with blocking and non-blocking callables, using Java utils
  • maha-service : a single JSON config for creating different registries using the fact and dim definitions
  • maha-api-jersey : helper library for exposing the API as a war file, using the maha-service module
  • maha-api-example : end-to-end example implementation of the Maha APIs
  • maha-par-request-2 : library for parallel execution with blocking and non-blocking callables, using Scala utils
  • maha-request-log : Kafka events writer for API usage request stats for a given registry in Maha

Getting Started

Installing Maha API Library

<dependency>
  <groupId>com.yahoo.maha</groupId>
  <artifactId>maha-api-jersey</artifactId>
  <version>6.53</version>
</dependency>
  • maha-api-jersey includes all the dependencies of the other modules

Example Implementation of Maha APIs

  • Maha-Service Examples
    • Druid Wiki Ticker Example
    • H2 Database Student Course Example
      • you can run it locally as a unit test

Druid Wiki Ticker Example

For this example, you need a Druid instance running locally with the wikiticker dataset indexed into Druid; please take a look at http://druid.io/docs/latest/tutorials/quickstart.html

Creating Fact Definition for Druid Wikiticker

ColumnContext.withColumnContext { implicit dc: ColumnContext =>
  Fact.newFact(
    "wikipedia", DailyGrain, DruidEngine, Set(WikiSchema),
    Set(
      DimCol("channel", StrType())
      , DimCol("cityName", StrType())
      , DimCol("comment", StrType(), annotations = Set(EscapingRequired))
      , DimCol("countryIsoCode", StrType(10))
      , DimCol("countryName", StrType(100))
      , DimCol("isAnonymous", StrType(5))
      , DimCol("isMinor", StrType(5))
      , DimCol("isNew", StrType(5))
      , DimCol("isRobot", StrType(5))
      , DimCol("isUnpatrolled", StrType(5))
      , DimCol("metroCode", StrType(100))
      , DimCol("namespace", StrType(100, (Map("Main" -> "Main Namespace", "User" -> "User Namespace", "Category" -> "Category Namespace", "User Talk" -> "User Talk Namespace"), "Unknown Namespace")))
      , DimCol("page", StrType(100))
      , DimCol("regionIsoCode", StrType(10))
      , DimCol("regionName", StrType(200))
      , DimCol("user", StrType(200))
    ),
    Set(
      FactCol("count", IntType())
      , FactCol("added", IntType())
      , FactCol("deleted", IntType())
      , FactCol("delta", IntType())
      , FactCol("user_unique", IntType())
      , DruidDerFactCol("Delta Percentage", DecType(10, 8), "{delta} * 100 / {count}")
    )
  )
}.toPublicFact("wikiticker_stats",
  Set(
    PubCol("channel", "Wiki Channel", InNotInEquality),
    PubCol("cityName", "City Name", InNotInEqualityLike),
    PubCol("countryIsoCode", "Country ISO Code", InNotInEqualityLike),
    PubCol("countryName", "Country Name", InNotInEqualityLike),
    PubCol("isAnonymous", "Is Anonymous", InNotInEquality),
    PubCol("isMinor", "Is Minor", InNotInEquality),
    PubCol("isNew", "Is New", InNotInEquality),
    PubCol("isRobot", "Is Robot", InNotInEquality),
    PubCol("isUnpatrolled", "Is Unpatrolled", InNotInEquality),
    PubCol("metroCode", "Metro Code", InNotInEquality),
    PubCol("namespace", "Namespace", InNotInEquality),
    PubCol("page", "Page", InNotInEquality),
    PubCol("regionIsoCode", "Region Iso Code", InNotInEquality),
    PubCol("regionName", "Region Name", InNotInEqualityLike),
    PubCol("user", "User", InNotInEquality)
  ),
  Set(
    PublicFactCol("count", "Total Count", InBetweenEquality),
    PublicFactCol("added", "Added Count", InBetweenEquality),
    PublicFactCol("deleted", "Deleted Count", InBetweenEquality),
    PublicFactCol("delta", "Delta Count", InBetweenEquality),
    PublicFactCol("user_unique", "Unique User Count", InBetweenEquality),
    PublicFactCol("Delta Percentage", "Delta Percentage", InBetweenEquality)
  ),
  Set.empty,
  getMaxDaysWindow, getMaxDaysLookBack
)

A fact definition is the static specification of the fact and dimension columns present in a table in the data source; you can think of it as an object image of the table. Each DimCol has a base name, a data type, and optional annotations. Annotations configure behavior such as primary/foreign key relationships, special-character escaping during query generation, and static value mappings, e.g. StrType(100, (Map("Main" -> "Main Namespace", "User" -> "User Namespace", "Category" -> "Category Namespace", "User Talk" -> "User Talk Namespace"), "Unknown Namespace")). A fact definition can also contain derived columns; Maha supports the most common arithmetic derivation expressions.

Public Fact: A public fact maps base column names to public names. Public names can be used directly in the request JSON. A public fact is identified by its cube name, e.g. 'wikiticker_stats'. Maha supports versioning on cubes, so you can have multiple versions of the same cube.

Fact/Dimension Registration Factory: Facts and dimensions are registered via static subclasses of FactRegistrationFactory or DimensionRegistrationFactory; these factory classes are referenced in the maha-service-json-config.

maha-service-config.json

The Maha service config JSON is a single-place configuration for launching the Maha APIs; it includes the following.

  • The set of public facts registered under a registry name, e.g. the wikiticker_stats cube is registered under the registry name wiki
  • The set of registries
  • The set of query generators and their config
  • The set of query executors and their config
  • Bucketing configurations, containing cube-version-based routing of reporting requests
  • UTC time provider maps; if the date/time is a local date, a UTC time provider can convert it to UTC in the query generation phase
  • Parallel service executor maps for serving reporting requests using the thread-pool config
  • Maha request logging config: the Kafka configuration for logging Maha request debug logs to a Kafka queue

We have created the configuration api-jersey/src/test/resources/maha-service-config.json to start with; this is the Maha API configuration for the student and wiki registries.

Debugging maha-service-config.json: For the configuration syntax of this JSON, take a look at the JsonModels/Factories in the service module. When Maha Service loads this configuration, any failures are returned by mahaService as a list of FailedToConstructFactory / ServiceConfigurationError / JsonParseError.

Exposing the endpoints with api-jersey

Api-jersey uses the maha-service-config JSON and creates MahaResource beans. All you need to do is create the following three beans: 'mahaService', 'baseRequest', and 'exceptionHandler'.

    <bean id="mahaService" class="com.yahoo.maha.service.example.ExampleMahaService" factory-method="getMahaService"/>
    <bean id="baseRequest" class="com.yahoo.maha.service.example.ExampleRequest" factory-method="getRequest"/>
    <bean id="exceptionHandler" class="com.yahoo.maha.api.jersey.GenericExceptionMapper" scope="singleton" />
    <import resource="classpath:maha-jersey-context.xml" />

Once your application context is ready, you are good to launch the war file on the web server. You can take a look at the test application context that we created for running the local demo and unit tests: api-jersey/src/test/resources/testapplicationContext.xml

Launch the Maha API demo locally

Prerequisites:
  • druid.io getting-started guide running locally for the wikiticker demo
  • Postman (optional)
Run the demo:
  • Step 1: Check out the yahoo/maha repository
  • Step 2: Run mvn clean install in maha
  • Step 3: cd to the api-example module and run mvn jetty:run; you can add -X for debug logs
  • Step 4: Step 3 launches a local jetty server and deploys the maha-api example war, and you are good to play with it!
Playing with the demo:
  • GET Domain request: dimensions and facts. You can fetch the wiki registry domain using curl http://localhost:8080/mahademo/registry/wiki/domain. The domain tells you the list of cubes and, for each cube, the list of fields that you can request for a particular registry. Here wiki is the registry name.

  • GET Flatten Domain request: flattened dimension and fact fields. You can get the flattened domain using curl http://localhost:8080/mahademo/registry/wiki/flattenDomain

  • POST Maha Reporting Request for the example student schema. The MahaRequest will look like the following; you need to pass the cube name, the list of fields you want to fetch, filters, sorting columns, etc.

{
   "cube": "student_performance",
   "selectFields": [
      {
         "field": "Student ID"
      },
      {
         "field": "Class ID"
      },
      {
         "field": "Section ID"
      },
      {
         "field": "Total Marks"
      }
   ],
   "filterExpressions": [
      {
         "field": "Day",
         "operator": "between",
         "from": "2017-10-20",
         "to": "2017-10-25"
      },
      {
         "field": "Student ID",
         "operator": "=",
         "value": "213"
      }
   ]
} 

You can find student.json in the api-example module. Make sure you change the dates to a recent date range (YYYY-MM-dd) to avoid the max look-back window error.
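To avoid the look-back error, the Day filter can be generated programmatically. This sketch (plain Python, assuming only the request shape shown above) writes a student.json whose window ends today:

```python
import json
from datetime import date, timedelta

# Build the example reporting request with a recent date window so the
# max look-back constraint is not violated; dates use YYYY-MM-dd.
today = date.today()
request = {
    "cube": "student_performance",
    "selectFields": [
        {"field": "Student ID"},
        {"field": "Class ID"},
        {"field": "Section ID"},
        {"field": "Total Marks"},
    ],
    "filterExpressions": [
        {"field": "Day", "operator": "between",
         "from": (today - timedelta(days=5)).isoformat(),
         "to": today.isoformat()},
        {"field": "Student ID", "operator": "=", "value": "213"},
    ],
}

with open("student.json", "w") as f:
    json.dump(request, f, indent=2)
```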

Curl command :

curl -H "Content-Type: application/json" -H "Accept: application/json" -X POST -d @student.json "http://localhost:8080/mahademo/registry/student/schemas/student/query?debug=true"

Sync Output :

{
	"header": {
		"cube": "student_performance",
		"fields": [{
				"fieldName": "Student ID",
				"fieldType": "DIM"
			},
			{
				"fieldName": "Class ID",
				"fieldType": "DIM"
			},
			{
				"fieldName": "Section ID",
				"fieldType": "DIM"
			},
			{
				"fieldName": "Total Marks",
				"fieldType": "FACT"
			}
		],
		"maxRows": 200
	},
	"rows": [
		[213, 200, 100, 125],
		[213, 198, 100, 120]
	]
}
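The sync output pairs a header with positional rows. A small sketch (plain Python, using only the response shape shown above) turns them into per-row dicts keyed by field name:

```python
# Convert Maha's header+rows sync output into a list of dicts,
# keyed by fieldName; 'response' mirrors the example output above.
response = {
    "header": {
        "cube": "student_performance",
        "fields": [
            {"fieldName": "Student ID", "fieldType": "DIM"},
            {"fieldName": "Class ID", "fieldType": "DIM"},
            {"fieldName": "Section ID", "fieldType": "DIM"},
            {"fieldName": "Total Marks", "fieldType": "FACT"},
        ],
        "maxRows": 200,
    },
    "rows": [[213, 200, 100, 125], [213, 198, 100, 120]],
}

names = [f["fieldName"] for f in response["header"]["fields"]]
records = [dict(zip(names, row)) for row in response["rows"]]
print(records[0])  # {'Student ID': 213, 'Class ID': 200, 'Section ID': 100, 'Total Marks': 125}
```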
  • POST Maha Reporting Request for the example wiki schema

Request :

{
   "cube": "wikiticker_stats",
   "selectFields": [
      {
         "field": "Wiki Channel"
      },
      {
         "field": "Total Count"
      },
      {
         "field": "Added Count"
      },
      {
         "field": "Deleted Count"
      }
   ],
   "filterExpressions": [
      {
         "field": "Day",
         "operator": "between",
         "from": "2015-09-11",
         "to": "2015-09-13"
      }
   ]
}     

Curl :

      curl -H "Content-Type: application/json" -H "Accept: application/json" -X POST -d @wikiticker.json "http://localhost:8080/mahademo/registry/wiki/schemas/wiki/query?debug=true"

Output :

{"header":{"cube":"wikiticker_stats","fields":[{"fieldName":"Wiki Channel","fieldType":"DIM"},{"fieldName":"Total Count","fieldType":"FACT"},{"fieldName":"Added Count","fieldType":"FACT"},{"fieldName":"Deleted Count","fieldType":"FACT"}],"maxRows":200},"rows":[["#ar.wikipedia",423,153605,2727],["#be.wikipedia",33,46815,1235],["#bg.wikipedia",75,41674,528],["#ca.wikipedia",478,112482,1651],["#ce.wikipedia",60,83925,135],["#cs.wikipedia",222,132768,1443],["#da.wikipedia",96,44879,1097],["#de.wikipedia",2523,522625,35407],["#el.wikipedia",251,31400,9530],["#en.wikipedia",11549,3045299,176483],["#eo.wikipedia",22,13539,2],["#es.wikipedia",1256,634670,15983],["#et.wikipedia",52,2758,483],["#eu.wikipedia",13,6690,43],["#fa.wikipedia",219,74733,2798],["#fi.wikipedia",244,54810,2590],["#fr.wikipedia",2099,642555,22487],["#gl.wikipedia",65,12483,526],["#he.wikipedia",246,51302,3533],["#hi.wikipedia",19,34977,60],["#hr.wikipedia",22,25956,204],["#hu.wikipedia",289,166101,2077],["#hy.wikipedia",153,39099,4230],["#id.wikipedia",110,119317,2245],["#it.wikipedia",1383,711011,12579],["#ja.wikipedia",749,317242,21380],["#kk.wikipedia",9,1316,31],["#ko.wikipedia",533,66075,6281],["#la.wikipedia",33,4478,1542],["#lt.wikipedia",20,14866,242],["#min.wikipedia",1,2,0],["#ms.wikipedia",11,21686,556],["#nl.wikipedia",445,145634,6557],["#nn.wikipedia",26,33745,0],["#no.wikipedia",169,51385,1146],["#pl.wikipedia",565,138931,8459],["#pt.wikipedia",472,229144,8444],["#ro.wikipedia",76,28892,1224],["#ru.wikipedia",1386,640698,19612],["#sh.wikipedia",14,6935,2],["#simple.wikipedia",39,43018,546],["#sk.wikipedia",33,12188,72],["#sl.wikipedia",21,3624,266],["#sr.wikipedia",168,72992,2349],["#sv.wikipedia",244,42145,3116],["#tr.wikipedia",208,67193,1126],["#uk.wikipedia",263,137420,1959],["#uz.wikipedia",983,13486,8],["#vi.wikipedia",9747,295972,1388],["#war.wikipedia",1,0,0],["#zh.wikipedia",1126,191033,7916]]}
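The wiki response follows the same header+rows shape. As an example of working with it (plain Python, using a subset of the rows above; columns are channel, Total Count, Added Count, Deleted Count), finding the busiest channel:

```python
# Find the channel with the highest Total Count from a subset of
# the rows in the example output above.
rows = [
    ["#de.wikipedia", 2523, 522625, 35407],
    ["#en.wikipedia", 11549, 3045299, 176483],
    ["#fr.wikipedia", 2099, 642555, 22487],
    ["#ru.wikipedia", 1386, 640698, 19612],
    ["#vi.wikipedia", 9747, 295972, 1388],
]

busiest = max(rows, key=lambda r: r[1])
print(busiest[0])  # #en.wikipedia
```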
  • POST Maha Reporting Request for the example student schema with the TimeShift curator. The MahaRequest will look like the following; you pass the cube name, the list of fields to fetch, filters, and sorting columns in the base request, plus the timeshift curator config (daysOffset is a day offset for requesting the previous period's to and from dates).
{
 "cube": "student_performance",
 "selectFields": [
    {
       "field": "Student ID"
    },
    {
       "field": "Class ID"
    },
    {
       "field": "Section ID"
    },
    {
       "field": "Total Marks"
    }
 ],
 "filterExpressions": [
    {
       "field": "Day",
       "operator": "between",
       "from": "2019-10-20",
       "to": "2019-10-29"
    },
    {
       "field": "Student ID",
       "operator": "=",
       "value": "213"
    }
 ],
"curators": {
  "timeshift": {
    "config" : {
      "daysOffset": 0 
    }
  }
}
}    

Please note that we have loaded the demo test data for the current day and the day before. For the timeshift curator demo, we have also loaded data for 11 days before the current date. Please make sure you update the requested to and from dates accordingly.
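Under one reading of daysOffset (an interpretation for illustration only, not taken from Maha's source), the previous period is the same-length window immediately before the requested window, pushed back by daysOffset additional days:

```python
from datetime import date, timedelta

# Illustrative sketch of the previous-period window; the exact
# semantics of daysOffset in Maha may differ from this assumption.
def previous_period(from_day, to_day, days_offset=0):
    """Same-length window ending just before from_day, shifted back by days_offset."""
    length = (to_day - from_day).days + 1
    prev_to = from_day - timedelta(days=1 + days_offset)
    prev_from = prev_to - timedelta(days=length - 1)
    return prev_from, prev_to

prev = previous_period(date(2019, 10, 20), date(2019, 10, 29), days_offset=0)
print(prev)  # (datetime.date(2019, 10, 10), datetime.date(2019, 10, 19))
```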

Curl command :

curl -H "Content-Type: application/json" -H "Accept: application/json" -X POST -d @student.json "http://localhost:8080/mahademo/registry/student/schemas/student/query?debug=true"

Sync Output :

{
    "header": {
        "cube": "student_performance",
        "fields": [
            {
                "fieldName": "Student ID",
                "fieldType": "DIM"
            },
            {
                "fieldName": "Class ID",
                "fieldType": "DIM"
            },
            {
                "fieldName": "Section ID",
                "fieldType": "DIM"
            },
            {
                "fieldName": "Total Marks",
                "fieldType": "FACT"
            },
            {
                "fieldName": "Total Marks Prev",
                "fieldType": "FACT"
            },
            {
                "fieldName": "Total Marks Pct Change",
                "fieldType": "FACT"
            }
        ],
        "maxRows": 200,
        "debug": {}
    },
    "rows": [
        [
            213,
            198,
            100,
            120,
            98,
            22.45
        ],
        [
            213,
            200,
            100,
            125,
            110,
            13.64
        ]
    ]
}
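The extra columns in the timeshift output can be checked by hand: "Total Marks Pct Change" is the percent change of "Total Marks" against "Total Marks Prev". A quick sketch of the arithmetic (plain Python, matching the rows above):

```python
# Percent change of Total Marks vs. the previous period's value,
# rounded to two decimals as in the output above.
def pct_change(current, previous):
    return round((current - previous) / previous * 100, 2)

print(pct_change(120, 98))   # 22.45
print(pct_change(125, 110))  # 13.64
```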
  • POST Maha Reporting Request for the example wiki schema with the totalmetrics curator

Request :

{
   "cube": "wikiticker_stats",
   "selectFields": [
      {
         "field": "Wiki Channel"
      },
      {
         "field": "Total Count"
      },
      {
         "field": "Added Count"
      },
      {
         "field": "Deleted Count"
      }
   ],
   "filterExpressions": [
      {
         "field": "Day",
         "operator": "between",
         "from": "2015-09-11",
         "to": "2015-09-13"
      }
   ],
   "curators": {
      "totalmetrics": {
         "config": {}
      }
   }
}

In the Druid quickstart tutorial, the wikipedia data is loaded for 2015-09-12, so the requested dates are left unchanged here.

Curl :

      curl -H "Content-Type: application/json" -H "Accept: application/json" -X POST -d @wikiticker.json "http://localhost:8080/mahademo/registry/wiki/schemas/wiki/query?debug=true"

Output :

{
    "header": {
        "cube": "wikiticker_stats",
        "fields": [
            {
                "fieldName": "Wiki Channel",
                "fieldType": "DIM"
            },
            {
                "fieldName": "Total Count",
                "fieldType": "FACT"
            },
            {
                "fieldName": "Added Count",
                "fieldType": "FACT"
            },
            {
                "fieldName": "Deleted Count",
                "fieldType": "FACT"
            }
        ],
        "maxRows": 200,
        "debug": {}
    },
    "rows": [
        [
            "#ar.wikipedia",
            0,
            153605,
            2727
        ],
        [
            "#be.wikipedia",
            0,
            46815,
            1235
        ],
        [
            "#bg.wikipedia",
            0,
            41674,
            528
        ],
        [
            "#ca.wikipedia",
            0,
            112482,
            1651
        ],
        ... trimming other rows 
    ],
    "curators": {
        "totalmetrics": {
            "result": {
                "header": {
                    "cube": "wikiticker_stats",
                    "fields": [
                        {
                            "fieldName": "Total Count",
                            "fieldType": "FACT"
                        },
                        {
                            "fieldName": "Added Count",
                            "fieldType": "FACT"
                        },
                        {
                            "fieldName": "Deleted Count",
                            "fieldType": "FACT"
                        }
                    ],
                    "maxRows": -1,
                    "debug": {}
                },
                "rows": [
                    [
                        0,
                        9385573,
                        394298
                    ]
                ]
            }
        }
    }
}
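The totalmetrics curator's result is the column-wise total of the metric fields across all rows. The same aggregation can be sketched as (plain Python, over a subset of the rows above; the real curator sums over the full result):

```python
# Column-wise totals of the FACT columns (Total Count, Added Count,
# Deleted Count) over a subset of the example rows above.
rows = [
    ["#ar.wikipedia", 0, 153605, 2727],
    ["#be.wikipedia", 0, 46815, 1235],
    ["#bg.wikipedia", 0, 41674, 528],
    ["#ca.wikipedia", 0, 112482, 1651],
]

totals = [sum(r[i] for r in rows) for i in range(1, 4)]
print(totals)  # [0, 354576, 6141]
```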

Maha JDBC Query Layer (Example DBeaver configuration)

Maha is currently queryable via JSON REST APIs. We have also exposed a standard JDBC interface to query Maha, so users can use tools like SQL Lab, DBeaver, or any other database IDE they like.
Users are agnostic to which engine a Maha SQL query fetches data from and get the data back seamlessly, without any client-side code change.
This feature is powered by Apache Calcite for SQL parsing and Avatica JDBC for exposing the JDBC server.


You can follow the steps below to configure your local explorer and query Maha over JDBC.

  1. Follow the steps above and keep your api-example server running. It exposes the endpoint http://localhost:8080/mahademo/registry/student/schemas/student/sql-avatica for the Avatica JDBC connection.
  2. Optionally, run docker run -p 8080:8080 -it pranavbhole/pbs-docker-images:maha-api-example to start the maha-example-api server locally; this lets you skip step 1.
  3. Download the community version of DBeaver from https://dbeaver.io/
  4. Go to Driver Manager and configure the Avatica jar with the following settings:
JDBC URL =  jdbc:avatica:remote:url=http://localhost:8080/mahademo/registry/student/schemas/student/sql-avatica
Driver Class Name =  org.apache.calcite.avatica.remote.Driver
  5. The Avatica driver is mostly backward compatible; we used https://mvnrepository.com/artifact/org.apache.calcite.avatica/avatica-core/1.17.0 for the demo.
  6. Example queries:

DESCRIBE student_performance;

SELECT 'Student ID', 'Total Marks', 'Student Name', 'Student Status' ,'Admitted Year',
 'Class ID' FROM student_performance where 'Student ID' = 213
 ORDER BY 'Total Marks' DESC;


Presentation of 'Maha' at Bay Area Hadoop Meetup held on 29th Oct 2019:

'Maha' at Bay Area Hadoop Meetup held on 29th Oct 2019

Contributors

  • Hiral Patel
  • Pavan Arakere Badarinath
  • Pranav Anil Bhole
  • Shravana Krishnamurthy
  • Jian Shen
  • Shengyao Qian
  • Ryan Wagner
  • Raghu Kumar
  • Hao Wang
  • Surabhi Pandit
  • Parveen Kumar
  • Santhosh Joshi
  • Vivek Chauhan
  • Ravi Chotrani
  • Huiliang Zhang
  • Abhishek Sarangan
  • Jay Yang
  • Ritvik Jaiswal
  • Ashwin Tumma
  • Ann Therese Babu
  • Kevin Chen
  • Priyanka Dadlani

Acknowledgements

  • Oracle Query Optimizations
    • Remesh Balakrishnan
    • Vikas Khanna
  • Druid Query Optimizations
    • Eric Tschetter
    • Himanshu Gupta
    • Gian Merlino
    • Fangjin Yang
  • Hive Query Optimizations
    • Seshasai Kuchimanchi
