
Data Profiler | What's in your data?

The DataProfiler is a Python library designed to make data analysis, monitoring, and sensitive data detection easy.

Loading data takes a single command: the library automatically formats and loads files into a DataFrame. Profiling the data, the library identifies the schema, statistics, entities (PII / NPI), and more. Data profiles can then be used in downstream applications or reports.

Getting started takes only a few lines of code (CSV example):

import json
from dataprofiler import Data, Profiler

data = Data("your_file.csv") # Auto-Detect & Load: CSV, AVRO, Parquet, JSON, Text, URL

print(data.data.head(5)) # Access data directly via a compatible Pandas DataFrame

profile = Profiler(data) # Calculate Statistics, Entity Recognition, etc

readable_report = profile.report(report_options={"output_format": "compact"})

print(json.dumps(readable_report, indent=4))

Note: The Data Profiler comes with a pre-trained deep learning model, used to efficiently identify sensitive data (PII / NPI). If desired, it's easy to add new entities to the existing pre-trained model or insert an entire new pipeline for entity recognition.

For API documentation, visit the documentation page.

If you have suggestions or find a bug, please open an issue.

If you want to contribute, visit the contributing page.


Install

To install the full package from PyPI: pip install DataProfiler[full]

If you want the ML dependencies without report generation, use pip install DataProfiler[ml].

If the ML requirements are too strict (say, you don't want to install TensorFlow), you can install a slimmer package with pip install DataProfiler[reports]. The slimmer package disables the default sensitive data detection / entity recognition (labeler).

To install the base package from PyPI: pip install DataProfiler
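To verify which release you have after installing, you can query the package metadata (a minimal sketch using only the Python standard library; the distribution name matches the one used with pip):

import importlib.metadata

# Prints the installed DataProfiler version string.
print(importlib.metadata.version("DataProfiler"))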


What is a Data Profile?

In this library, a data profile is a dictionary containing statistics and predictions about the underlying dataset. There are "global statistics" (global_stats), which contain dataset-level data, and there are "column/row-level statistics" (data_stats), where each column is a new key-value entry.

The format for a structured profile is below:

"global_stats": {
    "samples_used": int,
    "column_count": int,
    "row_count": int,
    "row_has_null_ratio": float,
    "row_is_null_ratio": float,
    "unique_row_ratio": float,
    "duplicate_row_count": int,
    "file_type": string,
    "encoding": string,
    "correlation_matrix": list[list[int]], (*)
    "chi2_matrix": list[list[float]],
    "profile_schema": {
        string: list[int]
    },
    "times": dict[string, float],
},
"data_stats": [
    {
        "column_name": string,
        "data_type": string,
        "data_label": string,
        "categorical": bool,
        "order": string,
        "samples": list[str],
        "statistics": {
            "sample_size": int,
            "null_count": int,
            "null_types": list[string],
            "null_types_index": {
                string: list[int]
            },
            "data_type_representation": dict[string, float],
            "min": [null, float, str],
            "max": [null, float, str],
            "mode": float,
            "median": float,
            "median_absolute_deviation": float,
            "sum": float,
            "mean": float,
            "variance": float,
            "stddev": float,
            "skewness": float,
            "kurtosis": float,
            "num_zeros": int,
            "num_negatives": int,
            "histogram": {
                "bin_counts": list[int],
                "bin_edges": list[float],
            },
            "quantiles": {
                int: float
            },
            "vocab": list[char],
            "avg_predictions": dict[string, float],
            "data_label_representation": dict[string, float],
            "categories": list[str],
            "unique_count": int,
            "unique_ratio": float,
            "categorical_count": dict[string, int],
            "gini_impurity": float,
            "unalikeability": float,
            "precision": {
                'min': int,
                'max': int,
                'mean': float,
                'var': float,
                'std': float,
                'sample_size': int,
                'margin_of_error': float,
                'confidence_level': float
            },
            "times": dict[string, float],
            "format": string
        },
        "null_replication_metrics": {
            "class_prior": list[int],
            "class_sum": list[list[int]],
            "class_mean": list[list[int]]
        }
    }
]

(*) Currently, the correlation matrix update is toggled off; it will be re-enabled in a later update. Users can still compute it by setting the correlation is_enabled option to True, as shown below.
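For example, the correlation computation can be switched on through ProfilerOptions (a sketch; dp.ProfilerOptions and its set method follow the library's documented options pattern, but treat the exact "correlation.is_enabled" path as an assumption to verify against your version):

import dataprofiler as dp

data = dp.Data("your_file.csv")

# Correlation is off by default; enable it before profiling.
profile_options = dp.ProfilerOptions()
profile_options.set({"correlation.is_enabled": True})  # assumed option path

profile = dp.Profiler(data, options=profile_options)
report = profile.report()
print(report["global_stats"]["correlation_matrix"])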

The format for an unstructured profile is below:

"global_stats": {
    "samples_used": int,
    "empty_line_count": int,
    "file_type": string,
    "encoding": string,
    "memory_size": float, # in MB
    "times": dict[string, float],
},
"data_stats": {
    "data_label": {
        "entity_counts": {
            "word_level": dict[string, int],
            "true_char_level": dict[string, int],
            "postprocess_char_level": dict[string, int]
        },
        "entity_percentages": {
            "word_level": dict[string, float],
            "true_char_level": dict[string, float],
            "postprocess_char_level": dict[string, float]
        },
        "times": dict[string, float]
    },
    "statistics": {
        "vocab": list[char],
        "vocab_count": dict[string, int],
        "words": list[string],
        "word_count": dict[string, int],
        "times": dict[string, float]
    }
}

The format for a graph profile is below:

"num_nodes": int,
"num_edges": int,
"categorical_attributes": list[string],
"continuous_attributes": list[string],
"avg_node_degree": float,
"global_max_component_size": int,
"continuous_distribution": {
    "<attribute_1>": {
        "name": string,
        "scale": float,
        "properties": list[float, np.array]
    },
    "<attribute_2>": None,
    ...
},
"categorical_distribution": {
    "<attribute_1>": None,
    "<attribute_2>": {
        "bin_counts": list[int],
        "bin_edges": list[float]
    },
    ...
},
"times": dict[string, float]

Profile Statistic Descriptions

Structured Profile

global_stats:

  • samples_used - number of input data samples used to generate this profile
  • column_count - the number of columns contained in the input dataset
  • row_count - the number of rows contained in the input dataset
  • row_has_null_ratio - the proportion of rows that contain at least one null value to the total number of rows
  • row_is_null_ratio - the proportion of rows that are fully comprised of null values (null rows) to the total number of rows
  • unique_row_ratio - the proportion of distinct rows in the input dataset to the total number of rows
  • duplicate_row_count - the number of rows that occur more than once in the input dataset
  • file_type - the format of the file containing the input dataset (ex: .csv)
  • encoding - the encoding of the file containing the input dataset (ex: UTF-8)
  • correlation_matrix - matrix of shape column_count x column_count containing the correlation coefficients between each column in the dataset
  • chi2_matrix - matrix of shape column_count x column_count containing the chi-square statistics between each column in the dataset
  • profile_schema - a description of the format of the input dataset labeling each column and its index in the dataset
    • string - the label of the column in question and its index in the profile schema
  • times - the duration of time it took to generate the global statistics for this dataset in milliseconds
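These fields are plain dictionary lookups on a generated report; for instance (a minimal sketch, with your_file.csv as a placeholder):

from dataprofiler import Data, Profiler

data = Data("your_file.csv")
profile = Profiler(data)
report = profile.report(report_options={"output_format": "compact"})

# Dataset-level fields live under "global_stats".
print(report["global_stats"]["row_count"])
print(report["global_stats"]["row_has_null_ratio"])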

data_stats:

  • column_name - the label/title of this column in the input dataset
  • data_type - the primitive python data type that is contained within this column
  • data_label - the label/entity of the data in this column as determined by the Labeler component
  • categorical - 'true' if this column contains categorical data
  • order - the way in which the data in this column is ordered, if any, otherwise "random"
  • samples - a small subset of data entries from this column
  • statistics - statistical information on the column
    • sample_size - number of input data samples used to generate this profile
    • null_count - the number of null entries in the sample
    • null_types - a list of the different null types present within this sample
    • null_types_index - a dict containing each null type and a list of the indices at which it appears within this sample
    • data_type_representation - the percentage of sampled values that can be identified as each data_type
    • min - minimum value in the sample
    • max - maximum value in the sample
    • mode - mode of the entries in the sample
    • median - median of the entries in the sample
    • median_absolute_deviation - the median absolute deviation of the entries in the sample
    • sum - the total of all sampled values from the column
    • mean - the average of all entries in the sample
    • variance - the variance of all entries in the sample
    • stddev - the standard deviation of all entries in the sample
    • skewness - the statistical skewness of all entries in the sample
    • kurtosis - the statistical kurtosis of all entries in the sample
    • num_zeros - the number of entries in this sample that have the value 0
    • num_negatives - the number of entries in this sample that have a value less than 0
    • histogram - contains histogram relevant information
      • bin_counts - the number of entries within each bin
      • bin_edges - the thresholds of each bin
    • quantiles - the value at each percentile in the order they are listed based on the entries in the sample
    • vocab - a list of the characters used within the entries in this sample
    • avg_predictions - average of the data label prediction confidences across all data points sampled
    • categories - a list of each distinct category within the sample if categorical = 'true'
    • unique_count - the number of distinct entries in the sample
    • unique_ratio - the proportion of the number of distinct entries in the sample to the total number of entries in the sample
    • categorical_count - number of entries sampled for each category if categorical = 'true'
    • gini_impurity - measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset
    • unalikeability - a value denoting how frequently entries differ from one another within the sample
    • precision - a dict of statistics with respect to the number of digits in a number for each sample
    • times - the duration of time it took to generate this sample's statistics in milliseconds
    • format - list of possible datetime formats
  • null_replication_metrics - statistics of data partitioned based on whether the column value is null (index 1 of the lists referenced by the dict keys) or not (index 0)
    • class_prior - a list containing the probability of a column value being null and not null
    • class_sum - a list containing the sum of all other rows, partitioned on whether the column value is null or not
    • class_mean - a list containing the mean of all other rows, partitioned on whether the column value is null or not
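Column-level fields nest under each entry of data_stats; for example (a minimal sketch, again with a placeholder file name):

from dataprofiler import Data, Profiler

data = Data("your_file.csv")
report = Profiler(data).report(report_options={"output_format": "compact"})

# Each entry in "data_stats" describes one column.
for column in report["data_stats"]:
    print(column["column_name"], column["data_type"],
          column["statistics"]["null_count"])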

Unstructured Profile

global_stats:

  • samples_used - number of input data samples used to generate this profile
  • empty_line_count - the number of empty lines in the input data
  • file_type - the file type of the input data (ex: .txt)
  • encoding - file encoding of the input data file (ex: UTF-8)
  • memory_size - size of the input data in MB
  • times - duration of time it took to generate this profile in milliseconds

data_stats:

  • data_label - labels and statistics on the labels of the input data
    • entity_counts - the number of times a specific label or entity appears inside the input data
      • word_level - the number of words counted within each label or entity
      • true_char_level - the number of characters counted within each label or entity as determined by the model
      • postprocess_char_level - the number of characters counted within each label or entity as determined by the postprocessor
    • entity_percentages - the percentages of each label or entity within the input data
      • word_level - the percentage of words in the input data that are contained within each label or entity
      • true_char_level - the percentage of characters in the input data that are contained within each label or entity as determined by the model
      • postprocess_char_level - the percentage of characters in the input data that are contained within each label or entity as determined by the postprocessor
    • times - the duration of time it took for the data labeler to predict on the data
  • statistics - statistics of the input data
    • vocab - a list of each character in the input data
    • vocab_count - the number of occurrences of each distinct character in the input data
    • words - a list of each word in the input data
    • word_count - the number of occurrences of each distinct word in the input data
    • times - the duration of time it took to generate the vocab and words statistics in milliseconds
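As with structured reports, these fields are dictionary lookups using the keys listed above (a minimal sketch; text_file.txt is a placeholder):

import dataprofiler as dp

my_text = dp.Data('text_file.txt')
report = dp.Profiler(my_text).report()

# Word-level entity counts, keyed by label (e.g. PERSON, ADDRESS).
print(report["data_stats"]["data_label"]["entity_counts"]["word_level"])
print(report["data_stats"]["statistics"]["word_count"])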

Graph Profile

  • num_nodes - number of nodes in the graph
  • num_edges - number of edges in the graph
  • categorical_attributes - list of categorical edge attributes
  • continuous_attributes - list of continuous edge attributes
  • avg_node_degree - average degree of nodes in the graph
  • global_max_component_size - size of the global max component

continuous_distribution:

  • <attribute_N> - name of the N-th edge attribute in the list of attributes
    • name - name of distribution for attribute
    • scale - negative log likelihood used to scale and compare distributions
    • properties - list of statistical properties describing the distribution
      • [shape (optional), loc, scale, mean, variance, skew, kurtosis]

categorical_distribution:

  • <attribute_N> - name of the N-th edge attribute in the list of attributes
    • bin_counts - counts in each bin of the distribution histogram
    • bin_edges - edges of each bin of the distribution histogram
  • times - duration of time it took to generate this profile in milliseconds
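Since scale is a negative log likelihood, a lower value indicates a better-fitting distribution for that attribute. Reading the graph profile is again a dictionary lookup (a minimal sketch; graph_file.csv is a placeholder):

import dataprofiler as dp

my_graph = dp.Data('graph_file.csv')
report = dp.Profiler(my_graph).report()

print(report["num_nodes"], report["num_edges"])

# Fitted distribution name and scale per continuous edge attribute;
# entries are None for attributes that are categorical instead.
for attribute, fit in report["continuous_distribution"].items():
    if fit is not None:
        print(attribute, fit["name"], fit["scale"])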

Support

Supported Data Formats

  • Any delimited file (CSV, TSV, etc.)
  • JSON object
  • Avro file
  • Parquet file
  • Text file
  • Pandas DataFrame
  • A URL that points to one of the supported file types above

Data Types

Data Types are determined at the column level for structured data.

  • Int
  • Float
  • String
  • DateTime

Data Labels

Data Labels are determined per cell for structured data (column/row when the profiler is used) or at the character level for unstructured data.

  • UNKNOWN
  • ADDRESS
  • BAN (bank account number, 10-18 digits)
  • CREDIT_CARD
  • EMAIL_ADDRESS
  • UUID
  • HASH_OR_KEY (md5, sha1, sha256, random hash, etc.)
  • IPV4
  • IPV6
  • MAC_ADDRESS
  • PERSON
  • PHONE_NUMBER
  • SSN
  • URL
  • US_STATE
  • DRIVERS_LICENSE
  • DATE
  • TIME
  • DATETIME
  • INTEGER
  • FLOAT
  • QUANTITY
  • ORDINAL
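These labels are produced by the pre-trained labeler, which can also be invoked directly (a sketch based on the library's DataLabeler API; the sample text is illustrative, and the exact call pattern may vary by version):

import dataprofiler as dp

# Load the pre-trained unstructured data labeler.
data_labeler = dp.DataLabeler(labeler_type='unstructured')

# Predict entities in raw text; the output contains the predicted labels.
predictions = data_labeler.predict(['My name is John Doe and my number is 555-123-4567'])
print(predictions)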

Get Started

Load a File

The Data Profiler can profile the following data/file types:

  • CSV file (or any delimited file)
  • JSON object
  • Avro file
  • Parquet file
  • Text file
  • Pandas DataFrame
  • A URL that points to one of the supported file types above

The profiler should automatically identify the file type and load the data into a Data class.

Along with other attributes, the Data class enables the data to be accessed via a valid Pandas DataFrame.

# Load a csv file, return a CSVData object
csv_data = Data('your_file.csv')

# Print the first 10 rows of the csv file
print(csv_data.data.head(10))

# Load a parquet file, return a ParquetData object
parquet_data = Data('your_file.parquet')

# Sort the data by the name column
parquet_data.data.sort_values(by='name', inplace=True)

# Print the sorted first 10 rows of the parquet data
print(parquet_data.data.head(10))

# Load a json file from a URL, return a JSONData object
json_data = Data('https://github.com/capitalone/DataProfiler/blob/main/dataprofiler/tests/data/json/iris-utf-8.json')

If the file type is not automatically identified (rare), you can specify it explicitly; see the section Specifying a Filetype or Delimiter.
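A sketch of the explicit form (the data_type and options arguments follow the library's documented pattern for explicit loading; treat the exact names as assumptions to verify against your version):

from dataprofiler import Data

# Explicitly declare the file type and delimiter instead of auto-detecting.
csv_data = Data('your_file.csv', data_type='csv', options={'delimiter': ','})
print(csv_data.data.head(5))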

Profile a File

This example uses a CSV file, but JSON, Avro, Parquet, and text files also work.

import json
from dataprofiler import Data, Profiler

# Load file (CSV should be automatically identified)
data = Data("your_file.csv")

# Profile the dataset
profile = Profiler(data)

# Generate a report and use json to prettify.
report = profile.report(report_options={"output_format": "pretty"})

# Print the report
print(json.dumps(report, indent=4))

Updating Profiles

Currently, the data profiler is equipped to update its profile in batches.

import json
from dataprofiler import Data, Profiler

# Load and profile a CSV file
data = Data("your_file.csv")
profile = Profiler(data)

# Update the profile with new data:
new_data = Data("new_data.csv")
profile.update_profile(new_data)

# Print the report using json to prettify.
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))

Note that if the new data contains integer indices that overlap with the indices of the data originally profiled, the overlapping indices are "shifted" to unoccupied values when null rows are calculated, so null counts and ratios remain accurate.

Merging Profiles

If you have two files with the same schema (but different data), it is possible to merge the two profiles together via an addition operator.

This also enables profiles to be determined in a distributed manner.

import json
from dataprofiler import Data, Profiler

# Load a CSV file with a schema
data1 = Data("file_a.csv")
profile1 = Profiler(data1)

# Load another CSV file with the same schema
data2 = Data("file_b.csv")
profile2 = Profiler(data2)

profile3 = profile1 + profile2

# Print the report using json to prettify.
report = profile3.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))

Note that if the merged profiles had overlapping integer indices, the overlapping indices are "shifted" to unoccupied values when null rows are calculated, so null counts and ratios remain accurate.

Profiler Differences

To find the changes between profiles with the same schema, use the profile's diff function. The diff provides overall file and sampling differences as well as detailed differences in the data's statistics. For example, numerical columns report both a t-test to evaluate similarity and PSI (Population Stability Index) to quantify distribution shift. More information is described in the Profiler section of the GitHub Pages.

Create the difference report like this:

import json
import dataprofiler as dp

# Load a CSV file
data1 = dp.Data("file_a.csv")
profile1 = dp.Profiler(data1)

# Load another CSV file
data2 = dp.Data("file_b.csv")
profile2 = dp.Profiler(data2)

diff_report = profile1.diff(profile2)
print(json.dumps(diff_report, indent=4))

Profile a Pandas DataFrame

import pandas as pd
import dataprofiler as dp
import json

my_dataframe = pd.DataFrame([[1, 2.0],[1, 2.2],[-1, 3]])
profile = dp.Profiler(my_dataframe)

# print the report using json to prettify.
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))

# read a specified column, in this case it is labeled 0:
print(json.dumps(report["data_stats"][0], indent=4))

Unstructured Profiler

In addition to the structured profiler, DataProfiler provides unstructured profiling for a TextData object or string. The unstructured profiler also works with list[string], pd.Series(string), or pd.DataFrame(string) when the profiler_type option is set to 'unstructured'. Below is an example of the unstructured profiler with a text file.

import dataprofiler as dp
import json

my_text = dp.Data('text_file.txt')
profile = dp.Profiler(my_text)

# print the report using json to prettify.
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))

Another example of the unstructured profiler, this time with a pd.Series of strings and the profiler option profiler_type='unstructured':

import dataprofiler as dp
import pandas as pd
import json

text_data = pd.Series(['first string', 'second string'])
profile = dp.Profiler(text_data, profiler_type='unstructured')

# print the report using json to prettify.
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4))

Graph Profiler

DataProfiler can also profile graph data from a CSV file. Below is an example of the graph profiler with a graph-data CSV file:

import dataprofiler as dp
import pprint

my_graph = dp.Data('graph_file.csv')
profile = dp.Profiler(my_graph)

# print the report using pretty print (json dump does not work on numpy array values inside dict)
report = profile.report()
printer = pprint.PrettyPrinter(sort_dicts=False, compact=True)
printer.pprint(report)

Visit the documentation page for additional examples and API details.

References

Sensitive Data Detection with High-Throughput Neural Network Models for Financial Institutions
Anh Truong, Austin Walters, Jeremy Goodsitt
The AAAI-21 Workshop on Knowledge Discovery from Unstructured Data in Financial Services, 2020. https://arxiv.org/abs/2012.09597
