Apache Tika File Similarity based on Jaccard distance, Edit distance & Cosine distance

This project demonstrates how to use the Tika-Python package (a Python port of Apache Tika) to compute file similarity based on metadata features.

The script can iterate over all files in a directory, or over specific files given on the command line, derive their metadata features, and compute the union of all features. The union of all features becomes the "golden feature set" that each document's features are compared to via intersection. The length of that intersection for a file, divided by the length of the union set, becomes the similarity score.
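
For illustration, here is a minimal sketch of that scoring in Python, assuming tika-python is installed. The file names are placeholders, and metadata key names are used as the features (see similarity.py for the actual implementation):

from tika import parser

files = ["a.pdf", "b.pdf", "c.jpg"]  # placeholder input files
feature_sets = {}
for f in files:
    parsed = parser.from_file(f)
    feature_sets[f] = set((parsed.get("metadata") or {}).keys())

# Union of every file's features: the "golden feature set"
golden = set.union(*feature_sets.values())

# Per-file score: |features intersect golden| / |golden|, sorted descending
scores = {f: len(s & golden) / len(golden) for f, s in feature_sets.items()}
for f, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f, round(score, 3))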

Scores are sorted in descending order and can be shown in three different Data-Driven Documents (D3) visualizations. A companion project to this effort is Auto Extractor, which uses Apache Spark and Apache Nutch to take web crawl data and produce D3 visualizations and clusters of similar pages.

Installation

git clone https://github.com/chrismattmann/tika-img-similarity
pip install tika editdistance

You can also check out ETLlib.

How to use

Optional: Compute similarity only on specific IANA MIME Type(s) inside a directory using --accept

Key-based comparison

This uses the union of metadata feature names as the golden feature set; a pairwise sketch follows the usage below.

python similarity.py -f [directory of files] [--accept [jpeg pdf etc...]]
or
python similarity.py -c [file1 file2 file3 ...]
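
As a rough sketch of what the key-based mode computes for a pair of files (the paths are placeholders; similarity.py handles whole directories and the golden feature set):

from tika import parser

def metadata_keys(path):
    parsed = parser.from_file(path)
    return set((parsed.get("metadata") or {}).keys())

a = metadata_keys("file1.pdf")  # placeholder paths
b = metadata_keys("file2.pdf")
# Jaccard similarity on metadata key names
print(len(a & b) / len(a | b) if a | b else 0.0)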

Value-based comparison

This uses metadata feature names together with their values as the golden feature set; a pairwise sketch follows the usage below.

python value-similarity.py -f [directory of files] [--accept [jpeg pdf etc...]]
or
python value-similarity.py -c [file1 file2 file3 ...]
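
The value-based mode can be pictured the same way, except each feature is a (name, value) pair; a hedged sketch with placeholder paths:

from tika import parser

def metadata_pairs(path):
    md = parser.from_file(path).get("metadata") or {}
    return {(k, str(v)) for k, v in md.items()}

a = metadata_pairs("file1.pdf")  # placeholder paths
b = metadata_pairs("file2.pdf")
# Jaccard similarity on (feature name, value) pairs
print(len(a & b) / len(a | b) if a | b else 0.0)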

Edit Distance comparison on Metadata Values

  • This computes pairwise similarity scores based on Edit Distance Similarity.
  • A similarity score of 1 implies an identical pair of documents (a sketch follows the option list below).
python edit-value-similarity.py [-h] --inputDir INPUTDIR --outCSV OUTCSV [--accept [png pdf etc...]] [--allKeys]

--inputDir INPUTDIR  path to directory containing files

--outCSV OUTCSV      path to the output CSV file containing pairwise similarity scores based on edit distance

--accept [ACCEPT]    Optional: compute similarity only on specified IANA MIME Type(s)

--allKeys            Optional: compute edit distance across all metadata keys of the 2 documents; the default uses only the intersection of their metadata keys

E.g.: python edit-value-similarity.py --inputDir /path/to/files --outCSV /path/to/output.csv --accept png pdf gif
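
For intuition, a minimal sketch of the per-value calculation using the editdistance package from the installation step (the metadata values are made up; edit-value-similarity.py aggregates this across keys and file pairs):

import editdistance

v1 = "Adobe Photoshop CS6"  # made-up metadata values
v2 = "Adobe Photoshop CC"
dist = editdistance.eval(v1, v2)
# Normalize so 1.0 means identical values
similarity = 1.0 - dist / max(len(v1), len(v2), 1)
print(round(similarity, 3))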

Cosine Distance comparison on Metadata Values

  • This computes pairwise similarity scores based on Cosine Distance Similarity.
  • A similarity score of 1 implies an identical pair of documents (a sketch follows the option list below).
python cosine_similarity.py [-h] --inputDir INPUTDIR --outCSV OUTCSV [--accept [png pdf etc...]]

--inputDir INPUTDIR  path to directory containing files

--outCSV OUTCSV      path to the output CSV file containing pairwise similarity scores based on cosine distance

--accept [ACCEPT]    Optional: compute similarity only on specified IANA MIME Type(s)
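
A hedged sketch of cosine similarity over metadata values, treating whitespace-separated tokens as term counts (cosine_similarity.py may tokenize and weight differently):

import math
from collections import Counter

def cosine(text1, text2):
    a, b = Counter(text1.split()), Counter(text2.split())
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine("Apache Tika content extraction", "Apache Tika metadata extraction"))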

Similarity based on Stylistic/Authorship features

  • This calculates pairwise cosine similarity on a bag of signatures/features produced by extracting stylistic/authorship features from text (a sketch of the wordlist convention follows the options below).
python psykey.py --inputDir INPUTDIR --outCSV OUTCSV --wordlists WORDLIST_FOLDER

--inputDir INPUTDIR  path to directory containing files

--outCSV OUTCSV      path to the output CSV file containing pairwise similarity scores based on cosine distance of stylistic and authorship features

--wordlists WORDLIST_FOLDER    path to a folder of word-list files, one per class, e.g. the wordlist folder provided with the tika-similarity library. If adding your own, make sure each file is a .txt with one word per line; the file name is used as the class name.
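
To make the wordlist convention concrete, here is a sketch of building one document's bag of signatures from a folder of class wordlists (the folder and text are placeholders; psykey.py extracts richer features):

import os

def load_wordlists(folder):
    lists = {}
    for name in os.listdir(folder):
        if name.endswith(".txt"):
            with open(os.path.join(folder, name)) as f:
                # one word per line; the file name becomes the class name
                lists[name[:-4]] = {line.strip().lower() for line in f if line.strip()}
    return lists

def signature(text, wordlists):
    tokens = text.lower().split()
    # feature value per class: how many tokens appear in that class's list
    return {cls: sum(tok in words for tok in tokens)
            for cls, words in wordlists.items()}

wordlists = load_wordlists("wordlists")  # placeholder folder
print(signature("I feel happy and calm today", wordlists))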

D3 visualization

Cluster viz

  • Jaccard Similarity
* python cluster-scores.py [-t threshold_value] (for generating cluster viz)
* open cluster-d3.html (or dynamic-cluster.html for interactive viz) in your browser
  • Edit Distance & Cosine Similarity
* python edit-cosine-cluster.py --inputCSV <PATH TO CSV FILE> --cluster <INTEGER OPTION> (for generating cluster viz)

  <PATH TO CSV FILE> - Path to CSV file generated by running edit-value-similarity.py or cosine_similarity.py
  <INTEGER OPTION> - Pass 0 to cluster based on x-coordinate, 1 to cluster based on y-coordinate, 2 to cluster based on similarity score

* open cluster-d3.html (or dynamic-cluster.html for interactive viz) in your browser

The default threshold value for cluster-scores.py is 0.01.
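
For intuition only, a sketch of how a threshold can induce clusters, linking any pair whose score meets it (cluster-scores.py's actual grouping may differ; the pairs are made up):

THRESHOLD = 0.01  # the default mentioned above

# made-up (file, file, similarity score) tuples
pairs = [("a.pdf", "b.pdf", 0.8), ("b.pdf", "c.jpg", 0.02), ("c.jpg", "d.png", 0.001)]

parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

for f1, f2, score in pairs:
    if score >= THRESHOLD:
        parent[find(f1)] = find(f2)  # merge the two groups

clusters = {}
for f1, f2, _ in pairs:
    for f in (f1, f2):
        clusters.setdefault(find(f), set()).add(f)
print(list(clusters.values()))  # [{a, b, c}, {d}]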

Circlepacking viz

  • Jaccard Similarity
* python circle-packing.py (for generating circlepacking viz)
* open circlepacking.html (or dynamic-circlepacking.html for interactive viz) in your browser
  • Edit Distance & Cosine Similarity
* python edit-cosine-circle-packing.py --inputCSV <PATH TO CSV FILE> --cluster <INTEGER OPTION> (for generating circlepacking viz)

  <PATH TO CSV FILE> - Path to CSV file generated by running edit-value-similarity.py or cosine_similarity.py
  <INTEGER OPTION> - Pass 0 to cluster based on x-coordinate, 1 to cluster based on y-coordinate, 2 to cluster based on similarity score


* open circlepacking.html (or dynamic-circlepacking.html for interactive viz) in your browser

Composite viz

This is a combination of the cluster viz and the circlepacking viz. The deeper the color, the more attributes the documents in that cluster share.

* open compositeViz.html in your browser

[Image: composite viz]

Big data way

If you are dealing with big data, you can use it this way:

* python generateLevelCluster.py (for generating level cluster viz)
* open levelCluster-d3.html in your browser

You can set the maximum number of children per node via _maxNumNode (default: _maxNumNode = 10) in generateLevelCluster.py.

[Image: level composite viz]
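
A sketch of the fan-out capping idea behind _maxNumNode: wrap a flat list of children in intermediate groups until no node exceeds the limit (generateLevelCluster.py's real tree-building may differ):

_maxNumNode = 10  # the default noted above

def build_levels(children):
    # keep grouping until the top level fits under the cap
    while len(children) > _maxNumNode:
        children = [{"name": "group", "children": children[i:i + _maxNumNode]}
                    for i in range(0, len(children), _maxNumNode)]
    return {"name": "root", "children": children}

leaves = [{"name": "doc%d" % i} for i in range(23)]  # placeholder leaves
tree = build_levels(leaves)
print(len(tree["children"]))  # 3, which is <= _maxNumNode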

Questions, comments?

Send them to Chris A. Mattmann.

Contributors

  • Chris A. Mattmann, JPL
  • Dongni Zhao, USC
  • Harshavardhan Manjunatha, USC
  • Thamme Gowda, USC
  • Ayberk Yılmaz, USC
  • Aravind Ram, USC
  • Aishwarya Parameshwaran, USC
  • Rashmi Nalwad, USC
  • Asitang Mishra, JPL
  • Suzanne Stathatos, JPL

License

This project is licensed under the Apache License, version 2.0.
