• Stars: 2
  • Language: Jupyter Notebook
  • Created about 5 years ago
  • Updated about 5 years ago

Repository Details

Object detection for faces and eyes using a pre-trained model, plus code for sample collection.
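As a hedged illustration of the idea (not necessarily the exact pre-trained model used in this repo), face and eye detection can be run with OpenCV's bundled Haar cascades:

```python
import cv2

# Minimal sketch using OpenCV's bundled Haar cascades; the repo's actual
# pre-trained model and sample-collection logic may differ.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("sample.jpg")                     # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi = gray[y:y + h, x:x + w]                   # search for eyes inside each face
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)                   # save instead of cv2.imshow for portability
```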

More Repositories

1

RSNA-Pneumonia-Detection-Challenge

In this competition, you’re challenged to build an algorithm to detect a visual signal for pneumonia in medical images. Specifically, your algorithm needs to automatically locate lung opacities on chest radiographs.
Jupyter Notebook
9
star
2

Faster-R-CNN-for-Open-Images-Dataset-by-Keras

Introduction: The original Keras implementation of Faster R-CNN I used was written by yhenon (resource link: GitHub). He used the PASCAL VOC 2007, 2012, and MS COCO datasets. I extracted just three classes, “Person”, “Car” and “Mobile phone”, from Google’s Open Images Dataset V4, applied configurations different from his to fit my dataset, and removed unneeded code. To run this on Google Colab (for free GPU computing, up to 12 hrs), I compressed all the code into three .ipynb notebooks; sorry for the messy structure. I wrote up my exploration and experimental results for Faster R-CNN in an article on Medium; since Medium cannot be accessed directly from China, I also keep a copy here.

Project structure:
Object_Detection_DataPreprocessing.ipynb extracts the sub-dataset from Open Images Dataset V4, which includes downloading the images and creating the annotation files for training. I ran this part on my own computer because it needs no GPU computation.
frcnn_train_vgg.ipynb trains the model; the configuration and model save path are inside this file.
frcnn_test_vgg.ipynb tests the model with test images and calculates the model's mAP (mean average precision).
Jupyter Notebook
7
star
3

Faster-R-CNN-model-deployment-using-flask-on-local-host

In this repository I have deployed the Faster R-CNN model with Flask to serve it on a local server, and also provided code showing how to use AWS SageMaker to deploy a pre-trained model on AWS.
Python
6
star
4

ISBI-Challenge-Segmentation-of-neuronal-structures-in-EM-stacks

In this challenge, a full stack of EM slices will be used to train machine learning algorithms for the purpose of automatic segmentation of neural structures. The images are representative of actual images in the real-world, containing some noise and small image alignment errors. None of these problems led to any difficulties in the manual labeling of each element in the image stack by an expert human neuroanatomist. The aim of the challenge is to compare and rank the different competing methods based on their pixel and object classification accuracy.
Jupyter Notebook
5
star
5

Resent-50-from-scratch-for-multi--label-image-classification

Kaggle competition iMet Collection 2019: classify multi-label images according to their given labels using a ResNet-50 built from scratch.
Jupyter Notebook
5
star
6

Multi-label-image-classification-in-keras

Classify multi-label images according to their given labels; build the model from scratch in Keras.
Jupyter Notebook
4
star
7

Yolo-V-3-network-from-scratch-in-pytorch

YOLOv3 in PyTorch (Python 3) for real-time image detection and video detection (.avi video format, supported by OpenCV).
Jupyter Notebook
4
star
8

Titanic-Data-set-

The objective was to predict a passenger's chances of survival. Challenges: dealing with missing values and data cleaning. Techniques used: one-hot encoding for categorical data and MinMax scaling for numerical data. Models used: logistic regression, decision tree (ID3 algorithm), random forest classifier.
Jupyter Notebook
4
star
9

Object-Detection-using-Fast.ai

The labelled dataset has images of damaged cars along with labels for various types of damage. Among the various damage categories, the model covers only two: scratches or spots, and dents. Various image enhancement and augmentation techniques are used to improve the accuracy of the model.
Jupyter Notebook
4
star
10

PyTorch-SSD

All code was taken from Max deGroot's & Ellis Brown's ssd.pytorch repository except the object_detection.py file. However, some modifications were done in order to make this project run on Windows 10 and Python 3.6 with PyTorch 0.4.1
Jupyter Notebook
3
star
11

Faster-R-CNN-model-dockerization

Python
3
star
12

Nitinguptadu

Breast cancer detection
Jupyter Notebook
3
star
13

CNN-code-in-pytorch

CNN code in PyTorch for raw images, in a Jupyter notebook, with an accuracy of 96.2%.
Jupyter Notebook
3
star
14

key-word-recognition-using-CNN

Keyword recognition (one to nine) using a CNN.
Python
3
star
15

Pillow-image-data-augumenatation-using-manullay-preprocessing

I also noticed there are lots of images in the data which are purely B&W or contain only the R/B/G channel. Based on these observations, I decided to write the code below to make small changes to images from the under-represented classes in the training sample and save them:
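The original code is not reproduced here; below is a small hedged sketch of the kind of Pillow-based tweaks described, with placeholder directory names:

```python
from pathlib import Path
from PIL import Image, ImageEnhance, ImageOps

# Hypothetical paths; the original repository's directory layout is not shown.
src_dir = Path("train/underrepresented_class")
dst_dir = Path("train/underrepresented_class_aug")
dst_dir.mkdir(parents=True, exist_ok=True)

for path in src_dir.glob("*.jpg"):
    img = Image.open(path).convert("RGB")
    variants = {
        "flip": ImageOps.mirror(img),                         # horizontal flip
        "rot10": img.rotate(10, expand=False),                # small rotation
        "bright": ImageEnhance.Brightness(img).enhance(1.2),  # slight brightening
        "gray": ImageOps.grayscale(img).convert("RGB"),       # mimic the B&W images in the data
    }
    for tag, aug in variants.items():
        aug.save(dst_dir / f"{path.stem}_{tag}.jpg")
```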
Jupyter Notebook
3
star
16

Coding-

Python basics, A to Z.
Jupyter Notebook
2
star
17

Image-Classifiaction-Using-Resnet-

This model runs on a free Heroku server. The static page is inside app.py. The model is loaded via Keras Applications.
Python
2
star
18

https-github.com-Nitinguptadu-To-acess-gpu-from-local-computer-in-pytorch-and-keras-

Jupyter Notebook
2
star
19

Manually-created-one-hot-encoding-

Manually created one-hot encoding and finding unique values from a CSV.
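A minimal pandas sketch of the idea, with a placeholder file and column name:

```python
import pandas as pd

# Hypothetical file and column names; the original CSV is not shown.
df = pd.read_csv("data.csv")
col = "category"

# Unique values found in the column.
categories = sorted(df[col].dropna().unique())
print(categories)

# Manual one-hot encoding: one 0/1 column per unique value.
for value in categories:
    df[f"{col}_{value}"] = (df[col] == value).astype(int)
```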
Jupyter Notebook
2
star
20

Keras-Gpu-installization-for-yolo-v3-for-text-detection-

GPU: GTX 1650 with 4 GB graphics RAM; CUDA version 10; tensorflow-gpu==1.13.1.
Jupyter Notebook
2
star
21

Yolo-v3-

YOLOv3 network from scratch in PyTorch.
2
star
22

Gas-sensor-array-under-dynamic-gas-mixtures

Time series data for Ethylene and Methane in air, and Ethylene and CO in air
Jupyter Notebook
2
star
23

CONV2D-with-Tabular-Data-

This repo is for self learning Purpose
Jupyter Notebook
2
star
24

To-acess-gpu-from-local-computer-in-pytorch-and-keras-

2
star
25

Multi-Label-Image-Classification-Model-in-Python

Multi-Label Image Classification Model in Python (Keras).
Jupyter Notebook
2
star
26

CNN-code-for-raw-image-

CNN code in PyTorch for raw colour images, with an accuracy of 96.2%.
2
star
27

Image-classification-Heroku-

This code is for self learning purpose
Python
2
star
28

github-

2
star
29

Flye

Jupyter Notebook
2
star
30

Fine-tune-VGG16-Image-Classifier-with-Keras-

Fine-tune VGG16 Image Classifier with Keras
Jupyter Notebook
2
star
31

Autoencoder-cifar10.py-using-keras.ipynb

Jupyter Notebook
2
star
32

Fine-tune-VGG16-Image-Classifier-with-Keras

Fine-tune VGG16 Image Classifier with Keras
2
star
33

Tsfresh

Feature extraction settings. When starting a new data science project involving time series, you probably want to start by extracting a comprehensive set of features. Later you can identify which features are relevant for the task at hand. In the final stages, you probably want to fine-tune the parameters of the features to fine-tune your models. You can do all of this with tsfresh, so you need to know how to control which features are calculated by tsfresh and how to adjust their parameters. This section clarifies that.

For the lazy: just let me calculate some features. To calculate a comprehensive set of features, call the tsfresh.extract_features() method without passing a default_fc_parameters or kind_to_fc_parameters object, which means you are using the default options (all feature calculators in the package, with what we think are sane default parameters).

For the advanced: how do I set the parameters for all kinds of time series? After digging deeper into your data, you may want to calculate more of one type of feature and less of another, so you need custom settings for the feature extractors. To do that with tsfresh, use a custom settings object:

    >>> from tsfresh.feature_extraction import ComprehensiveFCParameters
    >>> settings = ComprehensiveFCParameters()
    >>> # Set here the options of the settings object as shown in the paragraphs below
    >>> # ...
    >>> from tsfresh.feature_extraction import extract_features
    >>> extract_features(df, default_fc_parameters=settings)

default_fc_parameters is expected to be a dictionary which maps feature calculator names (the function names you can find in the tsfresh.feature_extraction.feature_calculators file) to a list of dictionaries, which are the parameters with which the function will be called (as key/value pairs). Each function/parameter combination in this dict will be called during the extraction and will produce a feature. If the function does not take any parameters, the value should be set to None. For example

    fc_parameters = {
        "length": None,
        "large_standard_deviation": [{"r": 0.05}, {"r": 0.1}]
    }

will produce three features: one by calling the tsfresh.feature_extraction.feature_calculators.length() function without any parameters, and two by calling tsfresh.feature_extraction.feature_calculators.large_standard_deviation() with r = 0.05 and r = 0.1. So you can control which features are extracted by adding or removing keys or parameters from this dict. It is as easy as that. If you decide not to calculate the length feature here, delete it from the dictionary:

    del fc_parameters["length"]

and now only the two other features are calculated.

For convenience, three dictionaries are predefined and can be used right away:
tsfresh.feature_extraction.settings.ComprehensiveFCParameters: includes all features without parameters and all features with parameters, each with different parameter combinations. This is the default for extract_features if you do not hand in a default_fc_parameters at all.
tsfresh.feature_extraction.settings.MinimalFCParameters: includes only a handful of features and can be used for quick tests. The features which have the “minimal” attribute are used here.
tsfresh.feature_extraction.settings.EfficientFCParameters: mostly the same features as ComprehensiveFCParameters, but without the features marked with the “high_comp_cost” attribute. This can be used if runtime performance plays a major role.

Theoretically, you could calculate an unlimited number of features with tsfresh by adding entry after entry to the dictionary.

For the ambitious: how do I set the parameters for different types of time series? It is also possible to control the features to be extracted for each kind of time series individually. You can do so by passing another dictionary to the extract function as kind_to_fc_parameters = {"kind": fc_parameters}. This dict must be a mapping from kind names (as strings) to fc_parameters objects, which you would normally pass as the default_fc_parameters argument. For example,

    kind_to_fc_parameters = {
        "temperature": {"mean": None},
        "pressure": {"max": None, "min": None}
    }

will extract the “mean” feature of the “temperature” time series and the “min” and “max” of the “pressure” time series. The kind_to_fc_parameters argument partly overrides default_fc_parameters: if you include a kind name in kind_to_fc_parameters, its value will be used for that kind; other kinds will still use default_fc_parameters.

A handy trick: do I really have to create the dictionary by hand? Not necessarily. Let's assume you have a DataFrame of tsfresh features. By using feature selection algorithms you find out that only a subgroup of features is relevant. We provide the tsfresh.feature_extraction.settings.from_columns() method, which constructs the kind_to_fc_parameters dictionary from the column names of this filtered feature matrix, so that only the relevant features are extracted. This can save a huge amount of time because you avoid calculating unnecessary features. To illustrate:

    # X_tsfresh contains the extracted tsfresh features
    X_tsfresh = extract_features(...)
    # which are now filtered to only contain relevant features
    X_tsfresh_filtered = some_feature_selection(X_tsfresh, y, ....)
    # we can easily construct the corresponding settings object
    kind_to_fc_parameters = tsfresh.feature_extraction.settings.from_columns(X_tsfresh_filtered)

This constructs the kind_to_fc_parameters dictionary that corresponds to the features and parameters (!) of the tsfresh features that were kept by the some_feature_selection feature selection algorithm.
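As a self-contained illustration of the settings objects described above, here is a small sketch using tsfresh's bundled robot-execution-failures example data (the download step assumes internet access):

```python
from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFCParameters
from tsfresh.examples.robot_execution_failures import (
    download_robot_execution_failures,
    load_robot_execution_failures,
)

download_robot_execution_failures()
timeseries, y = load_robot_execution_failures()

# Quick extraction restricted to the small MinimalFCParameters set.
X = extract_features(
    timeseries,
    column_id="id",
    column_sort="time",
    default_fc_parameters=MinimalFCParameters(),
)

# Per-kind control: only selected calculators for two of the sensor columns.
kind_to_fc_parameters = {
    "F_x": {"mean": None, "maximum": None},
    "F_y": {"large_standard_deviation": [{"r": 0.05}, {"r": 0.1}]},
}
X_per_kind = extract_features(
    timeseries,
    column_id="id",
    column_sort="time",
    kind_to_fc_parameters=kind_to_fc_parameters,
)
print(X.shape, X_per_kind.shape)
```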
Jupyter Notebook
2
star
34

Machine-learning-A-to-Z

Linear regression, logistic regression, KNN, K-means, decision trees, random forest classification, feature engineering, K-fold cross-validation, PCA, random forest regression, RMSE.
2
star
35

Busigence-Assigment-multilable-image-classifictation-in-keras

Jupyter Notebook
2
star
36

Raw-Fashion-image-data-classification-using-cnn

Multi-label classification problem: classify the images according to their given labels; build the model from scratch.
Jupyter Notebook
2
star
37

cat-and-Dog-Nitin

this repo is for self learning purpose
Jupyter Notebook
2
star
38

Converting-RBG-image-to-gray-scale

Converting and resizing colour images to grayscale using OpenCV.
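A minimal OpenCV sketch of the conversion; the file name and 224x224 target size are placeholders:

```python
import cv2

img = cv2.imread("input.jpg")                      # BGR colour image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # convert to single-channel grayscale
resized = cv2.resize(gray, (224, 224))             # resize to the target resolution
cv2.imwrite("output_gray.jpg", resized)
```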
Jupyter Notebook
2
star
39

Calculate-the-Screen-Time-of-Video-Using-CNN

Calculate the screen time of actors in any video (with Python code) using convolutional neural networks.
2
star
40

Implementing-Autoencoders-in-Keras-Tutorial

In this tutorial, you’ll learn about autoencoders in deep learning and you will implement a convolutional and denoising autoencoder in Python with Keras. You will work with the NotMNIST alphabet dataset as an example
Jupyter Notebook
2
star
41

Raw-Fashion-image-data-for-image-classification-using-CNN

Multi-label classification problem.
2
star
42

Image-classification

Classification of MNIST data set
Jupyter Notebook
2
star
43

Pytorch-Basic

image classification using pytorch
Jupyter Notebook
2
star
44

Pytorch-CNN

Image classification using MNIST Fashion data in pytorch
Jupyter Notebook
2
star
45

Cifar-10-data-set-with-train-accuracy-99-and-test-accuracy-93-

This repo is for self learning purpose
Jupyter Notebook
2
star
46

Linear-regresssion

2
star
47

speech_to_text

Jupyter Notebook
2
star
48

Fast.ai

Multi label image classification problem
Jupyter Notebook
2
star
49

Nitinguptadu-Raw-Fashion-image-data-for-image-classification-using-CNN

Multi-label classification problem.
2
star
50

Dentisty.AI

Please use the following assignment for the Data Scientist position. The deadline for the assignment is 3 days.
1. Download the dataset from: https://warwick.ac.uk/fac/sci/dcs/research/tia/glascontest/download/
2. Augment the dataset to make it robust against shift, light variation and other noise.
3. Train a transfer-learning-based classifier to classify images as benign/malignant as given in the Excel file.
4. Train a transfer-learning-based semantic segmentation model according to the annotation in the _anno.BMP files.
5. Upload the assignment to your GitHub as an IPYNB showing results on a few test-set images.
Jupyter Notebook
2
star
51

Build-a-RESTful-service-that-extracts-expense-date-from-a-receipt.

Build a RESTful service that extracts the expense date from a receipt. Deploy the service on any cloud platform such as Heroku/AWS/GCP. The service should contain one API with the following contract:

    Request: POST /extract_date
    Payload: {"base_64_image_content": <base_64_image_content>}
    Response, if a date is present: {"date": "YYYY-MM-DD"}
    Response, if no date is present: {"date": null}
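A minimal sketch of one way to satisfy this contract, assuming Flask plus pytesseract for OCR and a simple regex for dates already formatted as YYYY-MM-DD; the repository's actual pipeline may differ:

```python
import base64
import io
import re

from flask import Flask, jsonify, request
from PIL import Image
import pytesseract

app = Flask(__name__)
DATE_RE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")  # assumes dates already in YYYY-MM-DD

@app.route("/extract_date", methods=["POST"])
def extract_date():
    payload = request.get_json(force=True)
    image_bytes = base64.b64decode(payload["base_64_image_content"])
    text = pytesseract.image_to_string(Image.open(io.BytesIO(image_bytes)))
    match = DATE_RE.search(text)
    return jsonify({"date": match.group(0) if match else None})

if __name__ == "__main__":
    app.run()
```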
Jupyter Notebook
2
star
52

Chest-X-Ray-Images-Pneumonia-

using Fast.ai
Jupyter Notebook
2
star
53

Develop-a-generalized-algorithm-to-detect-the-brightness-of-any-image

Develop a generalized algorithm to detect the brightness of any image. The algorithm should take an image as input and give a score between 0 and 10 as output (zero being low brightness and 10 being high brightness). No training data is provided.
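One possible approach (an assumption, not necessarily this repository's method) is to score brightness as the mean of the HSV value channel, scaled to 0 to 10:

```python
import cv2
import numpy as np

def brightness_score(image_path: str) -> float:
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(np.float32)          # value channel, range 0-255
    return round(float(v.mean()) / 255.0 * 10.0, 2)

print(brightness_score("sample.jpg"))            # placeholder image path
```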
Jupyter Notebook
2
star
54

Keras---Python-Deep-Learning-Neural-Network-API

This series will teach you how to use Keras, a neural network API written in Python. Each video focuses on a specific concept and shows how the full implementation is done in code using Keras and Python. We will learn how to preprocess data, organize data for training, validation and testing, build an artificial neural network from scratch, train an artificial neural network, build a convolutional neural network (CNN) and much more!
Jupyter Notebook
2
star
55

Faster-Rcnn-docker-

Python
2
star
56

Resnet-50

Let's implement resnet from scratch in pytorch
Jupyter Notebook
2
star
57

Generative-Adversarial-Network

A new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. Compatible with Python 3.6. Dataset: CIFAR-10.
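A compact PyTorch sketch of the adversarial objective described above; the architectures and hyperparameters are illustrative placeholders rather than the repository's exact CIFAR-10 setup:

```python
import torch
import torch.nn as nn

# Placeholder MLP generator and discriminator over flattened data (not CIFAR-10 sized).
latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real):
    """One adversarial update: D learns to separate real from fake, G learns to fool D."""
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: real samples labelled 1, generated samples labelled 0.
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: maximize the probability of D making a mistake.
    loss_g = bce(D(G(torch.randn(n, latent_dim))), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Example call with random stand-in data scaled to [-1, 1].
print(train_step(torch.rand(16, data_dim) * 2 - 1))
```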
Jupyter Notebook
2
star
58

Visualizing-Decision-Trees-Random-forest-with-Python

How to visualize decision trees using Matplotlib.
How to visualize decision trees using Graphviz (what Graphviz is, how to install it on Mac and Windows, and how to use it to visualize decision trees).
How to visualize individual decision trees from bagged trees or random forests.
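A small sketch of the Matplotlib route using scikit-learn's plot_tree and the built-in iris data (the notebooks here may use different data):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import plot_tree

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Visualize one individual tree taken from the random forest.
plt.figure(figsize=(12, 6))
plot_tree(forest.estimators_[0], filled=True, feature_names=load_iris().feature_names)
plt.show()
```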
Jupyter Notebook
2
star
59

Multi-lable-classifiaction-on-tabluar-data-

This Project is for self learning Purpose
Jupyter Notebook
1
star
60

Speech-to-text-using-deep-learning-

This repo is for self learning purpose
Jupyter Notebook
1
star
61

my-laptop-requirements-

1
star
62

Image-classification-model-deployment-using-Flask-

Image classification model served with Flask, using Keras.
Python
1
star
63

Docker-NLM-

This is my first Docker image; this repo is for personal learning purposes.
Python
1
star
64

Python-Challenges-

This repo is for self learning purpose
Jupyter Notebook
1
star
65

cat-and-dog

1
star
66

pandas-profiling-vs-Sweetviz-

Powerful EDA (Exploratory Data Analysis) using Sweetviz & pandas-profiling
HTML
1
star
67

Lungs-Segmentation

Lung segmentation without mask images, using traditional computer vision (OpenCV).
Jupyter Notebook
1
star
68

Converting-Image-into-base-64-and-coverting-base64-to-image

This repo is for self learning purpose
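A minimal sketch of the round trip named in the repository title; "input.jpg" is a placeholder:

```python
import base64

# Image file -> base64 text.
with open("input.jpg", "rb") as f:
    b64_string = base64.b64encode(f.read()).decode("utf-8")

# Base64 text -> image file.
with open("decoded.jpg", "wb") as f:
    f.write(base64.b64decode(b64_string))
```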
Jupyter Notebook
1
star
69

Automated-image-classification

This Repo is for self learning purpose
Jupyter Notebook
1
star
70

Yugen.ai

Project
Jupyter Notebook
1
star
71

Yugen-ai-Heroku

Heroku
Python
1
star
72

Heroku-

This code is for self-learning purposes. In it we deploy a simple web app with a static file, using Flask, on a Heroku server.
Python
1
star
73

RASA-chat-bot-

This repo is for self learning Purpose
Python
1
star
74

Image-to-text-using-object-detection-

This repo is for self learning purpose
Jupyter Notebook
1
star
75

opencv-

this repo is for self learning purpose
Jupyter Notebook
1
star
76

Web-Scrapping-wiki

This repo is for self-learning purposes.
Jupyter Notebook
1
star
77

Nitinguptadu-Predictive-Tests-For-Assessing-Risk-of-Cancer-Recurrence-

Machine learning, Flask, CSS, HTML, Heroku, XGBoost, PyCaret.
Jupyter Notebook
1
star
78

scrapping-covid-19-Data-

Fetching data from a website API, processing it, saving it to MongoDB and to CSV, and sending mail using Python on a daily basis.
Jupyter Notebook
1
star
79

Heroku-Demo

Python
1
star
80

Covid-19-time-series-Classification

This repo is for self learning purpose
Jupyter Notebook
1
star
81

Keras-cnn-for-Regressiion-and-classification--with-Multi-Target-

This Repo is for self learning purpose
Jupyter Notebook
1
star
82

Ploting-of-Desion-Boundary-in-python-3.0

This repo is for self learning purpose
Jupyter Notebook
1
star
83

keras-ocr-

this repo is for self learning purpose
Jupyter Notebook
1
star
84

sending-email-using-python-

Python
1
star
85

exp

1
star
86

Comaptaring-Two-columns-

This repo is for self-learning purposes: comparing two numerical columns and creating a new column that stores the higher of the two values.
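A minimal pandas sketch with placeholder column names:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 7, 3], "b": [4, 2, 9]})
df["higher"] = df[["a", "b"]].max(axis=1)   # element-wise maximum of the two columns
print(df)
```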
Jupyter Notebook
1
star
87

Time-series-EEg-Classification

This repo is for self learning purpose
HTML
1
star
88

Saving-ML-and-DL-models-in-MongoDB-using-python.

This repo is for self-learning purposes. Saving models in a database and loading them using Python is easy. We chose MongoDB because it is an open-source document database and a leading NoSQL database.
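One hedged way to do this is to pickle the model and store it via MongoDB's GridFS; the connection string, database name and model are placeholders, not necessarily what this repo uses:

```python
import pickle

import gridfs
from pymongo import MongoClient
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small placeholder model.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

client = MongoClient("mongodb://localhost:27017/")
fs = gridfs.GridFS(client["model_store"])

# Save the pickled model bytes and remember the returned id.
model_id = fs.put(pickle.dumps(model), filename="iris_logreg")

# Load it back and use it.
restored = pickle.loads(fs.get(model_id).read())
print(restored.predict(X[:3]))
```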
Jupyter Notebook
1
star
89

Flasgger

Flasgger is a Flask extension to extract an OpenAPI Specification from all Flask views registered in your API. Flasgger also comes with SwaggerUI embedded, so you can access http://localhost:5000/apidocs to visualize and interact with your API resources.

Flasgger also provides validation of incoming data: using the same specification, it can validate whether data received in a POST, PUT or PATCH request is valid against the schema defined using YAML, Python dictionaries or Marshmallow schemas.

Flasgger can work with simple function views or MethodViews using the docstring as the specification, or using the @swag_from decorator to get the specification from YAML or a dict; it also provides SwaggerView, which can use Marshmallow schemas as the specification.

Flasgger is compatible with Flask-RESTful, so you can use Resources and swag specifications together (take a look at the restful example). Flasgger also supports Marshmallow APISpec as the base template for the specification; if you are using APISpec from Marshmallow, take a look at the apispec example.
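A minimal sketch of the docstring-as-specification style described above; the endpoint and schema are illustrative, not taken from this repository:

```python
from flasgger import Swagger
from flask import Flask, jsonify

app = Flask(__name__)
swagger = Swagger(app)   # SwaggerUI becomes available at /apidocs

@app.route("/square/<int:number>")
def square(number):
    """Return the square of a number.
    ---
    parameters:
      - name: number
        in: path
        type: integer
        required: true
    responses:
      200:
        description: The squared value
    """
    return jsonify({"result": number * number})

if __name__ == "__main__":
    app.run(port=5000)
```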
Python
1
star
90

Ankur-Machine-learning-

Client: M17 (https://m17.asia/en/product/17media/), a streaming platform. Data link: Client_data.zip. The attached data is generated from the live-streaming platform. Analyze the data to come up with the top 20% of streamers. Using these top 20% as "good" streamers, create a classification model which can classify whether any streamer is a good streamer or not. Evaluation metric: F1-score. A classification report (using sklearn.metrics.classification_report) is also required.
Jupyter Notebook
1
star
91

Object-detection-

Use the directory structure below, and generate a text file containing a KITTI-format annotation for each image.

    .
    ├── images
    │   ├── 000000.jpg
    │   ├── ...
    │   └── XXXXXX.jpg
    └── annotations
        ├── 000000.txt
        ├── ...
        └── XXXXXX.txt

The text file should contain one annotation per line, with values separated by spaces:

    className 0.00 0 0.00 x1 y1 x2 y2 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    className 0.00 0 0.00 x1 y1 x2 y2 0.00 0.00 0.00 0.00 0.00 0.00 0.00

E.g.,

    person 0.00 0 0.00 20 50 196 600 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    face 0.00 0 0.00 100 120 150 180 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Note: the dataset is available to download through Google Drive and Kaggle; here's what can be done: download it manually and upload it to Colab (https://www.kaggle.com/imneonizer/wider-person/download).
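A small sketch of writing one KITTI-format annotation file per image, assuming bounding boxes are available as (class_name, x1, y1, x2, y2) tuples; paths and names are placeholders:

```python
from pathlib import Path

def write_kitti_annotation(txt_path: Path, boxes):
    lines = []
    for class_name, x1, y1, x2, y2 in boxes:
        # KITTI layout: type, truncated, occluded, alpha, bbox(x1 y1 x2 y2),
        # dimensions(3), location(3), rotation_y -- unused fields left at 0.00/0.
        lines.append(
            f"{class_name} 0.00 0 0.00 {x1} {y1} {x2} {y2} "
            "0.00 0.00 0.00 0.00 0.00 0.00 0.00"
        )
    txt_path.write_text("\n".join(lines) + "\n")

Path("annotations").mkdir(exist_ok=True)
write_kitti_annotation(
    Path("annotations/000000.txt"),
    [("person", 20, 50, 196, 600), ("face", 100, 120, 150, 180)],
)
```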
Jupyter Notebook
1
star
92

Gas-sensors-for-home-activity-monitoring-Data-Set

Abstract: 100 recordings of a sensor array under different conditions in a home setting: background, wine and banana presentations. The array includes 8 MOX gas sensors, and humidity and temperature sensors.

Source:
Creators: Flavia Huerta, Gaurav Gawade, Ramon Huerta, University of California San Diego, USA
Donors: Flavia Huerta; Ramon Huerta, University of California San Diego, USA (rhuerta ‘@’ ucsd.edu); Thiago Mosqueiro, University of California San Diego, USA (thmosqueiro ‘@’ ucsd.edu); Jordi Fonollosa, Institute for Bioengineering of Catalunya, Spain (jfonollosa ‘@’ ibecbarcelona.eu); Nikolai Rulkov, University of California San Diego, USA (nrulkov ‘@’ ucsd.edu); Irene Rodriguez-Lujan, Universidad Autonoma de Madrid, Spain (Irene.rodriguez ‘@’ uam.es)

Data Set Information: This dataset has recordings of a gas sensor array composed of 8 MOX gas sensors and a temperature and humidity sensor. The sensor array was exposed to background home activity while subject to two different stimuli: wine and banana. The responses to the banana and wine stimuli were recorded by placing the stimulus close to the sensors. The duration of each stimulation varied from 7 min to 2 h, with an average duration of 42 min. The dataset contains time series from three different conditions: wine, banana and background activity. There are 36 inductions with wine, 33 with banana and 31 recordings of background activity. One possible application is to discriminate among background, wine and banana.

The dataset is composed of two files: HTsensordataset.dat (zipped), where the actual time series are stored, and HTSensormetadata.dat, where metadata for each induction is stored. Each induction is uniquely identified by an id in both files, so metadata for a particular induction can easily be found by matching the id columns across the two files. Python scripts exemplifying how to import, organize and plot the data are available on GitHub: https://github.com/gauravgawade951999/gauravgit

For each induction, one hour of background activity is included prior to and after the stimulus presentation. Time series were recorded at one sample per second, with minor variations at some data points due to issues in the wireless communication. For details on which sensors were used and how the time series is organized, see the Attribute Information below.
Jupyter Notebook
1
star