• Stars
    star
    9
  • Rank 1,939,727 (Top 39 %)
  • Language
    Jupyter Notebook
  • Created over 5 years ago
  • Updated over 1 year ago

Repository Details

Multimodal emotion recognition is a challenging task because emotions can be expressed through various modalities. It can be applied in many fields, for example human-computer interaction, crime detection, healthcare, and multimedia retrieval. In recent times, neural networks have achieved overwhelming success in determining emotional states. Motivated by these advancements, we present a multimodal emotion recognition system based on body language, facial expression, and speech. This paper presents the techniques used in the Multimodal Emotion Recognition in Polish challenge. To detect the emotional state in various videos, data preprocessing operations are performed and robust features are extracted. For this purpose, we use facial landmark detection for facial expressions and MFCC features for speech.
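As an illustration of the speech branch, the sketch below extracts MFCC features from an audio clip. The use of librosa and the fixed-size mean/std summary are assumptions, since the repository does not document its exact feature pipeline; the landmark/MFCC fusion itself is not reproduced here.

```python
# Hypothetical sketch of the speech branch: MFCC feature extraction with librosa
# (an assumed library; the repo does not name its audio tooling).
import numpy as np
import librosa

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Load a clip and return a fixed-size MFCC summary (mean + std per coefficient)."""
    signal, sr = librosa.load(wav_path, sr=sr)                     # resample to 16 kHz
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)    # shape: (n_mfcc, frames)
    # Summarise the time axis so clips of different lengths map to one vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# features = mfcc_features("clip.wav")   # 26-dim vector fed to the emotion classifier
```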

More Repositories

1

Forest-Fire-Detection-through-UAV-imagery-using-CNNs

Wildfire is a natural disaster that causes irreparable damage to local ecosystems, and sudden, uncontrollable wildfires can be a real threat to residents' lives. Statistics from the National Interagency Fire Center (NIFC) show that the burned area in the USA doubled from 1990 to 2015. Recent wildfires in northern California (as reported by CNN) resulted in more than 40 deaths and 50 missing persons, and more than 200,000 local residents were evacuated under emergency. Wildfires occur about 220,000 times per year globally, and the annual burned area exceeds 6 million hectares. Accurate and early detection of wildfire is therefore of great importance, and fire detection is crucial for public safety. Several fire detection systems have been developed to prevent fire damage, and different technical solutions exist. Most are sensor-based and generally limited to indoor use: they detect particles generated by smoke and fire through ionization, which requires close proximity to the fire. Consequently, they cannot be used over large areas, and they cannot provide information about the initial fire location, the direction of smoke propagation, the size of the fire, its growth rate, and so on. Video-based fire detection systems are used to overcome these limitations.
Jupyter Notebook
33
star
2

-Fake-News-Detection-

Fake news is misinformation or manipulated news spread across social media with the intention of damaging a person, agency, or organisation. Due to the dissemination of fake news, there is a need for computational methods to detect it. Fake news detection aims to help users expose varieties of fabricated news. To achieve this goal, we first took datasets containing both fake and real news and conducted various experiments to build a fake news detector. We used natural language processing, machine learning, and deep learning techniques to classify the datasets, and we provide a comprehensive review of fake news detection, covering fake news categorization and existing machine learning algorithms. In this project, we explored machine learning models such as Naïve Bayes, k-nearest neighbors, decision trees, and random forests, as well as deep learning networks such as shallow Convolutional Neural Networks (CNN), Very Deep Convolutional Neural Networks (VDCNN), Long Short-Term Memory networks (LSTM), Gated Recurrent Unit networks (GRU), and combinations of a CNN with an LSTM (CNN-LSTM) and a CNN with a GRU (CNN-GRU).
Jupyter Notebook
24
star
3

Real-Time-Multiple-Object-Detection

The ability of a computer to locate and identify each object in an image or video is known as object detection. Object detection has many applications in self-driving cars, pedestrian counting, face detection, vehicle detection, and so on. One of the crucial elements of a self-driving car is the detection of various objects on the road, such as traffic signals, pedestrians, other vehicles, and sign boards. In this project, a Convolutional Neural Network (CNN) based approach is used for real-time detection of multiple objects on the road. A YOLO (You Only Look Once) v2 deep learning model is trained on the PASCAL VOC dataset. We achieved an mAP of 78 on the test dataset after training the model on an NVIDIA DGX-1 (V100) supercomputer. The trained model is then applied to recorded videos and to a live stream from a webcam (a minimal inference sketch follows this entry).
Python
21
star
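The sketch below shows one possible inference loop for the entry above, loading Darknet-format YOLOv2 files with OpenCV's DNN module and running them on webcam frames. The file names, input size, and thresholds are placeholders, not the repository's actual scripts.

```python
# Hypothetical YOLOv2 webcam inference with OpenCV DNN; cfg/weights names are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov2-voc.cfg", "yolov2-voc.weights")
layer_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)                      # webcam; pass a file path for recorded video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)

    boxes, confidences = [], []
    for out in outputs:
        for det in out:                        # det = [cx, cy, bw, bh, obj, class scores...]
            scores = det[5:]
            conf = float(scores[int(np.argmax(scores))])
            if conf > 0.5:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(conf)

    # Non-maximum suppression drops overlapping boxes; draw what remains.
    for i in cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4):
        x, y, bw, bh = boxes[int(i)]
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```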
4

Pedestrian-Detection

Autonomous driving involves perceiving and interpreting a vehicle's environment using various sensors in order to control the vehicle, mark drivable areas, and locate pedestrians. A pedestrian detector plays a key role and demands real-time response. An efficient pedestrian detector must determine the exact location of a pedestrian across complex backgrounds, poses, and illuminations, which is why it has been an active research topic for the last two decades. With the evolution of deep learning, there is no need to hand-design features that describe pedestrian characteristics; instead, the features can be learnt with the help of Convolutional Neural Networks (CNNs). Our work includes training the YOLO v3 model on the BDD100K dataset, the largest and most diverse video dataset so far. It contains more pedestrian instances than previous specialized datasets, which makes it well suited for pedestrian detection. The training results show that the proposed YOLO v3 network for pedestrian detection is well-suited for real-time applications due to its high detection rate and fast implementation. Idea by: Aditya Sharma, Microsoft
Python
17
star
5

Road-damage-detection

Keeping roads in good condition is vital to safe driving. Monitoring the degradation of road conditions is an important component of transportation maintenance, but it is labor-intensive and requires domain expertise. Automatic detection of road damage is therefore an important task for driving safety assurance; the intensity of the damage and the complexity of the background make it challenging. Inspired by recent successes in applying deep learning, this project proposes a deep-learning-based methodology for damage detection. A dataset of 9,053 images was captured with a low-cost smartphone, and a quantitative evaluation demonstrates that deep-learning methods perform extremely well compared with existing hand-crafted features. We train the damage detection model on our dataset using convolutional neural networks with a state-of-the-art object detection method, and compute accuracy and runtime speed on a GPU server. Finally, we show that the damage can be classified into eight types with acceptable accuracy using the proposed object detection method.
Jupyter Notebook
17
star
6

Click-Fraud-Detection

Click fraud is a type of fraud that occurs on the Internet in pay-per-click (PPC) online advertising. It consists of intentional clicking on online advertisements with no actual interest in the advertised product or service. Click fraud is an important threat to the advertising world that affects both the revenue and the trust of advertisers. To tackle this issue, we use various machine learning and deep learning models that learn from the data used to train the system. The model identifies fraudulent clicks based on the data provided during the training stage. We built a deep learning model to classify fraudulent and legitimate clicks and compared its results with machine learning approaches such as SVM, Naive Bayes, and logistic regression.
Jupyter Notebook
14
star
7

LANE-DETECTION-USING-DEEP-LEARNING

Autonomous driving is increasingly being brought into real life to reduce hassles and accidents. Modern-day transport has come a long way but is still far from perfection and all-round safety. Lane detection is the task of demarcating lanes on the road while the vehicle is moving. It has the potential to change vehicular movement on the road, making it more organized and safe; it could compensate for driver carelessness and avoid many mishaps. Ride-hailing services like Uber and Ola can use it to monitor drivers and rate them on their driving skills. We designed and trained a deep convolutional network from scratch for lane detection, since CNN-based models are known to work best on image datasets. We used the BDD100K dataset for training and testing, tuned hyperparameters against several metrics, and kept the configuration that gave the best results. Training was done on an NVIDIA DGX (V100) supercomputer. Idea by: Aditya Sharma, Microsoft
MATLAB
14
star
8

Chatbot-using-Recurrent-Neural-Networks

A conversational agent, or chatbot, is a piece of software that can communicate with human users with the help of natural language processing (NLP). Modelling conversation is a crucial task in natural language processing and artificial intelligence (AI); since the early days of AI, creating a good chatbot has been one of the field's hardest and most complex challenges. Chatbots can be used for various tasks such as making phone calls and providing reminders; in general they have to understand users' utterances and provide relevant responses to the problem at hand. Earlier chatbot architectures relied on hand-written rules, templates, or simple statistical methods; the rise of deep learning has replaced these with trainable neural network models. The recurrent encoder-decoder model is the dominant model for modelling conversations, and multiple variations and features have been proposed that have improved the quality of conversation chatbots are capable of. In our project, we surveyed recent literature on chatbots. We started with the Cornell Movie-Dialogs Corpus; after training and fine-tuning with various parameters gave unsatisfactory results, we switched datasets and trained and tested our final model on a modified Gunthercox dataset, which gave satisfactory results for an open-domain (general-domain) chatbot. A minimal encoder-decoder sketch follows this entry.
Python
14
star
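The sketch below is a minimal Keras version of the recurrent encoder-decoder described in the entry above. The vocabulary size, embedding size, and LSTM width are illustrative placeholders; the repository's actual hyperparameters and training setup may differ.

```python
# Minimal seq2seq (encoder-decoder) sketch; sizes are assumed placeholders.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

VOCAB, EMB, UNITS = 10000, 128, 256

# Encoder: read the user's utterance and keep only the final LSTM states.
enc_in = Input(shape=(None,), name="encoder_tokens")
enc_emb = Embedding(VOCAB, EMB)(enc_in)
_, state_h, state_c = LSTM(UNITS, return_state=True)(enc_emb)

# Decoder: generate the reply token by token, initialised with the encoder states.
dec_in = Input(shape=(None,), name="decoder_tokens")
dec_emb = Embedding(VOCAB, EMB)(dec_in)
dec_out, _, _ = LSTM(UNITS, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = Dense(VOCAB, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```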
9

Drowsiness-Detection-Using-Facial-Images

The project focuses on drowsiness in IT employees, drivers, pilots, crane operators, students, and others. These people need a system that can alert them, and those around them, when they start to doze off. A nap during work can be important, but it can also be dangerous for some types of work, so it makes sense to build a system that detects drowsiness. The approaches we used are a Support Vector Machine, the YOLO architecture, and a ResNet-101 deep learning model. The best accuracy, however, was achieved with the SVM and HOG implementation, since it uses a mathematical approach to describe facial properties based on fixed ratios of facial features (a sketch of this approach follows this entry). We therefore also conclude that the problem must be understood before implementation, and that a deep learning model does not always yield the most accurate predictions.
Python
12
star
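The sketch below illustrates the HOG + SVM approach mentioned above, using scikit-image and scikit-learn. The crop size and SVM settings are assumptions; the repository's actual feature and classifier configuration is not documented here.

```python
# Illustrative HOG + SVM drowsiness classifier on grayscale face crops (settings assumed).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def hog_vector(gray_face, size=(64, 64)):
    """Resize a grayscale face crop and describe it with HOG features."""
    face = resize(gray_face, size)
    return hog(face, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_drowsiness_svm(face_crops, labels):
    """face_crops: list of grayscale face images; labels: 1 = drowsy, 0 = alert."""
    X = np.array([hog_vector(f) for f in face_crops])
    clf = SVC(kernel="linear", probability=True)
    clf.fit(X, labels)
    return clf

# clf = train_drowsiness_svm(train_faces, train_labels)
# drowsy = clf.predict([hog_vector(new_face)])[0]
```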
10

Wake-UP-word-detection

Wake-up-word (WUW) systems are an emerging development. Voice interaction with systems makes life easier and aids multitasking. Apple, Google, Microsoft, and Amazon have each developed custom wake-word engines, addressed by phrases such as 'Hey Siri', 'OK Google', 'Cortana', and 'Alexa'. Our project focuses initially on detecting and responding to a customized wake-up command; the wake-up command used is "GOLUMOLU". A wake-up-word detection system listens for a specific word and accepts it while rejecting all other words, phrases, and sounds. A WUW system needs only a small memory footprint, low computational cost, and high precision. Artificial Neural Networks (ANNs) have reduced complexity, computation time, and latency, improving system efficiency. Deep learning has improved the efficiency of automatic speech recognition (ASR); wake-word detection is a subset of ASR, though distinct from keyword spotting and voice recognition. A deep learning RNN model is used for training; RNNs are well suited to temporal sequence data and can process sequences of different lengths with the same feature dimension. Training requires a labelled dataset, so we generated three classes of data: 'golumolu', negative words, and background noise, so that the model learns carefully and detects precisely when the specific word is heard. To start communicating with the system, the wake word must be spoken. The main tasks of the WUW detection system are to detect speech, to identify the wake word among spoken words, and to check whether the word is spoken in a different context. A sketch of the classifier head follows this entry.
Jupyter Notebook
12
star
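A possible classifier head for the entry above is sketched below: a small GRU over MFCC frames that outputs one of three classes (wake word, negative word, background). The layer sizes and the 13-coefficient MFCC input are illustrative assumptions, not the repository's recorded architecture.

```python
# Hypothetical wake-word classifier head over MFCC sequences (sizes assumed).
from tensorflow.keras import layers, models

N_MFCC, N_CLASSES = 13, 3   # classes: "golumolu", negative word, background noise

model = models.Sequential([
    layers.Input(shape=(None, N_MFCC)),        # variable-length sequence of MFCC frames
    layers.Masking(),                          # ignore zero-padded frames
    layers.GRU(64),                            # summarise the utterance
    layers.Dense(32, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```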
11

Volume-Control-using-Hand-Gestures-Recognition

Gesture recognition helps computers understand human body language. This builds a more powerful link between humans and machines than basic text or graphical user interfaces (GUIs). In this project, the motions of the human body are read by the computer's camera, and the computer uses this data as input to control applications. The objective is to develop an interface that captures human hand gestures dynamically and controls the volume level. For this, deep learning techniques such as a YOLO model, InceptionNet+LSTM, 3D CNN+LSTM, and Time-Distributed CNN+LSTM were studied and compared for hand detection; the YOLO model outperformed the other three. The models were trained on Kaggle using 20% of the videos in the 20BN-Jester dataset. After the hand is detected in the captured frames, the next step is to control the system volume depending on the direction of hand movement. The direction is determined by generating and locating the bounding box on the detected hand (a sketch of this step follows this entry).
Python
11
star
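The post-processing step for the entry above could look like the sketch below: track the centre of the detected hand's bounding box across frames and map vertical movement to volume changes. The thresholds, step size, and the commented `set_volume` callback are hypothetical placeholders.

```python
# Illustrative mapping from hand bounding-box movement to volume steps (thresholds assumed).
def volume_from_hand_track(centers, volume=50, step=5, min_move=20):
    """centers: list of (x, y) bounding-box centres in pixel coordinates, ordered by frame.
    Returns the final volume in [0, 100]."""
    for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
        dy = y1 - y0
        if dy < -min_move:          # hand moved up noticeably -> volume up
            volume = min(100, volume + step)
        elif dy > min_move:         # hand moved down -> volume down
            volume = max(0, volume - step)
        # set_volume(volume)        # hypothetical hook into the OS mixer
    return volume

# Example: a hand sliding upward across five frames raises the volume three times (prints 65).
print(volume_from_hand_track([(100, 300), (102, 270), (101, 240), (99, 235), (98, 205)]))
```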
12

Generation-of-Videos-from-Text

Jupyter Notebook
10
star
13

ROUTING-ALGORITHMS-FOR-ENERGY-EFFICIENCY-IN-UNDERWATER-WIRELESS-SENSOR-NETWORKS

MATLAB
10
star
14

Segmentation-of-CT-thoracic-organs-using-ResU-Net

Segmentation of medical images has had a considerable impact on diagnosis, medicine, and treatment; it helps doctors explore internal anatomy. Many existing techniques are based on cross-sectional imaging and X-ray modalities such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), or on others such as Single Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), and ultrasound. CT images are complex, so manually identifying and localizing organs is slow and difficult.
Jupyter Notebook
9
star
15

Classifying-Different-Crop-Categories-Using-Hyperspectral

Nowadays hyperspectral data are widely used for crop classification, and we apply deep learning to segment hyperspectral satellite images into different categories of crops. The data consist of 128x128-pixel images with 7 channels (spectral bands) as input. We used a modified version of U-Net for the classification (a compact sketch follows this entry).
Python
8
star
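A compact U-Net-style sketch for the 128x128x7 input described above is given below. The depth, filter counts, and the number of crop classes are assumptions; the repository's "modified U-Net" may differ.

```python
# Compact U-Net-style segmenter for 128x128 inputs with 7 spectral channels (sizes assumed).
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def small_unet(n_classes=5, in_shape=(128, 128, 7)):
    inp = layers.Input(shape=in_shape)
    c1 = conv_block(inp, 32); p1 = layers.MaxPooling2D()(c1)       # 64x64
    c2 = conv_block(p1, 64);  p2 = layers.MaxPooling2D()(c2)       # 32x32
    b  = conv_block(p2, 128)                                       # bottleneck
    u2 = layers.UpSampling2D()(b)                                  # back to 64x64
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)            # skip connection
    u1 = layers.UpSampling2D()(c3)                                 # back to 128x128
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)            # skip connection
    out = layers.Conv2D(n_classes, 1, activation="softmax")(c4)    # per-pixel class map
    return models.Model(inp, out)

model = small_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```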
16

Controlling-a-Mobile-Robot-using-Brain-Waves

People who suffer from a neuromotor disability are in a condition where they are unable to perform actions due to paralysis of the body. In most cases they can still control eye movement, but in some cases only the brain is active. People suffering from such disorders may not be able to communicate properly through voice; the only way they can communicate is through their thoughts. EEG signals help read brain activity, and through this input we can analyse and understand the person. In this project, EEG signals captured with a NeuroSky MindWave device are used to control the left, right, forward, and backward movement of a robot.
C#
7
star
17

Weed-Detection-in-Dense-Culture-using-Deep-Learning-

In recent years, weeds have been responsible for significant agricultural losses. To deal with this problem, farmers have to spray the whole field uniformly with weedicides, which requires a huge quantity of chemicals and also harms the environment.
Jupyter Notebook
7
star
18

Computer-Vision-for-Wildlife-Conservation

Wildlife is very important for maintaining our ecosystem, so protecting and conserving species at risk of extinction is our responsibility. One such species, the Amur tiger, is endangered. We have therefore implemented three object detection methods using different deep learning techniques to detect Amur tigers, which can further be deployed on UAVs. We used the recently released ATRW (Amur Tiger Re-identification in the Wild) dataset from the Computer Vision for Wildlife Conservation workshop, which contains 2,485 annotated images for training and 277 annotated images for validation. Using our model, tigers can be detected and tracked easily.
Python
7
star
19

Armed-Injured-and-other-Suspicious-Activity-Recognition-using-Drone-Surveillance

Python
6
star
20

Automated-Detction-of-Liver-cancer-in-WSI

As the number of cancer patients increases day by day, early-stage detection is the only way to keep the disease under control. We use deep learning techniques to build and train the models. A whole slide image (WSI) is a very high-resolution image. We propose two methods for semantic segmentation of liver WSIs: U-Net and an autoencoder. Semantic segmentation is applied to identify the tumor within the given WSIs, and we were able to segment the viable tumor area.
Python
6
star
21

Emotion-Analysis-Using-Speech

Imagine listening to music while the player automatically queues songs that match your emotions; this is one of the many use cases of emotion detection from speech. Our main goal is to build a robust deep learning model that can accurately and efficiently classify emotions from a given audio clip. For this we used two methods: one that analyses the speech directly, and another that first converts the speech into text.
Python
6
star
22

Household-Power-Forecasting-Master

A convolutional neural network is a kind of deep neural network. It usually consists of many neurons, loosely analogous to the neural network of the human brain, and transforms its input through a series of hidden layers, each made up of neurons. A convolutional network uses different kinds of activation functions to pass its output to the next layer. A recurrent neural network (RNN) is a class of artificial neural network in which connections between nodes form a directed graph along a sequence, allowing it to exhibit temporal dynamic behaviour over time sequences. The term 'recurrent neural network' is used to refer to two broad classes of networks with a similar general structure, one with finite impulse response and the other with infinite impulse response.
Python
6
star
23

Liver-Tumor-Segmentation

In this research we concentrate on different machine learning and deep learning algorithms. Computed tomography (CT) scan images are used for the study. For liver tumor segmentation we used convolutional neural network architectures such as U-Net and V-Net, ran our experiments on the Liver Tumor Segmentation (LiTS) dataset, and evaluated the segmentation of the CT scan images.
Python
5
star
24

Red-Ball-Kicking-by-NAO

In this project we program NAO, a humanoid robot, to kick a ball into a goal post. The robot first detects the ball using its camera, then walks towards it, adjusts its position, and finally kicks the ball.
Python
5
star
25

-IMAGE-TO-SPEECH-CONVERTOR-

The aim of the project is to convert an image to speech. An image is processed and segmented to identify the text it contains; the characters are then combined into words and saved as a text file, and this text file is converted to speech. We use two tools to complete the image-to-text-to-speech conversion: an OCR (Optical Character Recognition) engine and a TTS (Text to Speech) engine. OCR optically recognizes the characters in an image, and TTS converts the text file to speech. The audio output is played at runtime using the Python library Pygame (a sketch of the pipeline follows this entry).
Python
5
star
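A sketch of the pipeline described in the entry above follows. The repository names Pygame for playback; pytesseract and gTTS are stand-ins for the unnamed OCR and TTS engines, so treat those choices as assumptions.

```python
# OCR -> text file -> speech sketch; pytesseract/gTTS are assumed stand-ins for the engines.
import time
import pytesseract                # requires the Tesseract OCR binary to be installed
from PIL import Image
from gtts import gTTS
import pygame

def image_to_speech(image_path, txt_path="output.txt", mp3_path="output.mp3"):
    # 1) OCR: recognise the characters in the image.
    text = pytesseract.image_to_string(Image.open(image_path))
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write(text)
    # 2) TTS: convert the text to speech and save it.
    gTTS(text=text, lang="en").save(mp3_path)
    # 3) Playback with Pygame at runtime.
    pygame.mixer.init()
    pygame.mixer.music.load(mp3_path)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        time.sleep(0.1)

# image_to_speech("page.jpg")   # hypothetical input image
```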
26

Sussex-Hauwei-Human-Activity-Recognition

The goal of Human Activity Recognition (HAR) is to predict the action performed by a user with respect to the environment, such as walking, running, sitting, or sleeping. The Sussex-Huawei Locomotion Challenge provides a dataset for detecting human activity from mobile sensors that report acceleration, gravity, linear acceleration, pressure, gyroscope, magnetic field, and other signals.
Jupyter Notebook
5
star
27

Short-Messages-Spam-Filtering-Combining-Personality-Recognition-and-Sentiment-Analysis

As social media use and data consumption grow, spam SMS is increasing sharply. For example, almost 6.1 billion people use SMS-capable devices, and WhatsApp has reached 1 billion users. The growth of such social activity has also given rise to more and more illegal activity. Most SMS traffic is currently carried in Asia, with about 20-30% of it coming from China and India, which is why SMS spam is an emerging problem there. This growth is an open invitation to malicious organisations, and more illegal activities are being carried out through these devices; many organisations have been arrested by police in several countries for spamming people with attractive SMS offers and gifts. In this project we developed a model for filtering spam and ham SMS using sentiment analysis and personality recognition techniques. Spam is an irrelevant message, often used for advertisement, marketing, or spreading malware, and it must be filtered out so that such messages do not disturb the privacy of the user. The main aim of the project is to classify messages using personality recognition and sentiment analysis combined, whereas previous work filtered spam on the basis of sentiment alone. Spam can be described as uninvited electronic messages sent in bulk to a group of receivers. These messages are electronic, unsolicited, commercial, and sent en masse, and they constitute a growing threat mainly due to the following factors: 1) the availability of low-cost bulk SMS plans; 2) reliability; 3) the low chance of receiving responses from some unsuspecting receivers; and 4) the fact that messages can be personalized. Mobile SMS spam detection and prevention is an important problem that has inherited many issues and solutions from the older scenario of email spam detection and filtering. The main objective of this project is to analyse these techniques for short instant message spam filtering and to study whether polarity and personality dimensions can improve previously obtained results. We focus on SMS messages, which are structurally similar to other short instant messages.
Jupyter Notebook
5
star
28

VIDEO-ENHANCEMENT-USING-SINGLE-IMAGE-SUPER-RESOLUTION

Jupyter Notebook
4
star
29

Digital-Retinal-Images-for-Vessel-Extraction

Retinal vessel segmentation and the depiction of morphological attributes of retinal blood vessels, such as length, width, tortuosity, branching patterns, and angles, are utilized for the diagnosis, screening, treatment, and evaluation of various cardiovascular and ophthalmologic diseases such as diabetes, hypertension, and arteriosclerosis. Automatic detection and analysis of the vasculature can assist in the implementation of screening programs for diabetic retinopathy, can help relate vascular tortuosity to the diagnosis of hypertension, and can support computer-assisted laser surgery.
Python
4
star
30

Human-detection-for-Search-and-Rescue-operation-in-UAV-s-using-SSD

Traditionally, disaster rescue is carried out with helicopters and land rescue teams, but this takes considerable response time, and some areas remain inaccessible to the available facilities. Recently, with the advent of UAVs (drones), aerial surveillance has proven very useful: earthquake-, flood-, and fire-affected areas can often be explored only from the air, and UAVs can be used to monitor the environment during such situations. However, this still involves manual work, such as someone scanning the video shot by the drone, and there are few suitable AI models for detecting people from an aerial view.
Python
4
star
31

Video-Super-Resolution

Crime rates across the globe are rising at an alarming rate, and the measures taken against them are often insufficient. Criminals are frequently released simply because the evidence is not strong enough to hold up in court. One major issue with gathering evidence for crimes captured on CCTV cameras is that the footage is often not acceptable in court, the sole reason being that the videos are of too low a resolution. In this project we apply our knowledge of deep learning to video super-resolution, in the hope of making such footage more useful to the justice system.
Python
4
star
32

Visual-Relationship-Detection-Using-VTransE-Network-paper

Visual relationships, for example 'a person talking to another person' or 'a clock above a person', offer a comprehensive description of a scene. Inspired by recent advances in relational representation learning for knowledge bases and in convolutional object detection networks, we used a Visual Translation Embedding network (VTransE) for visual relation detection.
Python
4
star
33

Real-time-multiple-object-detection-on-road

The ability of a computer to locate and identify each object in an image or video is known as object detection. Object detection has many applications in self-driving cars, pedestrian counting, face detection, vehicle detection, and so on. One of the crucial elements of a self-driving car is the detection of various objects on the road, such as traffic signals, pedestrians, other vehicles, and sign boards. In this project, a Convolutional Neural Network (CNN) based approach is used for real-time detection of multiple objects on the road. A YOLO (You Only Look Once) v2 deep learning model is trained on the PASCAL VOC dataset. We achieved an mAP of 78 on the test dataset after training the model on an NVIDIA DGX-1 (V100) supercomputer. The trained model is then applied to recorded videos and to a live stream from a webcam.
Python
4
star
34

Galaxy-classifier

A galaxy classification system helps astronomers group galaxies according to their visual form. The most notable scheme is the Hubble sequence, one of the most widely used schemes in galaxy morphological classification, created by Edwin Powell Hubble in 1926. In this project, galaxy image classification using a deep convolutional neural network is presented. A galaxy can be classified based on its features into three main categories: elliptical, spiral, and irregular. The proposed deep galaxy architecture consists of one input convolutional layer with 16 filters, followed by 3 hidden layers, 1 penultimate dense layer, and an output softmax layer. It is trained on 3,232 images for 200 epochs and achieves a testing accuracy of 97.38%, outperforming conventional classifiers such as Support Vector Machines and previous research contributions in galaxy image classification. A sketch of the architecture follows this entry.
Jupyter Notebook
4
star
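The sketch below follows the architecture described above: a 16-filter input convolution, three hidden layers, a penultimate dense layer, and a softmax over the three classes. The hidden-layer types, filter counts, and input size are assumptions, since the description does not fully specify them.

```python
# Keras sketch of the described galaxy classifier; hidden-layer details are assumed.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),                       # assumed image size
    layers.Conv2D(16, 3, activation="relu"),                 # input conv layer, 16 filters
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),                 # hidden layer 1
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),                 # hidden layer 2
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),                # hidden layer 3
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),                     # penultimate dense layer
    layers.Dense(3, activation="softmax"),                   # elliptical / spiral / irregular
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```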
35

Emotion-Detection-using-Physiological-Signals

Emotions are the result of internal and external factors that are unique to every individual, and they influence human decisions to a considerable extent. Emotion detection is a large domain of research, and identifying and predicting emotions from data can, in fact, prevent many mishaps at an early stage. In this project, we consider four physiological signals: body temperature, heart rate, skin resistance, and pulse wave. These signals are obtained from a skin temperature sensor, a heart rate sensor, a skin response sensor, and a custom-designed pulse wave sensor, and are processed using an Arduino Uno microcontroller, which transmits the data to a computer via USB. The data is used for analysis and classification with machine learning algorithms to find out which algorithm provides the highest accuracy. The four basic emotions considered in this project are normal (relaxed), happy, sad, and angry. Data was collected from 22 healthy individuals, both male and female, with ages ranging from 20 to 22 years. The performance of different machine-learning algorithms on the dataset was checked through Weka and TensorFlow. Among all the algorithms applied, Random Forest gave the highest accuracy of 82.55% for the entire dataset in Weka, and we achieved an accuracy of 98.75% on individual datasets with a fully connected neural network with 10 hidden layers in TensorFlow.
4
star
36

Detecting-Fake-Profiles-On-Social-Media

Jupyter Notebook
3
star
37

Speech-Emotion-Recognition-using-Deep-learning

Jupyter Notebook
3
star
38

Plant-Leaf-Disease-Detection

Python
3
star
39

Sleep-Disorder

Python
3
star
40

Prediction-of-Dynamic-Cloud-Resources-Provisioning-for-Workflow

Jupyter Notebook
3
star
41

Human-detection-and-Activity-recognition-for-Search-and-Rescue-operation-in-UAV-s-using-RCNN-and-The

Human detection and activity recognition for search-and-rescue operations using UAVs (unmanned aerial vehicles) and RCNN focuses on detecting objects in aerial surveillance, addressing problems such as small object size, where objects may occupy only a few pixels in UAV imagery. Here we used the TensorFlow Object Detection API, an open-source framework built on top of TensorFlow that makes it easy to construct, train, and deploy object detection models; it provides a collection of detection models pre-trained on the COCO dataset, which we then trained on our custom dataset.
Python
3
star
42

Pneumothorax-Segmentation

A pneumothorax is an abnormal collection of air in the pleural space between the lung and the chest wall, often caused by chest injuries. Such cases are diagnosed by studying X-ray images containing the affected area. However, segmenting the affected region is cumbersome due to the complex details and multi-dimensional features.
Python
3
star
43

Comprehensive-Evaluation-of-Multivariate-Chaotic-Time-series-using-Neural-Networks

Multivariate chaotic time series prediction is a popular research topic relevant to many disciplines (such as weather forecasting and stock prediction), where the end goal is to predict the future of the time series based on past observations. Various neural networks have been proposed to forecast future values in time series data, but existing methods have not been comprehensively evaluated.
Python
3
star
44

-REAL-TIME-VEHICLE-CLASSIFICATION-AND-LOCALIZATION-USING-EDGE-COMPUTING-

The project aims to develop a traffic monitoring system using convolutional neural networks. We modified the existing Tiny YOLO model for Indian vehicles such as auto rickshaws, bicycles, and motorbikes: first we built a dataset of these vehicles, then we retrained the existing Tiny YOLO model. Moreover, with a Raspberry Pi we developed a prototype edge device that can count incoming and outgoing traffic at a particular point (a counting sketch follows this entry). Such a device has various applications, for example traffic monitoring, surveillance, and traffic load prediction. It is a fully functional, independent device; all decisions are made locally, which makes it highly useful for IoT deployments.
Python
3
star
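The counting step for the entry above could be implemented as in the sketch below: given per-frame centroids of each tracked vehicle, a vehicle is counted as incoming or outgoing when it crosses a virtual line. The line position and track format are assumptions.

```python
# Illustrative line-crossing counter for incoming/outgoing traffic (line position assumed).
def count_crossings(tracks, line_y=240):
    """tracks: dict of track_id -> list of (x, y) centroids ordered by frame."""
    incoming, outgoing = 0, 0
    for centroids in tracks.values():
        for (_, y0), (_, y1) in zip(centroids, centroids[1:]):
            if y0 < line_y <= y1:        # moved downward across the line
                incoming += 1
            elif y1 < line_y <= y0:      # moved upward across the line
                outgoing += 1
    return incoming, outgoing

# Example: one vehicle drives down past the line, another drives up past it -> prints (1, 1).
print(count_crossings({1: [(50, 200), (52, 250)], 2: [(80, 260), (79, 230)]}))
```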
45

Cardiovascular-Disease-Prediction

The main objective of our project is to predict whether a person is at risk of cardiovascular disease based on the person's medical examination readings (age, BMI, blood pressure, cholesterol, etc.). The significance of heart disease is increasing as the population ages, and existing modes of diagnosis are typically slow and may have undesirable side effects.
Jupyter Notebook
3
star
46

Real-Time-Age-Gender-and-Emotion-Detection-using-Deep-Learning-Techniques

Emotion is a strong feeling deriving from one's circumstances, mood, or relationships with others. It tells us about one's social behaviour and presence of mind, and social etiquette plays a major role in one's personality. A growing number of applications rely on the ability to extract information about people from real-time camera input. Examples are person identification for surveillance or access control, and the estimation of gender and age; these problems have been addressed in isolation in the past, and a variety of methods often exists for each.
C
3
star
47

Early-Stage-Blindness-Detection

Convolutional neural networks (CNNs) have become very popular for image classification in deep learning. In this competition, we are provided a dataset of images classified into 5 classes, from 0 to 4, indicating the severity of diabetic retinopathy, a disease that is the major cause of blindness among working-age people.
Python
3
star
48

TECHNICAL-SOLUTIONS-FOR-VISUALLY-IMPAIRED

Designed for the blind and low-vision community, this research project harnesses the power of AI to describe people, text, and objects. It brings together AI capabilities to deliver an intelligent system designed to help you navigate your day: point your phone's camera, select a channel, and hear a description of what the AI has recognized around you. The system can speak short text as it appears in front of the camera, provide audio guidance to capture a printed page, and recognize and narrate the text; it can also recognize and locate the faces of people you are with, and read text quickly with audio guidance to capture full documents. Our project is an extension of real-time object detection: we implemented real-time object detection using the COCO API, which detects objects in a live video stream, converts them to speech, and gives a rough indication of where each object is.
Python
3
star
49

-Face-recognition-technique-in-bank-locker-systems-for-security-purpose-using-deep-learning-

Face recognition is becoming a new trend in security authentication systems. Modern face recognition systems can even detect whether the person is real (live) or not while performing recognition, preventing the system from being fooled by a photo of a genuine person. Everyone wondered when Facebook implemented its auto-tagging technique, which recognizes a person and tags him or her whenever you upload a photo; it is so effective that it tags accurately even when the person's face is partially blocked or the photo is taken in the dark. All of these successful face recognition systems are the result of recent advances in computer vision, backed by powerful deep learning algorithms. In today's world, security plays a vital role, so we propose an advanced security system for bank locker systems and bank customers. This security is provided through two different modules in combination: face recognition and password verification. These steps must be followed in sequence, so that if anything goes wrong the user cannot access the system. Customers no longer have to worry about illegal access to their lockers; these advanced techniques make people feel secure and also help prevent theft. We developed a web application to showcase our project. Comparing all the models, we observed that CNN provides the highest accuracy (98.3%).
JavaScript
3
star
50

Image-Inpainting

Python
2
star
51

HUMAN-WITH-MASK-OR-WITHOUT-MASK

Jupyter Notebook
2
star
52

STRAY-ANIMAL-DETECTION

Python
2
star
53

Using-DCGAN-To-Generate-Faces-Of-Animated-Characters

Jupyter Notebook
2
star
54

Sentimental-Analysis-Of-Memes

When someone sends you a meme, can you tell whether the sender is actually happy, angry, or neutral? This makes sentiment analysis more important than ever.
Jupyter Notebook
2
star
55

Leaf-Segmentation-Challenge-Using-UNET

In this paper, we concentrate on the problem of segmenting tobacco and Arabidopsis leaves from an RGB image, an important task in plant phenotyping. To complete this task, we use a state-of-the-art deep learning architecture: U-Net, a convolutional neural network, for the initial segmentation.
Python
2
star
56

Deep-Learning-Based-Water-Feature-Mapping-Using-Sentinel--2-Satellite-Image

Jupyter Notebook
2
star
57

-CLASSIFICATION-OF-IMAGES-INTO-NATURAL-DIBR-RETARGETED-OR-SCREEN-CONTENT-

The project deals with image classification, an important topic in computer vision that has drawn a significant amount of interest over the last decade and aims to classify an input image based on its visual content. The main objective of this project is to build a deep learning model that categorises images as DIBR, Natural, Retargeted, or Screenshot, so that a user can check which category a test image belongs to, and to improve the accuracy (prediction score) of classifying a test image into one of these four categories. Three choices of model were tested: a network built from scratch, VGG16, and InceptionV3. Building our own network from scratch did not yield good results compared to the other two because the dataset was not extensive enough to train a model from scratch with good accuracy. The best results were obtained with the InceptionV3 model, followed by VGG16; the top-performing model achieved an accuracy of 98.3%. The reason the team could arrive at such good results is probably the similarity between the ImageNet dataset and our prediction classes.
Python
2
star
58

Face-Recognition-and-Tracking

The human face is one of the easiest ways to distinguish individual identity. Face recognition is a very important task with a wide variety of applications in security systems, authentication, and more, and tracking individuals can give us valuable insights. In this project, we developed a computer-vision-based face recognition and tracking system with OpenCV and dlib. Our model is able to recognize and differentiate between known and unknown faces. For the tracking part, the times when a person enters and leaves the premises are captured and the difference is calculated. The model was trained on a dataset of 50 different individuals with 10 different images of each, and we achieved state-of-the-art accuracy.
Python
2
star
59

-DESIGN-AND-IMPLEMENTATION-OF-PROCESSING-MODULE-FOR-OBJECT-DETECTION-AND-WEAPON-CLASSIFICATION-WITH-

Deep learning has emerged as a new area of machine learning and is applied to a number of image applications. The main purpose of this work is to apply a deep learning algorithm, namely convolutional neural networks (CNNs), to image classification. The algorithm is tested on standard COCO datasets, and its performance is evaluated using the quality metric Mean Squared Error (MSE) and classification accuracy. The experimental analysis based on these quality metrics and their graphical representation shows that the CNN gives fairly good classification accuracy on all tested datasets. We then used a visualization technique to understand which part of a given image led to its final classification decision; for this we used Class Activation Mapping (CAM). We also experimented with object detection using PyTorch.
Python
2
star
60

WIDER-Face-and-Persons-Challenge-2019-Track-2-Pedestrian-Detection

Pedestrian detection is an application of computer vision, closely related to object detection, with a wide range of applications: it can be used in surveillance monitoring, autonomous vehicles, face recognition, and more. This project performs pedestrian detection that treats pedestrians and cyclists equally. The detector must have high accuracy and precision; the model used is Faster R-CNN, which is more accurate than YOLOv3 but comparatively slower.
Python
2
star
61

Characterization-of-Binary-Machine-Learning-Classifier-for-Robust-Heart-Disease-Prediction

A large amount of patient-related data is stored and maintained in the health industry, and heart disease is now one of the most common conditions. The usual ways of detecting it are the electrocardiogram (ECG), stress tests, and heart MRI. The proposed model instead uses 13 parameters for the prediction of heart disease, including heart rate, chest pain, cholesterol level, blood pressure, and age. The aim of this model is to predict whether heart disease is present or not using machine learning models such as Decision Tree, Random Forest, Logistic Regression, and Naïve Bayes. We achieved a log loss of 0.3312 using logistic regression (a minimal sketch follows this entry).
Jupyter Notebook
2
star
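A minimal sketch of the logistic-regression baseline and its log-loss score for the entry above is shown below, assuming a CSV with the 13 feature columns and a binary "target" column; the file and column names are placeholders.

```python
# Logistic-regression baseline with log-loss evaluation; file/column names are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

df = pd.read_csv("heart.csv")                      # hypothetical dataset file
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train, X_test = scaler.fit_transform(X_train), scaler.transform(X_test)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("log loss:", log_loss(y_test, clf.predict_proba(X_test)))
```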
62

Image-Classification-Using-CNN

The identification and classification of objects in the world around us has long been a part of machine learning and a primary element of image recognition. Computers can be taught to identify visual factors in images so that a trained neural network can extract features for this demanding task. In our project, we trained multinomial classifiers, used convolutional neural networks to learn image features, and analysed the output to improve accuracy. We built a database by collecting commonly used public datasets. Our initial attempt at training with the VGG16 model did not provide satisfactory results, so we assessed the performance of several CNN architectures, namely VGG16, InceptionV3, and MobileNet, to compare their outputs and select the best-performing model. We obtained an accuracy of around 78% for VGG16, around 87% for MobileNet, and around 90% for InceptionV3. We selected InceptionV3, since it performed best, for further processing, trained it on an NVIDIA DGX, and reached an accuracy of 97.88%. The output over the four-class dataset suggests this is close to the maximum accuracy obtainable with the currently available data and techniques. As a learning exercise, we increased the dataset from around 3,300 to 5,100 images and observed improved accuracy. The results indicate that performance can be further enhanced with more advanced architectures and larger training sets (a transfer-learning sketch follows this entry).
Python
2
star
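The backbone comparison described above could be set up as in the sketch below: freeze each ImageNet-pretrained base and train a small 4-class head on top. The image size, head layout, and training settings are assumptions.

```python
# Transfer-learning comparison of VGG16, MobileNet, and InceptionV3 (head/settings assumed).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, MobileNet, InceptionV3

N_CLASSES, IMG = 4, (224, 224, 3)

def build_classifier(base_fn):
    base = base_fn(weights="imagenet", include_top=False, input_shape=IMG)
    base.trainable = False                        # keep pretrained features fixed
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

for base_fn in (VGG16, MobileNet, InceptionV3):
    model = build_classifier(base_fn)
    print(base_fn.__name__, model.count_params())
    # model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets not shown here
```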
63

DOG-BREED-IDENTIFICATION

Dog breed identification is a very specific application of convolutional neural networks. It falls under the category of fine-grained image classification, where inter-class variations are small and often one small part of the image makes the difference in classification and identification. The various classes of ImageNet, by contrast, can have large inter-class variations, making them easier to categorize correctly.
Jupyter Notebook
1
star
64

MODULE-ON-ENGLISH-TO-HINDI-NEURAL-MACHINE-TRANSLATION

Jupyter Notebook
1
star
65

-A-Probabilistic-Object-Detection-in-Computer-Vision-Using-Deep-Learning-Approach

Jupyter Notebook
1
star
66

Identification-of-Aortic-Valve-Opening-Points-using-Seismocardiogram-Signals

1
star
67

Technical-Analysis-Based-Stock-Prediction

Jupyter Notebook
1
star
68

Evaluation-of-Object-Detection-Approaches-for-Real-Time-Face-Mask-Detection

Jupyter Notebook
1
star
69

Chest-X-Ray-Images-Pneumonia-Detection

Jupyter Notebook
1
star
70

Stock-Market-prediction

Python
1
star
71

FLU-SHOT-LEARNING-PREDICT-H1N1-AND-SEASONAL-FLU-VACCINES

Jupyter Notebook
1
star
72

Multi-Person-Pose-Estimation

Jupyter Notebook
1
star
73

Mortality-Prediction-using-Machine-Learning-Techniques

Predicting human mortality is one of the most challenging tasks today. We evaluate the prediction model on 79,999 patients with 342 features, predicting the mortality (DEAD or ALIVE) of patients admitted to hospital using deep neural networks and various machine learning methods, of which a linear SVM showed the best accuracy.
Jupyter Notebook
1
star
74

Classification-of-COVID-19-chest-X-ray-images-using-Convolutional-Neural-Networks-CNN-

Jupyter Notebook
1
star
75

Disease-Identification-in-Kharif-Crop

Jupyter Notebook
1
star
76

Video-Object-Segmentation-from-One-Frame-Annotation

Python
1
star
77

Surveillance-of-Identifying-Vehicles-Parked-In-No-Parking

Jupyter Notebook
1
star
78

SDN-Traffic-Classification-using-Deep-Learning

Jupyter Notebook
1
star
79

Native-and-Non-Native-English-Speech-Classification--A-premise-to-Accent-Conversion

Jupyter Notebook
1
star
80

Aerial-Cactus-Identification-using-Deep-Learning-

This paper focuses on various convolutional neural network architectures for the aerial cactus identification task. Our main effort is a thorough experimental evaluation of the performance of different networks for identifying columnar cacti in aerial imagery using deep learning.
Jupyter Notebook
1
star
81

Crop-Yield-Prediction-through-Different-Machine-Learning

Jupyter Notebook
1
star
82

AI-based-Prediction-of-Rainfall-from-Satellite-Observation-for-Disaster-Management

Jupyter Notebook
1
star
83

Recommendation-Engine-Personalized-Approach

With personalization and efficient performance as primary goals, this paper focuses on building a recommendation engine. We propose a method that recommends products based on a user's likes and dislikes. It focuses on building a Restricted Boltzmann Machine (RBM) that suggests products to the buyer and makes shopping easier. The reader gets an insight into how an RBM yields a good recommendation engine compared with slower traditional methods. In this paper we explore the use of a two-layer RBM by converting tabular data into a user-item matrix (a sketch follows this entry).
Jupyter Notebook
1
star
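The sketch below shows a two-layer (visible plus hidden) RBM over a binary user-item matrix, using scikit-learn's BernoulliRBM as a stand-in for the repository's own implementation; the toy matrix, hidden size, and scoring rule are illustrative assumptions.

```python
# RBM over a user-item matrix with scikit-learn's BernoulliRBM (a stand-in; sizes assumed).
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Rows = users, columns = items; 1 means the user liked/bought the item.
user_item = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=200, random_state=0)
rbm.fit(user_item)

# Score items via hidden activations; high scores on unseen items are candidate recommendations.
hidden = rbm.transform(user_item)                    # P(hidden = 1 | visible)
scores = hidden @ rbm.components_ + rbm.intercept_visible_
recommend_for_user0 = np.argsort(-scores[0])         # item indices ranked by score for user 0
print(recommend_for_user0)
```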
84

Human-activity-Detection

Human activity detection plays a significant role in human-to-human interaction and interpersonal relations. Because it carries information about a person's identity, personality, and psychological state, it is also difficult to extract. The human ability to recognize another person's activities is one of the main subjects of study in computer vision and machine learning, and many applications, including video surveillance systems, human-computer interaction, and robotics for characterising human behavior, require a multiple-activity detection system.
Jupyter Notebook
1
star
85

Multi-Person-Pose-Tracking

Existing frameworks for video-based pose estimation and tracking struggle to perform well on realistic footage with many people, and frequently fail to yield body-pose trajectories that are consistent over time. In this work, we address the difficult problem of joint multi-person pose estimation and tracking of many individuals in unconstrained videos.
Python
1
star
86

LUNG-DISEASE-CLASSIFICATION-AND-QUANTIFICATION-WITH-EXPLAINABLE-AI

The project undertaken aims at harnessing the power of Computer Vision, Image Processing, and Deep Learning Techniques to develop a Smart Radiology System that correctly distinguishes between three classes - COVID-19, PNEUMONIA, and NORMAL Lungs using Chest X-RAY images. The system aims at segmenting the infected part of the lungs and provides an Activation map to highlight the regions of interest.
1
star
87

Heart-Disease-Prediction-Using-Machine-Learning-Techniques

Heart diseases are the number one cause of death around the world. They refer to disorders of the heart and blood vessels. The health sector maintains an enormous quantity of patient-related information, and this stored information may be helpful for future disease prediction. In this research paper, we concentrate on different machine learning algorithms that effectively predict heart disease.
Jupyter Notebook
1
star
88

Swedish-Leaf-Dataset-Classification

This paper introduces a specific approach for leaf classification based on Machine Learning (ML), Transfer Learning (TL), and Convolutional Neural Network (CNN). The proposed method consists of three stages, pre-processing, feature extraction, and classification.
Jupyter Notebook
1
star
89

Review-On-Machine-Learning-Algorithms-For-Dengue-Disease-Spread-Prediction

Dengue is a disease caused by four types of related viruses transmitted by mosquitoes, most commonly Aedes aegypti. In its less severe form, infected patients experience flu-like symptoms that vary from mild to intense, but severe dengue, or Dengue Hemorrhagic Fever, can be fatal without proper medical care. The disease is considered an alarming threat to the health of populations spanning millions of people living in tropical and subtropical areas of the globe where the mosquito thrives. A large number of studies have confirmed that the incidence of dengue is positively correlated with climatic conditions, specifically temperature, humidity, and precipitation levels.
Jupyter Notebook
1
star
90

Traffic-Flow-Prediction-

Traffic flow prediction has long been regarded as a critical problem for intelligent transportation systems. It aims at estimating the traffic flow on a road over the next several time intervals, where the intervals are usually short-term, varying from 5 to 15 minutes. Two types of data are usually used in traffic flow prediction.
1
star
91

RICHTER-PREDICTION-PREDICTING-DAMAGE-CAUSED-BY-EARTHQUAKES

In this project we predict earthquake damage by region, using the temporal sequence of historic seismic activity in combination with machine learning classifiers. The parameters are based on geophysical facts such as the distribution of characteristic earthquake magnitudes and seismic quiescence.
Jupyter Notebook
1
star
92

COCO-DATASET-STUFF-SEGMENTATION-CHALLENGE

There are three levels of image analysis: classification, detection, and segmentation. Image segmentation is the division of an image into regions or categories; its goal is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.
Python
1
star
93

Video-Object-Segmentation

Video object segmentation allows computer vision systems to identify objects as they move through space in a given video. Here we present One-Shot Video Object Segmentation (OSVOS), a CNN architecture that tackles the problem of semi-supervised video object segmentation, that is, the classification of all pixels of a video sequence into background and foreground, given the manual annotation of one of its frames.
Python
1
star
94

SIMAH-SocIaL-Media-And-Harassment-

Social networking platforms have enabled millions of users to share their opinions and thoughts about different aspects and events on micro-blogging platforms. However, the use of social media and the lack of protection of private information have led to the broadcasting of bullying and harassment messages, and have also brought a risk of online harassment.
Jupyter Notebook
1
star
95

ImageNet-Object-Localization-Challenge

State-of-the-art object detection systems depend on region proposal algorithms to hypothesize object locations. Advances such as Faster R-CNN have reduced the running time of these detection systems, exposing region proposal computation as a bottleneck. In the present work, we use a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thereby enabling nearly cost-free region proposals.
Python
1
star
96

DESIGN-AND-IMPLEMENTATION-OF-PROCESSING-MODULE-FOR-OBJECT-DETECTION-AND-WEAPON-CLASSIFICATION...

Deep learning has emerged as a new area of machine learning and is applied to a number of image applications. The main purpose of this work is to apply a deep learning algorithm, namely convolutional neural networks (CNNs), to image classification. The algorithm is tested on standard COCO datasets, and its performance is evaluated using the quality metric Mean Squared Error (MSE) and classification accuracy. The experimental analysis based on these quality metrics and their graphical representation shows that the CNN gives fairly good classification accuracy on all tested datasets. We then used a visualization technique to understand which part of a given image led to its final classification decision; for this we used Class Activation Mapping (CAM). We also experimented with object detection using PyTorch.
Python
1
star
97

Human-Identification-Using-Autonomous-Drone

Detecting a specific person in a crowd using a drone together with a resource-constrained device is the main concern discussed in this paper. Combining advanced algorithms with suitable hardware, we find a way to search for a missing individual in a crowd or at a given location. We can also search for a person at a particular location by setting the aerial vehicle to fly autonomously and look for the required person. This helps us cover areas that cannot be reached easily by humans.
Python
1
star
98

Video-Segmentation

Video segmentation means segmenting all objects in a scene and classifying them into certain classes. It plays a major role in day-to-day applications such as traffic counting, movie editing, and individual tracking.
Python
1
star
99

WIDER-FACE-DETECTION

Face detection and alignment in unconstrained environments are challenging due to varying poses, illuminations, and occlusions. We propose a deep cascaded multi-task framework that boosts detection performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark locations in a coarse-to-fine manner. Our method achieves superior accuracy over state-of-the-art techniques on the challenging WIDER FACE benchmark for face detection while keeping real-time performance.
Python
1
star
100

Emotion-Detection-using-Video

Facial expressions play a key role in detecting emotion and understanding humans. The term 'interface' itself suggests the importance the face plays in communication between two entities. Studies have shown that reading facial expressions can significantly reveal the personality of a person. Regardless of language and cultural barriers, there will always be a set of fundamental facial expressions that people assess and communicate with. The human ability to interpret emotions is very important for effective communication; up to 93% of the communication in a normal conversation depends on the emotion of an entity. Seven basic emotions are considered universal to human beings: neutral, anger, disgust, fear, happiness, sadness, and surprise, and they can be recognized from a human's facial expression.
Python
1
star