• Stars: 236
• Rank: 170,480 (Top 4%)
• Language: Python
• License: MIT License
• Created: over 6 years ago
• Updated: almost 2 years ago


Repository Details

Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, that is, in developing two-class classifiers which can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These, so to speak, traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs), e.g., auto-encoders [4]. The pattern recognition step will be based on Gaussian Mixture Model (GMM) classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
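
As a rough illustration of the front-end step described above, the sketch below computes 13 MFCCs per 25 ms frame with a 10 ms hop. The choice of the python_speech_features package, the file name, and the parameter values are assumptions for illustration, not necessarily what this repository uses.

```python
# Illustrative front-end only: library, file name and parameters are assumptions.
from scipy.io import wavfile
from python_speech_features import mfcc

def extract_mfcc(wav_path, num_cepstra=13):
    """Return an (n_frames, num_cepstra) matrix of MFCCs for one utterance."""
    rate, signal = wavfile.read(wav_path)
    if signal.ndim > 1:          # keep a single channel if the file is stereo
        signal = signal[:, 0]
    return mfcc(signal, samplerate=rate,
                winlen=0.025, winstep=0.010, numcep=num_cepstra)

# features = extract_mfcc("utterance.wav")  # one feature vector per 25 ms frame
```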

Speech-Signal-Processing-and-Classification

Aristotle University of Thessaloniki - University of Groningen

Abstract of my thesis, conducted during the 7th-8th semester: "Two-class classification problems by analyzing the speech signal."

Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, that is, in developing two-class classifiers which can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These, so to speak, traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs), e.g., auto-encoders [4]. Additionally, for the SVM classifier, dimensionality reduction is performed using algorithms such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), as well as the kernel form of PCA (KernelPCA). In the multiclass implementation, KernelPCA followed by LDA was essential; in the binary classification, we compare the use of PCA and KernelPCA. For experimental purposes, graph spectral analysis (Isomap, LLE) was used for dimensionality reduction, followed by spectral clustering, in order to investigate subsets of the data. The pattern recognition step will be based on Gaussian Mixture Model (GMM) classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [5-7].
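
A minimal scikit-learn sketch of the binary PCA-versus-KernelPCA comparison mentioned above. The feature matrix, label vector, shapes, and hyperparameters are random placeholders for illustration, not the repository's actual pipeline.

```python
# Sketch only: X and y are random placeholders standing in for per-utterance
# features (e.g., MFCC statistics) and healthy / vocal-fold-paralysis labels.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 39))
y = rng.integers(0, 2, size=100)

for reducer in (PCA(n_components=10),
                KernelPCA(n_components=10, kernel="rbf")):
    clf = make_pipeline(StandardScaler(), reducer, SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(reducer).__name__, round(scores.mean(), 3))
```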

[1] X. Huang, A. Acero, and H.-W. Hon, Spoken Language Processing. Upper Saddle River, N.J.: Pearson Education-Prentice Hall, 2001.

[2] J. R. Deller, J. H. L. Hansen, and J. G. Proakis, Discrete-Time Processing of Speech Signals. New York, N.Y.: Wiley-IEEE, 1999.

[3] L. R. Rabiner and R. W. Schafer, Theory and Applications of Digital Speech Processing. Upper Saddle River, N.J.: Pearson Education-Prentice Hall, 2011.

[4] W.-N. Hsu, Y. Zhang, and J. R. Glass, "Unsupervised Domain Adaptation for Robust Speech Recognition via Variational Autoencoder-Based Data Augmentation", 2017, http://arxiv.org/abs/1707.06265.

[5] C. Kotropoulos and G. R. Arce, "Linear discriminant classifier with reject option for the detection of vocal fold paralysis and vocal fold edema", EURASIP Journal on Advances in Signal Processing, vol. 2009, article ID 203790, 13 pages, 2009 (DOI: 10.1155/2009/203790).

[6] E. Ziogas and C. Kotropoulos, "Detection of vocal fold paralysis and edema using linear discriminant classifiers", in Proc. 4th Panhellenic Artificial Intelligence Conf. (SETN-06), Heraklion, Greece, vol. LNAI 3966, pp. 454-464, May 19-20, 2006.

[7] M. Marinaki, C. Kotropoulos, I. Pitas, and N. Maglaveras, "Automatic detection of vocal fold paralysis and edema", in Proc. 8th Int. Conf. Spoken Language Processing (INTERSPEECH 2004), Jeju, Korea, pp. 537-540, October 2004.

More Repositories

1. Cryptography

Crypto projects in Python, e.g., attacks on Vigenère, RSA, the Telnet protocol, Hip Replacement, the Vernam cipher, cracking ZIP files, RC4 encryption, steganography, the Feistel cipher, superincreasing knapsacks, elliptic curve cryptography, Diffie-Hellman & EDF. (A toy Vernam sketch follows this entry.)
Python · 20 stars
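
As a toy illustration of one of the listed topics, the following Vernam (XOR one-time pad) example is generic, not code taken from the repository.

```python
# Toy Vernam / one-time-pad example: XOR the message with a random key of equal
# length; XORing the ciphertext with the same key recovers the plaintext.
import os

def vernam(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "one-time-pad key must match message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = os.urandom(len(message))
ciphertext = vernam(message, key)
assert vernam(ciphertext, key) == message
```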
2. Neural_Machine_Translation

Neural machine translation using LSTMs and an attention mechanism. Two models were implemented: one without attention, using a repeat vector, and the other using an encoder-decoder architecture with attention. (A sketch of the repeat-vector baseline follows this entry.)
Python · 12 stars
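
A hypothetical Keras outline of the repeat-vector baseline described above; the vocabulary sizes, sequence lengths, and layer widths are placeholders, and the repository's actual architecture may differ.

```python
# Sketch of a no-attention seq2seq model: the encoder LSTM compresses the source
# sentence into one vector, RepeatVector copies it for every target step, and a
# decoder LSTM with a per-step softmax emits the translation.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, RepeatVector, TimeDistributed, Dense

SRC_VOCAB, TGT_VOCAB = 5000, 5000   # placeholder vocabulary sizes
TGT_LEN, UNITS = 20, 256            # placeholder target length / layer width

model = Sequential([
    Embedding(SRC_VOCAB, 128),                  # source token embeddings
    LSTM(UNITS),                                # encoder: sentence -> fixed vector
    RepeatVector(TGT_LEN),                      # repeat it for each target position
    LSTM(UNITS, return_sequences=True),         # decoder
    TimeDistributed(Dense(TGT_VOCAB, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```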
3. OpiRec

Opinion recommendation is a recently introduced task: consistently generating a text review and a rating score that a certain user would give to a certain product which the user has never seen before. The input driving the recommendation consists of text reviews and ratings for this product contributed by other users, together with text reviews submitted by the user under consideration for other products. The task faces the same problems that emerge in text generation with neural networks, namely repetition and specificity. In this work, it is experimentally demonstrated that by employing a coverage loss during training, repetition is reduced without adding extra parameters. Furthermore, the amount of repetition in the generated text review is defined as a measure of the captured information, and this measure is used to improve rating score prediction significantly during testing. (A minimal sketch of the coverage-loss idea follows this entry.)
Python · 7 stars
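
A minimal NumPy sketch of the coverage-loss idea mentioned above: at each decoding step, attention mass that falls on already-covered source positions is penalised, which discourages repetition. It illustrates the general mechanism only and is not the repository's implementation.

```python
# Coverage loss over one decoded sequence.
import numpy as np

def coverage_loss(attention):
    """attention: (T_dec, T_src) array, one attention distribution per step."""
    coverage = np.zeros(attention.shape[1])
    total = 0.0
    for a_t in attention:
        total += np.minimum(a_t, coverage).sum()   # overlap with past attention
        coverage += a_t                            # accumulate coverage
    return total / len(attention)

repetitive = np.tile([1.0, 0.0, 0.0], (4, 1))              # always attends to position 0
varied = np.vstack([np.eye(3), [[0.0, 0.0, 1.0]]])         # spreads attention around
print(coverage_loss(repetitive) > coverage_loss(varied))   # True: repetition costs more
```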
4. ChessGame

Machine learning applied to chess for making optimal decisions, using neural networks and supervised learning algorithms.
Python · 5 stars
5. Nerd-Game

Nerd Game is an app that quizzes you on nerdy computer science and maths questions. You have three lives, and you can configure the game if you want to play against the clock. All the questions are stored in a SQL server connected to the GUI.
Java · 4 stars