Shakkala Project مشروع شكّالة
Introduction
The Shakkala project presents a recurrent neural network for Arabic text vocalization that automatically forms Arabic characters (تشكيل الحروف) to enhance text-to-speech systems. The model can also be used in other applications such as improving search results. In the beta version, the model was trained on over a million sentences, including a majority of historical Arabic data from books and some modern data from the internet. The accuracy of the model reached up to 95%, and in some data sets it achieved even higher levels of accuracy depending on complexity and data distribution. This innovative approach has the potential to significantly improve the quality of writing and text-to-speech systems for the Arabic language.
Requirements
Easy setup
When installing via pip, no GitHub repository installation is needed:
pip install shakkala
Project setup
To run the source code from GitHub, first install the requirements:
cd requirements
pip install -r requirements.txt
cd ..
Note: Shakkala has been tested with TensorFlow 2.9.3.
Code Examples (How to)
Check the full example in the demo.py file.
- Import
from shakkala import Shakkala
- Create Shakkala object
sh = Shakkala()
OR for advanced usage:
sh = Shakkala(version={version_num})
- Prepare input
input_text = "فإن لم يكونا كذلك أتى بما يقتضيه الحال وهذا أولى"
input_int = sh.prepare_input(input_text)
- Call the neural network
model, graph = sh.get_model()
logits = model.predict(input_int)[0]
- Predict output
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
Available models:
- version_num=1: First test of the solution.
- version_num=2: Main release version.
- version_num=3: Some enhancements over version number 2.
It is worth trying both version_num=2 and version_num=3; a combined end-to-end sketch follows below.
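Putting the steps above together, here is a minimal end-to-end sketch based on the calls documented in this README (the chosen version number and the sample sentence are only illustrations; see demo.py for the reference example):

# Minimal end-to-end sketch assembling the steps above.
from shakkala import Shakkala

sh = Shakkala(version=3)  # or version=2; it is worth trying both

# Prepare input
input_text = "فإن لم يكونا كذلك أتى بما يقتضيه الحال وهذا أولى"
input_int = sh.prepare_input(input_text)

# Call the neural network
model, graph = sh.get_model()
logits = model.predict(input_int)[0]

# Predict output
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
print(final_output)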
Demo run
The fastest way to start with Shakkala is by running the demo from GitHub:
python demo.py
Performance Tips
Shakkala is built in an object-oriented way so that the model is loaded into memory only once for faster prediction. To make sure you do not load it multiple times in your service or application, follow these steps (a combined sketch follows the list):
- Load the model in global variable:
sh = Shakkala(folder_location, version={version_num})
model, graph = sh.get_model()
- Then inside your request function or loop add:
input_int = sh.prepare_input(input_text)
logits = model.predict(input_int)[0]
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
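As a rough illustration of this pattern, the sketch below loads the model once at module level and reuses it inside a diacritize() helper; the helper name is hypothetical and not part of the library:

# Load the model once, at module (global) level.
from shakkala import Shakkala

sh = Shakkala(version=3)
model, graph = sh.get_model()

def diacritize(input_text):
    # Reuse the already-loaded model for every request.
    input_int = sh.prepare_input(input_text)
    logits = model.predict(input_int)[0]
    predicted_harakat = sh.logits_to_text(logits)
    return sh.get_final_text(input_text, predicted_harakat)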
Accuracy
In this beta version 2, accuracy reached up to 95%, and on some data it reached more, depending on complexity and data distribution.
This beta version was trained on more than a million sentences, the majority being historical Arabic data from books, along with some of the available diacritized (formed) modern data from the internet.
Prediction Example
For a live demo based on the Shakkala library, click the link.
Real output | Predicted output |
---|---|
فَإِنْ لَمْ يَكُونَا كَذَلِكَ أَتَى بِمَا يَقْتَضِيهِ الْحَالُ وَهَذَا أَوْلَى | فَإِنْ لَمْ يَكُونَا كَذَلِكَ أَتَى بِمَا يَقْتَضِيهِ الْحَالُ وَهَذَا أَوْلَى |
قَالَ الْإِسْنَوِيُّ وَسَوَاءٌ فِيمَا قَالُوهُ مَاتَ فِي حَيَاةِ أَبَوَيْهِ أَمْ لَا | قَالَ الْإِسْنَوِيُّ وَسَوَاءٌ فِيمَا قَالُوهُ مَاتَ فِي حَيَاةِ أَبَوَيْهِ أَمْ لَا |
طَابِعَةٌ ثُلَاثِيَّةُ الْأَبْعَاد | طَابِعَةٌ ثَلَاثِيَّةُ الْأَبْعَادِ |
Accuracy Enhancements
The model can be enhanced to reach more than 95% accuracy with the following:
- Availability of more diacritized (formed) modern data to train the network (the current version was trained mostly on available historical Arabic data and some modern data).
- Stacking different models (a rough stacking sketch follows below).
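As a rough, untested illustration of model stacking, the sketch below averages the predictions of version 2 and version 3. It assumes both versions share the same input encoding and output (harakat) vocabulary, which should be verified before using such an ensemble:

# Rough sketch of stacking two model versions by averaging predictions.
# Assumption: version 2 and version 3 produce logits of compatible shape
# over the same harakat vocabulary; validate shapes before relying on this.
import numpy as np
from shakkala import Shakkala

sh_v2 = Shakkala(version=2)
sh_v3 = Shakkala(version=3)
model_v2, _ = sh_v2.get_model()
model_v3, _ = sh_v3.get_model()

def diacritize_ensemble(input_text):
    logits_v2 = model_v2.predict(sh_v2.prepare_input(input_text))[0]
    logits_v3 = model_v3.predict(sh_v3.prepare_input(input_text))[0]
    avg_logits = (np.asarray(logits_v2) + np.asarray(logits_v3)) / 2.0
    predicted_harakat = sh_v3.logits_to_text(avg_logits)
    return sh_v3.get_final_text(input_text, predicted_harakat)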
Model Design
References
- A paper comparing different Arabic text diacritization models, which shows that Shakkala is the best among the available neural networks for this task: Arabic Text Diacritization Using Deep Neural Networks, 2019
Citation
For academic work, use:
Shakkala, Arabic text vocalization, Barqawi & Zerrouki
OR in BibTeX format:
@misc{
title={Shakkala, Arabic text vocalization},
author={Barqawi, Zerrouki},
url={https://github.com/Barqawiz/Shakkala},
year={2017}
}
Contribution
Core Team
- Ahmad Barqawi: Neural Network Developer.
- Taha Zerrouki: Mentor, data and results.
Contributors
- Zaid Farekh & propellerinc.me: Provided infrastructure and consultation support.
- Mohammad Issam Aklik: Artist.
- Brahim Sidi: Formed new sentences.
- Fadi Bakoura: Aggregated online content.
- Ola Ghanem: Testing.
- Ali Hamdi Ali Fadel: Contributed code.
License
Free to use and distribute; just mention the original project name Shakkala as the base model.
The MIT License (MIT)
Copyright (c) 2017 Shakkala Project
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.