Towards_Developing_a_Machine_Learning_Model_For_Suicidal_Attempt_Prediction
In this study [N=469], we classified samples into those who had attempted suicide and those who had not. For classification we used a random forest classifier and obtained 83.7% accuracy in separating these two groups. In addition to the classification analysis, we also show which factors can be important for a suicide attempt.

Frequent-Pattern-Mining
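Before turning to pattern mining: the classification analysis described above can be sketched with scikit-learn. The study's actual features and preprocessing are not given here, so this uses a synthetic stand-in dataset of the same size (N=469); the feature count, split ratio, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: synthetic data stands in for the study's real features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Two classes: attempted suicide vs. not (labels here are synthetic).
X, y = make_classification(n_samples=469, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))

# Feature importances suggest which factors matter most for the prediction,
# mirroring the "important factors" analysis mentioned above.
importances = clf.feature_importances_
```

With real clinical features, `clf.feature_importances_` is what would rank the candidate risk factors.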
Data mining is the process of finding interesting patterns or information in selected data using particular algorithms, techniques, or methods. These techniques vary greatly, and selecting the right one depends largely on the objectives and on the Knowledge Discovery in Databases (KDD) process as a whole (in this project, applied to the chess and kosarak datasets).

Data mining techniques that find associative rules, i.e. relationships between items, are called association rule mining. Two common algorithms for finding association rules are the Apriori algorithm and the FP-growth algorithm. The Apriori algorithm uses frequent itemsets to generate association rules and is designed to operate on databases of transactions. It relies on the property that every subset of a frequent itemset must itself be frequent. The FP-growth algorithm is an improved alternative to Apriori for frequent pattern mining.

To understand both algorithms, we first need the notions of frequent itemsets and association rules. A frequent itemset is an itemset whose support is greater than a threshold value, and association rules uncover relationships between two or more attributes. Frequent itemsets can be found with either algorithm, but they work differently: Apriori generates candidate itemsets by scanning the full transactional database on every pass, whereas FP-growth directly generates only the frequent itemsets that meet the user-defined minimum support. Because Apriori scans the entire database multiple times, it is more resource-hungry, and the time to obtain the association rules grows steeply as the database grows. FP-growth, on the other hand, does not scan the entire database repeatedly, so its scanning time increases roughly linearly.
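The level-wise candidate generation described above can be sketched in plain Python. This is a minimal Apriori, not a production implementation; the tiny example database and item names are invented. Note how `support` rescans every transaction each time it is called: this is the repeated database scanning the text refers to.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frequent itemset: support count} via level-wise search."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        # Full database scan for every candidate: Apriori's main cost.
        return sum(1 for t in transactions if itemset <= t)

    frequent = {}
    items = {i for t in transactions for i in t}
    current = [frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support]
    k = 1
    while current:
        for s in current:
            frequent[s] = support(s)
        k += 1
        # Join step: merge frequent (k-1)-itemsets into k-candidates,
        # then prune any candidate with an infrequent (k-1)-subset.
        candidates = set()
        for a in current:
            for b in current:
                union = a | b
                if len(union) == k and all(
                    frozenset(sub) in frequent
                    for sub in combinations(union, k - 1)
                ):
                    candidates.add(union)
        current = [c for c in candidates if support(c) >= min_support]
    return frequent
```

For example, `apriori([['a','b'], ['b','c'], ['a','b','c']], 2)` keeps `{a,b}` and `{b,c}` but prunes `{a,c}`, whose support is only 1.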
Hence the FP-growth algorithm is much faster than the Apriori algorithm.
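The reason for that speed difference can be sketched as follows. This follows FP-growth's divide-and-conquer strategy of recursing on conditional pattern bases, but, to keep the sketch short, it stores each conditional database as a plain list of lists rather than as the compressed prefix tree (FP-tree) that gives the real algorithm its memory savings. The example transactions are invented.

```python
from collections import defaultdict

def fp_mine(transactions, min_support):
    """Frequent-itemset mining by recursing on conditional pattern bases.

    Simplified FP-growth-style search: instead of rescanning the whole
    database per candidate (as Apriori does), each recursion works only
    on the shrinking conditional database for one item. Items must be
    orderable (e.g. strings), so each itemset is produced exactly once.
    """
    patterns = {}

    def recurse(db, suffix):
        counts = defaultdict(int)
        for t in db:
            for item in t:
                counts[item] += 1
        for item in sorted(counts):
            count = counts[item]
            if count >= min_support:
                pattern = suffix + (item,)
                patterns[frozenset(pattern)] = count
                # Conditional database: the tail of each transaction
                # after `item`, restricted to transactions containing it.
                conditional = [[j for j in t if j > item]
                               for t in db if item in t]
                recurse(conditional, pattern)

    recurse([sorted(set(t)) for t in transactions], ())
    return patterns
```

On `[['a','b'], ['b','c'], ['a','b','c']]` with minimum support 2 this yields the same frequent itemsets as the Apriori sketch, but each recursive call only touches the transactions relevant to its suffix, which is why the approach scales better as the database grows.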