Aman Kapoor (@aman1391)
  • Stars: 16
  • Global Rank: 731,219 (Top 26%)
  • Followers: 36
  • Following: 47
  • Registered: almost 9 years ago
  • Most used languages: R (62.5%), Python (25.0%)
  • Location: 🇮🇳 India
  • Country Total Rank: 30,281
  • Country Ranking: R: 82

Top repositories

1. H20-learnup (R, 4 stars)

   Built a model for wind turbine speed. Engineered features such as speed, direction, and a t-SNE distribution; used decision trees to combine the date and month columns, and glmnet for feature selection. A bag of 10 XGBoost models scored an RMSE (root mean squared error) of 0.17321 on the public leaderboard, but overfit on the private leaderboard despite good cross-validation scores. A great learning experience nonetheless; thanks to Analytics Vidhya for running this competition.
   Rank: 12th / 109 participants. Team name: Aegis. Team members: Aman Kapoor, Anchal Gupta.
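The bagging and scoring steps described in the H20-learnup entry can be sketched in plain Python. This is a minimal illustration of averaging a bag of model predictions and computing RMSE, not the actual competition code; the function names and data are illustrative.

```python
import math

def bag_predictions(pred_lists):
    """Average the predictions of several models, position by position
    (e.g. a bag of 10 XGBoost runs with different seeds)."""
    return [sum(p) / len(p) for p in zip(*pred_lists)]

def rmse(y_true, y_pred):
    """Root mean squared error between actual and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# averaging two toy prediction vectors
bagged = bag_predictions([[1, 2], [3, 4]])  # [2.0, 3.0]
```

Averaging a bag of models trained on different seeds or samples reduces variance, which is exactly why a bag can score well on cross-validation yet still overfit a small private leaderboard split.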
2. redhat_kaggle (R, 3 stars)

   Kaggle project: Red Hat.
3. Smart--Recruits (R, 3 stars)

   Public leaderboard: 0.684.
4. sbs (Python, 2 stars)

   Sequential backward selection: an algorithm that starts with n features and checks model accuracy on subsets of d features (n > d), reducing noise by removing irrelevant features. From Chapter 4, dimensionality reduction techniques.
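The selection loop described in the sbs entry can be sketched in plain Python. The scoring function below is a toy stand-in for real model accuracy (the actual repo's implementation isn't shown here): at each step, drop whichever single feature hurts the score least, until d features remain.

```python
from itertools import combinations

def sbs(features, score, d):
    """Sequential backward selection: start with all n features and
    repeatedly remove one feature, keeping the (n-1)-sized subset that
    scores best, until only d features remain (n > d)."""
    current = tuple(features)
    while len(current) > d:
        # evaluate every subset obtained by removing exactly one feature
        current = max(combinations(current, len(current) - 1), key=score)
    return list(current)

# toy score: reward features from an "informative" set, lightly penalise size
informative = {"x1", "x3"}
def toy_score(subset):
    return sum(f in informative for f in subset) - 0.01 * len(subset)

selected = sbs(["x1", "x2", "x3", "x4"], toy_score, 2)  # ['x1', 'x3']
```

In practice the score function would be cross-validated model accuracy, which makes the loop expensive: it fits O(n²) models in the worst case.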
5. ultimate-studenthunt (R, 1 star)
6. M5-Competion (Jupyter Notebook, 1 star)

   Hi.
7. MLWARE_1 (Python, 1 star)

   A totally new area of competition for me and my teammate (Anchal Gupta); we hadn't worked on recommendation systems before, but it was a nice experience to compete against 100 teams on a national platform and secure 29th place.
8. Knockoctober (R, 1 star)

   MedCamp, an organisation, ran health camps for people with poor work-life balance. Our aim was to predict the target variable from attributes such as health camp details, patient details, and camps 1, 2, and 3. We did predictive modelling, extracting features from dates and other attributes such as LinkedIn shares, Facebook shares, age, income, and employer details. The final model comprised ~13 features with a blend of logistic regression, decision trees, and XGBoost, giving an AUC of ~0.81 on the public leaderboard and ~0.75 on the private leaderboard.
   Final standing: 27th out of 286 participants.
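The blend-and-score step described in the Knockoctober entry can be sketched in plain Python. The weights and probabilities below are illustrative, not from the actual solution; AUC is computed via the pairwise (Mann-Whitney) formulation.

```python
def blend(model_probs, weights):
    """Weighted average of per-model predicted probabilities
    (e.g. logistic regression, decision trees, XGBoost)."""
    total = sum(weights)
    return [sum(w * p[i] for w, p in zip(weights, model_probs)) / total
            for i in range(len(model_probs[0]))]

def auc(y_true, scores):
    """AUC as the fraction of positive/negative pairs where the positive
    example is scored higher, counting ties as half."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# equal-weight blend of two toy models over two examples
blended = blend([[0.2, 0.8], [0.4, 0.6]], [1, 1])
```

A gap like ~0.81 public vs ~0.75 private AUC is common when blend weights are tuned against the public leaderboard rather than purely on cross-validation.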