Siva Arwin (@SivaArwin)
  • Stars 62
  • Global Rank 291,579 (Top 11 %)
  • Followers 7
  • Following 14
  • Registered almost 5 years ago
  • Most used languages
    R 58.3 %
    HTML 16.7 %
    Python 8.3 %
  • Location 🇮🇳 India
  • Country Total Rank 9,480
  • Country Ranking
    R 74
    HTML 7,512

Top repositories

1. Text-Mining (R, 4 stars)
   The code in this repository extracts customer reviews from the travel website TripAdvisor. Most websites do not expose their customer data, but platforms such as TripAdvisor, Amazon, Twitter, and IMDb are very useful for data science students: the extracted reviews can be processed and used for model validation and visualization, and sentiment analysis could also be applied (not done in this repository). The scripts target the HTML nodes and links of the pages the user is interested in, and the extracted reviews are saved as plain text files; a minimal scraping sketch follows below.
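A minimal sketch of the kind of review extraction described above, written here in Python with requests and BeautifulSoup rather than the repository's R code; the URL and the .review-text selector are placeholders, not real TripAdvisor markup.

```python
# Hypothetical sketch: pull review text out of a listing page and save it to a text file.
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/hotel-reviews"  # placeholder URL, not a real review page
response = requests.get(url, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
# ".review-text" is an assumed class name; the real HTML nodes differ per site.
reviews = [node.get_text(strip=True) for node in soup.select(".review-text")]

with open("reviews.txt", "w", encoding="utf-8") as f:
    for review in reviews:
        f.write(review + "\n")

print(f"Saved {len(reviews)} reviews")
```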
2. IBM-SPSS-MODELER-C2090-930-preparation-18-v3- (3 stars)
   First of all, you need an IBM exam registration.
3. MS-SQL-SERVER---Practices (3 stars)
4. Forecasting-Techniques-with-some-examples (R, 3 stars)
   Start with the README.
5. Arwin-s-Data (HTML, 3 stars)
   Summary: blogdown: Creating Websites with R Markdown provides a practical guide for creating websites with the blogdown package in R. The book shows how to use dynamic R Markdown documents to build static websites featuring R code (or other programming languages) with automatically rendered output such as graphics, tables, analysis results, and HTML widgets. The blogdown package is also suitable for technical writing with elements such as citations, footnotes, and LaTeX math, which makes it an ideal platform for any website designed to communicate information about data science, data analysis, data visualization, or R programming. Note that blogdown is not just for blogging or sites about R; it can also be used to create general-purpose websites. By default, blogdown uses Hugo, a popular open-source static site generator that provides a fast and flexible way to build site content for sharing online; other generators such as Jekyll and Hexo are also supported. The book covers how to build a website with the blogdown package, create blog posts and other content as dynamic documents that can be easily edited and updated, customize Hugo templates to suit the site's needs, publish the website online, and migrate existing websites to blogdown and Hugo.
   I like to analyze data to answer research questions and test hypotheses. Currently I investigate questions related to breast cancer through my work as a Research Biostatistician at [Memorial Sloan Kettering Cancer Center](https://www.mskcc.org/departments/epidemiology-biostatistics) in the department of Epidemiology & Biostatistics.
6. Decision-Tree---Siva-K (R, 2 stars)
   A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that contains only conditional control statements. A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents an outcome of the test, and each leaf node represents a class label (the decision taken after evaluating all attributes); the paths from root to leaf represent classification rules. Tree-based learning algorithms are among the best and most widely used supervised learning methods: they give predictive models high accuracy, stability, and ease of interpretation, and unlike linear models they map non-linear relationships well. They can be applied to both classification and regression problems, and decision tree algorithms for the two tasks are collectively referred to as CART (Classification and Regression Trees). A small sketch follows below.
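A minimal CART-style sketch with scikit-learn, not the repository's R code; the iris dataset is a stand-in for whatever data the repo actually uses.

```python
# Hypothetical sketch: fit a small classification tree and print its decision rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)

# Each internal node tests one attribute; each leaf carries a class label.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("Test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=data.feature_names))
```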
7. Machine-Learning-BCCC (1 star)
8. Salary---Forecasting- (R, 1 star)
9. Flipkart-Scrapping- (1 star)
10. Naive-Bayes---Siva-K (1 star)
11. Dashboards-Dossiers-reports (1 star)
12. Maths-for-ML- (Jupyter Notebook, 1 star)
13. NLP-Fusion-and-Cleaning-XML-tags- (HTML, 1 star)
14. Siva- (1 star)
   aasdjgwuhocan
15. NAMMA-Python (Jupyter Notebook, 1 star)
   Understand that Python does not need a separate compile step: it is an interpreted language, so you can run a program as soon as you change the file, which makes iterating, revising, and troubleshooting much quicker than in many other languages. Python is one of the easier languages to learn, and you can have a basic program up and running in just a few minutes. You can calculate powers with the ** operator, and Python quickly handles very large numbers; a short example follows below. Mess around in the interpreter, too: it lets you test code without adding it to a program first, which is great for learning how specific commands work or for writing throw-away programs.
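Two of the basics mentioned above, as they behave in any standard Python interpreter; this is generic Python, not code from the repository.

```python
# Powers use the ** operator, and Python integers grow as large as needed.
print(2 ** 10)   # 1024
print(2 ** 200)  # a 61-digit number, computed exactly

# There is no separate compile step: edit the file and run `python script.py` again.
# The interactive interpreter (REPL) is a good place to try lines like these
# before adding them to a program.
```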
16. Basics-of-Python-rough-practices- (1 star)
   I fell in love with Python after reading a bunch of answers on Quora about how people were doing wonderful things with it. Some were writing scripts to automate their WhatsApp messages, some wrote scripts to download their favourite songs, and some built systems to receive cricket score updates on their phones. All of this seemed very exciting to me, and I finally decided that I would love to learn Python.
17. Energhelpline-Webscraping (1 star)
   Web scraping is simply collecting data from websites. You can extract information such as product pricing and discounts, and the data you acquire can help improve the user experience, which in turn helps ensure that customers prefer you over your competitors.
18. SVM-NAA-Ennanu-theriyuma (1 star)
   A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane: given labelled training data (supervised learning), the algorithm outputs an optimal hyperplane that categorizes new examples. In two-dimensional space this hyperplane is a line dividing the plane into two parts, with each class lying on either side. SVM is a supervised machine learning algorithm that can be used for classification or regression problems; it uses a technique called the kernel trick to transform the data and, based on these transformations, finds an optimal boundary between the possible outputs. A minimal sketch follows below.
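A minimal scikit-learn sketch of the idea described above: an SVM with an RBF kernel (the kernel trick) fitting a boundary on toy two-class data. Illustrative only, not taken from the repository.

```python
# Hypothetical sketch: train an SVM classifier on a toy two-class problem.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# The RBF kernel implicitly maps the points into a higher-dimensional space,
# where a separating hyperplane is easier to find.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))
```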
19. KNN---Assignment- (R, 1 star)
   Calculate the distance from x to all points in your data, sort the points by increasing distance from x, and predict the majority label of the k closest points. Note that the value of k affects the results; it is a good idea to test the model with different values of k to get better results and thereby a better model. The steps are written out in the sketch below.
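The steps from the description written out as a small from-scratch Python function (the repository itself is in R); the toy points and labels are made up for illustration.

```python
# Hypothetical sketch: classify x by the majority label of its k nearest neighbours.
from collections import Counter
import math

def knn_predict(train_points, train_labels, x, k=3):
    # 1. Distance from x to every training point.
    distances = [math.dist(p, x) for p in train_points]
    # 2. Sort the points by increasing distance and keep the k closest.
    nearest = sorted(zip(distances, train_labels))[:k]
    # 3. Predict the majority label among those k points.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy data: two clusters labelled "A" and "B". The choice of k affects the result.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["A", "A", "A", "B", "B", "B"]

print(knn_predict(points, labels, (2, 2), k=3))  # "A"
print(knn_predict(points, labels, (9, 9), k=3))  # "B"
```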
20. Scraping---Uswitch.com (Python, 1 star)
   Web scraping is the process of extracting data from websites. Some data on the web is presented in a format that makes it easy to collect and use, for example as downloadable comma-separated values (CSV) datasets that can be imported into a spreadsheet or loaded into a data analysis script. Often, however, even though data is publicly available, it is not readily reusable: it may be contained in a PDF, in a table on a website, or spread across multiple web pages. There are a variety of ways to scrape a website to extract information for reuse. In its simplest form this can be done by copying and pasting snippets from a web page, but that becomes impractical when there is a large amount of data or it is spread over many pages. Instead, specialized tools and techniques can automate the process by defining which sites to visit, what information to look for, and whether extraction should stop at the end of a page or follow hyperlinks and repeat the process recursively. Automation also makes it possible to run the process at regular intervals and capture changes in the data. A rough sketch of this approach follows below.
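A rough Python sketch of the automated approach outlined above: visit a page, pull out the rows of interest, and follow a "next" link until no pages remain. The start URL and both selectors are placeholders, not the real Uswitch markup.

```python
# Hypothetical sketch: crawl a paginated listing and collect one field per row.
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def scrape_all_pages(start_url, max_pages=10):
    rows, url = [], start_url
    for _ in range(max_pages):  # safety cap so the crawl always terminates
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        # ".deal-name" is an assumed selector for the data of interest.
        rows += [node.get_text(strip=True) for node in soup.select(".deal-name")]
        next_link = soup.select_one("a.next")  # assumed "next page" link
        if next_link is None:
            break
        url = urljoin(url, next_link["href"])
    return rows

print(scrape_all_pages("https://www.example.com/deals"))
```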
21. Neural-Networks (R, 1 star)
   Neural networks follow the usual picture we have of the brain: neurons interconnected with other neurons to form a network, where a piece of information passes through many of them before becoming an actual action, like "move the hand to pick up this pencil". The operation of a complete neural network is straightforward: you enter variables as inputs (for example an image, if the network is supposed to tell what is in an image), and after some calculations an output is returned (following the same example, giving an image of a cat should return the word "cat"). Artificial neural networks are usually arranged in columns (layers), so that a neuron in column n can only be connected to neurons in columns n-1 and n+1; a few types of networks use a different architecture, but we will focus on the simplest for now. A tiny forward-pass sketch follows below.
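A tiny NumPy illustration of the column/layer structure described above, where each layer connects only to its neighbours and an input vector is pushed forward to an output; a generic sketch, not the repository's R code.

```python
# Hypothetical sketch: one forward pass through a 3-4-2 feedforward network.
import numpy as np

rng = np.random.default_rng(0)

# Weights connect each column of neurons only to the next column.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # inputs (3) -> hidden neurons (4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden neurons (4) -> outputs (2)

def forward(x):
    hidden = np.tanh(W1 @ x + b1)  # each hidden neuron combines all the inputs
    return W2 @ hidden + b2        # each output neuron combines the hidden values

x = np.array([0.5, -1.0, 2.0])  # the input variables
print(forward(x))               # the network's output for this input
```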
22. Regression-analysis (R, 1 star)
   Linear and logistic regression are usually the first modeling algorithms people learn for machine learning and data science, and both are great because they are easy to use and interpret. Their inherent simplicity, however, comes with a few drawbacks, and in many cases they are not really the best choice of regression model. There are in fact several different types of regression, each with its own strengths and weaknesses. This post looks at seven of the most common regression algorithms and their properties; many of them are biased toward working well only in certain situations and with certain types of data, so the post should give you a few more tools in your regression toolbox and greater insight into regression models as a whole. A minimal sketch of the two starting points follows below.
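A minimal sketch of the two models the description starts from, linear and logistic regression, fitted with scikit-learn on synthetic data; it is not taken from the repository, which is in R.

```python
# Hypothetical sketch: fit a linear and a logistic regression on synthetic data.
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression

# Linear regression predicts a continuous target.
X_reg, y_reg = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
linear = LinearRegression().fit(X_reg, y_reg)
print("Linear R^2:", linear.score(X_reg, y_reg))

# Logistic regression predicts a binary class.
X_clf, y_clf = make_classification(n_samples=200, n_features=5, random_state=0)
logistic = LogisticRegression(max_iter=1000).fit(X_clf, y_clf)
print("Logistic accuracy:", logistic.score(X_clf, y_clf))
```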
23. PYTORCH-BUD (1 star)
   PyTorch is a GPU-accelerated tensor computation framework with a Python front end. Its functionality can be easily extended with common Python libraries such as NumPy, SciPy, and Cython. Automatic differentiation is done with a tape-based system at both the functional and the neural network layer level. This brings a high level of flexibility and speed as a deep learning framework and provides accelerated NumPy-like functionality; a short illustration follows below.
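A short illustration of the two points above, NumPy-like tensors on an optional GPU and tape-based automatic differentiation; this is generic PyTorch usage, not code from the repository.

```python
# Hypothetical sketch: tensor computation and autograd in PyTorch.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# NumPy-like tensor maths, optionally accelerated on a GPU.
a = torch.randn(3, 3, device=device)
b = torch.randn(3, 3, device=device)
print(a @ b)

# Tape-based automatic differentiation: operations on x are recorded, and
# backward() replays that tape to compute gradients.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)  # dy/dx = 2x -> tensor([4., 6.])
```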
24. Basic-Python-rough-works- (1 star)
   Learning the basics of Python is a piece of cake; it is extremely simple to get up and running, and basics like variables, operators, and control structures are far easier to learn than in languages such as Java. It has been 15 days since I started learning Python, so I feel I am eligible to answer this question even though I have no prior programming experience; my job profile is pushing me to learn Python, and that's it. I browsed the internet and found many sources, free as well as paid, but I was confused about which one to follow, whether YouTube or some website. Finally I got one PDF of about 250 pages; I will not say it is great, but as a starting point it is very good for learning Python's fundamentals. Once you complete those 250 pages you can move on to other sources, and by then you will have more clarity in Python. If you need it, I can send it to your mail id.