Data Analysis with R

About this ebook

Load, wrangle, and analyze your data using the world's most powerful statistical programming language

About This Book

- Load, manipulate and analyze data from different sources
- Gain a deeper understanding of fundamentals of applied statistics
- A practical guide to performing real-world data analysis

Who This Book Is For

Whether you are learning data analysis for the first time, or you want to deepen the understanding you already have, this book will prove to be an invaluable resource. If you are looking for a book to bring you all the way from the fundamentals to the application of advanced and effective analytics methodologies, and have some prior programming experience and a mathematical background, then this is the book for you.

What You Will Learn

- Navigate the R environment
- Describe and visualize the behavior of data and relationships between data
- Gain a thorough understanding of statistical reasoning and sampling
- Employ hypothesis tests to draw inferences from your data
- Learn Bayesian methods for estimating parameters
- Perform regression to predict continuous variables
- Apply powerful classification methods to predict categorical data
- Handle missing data gracefully using multiple imputation
- Identify and manage problematic data points
- Employ parallelization and Rcpp to scale your analyses to larger data
- Put best practices into effect to make your job easier and facilitate reproducibility

In Detail

Frequently the tool of choice for academics, R has spread deep into the private sector and can be found in the production pipelines at some of the most advanced and successful enterprises. The power and domain-specificity of R allow the user to express complex analytics easily, quickly, and succinctly. With over 7,000 user-contributed packages, it's easy to find support for the latest and greatest algorithms and techniques.
Starting with the basics of R and statistical reasoning, Data Analysis with R dives into advanced predictive analytics, showing how to apply those techniques to real-world data with real-world examples.
Packed with engaging problems and exercises, this book begins with a review of R and its syntax. From there, get to grips with the fundamentals of applied statistics and build on this knowledge to perform sophisticated and powerful analytics. Solve the difficulties relating to performing data analysis in practice and find solutions to working with “messy data”, large data, communicating results, and facilitating reproducibility.
This book is engineered to be an invaluable resource through many stages of anyone’s career as a data analyst.

Style and approach

Learn data analysis using engaging examples and fun exercises, and with a gentle and friendly but comprehensive "learn-by-doing" approach.
Language: English
Release date: Dec 22, 2015
ISBN: 9781785286445

Book preview

Data Analysis with R - Tony Fischetti

Table of Contents

Data Analysis with R

Credits

About the Author

About the Reviewer

www.PacktPub.com

Support files, eBooks, discount offers, and more

Why subscribe?

Free access for Packt account holders

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Downloading the color images of this book

Errata

Piracy

Questions

1. RefresheR

Navigating the basics

Arithmetic and assignment

Logicals and characters

Flow of control

Getting help in R

Vectors

Subsetting

Vectorized functions

Advanced subsetting

Recycling

Functions

Matrices

Loading data into R

Working with packages

Exercises

Summary

2. The Shape of Data

Univariate data

Frequency distributions

Central tendency

Spread

Populations, samples, and estimation

Probability distributions

Visualization methods

Exercises

Summary

3. Describing Relationships

Multivariate data

Relationships between a categorical and a continuous variable

Relationships between two categorical variables

The relationship between two continuous variables

Covariance

Correlation coefficients

Comparing multiple correlations

Visualization methods

Categorical and continuous variables

Two categorical variables

Two continuous variables

More than two continuous variables

Exercises

Summary

4. Probability

Basic probability

A tale of two interpretations

Sampling from distributions

Parameters

The binomial distribution

The normal distribution

The three-sigma rule and using z-tables

Exercises

Summary

5. Using Data to Reason About the World

Estimating means

The sampling distribution

Interval estimation

How did we get 1.96?

Smaller samples

Exercises

Summary

6. Testing Hypotheses

Null Hypothesis Significance Testing

One and two-tailed tests

When things go wrong

A warning about significance

A warning about p-values

Testing the mean of one sample

Assumptions of the one sample t-test

Testing two means

Don't be fooled!

Assumptions of the independent samples t-test

Testing more than two means

Assumptions of ANOVA

Testing independence of proportions

What if my assumptions are unfounded?

Exercises

Summary

7. Bayesian Methods

The big idea behind Bayesian analysis

Choosing a prior

Who cares about coin flips

Enter MCMC – stage left

Using JAGS and runjags

Fitting distributions the Bayesian way

The Bayesian independent samples t-test

Exercises

Summary

8. Predicting Continuous Variables

Linear models

Simple linear regression

Simple linear regression with a binary predictor

A word of warning

Multiple regression

Regression with a non-binary predictor

Kitchen sink regression

The bias-variance trade-off

Cross-validation

Striking a balance

Linear regression diagnostics

Second Anscombe relationship

Third Anscombe relationship

Fourth Anscombe relationship

Advanced topics

Exercises

Summary

9. Predicting Categorical Variables

k-Nearest Neighbors

Using k-NN in R

Confusion matrices

Limitations of k-NN

Logistic regression

Using logistic regression in R

Decision trees

Random forests

Choosing a classifier

The vertical decision boundary

The diagonal decision boundary

The crescent decision boundary

The circular decision boundary

Exercises

Summary

10. Sources of Data

Relational Databases

Why didn't we just do that in SQL?

Using JSON

XML

Other data formats

Online repositories

Exercises

Summary

11. Dealing with Messy Data

Analysis with missing data

Visualizing missing data

Types of missing data

So which one is it?

Unsophisticated methods for dealing with missing data

Complete case analysis

Pairwise deletion

Mean substitution

Hot deck imputation

Regression imputation

Stochastic regression imputation

Multiple imputation

So how does mice come up with the imputed values?

Methods of imputation

Multiple imputation in practice

Analysis with unsanitized data

Checking for out-of-bounds data

Checking the data type of a column

Checking for unexpected categories

Checking for outliers, entry errors, or unlikely data points

Chaining assertions

Other messiness

OpenRefine

Regular expressions

tidyr

Exercises

Summary

12. Dealing with Large Data

Wait to optimize

Using a bigger and faster machine

Be smart about your code

Allocation of memory

Vectorization

Using optimized packages

Using another R implementation

Use parallelization

Getting started with parallel R

An example of (some) substance

Using Rcpp

Be smarter about your code

Exercises

Summary

13. Reproducibility and Best Practices

R Scripting

RStudio

Running R scripts

An example script

Scripting and reproducibility

R projects

Version control

Communicating results

Exercises

Summary

Index

Data Analysis with R

Copyright © 2015 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2015

Production reference: 1171215

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78528-814-2

www.packtpub.com

Credits

Author

Tony Fischetti

Reviewer

Dipanjan Sarkar

Commissioning Editor

Akram Hussain

Acquisition Editor

Meeta Rajani

Content Development Editor

Anish Dhurat

Technical Editor

Siddhesh Patil

Copy Editor

Sonia Mathur

Project Coordinator

Bijal Patel

Proofreader

Safis Editing

Indexer

Monica Ajmera Mehta

Graphics

Disha Haria

Production Coordinator

Conidon Miranda

Cover Work

Conidon Miranda

About the Author

Tony Fischetti is a data scientist at College Factual, where he gets to use R every day to build personalized rankings and recommender systems. He graduated in cognitive science from Rensselaer Polytechnic Institute, and his thesis was strongly focused on using statistics to study visual short-term memory.

Tony enjoys writing and contributing to open source software, blogging at http://www.onthelambda.com, writing about himself in third person, and sharing his knowledge using simple, approachable language and engaging examples.

The more traditionally exciting of his daily activities include listening to records, playing the guitar and bass (poorly), weight training, and helping others.

Because I'm aware of how incredibly lucky I am, it's really hard to express all the gratitude I have for everyone in my life that helped me—either directly, or indirectly—in completing this book. The following (partial) list is my best attempt at balancing thoroughness whilst also maximizing the number of people who will read this section by keeping it to a manageable length.

First, I'd like to thank all of my educators. In particular, I'd like to thank the Bronx High School of Science and Rensselaer Polytechnic Institute. More specifically, I'd like to thank the Bronx Science Robotics Team, all its members, its team moms, the wonderful Dena Ford and Cherrie Fleisher-Strauss, and Justin Fox. From the latter institution, I'd like to thank all of my professors and advisors. Shout out to Mike Kalsher, Michael Schoelles, Wayne Gray, Bram van Heuveln, Larry Reid, and Keith Anderson (especially Keith Anderson).

I'd like to thank the New York Public Library, Wikipedia, and other freely available educational resources. On a related note, I need to thank the R community and, more generally, all of the authors of R packages and other open source software I use for spending their own personal time to benefit humanity. Shout out to GNU, the R core team, and Hadley Wickham (who wrote a majority of the R packages I use daily).

Next, I'd like to thank the company I work for, College Factual, and all of my brilliant co-workers from whom I've learned so much.

I also need to thank my support network of millions, and my many many friends that have all helped me more than they will likely ever realize.

I'd like to thank my partner, Bethany Wickham, who has been absolutely instrumental in providing much needed and appreciated emotional support during the writing of this book, and putting up with the mood swings that come along with working all day and writing all night.

Next, I'd like to express my gratitude for my sister, Andrea Fischetti, who means the world to me. Throughout my life, she's kept me warm and human in spite of the scientist in me that likes to get all reductionist and cerebral.

Finally, and most importantly, I'd like to thank my parents. This book is for my father, to whom I owe my love of learning and my interest in science and statistics; and for my mother, for her love and unwavering support, and to whom I owe my work ethic and ability to handle anything and tackle any challenge.

About the Reviewer

Dipanjan Sarkar is an IT engineer at Intel, the world's largest silicon company, where he works on analytics, business intelligence, and application development. He received his master's degree in information technology from the International Institute of Information Technology, Bangalore. Dipanjan's area of specialization includes software engineering, data science, machine learning, and text analytics.

His interests include learning about new technologies, disruptive start-ups, and data science. In his spare time, he loves reading, playing games, and watching popular sitcoms. Dipanjan also reviewed Learning R for Geospatial Analysis and R Data Analysis Cookbook, both by Packt Publishing.

I would like to thank Bijal Patel, the project coordinator of this book, for making the reviewing experience really interactive and enjoyable.

www.PacktPub.com

Support files, eBooks, discount offers, and more

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?

- Fully searchable across every book published by Packt
- Copy and paste, print, and bookmark content
- On demand and accessible via a web browser

Free access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.

Preface

I'm going to shoot it to you straight: there are a lot of books about data analysis and the R programming language. I'll take it on faith that you already know why it's extremely helpful and fruitful to learn R and data analysis (if not, why are you reading this preface?!) but allow me to make a case for choosing this book to guide you in your journey.

For one, this subject didn't come naturally to me. There are those with an innate talent for grasping the intricacies of statistics the first time it is taught to them; I don't think I'm one of these people. I kept at it because I love science and research and knew that data analysis was necessary, not because it immediately made sense to me. Today, I love the subject in and of itself, rather than instrumentally, but this only came after months of heartache. Eventually, as I consumed resource after resource, the pieces of the puzzle started to come together. After this, I started tutoring all of my friends in the subject—and have seen them trip over the same obstacles that I had to learn to climb. I think that coming from this background gives me a unique perspective on the plight of the statistics student and allows me to reach them in a way that others may not be able to. By the way, don't let the fact that statistics used to baffle me scare you; I have it on fairly good authority that I know what I'm talking about today.

Secondly, this book was born of the frustration that most statistics texts tend to be written in the driest manner possible. In contrast, I adopt a light-hearted buoyant approach—but without becoming agonizingly flippant.

Third, this book includes a lot of material that I wished were covered in more of the resources I used when I was learning about data analysis in R. For example, the entire last unit specifically covers topics that present enormous challenges to R analysts when they first go out to apply their knowledge to imperfect real-world data.

Lastly, I thought long and hard about how to lay out this book and which order of topics was optimal. And when I say long and hard I mean I wrote a library and designed algorithms to do this. The order in which I present the topics in this book was very carefully considered to (a) build on top of each other, (b) follow a reasonable level of difficulty progression allowing for periodic chapters of relatively simpler material (psychologists call this intermittent reinforcement), (c) group highly related topics together, and (d) minimize the number of topics that require knowledge of yet unlearned topics (this is, unfortunately, common in statistics). If you're interested, I detail this procedure in a blog post that you can read at http://bit.ly/teach-stats.

The point is that the book you're holding is a very special one—one that I poured my soul into. Nevertheless, data analysis can be a notoriously difficult subject, and there may be times where nothing seems to make sense. During these times, remember that many others (including myself) have felt stuck, too. Persevere… the reward is great. And remember, if a blockhead like me can do it, you can, too. Go you!

What this book covers

Chapter 1, RefresheR, reviews the aspects of R that subsequent chapters will assume knowledge of. Here, we learn the basics of R syntax, learn R's major data structures, write functions, load data and install packages.

Chapter 2, The Shape of Data, discusses univariate data. We learn about different data types, how to describe univariate data, and how to visualize the shape of these data.

Chapter 3, Describing Relationships, goes on to the subject of multivariate data. In particular, we learn about the three main classes of bivariate relationships and learn how to describe them.

Chapter 4, Probability, kicks off a new unit by laying its foundation. We learn about basic probability theory, Bayes' theorem, and probability distributions.

Chapter 5, Using Data to Reason About the World, discusses sampling and estimation theory. Through examples, we learn of the central limit theorem, point estimation and confidence intervals.

Chapter 6, Testing Hypotheses, introduces the subject of Null Hypothesis Significance Testing (NHST). We learn many popular hypothesis tests and their non-parametric alternatives. Most importantly, we gain a thorough understanding of the misconceptions and gotchas of NHST.

Chapter 7, Bayesian Methods, introduces an alternative to NHST based on a more intuitive view of probability. We learn the advantages and drawbacks of this approach, too.

Chapter 8, Predicting Continuous Variables, thoroughly discusses linear regression. Before the chapter's conclusion, we learn all about the technique, when to use it, and what traps to look out for.

Chapter 9, Predicting Categorical Variables, introduces four of the most popular classification techniques. By using all four on the same examples, we gain an appreciation for what makes each technique shine.

Chapter 10, Sources of Data, is all about how to use different data sources in R. In particular, we learn how to interface with databases, and request and load JSON and XML via an engaging example.

Chapter 11, Dealing with Messy Data, introduces some of the snags of working with less than perfect data in practice. The bulk of this chapter is dedicated to missing data, imputation, and identifying and testing for messy data.

Chapter 12, Dealing with Large Data, discusses some of the techniques that can be used to cope with data sets that are too large to handle swiftly without a little planning. The key components of this chapter are parallelization and Rcpp.

Chapter 13, Reproducibility and Best Practices, closes with the extremely important (but often ignored) topic of how to use R like a professional. This includes learning about tooling, organization, and reproducibility.

What you need for this book

All code in this book has been written against the latest version of R—3.2.2 at the time of writing. As a matter of good practice, you should keep your R version up to date but most, if not all, code should work with any reasonably recent version of R. Some of the R packages we will be installing will require more recent versions, though. For the other software that this book uses, instructions will be furnished pro re nata. If you want to get a head start, however, install RStudio, JAGS, and a C++ compiler (or Rtools if you use Windows).
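
If you want to confirm which version of R you are running before following along, you can inspect the built-in R.version.string constant at the R prompt. This is just a minimal convenience check (not part of the book's own setup instructions); the exact string printed will depend on your installation:

  > R.version.string    # reports the version of the running R interpreter

  [1] "R version 3.2.2 (2015-08-14)"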

Who this book is for

Whether you are learning data analysis for the first time, or you want to deepen the understanding you already have, this book will prove to be an invaluable resource. If you are looking for a book to bring you all the way from the fundamentals to the application of advanced and effective analytics methodologies, and have some prior programming experience and a mathematical background, then this is the book for you.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: We will use the system.time function to time the execution.

A block of code is set as follows:

library(VIM)

aggr(miss_mtcars, numbers=TRUE)

Any command-line input or output is written as follows:

# R --vanilla CMD BATCH nothing.R

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: Clicking the Next button moves you to the next screen.

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/Data_Analysis_With_R_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.

Chapter 1. RefresheR

Before we dive into the (other) fun stuff (sampling multi-dimensional probability distributions, using convex optimization to fit data models, and so on), it would be helpful if we review those aspects of R that all subsequent chapters will assume knowledge of.

If you fancy yourself as an R guru, you should still, at least, skim through this chapter, because you'll almost certainly find the idioms, packages, and style introduced here to be beneficial in following along with the rest of the material.

If you don't care much about R (yet), and are just in this for the statistics, you can heave a heavy sigh of relief that, for the most part, you can run the code given in this book in the interactive R interpreter with very little modification, and just follow along with the ideas. However, it is my belief (read: delusion) that by the end of this book, you'll cultivate a newfound appreciation of R alongside a robust understanding of methods in data analysis.

Fire up your R interpreter, and let's get started!

Navigating the basics

In the interactive R interpreter, any line starting with a > character denotes R asking for input (If you see a + prompt, it means that you didn't finish typing a statement at the prompt and R is asking you to provide the rest of the expression.). Striking the return key will send your input to R to be evaluated. R's response is then spit back at you in the line immediately following your input, after which R asks for more input. This is called a REPL (Read-Evaluate-Print-Loop). It is also possible for R to read a batch of commands saved in a file (unsurprisingly called batch mode), but we'll be using the interactive mode for most of the book.
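
For instance, if you press return before an expression is complete, R displays the + continuation prompt and waits for the rest of the expression (a minimal illustration):

  > 2 +

  + 2

  [1] 4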

As you might imagine, R supports all the familiar mathematical operators that you'd find in most other languages:

Arithmetic and assignment

Check out the following example:

  > 2 + 2

  [1] 4

 

  > 9 / 3

  [1] 3

 

  > 5 %% 2    # modulus operator (remainder of 5 divided by 2)

  [1] 1

Anything that occurs after the octothorpe or pound sign, #, (or hash-tag for you young'uns), is ignored by the R interpreter. This is useful for documenting the code in natural language. These are called comments.

In a multi-operation arithmetic expression, R will follow the standard order of operations from math. In order to override this natural order, you have to use parentheses flanking the sub-expression that you'd like to be performed first.

  > 3 + 2 - 10 ^ 2        # ^ is the exponent operator

  [1] -95

  > 3 + (2 - 10) ^ 2

  [1] 67

In practice, almost all compound expressions are split up, with intermediate values assigned to variables; using a variable in a future expression is just like substituting in the value that was assigned to it. The (primary) assignment operator is <-.

  > # assignments follow the form VARIABLE <- VALUE

  > var <- 10

  > var

  [1] 10

  > var ^ 2

  [1] 100

  > VAR / 2            # variable names are case-sensitive

  Error: object 'VAR' not found

Notice that the first and second lines in the preceding code snippet didn't have an output to be displayed, so R just immediately asked for more input. This is because assignments don't have a return value. Their only job is to give a value to a variable, or to change the existing value of a variable. Generally, operations and functions on variables in R don't change the value of the variable. Instead, they return the result of the operation. If you want to change a variable to the result of an operation using that variable, you have to reassign that variable as follows:

  > var              # var is 10

  [1] 10

  > var ^ 2

  [1] 100

  > var              # var is still 10

  [1] 10

  > var <- var ^ 2    # no return value

  > var              # var is now 100

  [1] 100

Be aware that variable names may contain numbers, underscores, and periods; this is something that trips up a lot of people who are familiar with other programming languages that disallow using periods in variable names. The only further restrictions on variable names are that a name must start with a letter (or a period and then a letter), and that it must not be one of the reserved words in R, such as TRUE, Inf, and so on.
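
To make these naming rules concrete, here is a brief illustration (the variable names themselves are made up for this example):

  > my.results_2 <- 42      # periods and underscores are allowed

  > my.results_2

  [1] 42

  > 2results <- 42          # names can't begin with a number

  Error: unexpected symbol in "2results"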

Although the arithmetic operators that we've seen thus far are functions in their own right, most functions in R take the form: function_name (value(s) supplied to the function). The values supplied to the function are called arguments of that function.

  > cos(3.14159)      # cosine function

  [1] -1

  > cos(pi)          # pi is a constant that R provides

  [1] -1

  > acos(-1)          # arccosine function

  [1] 3.141593

  > acos(cos(pi)) + 10

  [1] 13.14159

  > # functions can be used as arguments to other functions

(If you paid attention in math class, you'll know that the cosine of π is -1, and that arccosine is the inverse function of cosine.)

There are hundreds of such useful functions defined in base R, only a handful of which we will see in this book. Two sections from now, we will be building our very own functions.
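
To give a small taste of these built-in functions, here are a few more examples (chosen arbitrarily for this sketch):

  > sqrt(16)            # square root

  [1] 4

  > abs(-10)            # absolute value

  [1] 10

  > round(3.14159, 2)   # round to two decimal places

  [1] 3.14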

Before we move on from arithmetic, it will serve us well to visit some of the odd values that may result from certain operations:

  > 1 / 0

  [1] Inf

  >  0 / 0

  [1] NaN

It is common during practical usage of R to accidentally divide by zero. As you can see, this undefined operation yields an infinite value in R. Dividing zero by zero yields the value NaN, which stands for Not a Number.
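
If you need to test for these special values in your own code, base R provides predicate functions for them; for example:

  > is.infinite(1 / 0)

  [1] TRUE

  > is.nan(0 / 0)

  [1] TRUE

  > is.finite(2 + 2)

  [1] TRUE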

Logicals and characters

So far, we've only been dealing with numerics, but there are other atomic data types in R. To wit:

  > foo <- TRUE        # foo is of the logical data type

  > class(foo)        # class() tells us the type

  [1] "logical"

  > bar <- "hi!"       # bar is of the character data type

  > class(bar)

  [1] "character"

The logical data type (also called Booleans) can hold the values TRUE or FALSE or, equivalently, T or F. The familiar operators from Boolean algebra are defined for these types:

  > foo

  [1] TRUE

  > foo && TRUE                # boolean and

  [1] TRUE

  > foo && FALSE

  [1] FALSE

  > foo || FALSE                # boolean or

  [1] TRUE

  > !foo                        # negation operator

  [1] FALSE

In a Boolean expression with a logical value and a number, any number that is not 0 is interpreted as TRUE.

  > foo && 1

  [1] TRUE

  > foo && 2

  [1] TRUE

  > foo && 0

  [1] FALSE

Additionally, there are functions and operators that return logical values such as:

  > 4 < 2          # less than operator

  [1] FALSE

  > 4 >= 4          # greater than or equal to

  [1] TRUE

  > 3 == 3          # equality operator

  [1] TRUE

  > 3 != 2          # inequality operator

  [1] TRUE

Just as there are functions in R that are only defined for work on the numeric and logical data type, there are other functions that are designed to work only with the character data type, also known as strings:

  > lang.domain <- "statistics"

  > lang.domain <- toupper(lang.domain)

  > print(lang.domain)

  [1] "STATISTICS"

  > # retrieves substring from first character to fourth character

  > substr(lang.domain, 1, 4)

  [1] "STAT"

  > gsub("I", "1", lang.domain)  # substitutes every "I" for "1"

  [1] "STAT1ST1CS"

  > # combines character strings

  > paste("R does", lang.domain, "!!!")

  [1] "R does STATISTICS !!!"

Flow of control

The last topic in this section will be flow of control constructs.

The most basic flow of control construct is the if statement. The argument to an if statement (what goes between the parentheses) is an expression that returns a logical value. The block of code following the if statement gets executed only if the expression yields TRUE. For example:

  > if(2 + 2 == 4)

  +    print("very good")

  [1] "very good"

  > if(2 + 2 == 5)

  +    print("all hail to the thief")

  >

It is possible to execute more than one statement if an if condition is triggered; you just have to use curly brackets ({}) to contain the statements.

  > if((4/2==2) && (2*2==4)){

  +    print("four divided by two is two...")

  +    print("and two times two is four")

  + }

  [1] "four divided by two is two..."

  [1] "and two times two is four"

  >

It is also possible to specify a block of code that will get executed if the if conditional is FALSE.

  > closing.time <- TRUE

  > if(closing.time){

  +    print("you don't have to go home")

  +    print("but you can't stay here")

  + } else{

  +    print("you can stay here!")

  + }

  [1] "you don't have to go home"

  [1] "but you can't stay here"

  > if(!closing.time){

  +    print("you don't have to go home")

  +    print("but you can't stay here")

  + } else{

  +    print("you can stay here!")

  + }

  [1] "you can stay here!"

  >

There are other flow of control constructs (like while and for), but we won't directly be using them much in this text.
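
For the curious, here is what a simple for loop looks like (a minimal sketch; again, we won't lean on this construct much):

  > for(i in 1:3){

  +    print(i)

  + }

  [1] 1

  [1] 2

  [1] 3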

Getting help in R

Before we go further, it would serve us well to have a brief section detailing how to get help in R.
