An Introduction to R
Notes on R: A Programming Environment for Data Analysis and Graphics
Version 2.15.3 (2013-03-01)
W. N. Venables, D. M. Smith and the R Core Team
Copyright © 1990 W. N. Venables
Copyright © 1992 W. N. Venables & D. M. Smith
Copyright © 1997 R. Gentleman & R. Ihaka
Copyright © 1997, 1998 M. Maechler
Copyright © 1997 R Core Team
Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the R Core Team.

ISBN 3-900051-12-7
Table of Contents
Preface ..... 1

1 Introduction and preliminaries ..... 2
  1.1 The R environment ..... 2
  1.2 Related software and documentation ..... 2
  1.3 R and statistics ..... 2
  1.4 R and the window system ..... 3
  1.5 Using R interactively ..... 3
  1.6 An introductory session ..... 4
  1.7 Getting help with functions and features ..... 4
  1.8 R commands, case sensitivity, etc. ..... 5
  1.9 Recall and correction of previous commands ..... 5
  1.10 Executing commands from or diverting output to a file ..... 6
  1.11 Data permanency and removing objects ..... 6
8 Probability distributions ..... 35
  8.1 R as a set of statistical tables ..... 35
  8.2 Examining the distribution of a set of data ..... 36
  8.3 One- and two-sample tests ..... 39
10 Writing your own functions ..... 44
  10.1 Simple examples
  10.2 Defining new binary operators
  10.3 Named arguments and defaults
  10.4 The ... argument
  10.5 Assignments within functions
  10.6 More advanced examples
    10.6.1 Efficiency factors in block designs
    10.6.2 Dropping all names in a printed array
    10.6.3 Recursive numerical integration
  10.7 Scope
  10.8 Customizing the environment
  10.9 Classes, generic functions and object orientation
11 Statistical models in R ..... 54
  11.1 Defining statistical models; formulae ..... 54
    11.1.1 Contrasts ..... 56
  11.2 Linear models ..... 57
  11.3 Generic functions for extracting model information ..... 57
  11.4 Analysis of variance and model comparison ..... 58
    11.4.1 ANOVA tables ..... 59
  11.5 Updating fitted models ..... 59
  11.6 Generalized linear models ..... 60
    11.6.1 Families ..... 60
    11.6.2 The glm() function ..... 61
  11.7 Nonlinear least squares and maximum likelihood models ..... 63
    11.7.1 Least squares ..... 64
    11.7.2 Maximum likelihood ..... 65
  11.8 Some non-standard models ..... 65
12 Graphical procedures ..... 67
  12.1 High-level plotting commands ..... 67
    12.1.1 The plot() function ..... 67
    12.1.2 Displaying multivariate data ..... 68
    12.1.3 Display graphics ..... 68
    12.1.4 Arguments to high-level plotting functions ..... 69
  12.2 Low-level plotting commands ..... 70
    12.2.1 Mathematical annotation ..... 71
    12.2.2 Hershey vector fonts ..... 72
  12.3 Interacting with graphics ..... 72
  12.4 Using graphics parameters ..... 73
    12.4.1 Permanent changes: The par() function ..... 73
    12.4.2 Temporary changes: Arguments to graphics functions ..... 74
  12.5 Graphics parameters list ..... 74
    12.5.1 Graphical elements ..... 75
    12.5.2 Axes and tick marks ..... 76
    12.5.3 Figure margins ..... 76
    12.5.4 Multiple figure environment ..... 78
  12.6 Device drivers ..... 79
    12.6.1 PostScript diagrams for typeset documents ..... 80
    12.6.2 Multiple graphics devices ..... 80
  12.7 Dynamic graphics ..... 81
13 Packages ..... 82
  13.1 Standard packages ..... 82
  13.2 Contributed packages and CRAN ..... 82
  13.3 Namespaces ..... 83
Appendix A  A sample session
Appendix B  Invoking R
  B.1 Invoking R from the command line
  B.2 Invoking R under Windows
  B.3 Invoking R under Mac OS X
  B.4 Scripting with R
Appendix C  The command-line editor
  C.1 Preliminaries
  C.2 Editing actions
  C.3 Command-line editor summary
Preface
This introduction to R is derived from an original set of notes describing the S and S-Plus environments written in 1990–2 by Bill Venables and David M. Smith when at the University of Adelaide. We have made a number of small changes to reflect differences between the R and S programs, and expanded some of the material.

We would like to extend warm thanks to Bill Venables (and David Smith) for granting permission to distribute this modified version of the notes in this way, and for being a supporter of R from way back.

Comments and corrections are always welcome. Please address email correspondence to [email protected].
packages supplied with R (called standard and recommended packages) and many more are available through the CRAN family of Internet sites (via http://CRAN.R-project.org) and elsewhere. More details on packages are given later (see Chapter 13 [Packages], page 82). Most classical statistics and much of the latest methodology is available for use with R, but users may need to be prepared to do a little work to find it. There is an important difference in philosophy between S (and hence R) and the other main statistical systems. In S a statistical analysis is normally done as a series of steps, with intermediate results being stored in objects. Thus whereas SAS and SPSS will give copious output from a regression or discriminant analysis, R will give minimal output and store the results in a fit object for subsequent interrogation by further R functions.
do) to save the data before quitting, quit without saving, or return to the R session. Data which is saved will be available in future R sessions.

Further R sessions are simple.
1. Make work the working directory and start the program as before:
   $ cd work
   $ R
2. Use the R program, terminating with the q() command at the end of the session.

To use R under Windows the procedure to follow is basically the same. Create a folder as the working directory, and set that in the Start In field in your R shortcut. Then launch R by double clicking on the icon.
> example(topic)
Windows versions of R have other optional help systems: use
> ?help
for further details.
For portable R code (including that to be used in R packages) only A–Za–z0–9 should be used.
not inside strings, nor within the argument list of a function definition
some of the consoles will not allow you to enter more, and amongst those which do some will silently discard the excess and some will use it as the start of the next line.
Alternatively, the Emacs text editor provides more general support mechanisms (via ESS, Emacs Speaks Statistics) for working interactively with R. See Section "R and Emacs" in "The R statistical system FAQ".
of unlimited length. The leading dot in this file name makes it invisible in normal file listings in UNIX.
With other than vector types of argument, such as list mode arguments, the action of c() is rather different. See Section 6.2.1 [Concatenating lists], page 29. Actually, it is still available as .Last.value before any other statements are executed.
The function seq() is a more general facility for generating sequences. It has five arguments, only some of which may be specified in any one call. The first two arguments, if given, specify the beginning and end of the sequence, and if these are the only two arguments given the result is the same as the colon operator. That is seq(2,10) is the same vector as 2:10. Parameters to seq(), and to many other R functions, can also be given in named form, in which case the order in which they appear is irrelevant. The first two parameters may be named from=value and to=value ; thus seq(1,30), seq(from=1, to=30) and seq(to=30, from=1) are all the same as 1:30. The next two parameters to seq() may be named by=value and length=value , which specify a step size and a length for the sequence respectively. If neither of these is given, the default by=1 is assumed. For example > seq(-5, 5, by=.2) -> s3 generates in s3 the vector c(-5.0, -4.8, -4.6, ..., 4.6, 4.8, 5.0). Similarly > s4 <- seq(length=51, from=-5, by=.2) generates the same vector in s4. The fifth parameter may be named along=vector , which if used must be the only parameter, and creates a sequence 1, 2, ..., length(vector ), or the empty sequence if the vector is empty (as it can be). A related function is rep() which can be used for replicating an object in various complicated ways. The simplest form is > s5 <- rep(x, times=5) which will put five copies of x end-to-end in s5. Another useful version is > s6 <- rep(x, each=5) which repeats each element of x five times before moving on to the next.
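As a small illustrative sketch (the vector x here is just a hypothetical example), the along= form of seq() and the two rep() variants behave as follows:
> x <- c(2, 5, 9)
> seq(along=x)          # 1 2 3, one index for each element of x
> rep(x, times=2)       # 2 5 9 2 5 9
> rep(x, each=2)        # 2 2 5 5 9 9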
> labs <- paste(c("X","Y"), 1:10, sep="")
makes labs into the character vector
c("X1", "Y2", "X3", "Y4", "X5", "Y6", "X7", "Y8", "X9", "Y10")
Note particularly that recycling of short lists takes place here too; thus c("X", "Y") is repeated 5 times to match the sequence 1:10.
paste(..., collapse=ss ) joins the arguments into a single character string putting ss in between. There are more tools for character manipulation, see the help for sub and substring.
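For example (a small sketch, not from the original text):
> paste(c("X","Y"), 1:4, sep="")                   # "X1" "Y2" "X3" "Y4"
> paste(c("X","Y"), 1:4, sep="", collapse=", ")    # the single string "X1, Y2, X3, Y4"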
> fruit <- c(5, 10, 1, 20) > names(fruit) <- c("orange", "banana", "apple", "peach") > lunch <- fruit[c("apple","orange")] The advantage is that alphanumeric names are often easier to remember than numeric indices. This option is particularly useful in connection with data frames, as we shall see later. An indexed expression can also appear on the receiving end of an assignment, in which case the assignment operation is performed only on those elements of the vector. The expression must be of the form vector[index_vector ] as having an arbitrary expression in place of the vector name does not make much sense here. The vector assigned must match the length of the index vector, and in the case of a logical index vector it must again be the same length as the vector it is indexing. For example > x[is.na(x)] <- 0 replaces any missing values in x by zeros and > y[y < 0] <- -y[y < 0] has the same effect as > y <- abs(y)
numeric mode is actually an amalgam of two distinct modes, namely integer and double precision, as explained in the manual. Note however that length(object ) does not always contain intrinsic useful information, e.g., when object is a function.
> d <- as.integer(digits)
Now d and z are the same. There is a large collection of functions of the form as.something() for either coercion from one mode to another, or for investing an object with some other attribute it may not already possess. The reader should consult the different help files to become familiar with them.
In general, coercion from numeric to character and back again will not be exactly reversible, because of roundoff errors in the character representation.
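A small sketch of the point made in this note:
> z <- 1/3
> zz <- as.numeric(as.character(z))
> zz == z       # FALSE: only about 15 significant digits survive the round trip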
Notice that in the case of a character vector, sorted means sorted in alphabetical order. A factor is similarly created using the factor() function: > statef <- factor(state) The print() function handles factors slightly differently from other objects: > statef [1] tas sa qld nsw nsw nt wa wa qld vic nsw vic qld qld sa [16] tas sa nt wa vic qld nsw nsw wa sa act nsw vic vic act Levels: act nsw nt qld sa tas vic wa To find out the levels of a factor the function levels() can be used. > levels(statef) [1] "act" "nsw" "nt" "qld" "sa" "tas" "vic" "wa"
Readers should note that there are eight states and territories in Australia, namely the Australian Capital Territory, New South Wales, the Northern Territory, Queensland, South Australia, Tasmania, Victoria and Western Australia.
The function tapply() is used to apply a function, here mean(), to each group of components of the first argument, here incomes, defined by the levels of the second component, here statef, as if they were separate vector structures. The result is a structure of the same length as the levels attribute of the factor containing the results. The reader should consult the help document for more details.

Suppose further we needed to calculate the standard errors of the state income means. To do this we need to write an R function to calculate the standard error for any given vector. Since there is a builtin function var() to calculate the sample variance, such a function is a very simple one-liner, specified by the assignment:
> stderr <- function(x) sqrt(var(x)/length(x))
(Writing functions will be considered later in Chapter 10 [Writing your own functions], page 44, and in this case was unnecessary as R also has a builtin function sd().) After this assignment, the standard errors are calculated by
> incster <- tapply(incomes, statef, stderr)
and the values calculated are then
> incster
   act    nsw   nt    qld     sa  tas   vic     wa
   1.5 4.3102  4.5 4.1061 2.7386  0.5 5.244 2.6575
As an exercise you may care to find the usual 95% confidence limits for the state mean incomes. To do this you could use tapply() once more with the length() function to find the sample sizes, and the qt() function to find the percentage points of the appropriate t-distributions. (You could also investigate R's facilities for t-tests.)

The function tapply() can also be used to handle more complicated indexing of a vector by multiple categories. For example, we might wish to split the tax accountants by both state and sex. However in this simple instance (just one factor) what happens can be thought of as follows. The values in the vector are collected into groups corresponding to the distinct entries in the factor. The function is then applied to each of these groups individually. The value is a vector of function results, labelled by the levels attribute of the factor.

The combination of a vector and a labelling factor is an example of what is sometimes called a ragged array, since the subclass sizes are possibly irregular. When the subclass sizes are all the same the indexing may be done implicitly and much more efficiently, as we see in the next section.
Note that tapply() also works in this case when its second argument is not a factor, e.g., tapply(incomes, state), and this is true for quite a few other functions, since arguments are coerced to factors when necessary (using as.factor()).
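One way to carry out the exercise suggested above, using the objects incomes, statef and incster already defined (a sketch only, not the only approach):
> ns  <- tapply(incomes, statef, length)        # state sample sizes
> cis <- qt(0.975, ns - 1) * incster            # half-widths of the 95% intervals
> rbind(lower = tapply(incomes, statef, mean) - cis,
        upper = tapply(incomes, statef, mean) + cis)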
and unordered factors is that the former are printed showing the ordering of the levels, but the contrasts generated for them in fitting linear models are different.
> Xb[ib] <- 1 > Xv[iv] <- 1 > X <- cbind(Xb, Xv) To construct the incidence matrix, N say, we could use > N <- crossprod(Xb, Xv) However a simpler direct way of producing this matrix is to use table(): > N <- table(blocks, varieties) Index matrices must be numerical: any other form of matrix (e.g. a logical or character matrix) supplied as a matrix is treated as an indexing vector.
Any short vector operands are extended by recycling their values until they match the size of any other operands. As long as short vectors and arrays only are encountered, the arrays must all have the same dim attribute or an error results. Any vector operand longer than a matrix or array operand generates an error. If array structures are present and no error or coercion to vector has been precipitated, the result is an array structure with the common dim attribute of its array operands.
Note that x %*% x is ambiguous, as it could mean either x^T x or x x^T, where x is the column form. In such cases the smaller matrix seems implicitly to be the interpretation adopted, so the scalar x^T x is in this case the result. The matrix x x^T may be calculated either by cbind(x) %*% x or x %*% rbind(x) since the result of rbind() or cbind() is always a matrix. However, the best way to compute x^T x or x x^T is crossprod(x) or x %o% x respectively.
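For instance (a small sketch):
> x <- 1:3
> crossprod(x)          # the 1 x 1 matrix containing x^T x (here 14)
> x %o% x               # the 3 x 3 outer product x x^T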
diag(M), where M is a matrix, gives the vector of main diagonal entries of M. This is the same convention as that used for diag() in Matlab. Also, somewhat confusingly, if k is a single numeric value then diag(k) is the k by k identity matrix!
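A brief sketch of these conventions (the vector case, which diag() also accepts, is included for completeness):
> M <- matrix(1:9, nrow=3)
> diag(M)               # main diagonal of M: 1 5 9
> diag(c(2, 3))         # a 2 x 2 diagonal matrix with 2 and 3 on the diagonal
> diag(3)               # the 3 x 3 identity matrix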
Even better would be to form a matrix square root B with A = BB T and find the squared length of the solution of By = x, perhaps using the Cholesky or eigendecomposition of A.
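A sketch of that suggestion, assuming A is symmetric positive definite (the function name quad.form is just illustrative):
> quad.form <- function(A, x) {
    B <- t(chol(A))             # lower-triangular B with A = B %*% t(B)
    y <- forwardsolve(B, x)     # solve B y = x
    sum(y^2)                    # the squared length of y, i.e. x^T A^{-1} x
  }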
If M is in fact square, then, it is not hard to see that
> absdetM <- prod(svd(M)$d)
calculates the absolute value of the determinant of M. If this calculation were needed often with a variety of matrices it could be defined as an R function
> absdet <- function(M) prod(svd(M)$d)
after which we could use absdet() as just another R function. As a further trivial but potentially useful example, you might like to consider writing a function, say tr(), to calculate the trace of a square matrix. [Hint: You will not need to use an explicit loop. Look again at the diag() function.]

R has a builtin function det to calculate a determinant, including the sign, and another, determinant, to give the sign and modulus (optionally on log scale).
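One possible answer to that exercise (a one-line sketch):
> tr <- function(M) sum(diag(M))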
These compute the orthogonal projection of y onto the range of X in fit, the projection onto the orthogonal complement in res and the coefficient vector for the projection in b, that is, b is essentially the result of the Matlab backslash operator. It is not assumed that X has full column rank. Redundancies will be discovered and removed as they are found. This alternative is the older, low-level way to perform least squares calculations. Although still useful in some contexts, it would now generally be replaced by the statistical models features, as will be discussed in Chapter 11 [Statistical models in R], page 54.
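A sketch of the calculation being described (the original listing precedes this passage in the full manual; X and y are assumed to be the model matrix and the response vector):
> Xplus <- qr(X)                # QR decomposition of X
> b     <- qr.coef(Xplus, y)    # coefficient vector of the projection
> fit   <- qr.fitted(Xplus, y)  # orthogonal projection of y onto the range of X
> res   <- qr.resid(Xplus, y)   # projection onto the orthogonal complement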
the arguments to cbind() must be either vectors of any length, or matrices with the same column size, that is the same number of rows. The result is a matrix with the concatenated arguments arg_1, arg_2, ... forming the columns.

If some of the arguments to cbind() are vectors they may be shorter than the column size of any matrices present, in which case they are cyclically extended to match the matrix column size (or the length of the longest vector if no matrices are given).

The function rbind() does the corresponding operation for rows. In this case any vector arguments, possibly cyclically extended, are of course taken as row vectors.

Suppose X1 and X2 have the same number of rows. To combine these by columns into a matrix X, together with an initial column of 1s we can use
> X <- cbind(1, X1, X2)
The result of rbind() or cbind() always has matrix status. Hence cbind(x) and rbind(x) are possibly the simplest ways explicitly to allow the vector x to be treated as a column or row matrix respectively.
> factor(cut(incomes, breaks = 35+10*(0:7))) -> incomef Then to calculate a two-way table of frequencies: > table(incomef,statef) statef incomef act nsw nt qld sa tas vic wa (35,45] 1 1 0 1 0 0 1 0 (45,55] 1 1 1 1 2 0 1 3 (55,65] 0 3 1 3 2 2 2 1 (65,75] 0 1 0 0 0 0 1 0 Extension to higher-way frequency tables is immediate.
The simplest way to construct a data frame from scratch is to use the read.table() function to read an entire data frame from an external file. This is discussed further in Chapter 7 [Reading data from files], page 32.
finally remove all unwanted variables from the working directory and keep it as clean of left-over temporary variables as possible. In this way it is quite simple to work with many problems in the same directory, all of which have variables named x, y and z, for example.
See the on-line help for autoload for the meaning of the second term.
Input file form with names and row labels:

         Price  Floor  Area  Rooms  Age  Cent.heat
    01   52.00  111.0   830      5  6.2        no
    02   54.75  128.0   710      5  7.5        no
    03   57.50  101.0  1000      5  4.2        no
    04   57.50  131.0   690      6  8.8        no
    05   59.75   93.0   900      5  1.9        yes
    ...
By default numeric items (except row labels) are read as numeric variables and nonnumeric variables, such as Cent.heat in the example, as factors. This can be changed if necessary. The function read.table() can then be used to read the data frame directly > HousePrice <- read.table("houses.data") Often you will want to omit including the row labels directly and use the default labels. In this case the file may omit the row label column as in the following.
Input file form without row labels:

    Price  Floor  Area  Rooms  Age  Cent.heat
    52.00  111.0   830      5  6.2        no
    54.75  128.0   710      5  7.5        no
    57.50  101.0  1000      5  4.2        no
    57.50  131.0   690      6  8.8        no
    59.75   93.0   900      5  1.9        yes
    ...
The data frame may then be read as > HousePrice <- read.table("houses.data", header=TRUE) where the header=TRUE option specifies that the first line is a line of headings, and hence, by implication from the form of the file, that no explicit row labels are given.
data()
As from R version 2.0.0 all the datasets supplied with R are available directly by name. However, many packages still use the earlier convention in which data was also used to load datasets into R, for example
data(infert)
and this can still be used with the standard packages (as in this example). In most cases this will load an R object of the same name. However, in a few cases it loads several objects, so see the on-line help for the object to see what to expect.
8 Probability distributions
8.1 R as a set of statistical tables
One convenient use of R is to provide a comprehensive set of statistical tables. Functions are provided to evaluate the cumulative distribution function P(X ≤ x), the probability density function and the quantile function (given q, the smallest x such that P(X ≤ x) > q), and to simulate from the distribution.

    Distribution         R name      additional arguments
    beta                 beta        shape1, shape2, ncp
    binomial             binom       size, prob
    Cauchy               cauchy      location, scale
    chi-squared          chisq       df, ncp
    exponential          exp         rate
    F                    f           df1, df2, ncp
    gamma                gamma       shape, scale
    geometric            geom        prob
    hypergeometric       hyper       m, n, k
    log-normal           lnorm       meanlog, sdlog
    logistic             logis       location, scale
    negative binomial    nbinom      size, prob
    normal               norm        mean, sd
    Poisson              pois        lambda
    signed rank          signrank    n
    Student's t          t           df, ncp
    uniform              unif        min, max
    Weibull              weibull     shape, scale
    Wilcoxon             wilcox      m, n
Prefix the name given here by d for the density, p for the CDF, q for the quantile function and r for simulation (random deviates). The first argument is x for dxxx, q for pxxx, p for qxxx and n for rxxx (except for rhyper, rsignrank and rwilcox, for which it is nn). In not quite all cases is the non-centrality parameter ncp currently available: see the on-line help for details.

The pxxx and qxxx functions all have logical arguments lower.tail and log.p and the dxxx ones have log. This allows, e.g., getting the cumulative (or "integrated") hazard function, H(t) = -log(1 - F(t)), by
    -pxxx(t, ..., lower.tail = FALSE, log.p = TRUE)
or more accurate log-likelihoods (by dxxx(..., log = TRUE)), directly.

In addition there are functions ptukey and qtukey for the distribution of the studentized range of samples from a normal distribution, and dmultinom and rmultinom for the multinomial distribution. Further distributions are available in contributed packages, notably SuppDists. Here are some examples
## 2-tailed p-value for t distribution
2*pt(-2.43, df = 13)
## upper 1% point for an F(2, 7) distribution
qf(0.01, 2, 7, lower.tail = FALSE)
See the on-line help on RNG for how random-number generation is done in R.
[Output of summary(eruptions) and the stem-and-leaf display produced by stem(eruptions): eruption times range from about 1.6 to 5.1 minutes, the decimal point is 1 digit to the left of the |, and the display shows two clear modes.]
A stem-and-leaf plot is like a histogram, and R has a function hist to plot histograms.
> hist(eruptions)
## make the bins smaller, make a plot of density
> hist(eruptions, seq(1.6, 5.2, 0.2), prob=TRUE)
> lines(density(eruptions, bw=0.1))
> rug(eruptions)            # show the actual data points
More elegant density plots can be made by density, and we added a line produced by density in this example. The bandwidth bw was chosen by trial-and-error as the default gives too much smoothing (it usually does for interesting densities). (Better automated methods of bandwidth choice are available, and in this example bw = "SJ" gives a good result.)
[Figure: "Histogram of eruptions", relative frequency (0.0 to 0.7) against eruption time (1.5 to 5.0), with the density estimate and rug overlaid.]
We can plot the empirical cumulative distribution function by using the function ecdf.
> plot(ecdf(eruptions), do.points=FALSE, verticals=TRUE)
This distribution is obviously far from any standard distribution. How about the right-hand mode, say eruptions of longer than 3 minutes? Let us fit a normal distribution and overlay the fitted CDF.
> long <- eruptions[eruptions > 3]
> plot(ecdf(long), do.points=FALSE, verticals=TRUE)
> x <- seq(3, 5.4, 0.01)
> lines(x, pnorm(x, mean=mean(long), sd=sqrt(var(long))), lty=3)
[Figure: "ecdf(long)", the empirical CDF Fn(x) of the long eruptions (x from 3.0 to 5.0) with the fitted normal CDF overlaid as a dotted line.]
Quantile-quantile (Q-Q) plots can help us examine this more carefully.
    par(pty="s")       # arrange for a square figure region
    qqnorm(long); qqline(long)
which shows a reasonable fit but a shorter right tail than one would expect from a normal distribution. Let us compare this with some simulated data from a t distribution
[Figure: "Normal Q-Q Plot" of long, Sample Quantiles (about 3.0 to 5.0) against Theoretical Quantiles.]
    x <- rt(250, df = 5)
    qqnorm(x); qqline(x)
which will usually (if it is a random sample) show longer tails than expected for a normal. We can make a Q-Q plot against the generating distribution by
    qqplot(qt(ppoints(250), df = 5), x, xlab = "Q-Q plot for t dsn")
    qqline(x)
Finally, we might want a more formal test of agreement with normality (or not). R provides the Shapiro-Wilk test
> shapiro.test(long)

        Shapiro-Wilk normality test
data:  long
W = 0.9793, p-value = 0.01052

and the Kolmogorov-Smirnov test
> ks.test(long, "pnorm", mean = mean(long), sd = sqrt(var(long)))

        One-sample Kolmogorov-Smirnov test
data:  long
D = 0.0661, p-value = 0.4284
alternative hypothesis: two.sided

(Note that the distribution theory is not valid here as we have estimated the parameters of the normal distribution from the same sample.)
[Figure: a pair of boxplots comparing the two samples A and B on a common scale from 79.94 to 80.04.]
To test for the equality of the means of the two examples, we can use an unpaired t-test by > t.test(A, B) Welch Two Sample t-test data: A and B t = 3.2499, df = 12.027, p-value = 0.00694 alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval: 0.01385526 0.07018320 sample estimates: mean of x mean of y 80.02077 79.97875 which does indicate a significant difference, assuming normality. By default the R function does not assume equality of variances in the two samples (in contrast to the similar S-Plus t.test function). We can use the F test to test for equality in the variances, provided that the two samples are from normal populations. > var.test(A, B) F test to compare two variances data: A and B F = 0.5837, num df = 12, denom df = 7, p-value = 0.3938 alternative hypothesis: true ratio of variances is not equal to 1 95 percent confidence interval: 0.1251097 2.1052687 sample estimates: ratio of variances 0.5837405 which shows no evidence of a significant difference, and so we can use the classical t-test that assumes equality of the variances. > t.test(A, B, var.equal=TRUE) Two Sample t-test data: A and B t = 3.4722, df = 19, p-value = 0.002551 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 0.01669058 0.06734788 sample estimates: mean of x mean of y 80.02077 79.97875 All these tests assume normality of the two samples. The two-sample Wilcoxon (or Mann-Whitney) test only assumes a common continuous distribution under the null hypothesis. > wilcox.test(A, B) Wilcoxon rank sum test with continuity correction data: A and B W = 89, p-value = 0.007497 alternative hypothesis: true location shift is not equal to 0
Warning message: Cannot compute exact p-value with ties in: wilcox.test(A, B) Note the warning: there are several ties in each sample, which suggests strongly that these data are from a discrete distribution (probably due to rounding). There are several ways to compare graphically the two samples. We have already seen a pair of boxplots. The following > plot(ecdf(A), do.points=FALSE, verticals=TRUE, xlim=range(A, B)) > plot(ecdf(B), do.points=FALSE, verticals=TRUE, add=TRUE) will show the two empirical CDFs, and qqplot will perform a Q-Q plot of the two samples. The Kolmogorov-Smirnov test is of the maximal vertical distance between the two ecdfs, assuming a common continuous distribution: > ks.test(A, B) Two-sample Kolmogorov-Smirnov test data: A and B D = 0.5962, p-value = 0.05919 alternative hypothesis: two-sided Warning message: cannot compute correct p-values with ties in: ks.test(A, B)
abline(lsfit(xc[[i]], yc[[i]])) } (Note the function split() which produces a list of vectors obtained by splitting a larger vector according to the classes specified by a factor. This is a useful function, mostly used in connection with boxplots. See the help facility for further details.) Warning: for() loops are used in R code much less often than in compiled languages. Code that takes a whole object view is likely to be both clearer and faster in R. Other looping facilities include the > repeat expr statement and the > while (condition ) expr statement. The break statement can be used to terminate any loop, possibly abnormally. This is the only way to terminate repeat loops. The next statement can be used to discontinue one particular cycle and skip to the next. Control statements are most often used in connection with functions which are discussed in Chapter 10 [Writing your own functions], page 44, and where more examples will emerge.
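As a small sketch of repeat and break (a hypothetical Newton iteration for the square root of 2, not taken from the text):
> x <- 10
> repeat {
    x.new <- (x + 2/x)/2                 # one Newton step
    if (abs(x.new - x) < 1e-8) break     # break is the only way out of repeat
    x <- x.new
  }
> x.new                                  # approximately 1.414214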
}
After this object is created it may be used in statements such as
> regcoeff <- bslash(Xmat, yvar)
and so on.

The classical R function lsfit() does this job quite well, and more. It in turn uses the functions qr() and qr.coef() in the slightly counterintuitive way above to do this part of the calculation. Hence there is probably some value in having just this part isolated in a simple-to-use function if it is going to be in frequent use. If so, we may wish to make it a matrix binary operator for even more convenient use, as sketched below.
See also the methods described in Chapter 11 [Statistical models in R], page 54
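A sketch of how this might be done, using a hypothetical operator name %!% (any name of the form %anything% would do), and assuming bslash() computes the coefficients via qr() and qr.coef() as described:
> "%!%" <- function(X, y) qr.coef(qr(X), y)
after which the call above could be written as
> regcoeff <- Xmat %!% yvar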
> ans <- fun1(d, df) which is now equivalent to the three cases above, or as > ans <- fun1(d, df, limit=10) which changes one of the defaults. It is important to note that defaults may be arbitrary expressions, even involving other arguments to the same function; they are not restricted to be constants as in our simple example here.
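A small illustrative sketch (fun2 is a hypothetical function, not from the text): a default may be an expression involving another argument of the same function.
> fun2 <- function(x, centre = mean(x), scale = sd(x)) (x - centre)/scale
> fun2(1:10)                # uses both defaults
> fun2(1:10, centre = 0)    # overrides just one of them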
N is the b by v incidence matrix, then the efficiency factors are defined as the eigenvalues of the matrix

    E = I_v - R^(-1/2) N^T K^(-1) N R^(-1/2) = I_v - A^T A,   where   A = K^(-1/2) N R^(-1/2).

One way to write the function is given below.
> bdeff <- function(blocks, varieties) {
    blocks <- as.factor(blocks)            # minor safety move
    b <- length(levels(blocks))
    varieties <- as.factor(varieties)      # minor safety move
    v <- length(levels(varieties))
    K <- as.vector(table(blocks))          # remove dim attr
    R <- as.vector(table(varieties))       # remove dim attr
    N <- table(blocks, varieties)
    A <- 1/sqrt(K) * N * rep(1/sqrt(R), rep(b, v))
    sv <- svd(A)
    list(eff=1 - sv$d^2, blockcv=sv$u, varietycv=sv$v)
  }
It is numerically slightly better to work with the singular value decomposition on this occasion rather than the eigenvalue routines. The result of the function is a list giving not only the efficiency factors as the first component, but also the block and variety canonical contrasts, since sometimes these give additional useful qualitative information.
> no.dimnames(X) This is particularly useful for large integer arrays, where patterns are the real interest rather than the values.
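The definition of no.dimnames() appears just before this passage in the full manual; one way such a function can be written is sketched here:
> no.dimnames <- function(a) {
    ## replace every dimnames component with empty strings, so the
    ## array prints without row or column labels
    d <- list()
    l <- 0
    for(i in dim(a)) {
      d[[l <- l + 1]] <- rep("", i)
    }
    dimnames(a) <- d
    a
  }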
10.7 Scope
The discussion in this section is somewhat more technical than in other parts of this document. However, it details one of the major differences between S-Plus and R. The symbols which occur in the body of a function can be divided into three classes; formal parameters, local variables and free variables. The formal parameters of a function are those occurring in the argument list of the function. Their values are determined by the process of binding the actual function arguments to the formal parameters. Local
variables are those whose values are determined by the evaluation of expressions in the body of the functions. Variables which are not formal parameters or local variables are called free variables. Free variables become local variables if they are assigned to. Consider the following function definition. f <- function(x) { y <- 2*x print(x) print(y) print(z) } In this function, x is a formal parameter, y is a local variable and z is a free variable. In R the free variable bindings are resolved by first looking in the environment in which the function was created. This is called lexical scope. First we define a function called cube. cube <- function(n) { sq <- function() n*n n*sq() } The variable n in the function sq is not an argument to that function. Therefore it is a free variable and the scoping rules must be used to ascertain the value that is to be associated with it. Under static scope (S-Plus) the value is that associated with a global variable named n. Under lexical scope (R) it is the parameter to the function cube since that is the active binding for the variable n at the time the function sq was defined. The difference between evaluation in R and evaluation in S-Plus is that S-Plus looks for a global variable called n while R first looks for a variable called n in the environment created when cube was invoked. ## first evaluation in S S> cube(2) Error in sq(): Object "n" not found Dumped S> n <- 3 S> cube(2) [1] 18 ## then the same function evaluated in R R> cube(2) [1] 8 Lexical scope can also be used to give functions mutable state. In the following example we show how R can be used to mimic a bank account. A functioning bank account needs to have a balance or total, a function for making withdrawals, a function for making deposits and a function for stating the current balance. We achieve this by creating the three functions within account and then returning a list containing them. When account is invoked it takes a numerical argument total and returns a list containing the three functions. Because these functions are defined in an environment which contains total, they will have access to its value. The special assignment operator, <<-, is used to change the value associated with total. This operator looks back in enclosing environments for an environment that contains the
symbol total and when it finds such an environment it replaces the value, in that environment, with the value of the right hand side. If the global or top-level environment is reached without finding the symbol total then that variable is created and assigned to there. For most users <<- creates a global variable and assigns the value of the right hand side to it. Only when <<- has been used in a function that was returned as the value of another function will the special behavior described here occur.

open.account <- function(total) {
  list(
    deposit = function(amount) {
      if(amount <= 0)
        stop("Deposits must be positive!\n")
      total <<- total + amount
      cat(amount, "deposited. Your balance is", total, "\n\n")
    },
    withdraw = function(amount) {
      if(amount > total)
        stop("You don't have that much money!\n")
      total <<- total - amount
      cat(amount, "withdrawn. Your balance is", total, "\n\n")
    },
    balance = function() {
      cat("Your balance is", total, "\n\n")
    }
  )
}

ross <- open.account(100)
robert <- open.account(200)

ross$withdraw(30)
ross$balance()
robert$balance()

ross$deposit(50)
ross$balance()
ross$withdraw(500)
In some sense this mimics the behavior in S-Plus since in S-Plus this operator always creates or assigns to a global variable.
subdirectory etc is used. This file should contain the commands that you want to execute every time R is started under your system. A second, personal, profile file named .Rprofile can be placed in any directory. If R is invoked in that directory then that file will be sourced. This file gives individual users control over their workspace and allows for different startup procedures in different working directories. If no .Rprofile file is found in the startup directory, then R looks for a .Rprofile file in the user's home directory and uses that (if it exists). If the environment variable R_PROFILE_USER is set, the file it points to is used instead of the .Rprofile files.

Any function named .First() in either of the two profile files or in the .RData image has a special status. It is automatically performed at the beginning of an R session and may be used to initialize the environment. For example, the definition in the example below alters the prompt to $ and sets up various other useful things that can then be taken for granted in the rest of the session.

Thus, the sequence in which files are executed is, Rprofile.site, the user profile, .RData and then .First(). A definition in later files will mask definitions in earlier files.
> .First <- function() {
    options(prompt="$ ", continue="+\t")   # $ is the prompt
    options(digits=5, length=999)          # custom numbers and printout
    x11()                                  # for graphics
    par(pch = "+")                         # plotting character
    source(file.path(Sys.getenv("HOME"), "R", "mystuff.R"))
                                           # my personal functions
    library(MASS)                          # attach a package
  }
Similarly a function .Last(), if defined, is (normally) executed at the very end of the session. An example is given below.
> .Last <- function() {
    graphics.off()                         # a small safety measure.
    cat(paste(date(), "\nAdios\n"))        # Is it time for lunch?
  }
The number of generic functions that can treat a class in a specific way can be quite large. For example, the functions that can accommodate in some fashion objects of class "data.frame" include
    [    [<-    [[<-    mean    any    plot    as.matrix    summary
A currently complete list can be got by using the methods() function: > methods(class="data.frame") Conversely the number of classes a generic function can handle can also be quite large. For example the plot() function has a default method and variants for objects of classes "data.frame", "density", "factor", and more. A complete list can be got again by using the methods() function: > methods(plot) For many generic functions the function body is quite short, for example > coef function (object, ...) UseMethod("coef") The presence of UseMethod indicates this is a generic function. To see what methods are available we can use methods() > methods(coef) [1] coef.aov* [5] coef.nls* coef.Arima* coef.default* coef.summary.nls* coef.listof*
Non-visible functions are asterisked In this example there are six methods, none of which can be seen by typing its name. We can read these by either of > getAnywhere("coef.aov") A single object matching coef.aov was found It was found in the following places registered S3 method for coef from namespace stats namespace:stats with value function (object, ...) { z <- object$coef z[!is.na(z)] } > getS3method("coef", "aov") function (object, ...) { z <- object$coef z[!is.na(z)] }
The reader is referred to the R Language Definition for a more complete discussion of this mechanism.
11 Statistical models in R
This section presumes the reader has some familiarity with statistical methodology, in particular with regression analysis and the analysis of variance. Later we make some rather more ambitious presumptions, namely that something is known about generalized linear models and nonlinear regression. The requirements for fitting statistical models are sufficiently well defined to make it possible to construct general tools that apply in a broad spectrum of problems. R provides an interlocking suite of facilities that make fitting statistical models very simple. As we mention in the introduction, the basic output is minimal, and one needs to ask for the details by calling extractor functions.
11.1 Defining statistical models; formulae

The template for a statistical model is a linear regression model with independent, homoscedastic errors

    y_i = Σ_{j=0..p} β_j x_{ij} + e_i,    e_i ~ NID(0, σ²),    i = 1, ..., n

In matrix terms this would be written

    y = Xβ + e

where y is the response vector, X is the model matrix or design matrix and has columns x_0, x_1, ..., x_p, the determining variables. Very often x_0 will be a column of ones defining an intercept term.
Examples
Before giving a formal specification, a few examples may usefully set the picture. Suppose y, x, x0, x1, x2, ... are numeric variables, X is a matrix and A, B, C, ... are factors. The following formulae on the left side below specify statistical models as described on the right.

y ~ x
y ~ 1 + x      Both imply the same simple linear regression model of y on x. The first has an implicit intercept term, and the second an explicit one.

y ~ 0 + x
y ~ -1 + x
y ~ x - 1      Simple linear regression of y on x through the origin (that is, without an intercept term).

log(y) ~ x1 + x2
               Multiple regression of the transformed variable, log(y), on x1 and x2 (with an implicit intercept term).
y ~ poly(x,2)
y ~ 1 + x + I(x^2)
               Polynomial regression of y on x of degree 2. The first form uses orthogonal polynomials, and the second uses explicit powers, as basis.

y ~ X + poly(x,2)
               Multiple regression y with model matrix consisting of the matrix X as well as polynomial terms in x to degree 2.

y ~ A          Single classification analysis of variance model of y, with classes determined by A.

y ~ A + x      Single classification analysis of covariance model of y, with classes determined by A, and with covariate x.

y ~ A*B
y ~ A + B + A:B
y ~ B %in% A
y ~ A/B        Two factor non-additive model of y on A and B. The first two specify the same crossed classification and the second two specify the same nested classification. In abstract terms all four specify the same model subspace.

y ~ (A + B + C)^2
y ~ A*B*C - A:B:C
               Three factor experiment but with a model containing main effects and two factor interactions only. Both formulae specify the same model.

y ~ A*x
y ~ A/x
y ~ A/(1 + x) - 1
               Separate simple linear regression models of y on x within the levels of A, with different codings. The last form produces explicit estimates of as many different intercepts and slopes as there are levels in A.

y ~ A*B + Error(C)
               An experiment with two treatment factors, A and B, and error strata determined by factor C. For example a split plot experiment, with whole plots (and hence also subplots), determined by factor C.

The operator ~ is used to define a model formula in R. The form, for an ordinary linear model, is

    response ~ op_1 term_1 op_2 term_2 op_3 term_3 ...

where

response    is a vector or matrix (or expression evaluating to a vector or matrix) defining the response variable(s).
op_i        is an operator, either + or -, implying the inclusion or exclusion of a term in the model (the first is optional).
term_i      is either a vector or matrix expression, or 1,
            a factor, or a formula expression consisting of factors, vectors or matrices connected by formula operators.

In all cases each term defines a collection of columns either to be added to or removed from the model matrix. A 1 stands for an intercept column and is by default included in the model matrix unless explicitly removed.

The formula operators are similar in effect to the Wilkinson and Rogers notation used by such programs as Glim and Genstat. One inevitable change is that the operator '.' becomes ':' since the period is a valid name character in R. The notation is summarized below (based on Chambers & Hastie, 1992, p.29):

Y ~ M          Y is modeled as M.
M_1 + M_2      Include M_1 and M_2.
M_1 - M_2      Include M_1 leaving out terms of M_2.
M_1 : M_2      The tensor product of M_1 and M_2. If both terms are factors, then the "subclasses" factor.
M_1 %in% M_2   Similar to M_1:M_2, but with a different coding.
M_1 * M_2      M_1 + M_2 + M_1:M_2.
M_1 / M_2      M_1 + M_2 %in% M_1.
M^n            All terms in M together with "interactions" up to order n.
I(M)           Insulate M. Inside M all operators have their normal arithmetic meaning, and that term appears in the model matrix.
Note that inside the parentheses that usually enclose function arguments all operators have their normal arithmetic meaning. The function I() is an identity function used to allow terms in model formulae to be defined using arithmetic operators. Note particularly that the model formulae specify the columns of the model matrix, the specification of the parameters being implicit. This is not the case in other contexts, for example in specifying nonlinear models.
11.1.1 Contrasts
We need at least some idea how the model formulae specify the columns of the model matrix. This is easy if we have continuous variables, as each provides one column of the model matrix (and the intercept will provide a column of ones if included in the model).

What about a k-level factor A? The answer differs for unordered and ordered factors. For unordered factors k - 1 columns are generated for the indicators of the second, ..., k-th levels of the factor. (Thus the implicit parameterization is to contrast the response at each level with that at the first.) For ordered factors the k - 1 columns are the orthogonal polynomials on 1, ..., k, omitting the constant term.

Although the answer is already complicated, it is not the whole story. First, if the intercept is omitted in a model that contains a factor term, the first such term is encoded into k columns giving the indicators for all the levels. Second, the whole behavior can be changed by the options setting for contrasts. The default setting in R is
options(contrasts = c("contr.treatment", "contr.poly"))
The main reason for mentioning this is that R and S have different defaults for unordered factors, S using Helmert contrasts. So if you need to compare your results to those of a textbook or paper which used S-Plus, you will need to set
options(contrasts = c("contr.helmert", "contr.poly"))
This is a deliberate difference, as treatment contrasts (R's default) are thought easier for newcomers to interpret.

We have still not finished, as the contrast scheme to be used can be set for each term in the model using the functions contrasts and C.

We have not yet considered interaction terms: these generate the products of the columns introduced for their component terms.

Although the details are complicated, model formulae in R will normally generate the models that an expert statistician would expect, provided that marginality is preserved. Fitting, for example, a model with an interaction but not the corresponding main effects will in general lead to surprising results, and is for experts only.
deviance(object)
    Residual sum of squares, weighted if appropriate.
formula(object)
    Extract the model formula.
plot(object)
    Produce four plots, showing residuals, fitted values and some diagnostics.
predict(object, newdata=data.frame)
    The data frame supplied must have variables specified with the same labels as the original. The value is a vector or matrix of predicted values corresponding to the determining variable values in data.frame.
print(object)
    Print a concise version of the object. Most often used implicitly.
residuals(object)
    Extract the (matrix of) residuals, weighted as appropriate. Short form: resid(object).
step(object)
    Select a suitable model by adding or dropping terms and preserving hierarchies. The model with the smallest value of AIC (Akaike's An Information Criterion) discovered in the stepwise search is returned.
summary(object)
    Print a comprehensive summary of the results of the regression analysis.
vcov(object)
    Returns the variance-covariance matrix of the main parameters of a fitted model object.
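A brief sketch of how these extractors are typically used (fm, y, x and the data frame production are hypothetical names, not from the text):
> fm <- lm(y ~ x, data = production)   # fit a linear model
> summary(fm)                          # comprehensive summary of the fit
> deviance(fm)                         # residual sum of squares
> vcov(fm)                             # variance-covariance matrix of the coefficients
> predict(fm, newdata = production)    # predicted values for a data frame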
where φ is a scale parameter (possibly known), and is constant for all observations, A represents a prior weight, assumed known but possibly varying with the observations, and μ is the mean of y. So it is assumed that the distribution of y is determined by its mean and possibly a scale parameter as well.

The mean, μ, is a smooth invertible function of the linear predictor:

    μ = m(η),    η = m^{-1}(μ) = ℓ(μ)

and this inverse function, ℓ(), is called the link function.

These assumptions are loose enough to encompass a wide class of models useful in statistical practice, but tight enough to allow the development of a unified methodology of estimation and inference, at least approximately. The reader is referred to any of the current reference works on the subject for full details, such as McCullagh & Nelder (1989) or Dobson (1990).
11.6.1 Families
The class of generalized linear models handled by facilities supplied in R includes gaussian, binomial, poisson, inverse gaussian and gamma response distributions and also quasi-likelihood models where the response distribution is not explicitly specified. In the latter case the variance function must be specified as a function of the mean, but in other cases this function is implied by the response distribution.

Each response distribution admits a variety of link functions to connect the mean with the linear predictor. Those automatically available are shown in the following table:

    Family name         Link functions
    binomial            logit, probit, log, cloglog
    gaussian            identity, log, inverse
    Gamma               identity, inverse, log
    inverse.gaussian    1/mu^2, identity, inverse, log
    poisson             identity, log, sqrt
    quasi               logit, probit, cloglog, identity, inverse, log, 1/mu^2, sqrt

The combination of a response distribution, a link function and various other pieces of information that are needed to carry out the modeling exercise is called the family of the generalized linear model.
The problem we consider is to fit both logistic and probit models to this data, and to estimate for each model the LD50, that is the age at which the chance of blindness for a male inhabitant is 50%.

If y is the number of blind at age x and n the number tested, both models have the form

    y ~ B(n, F(β_0 + β_1 x))

where for the probit case, F(z) = Φ(z) is the standard normal distribution function, and in the logit case (the default), F(z) = e^z / (1 + e^z). In both cases the LD50 is

    LD50 = -β_0 / β_1

that is, the point at which the argument of the distribution function is zero.

The first step is to set the data up as a data frame
> kalythos <- data.frame(x = c(20,35,45,55,70), n = rep(50,5), y = c(6,17,26,37,44))
To fit a binomial model using glm() there are three possibilities for the response:
  - If the response is a vector it is assumed to hold binary data, and so must be a 0/1 vector.
  - If the response is a two-column matrix it is assumed that the first column holds the number of successes for the trial and the second holds the number of failures.
  - If the response is a factor, its first level is taken as failure (0) and all other levels as "success" (1).
Here we need the second of these conventions, so we add a matrix to our data frame:
> kalythos$Ymat <- cbind(kalythos$y, kalythos$n - kalythos$y)
To fit the models we use
> fmp <- glm(Ymat ~ x, family = binomial(link=probit), data = kalythos)
> fml <- glm(Ymat ~ x, family = binomial, data = kalythos)
Since the logit link is the default the parameter may be omitted on the second call. To see the results of each fit we could use
> summary(fmp)
> summary(fml)
Both models fit (all too) well. To find the LD50 estimate we can use a simple function:
> ld50 <- function(b) -b[1]/b[2]
> ldp <- ld50(coef(fmp)); ldl <- ld50(coef(fml)); c(ldp, ldl)
The actual estimates from this data are 43.663 years and 43.601 years respectively.
Poisson models
With the Poisson family the default link is the log, and in practice the major use of this family is to fit surrogate Poisson log-linear models to frequency data, whose actual distribution is often multinomial. This is a large and important subject we will not discuss further here. It even forms a major part of the use of non-gaussian generalized models overall.
Occasionally genuinely Poisson data arises in practice and in the past it was often analyzed as gaussian data after either a log or a square-root transformation. As a graceful alternative to the latter, a Poisson generalized linear model may be fitted as in the following example: > fmod <- glm(y ~ A + B + x, family = poisson(link=sqrt), data = worm.counts)
Quasi-likelihood models
For all families the variance of the response will depend on the mean and will have the scale parameter as a multiplier. The form of dependence of the variance on the mean is a characteristic of the response distribution; for example for the poisson distribution Var[y] = μ.
For quasi-likelihood estimation and inference the precise response distribution is not specified, but rather only a link function and the form of the variance function as it depends on the mean. Since quasi-likelihood estimation uses formally identical techniques to those for the gaussian distribution, this family provides a way of fitting gaussian models with non-standard link functions or variance functions, incidentally.
For example, consider fitting the non-linear regression
    y = θ1 z1 / (z2 - θ2) + e
which may be written alternatively as
    y = 1 / (β1 x1 + β2 x2) + e
where x1 = z2/z1, x2 = -1/z1, β1 = 1/θ1 and β2 = θ2/θ1. Supposing a suitable data frame to be set up we could fit this non-linear regression as
> nlfit <- glm(y ~ x1 + x2 - 1, family = quasi(link=inverse, variance=constant), data = biochem)
The reader is referred to the manual and the help document for further information, as needed.
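A related and very common use of quasi-likelihood is for over-dispersed count data, where the pre-packaged quasipoisson family assumes the Poisson variance function but estimates the dispersion from the data. A sketch with simulated data (not from the original text):
> set.seed(1)
> d <- data.frame(x = runif(100))
> d$y <- rnbinom(100, mu = exp(1 + 2 * d$x), size = 1)   # over-dispersed counts
> fmq <- glm(y ~ x, family = quasipoisson(), data = d)
> summary(fmq)$dispersion                                # estimated dispersion, well above 1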
We could do better, but these starting values of 200 and 0.1 seem adequate. Now do the fit.
The standard package stats provides much more extensive facilities for fitting non-linear models by least squares. The model we have just fitted is the Michaelis-Menten model, so we can use
> df <- data.frame(x=x, y=y)
> fit <- nls(y ~ SSmicmen(x, Vm, K), df)
> fit
Nonlinear regression model
  model: y ~ SSmicmen(x, Vm, K)
   data: df
          Vm            K
212.68370711   0.06412123
 residual sum-of-squares: 1195.449
> summary(fit)
Formula: y ~ SSmicmen(x, Vm, K)
Parameters:
    Estimate Std. Error t value Pr(>|t|)
Vm 2.127e+02  6.947e+00  30.615 3.24e-11
K  6.412e-02  8.281e-03   7.743 1.57e-05
Residual standard error: 10.93 on 10 degrees of freedom
Correlation of Parameter Estimates:
      Vm
K 0.7651
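The self-starting model SSmicmen removes the need to supply starting values. For comparison, a sketch of the same fit written out explicitly, using the starting values 200 and 0.1 mentioned above (not shown this way in the original text):
> fit2 <- nls(y ~ Vm * x / (K + x), data = df, start = list(Vm = 200, K = 0.1))
> coef(fit2)                             # agrees with the SSmicmen fit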
The negative log-likelihood to minimize is:
> fn <- function(p) sum( - (y*(p[1]+p[2]*x) - n*log(1+exp(p[1]+p[2]*x)) + log(choose(n, y)) ))
We pick sensible starting values and do the fit:
> out <- nlm(fn, p = c(-50,20), hessian = TRUE)
After the fitting, out$minimum is the negative log-likelihood, and out$estimate are the maximum likelihood estimates of the parameters. To obtain the approximate SEs of the estimates we do:
> sqrt(diag(solve(out$hessian)))
A 95% confidence interval would be the parameter estimate ± 1.96 SE.
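The interval itself is not computed in the text; a sketch of how it might be assembled from the fitted object:
> se <- sqrt(diag(solve(out$hessian)))
> cbind(estimate = out$estimate,
+       lower = out$estimate - 1.96 * se,
+       upper = out$estimate + 1.96 * se)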
Local approximating regressions. The loess() function fits a nonparametric regression by using a locally weighted regression. Such regressions are useful for highlighting a trend in messy data or for data reduction to give some insight into a large data set. Function loess is in the standard package stats, together with code for projection pursuit regression.

Robust regression. There are several functions available for fitting regression models in a way resistant to the influence of extreme outliers in the data. Function lqs in the recommended package MASS provides state-of-the-art algorithms for highly-resistant fits. Less resistant but statistically more efficient methods are available in packages, for example function rlm in package MASS.

Additive models. This technique aims to construct a regression function from smooth additive functions of the determining variables, usually one for each determining variable. Functions avas and ace in package acepack and functions bruto and mars in package mda provide some examples of these techniques in user-contributed packages to R. An extension is Generalized Additive Models, implemented in user-contributed packages gam and mgcv.

Tree-based models. Rather than seek an explicit global linear model for prediction or interpretation, tree-based models seek to bifurcate the data, recursively, at critical points of the determining variables in order to partition the data ultimately into groups that are as homogeneous as possible within, and as heterogeneous as possible between. The results often lead to insights that other data analysis methods tend not to yield. Models are again specified in the ordinary linear model form. The model fitting function is tree(), but many other generic functions such as plot() and text() are well adapted to displaying the results of a tree-based model fit in a graphical way. Tree models are available in R via the user-contributed packages rpart and tree.
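As a minimal sketch of the local regression technique mentioned above, using the built-in cars data (this example is not from the original text):
> fit <- loess(dist ~ speed, data = cars)
> plot(cars)
> lines(cars$speed, fitted(fit))         # smooth trend through the scatter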
12 Graphical procedures
Graphical facilities are an important and extremely versatile component of the R environment. It is possible to use the facilities to display a wide variety of statistical graphs and also to build entirely new types of graph. The graphics facilities can be used in both interactive and batch modes, but in most cases interactive use is more productive. Interactive use is also easy because at startup time R initiates a graphics device driver which opens a special graphics window for the display of interactive graphics. Although this is done automatically, it is useful to know that the command used is X11() under UNIX, windows() under Windows and quartz() under Mac OS X. Once the device driver is running, R plotting commands can be used to produce a variety of graphical displays and to create entirely new kinds of display.
Plotting commands are divided into three basic groups:
High-level plotting functions create a new plot on the graphics device, possibly with axes, labels, titles and so on.
Low-level plotting functions add more information to an existing plot, such as extra points, lines and labels.
Interactive graphics functions allow you to add information to, or extract information from, an existing plot interactively, using a pointing device such as a mouse.
In addition, R maintains a list of graphical parameters which can be manipulated to customize your plots.
This manual only describes what are known as base graphics. A separate graphics sub-system in package grid coexists with base graphics; it is more powerful but harder to use. There is a recommended package lattice which builds on grid and provides ways to produce multi-panel plots akin to those in the Trellis system in S.
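To make the three groups concrete, a small sketch (not from the original text) that uses one function from each group on the built-in cars data:
> plot(cars)                                  # high-level: start a new scatter plot
> abline(lm(dist ~ speed, data = cars))       # low-level: add a fitted line to the existing plot
> locator(1)                                  # interactive: read one position from a mouse click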
If x is a complex vector, it produces a plot of imaginary versus real parts of the vector elements.
plot(f)
plot(f, y)
    f is a factor object, y is a numeric vector. The first form generates a bar plot of f; the second form produces boxplots of y for each level of f.
plot(df)
plot(~ expr)
plot(y ~ expr)
    df is a data frame, y is any object, expr is a list of object names separated by '+' (e.g., a + b + c). The first two forms produce distributional plots of the variables in a data frame (first form) or of a number of named objects (second form). The third form plots y against every object named in expr.
qqnorm(x)
qqline(x)
qqplot(x, y)
    Distribution-comparison plots. The first form plots the numeric vector x against the expected Normal order scores (a normal scores plot) and the second adds a straight line to such a plot by drawing a line through the distribution and data quartiles. The third form plots the quantiles of x against those of y to compare their respective distributions.
hist(x)
hist(x, nclass=n)
hist(x, breaks=b, ...)
    Produces a histogram of the numeric vector x. A sensible number of classes is usually chosen, but a recommendation can be given with the nclass= argument. Alternatively, the breakpoints can be specified exactly with the breaks= argument. If the probability=TRUE argument is given, the bars represent relative frequencies divided by bin width instead of counts.
dotchart(x, ...)
    Constructs a dotchart of the data in x. In a dotchart the y-axis gives a labelling of the data in x and the x-axis gives its value. For example it allows easy visual selection of all data entries with values lying in specified ranges.
image(x, y, z, ...)
contour(x, y, z, ...)
persp(x, y, z, ...)
    Plots of three variables. The image plot draws a grid of rectangles using different colours to represent the value of z, the contour plot draws contour lines to represent the value of z, and the persp plot draws a 3D surface.
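For example, a quick look at a simulated sample might combine several of these functions (a sketch, not from the original text):
> x <- rnorm(250)
> qqnorm(x); qqline(x)                        # normal scores plot with reference line
> hist(x, breaks = 12, probability = TRUE)    # histogram on the density scale
> lines(density(x))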
There are a number of arguments which may be passed to high-level graphics functions, as follows:
log="x"
log="y"
log="xy"
    Causes the x, y or both axes to be logarithmic. This will work for many, but not all, types of plot.
type=
    The type= argument controls the type of plot produced, as follows:
    type="p"  Plot individual points (the default)
    type="l"  Plot lines
    type="b"  Plot points connected by lines (both)
    type="o"  Plot points overlaid by lines
    type="h"  Plot vertical lines from points to the zero axis (high-density)
    type="s"
    type="S"  Step-function plots. In the first form, the top of the vertical defines the point; in the second, the bottom.
    type="n"  No plotting at all. However axes are still drawn (by default) and the coordinate system is set up according to the data. Ideal for creating plots with subsequent low-level graphics functions.
xlab=string
ylab=string
    Axis labels for the x and y axes. Use these arguments to change the default labels, usually the names of the objects used in the call to the high-level plotting function.
main=string
    Figure title, placed at the top of the plot in a large font.
sub=string
    Sub-title, placed just below the x-axis in a smaller font.
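For instance, the cars data might be labelled explicitly (a sketch, not from the original text):
> plot(cars, xlab = "Speed (mph)", ylab = "Stopping distance (ft)",
+      main = "cars data", sub = "from the datasets package")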
abline(a, b)
abline(h=y)
abline(v=x)
abline(lm.obj)
    Adds a line of slope b and intercept a to the current plot. h=y may be used to specify y-coordinates for the heights of horizontal lines to go across a plot, and v=x similarly for the x-coordinates for vertical lines. Also lm.obj may be a list with a coefficients component of length 2 (such as the result of model-fitting functions), which are taken as an intercept and slope, in that order.
polygon(x, y, ...)
    Draws a polygon defined by the ordered vertices in (x, y) and (optionally) shades it in with hatch lines, or fills it if the graphics device allows the filling of figures.
legend(x, y, legend, ...)
    Adds a legend to the current plot at the specified position. Plotting characters, line styles, colors etc., are identified with the labels in the character vector legend. At least one other argument v (a vector the same length as legend) with the corresponding values of the plotting unit must also be given, as follows:
    legend( , fill=v)   Colors for filled boxes
    legend( , col=v)    Colors in which points or lines will be drawn
    legend( , lty=v)    Line styles
    legend( , lwd=v)    Line widths
    legend( , pch=v)    Plotting characters (character vector)
title(main, sub)
    Adds a title main to the top of the current plot in a large font and (optionally) a sub-title sub at the bottom in a smaller font.
axis(side, ...)
    Adds an axis to the current plot on the side given by the first argument (1 to 4, counting clockwise from the bottom). Other arguments control the positioning of the axis within or beside the plot, and tick positions and labels. Useful for adding custom axes after calling plot() with the axes=FALSE argument.
Low-level plotting functions usually require some positioning information (e.g., x and y coordinates) to determine where to place the new plot elements. Coordinates are given in terms of user coordinates which are defined by the previous high-level graphics command and are chosen based on the supplied data.
Where x and y arguments are required, it is also sufficient to supply a single argument being a list with elements named x and y. Similarly a matrix with two columns is also valid input. In this way functions such as locator() (see below) may be used to specify positions on a plot interactively.
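A small sketch (not from the original text) that builds a plot from these low-level pieces after suppressing the default axes:
> plot(1:10, (1:10)^2, type = "b", axes = FALSE, xlab = "", ylab = "")
> axis(1, at = seq(2, 10, by = 2))            # custom x axis
> axis(2)                                     # default y axis
> title(main = "Custom axes", sub = "built with low-level functions")
> legend("topleft", legend = "x squared", pch = 1, lty = 1)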
> text(x, y, expression(paste(bgroup("(", atop(n, x), ")"), p^x, q^{n-x})))
More information, including a full listing of the features available, can be obtained from within R using the commands:
> help(plotmath)
> example(plotmath)
> demo(plotmath)
identify(x, y, labels)
    Allow the user to highlight any of the points defined by x and y (using the left mouse button) by plotting the corresponding component of labels nearby (or the index number of the point if labels is absent). Returns the indices of the selected points when another button is pressed.
Sometimes we want to identify particular points on a plot, rather than their positions. For example, we may wish the user to select some observation of interest from a graphical display and then manipulate that observation in some way. Given a number of (x, y) coordinates in two numeric vectors x and y, we could use the identify() function as follows:
> plot(x, y)
> identify(x, y)
The identify() function performs no plotting itself, but simply allows the user to move the mouse pointer and click the left mouse button near a point. If there is a point near the mouse pointer it will be marked with its index number (that is, its position in the x/y vectors) plotted nearby. Alternatively, you could use some informative string (such as a case name) as a highlight by using the labels argument to identify(), or disable marking altogether with the plot = FALSE argument. When the process is terminated (see above), identify() returns the indices of the selected points; you can use these indices to extract the selected points from the original vectors x and y.
par(c("col", "lty"))
    With a character vector argument, returns only the named graphics parameters (again, as a list).
par(col=4, lty=2)
    With named arguments (or a single list argument), sets the values of the named graphics parameters, and returns the original values of the parameters as a list.
Setting graphics parameters with the par() function changes the value of the parameters permanently, in the sense that all future calls to graphics functions (on the current device)
will be affected by the new value. You can think of setting graphics parameters in this way as setting default values for the parameters, which will be used by all graphics functions unless an alternative value is given.
Note that calls to par() always affect the global values of graphics parameters, even when par() is called from within a function. This is often undesirable behavior: usually we want to set some graphics parameters, do some plotting, and then restore the original values so as not to affect the user's R session. You can restore the initial values by saving the result of par() when making changes, and restoring the initial values when plotting is complete.
> oldpar <- par(col=4, lty=2)
  ... plotting commands ...
> par(oldpar)
To save and restore all settable graphical parameters use
> oldpar <- par(no.readonly=TRUE)
  ... plotting commands ...
> par(oldpar)
(Some graphics parameters, such as the size of the current device, are for information only.)
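A common idiom (not described in the text) is to do the save-and-restore inside a function, so that the parameters are restored even if an error interrupts the plotting:
> two.panels <- function(x, y) {
+     oldpar <- par(no.readonly = TRUE)
+     on.exit(par(oldpar))                 # restore on exit, even after an error
+     par(mfrow = c(1, 2), col = 4)
+     plot(x); plot(y)
+ }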
pch=4
    When pch is given as an integer between 0 and 25 inclusive, a specialized plotting symbol is produced.
lwd=2
    Line widths. Desired width of lines, in multiples of the standard line width.
col.axis
col.lab
col.main
col.sub
    The color to be used for axis annotation, x and y labels, main and sub-titles, respectively.
font=2
    An integer which specifies which font to use for text. If possible, device drivers arrange so that 1 corresponds to plain text, 2 to bold face, 3 to italic, 4 to bold italic and 5 to a symbol font (which include Greek letters).
font.axis
font.lab
font.main
font.sub
    The font to be used for axis annotation, x and y labels, main and sub-titles, respectively.
adj=-0.1
    Justification of text relative to the plotting position. 0 means left justify, 1 means right justify and 0.5 means to center horizontally about the plotting position. The actual value is the proportion of text that appears to the left of the plotting position, so a value of -0.1 leaves a gap of 10% of the text width between the text and the plotting position.
cex=1.5
    Character expansion. The value is the desired size of text characters (including plotting characters) relative to the default text size.
cex.axis
cex.lab
cex.main
cex.sub
    The character expansion to be used for axis annotation, x and y labels, main and sub-titles, respectively.
mgp=c(3, 1, 0)
    Positions of axis components. The first component is the distance from the axis label to the axis position, in text lines. The second component is the distance to the tick labels, and the final component is the distance from the axis position to the axis line (usually zero). Positive numbers measure outside the plot region, negative numbers inside.
tck=0.01
    Length of tick marks, as a fraction of the size of the plotting region. When tck is small (less than 0.5) the tick marks on the x and y axes are forced to be the same size. A value of 1 gives grid lines. Negative values give tick marks outside the plotting region. Use tck=0.01 and mgp=c(1,-1.5,0) for internal tick marks.
xaxs="r"
yaxs="i"
    Axis styles for the x and y axes, respectively. With styles "i" (internal) and "r" (the default) tick marks always fall within the range of the data, however style "r" leaves a small amount of space at the edges. (S has other styles not implemented in R.)
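For example, internal tick marks and tighter axis labels might be requested as follows (a sketch, not from the original text):
> oldpar <- par(tck = 0.01, mgp = c(1.5, 0.5, 0), xaxs = "i")
> plot(1:10, xlab = "index")
> par(oldpar)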
A typical figure (reproduced in the original manual as a diagram) shows a central plot region surrounded by margins, labelled mai[1] for the bottom margin and mai[2] for the left margin (both in inches), and mar[3] for the top margin (in text lines).
mai=c(1, 0.5, 0.5, 0)
    Widths of the bottom, left, top and right margins, respectively, measured in inches.
mar and mai are equivalent in the sense that setting one changes the value of the other. The default values chosen for this parameter are often too large; the right-hand margin is rarely needed, and neither is the top margin if no title is being used. The bottom and left margins must be large enough to accommodate the axis and tick labels. Furthermore, the default is chosen without regard to the size of the device surface: for example, using the postscript() driver with the height=4 argument will result in a plot which is about 50% margin unless mar or mai are set explicitly. When multiple figures are in use (see below) the margins are reduced, however this may not be enough when many figures share the same page.
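For instance, trimming the seldom-used top and right margins before plotting (a sketch, not from the original text):
> oldpar <- par(mar = c(4, 4, 1, 1))          # bottom, left, top, right, in text lines
> plot(cars)
> par(oldpar)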
The multiple-figure layout is shown in the original manual as a diagram of a 3 by 2 array of figures (mfrow=c(3,2)), with the current figure at position mfg=c(3,2,3,2) and outer margins omi[1] (bottom) and omi[4] (right) marked in inches.
The graphical parameters relating to multiple figures are as follows:
mfcol=c(3, 2)
mfrow=c(2, 4)
    Set the size of a multiple figure array. The first value is the number of rows; the second is the number of columns. The only difference between these two parameters is that setting mfcol causes figures to be filled by column; mfrow fills by rows. The layout in the Figure could have been created by setting mfrow=c(3,2); the figure shows the page after four plots have been drawn. Setting either of these can reduce the base size of symbols and text (controlled by par("cex") and the pointsize of the device). In a layout with exactly two rows and columns the base size is reduced by a factor of 0.83: if there are three or more of either rows or columns, the reduction factor is 0.66.
mfg=c(2, 2, 3, 2)
    Position of the current figure in a multiple figure environment. The first two numbers are the row and column of the current figure; the last two are the number of rows and columns in the multiple figure array. Set this parameter to jump between figures in the array. You can even use different values for the last two numbers than the true values for unequally-sized figures on the same page.
fig=c(4, 9, 1, 4)/10
    Position of the current figure on the page. Values are the positions of the left, right, bottom and top edges respectively, as a percentage of the page measured
from the bottom left corner. The example value would be for a figure in the bottom right of the page. Set this parameter for arbitrary positioning of figures within a page. If you want to add a figure to a current page, use new=TRUE as well (unlike S).
oma=c(2, 0, 3, 0)
omi=c(0, 0, 0.8, 0)
    Size of outer margins. Like mar and mai, the first measures in text lines and the second in inches, starting with the bottom margin and working clockwise. Outer margins are particularly useful for page-wise titles, etc. Text can be added to the outer margins with the mtext() function with argument outer=TRUE. There are no outer margins by default, however, so you must create them explicitly using oma or omi.
More complicated arrangements of multiple figures can be produced by the split.screen() and layout() functions, as well as by the grid and lattice packages.
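Putting these parameters together, a 2 by 2 page with an outer title might be produced as follows (a sketch, not from the original text):
> oldpar <- par(mfrow = c(2, 2), oma = c(0, 0, 2, 0))
> for (i in 1:4) plot(rnorm(20), main = paste("panel", i))
> mtext("A page-wise title", outer = TRUE, cex = 1.2)
> par(oldpar)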
When you have finished with a device, be sure to terminate the device driver by issuing the command
> dev.off()
This ensures that the device finishes cleanly; for example in the case of hardcopy devices this ensures that every page is completed and has been sent to the printer. (This will happen automatically at the normal end of a session.)
postscript()
pdf()
png()
jpeg()
tiff()
bitmap()
...
Each new call to a device driver function opens a new graphics device, thus extending by one the device list. This device becomes the current device, to which graphics output will be sent.
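For example, sending a plot to a PDF file might look like this (a sketch; the file name is hypothetical):
> pdf("plots.pdf", width = 6, height = 4)
> plot(cars)
> dev.off()                                   # close the file device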
dev.list()
    Returns the number and name of all active devices. The device at position 1 on the list is always the null device which does not accept graphics commands at all.
dev.next()
dev.prev()
    Returns the number and name of the graphics device next to, or previous to the current device, respectively.
dev.set(which=k)
    Can be used to change the current graphics device to the one at position k of the device list. Returns the number and label of the device.
dev.off(k)
    Terminate the graphics device at position k of the device list. For some devices, such as postscript devices, this will either print the file immediately or correctly complete the file for later printing, depending on how the device was initiated.
dev.copy(device, ..., which=k)
dev.print(device, ..., which=k)
    Make a copy of the device k. Here device is a device function, such as postscript, with extra arguments, if needed, specified by .... dev.print is similar, but the copied device is immediately closed, so that end actions, such as printing hardcopies, are immediately performed.
graphics.off()
    Terminate all graphics devices on the list, except the null device.
13 Packages
All R functions and datasets are stored in packages. Only when a package is loaded are its contents available. This is done both for efficiency (the full list would take more memory and would take longer to search than a subset), and to aid package developers, who are protected from name clashes with other code. The process of developing packages is described in Section "Creating R packages" in "Writing R Extensions". Here, we will describe them from a user's point of view.
To see which packages are installed at your site, issue the command
> library()
with no arguments. To load a particular package (e.g., the boot package containing functions from Davison & Hinkley (1997)), use a command like
> library(boot)
Users connected to the Internet can use the install.packages() and update.packages() functions (available through the Packages menu in the Windows and RAqua GUIs, see Section "Installing packages" in "R Installation and Administration") to install and update packages.
To see which packages are currently loaded, use
> search()
to display the search list. Some packages may be loaded but not available on the search list (see Section 13.3 [Namespaces]): these will be included in the list given by
> loadedNamespaces()
To see a list of all available help topics in an installed package, use
> help.start()
to start the HTML help system, and then navigate to the package listing in the Reference section.
13.3 Namespaces
Packages can have namespaces, and currently all of the base and recommended packages do except the datasets package. Namespaces do three things: they allow the package writer to hide functions and data that are meant only for internal use, they prevent functions from breaking when a user (or other package writer) picks a name that clashes with one in the package, and they provide a way to refer to an object within a particular package.
For example, t() is the transpose function in R, but users might define their own function named t. Namespaces prevent the user's definition from taking precedence, and breaking every function that tries to transpose a matrix.
There are two operators that work with namespaces. The double-colon operator :: selects definitions from a particular namespace. In the example above, the transpose function will always be available as base::t, because it is defined in the base package. Only functions that are exported from the package can be retrieved in this way.
The triple-colon operator ::: may be seen in a few places in R code: it acts like the double-colon operator but also allows access to hidden objects. Users are more likely to use the getAnywhere() function, which searches multiple packages.
Packages are often inter-dependent, and loading one may cause others to be automatically loaded. The colon operators described above will also cause automatic loading of the associated package. When packages with namespaces are loaded automatically they are not added to the search list.
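A small sketch (not from the original text) of the masking situation described above:
> t <- function(x) x                           # a user function that masks t()
> m <- matrix(1:4, 2)
> base::t(m)                                   # the transpose in base is still reachable
> rm(t)                                        # remove the masking definition again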
Appendix A A sample session

x <- 1:20
    Make x = (1, 2, ..., 20).
w <- 1 + sqrt(x)/2
    A weight vector of standard deviations.
dummy <- data.frame(x=x, y= x + rnorm(x)*w)
dummy
    Make a data frame of two columns, x and y, and look at it.
fm <- lm(y ~ x, data=dummy)
summary(fm)
    Fit a simple linear regression and look at the analysis. With y to the left of the tilde, we are modelling y dependent on x.
fm1 <- lm(y ~ x, data=dummy, weight=1/w^2)
summary(fm1)
    Since we know the standard deviations, we can do a weighted regression.
attach(dummy)
    Make the columns in the data frame visible as variables.
lrf <- lowess(x, y)
    Make a nonparametric local regression function.
plot(x, y)
    Standard point plot.
lines(x, lrf$y)
    Add in the local regression.
abline(0, 1, lty=3)
    The true regression line: (intercept 0, slope 1).
abline(coef(fm))
    Unweighted regression line.
abline(coef(fm1), col = "red")
    Weighted regression line.
detach()
    Remove data frame from the search path.
plot(fitted(fm), resid(fm), xlab="Fitted values", ylab="Residuals", main="Residuals vs Fitted")
    A standard regression diagnostic plot to check for heteroscedasticity. Can you see it?
qqnorm(resid(fm), main="Residuals Rankit Plot")
    A normal scores plot to check for skewness, kurtosis and outliers. (Not very useful here.)
rm(fm, fm1, lrf, x, dummy)
    Clean up again.
The next section will look at data from the classical experiment of Michelson to measure the speed of light. This dataset is available in the morley object, but we will read it from a file to illustrate the read.table function.
filepath <- system.file("data", "morley.tab", package="datasets")
filepath
    Get the path to the data file.
file.show(filepath)
    Optional. Look at the file.
mm <- read.table(filepath)
mm
    Read in the Michelson data as a data frame, and look at it. There are five experiments (column Expt), each with 20 runs (column Run), and Speed is the recorded speed of light, suitably coded.
mm$Expt <- factor(mm$Expt)
mm$Run <- factor(mm$Run)
    Change Expt and Run into factors.
attach(mm)
    Make the data frame visible at position 3 (the default).
plot(Expt, Speed, main="Speed of Light Data", xlab="Experiment No.")
    Compare the five experiments with simple boxplots.
fm <- aov(Speed ~ Run + Expt, data=mm)
summary(fm)
    Analyze as a randomized block, with runs and experiments as factors.
fm0 <- update(fm, . ~ . - Run)
anova(fm0, fm)
    Fit the sub-model omitting runs, and compare using a formal analysis of variance.
detach()
rm(fm, fm0)
    Clean up before moving on.
We now look at some more graphical features: contour and image plots.
x <- seq(-pi, pi, len=50)
y <- x
    x is a vector of 50 equally spaced values in -π ≤ x ≤ π; y is the same.
f <- outer(x, y, function(x, y) cos(y)/(1 + x^2))
    f is a square matrix, with rows and columns indexed by x and y respectively, of values of the function cos(y)/(1 + x^2).
oldpar <- par(no.readonly = TRUE)
par(pty="s")
    Save the plotting parameters and set the plotting region to square.
contour(x, y, f)
contour(x, y, f, nlevels=15, add=TRUE)
    Make a contour map of f; add in more lines for more detail.
fa <- (f-t(f))/2
    fa is the asymmetric part of f. (t() is transpose.)
contour(x, y, fa, nlevels=15)
    Make a contour plot, ...
par(oldpar)
    ... and restore the old graphics parameters.
image(x, y, f)
image(x, y, fa)
    Make some high density image plots (of which you can get hardcopies if you wish), ...
objects(); rm(x, y, f, fa)
    ... and clean up before moving on.
R can do complex arithmetic, also.
th <- seq(-pi, pi, len=100)
z <- exp(1i*th)
    1i is used for the complex number i.
par(pty="s")
plot(z, type="l")
    Plotting complex arguments means plot imaginary versus real parts. This should be a circle.
w <- rnorm(100) + rnorm(100)*1i
    Suppose we want to sample points within the unit circle. One method would be to take complex numbers with standard normal real and imaginary parts ...
w <- ifelse(Mod(w) > 1, 1/w, w)
    ... and to map any outside the circle onto their reciprocal.
plot(w, xlim=c(-1,1), ylim=c(-1,1), pch="+", xlab="x", ylab="y")
lines(z)
    All points are inside the unit circle, but the distribution is not uniform.
w <- sqrt(runif(100))*exp(2*pi*runif(100)*1i)
plot(w, xlim=c(-1,1), ylim=c(-1,1), pch="+", xlab="x", ylab="y")
lines(z)
    The second method uses the uniform distribution. The points should now look more evenly spaced over the disc.
rm(th, w, z)
    Clean up again.
q()
    Quit the R program. You will be asked if you want to save the R workspace, and for an exploratory session like this, you probably do not want to save it.
Appendix B Invoking R
Users of R on Windows or Mac OS X should read the OS-specific section first, but command-line use is also supported.
R accepts the following command-line options.
--help
-h
    Print short help message to standard output and exit successfully.
--version
    Print version information to standard output and exit successfully.
--encoding=enc
    Specify the encoding to be assumed for input from the console or stdin. This needs to be an encoding known to iconv: see its help page. (--encoding enc is also accepted.)
RHOME
    Print the path to the R home directory to standard output and exit successfully. Apart from the front-end shell script and the man page, R installation puts everything (executables, packages, etc.) into this directory.
--save
--no-save
    Control whether data sets should be saved or not at the end of the R session. If neither is given in an interactive session, the user is asked for the desired behavior when ending the session with q(); in non-interactive use one of these must be specified or implied by some other option (see below).
--no-environ
    Do not read any user file to set environment variables.
--no-site-file
    Do not read the site-wide profile at startup.
--no-init-file
    Do not read the user's profile at startup.
--restore
--no-restore
--no-restore-data
    Control whether saved images (file .RData in the directory where R was started) should be restored at startup or not. The default is to restore. (--no-restore implies all the specific --no-restore-* options.)
--no-restore-history
    Control whether the history file (normally file .Rhistory in the directory where R was started, but can be set by the environment variable R_HISTFILE) should be restored at startup or not. The default is to restore.
--no-Rconsole
    (Windows only) Prevent loading the Rconsole file at startup.
--vanilla
    Combine --no-save, --no-environ, --no-site-file, --no-init-file and --no-restore. Under Windows, this also includes --no-Rconsole.
-f file
--file=file
    (not Rgui.exe) Take input from file: '-' means stdin. Implies --no-save unless --save has been set. On a Unix-alike, shell metacharacters should be avoided in file (but as from R 2.14.0 spaces are allowed).
-e expression
    (not Rgui.exe) Use expression as an input line. One or more -e options can be used, but not together with -f or --file. Implies --no-save unless --save has been set. (There is a limit of 10,000 bytes on the total length of expressions used in this way. Expressions containing spaces or shell metacharacters will need to be quoted.)
--no-readline
    (UNIX only) Turn off command-line editing via readline. This is useful when running R from within Emacs using the ESS (Emacs Speaks Statistics) package. See Appendix C [The command-line editor] for more information. Command-line editing is enabled by default interactive use (see --interactive). This option also affects tilde-expansion: see the help for path.expand.
--min-vsize=N
--min-nsize=N
    For expert use only: set the initial trigger sizes for garbage collection of vector heap (in bytes) and cons cells (number) respectively. Suffix M specifies megabytes or millions of cells respectively. The defaults are 6Mb and 350k respectively.
--max-ppsize=N
    Specify the maximum size of the pointer protection stack as N locations. This defaults to 10000, but can be increased to allow large and complicated calculations to be done. Currently the maximum value accepted is 100000.
--max-mem-size=N
    (Windows only) Specify a limit for the amount of memory to be used both for R objects and working areas. This is set by default to the smaller of the amount of physical RAM in the machine and, for 32-bit R, 1.5Gb (2.5Gb on versions of Windows that support 3Gb per process and have the support enabled: see the rw-FAQ Q2.9; 3.5Gb on some 64-bit versions of Windows), and must be between 32Mb and the maximum allowed on that version of Windows.
--quiet
--silent
-q
    Do not print out the initial copyright and welcome messages.
--slave
    Make R run as quietly as possible. This option is intended to support programs which use R to compute results for them. It implies --quiet and --no-save.
--interactive
    (UNIX only) Assert that R really is being run interactively even if input has been redirected: use if input is from a FIFO or pipe and fed from an interactive
program. (The default is to deduce that R is being run interactively if and only if stdin is connected to a terminal or pty.) Using -e, -f or --file asserts non-interactive use even if --interactive is given.
--ess
    (Windows only) Set Rterm up for use by R-inferior-mode in ESS, including asserting interactive use without the command-line editor.
--verbose
    Print more information about progress, and in particular set R's option verbose to TRUE. R code uses this option to control the printing of diagnostic messages.
--debugger=name
-d name
    (UNIX only) Run R through debugger name. For most debuggers (the exceptions are valgrind and recent versions of gdb), further command line options are disregarded, and should instead be given when starting the R executable from inside the debugger.
--gui=type
-g type
    (UNIX only) Use type as graphical user interface (note that this also includes interactive graphics). Currently, possible values for type are X11 (the default) and, provided that Tcl/Tk support is available, Tk. (For back-compatibility, x11 and tk are accepted.)
--arch=name
    (UNIX only) Run the specified sub-architecture. Most commonly used on Mac OS X, where the possible values are i386, x86_64 and ppc.
--args
    This flag does nothing except cause the rest of the command line to be skipped: this can be useful to retrieve values from it with commandArgs(TRUE).
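For example, a throw-away non-interactive calculation that ignores all startup files might be run as follows (a sketch, not from the original text):
R --vanilla --quiet -e 'mean(rnorm(100))'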
Note that input and output can be redirected in the usual way (using '<' and '>'), but the line length limit of 4095 bytes still applies. Warning and error messages are sent to the error channel (stderr).
The command R CMD allows the invocation of various tools which are useful in conjunction with R, but not intended to be called directly. The general form is
R CMD command args
where command is the name of the tool and args the arguments passed on to it.
Currently, the following tools are available.
BATCH       Run R in batch mode. Runs R --restore --save with possibly further options (see ?BATCH).
COMPILE     (UNIX only) Compile C, C++, Fortran ... files for use with R.
SHLIB       Build shared library for dynamic loading.
INSTALL     Install add-on packages.
REMOVE      Remove add-on packages.
build       Build (that is, package) add-on packages.
check       Check add-on packages.
LINK        (UNIX only) Front-end for creating executable programs.
Stangle      Extract S/R code from Sweave documentation
Sweave       Process Sweave documentation
Rdiff        Diff R output ignoring headers etc
config       Obtain configuration information
javareconf   (Unix only) Update the Java configuration variables
rtags        (Unix only) Create Emacs-style tag files from C, R, and Rd files
open         (Windows only) Open a file via Windows file associations
texify       (Windows only) Process (La)TeX files with R's style files
Use R CMD command --help to obtain usage information for each of the tools accessible via the R CMD interface.
In addition, you can use options --arch=, --no-environ, --no-init-file, --no-site-file and --vanilla (as from R 2.13.0) between R and CMD: these affect any R processes run by the tools. (Here --vanilla is equivalent to --no-environ --no-site-file --no-init-file.) However, note that R CMD does not of itself use any R startup files (in particular, neither user nor site Renviron files), and all of the R processes run by these tools (except BATCH) use --no-restore. Most use --vanilla and so invoke no R startup files: the current exceptions are INSTALL, REMOVE, Sweave and SHLIB (which uses --no-site-file --no-init-file).
You can also run R CMD cmd args for any other executable cmd on the path or given by an absolute filepath: this is useful to have the same environment as R or the specific commands run under, for example to run ldd or pdflatex. Under Windows cmd can be an executable or a batch file, or if it has extension .sh or .pl the appropriate interpreter (if available) is called to run it.
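For instance, building and checking a source package might look like this (a sketch; 'mypkg' is a hypothetical package directory):
R CMD build mypkg
R CMD check mypkg_1.0.tar.gz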
Windows. If the environment variable R_USER is defined, that gives the home directory. Next, if the environment variable HOME is defined, that gives the home directory. After those two user-controllable settings, R tries to find system-defined home directories. It first tries to use the Windows "personal" directory (typically C:\Documents and Settings\username\My Documents in Windows XP). If that fails, and environment variables HOMEDRIVE and HOMEPATH are defined (and they normally are) these define the home directory. Failing all those, the home directory is taken to be the starting directory.
You need to ensure that the environment variables TMPDIR, TMP and TEMP are either unset or that one of them points to a valid place to create temporary files and directories.
Environment variables can be supplied as name=value pairs on the command line.
If there is an argument ending .RData (in any case) it is interpreted as the path to the workspace to be restored: it implies --restore and sets the working directory to the parent of the named file. (This mechanism is used for drag-and-drop and file association with RGui.exe, but also works for Rterm.exe. If the named file does not exist it sets the working directory if the parent directory exists.)
The following additional command-line options are available when invoking RGui.exe.
--mdi
--sdi
--no-mdi
    Control whether Rgui will operate as an MDI program (with multiple child windows within one main window) or an SDI application (with multiple top-level windows for the console, graphics and pager). The command-line setting overrides the setting in the user's Rconsole file.
--debug
    Enable the "Break to debugger" menu item in Rgui, and trigger a break to the debugger during command line processing.
Under Windows with R CMD you may also specify your own .bat, .exe, .sh or .pl file. It will be run under the appropriate interpreter (Perl for .pl) with several environment variables set appropriately, including R_HOME, R_OSTYPE, PATH, BSTINPUTS and TEXINPUTS. For example, if you already have latex.exe on your path, then
R CMD latex.exe mydoc
will run LaTeX on mydoc.tex, with the path to R's share/texmf macros appended to TEXINPUTS. (Unfortunately, this does not help with the MiKTeX build of LaTeX, but R CMD texify mydoc will work in that case.)
An executable script can also be written as a here document:
#!/bin/sh
R --slave <<EOF
   R program goes here...
EOF
but here stdin() refers to the program source and "stdin" will not be usable.
Very short scripts can be passed to Rscript on the command-line via the -e flag. Note that on a Unix-alike the input filename (such as foo.R) should not contain spaces nor shell metacharacters.
C-p       Go to the previous command (backwards in the history).
C-n       Go to the next command (forwards in the history).
C-r text  Find the last command with the text string in it.
The 'Emacs Speaks Statistics' package; see the URL http://ESS.R-project.org
On a PC keyboard this is usually the Alt key, occasionally the 'Windows' key. On a Mac keyboard normally no meta key is available.
On most terminals, you can also use the up and down arrow keys instead of C-p and C-n, respectively.
On most terminals, you can also use the left and right arrow keys instead of C-b and C-f, respectively.
The final RET terminates the command line editing sequence.
The readline key bindings can be customized in the usual way via a ~/.inputrc file. As from R 2.12.0, these customizations can be conditioned on application R, that is by including a section like
$if R
  "\C-xd": "q('no')\n"
$endif
>
>............................................... 9 >= . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
%
%*% . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 %o% . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
?
?............................................... 4 ?? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
&
&............................................... 9 && . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
^
^............................................... 8
| *
*............................................... 8 |............................................... 9 || . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
+
+............................................... 8
~
~ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
-............................................... 8
A
abline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . add1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . anova . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57, aov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . aperm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . as.data.frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . as.vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . avas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 66 59 59 58 23 21 29 26 30 14 14 66 71
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 .First . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 .Last . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
/
/............................................... 8
:
:............................................... 8 :: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 ::: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
B
boxplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 break . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 bruto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
<
<............................................... 9 <<- . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 <= . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
C
c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7, 10, 26, C.............................................. cbind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coef . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contrasts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 57 25 57 57 69 57
=
== . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
99
I
identify . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ifelse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . is.na . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . is.nan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 42 42 69 10 10
D
data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data.frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . det . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . detach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dev.list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dev.next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dev.off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dev.prev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dev.set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . deviance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . diag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dotchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . drop1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 29 36 25 30 25 81 81 81 81 81 57 23 19 69 59
J
jpeg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
K
ks.test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
L
legend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8, 13 levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 lm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 lme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 locator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 loess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 lqs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 lsfit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
E
ecdf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 edit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 eigen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 exp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
M
mars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
F
F............................................... 9 factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 FALSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 fivenum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 for . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
N
NA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . NaN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . nlm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63, 64, nlme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . nlminb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . nrow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 10 23 43 65 65 63 23
G
getAnywhere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 getS3method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 glm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
H
help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 help.search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 help.start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 hist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 69
O
optim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 ordered . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 outer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
100
P
pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 paste . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 pdf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 persp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58, 67 pmax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 pmin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 png . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 postscript . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 predict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 print . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 prod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
sink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 solve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 sqrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 stem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58, 59 sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 58 svd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
T
t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 T............................................... 9 t.test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 26 tan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 tapply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 title . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 TRUE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Q
qqline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38, qqnorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38, qqplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . qr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . quartz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 69 69 25 79
R
range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 rbind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 read.table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 rep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 resid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 rlm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 rm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
U
unclass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
V
var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8, 17 var.test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 vcov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
S
scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 sd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 shapiro.test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 sin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
W
while . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 wilcox.test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
X
X11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
101
B
Binary operators . . . . . . . . . . .  45
Box plots . . . . . . . . . . . . . . .  39

C
Character vectors . . . . . . . . . . .  10
Classes . . . . . . . . . . . . . . . .  15, 51
Concatenating lists . . . . . . . . . .  29
Contrasts . . . . . . . . . . . . . . .  56
Control statements . . . . . . . . . .  42
CRAN . . . . . . . . . . . . . . . . .  82
Customizing the environment . . . . . .  50

D
Data frames . . . . . . . . . . . . . .  29
Default values . . . . . . . . . . . .  45
Density estimation . . . . . . . . . .  36
Determinants . . . . . . . . . . . . .  25
Diverting input and output . . . . . .  6
Dynamic graphics . . . . . . . . . . .  81

E
Eigenvalues and eigenvectors . . . . .  24
Empirical CDFs . . . . . . . . . . . .  37

F
Factors . . . . . . . . . . . . . . . .  16, 56
Families . . . . . . . . . . . . . . .  60
Formulae . . . . . . . . . . . . . . .  54

G
Generalized linear models . . . . . . .  60
Generalized transpose of an array . . .  23
Generic functions . . . . . . . . . . .  51
Graphics device drivers . . . . . . . .  79
Graphics parameters . . . . . . . . . .  73
Grouped expressions . . . . . . . . . .  42

I
Indexing of and by arrays . . . . . . .  19
Indexing vectors . . . . . . . . . . .  11

K
Kolmogorov-Smirnov test . . . . . . . .  38

L
Least squares fitting . . . . . . . . .  25
Linear equations . . . . . . . . . . .  24
Linear models . . . . . . . . . . . . .  57
Lists . . . . . . . . . . . . . . . . .  28
Local approximating regressions . . . .  66
Loops and conditional execution . . . .  42

M
Matrices . . . . . . . . . . . . . . .  19
Matrix multiplication . . . . . . . . .  23
Maximum likelihood . . . . . . . . . .  65
Missing values . . . . . . . . . . . .  10
Mixed models . . . . . . . . . . . . .  65

N
Named arguments . . . . . . . . . . . .  45
Namespace . . . . . . . . . . . . . . .  83
Nonlinear least squares . . . . . . . .  63

O
Object orientation . . . . . . . . . .  51
Objects . . . . . . . . . . . . . . . .  13
One- and two-sample tests . . . . . . .  39
Ordered factors . . . . . . . . . . . .  16, 56
Outer products of arrays . . . . . . .  22

P
Packages . . . . . . . . . . . . . . .  2, 82
Probability distributions . . . . . . .  35

Q
QR decomposition . . . . . . . . . . .  25
Quantile-quantile plots . . . . . . . .  38

R
Reading data from files . . . . . . . .  32
Recycling rule . . . . . . . . . . . .  8, 21

S
Scope . . . . . . . . . . . . . . . . .  48
Search path . . . . . . . . . . . . . .  31
Shapiro-Wilk test . . . . . . . . . . .  38
Singular value decomposition . . . . .  24
Statistical models . . . . . . . . . .  54
Student's t test . . . . . . . . . . .  39

T
Tabulation . . . . . . . . . . . . . .  26
Tree-based models . . . . . . . . . . .  66

U
Updating fitted models . . . . . . . .  59

V
Vectors . . . . . . . . . . . . . . . .  7

W
Wilcoxon test . . . . . . . . . . . . .  40
Workspace . . . . . . . . . . . . . . .  6
Writing functions . . . . . . . . . . .  44
Appendix F References
D. M. Bates and D. G. Watts (1988), Nonlinear Regression Analysis and Its Applications. John Wiley & Sons, New York.

Richard A. Becker, John M. Chambers and Allan R. Wilks (1988), The New S Language. Chapman & Hall, New York. This book is often called the "Blue Book".

John M. Chambers and Trevor J. Hastie, eds. (1992), Statistical Models in S. Chapman & Hall, New York. This is also called the "White Book".

John M. Chambers (1998), Programming with Data. Springer, New York. This is also called the "Green Book".

A. C. Davison and D. V. Hinkley (1997), Bootstrap Methods and Their Applications. Cambridge University Press.

Annette J. Dobson (1990), An Introduction to Generalized Linear Models. Chapman and Hall, London.

Peter McCullagh and John A. Nelder (1989), Generalized Linear Models. Second edition. Chapman and Hall, London.

John A. Rice (1995), Mathematical Statistics and Data Analysis. Second edition. Duxbury Press, Belmont, CA.

S. D. Silvey (1970), Statistical Inference. Penguin, London.