$$\text{AIC} = 2k - 2\ln(L)$$
where $k$ is the number of parameters in the statistical model, and $L$ is the maximized value of the likelihood function for the estimated model.
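The formula translates directly into code. The following is a minimal sketch (not from the source) that fits a Gaussian model by maximum likelihood with NumPy and SciPy and computes its AIC; the data, variable names, and seed are illustrative assumptions.

```python
# Sketch: AIC for a Gaussian model fit by maximum likelihood.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)   # hypothetical sample data

# MLE for a normal model: sample mean and (biased) sample standard deviation.
mu_hat = x.mean()
sigma_hat = x.std()                            # ddof=0 gives the MLE

log_L = norm.logpdf(x, loc=mu_hat, scale=sigma_hat).sum()  # ln(L) at the MLE
k = 2                                          # parameters: mu and sigma

aic = 2 * k - 2 * log_L
print(f"AIC = {aic:.2f}")
```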
$$\text{AICc} = \text{AIC} + \frac{2k(k + 1)}{n - k - 1}$$
AICc is AIC with a correction for finite sample sizes, where $n$ denotes the sample size; the correction imposes a greater penalty for extra parameters when $n$ is small relative to $k$.
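As a sketch of the correction term (the function name and example values below are assumptions, not from the source): with $k = 2$ and $n = 100$ the added penalty is $12/97 \approx 0.12$, so AICc barely differs from AIC, but the penalty grows quickly as $n$ approaches $k + 1$.

```python
# Sketch: small-sample correction applied to an already-computed AIC value.
def aicc(aic: float, k: int, n: int) -> float:
    """AICc = AIC + 2k(k + 1) / (n - k - 1); requires n > k + 1."""
    if n <= k + 1:
        raise ValueError("AICc is undefined when n <= k + 1")
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical AIC value of 250.0 with k = 2 parameters and n = 100 observations.
print(aicc(aic=250.0, k=2, n=100))   # ~250.12
```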
AIC was introduced by Hirotugu Akaike in his seminal 1973 paper "Information Theory and an Extension of the Maximum Likelihood Principle" (in: B. N. Petrov and F. Csaki, eds., 2nd International Symposium on Information Theory, Akademiai Kiado, Budapest, pp. 267–281).
References:
"Information Theory and an Extension of the Maximum Likelihood Principle" (starts on page 610).