How does "poly()" generate orthogonal polynomials? How should the returned "coefs" be understood?

2024-04-12

My understanding of orthogonal polynomials is that they take the form

y(x) = a1 + a2(x - c1) + a3(x - c2)(x - c3) + a4(x - c4)(x - c5)(x - c6) ... up to the number of terms required

where a1, a2, etc. are the coefficients of each orthogonal term (which vary between fits), and c1, c2, etc. are the coefficients within the orthogonal terms, determined so that the terms remain orthogonal (and are consistent between fits on the same x values).

I understand that poly() is used to fit orthogonal polynomials. An example:

x = c(1.160, 1.143, 1.126, 1.109, 1.079, 1.053, 1.040, 1.027, 1.015, 1.004, 0.994, 0.985, 0.977) # abscissae not equally spaced

y = c(1.217395, 1.604360, 2.834947, 4.585687, 8.770932, 9.996260, 9.264800, 9.155079, 7.949278, 7.317690, 6.377519, 6.409620, 6.643426)

# construct the orthogonal polynomial
orth_poly <- poly(x, degree = 5)

# fit y to orthogonal polynomial
model <- lm(y ~ orth_poly) 

I would like to extract both the coefficients a1, a2, etc. and the orthogonality coefficients c1, c2, etc. I am not sure how to do this. My guess is that

model$coefficients

returns the first set of coefficients, but I am struggling to work out how to extract the others. Perhaps they are within

attributes(orth_poly)$coefs

?

Many thanks.


I have just realised there is a closely related question, Extracting orthogonal polynomial coefficients from R's poly() function? (https://stackoverflow.com/q/26728289/4891738), from 2 years ago. Its answer only explains what predict.poly does, but my answer here gives the complete picture.


Section 1: How poly represents orthogonal polynomials

My understanding of orthogonal polynomials is that they take the form

y(x) = a1 + a2(x - c1) + a3(x - c2)(x - c3) + a4(x - c4)(x - c5)(x - c6) ... up to the number of terms required

No, no, there is no such clean form. poly() generates monic orthogonal polynomials that can be represented by a three-term recursion, and that recursion is how predict.poly generates the linear predictor matrix. Surprisingly, poly itself does not use the recursion but brute force: a QR factorization of the model matrix of the ordinary polynomial, taking an orthogonal span. Nevertheless, the result is equivalent to the recursion.
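A sketch of that recursion, written in the notation of the my_poly function given in Section 3 (the symbols P_r, alpha_r and beta_r are mine; x is first centred by its mean, and the columns are rescaled to unit norm at the end):

P_1(x) = x
P_2(x) = (x - alpha_1) * P_1(x) - beta_1
P_r(x) = (x - alpha_{r-1}) * P_{r-1}(x) - beta_{r-1} * P_{r-2}(x),   for r >= 3

with alpha_r = <x * P_r, P_r> / <P_r, P_r> and beta_r = <P_r, P_r> / <P_{r-1}, P_{r-1}>, taking P_0(x) = 1 so that the denominator of beta_1 is n.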


Section 2: Explanation of the poly() output

Let's consider an example, taking the x from your post:

X <- poly(x, degree = 5)

#                 1           2           3            4           5
# [1,]  0.484259711  0.48436462  0.48074040  0.351250507  0.25411350
# [2,]  0.406027697  0.20038942 -0.06236564 -0.303377083 -0.46801416
# [3,]  0.327795682 -0.02660187 -0.34049024 -0.338222850 -0.11788140
# ...           ...          ...        ...          ...         ...
#[12,] -0.321069852  0.28705108 -0.15397819 -0.006975615  0.16978124
#[13,] -0.357884918  0.42236400 -0.40180712  0.398738364 -0.34115435
#attr(,"coefs")
#attr(,"coefs")$alpha
#[1] 1.054769 1.078794 1.063917 1.075700 1.063079
# 
#attr(,"coefs")$norm2
#[1] 1.000000e+00 1.300000e+01 4.722031e-02 1.028848e-04 2.550358e-07
#[6] 5.567156e-10 1.156628e-12

The attributes are as follows (a quick numeric check is sketched after this list):

  • alpha[1] gives x_bar = mean(x), i.e., the centre;
  • alpha - alpha[1] gives alpha0, alpha1, ..., alpha4 (alpha5 is computed but dropped before poly returns X, as it will not be used by predict.poly);
  • the first value of norm2 is always 1; the second to the last are l0, l1, ..., l5, giving the squared column norms of the unscaled basis. l0 is the squared norm of the dropped constant column P0(x - x_bar), which is always n (i.e., length(x)), while the leading 1 is merely padded so that the recursion can proceed inside predict.poly;
  • beta0, beta1, beta2, ..., beta5 are not returned, but can be computed as norm2[-1] / norm2[-length(norm2)].
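
A quick numeric check of these claims (a sketch, assuming the x from the question is still in the workspace):

X  <- poly(x, degree = 5)
cf <- attr(X, "coefs")

all.equal(cf$alpha[1], mean(x))    ## alpha[1] is the centre x_bar
all.equal(cf$norm2[2], length(x))  ## l0 is always n
zapsmall(crossprod(X))             ## the returned columns themselves are orthonormal
cf$norm2[-1] / cf$norm2[-length(cf$norm2)]   ## beta0, beta1, ..., beta5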

Section 3: Implementing poly using both the QR factorization and the recursion

As mentioned earlier, poly does not use the recursion, while predict.poly does. Personally, I do not see the logic / reason behind such an inconsistent design. Here I offer a function my_poly, written by myself, which uses the recursion to generate the matrix when QR = FALSE. When QR = TRUE, it is a similar but not identical implementation to poly. The code is well commented, which should help you understand both methods; a quick sanity check follows the function.

## return a model matrix for data `x`
my_poly <- function (x, degree = 1, QR = TRUE) {
  ## check feasibility
  if (length(unique(x)) < degree)
    stop("insufficient unique data points for specified degree!")
  ## centring covariates (so that `x` is orthogonal to intercept)
  centre <- mean(x)
  x <- x - centre
  if (QR) {
    ## QR factorization of design matrix of ordinary polynomial
    QR <- qr(outer(x, 0:degree, "^"))
    ## X <- qr.Q(QR) * rep(diag(QR$qr), each = length(x))
    ## i.e., column rescaling of Q factor by `diag(R)`
    ## also drop the intercept
    X <- qr.qy(QR, diag(diag(QR$qr), length(x), degree + 1))[, -1, drop = FALSE]
    ## now columns of `X` are orthogonal to each other
    ## i.e., `crossprod(X)` is diagonal
    X2 <- X * X
    norm2 <- colSums(X2)       ## squared column L2 norms
    alpha <- drop(crossprod(X2, x)) / norm2
    beta <- norm2 / (c(length(x), norm2[-degree]))
    colnames(X) <- 1:degree
    } 
  else {
    beta <- alpha <- norm2 <- numeric(degree)
    ## repeat first polynomial `x` on all columns to initialize design matrix X
    X <- matrix(x, nrow = length(x), ncol = degree, dimnames = list(NULL, 1:degree))
    ## compute alpha[1] and beta[1]
    norm2[1] <- new_norm <- drop(crossprod(x))
    alpha[1] <- sum(x ^ 3) / new_norm
    beta[1] <- new_norm / length(x)
    if (degree > 1L) {
      old_norm <- new_norm
      ## second polynomial
      X[, 2] <- Xi <- (x - alpha[1]) * X[, 1] - beta[1]
      norm2[2] <- new_norm <- drop(crossprod(Xi))
      alpha[2] <- drop(crossprod(Xi * Xi, x)) / new_norm
      beta[2] <- new_norm / old_norm
      old_norm <- new_norm
      ## further polynomials obtained from recursion
      i <- 3
      while (i <= degree) {
        X[, i] <- Xi <- (x - alpha[i - 1]) * X[, i - 1] - beta[i - 1] * X[, i - 2]
        norm2[i] <- new_norm <- drop(crossprod(Xi))
        alpha[i] <- drop(crossprod(Xi * Xi, x)) / new_norm
        beta[i] <- new_norm / old_norm
        old_norm <- new_norm
        i <- i + 1
        }
      }
    }
  ## column rescaling so that `crossprod(X)` is an identity matrix
  scale <- sqrt(norm2)
  X <- X * rep(1 / scale, each = length(x))
  ## add attributes and return
  attr(X, "coefs") <- list(centre = centre, scale = scale, alpha = alpha[-degree], beta = beta[-degree])
  X
  }
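
A quick sanity check (a sketch, assuming the x from the question is in scope): both branches should reproduce the matrix returned by poly, up to floating-point error.

## largest absolute difference from poly(); both should be essentially zero
max(abs(my_poly(x, 5, QR = TRUE)  - poly(x, degree = 5)))
max(abs(my_poly(x, 5, QR = FALSE) - poly(x, degree = 5)))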

Section 4: Explanation of the my_poly output

X <- my_poly(x, 5, FALSE)

The resulting matrix is identical to the one generated by poly, hence omitted here. The attributes, however, are not quite the same.

#attr(,"coefs")
#attr(,"coefs")$centre
#[1] 1.054769

#attr(,"coefs")$scale
#[1] 2.173023e-01 1.014321e-02 5.050106e-04 2.359482e-05 1.075466e-06

#attr(,"coefs")$alpha
#[1] 0.024025005 0.009147498 0.020930616 0.008309835

#attr(,"coefs")$beta
#[1] 0.003632331 0.002178825 0.002478848 0.002182892

my_poly returns the construction information more transparently (a cross-check against poly's coefs is sketched after the list):

  • centre gives x_bar = mean(x);
  • scale gives the column norms (the square roots of the norm2 returned by poly);
  • alpha gives alpha1, alpha2, alpha3, alpha4;
  • beta gives beta1, beta2, beta3, beta4.
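
The two sets of attributes can be converted into each other; here is a sketch of the correspondence (assuming x, my_poly and the relationships stated in Section 2; the names p_cf, m_cf and k are mine):

p_cf <- attr(poly(x, degree = 5), "coefs")       ## poly's representation
m_cf <- attr(my_poly(x, 5, FALSE), "coefs")      ## my_poly's representation
k <- length(p_cf$norm2)

all.equal(m_cf$centre, p_cf$alpha[1])                            ## centre = alpha[1]
all.equal(m_cf$scale,  sqrt(p_cf$norm2[-(1:2)]))                 ## scale = sqrt(l1, ..., l5)
all.equal(m_cf$alpha,  (p_cf$alpha - p_cf$alpha[1])[2:5])        ## alpha1, ..., alpha4
all.equal(m_cf$beta,   (p_cf$norm2[-1] / p_cf$norm2[-k])[2:5])   ## beta1, ..., beta4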

Section 5: A prediction routine for my_poly

Since my_poly returns different construction information, stats:::predict.poly is not compatible with my_poly. Here is the appropriate routine, my_predict_poly:

## return a linear predictor matrix, given a model matrix `X` and new data `x`
my_predict_poly <- function (X, x) {
  ## extract construction info
  coefs <- attr(X, "coefs")
  centre <- coefs$centre
  alpha <- coefs$alpha
  beta <- coefs$beta
  degree <- ncol(X)
  ## centring `x`
  x <- x - centre
  ## repeat first polynomial `x` on all columns to initialize design matrix X
  X <- matrix(x, length(x), degree, dimnames = list(NULL, 1:degree))
  if (degree > 1L) {
    ## second polynomial
    X[, 2] <- (x - alpha[1]) * X[, 1] - beta[1]
    ## further polynomials obtained from recursion
    i <- 3
    while (i <= degree) {
      X[, i] <- (x - alpha[i - 1]) * X[, i - 1] - beta[i - 1] * X[, i - 2]
      i <- i + 1
      }
    }
  ## column rescaling so that `crossprod(X)` is an identity matrix
  X * rep(1 / coefs$scale, each = length(x))
  }

Consider an example:

set.seed(0); x1 <- runif(5, min(x), max(x))

and

stats:::predict.poly(poly(x, 5), x1)
my_predict_poly(my_poly(x, 5, FALSE), x1)

give exactly the same prediction matrix:

#               1          2           3          4          5
#[1,]  0.39726381  0.1721267 -0.10562568 -0.3312680 -0.4587345
#[2,] -0.13428822 -0.2050351  0.28374304 -0.0858400 -0.2202396
#[3,] -0.04450277 -0.3259792  0.16493099  0.2393501 -0.2634766
#[4,]  0.12454047 -0.3499992 -0.24270235  0.3411163  0.3891214
#[5,]  0.40695739  0.2034296 -0.05758283 -0.2999763 -0.4682834

Note that the prediction routine simply takes the existing construction information, rather than reconstructing the polynomials.


Section 6: Just treat poly and predict.poly as a black box

It is rarely necessary to understand everything inside. For statistical modelling, it is enough to know that poly constructs a polynomial basis for model fitting, whose coefficients can be found in lmObject$coefficients. When making predictions, the user never needs to call predict.poly, since predict.lm will do it for you. So it is perfectly fine to treat poly and predict.poly as a black box, as sketched below.
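
A minimal sketch of that black-box workflow, using the x and y from the question (the new x values are made up for illustration):

## keep poly() inside the formula so that predict.lm can rebuild the basis later
fit <- lm(y ~ poly(x, degree = 5))
coef(fit)    ## intercept plus one coefficient per orthogonal column

## predictions on new data; predict.lm calls predict.poly internally
new_dat <- data.frame(x = c(1.00, 1.05, 1.10))
predict(fit, newdata = new_dat)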
