This function is now deprecated. Please use optPenalty.kCVauto instead.

optPenalty.LOOCVauto(
  Y,
  lambdaMin,
  lambdaMax,
  lambdaInit = (lambdaMin + lambdaMax)/2,
  cor = FALSE,
  target = default.target(covML(Y)),
  type = "Alt"
)

Arguments

Y

Data matrix. Variables are assumed to be represented by the columns.

lambdaMin

A numeric giving the minimum value for the penalty parameter.

lambdaMax

A numeric giving the maximum value for the penalty parameter.

lambdaInit

A numeric giving the initial (starting) value for the penalty parameter.

cor

A logical indicating whether the evaluation of the LOOCV score should be performed on the correlation scale.

target

A target matrix (in precision terms) for Type I ridge estimators.

type

A character indicating the type of ridge estimator to be used. Must be one of: "Alt", "ArchI", "ArchII".

Value

An object of class list with the following elements:

optLambda

A numeric giving the optimal value for the penalty parameter.

optPrec

A matrix representing the precision matrix of the chosen type (see ridgeP) under the optimal value of the penalty parameter.

Details

Function that performs an 'automatic' search for the optimal penalty parameter for the ridgeP call by applying Brent's method to a (leave-one-out) cross-validated negative log-likelihood score.

The function determines the optimal value of the penalty parameter by applying the Brent algorithm (1971) to the (leave-one-out) cross-validated negative log-likelihood score of the regularized ridge estimator of the precision matrix. The search is 'automatic' in the sense that, to invoke the Brent method, only a minimum value, a maximum value, and a starting value for the penalty parameter need to be specified. The value at which the (leave-one-out) cross-validated negative log-likelihood score is minimized is deemed optimal. The function employs the Brent algorithm as implemented in the optim function.
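
As an illustration of how such a search could be set up, the sketch below (not the package's internal code) minimizes a leave-one-out cross-validated negative log-likelihood with optim's one-dimensional "Brent" method, re-estimating the ridge precision on each leave-one-out sample with covML and ridgeP. The helper nllLOOCV is purely illustrative; mean-centering and the other refinements of optPenalty.LOOCVauto are omitted for brevity.

## Minimal sketch (illustrative only): LOOCV negative log-likelihood + Brent search
library(rags2ridges)

## Toy data, as in the Examples below
set.seed(333)
X <- matrix(rnorm(10 * 25), nrow = 10, ncol = 25)

## Illustrative helper: average negative log-likelihood of each left-out row
## under the ridge precision estimated from the remaining rows
## (mean-centering omitted for brevity)
nllLOOCV <- function(lambda, Y, target, type = "Alt") {
  n <- nrow(Y)
  score <- 0
  for (i in seq_len(n)) {
    P  <- ridgeP(covML(Y[-i, , drop = FALSE]), lambda = lambda,
                 type = type, target = target)
    Si <- crossprod(Y[i, , drop = FALSE])   # y_i y_i', so sum(Si * P) = y_i' P y_i
    score <- score + (sum(Si * P) -
                      as.numeric(determinant(P, logarithm = TRUE)$modulus)) / 2
  }
  score / n
}

## One-dimensional Brent minimization over [lambdaMin, lambdaMax]
opt <- optim(par = 1, fn = nllLOOCV, Y = X,
             target = default.target(covML(X)), type = "Alt",
             method = "Brent", lower = 0.001, upper = 30)
opt$par   # plays the role of optLambda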

Note

When cor = TRUE, correlation matrices are used in the computation of the (cross-validated) negative log-likelihood score, i.e., the leave-one-out sample covariance matrix is a matrix on the correlation scale. When performing the evaluation on the correlation scale the data are assumed to be standardized. If cor = TRUE and one wishes to use the default target specification, one may consider using target = default.target(covML(Y, cor = TRUE)), which gives a default target under the assumption of standardized data.

References

Brent, R.P. (1971). An Algorithm with Guaranteed Convergence for Finding a Zero of a Function. Computer Journal 14: 422-425.

Author

Wessel N. van Wieringen, Carel F.W. Peeters <carel.peeters@wur.nl>

Examples


## Obtain some (high-dimensional) data
p <- 25
n <- 10
set.seed(333)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
colnames(X) <- letters[1:p]

## Obtain regularized precision under optimal penalty
OPT <- optPenalty.LOOCVauto(X, lambdaMin = .001, lambdaMax = 30); OPT
#> $optLambda
#> [1] 13.42467
#> 
#> $optPrec
#> A 25 x 25 ridge precision matrix estimate with lambda = 13.424670
#>              a            b           c            d            e            f …
#> a  0.755139352 -0.003028252 -0.04933164 -0.002421719  0.013977632  0.001114962 …
#> b -0.003028252  0.806519739 -0.01581207 -0.006536546 -0.024427366  0.005763776 …
#> c -0.049331636 -0.015812067  0.75928308 -0.030454558  0.020626620  0.012856311 …
#> d -0.002421719 -0.006536546 -0.03045456  0.801308975  0.020062895  0.001788722 …
#> e  0.013977632 -0.024427366  0.02062662  0.020062895  0.767347416 -0.005726924 …
#> f  0.001114962  0.005763776  0.01285631  0.001788722 -0.005726924  0.811360616 …
#> … 19 more rows and 19 more columns
#> 
OPT$optLambda # Optimal penalty
#> [1] 13.42467
OPT$optPrec   # Regularized precision under optimal penalty
#> A 25 x 25 ridge precision matrix estimate with lambda = 13.424670
#>              a            b           c            d            e            f …
#> a  0.755139352 -0.003028252 -0.04933164 -0.002421719  0.013977632  0.001114962 …
#> b -0.003028252  0.806519739 -0.01581207 -0.006536546 -0.024427366  0.005763776 …
#> c -0.049331636 -0.015812067  0.75928308 -0.030454558  0.020626620  0.012856311 …
#> d -0.002421719 -0.006536546 -0.03045456  0.801308975  0.020062895  0.001788722 …
#> e  0.013977632 -0.024427366  0.02062662  0.020062895  0.767347416 -0.005726924 …
#> f  0.001114962  0.005763776  0.01285631  0.001788722 -0.005726924  0.811360616 …
#> … 19 more rows and 19 more columns

## Another example with standardized data
X <- scale(X, center = TRUE, scale = TRUE)
OPT <- optPenalty.LOOCVauto(X, lambdaMin = .001, lambdaMax = 30, cor = TRUE,
                            target = default.target(covML(X, cor = TRUE))); OPT
#> $optLambda
#> [1] 1.706836
#> 
#> $optPrec
#> A 25 x 25 ridge precision matrix estimate with lambda = 1.706836
#>              a            b           c            d           e            f …
#> a  0.873376744  0.005931254 -0.10948686  0.020928299  0.02758587 -0.031070735 …
#> b  0.005931254  0.861084093 -0.05863234 -0.061263278 -0.11026791  0.013545902 …
#> c -0.109486858 -0.058632336  0.92644720 -0.109288247  0.04836945  0.035669003 …
#> d  0.020928299 -0.061263278 -0.10928825  0.873288367  0.05739944 -0.008329021 …
#> e  0.027585872 -0.110267908  0.04836945  0.057399442  0.90730969 -0.041836580 …
#> f -0.031070735  0.013545902  0.03566900 -0.008329021 -0.04183658  0.864574960 …
#> … 19 more rows and 19 more columns
#> 
OPT$optLambda # Optimal penalty
#> [1] 1.706836
OPT$optPrec   # Regularized precision under optimal penalty
#> A 25 x 25 ridge precision matrix estimate with lambda = 1.706836
#>              a            b           c            d           e            f …
#> a  0.873376744  0.005931254 -0.10948686  0.020928299  0.02758587 -0.031070735 …
#> b  0.005931254  0.861084093 -0.05863234 -0.061263278 -0.11026791  0.013545902 …
#> c -0.109486858 -0.058632336  0.92644720 -0.109288247  0.04836945  0.035669003 …
#> d  0.020928299 -0.061263278 -0.10928825  0.873288367  0.05739944 -0.008329021 …
#> e  0.027585872 -0.110267908  0.04836945  0.057399442  0.90730969 -0.041836580 …
#> f -0.031070735  0.013545902  0.03566900 -0.008329021 -0.04183658  0.864574960 …
#> … 19 more rows and 19 more columns