

25.4 Linear Least Squares

Octave also supports linear least squares minimization. That is, Octave can find the parameter b such that the model y = x*b fits data (x,y) as well as possible, assuming zero-mean Gaussian noise. If the noise is assumed to be isotropic the problem can be solved using the ‘\’ or ‘/’ operators, or the ols function. In the general case where the noise is assumed to be anisotropic the gls function is needed.
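For example, fitting a straight line under the isotropic-noise assumption with the ‘\’ operator (a minimal sketch; the data are made up purely for illustration):

  t = (1:10)';                      # made-up sample points
  y = 2 + 3*t + 0.1*randn (10, 1);  # noisy observations of a line
  x = [ones(10, 1), t];             # design matrix: intercept and slope columns
  b = x \ y;                        # least-squares estimate of [intercept; slope]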

[beta, sigma, r] = ols (y, x)

Ordinary least squares (OLS) estimation.

OLS applies to the multivariate model y = x*b + e, where y is a t-by-p matrix, x is a t-by-k matrix, b is a k-by-p matrix, and e is a t-by-p matrix.

Each row of y is a p-variate observation in which each column represents a variable. Likewise, the rows of x represent k-variate observations or possibly designed values. Furthermore, the collection of observations x must be of adequate rank, k, otherwise b cannot be uniquely estimated.

The observation errors, e, are assumed to originate from an underlying p-variate distribution with zero mean and p-by-p covariance matrix S, both constant conditioned on x. Furthermore, the matrix S is constant with respect to each observation such that mean (e) = 0 and cov (vec (e)) = kron (S, I). (For cases that do not meet this criterion, such as autocorrelated errors, see generalized least squares, gls, for more efficient estimations.)

The return values beta, sigma, and r are defined as follows.

beta

The OLS estimator for matrix b. beta is calculated directly via inv (x'*x) * x' * y if the matrix x'*x is of full rank. Otherwise, beta = pinv (x) * y where pinv (x) denotes the pseudoinverse of x.

sigma

The OLS estimator for the matrix S,

sigma = (y-x*beta)' * (y-x*beta) / (t-rank(x))

r

The matrix of OLS residuals, r = y - x*beta.
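
For instance, a sketch of a univariate (p = 1) fit with made-up data:

  x = [ones(20, 1), randn(20, 2)];          # t = 20 observations, k = 3 regressors
  y = x * [1; 2; -1] + 0.5*randn (20, 1);   # p = 1 response variable
  [beta, sigma, r] = ols (y, x);            # beta is 3x1, sigma is 1x1, r is 20x1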

See also: gls, pinv.

[beta, v, r] = gls (y, x, o)

Generalized least squares (GLS) model.

Perform a generalized least squares estimation for the multivariate model y = x*b + e, where y is a t-by-p matrix, x is a t-by-k matrix, b is a k-by-p matrix, and e is a t-by-p matrix.

Each row of y is a p-variate observation in which each column represents a variable. Likewise, the rows of x represent k-variate observations or possibly designed values. Furthermore, the collection of observations x must be of adequate rank, k, otherwise b cannot be uniquely estimated.

The observation errors, e, are assumed to originate from an underlying p-variate distribution with zero mean but possibly heteroscedastic observations. That is, in general, mean (e) = 0 and cov (vec (e)) = (s^2)*o in which s is a scalar and o is a t*p-by-t*p matrix.

The return values beta, v, and r are defined as follows.

beta

The GLS estimator for matrix b.

v

The GLS estimator for scalar s^2.

r

The matrix of GLS residuals, r = y - x*beta.
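
For instance, a sketch of a call with an assumed (made-up) heteroscedastic error covariance structure o, for p = 1 so that o is t-by-t:

  t = 30;
  x = [ones(t, 1), (1:t)'];
  o = diag (linspace (1, 5, t));                        # assumed error covariance structure
  y = x * [2; 0.5] + sqrt (diag (o)) .* randn (t, 1);   # noise variance grows with t
  [beta, v, r] = gls (y, x, o);                         # beta: estimate of b, v: estimate of s^2, r: residuals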

See also: ols.

x = lsqnonneg (c, d)
x = lsqnonneg (c, d, x0)
x = lsqnonneg (c, d, x0, options)
[x, resnorm] = lsqnonneg (…)
[x, resnorm, residual] = lsqnonneg (…)
[x, resnorm, residual, exitflag] = lsqnonneg (…)
[x, resnorm, residual, exitflag, output] = lsqnonneg (…)
[x, resnorm, residual, exitflag, output, lambda] = lsqnonneg (…)

Minimize norm (c*x - d) subject to x >= 0.

c and d must be real matrices.

x0 is an optional initial guess for the solution x.

options is an options structure to change the behavior of the algorithm (see optimset). lsqnonneg recognizes these options: "MaxIter", "TolX".

Outputs:

resnorm

The squared 2-norm of the residual: norm (c*x - d)^2

residual

The residual: d - c*x

exitflag

An indicator of convergence. 0 indicates that the iteration count was exceeded, and therefore convergence was not reached; >0 indicates that the algorithm converged. (The algorithm is stable and will converge given enough iterations.)

output

A structure with two fields:

  • "algorithm": The algorithm used ("nnls")
  • "iterations": The number of iterations taken.
lambda

Undocumented output
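
For instance, a minimal sketch with made-up data (the starting point and option value are arbitrary):

  c = [1, 0; 0, 1; 1, 1];
  d = [2; -1; 1];
  opts = optimset ("MaxIter", 50);
  [x, resnorm, residual] = lsqnonneg (c, d, zeros (2, 1), opts);  # x >= 0 componentwise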

See also: pqpnonneg, lscov, optimset.

x = lscov (A, b)
x = lscov (A, b, V)
x = lscov (A, b, V, alg)
[x, stdx, mse, S] = lscov (…)

Compute a generalized linear least squares fit.

Estimate x under the model b = A*x + w, where the noise w is assumed to follow a normal distribution with covariance matrix (sigma^2) * V.

If the size of the coefficient matrix A is n-by-p, the size of the vector/array of constant terms b must be n-by-k.

The optional input argument V may be an n-element vector of positive weights (inverse variances), or an n-by-n symmetric positive semi-definite matrix representing the covariance of b. If V is not supplied, the ordinary least squares solution is returned.

The alg input argument, which gives guidance on the solution method to use, is currently ignored.

Besides the least-squares estimate matrix x (p-by-k), the function also returns stdx (p-by-k), the error standard deviation of the estimated x; mse (k-by-1), the estimated data error covariance scale factors (sigma^2); and S (p-by-p, or p-by-p-by-k if k > 1), the error covariance of x.
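For instance, a sketch of a weighted fit, with the data and weights made up for illustration:

  A = [ones(5, 1), (1:5)'];
  b = [1.1; 1.9; 3.2; 3.9; 5.1];
  w = [1; 1; 1; 4; 4];               # larger weight = smaller assumed variance for that observation
  [x, stdx, mse] = lscov (A, b, w);  # weighted least-squares estimate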

Reference: Golub and Van Loan (1996), Matrix Computations (3rd Ed.), Johns Hopkins, Section 5.6.3

See also: ols, gls, lsqnonneg.

optimset ()
options = optimset ()
options = optimset (par, val, …)
options = optimset (old, par, val, …)
options = optimset (old, new)

Create options structure for optimization functions.

When called without any input or output arguments, optimset prints a list of all valid optimization parameters.

When called with one output and no inputs, return an options structure with all valid option parameters initialized to [].

When called with a list of parameter/value pairs, return an options structure with only the named parameters initialized.

When the first input is an existing options structure old, the values are updated from either the par/val list or from the options structure new.

Valid parameters are:

AutoScaling
ComplexEqn
Display

Request verbose display of results from optimizations. Values are:

"off" [default]

No display.

"iter"

Display intermediate results for every loop iteration.

"final"

Display the result of the final loop iteration.

"notify"

Display the result of the final loop iteration if the function has failed to converge.

FinDiffType
FunValCheck

When enabled, display an error if the objective function returns an invalid value (a complex number, NaN, or Inf). Must be set to "on" or "off" [default]. Note: the functions fzero and fminbnd correctly handle Inf values and only complex values or NaN will cause an error in this case.

GradObj

When set to "on", the function to be minimized must return a second argument which is the gradient, or first derivative, of the function at the point 가로. If set to "off" [default], the gradient is computed via finite differences.

Jacobian

When set to "on", the function to be minimized must return a second argument which is the Jacobian, or first derivative, of the function at the point 가로. If set to "off" [default], the Jacobian is computed via finite differences.

MaxFunEvals

Maximum number of function evaluations before optimization stops. Must be a positive integer.

MaxIter

Maximum number of algorithm iterations before optimization stops. Must be a positive integer.

OutputFcn

A user-defined function executed once per algorithm iteration.

TolFun

Termination criterion for the function output. If the difference in the calculated objective function between one algorithm iteration and the next is less than TolFun the optimization stops. Must be a positive scalar.

TolX

Termination criterion for the function input. If the difference in x, the current search point, between one algorithm iteration and the next is less than TolX the optimization stops. Must be a positive scalar.

TypicalX
Updating
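
For example, a small sketch that builds and then updates an options structure (parameter values chosen arbitrarily):

  opts = optimset ("TolX", 1e-8, "MaxIter", 200);  # only the named parameters are set
  opts = optimset (opts, "Display", "iter");       # update an existing structure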

See also: optimget.

optimget (options, parname)
optimget (options, parname, default)

Return the specific option parname from the optimization options structure options created by optimset.

If parname is not defined then return default if supplied, otherwise return an empty matrix.
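
For example (values chosen arbitrarily):

  opts = optimset ("TolX", 1e-8);
  optimget (opts, "TolX")           # returns 1e-8
  optimget (opts, "MaxIter", 200)   # "MaxIter" is unset, so the supplied default 200 is returned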

See also: optimset.

