>> c=[8.1;10.8]

c =
    8.1000
   10.8000

>> A=[0.8 0.44;0.05 0.1;0.1 0.36]

A =
    0.8000    0.4400
    0.0500    0.1000
    0.1000    0.3600

>> b=[24000; 2000; 6000]

b =
       24000
        2000
        6000

>> op=optimset('linprog','LargeScale','off')
??? Error using ==> optimset
Arguments must occur in name-value pairs.

>> op=optimset('linprog');
>> op=optimset(op,'LargeScale','off')

op =
       ActiveConstrTol: []
       DerivativeCheck: []
           Diagnostics: 'off'
         DiffMaxChange: []
         DiffMinChange: []
               Display: 'final'
     GoalsExactAchieve: []
            GradConstr: []
               GradObj: []
               Hessian: []
              HessMult: []
           HessPattern: []
            HessUpdate: []
              Jacobian: []
             JacobMult: []
          JacobPattern: []
            LargeScale: 'off'
    LevenbergMarquardt: []
        LineSearchType: []
           MaxFunEvals: []
               MaxIter: 85
            MaxPCGIter: []
         MeritFunction: []
             MinAbsMax: []
        Preconditioner: []
      PrecondBandWidth: []
      ShowStatusWindow: []
                TolCon: []
                TolFun: 1.0000e-008
                TolPCG: []
                  TolX: []
              TypicalX: []

>> help linprog

 LINPROG Linear programming.
    X=LINPROG(f,A,b) solves the linear programming problem:

         min f'*x    subject to:  A*x <= b
          x

    X=LINPROG(f,A,b,Aeq,beq) solves the problem above while additionally
    satisfying the equality constraints Aeq*x = beq.

    X=LINPROG(f,A,b,Aeq,beq,LB,UB) defines a set of lower and upper
    bounds on the design variables, X, so that the solution is in the
    range LB <= X <= UB. Use empty matrices for LB and UB if no bounds
    exist. Set LB(i) = -Inf if X(i) is unbounded below; set
    UB(i) = Inf if X(i) is unbounded above.

    X=LINPROG(f,A,b,Aeq,beq,LB,UB,X0) sets the starting point to X0.
    This option is only available with the active-set algorithm. The
    default interior point algorithm will ignore any non-empty starting
    point.

    X=LINPROG(f,A,b,Aeq,beq,LB,UB,X0,OPTIONS) minimizes with the default
    optimization parameters replaced by values in the structure OPTIONS,
    an argument created with the OPTIMSET function. See OPTIMSET for
    details. Options are Display, Diagnostics, TolFun, LargeScale,
    MaxIter.
    Currently, only 'final' and 'off' are valid values for the parameter
    Display when LargeScale is 'off' ('iter' is valid when LargeScale
    is 'on').

    [X,FVAL]=LINPROG(f,A,b) returns the value of the objective function
    at X: FVAL = f'*X.

    [X,FVAL,EXITFLAG] = LINPROG(f,A,b) returns EXITFLAG that describes
    the exit condition of LINPROG. If EXITFLAG is:
       > 0 then LINPROG converged with a solution X.
         0 then LINPROG reached the maximum number of iterations without
           converging.
       < 0 then the problem was infeasible or LINPROG failed.

    [X,FVAL,EXITFLAG,OUTPUT] = LINPROG(f,A,b) returns a structure OUTPUT
    with the number of iterations taken in OUTPUT.iterations, the type
    of algorithm used in OUTPUT.algorithm, the number of conjugate
    gradient iterations (if used) in OUTPUT.cgiterations.

    [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(f,A,b) returns the set of
    Lagrangian multipliers LAMBDA, at the solution: LAMBDA.ineqlin for
    the linear inequalities A, LAMBDA.eqlin for the linear equalities
    Aeq, LAMBDA.lower for LB, and LAMBDA.upper for UB.

    NOTE: the LargeScale (the default) version of LINPROG uses a
    primal-dual method. Both the primal problem and the dual problem
    must be feasible for convergence. Infeasibility messages of either
    the primal or dual, or both, are given as appropriate. The primal
    problem in standard form is
         min f'*x such that A*x = b, x >= 0.
    The dual problem is
         max b'*y such that A'*y + s = f, s >= 0.

>> [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(c,A,b,[],[],[0 0],[],[],op)
Optimization terminated successfully.

X =
     0
     0

FVAL =
     0

EXITFLAG =
     1

OUTPUT =
       iterations: 2
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDA =
      lower: [2x1 double]
      upper: [2x1 double]
      eqlin: [0x1 double]
    ineqlin: [3x1 double]

>> [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(-c,A,b,[],[],[0 0],[],[],op)
Optimization terminated successfully.
X =
  1.0e+004 *
    2.6207
    0.6897

FVAL =
  -2.8676e+005

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDA =
      lower: [2x1 double]
      upper: [2x1 double]
      eqlin: [0x1 double]
    ineqlin: [3x1 double]

>> LAMBA.lower
??? Undefined function or variable 'LAMBA.lower'.

>> LAMBDA.lower

ans =
     0
     0

>> LAMBDA.ineqlin

ans =
    4.6552
   87.5172
         0

>> 2400*4.6552

ans =
  1.1172e+004

>> 200*87.5172

ans =
  1.7503e+004

>> [U,FVAL,EXITFLAG,OUTPUT,LAMBDAU]=LINPROG(b,-A',-c,[],[],[0 0],[],[],op)
Optimization terminated successfully.

U =
         0
  229.5000
  -33.7500

FVAL =
  2.5650e+005

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDAU =
      lower: [3x1 double]
      upper: [3x1 double]
      eqlin: [0x1 double]
    ineqlin: [2x1 double]

>> [U,FVAL,EXITFLAG,OUTPUT,LAMBDAU]=LINPROG(b,-A',c,[],[],[0 0],[],[],op)
Optimization terminated successfully.

U =
     0
     0
   -30

FVAL =
  -180000

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDAU =
      lower: [3x1 double]
      upper: [3x1 double]
      eqlin: [0x1 double]
    ineqlin: [2x1 double]

>> [U,FVAL,EXITFLAG,OUTPUT,LAMBDAU]=LINPROG(-b,-A',-c,[],[],[0 0],[],[],op)
Exiting: The solution is unbounded and at infinity;
         the constraints are not restrictive enough.

U =
  1.0e+015 *
    9.6699
    0.8058
    2.4175

FVAL =
  -2.4819e+020

EXITFLAG =
    -1

OUTPUT =
       iterations: 1
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDAU =
      lower: [3x1 double]
      upper: [3x1 double]
      eqlin: [0x1 double]
    ineqlin: [2x1 double]

>> [U,FVAL,EXITFLAG,OUTPUT,LAMBDAU]=LINPROG(b,-A',-c,[],[],[0 0],[],[],op)
Optimization terminated successfully.
U =
         0
  229.5000
  -33.7500

FVAL =
  2.5650e+005

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDAU =
      lower: [3x1 double]
      upper: [3x1 double]
      eqlin: [0x1 double]
    ineqlin: [2x1 double]

>> [U,FVAL,EXITFLAG,OUTPUT,LAMBDAU]=LINPROG(b,-A',-c,[],[],[0 0 0],[],[],op)
Optimization terminated successfully.

U =
    4.6552
   87.5172
         0

FVAL =
  2.8676e+005

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDAU =
      lower: [3x1 double]
      upper: [3x1 double]
      eqlin: [0x1 double]
    ineqlin: [2x1 double]

>> c=[6.1;10.8]

c =
    6.1000
   10.8000

>> [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(-c,A,b,[],[],[0 0],[],[],op)
Optimization terminated successfully.

X =
  1.0e+004 *
    2.6207
    0.6897

FVAL =
  -2.3434e+005

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDA =
      lower: [2x1 double]
      upper: [2x1 double]
      eqlin: [0x1 double]
    ineqlin: [3x1 double]

>> LAMBDA.ineqlin

ans =
    1.2069
  102.6897
         0

>> c=[6.1;15.8]

c =
    6.1000
   15.8000

>> [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(-c,A,b,[],[],[0 0],[],[],op)
Optimization terminated successfully.

X =
  1.0e+004 *
    1.5000
    1.2500

FVAL =
  -289000

EXITFLAG =
     1

OUTPUT =
       iterations: 2
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDA =
      lower: [2x1 double]
      upper: [2x1 double]
      eqlin: [0x1 double]
    ineqlin: [3x1 double]

>> LAMBDA.ineqlin

ans =
         0
   77.0000
   22.5000

>> X

X =
  1.0e+004 *
    1.5000
    1.2500

>> FVAL

FVAL =
  -289000

>> c=[8.1;10.8]

c =
    8.1000
   10.8000

>> [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(-c,A,b,[],[],[0 0],[],[],op)´
??? [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(-c,A,b,[],[],[0 0],[],[],op)´
                                                                        |
Missing operator, comma, or semi-colon.

>> [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(-c,A,b,[],[],[0 0],[],[],op)
Optimization terminated successfully.
X =
  1.0e+004 *
    2.6207
    0.6897

FVAL =
  -2.8676e+005

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDA =
      lower: [2x1 double]
      upper: [2x1 double]
      eqlin: [0x1 double]
    ineqlin: [3x1 double]

>> LAMBDA.ineqlin

ans =
    4.6552
   87.5172
         0

>> b=[26400; 2000; 6000]

b =
       26400
        2000
        6000

>> [Xg,FVALg,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(-c,A,b,[],[],[0 0],[],[],op)
Optimization terminated successfully.

Xg =
  1.0e+004 *
    3.0345
    0.4828

FVALg =
  -2.9793e+005

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDA =
      lower: [2x1 double]
      upper: [2x1 double]
      eqlin: [0x1 double]
    ineqlin: [3x1 double]

>> FVAL

FVAL =
  -2.8676e+005

>> FVALg

FVALg =
  -2.9793e+005

>> b=[24000; 2200; 6000]

b =
       24000
        2200
        6000

>> [Xq,FVALq,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(-c,A,b,[],[],[0 0],[],[],op)
Optimization terminated successfully.

Xq =
  1.0e+004 *
    2.4690
    0.9655

FVALq =
  -3.0426e+005

EXITFLAG =
     1

OUTPUT =
       iterations: 3
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDA =
      lower: [2x1 double]
      upper: [2x1 double]
      eqlin: [0x1 double]
    ineqlin: [3x1 double]

>> [FVAL FVALg FVALq]

ans =
  1.0e+005 *
   -2.8676   -2.9793   -3.0426

>> c=[1;2]

c =
     1
     2

>> A=[-3 -4;1 1]

A =
    -3    -4
     1     1

>> b=[-5;4]

b =
    -5
     4

>> [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(c,A,b,[],[],[0 0],[],[0 0],op)
Optimization terminated successfully.

X =
    1.6667
   -0.0000

FVAL =
    1.6667

EXITFLAG =
     1

OUTPUT =
       iterations: 2
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDA =
      lower: [2x1 double]
      upper: [2x1 double]
      eqlin: [0x1 double]
    ineqlin: [2x1 double]

>> c=[1;1]

c =
     1
     1

>> Aeq=[3 4 -1 1;1 1 1 1]

Aeq =
     3     4    -1     1
     1     1     1     1

>> beq=[5;4]

beq =
     5
     4

>> [V,FVALv,EXITFLAG,OUTPUT,LAMBDAv]=LINPROG(c,[],[],Aeq,beq,[0 0 0 0],[],[],op)
??? Error using ==> *
Inner matrix dimensions must agree.
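The inequality-constrained runs above verified two classical LP facts numerically: the primal optimum FVAL = 2.8676e+005 equals the dual optimum from LINPROG(b,-A',-c,...) with LB=[0 0 0] (strong duality), and the objective changes under the perturbations of b match the multipliers LAMBDA.ineqlin read as shadow prices (2400*4.6552 and 200*87.5172). As an aside, before the equality-constrained example is repaired below, the same checks can be sketched outside MATLAB; this is a minimal Python version assuming SciPy is available (SciPy is not part of the original session):

```python
import numpy as np
from scipy.optimize import linprog

# LP from the session: max 8.1*x1 + 10.8*x2  s.t.  A*x <= b, x >= 0.
# SciPy's linprog minimizes, so the objective is negated, exactly as
# the session does by calling LINPROG with -c.
c = np.array([8.1, 10.8])
A = np.array([[0.80, 0.44],
              [0.05, 0.10],
              [0.10, 0.36]])
b = np.array([24000.0, 2000.0, 6000.0])

def profit(rhs):
    # maximum profit for resource vector rhs (sign flipped back)
    return -linprog(-c, A_ub=A, b_ub=rhs, bounds=[(0, None)] * 2).fun

# Dual LP: min b'*y  s.t.  A'*y >= c, y >= 0, written as -A'*y <= -c
# for the A_ub/b_ub interface (the session's LINPROG(b,-A',-c,...)
# call with LB=[0 0 0]).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

base = profit(b)
print(base, dual.fun)   # strong duality: both ~ 2.8676e+05
print(dual.x)           # ~ [4.6552, 87.5172, 0] = LAMBDA.ineqlin

# Shadow-price check: objective change ~ multiplier * change in b
d1 = profit(b + [2400, 0, 0]) - base   # ~ 2400*4.6552 = 1.1172e+04
d2 = profit(b + [0, 200, 0]) - base    # ~  200*87.5172 = 1.7503e+04
print(d1, d2)
```

The sign conventions are the only delicate part: SciPy, like LINPROG, only minimizes and only accepts `A_ub*x <= b_ub`, so maximization and `>=` constraints are handled by negation.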
Error in ==> C:\Apps\Matlab\toolbox\optim\private\qpsub.m
On line 289  ==>    SD=-Z*Z'*gf;

Error in ==> C:\Apps\Matlab\toolbox\optim\linprog.m
On line 160  ==>    [x,lambdaqp,exitflag,output]= ...

>> c=[0;0;0;0;1;1]

c =
     0
     0
     0
     0
     1
     1

>> Aeq=[3 4 -1 0 1 0;1 1 0 1 0 1]

Aeq =
     3     4    -1     0     1     0
     1     1     0     1     0     1

>> beq=[5;4]

beq =
     5
     4

>> [V,FVALv,EXITFLAG,OUTPUT,LAMBDAv]=LINPROG(c,[],[],Aeq,beq,[0 0 0 0 0 0],[],[],op)
Optimization terminated successfully.

V =
         0
    1.2500
         0
    2.7500
         0
         0

FVALv =
     0

EXITFLAG =
     1

OUTPUT =
       iterations: 2
        algorithm: 'medium-scale: activeset'
    firstorderopt: []
     cgiterations: []

LAMBDAv =
      lower: [6x1 double]
      upper: [6x1 double]
      eqlin: [2x1 double]
    ineqlin: [0x1 double]

>> X

X =
    1.6667
   -0.0000

>> U(1:2)

ans =
    4.6552
   87.5172

>> V(1:2)

ans =
         0
    1.2500

>> X

X =
    1.6667
   -0.0000

>> edit karmarkar
>> disp(which('karmarkar'))
v:\cursos\pos\otimiza\aulas\karmarkar.m
>> quit

>> help bfgs

  Unconstrained optimization using BFGS.

  [xo,Ot,nS]=bfgs(S,x0,ip,G,method,Lb,Ub,problem,tol,mxit)

  S: objective function
  x0: initial point
  ip: (0) no plot (default), (>0) plot figure ip with pause, (<0) plot figure ip
  G: gradient vector function
  method: line-search method: (0) quadratic+cubic (default), (1) cubic
  Lb, Ub: lower and upper bound vectors to plot (default = x0*(1+/-2))
  problem: (-1): minimum (default), (1): maximum
  tol: tolerance (default = 1e-4)
  mxit: maximum number of iterations (default = 50*(1+4*~(ip>0)))
  xo: optimal point
  Ot: optimal value of S
  nS: number of objective function evaluations

>> [xo,Ot,nS]=bfgs('test1',[-1.9 2],1,'gtest1');
Pause: hit any key to continue...

                                                     Directional
 Iteration  Func-count       f(x)        Step-size    derivative
     1           0          267.62         0.001      -1.62e+006
     2           4         6.03794     0.000360666    -1.04e+004
     3           9         5.98743      0.0965279      -0.000505
     4          13         5.40846       4.68383        -0.0605
     5          17         4.92028      0.711958         -0.445
     6          22         3.24268       2.43564          -0.36
     7          26         2.50923       2.67761        -0.0841
     8          30         2.16838      0.169625         -0.716
     9          34         1.80258      0.291737         -0.131
    10          38         1.70097      0.827231       -0.00481
    11          43        0.989729       3.24197        -0.0992
    12          47         0.91702      0.657718     -5.35e-005
    13          51        0.694465      0.618833        -0.0302
    14          55         0.62892      0.976686       -0.00195
    15          61        0.164357       4.56498        0.00461
    16          65        0.151537       0.13408     -2.35e-005
    17          69       0.0969989       2.21491        -0.0146
    18          73       0.0754868      0.554223        -0.0122
    19          78       0.0195495       2.37813       -0.00809
    20          82       0.0150854      0.475979     -2.31e-010
    21          86      0.00526284       1.21915       -0.00189
    22          91      0.00179369       1.66767     -1.92e-005
    23          95     0.000313689      0.611599     -5.16e-010
    24         100    1.08378e-005       1.57231     -1.44e-006
Optimization terminated successfully:
 Current search direction is a descent direction, and magnitude of
 directional derivative in search direction less than 2*options.TolFun
Pause: hit any key to continue...

>> nS

nS =
   101

>> Ot

Ot =
  1.8495e-006

>> xo

xo =
    0.9998    0.9998

>> type restr1

function [g,h]=restr1(x)
% constraint added to the Rosenbrock function
% x0 = [-1.2, 1]'
% xo = [1, 1]'
% S(xo) = 0
g=x(1).^2+x(2).^2-2;
h=[];

%op=optimset('LargeScale','off','MaxFunEvals',inf,'MaxIter',200,'Display','iter',...
%          'TolFun',1e-6,'TolX',1e-6,'LineSearch','quadcubic','TolCon',1e-6);
%[x,S,ex,out,lambda]=fmincon('test1',[-1.9 2],[],[],[],[],[],[],'restr1',op)

>> help sqp

  Constrained optimization using SQP to solve problems like:

     min S(x)   subject to: g(x) <= 0, h(x) = 0 (nonlinear constraints)
      x                     Lb <= x <= Ub

  [xo,Ot,nS,lambda]=sqp(S,Res,x0,ip,Gr,linesearch,Lb,Ub,problem,tol,mxit)

  S: objective function
  Res: constraint function returning [g(x),h(x)]
  x0: initial point
  ip: (0) no plot (default), (>0) plot figure ip with pause, (<0) plot figure ip
  Gr: gradient vector function of S(x)
  linesearch: (0) quadratic+cubic (default), (1) cubic
  Lb, Ub: lower and upper bound vectors [used also to plot (plot default = x0*(1+/-2))]
  problem: (-1): minimum (default), (1): maximum
  tol: tolerance (default = 1e-6)
  mxit: maximum number of iterations (default = 50*(1+4*~(ip>0)))
  xo: optimal point
  Ot: optimal value of S
  nS: number of objective function evaluations
  lambda: Lagrange multipliers

>> [xo,Ot,nS,lambda]=sqp('test1','restr1',[-1.9 2],1)
Pause: hit any key to continue...

                                  max                   Directional
 Iter  F-count      f(x)      constraint   Step-size    derivative    Procedure
    1      3       267.62        5.61          1        -1.62e+006
    2     16       565.61        5.162       0.00195    -6.32e+005
    3     29      397.898        3.254       0.00195    -1.77e+003
    4     33      141.346        1.367          1             -136
    5     37      37.3103        0.2034         1            -75.1
    6     41    0.0553945       -0.3941         1           -0.111
    7     45   0.00523106       -0.381          1        -0.000944
    8     49   0.00475214       -0.3818         1       -1.08e-005   Hessian modified
    9     53   0.00474145       -0.3814         1        -0.000942   Hessian modified
   10     57   0.00404631       -0.3469         1         -0.00109   Hessian modified
   11     61    0.0032356       -0.2895         1         -0.00147   Hessian modified
   12     65   0.00215099       -0.2059         1         -0.00185   Hessian modified
   13     69  0.000809399       -0.1128         1         -0.00121   Hessian modified
   14     73  8.17019e-005      -0.05096        1        -0.000128   Hessian modified
   15     77  4.76847e-006      -0.0113         1       -8.81e-006   Hessian modified
   16     81  3.36076e-008      -0.0009993      1        -6.7e-008   Hessian modified
Optimization terminated successfully:
 Magnitude of directional derivative in search direction less than
 2*options.TolFun and maximum constraint violation is less than
 options.TolCon
Active Constraints:
     1
Pause: hit any key to continue...

xo =
    0.9999    0.9998

Ot =
  8.7239e-009

nS =
    84

lambda =
         lower: [2x1 double]
         upper: [2x1 double]
         eqlin: [1x0 double]
      eqnonlin: [1x0 double]
       ineqlin: [1x0 double]
    ineqnonlin: 7.2543e-007

>> quit
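Both runs drive the test function to the minimizer (1,1): bfgs unconstrained, and sqp with the constraint g(x) = x1^2 + x2^2 - 2 <= 0 from restr1, which is active at the solution (1^2 + 1^2 = 2) yet carries an essentially zero multiplier (ineqnonlin = 7.2543e-007). Assuming 'test1' is the Rosenbrock function, as its minimizer and the comments in restr1 suggest, the same pair of runs can be sketched with SciPy (SciPy and its SLSQP method are stand-ins here, not the course's own bfgs/sqp codes):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.9, 2.0])   # starting point used in the session

# Unconstrained run, analogous to bfgs('test1',[-1.9 2],1,'gtest1'):
unc = minimize(rosen, x0, jac=rosen_der, method='BFGS')

# Constrained run, analogous to sqp('test1','restr1',[-1.9 2],1).
# restr1 imposes g(x) = x1^2 + x2^2 - 2 <= 0; SciPy's SLSQP expects
# inequality constraints as fun(x) >= 0, hence the sign flip.
ball = {'type': 'ineq', 'fun': lambda x: 2.0 - x[0]**2 - x[1]**2}
con = minimize(rosen, x0, jac=rosen_der, method='SLSQP',
               constraints=[ball])

print(unc.x)   # ~ [1, 1], matching xo = 0.9998 0.9998
print(con.x)   # ~ [1, 1]; the constraint is active there (1 + 1 = 2)
```

That the constrained and unconstrained minimizers coincide is the reason for the near-zero multiplier: the unconstrained minimum of Rosenbrock happens to lie exactly on the boundary of the feasible disc, so the constraint is active but not binding in the objective's descent direction.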