Book a Demo!
CoCalc Logo Icon
StoreFeaturesDocsShareSupportNewsAboutPoliciesSign UpSign In
sagemath
GitHub Repository: sagemath/sagesmc
Path: blob/master/src/sage/numerical/optimize.py
8815 views
1
"""
2
Numerical Root Finding and Optimization
3
4
AUTHOR:
5
6
- William Stein (2007): initial version
7
- Nathann Cohen (2008) : Bin Packing
8
9
10
Functions and Methods
11
----------------------
12
"""
13
14
15
from sage.misc.superseded import deprecated_function_alias
16
from sage.modules.free_module_element import vector
17
from sage.rings.real_double import RDF
18
19
20
def find_root(f, a, b, xtol=10e-13, rtol=4.5e-16, maxiter=100, full_output=False):
21
"""
22
Numerically find a root of ``f`` on the closed interval `[a,b]`
23
(or `[b,a]`) if possible, where ``f`` is a function in the one variable.
24
Note: this function only works in fixed (machine) precision, it is not
25
possible to get arbitrary precision approximations with it.
26
27
INPUT:
28
29
- ``f`` -- a function of one variable or symbolic equality
30
31
- ``a``, ``b`` -- endpoints of the interval
32
33
- ``xtol``, ``rtol`` -- the routine converges when a root is known
34
to lie within ``xtol`` of the value return. Should be `\geq 0`.
35
The routine modifies this to take into account the relative precision
36
of doubles.
37
38
- ``maxiter`` -- integer; if convergence is not achieved in
39
``maxiter`` iterations, an error is raised. Must be `\geq 0`.
40
41
- ``full_output`` -- bool (default: ``False``), if ``True``, also return
42
object that contains information about convergence.
43
44
45
EXAMPLES:
46
47
An example involving an algebraic polynomial function::
48
49
sage: R.<x> = QQ[]
50
sage: f = (x+17)*(x-3)*(x-1/8)^3
51
sage: find_root(f, 0,4)
52
2.999999999999995
53
sage: find_root(f, 0,1) # note -- precision of answer isn't very good on some machines.
54
0.124999...
55
sage: find_root(f, -20,-10)
56
-17.0
57
58
In Pomerance's book on primes he asserts that the famous Riemann
59
Hypothesis is equivalent to the statement that the function `f(x)`
60
defined below is positive for all `x \geq 2.01`::
61
62
sage: def f(x):
63
... return sqrt(x) * log(x) - abs(Li(x) - prime_pi(x))
64
65
We find where `f` equals, i.e., what value that is slightly smaller
66
than `2.01` that could have been used in the formulation of the Riemann
67
Hypothesis::
68
69
sage: find_root(f, 2, 4, rtol=0.0001)
70
2.0082590205656166
71
72
This agrees with the plot::
73
74
sage: plot(f,2,2.01)
75
"""
76
try:
77
return f.find_root(a=a,b=b,xtol=xtol,rtol=rtol,maxiter=maxiter,full_output=full_output)
78
except AttributeError:
79
pass
80
a = float(a); b = float(b)
81
if a > b:
82
a, b = b, a
83
left = f(a)
84
right = f(b)
85
if left > 0 and right > 0:
86
# Refine further -- try to find a point where this
87
# function is negative in the interval
88
val, s = find_local_minimum(f, a, b)
89
if val > 0:
90
if val < rtol:
91
if full_output:
92
return s, "No extra data"
93
else:
94
return s
95
raise RuntimeError("f appears to have no zero on the interval")
96
# If we found such an s, then we just instead find
97
# a root between left and s or s and right.
98
a = s # arbitrary choice -- maybe should try both and take one that works?
99
100
elif left < 0 and right < 0:
101
# Refine further
102
val, s = find_local_maximum(f, a, b)
103
if val < 0:
104
if abs(val) < rtol:
105
if full_output:
106
return s, "No extra data"
107
else:
108
return s
109
raise RuntimeError("f appears to have no zero on the interval")
110
a = s
111
112
import scipy.optimize
113
return scipy.optimize.brentq(f, a, b,
114
full_output=full_output, xtol=xtol, rtol=rtol, maxiter=maxiter)
115
116
def find_local_maximum(f, a, b, tol=1.48e-08, maxfun=500):
117
"""
118
Numerically find a local maximum of the expression `f` on the interval
119
`[a,b]` (or `[b,a]`) along with the point at which the maximum is attained.
120
121
Note that this function only finds a *local* maximum, and not the
122
global maximum on that interval -- see the examples with
123
:func:`find_local_maximum`.
124
125
See the documentation for :func:`find_local_maximum` for more
126
details and possible workarounds for finding the global minimum on
127
an interval.
128
129
EXAMPLES::
130
131
sage: f = lambda x: x*cos(x)
132
sage: find_local_maximum(f, 0, 5)
133
(0.561096338191..., 0.8603335890...)
134
sage: find_local_maximum(f, 0, 5, tol=0.1, maxfun=10)
135
(0.561090323458..., 0.857926501456...)
136
sage: find_local_maximum(8*e^(-x)*sin(x) - 1, 0, 7)
137
(1.579175535558..., 0.7853981...)
138
"""
139
try:
140
return f.find_local_maximum(a=a, b=b, tol=tol, maxfun=maxfun)
141
except AttributeError:
142
pass
143
minval, x = find_local_minimum(lambda z: -f(z), a=a, b=b, tol=tol, maxfun=maxfun)
144
return -minval, x
145
146
def find_local_minimum(f, a, b, tol=1.48e-08, maxfun=500):
147
"""
148
Numerically find a local minimum of the expression ``f`` on the
149
interval `[a,b]` (or `[b,a]`) and the point at which it attains that
150
minimum. Note that ``f`` must be a function of (at most) one
151
variable.
152
153
Note that this function only finds a *local* minimum, and not the
154
global minimum on that interval -- see the examples below.
155
156
INPUT:
157
158
- ``f`` -- a function of at most one variable.
159
160
- ``a``, ``b`` -- endpoints of interval on which to minimize self.
161
162
- ``tol`` -- the convergence tolerance
163
164
- ``maxfun`` -- maximum function evaluations
165
166
167
OUTPUT:
168
169
- ``minval`` -- (float) the minimum value that self takes on in the
170
interval `[a,b]`
171
172
- ``x`` -- (float) the point at which self takes on the minimum value
173
174
175
EXAMPLES::
176
177
sage: f = lambda x: x*cos(x)
178
sage: find_local_minimum(f, 1, 5)
179
(-3.28837139559..., 3.4256184695...)
180
sage: find_local_minimum(f, 1, 5, tol=1e-3)
181
(-3.28837136189098..., 3.42575079030572...)
182
sage: find_local_minimum(f, 1, 5, tol=1e-2, maxfun=10)
183
(-3.28837084598..., 3.4250840220...)
184
sage: show(plot(f, 0, 20))
185
sage: find_local_minimum(f, 1, 15)
186
(-9.4772942594..., 9.5293344109...)
187
188
Only local minima are found; if you enlarge the interval, the
189
returned minimum may be *larger*! See :trac:`2607`.
190
191
::
192
193
sage: f(x) = -x*sin(x^2)
194
sage: find_local_minimum(f, -2.5, -1)
195
(-2.182769784677722, -2.1945027498534686)
196
197
Enlarging the interval returns a larger minimum::
198
199
sage: find_local_minimum(f, -2.5, 2)
200
(-1.3076194129914434, 1.3552111405712108)
201
202
One work-around is to plot the function and grab the minimum from
203
that, although the plotting code does not necessarily do careful
204
numerics (observe the small number of decimal places that we
205
actually test)::
206
207
sage: plot(f, (x,-2.5, -1)).ymin()
208
-2.1827...
209
sage: plot(f, (x,-2.5, 2)).ymin()
210
-2.1827...
211
212
ALGORITHM:
213
214
Uses `scipy.optimize.fminbound
215
<http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fminbound.html>`_
216
which uses Brent's method.
217
218
219
AUTHOR:
220
221
- William Stein (2007-12-07)
222
"""
223
try:
224
return f.find_local_minimum(a=a, b=b, tol=tol, maxfun=maxfun)
225
except AttributeError:
226
pass
227
a = float(a); b = float(b)
228
import scipy.optimize
229
xmin, fval, iter, funcalls = scipy.optimize.fminbound(f, a, b, full_output=1, xtol=tol, maxfun=maxfun)
230
return fval, xmin
231
232
233
find_maximum_on_interval = deprecated_function_alias(2607, find_local_maximum)
234
find_minimum_on_interval = deprecated_function_alias(2607, find_local_minimum)
235
236
237
def minimize(func,x0,gradient=None,hessian=None,algorithm="default",**args):
238
r"""
239
This function is an interface to a variety of algorithms for computing
240
the minimum of a function of several variables.
241
242
243
INPUT:
244
245
- ``func`` -- Either a symbolic function or a Python function whose
246
argument is a tuple with `n` components
247
248
- ``x0`` -- Initial point for finding minimum.
249
250
- ``gradient`` -- Optional gradient function. This will be computed
251
automatically for symbolic functions. For Python functions, it allows
252
the use of algorithms requiring derivatives. It should accept a
253
tuple of arguments and return a NumPy array containing the partial
254
derivatives at that point.
255
256
- ``hessian`` -- Optional hessian function. This will be computed
257
automatically for symbolic functions. For Python functions, it allows
258
the use of algorithms requiring derivatives. It should accept a tuple
259
of arguments and return a NumPy array containing the second partial
260
derivatives of the function.
261
262
- ``algorithm`` -- String specifying algorithm to use. Options are
263
``'default'`` (for Python functions, the simplex method is the default)
264
(for symbolic functions bfgs is the default):
265
266
- ``'simplex'``
267
268
- ``'powell'``
269
270
- ``'bfgs'`` -- (Broyden-Fletcher-Goldfarb-Shanno) requires
271
``gradient``
272
273
- ``'cg'`` -- (conjugate-gradient) requires gradient
274
275
- ``'ncg'`` -- (newton-conjugate gradient) requires gradient and hessian
276
277
278
EXAMPLES::
279
280
sage: vars=var('x y z')
281
sage: f=100*(y-x^2)^2+(1-x)^2+100*(z-y^2)^2+(1-y)^2
282
sage: minimize(f,[.1,.3,.4],disp=0)
283
(1.00..., 1.00..., 1.00...)
284
285
sage: minimize(f,[.1,.3,.4],algorithm="ncg",disp=0)
286
(0.9999999..., 0.999999..., 0.999999...)
287
288
Same example with just Python functions::
289
290
sage: def rosen(x): # The Rosenbrock function
291
... return sum(100.0r*(x[1r:]-x[:-1r]**2.0r)**2.0r + (1r-x[:-1r])**2.0r)
292
sage: minimize(rosen,[.1,.3,.4],disp=0)
293
(1.00..., 1.00..., 1.00...)
294
295
Same example with a pure Python function and a Python function to
296
compute the gradient::
297
298
sage: def rosen(x): # The Rosenbrock function
299
... return sum(100.0r*(x[1r:]-x[:-1r]**2.0r)**2.0r + (1r-x[:-1r])**2.0r)
300
sage: import numpy
301
sage: from numpy import zeros
302
sage: def rosen_der(x):
303
... xm = x[1r:-1r]
304
... xm_m1 = x[:-2r]
305
... xm_p1 = x[2r:]
306
... der = zeros(x.shape,dtype=float)
307
... der[1r:-1r] = 200r*(xm-xm_m1**2r) - 400r*(xm_p1 - xm**2r)*xm - 2r*(1r-xm)
308
... der[0] = -400r*x[0r]*(x[1r]-x[0r]**2r) - 2r*(1r-x[0])
309
... der[-1] = 200r*(x[-1r]-x[-2r]**2r)
310
... return der
311
sage: minimize(rosen,[.1,.3,.4],gradient=rosen_der,algorithm="bfgs",disp=0)
312
(1.00..., 1.00..., 1.00...)
313
"""
314
from sage.symbolic.expression import Expression
315
from sage.ext.fast_eval import fast_callable
316
import scipy
317
from scipy import optimize
318
if isinstance(func, Expression):
319
var_list=func.variables()
320
var_names=map(str,var_list)
321
fast_f=fast_callable(func, vars=var_names, domain=float)
322
f=lambda p: fast_f(*p)
323
gradient_list=func.gradient()
324
fast_gradient_functions=[fast_callable(gradient_list[i], vars=var_names, domain=float) for i in xrange(len(gradient_list))]
325
gradient=lambda p: scipy.array([ a(*p) for a in fast_gradient_functions])
326
else:
327
f=func
328
329
if algorithm=="default":
330
if gradient==None:
331
min=optimize.fmin(f,map(float,x0),**args)
332
else:
333
min= optimize.fmin_bfgs(f,map(float,x0),fprime=gradient,**args)
334
else:
335
if algorithm=="simplex":
336
min= optimize.fmin(f,map(float,x0),**args)
337
elif algorithm=="bfgs":
338
min= optimize.fmin_bfgs(f,map(float,x0),fprime=gradient,**args)
339
elif algorithm=="cg":
340
min= optimize.fmin_cg(f,map(float,x0),fprime=gradient,**args)
341
elif algorithm=="powell":
342
min= optimize.fmin_powell(f,map(float,x0),**args)
343
elif algorithm=="ncg":
344
if isinstance(func, Expression):
345
hess=func.hessian()
346
hess_fast= [ [fast_callable(a, vars=var_names, domain=float) for a in row] for row in hess]
347
hessian=lambda p: [[a(*p) for a in row] for row in hess_fast]
348
hessian_p=lambda p,v: scipy.dot(scipy.array(hessian(p)),v)
349
min= optimize.fmin_ncg(f,map(float,x0),fprime=gradient,fhess=hessian,fhess_p=hessian_p,**args)
350
return vector(RDF,min)
351
352
def minimize_constrained(func,cons,x0,gradient=None,algorithm='default', **args):
353
r"""
354
Minimize a function with constraints.
355
356
357
INPUT:
358
359
- ``func`` -- Either a symbolic function, or a Python function whose
360
argument is a tuple with n components
361
362
- ``cons`` -- constraints. This should be either a function or list of
363
functions that must be positive. Alternatively, the constraints can
364
be specified as a list of intervals that define the region we are
365
minimizing in. If the constraints are specified as functions, the
366
functions should be functions of a tuple with `n` components
367
(assuming `n` variables). If the constraints are specified as a list
368
of intervals and there are no constraints for a given variable, that
369
component can be (``None``, ``None``).
370
371
- ``x0`` -- Initial point for finding minimum
372
373
- ``algorithm`` -- Optional, specify the algorithm to use:
374
375
- ``'default'`` -- default choices
376
377
- ``'l-bfgs-b'`` -- only effective if you specify bound constraints.
378
See [ZBN97]_.
379
380
- ``gradient`` -- Optional gradient function. This will be computed
381
automatically for symbolic functions. This is only used when the
382
constraints are specified as a list of intervals.
383
384
385
EXAMPLES:
386
387
Let us maximize `x + y - 50` subject to the following constraints:
388
`50x + 24y \leq 2400`, `30x + 33y \leq 2100`, `x \geq 45`,
389
and `y \geq 5`::
390
391
sage: y = var('y')
392
sage: f = lambda p: -p[0]-p[1]+50
393
sage: c_1 = lambda p: p[0]-45
394
sage: c_2 = lambda p: p[1]-5
395
sage: c_3 = lambda p: -50*p[0]-24*p[1]+2400
396
sage: c_4 = lambda p: -30*p[0]-33*p[1]+2100
397
sage: a = minimize_constrained(f,[c_1,c_2,c_3,c_4],[2,3])
398
sage: a
399
(45.0, 6.25)
400
401
Let's find a minimum of `\sin(xy)`::
402
403
sage: x,y = var('x y')
404
sage: f = sin(x*y)
405
sage: minimize_constrained(f, [(None,None),(4,10)],[5,5])
406
(4.8..., 4.8...)
407
408
Check, if L-BFGS-B finds the same minimum::
409
410
sage: minimize_constrained(f, [(None,None),(4,10)],[5,5], algorithm='l-bfgs-b')
411
(4.7..., 4.9...)
412
413
Rosenbrock function, [http://en.wikipedia.org/wiki/Rosenbrock_function]::
414
415
sage: from scipy.optimize import rosen, rosen_der
416
sage: minimize_constrained(rosen, [(-50,-10),(5,10)],[1,1],gradient=rosen_der,algorithm='l-bfgs-b')
417
(-10.0, 10.0)
418
sage: minimize_constrained(rosen, [(-50,-10),(5,10)],[1,1],algorithm='l-bfgs-b')
419
(-10.0, 10.0)
420
421
422
REFERENCES:
423
424
.. [ZBN97] C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778:
425
L-BFGS-B, FORTRAN routines for large scale bound constrained
426
optimization. ACM Transactions on Mathematical Software, Vol 23, Num. 4,
427
pp.550--560, 1997.
428
"""
429
from sage.symbolic.expression import Expression
430
import scipy
431
from scipy import optimize
432
function_type=type(lambda x,y: x+y)
433
434
if isinstance(func, Expression):
435
var_list=func.variables()
436
var_names=map(str,var_list)
437
fast_f=func._fast_float_(*var_names)
438
f=lambda p: fast_f(*p)
439
gradient_list=func.gradient()
440
fast_gradient_functions=[gradient_list[i]._fast_float_(*var_names) for i in xrange(len(gradient_list))]
441
gradient=lambda p: scipy.array([ a(*p) for a in fast_gradient_functions])
442
else:
443
f=func
444
445
if isinstance(cons,list):
446
if isinstance(cons[0],tuple) or isinstance(cons[0],list) or cons[0]==None:
447
if gradient!=None:
448
if algorithm=='l-bfgs-b':
449
min= optimize.fmin_l_bfgs_b(f,x0,gradient,bounds=cons, iprint=-1, **args)[0]
450
else:
451
min= optimize.fmin_tnc(f,x0,gradient,bounds=cons,messages=0,**args)[0]
452
else:
453
if algorithm=='l-bfgs-b':
454
min= optimize.fmin_l_bfgs_b(f,x0,approx_grad=True,bounds=cons,iprint=-1, **args)[0]
455
else:
456
min= optimize.fmin_tnc(f,x0,approx_grad=True,bounds=cons,messages=0,**args)[0]
457
458
elif isinstance(cons[0],function_type):
459
min= optimize.fmin_cobyla(f,x0,cons,iprint=0,**args)
460
elif isinstance(cons, function_type):
461
min= optimize.fmin_cobyla(f,x0,cons,iprint=0,**args)
462
return vector(RDF,min)
463
464
465
def linear_program(c,G,h,A=None,b=None,solver=None):
466
"""
467
Solves the dual linear programs:
468
469
- Minimize `c'x` subject to `Gx + s = h`, `Ax = b`, and `s \geq 0` where
470
`'` denotes transpose.
471
472
- Maximize `-h'z - b'y` subject to `G'z + A'y + c = 0` and `z \geq 0`.
473
474
475
INPUT:
476
477
- ``c`` -- a vector
478
479
- ``G`` -- a matrix
480
481
- ``h`` -- a vector
482
483
- ``A`` -- a matrix
484
485
- ``b`` --- a vector
486
487
- ``solver`` (optional) --- solver to use. If None, the cvxopt's lp-solver
488
is used. If it is 'glpk', then glpk's solver
489
is used.
490
491
These can be over any field that can be turned into a floating point
492
number.
493
494
495
OUTPUT:
496
497
A dictionary ``sol`` with keys ``x``, ``s``, ``y``, ``z`` corresponding
498
to the variables above:
499
500
- ``sol['x']`` -- the solution to the linear program
501
502
- ``sol['s']`` -- the slack variables for the solution
503
504
- ``sol['z']``, ``sol['y']`` -- solutions to the dual program
505
506
507
EXAMPLES:
508
509
First, we minimize `-4x_1 - 5x_2` subject to `2x_1 + x_2 \leq 3`,
510
`x_1 + 2x_2 \leq 3`, `x_1 \geq 0`, and `x_2 \geq 0`::
511
512
sage: c=vector(RDF,[-4,-5])
513
sage: G=matrix(RDF,[[2,1],[1,2],[-1,0],[0,-1]])
514
sage: h=vector(RDF,[3,3,0,0])
515
sage: sol=linear_program(c,G,h)
516
sage: sol['x']
517
(0.999..., 1.000...)
518
519
Next, we maximize `x+y-50` subject to `50x + 24y \leq 2400`,
520
`30x + 33y \leq 2100`, `x \geq 45`, and `y \geq 5`::
521
522
sage: v=vector([-1.0,-1.0,-1.0])
523
sage: m=matrix([[50.0,24.0,0.0],[30.0,33.0,0.0],[-1.0,0.0,0.0],[0.0,-1.0,0.0],[0.0,0.0,1.0],[0.0,0.0,-1.0]])
524
sage: h=vector([2400.0,2100.0,-45.0,-5.0,1.0,-1.0])
525
sage: sol=linear_program(v,m,h)
526
sage: sol['x']
527
(45.000000..., 6.2499999...3, 1.00000000...)
528
sage: sol=linear_program(v,m,h,solver='glpk')
529
GLPK Simplex Optimizer...
530
OPTIMAL SOLUTION FOUND
531
sage: sol['x']
532
(45.0..., 6.25, 1.0...)
533
"""
534
from cvxopt.base import matrix as m
535
from cvxopt import solvers
536
solvers.options['show_progress']=False
537
if solver=='glpk':
538
from cvxopt import glpk
539
glpk.options['LPX_K_MSGLEV'] = 0
540
c_=m(c.base_extend(RDF).numpy())
541
G_=m(G.base_extend(RDF).numpy())
542
h_=m(h.base_extend(RDF).numpy())
543
if A!=None and b!=None:
544
A_=m(A.base_extend(RDF).numpy())
545
b_=m(b.base_extend(RDF).numpy())
546
sol=solvers.lp(c_,G_,h_,A_,b_,solver=solver)
547
else:
548
sol=solvers.lp(c_,G_,h_,solver=solver)
549
status=sol['status']
550
if status != 'optimal':
551
return {'primal objective':None,'x':None,'s':None,'y':None,
552
'z':None,'status':status}
553
x=vector(RDF,list(sol['x']))
554
s=vector(RDF,list(sol['s']))
555
y=vector(RDF,list(sol['y']))
556
z=vector(RDF,list(sol['z']))
557
return {'primal objective':sol['primal objective'],'x':x,'s':s,'y':y,
558
'z':z,'status':status}
559
560
561
def find_fit(data, model, initial_guess = None, parameters = None, variables = None, solution_dict = False):
562
r"""
563
Finds numerical estimates for the parameters of the function model to
564
give a best fit to data.
565
566
567
INPUT:
568
569
- ``data`` -- A two dimensional table of floating point numbers of the
570
form `[[x_{1,1}, x_{1,2}, \ldots, x_{1,k}, f_1],
571
[x_{2,1}, x_{2,2}, \ldots, x_{2,k}, f_2],
572
\ldots,
573
[x_{n,1}, x_{n,2}, \ldots, x_{n,k}, f_n]]` given as either a list of
574
lists, matrix, or numpy array.
575
576
- ``model`` -- Either a symbolic expression, symbolic function, or a
577
Python function. ``model`` has to be a function of the variables
578
`(x_1, x_2, \ldots, x_k)` and free parameters
579
`(a_1, a_2, \ldots, a_l)`.
580
581
- ``initial_guess`` -- (default: ``None``) Initial estimate for the
582
parameters `(a_1, a_2, \ldots, a_l)`, given as either a list, tuple,
583
vector or numpy array. If ``None``, the default estimate for each
584
parameter is `1`.
585
586
- ``parameters`` -- (default: ``None``) A list of the parameters
587
`(a_1, a_2, \ldots, a_l)`. If model is a symbolic function it is
588
ignored, and the free parameters of the symbolic function are used.
589
590
- ``variables`` -- (default: ``None``) A list of the variables
591
`(x_1, x_2, \ldots, x_k)`. If model is a symbolic function it is
592
ignored, and the variables of the symbolic function are used.
593
594
- ``solution_dict`` -- (default: ``False``) if ``True``, return the
595
solution as a dictionary rather than an equation.
596
597
598
EXAMPLES:
599
600
First we create some data points of a sine function with some random
601
perturbations::
602
603
sage: data = [(i, 1.2 * sin(0.5*i-0.2) + 0.1 * normalvariate(0, 1)) for i in xsrange(0, 4*pi, 0.2)]
604
sage: var('a, b, c, x')
605
(a, b, c, x)
606
607
We define a function with free parameters `a`, `b` and `c`::
608
609
sage: model(x) = a * sin(b * x - c)
610
611
We search for the parameters that give the best fit to the data::
612
613
sage: find_fit(data, model)
614
[a == 1.21..., b == 0.49..., c == 0.19...]
615
616
We can also use a Python function for the model::
617
618
sage: def f(x, a, b, c): return a * sin(b * x - c)
619
sage: fit = find_fit(data, f, parameters = [a, b, c], variables = [x], solution_dict = True)
620
sage: fit[a], fit[b], fit[c]
621
(1.21..., 0.49..., 0.19...)
622
623
We search for a formula for the `n`-th prime number::
624
625
sage: dataprime = [(i, nth_prime(i)) for i in xrange(1, 5000, 100)]
626
sage: find_fit(dataprime, a * x * log(b * x), parameters = [a, b], variables = [x])
627
[a == 1.11..., b == 1.24...]
628
629
630
ALGORITHM:
631
632
Uses ``scipy.optimize.leastsq`` which in turn uses MINPACK's lmdif and
633
lmder algorithms.
634
"""
635
import numpy
636
637
if not isinstance(data, numpy.ndarray):
638
try:
639
data = numpy.array(data, dtype = float)
640
except (ValueError, TypeError):
641
raise TypeError("data has to be a list of lists, a matrix, or a numpy array")
642
elif data.dtype == object:
643
raise ValueError("the entries of data have to be of type float")
644
645
if data.ndim != 2:
646
raise ValueError("data has to be a two dimensional table of floating point numbers")
647
648
from sage.symbolic.expression import Expression
649
650
if isinstance(model, Expression):
651
if variables is None:
652
variables = list(model.arguments())
653
if parameters is None:
654
parameters = list(model.variables())
655
for v in variables:
656
parameters.remove(v)
657
658
if data.shape[1] != len(variables) + 1:
659
raise ValueError("each row of data needs %d entries, only %d entries given" % (len(variables) + 1, data.shape[1]))
660
661
if parameters is None or len(parameters) == 0 or \
662
variables is None or len(variables) == 0:
663
raise ValueError("no variables given")
664
665
if initial_guess == None:
666
initial_guess = len(parameters) * [1]
667
668
if not isinstance(initial_guess, numpy.ndarray):
669
try:
670
initial_guess = numpy.array(initial_guess, dtype = float)
671
except (ValueError, TypeError):
672
raise TypeError("initial_guess has to be a list, tuple, or numpy array")
673
elif initial_guess.dtype == object:
674
raise ValueError("the entries of initial_guess have to be of type float")
675
676
if len(initial_guess) != len(parameters):
677
raise ValueError("length of initial_guess does not coincide with the number of parameters")
678
679
if isinstance(model, Expression):
680
var_list = variables + parameters
681
var_names = map(str, var_list)
682
func = model._fast_float_(*var_names)
683
else:
684
func = model
685
686
def function(x_data, params):
687
result = numpy.zeros(len(x_data))
688
for row in xrange(len(x_data)):
689
fparams = numpy.hstack((x_data[row], params)).tolist()
690
result[row] = func(*fparams)
691
return result
692
693
def error_function(params, x_data, y_data):
694
result = numpy.zeros(len(x_data))
695
for row in xrange(len(x_data)):
696
fparams = x_data[row].tolist() + params.tolist()
697
result[row] = func(*fparams)
698
return result - y_data
699
700
x_data = data[:, 0:len(variables)]
701
y_data = data[:, -1]
702
703
from scipy.optimize import leastsq
704
estimated_params, d = leastsq(error_function, initial_guess, args = (x_data, y_data))
705
706
if isinstance(estimated_params, float):
707
estimated_params = [estimated_params]
708
else:
709
estimated_params = estimated_params.tolist()
710
711
if solution_dict:
712
dict = {}
713
for item in zip(parameters, estimated_params):
714
dict[item[0]] = item[1]
715
return dict
716
717
return [item[0] == item[1] for item in zip(parameters, estimated_params)]
718
719
def binpacking(items,maximum=1,k=None):
720
r"""
721
Solves the bin packing problem.
722
723
The Bin Packing problem is the following :
724
725
Given a list of items of weights `p_i` and a real value `K`, what is
726
the least number of bins such that all the items can be put in the
727
bins, while keeping sure that each bin contains a weight of at most `K` ?
728
729
For more informations : http://en.wikipedia.org/wiki/Bin_packing_problem
730
731
Two version of this problem are solved by this algorithm :
732
* Is it possible to put the given items in `L` bins ?
733
* What is the assignment of items using the
734
least number of bins with the given list of items ?
735
736
INPUT:
737
738
- ``items`` -- A list of real values (the items' weight)
739
740
- ``maximum`` -- The maximal size of a bin
741
742
- ``k`` -- Number of bins
743
744
- When set to an integer value, the function returns a partition
745
of the items into `k` bins if possible, and raises an
746
exception otherwise.
747
748
- When set to ``None``, the function returns a partition of the items
749
using the least number possible of bins.
750
751
OUTPUT:
752
753
A list of lists, each member corresponding to a box and containing
754
the list of the weights inside it. If there is no solution, an
755
exception is raised (this can only happen when ``k`` is specified
756
or if ``maximum`` is less that the size of one item).
757
758
EXAMPLES:
759
760
Trying to find the minimum amount of boxes for 5 items of weights
761
`1/5, 1/4, 2/3, 3/4, 5/7`::
762
763
sage: from sage.numerical.optimize import binpacking
764
sage: values = [1/5, 1/3, 2/3, 3/4, 5/7]
765
sage: bins = binpacking(values)
766
sage: len(bins)
767
3
768
769
Checking the bins are of correct size ::
770
771
sage: all([ sum(b)<= 1 for b in bins ])
772
True
773
774
Checking every item is in a bin ::
775
776
sage: b1, b2, b3 = bins
777
sage: all([ (v in b1 or v in b2 or v in b3) for v in values ])
778
True
779
780
One way to use only three boxes (which is best possible) is to put
781
`1/5 + 3/4` together in a box, `1/3+2/3` in another, and `5/7`
782
by itself in the third one.
783
784
Of course, we can also check that there is no solution using only two boxes ::
785
786
sage: from sage.numerical.optimize import binpacking
787
sage: binpacking([0.2,0.3,0.8,0.9], k=2)
788
Traceback (most recent call last):
789
...
790
ValueError: This problem has no solution !
791
"""
792
793
if max(items) > maximum:
794
raise ValueError("This problem has no solution !")
795
796
if k==None:
797
from sage.functions.other import ceil
798
k=ceil(sum(items)/maximum)
799
while True:
800
from sage.numerical.mip import MIPSolverException
801
try:
802
return binpacking(items,k=k,maximum=maximum)
803
except MIPSolverException:
804
k = k + 1
805
806
from sage.numerical.mip import MixedIntegerLinearProgram, MIPSolverException
807
p=MixedIntegerLinearProgram()
808
809
# Boolean variable indicating whether
810
# the i th element belongs to box b
811
box=p.new_variable(dim=2)
812
813
# Each bin contains at most max
814
for b in range(k):
815
p.add_constraint(p.sum([items[i]*box[i][b] for i in range(len(items))]),max=maximum)
816
817
# Each item is assigned exactly one bin
818
for i in range(len(items)):
819
p.add_constraint(p.sum([box[i][b] for b in range(k)]),min=1,max=1)
820
821
p.set_objective(None)
822
p.set_binary(box)
823
824
try:
825
p.solve()
826
except MIPSolverException:
827
raise ValueError("This problem has no solution !")
828
829
box=p.get_values(box)
830
831
boxes=[[] for i in range(k)]
832
833
for b in range(k):
834
boxes[b].extend([items[i] for i in range(len(items)) if round(box[i][b])==1])
835
836
return boxes
837
838
839
840
841