No, it's not possible to vectorize fminbound, since it expects a scalar function of a single variable. However, you can still vectorize the loop by reformulating the underlying mathematical optimization problem:
The N scalar optimization problems

min f_1(t) s.t. t_l <= t <= t_u
min f_2(t) s.t. t_l <= t <= t_u
...
min f_N(t) s.t. t_l <= t <= t_u

for scalar functions f_i are equivalent to a single optimization problem in N variables:

min f_1(t_1)**2 + ... + f_N(t_N)**2 s.t. t_l <= t_i <= t_u for all i = 1, ..., N

Since each f_i is a non-negative distance and the objective decouples across the variables t_i, minimizing the sum of squares minimizes each f_i individually.
This problem can be solved with scipy.optimize.minimize: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize. Depending on your overall algorithm, this approach can eliminate further loops, i.e. you solve one large-scale optimization problem instead of thousands of scalar ones.
After cleaning up your code, it can be done as follows:
import numpy as np
from scipy.optimize import minimize
data = np.array([
    (212.82275865, 1650.40828168, 0., 0),
    (214.22056952, 1649.99898924, 10.38, 0),
    (212.86786868, 1644.25228805, 116.288, 0),
    (212.78680031, 1643.87461108, 122.884, 0),
    (212.57489485, 1642.01124032, 156.313, 0),
    (212.53483954, 1641.61858242, 162.618, 0),
    (212.43922274, 1639.58782771, 196.314, 0),
    (212.53726315, 1639.13842423, 202.619, 0),
    (212.2888428, 1637.24641296, 236.306, 0),
    (212.2722447, 1636.92307229, 242.606, 0),
    (212.15559302, 1635.0529813, 276.309, 0),
    (212.17535631, 1634.60618711, 282.651, 0),
    (212.02545613, 1632.72139574, 316.335, 0),
    (211.99988779, 1632.32053329, 322.634, 0),
    (211.33419846, 1631.07592039, 356.334, 0),
    (211.58972239, 1630.21971902, 362.633, 0),
    (211.70332122, 1628.2088542, 396.316, 0),
    (211.74610735, 1627.67591368, 402.617, 0),
    (211.88819518, 1625.67310022, 436.367, 0),
    (211.90709414, 1625.39410321, 442.673, 0),
    (212.00090348, 1623.42655008, 476.332, 0),
    (211.9249017, 1622.94540583, 482.63, 0),
    (212.34321938, 1616.32949197, 597.329, 0),
    (213.09638942, 1615.2869643, 610.4, 0),
    (219.41313491, 1580.22234313, 1197.332, 0),
    (220.38660128, 1579.20043302, 1210.37, 0),
    (236.35472669, 1542.30863041, 1798.267, 0),
    (237.41755384, 1541.41679119, 1810.383, 0),
    (264.08373622, 1502.66620597, 2398.244, 0),
    (265.65655239, 1501.64308908, 2410.443, 0),
    (304.66999824, 1460.94068336, 2997.263, 0),
    (306.22550945, 1459.75817211, 3010.38, 0),
    (358.88879764, 1416.472238, 3598.213, 0),
    (361.14046402, 1415.40942931, 3610.525, 0),
    (429.96379858, 1369.7972467, 4198.282, 0),
    (432.06565776, 1368.22265539, 4210.505, 0),
    (519.30493383, 1319.01141844, 4798.277, 0),
    (522.12134083, 1317.68234967, 4810.4, 0),
    (630.00294242, 1265.05368942, 5398.236, 0),
    (633.67624272, 1263.63633508, 5410.431, 0),
    (766.29767476, 1206.91262814, 5997.266, 0),
    (770.78300649, 1205.48393374, 6010.489, 0),
    (932.92308019, 1145.87780431, 6598.279, 0),
    (937.54373403, 1141.55438694, 6609.525, 0)])
# fit cubic polynomials x(t) and y(t); the third data column holds the curve parameter t
coeffs = np.polyfit(data[:, 2], data[:, 0:2], 3)

# the points
points = data[:, :2]

# vectorized version of your objective function,
# i.e. evaluates the distances f_1, ..., f_N at t_1, ..., t_N
def f(t, points):
    poly_x = np.polyval(coeffs[:, 0], t)
    poly_y = np.polyval(coeffs[:, 1], t)
    return np.hypot(poly_x - points[:, 0], poly_y - points[:, 1])

# the scalar objective function we want to minimize
def obj_vec(t, points):
    vals = f(t, points)
    return np.sum(vals**2)

# variable bounds
bnds = [(-50, 6659.525)]*len(points)

# solve the optimization problem
res = minimize(lambda t: obj_vec(t, points), x0=np.zeros(len(points)), bounds=bnds)

# the minimal distances
dmins = f(res.x, points)
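As a quick sanity check that the joint formulation recovers the per-point solutions, you can still run fminbound in a loop and compare the resulting distances. A minimal sketch, using the same bounds as above:

from scipy.optimize import fminbound

def f_scalar(t, point):
    # distance from a single point to the curve at parameter t
    return np.hypot(np.polyval(coeffs[:, 0], t) - point[0],
                    np.polyval(coeffs[:, 1], t) - point[1])

# solve the N scalar problems one by one
t_loop = np.array([fminbound(f_scalar, -50, 6659.525, args=(p,)) for p in points])

# loose tolerance, since the two solvers stop at slightly different points
print(np.allclose(f(t_loop, points), dmins, atol=1e-4))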
To further accelerate the optimization, it's highly recommended to pass the exact gradient of the objective function to minimize. Otherwise, the gradient is approximated by finite differences, which is quite slow:
In [7]: %timeit res = minimize(lambda t: obj_vec(t, points), x0=np.zeros(len(points)), bounds=bnds)
91.5 ms ± 868 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Since the objective is a simple sum of squares, its gradient is easy to compute by the chain rule:
def grad(t, points):
    poly_x = np.polyval(coeffs[:, 0], t)
    poly_y = np.polyval(coeffs[:, 1], t)
    # first derivatives of the fitted polynomials
    poly_x_deriv = np.polyval(np.polyder(coeffs[:, 0], m=1), t)
    poly_y_deriv = np.polyval(np.polyder(coeffs[:, 1], m=1), t)
    # d/dt_i of f_i(t_i)**2, by the chain rule
    return 2*poly_x_deriv*(poly_x - points[:, 0]) + 2*poly_y_deriv*(poly_y - points[:, 1])
Passing it to minimize significantly reduces the runtime:
In [9]: %timeit res = minimize(lambda t: obj_vec(t, points), jac=lambda t: grad(t, points), x0=np.zeros(len(points)),
...: bounds=bnds)
6.13 ms ± 63.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
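If you want to verify the analytical gradient, scipy.optimize.check_grad compares it against a finite-difference approximation. A minimal sketch at a random feasible point (the seed is arbitrary):

from scipy.optimize import check_grad

rng = np.random.default_rng(42)  # arbitrary seed
t_test = rng.uniform(-50, 6659.525, len(points))

# check_grad returns the 2-norm of the difference between the analytical and
# the finite-difference gradient; it should be small relative to the norm of
# the gradient itself
err = check_grad(lambda t: obj_vec(t, points), lambda t: grad(t, points), t_test)
print(err, np.linalg.norm(grad(t_test, points)))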
Finally, as you already noted in the comments, choosing a different starting point can also reduce the number of iterations.
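For example, since the third data column already holds parameter values along the curve, it is one plausible warm start (an assumption on my part, not necessarily the starting point you had in mind):

# warm start at the measured parameter values instead of zeros
res_warm = minimize(lambda t: obj_vec(t, points),
                    jac=lambda t: grad(t, points),
                    x0=data[:, 2], bounds=bnds)
print(res_warm.nit)  # often fewer iterations than x0 = 0, since it starts near the solution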