Scipy.optimize.leastsq Returns Best Guess Parameters Not New Best Fit
Solution 1:
A quick Google search hints at a problem with the data being single precision (your other programs almost certainly upcast to double precision, though this is arguably a problem with scipy as well; see also this bug report). If you look at your full_output=1 result, you can see that the Jacobian is approximated as zero everywhere.
So giving the Jacobian explicitly might help (a sketch follows below), though even then you might want to upcast, because the relative precision you can achieve with single-precision data is just very limited.
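As a rough sketch of what that looks like (the exponential model, the synthetic float32 arrays and the parameter names below are made-up placeholders, not the asker's actual code), you can inspect the approximate Jacobian returned with full_output=1 and, alternatively, pass an analytic Jacobian via Dfun:

    import numpy as np
    from scipy.optimize import leastsq

    def residuals(p, x, y):
        # residuals of a placeholder model y = a * exp(-b * x)
        a, b = p
        return y - a * np.exp(-b * x)

    def jacobian(p, x, y):
        # analytic derivatives of the residuals, one row per parameter
        # (col_deriv=1 tells leastsq the parameters run along the rows)
        a, b = p
        e = np.exp(-b * x)
        return np.array([-e, a * x * e])

    # synthetic single-precision data, standing in for the problematic input
    x = np.linspace(0.0, 4.0, 50, dtype=np.float32)
    y = (2.5 * np.exp(-1.3 * x)).astype(np.float32)

    # Diagnosis: with finite differences on float32 data, the approximated
    # Jacobian factor info['fjac'] can come out zero everywhere.
    p_fd, cov, info, msg, ier = leastsq(residuals, [1.0, 1.0], args=(x, y),
                                        full_output=1)
    print(info['fjac'])

    # Giving the analytic Jacobian avoids the finite-difference approximation.
    p_opt, ier = leastsq(residuals, [1.0, 1.0], args=(x, y),
                         Dfun=jacobian, col_deriv=1)
    print(p_opt)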
Answer: the easiest and numerically best solution (giving the real Jacobian is of course also a bonus) is to simply cast your x and y data to double precision (x = x.astype(np.float64) will do, for example).
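A minimal sketch of that fix, again using a made-up exponential model and synthetic float32 data in place of the asker's actual arrays:

    import numpy as np
    from scipy.optimize import leastsq

    def residuals(p, x, y):
        a, b = p
        return y - a * np.exp(-b * x)

    x32 = np.linspace(0.0, 4.0, 50, dtype=np.float32)   # problematic single-precision data
    y32 = (2.5 * np.exp(-1.3 * x32)).astype(np.float32)

    x = x32.astype(np.float64)                           # the actual fix: upcast to double
    y = y32.astype(np.float64)

    p_opt, ier = leastsq(residuals, [1.0, 1.0], args=(x, y))
    print(p_opt)   # should now converge instead of returning the initial guess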
I would not suggest this, but you may also be able to fix it by setting the epsfcn keyword argument (and probably the tolerance keyword arguments as well) by hand, something along the lines of epsfcn=np.finfo(np.float32).eps. This seems to fix the issue in a way, but (since most calculations are with scalars, and scalars do not force an upcast in your calculation) the calculations are done in float32 and the precision loss seems to be rather big, at least when not providing Dfun.
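For completeness, a sketch of that workaround (same placeholder model and synthetic data as above; not recommended, for the reasons just given):

    import numpy as np
    from scipy.optimize import leastsq

    def residuals(p, x, y):
        a, b = p
        return y - a * np.exp(-b * x)

    x = np.linspace(0.0, 4.0, 50, dtype=np.float32)
    y = (2.5 * np.exp(-1.3 * x)).astype(np.float32)

    # keep the float32 data, but make the forward-difference step used for the
    # Jacobian approximation appropriate for single-precision resolution
    p_opt, ier = leastsq(residuals, [1.0, 1.0], args=(x, y),
                         epsfcn=np.finfo(np.float32).eps)
    print(p_opt)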