Asymptotic complexity (what both big-O and big-Theta represent) completely ignores the constant factors involved; it only indicates how the running time will change as the input size gets bigger. So it's certainly possible that a Θ(n) algorithm can take longer than a Θ(n²) one for some given n. Which n this will happen for really depends on the algorithms involved. For your specific example, this will be the case for n < 100, ignoring the possibility of optimizations differing between the two.
For any two given algorithms taking Θ(n) and Θ(n²) time respectively, what you're likely to see is that either:
- The Θ(n) algorithm is slower when n is small, then the Θ(n²) one becomes slower as n increases (which happens if the Θ(n) one is more complex, i.e. has higher constant factors), or
- The Θ(n²) one is always slower.
It's also possible that the Θ(n) algorithm is slower at first, then the Θ(n²) one, then the Θ(n) one again, and so on as n increases, until n gets very large, from which point onwards the Θ(n²) one will always be slower. This is, however, greatly unlikely to happen.
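As a concrete (hypothetical) illustration of constant factors, here is a Python sketch of two duplicate checks: a Θ(n) version whose per-element cost is relatively high (hashing and set lookups), and a Θ(n²) version whose per-pair cost is a cheap comparison. The function names are my own, not from your question; for very small n the quadratic version can win in practice, though actual timings depend on the machine and interpreter, so none are shown here.

```python
def has_duplicate_linear(items):
    """Θ(n): one pass, but each step pays for hashing and a set lookup."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

def has_duplicate_quadratic(items):
    """Θ(n²): compares every pair, but each step is a cheap comparison."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Both return the same answers; only their growth rates (and constant factors) differ.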
In more mathematical terms:

Let's say the Θ(n²) algorithm takes cn² operations for some c, and the Θ(n) algorithm takes dn operations for some d.

This is in line with the formal definition http://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_notations, since we can assume this holds for n greater than 0 (i.e. for all n) and that the two functions the running time lies between are the same.
In line with your example, if you were to say c = 1 and d = 100, then the Θ(n) algorithm would be slower until n = 100, at which point the Θ(n²) algorithm would become slower. (Graph courtesy of WolframAlpha https://www.wolframalpha.com/input/?i=n%5E2+vs+100n+from+0+to+130.)
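Under the assumption above (operation counts of exactly cn² and dn), the crossover point can be found directly. A minimal Python sketch, with a hypothetical helper `crossover` of my own naming:

```python
def crossover(c, d):
    """Smallest positive integer n for which c*n² exceeds d*n.

    Assumes c > 0 and d > 0, so such an n always exists.
    """
    n = 1
    while c * n * n <= d * n:
        n += 1
    return n

# With c = 1 and d = 100, n² first exceeds 100n at n = 101,
# i.e. the Θ(n²) algorithm is strictly slower from n = 101 onwards.
print(crossover(1, 100))
```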
Notes:
Technically big-O is only an upper bound, meaning you can say an O(1) algorithm (or really any algorithm taking O(n²) or less time) takes O(n²) time as well. Thus I instead used big-Theta (Θ) notation, which is just a tight bound. See the formal definitions http://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_notations for more information.
Big-O is often informally treated or taught as a tight bound, so you may already have been essentially using big-Theta without knowing it.
If we're talking about an upper bound only (as per the formal definition of big-O), that would rather be an "anything goes" situation: the O(n) one can be faster, the O(n²) one can be faster, or they can take the same amount of time (asymptotically). One usually can't draw particularly meaningful conclusions from comparing the big-O of algorithms; one can only say that, given the big-O of some algorithm, that algorithm won't take any longer than that amount of time (asymptotically).