• More on computational complexity..

    From Wisdom90@21:1/5 to All on Fri Jan 10 14:49:09 2020
    Hello..


    Read this:


    More on computational complexity..

    Notice how Horand Gassmann answered in the sci.math newsgroup:

    Horand Gassmann wrote the following:

    "You are right, of course, on one level. An O(log n)
    algorithm is better than an O(n) algorithm *for
    large enough inputs*. Lemire understands that, and he
    addresses it in his blog. The important consideration
    is that _theoretical_ performance is a long way from
    _practical_ performance."


    And notice what Lemire wrote about computational complexity:

    "But it gets worse: these are not scientific models. A scientific model
    would predict the running time of the algorithm given some
    implementation, within some error margin. However, these models do
    nothing of the sort. They are purely mathematical. They are not
    falsifiable. As long as they are mathematically correct, then they are
    always true. To be fair, some researchers like Knuth came up with models
    that closely mimic reasonable computers, but that’s not what people
    pursuing computational complexity bounds do, typically."


    So as you notice, both of them want to say that computational
    complexity is far from practical, but I don't agree with them,
    because time complexity is like material resistance: it informs us of
    important practical things, such as that an algorithm of O(log n)
    time complexity is much more resistant than an O(n) one when n
    becomes large. I think this kind of information from time complexity
    is practical, and this is why I don't agree with Lemire and Horand
    Gassmann: time complexity is scientific, and it is also engineering.
    Read the rest of my post to understand more of what I want to say:
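    To make this concrete, here is a minimal sketch (my own
    illustration, in Python, not from Lemire or Gassmann) showing how the
    gap between n steps and log(n) steps widens without bound as n grows:

```python
import math

# Number of steps for an O(n) algorithm versus an O(log n) one,
# ignoring constant factors, as the input size n grows.
for n in (10, 1_000, 1_000_000, 1_000_000_000):
    log_steps = math.ceil(math.log2(n))
    print(f"n = {n:>13,}  O(n) steps = {n:>13,}  O(log n) steps = {log_steps}")
```

    At a billion elements the O(log n) algorithm needs on the order of
    thirty steps while the O(n) one may need a billion, which is the
    sense in which the logarithmic algorithm "resists" large inputs.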


    More precision about computational complexity, read again:

    I have just read the following webpage of a PhD Computer Scientist
    and researcher from Montreal, Canada, where I have been living since
    1989. Here is the webpage; read it carefully:

    Better computational complexity does not imply better speed

    https://lemire.me/blog/2019/11/26/better-computational-complexity-does-not-imply-better-speed/

    And here is his resume:

    https://lemire.me/pdf/resume/resumelemire.pdf

    So as you notice on the webpage above, he says the following
    about computational complexity:

    "But it gets worse: these are not scientific models. A scientific model
    would predict the running time of the algorithm given some
    implementation, within some error margin. However, these models do
    nothing of the sort. They are purely mathematical. They are not
    falsifiable. As long as they are mathematically correct, then they are
    always true. To be fair, some researchers like Knuth came up with models
    that closely mimic reasonable computers, but that’s not what people
    pursuing computational complexity bounds do, typically."

    But I don't agree with him, because I think he is not understanding
    the goal of computational complexity. When we say that an algorithm
    has a time complexity of n*log(n), you have to understand that it is,
    by logical analogy, like speaking in physics of material resistance,
    because the n*log(n) says how well the algorithm "amortizes" (that
    means reduces) the time that it takes, taking as a reference of
    measure the average time complexity of an algorithm. I said that time
    complexity is like material resistance in physics because when n
    grows large, it is in physics like a big force applied to the
    material, and the material is, by logical analogy, the algorithm. So
    if the time complexity is log(n), the algorithm amortizes very well
    the time that it takes, like in physics a good material resistance
    amortizes very well the big force applied to it. And we can easily
    notice that an algorithm becomes faster, in front of the data that is
    given to it, by going from an exponential time complexity towards a
    logarithmic time complexity. So we notice that time complexity is
    "universal", and it measures how well the algorithm amortizes (that
    means reduces) the time that it takes, taking as a reference of
    measure the average time complexity of an algorithm. So this is why
    computational complexity is scientific and also engineering, and it
    gives us information on the physical world.
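    As a sketch of this "amortizing" (my own illustration, in Python),
    here is a binary search instrumented with a comparison counter, so we
    can see how few steps an O(log n) algorithm actually takes on a large
    sorted array:

```python
def binary_search_steps(arr, key):
    """Binary search over a sorted list.
    Returns (index or None, number of comparisons made)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid, steps
        if arr[mid] < key:
            lo = mid + 1   # key is in the upper half
        else:
            hi = mid - 1   # key is in the lower half
    return None, steps

n = 1_000_000
arr = list(range(n))
idx, steps = binary_search_steps(arr, n - 1)
print(idx, steps)
```

    On a million elements the search above finds the key in about 20
    comparisons, close to log2(1,000,000), while a linear scan could need
    up to a million.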

    So to give an interesting example of science in computing, we can ask
    what the time complexity of the binary search algorithm is, and here
    are my mathematical calculations of its time complexity:

    Recurrence relation of a binary search algorithm is: T(n)=T(n/2)+1

    Because the "1" is like a comparison that we do in each step of
    the divide and conquer method of the binary search algorithm.

    So the calculation of the recurrence equation gives:

    1st step=> T(n)=T(n/2) + 1

    2nd step=> T(n/2)=T(n/4) + 1 ……[ T(n/4)= T(n/2^2) ]

    3rd step=> T(n/4)=T(n/8) + 1 ……[ T(n/8)= T(n/2^3) ]

    .

    .

    kth step=> T(n/2^(k-1))=T(n/2^k) + 1

    Adding all k equations, the intermediate terms cancel out, and we
    get: T(n) = T(n/2^k) + k

    This is the final equation.

    So how many times do we need to divide n by 2 until we have only one
    element left?

    So it must be:

    n/2^k= 1

    This gives: n=2^k

    This gives: log n = k [taking log base 2 on both sides]

    Put k= log n in the final equation above and it gives:

    T(n) = T(1) + log n

    T(n) = 1 + log n [we know that T(1) = 1, because it is the base case:
    we are left with only one element in the array, and that is the
    element to be searched, so it costs one final comparison]

    So it gives:

    T(n) = O(log n) [dropping the constant term, since big-O notation
    keeps only the dominant term]

    This is how we got “log n” time complexity for binary search.
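    As a quick check of this derivation (my own sketch, in Python), we
    can evaluate the recurrence T(n) = T(n/2) + 1 directly for powers of
    two and confirm that it matches the closed form 1 + log2(n):

```python
import math

def T(n):
    # Recurrence of binary search: T(n) = T(n/2) + 1, with T(1) = 1
    return 1 if n == 1 else T(n // 2) + 1

for k in range(0, 11):
    n = 2 ** k
    # Closed form derived above: T(n) = 1 + log2(n)
    assert T(n) == 1 + int(math.log2(n))
```

    Restricting to powers of two keeps n/2 exact at every step, which is
    the usual assumption when solving this recurrence.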



    Thank you,
    Amine Moulay Ramdane.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)