• Re: Debugging reason for python running unreasonably slow when adding numbers

    From Stefan Ram@21:1/5 to Alexander Nestorov on Tue Mar 14 16:54:25 2023
    Alexander Nestorov <alexandernst@gmail.com> writes:
    for key in input:
        v = weights[key]
        sum_ += v

    You have some non-breaking spaces where there should be
    spaces. When I just eliminate the "v" in

    for key in input_:
        v = weights[ key ]
        sum_ += v

    and use

    for key in input_: sum_ += weights[ key ]

    instead, it already gets a bit faster.

    (For the first loop, the compiler generates two additional
    instructions: "STORE_FAST v" and "LOAD_FAST v". CPython does
    not have an optimizer in the sense that gcc has one.)
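
    One can watch those extra instructions appear with the dis module
    (a quick sketch, not from the original posting):

    import dis

    def f(weights, input_):
        sum_ = 0
        for key in input_:
            v = weights[key]
            sum_ += v
        return sum_

    dis.dis(f)  # the loop body shows STORE_FAST (v) followed by LOAD_FAST (v)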

  • From Thomas Passin@21:1/5 to Alexander Nestorov on Tue Mar 14 15:27:35 2023
    On 3/14/2023 3:48 AM, Alexander Nestorov wrote:
    I'm working on an NLP project and I got bitten by unreasonably slow behaviour in Python while operating on small amounts of numbers.

    I have the following code:

    ```python
    import random, time
    from functools import reduce

    def trainPerceptron(perceptron, data):
      learningRate = 0.002
      weights = perceptron['weights']
      error = 0
      for chunk in data:
          input = chunk['input']
          output = chunk['output']

          # 12x slower than equivalent JS
          sum_ = 0
          for key in input:
              v = weights[key]
              sum_ += v

          # 20x slower than equivalent JS
          #sum_ = reduce(lambda acc, key: acc + weights[key], input)

          actualOutput = sum_ if sum_ > 0 else 0

          expectedOutput = 1 if output == perceptron['id'] else 0
          currentError = expectedOutput - actualOutput
          if currentError:
              error += currentError ** 2
              change = currentError * learningRate
              for key in input:
                  weights[key] += change

    ```

    [snip]
    Just speculation, but the difference from the JavaScript behavior
    might be because the JS JIT compiler kicked in for these loops.

  • From Chris Angelico@21:1/5 to Peter J. Holzer on Wed Mar 15 09:05:46 2023
    On Wed, 15 Mar 2023 at 08:53, Peter J. Holzer <hjp-python@hjp.at> wrote:

    On 2023-03-14 16:48:24 +0900, Alexander Nestorov wrote:
    I'm working on an NLP project and I got bitten by unreasonably slow
    behaviour in Python while operating on small amounts of numbers.

    I have the following code:
    [...]
    # 12x slower than equivalent JS
    sum_ = 0
    for key in input:
        v = weights[key]
        sum_ += v

    # 20x slower than equivalent JS
    #sum_ = reduce(lambda acc, key: acc + weights[key], input)

    Not surprising. Modern JavaScript implementations have a JIT compiler. CPython doesn't.

    You may want to try PyPy if your code uses tight loops like that.

    Or alternatively it may be possible to use numpy to do these operations.


    Or use the sum() builtin rather than reduce(), which was
    *deliberately* removed from the builtins. The fact that you can get
    sum() without importing, but have to go and reach for functools to
    get reduce(), is a hint that you probably shouldn't use reduce when
    sum will work.
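
    For example, a sketch of the suggested rewrite (weights and input_
    are small stand-ins, not the OP's data; note that the OP's reduce()
    call has no initializer, so the first index itself becomes the
    starting accumulator):

    from functools import reduce

    weights = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
    input_ = [0, 3, 5]

    # reduce: needs an explicit 0 initializer to sum the weights correctly
    total = reduce(lambda acc, key: acc + weights[key], input_, 0)

    # the sum() builtin: no import needed, clearer, and faster here
    total2 = sum(weights[key] for key in input_)

    assert total == total2 == 0.5 + 3.5 + 5.5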

    Naive code is almost always going to be slower than smart code, and
    comparing "equivalent" code across languages is almost always an
    unfair comparison to one of them.

    ChrisA

  • From Peter J. Holzer@21:1/5 to Alexander Nestorov on Tue Mar 14 22:52:11 2023
    On 2023-03-14 16:48:24 +0900, Alexander Nestorov wrote:
    I'm working on an NLP project and I got bitten by unreasonably slow
    behaviour in Python while operating on small amounts of numbers.

    I have the following code:
    [...]
          # 12x slower than equivalent JS
          sum_ = 0
          for key in input:
              v = weights[key]
              sum_ += v

          # 20x slower than equivalent JS
          #sum_ = reduce(lambda acc, key: acc + weights[key], input)

    Not surprising. Modern JavaScript implementations have a JIT compiler.
    CPython doesn't.

    You may want to try PyPy if your code uses tight loops like that.

    Or alternatively it may be possible to use numpy to do these operations.

    hp

    --
       _  | Peter J. Holzer    | Story must make more sense than reality.
    |_|_) |                    |
    | |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
    __/   | http://www.hjp.at/ |       challenge!"

  • From Oscar Benjamin@21:1/5 to Alexander Nestorov on Tue Mar 14 22:55:33 2023
    On Tue, 14 Mar 2023 at 16:27, Alexander Nestorov <alexandernst@gmail.com> wrote:

    I'm working on an NLP project and I got bitten by unreasonably slow behaviour in Python while operating on small amounts of numbers.

    I have the following code:

    ```python
    import random, time
    from functools import reduce

    def trainPerceptron(perceptron, data):
      learningRate = 0.002
      weights = perceptron['weights']
      error = 0
      for chunk in data:
          input = chunk['input']
          output = chunk['output']

          # 12x slower than equivalent JS
          sum_ = 0
          for key in input:
              v = weights[key]
              sum_ += v
    ```

    In Python a task like this would usually be handled with something
    along the lines of NumPy. Your two innermost loops involve adding up
    a subset of numbers from a list, chosen using a list of indices. This
    is something that numpy can do much more efficiently with its fancy
    indexing, e.g.:

    In [3]: a = np.array([1, 2, 3, 4, 5, 6, 7])

    In [4]: b = np.array([0, 3, 5])

    In [5]: a[b]
    Out[5]: array([1, 4, 6])

    In [6]: a[b].sum()
    Out[6]: 11

    This a[b].sum() operation in your code would be weights[input].sum()
    and would be much faster than the loop shown (the speed difference
    will be larger if you increase the size of the input array).
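
    A self-contained sketch of that suggestion (the array values here
    are made up for illustration):

    import numpy as np

    weights = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5])
    input_ = np.array([0, 3, 5])

    # fancy indexing gathers the selected weights in one C-level step,
    # then .sum() adds them without a Python-level loop
    sum_ = weights[input_].sum()
    print(sum_)  # 0.5 + 3.5 + 5.5 == 9.5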

    --
    Oscar

  • From Chris Angelico@21:1/5 to David Raymond on Thu Mar 16 02:01:36 2023
    On Thu, 16 Mar 2023 at 01:26, David Raymond <David.Raymond@tomtom.com> wrote:
    I'm not quite sure why the built-in sum functions are slower than the for loop,
    or why they're slower with the generator expression than with the list comprehension.

    For small-to-medium data sizes, genexps are slower than list comps,
    but use less memory. (At some point, using less memory translates
    directly into faster runtime.) But even the sum-with-genexp version is
    notably faster than reduce.

    Is 'weights' a dictionary? You're iterating over it, then subscripting
    every time. If it is, try simply taking the sum of weights.values(),
    as this should be significantly faster.

    ChrisA

  • From Thomas Passin@21:1/5 to Chris Angelico on Wed Mar 15 11:12:46 2023
    On 3/15/2023 11:01 AM, Chris Angelico wrote:
    On Thu, 16 Mar 2023 at 01:26, David Raymond <David.Raymond@tomtom.com> wrote:
    I'm not quite sure why the built-in sum functions are slower than the for loop,
    or why they're slower with the generator expression than with the list comprehension.

    For small-to-medium data sizes, genexps are slower than list comps,
    but use less memory. (At some point, using less memory translates
    directly into faster runtime.) But even the sum-with-genexp version is notably faster than reduce.

    Is 'weights' a dictionary? You're iterating over it, then subscripting
    every time. If it is, try simply taking the sum of weights.values(),
    as this should be significantly faster.

    It's a list.

  • From Thomas Passin@21:1/5 to David Raymond on Wed Mar 15 10:58:13 2023
    On 3/15/2023 10:24 AM, David Raymond wrote:
    Or use the sum() builtin rather than reduce(), which was
    *deliberately* removed from the builtins. The fact that you can get
    sum() without importing, but have to go and reach for functools to get
    reduce(), is a hint that you probably shouldn't use reduce when sum
    will work.

    Out of curiosity I tried a couple variations and am a little confused by the results. Maybe I'm having a brain fart and am missing something obvious?

    Each of these was run with the same "data" and "perceptrons" values to keep that fair.
    Times are averages over 150 iterations like the original.
    The only thing changed in the trainPerceptron function was how to calculate sum_


    Original:
    sum_ = 0
    for key in input:
        v = weights[key]
        sum_ += v
    418ms

    The reduce version:
    sum_ = reduce(lambda acc, key: acc + weights[key], input)
    758ms

    Getting rid of the assignment to v in the original version:
    sum_ = 0
    for key in input:
        sum_ += weights[key]
    380ms

    But then using sum seems to be slower

    sum with generator expression:
    sum_ = sum(weights[key] for key in input)
    638ms

    sum with list comprehension:
    sum_ = sum([weights[key] for key in input])
    496ms

    math.fsum with generator expression:
    sum_ = math.fsum(weights[key] for key in input)
    618ms

    math.fsum with list comprehension:
    sum_ = math.fsum([weights[key] for key in input])
    480ms


    I'm not quite sure why the built-in sum functions are slower than the for loop,
    or why they're slower with the generator expression than with the list comprehension.

    I tried similar variations yesterday and got similar results. All the
    sum() versions I tried were slower. Like you, I got the smallest times for

    for key in input:
        sum_ += weights[key]

    but I didn't get as much of a difference as you did.

    I surmise that with the sum() variations, the entire sequence gets
    constructed first and then iterated over, while in the non-sum()
    versions no new sequence has to be constructed at all. So it would
    make sense for the sum() versions to be the slower ones.

  • From David Raymond@21:1/5 to All on Wed Mar 15 14:24:34 2023
    Or use the sum() builtin rather than reduce(), which was
    *deliberately* removed from the builtins. The fact that you can get
    sum() without importing, but have to go and reach for functools to
    get reduce(), is a hint that you probably shouldn't use reduce when
    sum will work.

    Out of curiosity I tried a couple variations and am a little confused by the results. Maybe I'm having a brain fart and am missing something obvious?

    Each of these was run with the same "data" and "perceptrons" values to keep that fair.
    Times are averages over 150 iterations like the original.
    The only thing changed in the trainPerceptron function was how to calculate sum_


    Original:
    sum_ = 0
    for key in input:
        v = weights[key]
        sum_ += v
    418ms

    The reduce version:
    sum_ = reduce(lambda acc, key: acc + weights[key], input)
    758ms

    Getting rid of the assignment to v in the original version:
    sum_ = 0
    for key in input:
        sum_ += weights[key]
    380ms

    But then using sum seems to be slower

    sum with generator expression:
    sum_ = sum(weights[key] for key in input)
    638ms

    sum with list comprehension:
    sum_ = sum([weights[key] for key in input])
    496ms

    math.fsum with generator expression:
    sum_ = math.fsum(weights[key] for key in input)
    618ms

    math.fsum with list comprehension:
    sum_ = math.fsum([weights[key] for key in input])
    480ms


    I'm not quite sure why the built-in sum functions are slower than the for loop, or why they're slower with the generator expression than with the list comprehension.
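
    For anyone wanting to reproduce the comparison, a minimal timing
    harness along these lines should do (the weights and input_ values
    below are stand-ins, not the actual data):

    import timeit
    from functools import reduce

    weights = [0.001 * i for i in range(5147)]  # stand-in: 5,147 floats
    input_ = [7, 42, 100, 512, 1024, 2048, 3000, 4096, 5000, 5146]

    def loop():
        sum_ = 0
        for key in input_:
            sum_ += weights[key]
        return sum_

    def red():
        return reduce(lambda acc, key: acc + weights[key], input_, 0)

    def genexp():
        return sum(weights[key] for key in input_)

    def listcomp():
        return sum([weights[key] for key in input_])

    for f in (loop, red, genexp, listcomp):
        print(f.__name__, timeit.timeit(f, number=100_000))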

  • From David Raymond@21:1/5 to All on Wed Mar 15 15:44:47 2023
    Then I'm very confused as to how things are being done, so I will shut
    up. There's not enough information here to give performance advice
    without actually being a subject-matter expert already.

    Short version: In this specific case "weights" is a 5,147-element list of floats, and "input" is a 10-element list of integers holding the indexes of the 10 elements in weights that he wants to add up.

    sum_ = 0
    for key in input:
        sum_ += weights[key]

    vs

    sum_ = sum(weights[key] for key in input)

    vs... other ways

  • From Chris Angelico@21:1/5 to Thomas Passin on Thu Mar 16 02:20:39 2023
    On Thu, 16 Mar 2023 at 02:14, Thomas Passin <list1@tompassin.net> wrote:

    On 3/15/2023 11:01 AM, Chris Angelico wrote:
    On Thu, 16 Mar 2023 at 01:26, David Raymond <David.Raymond@tomtom.com> wrote:
    I'm not quite sure why the built-in sum functions are slower than the for loop,
    or why they're slower with the generator expression than with the list comprehension.

    For small-to-medium data sizes, genexps are slower than list comps,
    but use less memory. (At some point, using less memory translates
    directly into faster runtime.) But even the sum-with-genexp version is notably faster than reduce.

    Is 'weights' a dictionary? You're iterating over it, then subscripting every time. If it is, try simply taking the sum of weights.values(),
    as this should be significantly faster.

    It's a list.


    Then I'm very confused as to how things are being done, so I will shut
    up. There's not enough information here to give performance advice
    without actually being a subject-matter expert already.

    ChrisA

  • From Weatherby,Gerard@21:1/5 to All on Wed Mar 15 17:09:52 2023
    Sum is faster than iteration in the general case.

    Lifting a test program from Stack Overflow https://stackoverflow.com/questions/24578896/python-built-in-sum-function-vs-for-loop-performance,

    import timeit

    def sum1():
        s = 0
        for i in range(1000000):
            s += i
        return s

    def sum2():
        return sum(range(1000000))

    print('For Loop Sum:', timeit.timeit(sum1, number=100))
    print('Built-in Sum:', timeit.timeit(sum2, number=100))

    ---

    For Loop Sum: 7.726335353218019
    Built-in Sum: 1.0398506000638008

    ---

  • From Weatherby,Gerard@21:1/5 to All on Wed Mar 15 17:13:13 2023
    Moving the generator out:

    import timeit

    thedata = [i for i in range(1_000_000)]
    def sum1():
        s = 0
        for i in thedata:
            s += i
        return s

    def sum2():
        return sum(thedata)

    print('For Loop Sum:', timeit.timeit(sum1, number=100))
    print('Built-in Sum:', timeit.timeit(sum2, number=100))

    ---
    For Loop Sum: 6.984986504539847
    Built-in Sum: 0.5175364706665277

  • From Roel Schroeven@21:1/5 to All on Thu Mar 16 11:08:12 2023
    On 14/03/2023 at 8:48, Alexander Nestorov wrote:
    I have the following code:

    ...
    for i in range(151): # 150 iterations
       ...
    Nothing to do with your actual question and it's probably just a small oversight, but still I thought it was worth a mention: that comment does
    not accurately describe the code; the code is actually doing 151
    iterations, numbered 0 up to and including 150.
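
    A quick check (not in the original post):

    print(len(range(151)))  # 151 -- iterations numbered 0 through 150
    print(len(range(150)))  # 150 -- what the "150 iterations" comment implies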

    --
    "I've come up with a set of rules that describe our reactions to technologies: 1. Anything that is in the world when you’re born is normal and ordinary and is
    just a natural part of the way the world works.
    2. Anything that's invented between when you’re fifteen and thirty-five is new
    and exciting and revolutionary and you can probably get a career in it.
    3. Anything invented after you're thirty-five is against the natural order of things."
    -- Douglas Adams, The Salmon of Doubt

  • From Peter J. Holzer@21:1/5 to Gerard on Sat Mar 18 13:20:42 2023
    On 2023-03-15 17:09:52 +0000, Weatherby,Gerard wrote:
    Sum is faster than iteration in the general case.

    I'd say this is the special case, not the general case.

    def sum1():
        s = 0
        for i in range(1000000):
            s += i
        return s

    def sum2():
        return sum(range(1000000))

    Here you already have the numbers you want to add.

    The OP needed to compute those numbers first.
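
    A sketch of the distinction (toy values, not the OP's data):

    weights = [0.5, 1.5, 2.5, 3.5]
    input_ = [0, 2]

    # numbers already materialized: sum() runs the whole loop in C
    total_fast = sum(range(1_000_000))

    # the OP's case: each element still needs a Python-level index lookup,
    # so sum() only replaces the accumulation, not the per-element work
    total_op = sum(weights[key] for key in input_)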

    hp

    --
       _  | Peter J. Holzer    | Story must make more sense than reality.
    |_|_) |                    |
    | |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
    __/   | http://www.hjp.at/ |       challenge!"

  • From Edmondo Giovannozzi@21:1/5 to All on Mon Mar 20 08:21:12 2023
    def sum1():
        s = 0
        for i in range(1000000):
            s += i
        return s

    def sum2():
        return sum(range(1000000))
    Here you already have the numbers you want to add.

    Actually using numpy you'll be much faster in this case:

    import numpy as np

    def sum3():
        return np.arange(1_000_000, dtype=np.int64).sum()

    On my computer sum1 takes 44 ms, while the numpy version takes just 2.6 ms.
    One problem is that sum2 gives the wrong result. This is why I used np.arange with dtype=np.int64.

    sum2 evidently doesn't use Python's "big integers" and restricts the result to 32 bits.

  • From MRAB@21:1/5 to Edmondo Giovannozzi on Mon Mar 20 17:42:44 2023
    On 2023-03-20 15:21, Edmondo Giovannozzi wrote:

    def sum1():
        s = 0
        for i in range(1000000):
            s += i
        return s

    def sum2():
        return sum(range(1000000))
    Here you already have the numbers you want to add.

    Actually using numpy you'll be much faster in this case:

    import numpy as np

    def sum3():
        return np.arange(1_000_000, dtype=np.int64).sum()

    On my computer sum1 takes 44 ms, while the numpy version takes just 2.6 ms.
    One problem is that sum2 gives the wrong result. This is why I used np.arange with dtype=np.int64.

    sum2 evidently doesn't use Python's "big integers" and restricts the result to 32 bits.

    On my computer they all give the same result, as I'd expect.

  • From Thomas Passin@21:1/5 to Edmondo Giovannozzi on Mon Mar 20 13:45:01 2023
    On 3/20/2023 11:21 AM, Edmondo Giovannozzi wrote:

    def sum1():
        s = 0
        for i in range(1000000):
            s += i
        return s

    def sum2():
        return sum(range(1000000))
    Here you already have the numbers you want to add.

    Actually using numpy you'll be much faster in this case:

    import numpy as np

    def sum3():
        return np.arange(1_000_000, dtype=np.int64).sum()

    On my computer sum1 takes 44 ms, while the numpy version takes just 2.6 ms.
    One problem is that sum2 gives the wrong result. This is why I used np.arange with dtype=np.int64.

    On my computer they all give the same result.

    Python 3.10.9, PyQt version 6.4.1
    Windows 10 AMD64 (build 10.0.19044) SP0
    Processor: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, 1690 Mhz, 4
    Core(s), 8 Logical Processor(s)


    sum2 evidently doesn't use Python's "big integers" and restricts the result to 32 bits.

    What about your system? Let's see if we can figure out the reason for the difference.

  • From Gilmeh Serda@21:1/5 to Edmondo Giovannozzi on Mon Mar 20 19:59:32 2023
    On Mon, 20 Mar 2023 08:21:12 -0700 (PDT), Edmondo Giovannozzi wrote:

    One problem is that sum2 gives the wrong result.

    Really?

    >>> sum(range(101))
    5050

    >>> numpy.arange(101, dtype=numpy.int64).sum()
    5050

    Using range(100) will give you numbers 0-99, not 1-100:

    >>> for i in range(10): print(i)
    ...
    0
    1
    2
    3
    4
    5
    6
    7
    8
    9

    See docs. There's a reason for this.

    --
    Gilmeh

    "I think trash is the most important manifestation of culture we have in
    my lifetime." -- Johnny Legend

  • From Edmondo Giovannozzi@21:1/5 to All on Tue Mar 21 05:22:43 2023
    On Monday, March 20, 2023 at 19:10:26 UTC+1, Thomas Passin wrote:
    On 3/20/2023 11:21 AM, Edmondo Giovannozzi wrote:

    def sum1():
        s = 0
        for i in range(1000000):
            s += i
        return s

    def sum2():
        return sum(range(1000000))
    Here you already have the numbers you want to add.

    Actually using numpy you'll be much faster in this case:

    import numpy as np

    def sum3():
        return np.arange(1_000_000, dtype=np.int64).sum()

    On my computer sum1 takes 44 ms, while the numpy version takes just 2.6 ms.
    One problem is that sum2 gives the wrong result. This is why I used np.arange with dtype=np.int64.
    On my computer they all give the same result.

    Python 3.10.9, PyQt version 6.4.1
    Windows 10 AMD64 (build 10.0.19044) SP0
    Processor: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, 1690 Mhz, 4 Core(s), 8 Logical Processor(s)
    sum2 evidently doesn't use Python's "big integers" and restricts the result to 32 bits.
    What about your system? Let's see if we can figure out the reason for the difference.

    I'm using WinPython on Windows 11 and the Python version is, well, 3.11.

    But it is my fault, sorry. I realised just now that IPython imports the numpy namespace, so the numpy sum function was shadowing the builtin sum.
    The builtin sum behaves correctly, and for sum(range(1_000_000)) it is faster than the numpy version.
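
    For reference, a sketch of the shadowing effect (the wrap-around
    shows up on platforms such as Windows where numpy's default integer
    is 32-bit; this example is not from the original post):

    import numpy as np

    n = 1_000_000
    print(sum(range(n)))      # 499999500000: builtin sum, exact big ints
    print(np.sum(range(n)))   # may wrap around where the default int is 32-bit
    print(np.arange(n, dtype=np.int64).sum())  # 499999500000: forced 64-bit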
