• [after] takes way too long on MS Windows

  • From Rich@21:1/5 to Erik Leunissen on Sat Sep 11 14:00:36 2021
    Erik Leunissen <look@the.footer.invalid> wrote:
    Please see below how [after] consumes way more time than requested.
    Observed on Windows 7.

    (Under Linux, the time command returns the expected value of around
    10000 microseconds per iteration.)

    Can others reproduce this?
    Can someone explain it?

    Yes, you are encountering a windows issue.

    See this link: https://stackoverflow.com/questions/3744032/why-are-net-timers-limited-to-15-ms-resolution

    Windows' default clock tick is 15.6 ms (64 ticks per second, i.e.
    1000/64 = 15.625 ms), so the minimum time that any [after] on Windows
    can wait is one tick. Your 15657 microseconds is right at that one
    15.6 ms tick.

    % time {after 10} 100
    15657.03288836854 microseconds per iteration

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Erik Leunissen@21:1/5 to All on Sat Sep 11 15:26:55 2021
    Please see below how [after] consumes way more time than requested.
    Observed on Windows 7.

    (Under Linux, the time command returns the expected value of around 10000 microseconds per iteration.)

    Can others reproduce this?
    Can someone explain it?

    Thanks in advance for your attention,
    Erik
    --

    % time {after 10} 100
    15657.03288836854 microseconds per iteration
    % set tcl_patchLevel
    8.6.11
    % parray tcl_platform
    tcl_platform(machine) = amd64
    tcl_platform(os) = Windows NT
    tcl_platform(osVersion) = 6.1
    tcl_platform(pointerSize) = 8
    tcl_platform(threaded) = 1
    tcl_platform(wordSize) = 4

    --
    elns@ nl | Merge the left part of these two lines into one,
    xs4all. | respecting a character's position in a line.

  • From Uwe Klein@21:1/5 to All on Sat Sep 11 16:10:13 2021
    On 11.09.21 16:00, Rich wrote:
    --<snip-snip>--

    Yes, you are encountering a windows issue.

    See this link: https://stackoverflow.com/questions/3744032/why-are-net-timers-limited-to-15-ms-resolution

    Windows' default clock tick is 15.6ms. So the minimum time that any
    after on windows can wait will be 15.6ms. 15657 micro seconds is 15.6
    ms.
    and any longer time will be "granulated" by this limitation

    after 150 will return after 150ms
    after 151 will return after 165ms
    after 164 will return after 165ms
    after 165 will return after 165ms

    broken by design :-)

    % time {after 10} 100
    15657.03288836854 microseconds per iteration


    Uwe
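    Uwe's rounding above can be reproduced with a small helper. A sketch only: `quantized` is a hypothetical name, and it assumes the even 15 ms tick that Uwe's figures imply (the actual default tick is closer to 15.6 ms):

    ```tcl
    # Round a requested delay up to the next whole timer tick.
    proc quantized {ms {tick 15}} {
        expr {int(ceil(double($ms) / $tick)) * $tick}
    }
    # quantized 150 -> 150, quantized 151 -> 165, quantized 164 -> 165
    ```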

  • From Erik Leunissen@21:1/5 to Rich on Sat Sep 11 16:30:41 2021
    On 11/09/2021 16:00, Rich wrote:

    Yes, you are encountering a windows issue.

    See this link: https://stackoverflow.com/questions/3744032/why-are-net-timers-limited-to-15-ms-resolution


    Quick and clear,
    Thanks,

    Erik.

    --
    elns@ nl | Merge the left part of these two lines into one,
    xs4all. | respecting a character's position in a line.

  • From Adrien Peulvast@21:1/5 to All on Mon Sep 13 10:11:03 2021
    On 11/09/2021 at 22:30, Erik Leunissen wrote:
    --<snip-snip>--

    Interesting!

    could we define higher speed timer like:

    proc wait ms {
        set init [clock milliseconds]
        set end [expr {$init + $ms}]
        while {[clock milliseconds] < $end} {}
    }

    % time {wait 10} 100
    9998.01 microseconds per iteration



  • From Rich@21:1/5 to Adrien Peulvast on Mon Sep 13 02:24:16 2021
    Adrien Peulvast <adrien.peulvast@hotmail.com> wrote:
    --<snip-snip>--

    Interesting!

    could we define higher speed timer like:

    proc wait ms {
        set init [clock milliseconds]
        set end [expr {$init + $ms}]
        while {[clock milliseconds] < $end} {}
    }

    % time {wait 10} 100
    9998.01 microseconds per iteration

    Well, you 'can', but that is called a "busy loop" and it burns 100% CPU
    (which if you are battery powered also means high battery drain) and
    also prevents the event loop from running during the wait time.
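
    A sketch of an event-loop-friendly alternative: schedule a flag with [after] and service the event loop with [vwait] until it fires. This avoids the busy loop, though it is still subject to the platform's timer granularity (the 15.6 ms tick on Windows). `wait_ev` is a hypothetical name:

    ```tcl
    proc wait_ev {ms} {
        # Unique global flag so nested/concurrent waits don't collide.
        set flag ::wait_ev_done[clock clicks]
        after $ms [list set $flag 1]
        vwait $flag            ;# runs the event loop while waiting
        unset $flag
    }
    # time {wait_ev 10} 100
    ```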

  • From Harald Oehlmann@21:1/5 to All on Mon Sep 13 08:28:27 2021
    On 11.09.2021 15:26, Erik Leunissen wrote:
    --<snip-snip>--


    Dear Erik,

    I fear the issue is systematic. There are other issues too, such as a
    pending [after] not firing at all if the system clock is changed while
    it is waiting.

    I suppose the solution is to apply the patch by the great wizard Sergey: https://core.tcl-lang.org/tcl/tktview/fdfbd5e10fefdb605abf34f65535054c323d9394

    I fear we have nobody in the inner circle of TCL who understands even
    small parts of it, so this has been sleeping for 4 years...

    You may also look at TIP 302.

    Take care,
    Harald

  • From Ralf Fassel@21:1/5 to All on Mon Sep 13 14:47:11 2021
    * Erik Leunissen <look@the.footer.invalid>
    | Please see below how [after] consumes way more time than requested.
    | Observed on Windows 7.

    | (Under Linux, the time command returns the expected value of around 10000 microseconds per iteration.)

    | Can others reproduce this?

    No :-)

    % parray tcl_platform
    tcl_platform(byteOrder) = littleEndian
    tcl_platform(engine) = Tcl
    tcl_platform(machine) = amd64
    tcl_platform(os) = Windows NT
    tcl_platform(osVersion) = 10.0
    tcl_platform(pathSeparator) = ;
    tcl_platform(platform) = windows
    tcl_platform(pointerSize) = 8
    tcl_platform(threaded) = 1
    tcl_platform(user) = ralf
    tcl_platform(wordSize) = 4
    % info patchlevel
    8.6.11

    % time {after 10} 100
    10994.358 microseconds per iteration

    % time {after 11} 100
    11957.325 microseconds per iteration

    % time {after 12} 100
    13060.527000000002 microseconds per iteration

    % time {after 13} 100
    13998.957000000002 microseconds per iteration

    % time {after 14} 100
    14994.269000000002 microseconds per iteration

    % time {after 15} 100
    15938.196000000002 microseconds per iteration

    Off-by-one, 'but'... :-)

    | Can someone explain it?

    Others already have explained that your observation is the result of the default 15ms timer granularity on Windows.

    If you are able to compile binary extensions, this code will do the
    above trick:

    #undef WIN32_LEAN_AND_MEAN
    #include <windows.h>
    /* link with winmm.lib */

    TIMECAPS tc;
    if (MMSYSERR_NOERROR != timeGetDevCaps(&tc, sizeof(tc))
        || TIMERR_NOERROR != timeBeginPeriod(tc.wPeriodMin)) {
        error("can't set timeBeginPeriod()");
        return;
    }

    References:
    https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timegetdevcaps
    https://docs.microsoft.com/en-us/windows/win32/api/timeapi/ns-timeapi-timecaps
    https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod

    HTH
    R'

  • From Alexandru@21:1/5 to Ralf Fassel on Mon Sep 13 06:00:45 2021
    Ralf Fassel wrote on Monday, 13 September 2021 at 14:47:15 UTC+2:
    --<snip-snip>--

    I can confirm the issue under Windows 10, 64bit.

  • From rene@21:1/5 to Erik Leunissen on Mon Sep 13 06:30:59 2021
    Erik Leunissen wrote on Saturday, 11 September 2021 at 15:38:04 UTC+2:
    Please see below how [after] consumes way more time than requested.
    Observed on Windows 7.

    (Under Linux, the time command returns the expected value of around 10000 microseconds per iteration.)

    Can others reproduce this?
    Not with 32bit Windows msys/mingw build and 8.7a5. See below:

    () 1 % time {after 10} 100
    10806.773547314573 microseconds per iteration
    () 2 % parray tcl_platform
    tcl_platform(byteOrder) = littleEndian
    tcl_platform(engine) = Tcl
    tcl_platform(machine) = intel
    tcl_platform(os) = Windows NT
    tcl_platform(osVersion) = 10.0
    tcl_platform(pathSeparator) = ;
    tcl_platform(platform) = windows
    tcl_platform(pointerSize) = 4
    tcl_platform(threaded) = 1
    tcl_platform(user) = rz
    tcl_platform(wordSize) = 4
    () 3 % set tcl_patchLevel
    8.7a5

  • From Erik Leunissen@21:1/5 to Ralf Fassel on Mon Sep 13 19:23:37 2021
    On 13/09/2021 14:47, Ralf Fassel wrote:

    No :-)


    :-)


    --<snip-snip>--


    Thanks for this hint,
    Erik.


    HTH
    R'



    --
    elns@ nl | Merge the left part of these two lines into one,
    xs4all. | respecting a character's position in a line.

  • From jrapdx@21:1/5 to Erik Leunissen on Mon Sep 13 13:42:45 2021
    On Monday, September 13, 2021 at 10:24:04 AM UTC-7, Erik Leunissen wrote:
    --<snip-snip>--

    FWIW I'm running a prerelease version of Windows 11 on this computer. Indeed with tclsh in Windows terminal, the after command gives results similar to what was reported earlier:
    % time {after 10} 100
    15800.679000000002 microseconds per iteration

    However under Windows+Linux (WSL2/Ubuntu) results were "normal":
    % time {after 10} 100
    10285.67 microseconds per iteration

    WSL2 distributions utilize a modified Linux kernel which doesn't rely on Windows for timing functions. Windows 10/11 users needing more fine-grained/correct time resolution might consider running their Tcl scripts in a WSL2 environment.

  • From Erik Leunissen@21:1/5 to rene on Tue Sep 14 18:28:59 2021
    On 13/09/2021 15:30, rene wrote:
    Can others reproduce this?
    Not with 32bit Windows msys/mingw build and 8.7a5. See below:


    That's remarkable. The binaries that exhibit the 15.6 ms granularity on my MS Windows 7 system are
    cross-compiled from Linux, using a mingw64 toolchain.

    Presuming that 32/64 bit is irrelevant to the issue, I am surprised that your msys/mingw build
    doesn't exhibit the issue.

    --
    elns@ nl | Merge the left part of these two lines into one,
    xs4all. | respecting a character's position in a line.

  • From Donal K. Fellows@21:1/5 to Erik Leunissen on Sat Sep 18 01:20:52 2021
    On Tuesday, 14 September 2021 at 17:29:03 UTC+1, Erik Leunissen wrote:
    On 13/09/2021 15:30, rene wrote:
    Can others reproduce this?
    Not with 32bit Windows msys/mingw build and 8.7a5. See below:
    That's remarkable. The binaries that exhibit the 15.6 ms granularity in my MS Windows 7 system are
    cross compiled from Linux, using a mingw64 toolchain.

    Presuming that 32/64 bit is irrelevant to the issue, I am surprised that your msys/mingw build
    doesn't exhibit the issue.

    It'll be a subtle difference in which libc is being used; the msys one includes the call to tell Windows to use the high-resolution timer, and the mingw default one doesn't.

  • From Ashok@21:1/5 to Erik Leunissen on Sat Sep 18 17:18:09 2021
    On 9/13/2021 10:53 PM, Erik Leunissen wrote:
    --<snip-snip>--
    Or, if compiling a binary is not an option, you could use the CFFI
    extension (or equivalently FFIDL) :

    % package require cffi
    1.0a7
    % namespace path cffi
    % dyncall::Library create winmm winmm.dll
    ::winmm
    % Struct create TIMECAPS {min uint max uint}
    ::TIMECAPS
    % winmm function timeGetDevCaps int {tc {struct.TIMECAPS out} cb uint}
    % winmm function timeBeginPeriod uint {period uint}
    % winmm function timeEndPeriod uint {period uint}

    Verifying BEFORE calling timeBeginPeriod

    % time {after 10} 100
    16843.849000000002 microseconds per iteration

    Get the min time granularity and set resolution to that

    % timeGetDevCaps timecaps [dict get [TIMECAPS info] size]
    0
    % puts $timecaps
    min 1 max 1000000
    % timeBeginPeriod [dict get $timecaps min]
    0

    Verify 1ms resolution

    % time {after 10} 100
    10745.334 microseconds per iteration
    % time {after 11} 100
    11567.266000000001 microseconds per iteration

    Reset back to original

    % timeEndPeriod [dict get $timecaps min]
    0
    % time {after 10} 100
    17065.583 microseconds per iteration
    %


    /Ashok
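
    The timeGetDevCaps/timeBeginPeriod/timeEndPeriod calls above could be bundled so the raised resolution is always restored, even if the script errors out. A sketch that assumes the winmm bindings from Ashok's session are already loaded; `withTimerPeriod` is a hypothetical helper name:

    ```tcl
    proc withTimerPeriod {ms script} {
        timeBeginPeriod $ms
        try {
            uplevel 1 $script
        } finally {
            timeEndPeriod $ms   ;# restore the default resolution
        }
    }
    # e.g.: withTimerPeriod 1 { time {after 10} 100 }
    ```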

  • From Erik Leunissen@21:1/5 to Donal K. Fellows on Sat Sep 18 21:20:59 2021
    On 18/09/2021 10:20, Donal K. Fellows wrote:

    That's remarkable. The binaries that exhibit the 15.6 ms granularity in my MS Windows 7 system are
    cross compiled from Linux, using a mingw64 toolchain.

    Presuming that 32/64 bit is irrelevant to the issue, I am surprised that your msys/mingw build
    doesn't exhibit the issue.

    It'll be a subtle difference in which libc is being used; the msys one includes the call to tell Windows to use the high-resolution timer, and the mingw default one doesn't.


    That's a plausible reason indeed. I didn't think of that.

    Thanks.
    Erik.
    --
    elns@ nl | Merge the left part of these two lines into one,
    xs4all. | respecting a character's position in a line.

  • From Erik Leunissen@21:1/5 to Ashok on Sat Sep 18 21:24:37 2021
    On 18/09/2021 13:48, Ashok wrote:

    Or, if compiling a binary is not an option, you could use the CFFI extension (or equivalently FFIDL) :

    % package require cffi

    That's indeed a very convenient method for my situation.

    Thanks,
    Erik
    --
    elns@ nl | Merge the left part of these two lines into one,
    xs4all. | respecting a character's position in a line.

  • From Ralf Fassel@21:1/5 to All on Mon Sep 20 12:04:06 2021
    * Ashok <palmtcl@yahoo.com>
    | >> If you are able to compile binary extensions, this code will do the
    | >> above trick:
    --<snip-snip>--

    | Or, if compiling a binary is not an option, you could use the CFFI
    | extension (or equivalently FFIDL) :

    | % package require cffi
    | 1.0a7
    | % namespace path cffi
    | % dyncall::Library create winmm winmm.dll
    | ::winmm
    | % Struct create TIMECAPS {min uint max uint}
    | ::TIMECAPS
    | % winmm function timeGetDevCaps int {tc {struct.TIMECAPS out} cb uint}
    | % winmm function timeBeginPeriod uint {period uint}
    | % winmm function timeEndPeriod uint {period uint}

    Ashok wearing his pointy hat again... truly amazing!

    R'

  • From Harald Oehlmann@21:1/5 to All on Mon Sep 20 12:14:10 2021
    On 20.09.2021 12:04, Ralf Fassel wrote:
    Ashok wearing his pointy hat again... truely amazing!

    May I ask why we do not do this modification in general in TCL for Windows?

    Thanks,
    Harald

  • From Ashok@21:1/5 to Harald Oehlmann on Tue Sep 21 11:26:43 2021
    On 9/20/2021 3:44 PM, Harald Oehlmann wrote:
    May I ask why we do not do this modification in general in TCL for Windows?

    Thanks,
    Harald

    Well, that is a general question of what should go into packages versus
    be present in the core language.

    With foreign function calling extensions like cffi, ffidl and their ilk,
    my personal opinion is that they do not belong to the core language for
    several reasons - First, I don't think they would be used or needed
    widely enough. Secondly, it is very hard (impossible in theory?) to
    crash the core language due to bugs in the script. FFI functionality
    makes crashes from a script level bug a piece of cake :-) Third, FFI
    depends on the underlying ffi library (dyncall for cffi, libffi for
    ffidl) for platform support. The core platform support is in all
    likelihood much wider. Last, use of ffi in all but the simplest cases,
    requires understanding the C programming environment (pointers,
    ownership etc.)

    /Ashok

  • From Christian Gollwitzer@21:1/5 to All on Tue Sep 21 09:09:57 2021
    On 21.09.21 07:56, Ashok wrote:
    On 9/20/2021 3:44 PM, Harald Oehlmann wrote:
    May I ask why we do not do this modification in general in TCL for
    Windows?

    Thanks,
    Harald

    Well, that is a general question of what should go into packages versus
    be present in the core language.

    With foreign function calling extensions like cffi, ffidl and their ilk,
    my personal opinion is that they do not belong to the core language for several reasons

    While I'm not sure whether Harald meant FFI or modifying the clock as a
    core feature, I have another data point for FFI. Python has it in the
    core language; the package is called "ctypes" [*] and it is used by many
    3rd-party libraries to wrap functionality at runtime. One advantage of
    this scheme is that you can check for the availability of a library at
    runtime and load it (or do something else), and also locate the library
    easily, whereas if you link the library in and the dynamic linker cannot
    find it, the program will not start.

    Christian

    [*] https://docs.python.org/3/library/ctypes.html

  • From Harald Oehlmann@21:1/5 to All on Tue Sep 21 11:23:09 2021
    On 21.09.2021 07:56, Ashok wrote:
    --<snip-snip>--

    Thank you, Ashok, great.

    Yes, my question was more why we do not set the high-precision clock in
    the TCL core instead of requiring an additional function call.
    I did not follow the whole discussion, so I am just asking naively.

    Thanks,
    Harald

  • From Harald Oehlmann@21:1/5 to All on Tue Sep 21 13:39:03 2021
    On 21.09.2021 13:16, heinrichmartin wrote:
    On Tuesday, September 21, 2021 at 11:23:10 AM UTC+2, Harald Oehlmann wrote:
    Yes, my question was more why we do not set the high precision clock in
    TCL core instead requiring an additional function call.
    I did not follow the whole discussion so I am only asking slowly.

    Reading the remarks in https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod, I'd _not_ bring that to the "Tcl core". (Btw, would you bring it to tclsh or to any app that initializes an interpreter or just ship the code
    but let the app execute it if needed?)

    1. Resource hungry: "it can also reduce overall system performance, because the thread scheduler switches tasks more often." "High resolutions can also prevent the CPU power management system from entering power-saving modes."
    2. Global impact: "Prior to Windows 10, version 2004, this function affects a global Windows setting."
    3. Complex/fuzzy behaviour: "Starting with Windows 11, if a window-owning process becomes fully occluded, minimized, or otherwise invisible or inaudible to the end user, Windows does not guarantee a higher resolution than the default system resolution."


    Thank you, Heinrich, for the clarification.
    The next step would be the impact of the changes proposed in

    https://core.tcl-lang.org/tcl/tktview/fdfbd5e10fefdb605abf34f65535054c323d9394

    Thanks,
    Harald

  • From heinrichmartin@21:1/5 to Harald Oehlmann on Tue Sep 21 04:16:30 2021
    On Tuesday, September 21, 2021 at 11:23:10 AM UTC+2, Harald Oehlmann wrote:
    Yes, my question was more why we do not set the high precision clock in
    TCL core instead requiring an additional function call.
    I did not follow the whole discussion so I am only asking slowly.

    Reading the remarks in https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod, I'd _not_ bring that to the "Tcl core". (Btw, would you bring it to tclsh or to any app that initializes an interpreter or just ship the code
    but let the app execute it if needed?)

    1. Resource hungry: "it can also reduce overall system performance, because the thread scheduler switches tasks more often." "High resolutions can also prevent the CPU power management system from entering power-saving modes."
    2. Global impact: "Prior to Windows 10, version 2004, this function affects a global Windows setting."
    3. Complex/fuzzy behaviour: "Starting with Windows 11, if a window-owning process becomes fully occluded, minimized, or otherwise invisible or inaudible to the end user, Windows does not guarantee a higher resolution than the default system resolution."
