Please see below how [after] consumes way more time than requested.
Observed on Windows 7.
(Under Linux, the time command returns the expected value of around
10000 microseconds per iteration.)
Can others reproduce this?
Can someone explain it?
% time {after 10} 100
15657.03288836854 microseconds per iteration
Erik Leunissen <look@the.footer.invalid> wrote:
| Can others reproduce this?
| Can someone explain it?
Yes, you are encountering a Windows issue.
See this link: https://stackoverflow.com/questions/3744032/why-are-net-timers-limited-to-15-ms-resolution
Windows' default clock tick is 15.6 ms, so the minimum time that any
[after] on Windows can wait is 15.6 ms, and any longer wait will be
granulated by this limitation. 15657 microseconds is 15.6 ms.
On 11/09/2021 16:00, Rich wrote:
Quick and clear,
Thanks,
Erik.
On 11/09/2021 at 22:30, Erik Leunissen wrote:
Interesting!
Could we define a higher-speed timer, like:
proc wait {ms} {
    set init [clock milliseconds]
    set end [expr {$init + $ms}]
    while {[clock milliseconds] < $end} {}
}
% time {wait 10} 100
9998.01 microseconds per iteration
Please see below how [after] consumes way more time than requested.
Observed on Windows 7.
(Under Linux, the time command returns the expected value of around
10000 microseconds per iteration.)
Can others reproduce this?
Can someone explain it?
Thanks in advance for your attention,
Erik
--
% time {after 10} 100
15657.03288836854 microseconds per iteration
% set tcl_patchLevel
8.6.11
% parray tcl_platform
tcl_platform(machine)     = amd64
tcl_platform(os)          = Windows NT
tcl_platform(osVersion)   = 6.1
tcl_platform(pointerSize) = 8
tcl_platform(threaded)    = 1
tcl_platform(wordSize)    = 4
* Erik Leunissen <lo...@the.footer.invalid>
| Please see below how [after] consumes way more time than requested.
| Observed on Windows 7.
| (Under Linux, the time command returns the expected value of around 10000 microseconds per iteration.)
| Can others reproduce this?
No :-)
% parray tcl_platform
tcl_platform(byteOrder) = littleEndian
tcl_platform(engine) = Tcl
tcl_platform(machine) = amd64
tcl_platform(os) = Windows NT
tcl_platform(osVersion) = 10.0
tcl_platform(pathSeparator) = ;
tcl_platform(platform) = windows
tcl_platform(pointerSize) = 8
tcl_platform(threaded) = 1
tcl_platform(user) = ralf
tcl_platform(wordSize) = 4
% info patchlevel
8.6.11
% time {after 10} 100
10994.358 microseconds per iteration
% time {after 11} 100
11957.325 microseconds per iteration
% time {after 12} 100
13060.527000000002 microseconds per iteration
% time {after 13} 100
13998.957000000002 microseconds per iteration
% time {after 14} 100
14994.269000000002 microseconds per iteration
% time {after 15} 100
15938.196000000002 microseconds per iteration
Off-by-one, 'but'... :-)
| Can someone explain it?
Others already have explained that your observation is the result of the default 15ms timer granularity on Windows.
If you are able to compile binary extensions, this code will do the
above trick:
#undef WIN32_LEAN_AND_MEAN
#include <windows.h>   /* timeGetDevCaps/timeBeginPeriod; link with winmm.lib */

TIMECAPS tc;
if (MMSYSERR_NOERROR != timeGetDevCaps(&tc, sizeof(tc))
    || TIMERR_NOERROR != timeBeginPeriod(tc.wPeriodMin)) {
    error("can't set timeBeginPeriod()");
    return;
}
References: https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timegetdevcaps
https://docs.microsoft.com/en-us/windows/win32/api/timeapi/ns-timeapi-timecaps
https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod
HTH
R'
| Can others reproduce this?
Not with 32bit Windows msys/mingw build and 8.7a5. See below:
On 13/09/2021 14:47, Ralf Fassel wrote:
No :-)
:-)
Thanks for this hint,
Erik.
--
elns@ nl | Merge the left part of these two lines into one,
xs4all. | respecting a character's position in a line.
On 13/09/2021 15:30, rene wrote:
| Not with 32bit Windows msys/mingw build and 8.7a5.
That's remarkable. The binaries that exhibit the 15.6 ms granularity in my MS Windows 7 system are
cross compiled from Linux, using a mingw64 toolchain.
Presuming that 32/64 bit is irrelevant to the issue, I am surprised that your msys/mingw build
doesn't exhibit the issue.
It'll be a subtle difference in which libc is being used; the msys one includes the call to tell Windows to use the high-resolution timer, and the mingw default one doesn't.
Or, if compiling a binary is not an option, you could use the CFFI extension (or equivalently FFIDL) :
% package require cffi
Ashok wearing his pointy hat again... truly amazing!
May I ask why we do not do this modification in general in Tcl for Windows?
Thanks,
Harald
On 9/20/2021 3:44 PM, Harald Oehlmann wrote:
| May I ask why we do not do this modification in general in Tcl for Windows?
Well, that is a general question of what should go into packages versus
what should be present in the core language.
With foreign-function-call extensions like cffi, ffidl and their ilk, my
personal opinion is that they do not belong in the core language, for
several reasons. First, I don't think they would be used or needed
widely enough. Second, it is very hard (impossible in theory?) to crash
the core language through bugs in a script, whereas FFI functionality
makes crashes from a script-level bug a piece of cake :-) Third, FFI
depends on the underlying ffi library (dyncall for cffi, libffi for
ffidl) for platform support, and the core's platform support is in all
likelihood much wider. Last, use of FFI in all but the simplest cases
requires understanding the C programming environment (pointers,
ownership etc.).
/Ashok
On Tuesday, September 21, 2021 at 11:23:10 AM UTC+2, Harald Oehlmann wrote:
| Yes, my question was more why we do not set the high-precision clock in
| the Tcl core instead of requiring an additional function call.
| I did not follow the whole discussion, so I am only asking slowly.
Reading the remarks in https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod, I'd _not_ bring that to the Tcl core. (Btw, would you bring it to tclsh, to any app that initializes an interpreter, or just ship the code but let the app execute it if needed?)
1. Resource hungry: "it can also reduce overall system performance, because the thread scheduler switches tasks more often." "High resolutions can also prevent the CPU power management system from entering power-saving modes."
2. Global impact: "Prior to Windows 10, version 2004, this function affects a global Windows setting."
3. Complex/fuzzy behaviour: "Starting with Windows 11, if a window-owning process becomes fully occluded, minimized, or otherwise invisible or inaudible to the end user, Windows does not guarantee a higher resolution than the default system resolution."