in a server/client environment, is there a 'faster' alternative to 'set filter to'?
e.g. on an 18900-record .dbf (39 fields), when i code for generating a report:

set filter to field1 = &abc .and. field7 = 'C' .and. !dele() .and. field8 < invdate
go top
do while !eof()
   ...
   skip
enddo

-it slows down the process when 'skipping' records (or even at dbedit)
> in a server/client environment, is there a 'faster' alternative to 'set filter to'
> [...]
> -it slows down the process when 'skipping' records (or even at dbedit)
copy to xxx.dbf for field1 = &abc .and. field7 = 'C' .and. !dele() .and. field8 < invdate
And then no need for filter anymore.
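Spelled out in Clipper, the COPY TO approach might look like this (a sketch only; the field names and the &abc macro are taken from the question, while "main" and "xxx" are hypothetical file names):

```clipper
// one sequential pass copies the matches into a small local work file;
// the report loop then runs on the copy with no filter active at all
use main                          // hypothetical name for the big .dbf
copy to xxx for field1 = &abc .and. field7 = 'C' ;
   .and. !dele() .and. field8 < invdate
use xxx                           // every record here matches, so no
go top                            // filter test is re-evaluated per skip
do while !eof()
   // ... report output ...
   skip
enddo
```

Across a network this pays off because the filter condition is no longer re-evaluated against server-side records on every SKIP; the loop walks a small local file instead.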
On 25/01/2023 10:38, Claude Roettgers wrote:
>> in a server/client environment, is there a 'faster' alternative to 'set filter to'
>> [...]
> copy to xxx.dbf for field1=&abc .and. field7='C' .and. !dele() .and. field8<invdate
> And then no need for filter anymore.

Or build a temporary index with a FOR clause on the fly.
--
Enrico Maria Giordano
http://www.emagsoftware.it
http://www.emagsoftware.it/emgmusic
http://www.emagsoftware.it/spectrum
http://www.emagsoftware.it/tbosg
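The "temporary index with FOR clause" suggestion could be sketched like this (assuming Clipper 5.2+ or Harbour; "tmpidx" is a hypothetical file name, and with DBFNTX the FOR condition is evaluated once at creation time, which is fine for a throwaway index):

```clipper
use main                          // hypothetical alias for the big .dbf
index on field8 to tmpidx ;
   for field1 = &abc .and. field7 = 'C' ;
   .and. !dele() .and. field8 < invdate
// the index now contains only matching records, so a plain loop
// visits exactly the filtered set, ordered by field8
go top
do while !eof()
   // ... report output ...
   skip
enddo
close indexes
ferase( "tmpidx.ntx" )            // discard the throwaway index
```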
timec...@gmail.com wrote on Wednesday, 25 January 2023 at 10:27:44 UTC+1:
>> in a server/client environment, is there a 'faster' alternative to 'set filter to'
>> [...]
> copy to xxx.dbf for field1=&abc .and. field7='C' .and. !dele() .and. field8<invdate
> And then no need for filter anymore.

but, i have multiple index files on that .dbf, and i also change indexord in the loop... wouldn't i then have to create multiple index files also on 'xxx.dbf'?
Dear timec...it.

On Wednesday, January 25, 2023 at 3:35:29 AM UTC-7, timec...@gmail.com wrote:
>> -it slows down the process when 'skipping' records (or even at dbedit)
>> copy to xxx.dbf for field1=&abc .and. field7='C' .and. !dele() .and. field8<invdate
>> And then no need for filter anymore.
> but, i have multiple index files on that .dbf, and i also change indexord in the loop...

Use DBFCDX, and you have only a single index file, but multiple orders (tags) in that file. Yes, you'd still need to create each index, just refer to them by their alias.

> wouldn't i then have to create multiple index files also on 'xxx.dbf'

If you have the "correct" index in place when you make the copy, then it will be in that order when you make it.

Another option is to create a separate "pointer file" that has records for matches into your main file, with fields associated with index position in each of your orderbags. Then build indexes on each of those. Smaller on disk, and that is about it.

You've complained (rightly) about the speed. We're trying to give you 'easy' fixes here. Only you can present them to the user in a fashion that allows them to feel this is faster. If you need to change indexes on main, create a new xxx.dbf to go with.
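The DBFCDX suggestion, i.e. a single .cdx file holding several named orders, might look like this (a sketch assuming Harbour or CA-Clipper 5.3 with the DBFCDX RDD linked; the tag names are hypothetical):

```clipper
request DBFCDX                    // pull in the RDD at link time
rddSetDefault( "DBFCDX" )
use main                          // hypothetical alias for the big .dbf
index on field1 tag BYFLD1        // all three tags live in main.cdx
index on field7 tag BYCODE
index on field8 tag BYDATE
set order to tag BYDATE           // switch orders by name, no extra files
```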
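The "pointer file" idea can be sketched as a one-field .dbf holding the RECNO() of every match (a sketch only; "ptr", "REC_NO", "main" and the condition are hypothetical placeholders):

```clipper
local nRec := 0
// build ptr.dbf: one numeric field per matching record number
dbCreate( "ptr", { { "REC_NO", "N", 10, 0 } } )
use main new                      // hypothetical alias for the big .dbf
use ptr new
select main
go top
do while !eof()
   if field7 == 'C' .and. !dele() // the report condition goes here
      nRec := recno()
      ptr->( dbAppend() )
      ptr->REC_NO := nRec
   endif
   skip
enddo
// later, visit only the matches via direct GOTOs into main
select ptr
go top
do while !eof()
   main->( dbGoto( ptr->REC_NO ) )
   // ... report from main's fields ...
   skip
enddo
```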
On Thursday, January 26, 2023 at 8:53:22 PM UTC+5:30, dlzc wrote:
thanks dlzc.
back to old dog...new tricks...
-it never ends.
btw: does chanting 'abracadabra' work.
On Thursday, January 26, 2023 at 10:08:57 PM UTC-7, timec...@gmail.com wrote:
> thanks dlzc.
> back to old dog...new tricks...
> -it never ends.
> btw: does chanting 'abracadabra' work.

whoa...whoa... If you speak the language, yes it does. That advice was English, but I expected you wanted to UNDERSTAND, not have someone write the code for you.

I'll stop responding to your posts.

David A. Smith
On 27/01/2023 06:08, timepro timesheet wrote:
> thanks dlzc.
> back to old dog...new tricks...
> -it never ends.
> btw: does chanting 'abracadabra' work.

BTW: keep in mind: traversing a dbf *with an index active* can be 10 times slower than with no indexes.

use dbf1
set index to ix1
go top
do while .not. eof()
   skip
enddo

set index to
go top
do while .not. eof()
   skip
enddo

the latter is MUCH faster (obviously).

Dan
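If the indexes have to stay open (so that updates keep them maintained), an alternative to closing them is to select natural order just for the scan; a sketch, assuming standard Clipper/Harbour SET ORDER behaviour:

```clipper
use dbf1 index ix1, ix2
set order to 0            // natural record order; indexes remain open
go top                    // and maintained, but SKIP no longer walks
do while .not. eof()      // the key chain
   skip
enddo
set order to 1            // back to ix1 when key order is needed again
```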
On Friday, January 27, 2023 at 3:12:25 PM UTC+5:30, Dan wrote:
> BTW: keep in mind: traversing a dbf *with an index active* can be 10 times slower than with no indexes.
> [...]
> the latter is MUCH faster (obviously).

yeah dan, i tried it out without any .ntx and it definitely did browse/scroll/search faster in dbedit().
the striking reduction in speed (due to multiple .ntx) in dbedit scroll/browse happens only on client systems, not on the server.
but, does it have to be zero index files -or- the fewer the .ntx, the faster the scrolling...
(or, does the no. of .ntx not matter? same speed with 5 ntx or 1 ntx)