I'm working with a large physical file. There are about 365.8 million active rows, and some 123.8 million deleted rows.
If I run
select * from the_lib.the_table
I get a result set of rows.
If I run
select count(*) from the_lib.the_table
I get back null.
The file has one member. The columns are all DEC and CHAR. The largest decimal column is 9 digits, and longest character column is 60 bytes.
None of the columns is nullable.
I've never seen SELECT COUNT(*) return a null value. If there are rows,
I get the number, and if the table or file is empty, I get zero.
Thoughts?
On 9/28/2018 2:36 PM, Jonathan Ball wrote:
> I'm working with a large physical file. The file has rows in it.
> There are about 365.8 million active rows, and some 123.8 million
> deleted rows.
I was off by an order of magnitude on the number of active rows. There
are 3.658 *billion* rows.
I still wouldn't think that would make a difference. If I look at SYSTABLESTAT for the table, it shows the correct number of rows in the NUMBER_ROWS column.
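For anyone who wants to cross-check those counts the same way: the catalog row counts can be read directly, without scanning the table. A sketch, assuming the DB2 for i catalog view QSYS2.SYSTABLESTAT and the placeholder names from the thread:

```sql
-- Row counts from the catalog; these columns are wide enough
-- that a multi-billion-row table does not overflow them.
SELECT TABLE_SCHEMA, TABLE_NAME, NUMBER_ROWS, NUMBER_DELETED_ROWS
  FROM QSYS2.SYSTABLESTAT
 WHERE TABLE_SCHEMA = 'THE_LIB'
   AND TABLE_NAME   = 'THE_TABLE';
```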
On 9/29/2018 8:50 AM, Dr.UgoGagliardelli wrote:
> On 9/29/2018 1:35 AM, Jonathan Ball wrote:
>> On 9/28/2018 2:36 PM, Jonathan Ball wrote:
>>> I'm working with a large physical file. The file has rows in it.
>>> There are about 365.8 million active rows, and some 123.8 million
>>> deleted rows.
>> I was off by an order of magnitude on the number of active rows.
>> There are 3.658 *billion* rows.
>> I still wouldn't think that would make a difference. If I look at
>> SYSTABLESTAT for the table, it shows the correct number of rows in
>> the NUMBER_ROWS column.
> The result cannot fit into an integer; the INTEGER maximum is about 2
> billion. Use COUNT_BIG instead, which can return a bigger number.
That was it! I hadn't noticed in my SQL script message area that I was getting a SQLSTATE of 01519: "The null value has been assigned to a variable, because a numeric value is out of range."
Also, while I like to think I know a lot of the built-in functions, I
hadn't encountered COUNT_BIG before. That fixed it.
grazie mille
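To spell out the fix for anyone skimming the thread (a sketch; on DB2 for i, COUNT returns an INTEGER result while COUNT_BIG returns a much wider DECIMAL(31,0)):

```sql
-- Overflows: COUNT(*) yields an INTEGER, max 2,147,483,647. With
-- ~3.658 billion rows it comes back NULL, and the overflow surfaces
-- only as the easy-to-miss warning SQLSTATE 01519.
SELECT COUNT(*) FROM the_lib.the_table;

-- Works: COUNT_BIG(*) returns a 31-digit decimal, plenty of headroom.
SELECT COUNT_BIG(*) FROM the_lib.the_table;
```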
On 9/29/2018 2:46 PM, Buck wrote:
> On Friday, September 28, 2018 at 7:35:06 PM UTC-4, Jonathan Ball wrote:
>> I was off by an order of magnitude on the number of active rows. There
>> are 3.658 *billion* rows.
>> I still wouldn't think that would make a difference. If I look at
>> SYSTABLESTAT for the table, it shows the correct number of rows in the
>> NUMBER_ROWS column.
> Yes, for sure. I suspect that the NULL is really overflow.
> What happens if you CAST the count to a large int, or float?
I bet nothing happens. He should use count_big instead of count.
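Spelling out why the CAST would not help: the cast applies to the aggregate's result, but COUNT(*) is computed as an INTEGER first, so it overflows before the CAST ever runs. A sketch:

```sql
-- Still fails: the inner COUNT(*) overflows its INTEGER result
-- before the outer CAST is applied, so NULL comes back anyway.
SELECT CAST(COUNT(*) AS BIGINT) FROM the_lib.the_table;

-- The wide aggregate has to be used from the start.
SELECT COUNT_BIG(*) FROM the_lib.the_table;
```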
>> On 9/29/2018 1:35 AM, Jonathan Ball wrote:
>>> On 9/28/2018 2:36 PM, Jonathan Ball wrote:
>>>> I'm working with a large physical file. The file has rows in it.
>>>> There are about 365.8 million active rows, and some 123.8 million
>>>> deleted rows.
>>> I was off by an order of magnitude on the number of active rows.
>>> There are 3.658 *billion* rows.
>>> I still wouldn't think that would make a difference. If I look at
>>> SYSTABLESTAT for the table, it shows the correct number of rows in the
>>> NUMBER_ROWS column.
>> The result cannot fit into an integer; the INTEGER maximum is about 2
>> billion. Use COUNT_BIG instead, which can return a bigger number.
> I hadn't encountered COUNT_BIG before. That fixed it.
It's there since V4R5, about 18 years ago!