Hi there,
I'm in a situation where I've been receiving an innxmit feed to
populate a new news server.
Unfortunately, after 992GB, we had a power outage and my UPS died
without a clean shutdown.
Now, I'm paranoid my index is corrupt and not sure what to do about
it.
I'm using CNFS on this new system.
To add more detail: I am definitely missing some articles.
$ grephistory '<q34v5519vru4fco5d14eradn3iksal3ihr@4ax.com>'|sm
sm: could not retrieve @03024359434E475331000F8053BA00000001@
I had to go back 675 lines in the log file before I was able to
retrieve an article.
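A check like the one above is easy to script over a whole list of Message-IDs. A minimal sketch, assuming a file with one Message-ID per line (how you collect them, e.g. from the news log, is up to you); `GREPHISTORY` and `SM` are overridable hooks I added so the loop can be exercised without a live INN installation — on a real server the defaults resolve to the actual grephistory and sm tools:

```shell
#!/bin/sh
# For each Message-ID on stdin, ask grephistory for its storage token
# and try to retrieve the article with sm.  Prints OK or MISSING.
check_mids() {
    gh=${GREPHISTORY:-grephistory}   # real tool: grephistory(1)
    retr=${SM:-sm}                   # real tool: sm(1)
    while read -r mid; do
        if token=$("$gh" "$mid" 2>/dev/null) \
           && "$retr" "$token" >/dev/null 2>&1; then
            printf 'OK %s\n' "$mid"
        else
            printf 'MISSING %s\n' "$mid"
        fi
    done
}
```

Run as the news user, e.g. `check_mids < mids.txt`; anything reported MISSING is a candidate for re-sending.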
> $ grephistory '<q34v5519vru4fco5d14eradn3iksal3ihr@4ax.com>'|sm
> sm: could not retrieve @03024359434E475331000F8053BA00000001@
I would then just suggest running innxmit again on the sending server
for these 675 articles.
>> $ grephistory '<q34v5519vru4fco5d14eradn3iksal3ihr@4ax.com>'|sm
>> sm: could not retrieve @03024359434E475331000F8053BA00000001@
> I would then just suggest running innxmit again on the sending server
> for these 675 articles.
As the articles seem to be known in the history file, the target server
will reject them as duplicates, even if they don't exist in the spool.
BTW: Is there another way to remove entries from history other than
manually deleting them?
Hi Wolfgang,
>>> $ grephistory '<q34v5519vru4fco5d14eradn3iksal3ihr@4ax.com>'|sm
>>> sm: could not retrieve @03024359434E475331000F8053BA00000001@
>> I would then just suggest running innxmit again on the sending server
>> for these 675 articles.
> As the articles seem to be known in the history file, the target server
> will reject them as duplicates, even if they don't exist in the spool.

Oh, yes, you're totally right. These Message-IDs must be removed from
the history file first.

> BTW: Is there another way to remove entries from history other than
> manually deleting them?
What I usually do to achieve that is:
1- setting /remember/ to 0 in expire.ctl;
2- running the expire process ("news.daily notdaily" called with the
same parameters as in crontab);
3- setting /remember/ to its previous value (11 by default).
I'm not aware of another way to totally remove entries from the history
file (it somehow needs rebuilding). If you see another method with the
current programs shipped with INN, I would be glad to hear it.
Hi Nigel,
> I'm in a situation where I've been receiving an innxmit feed to
> populate a new news server.
> Unfortunately, after 992GB, we had a power outage and my UPS died
> without a clean shutdown.
> Now, I'm paranoid my index is corrupt and not sure what to do about
> it.
> I'm using CNFS on this new system.
> To add more detail: I am definitely missing some articles.
> $ grephistory '<q34v5519vru4fco5d14eradn3iksal3ihr@4ax.com>'|sm
> sm: could not retrieve @03024359434E475331000F8053BA00000001@
> I had to go back 675 lines in the log file before I was able to
> retrieve an article.
I would then just suggest running innxmit again on the sending server
for these 675 articles.
Yet, the number of missing articles seems high. The CNFS headers are
updated every 25 articles by default. Did you change the cycbuffupdate
setting in cycbuff.conf? (Or do you have lots of cyclic buffers, as the
header refresh happens separately for each buffer?)
grephistory gets its information from the history file, which is flushed
every 10 articles by default (the icdsynccount setting in inn.conf).
Overview data is usually written to disk less frequently; it depends
on the overview storage method you are using (after each article
arrival for tradindexed, according to the transrowlimit and
transtimelimit settings in ovsqlite.conf for ovsqlite, the txn_nosync
setting in ovdb.conf for ovdb, and the ovflushcount setting in
inn.conf for buffindexed).
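Put together, the knobs mentioned above live in the following files. This is only an orientation sketch: the two values shown are the defaults named in this thread; the commented-out entries are placeholders whose defaults you should look up in the relevant man pages for your INN version rather than trust here:

```
# cycbuff.conf -- CNFS header flushing
cycbuffupdate:25        # rewrite each buffer's header every 25 articles

# inn.conf -- history and buffindexed overview flushing
icdsynccount:      10   # flush the history file every 10 articles
#ovflushcount:     ...  # buffindexed overview flush count

# ovsqlite.conf -- ovsqlite transaction batching
#transrowlimit:    ...  # rows per transaction
#transtimelimit:   ...  # seconds per transaction

# ovdb.conf -- ovdb durability
#txn_nosync:       ...  # trade durability for speed
```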
I can delete the last 675 entries from the history file; will that just
cause the overview record to be recreated? What about the history.hash
and history.index files?
I can't believe it's as easy as removing a few
lines from history and starting the transfer at the point of the last
missing message.
Hi Jesse,
>>> BTW: Is there another way to remove entries from history other than
>>> manually deleting them?
>> What I usually do to achieve that is:
>> 1- setting /remember/ to 0 in expire.ctl;
>> 2- running the expire process ("news.daily notdaily" called with
>> the same parameters as in crontab);
>> 3- setting /remember/ to its previous value (11 by default).
>> I'm not aware of another way to totally remove entries from the
>> history file (it somehow needs rebuilding). If you see another
>> method with the current programs shipped with INN, I would be glad
>> to hear it.
> Can you not remove the lines from "history" and then run makedbz?
Wolfgang asked for a way other than a manual deletion. Yes, editing
the history file by hand and then running "makedbz" or "makehistory
-O" will also work. It is just more error-prone, and naturally one
has to shut down INN before manually editing the history file.
I would recommend also rebuilding the overview data ("makehistory
-O") and not only the dbz files ("makedbz"), as otherwise they will
be inconsistent. I think duplicate Message-IDs in the overview
database won't prevent the articles from being accepted, as our
current overview methods work by article numbers and not by
Message-IDs, but it is better to be consistent.
Running "news.daily notdaily" will do all of that for you.
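If you do go the manual route, here is a sketch of the whole sequence. The server-touching commands are left as comments (paths assume a typical layout where the history database lives under pathdb, e.g. /usr/local/news/db), and `trim_history` is a hypothetical helper of mine, not an INN tool:

```shell
#!/bin/sh
# Drop the last N lines of a history file portably ("head -n -N" is
# GNU-only), writing the result back in place.
trim_history() {   # usage: trim_history N /path/to/history
    total=$(wc -l < "$2")
    keep=$((total - $1))
    [ "$keep" -lt 0 ] && keep=0
    head -n "$keep" "$2" > "$2.new" && mv "$2.new" "$2"
}

# On the real server, as the news user, from the pathdb directory:
# ctlinnd throttle 'rebuilding history'       # stop writes first
# trim_history 675 history
# makedbz -s "$(wc -l < history)" -f history  # rebuild the dbz files
# makehistory -O                              # rebuild overview too
# ctlinnd go 'rebuilding history'
```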