Windows
Postgres 9.6.2
On install I set the /data directory to a data drive (D:\)
How much trouble will it cause if I:
1) backup that entire /data directory by copying it to a backup place
2) reinstall Postgres (and set the /data directory to the same
location as before)
3) copy over the new /data directory with my backup?
No pgdump - just copy and replace files.
DFS <nospam@dfs.com> writes:
Windows
Postgres 9.6.2
On install I set the /data directory to a data drive (D:\)
How much trouble will it cause if I:
1) backup that entire /data directory by copying it to a backup place
2) reinstall Postgres (and set the /data directory to the same
location as before)
3) copy over the new /data directory with my backup?
No pgdump - just copy and replace files.
Is there a reason why you want to follow such a cumbersome and fragile procedure?
On Sat, 15 Apr 2017 23:17:34 -0400
DFS <nospam@dfs.com> wrote:
Windows
Postgres 9.6.2
On install I set the /data directory to a data drive (D:\)
How much trouble will it cause if I:
1) backup that entire /data directory by copying it to a backup place
2) reinstall Postgres (and set the /data directory to the same
location as before)
3) copy over the new /data directory with my backup?
No pgdump - just copy and replace files.
This will be fine as long as the server(s) are stopped at the time of copy, and they are running the same version.
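For what it's worth, the file-level round trip can be sketched roughly as below. This is only a sketch: the service name (postgresql-x64-9.6 is what the EnterpriseDB installer typically registers) and the paths D:\data and E:\pgbackup are assumptions, not something from this thread.

    rem Stop the server so the files on disk are consistent
    net stop postgresql-x64-9.6

    rem Mirror the entire data directory to the backup location
    robocopy D:\data E:\pgbackup\data /MIR

    rem To restore later, stop the server again and copy in the other direction:
    rem robocopy E:\pgbackup\data D:\data /MIR

    net start postgresql-x64-9.6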
On 4/23/2017 5:15 PM, Rainer Weikusat wrote:
DFS <nospam@dfs.com> writes:
Windows
Postgres 9.6.2
On install I set the /data directory to a data drive (D:\)
How much trouble will it cause if I:
1) backup that entire /data directory by copying it to a backup place
2) reinstall Postgres (and set the /data directory to the same
location as before)
3) copy over the new /data directory with my backup?
No pgdump - just copy and replace files.
Is there a reason why you want to follow such a cumbersome and fragile
procedure?
1) it seems very uncumbersome to just copy a folder to a backup drive
2) why is it 'fragile'?
DFS <nospam@dfs.com> writes:
On 4/23/2017 5:15 PM, Rainer Weikusat wrote:
DFS <nospam@dfs.com> writes:
Windows
Postgres 9.6.2
On install I set the /data directory to a data drive (D:\)
How much trouble will it cause if I:
1) backup that entire /data directory by copying it to a backup place
2) reinstall Postgres (and set the /data directory to the same
location as before)
3) copy over the new /data directory with my backup?
No pgdump - just copy and replace files.
Is there a reason why you want to follow such a cumbersome and fragile
procedure?
1) it seems very uncumbersome to just copy a folder to a backup drive
Why do you want to hunt the system for "folders" based on some
assumption about the disk usage of the DBMS instead of just using
pg_dump?
2) why is it 'fragile'?
Have you verified that your assumptions about the disk usage are correct
by checking the source, and received assurance from the developers
that they're set in stone?
On 4/28/2017 7:39 AM, Rainer Weikusat wrote:
DFS <nospam@dfs.com> writes:
On 4/23/2017 5:15 PM, Rainer Weikusat wrote:
DFS <nospam@dfs.com> writes:
Windows
Postgres 9.6.2
On install I set the /data directory to a data drive (D:\)
How much trouble will it cause if I:
1) backup that entire /data directory by copying it to a backup place
2) reinstall Postgres (and set the /data directory to the same
location as before)
3) copy over the new /data directory with my backup?
No pgdump - just copy and replace files.
Is there a reason why you want to follow such a cumbersome and fragile procedure?
1) it seems very uncumbersome to just copy a folder to a backup drive
Why do you want to hunt the system for "folders" based on some
assumption about the disk usage of the DBMS instead of just using
pg_dump?
What 'hunting' are you talking about?
DFS <nospam@dfs.com> writes:
On 4/28/2017 7:39 AM, Rainer Weikusat wrote:
DFS <nospam@dfs.com> writes:
On 4/23/2017 5:15 PM, Rainer Weikusat wrote:
DFS <nospam@dfs.com> writes:
Windows
Postgres 9.6.2
On install I set the /data directory to a data drive (D:\)
How much trouble will it cause if I:
1) backup that entire /data directory by copying it to a backup place
2) reinstall Postgres (and set the /data directory to the same
location as before)
3) copy over the new /data directory with my backup?
No pgdump - just copy and replace files.
Is there a reason why you want to follow such a cumbersome and fragile procedure?
1) it seems very uncumbersome to just copy a folder to a backup drive
Why do you want to hunt the system for "folders" based on some
assumption about the disk usage of the DBMS instead of just using
pg_dump?
What 'hunting' are you talking about?
A: Because I never read documentation!
I already figured so.
Compare the following procedures
What issues do you think I can expect if I restore the data folder from
a 9.6.2 'backup' onto a system running a later postgres version?
On Fri, 28 Apr 2017 00:45:48 -0400
DFS <nospam@dfs.com> wrote:
What issues do you think I can expect if I restore the data folder from
a 9.6.2 'backup' onto a system running a later postgres version?
This should work between minor versions, e.g. 9.6.1 <-> 9.6.2. A tool exists for migrating data directories between major versions: pg_upgrade (https://www.postgresql.org/docs/9.6/static/pgupgrade.html)
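A rough sketch of a pg_upgrade invocation on Windows, with every path a placeholder (old and new binaries and data directories; both servers must be stopped first):

    pg_upgrade.exe ^
      -b "C:\Program Files\PostgreSQL\9.6\bin" ^
      -B "C:\Program Files\PostgreSQL\<newversion>\bin" ^
      -d "D:\data" ^
      -D "D:\data_new"

-b/-B point at the old and new server binaries, -d/-D at the old and new data directories.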
Shocking eh? But it's been robust and is still speedy for SELECTs. Adding indexes and doing some other table operations now takes 20 minutes on the
big table. So I'm gonna upgrade to Postgres.
Do you have any experience with pg_dump and large databases? Are the
dumps real slow to create and restore?
The db I'm thinking about is currently a 10GB file in SQLite. Do you consider that large?
DFS <nospam@dfs.com> writes:
Shocking eh? But it's been robust and is still speedy for SELECTs. Adding indexes and doing some other table operations now takes 20 minutes on the big table. So I'm gonna upgrade to Postgres.
pgloader can migrate the schema and data over in a single command, so I
think you should have a look in case that's useful to you:
http://pgloader.io
pgloader ./test/sqlite/sqlite.db postgresql://user@host/newdb
Regards,
Rainer Weikusat <rweikusat@talktalk.net> writes:
DFS <nospam@dfs.com> writes:
On 4/28/2017 7:39 AM, Rainer Weikusat wrote:
DFS <nospam@dfs.com> writes:
On 4/23/2017 5:15 PM, Rainer Weikusat wrote:
DFS <nospam@dfs.com> writes:
Windows Postgres 9.6.2
On install I set the /data directory to a data drive (D:\)
How much trouble will it cause if I:
1) backup that entire /data directory by copying it to a backup place
2) reinstall Postgres (and set the /data directory to the same location as before)
3) copy over the new /data directory with my backup?
No pgdump - just copy and replace files.
Is there a reason why you want to follow such a cumbersome and
fragile procedure?
1) it seems very uncumbersome to just copy a folder to a backup
drive
Why do you want to hunt the system for "folders" based on some
assumption about the disk usage of the DBMS instead of just using
pg_dump?
What 'hunting' are you talking about?
A: Because I never read documentation!
I already figured so.
Compare the following procedures:
1. Stop database server.
2. Copy data directory.
3. Copy configuration directory.
4. Start different version of database server and hope for the best.

1. pg_dumpall >p
2. psql <p
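Written out with explicit options (user name and file name are placeholders), the second procedure is roughly:

    pg_dumpall -U postgres -f alldbs.sql
    psql -U postgres -f alldbs.sql postgres

The dump is plain SQL, so it is intended to restore cleanly into a newer server version.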
This will be fine as long as the server(s) are stopped at the time of copy, and they are running the same version.
On Tue, 2 May 2017 11:48:32 -0400
DFS <nospam@dfs.com> wrote:
Do you have any experience with pg_dump and large databases? Are the
dumps real slow to create and restore?
The db I'm thinking about is currently a 10GB file in SQLite. Do you
consider that large?
To give an idea, it took me 4 minutes to pg_dump a 3GB database just now, but of course this depends on many factors (I/O etc.).
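If dump or restore time ever becomes a problem, pg_dump's non-plain-text formats let you parallelise; a sketch with made-up database and path names:

    rem Directory format supports parallel dump and parallel restore
    pg_dump -U postgres -Fd -j 4 -f D:\dumps\mydb.dir mydb
    pg_restore -U postgres -j 4 -d mydb D:\dumps\mydb.dir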
10GB is certainly not small for a SQLite database (I've not heard of any SQLite DBs that big before).
Shocking eh? But it's been robust and is still speedy for SELECTs.
Adding indexes and doing some other table operations now takes 20
minutes on the big table. So I'm gonna upgrade to Postgres.
There is a body of evidence to suggest SQLite is faster than many
multiuser RDBMSs for general use, but apparently CREATE INDEX is one
thing it does fall behind in.
PGLoader recommended by Dimitri sounds excellent and I will check this
out myself too as I have not heard of it before.
On 5/3/2017 8:57 AM, Terry Shanks wrote:
There is a body of evidence to suggest SQLite is faster than many
multiuser RDBMSs for general use, but apparently CREATE INDEX is one
thing it does fall behind in.
Apparently it creates an empty table with the new structure, copies the
data from the old table, then deletes the old table.
Extremely inefficient.
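If you ever have to do that rebuild by hand in SQLite, the pattern looks roughly like this (table and column names are invented for illustration):

    -- Rebuild "big" under a new structure by copying rows into a fresh table
    BEGIN TRANSACTION;
    CREATE TABLE big_new (id INTEGER PRIMARY KEY, name TEXT, amount REAL);
    INSERT INTO big_new (id, name, amount) SELECT id, name, amount FROM big;
    DROP TABLE big;
    ALTER TABLE big_new RENAME TO big;
    COMMIT;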
DFS <nospam@dfs.com> writes:
1) Can I specify one table - or a list of a few specific tables - to migrate from SQLite to postgres?
Yes. What happens when you try?
2) How can I build/run pgloader on Windows (8.1)?
Yes. You will need either SBCL or Clozure-CL for building pgloader tho,
I've not tried it with clisp (it might work but I won't be able to help
you there).
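For what it's worth, the README's build steps amount to roughly the following (git, make, and SBCL need to be installed first; how well this works on Windows 8.1 is exactly the part I can't vouch for):

    git clone https://github.com/dimitri/pgloader.git
    cd pgloader
    make pgloader
    ./build/bin/pgloader --version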
1) Can I specify one table - or a list of a few specific tables - to migrate from SQLite to postgres?
2) How can I build/run pgloader on Windows (8.1)?
Bump.
Bump.
On 5/21/2017 7:46 AM, Dimitri Fontaine wrote:
DFS <nospam@dfs.com> writes:
1) Can I specify one table - or a list of a few specific tables - to
migrate
from SQLite to postgres?
Yes. What happens when you try?
I haven't tried yet, 'cause I don't know how to build it on Windows and
I don't have access to a Linux system right now.
Is it "INCLUDING ONLY TABLE NAMES IN ('Table1','Table2','Table3')?
2) How can I build/run pgloader on Windows (8.1)?
Yes. You will need either SBCL or Clozure-CL for building pgloader tho,
I've not tried it with clisp (it might work but I won't be able to help
you there).
OK. But how do I build pgloader on Windows (8.1)?
DFS <nospam@dfs.com> writes:
Bump.
Bump.
Have you read any docs, like the project's README on GitHub maybe? Or which ones? Why didn't it answer your questions? What did you try? What happened when you tried?
If your docs covered either question I wouldn't have asked.