In the /Stored Proc for OLTP Transactions/ thread ...
On Saturday, 3 July 2021 at 20:28:55 UTC+10, Nicola wrote:
Whether it makes sense to code that one in a similar way is still unclear
to me. It's a one page procedure performing several lookups just to
insert a record, a step which in PostgreSQL would conflict only with
a similar insert—and the duration of the conflict would be short. Also,
one purpose of your template is to avoid deadlocks, which in MVCC cannot
happen anyway. In fact, adding explicit locking might introduce the
possibility of deadlocks.
in MVCC [deadlocks] cannot happen anyway
Separate to the "serialisation failures", which are the rough
equivalent to deadlocks in a SQL Platform (with a Lock Manager), if
deadlocks cannot happen, what is this, that people are experiencing
problems with in PusGross, that they are calling deadlocks ?
__ https://dba.stackexchange.com/q/281846/64114
__ https://dba.stackexchange.com/q/151813/64114
On Tuesday, 24 August 2021 at 03:24:58 UTC+10, Nicola wrote:
Long post this time. Please skim through it entirely before commenting.
Ok, I stand corrected, in two ways. First:
and the duration of the conflict would be short
Strictly speaking, that is inaccurate, because it does not mention that
in case of write-write conflicts, one of the two transactions must be
rolled back.
Second, yes, deadlocks are possible, but they only involve write
operations (read statements do not acquire locks).
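To make the write-only case concrete, here is a minimal two-session sketch of a pure write-write deadlock in PostgreSQL. The account table, names and amounts are assumed for illustration, they are not from the examples in this thread; any isolation level will do, since only UPDATEs are involved.

-- Assumed schema: CREATE TABLE account (name text PRIMARY KEY, balance int);
-- Two psql sessions, A and B.
-- (0) A: BEGIN;
-- (1) B: BEGIN;
-- (2) A:
UPDATE account SET balance = balance - 10 WHERE name = 'Tom';
-- (3) B:
UPDATE account SET balance = balance - 10 WHERE name = 'Dick';
-- (4) A: blocks, because B holds the row lock on Dick
UPDATE account SET balance = balance - 10 WHERE name = 'Dick';
-- (5) B: blocks on Tom's row lock, closing the cycle
UPDATE account SET balance = balance - 10 WHERE name = 'Tom';
-- After deadlock_timeout (1s by default) one session is aborted with
--   ERROR:  deadlock detected
-- and the other can proceed to COMMIT.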
With row-level locking, or if the two records were on different pages, Sybase would behave the same.
I have already explained, several times, that this is a Straw Man that prevents understanding of an Ordinary Lock Manager.
Example 1 (assume SERIALIZABLE):
...
With row-level locking, or if the two records were on different pages, Sybase would behave the same.
Sorry, I cannot give you the whole lecture from our course, you have to try and understand this from my textual descriptions. Note that many senior SQL Platforms consultants do not understand this, it is a deep understanding of OLTP, confirmed every
But when read statements are involved,
things may run differently.
if deadlocks cannot happen, what is this, that people are experiencing
Ok, they are deadlocks, as above. Could also be "serialization errors", discussed below.
Example 2 (assume SERIALIZABLE):
...
In Sybase: at time (1), T1 locks the page containing the record, so that
at time (3) T2's request for a write lock is put into a queue and T2 is
put to wait. Sybase records T2->T1 (T2 depends on T1 to release a lock).
At time (4), T1's request for a write-lock is also denied, pending T2's request (w/w conflict). Sybase records T1->T2 (T1 depends on T2 to
release a lock). The concurrency control system detects a cycle in the dependency graph, and rolls back a transaction, which happens to be T1.
T2 can now commit. Note that without the select statement at time (2)
there would be no deadlock: the update at (4) would only be delayed
until T2 commits (or rolls back).
Do you agree with the above description of events?
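Since the Example 2 timeline itself is elided above, here is the interleaving just described as a snail-speed T-SQL sketch. The account table, names and amounts are assumed, as before; isolation level 3 is ASE's SERIALIZABLE.

-- Both isql sessions first run:  set transaction isolation level 3
-- (0) T1:  begin transaction
-- (1) T2:  begin transaction
-- (2) T1:
select balance from account where name = 'Tom'
-- (3) T2:
update account set balance = balance - 10 where name = 'Tom'
-- (4) T1:
update account set balance = balance - 20 where name = 'Tom'
-- Per the description above: T2 waits at (3) behind T1's shared lock, T1's update at (4)
-- then queues behind T2's waiting write lock, the cycle is detected, and one transaction
-- is rolled back with error 1205. Whether ASE actually does this, or lets both commit,
-- is exactly what is disputed further down in the thread.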
In PostgreSQL: each transaction operates on its own private snapshot of
the database (that incurs an overhead, which I am not discussing for simplicity). In the sequence above, that would result in a lost update (namely, T2's update),
unless some mechanism were in place to prevent
that (and there is).
AFAICT, PostgreSQL implements a "first-updater
wins" rule: if a transaction attempts an update on an item that has been updated by a concurrent transaction, if the latter has already
committed, the former immediately aborts; if the latter is still active,
the former is put on hold. This rule may be implemented with locks or in other ways. I think that PostgreSQL uses a mechanism different from
locks for that, but that is not relevant in this context.
Anyway, in Example 2 it is not correct to talk about deadlocks in PostgreSQL, because the read statements do not acquire any locks, so
it's not possible for the two transactions to be waiting upon each
other. The update at time (3) proceeds normally (such update is local to T2). At time (4), though, T1 must be put on hold, because, according to
the rule above, another transaction has concurrently updated the same
record and is still active. When T2 commits, PostgreSQL realises that T1 cannot sensibly continue (because that would result in a lost update),
and kills it. Now, T2 can commit.
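The same interleaving can be replayed in two psql sessions (same assumed account table as in the earlier sketch) to see the first-updater-wins rule in action:

-- (0) T1:  BEGIN ISOLATION LEVEL SERIALIZABLE;
-- (1) T2:  BEGIN ISOLATION LEVEL SERIALIZABLE;
-- (2) T1:
SELECT balance FROM account WHERE name = 'Tom';                  -- takes no lock
-- (3) T2:
UPDATE account SET balance = balance - 10 WHERE name = 'Tom';    -- proceeds against T2's snapshot
-- (4) T1:
UPDATE account SET balance = balance - 20 WHERE name = 'Tom';    -- put on hold behind T2
-- (5) T2:  COMMIT;
-- T1's pending update is then aborted with
--   ERROR:  could not serialize access due to concurrent update
-- There is no deadlock report; T1 simply has to be retried.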
I think we agree: in PoopGres, where it is reported as (a) rollbacks, or (b) “deadlocks”, or (c) “serialisation failures”, it is one logical category: the effects that should not happen, but that do happen.
Some remarks (recall that I am assuming SERIALIZABLE):
1. The outcome, in this specific example, is the same (Tom's balance
becomes 60), but the underlying mechanisms to achieve that outcome are profoundly different. In general, the outcome will be different as
well (read further).
2. Can there be a concurrent interleaving of operations that leads to
a deadlock in Sybase, but the same concurrent interleaving does not
yield a deadlock or a serialization error in PostgreSQL? No, because
a concurrent interleaving of operations leading to a deadlock is an incorrect interleaving. So, if Sybase "encounters a deadlock
situation", then PostgreSQL must also produce an error in that same situation, otherwise it would output an incorrect result. The
difference is that PostgreSQL may output a "serialization error",
rather than a deadlock error.
Separate to the "serialisation failures", which are the rough
equivalent to deadlocks in a SQL Platform (with a Lock Manager)
Ok, accepted.
3. Does the difference matter to the DBA? Probably not: what matters in practice is that a transaction was rolled back and must be retried.
4. Is there a concurrent interleaving of operations that makes PostgreSQL return a "serializable error", but the same interleaving succeeds in
Sybase? Yes, because MVCC reduces the number of acceptable
interleaving of operations. For instance, remove the select statement
from T1 (but keep the same timing for the other operations): then in
Sybase both T1 and T2 will commit, but PostgreSQL will still kill T1
as soon as T2 commits (as per the above rule).
5. Does this difference matter to the DBA? Probably yes: under the same workload, PostgreSQL might roll back more transactions. According to the posts you cite, that appears to be the case (note that I am not discussing other important details, such as updating indexes, or the overhead of versions—I am keeping it simple). The more write-heavy the concurrent workload, the more PostgreSQL is penalized compared to a locking-based system. This seems to be what Uber discovered the hard way.
6. I am perfectly aware that you would not write T1 the way I have done,
and that you would avoid deadlocks. That's not the point of this post.
T1 is written that way for explanatory purposes only.
So, PostgreSQL might have an edge, if any, in predominantly read-only workloads, because reads do not acquire locks. But you would perform reads in Sybase at READ COMMITTED, where locks are held for a very short time (it's still an overhead, though). So, it's not clear to me who the winner would be. How do you justify preferring Sybase (locking) over MVCC?
Example 2 (assume SERIALIZABLE):
2. As an academic example, yes, it is a [true] deadlock.
3. In real life, the two UPDATES bracketed by BEGIN-COMMIT TRAN in Connection[T1] are Atomic, in the same contiguous code block, and thus will execute at Time[0].
4. There is no “interleaving” of Atomic operations.
The notion of “interleaved operations” is plain stupid, self-contradictory. The Transaction is Atomic.
Sybase has no “interleaved operations”, it preserves the Atom. Sybase has no internal deadlocks, all deadlocks are true (coding in the app).
You are not getting it.
-- LOCK DURATION --
I offer getting rid of deadlocks [of that nature, due to lock duration, as opposed to poor code], without changing either the database or the app code, as a fixed price service. If you are interested, I can provide proof.
-------------------------------------------
-- Example 2 Sybase Side --
-------------------------------------------
Let’s say Row[Tom] is on Page[Px].
__ at Time[2]
____ Connection[T1] obtains a Shared-Page-Lock on Page[Px], until EoT
__ at Time[3]
____ Connection[T2] obtains an Update[Intent]-Page-Lock on Page[Px]
(which does not escalate to Exclusive), and the lock is chained onto Page[Px], behind Connection[T1]’s lock
____ Connection[T2] is in TaskStatus *sleep*, in LockStatus *lock-wait*
__ at Time[4]
____ Connection[T1] lock on Page[Px] is escalated [change of
lock-type] to Update[Intent]-Page-Lock (succeeds),
and thence to Exclusive-Page-Lock (succeeds)
[At time [4], Sybase] rolls back a transaction, which happens to be
T1. T2 can now commit.
No. As explained in detail above. There is no deadlock. Both Connections[T1][T2] succeed.
Anyway, in Example 2 it is not correct to talk about deadlocks in
PostgreSQL, because the read statements do not acquire any locks, so
it's not possible for the two transactions to be waiting upon each
other. The update at time (3) proceeds normally (such update is local to
T2). At time (4), though, T1 must be put on hold, because, according to
the rule above, another transaction has concurrently updated the same
record and is still active. When T2 commits, PostgreSQL realises that T1
cannot sensibly continue (because that would result in a lost update),
and kills it. Now, T2 can commit.
1. Repeating:
I think we agree: in PoopGres, where it is reported as (a) rollbacks,
or (b) “deadlocks”, or (c) “serialisation failures”,
it is one logical category: the effects that should not happen, but
that do happen.
Let’s call that category FalseDeadlocks, because even the [b] reported deadlocks are false, not caused by the app code, but internal.
3. “Kill”.
I hope you mean rolled back.
2. Can there be a concurrent interleaving of operations that leads to
a deadlock in Sybase, but the same concurrent interleaving does not
yield a deadlock or a serialization error in PostgreSQL? No, because
a concurrent interleaving of operations leading to a deadlock is an
incorrect interleaving. So, if Sybase "encounters a deadlock
situation", then PostgreSQL must also produce an error in that same
situation, otherwise it would output an incorrect result. The
difference is that PostgreSQL may output a "serialization error",
rather than a deadlock error.
Definitely not. As explained above, so I will not repeat.
On Wednesday, 25 August 2021 at 02:36:37 UTC+10, Nicola wrote:
Derek,
I run commands at human speed, because I can't understand full-speed execution if I cannot understand snail-speed execution.
I am planning
a full-speed benchmark, but bear with me for now.
On 2021-08-24, Derek Ignatius Asirvadem wrote:
Ok. More on that below.

Example 2 (assume SERIALIZABLE):
2. As an academic example, yes, it is a [true] deadlock.
3. In real life, the two UPDATES bracketed by BEGIN-COMMIT TRAN in Connection[T1] are Atomic, in the same contiguous code block, and thus will execute at Time[0].
4. There is no “interleaving” of Atomic operations.
The notion of “interleaved operations” is plain stupid, self-contradictory. The Transaction is Atomic.
Sybase has no “interleaved operations”, it preserves the Atom. Sybase has no internal deadlocks, all deadlocks are true (coding in the app).
You are not getting it.

I am starting to. In general, how is an Atomic contiguous code block defined? Asked the other way round, what does break a contiguous code block, allowing interleaving? Surely, BEGIN and COMMIT. Anything else in between?
I offer getting rid of deadlocks [of that nature, due to lock duration, as opposed to poor code], without changing either the database or the app code, as a fixed price service. If you are interested, I can provide proof.

Yes, please. I do read your links. I do not always understand their dense content at once.
Yes.

-------------------------------------------
-- Example 2 Sybase Side --
-------------------------------------------
Let’s say Row[Tom] is on Page[Px].
__ at Time[2]
____ Connection[T1] obtains a Shared-Page-Lock on Page[Px], until EoT
__ at Time[3]
____ Connection[T2] obtains an Update[Intent]-Page-Lock on Page[Px]
(which does not escalate to Exclusive), and the lock is chained onto Page[Px], behind Connection[T1]’s lock
Ok.
____ Connection[T2] is in TaskStatus *sleep*, in LockStatus *lock-wait*

So far, so good.
__ at Time[4]
____ Connection[T1] lock on Page[Px] is escalated [change of
lock-type] to Update[Intent]-Page-Lock (succeeds),
Ok.
and thence to Exclusive-Page-Lock (succeeds)
With all due respect, are you sure? T2 already has an Update lock on
that page. T1 cannot acquire an Exclusive lock *on the same page*,
right? At least, this is my understanding from the compatibility table
at §1.5 in ASE's Locking and Concurrency Control guide.
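One way to settle this empirically, rather than from the manual, is to replay the snail-speed interleaving and watch the lock chain from a third isql session; sp_who and sp_lock are standard ASE procedures, only the reading of their output is sketched here:

-- Third isql session, while T1 and T2 are parked at Times [2]..[4]:
sp_who
go
-- shows which spid is runnable and which is sleeping in a lock wait
sp_lock
go
-- lists, per spid, the shared / update / exclusive page locks held or requested on Page[Px]
-- Re-running sp_lock after each step shows whether Connection[T1]'s shared lock is in fact
-- promoted to exclusive at Time[4], or whether it queues behind Connection[T2]'s update
-- lock (in which case a 1205 deadlock follows).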
[At time [4], Sybase] rolls back a transaction, which happens to be
T1. T2 can now commit.
No. As explained in detail above. There is no deadlock. Both Connections[T1][T2] succeed.
Really? See above.
1. Repeating:
I think we agree: in PoopGres, where it is reported as (a) rollbacks,
or (b) “deadlocks”, or (c) “serialisation failures”,
Only (b) and (c). (a) is a consequence of (b) or (c), not a separate error.
it is one logical category: the effects that should not happen, but that do happen.

Well, yes.
Let’s call that category FalseDeadlocks, because even the [b] reported deadlocks are false, not caused by the app code, but internal.
It may be that PostgreSQL deadlocks in situations in which Sybase does
not. Not sure that Example 2 is such a case: waiting for your reply.
3. “Kill”.
I hope you mean rolled back.

Yes.
2. Can there be a concurrent interleaving of operations that leads to
a deadlock in Sybase, but the same concurrent interleaving does not
yield a deadlock or a serialization error in PostgreSQL? No, because
a concurrent interleaving of operations leading to a deadlock is an
incorrect interleaving. So, if Sybase "encounters a deadlock
situation", then PostgreSQL must also produce an error in that same
situation, otherwise it would output an incorrect result. The
difference is that PostgreSQL may output a "serialization error",
rather than a deadlock error.
Definitely not. As explained above, so I will not repeat.
I don't think that what you have said (whatever Sybase does) contradicts
my point. My point starts from the assumption that clients concurrently submit a set of transactions *and* Sybase returns 1205 to one of them
(say, badly coded applications). That means that Sybase had to roll back
a transaction to prevent incorrect behaviour.
Now, if the clients submit the same transactions to a PostgreSQL server, *and* if the server schedules those transactions *the same way* as
Sybase did, then PostgreSQL *must* return an error to one of them.
So, the point is: PostgreSQL cannot make all the transactions commit
where Sybase has rolled back one, because Sybase did that to prevent an incorrect execution. PostgreSQL would have a bug if it did not do the
same.
The converse, however, is not true (that was my other remark). There
are cases in which PostgreSQL rolls back some transaction, but Sybase,
under the same conditions, is able to commit all of them. This is one
such situation:
Example 3
Wall time | T1 | T2
----------|-------------------|------------------
(0) | begin |
(1) | | begin
(2) | | update Tom's data
(3) | update Tom's data |
(4) | | commit
(5) | commit |
While Sybase makes T1 wait until T2 commits and then commits T1,
PostgreSQL rolls back T1 as soon as T2 commits.
I agree with you that this is bad.
As a real world possibility, it is the usual insanity; nonsense,
because you are splitting the Atom (the Whole) (everything between
each BEGIN-COMMIT pair), and then fiddling and farting with the Parts,
like the other famous shell game you guys play to “find” the Key
because you have denied the extant Key. In the real world, Atoms
remain Atoms, not fragments. Even on the PeePeeGres side, as long as
the code is contiguous, as opposed to being artificially spread out on
a piece of paper, [T2] will execute after [T1], there will not be
a rollback.
On Sunday, 29 August 2021 at 23:11:23 UTC+10, Nicola wrote:
(btw, thanks for the tips from the Sybase
questions thread!).
So, I have run some tests. One of them is based on one of my previous examples, which I summarize here for clarity (full code at the end of this message):
T1: select Tom's balance; update Tom's balance.
T2: update Tom's balance.
In Sybase:
- when the statements of T1 and T2 are submitted to the server
individually, but within a begin.. commit block, then I can obtain
a deadlock. I think that this is to be expected.
It's like the "snail
speed" manual test.
- Interestingly, when T1 and T2 are implemented as stored procedures, no deadlocks occur, even when the number of concurrent threads is very
high (I have tried with up to 1000 threads),
no matter whether I open
two connections overall or one connection per thread (although the
free version has a limit on the number of connections, so I could use
<100 threads in this case).
So, it appears that no context switch ever
happens during the execution of T1 (resp., T2). Is that a correct interpretation?
- I have added an artificial delay in T1 between select and update.
Still, no deadlocks,
although I get several 5s timeouts. Is that
because a transaction is rolled back if it's waiting for too long?
In PostgreSQL:
- you cannot set the isolation level within a stored procedure; that must be done before calling the stored procedure (see the sketch after this list).
- Even with a couple of threads, a serialization failure can be obtained:
ERROR: could not serialize access due to concurrent update
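Returning to the first point about isolation levels: the caller has to open the transaction at the desired level and then invoke the procedure inside it. A minimal sketch, with a hypothetical procedure name t1_proc:

BEGIN ISOLATION LEVEL SERIALIZABLE;
CALL t1_proc();      -- the procedure must not COMMIT/ROLLBACK itself when called inside a block
COMMIT;

-- or, equivalently:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;   -- must precede the first query of the transaction
CALL t1_proc();
COMMIT;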
PostgreSQL's behaviour is consistent with what was previously discussed,
so I don't have anything else to add,
except that
Even on the PeePeeGres side, as long as
the code is contiguous, as opposed to being artificially spread out on
a piece of paper, [T2] will execute after [T1], there will not be
a rollback.
I find that not to be the case. If you know how I could run T1 and T2 concurrently without errors, please let me know.
######################################################################
# $ ruby test-ase.rb
# Dropping data...
# Populating db...
# Started At 2021-08-29 15:05:43 +0200
# End at 2021-08-29 15:05:43 +0200
# Elapsed time: 178ms
# Number of errors: 0
# Account table:
# {:balance=>1040}
On Monday, 30 August 2021 at 17:39:27 UTC+10, Nicola wrote:
On 2021-08-29, Nicola wrote:
- Interestingly, when T1 and T2 are implemented as stored procedures, no deadlocks occur

Wait. I haven't started a transaction. After adding begin transaction.. commit to each stored procedure, I do get deadlocks. That makes more sense to me.
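For concreteness, after that change the procedure for T1 would have roughly the following shape. Procedure, table and column names here are guesses; the actual test code lives in the Ruby harness, which this archive does not reproduce.

create procedure t1_proc
as
    set transaction isolation level 3        -- SERIALIZABLE
    begin transaction
        declare @b int
        select @b = balance from account where name = 'Tom'   -- shared lock, held to end of tran at level 3
        update account set balance = @b - 10 where name = 'Tom'
    commit transaction
go

Without the begin transaction .. commit pair, each statement runs as its own transaction in unchained mode, so the select's lock is released before the update starts and the read-then-write cycle of Example 2 cannot form between them; that is consistent with the earlier observation of no deadlocks.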
On Monday, 30 August 2021 at 17:39:27 UTC+10, Nicola wrote:
On 2021-08-29, Nicola wrote:
- Interestingly, when T1 and T2 are implemented as stored procedures, no deadlocks occur
Wait. I haven't started a transaction.
Yes, that was not clear. Refer my questions in my response.
After adding begin transaction..
commit to each stored procedure, I do get deadlocks. That makes more
sense to me.
1. Per my comments, re the lock duration, yes, that makes more sense.
2. But from a Transaction perspective, no, it doesn't make sense to me.
__ stored proc [T1] is not holding anything that [T2] holds, so a deadlock between them is not possible.
3. Therefore (debugging this remotely, with the little detail
provided), it would be that you do have several concurrent [T1]
connections that are deadlocking each other, nothing to do with [T2].
Best to read my response and provide more detail (summary) for each test.
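To confirm which connections are actually party to those deadlocks, rather than inferring it remotely, ASE can be told to report each deadlock chain itself; "print deadlock information" is a standard sp_configure parameter (it adds overhead, so it is a diagnostic switch, not a permanent setting):

-- From any isql session with sa_role:
sp_configure "print deadlock information", 1
go
-- Re-run the test; for every deadlock, the ASE errorlog now lists the spids,
-- the statements or stored procedures involved, and the locks held and requested,
-- which distinguishes T1-vs-T1 deadlocks from T1-vs-T2 deadlocks.
sp_configure "print deadlock information", 0
go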