SELECT
    #LOG.m_used AS [Used],
    #LOG.m_max AS [Max],
    #LOG.m_used - #LOG.m_max AS [Overrun],
    PREV.m_used - #LOG.m_used AS [Delta Used],
    PREV.m_max - #LOG.m_max AS [Delta Max]
FROM #LOG
LEFT JOIN #LOG PREV ON PREV.N = #LOG.N - 1  -- PREV = the preceding sample
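For context, #LOG holds one row per sample, numbered by N in collection order. The collector itself is not shown; it might look something like this sketch (sys.dm_os_process_memory and sys.configurations are my assumed sources, not necessarily what the real collector uses):

-- Sketch of a collector for #LOG: one row per sample.
CREATE TABLE #LOG (
    N      int IDENTITY(1,1) PRIMARY KEY,
    m_used int,   -- actual process memory, in MB
    m_max  int    -- configured 'max server memory', in MB
);

INSERT INTO #LOG (m_used, m_max)
SELECT (SELECT physical_memory_in_use_kb / 1024
        FROM sys.dm_os_process_memory),
       (SELECT CAST(value_in_use AS int)
        FROM sys.configurations
        WHERE name = N'max server memory (MB)');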
As you can see from the Overrun column, most of the time actual
memory usage exceeds the configured limit by 31-33 MB, occasionally
coinciding with it. At those moments an interesting anomaly takes
place: a higher memory limit results in a further decrease of used
memory. For example, a limit of 857 MB caused memory usage to drop
to 888 MB, but then a higher limit of 887 MB decreased usage
further, to exactly 887 MB!
Is this behavior documented anywhere, and is there a method of
controlling used memory stably and predictably? All I can think of
is setting the limit to at least 34 MB less than current usage, but
this may depend on the MSSQL version and environment...
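To illustrate that heuristic: the current usage figure can be read from sys.dm_os_process_memory (my assumption for the source; the 34 MB margin is just the overrun observed above, nothing documented):

-- Sketch: derive a limit 34 MB below current process memory.
SELECT physical_memory_in_use_kb / 1024 - 34 AS suggested_max_mb
FROM sys.dm_os_process_memory;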
"Max server memory" mainly controls the buffer cache, which is also
the main consumer. But there are also memory allocations that fall
outside "max server memory", so it is not unusual for the actual
usage to exceed the setting a little bit.
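One way to see where the memory goes is to list the largest memory clerks (a sketch of mine, assuming SQL 2012 or later, where clerk sizes are reported in pages_kb):

-- The buffer pool (MEMORYCLERK_SQLBUFFERPOOL) is normally by far
-- the largest clerk; the rest shows what else is holding memory.
SELECT TOP (10)
       type,
       SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY size_mb DESC;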
Don't make this too dynamic. I don't know for sure, but I would
assume that "max server memory" is one of the options that clears
the plan cache, and thus causes a lot of recompilations. (Because
the amount of available memory affects the compilation of queries.)
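If you want to verify that assumption, a rough check (my sketch, not something from this thread) is to count cached plans around a change of the setting; if the cache is indeed flushed, the second count should drop sharply:

-- 'max server memory (MB)' is an advanced option, so
-- 'show advanced options' must be enabled for sp_configure.
SELECT COUNT(*) AS plans_before FROM sys.dm_exec_cached_plans;

EXEC sp_configure 'max server memory (MB)', 4096;  -- example value
RECONFIGURE;

SELECT COUNT(*) AS plans_after FROM sys.dm_exec_cached_plans;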
We have a server with usually seven to ten active MSSQL instances,
of which only a few are intensively used at any given moment.
Manually stopping unused instances and starting required ones
several times a week is too burdensome, for several reasons. In
order to improve RAM utilisation, I have written a simple script
that tries to decrease memory use by instances that are not in
active operation.
[...]
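(The script itself is omitted above; presumably the core of any such balancer is the documented sp_configure route, along these lines, with the 2048 MB target purely illustrative:)

-- Lower the memory ceiling on an idle instance; SQL Server then
-- trims its caches down toward the new target.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;  -- illustrative
RECONFIGURE;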
I continue to experiment with my memory balancer, and its worst
defect is that sometimes, by decreasing 'max server memory', it
causes the instance to freeze and become unresponsive until
restarted. There are three ranges of this setting:
freeze     high CPU        works OK, zero CPU
until      usage while     usage while idle
restart    idle
|---------|---------------|--------------------->
0         V1              V2    max server memory
where V1 and V2 are idiosyncratic to each instance. I
have tried to estimate V1 and V2 using:
[...]
Do you have an idea how to estimate the minimum working value
for 'max server memory', that is, V2 in the diagram above?
You don't say which absolute values you are working with.
What I've noticed is that if you set "max server memory"
to really low values, below 400 MB, SQL Server will not
even start.
I would not be surprised if you can set "max server memory" to
higher values and still send your server into nirvana.
Deliberately, because I want to arrive at a universal approach
that does not depend on hard-coded values of V1 and V2 but rather
estimates them. In my case, V1 and V2 are usually between 190 and
350 MB. My databases are usually between 5 and 50 GB, with 5-15
databases per instance, and I see no correlation between DB size
and minimum required RAM...
And I am surprised, because an enterprise system should either
refuse a setting that sends it into limbo or comply with it, even
at the expense of terribly slow performance. Becoming unresponsive
and unable to start is bad manners, especially because setting
'max server memory' requires that the instance be running! The
documentation says the minimum value is 128 MB.
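For reference, the bounds the instance itself enforces can be read from sys.configurations (query mine):

-- minimum/maximum are the limits sp_configure will accept.
SELECT name, minimum, maximum, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';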
Even 350 MB is too low in my opinion. I have not tested
rigorously, but my gut feeling is that you need at least 400 MB.
Had you been flipping between, let's say, 2 GB and 8 GB, I could
have had some sympathy for you. Now I only say: don't do that.
I can certainly sympathize with that opinion, but it may not be
trivial to implement. Say that you have a server with 2 TB of RAM,
and most of that is in use. Say now that for some reason you set
"max server memory" to 10 GB. Do you think the server will stand
up? It will certainly grind to a standstill. (Although in the end
it may not be entirely unresponsive, just unbearably slow.)
And 128 MB was a workable value for SQL 2005 and maybe
also for SQL 2008. But not for SQL 2019.
I have raised this issue with Microsoft, telling them that they
need to raise the minimum value, and I have a feedback item
here: https://feedback.azure.com/d365community/idea/3b7f1124-6225-ec11-b6e6-000d3a4f0da0