The Linux kernel for powerpc since v5.2 has a bug which allows a
malicious KVM guest to crash the host when the host is running on
Power8.

Only machines using Linux as the hypervisor, aka KVM, powernv or bare
metal, are affected by the bug. Machines running PowerVM are not
affected.

The bug was introduced in:

  10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C")

which was first released in v5.2.

The upstream fix is:

  cdeb5d7d890e ("KVM: PPC: Book3S HV: Make idle_kvm_start_guest() return 0 if it went to guest")
  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cdeb5d7d890e14f3b70e8087e745c4a6a7d9f337

which will be included in the v5.16 release.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=206669
[2] https://buildd.debian.org/status/package.php?p=git&suite=experimental
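(Aside: to check which category a given machine falls into, the platform
line in /proc/cpuinfo is a quick indicator; a sketch, not part of the
advisory:)

  # "PowerNV" means Linux runs bare-metal as the hypervisor (potentially
  # affected); "pSeries" means the kernel is itself a guest/LPAR, e.g.
  # under PowerVM or as a KVM guest.
  $ grep -m1 platform /proc/cpuinfo
  platform        : PowerNV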
Hi Michael!
> The Linux kernel for powerpc since v5.2 has a bug which allows a
> malicious KVM guest to crash the host when the host is running on
> Power8.
>
> [...]
>
> The upstream fix is:
>
>   cdeb5d7d890e ("KVM: PPC: Book3S HV: Make idle_kvm_start_guest() return 0 if it went to guest")
>
> which will be included in the v5.16 release.
I have tested these patches against 5.14, but it seems the problem [1]
still remains for me with big-endian guests. I built a patched kernel
yesterday, rebooted the KVM server and let the build daemons do their
work overnight.

When I got up this morning, I noticed the machine was down, so I checked
the serial console via IPMI and saw the same messages again as reported
in [1]:
[41483.963562] watchdog: BUG: soft lockup - CPU#104 stuck for 25521s! [migration/104:175]
[41507.963307] watchdog: BUG: soft lockup - CPU#104 stuck for 25544s! [migration/104:175]
[41518.311200] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[41518.311216] rcu: 136-...0: (135 ticks this GP) idle=242/1/0x4000000000000000 softirq=32031/32033 fqs=2729959
[41547.962882] watchdog: BUG: soft lockup - CPU#104 stuck for 25581s! [migration/104:175]
[41571.962627] watchdog: BUG: soft lockup - CPU#104 stuck for 25603s! [migration/104:175]
[41581.330530] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[41581.330546] rcu: 136-...0: (135 ticks this GP) idle=242/1/0x4000000000000000 softirq=32031/32033 fqs=2736378
[41611.962202] watchdog: BUG: soft lockup - CPU#104 stuck for 25641s! [migration/104:175]
[41635.961947] watchdog: BUG: soft lockup - CPU#104 stuck for 25663s! [migration/104:175]
[41644.349859] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[41644.349876] rcu: 136-...0: (135 ticks this GP) idle=242/1/0x4000000000000000 softirq=32031/32033 fqs=2742753
[41671.961564] watchdog: BUG: soft lockup - CPU#104 stuck for 25697s! [migration/104:175]
[41695.961309] watchdog: BUG: soft lockup - CPU#104 stuck for 25719s! [migration/104:175]
[41707.369190] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[41707.369206] rcu: 136-...0: (135 ticks this GP) idle=242/1/0x4000000000000000 softirq=32031/32033 fqs=2749151
[41735.960884] watchdog: BUG: soft lockup - CPU#104 stuck for 25756s! [migration/104:175]
[41759.960629] watchdog: BUG: soft lockup - CPU#104 stuck for 25778s! [migration/104:175]
[41770.388520] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[41770.388548] rcu: 136-...0: (135 ticks this GP) idle=242/1/0x4000000000000000 softirq=32031/32033 fqs=2755540
[41776.076307] rcu: rcu_sched kthread timer wakeup didn't happen for 1423 jiffies! g49897 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[41776.076327] rcu: Possible timer handling issue on cpu=32 timer-softirq=1056014
[41776.076336] rcu: rcu_sched kthread starved for 1424 jiffies! g49897 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=32
[41776.076350] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
[41776.076360] rcu: RCU grace-period kthread stack dump:
[41776.076434] rcu: Stack dump where RCU GP kthread last ran:
[41783.960374] watchdog: BUG: soft lockup - CPU#104 stuck for 25801s! [migration/104:175]
[41807.960119] watchdog: BUG: soft lockup - CPU#104 stuck for 25823s! [migration/104:175]
[41831.959864] watchdog: BUG: soft lockup - CPU#104 stuck for 25846s! [migration/104:175]
[41833.407851] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[41833.407868] rcu: 136-...0: (135 ticks this GP) idle=242/1/0x4000000000000000 softirq=32031/32033 fqs=2760381
[41863.959524] watchdog: BUG: soft lockup - CPU#104 stuck for 25875s! [migration/104:175]
It seems that in this case, it was the testsuite of the git package [2]
that triggered the bug. As you can see from the overview, the git
package has been in the "building" state for 8 hours, meaning the build
server crashed and is no longer reporting back to the database.
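(For reference, a serial console like the one mentioned above is usually
reached via IPMI serial-over-LAN; a sketch with a hypothetical BMC
address and user:)

  $ ipmitool -I lanplus -H bmc.example.org -U admin sol activate
  # detach from the SOL session again with the escape sequence "~."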
> Hi Michael!
>
> I have tested these patches against 5.14, but it seems the problem [1]
> still remains for me with big-endian guests. I built a patched kernel
> yesterday, rebooted the KVM server and let the build daemons do their
> work overnight.
>
> When I got up this morning, I noticed the machine was down, so I
> checked the serial console via IPMI and saw the same messages again as
> reported in [1]:
>
> [... soft-lockup / RCU-stall log snipped, quoted in full above ...]
I did test the repro case you gave me before (in the bugzilla), which
was building glibc, and that passes for me with a patched host.

I guess we have yet another bug.

I tried the following in a Debian BE VM and it completed fine:

  $ dget -u http://ftp.debian.org/debian/pool/main/g/git/git_2.33.1-1.dsc
  $ sbuild -d sid --arch=powerpc --no-arch-all git_2.33.1-1.dsc

Same for ppc64.

And I also tried both at once, repeatedly in a loop.

I guess it's something more complicated.

What exact host/guest kernel versions and configs are you running?
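(For gathering that, something along these lines works on Debian; the
specific config symbols to check are my suggestion, not from the
thread:)

  # on both host and guest
  $ uname -r
  $ grep -E 'CONFIG_KVM_BOOK3S_64_HV|CONFIG_PPC_POWERNV' /boot/config-$(uname -r)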
John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> writes:
> Hi Michael!
>
> On 10/27/21 07:30, Michael Ellerman wrote:
>> I did test the repro case you gave me before (in the bugzilla), which
>> was building glibc, and that passes for me with a patched host.
>
> Did you manage to crash the unpatched host?

Yes, the parallel builds of glibc you described crashed the unpatched
host 100% reliably for me.

I also have a standalone reproducer I'll send you.

> Also, I'll try a kernel from git with Debian's config.
>
>> I guess we have yet another bug.
>>
>> I tried the following in a Debian BE VM and it completed fine:
>>
>>   $ dget -u http://ftp.debian.org/debian/pool/main/g/git/git_2.33.1-1.dsc
>>   $ sbuild -d sid --arch=powerpc --no-arch-all git_2.33.1-1.dsc
>>
>> Same for ppc64.
>>
>> And I also tried both at once, repeatedly in a loop.
>
> Did you try building gcc-11 for powerpc and ppc64 both at once?

No, I will try that now.
Hi Michael!

On 10/27/21 07:30, Michael Ellerman wrote:
> I did test the repro case you gave me before (in the bugzilla), which
> was building glibc, and that passes for me with a patched host.

Did you manage to crash the unpatched host? If the unpatched host
crashes for you but the patched one doesn't, I will make sure I didn't
accidentally miss anything.

Also, I'll try a kernel from git with Debian's config.

> I guess we have yet another bug.
>
> I tried the following in a Debian BE VM and it completed fine:
>
>   $ dget -u http://ftp.debian.org/debian/pool/main/g/git/git_2.33.1-1.dsc
>   $ sbuild -d sid --arch=powerpc --no-arch-all git_2.33.1-1.dsc
>
> Same for ppc64.
>
> And I also tried both at once, repeatedly in a loop.

Did you try building gcc-11 for powerpc and ppc64 both at once?

> I guess it's something more complicated.
>
> What exact host/guest kernel versions and configs are you running?

Both the host and guest are running Debian's stock 5.14.12 kernel. The
host has a kernel with your patches applied, the guest doesn't.

Let me do some more testing.
On 10/27/21 13:06, Michael Ellerman wrote:
> John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> writes:
>> On 10/27/21 07:30, Michael Ellerman wrote:
>>> I did test the repro case you gave me before (in the bugzilla), which
>>> was building glibc, and that passes for me with a patched host.
>>
>> Did you manage to crash the unpatched host?
>
> Yes, the parallel builds of glibc you described crashed the unpatched
> host 100% reliably for me.

OK, that is very good news!

> I also have a standalone reproducer I'll send you.

Thanks, that would be helpful!

>> Also, I'll try a kernel from git with Debian's config.
>>
>>> I guess we have yet another bug.
>>>
>>> I tried the following in a Debian BE VM and it completed fine:
>>>
>>>   $ dget -u http://ftp.debian.org/debian/pool/main/g/git/git_2.33.1-1.dsc
>>>   $ sbuild -d sid --arch=powerpc --no-arch-all git_2.33.1-1.dsc
>>>
>>> Same for ppc64.
>>>
>>> And I also tried both at once, repeatedly in a loop.
>>
>> Did you try building gcc-11 for powerpc and ppc64 both at once?
>
> No, I will try that now.
>
> That completed fine on my BE VM here.
>
> I ran these in two tmux windows:
>
>   $ sbuild -d sid --arch=powerpc --no-arch-all gcc-11_11.2.0-10.dsc
>   $ sbuild -d sid --arch=ppc64 --no-arch-all gcc-11_11.2.0-10.dsc
>
> The VM has 32 CPUs, with 4 threads per core:
>
>   $ ppc64_cpu --info
>   Core   0:    0*    1*    2*    3*
>   Core   1:    4*    5*    6*    7*
>   Core   2:    8*    9*   10*   11*
>   Core   3:   12*   13*   14*   15*
>   Core   4:   16*   17*   18*   19*
>   Core   5:   20*   21*   22*   23*
>   Core   6:   24*   25*   26*   27*
>   Core   7:   28*   29*   30*   31*

I am not sure what triggered my previous crash, but I don't think it's
related to this particular bug. I will keep monitoring the server in any
case and open a new bug report in case I'm running into similar issues.
> I have tested these patches against 5.14, but it seems the problem [1]
> still remains for me with big-endian guests. I built a patched kernel
> yesterday, rebooted the KVM server and let the build daemons do their
> work overnight.

The following packages were being built at the same time:

 - guest 1: virtuoso-opensource and openturns
 - guest 2: llvm-toolchain-13

I really did a lot of testing today with no issues, and just after I
sent my report to oss-security saying that the machine seemed to be
stable again, the issue showed up :(.
Hi Michael!
On 10/28/21 13:20, John Paul Adrian Glaubitz wrote:
> It seems I also can no longer reproduce the issue, even when building
> the most problematic packages, and I think we should consider it fixed
> for now. I will keep monitoring the server, of course, and will let you
> know in case the problem shows again.

The host machine is stuck again, but I'm not 100% sure what triggered
the problem:
[194817.984249] watchdog: BUG: soft lockup - CPU#80 stuck for 246s! [CPU 2/KVM:1836]
[194818.012248] watchdog: BUG: soft lockup - CPU#152 stuck for 246s! [CPU 3/KVM:1837]
[194825.960164] watchdog: BUG: soft lockup - CPU#24 stuck for 246s! [khugepaged:318]
[194841.983991] watchdog: BUG: soft lockup - CPU#80 stuck for 268s! [CPU 2/KVM:1836]
[194842.011991] watchdog: BUG: soft lockup - CPU#152 stuck for 268s! [CPU 3/KVM:1837]
[194849.959906] watchdog: BUG: soft lockup - CPU#24 stuck for 269s! [khugepaged:318]
[194865.983733] watchdog: BUG: soft lockup - CPU#80 stuck for 291s! [CPU 2/KVM:1836]
[194866.011733] watchdog: BUG: soft lockup - CPU#152 stuck for 291s! [CPU 3/KVM:1837]
[194873.959648] watchdog: BUG: soft lockup - CPU#24 stuck for 291s! [khugepaged:318]
[194889.983475] watchdog: BUG: soft lockup - CPU#80 stuck for 313s! [CPU 2/KVM:1836]
[194890.011475] watchdog: BUG: soft lockup - CPU#152 stuck for 313s! [CPU 3/KVM:1837]
[194897.959390] watchdog: BUG: soft lockup - CPU#24 stuck for 313s! [khugepaged:318]
[194913.983218] watchdog: BUG: soft lockup - CPU#80 stuck for 335s! [CPU 2/KVM:1836]
[194914.011217] watchdog: BUG: soft lockup - CPU#152 stuck for 335s! [CPU 3/KVM:1837]
[194921.959133] watchdog: BUG: soft lockup - CPU#24 stuck for 336s! [khugepaged:318]
Soft lockup should mean it's taking timer interrupts still, just not
scheduling. Do you have the hard lockup detector enabled as well [1]? Is
there anything stuck spinning on another CPU?

Do you have the full dmesg / kernel log for this boot?

Could you try a sysrq+w to get a trace of blocked tasks?

Are you able to shut down the guests and exit qemu normally?

[1] https://www.kernel.org/doc/html/latest/admin-guide/lockup-watchdogs.html
Hi Michael!

On 10/28/21 08:39, Michael Ellerman wrote:
> That completed fine on my BE VM here.
>
> I ran these in two tmux windows:
>
>   $ sbuild -d sid --arch=powerpc --no-arch-all gcc-11_11.2.0-10.dsc
>   $ sbuild -d sid --arch=ppc64 --no-arch-all gcc-11_11.2.0-10.dsc

Could you try gcc-10 instead? Its testsuite has crashed the host for me
with a patched kernel twice now.

  $ dget -u https://deb.debian.org/debian/pool/main/g/gcc-10/gcc-10_10.3.0-12.dsc
  $ sbuild -d sid --arch=powerpc --no-arch-all gcc-10_10.3.0-12.dsc
  $ sbuild -d sid --arch=ppc64 --no-arch-all gcc-10_10.3.0-12.dsc
Sure, will give that a try.

I was able to crash my machine over the weekend building openjdk, but I
haven't been able to reproduce it for ~24 hours now (I didn't change
anything).

Can you try running your guests with no SMT threads? I think one of your
guests was using:

  -smp 32,sockets=1,dies=1,cores=8,threads=4

Can you change that to:

  -smp 8,sockets=1,dies=1,cores=8,threads=1

And something similar for the other guest(s).

If the system is stable with those settings, that would be useful
information, and would also mean you could use the system without it
crashing semi-regularly.
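(Since the guests here appear to be libvirt-managed, per the later
virt-manager mention, the same change can be made in the domain XML; the
domain name below is hypothetical:)

  # inspect the current vCPU topology
  $ virsh dumpxml buildd-guest | grep -E '<vcpu|topology'
  # change threads='4' to threads='1' (and the vcpu count from 32 to 8)
  $ virsh edit buildd-guest
  # after restarting the guest, verify from inside it:
  $ lscpu | grep 'Thread(s) per core'
  Thread(s) per core:  1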
I made another experiment and upgraded the host to 5.15-rc7, which
contains your fixes, and made the guests build gcc-10. Interestingly,
this time the gcc-10 build crashed the guest but didn't manage to crash
the host. I will update the guest to 5.15-rc7 now as well and see how
that goes.
Hi Nicholas!

On 10/29/21 02:41, Nicholas Piggin wrote:
> Soft lockup should mean it's taking timer interrupts still, just not
> scheduling. Do you have the hard lockup detector enabled as well? Is
> there anything stuck spinning on another CPU?
>
> Could you try a sysrq+w to get a trace of blocked tasks?

Not sure how to send a magic SysRq over the IPMI serial console. Any idea?
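(Two common options, assuming the SysRq facility is enabled via the
kernel.sysrq sysctl:)

  # locally on the host, if it still accepts input
  # (requires e.g. `sysctl -w kernel.sysrq=1`):
  # echo w > /proc/sysrq-trigger
  #
  # over an ipmitool SOL session: type the escape sequence "~B" to send
  # a serial break, then press "w" within a few seconds.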
Hi Michael!
Sorry for the long time without any response. Shall we continue
debugging this? We're currently running 5.15.x on both the host system
and the guests, and the testsuite for gcc-9 still reproducibly kills the
KVM host.
On 11/1/21 07:53, Michael Ellerman wrote:
> Can you try running your guests with no SMT threads? I think one of
> your guests was using:
>
>   -smp 32,sockets=1,dies=1,cores=8,threads=4
>
> Can you change that to:
>
>   -smp 8,sockets=1,dies=1,cores=8,threads=1
>
> And something similar for the other guest(s).
>
> If the system is stable with those settings, that would be useful
> information, and would also mean you could use the system without it
> crashing semi-regularly.
> We're currently running 5.15.x on both the host system and the guests,
> and the testsuite for gcc-9 still reproducibly kills the KVM host.

Have you been able to try the different -smp options I suggested?

Can you separately test with (on the host):

  # echo 0 > /sys/module/kvm_hv/parameters/dynamic_mt_modes
> Can you separately test with (on the host):
>
>   # echo 0 > /sys/module/kvm_hv/parameters/dynamic_mt_modes

I'm trying to turn off "dynamic_mt_modes" first and see if that makes
any difference. I will report back.
On 1/7/22 12:20, John Paul Adrian Glaubitz wrote:
>> Can you separately test with (on the host):
>>
>>   # echo 0 > /sys/module/kvm_hv/parameters/dynamic_mt_modes
>
> I'm trying to turn off "dynamic_mt_modes" first and see if that makes
> any difference. I will report back.

So far the machine is running stable now, and the VM built gcc-9 without
crashing the host. I will continue to monitor the machine and report
back if it crashes, but it looks like this could be it.
On 1/9/22 23:17, John Paul Adrian Glaubitz wrote:
> On 1/7/22 12:20, John Paul Adrian Glaubitz wrote:
>>> Can you separately test with (on the host):
>>>
>>>   # echo 0 > /sys/module/kvm_hv/parameters/dynamic_mt_modes
>>
>> I'm trying to turn off "dynamic_mt_modes" first and see if that makes
>> any difference. I will report back.
>
> So far the machine is running stable now, and the VM built gcc-9
> without crashing the host. I will continue to monitor the machine and
> report back if it crashes, but it looks like this could be it.

So, it seems that turning off "dynamic_mt_modes" actually did the trick;
the host is no longer crashing. However, I have observed on two
occasions now that the build VM is just suddenly off, as if someone had
shut it down using the "force-off" option in the virt-manager user
interface.
Hi Michael!

On 1/13/22 01:17, John Paul Adrian Glaubitz wrote:
> So, it seems that turning off "dynamic_mt_modes" actually did the
> trick; the host is no longer crashing. However, I have observed on two
> occasions now that the build VM is just suddenly off, as if someone had
> shut it down using the "force-off" option in the virt-manager user
> interface.

Just as a heads-up: ever since I set

  echo 0 > /sys/module/kvm_hv/parameters/dynamic_mt_modes

on the host machine, I never saw the crash again. So the issue seems to
be related to the dynamic_mt_modes feature.

Thanks,
Adrian
--
.''`. John Paul Adrian Glaubitz
: :' : Debian Developer - glaubitz@debian.org
`. `' Freie Universitaet Berlin - glaubitz@physik.fu-berlin.de
`- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
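(A closing note on making that workaround stick across reboots: the
usual approach is a modprobe.d entry, assuming kvm_hv is built as a
module, as it is in Debian's stock kernels:)

  # /etc/modprobe.d/kvm_hv.conf -- applied when the module is loaded
  options kvm_hv dynamic_mt_modes=0

  # verify after the module is (re)loaded:
  $ cat /sys/module/kvm_hv/parameters/dynamic_mt_modes
  0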