• Current limitations of VMS X86?

    From Stefan Möding@21:1/5 to All on Thu Dec 7 14:09:19 2023
    Hi!

    Reading the vsi-openvms-x86-64-v921-installation-guide left me a bit
    confused. Maybe someone can enlighten me on the following two issues I am wondering about:

    Cluster with shared system disk

    Is this possible on VMS X86? The installation manual describes a cluster
    setup using a cloned disk for the second node. This looks to me as if
    currently only MSCP-served local disks are possible in a VMS X86 cluster.

    Terminal connection for installation

    A serial port attached to the ESXi host seems to be required for the
    installation. Is this used only for the installation, or will this be the
    operator console later on? Can VMS X86 run without a serial console? If
    the console is attached to an ESXi network port, then running a VMS
    machine on a VMware cluster where the VM can be migrated from one ESXi
    host to another does not seem feasible.

    --
    Stefan

  • From Hans Bachner@21:1/5 to All on Fri Dec 8 16:47:00 2023
    Hello Stefan,

    Stefan Möding wrote on 07.12.2023 at 14:09:
    > Hi!
    >
    > Reading the vsi-openvms-x86-64-v921-installation-guide left me a bit
    > confused. Maybe someone can enlighten me on the following two issues I
    > am wondering about:
    >
    > Cluster with shared system disk
    >
    > Is this possible on VMS X86? The installation manual describes a cluster
    > setup using a cloned disk for the second node. This looks to me as if
    > currently only MSCP-served local disks are possible in a VMS X86 cluster.

    I'm not sure about the current situation.

    My latest information says that you can't use a shared SCSI disk as a
    system disk when running under VMware. I don't know about the other
    virtualization products. I think you *can* use a fibre channel disk as a
    shared system disk.

    > Terminal connection for installation
    >
    > A serial port attached to the ESXi host seems to be required for the
    > installation. Is this used only for the installation, or will this be the
    > operator console later on? Can VMS X86 run without a serial console? If
    > the console is attached to an ESXi network port, then running a VMS
    > machine on a VMware cluster where the VM can be migrated from one ESXi
    > host to another does not seem feasible.

    No, (access to) a serial port on the ESXi host is not required. You
    define a serial port for the VM you want to use for VMS, and connect to
    that via a raw TCP/IP port.
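
    For illustration, a minimal sketch of the kind of .vmx settings this
    refers to, assuming a network-backed virtual serial port that listens as
    a server on TCP port 2023 (the port number is arbitrary; check the exact
    option names against the VMware documentation for your ESXi version):

        serial0.present = "TRUE"
        serial0.fileType = "network"
        serial0.network.endPoint = "server"
        serial0.fileName = "tcp://:2023"

    With something like that in place, you connect to the VMS console by
    pointing a raw TCP (or telnet) client at the ESXi host's address on that
    port; the ESXi firewall rule for remote serial ports usually has to be
    enabled as well.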

    Hope this helps,
    Hans.

  • From Stefan Möding@21:1/5 to Hans Bachner on Sat Dec 9 10:47:37 2023
    Hi Hans!

    Hans Bachner <hans@bachner.priv.at> writes:

    > My latest information says that you can't use a shared SCSI disk as
    > a system disk when running under VMware.

    That's also what I understand from the installation manual. The
    clustering manual also refers only to Alpha and Itanium.

    > I think you *can* use a fibre channel disk as a shared system disk.

    And this would be on bare metal or also on VMware (using a raw device
    instead of a VMDK)?


    > No, (access to) a serial port on the ESXi host is not required. You
    > define a serial port for the VM you want to use for VMS, and connect
    > to that via a raw TCP/IP port.

    I was referring to the TCP/IP port. But this would force the VM to stay
    on the same ESXi host. I could live with that if it were only necessary
    during installation.

    A VMware cluster normally migrates VMs between ESXi hosts to optimize
    resources. Even restarting the VM on a different ESXi host after a
    hardware failure would be more complicated if the console had to be
    "rewired" on the new host first.

    There seems to be serial port concentrator software for VMware that
    allows VM migration. But that's a licensed product from yet another
    company.

    Thanks!

    --
    Stefan

  • From Jan-Erik Söderholm@21:1/5 to All on Sat Dec 9 14:40:34 2023
    On 2023-12-09 at 10:47, Stefan Möding wrote:

    > I was referring to the TCP/IP port. But this would force the VM to stay
    > on the same ESXi host. I could live with that if it were only necessary
    > during installation.
    >
    > A VMware cluster normally migrates VMs between ESXi hosts to optimize
    > resources. Even restarting the VM on a different ESXi host after a
    > hardware failure would be more complicated if the console had to be
    > "rewired" on the new host first.
    >
    > There seems to be serial port concentrator software for VMware that
    > allows VM migration. But that's a licensed product from yet another
    > company.


    But for a configured and running OpenVMS system, there is
    really no need for a "console".

    And if properly set up with "auto-boot", it should boot just
    fine without a connection to the console IP port.

  • From Robert A. Brooks@21:1/5 to All on Sat Dec 9 09:09:21 2023
    On 12/9/2023 4:47 AM, Stefan Möding wrote:
    > Hi Hans!
    >
    > Hans Bachner <hans@bachner.priv.at> writes:
    >
    >> My latest information says that you can't use a shared SCSI disk as
    >> a system disk when running under VMware.
    >
    > That's also what I understand from the installation manual. The
    > clustering manual also refers only to Alpha and Itanium.
    >
    >> I think you *can* use a fibre channel disk as a shared system disk.
    >
    > And this would be on bare metal or also on VMware (using a raw device
    > instead of a VMDK)?

    The only supported method to access fibre channel disks is if they
    are served via MSCP.
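
    For context, a minimal sketch of the MODPARAMS.DAT entries usually
    involved in MSCP-serving local disks to the rest of the cluster (the
    values shown are the common ones; check the current SYSGEN/AUTOGEN
    documentation before relying on them):

        ! Load the MSCP disk server at boot
        MSCP_LOAD = 1
        ! Serve locally connected disks (other bit values select other policies)
        MSCP_SERVE_ALL = 2

    followed by an AUTOGEN pass such as @SYS$UPDATE:AUTOGEN GETDATA REBOOT
    to apply the changes.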

    Shared system disks are not supported on X86_64.

    We are investigating ESXi's fibre channel passthrough mechanism, which
    would allow direct access to fibre channel storage. Booting, however, is
    a bit more complicated, so if we do support this passthrough mechanism,
    the initial support would be data-only.

    We realize there are ways to get booting through fibre channel
    passthrough to work, but it takes some effort getting the relevant boot
    drivers in place.

    --

    --- Rob

  • From Stefan Möding@21:1/5 to Robert A. Brooks on Sat Dec 9 15:20:54 2023
    Hi Robert,

    "Robert A. Brooks" <FIRST.LAST@vmssoftware.com> writes:

    > Shared system disks are not supported on X86_64.

    Thanks for the clarification!

    --
    Stefan

  • From Arne Vajhøj@21:1/5 to Robert A. Brooks on Sat Dec 9 10:32:02 2023
    On 12/9/2023 9:09 AM, Robert A. Brooks wrote:

    > The only supported method to access fibre channel disks is if they
    > are served via MSCP.
    >
    > Shared system disks are not supported on X86_64.
    >
    > We are investigating ESXi's fibre channel passthrough mechanism, which
    > would allow direct access to fibre channel storage. Booting, however,
    > is a bit more complicated, so if we do support this passthrough
    > mechanism, the initial support would be data-only.
    >
    > We realize there are ways to get booting through fibre channel
    > passthrough to work, but it takes some effort getting the relevant
    > boot drivers in place.

    You need some exciting new features to put in version 10.0!

    :-) :-) :-)

    I suspect the demand for VMS clusters is less now than it
    was 30 years ago.

    In the old days, N MicroVAXes were cheaper than a 6000 with N CPUs, and
    N DS-series boxes were cheaper than a GS with N CPUs. But today an
    x86-64 VM with 16 or 32 vCPUs is nothing, so there is no need for
    clustering to scale horizontally for existing VMS applications.

    Then there is redundancy. But a lot of newer applications do not
    require an OS cluster and are fine doing application-level clustering
    over multiple standalone nodes. There are still some applications
    built on VMS clusters - Rdb comes to mind - but having 2 separate
    system disks is just a minor annoyance (*).

    *) And I have done it. It has always been necessary for
    mixed-architecture clusters (VAX & Alpha in my case).

    Arne
