INFO-VAX        Sun, 22 Jun 2008        Volume 2008 : Issue 346

Contents:
  Re: ACME Authentication issues when LDAP server is down.
  Re: LMF and abandoned products
  Re: PerfectCache on Integrity - anyone else using it?
  Virtualized VMS in clusters (general questions)
  Re: Virtualized VMS in clusters (general questions)

----------------------------------------------------------------------

Date: Sun, 22 Jun 2008 01:46:32 GMT
From: Malcolm Dunnett
Subject: Re: ACME Authentication issues when LDAP server is down.
Message-ID:

Michael D. Ober wrote:
>> It may be "correct", but it's certainly not robust. Certainly not what
>> one would expect of VMS.
>
> Actually, after some thought, it isn't "correct" either.

I think they meant "correct" as in "we intended this behaviour", not as a
comment that the behaviour is useful to the customer.

> A "correct" solution would take the user's actual requirement of the
> login subsystem always working and never hanging into account, which
> means that multiple LDAP servers, or even quick and transparent fallback
> to the VMS UAF for authorization (without having to use the /LOCAL
> switch on the userid) would be a "correct" solution.

Yes, I was rather disappointed with the response; I would think a
"Production Quality" authentication mechanism in the VMS world would be
more robust. Accepting multiple LDAP servers, with a reasonable timeout
between them (maybe 5-10 seconds, certainly much less than the 1 minute it
now has), would be better. Even modifying the current behaviour so that I
could control the time it waits for the LDAP server, and so that it would
bypass the LDAP server entirely when one requests the VMS DOI (when making
an ACME call), would be better than the current situation.

I could re-write my code to do an old-style "$HASH_PASSWORD and read the
SYSUAF entry directly" authentication if the ACME call to the LDAP DOI
times out, but that's a lot of work and defeats the purpose of having a
generalized authentication API. (btw, "my code" is a shim for the OSU
authenticator to allow users to authenticate to the OSU server with
password checking being done against their Active Directory account.)

I can only hope that the "future release" they may consider fixing this in
isn't too far off. I would investigate purchasing the Process Software
product, but money is extremely tight this year (and may be for the next
few years too).

> VMS Engineering's answer falls into the category of being "technically
> correct but totally useless."
>
> Mike.
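As a rough illustration of the behaviour being asked for above -- try each
directory server in turn with a short per-server timeout, and only fall back
to a local SYSUAF-style check when none of them answers -- here is a minimal
Python sketch. It is purely conceptual: the bind and local-check callables,
host names, and timeout are hypothetical stand-ins supplied by the caller,
not part of the ACME subsystem or the LDAP ACME agent.

    def authenticate(username, password, ldap_servers, bind, local_check,
                     per_server_timeout=5.0):
        """Try each LDAP server with a short timeout; fall back to a local
        (SYSUAF-style) check only if no server could be reached.

        `bind(host, user, pw, timeout)` is a caller-supplied callable that
        returns True/False for a completed bind attempt and raises
        TimeoutError/OSError if the server cannot be reached in time."""
        for host in ldap_servers:
            try:
                # A reachable server gives a definitive yes/no answer.
                return bind(host, username, password, per_server_timeout)
            except (TimeoutError, OSError):
                continue      # unreachable or slow server: try the next one
        # No directory server answered quickly enough: use the local check.
        return local_check(username, password)


    if __name__ == "__main__":
        # Toy demonstration with stand-in callables (no real LDAP involved).
        def fake_bind(host, user, pw, timeout):
            raise TimeoutError(f"{host} did not answer within {timeout}s")

        def fake_local(user, pw):
            return pw == "let-me-in"   # stand-in for $HASH_PASSWORD/SYSUAF

        print(authenticate("steve", "let-me-in",
                           ["dc1.example.com", "dc2.example.com"],
                           fake_bind, fake_local))   # True, via the fallback

The point of the structure is simply that a dead or slow directory server
costs at most a few seconds per host, and the login path always terminates.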
------------------------------

Date: Sat, 21 Jun 2008 17:53:14 GMT
From: Roger Ivie
Subject: Re: LMF and abandoned products
Message-ID:

On 2008-06-21, Michael Moroney wrote:
> Roger Ivie writes:
>
>> This is why Steamboat Willie has not yet lapsed into the public domain.
>
> Steamboat Willie was created before the copyright law was changed, and
> would be in the public domain.

Quoting the Wikipedia entry for Steamboat Willie:

> Steamboat Willie has been close to entering the public domain in the
> United States several times. Each time, copyright protection in the
> United States has been extended. ... The U.S. copyright on Steamboat
> Willie will be in effect until at least 2023 unless there is another
> change of the law.
--
roger ivie
rivie@ridgenet.net

------------------------------

Date: Sat, 21 Jun 2008 12:36:36 -0700
From: Marty Kuhrt
Subject: Re: PerfectCache on Integrity - anyone else using it?
Message-ID:

etmsreec@yahoo.co.uk wrote:
> On 17 Jun, 15:21, Jan-Erik Söderholm wrote:
>> etmsr...@yahoo.co.uk wrote:
>>> Hiya,
>>> We've had a couple of sites needing to use PerfectCache on Integrity,
>>> VMS versions 8.3 and 8.3-1H1. The problems that we've experienced
>>> make me suspect that few sites are now using PerfectCache, so I'm
>>> wondering if my suspicions are correct?
>>> Is anyone out there using PerfectCache on Integrity? Experiences?
>>> Thanks in advance
>>> Steve
>>
>> Don't the current cache features in later VMS versions more or less
>> make these 3rd-party cache products obsolete?
>>
>> Jan-Erik.
>
> No!
> XFC is fine on a single node, but if you have a cluster that's opening
> files for write as well as read on more than one node, XFC backs off
> and won't cache it. Not helpful when you want to cache it!

Wow! I just assumed XFC would at least try to match the capabilities of
the third-party caching products. When I was working as tech lead on I/O
Express (Executive Software) and later CacheManager (SyMark), we had
block-level write invalidates to notify other nodes of a write. This
allowed the other nodes in the cluster to carry on without having to
consider the whole file suspect. (We worked at the block level, anyway.)
That was over a decade ago. Of course, since I don't do product support
anymore, I don't research product features. Maybe XFC was updated at some
point.

------------------------------

Date: Sat, 21 Jun 2008 21:18:39 GMT
From: winston@SSRL.SLAC.STANFORD.EDU (Alan Winston - SSRL Central Computing)
Subject: Virtualized VMS in clusters (general questions)
Message-ID: <00A7B732.2A5B08B6@SSRL.SLAC.STANFORD.EDU>

VMSers --

I'm trying to wrap my head around how virtualized VMS systems participate
in certain aspects of clustering and volume shadowing. There may be
something I'm just not getting. So this is kind of general.

At the HP Tech Forum I got to play with booting a virtual Itanium on a
real Itanium blade (which ran HPVM, which runs on HP/UX; my first HP/UX
login ever). I've also encountered the SRI Alpha emulators, and seen the
SimH VAX emulator running. (I was trying to ask the kind of question I'm
asking here at the Wilm's very interesting session on the architecture of
the Alpha emulator, but I made them sound like plain VMS questions; they're
really questions about the interaction of the emulated VMS node with the
cluster, and I didn't formulate them very well in person. I totally get
that what the SRI Alpha emulator provides is an Alpha inside your Windows
box, and that once it's up, VMS is VMS - or Tru-64 is Tru-64, or Linux is
Linux, or whatever.)

I get that if you're standing outside a box running a VAX or Alpha
emulator under some other host operating system, you might as well be
standing outside a VAX or Alpha. The virtual machine interacts over the
Ethernet just like a real machine, and if you can keep the interaction to
the Ethernet, you've just got a fast cheap VAX or Alpha, or more than one.
(I've seen two SimH-emulated VAXes on the same laptop clustered together
over Ethernet, for example.) The host operating system forwards Ethernet
traffic to a virtualized NIC on the virtual VAX or Alpha, which can then
participate in an NI cluster, no problem. (Or maybe the host is able to
have multiple NICs and dedicate one to the virtual machine.)

But I'm curious about access to shared resources in other ways.

I think the relatively trivial answer is to run your virtual nodes as
satellite nodes over Ethernet, and then everything just works.
(Assuming that the host OSes appropriately forward MOP requests and the
responses thereto. Don't know if the host OSes care about protocols or if
they'll just forward anything intended for that MAC address.)

How does access to SAN disks work? (If my real host is Windows or HP/UX or
Linux, and the real host has a Fibre Channel connection, then I have a box
that isn't participating in my Distributed Lock Manager but of necessity
has write access to my cluster disks, which seems like a bad idea. [I mean,
the real host has a WWID and the EVA has to present the disk to the real
host, right?]) Is there some way in which I can dedicate a real-host Fibre
Channel connection to my virtual machines? Do I have to have one real VMS
box with a Fibre Channel connection on the cluster presenting my SAN disks
to the virtualized hosts over MSCP?

What about single-system-disk clusters? Virtualized VMSes don't actually
know what underlying device they've booted from, and they might not
actually have booted from the same device. (You present the disk image,
which might be a container file, a DVD, or whatever, and it looks to the
emulator like a generic disk - in the HPVM case, a generic SCSI disk with
the unit number you give it.) Can multiple virtual VMSes boot from and log
to (do SYSUAF updates, put stuff in a cluster-wide audit log, etc.) a
cluster-common disk? Who arbitrates access to it? Can you do a shadowed
system disk on it? How? Is an all-virtual cluster that uses the
single-system-disk approach possible? On the Itanium-emulated-on-Itanium
approach (which isn't supported until VMS 8.4), can you successfully run a
multi-node cluster all on one physical box with a single system disk?
[There's some appeal to this one because virtual machines inside the box
can communicate via a virtual switch, with traffic never leaving the box -
faster, no packet sniffing, etc. But I'm not clear whether the HP/UX
virtual switch supports, e.g., DECnet, MOP, or SCS packets (although I note
that 8.4 also contains clustering over IP, so if the virtual switch doesn't
support SCS-qua-SCS but does support IP (which it *must*), this could still
work).]

Can you do host-based volume shadowing on SAN disks that are actually being
presented by non-VMS hosts (if that's how that works at all)? How about on
container files presented by the host? (And in _that_ case, does the
enterprise just rely on system managers being very careful to use the same
unit numbers for the same files everywhere, because VMS just doesn't have
the information to let you know you screwed up?)

Any insight appreciated.

Thanks,

-- Alan

------------------------------

Date: Sat, 21 Jun 2008 18:10:37 -0400
From: JF Mezei
Subject: Re: Virtualized VMS in clusters (general questions)
Message-ID: <02a8cd6f$0$25033$c3e8da3@news.astraweb.com>

Alan Winston - SSRL Central Computing wrote:
> I'm trying to wrap my head around how virtualized VMS systems participate
> in certain aspects of clustering and volume shadowing. There may be
> something I'm just not getting. So this is kind of general.

When VMS boots, it asks the hardware layer (EFI in the case of those IA64
things) about what devices are available. So when HP-UX hosts an instance
of VMS, you configure the HP-UX software to give the VMS instance a list
of devices it should have access to. I assume HP-UX provides an EFI
emulator which interacts with VMS and which gives VMS the hardware config
VMS will then use.
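To make the device-list idea concrete, here is a tiny conceptual sketch in
Python of the kind of bookkeeping the host side has to do: a made-up table
mapping guest-visible (VMS) device names to host-side backing stores, plus
the per-I/O translation a host-level "middleman" would perform. The device
names, paths, and function are invented for illustration and do not
correspond to HPVM's real configuration syntax or internals; the ways the
real products divide this work between guest and host are outlined in the
approaches below.

    BLOCK_SIZE = 512   # VMS disks are addressed in 512-byte blocks

    # Hypothetical guest-visible device -> host-side backing store.
    GUEST_DEVICE_MAP = {
        "DKA0":   "/hpvm/guests/vms1/sysdisk.img",   # container file on the host
        "DKA100": "/hpvm/guests/vms1/data.img",
        # "EWA0" would map to a host network interface rather than a file.
    }

    def host_read_blocks(guest_device, start_block, block_count):
        """Per-I/O translation: look up the host object backing the guest
        device name, then do the read against it on the guest's behalf."""
        backing = GUEST_DEVICE_MAP[guest_device]
        with open(backing, "rb") as f:
            f.seek(start_block * BLOCK_SIZE)
            return f.read(block_count * BLOCK_SIZE)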
There are different ways to deal with hardware abstraction:

Use a "galaxy" style, where the early boot environment gives each instance
its own list of exclusive devices and the instances deal with the devices
directly. This has the least amount of overhead because there is no
middleman to process each I/O.

Use a low-level intercept. VMS is told it has an EWA0: Ethernet device.
When it makes an I/O request to it, HP-UX intercepts it and resends it to
the actual Ethernet device which exists at the HP-UX level. Essentially a
remapping between VMS device names and HP-UX device names.

Another way is to have special drivers in the hosted OS which automatically
remap I/O requests to the HP-UX software. Consider how Insignia did it for
hosting Windows on the old Macs: Windows had a couple of special
Insignia-provided drivers. When a Windows app wanted to connect to the
internet, a special driver at the Windows level would handle those socket
calls and pass them on to MacOS. From the MacOS point of view, it was
getting a socket call from the Insignia application to connect to the
internet. This meant that you didn't need to configure the internet
connection on Windows; it used the Mac's native config. But it also meant
that you couldn't have a web server running on both MacOS and the emulated
Windows, because they would both try to listen on the same port 80.

------------------------------

End of INFO-VAX 2008.346
************************