[gpfsug-discuss] GPFS 3.5 to 4.1 Upgrade Question
Jan-Frode Myklebust
janfrode at tanso.net
Tue Dec 6 08:04:04 GMT 2016
Currently I'm with IBM Lab Services, and only have small test clusters
myself. I'm not sure I've done v3.5->4.1 upgrades, but this warning
about upgrading all nodes within a "short time" is something that has
always been in the upgrade instructions, and I've been through many of
these upgrades (I've been a GPFS sysadmin since 2002 :-)
http://www.ibm.com/support/knowledgecenter/SSFKCN_3.5.0/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm
https://www.scribd.com/document/51036833/GPFS-V3-4-Concepts-Planning-and-Installation-Guide
BTW: One relevant issue I saw recently was a rolling upgrade from 4.1.0 to
4.1.1.7 where we had some nodes in the cluster still running 4.1.0.0.
Apparently there had been some CCR message format changes in a later release
that left the 4.1.0.0 nodes unable to communicate properly with 4.1.1.4 --
even though they should be able to coexist in the same cluster according to
the upgrade instructions. So I guess the more versions you mix in a cluster,
the more likely you are to hit a version-mismatch bug. Best to feel a tiny
bit uneasy about not running the same version on all nodes, and hurry to get
them all upgraded to the same level. Also, should you hit a bug during this
process, the likely answer will be to upgrade everything to the same level.
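To make the "same level everywhere" advice concrete, here is a rough sketch of how one might check versions across a cluster and then commit the upgrade once all nodes are done. These are standard GPFS administration commands, but treat this as an illustration rather than a procedure, and check the migration guide for your release. It assumes RPM-based installs and that `mmdsh` has working remote shell access to all nodes; "gpfs0" is a placeholder file system name.

```shell
# Sketch only -- verify on a test cluster first.

# 1. See which GPFS package each node has installed:
mmdsh -N all 'rpm -q gpfs.base'

# 2. Check the running daemon build on each node:
mmdsh -N all '/usr/lpp/mmfs/bin/mmdiag --version'

# 3. Once every node is running the new code, commit the cluster to the
#    new release level (this step is irreversible -- no going back to 3.5):
mmchconfig release=LATEST

# 4. Then enable the new on-disk format features, per file system:
mmchfs gpfs0 -V full
```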
-jf
On Tue, Dec 6, 2016 at 12:00 AM, Aaron Knister <aaron.s.knister at nasa.gov>
wrote:
> Thanks Jan-Frode! If you don't mind sharing, over what period of time did
> you upgrade from 3.5 to 4.1 and roughly how many clients/servers do you
> have in your cluster?
>
> -Aaron
>
> On 12/5/16 5:52 PM, Jan-Frode Myklebust wrote:
>
>> I read it as "do your best". I doubt there can be problems that show up
>> after 3 weeks that wouldn't also be triggerable after 1 day.
>>
>>
>> -jf
>>
>> On Mon, Dec 5, 2016 at 22:32, Aaron Knister
>> <aaron.s.knister at nasa.gov> wrote:
>>
>>
>> Hi Everyone,
>>
>> In the GPFS documentation
>> (http://www.ibm.com/support/knowledgecenter/SSFKCN_4.1.0/com.ibm.cluster.gpfs.v4r1.gpfs300.doc/bl1ins_migratl.htm)
>> it has this to say about the duration of an upgrade from 3.5 to 4.1:
>>
>> > Rolling upgrades allow you to install new GPFS code one node at a
>> > time without shutting down GPFS on other nodes. However, you must
>> > upgrade all nodes within a short time. The time dependency exists
>> > because some GPFS 4.1 features become available on each node as soon
>> > as the node is upgraded, while other features will not become
>> > available until you upgrade all participating nodes.
>>
>> Does anyone have a feel for what "a short time" means? I'm looking to
>> upgrade from 3.5.0.31 to 4.1.1.10 in a rolling fashion but given the
>> size of our system it might take several weeks to complete. Seeing
>> this
>> language concerns me that after some period of time something bad is
>> going to happen, but I don't know what that period of time is.
>>
>> Also, if anyone has done a rolling 3.5 to 4.1 upgrade and has any
>> anecdotes they'd like to share, I would like to hear them.
>>
>> Thanks!
>>
>> -Aaron
>>
>> --
>> Aaron Knister
>> NASA Center for Climate Simulation (Code 606.2)
>> Goddard Space Flight Center
>> (301) 286-2776
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>
>>
>>
>>
>>
>