[gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

Greg.Lehmann at csiro.au
Tue Jan 9 03:46:48 GMT 2018


This had me wondering, so I tried SLES 12 SP3, and thankfully GPFS v5 still runs after the kernel patch and an mmbuildgpl. It was just a test box I had at the time, so I don't have any comments on performance.
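For reference, the rebuild-and-verify sequence is roughly the following (a minimal sketch, assuming the default /usr/lpp/mmfs install path and a node where GPFS is already configured):

  uname -r                        # confirm the node is running the patched kernel
  /usr/lpp/mmfs/bin/mmbuildgpl    # rebuild the GPL portability layer against it
  mmstartup                       # start GPFS on this node
  mmgetstate                      # should report "active" once the daemon is up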


From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Peinkofer, Stephan
Sent: Tuesday, 9 January 2018 7:11 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS


Dear List,



My very personal experience today, using the patched kernel for SLES 12.1 LTS (3.12.74-60.64.69.1) on a single VM, was that GPFS (4.2.3-4) did not even start (the kernel modules seemed to compile fine with mmbuildgpl). Interestingly, even when I explicitly disabled PTI using the nopti kernel option, GPFS refused to start with the same error!?
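In case it helps anyone reproduce this, what I mean by disabling PTI is roughly the following (a hedged sketch for SLES 12; the exact log messages depend on the kernel backport):

  cat /proc/cmdline                      # verify "nopti" is really on the boot command line
  dmesg | grep -i -e pti -e isolation    # some kernels log whether page table isolation is active

  # To set it persistently, add "nopti" to GRUB_CMDLINE_LINUX_DEFAULT in
  # /etc/default/grub, then regenerate the GRUB configuration and reboot:
  grub2-mkconfig -o /boot/grub2/grub.cfg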



mmfs.log always showed something like this:

...

/usr/lpp/mmfs/bin/runmmfs[336]: .[213]: loadKernelExt[674]: InsModWrapper[95]: eval: line 1: 3915: Memory fault

...

2018-01-08_09:01:27.520+0100 runmmfs: error in loading or unloading the mmfs kernel extension

...
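For anyone who hits the same thing and has time to dig, a few checks that might help isolate whether the portability layer built or just failed to load (a sketch; the standard GPFS kernel modules are tracedev, mmfslinux and mmfs26):

  ls /lib/modules/$(uname -r)/extra/    # were the GPFS modules built for this kernel?
  lsmod | grep -e mmfs -e tracedev      # did any of them actually get loaded?
  /usr/lpp/mmfs/bin/gpfs.snap           # collect diagnostics for an IBM ticket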

Since I have no time to investigate the issue further and raise a ticket right now, I just downgraded to the previous kernel, and everything worked again.
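For completeness, the rollback itself is nothing GPFS-specific (a sketch, assuming the previous kernel package is still installed, as it usually is on SLES):

  rpm -q kernel-default    # list the installed kernel versions
  # Reboot and pick the previous kernel under "Advanced options" in the
  # GRUB menu, then rebuild the portability layer against it if needed:
  /usr/lpp/mmfs/bin/mmbuildgpl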

As we have to patch at least the login nodes of our HPC clusters ASAP, I would also appreciate a statement from IBM on how the KPTI patches are expected to interact with GPFS and, if there are any (general) problems, when we can expect updated GPFS packages.

Many thanks in advance.
Best Regards,
Stephan Peinkofer

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Buterbaugh, Kevin L <Kevin.Buterbaugh at Vanderbilt.Edu>
Sent: Monday, January 8, 2018 5:52 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

Hi GPFS Team,

Thanks for this response.  If at all possible, I know that we (and, I suspect, many others in this same boat) would greatly appreciate an update from IBM on how a patched kernel impacts GPFS functionality.  Yes, we'd love to know the performance impact of the patches on GPFS, but that pales in significance to knowing whether GPFS version 4.x.x.x will even *start* with the patched kernel(s).

Thanks again...

Kevin


On Jan 4, 2018, at 4:55 PM, IBM Spectrum Scale <scale at us.ibm.com> wrote:

Kevin,

The team is aware of Meltdown and Spectre. Due to the late availability of production-ready test patches (they became available today), we started working today on evaluating the impact of applying these patches. The focus is both on any potential functional impacts (especially to the kernel modules shipped with GPFS) and on the performance degradation that affects user/kernel mode transitions. Performance characterization will be complex, as some system calls that are invoked often by the mmfsd daemon will suddenly become significantly more expensive because of the kernel changes. Depending on the main areas affected, code changes might be possible to alleviate the impact, e.g. by reducing the frequency of certain calls. Any such changes will be deployed over time.
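As a rough illustration of why these transitions matter (a crude gauge, not a formal benchmark): dd with bs=1 issues one read()/write() pair per byte, so its wall time on an otherwise idle node is dominated by user/kernel transitions and can be compared before and after patching:

  time dd if=/dev/zero of=/dev/null bs=1 count=1000000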

At this point, until IBM issues an official statement on this topic, we can't say what impact this will have on stability or performance on systems running GPFS. We hope to have some basic answers soon.



Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.


From: "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu<mailto:Kevin.Buterbaugh at Vanderbilt.Edu>>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 01/04/2018 01:11 PM
Subject: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS
Sent by: gpfsug-discuss-bounces at spectrumscale.org
________________________________



Happy New Year everyone,

I'm sure that everyone is aware of Meltdown and Spectre by now ... we, like many other institutions, will be patching for it at the earliest possible opportunity.

Our understanding is that the most serious negative performance impacts of these patches will be on things like I/O (disk/network) ... given that, we are curious whether IBM has any plans for a GPFS update that could help mitigate those impacts, or whether there is simply nothing that can be done?

If there is a GPFS update planned for this we'd be interested in knowing so that we could coordinate the kernel and GPFS upgrades on our cluster.

Thanks...

Kevin

P.S. The "Happy New Year" wasn't intended as sarcasm ... I hope it is a good year for everyone despite how it's starting out. :-O

-
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615) 875-9633


