[gpfsug-discuss] SMB issues

Simon Thompson (Research Computing - IT Services) S.J.Thompson at bham.ac.uk
Mon Dec 19 16:06:08 GMT 2016


We see it on all four of the nodes, yet we ran getent passwd/getent group checks on them to verify that identity resolution is working OK.
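
For reference, a minimal sketch of that kind of identity check (USERNAME and GROUPNAME are placeholders):

  # run on each protocol node
  getent passwd USERNAME     # does the user resolve through the configured ID mapping?
  getent group GROUPNAME     # does the group resolve and list the expected members?
  id USERNAME                # does the user pick up all the expected groups?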

Simon

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Bill Pappas <bpappas at dstonline.com>
Reply-To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Date: Monday, 19 December 2016 at 15:59
To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] SMB issues



What I would do is, when you identify this issue again, determine which IP address (i.e. which Samba server) is serving up the CIFS share.  Then, as root, log on to that Samba node and type "id <username>" for the user who has the issue.  Are they in all the security groups you'd expect, in particular the group required to access the folder in question?
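
One possible way to do that on a CES cluster, sketched from memory (USERNAME is a placeholder and the exact commands may vary by release):

  mmces address list              # which CES IP is hosted on which protocol node
  smbstatus -b | grep USERNAME    # on that node: which SMB sessions the user currently holds
  id USERNAME                     # does the user resolve with the expected groups?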



Bill Pappas

901-619-0585

bpappas at dstonline.com



http://www.prweb.com/releases/2016/06/prweb13504050.htm


________________________________
From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> on behalf of gpfsug-discuss-request at spectrumscale.org <gpfsug-discuss-request at spectrumscale.org>
Sent: Monday, December 19, 2016 9:41 AM
To: gpfsug-discuss at spectrumscale.org
Subject: gpfsug-discuss Digest, Vol 59, Issue 40

Send gpfsug-discuss mailing list submissions to
        gpfsug-discuss at spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
        gpfsug-discuss-request at spectrumscale.org

You can reach the person managing the list at
        gpfsug-discuss-owner at spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. SMB issues (Simon Thompson (Research Computing - IT Services))
   2. Re: Tiers (Buterbaugh, Kevin L)


----------------------------------------------------------------------

Message: 1
Date: Mon, 19 Dec 2016 15:36:50 +0000
From: "Simon Thompson (Research Computing - IT Services)"
        <S.J.Thompson at bham.ac.uk<mailto:S.J.Thompson at bham.ac.uk>>
To: "gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>"
        <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Subject: [gpfsug-discuss] SMB issues
Message-ID: <D47DAF11.34457%s.j.thompson at bham.ac.uk<mailto:D47DAF11.34457%s.j.thompson at bham.ac.uk>>
Content-Type: text/plain; charset="us-ascii"

Hi All,

We upgraded to 4.2.2.0 last week as well as to
gpfs.smb-4.4.6_gpfs_8-1.el7.x86_64.rpm from the 4.2.2.0 protocols bundle.

We've since had random users reporting that they get access denied errors
when trying to access folders. Some folders seem to work fine and others
not, but it seems to vary and change by user (for example, this morning I
could see all my folders fine, but later I could only see some). From my
Mac, I could connect to the SMB share fine but couldn't list files in the
folder (I guess this is what users were seeing from Windows as access

In the log.smbd, we are seeing errors such as this:

[2016/12/19 15:20:40.649580,  0]
../source3/lib/sysquotas.c:457(sys_get_quota)
  sys_path_to_bdev() failed for path [FOLDERNAME_HERE]!
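
In case it is useful for comparison, a rough way to see how widespread the error is and whether quotas are in play (the path to log.smbd and the device name gpfs0 are placeholders):

  grep -c 'sys_path_to_bdev() failed' log.smbd   # how often the error is being hit
  mmlsfs gpfs0 -Q                                # are user/group/fileset quotas enforced?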



After reverting to the previous version of SMB we were running
(gpfs.smb-4.3.9_gpfs_21-1.el7.x86_64), the problems go away.
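
A quick way to confirm which gpfs.smb build each protocol node is actually running (this assumes mmdsh and the cesNodes node class are available):

  mmdsh -N cesNodes rpm -q gpfs.smb   # installed package per protocol node
  mmces service list -a               # confirm SMB is up everywhere after the change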

Before I log a PMR, has anyone else seen this behaviour or have any
suggestions?

Thanks

Simon



------------------------------

Message: 2
Date: Mon, 19 Dec 2016 15:40:50 +0000
From: "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu<mailto:Kevin.Buterbaugh at Vanderbilt.Edu>>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Subject: Re: [gpfsug-discuss] Tiers
Message-ID: <25B04F2E-21FD-44EF-B15B-8317DE9EF68E at vanderbilt.edu<mailto:25B04F2E-21FD-44EF-B15B-8317DE9EF68E at vanderbilt.edu>>
Content-Type: text/plain; charset="utf-8"

Hi Brian,

We're probably an outlier on this (Bob's case is probably much more typical) but we can get away with doing weekly migrations based on file atime.  Some thoughts:

1.  absolutely use QOS!  It's one of the best things IBM has ever added to GPFS (a rough example follows after this list).
2.  personally, I limit even my capacity pool to no more than 98% capacity.  I just don't think it's a good idea to 100% fill anything.
3.  if you do use anything like atime or mtime as your criteria, don't forget to have a rule to move stuff back from the capacity pool if it's now being used.
4.  we also help manage a DDN device, and there they also implement a rule to move stuff if the "fast" pool exceeds a certain threshold, but they use file size as the weight.  Not saying that's right or wrong, it's just another approach.
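
For anyone who has not turned it on yet, a minimal QOS sketch (the filesystem name gpfs0 and the IOPS figure are purely illustrative, and the exact option syntax may vary by release):

  # cap maintenance-class I/O, which mmapplypolicy migrations fall into
  mmchqos gpfs0 --enable pool=*,maintenance=300IOPS,other=unlimited
  # run the migration in the throttled class
  mmapplypolicy gpfs0 -P /path/to/policy.txt --qos maintenance -I yes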

HTHAL?

Kevin

On Dec 19, 2016, at 9:25 AM, Oesterlin, Robert <Robert.Oesterlin at nuance.com> wrote:

I tend to do migration based on "file heat", moving the least active files to HDD and more active to SSD. Something simple like this:

rule grpdef GROUP POOL gpool IS ssd LIMIT(75) THEN disk
rule repack
  MIGRATE FROM POOL gpool TO POOL gpool
  WEIGHT(FILE_HEAT)
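
One note (from memory): WEIGHT(FILE_HEAT) relies on file heat tracking being enabled on the cluster; a minimal sketch, with illustrative values:

  # enable file heat tracking; the period/loss values here are just examples
  mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10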

Bob Oesterlin
Sr Principal Storage Engineer, Nuance




From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Brian Marshall <mimarsh2 at vt.edu>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Monday, December 19, 2016 at 9:15 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: [EXTERNAL] Re: [gpfsug-discuss] Tiers

We are in a very similar situation.  VT - ARC has a layer of SSD for metadata only, another layer of SSD for "hot" data, and a layer of 8TB HDDs for capacity.  We are just now in the process of getting it all into production.

On this topic:

What is everyone's favorite migration policy to move data from SSD to HDD (and vice versa)?

Do you nightly move large/old files to HDD, or wait until the fast tier hits some capacity limit?

Do you use QOS to limit the migration from SSD to HDD, i.e. try not to kill the file system with migration work?


Thanks,
Brian Marshall

On Thu, Dec 15, 2016 at 4:25 PM, Buterbaugh, Kevin L <Kevin.Buterbaugh at vanderbilt.edu> wrote:
Hi Mark,

We just use an 8 Gb FC SAN.  For the data pool we typically have a dual active-active controller storage array fronting two big RAID 6 LUNs and 1 RAID 1 (for /home).  For the capacity pool, it might be the same exact model of controller, but the two controllers are now fronting that whole 60-bay array.

But our users tend to have more modest performance needs than most...

Kevin

On Dec 15, 2016, at 3:19 PM, Mark.Bush at siriuscom.com wrote:

Kevin, out of curiosity, what type of disk does your data pool use?  SAS or just some SAN attached system?

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Thursday, December 15, 2016 at 2:47 PM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Tiers

Hi Mark,

We're a "traditional" university HPC center with a very untraditional policy on our scratch filesystem: we don't purge it and we sell quota there.  Ultimately, a lot of that disk space is taken up by stuff that, let's just say, isn't exactly in active use.

So what we've done, for example, is buy a 60-bay storage array and stuff it with 8 TB drives.  It wouldn't offer good enough performance for actively used files, but we use GPFS policies to migrate files to the "capacity" pool based on file atime (a rough example of such a rule is sketched below).  So we have 3 pools:

1.  the system pool with metadata only (on SSDs)
2.  the data pool, which is where actively used files are stored and which offers decent performance
3.  the capacity pool, for data which hasn't been accessed "recently", and which is on slower storage

I would imagine others do similar things.  HTHAL?
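
A rough sketch of the kind of atime-based rule described above (the pool names, the 90-day cutoff, the policy file path and the device name gpfs0 are all illustrative):

  # contents of /tmp/atime_migrate.pol (illustrative):
  RULE 'to_capacity' MIGRATE FROM POOL 'data'
    TO POOL 'capacity'
    WHERE (CURRENT_TIMESTAMP - ACCESS_TIME) > INTERVAL '90' DAYS

  # then run it against the filesystem:
  mmapplypolicy gpfs0 -P /tmp/atime_migrate.pol -I yes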

Kevin

On Dec 15, 2016, at 2:32 PM, Mark.Bush at siriuscom.com wrote:

Just curious how many of you out there deploy SS with various tiers?  It seems like a lot are doing the system pool with SSDs, but do you routinely have clusters that have more than the system pool plus one more tier?

I know if you are doing Archive in connection that's an obvious choice for another tier, but I'm struggling with knowing why someone really needs more than two tiers.

I've read all the fine manuals as to how to do such a thing, and some of the marketing as to maybe why.  I'm still scratching my head on this though.  In fact, my understanding is that in the ESS there aren't any different pools (tiers), as it's all NL-SAS or SSD (DF150, etc).

It does make sense to me now with TCT, where I could create an ILM policy to get some of my data into the cloud.

But in the real world I would like to know what y'all do in this regard.


Thanks

Mark

This message (including any attachments) is intended only for the use of the individual or entity to which it is addressed and may contain information that is non-public, proprietary, privileged, confidential, and exempt from disclosure under applicable law. If you are not the intended recipient, you are hereby notified that any use, dissemination, distribution, or copying of this communication is strictly prohibited. This message may be viewed by parties at Sirius Computer Solutions other than those named in the message header. This message does not contain an official representation of Sirius Computer Solutions. If you have received this communication in error, notify Sirius Computer Solutions immediately and (i) destroy this message if a facsimile or (ii) delete this message immediately if this is an electronic communication. Thank you.
Sirius Computer Solutions <http://www.siriuscom.com/>
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org<http://spectrumscale.org/>
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


End of gpfsug-discuss Digest, Vol 59, Issue 40
**********************************************

