[gpfsug-discuss] Maximum value for data replication?

Simon Thompson (Research Computing - IT Services) S.J.Thompson at bham.ac.uk
Thu Sep 1 22:06:44 BST 2016


I have two protocol nodes in each of two data centres, so four protocol nodes in the cluster.

Plus I also have a quorum VM which is lockstep/HA, so it is guaranteed to survive in one of the data centres should we lose power. The protocol servers, being protocol servers, don't have access to the fibre channel storage. And we've seen CES do bad things when the storage cluster it is remotely mounting (and that the CES root is on) fails or is under load etc.

So the four full copies are about guaranteeing there are two full copies in each data centre. And remember this is only for the cesroot, so lock data for the CES IPs, and I think the SMB registry as well.

I was hoping that by putting the cesroot in the protocol node cluster, rather than in a fileset on a remotely mounted filesystem, it would fix the CES weirdness we see, as it would become a local GPFS file system.

I guess three copies would maybe work.
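Roughly what I had in mind for the local cesroot filesystem is sketched below. Device names, NSD names and the mount point are all made up, and it is capped at three copies given the replication limit:

  # ces_nsd.stanza - one SSD per protocol node, each in its own failure group:
  %nsd: device=/dev/sdb nsd=ces_pn1 servers=proto1 usage=dataAndMetadata failureGroup=1
  %nsd: device=/dev/sdb nsd=ces_pn2 servers=proto2 usage=dataAndMetadata failureGroup=2
  %nsd: device=/dev/sdb nsd=ces_pn3 servers=proto3 usage=dataAndMetadata failureGroup=3
  %nsd: device=/dev/sdb nsd=ces_pn4 servers=proto4 usage=dataAndMetadata failureGroup=4

  # then create the NSDs and a small filesystem with three copies spread over
  # the four failure groups, since replication tops out at 3:
  mmcrnsd -F ces_nsd.stanza
  mmcrfs cesfs -F ces_nsd.stanza -m 3 -M 3 -r 3 -R 3 -A yes -T /ces_local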

But also, in another cluster we have been thinking about adding NVMe to the NSD servers for metadata and system.log, so I can see there are cases where having higher numbers of copies would be useful.
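Something like the stanza below is what I had in mind for the NVMe case - again only a sketch, with all device, NSD and server names made up:

  # NVMe devices in the NSD servers, dedicated to metadata in the system pool
  %nsd: device=/dev/nvme0n1 nsd=meta_nsd01 servers=nsd01 usage=metadataOnly failureGroup=101 pool=system
  %nsd: device=/dev/nvme0n1 nsd=meta_nsd02 servers=nsd02 usage=metadataOnly failureGroup=102 pool=system
  # a dedicated system.log pool for the recovery logs would presumably be declared
  # similarly with pool=system.log - the usage/placement rules for that pool need
  # checking in the documentation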

Yes, I take the point that more copies mean more load for the client, but in these cases we aren't thinking of GPFS as the fastest possible HPC file system, but rather for other infrastructure purposes, which is one of the directions the product seems to be moving in.

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Daniel Kidger [daniel.kidger at uk.ibm.com]
Sent: 01 September 2016 12:22
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Maximum value for data replication?

Simon,
Hi.
Can you explain why you would like a full copy of all blocks on all 4 NSD servers?
Is there a particular use case, and hence something of interest for product development?

Otherwise, remember that with 4 NSD servers and one failure group per (storage-rich) NSD server, all 4 disk arrays will be loaded equally, as new files will get written to any 3 (or 2, or 1) of the 4 failure groups.
Also remember that as you add more replication, there is more network load on the GPFS client, as it has to perform all the writes itself.

Perhaps someone technical can comment on the logic that determines which 3 out of the 4 failure groups a particular block is written to.
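In the meantime, a rough way to see the effect from the command line (a sketch from memory - the filesystem and file names are just examples):

  mmlsdisk gpfs01 -L             # failure group assigned to each disk
  mmdf gpfs01                    # per-disk usage, grouped by failure group
  mmlsattr -L /gpfs01/somefile   # replication factors actually applied to a file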

Daniel


Dr Daniel Kidger
IBM Technical Sales Specialist
Software Defined Solution Sales

+44-07818 522 266
daniel.kidger at uk.ibm.com







----- Original message -----
From: Steve Duersch <duersch at us.ibm.com>
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Cc:
Subject: Re: [gpfsug-discuss] Maximum value for data replication?
Date: Wed, Aug 31, 2016 1:45 PM


>>Is there a maximum value for data replication in Spectrum Scale?
The maximum value for replication is 3.


Steve Duersch
Spectrum Scale RAID
845-433-7902
IBM Poughkeepsie, New York





From: gpfsug-discuss-request at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Date: 08/30/2016 07:25 PM
Subject: gpfsug-discuss Digest, Vol 55, Issue 55
Sent by: gpfsug-discuss-bounces at spectrumscale.org

________________________________



Send gpfsug-discuss mailing list submissions to
gpfsug-discuss at spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
gpfsug-discuss-request at spectrumscale.org

You can reach the person managing the list at
gpfsug-discuss-owner at spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

  1. Maximum value for data replication?
     (Simon Thompson (Research Computing - IT Services))
  2. greetings (Kevin D Johnson)
  3. GPFS 3.5.0 on RHEL 6.8 (Lukas Hejtmanek)
  4. Re: GPFS 3.5.0 on RHEL 6.8 (Kevin D Johnson)
  5. Re: GPFS 3.5.0 on RHEL 6.8 (mark.bergman at uphs.upenn.edu)
  6. Re: *New* IBM Spectrum Protect Whitepaper "Petascale Data
     Protection" (Lukas Hejtmanek)
  7. Re: *New* IBM Spectrum Protect Whitepaper "Petascale Data
     Protection" (Sven Oehme)


----------------------------------------------------------------------

Message: 1
Date: Tue, 30 Aug 2016 19:09:05 +0000
From: "Simon Thompson (Research Computing - IT Services)"
<S.J.Thompson at bham.ac.uk>
To: "gpfsug-discuss at spectrumscale.org"
<gpfsug-discuss at spectrumscale.org>
Subject: [gpfsug-discuss] Maximum value for data replication?
Message-ID:
<CF45EE16DEF2FE4B9AA7FF2B6EE26545F5813FFC at EX13.adf.bham.ac.uk>
Content-Type: text/plain; charset="us-ascii"

Is there a maximum value for data replication in Spectrum Scale?

I have a number of NSD servers which have local storage, and I'd like each node to have a full copy of all the data in the file system. Say this number is 4; can I set replication to 4 for data and metadata and have each server hold a full copy?
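For context, I was expecting to do something along these lines (a sketch only; the filesystem name is made up, and -r cannot be raised above the -R maximum fixed when the filesystem was created):

  mmlsfs gpfs_ces -r -R -m -M    # current and maximum data/metadata replicas
  mmchfs gpfs_ces -r 4 -m 4      # what I was hoping would be allowed
  mmrestripefs gpfs_ces -R       # re-replicate existing files to the new defaults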

These are protocol nodes that multi-cluster mount another file system (yes, I know that's not supported), and the cesroot is in the remote file system. On several occasions where GPFS has wibbled a bit, this has caused issues with CES locks, so I was thinking of moving the cesroot to a local filesystem which is replicated on the local SSDs in the protocol nodes. I.e. it's a generally quiet file system as it only holds CES cluster config.

I assume that if I stop the protocols, rsync the data and then change to the new CES root, I should be able to get this working?
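I.e. roughly this sequence (only a sketch - the exact ordering, and whether CES/GPFS has to be fully down on the protocol nodes before cesSharedRoot can be changed, is something I would check in the docs first; paths are made up):

  mmces service stop SMB -a                        # stop protocol services on all CES nodes
  mmces service stop NFS -a
  rsync -aHAX /old_cesroot/ /ces_local/cesroot/    # copy the existing CES root data
  mmchconfig cesSharedRoot=/ces_local/cesroot      # point CES at the new local filesystem
  mmces service start SMB -a
  mmces service start NFS -a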

Thanks

Simon

------------------------------

Message: 2
Date: Tue, 30 Aug 2016 19:43:39 +0000
From: "Kevin D Johnson" <kevindjo at us.ibm.com>
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] greetings
Message-ID:
<OFBE10B787.D9EE56E3-ON0025801F.006C39E8-0025801F.006C5E11 at notes.na.collabserv.com>

Content-Type: text/plain; charset="us-ascii"

An HTML attachment was scrubbed...
URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160830/5a2e22a3/attachment-0001.html>

------------------------------

Message: 3
Date: Tue, 30 Aug 2016 22:39:18 +0200
From: Lukas Hejtmanek <xhejtman at ics.muni.cz>
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8
Message-ID: <20160830203917.qptfgqvlmdbzu6wr at ics.muni.cz>
Content-Type: text/plain; charset=iso-8859-2

Hello,

does it work for anyone? As of kernel 2.6.32-642, GPFS 3.5.0 (including the
latest patch 32) does start but does not mount any file system. The internal
mount cmd gets stuck.

--
Lukáš Hejtmánek


------------------------------

Message: 4
Date: Tue, 30 Aug 2016 20:51:39 +0000
From: "Kevin D Johnson" <kevindjo at us.ibm.com>
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8
Message-ID:
<OF60CB398A.7B2AF5FF-ON0025801F.007282A3-0025801F.0072979C at notes.na.collabserv.com>

Content-Type: text/plain; charset="us-ascii"

An HTML attachment was scrubbed...
URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160830/341d5e11/attachment-0001.html>

------------------------------

Message: 5
Date: Tue, 30 Aug 2016 17:07:21 -0400
From: mark.bergman at uphs.upenn.edu
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8
Message-ID: <24437-1472591241.445832 at bR6O.TofS.917u>
Content-Type: text/plain; charset="UTF-8"

In the message dated: Tue, 30 Aug 2016 22:39:18 +0200,
The pithy ruminations from Lukas Hejtmanek on
<[gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8> were:
=> Hello,

GPFS 3.5.0.[23..3-0] works for me under [CentOS|ScientificLinux] 6.8,
but only with kernel 2.6.32-573 and lower.

I've found kernel bugs in blk_cloned_rq_check_limits() in later kernel
revs that caused multipath errors, resulting in GPFS being unable to
find all NSDs and mount the filesystem.

I am not updating to a newer kernel until I'm certain this is resolved.

I opened a bug with CentOS:

https://bugs.centos.org/view.php?id=10997

and began an extended discussion with the (RH & SUSE) developers of that
chunk of kernel code. I don't know if an upstream bug has been opened
by RH, but see:

https://patchwork.kernel.org/patch/9140337/
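For anyone trying to pin this down, the checks I've been using look roughly like the following (a sketch; NSD and filesystem names will obviously differ):

  uname -r             # confirm which kernel rev is actually running
  multipath -ll        # look for failed/faulty paths after the kernel update
  mmlsnsd -X           # which local device GPFS resolved (or failed to resolve) for each NSD
  mmnsddiscover -a     # ask the NSD servers to rediscover their disks
  mmdiag --waiters     # see what the stuck internal mount is waiting on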
=>
=> does it work for anyone? As of kernel 2.6.32-642, GPFS 3.5.0 (including the
=> latest patch 32) does start but does not mount and file system. The internal
=> mount cmd gets stucked.
=>
=> --
=> Lukáš Hejtmánek


--
Mark Bergman                                           voice: 215-746-4061
mark.bergman at uphs.upenn.edu                              fax: 215-614-0266
http://www.cbica.upenn.edu/
IT Technical Director, Center for Biomedical Image Computing and Analytics
Department of Radiology                         University of Pennsylvania
         PGP Key: http://www.cbica.upenn.edu/sbia/bergman


------------------------------

Message: 6
Date: Wed, 31 Aug 2016 00:02:50 +0200
From: Lukas Hejtmanek <xhejtman at ics.muni.cz>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper
"Petascale Data Protection"
Message-ID: <20160830220250.yt6r7gvfq7rlvtcs at ics.muni.cz>
Content-Type: text/plain; charset=iso-8859-2

Hello,

On Mon, Aug 29, 2016 at 09:20:46AM +0200, Frank Kraemer wrote:
> Find the paper here:
>
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Petascale%20Data%20Protection

thank you for the paper, I appreciate it.

However, I wonder whether it could be extended a little. As it has the title
Petascale Data Protection, I think that at petascale you have to deal with
millions (or rather hundreds of millions) of stored files, and this is
something where TSM does not scale well.

Could you give some hints:

On the backup side, mmbackup takes ages for:
a) the scan (try to scan 500M files, even in parallel)
b) the backup itself - what if 10 % of the files get changed? The backup process
can be blocked for several days, as mmbackup cannot run as several instances on
the same file system, so you have to wait until one run of mmbackup finishes.
How long could that take at petascale?
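For reference, the kind of invocation I mean is roughly the following (the node class, snapshot name and work directories are made-up names here, and the exact option set depends on the release):

  # -N: node class sharing the scan/backup work; -g/-s: shared and local work dirs;
  # -S: back up from a snapshot for a stable view of the filesystem
  mmbackup /gpfs/fs1 -t incremental -N backupNodes \
      -g /gpfs/fs1/.mmbackupShared -s /tmp \
      -S backup_snap --tsm-servers TSMSERVER1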

On the restore side, how can I restore e.g. 40 million files efficiently? dsmc
restore '/path/*' runs into serious trouble after, say, 20M files (maybe the
wrong internal structures are used); scanning the next 1000 files then takes
several minutes, with the result that the dsmc restore never reaches those
40M files.

Using filelists, the situation is even worse. I ran dsmc restore -filelist
with a filelist consisting of 2.4M files. It has been running for *two* days
without restoring even a single file, and dsmc is consuming 100 % CPU.
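Is splitting the filelist and running several dsmc sessions in parallel the expected way around this? I.e. something like the following (paths made up):

  split -l 100000 restore_filelist.txt chunk_       # 100k entries per chunk
  for f in chunk_*; do
      dsmc restore -filelist="$f" /gpfs/fs1/restored/ &
  done
  wait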

So any hints addressing these issues with really large numbers of files would
be even more appreciated.

--
Lukáš Hejtmánek


------------------------------

Message: 7
Date: Tue, 30 Aug 2016 16:24:59 -0700
From: Sven Oehme <oehmes at gmail.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper
"Petascale Data Protection"
Message-ID:
<CALssuR1qWo2Y5adUUZJtLgkNPUYztWpYP0XhNLTjBOG5352qjg at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

So let's start with some simple questions.

When you say mmbackup takes ages, what version of GPFS code are you running?
How do you execute the mmbackup command? Exact parameters would be useful.
What HW are you using for the metadata disks?
How much capacity (df -h) and how many inodes (df -i) do you have in the
filesystem you are trying to back up?
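Output from something like the following would help (the filesystem name is just an example):

  rpm -qa | grep gpfs      # GPFS package levels
  mmlsfs fs1 -V            # file system format version
  df -h /gpfs/fs1          # capacity in use
  df -i /gpfs/fs1          # inode usage, i.e. a rough file count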

sven


On Tue, Aug 30, 2016 at 3:02 PM, Lukas Hejtmanek <xhejtman at ics.muni.cz>
wrote:

> Hello,
>
> On Mon, Aug 29, 2016 at 09:20:46AM +0200, Frank Kraemer wrote:
> > Find the paper here:
> >
> > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/
> Tivoli%20Storage%20Manager/page/Petascale%20Data%20Protection
>
> thank you for the paper, I appreciate it.
>
> However, I wonder whether it could be extended a little. As it has the
> title
> Petascale Data Protection, I think that in Peta scale, you have to deal
> with
> millions (well rather hundreds of millions) of files you store in and this
> is
> something where TSM does not scale well.
>
> Could you give some hints:
>
> On the backup site:
> mmbackup takes ages for:
> a) scan (try to scan 500M files even in parallel)
> b) backup - what if 10 % of files get changed - backup process can be
> blocked
> several days as mmbackup cannot run in several instances on the same file
> system, so you have to wait until one run of mmbackup finishes. How long
> could
> it take at petascale?
>
> On the restore site:
> how can I restore e.g. 40 millions of file efficiently? dsmc restore
> '/path/*'
> runs into serious troubles after say 20M files (maybe wrong internal
> structures used), however, scanning 1000 more files takes several minutes
> resulting the dsmc restore never reaches that 40M files.
>
> using filelists the situation is even worse. I run dsmc restore -filelist
> with a filelist consisting of 2.4M files. Running for *two* days without
> restoring even a single file. dsmc is consuming 100 % CPU.
>
> So any hints addressing these issues with really large number of files
> would
> be even more appreciated.
>
> --
> Lukáš Hejtmánek
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20160830/d9b3fb68/attachment.html>

------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


End of gpfsug-discuss Digest, Vol 55, Issue 55
**********************************************



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



