[gpfsug-discuss] storage-based replication for Spectrum Scale

Harold Morales hmorales at optimizeit.co
Fri Jan 26 20:47:03 GMT 2018


Thanks for participating in the discussion.

Immediately after replication I am getting the error documented below. I
have moved the mmsdrfs file (after deleting the previous filesystem
definitions, since I have configured my source and target clusters to be
exactly equal). Even then, I am getting the following error:

GPFS: 6027-419 Failed to read a file system descriptor.
There is an input or output error.
mmlsfs: 6027-1639 Command failed. Examine previous error messages to determine cause.

That's the same error I got upon first replicating, before taking any other
action. I think I am missing something really important but still don't know
what it is.
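
For reference, the checks I plan to run on the target nodes to confirm the
replicated NSDs are actually visible are something like the following (the
device name below is just a placeholder for one of my hdisks):

    # List every NSD and the local device each node resolved it to
    mmlsnsd -X

    # Try reading the NSD descriptor directly from one disk (placeholder name)
    mmfsadm test readdescraw /dev/hdisk10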




2018-01-26 15:21 GMT-05:00 Glen Corneau <gcorneau at us.ibm.com>:

> Scale will walk across all discovered disks at startup and attempt to
> read the NSD identifiers from the disks.  Once it finds them, it makes a
> local map file that correlates the NSD ID and the hdiskX identifier.  The
> names do not have to be the same as on the source cluster, or even from
> node to node.
>
> The main thing to keep in mind is to keep the file system definitions in
> sync between the source and destination clusters.  The "syncFSconfig" user
> exit is the best way to do it because it's automatic.  You generally
> shouldn't be shuffling the mmsdrfs file between sites; that's what
> "mmfsctl syncFSconfig" does for you, on a per-file system basis.
>
> GPFS+AIX customers have been using this kind of storage replication for
> over 10 years; it's business as usual.
>
> ------------------
> Glen Corneau
> Power Systems
> Washington Systems Center
> gcorneau at us.ibm.com
>
>
>
>
>
> From:        Harold Morales <hmorales at optimizeit.co>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        01/26/2018 12:30 PM
> Subject:        Re: [gpfsug-discuss] storage-based replication for
> Spectrum Scale
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hi Alex, this setup seems close to what I am trying to achieve.
>
> With regards to this kind of replication: are there any prereqs that need
> to be met in the target environment for this to work? For example, does
> disk device naming on AIX have to be the same as in the source
> environment? When importing the mmsdrfs file, how is Scale going to know
> which disks it should assign to the cluster? By their hdisk names alone?
>
> Thanks again,
>
>
>
> 2018-01-24 2:30 GMT-05:00 Alex Levin <alevin at gmail.com>:
> Hi,
>
> We are using a similar type of replication.
> I assume site B is the cold site prepared for DR.
>
> The storage layer is EMC VMAX, and the LUNs are replicated with SRDF.
> All LUNs (NSDs) of the GPFS filesystem are in the same VMAX replication
> group to ensure consistency.
>
> The cluster name, IP addresses, and hostnames of the cluster nodes are
> different on the other site - it can be a pre-configured cluster without
> GPFS filesystems or with another filesystem.
> Having the same names and addresses shouldn't be a problem either.
>
> In addition to the replicated LUNs/NSDs, you need to deliver a copy of
> the /var/mmfs/gen/mmsdrfs file from site A to site B.
> There is no need to replicate it in real time, only after a change in the
> cluster configuration.
>
> To activate site B, present the replicated LUNs to the nodes in the DR
> cluster and run mmimportfs as "mmimportfs fs_name -i copy_of_mmsdrfs".
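>
> Roughly, the activation on a site B node would look like this (the file
> system name and the mmsdrfs copy path are placeholders):
>
>     # Once the replicated LUNs are visible to the DR nodes:
>     mmimportfs gpfs1 -i /tmp/mmsdrfs.copy   # import the filesystem definition
>     mmmount gpfs1 -a                        # mount it on all nodes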
>
> Tested with multiple LUNs and filesystems under various workloads - seems
> to be working.
>
> --Alex
>
>
>
> On Wed, Jan 24, 2018 at 1:33 AM, Harold Morales <hmorales at optimizeit.co> wrote:
> Thanks for answering.
>
> Essentially, the idea being explored is to replicate LUNs between
> identical storage hardware (HP 3PAR volumes) at both sites. There is an
> IP connection between the storage boxes but not between the servers at
> the two sites; there is a dark fiber connecting both sites. They don't
> want to explore the idea of Scale-based replication here.
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>