[gpfsug-discuss] Recharging where HSM is used

Jeffrey R. Lang JRLang at uwyo.edu
Thu May 3 16:38:32 BST 2018


Khanh

Could you tell us what the policy file name is or where to get it?

Thanks
Jeff

From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Khanh V Ngo
Sent: Thursday, May 3, 2018 10:30 AM
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Recharging where HSM is used

Specifically with IBM Spectrum Archive EE, there is a script (mmapplypolicy with list rules, plus Python, since it outputs several different tables) that reports the total size of each user's files broken down by file state.  That way you can charge more for files that remain on disk and less for files that have been migrated to tape.  I have seen various prices used for the chargeback, so it is probably better to calculate a rate based on your own environment.

The script can easily be changed to output based on GID, filesets, etc.
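(As a rough illustration of the kind of list rules involved, not the actual Spectrum Archive EE policy file: the three states can usually be told apart from the MISC_ATTRIBUTES flags, where 'M' marks a DMAPI-managed file and 'V' marks one whose data is offline on tape. The rule and list names below are made up.)

define(is_migrated,    (MISC_ATTRIBUTES LIKE '%V%'))
define(is_premigrated, (MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
define(is_resident,    (MISC_ATTRIBUTES NOT LIKE '%M%'))

/* write matches to deferred list files with owner and size, so a small
   wrapper (python, awk, ...) can sum them per user afterwards */
RULE EXTERNAL LIST 'migrated'    EXEC ''
RULE EXTERNAL LIST 'premigrated' EXEC ''
RULE EXTERNAL LIST 'resident'    EXEC ''

RULE 'mig' LIST 'migrated'    SHOW(VARCHAR(USER_ID) || ' ' || VARCHAR(FILE_SIZE)) WHERE is_migrated
RULE 'pre' LIST 'premigrated' SHOW(VARCHAR(USER_ID) || ' ' || VARCHAR(FILE_SIZE)) WHERE is_premigrated
RULE 'res' LIST 'resident'    SHOW(VARCHAR(USER_ID) || ' ' || VARCHAR(FILE_SIZE)) WHERE is_resident

Running that with something along the lines of "mmapplypolicy <fs> -P states.pol -f /tmp/states -I defer" and summing the resulting list files per UID would give a table like the one below.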

Here's a snippet of the output (in human-readable units):
+-------+-----------+-------------+-------------+-----------+
|  User |  Migrated | Premigrated |   Resident  |   TOTAL   |
+-------+-----------+-------------+-------------+-----------+
|   0   |  1.563 KB |  50.240 GB  | 6.000 bytes | 50.240 GB |
| 27338 |  9.338 TB |   1.566 TB  |  63.555 GB  | 10.965 TB |
| 27887 | 58.341 GB |  191.653 KB |             | 58.341 GB |
| 27922 |  2.111 MB |             |             |  2.111 MB |
| 24089 |  4.657 TB |  22.921 TB  |  433.660 GB | 28.002 TB |
| 29657 | 29.219 TB |  32.049 TB  |             | 61.268 TB |
| 29210 |  3.057 PB |  399.908 TB |  47.448 TB  |  3.494 PB |
| 23326 |  7.793 GB |  257.005 MB |  166.364 MB |  8.207 GB |
| TOTAL |  3.099 PB |  456.492 TB |  47.933 TB  |  3.592 PB |
+-------+-----------+-------------+-------------+-----------+

Thanks,
Khanh

Khanh Ngo, Tape Storage Test Architect
Senior Technical Staff Member and Master Inventor

Tie-Line 8-321-4802
External Phone: (520)799-4802
9042/1/1467 Tucson, AZ
khanhn at us.ibm.com (internet)

It's okay to not understand something. It's NOT okay to test something you do NOT understand.



----- Original message -----
From: gpfsug-discuss-request at spectrumscale.org
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Cc:
Subject: gpfsug-discuss Digest, Vol 76, Issue 7
Date: Thu, May 3, 2018 8:19 AM

Send gpfsug-discuss mailing list submissions to
gpfsug-discuss at spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
gpfsug-discuss-request at spectrumscale.org

You can reach the person managing the list at
gpfsug-discuss-owner at spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Re: Recharging where HSM is used (Sobey, Richard A)
   2. Re: Spectrum Scale CES and remote file system mounts
      (Mathias Dietz)


----------------------------------------------------------------------

Message: 1
Date: Thu, 3 May 2018 15:02:51 +0000
From: "Sobey, Richard A" <r.sobey at imperial.ac.uk<mailto:r.sobey at imperial.ac.uk>>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Subject: Re: [gpfsug-discuss] Recharging where HSM is used
Message-ID:
<VI1PR06MB320080BCAB0499AF166A59E4DF870 at VI1PR06MB3200.eurprd06.prod.outlook.com>

Content-Type: text/plain; charset="utf-8"

Stephen, Bryan,

Thanks for the input, it's greatly appreciated.

For us we're trying, as many people are, to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can't charge for 3TB of storage the same as you would down at PC World, and many customers don't understand the difference between what we do and what a USB disk offers.

So, offering tape as a medium to store cold data, but not archive data, is one offering we're just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB? We know how much a tape costs, but we don't necessarily have the means (or knowledge?) to say that, for a given fileset, 80% of the data is on tape. Then you get into the fact that 80% of 1TB is not the same as 80% of 10TB.
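(A toy illustration of the arithmetic only: once the per-state capacity of a fileset is known, for instance from the kind of breakdown Khanh shows above, the charge can be blended from absolute TB rather than from percentages, so 80% of 1TB and 80% of 10TB price differently. The rates here are invented.)

# assumed rates, per TB per month
DISK_RATE = 100.0
TAPE_RATE = 15.0
PREMIG_RATE = DISK_RATE + TAPE_RATE   # premigrated data occupies both disk and tape

def fileset_charge(resident_tb, premigrated_tb, migrated_tb):
    """Blend a monthly charge from the absolute capacity in each HSM state."""
    return (resident_tb * DISK_RATE
            + premigrated_tb * PREMIG_RATE
            + migrated_tb * TAPE_RATE)

print(fileset_charge(0.2, 0.0, 0.8))   # 1TB fileset, 80% migrated -> 32.0
print(fileset_charge(2.0, 0.0, 8.0))   # 10TB fileset, 80% migrated -> 320.0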

Richard

From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Stephen Ulmer
Sent: 03 May 2018 14:03
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Recharging where HSM is used

I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns).

I'd also like to see what people are doing around this.

If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not 'a' question... :)

Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool?

--
Stephen



On May 3, 2018, at 5:43 AM, Sobey, Richard A <r.sobey at imperial.ac.uk> wrote:

Hi all,

I'd be interested to talk to anyone that is using HSM to move data to tape (and stubbing the file(s)), specifically any strategies you've employed to figure out how to charge your customers (where you do charge anyway) based on usage.

On-list or off is fine with me.

Thanks
Richard
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


------------------------------

Message: 2
Date: Thu, 3 May 2018 17:14:20 +0200
From: "Mathias Dietz" <MDIETZ at de.ibm.com<mailto:MDIETZ at de.ibm.com>>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file
system mounts
Message-ID:
<OFE5D1E6A6.3395B08A-ONC1258282.00538CBE-C1258282.0053B576 at notes.na.collabserv.com>

Content-Type: text/plain; charset="iso-8859-1"

Yes, deleting all NFS exports that point to a given file system would allow
you to unmount it without bringing down the other file systems.
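
(For concreteness, a sketch of what that sequence might look like on the CES cluster, assuming the standard mmnfs and mmumount commands; the file system name and export paths below are made up.)

mmnfs export list                            # find the exports that live on the file system
mmnfs export remove /gpfs/remote1/projects   # drop each export pointing into it
mmnfs export remove /gpfs/remote1/scratch
mmumount remote1 -a                          # then unmount it on all nodes for the maintenance window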


Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294



From:   valleru at cbio.mskcc.org
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   03/05/2018 16:41
Subject:        Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Thanks Mathias,
Yes, I do understand the concern that if one of the remote file systems goes
down abruptly, the others will go down too.

However, I suppose we could bring down one of the file systems before a
planned downtime? For example, by unexporting the file systems on NFS/SMB
before the downtime?

I would not want to be in a situation where I have to bring down all the
remote file systems because of the planned downtime of one of the remote
clusters.

Regards,
Lohit

On May 3, 2018, 7:41 AM -0400, Mathias Dietz <MDIETZ at de.ibm.com> wrote:
Hi Lohit,

>I am thinking of using a single CES protocol cluster, with remote mounts
from 3 storage clusters.
Technically this should work fine (assuming all 3 clusters use the same
UIDs/GIDs). However, this has not been tested in our test lab.


>One thing to watch, be careful if your CES root is on a remote fs, as if
that goes away, so do all CES exports.
The CES root file system is not the only concern: the whole CES cluster will
go down if any remote file system with NFS exports becomes unavailable.
E.g. if remote cluster 1 is not available, the CES cluster will unmount the
corresponding file system, which will lead to an NFS failure on all CES
nodes.





From:        valleru at cbio.mskcc.org
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        01/05/2018 16:34
Subject:        Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Thanks, Simon.
I will make sure I am careful about the CES root, and will test NFS exporting
more than two remote file systems.

Regards,
Lohit

On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support)
<S.J.Thompson at bham.ac.uk> wrote:
You have been able to do this for some time, though I think it's only just
supported.

We've been exporting remote mounts since CES was added.

At some point we've had two storage clusters supplying data and at least 3
remote file-systems exported over NFS and SMB.

One thing to watch, be careful if your CES root is on a remote fs, as if
that goes away, so do all CES exports. We do have CES root on a remote fs
and it works, just be aware...

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org
[gpfsug-discuss-bounces at spectrumscale.org] on behalf of
valleru at cbio.mskcc.org [valleru at cbio.mskcc.org]
Sent: 30 April 2018 22:11
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Hello All,

I read from the link below that it is now possible to export remote
mounts over NFS/SMB.

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm


I am thinking of using a single CES protocol cluster, with remote mounts
from 3 storage clusters.
May I know if I will be able to export the 3 remote mounts (from 3 storage
clusters) over NFS/SMB from a single CES protocol cluster?

Because, according to the limitations mentioned in the link below:

https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm


It says "You can configure one storage cluster and up to five protocol
clusters (current limit)."


Regards,
Lohit
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss






------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


End of gpfsug-discuss Digest, Vol 76, Issue 7
*********************************************


