[gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 6

Grace Tsai ghemingtsai at gmail.com
Tue May 29 18:39:24 BST 2012


Hi, Jez,

I tried what you suggested with the command:

mmchfs -z yes   /dev/fs1

and the output of "mmlsfs" is now as follows:

-sh-4.1# ./mmlsfs /dev/fs1
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 32768                    Minimum fragment size in bytes
 -i                 512                      Inode size in bytes
 -I                 32768                    Indirect block size in bytes
 -m                 1                        Default number of metadata replicas
 -M                 2                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 2                        Maximum number of data replicas
 -j                 cluster                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 all                      ACL semantics in effect
 -n                 10                       Estimated number of nodes that will mount file system
 -B                 1048576                  Block size
 -Q                 none                     Quotas enforced
                    none                     Default quotas enabled
 --filesetdf        no                       Fileset df enabled?
 -V                 12.10 (3.4.0.7)          File system version
 --create-time      Thu Feb 23 16:13:28 2012 File system creation time
 -u                 yes                      Support for large LUNs?
 -z                 yes                      Is DMAPI enabled?
 -L                 4194304                  Logfile size
 -E                 yes                      Exact mtime mount option
 -S                 no                       Suppress atime mount option
 -K                 whenpossible             Strict replica allocation option
 --fastea           yes                      Fast external attributes enabled?
 --inode-limit      571392                   Maximum number of inodes
 -P                 system                   Disk storage pools in file system
 -d                 scratch_DL1;scratch_MDL1  Disks in file system
 -A                 no                       Automatic mount option
 -o                 none                     Additional mount options
 -T                 /gpfs_directory1/        Default mount point
 --mount-priority   0                        Mount priority

But I still get the error message in dsmsmj when I click "Manage" on
/gpfs_directory1:

"A conflicting Space Management process is already running in the
/gpfs_directory1 file system.
  Please wait until the Space Management process is ready and try again."
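
(Is there a way to check what the Space Management process is doing,
e.g. something like "dsmmigfs query -Detail", or checking whether the
dsmwatchd/dsmrecalld daemons are running with "ps -ef | grep dsm"?
I'm guessing at those commands from the HSM docs.)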

Could you give more suggestions, please?
Thanks.

Grace

On Tue, May 29, 2012 at 4:00 AM, <gpfsug-discuss-request at gpfsug.org> wrote:

> Today's Topics:
>
>   1. Re: Use HSM to backup GPFS - error message: ANS9085E (Jez Tucker)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 28 May 2012 15:55:54 +0000
> From: Jez Tucker <Jez.Tucker at rushes.co.uk>
> To: gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
> Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS - error message:
>        ANS9085E
>
> Hello Grace
>
>  This is most likely because the file system that you're trying to
> manage via Space Management isn't configured for it, i.e. the -z
> (DMAPI) flag in mmlsfs.
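>
> You can check the flag on its own and, I believe with the file system
> unmounted on all nodes, enable it with something like the following
> (fs1 stands in for your file system device):
>
> mmlsfs fs1 -z          # show just the DMAPI flag
> mmumount fs1 -a        # unmount on all nodes
> mmchfs fs1 -z yes      # enable DMAPI / Space Management
> mmmount fs1 -a         # remount everywhere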
>
>
> http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html
>
> Also:
>
> This IBM Redbook should be a good starting point and includes the
> information you need should you wish to set up GPFS-driven TSM
> migration (using THRESHOLD).
>
> http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1
>
> I suggest you read the Redbook first and decide which method you'd like.
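>
> As a very rough sketch of the policy-driven approach (the pool name,
> rule names and sample script path below are placeholders based on the
> GPFS samples, so check them against your installation and the Redbook):
>
> RULE 'hsmpool' EXTERNAL POOL 'hsm'
>   EXEC '/usr/lpp/mmfs/samples/ilm/mmpolicyExec-hsm.sample' OPTS '-v'
> RULE 'ToTape' MIGRATE FROM POOL 'system' THRESHOLD(90,70)
>   WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME) TO POOL 'hsm'
>
> You would then install the policy with mmchpolicy, e.g.
> "mmchpolicy fs1 policy.txt".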
>
> Regards,
>
> Jez
>
> ---
> Jez Tucker
> Senior Sysadmin
> Rushes
>
> GPFSUG Chairman (chair at gpfsug.org)
>
>
>
> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai
> Sent: 26 May 2012 01:10
> To: gpfsug-discuss at gpfsug.org
> Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E
>
> Hi,
>
> I have a GPFS system version 3.4, which includes the following two GPFS
> file systems with the directories:
>
> /gpfs_directory1
> /gpfs_directory2
>
> I'd like to use HSM to back up these GPFS files to the tapes in our TSM
> server (RHEL 6.2, TSM 6.3).
> When I run the HSM GUI on this GPFS server, the list of file systems on
> this GPFS server is as follows:
>
> File System            State            Size(KB)    Free(KB)   ...
> ----------------------------------------------------------------
> /                      Not Manageable
> /boot                  Not Manageable
> ...
> /gpfs_directory1       Not Managed
> /gpfs_directory2       Not Managed
>
>
> I click "gpfs_directory1", and click "Manage"
> =>
> I got error:
> """
> A conflicting Space Management process is already running in the
> /gpfs_directory1 file system.
> Please wait until the Space management process is ready and try again.
> """
>
> The dsmerror.log  shows the message:
> "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space
> management"
>
> Is there anything on GPFS or the HSM or TSM server that I didn't configure
> correctly?  Please help.  Thanks.
>
> Grace
>
>
>
> ------------------------------
>
>
>
> End of gpfsug-discuss Digest, Vol 5, Issue 6
> ********************************************
>

