[gpfsug-discuss] Virtualized Spectrum Scale

Greg.Lehmann at csiro.au Greg.Lehmann at csiro.au
Wed Oct 26 04:07:19 BST 2016


The SRP work was done a few years ago now. We use the same SRP code for both physical and virtual, so I am guessing it has nothing to do with the SR-IOV side of things. Somebody else did the work, so I will try to get an answer for you.

I agree that performance and stability are good with both physical and virtual.

Cheers,

Greg

-----Original Message-----
From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Aaron Knister
Sent: Wednesday, 26 October 2016 12:49 PM
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Virtualized Spectrum Scale

Hi Greg,

I'm rather curious about your SRP difficulties (not to get too off topic). Is it an issue with SRP by itself, or with the interaction between SRP and the SR-IOV IB HCA? We've used SRP quite a bit, both virtualized and not, and have seen good results in terms of both stability and performance.

-Aaron

On 10/25/16 9:04 PM, Greg.Lehmann at csiro.au wrote:
> We use KVM running on a Debian host, with CentOS guests. Storage is
> zoned from our DDN InfiniBand array to the host and then passed
> through to the guests. We would like to zone it directly to the guests'
> SR-IOV IB HCAs, but SRP seems to be a bit of a dead code tree. We had
> to do a bit of work to get it working with Debian and haven't yet
> spent the time on getting it going with CentOS.
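>
> For reference, the passthrough side of this is the standard libvirt
> raw block-device disk definition; roughly like the following (the
> device path and target name are illustrative, not our actual config):
>
> ```xml
> <!-- raw block-device passthrough of a host-visible LUN to a KVM guest -->
> <disk type='block' device='disk'>
>   <driver name='qemu' type='raw' cache='none' io='native'/>
>   <!-- hypothetical multipath device for the zoned LUN -->
>   <source dev='/dev/mapper/ddn-lun-01'/>
>   <target dev='vdb' bus='virtio'/>
> </disk>
> ```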
>
>
>
> We also run Spectrum Archive on the guest, with tape drives and
> libraries zoned to the guest's PCIe HBAs, which are passed through from the host.
> We are working towards putting this setup into production.
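>
> The HBA passthrough itself is the usual libvirt PCI hostdev (VFIO)
> mechanism; a minimal sketch (the PCI address here is illustrative):
>
> ```xml
> <!-- pass a host PCIe HBA through to the guest via VFIO -->
> <hostdev mode='subsystem' type='pci' managed='yes'>
>   <source>
>     <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
>   </source>
> </hostdev>
> ```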
>
>
>
> Xen was a bit of a failure for us, so we switched to KVM.
>
>
>
> Cheers,
>
>
>
> Greg
>
>
>
> From: gpfsug-discuss-bounces at spectrumscale.org
> [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of
> Mark.Bush at siriuscom.com
> Sent: Wednesday, 26 October 2016 4:46 AM
> To: gpfsug-discuss at spectrumscale.org
> Subject: [gpfsug-discuss] Virtualized Spectrum Scale
>
>
>
> Anyone running Spectrum Scale on virtual machines (Intel)? I'm curious
> how you manage disks. Do you use RDMs? Does this even make sense to
> do? If you have a 2-3 node cluster, how do you share the disks across
> the nodes? Do you have VMs with their own VMDKs (if not RDMs) in each
> node, or is there some way to share access to the same VMDKs? What are
> the advantages of doing this other than using existing hardware? It
> seems to me that for a lab environment or a very small,
> non-performance-focused implementation this may be a viable option.
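>
> (For what it's worth, the one approach I've seen described for giving
> several guests concurrent access to the same VMDK is the multi-writer
> flag, something like the fragment below, though I haven't tried it
> myself and the paths are hypothetical. The backing disk reportedly
> needs to be eagerzeroedthick.)
>
> ```
> scsi1:0.present = "TRUE"
> scsi1:0.deviceType = "scsi-hardDisk"
> scsi1:0.fileName = "/vmfs/volumes/datastore1/shared/nsd01.vmdk"
> scsi1:0.sharing = "multi-writer"
> ```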
>
>
>
> Thanks
>
>
>
> Mark
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>



