Subject: Re: SQL Server I/O performance issue
Posted by:  Geoff N. Hiten (SQLCraftsm…@gmail.com)
Date: Thu, 20 Sep 2007

This may be due to your disk subsystem.

If your storage unit relies on a caching controller in the host computer to
implement RAID, then your performance will definitely suffer in a clustered
environment.  If your storage array has its own cache, then you should see no
difference.  In a failover cluster you cannot enable write caching on a
controller in a host node: the controller is considered part of the host
environment and will not be available during a failure, so enabling the write
cache will corrupt data during a failover event.  You can easily test this by
disabling the write cache in a stand-alone environment and comparing the
results with your clustered system.
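
If it helps, below is a minimal sketch (Python) of one way to compare the two
runs.  It assumes you logged the PhysicalDisk counters with Performance
Monitor during each SQLIOSim run and exported the logs to CSV (relog can do
this: relog mylog.blg -f CSV -o mylog.csv); the file names here are
hypothetical.

import csv

COUNTERS = ("% Disk Time", "Avg. Disk Queue Length")

def counter_averages(path):
    # Average each matching PhysicalDisk counter column in a perfmon CSV.
    # Column 0 is the timestamp; perfmon leaves blanks for missed samples.
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        wanted = {i: name for i, name in enumerate(header)
                  if any(c in name for c in COUNTERS)}
        sums = dict.fromkeys(wanted, 0.0)
        counts = dict.fromkeys(wanted, 0)
        for row in reader:
            for i in wanted:
                try:
                    sums[i] += float(row[i])
                    counts[i] += 1
                except (ValueError, IndexError):
                    pass  # skip blank or missing samples
    return {wanted[i]: sums[i] / counts[i] for i in wanted if counts[i]}

# Hypothetical file names - one counter log per SQLIOSim run.
cache_on = counter_averages("standalone_cache_on.csv")
cache_off = counter_averages("standalone_cache_off.csv")
for name in sorted(cache_on):
    print(name)
    print(f"  write cache on:  {cache_on[name]:10.1f}")
    print(f"  write cache off: {cache_off.get(name, float('nan')):10.1f}")

If the cache-off numbers drop to roughly what you see on the cluster, the
controller cache is your answer.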

--
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP

"YaHozna" <YaHoz…@discussions.microsoft.com> wrote in message
news:3751DA65-D588-46A9-BA41-D4567E1F8B…@microsoft.com...
> Hi. Wonder if anyone can offer me some advice?
>
> I'm currently involved in setting up some new kit prior to an upgrade from
> SQL Server 2000 Standard on a single Windows 2000 server to SQL Server 2005
> Standard x64 on a single-node failover cluster. Additionally, the databases
> are to be held on FAS storage.
>
> Setting up has gone reasonably smoothly - SQL Server installed OK and
> failover is working fine. The only issue I've got is I/O performance between
> the server and the FAS, which seems very poor compared to I/O on the local
> machine. To stress test the setup I'm running SQLIOSim with the recommended
> config file for our setup, and running Performance Monitor with the
> recommended Performance Audit counters. All results are fine except those
> for Physical Disk. Both % Disk Time and Avg. Disk Queue Length show very
> poor performance compared to the results of running the same test against
> the local disks. Average values are 10227 and 102 respectively. I'm not
> quite sure how to interpret the % Disk Time figure, but I think Avg. Disk
> Queue Length should be normalised by dividing the value by the number of
> disks in the array - i.e. 20 in this case. This gives an average of approx
> 5 which, given that this is pretty much constant, would seem to suggest
> poor performance.
>
> So, finally, the question is: is there something fundamentally amiss with
> the setup, or do the figures look OK? Any suggestions for improving things,
> or other stress tools/approaches to investigate? The hardware spec is as
> follows:
>
> Each server has one NIC connected to a 1Gb switch on a private LAN, which
> is in turn connected to a NetApp file server. The other NICs are for the
> heartbeat and the public LAN connection. All are using 1Gb Ethernet. iSCSI
> Initiator v2.05 is installed.
>
> A 100GB LUN was created on the NetApp box, partitioned into a 99GB drive
> and a 1GB drive - these show up as E: for data and Q: for quorum in Windows
> Explorer.
>
> N.B. The above info was supplied by the network guys and is stuff I'm not
> overly familiar with, so be gentle with me :)
>
> Regards,
>
> YaHozna.
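
A footnote on the quoted figures: % Disk Time is derived from the disk queue
length (roughly Avg. Disk Queue Length x 100, which is why 10227 tracks 102 so
closely), so it adds little on its own.  The per-spindle arithmetic works out
as in the quick sketch below, using the numbers from the post above; the usual
rule of thumb is that a sustained queue above about 2 outstanding I/Os per
spindle indicates a saturated array, so a constant 5 per disk does point to a
real bottleneck.

# Per-spindle queue depth from the figures quoted above.
avg_disk_queue_length = 102   # Avg. Disk Queue Length reported by Perfmon
spindles = 20                 # number of disks in the array, per the post

per_spindle = avg_disk_queue_length / spindles
print(f"Outstanding I/Os per spindle: {per_spindle:.2f}")   # 5.10

# Common guidance: a sustained value above ~2 per spindle suggests the
# array cannot keep up with the offered load.
if per_spindle > 2:
    print("Sustained queue depth exceeds the ~2-per-spindle rule of thumb.")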
