<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Citrix &#8211; StorageHacker</title>
	<atom:link href="https://www.storagehacker.com/archives/tag/citrix/feed" rel="self" type="application/rss+xml" />
	<link>https://www.storagehacker.com</link>
	<description>Not just another Storage weblog</description>
	<lastBuildDate>Thu, 29 Oct 2015 15:10:11 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.0.3</generator>
<site xmlns="com-wordpress:feed-additions:1">24418214</site>	<item>
		<title>Server Virtualization</title>
		<link>https://www.storagehacker.com/archives/108</link>
					<comments>https://www.storagehacker.com/archives/108#respond</comments>
		
		<dc:creator><![CDATA[storagehacker]]></dc:creator>
		<pubDate>Wed, 15 Jun 2011 02:57:41 +0000</pubDate>
				<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Hyper-V]]></category>
		<category><![CDATA[hypervisor]]></category>
		<category><![CDATA[VMware]]></category>
		<guid isPermaLink="false">http://www.storagehacker.com/?p=108</guid>

					<description><![CDATA[After 5 years in the SAN storage industry with a virtualization focus, I recently shifted gears to just virtualization in the context of servers, storage, and infrastructure.  This has been an eye opening experience; the most enlightening part of this re-focus has been the incredible effort that Cisco has put into the engineering of the Cisco &#8230; <p class="link-more"><a href="https://www.storagehacker.com/archives/108" class="more-link">Read More<span class="screen-reader-text"> "Server Virtualization"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>After 5 years in the SAN storage industry with a virtualization focus, I recently shifted gears to just virtualization in the context of servers, storage, and infrastructure.  This has been an eye opening experience; the most enlightening part of this re-focus has been the incredible effort that Cisco has put into the engineering of the Cisco UCS platform.  While on the surface the Cisco UCS B-Series looks like just another blade center server, never judge a book by its cover.</p>
<p>The Cisco UCS Platform moves the concept of virtualization into the actual hardware.  Where VMware, Citrix, and Microsoft provide products that abstract the physical hardware to allow greater utilization of server hardware, Cisco has extended the concept of abstracting hardware to the physical servers or blades themselves.  Physical hardware has UUID, MAC addresses, and WWNN/WWPN addresses burned in, which means the operating system will key in on these addresses for certain features.  While hypervisors hide this physical addressing from the Virtual Machines (VMs), the hypervisor itself is tied to these addresses.  This means that you cannot upgrade a blade or server simply by replacing it with a newer version, even in a boot-from-SAN environment, without some manual intervention.  Cisco UCS allows these addresses to be virtual and applied to a blade, meaning the addresses can actually be moved from one blade to another.  This is accomplished through the use of Service Profiles that contain not only the addressing configuration (UUID, MAC, and WWNN/WWPN) but also the firmware version, number of NICs, number of FC HBAs, BIOS settings (CPU settings, memory settings, etc.), and boot order.  The Cisco UCS and Data Center products (Nexus 2000/5000/7000, WAAS, ACE, etc.) are moving towards a wire-once model for all connectivity options (Ethernet, FC, FCoE, or iSCSI).  This means that once the Data Center is wired, adding or changing connectivity options does not require the planning, expense, and downtime of re-wiring.</p>
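<p>The idea behind a Service Profile can be sketched as a plain data structure. The following is a hypothetical illustration only, not the UCS Manager API: the names, addresses, and slot identifiers are made up, but it shows the key point that identity lives in the profile rather than in the blade, so a hardware swap is just a re-association.</p>

```python
# Hypothetical sketch of what a Service Profile captures; identity (UUID,
# MAC, WWNN/WWPN), firmware pin, and boot order travel with the profile,
# not with the physical blade. All identifiers below are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceProfile:
    name: str
    uuid: str                       # virtual UUID presented to the OS
    macs: List[str]                 # virtual MAC addresses, one per vNIC
    wwnn: str                       # virtual node WWN
    wwpns: List[str]                # virtual port WWNs, one per vHBA
    firmware: str = "1.4(1m)"       # firmware bundle pinned by the profile
    boot_order: tuple = ("san", "lan")
    blade: Optional[str] = None     # physical blade currently associated

    def associate(self, blade_slot: str) -> None:
        """Apply this identity to a physical blade slot."""
        self.blade = blade_slot

# Upgrading hardware becomes a re-association, not a re-install:
web01 = ServiceProfile(
    name="web01",
    uuid="01234567-89ab-cdef-0123-456789abcdef",
    macs=["00:25:b5:00:00:1a"],
    wwnn="20:00:00:25:b5:00:00:01",
    wwpns=["20:00:00:25:b5:aa:00:01"],
)
web01.associate("chassis-1/blade-3")   # original blade
web01.associate("chassis-2/blade-5")   # newer blade: same UUID/MAC/WWPN,
                                       # so SAN boot LUNs and zoning still match
```

<p>Because the addresses never change, the boot-from-SAN LUN masking and FC zoning built around them keep working after the move.</p>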
<p>The combination of Cisco UCS and a hypervisor brings virtualization to both hardware and software, forming a very formidable backbone for the next generation data center.  As many companies move towards greater levels of virtualization, private cloud, hybrid cloud, and public cloud to provide more services to end users, having a scalable and flexible hardware platform is just as key as having a hypervisor that is scalable and flexible.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.storagehacker.com/archives/108/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">108</post-id>	</item>
		<item>
		<title>BriForum Day 2</title>
		<link>https://www.storagehacker.com/archives/83</link>
					<comments>https://www.storagehacker.com/archives/83#comments</comments>
		
		<dc:creator><![CDATA[storagehacker]]></dc:creator>
		<pubDate>Thu, 17 Jun 2010 00:35:19 +0000</pubDate>
				<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Hyper-V]]></category>
		<category><![CDATA[hypervisor]]></category>
		<category><![CDATA[VDI]]></category>
		<guid isPermaLink="false">http://www.storagehacker.com/?p=83</guid>

					<description><![CDATA[Day 2 of BriForum has enlightened me even further on why VDI has not penetrated end user environments nearly as fast or as deeply as expected.  There appear to be two main causes that have led implementations to stall or fail in the marketplace, and while both have technical aspects, one also involves political issues with the &#8230; <p class="link-more"><a href="https://www.storagehacker.com/archives/83" class="more-link">Read More<span class="screen-reader-text"> "BriForum Day 2"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>Day 2 of BriForum has enlightened me even further on why VDI has not penetrated end user environments nearly as fast or as deeply as expected.  There appear to be two main causes that have led implementations to stall or fail in the marketplace, and while both have technical aspects, one also involves political issues with the consumer, the actual desktop user.  IT administrators view VDI as the answer to a number of issues with end user support.</p>
<p>IT views the use of gold images (linked/smart clones) that it maintains (OS and application configurations and updates) as a way to provide a consistent experience for all users, as well as a way to help limit the number of helpdesk calls related to end user changes to their desktops.  However, we are in an era where everyone is issued a desktop or laptop that starts from a company standard image but lets the user customize the environment with backgrounds, sounds, or icons that show their individual personality, or with software they are comfortable with to help complete their daily workload.  To allow for this end user customization, either there has to be a layering of technologies in the VDI space on top of the gold image, or each user must be provisioned a full VM that can be fully customized, which brings IT administrators back to almost where they started when they began to explore VDI: OS and application updates, and end user helpdesk calls due to user error or desktop hardware failure.  These two views make it hard for IT to mandate VDI, since without user approval, no matter how effective the VDI solution is, the project will be rejected or will fail during the implementation phase.  And since a hybrid model is possible, with single-image VMs for users of support or CRM-style applications while engineering and test users get full VMs or stay with traditional desktops or laptops, this is not the sole reason that VDI has not penetrated like server virtualization.</p>
<p>The second biggest reason is that desktop user I/O requirements are actually more intense than those of servers.  Studies presented in the sessions at BriForum showed that the average server I/O load is 2.5 IOPS, versus an average of 8-16 IOPS for a desktop user.  A large part of this is because server applications and operating systems are designed specifically to be very efficient, since they service a broad audience with the services they provide.  Desktop applications and operating systems are not as carefully architected, as the hardware services a single user with a standard workload of email, document reading/authoring, web browsing, streaming music, or watching videos.  As a result of this difference in I/O requirements, it is not as simple as taking the original architecture of a server virtualization project and applying that same architecture to the VDI project.</p>
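<p>A back-of-the-envelope calculation shows how much this gap matters at the host level. Using the 2.5 and 8-16 IOPS averages quoted above, and a hypothetical host running 64 VMs:</p>

```python
# Aggregate I/O load per host, using the per-VM averages quoted above:
# servers ~2.5 IOPS each, desktop users 8-16 IOPS each.
vms_per_host = 64            # illustrative consolidation target

server_load  = vms_per_host * 2.5   # 160 IOPS for a host of server VMs
desktop_low  = vms_per_host * 8     # 512 IOPS, light desktop users
desktop_high = vms_per_host * 16    # 1,024 IOPS, heavy desktop users

print(f"64 server VMs:  {server_load:.0f} IOPS")
print(f"64 desktop VMs: {desktop_low}-{desktop_high} IOPS")
```

<p>The same host that comfortably serves 64 server VMs could see three to six times the I/O load from the same number of desktops, which is why the storage layout has to be re-architected rather than reused.</p>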
<p>VDI projects need to be architected from the ground up with new requirements for storage (both the physical disks and the layout of the shared storage), networking, and end user needs and wants.  I say this as a power user myself who has invested a large amount of time and pride in getting my laptop set up with all the tweaks, applications, and personality that make using it every day go smoothly and take the mundane out of the work day.</p>
<p>Being a huge advocate for the merits of VDI, even to the point that I am exploring ways to implement it at home for my family to help eliminate the need for refreshing and managing so many different types of hardware, I am going to be more diligent when proposing or helping my peers with VDI implementations.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.storagehacker.com/archives/83/feed</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">83</post-id>	</item>
		<item>
		<title>BriForum Day 1</title>
		<link>https://www.storagehacker.com/archives/81</link>
					<comments>https://www.storagehacker.com/archives/81#respond</comments>
		
		<dc:creator><![CDATA[storagehacker]]></dc:creator>
		<pubDate>Wed, 16 Jun 2010 02:05:02 +0000</pubDate>
				<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Hyper-V]]></category>
		<category><![CDATA[hypervisor]]></category>
		<category><![CDATA[VDI]]></category>
		<guid isPermaLink="false">http://www.storagehacker.com/?p=81</guid>

					<description><![CDATA[This week I am attending BriForum.  BriForum is a technical conference that draws attendees worldwide for a focus on application and desktop virtualization.  This is the eighth time this conference has taken place, and it is the brainchild of blogger Brian Madden (www.brianmadden.com). The day started early at 7am with breakfast and a high energy &#8230; <p class="link-more"><a href="https://www.storagehacker.com/archives/81" class="more-link">Read More<span class="screen-reader-text"> "BriForum Day 1"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>This week I am attending BriForum.  BriForum is a technical conference that draws attendees worldwide for a focus on application and desktop virtualization.  This is the eighth time this conference has taken place, and it is the brainchild of blogger Brian Madden (<a href="http://www.brianmadden.com" target="_blank">www.brianmadden.com</a>).</p>
<p>The day started early at 7am with breakfast and a high energy keynote by Brian himself.  One of the key points that resonated with me in Brian’s keynote was that server virtualization has been very successful in penetrating the datacenter, but desktop virtualization, which has been promising similar cost and management savings, has not been nearly as successful.  The technical sessions then started and ran until 5:30pm, covering a wide range of topics to help fill in the gaps as to why desktop virtualization (VDI) has been so slow in penetrating the market space.</p>
<p>The eye opening session, which I plan on covering in more detail after the conference, was one of the last ones on day one and was given by Ron Oglesby, Chief Solution Architect at Unidesk, the virtual desktop management innovator.  Ron pointed out firsthand the challenges facing IT administrators who are looking to implement virtual desktops for end users.  Many administrators, including myself (until this session), think that after successfully migrating or deploying a physical server environment to a virtual environment, a desktop migration can use the same requirements and architecture.  After this session, it is clear that a number of assumptions used in a server virtualization project do not translate directly to VDI.  The biggest areas are networking, SLAs, and the storage considerations in the architecture.</p>
<p>I will give an update tomorrow and will try to address areas from Ron’s presentation.  If you are attending BriForum, stop by Xiotech at booth 407 and talk storage and storage management in virtual environments.</p>
<p>-Storagehacker</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.storagehacker.com/archives/81/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">81</post-id>	</item>
		<item>
		<title>Interesting VDI facts</title>
		<link>https://www.storagehacker.com/archives/61</link>
					<comments>https://www.storagehacker.com/archives/61#comments</comments>
		
		<dc:creator><![CDATA[storagehacker]]></dc:creator>
		<pubDate>Thu, 04 Mar 2010 05:21:57 +0000</pubDate>
				<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Emprise 5000]]></category>
		<category><![CDATA[Hyper-V]]></category>
		<category><![CDATA[hypervisor]]></category>
		<category><![CDATA[ISE]]></category>
		<category><![CDATA[VDI]]></category>
		<category><![CDATA[VMware]]></category>
		<category><![CDATA[Xiotech]]></category>
		<guid isPermaLink="false">http://www.storagehacker.com/?p=61</guid>

					<description><![CDATA[Most of the Virtual Desktop Infrastructure (VDI) sizing and performance papers found from the virtualization companies and storage companies state that for sizing purposes a range of 5-20 I/Os per second (IOPS) per desktop should be used.  Using this range as ESG Labs did in their Lab report on HP LeftHand P4000 SAN – Optimizing Virtual Desktop &#8230; <p class="link-more"><a href="https://www.storagehacker.com/archives/61" class="more-link">Read More<span class="screen-reader-text"> "Interesting VDI facts"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>Most of the Virtual Desktop Infrastructure (VDI) sizing and performance papers found from the virtualization companies and storage companies state that for sizing purposes a range of 5-20 I/Os per second (IOPS) per desktop should be used.  Using this range as ESG Labs did in their lab report on <a href="http://www.enterprisestrategygroup.com/2009/07/esg-lab-validation-report-hp-lefthand-p4000-san-optimizing-virtual-desktop-infrastructure-with-citrix-xendesktop/" target="_blank">HP LeftHand P4000 SAN – Optimizing Virtual Desktop Infrastructure with Citrix XenDesktop</a>, with 5 IOPS being the optimistic number and 20 IOPS being the conservative number, administrators can use a storage vendor&#8217;s stated IOPS to determine the number of VDI users per storage array.  The lab report provided the following table:</p>
<p><a href="https://i0.wp.com/www.storagehacker.com/wp-content/uploads/2010/10/400024.png"><img data-attachment-id="102" data-permalink="https://www.storagehacker.com/archives/61/attachment/400024" data-orig-file="https://i0.wp.com/www.storagehacker.com/wp-content/uploads/2010/10/400024.png?fit=637%2C131&amp;ssl=1" data-orig-size="637,131" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}" data-image-title="400024" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/www.storagehacker.com/wp-content/uploads/2010/10/400024.png?fit=300%2C61&amp;ssl=1" data-large-file="https://i0.wp.com/www.storagehacker.com/wp-content/uploads/2010/10/400024.png?fit=637%2C131&amp;ssl=1" loading="lazy" class="aligncenter wp-image-102 size-full" src="https://i0.wp.com/www.storagehacker.com/wp-content/uploads/2010/10/400024.png?resize=637%2C131" alt="400024" width="637" height="131" srcset="https://i0.wp.com/www.storagehacker.com/wp-content/uploads/2010/10/400024.png?w=637&amp;ssl=1 637w, https://i0.wp.com/www.storagehacker.com/wp-content/uploads/2010/10/400024.png?resize=300%2C61&amp;ssl=1 300w" sizes="(max-width: 637px) 100vw, 637px" data-recalc-dims="1" /></a></p>
<p>The Xiotech Emprise 5000 with 7.8TB of useable RAID5 storage and linear expansion using the above IOP figures provide the following level of VDI Users:</p>
<div>
<table border="1" width="848" cellspacing="0" cellpadding="2" align="center">
<tbody>
<tr>
<td align="center" width="158">Number of ISEs</td>
<td align="center" width="153">IOPS</td>
<td align="center" width="194">Virtual Desktops Conservative</td>
<td align="center" width="195">Virtual Desktops<br />
Optimistic</td>
<td align="center" width="146">Response Time<br />
(ms)</td>
</tr>
<tr>
<td align="center" width="158">2</td>
<td align="center" width="153">7,000</td>
<td align="center" width="194">350</td>
<td align="center" width="194">1,400</td>
<td align="center" width="146">27</td>
</tr>
<tr>
<td align="center" width="158">4</td>
<td align="center" width="153">14,000</td>
<td align="center" width="194">700</td>
<td align="center" width="194">2,800</td>
<td align="center" width="146">27</td>
</tr>
<tr>
<td align="center" width="158">10</td>
<td align="center" width="153">35,000</td>
<td align="center" width="194">1,750</td>
<td align="center" width="194">7,000</td>
<td align="center" width="146">27</td>
</tr>
<tr>
<td align="center" width="158">15</td>
<td align="center" width="153">52,500</td>
<td align="center" width="194">2,625</td>
<td align="center" width="194">10,500</td>
<td align="center" width="146">27</td>
</tr>
<tr>
<td align="center" width="158">20</td>
<td align="center" width="153">70,000</td>
<td align="center" width="194">3,500</td>
<td align="center" width="194">14,000</td>
<td align="center" width="146">27</td>
</tr>
</tbody>
</table>
</div>
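<p>The arithmetic behind these rows is simply total array IOPS divided by per-desktop IOPS. A small sketch, assuming 3,500 IOPS per ISE (implied by the 7,000 figure quoted for two ISEs):</p>

```python
# IOPS-bound VDI sizing, reproducing the table above.
IOPS_PER_ISE = 3500   # implied by the table: 2 ISEs -> 7,000 IOPS

def vdi_users(n_ise: int, iops_per_desktop: int) -> int:
    """Desktops an array of n_ise ISEs can serve at a given per-desktop load."""
    return (n_ise * IOPS_PER_ISE) // iops_per_desktop

for n in (2, 4, 10, 15, 20):
    conservative = vdi_users(n, 20)   # 20 IOPS per desktop
    optimistic   = vdi_users(n, 5)    # 5 IOPS per desktop
    print(f"{n:2d} ISEs: {conservative:,} conservative / {optimistic:,} optimistic")
```

<p>Two ISEs give 7,000 IOPS, hence 350 conservative and 1,400 optimistic desktops, matching the first row; the other rows scale linearly.</p>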
<div>However, these numbers do not take into account the fact that VMs require a certain amount of disk space for the installed guest OS and the applications that the VDI users need.  Using VMs configured with Microsoft Windows XP SP3 and a 20GB disk, the number of VMs that a Xiotech ISE with 7.8TB of RAID5 usable space could service would actually look more like this:</div>
<div>
<table border="1" width="845" cellspacing="0" cellpadding="2" align="center">
<tbody>
<tr>
<td align="center" width="280">Number of ISEs</td>
<td align="center" width="282">Number of VMs<br />
with a 20GB disk</td>
<td align="center" width="281">Number of VMs<br />
with a 30GB disk</td>
</tr>
<tr>
<td align="center" width="280">2</td>
<td align="center" width="282">696</td>
<td align="center" width="281">464</td>
</tr>
<tr>
<td align="center" width="280">4</td>
<td align="center" width="282">1,392</td>
<td align="center" width="281">928</td>
</tr>
<tr>
<td align="center" width="280">10</td>
<td align="center" width="282">3,480</td>
<td align="center" width="281">2,320</td>
</tr>
<tr>
<td align="center" width="280">15</td>
<td align="center" width="282">5,220</td>
<td align="center" width="281">3,480</td>
</tr>
<tr>
<td align="center" width="280">20</td>
<td align="center" width="282">6,960</td>
<td align="center" width="281">4,640</td>
</tr>
</tbody>
</table>
</div>
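<p>The capacity side of the sizing is the same kind of division. The table&#8217;s own figures imply roughly 6,960GB per ISE allocated to VM disks (696 &#215; 20GB across two ISEs), a bit below the 7.8TB usable headline; a sketch assuming that implied figure:</p>

```python
# Capacity-bound VDI sizing, reproducing the second table above.
VM_SPACE_PER_ISE_GB = 6960   # implied by the table: 2 ISEs serve 696 x 20GB VMs

def vms_by_capacity(n_ise: int, disk_gb: int) -> int:
    """VMs that fit on n_ise ISEs given a per-VM virtual disk size."""
    return (n_ise * VM_SPACE_PER_ISE_GB) // disk_gb

print(vms_by_capacity(2, 20))    # 696 VMs with a 20GB disk
print(vms_by_capacity(20, 30))   # 4,640 VMs with a 30GB disk
```

<p>The effective user count for an array is then the smaller of the IOPS-bound and capacity-bound figures.</p>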
<p>Each ISE is only 3U and is able to provide a very high density of VDI users with excellent performance and reliability.  Given that most hypervisor vendors recommend 32-64 VDI VMs per physical host, this configuration allows 40U of rack space to provide 1,392 end users with virtual desktops (using twenty-four 1U servers, four 1U FC switches, and four ISEs).  The gains would be all the redundancy of the hypervisor&#8217;s high availability features, resource management, datacenter power redundancy (UPS and/or generator), and the ability to enhance security by keeping all corporate data in the datacenter, as opposed to on user desktops and laptops.</p>
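<p>The 40U configuration above can be checked with the stated unit counts (1U servers and FC switches, 3U ISEs):</p>

```python
# Sanity-check the 40U / 1,392-desktop configuration described above.
servers, fc_switches, ises = 24, 4, 4
rack_units = servers * 1 + fc_switches * 1 + ises * 3   # 24 + 4 + 12 = 40U

desktops = 1392                    # capacity-bound count for 4 ISEs (20GB disks)
vms_per_host = desktops / servers  # 58 VMs per host, inside the 32-64 guideline

print(rack_units, vms_per_host)
```

<p>At 58 VMs per host the design sits comfortably within the 32-64 VMs-per-host range the hypervisor vendors recommend.</p>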
]]></content:encoded>
					
					<wfw:commentRss>https://www.storagehacker.com/archives/61/feed</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">61</post-id>	</item>
	</channel>
</rss>
