ControlUp 4.1–The Master Systems Display for your Virtual Environment

One thing I have always liked about the Engineering section in Star Trek: The Next Generation was the Master Systems Display.  This large display, which was found on the ship’s bridge in later series, contained a cutaway of the ship showing a detailed overview of the operational status of its various systems.


The Master Systems display from Star Trek Voyager. Linked from Memory Alpha.

Although the Master Systems Display is fictional, the idea of having one place to look to get the current status of all your systems can be very appealing.  This is especially true in a multi-user or multi-tenant environment where you need to quickly identify the systems and/or processes that are not performing optimally.  And this information needs to be displayed in a way that makes it easy to understand while providing administrators with a way to dig deeper if they need to. 

Some tools with these functions already exist in the network space.  They can give a nice graphical layout of the network, and they can show how heavily a link is being utilized by changing the color based on bandwidth utilization.  PHP Weathermap is a FOSS example of a product in this space.

ControlUp 4.1 is the latest version of a similar tool in the systems management space.  Although it doesn’t provide a graphical map of where my systems might be residing, it does provide a nice, easy-to-read grid of my systems with their current status, a number of important monitored metrics, and the ability to dive deeper into child items such as virtual machines in a cluster or running processes on a Windows OS.

image
The ControlUp Management Console showing a host in my home lab with all VMs on that host. The Stress Level column and color-coding of potential problems make it easy to identify trouble spots.  Servers that can’t run the agent won’t pull all stats.

So why make the comparison to the Master Systems Display from Star Trek?  If you look at the screenshot above, not only do I see my ESXi host stats, but I can quickly see that host’s VMs on the same screen.  I can see where my trouble spots are and how they contribute to the overall health of the system they’re running on.

What Is ControlUp?

ControlUp is a monitoring and management platform designed primarily for multi-user environments.  It was originally designed to monitor RDSH and XenApp environments, and over the years it has been extended to include generic Windows servers, vSphere, and now Horizon View.

ControlUp is an agent-based monitoring system for Windows, and the agent is an extremely lightweight application that can be installed permanently as a Windows Service or be configured to be uninstalled when an admin user closes the administrative console.  ControlUp also has a watchdog service that can be installed on a server that will collect metrics and handle alerting should all running instances of a console be closed.

One component that you will notice is missing from the ControlUp requirements list is a database of any sort.  ControlUp does not use a database to store historical metrics, nor is this information stored out “in the cloud.” This design decision is a double-edged sword – it makes it very easy to set up a monitoring environment, but viewing historical data and trending based on past usage aren’t integrated into the product in the same way that they are in other monitoring platforms.

That’s not to say that these capabilities don’t exist in the product.  They do, but they work in a completely different manner.  ControlUp does allow for scheduled exports of monitoring data to the file system, and these exported files can be consumed by a trending analysis component.  There are pros and cons to this approach, but I don’t want to spend too much time on this particular design choice, as it would detract from the benefits of the program.
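To give a sense of what consuming those exports might look like, here’s a minimal sketch that averages CPU usage per machine.  The CSV layout (a "Machine" and a "CPU" column) is an assumption for illustration, not ControlUp’s actual export format:

```python
import csv
from collections import defaultdict

def average_cpu_by_machine(export_path):
    """Average the CPU readings per machine from a hypothetical
    ControlUp export CSV with 'Machine' and 'CPU' columns."""
    totals = defaultdict(lambda: [0.0, 0])  # machine -> [running sum, sample count]
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["Machine"]][0] += float(row["CPU"])
            totals[row["Machine"]][1] += 1
    return {machine: total / count for machine, (total, count) in totals.items()}
```

A script like this could run against the export directory on a schedule and feed a simple trending report.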

What I will say is this, though – ControlUp provides a great immediate view of the status of your systems, and it can supplement any other monitoring system out there.  The other system can handle long-term history, trending, and analysis, and ControlUp can handle the immediate picture.

How It Works

As I mentioned above, ControlUp is an agent-based monitoring package.  That agent can be pushed to the monitored system from the management console or downloaded and installed manually.  I needed to take both approaches at times, as a few of my servers would not accept a push installation.  That seems to have gotten better with more recent versions.

The ControlUp Agent polls a number of different metrics from the host or virtual machine – everything from CPU and RAM usage to the per-session details and processes for each logged-in user.  This also includes any service accounts that might be running services on the host. 

If your machines are VMs on vSphere, you can configure ControlUp to connect to vCenter to pull statistics.  It will match up the statistics that are taken from inside the VM with those taken from vCenter and present them side-by-side, so administrators will be able to see the Windows CPU usage stats, the vCenter CPU usage stats, and the CPU Ready stats next to each other when trying to troubleshoot an issue.
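Conceptually, that side-by-side view is just a join of two metric sources keyed by VM name.  A minimal sketch, with illustrative field names of my own rather than ControlUp’s actual schema:

```python
def merge_vm_stats(guest_stats, vcenter_stats):
    """Combine in-guest metrics and vCenter metrics into one row per VM.
    Both inputs are dicts of {vm_name: {metric: value}}; missing metrics
    come through as None so gaps are visible in the grid."""
    merged = {}
    for vm in set(guest_stats) | set(vcenter_stats):
        merged[vm] = {
            "guest_cpu_pct": guest_stats.get(vm, {}).get("cpu_pct"),
            "vcenter_cpu_pct": vcenter_stats.get(vm, {}).get("cpu_pct"),
            "cpu_ready_ms": vcenter_stats.get(vm, {}).get("cpu_ready_ms"),
        }
    return merged
```

Seeing the in-guest number, the hypervisor number, and CPU Ready in one row is what makes spotting contention quick.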

image
Grid showing active user and service account, number of computers that they’re logged into, and system resources that the accounts are utilizing.

For VDI and RDSH-based end-user environments, ControlUp will also track session statistics.  This includes everything from how much CPU and RAM the user is consuming in the session to how long it took them to log in and the client they’re connecting from.  In Horizon environments, this will include a breakdown of how long it took each part of the user profile to load. 

image
Grid showing a user session with the load times of the various profile components and other information.

The statistics that are collected are used to calculate a “stress level.”  This shows how hard the system is working, and it highlights the statistics that should be watched more closely or are dangerously high.  Any statistic that is moderately high or in the warning zone will show up in the grid as yellow, and anything that is dangerously high will be colored red.  This combination gives administrators a quick summary of the machine’s health and uses color effectively to call out the statistics that contributed to that health warning.
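To make the idea concrete, here’s a toy version of a threshold-based stress calculation.  The metric names and cutoffs are my own illustration; ControlUp’s actual stress level formula is its own:

```python
# Illustrative (warning, critical) thresholds -- not ControlUp's real cutoffs.
THRESHOLDS = {
    "cpu_pct": (70, 90),
    "ram_pct": (80, 95),
}

def stress_level(metrics):
    """Return an overall level ('ok', 'warning', or 'critical') plus the
    list of offending metrics, mimicking the yellow/red grid coloring."""
    level, flagged = "ok", []
    for name, value in metrics.items():
        warning, critical = THRESHOLDS[name]
        if value >= critical:
            flagged.append((name, "critical"))
            level = "critical"
        elif value >= warning:
            flagged.append((name, "warning"))
            if level == "ok":
                level = "warning"
    return level, flagged
```

For example, a machine at 50% CPU and 85% RAM would come back as a warning with RAM flagged as the cause.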

image
Not only can I see that the first server may have some performance issues, but the color coding immediately calls out why.  In this case, the server is utilizing over 80% of its available RAM.

Configuration Management

One other nice feature of ControlUp is that it can do some configuration comparison and management.  Say I have a group of eight application servers that all run the same application.  If I need to deploy a registry key or change a service from disabled to automatic, I would normally need to use PowerShell or Group Policy, or manually touch all eight servers, to make the change.

The ControlUp Management Console allows an administrator to compare settings on a group of servers – such as a registry key – and then make that change across the entire group in one batch. 

In my lab, I don’t have much of a use for this feature.  However, I can definitely see the use case for it in large environments where there are multiple servers serving the same role within the environment.  It can also be helpful for administrators who don’t know PowerShell or have to make changes across multiple versions of Windows where PowerShell may not be present.
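The comparison itself boils down to grouping servers by the value they report for a setting and flagging the outliers.  A rough sketch, where fetch_value stands in for whatever mechanism actually reads the setting (a remote registry query, for example):

```python
from collections import defaultdict

def compare_setting(fetch_value, servers):
    """Group servers by the value they report for a setting and flag any
    server that disagrees with the majority."""
    by_value = defaultdict(list)
    for server in servers:
        by_value[fetch_value(server)].append(server)
    # The most common value is treated as the intended configuration.
    majority = max(by_value, key=lambda value: len(by_value[value]))
    outliers = {value: hosts for value, hosts in by_value.items() if value != majority}
    return majority, outliers
```

Once the outliers are identified, the remediation step is just pushing the majority value back out to them.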

Conclusion

As I stated in my opening, I liken ControlUp to that master systems display.  I think that this system gives a good picture of the immediate health of an environment, but it also provides enough tools to drill down to identify issues.

Due to how it handles historical data and trending, I think that ControlUp needs to be used in conjunction with another monitoring system.  I don’t think that should dissuade anyone from looking at it, though, as the operational visibility benefits outweigh having to implement a second system.

If you want more information on ControlUp, you can find it on their website at http://www.controlup.com/

End-User Computing PodCast–Market Research

Back in December, I started kicking around the idea of doing a podcast focused on virtual end-user computing topics.  I wanted to do a little market research to gauge community interest in the idea.

I’ve put together a simple survey to gauge interest in the topic, and I would love to get your feedback.

https://docs.google.com/forms/d/12N7cAiGptfQ6E70wSA1jGdfSYLovydjuQAyuwYhD0kg/viewform?usp=send_form

If you have any questions, please feel free to reach out to me using the email link on the top of the page or by sending a message to @seanpmassey on Twitter.

Time to participate in the Project VRC "State of the VDI and SBC union 2015" survey

The independent R&D project ‘Virtual Reality Check’ (VRC) (www.projectvrc.com) was started in early 2009 by Ruben Spruijt (@rspruijt) and Jeroen van de Kamp (@thejeroen) and focuses on research in the desktop and application virtualization market. Several white papers with Login VSI (www.loginvsi.com) test results were published about the performance and best practices of different hypervisors, Microsoft Office versions, application virtualization solutions, Windows operating systems in server-hosted desktop solutions, and the impact of antivirus.

In 2013 and early 2014, Project VRC released the annual ‘State of the VDI and SBC union’ community survey (download for free at www.projectvrc.com/white-papers). Over 1300 people participated. The results of this independent and truly unique survey have provided many new insights into the usage of desktop virtualization around the world.

This year Project VRC would like to repeat this survey to see how our industry has changed and to take a look at the future of Virtual Desktop Infrastructures and Server Based Computing in 2015. To do this they need your help again. Everyone who is involved in building or maintaining VDI or SBC environments is invited to participate in this survey, even if you participated in the previous two editions.

The questions of this survey are both functional and technical and range from “What are the most important design goals set for this environment?” to “Which storage is used?” to “How are the VMs configured?” The 2015 VRC survey will only take 10 minutes of your time.

The success of the survey will be determined not only by the number of responses, but also by their quality. This led Project VRC to the conclusion that they should stay away from giving away iPads or other prize draws for survey participants. Instead, they opted for the following strategy: only survey participants will receive the exclusive overview report with all results immediately after the survey closes.

The survey will be closed February 15th this year. I really hope you want to participate and enjoy the official Project VRC “State of the VDI and SBC union 2015” survey!

Visit www.projectvrc.com/blog/23-project-vrc-state-of-the-vdi-and-sbc-union-2015-survey to fill out the Project Virtual Reality Check “State of the VDI and SBC Union 2015” survey.

Writing for Virtualization Review – My 2nd Gig

In my end of the year wrap-up post, I teased that there would be an announcement coming early in January.   Now that it’s early January, I can let the cat out of the bag.

Starting this month, I will be writing a twice-per-month column called Sean’s Virtual Desktop for Virtualization Review.  My plan is to write about VDI and to do how-to type articles like the Horizon View series that I’ve done on this blog.

My first article, Pets and Cattle in Virtual End User Computing, should be up sometime in the next day or two.

Edit: The article is now up, and it can be viewed here.

Looking Back at 2014, My Goals for 2015, and Something New….

Christmas is here, and the end of 2014 is just around the corner.  That means it is time to look back on this year and start looking forward to the next.  I also have a couple of announcements at the end of the post.

A Year in Review

2014 was a very busy year with a lot of exciting opportunities to grow.  Some of the highlights from this year were:

  • Gave my first vBrownbag presentation
  • Participated in the Virtual Design Master challenge
  • Attended my second VMworld, where I presented on the vBrownbag stage and met a lot of really cool people
  • Passed both VCAP exams in the desktop track
  • Was a technical reviewer on the VMware Horizon View 6 Desktop Virtualization Cookbook by Jason Ventresco
  • Became a VMware vExpert and a Cisco Champion
  • Joined the Steering Committee for the Wisconsin VMUG

The biggest highlights were early in the year when my wife and I welcomed our second child, a daughter, into the world and moved into our new house.

Blog Statistics

2014 was a very busy year on the blog front.  As of 12/24/2014, I had posted 86 times and received 149,600 page views for the year.  Those numbers should creep up to closer to 90 posts and 150,000 page views by the end of the year.

Some key statistics for the year are:

2015 Goals

I have a few goals for 2015.  These goals are:

  1. Get My VCDX: I will be starting my design documentation for my VCDX after the 1st of the year.  I have my design picked out, and my goal is to defend in October.
  2. Make the jump to a vendor or partner: This kind of goes with goal #1 as this will help with the VCDX process.  I currently work on the customer side, and I would like to transition to the other side and work for a vendor or a partner.  My long-term goal is to get into technical marketing or become a technology evangelist.
  3. Find a better balance between work, community involvement, and family: I think this one is self-explanatory.

And now for the good stuff

I mentioned that I had a couple of announcements at the top of the post.  None of the announcements are a new job, otherwise I would not have listed that as one of my goals for the next year.  But I do think they are exciting opportunities.

The first announcement is that I’ve been invited to present at the inaugural meeting of the North Central Wisconsin VMUG in early February at UW-Eau Claire.  I’ll have more details about the date and time as they become available.

The second announcement is going to be a bit vague at this point.  It’s not a new job, in a manner of speaking, but it is a big opportunity that I’m very excited about.  Keep an eye out in early January for more details.

And a big thank you…

This year wouldn’t have been as successful as it was without the great vCommunity.  A few of the people that I’d like to thank are:

  • The Wisconsin VMUG Leadership team
  • The Virtual Design Master leadership team of Eric Wright, Angelo Luciani, and Melissa Palmer and all of the judges and participants
  • Jonathan Frappier and the vBrownBag crew
  • Josh Atwell – who, despite being in most of the above, has provided some great advice

I’m looking forward to 2015, and I hope everyone has a great year.

Crossing the Finish Line–Lessons Learned During #VDM30in30

Legend has it that, following the Athenian victory over the Persians at the Battle of Marathon in 490 BC, a runner named Philippides ran 26.2 miles back to Athens, announced the victory, and promptly died of exhaustion.  That legend, which I recall being in my high school history book, is inaccurate at best, and historians believe that it stems from the story of Pheidippides, who ran from Athens to Sparta in two days to call the Spartans to war against Persia.

As Jonathan Frappier said earlier this month, “Blogging is a marathon, not a sprint.” A 30-day blog challenge requires good pacing to avoid burnout and the commitment to power through when you hit that creative wall.

This post will be post number 30 for November, and I will officially be dragging myself across the finish line for the VDM30in30 challenge.  I feel that I’ve grown as a writer and a technologist during this event.

I’d like to thank Eric Wright (@discoposse), Melissa Palmer (@vmiss33) and Jonathan Frappier (@jfrappier) for putting this together, Angelo Luciani (@AngeloLuciani) for helping to aggregate all the content, and everyone who participated in this event.

I hope you enjoyed my posts during November, and please feel free to let me know what you thought on Twitter or by email.

PS – I never did get around to writing fiction set in the Virtual Design Master universe.  Fiction writing is a completely different beast, and it requires a lot more planning than technical blog posts.

Sunday Recipe–The Art of Chicago Style Pizza #VDM30in30

The Windy City is well known for some of its culinary delights – Chicago-style hot dogs, Maxwell Street Polish sausage, Italian beef sandwiches, and deep dish pizza.  It seems like you can get one of these staples at practically any intersection, and there are very successful local chains that specialize in these foods.

When it comes to Chicago-Style Deep Dish pizza, there are fewer options.  The main purveyors of these foods are a few restaurants – Giordano’s, Gino’s, Pizzeria Uno, and Lou Malnati’s.  Lou’s is very well known, and they have restaurants all across the Chicago area.

I grew up in Schaumburg, IL, a Chicago suburb most commonly known for Woodfield Mall and having one of the first Ikea stores in the United States.  After high school, I moved to Wisconsin for college. 

It’s not impossible to find a good pizza in Wisconsin, but it can be tough.  There are very few good local pizza places, and they rarely do any Chicago-style pizza.  Lou Malnati’s and Gino’s both ship frozen pre-cooked deep dish pizzas, but it’s just not the same as getting one fresh from the oven.

There are some Uno’s around…but let’s not get into that.

The only way to get a good taste of home fresh from the oven is to make it yourself.

What is Chicago Style Pizza?

Before we can talk about how to make Chicago Style pizza, we have to define exactly what it is.

If you think about how pizza is usually layered, it goes crust – sauce – cheese and toppings.  Chicago-style pizza is fundamentally different – it’s layered crust – cheese – toppings – sauce. 

Note: There are three kinds of Chicago-style pizza: deep dish, which is the kind described above; a thin-crust variety with more of a cracker-like crust; and stuffed pizza, popularized by Giordano’s.  Stuffed pizza is similar to deep dish, but it has an extra layer of crust between the cheese/toppings and the sauce.

Making Chicago Style Pizza

The pizza making community isn’t that different from the virtualization community.  The people who are active in it are very passionate about their craft and open to sharing dough formulas, sauce recipes, and other tips.  There is an entire community on the forums at pizzamaking.com dedicated to Chicago Style pizza with members who have spent considerable time attempting to replicate the formulas for the larger chain restaurants.

In order to make the dough, you will need a heavy stand mixer such as a KitchenAid and a scale.  All of the recipes are expressed in baker’s percentages, so the ingredients will need to be weighed.  Pizzamaking.com does include a page with dough calculators to convert the baker’s percentages to weights.
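If you’d rather do the conversion yourself, the math behind those calculators is simple: every ingredient is expressed as a percentage of the flour weight, so flour itself is always 100%.  A quick sketch (the percentages in the example are illustrative, not a specific recipe):

```python
def dough_weights(flour_grams, percentages):
    """Convert baker's percentages to gram weights.  Each ingredient is
    expressed as a percentage of the flour weight."""
    weights = {"flour": float(flour_grams)}
    for ingredient, pct in percentages.items():
        weights[ingredient] = round(flour_grams * pct / 100, 1)
    return weights

# Example: dough_weights(500, {"water": 47, "oil": 18, "salt": 1.5, "yeast": 0.5})
# yields 235.0 g water, 90.0 g oil, 7.5 g salt, and 2.5 g yeast for 500 g of flour.
```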

There are two dough recipes that I have had very good results with.  These recipes are:

Some people are very particular about the brand of flour, yeast, and oil that they use, but you don’t need to use the exact same brands the recipe calls for.  You should, however, follow the directions closely, as under- or over-mixing can ruin the batch.

Once the dough has risen, I like to put it into a Ziplock bag and let it rest in the fridge for a couple of days.  This makes the dough easier to work with when I bake the pizza.  You don’t have to do this, though – you can use the dough right after it has been through the first rise.

When it comes time to bake the pizza, you will need to use a 2” deep round metal pizza pan such as this 14” non-stick pan.  A pizza stone is not required, but it can help during baking.  I usually put my pizza stone in before I preheat the oven, and I will place the pizza pan on it during baking.

If you prepared your dough in advance and refrigerated it, you will want to take it out of the fridge and place it in the pizza pan to warm up.  I usually let it do this for about an hour as it makes the dough easier to work with.  You will want to start preheating your oven at the same time.  I set the temperature to 450 and let it preheat for an hour, and I will turn the temperature down to 425 before I put the pizza in to bake.

Once the dough has warmed up, you will want to oil the bottom of the pan with a little corn or vegetable oil and then spread the dough out.  Don’t pinch the dough up the sides of the pan – just leave it a little thicker at the edges. 

Once the dough has been spread out, take a regular fork and dock the dough by pressing the fork all the way through.  This helps the dough bake up a little crispier.

Once you have the dough in the pan, it’s time to top the pizza.  The ingredients you will need for topping your pizza are:

  • 8 ounces of shredded mozzarella cheese
  • 1 pound of bulk Italian Sausage
  • 1 28 oz. can of crushed tomatoes

The first layer is mozzarella cheese.  You will need to spread the cheese around on the crust, leaving about 1/2 to 1 inch at the edges.  Most recipes call for mozzarella sliced from a block of cheese, but I use pre-shredded cheese from the store.  I usually use about 8 ounces of shredded cheese.

The next layer is the toppings.  A traditional Chicago-style pizza is topped with Italian Sausage, but you can use any combination of meats and vegetables that you prefer.  If you’re using bulk Italian Sausage from a butcher, you’ll want to roughly shape it into little balls about .75 to 1.25 inches in diameter and place them on top of the cheese.

The last layer that goes on the pizza before being placed into the oven is the sauce.  The sauce consists of one 28 oz. can of crushed tomatoes.  Any brand of crushed tomatoes will work.  If you have a fine-meshed strainer, you will want to drain out as much water as you can.  The texture of the tomatoes should look like a thick, chunky tomato paste after you drain the water off.

You will want to spread this mixture over the pizza as best as you can.  Don’t worry if it doesn’t look like you have enough tomato to cover the pizza – it will loosen up and spread out as the pizza cooks.  If you add another can, you will end up with pizza soup.  Trust me on this – I’ve made this mistake too many times.

After the sauce has been placed on top of the pizza, you will need to turn your oven down to 425 and place the pizza in for baking.  It takes about 30-35 minutes to cook the pizza, and you will want to rotate the pan 180 degrees after 15 minutes.

When your pizza is done, it should rest for a few minutes to cool down before cutting it into wedges.  You will need a spatula or a pie server to remove the pizza from the pan.

A finished pizza should look something like this:

image

Horizon View 6.0 Load Balancing Part 1–#VDM30in30

Redundancy needs to be a consideration when building and deploying business critical systems.  As users’ desktops are moved into the data center, Horizon View becomes a Tier 0 application that needs to be available 24/7, as users will not be able to work if they can’t get access to a desktop.

Horizon View is built with redundancy in mind.  A single View Pod can have up to 7 Connection Servers to support 10,000 active desktop sessions, and the new View Cloud Pod feature allows up to four View Pods to be stretched across two geographic sites.

Just having multiple connection servers available for users isn’t enough.  That doesn’t help users if they can’t get to the other servers or if a load-balancing technology like DNS Round Robin tries to send them to an offline server.

Load balancers can be placed in front of a Horizon View environment to distribute connections across the multiple Connection Servers and/or Security Servers.  There are some gotchas to be aware of when load balancing Horizon View traffic, though.

VMware doesn’t appear to provide any publicly available documentation on load balancing Horizon View traffic; most of the available documentation comes from the various load balancing vendors.  After reading through a few different sets of vendor documentation, a few commonalities emerge.

Horizon View Network Communications

Before we can go into how to load balance Horizon View traffic, let’s talk about how clients communicate with the Horizon View servers and the protocols that they use.

There are three protocols used by clients for accessing virtual desktops.  Those protocols are:

  • HTTPS – HTTPS (port 443) is used by Horizon clients to handle user authentication and the initial communications with the Connection or Security server.
  • PCoIP – PCoIP (port 4172) is the remote display protocol that is used between the Horizon Client and the remote desktop. 
  • Blast – Blast (port 8443) is the remote display protocol used by HTML5-compatible web browsers.

Remote Desktop Protocol (RDP) is also a connectivity option. 
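If you want to sanity-check that these ports are reachable on a connection or security server before pointing clients at it, a quick TCP probe works as a first pass.  The host name is a placeholder, and note that PCoIP also uses UDP 4172, which a TCP check can’t verify:

```python
import socket

def check_view_ports(host, ports=(443, 4172, 8443), timeout=2.0):
    """Attempt a TCP connection to each Horizon-related port and report
    which ones accepted the connection."""
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# Usage: check_view_ports("view.example.com")
```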

When a user connects to a Horizon View environment using either the web client for Blast or the Horizon Client application for PCoIP, the initial communications take place over HTTPS.  This includes authentication and the initial pool or application selection.  Once a pool or application has been selected and the session begins, communications will switch to either Blast or PCoIP.

In the example above, the user connects to the fully-qualified domain name of the security server.  After authenticating, they select a pool and connect using the protocol for that pool.  If they’re connecting over PCoIP, they connect to the IP address of the server, and if they connect over Blast, the connection goes through the URL of the server. 

6

The URLs used by clients when connecting through a security server.  The PCoIP URL is the external IP address used by the server.

When a load balancer is inserted into an environment to provide high availability for remote access, things change a little.  The initial HTTPS connection hits the load balancer first before being distributed to an available connection or security server.  All PCoIP and/or Blast traffic then occurs directly with the security server.

HorizonViewLoadBalancing

This can have some implications for the certificates that you purchase and install on your servers, especially if you plan to use Blast to allow users to access desktops from a web browser.  If you choose not to use HTTPS offloading, the certificate that is installed on the load balancer also needs to be installed on the security servers.  This may require a SAN certificate with the main external URL and the Blast URLs for all servers.

Load Balancing Requirements

There are a few requirements for load balancing your Horizon View environment.  These requirements are:

  • At least 2 Security or Connection Servers
  • A load balancer that supports HTTPS persistence, usually JSESSIONID

If you’re load balancing external connections, you’ll need an IP address for each security server and an IP address for the load balancer interface.  If you have two security servers, you will need a total of three public IP addresses.

In an upcoming post, I will walk through the steps of load balancing a Horizon View environment using a Kemp virtual Load Master.

Horizon View 6.0 Application Publishing Part 5: Manually Publishing an Application

The last post covered the process of creating an application pool using applications that have been installed on the server and are available to all users through the start menu.  But what if the application you need to publish out is not installed for all users or not even installed at all?

The application that needs to be published out might be a simple executable that doesn’t have an MSI installer.  It could be a ThinApp package located on a network share.  Or it could even be a web application that needs to be accessed from non-secure environments.  Whatever the reason, there may be times where an application will need to be published out that isn’t part of the default application list.

The steps for manually publishing an application are:

1.  Log into View Administrator

2.  In the Inventory panel, select Application Pools.

image

3. Click Add to create a new pool.

image

4. Select the RDS Farm you want to create the application in from the dropdown list and then click “Add application pool manually.”

image

5. Enter the following required fields:

  • ID – The pool ID.  This field cannot have any spaces.
  • Display Name – This is the name that users will see in the Horizon Client.
  • Path – The path to the application executable.  This must be the full file path of the executable.
  • Description – A brief description of the application.

image

The following parameters are optional:

  • Version – The version number of the application
  • Publisher – The person or company that created or published the application
  • Parameters – Any command line parameters that need to be passed to the application executable. 

6. Make sure that the Entitle Users box is checked and click Finish.

image

7. Click Add to bring up the Find User or Group wizard.

image

8. Search for the Active Directory user or group that should get access to the application.  Select the user/group from the search results and click OK.

image

9. Click OK to finish entitling users and/or groups to pools.

10. Log into your Horizon environment using the Horizon Client.  You should now see your published application as an option with your desktop pools.

Note: You need to use version 3.0 or later of the Horizon client in order to access published applications.  Published applications are not currently supported on Teradici-based zero clients.

image

The Things I’m Thankful For–#VDM30in30

The United States is celebrating Thanksgiving today.  It’s a day to sit back, take stock of the good things in your life, and give thanks to the deity of your choice for them. 

It’s also a day for lots of turkey, football, family, and ironically (and unfortunately) the day that people rush out to buy the things they want at extremely low prices.

A Quick History Lesson

Tradition holds that the “First Thanksgiving” was held by the Pilgrims in 1621 to celebrate the harvest and give thanks to God for seeing them through the year.  That festival, which lasted three days, was celebrated with the Wampanoag tribe. 

The Thanksgiving holiday that we enjoy today wouldn’t be formalized until 1863, when President Abraham Lincoln issued an order declaring the last Thursday in November to be a national day of thanksgiving, and it wouldn’t be fixed to the fourth Thursday in November until 1942.

What I’m Thankful For

I have a lot to be thankful for.  Some of those things are:

  • My wife, who is an amazing woman who puts up with a lot from me.  She is the rock in my life, and I would be lost without her.
  • My kids, who push me to keep learning more and never settle so they can have a chance at even better opportunities than I have.
  • A great community that now includes many friends and has provided opportunities to learn from the best at what they do
  • A great boss and amazing co-workers

Today is a day to think about the people and the things that you’re thankful for.
