Are you ready? Cisco Live 2017!



Hey everyone!  The dynamic Cisco Champion duo of Paul Campbell (@paulmc3) and Robert Hawley (@writeerase) is heading out to Cisco Live again this year representing ROVE!

Did you catch our recap from 2016?  If not, click here!

This year we're going to switch it up a little bit and post a daily blog update highlighting the news of the day as best we can!

Now, down to business! 

Last year was my first Cisco Live and I learned a lot.  Here are my top five tips for Cisco Live from my perspective.

  1. Dress comfortably
    1. Many of us tend to want to dress a little better during our daily business lives, but Las Vegas in the summer, at a conference where you'll walk miles a day, isn't the best place for that!  Wear jeans, shorts, a polo, whatever you're comfortable with.  Others will be dressed the same and you're in good company.  Trust me, if we can pull this off, you're good!
  2. Wear appropriate shoes
    1. I broke this out from "dress comfortably" because I had a luggage malfunction with the airline and wore sandals on the plane.  Because of that, I was stuck with sandals for almost 24 hours, and let me tell you, it was NOT pleasant!  As stated, you will end up walking many miles over the few days you're at Cisco Live between sessions, keynotes, after-hours events, and more.  Don't get blisters due to poor planning, like I did in 2016.  Speaking of all the traveling, I think the infamous Dino will be showing up again this year in my Twitter feed.
  3. Sessions are great... but don't skip World of Solutions
    1. I booked my entire calendar full in 2016.  As it was my first Cisco Live, I figured the sessions were where it was at.  Unfortunately, I ended up leaving some sessions because I already knew the content.  I found myself 70% enthralled with the sessions and 30% wishing I were on the show floor.  I spent only a few hours all week at World of Solutions, and I promise you, you could spend at least two days entrenched in the exhibits and learn a great deal.  Plus, if you're like me with a son who loves gadgets, you can get some cool schwag, like this sword from Plixer.
  4. Don't be shy, attend the networking events
    1. Last year I found myself surrounded by so many veterans of Cisco Live and overwhelmed by the amount of content.  I should have networked more.  Instead, I let myself be herded like cattle to each new session or event and forwent the social gatherings at World of Solutions and many other events.  This year I am showing up a little later than I would like on Sunday and missing out on all the great pre-Cisco Live meetups, but don't be like me; go join them!
    2. Give back to the community as well while you're out there.  You never know who you'll meet while participating in the event and what you can learn about yourself along the way.  Here we are participating in 'helping stop hunger.'
  5. Keynotes are gems
    1. The opening keynote last year was downright amazing.  The acrobats, the style, the ambiance... just breathtaking.  It was like a party with thousands of close friends.  Chuck Robbins is sure to announce some astounding direction for Cisco's FY'18, which should bode well for the community.  Kevin Spacey closed the week as Frank Underwood from House of Cards last year... and let me tell you, I was giddy with excitement.  This year should be great with the speakers on tap for the keynotes from our industry.  I, along with many colleagues, am very excited for Bryan Cranston's closing keynote!

Don't forget the #CLUS hashtag, and connect with the @CiscoLive social crew while you're out there.  They're putting in a lot of work to make social interaction part of what makes Cisco Live so great!

Be sure to follow me on Twitter, and if you're out at Cisco Live, message me and let's connect!  Stay tuned to @WithRove for updates on our blogs throughout the week.


Paul Campbell - Practice Lead, Software Defined Architectures


NetApp! Welcome to the HCI Party!



As if the hyperconverged market wasn’t already crowded enough with offerings from EMC, Cisco, Nutanix, and SimpliVity, NetApp has finally thrown its hat into the ring today with a SolidFire-based solution.  In my opinion SolidFire is an awesome product, and I am happy to see NetApp finally doing more with it post-acquisition.  Based on the messaging and configuration options, it is apparent they are targeting enterprise-scale workloads instead of edge use cases.

Let’s dig in.

Problems with existing HCI solutions as NetApp sees them:

Inability to fully consolidate workloads:  Traditional HCI cannot accommodate all enterprise workloads, leading to multiple silos of technology.  Most HCI use cases at this point are remote site or “generic” data center workloads.  Very rarely will customers place mission critical applications such as SAP, EHR, Oracle, etc. on HCI.  Those mostly remain on traditional stacks.

Unpredictable performance:  Most HCI solutions have no way of providing granular per-VM performance guarantees.  Some solutions handle high utilization better than others, but it’s still a “first-come, first-served” scenario susceptible to noisy-neighbor issues.
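A rough sketch of what per-VM guarantees buy you.  This is illustrative only: the function, VM names, and numbers are invented, not SolidFire's actual algorithm or API.  The idea is that every VM gets its guaranteed minimum IOPS first, and only leftover capacity is shared.

```python
# Conceptual min/max per-VM IOPS allocation (SolidFire-style QoS idea).
# Everything here is a made-up illustration, not a real API.

def allocate_iops(cluster_iops, vms):
    """Grant every VM its minimum first, then share the remainder
    until each VM's maximum or the cluster capacity is hit."""
    granted = {name: qos["min"] for name, qos in vms.items()}
    remaining = cluster_iops - sum(granted.values())
    hungry = [n for n in vms if granted[n] < vms[n]["max"]]
    while remaining > 0 and hungry:
        share = max(1, remaining // len(hungry))
        for name in list(hungry):
            boost = min(share, vms[name]["max"] - granted[name], remaining)
            granted[name] += boost
            remaining -= boost
            if granted[name] >= vms[name]["max"]:
                hungry.remove(name)
    return granted

vms = {
    "sql-prod": {"min": 5000, "max": 15000},  # mission-critical: guaranteed floor
    "file-srv": {"min": 500,  "max": 2000},
    "dev-box":  {"min": 100,  "max": 20000},  # the would-be noisy neighbor
}
grants = allocate_iops(20000, vms)
# sql-prod keeps at least 5000 IOPS no matter how noisy dev-box gets.
```

The point of the sketch: with a "first-come, first-served" model there is no `min` floor, so the noisy neighbor simply wins.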

HCI Tax: The inability to scale compute and storage resources independently typically results in overbuying either storage or compute.
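The math behind the tax is simple.  A back-of-napkin example, with all node sizes and workload numbers made up:

```python
# Back-of-napkin illustration of the "HCI tax" (hypothetical numbers).
# Coupled HCI: every node adds a fixed bundle of compute AND storage,
# so whichever resource you need more of forces you to overbuy the other.
import math

NODE_CPU_CORES = 32      # per hypothetical HCI node
NODE_STORAGE_TB = 10

need_cores, need_tb = 256, 40   # a compute-heavy workload

coupled_nodes = max(math.ceil(need_cores / NODE_CPU_CORES),
                    math.ceil(need_tb / NODE_STORAGE_TB))
wasted_tb = coupled_nodes * NODE_STORAGE_TB - need_tb

# Decoupled (NetApp-style): buy compute nodes and storage nodes separately.
compute_nodes = math.ceil(need_cores / NODE_CPU_CORES)
storage_nodes = math.ceil(need_tb / NODE_STORAGE_TB)

print(coupled_nodes, wasted_tb)      # 8 nodes, 40 TB of storage you didn't need
print(compute_nodes, storage_nodes)  # 8 compute + 4 storage, nothing stranded
```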


The NetApp Solution:

What NetApp has put forward will result in some discussion around the definition of HCI.  The approach is totally unique in the market; some may contend that it’s not even HCI.  However, it DOES address a lot of the issues above and allows customers to be confident in the ability of the platform to deliver predictable performance levels for the most critical workloads.

NetApp has chosen the 2RU 4-node SuperMicro chassis.  In the solution, there are discrete storage nodes and compute nodes.  The storage nodes form a SolidFire storage cluster, and the compute nodes run the hypervisor (VMware at FCS) and connect to the storage cluster via iSCSI.  Storage or compute nodes may be scaled independently as needed.  Only all-flash storage nodes are available, no hybrid!

Minimum Configuration:

  • (2) 2RU 4-Node Chassis
  • (4) Storage Nodes
  • (2) Compute Nodes


Node specifications:


  • Setup is a simple browser-based install with about 30 inputs, taking 30 minutes or less, and includes storage, vSphere, and vCenter configuration.  At FCS, VMware is the only hypervisor supported.
  • Managed via vCenter plugin and Management VM for monitoring, upgrade, and remote support.
  • Given the underlying storage is SolidFire, the HCI offering inherits all the robust QoS and API features that allow granular control at a VM level.
  • Will integrate with the NetApp Data Fabric, including SnapMirror, SnapCenter, AltaVault, and StorageGRID.

Musings from the Engineer:

As I stated previously, there is going to be a lot of discussion around whether this is even HCI.  It has separate storage, compute, and network, so on paper it appears to be more of a CI “lite” approach packed into multiple 4-node chassis.  I’ll leave that argument to the talking heads.  When I speak to customers, they want administrative simplicity first; form factor is secondary.  Do they care if the nodes are truly hyperconverged or single-purpose?  Even with the role-based node architecture, this is a compact, easy-to-administer solution that provides all the functionality of existing HCI and has promise to solve a lot of the issues holding HCI back in data center environments.

As far as hypervisor support, I hope NetApp will expand to others quickly.  More and more, customers are asking for Hyper-V and XenServer support (yes, XenServer!).  Based on the architecture, it should not be difficult to add multi-hypervisor support.

Stay tuned for lab testing!


Brad Craig

ROVE - Senior Solution Architect




Integrate Fortinet Firewalls to enhance Cisco ACI security

Hey, everyone!  Welcome to a quick blog about enhancing your Cisco ACI security by integrating Fortinet Firewalls through L4-L7 service insertion.  

First, let's take a look at the topology we want to set up (pictured below; who doesn't like pictures?).  Underneath a single tenant and single VRF, we have two separate bridge domains called LAB_DATA and FortiTest.  LAB_DATA has three EPGs, and FortiTest has only one.  Underneath LAB_DATA's bridge domain we have two subnets configured.

So what do we want to test?  As seen in the diagram above, first we wanted to test traditional cross-subnet-boundary firewalling from FortiTestVM2 to FortiTestVM1.  Second, we wanted to test the same functionality, but within the same bridge domain, between FortiTestVM4 and FortiTestVM1.  Lastly, we wanted to execute on the micro-segmentation strategy and, within a single bridge domain and a single subnet, talk from FortiTestVM3 to FortiTestVM1.  What now?  Let's get after it!

We have a FortiGate 1500D firewall, and the first thing we need to do is create a new admin user that we're going to use for Cisco's ACI integration.  We also have to create a port-channel (manually) with a name we reference later on for the L4-L7 device. (pictured later)

Next, we log in to the ACI APIC and install the device package.  This one is the Fortinet device package for APIC 1.3, but it was tested and working on 2.2(1n), as well as after an upgrade we did to 2.2(1o).

Here we set up the L4-L7 device.  If you haven't done it before, there are a few things to know.  You need to determine whether you are going to utilize "GoThrough" or "GoTo" mode for the function, as well as whether or not you're using a context-aware device.  An older but good quick reference from ACI 1.2 is here.  Notice in the top-right section we had to input the "interfaces"; while we did select the VPC itself, we had to give it the name "ACI" manually, which is the component we spoke about earlier and will show a picture of in just a moment.  Likewise, you then have to set up the consumer and provider interfaces in the bottom right, which will always be the 'Name' (Device1 in this case) and the 'Interface' you set up (ACI in this case).  Since we're doing a single device in one-armed mode, they use the same interface.
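For the curious, the same objects can also be inspected through the APIC REST API rather than the GUI.  A minimal sketch: the host and credentials are placeholders, and while the aaaLogin endpoint and class-query URL shape are standard APIC conventions, treat the vnsLDevVip class name as my assumption to verify against your APIC's object model.

```python
# Sketch: building an APIC login payload and a class query for L4-L7 devices.
# Host, credentials, and the vnsLDevVip class name are assumptions to verify.
import json

APIC = "https://apic.example.local"   # hypothetical APIC address

def login_payload(user, pwd):
    """JSON body for POST {APIC}/api/aaaLogin.json."""
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})

def l4l7_device_query_url():
    """Class query listing configured L4-L7 devices (assumed class vnsLDevVip)."""
    return APIC + "/api/node/class/vnsLDevVip.json"

body = login_payload("aci-fortinet-admin", "s3cret")
url = l4l7_device_query_url()
```

Sending these with an HTTP client (and the returned session token) is a handy way to double-check what the device package actually pushed.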

Upon doing so, the APIC pushes down a new VDOM (9847 in this case) and also chooses a VLAN from the "dynamic VLAN pool" in ACI for the physical connection to the FortiGate firewall.  We have it configured with a range of 5 VLANs, and it chose VLAN 5 here and created ACI_v5.  Notice the "ACI" physical connection we set up in the beginning!

Next, we created our service graph template in ACI for the consumer to provider context through the FortiGate firewall.  Visual representation below.

Next, you deploy the service graph template in ACI, which associates the consumer and provider EPGs through a contract.  After deployment, EPG membership is auto-populated on the FortiGate as objects (left), and you can see validation of the deployed services in ACI (right).

The policies for this test were an "any" rule on the FortiGate; all "denies" were handled by the contract provide/consume and filter allowances within ACI.  There is always a debate on how this should be handled.  I prefer to drop all non-critical packets at the fabric layer and save any compute/overhead from hitting my firewall.  Now, I wouldn't necessarily recommend everyone deploy their firewalls with just "any any" everywhere, but you could!  What would be most beneficial is to keep your firewall rules the way you have them today and amend your deep packet inspection, URL filtering, DLP, et cetera in the rules.

Here we see the rules we expect to see, one subnet to another through a firewall, but this isn't anything new is it?

This is what we've been working toward!  Hosts within the same subnet, part of the same VRF and the same bridge domain, but in separate EPGs, talking through a stateful firewall!

So, what is the one thing we didn't discuss?  How can a firewall write the rules for you based on the EPG/contract association in ACI?  That would be awesome!  Unfortunately, this isn't a Fortinet problem; it is a limitation of all firewall integrations at this time.  It is risky business to have your firewall policies written and maintained by an external entity, and I can see the security concerns and why there is a separation.  Through software-defined networking we're looking to make the network intelligent and communicate together, which it is!  However, we're not yet at the state I think we will reach within another year.

In conclusion, L4-L7 integration between Fortinet and Cisco's ACI worked splendidly.  While some see ACI as the "end all" for security in the data center, there are things it doesn't do natively that other devices do well.  If you're looking to make a move in the data center and upgrade your infrastructure, consider how Cisco ACI can integrate with your current or future solutions by setting up a discussion with one of our ROVE experts.


Paul Campbell

ROVE | Practice Lead, Software Defined Architectures



Physical Server Protection with Rubrik



Rubrik within a VMware environment has always been an easy sell.  The product demos well, and customers respond positively to how easy it is to set up and manage.  As with many other virtualization-focused data protection products, however, protecting the physical servers that still exist has been an issue.  As a result, many organizations use a secondary set of backup tools to handle the physical servers, or an overly complex solution that does both physical and virtual.  As IT organizations search for simplicity and a better way to protect their data, Rubrik is definitely delivering.  As a former backup administrator, I always had to choose the backup product that stunk less and was never really impressed with any of the options I had.  I’m not sure it’s possible to make data protection sexy, but Rubrik gets REALLY close.

As part of the 3.0 and 3.1 release, Rubrik has quickly added and improved options for supporting physical servers.  Support now exists for the following scenarios:

  • Physical Windows Server Filesystems
  • Physical Linux Server Filesystems
  • NAS
  • Physical SQL (Standalone and Clustered)

In our lab at ROVE, we test and document the setup and operation of products and their features, not only to learn the technology we implement, but to ensure that the solutions we sell work as advertised.  Over the next few weeks I will be configuring the features above and posting videos of the process; the first covers protecting physical Windows Servers.

Video 1:  Physical Windows Filesystems

Brad Craig

ROVE - Senior Solution Architect



Cisco CloudCenter Summary



I had the opportunity to attend a four-day partner class on Cisco CloudCenter (formerly known as CliQr). For those not familiar with CloudCenter, it is an application-defined cloud management platform that allows users to easily deploy and manage any application on any supported cloud (private, AWS, Azure, Google, etc.).

CloudCenter has a simple architecture consisting of two primary components:

CloudCenter Manager (CCM) – The interface in which users model, deploy, and manage applications on and between a data center and a cloud infrastructure, and in which administrators control clouds, users, and governance rules.

CloudCenter Orchestrator (CCO) - Resident in every data center or cloud region, the CCO automates deployment of the application along with provisioning and configuring infrastructure – compute, storage, and networking – per the application’s requirements.

The curriculum focused on the following:

  • Installation of the CCM, CCO, and AMQP
  • Baseline configuration
  • Building a private VMware cloud with ACI
  • Building a public cloud (AWS)
  • Modeling applications

As we pushed CloudCenter through its paces, it became clear that it is a compelling solution for today’s IT organizations. Whether you are just starting with user self-service in a data center, migrating your first application into the cloud, or executing the second or third iteration of a cloud strategy, CloudCenter can be a great fit for your environment.

We will continue to test scenarios with CloudCenter in the ROVE Lab to be able to provide the latest in integrated solutions.

If you need any assistance with your cloud journey, please contact ROVE.


Mark Trojanowski

ROVE | Solution Architect



Goodbye VGA & HDMI Cable! Wireless DESKTOP Sharing is now available with Cisco Spark!



We’ve been able to use Cisco’s wireless Proximity feature for quite some time on Telepresence room systems registered to on-premises infrastructure (Unified Communications Manager or VCS) to share content (desktop screen, PowerPoint presentations, etc.), but this has been a much-requested feature for those of us who have taken the plunge into Cisco’s cloud-based telephony platform, Spark.  Removing clutter from the conference room tabletop has been a goal of all our customers who are fed up with VGA and HDMI cables strewn all over the place.  A cable has been required for sharing content with room endpoints registered to the Spark service… until now!

The Answer:  Desktop Proximity Pairing for Mac and Windows systems

Here’s what you need to know:

  • Cisco Spark for Mac and Windows users will see their Spark avatar photo automatically and the app will indicate when paired to the room system
  • Manual pairing is also available.  A prompt will appear on the room device with the name of the requesting person and an option to ‘Accept’ or ‘Decline’ the wireless screen share or call
  • Want to click-to-dial directly from your Mac or Windows Spark app?  No problem, just check the box labeled “Use this system for calls” and you can now control your room-based system for making calls directly from your app!
  • Spark Proximity is NOT compatible with on-premises Telepresence systems; however, you can still use the separate Cisco Proximity app.

Want to learn more about Cisco Spark?  Contact us and we can set up a Spark demo at your location.


Share My Screen

Add Nearby Devices

Robert Hawley

ROVE | Solutions Architect - Enterprise Collaboration



Plan, Execute, THEN Drink the Beer!



Everyone's favorite thing to hate: planning.  I don't have all the answers, but I want to try to provide some guidance to someone specific.  Yes, you, the dude who recognizes the need for planning when nobody else around does. The dudette who is tired of having projects go awry every... single… time. Beers are great to celebrate a project victory, but a lot of us can often find ourselves using them throughout the project cycle to help numb the pain. I get it; I've been there.  

There are a lot of things that this post is not.  It is not a crash course in project management.  I don't talk about cool buzzwords like agile or scrum, and I'm not going to talk about change management processes.  It is simply a collection of my observations around the businesses I've worked in or with, and the multitude of projects I've seen succeed and fail. But read on, I think you'll still find it useful.

Why Don't We Plan?

Fail to plan, plan to fail, right?

So first, let's answer this question: why don't organizations plan, or why do they plan poorly?  Many companies have huge gaps in planning.  There are many reasons, but here are a few I see most frequently.

Misconception 1: many organizations (or projects within an organization) think they are too small to even need a plan.  Let me disabuse you of that notion immediately.  Every organization and every project needs a plan.  It is with some irony that I've noticed massive, complex projects tend to be smoother and more successful than small, "easy" ones.  Why is that?  Simple: because people actually take the time to organize an action plan for big projects.  The little critters that sneak through are the ones where dependencies are missed and you accidentally take down production.

Misconception 2: no time, no resources, no leadership, etc.  It is difficult to find extra time to do anything, and planning is no exception.  It takes valuable time and money.  It takes encouragement and buy-in from managers and business leaders.  But again, let me tell you, it pays off.  It is one of the best investments an organization can make.  Eight hours of planning can stave off days, weeks, or even months of scrambling to get something to work right.  You may shake your head, but I have seen it, over and over again.  It can be hard to convince people to let you take the time to plan, I know.  More on this later.

Misconception 3: no plans are needed because no impact is ever seen.  Sometimes upper management is led to believe this because they have superheroes leading the charge on all of their projects, when in reality lower management is obscuring the truth.  Most often what this means is that your sysadmins, your engineers, your operators are putting in a ton of after-hours, overtime, extra work making things fit together after the fact.  They are getting pulled in a ton of different directions, always at the last minute, always fighting fires.  You can only run an organization like this for so long before people get burned out.  It is good to report a project as a success after a near disaster, but it is definitely not something to celebrate if it took your admins out of commission for nights and weekends, away from their families, moments that can't be returned to them.  Even though from the top this looks like success, it is not a culture to be celebrated.

How Can I Start?

So you want to plan, right?  But you can't find the time, resources, management approval, etc.?  

My advice here is to start small.  Don't try to plan out something monumental first.  In fact, I would find something that is almost comically small.  It could be getting the physical cables for two servers.  This is a good idea because it is almost guaranteed to be successful (even without a plan), and it will give you experience in what it takes as well as expose others (and hopefully others in charge) to what the process will look like.

Next, establish a framework.  I will give you a short sample below, but look for something generic that can fit most projects you do.  Don't come up with an impossibly long workflow or a lengthy list of forms that people need to fill out.  Keep it as simple as possible while meeting your goals.  Having a framework is important because it shows you are serious about the process.  Calling a meeting with no agenda to talk about some random thoughts in your head frustrates everyone in the room.  But inviting people together to tell them concretely what you need from them (and they need from you) to be successful is welcomed and a great use of resources and time.

Lastly, recognize that in a lot of ways, planning is a little more of an art than a science.  Your process, requirements, and workflow will be different than others.  And like with anything else, you will get better the more you do it.  You'll hone your process and your craft over time.  The important thing is to start!

Got a Plan for a Plan?

Yes, in fact I do.  Here is a very high level generic workflow for project planning I try to follow.  

  1. Players and Goals - identify what the project is and what the goals are.  Why are you getting two new servers?  Also identify who is directly involved in the project.  
  2. Data Gathering/Inventory Gathering - Identify what you have today.  This could be anything from number of physical ethernet cables to number of VMs to storage array size.  Depending on the project, this will take a while, but it is both the most valuable and the most ignored part of projects that I've seen.  Good data gathering is your best weapon against unknowns - missed dependencies, missed requirements.  And as a bonus you can use data you gather for project or internal documentation.  Don't skip it!
  3. Requirements, Gap, and Impact Analysis - Based on your data and your understanding of the project, identify any requirements you have, and identify any missing items you need to make the project successful.  Also identify who and what will be impacted by this project.  You may have a gap simply because an important group doesn't know your project is going on and they will be affected.  Make sure every requirement has a designated name assigned to it as you check it off!  If you need 6RU of rack space and John Brown from the DC Ops team tells you he's got it reserved for you, write his name next to the requirement.  If I ask whether we need a change window reserved in advance and Jane Doe tells me no, then I write Jane Doe next to that item.  This isn't about assigning blame, as you are all on the same team; it is about accountability.  The requirements people usually miss are the most basic things: rack units, power, network ports, power and network cables (quantity and type), maintenance windows, change management processes, data center access tickets.  Seriously, these things are IT 101, but they can delay projects for an absurd amount of time.
  4. Execution Steps - this should be (as detailed as you need it) steps to actually accomplish the project, which should flow from the previous 3 items.  If you write a step down here and you haven't accounted for it in the previous 3 steps, something is wrong and you need to go back and figure it out.  
  5. Post-Project Tasks - This can be anything from update documentation to post-mortem analysis.  Anything that needs to get done after the main execution window of the project is over.  And yes, now you can drink the beer!
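Step 3's "write a name next to every requirement" needs nothing fancier than a list with an owner column; a spreadsheet works just as well.  A minimal sketch with invented names and items:

```python
# Toy requirements checklist: every item gets an accountable owner.
# Items and names are illustrative only.
requirements = [
    {"item": "6RU of rack space",      "owner": "John Brown (DC Ops)", "done": True},
    {"item": "Change window reserved", "owner": "Jane Doe",            "done": True},
    {"item": "Power + network cables", "owner": None,                  "done": False},
]

def unaccounted(reqs):
    """Requirements with no accountable name are the ones that bite you."""
    return [r["item"] for r in reqs if r["owner"] is None]

print(unaccounted(requirements))   # ['Power + network cables']
```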

Again, the more you do it, the easier and more natural it will become. Planning is a skill like anything else.  And the more you show that you are committed to it (and the more others see your projects succeeding because of it) the more buy in you'll get across the board.


Joel Cason

ROVE | Senior Technical Consultant



Happy Thanksgiving from ROVE



Thanksgiving is upon us, where we give thanks to our family, friends, and colleagues.  This year my family decided to open our doors for a ‘Friendsgiving’ to allow anyone without a place to go or family nearby to participate.  I think we can all benefit from opening our doors a little more and widening our scope this holiday season.  I’m thankful for my wife and son that I adore.  I have coworkers, customers, and partners that have become friends, not just colleagues.  I’m grateful for a career doing something I love and being able to influence others in a positive fashion.  I’m thankful for a place of business where I’m surrounded by like-minded individuals with impassioned goals to impact our industry, our customers, and ourselves.


ROVE is thankful for all our partners at DellEMC, Cisco, VMware, Citrix, ServiceNow, and more.  We are grateful for participation in our events, tailgates, and technology summits.  Most importantly, we at ROVE are thankful for every customer that has let us help them along their journey.


Enjoy your holiday and time off with friends and family.  May your network be stable, your compute be fast, your storage plentiful, and your customers joyful.


From everyone at ROVE, Happy Thanksgiving!


The Amazingness That Will Be vSphere 6.5



In case you haven't heard the news, VMware announced vSphere 6.5 at VMworld Europe this year.  The software hasn't actually been released yet, and as usual I wouldn't recommend anyone jump on the bandwagon at the .0 release (unless you have a test environment).  Nobody wants to be the first gazelle across the river!

Make no mistake, there are a TON of new features and enhancements in this version, which you should be checking out on VMware's press releases.  I don't want to cover them all, as you'd need a gallon of coffee to make it through, but I did want to talk about a few of them that I'm personally excited about.

VCSA Updates

I feel like a lot of people hate the vCenter Server Appliance the way they hate the web client.  They've been doing VMware for a long time, have gotten used to the old ways, and, let's be frank, the first releases of these products were sub-optimal (read: sucky).

However, my experience with the VCSA over the last couple of releases has been really positive.  I think it is a much simpler deployment model and saves on Windows licenses, and sometimes SQL licenses.  I have had some issues trying to do various upgrades and conversions, but that is actually one area of focus in this release.  The upgrade process is smoother, and you can take Windows vCenters of either the 5.5 or 6.0 flavor and convert them automagically to the VCSA.

Additionally - drumroll - VMware Update Manager is finally included!  And the upgrade/conversion will take all of your baselines along with it.  This was a big sticking point in the past: please use the VCSA so you don't have to burn a Windows license, but also please start a license fire right here because I need to deploy VUM.

There are a lot of general improvements including better performance and scaling, backup/restore, new and better stats collection, and a built-in HA feature that involves cloning a running VCSA into a secondary and a witness.  I'm unclear at this point whether these instances are additionally licensed...based on the clone model I'm assuming NO but it was something I thought about.  I'm also not sure about the external, multiple PSC deployment model and how that fits into the new VCSA, but again this is brand spanking new so a lot of details are still hazy.

6.5 also improves the Web Client UI and introduces the HTML5 client, which no longer requires Flash.  The HTML5 client has a completely redesigned look compared to either the older fat client or the web client.


Another big focus in 6.5 is security. While there are multiple security enhancements that I like (like verbose logging so I can see WHAT changed instead of THAT something changed!!!), encryption is always a big conversation with customers.

vSphere 6.5 enables two kinds of encryption.  The first is VM encryption.  This is a biggie because, just as vSphere Replication unlocks your storage array choices at different sites, VM encryption at the hypervisor level unlocks your storage array (and possibly fabric) choices.  A storage array that doesn't support Data at Rest Encryption is no longer a deal breaker.  VM encryption applies to VMDKs, VMX files, and snapshots, and is managed with VMware's Storage Policy Based Management (SPBM) system.  This is a great choice because SPBM is simple, and security that isn't simple is never used, and therefore worthless.  It also looks like you will have a variety of key management options available.

The second is vMotion encryption.  This is basically a single-use encryption stream from host to host for the specific vMotion operation.  Some people might not think this is a big deal, but beyond the security conscious, service providers and people leveraging cross-vCenter and/or long-distance vMotion should be pretty excited.  vMotion has always been a must-isolate network due to the transmission of memory contents right out in the open.  I know a lot of network and security admins who will sleep a little easier with this enhancement.
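The "single-use" idea is just ephemeral keying: generate a fresh random key per operation and never reuse it.  A conceptual sketch, purely for illustration and not VMware's actual implementation:

```python
# Conceptual sketch of per-operation ("single-use") keys: each vMotion
# gets a fresh random key that is discarded afterwards.
import secrets

def new_vmotion_key():
    """Generate a one-time 256-bit key for a single migration."""
    return secrets.token_bytes(32)

k1 = new_vmotion_key()   # migration of VM A
k2 = new_vmotion_key()   # migration of VM B: different key, no reuse
```

Because no long-lived key ever crosses the wire and nothing is reused, capturing one migration's traffic tells an attacker nothing about the next one.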

Enhanced HA/DRS/FT

The last features I'm stoked about are the enhancements to some of the most basic and most often used (well, except FT) functions of vSphere.

High Availability now has the ability to set VM dependencies.  This prevents your application layer from starting before your database layer and breaking your environment when a host failure takes out both VMs.  With dependencies configured, even after an HA event your apps start up in the right order with your own intelligence guiding the process, further reducing your outage/downtime window.  HA does it for you, rather than you getting a call from the helpdesk at 2 AM letting you know that the application is unavailable.
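The restart-ordering idea behind these dependencies can be sketched as a simple dependency graph.  This is only an illustration of the concept, not HA's actual implementation, and the VM names and tiers are hypothetical:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical VM dependency graph: each VM maps to the VMs it depends on.
# A dependency-aware restart brings up dependencies before dependents.
deps = {
    "db01": set(),               # database tier depends on nothing
    "app01": {"db01"},           # app tier waits for the database
    "app02": {"db01"},
    "web01": {"app01", "app02"}, # web tier waits for the app tier
}

# Topological order: every VM appears after everything it depends on.
restart_order = list(TopologicalSorter(deps).static_order())
print(restart_order)  # db01 first, web01 last
```

The point is simply that a declared dependency graph lets the platform compute a safe power-on order instead of racing all VMs up at once.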

HA also has a deeper tie-in to the host hardware, so it can essentially pre-fail an ailing host and put it into quarantine mode.  HA will begin evacuating the host with vMotion (as long as that doesn't cause a problem elsewhere), and DRS will not move VMs to it.  Again, a cool enhancement that will increase uptime.

Speaking of DRS, it gets a boost: it now considers the saturation of host NICs in its algorithms.  So if a host has a memory/CPU discrepancy but its network is getting absolutely hammered, DRS won't exacerbate the problem by moving more workload onto it.  DRS also exposes some (previously kinda-sorta hidden) advanced options directly in the GUI as checkboxes.  One of these is VM distribution: as long as it won't cause a performance issue, DRS will try to keep the number of VMs roughly even across hosts.  This helps prevent a single host from housing half of your environment and causing a major issue during an HA event.  Again, these are seemingly small improvements, but combined they will help your business stay online.  Priceless.
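To make the network-aware placement idea concrete, here is a toy host-selection sketch.  This is not VMware's actual algorithm; the threshold, field names, and scoring are all invented for illustration:

```python
# Toy network-aware placement: prefer the host with the least CPU/memory
# pressure, but refuse to place work on hosts whose NICs are saturated.

NIC_SATURATION_THRESHOLD = 0.80  # assumed cutoff for this sketch

def pick_host(hosts):
    """hosts: list of dicts with cpu_used, mem_used, nic_used as 0..1 ratios."""
    eligible = [h for h in hosts if h["nic_used"] < NIC_SATURATION_THRESHOLD]
    if not eligible:
        return None  # no network-healthy target; leave the VM where it is
    # Lowest combined CPU+memory pressure wins among eligible hosts.
    return min(eligible, key=lambda h: h["cpu_used"] + h["mem_used"])

hosts = [
    {"name": "esx01", "cpu_used": 0.30, "mem_used": 0.40, "nic_used": 0.95},
    {"name": "esx02", "cpu_used": 0.60, "mem_used": 0.55, "nic_used": 0.20},
]
# esx01 has more free CPU/memory, but its NIC is hammered, so esx02 wins.
print(pick_host(hosts)["name"])
```

Without the NIC check, a purely CPU/memory-driven balancer would keep piling work onto the host with the saturated network, which is exactly the situation this enhancement avoids.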

Fault Tolerance continues to get performance enhancements.  Now that 6.0 removed the single-vCPU limit, and with more and more efficiency in the configuration and network areas, I fully expect to see more customers using FT.


If you haven't checked out the VMware release notes for 6.5, please do.  This post just scratches the surface; there are also improvements related to the vRealize suite, general cloud and container support, and VVols 2.0.  Hopefully this was a good starter to whet your appetite for what I'm sure will be a game changer in a lot of ways for your VMware environment.


Click here for more original content from Joel Cason



Going above and beyond for customers



In today’s competitive market, “satisfactory service” is no longer adequate.  Businesses have to exceed customer expectations and deliver exceptional service.  But for a professional services organization, gauging exactly what that means is difficult.  So... how do we do it?

Each customer comes with different needs and expectations.  As we work closely with our customers, we need to determine how we fit into their plans and how much they are willing to invest in the relationship.  The key is to identify which role the customer wants us to play: a “service provider” or a “trusted partner.”  That said, given the services we offer and our reputation, we always aim for customers to consider us the latter.

Often when we first engage with a customer, they prefer us to act simply as a “service provider”; that is, we are there to resolve a well-defined problem or implement a predetermined solution.  They have a clear expectation of service and delivery, and may not yet be able to invest a lot of time, effort, and/or money in the relationship.  In these circumstances, our focus is to answer their immediate questions, solve their problems, exceed expectations, and demonstrate our ability to follow through on promises.  As part of our core values, we always strive to develop long-term relationships through our leadership, our dedication to meeting our customers' goals, and the personal touch that sets us apart from the average one-and-done service provider.

When we have put in the time and work necessary to be viewed as a “trusted partner,” the customer is more willing to invest time, effort, risk, and/or money.  In return, they expect our organization not only to deliver on defined scopes of work and resolve problems, but to add value and go the extra mile.  To create true value, the organization as a whole has to deliver on a consistent basis.  That means having a deeper understanding of the customer’s business and how IT affects it.  From pre-sales through implementation, the focus is on a holistic solution that not only solves the issue at hand but takes a strategic view of how the solution fits into the broader IT strategy.  This requires a different approach from our organization, and from each individual in how they interact with the customer.  The individualized personal approach remains important, but the entire organization needs to be on the same page as well.  This is an area where ROVE excels: strong communication across all the teams servicing an account, all focused on doing the right thing for the customer.

I believe that going above and beyond for the customer comes not only from the individual but must be instilled in the culture of the organization.  At ROVE, it obviously is.




Tony Vuvan

ROVE | Technical Consultant



