Getting to Know Duo with ROVE, Cisco's Most Recent Acquisition


Duo is not a new player in the multi-factor authentication world. Founded in 2010, Duo has built a powerful offering of on-premises application deployment, native cloud integration, and REST API-capable security features, all encompassed in a single platform. It supports the full gamut of modern MFA methods, including app-based pushes, one-time use codes, SMS verification, phone validation, and hardware tokens. It's not surprising, then, that Duo is now Cisco's most recent acquisition.

With this acquisition, one of the first questions people may be asking is, how does Duo integrate with the existing Cisco portfolio? 

Identity Services Engine (ISE) has become a staple of Cisco's security portfolio. At its core, ISE offers authentication, authorization, and accounting services using RADIUS and TACACS+, but it goes even deeper by providing a wide range of advanced features. Profiling, posturing, pxGrid, TrustSec, guest access, and more have been integrated into ISE over the years, making it a cornerstone of access control for many IT organizations.

But at the heart of it, ISE is a RADIUS server with Active Directory integration. And so is Duo. So how do they integrate, and would you even want to try?


Upon deeper inspection, we find that while ISE and Duo fundamentally perform the same function (responding to RADIUS requests for access control), they offer vastly different capabilities. Sure, ISE isn't an MFA platform, so that's an obvious advantage for Duo. But Duo has other advantages as well. Duo's cloud-based GUI allows policy creation and enforcement on many criteria that ISE cannot implement as simply, such as:

·      Access request country of origin

·      Previously connected devices

·      Insecure browser plugins

·      Tampered/jailbroken mobile devices

·      User biometric verification

·      Personal or corporate device detection


ISE isn't defenseless in this arena, however, and offers powerful access control features that Duo has no answer to, including:

·     pxGrid integration

·     Downloadable ACLs

·     Authorized VLAN to use

·     Voice domain permission

·     Security Group Tag

·     Web redirection

This is where we begin to see that while ISE and Duo perform similar functions, they each have unique and complementary feature sets. So is it possible to have your cake and eat it too, getting the best of both worlds?


Duo has existing documentation for integration with external RADIUS servers such as ISE. In this architecture, an Authentication Proxy is deployed: a Duo component hosted on the company's corporate network. The Authentication Proxy provides RADIUS connectivity to on-premises applications, and Duo's documentation recommends that it be the RADIUS server configured for the application. The Authentication Proxy can then be configured to connect to Active Directory for authentication, or to ISE.
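To make this more concrete, here is a rough sketch of what an Authentication Proxy configuration for this architecture might look like. This is an illustrative example only; the hostnames, IP addresses, secrets, and keys are placeholders, and Duo's own documentation should be followed for the exact options your deployment requires.

; authproxy.cfg (illustrative sketch, placeholder values)
[radius_client]
; upstream RADIUS server the proxy forwards primary authentication to (ISE in this design)
host=10.1.1.10
secret=radiussecret

[radius_server_auto]
; Duo application credentials from the Duo Admin Panel
ikey=DIXXXXXXXXXXXXXXXXXX
skey=yoursecretkeyhere
api_host=api-XXXXXXXX.duosecurity.com
; downstream device (e.g. the VPN headend) that sends RADIUS requests to the proxy
radius_ip_1=10.1.1.20
radius_secret_1=anothersecret
client=radius_client
port=1812
failmode=safe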


The issue that arises here is one of feature availability. This architecture works well for Duo, and maximizes the capabilities that Duo provides to the deployment. However, this comes at a cost to ISE functionality. For instance, the ability for ISE to identify the connecting machine's operating system, the AnyConnect agent version, or the IP address of the connecting device is lost. Additionally, by default this configuration completely eliminates the ability to implement advanced ISE functionality such as dACLs or VLAN restrictions. This can be modified with additional configuration on the Authentication Proxy, but that adds more steps to troubleshoot. And even then, Change of Authorization cannot be initiated from ISE at all in this architecture, regardless of what options are configured on the Authentication Proxy.

This architecture does present benefits, however, particularly the native ISE integration with Active Directory. Specifically, this permits ISE to easily return different authorization parameters according to user groups within Active Directory. You're just limited as to what those authorization parameters can be.


Now, this isn't the only way that ISE and Duo can be deployed together. Another option is to reverse the order of the authentication process, configuring the application to authenticate to ISE and then having ISE authenticate to Duo.

This architecture enables full ISE functionality to the RADIUS authenticator, including the ability to implement Security Group Tags, CoA, dACLs, and any other option that would otherwise be supported between ISE and the application in question.

A few issues arise when we look deeper though. On the Duo side, we no longer have the ability to determine the public IP address the user is connecting from, and thereby cannot implement country of origin restrictions for users accessing Duo-protected resources. Additionally, in this design Duo would integrate with Active Directory, rather than ISE. This limits the ability for ISE to implement different levels of access control according to groups within Active Directory. This can be worked around by implementing multiple applications within Duo, applying the various levels of access control to each, and testing them in order from ISE to determine where the user lands, but the solution gets messy fast.


Regardless of which architecture you choose, both support the full range of MFA options provided by Duo. Using AnyConnect VPN terminated on a Firepower appliance, both architectures provide the ability to authenticate via push, access code, SMS, phone, and hardware token methods.

Looking to the future, I expect Cisco will begin to integrate Duo more fully with the current security portfolio. SAML functionality within Firepower is still missing, and would provide additional integration capabilities for Firepower VPN with AnyConnect. Additionally, integrating ISE to authenticate to Duo today requires the administrator to configure Duo as a generic RADIUS token server. I anticipate that Cisco will look to improve this integration, not only to resolve these limitations between Duo and ISE, but also to build out even greater functionality and security capabilities.

Capitalizing On Data with ROVE TraX

What assets do you own?

What are their current life cycle stages?

How are they supported?

Organizations often subject themselves to wasteful spending that results from ineffective or inadequate asset life-cycle management processes. Corporate IT organizations are responsible not only for implementing new technologies that give their companies a competitive edge, but also for accounting for the aging technologies that provided that same edge in years past.

As technology assets become dated or too inefficient to meet growing demands, IT organizations must account for the decommissioning and disposal of aged assets. That process must include the management of both the physical and non-physical elements that make up an asset's total cost of ownership. Failure to account for the termination of ongoing licensing, maintenance, and subscription services often exposes organizations to significant wasteful spending within their shrinking IT budgets.

As this cycle is repeated, with annuity agreements left in place after the assets they were intended to support are disposed of, it becomes increasingly difficult for companies to distinguish valid agreements from invalid ones for a given manufacturer or service provider. As the failure to implement an annuity management strategy persists, a significant percentage of an organization's IT spend is exhausted on annuity agreements that serve no useful purpose to the company's operation.

ROVE TraX, a vendor-provided annuity management platform, gives clients the visibility required to account for the physical and non-physical assets and annuities present in their corporate compute environments.

ROVE TraX allows customers and their vendors to account for the termination or modification of annuity agreements that must be addressed when the assets they support are retired. The ability to efficiently identify these related agreements allows customers to avoid situations where costly annuities remain in effect beyond an asset's retirement.

The ability for customers to identify poorly-managed annuities allows them to realign the applied service level agreements to best meet the needs of the particular device throughout its various life-cycle stages. This represents a direct cost-savings impact on the total cost of asset ownership. By implementing a solution to automate the collective management of IT assets and their associated annuity agreements, organizations are able to ensure business efficiency, strengthen customer relationships, “and cut costs up to 50%” according to IAITAM.org, reclaiming a significant portion of the corporate IT budget.

Working Smarter, Not Harder with ROVE TraX

Like their parent companies, IT organizations search for ways to improve efficiency in order to reduce the costs associated with their operations as an organizational cost center. Many are establishing ways to increase business efficiency through the implementation of improved workflows. By reducing the amount of time and administration committed to routine operational tasks, companies and their IT organizations are able to work smarter - resulting in lower operational costs.

The average SMB spends 6.4% of its annual revenue on IT expenses, and “80% of total IT costs occur after the initial purchase”. - Gartner

One method being implemented today is the deployment of ROVE TraX to standardize and simplify the management of costly OEM annuity agreements. ROVE TraX enables customers to proactively manage assets and their associated annuity agreements. Customers are able to quickly identify instances where service-level agreements (SLAs) have become misaligned with the asset or service they are intended to support, or persist beyond the asset's disposition altogether.

The ability to work more efficiently helps organizations reduce wasteful spending, save time, and work smarter.

By engaging their customers with a centralized asset and annuity management platform, solution providers are helping their customers reduce costs associated with the administrative cycles that technology teams have deployed to manually manage the various annuity agreements supporting the organization's IT operations. The ROVE TraX platform allows customers to extend many of the internal administrative cycles outward to resources made available by their participating business partner in an effort to add value to the customer’s experience.

How to Build a Cisco Spark Bot with PHP

The below excerpt originally appeared on mycollablab.com, a blog written by ROVE Solutions Senior Technical Consultant, Jon Snipes.


For Cisco Spark bots there are "easy button" packages available for Python and Node.js, but PHP is lacking.  I've worked a bit to put together a basic function that handles the Spark API.  It is important that you still understand the underlying API and when to use a GET vs. POST vs. PUT, and know what resource you are targeting within the API.  The Cisco Spark for Developers page does a great job of walking you through each API call, when and how to use them, and even offers a "test mode" to try different calls in real time.

This function can be saved as a standalone file and pulled into the working script with <?php include('sparkAPI.php'); ?>, or just pasted into your main app.  When calling the function, the syntax is:

send_to_spark([Token],[Method],[Resource],[Data(opt)]);

The Spark Token is the authentication token you get from the Bot or Integration page under your developer account.  The HTTP Method supports GET, POST, PUT and DELETE.  The Spark Resource is the API resource you are sending to; so if you want to get a list of rooms you belong to, you would use send_to_spark($token,"GET","rooms"); .  Again, reference Cisco Spark for Developers for more information.

Data can also be passed to the function for query strings or for POST and PUT messages.  The data field should be an array with the needed values, and the script takes care of converting it to the appropriate query string or JSON data.  Successful responses are returned as an object and can then be further parsed as needed.  DELETE methods respond with a NULL value when executed.  If there is an error on the API call, the HTTP status code is written to standard output and the standard error log for troubleshooting, and the function returns false.
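As a quick illustration of that return behavior, here is a minimal sketch of a DELETE call; $messageId is a hypothetical variable holding the ID of a message the bot posted earlier.

//Delete a previously posted message (hypothetical $messageId)
$deleteResponse = send_to_spark($token,"DELETE","messages",$messageId);

if ( $deleteResponse === false ) {
  //false means the API call failed - details are on standard out and in the error log
  echo "Delete failed.\n";
} else {
  //a successful DELETE returns NULL
  echo "Message deleted.\n";
}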

//Send query to get 1 room sorted by last activity

$queryData = array("max"=>"1",
                   "sortBy"=>"lastactivity");
$sparkResponse = send_to_spark($token,"GET","rooms",$queryData);

var_dump($sparkResponse);

/*OUTPUT
object(stdClass)#2 (1) {
  ["items"]=>
  array(1) {
    [0]=>
    object(stdClass)#1 (7) {
      ["id"]=>
      string(76) "Y2lzY29zcGFyazovL3VzL1JPT00vNWI"
      ["title"]=>
      string(10) "Jon Snipes"
      ["type"]=>
      string(6) "direct"
      ["isLocked"]=>
      bool(false)
      ["lastActivity"]=>
      string(24) "2018-03-13T20:15:53.231Z"
      ["creatorId"]=>
      string(79) "Y2lzY29zcGFyazovL3VzL1BFT1BMRS8wZmY3MGE3"
      ["created"]=>
      string(24) "2017-12-01T04:15:25.181Z"
    }
  }
}
*/

 

//Send Message to room

$queryData = array("roomId"=>"Y2lzY29zcGFyazov",
                   "text"=>"Test Message ***");
$sparkResponse = send_to_spark($token,"POST","messages",$queryData);

var_dump($sparkResponse);

/*OUTPUT
object(stdClass)#1 (7) {
  ["id"]=>
  string(80) "Y2lzY29zcGFyazovL3VzL01FU"
  ["roomId"]=>
  string(76) "Y2lzY29zcGFyazovL3VzL1JPT0"
  ["roomType"]=>
  string(6) "direct"
  ["text"]=>
  string(25) "Test Message ***"
  ["personId"]=>
  string(79) "Y2lzY29zcGFyazovL3VzL1BFT1"
  ["personEmail"]=>
  string(22) "jon.snipes@sparkbot.io"
  ["created"]=>
  string(24) "2018-03-13T20:15:53.231Z"
}
*/

The Function

//Build Function for sending API calls to spark
//add to script or include()'filename')
//Required - Token,Method,Resource
//Optional - Data
function send_to_spark($sparkToken,$sparkMethod,$sparkResource,$sparkData = "") {

  $sparkURL      = "https://api.ciscospark.com/v1/";
  $sparkMethod   = strtoupper($sparkMethod);
  $sparkResource = strtolower($sparkResource);

  switch ($sparkMethod) {
  //Set variables and syntax based on API Method
  
    case "GET":
    //Send GET message to spark
      if ( is_array($sparkData) ) {
      //Check if data is an array - used for searches
        $sparkResource .= "?".http_build_query($sparkData);
      } elseif ( $sparkData != "" ) {
      //Check if there is no spark Data (Optional Field) and formats as needed
        $sparkResource .= "/".$sparkData;
      }
      $httpOptions = array(
            'http' => array(
                'header'  => "Authorization: Bearer ".$sparkToken." \r\nContent-type: application/json\r\n",
                'method'  => 'GET'
            ),
        );
        break;
        
    case "POST":
    // json_encode data and send POST to Spark
      $httpOptions = array(
          'http' => array(
              'header'  => "Authorization: Bearer ".$sparkToken." \r\nContent-type: application/json\r\n",
              'method'  => 'POST',
              'content' => json_encode($sparkData),
          ),
      );
      break;
      
    case "PUT":
    // json_encode data and send PUT to Spark
      $httpOptions = array(
          'http' => array(
              'header'  => "Authorization: Bearer ".$sparkToken." \r\nContent-type: application/json\r\n",
              'method'  => 'PUT',
              'content' => json_encode($sparkData),
          ),
      );
      break;
      
    case "DELETE":
    //send DELETE
      $sparkResource .= "/".$sparkData;
      $httpOptions = array(
          'http' => array(
              'header'  => "Authorization: Bearer ".$sparkToken." \r\nContent-type: application/json\r\n",
              'method'  => 'DELETE'
          ),
      );
      break;
      
  }
  //set http context
  $httpContext  = stream_context_create($httpOptions);

  // make API call to Spark
  //Loop created for Retry-After.
  //Send API Call if HTTP return is 2XX or error then break loop and return value
  //If HTTP Response is 429 - Check Retry-After timer and sleep for that duration and continue loop
  do {
    $sparkResult = @json_decode(file_get_contents($sparkURL.$sparkResource, false, $httpContext),0);

    if ( preg_match("/HTTP\/1\.. 20.*/",$http_response_header[0]) ) {
    //If 2XX response code, API call was successful.  Return Spark response (can be NULL response like on deletes)
    //break loop
      $returnValue = $sparkResult;
      break;
      
    } elseif ( preg_match("/HTTP\/1\.. 429.*/",$http_response_header[0]) ) {
    //if we received a 429 throttling notice, check for the "retry-after" key and sleep.
    //continue loop until definite success or error

      //Convert HTTP Response headers into string for processing
      $fullHeader = "";
      foreach ( $http_response_header as $header ){
        $fullHeader .= preg_replace("/:/","=",strtolower($header))."&";
      }
      //Convert string to array to parse Retry-After time, log message and sleep for retry time
      parse_str($fullHeader,$httpResponseHeaders);
      
      if ( @is_numeric($httpResponseHeaders['retry-after']) ) {
      //Confirm that Retry-After exists and is a number to continue loop
        error_log("*** Spark API Throttled - ".$httpResponseHeaders['retry-after']." second delay injected\n");
        echo "*** Spark API Throttled - ".$httpResponseHeaders['retry-after']." second delay injected\n";
        sleep($httpResponseHeaders['retry-after']);
        
      } else {
        error_log("*** Spark API Response ERROR: ".$http_response_header[0]."\n");
        echo "*** Spark API Response ERROR: ".$http_response_header[0]."\n";

        $returnValue = false;
        break;
        
      }

    } else {
    //All other codes are an error - Displays errors in error and standard out.  Return false
    //break loop
    
      error_log("*** Spark API Response ERROR: ".$http_response_header[0]."\n");
      echo "*** Spark API Response ERROR: ".$http_response_header[0]."\n";

      $returnValue = false;
      break;
    }
  } while (0);

  return $returnValue;
}

AWS Containers Explained Using ECS, Fargate and EKS

The below excerpt originally appeared on raid-zero.com, a blog written by ROVE Solutions Senior Technical Consultant, Joel Cason.


Those of you who have perused my blog last year (or talked to me in person) know that I’m pretty stoked about containers.  I think they are very cool conceptually and can bring a lot of value to streamlining the development process.

AWS has several options for containers and I wanted to do a VERY high level run through these to distinguish them a bit and maybe whet your appetite to dive into them a little more.

Briefly, containers are isolated areas to run multiple applications on the same machine without them stepping on each other.  E.g. I can run two different apps of the same flavor with completely different versions and dependencies on the same system.  For VMware folks, think of these as VMs (even though they are decidedly not VMs).  Container orchestration is the ability to schedule, scale up and down container instances, restart them if they fail, etc.  For VMware folks, think vCenter.
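As a quick illustration of that isolation, here is a minimal sketch using plain Docker (the image names and ports are arbitrary examples): two versions of the same web server running side by side on one host without conflicting.

# Two versions of the same app, isolated on the same machine (illustrative)
docker run -d --name web-old -p 8080:80 nginx:1.12
docker run -d --name web-new -p 8081:80 nginx:1.13

# Each container gets its own filesystem, dependencies, and process space
docker ps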

Also, why would you want to run containers on AWS?  Well, depending on the type, you may get better manageability, availability, etc. than you do by running on your own infrastructure.  You also leverage the cost structure and economics of the cloud (opex vs. capex).  And finally, you can connect containers to AWS services like RDS, DynamoDB, etc. via IAM roles.

Roll Your Own – Docker Swarm

So this isn’t an available service from AWS (nor does it need to be Docker Swarm – could be Kubernetes, Mesos Marathon, etc.), but you obviously have the option of building your own container ecosystem yourself.  You can deploy EC2 instances of whatever type you want, and install your desired software.

Lots of flexibility here, but also a lot of management for you to take on.  This may be a good solution if you already have experience running your own container orchestration system and you want to leverage either AWS availability or services, but one of the other options is probably going to be a better long term solution.

Elastic Container Service – ECS

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

The first container service we are talking about is ECS.  Think of ECS essentially as AWS’s version of Docker Swarm (or again, whatever container orchestration tool you like).

The idea behind ECS is that you spin up EC2 instances which are running Docker, but that also have an ECS agent.  Instead of directly interacting with Docker, you set up tasks that are managed by the ECS system and agents.  The agent communicates with the Docker daemon.

You have some flexibility in how your environment is configured.  You can configure different sized container hosts and have a flexible choice of OS (though Windows has several restrictions).  You can also take advantage of awsvpc networking mode, which is cool as it greatly simplifies container networking, especially when running multiple conflicting containers (like multiple web servers listening on port 80) on the same host (check it out here; this is one of those things that isn't available on Windows).

Containers run on ECS are specified by task definitions, which are not standard Docker constructs – though they are pretty close.  In other words, if you are used to running Docker commands directly, they won't work for you here, so there is a mild amount of learning or translation required.  No real heavy lifting, but something to consider.
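To give a feel for what that looks like, here is a heavily trimmed sketch of an ECS task definition; the family, container name, image, and sizing are made-up examples, and real task definitions support many more fields.

{
  "family": "example-web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:1.13",
      "memory": 256,
      "cpu": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}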

At the end of the day, this is cool but still something you are managing.  There is a set of instances tied to your account that are running your containers that must be monitored and scaled, even though ECS handles some of this for you.  And this is where Fargate comes in.

Fargate

https://aws.amazon.com/blogs/compute/aws-fargate-a-product-overview/

Fargate shares a lot of similarity with standard ECS, without you actually having to run your own container instance cluster.  Instead you simply define tasks with a Fargate launch type, and they will run on the AWS Fargate infrastructure.  You are charged for the tasks themselves, not for the container EC2 instances.

I like this as there is less to manage and deal with.  But Fargate also has some restrictions currently.  For example, it is Linux only, and you must use the awsvpc networking type.  However if you are running these types of tasks in ECS, then you can simply swap the launch type to Fargate – no muss no fuss.

Fargate will let you deploy containers in a resilient manner without having to manage the underlying infrastructure.  Fargate, in my opinion, is much closer to the VM vs EC2 instance comparison than ECS is.  You don’t spin up EC2 clusters in order to launch EC2 instances – that part is managed by AWS.  Similarly, Fargate is perfect when you don’t care about the infrastructure, you just want to launch a task.
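As a sketch of how little changes, this is roughly what launching an existing task definition on Fargate looks like with the AWS CLI; the cluster, task definition, subnet, and security group IDs below are placeholders.

aws ecs run-task \
  --cluster example-cluster \
  --launch-type FARGATE \
  --task-definition example-web:1 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"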

EKS

https://aws.amazon.com/eks/

EKS is Kubernetes on Amazon, and like every other Kubernetes acronym it is all wrong (“Elastic Container Service for Kubernetes”).  It is currently in preview mode but will likely GA soon.  There will also be a Fargate option with EKS, but again not today.

Kubernetes is a container orchestration system that came out of Google.  It is very popular today and is fantastic for container scheduling.  However managing the Kubernetes system itself (day-2 operations) is still very complicated.  EKS, like other similar services, seeks to bridge this gap by providing Kubernetes management transparently, allowing you to focus on service deployment.

EKS should also support traditional Kubernetes API calls, so if you have existing Kubernetes clusters this should be a more or less 1:1 mapping.

Summary

Like I said this isn’t intended to be a deep dive or cover all use cases, but I wanted to hit the high notes and let you know what was out there.  Containers on AWS utilize standard VPC components so they can be built very securely and reliably.  You can also use the Elastic Container Registry service, or ECR (https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html).  And again you can easily tie containers into your existing AWS services.


MEET JOEL CASON, Senior Technical Consultant at ROVE


Joel is  a Senior Technical Consultant for ROVE with 15 years of IT experience.  Joel holds a number of advanced technical certifications from DellEMC, VCE and VMware and received his B.S. in Computer Science from NC State. 

You can find more of Joel's work at:

https://raid-zero.com

Branch Office Dial-Plans

The below excerpt originally appeared on mycollablab.com, a blog written by ROVE Solutions Senior Technical Consultant, Jon Snipes.



Part 3 of the "Route next hop by calling party" series is an exercise in reducing dial-plan dependencies.  When working with customers that have several, or even thousands of, different remote offices, like retail, we try to create a dial-plan that makes sense for the branch as much as for the corporate site, and standardization is always top of mind.  Our typical branch dial-plan suggestions for these types of deployments fit this model: the branch office has a site code, you can dial between branches with the site code + extension, and within the branch you don't need to dial a site code to call a local extension.

With these requirements we end up with a single-digit branch access code (#), a 3-4 digit site code, and a 2-4 digit extension.  In our example we will pretend that each branch has a handful of phones in various areas and that the branch has a 3 digit code provided to it by corporate.  We will use that to create # + <3 digit site code> + <2 digit extension>, and then we can take this same setup and repeat it for each branch.  When a user needs to call receiving at another store, they just need to know the store code, and then they use the same 2 digit extension that they are used to.  When users need to call into corporate, they can either dial the user's DID, which we translate and keep on-net, or use a published "universal" pattern for different departmental uses, e.g. 990010 for shipping.


Once we have defined our different sites, we build the extensions based on the full number <sitecode>+<extension>.  Next we create our dialing habits so the user can dial 2 digit internal numbers.  Again, we have to create a new calling search space and partition in order to handle our "Route next hop by calling party" logic.  In our Branch_RouteByCLID CSS we need to add patterns matching each site code.


For inter-site dialing we simply create a translation pattern matching #.XXXXX, discard PreDot, and route it through the Branch_DN_PT.  This way we limit our exposure to inter-digit timeout, and # makes for a pretty convincing training story: "You press # and then the store number.  Like 'number' 520, then the extension."
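As an illustrative walk-through of that pattern (the store code 520 and extension 14 below are made-up examples), consider a user at one store calling extension 14 at store 520:

Dialed digits:        #52014
Translation pattern:  #.XXXXX matches, Discard Digits = PreDot
Digits after PreDot:  52014
Routing:              52014 is looked up in Branch_DN_PT, where it matches the DN built as <site code 520> + <extension 14>
Inside store 520:     the same phone is reached by dialing just 14, thanks to the 2 digit dialing habit above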


By routing the internally dialed digits by calling party number, we get away with a single calling search space and partition across however many sites you want, whereas in a traditional deployment you would need to create a unique calling search space and partition group for each site to get the same functionality.  The smaller number of calling search spaces does add a bit of complexity to the design though, so you have to balance that out.  If you have a team doing basic MACD functions in CUCM and you want to lower the knowledge "cost of entry" to CUCM, this might be a good place to look.  Fewer options does mean less chance for a screw-up.  The downside is that if an error is made somewhere in that single CSS, you could impact all sites and not just one.  But because everything is pretty baked in, the need to make changes to dial-plans is fewer and further between.

Like I said before, this is more of an exercise in what's possible than a "this is how you should do it."


MEET JON SNIPES, Senior Technical Consultant at ROVE


After working as a butcher for several grocery chains in the region, Jon decided to make a career move and earned an Associates Degree in Network Administration. Since then he has gained focused experience with Cisco Collaboration products working for Cisco TAC on CME/CUE gateways and deploying collaboration solutions for Cisco Gold partners. In 2016, he completed his CCIE Collaboration. His focus is on Voice and Video deployments in complex call routing environments as well as developing applications to leverage Cisco APIs to fulfill advanced call routing needs and device/user provisioning.

You can find more of Jon's work at:

https://mycollablab.org

Beginning with AWS CloudFormation – Part 1

The below excerpt originally appeared on raid-zero.com, a blog written by ROVE Solutions Senior Technical Consultant, Joel Cason.


One of my few new goals for this year is to get back to blogging regularly about stuff I’m learning or interested in.

AWS CloudFormation is a utility that allows you to define AWS “infrastructure” as code in text files called Templates.  You can use it to deploy almost anything via JSON or YAML scripts.  The deployed resources are collectively called stacks.  There are other IaC options here as well, like Terraform, but I think it is handy to know the native toolset as well.  Plus if you are going for AWS certifications you’ll need to be familiar with it.

I wanted to use this series to walk through some simple examples that cover a lot of the functionality of CloudFormation.  This will give you a good intro, as well as give you, the reader, the ability to begin writing some serious CloudFormation templates.  I’ll also introduce the relevant documentation along the way.  So let’s dive in.

Here is the general guide for CloudFormation from Amazon that is also useful.  It is a bit more in depth (read: takes a lot longer to go through) than what we are doing here.

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/working-with-templates-cfn-designer-walkthrough-createbasicwebserver.html

VPC

The first thing we are going to do is to build a Virtual Private Cloud via CloudFormation.  This is the foundation of AWS environments.  In CloudFormation, items we add like VPCs, EC2 instances, subnets, etc. are called Resources and go, surprisingly, in the Resources section.

For this example we are also going to use the CloudFormation designer available in AWS.  I actually don’t really like this tool (it is missing IDE elements), but it is a visual tool that is handy to help us get started.  CloudFormation is under the Management section of Services which you can find after logging in.  Once there, click Design Template to open the designer.

The designer automatically makes associations and will make some editing easier.  However once you get the hang of it, you are better off (in my opinion) just doing your own deal in a text editor.

We want to find the VPC resource, which is a subset of EC2. If you didn’t know this, you could just google CloudFormation VPC and it would quickly lead you here:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpc.html

This is also a helpful guide I’ll refer back to later.

So on the left hand side, expand the EC2 section and find the VPC object (not the VPCCidrBlock object) and drag it to the right hand canvas.

You’ll see your VPC object show up in the canvas selected, and so it will be in the bottom pane as well as in the Resources section.  It will have some random name for the object which you can change or leave, up to you.  Mine happens to have the name EC2VPC1TEY9 which I don’t really care for, so I’ll just click the edit button next to the name in the bottom pane and rename it to TestVPC.


If you are following along, you’ll notice two things after you edit.

First, the name of the resource in the code block below (the tag under “Resources”) should change to your new name.  This is pretty handy as this resource name identifies this amongst all your other elements.  The designer automatically changes the name across all references, but if you change this identifier when you are coding in a text editor, you’ll need to make sure all references to it change as well.

Second, the top pane will dim and say it needs a refresh.  This is because the name of the resource it refers to in the diagram has changed, so we need to refresh to get the new name showing.  Simply hit refresh in the top right and it should look good with the new name on the VPC resource on the canvas.  If you are working in the designer you’ll see a refresh needed fairly often.  No big deal, just hit refresh and all is well.

So now we have a VPC resource that is named.  Is this all we need?  Good question.  The Properties section is empty.  Refer back to the CloudFormation VPC page I linked earlier.  In this page you will find a ton of good info (and you should absolutely look at the pages for every resource you are provisioning).  Specifically we are looking at Properties.  You will notice there are 5 properties for a VPC but only one, CidrBlock, is required.  This means that without a CidrBlock property, the deployment will fail.

Now another nit picky thing, go back to your CloudFormation designer and in the top left hit the Validate Template button (check mark in a box).  What does it tell you?  It will tell you the template is valid.  This does not mean you have written a correct template.  Again, this does not mean you have written a correct template.  This simply means that your template doesn’t have any JSON or YAML errors.  It doesn’t care that you are missing a required attribute, or that you mistyped CidrBlock as CirdBlock.

Anyway, back to business, we need to define the CidrBlock property for this to work correctly.  I’m going to assign a CidrBlock of 172.10.0.0/16 for my VPC, so under Properties I need to add it.  My resource definition will look like this:

"Resources": {
  "TestVPC": {
    "Type": "AWS::EC2::VPC",
    "Properties": {
      "CidrBlock": "172.10.0.0/16"
    }
  }
}

You can add more properties if you wish but again we are going for bare minimum here.  Remember that, if you add additional properties, you’ll need to add a comma at the end of each line but the last in order for it to be valid JSON.  After you are done adding, I would give the Validate Template button another whirl just to double check the syntax.

Before we actually deploy this, click on the Template tab at the bottom of the page.  You’ll see the template in its entirety and you can modify it here also if you wish.  This is basically what your CloudFormation template would look like if you made it in a text editor, minus the Designer metadata stuff which you wouldn’t need.
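For reference, a complete minimal template built in a text editor might look roughly like this (same resource as above; the format version string is the standard CloudFormation value, and the Description is just an example):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example: a single VPC",
  "Resources": {
    "TestVPC": {
      "Type": "AWS::EC2::VPC",
      "Properties": {
        "CidrBlock": "172.10.0.0/16"
      }
    }
  }
}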

Finally, let’s save a copy of this.  In the top left there is a little drop down menu that lets you save a copy either to local machine or to an S3 bucket.  I just named mine test.template.  Make sure you save often!

And now we can do a deployment by clicking the Create Stack button in the top left corner (cloud with up arrow).  When you create a stack, you have the option of either launching the designer, or pointing it at an already created file.  Since we already created our file it will already be selected for us.  Just hit next.

Give the stack a name, I just called mine TestStack and hit next.

The next page has advanced options, which I'm going to skip with Next, and finally we can hit Create.

This will take you back to the stack list.  You will probably have to hit refresh to see the one you just created.  Once you see it, it may already be finished as we just have it creating a VPC which is quick.
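If you prefer the command line, the same deployment and a status check can be sketched out with the AWS CLI, assuming the template was saved locally as test.template:

aws cloudformation create-stack --stack-name TestStack --template-body file://test.template
aws cloudformation describe-stacks --stack-name TestStack --query "Stacks[0].StackStatus"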

In general, the most common states you may see are:

  • Create In Progress – the stack is still building
  • Create Complete – the stack is finished and the build is successful
  • Rolling Back – the stack encountered an error while provisioning and is rolling back all the resources that were created
  • Rollback Complete or Create Failed – the stack encountered an error and rollback is complete.

If you click on the Events tab, you can see the various states your stack went through.  Hopefully for you it is just Create In Progress and Create Complete.  The Template tab will show you the template that was used to create it (and if you select the stack and use the Actions drop down button, you can go back into the template designer for the template).

Finally, let’s take a look at what it did.  Go take a look at your VPCs and you should see the new VPC provisioned.  Notice that it is actually not named TestVPC, or anything at all.  TestVPC was what we named the resource in the CloudFormation template, not the actual VPC that was being created.  We will look at this in the next post with tags.  The VPC is in the exact same state as it would be if you clicked the Create VPC button and just filled out the Cidr Block field.  It automatically created a Route Table and wide open Network ACL.  Again there is no difference between the CloudFormation deployment and one using the console.

Hopefully this was a good opener on CloudFormation.  I’ll examine some other details in future blog posts like input/output, chaining resources, and intrinsic functions.


MEET JOEL CASON, Senior Technical Consultant at ROVE


Joel is  a Senior Technical Consultant for ROVE with 15 years of IT experience.  Joel holds a number of advanced technical certifications from DellEMC, VCE and VMware and received his B.S. in Computer Science from NC State. 

You can find more of Joel's work at:

https://raid-zero.com

Building Bots with ROVE using Cisco Spark

In the early days of Cisco Spark, Cisco built a quick and easy bot for moving email threads into a Spark Space by blind copying an email destination on the standard reply-all email. The bot would then create a space using the email subject as the name and add everyone from the email thread for you. This was a great feature, but you always had that little voice in your head reminding you that every email you sent to this phantom 3rd party carried all of your email content and contacts. Cisco eventually killed their email-to-Spark bot due to other security concerns.

At ROVE, we liked the bot so much that we decided to rebuild the same functionality, but this time guard against the security concerns we had. Using a combination of Microsoft tools available on a standard Office 365 subscription and borrowing some serverless functions from Azure, we can piece together a similar bot. This bot uses a mailbox that is ROVE-owned, so there are no more worries about a random 3rd party accessing our email threads, and we can add logic into the script as needed for more advanced functions. In our case, we simply lock the feature down to ROVE employee use.

Workflow Overview

Step 1: Reply All to an email thread and add the bot's email address (mail2spark@abc.com) as a BCC.

Step 2: MS Flow tracks the user's inbox for messages and processes them.

Step 3: Flow checks that the email is in the correct BCC location and replies with an error as needed.

Step 4: Flow converts the email to a json formatted payload and posts the data to our Azure Function.

Step 5: Our Function parses the json payload, checks that the From address meets our criteria.

Step 6: Our Function uses the Spark Bot to create a space, basing the space name on the email subject and stripping any leading RE: or FWD: to make it visually appealing.

Step 7: Our Function adds all email recipients to the space and then leaves the space. Anybody that doesn't have a Cisco Spark account will get an invite from Spark.

Step 8: Our Function responds to the Flow request with an HTML formatted message.

Step 9: Flow replies all to the original email thread using that formatted response.

Step 10: Flow cleans up by marking the email as read and then moves it to the trash.

Detailed Buildout

Step 1: Make a Spark Bot

Go to developer.ciscospark.com and log in with your Cisco Spark credentials. (You can use a free spark account to manage company bots.)

Go to "My Apps" at the top right and then add a bot.

Building Bots with ROVE - My Apps.png

On the next screen, you will give your bot a name, a Spark address and the opportunity to upload an image.

Building Bots with ROVE - New Bot.png

After you have created your bot, you'll get some more information. We aren't too concerned with the Bot ID for this use, but we do need the Bot's Access Token. This token is used to authenticate the API requests and gives the bot the rights to do stuff. It's a username and password all in one, so do not lose it and do not give it out to anyone.

Step 2: Create an Azure Function

Azure Functions are serverless apps that can be built in an array of different programming languages. Consumption plans are billed based on per-second resource consumption and executions. Consumption plan pricing includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month, so we should be able to fit into the free tier for this use.
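For a rough sense of scale (approximate numbers, just to illustrate the units): a function execution that runs for two seconds with 512 MB of memory allocated consumes about 0.5 GB x 2 s = 1 GB-s, so the 400,000 GB-s grant covers on the order of 400,000 such executions per month before resource charges apply, with the 1 million free requests counted separately.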

Create a new function, select PHP as the language, and select all the default options. On the main function screen, you can add the following script, replacing the Spark token, any withROVE propaganda, and bot names as needed. Then click "Get function URL" in the top right. We'll need that later.

Building Bots with ROVE Get function URL.png

<?php

$sparkToken = "<Spark Token Here>";

function send_to_spark($method,$uri,$data) {

    global $sparkToken;

    switch ($method) {
        case "get":
            $uri = $uri."?".$data;
            $options = array(
                    'http' => array(
                        'header'  => "Authorization: Bearer ".$sparkToken." \r\nContent-type: application/x-www-form-urlencoded\r\n",
                        'method'  => 'GET',
                    ),
                );
                break;
        case "delete":
            $uri = $uri."/".$data;
            $options = array(
                    'http' => array(
                        'header'  => "Authorization: Bearer ".$sparkToken." \r\nContent-type: application/x-www-form-urlencoded\r\n",
                        'method'  => 'DELETE',
                    ),
                );
                break;
        case "post":
            $options = array(
                'http' => array(
                    'header'  => "Authorization: Bearer ".$sparkToken." \r\nContent-type: application/json\r\n",
                    'method'  => 'POST',
                    'content' => json_encode($data),
                ),
            );
            break;
    }

    $context  = stream_context_create($options);
    $result = json_decode(file_get_contents("https://api.ciscospark.com/v1/".$uri, false, $context));

    return $result;
}
$jsonData = json_decode(file_get_contents(getenv('req')));

//Only proceed if the sender is an account in your organization (withrove.com in this example) and the bot is not in the To field; otherwise send an error
if ( preg_match("/@withrove.com/",$jsonData->from) && ! preg_match("/rove2spark@withrove.com/",$jsonData->to) ) {
  //Start building email response
  $httpResponse = "<html><body>";

  //Create room using the email subject - strip RE: and FWD:
  $roomName = preg_replace("/^[Ff][Ww][Dd]:/","",preg_replace("/^[Rr][Ee]:/","",strip_tags($jsonData->subject)));
  $sparkData = array("title"=>$roomName
                            ,"type"=>"group"
                            );
  $createRoom = send_to_spark("post","rooms",$sparkData);

  //Convert to: from: cc: fields to email list and add to room
  $sparkContacts = explode(';',$jsonData->to.';'.$jsonData->from.';'.$jsonData->cc);
  array_pop($sparkContacts);
  foreach ( $sparkContacts as $userEmail ) {
      if ( $userEmail != "rove2spark@withrove.com" && $userEmail != "" ){
        $sparkData = array("roomId"=>$createRoom->id
                ,"personEmail"=>$userEmail
                );
        $roomMember = @send_to_spark("post","memberships",$sparkData);
      }
  }

  //send welcome message
  $sparkMessage  = "rove2spark is an internal use Bot for moving email threads to Cisco Spark.\n";
  $sparkMessage .= "For use, reply all and BCC rove2spark@withrove.com.\n";
  $sparkData = array("roomId"=>$createRoom->id
            ,"text"=>$sparkMessage
            );
  send_to_spark("post","messages",$sparkData);

  //Add email body to the room
  $emailBody = preg_replace('/&nbsp;/','',preg_replace("/[\r\n]{1,}/","\n",strip_tags(base64_decode($jsonData->body))));
  $sparkData = array("roomId"=>$createRoom->id
            ,"text"=>$emailBody
            );
  send_to_spark("post","messages",$sparkData);

    //Remove bot from room
    $getMembership = send_to_spark("get","memberships","personEmail=rove2spark@sparkbot.io&roomId=".$createRoom->id);
    $deleteMembership = send_to_spark("delete","memberships",$getMembership->items[0]->id);


  //Build HTML email response
  $httpResponse .= "The space \"".$roomName."\" has been created and members added.  ";
  $httpResponse .= "Follow up the conversation on <a href='https://web.ciscospark.com'>Cisco Spark</a>.<br><br>";
  $httpResponse .= "To use rove2spark, reply-all to an email thread and BCC rove2spark@withrove.com.<br>";
  $httpResponse .= "</body></html>";
} else {
  $httpResponse = "ERROR";
}
file_put_contents(getenv('res'), $httpResponse);

?>

Under the "Integrate" section of the function we want to set the mode to "Standard" and select "POST" as the only available HTTP method.

Building Bots with ROVE_HTTP method.png

Step 3: Build a Mailbox

Nothing fancy is needed here. Just an email box.

Step 4: Create a Flow

Here is an overview of the Flow we'll be building.

Building Bots with ROVE Create a Flow.png

In the Office 365 web portal, logged in as the user, go to Flow and then create a new Flow. Next we select what trigger is used to kick off this Flow; in our case, when an email arrives.


Flow is used to add a basic outline for the email handling. First, we define when our flow is triggered. In our case, we are going to filter anything that hits our user's inbox, but we can get more specific if needed.

Building Bots with ROVE When a New Email Arrives.png

Next, we create a condition to check that our email bot is not in the To or CC field. If the bot were in To or CC, it would build a new space every time someone hit Reply All on the thread, which would quickly discourage anyone from using it. So, if the bot is anywhere other than the BCC field, we end the flow by replying to the email with an explanation of how to use the bot correctly and deleting the email.

@not(contains(concat(triggerBody()?['To'], triggerBody()?['CC']), 'mail2spark'))

Building Bots with ROVE BCC Email.png

If the email passes that check, we build an HTTP step that makes a REST call to our Azure Function. We take the email and send it as a JSON formatted POST for our function to process.

Building Bots with ROVE Azure Function.png

To format the body, we use this long, confusing string to concatenate the different fields together and add formatting. Base64 encoding the body protects us from any special characters and other content that might break the JSON formatting.

concat('{
  "to":"', triggerBody()?['To'], '",
  "bcc":"', triggerBody()?['BCC'], '",
  "cc":"', triggerBody()?['CC'], '",
  "from":"', triggerBody()?['From'], '",
  "subject":"', replace(triggerBody()?['subject'], '"', ''), '",
  "body":"', base64(triggerBody()?['Body']), '"
}')

The Azure Function will reply back with one of two responses. A simple "ERROR" response will tell flow to reply back with a generic email saying that the process failed and explaining how to use the bot. If the room has been built correctly, then the response will be an HTML formatted message that we will use to send as a Reply All to the original email thread.

Building Bots with ROVE Reply All Email Thread.png

At the end of each condition flow, we do some cleanup and mark the message as read and delete the original email which keeps our mailbox nice and clean.

Step 5: Test and Tune

At this point, your bot and email scripts should be functional.


MEET JON SNIPES, Senior Technical Consultant at ROVE


After working as a butcher for several grocery chains in the region, Jon decided to make a career move and earned an Associates Degree in Network Administration. Since then he has gained focused experience with Cisco Collaboration products working for Cisco TAC on CME/CUE gateways and deploying collaboration solutions for Cisco Gold partners. In 2016, he completed his CCIE Collaboration. His focus is on Voice and Video deployments in complex call routing environments as well as developing applications to leverage Cisco APIs to fulfill advanced call routing needs and device/user provisioning.

You can find more of Jon's work at:

https://mycollablab.org

They Say it Takes a Village, Sometimes it Takes a Nation

As many of us have our eyes on the news, our hearts break for the devastation we see as Hurricane Harvey wreaks havoc through Houston and the surrounding area. In our increasingly fast-paced world, that moment of heartache can be interrupted by a calendar meeting reminder, a call coming through, or a coworker's voice calling your name. When a tragic event seems so far away, yet so close to home, I'm grateful to live in an age of technology that allows me to reach out from right where I am.

Below are some options on how you can provide relief to the families in Houston from your desk today, or from your dinner table with your family tonight. These are only two of many options to make an impact, so if you have time, do a little bit of research to see which global or local charity may make the most heartfelt contribution to you!

Salvation Army

As Hurricane Harvey causes widespread damage across Texas, The Salvation Army is ready to provide physical and emotional care to survivors and relief workers. Salvation Army disaster teams from across the country are mobilizing and, even after disaster response efforts are over, The Salvation Army will remain in communities impacted by this terrible storm, supporting long-term disaster recovery efforts and providing ongoing assistance to those in need.

American Red Cross

The American Red Cross is helping the people affected by Hurricane Harvey in Texas and all across the Gulf coast. Shelters are open, truckloads of supplies are being distributed, and volunteers are in place.

Thank you to all the first responders and individuals who have their hands full in Houston, and to all those that are helping equip them with the resources and care that those affected by Hurricane Harvey need the most!

 

Nicole White, Marketing & CRM Lead

@nicole_d_white

Accelerate Container Deployment Testing with Azure Container Instances

Microsoft recently announced a new service called Azure Container Instances. This is an amazing offering that is now part of the Microsoft Azure lineup.  Azure Container Instances provides Azure consumers with a simple and fast way to deploy containers for testing and development of applications.  There is no longer a need to learn complex programming models to configure containers.  Azure Container Instances also allows bursting and scaling of container instances, as well as flexible, per-second billing.

In this blog post, I am going to show you how easy it is to get your first test container operational using Azure Container Instances. First things first: we need to launch the Azure Portal with our assigned credentials.
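The walkthrough itself continues in the Azure Portal. For readers who prefer the command line, a roughly equivalent sketch with the Azure CLI looks like this; the resource group, container name, and image are placeholder examples, not values from the original post.

az group create --name aci-test-rg --location eastus
az container create --resource-group aci-test-rg --name my-test-container --image nginx --ports 80 --ip-address Public
az container show --resource-group aci-test-rg --name my-test-container --query ipAddress.ip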

