Mapping Okta Groups to Keycloak (SAML 2.0)

So you’ve followed the guide to integrate Keycloak with Okta via SAML 2.0. The next logical step is to automatically map the Okta user groups that should have access to Zerto to corresponding groups in Keycloak, reducing management even further. Guess what? It’s actually pretty straightforward, and the nice part is I have “RH” to thank for taking what I’d previously done with user mapping and bringing it to groups; huge shoutout to him for the assist!

Let’s face it, no one wants to manage application access user by user, especially at scale. And no one wants an extra step once a user logs into something for the first time. By mapping an Okta group to an existing group in Keycloak, you eliminate the extra administrative cycle of waiting for a user to log in, only to get denied access, just so an admin can add them to the group and have them try again.

Properly managing access to applications shouldn’t be a burden. It should be as seamless and free of admin overhead as possible. For example, if I’m defined as a Zerto Admin in Okta, when I first log in to Zerto, I want to be let right in with the proper access, not have to bug another admin to add me to a group. What if my group membership needs change? That change should be made in Okta and then reflected in Zerto on my next login, so we also want to account for group membership updates without any additional work. The following procedure is the icing on the cake and makes your Keycloak/Okta integration pretty much hands-off once you’ve set it up, so you can go back to doing great things again.

Don’t Jump Ahead!

Before you follow this guide, please make sure you’ve already set up the integration (the steps are in my previous blog post titled Zerto 10 Keycloak and Okta SAMLv2.0 Integration). You may have also gotten to the end of that guide and followed the link to this one; if that’s how you ended up here, then you’re right on track!

Create the Keycloak Groups for Zerto Roles

When Zerto is deployed, there are some out-of-the-box pre-configured roles with the necessary permissions already attached to them, which will save you some time. You can view what those roles are and what privileges have been assigned to them in this Zerto document: ZVM Appliance Roles and Permissions.

For the most part, these are all you need, but know that if you want to create custom roles, you can; Keycloak already contains the individual privileges, so you can combine whatever you need.

Before you start: If you don’t know how to manage Zerto role-based access controls, please see my previous blog post titled “Zerto 10 Role-based Access Controls” and scroll down to the section titled “Managing Zerto Roles by Using Groups” to create the groups in Keycloak, if you haven’t already done so.

Configure the Okta to Keycloak Group Mapping

The first thing you want to do here is make sure you’ve already created the groups you need in Okta and added users to those groups. Once you’ve done that, go to the Zerto SAML application in Okta that you created for the Keycloak SAML 2.0 provider.

Create the Group Attribute Statement in Okta

When you get to the app:

  1. Click on the General tab, then scroll down to the SAML Settings area, and click Edit.

    Application General SAML Settings Edit link
  2. Under General settings, click Next.
  3. Scroll down to Group Attribute Statements and for the name, type groups.
  4. For the Filter, select starts with and enter the prefix for all groups related to Zerto. In my example, I have two groups; one for admins called ZertoAdmins, and one for viewers (read-only) named ZertoReadOnly.

    Group Attribute inputs
  5. Click Next, then on the next page, click Finish.
  6. Now switch over to Keycloak.

Create the Group Mapper in Keycloak

  1. Log in to Keycloak and switch to the Zerto realm.
  2. Click on Identity Providers, then click on your Okta SAML provider.

    Okta SAML provider in Keycloak
  3. Click on the Mappers tab at the top.
  4. Click Add Mapper.
  5. Provide a name to identify the mapper (e.g., ZertoAdmins or ZertoReadOnly).
  6. Select Force as the sync mode override. This forces an update to group membership if one was made in Okta (e.g., moving a user from the ZertoReadOnly group into the ZertoAdmins group).
  7. Select Advanced Attribute to Group as the Mapper Type.
  8. Type in the key for the attribute (this is the “name” from the group attribute statement in Okta). This is typically “groups” without the quotes.
  9. For the Value, enter the actual name of the group in Okta (e.g., ZertoReadOnly).

    Creating the group mapper in Keycloak
  10. Click Select group.
  11. Find the group in the list, click the arrow to the right, then click the Select button.

    Select the group in Keycloak
  12. Click Save.
  13. Now try logging in to Zerto.

Conclusion and Troubleshooting

After you’ve completed this step to map groups from Okta into Keycloak automatically, Keycloak will look for group memberships in the claims that come through with the login request. If Keycloak sees a match based on the mappers you have set up, then the user will automatically be assigned to the right group/role in Zerto and be allowed access.

Troubleshooting Note: Make sure that when you created the group in Keycloak you added the necessary role to it, because if the group isn't assigned to any Zerto role, the user will get kicked back to the Okta login page when they try to log in.

rcFederation Tracer: If you’re setting this up for the first time and want to see what claims are coming through on your requests (again, thanks for the recommendation on this utility, RH!), take a look at the SAML, WS-Federation and OAuth tracer (it installs as a browser add-in) so you can see what is in your web requests as the communication between Keycloak and Okta takes place.

Here’s an example of seeing the attributes Okta passes over to Keycloak on authentication:

Well, I hope this was helpful, and as always, if you have questions or comments, I’d love to hear your feedback. Please also share this with anyone who may find it useful.


Zerto 10 Keycloak and Okta SAMLv2.0 Integration

Did you know that when the Linux-based Zerto Virtual Manager Appliance (ZVMA) was released, the way Zerto handles permissions completely changed, giving you more control over who has access and what type of access they have?

In the old days (like a year ago, and still the case for those on the Windows-based ZVM), Zerto permissions were really an extension of vSphere permissions. When Zerto was installed on a Windows VM, part of that installation process created roles and permissions within vCenter that you could use to grant users access to certain Zerto functionality, if not all of it. This was because Zerto mainly relied on whether the user trying to get into Zerto had an account with access to vCenter. For those who knew about it and used it, it worked; however, it left much to be desired, like true RBAC and eliminating the possibility of any old vSphere admin having complete control over Zerto.

Today, as of the Zerto 9.7 Linux appliance and into 10, managing access into Zerto has been decoupled from vSphere permissions and brought into Zerto through Keycloak, not only to provide RBAC, but also to add an additional layer of security and more integration options for access management. Now the only connection into vSphere is a service account, and all user access into Zerto is based on having access granted through Keycloak.

Identity Provider Options

When you take a look at what types of integrations are available with Keycloak, it can be a little overwhelming; however, as long as it has what you need, you likely won’t care what else is there, right? There are currently 18 built-in identity provider and user federation options (pictured below), and effectively many more when you consider that anything that can connect via OpenID Connect, SAML v2.0, Kerberos, or LDAP/S is also available.

Keycloak User federation options screenshot

With a plethora of options available, the two most common customer needs I hear today are Okta and Active Directory. I’ve already published a YouTube video for Active Directory integration via LDAPS, so this post is going to focus specifically on how to set up Okta integration via SAML v2.0.

The goal of this post is to lay out the order of operations and the required steps so that when you log in to Zerto, instead of pre-creating an account in Keycloak, you rely on an existing account in Okta that has access to Zerto, with the added benefit of push-button MFA.

Zerto UI Login Okta SAML button

Configuration

Procedure Overview

I’ve tested this with both the OpenID Connect and SAML v2.0 identity providers, and I’ve come to the conclusion (and verified with some customers I’ve encountered who were also Okta customers) that configuring this integration via SAML v2.0 is much simpler and doesn’t require banging your head on the keyboard. Having no prior experience setting up this identity provider, it took me less than an hour from start to finish, so it was extremely simple.

So if you want to do this in one sitting, there are five main steps in the procedure that I counted... okay, six if you want to include deploying the ZVMA and getting it on the network, which I won’t cover here:

Note: Keycloak and Okta tend to log your session out automatically if you leave them idle for too long, so be sure to keep those sessions active while you’re jumping between the two.

  1. Deploy, configure, and license the ZVMA
  2. Configure the SAML 2.0 provider in Keycloak
  3. Create the Okta Application and download the signing certificate
  4. Configure mappers to map user attributes from Okta into Keycloak
  5. Upload and import the Okta signing certificate to the ZVMA and Keycloak trust store
  6. Log in to Zerto

One thing to note: when you’re performing steps 2, 3, and 4 above, you may want to have both Keycloak and Okta open at the same time, because there are some values they will be trading back and forth. Having both open allows you to complete them in parallel and makes for a smoother experience.

At the end of this write-up I’ll also include an optional but recommended “next steps” section that covers RBAC assignment for the Okta user after their first login, so be sure to read all the way through.

If you have any questions, please ask them in the comments.

Configure the SAML v2.0 Provider

  1. Log into the Keycloak administrator interface on the target ZVMA via https://[FQDNorIP]/auth (replace [FQDNorIP] with the FQDN or IP address of your ZVMA).
  2. After you’re logged in, you will see a drop-down list at the top left that defaults to “master.” Click there and select zerto from the list to change into the Zerto realm of settings.

    Keycloak realm selection screenshot
  3. In the left navigation bar, under configure, select Identity providers.
  4. From the selection screen, choose SAML v2.0
  5. Enter the information as shown in the screenshot below. Note that you cannot change the Redirect URI; you will need it when configuring the Okta app, so copy it and have it ready to go when you get to the Okta configuration portion below.

    Keycloak SAML v2.0 general setting screenshot
  6. In the SAML Settings area, disable the setting labeled “Use entity descriptor.” Once disabled, more fields will appear below in the SAML settings.

    Disable Use entity descriptor setting screenshot
  7. Before filling anything out further, open another browser window and log in to the Okta admin site to create an app for Zerto, because now you’re going to need to gather/enter URIs in both Keycloak and Okta.

Create and Configure the Okta Application and Download the Signing Certificate

  1. In the Okta admin, expand Applications in the left navigation bar, and select Applications from the nested options.
  2. Click on Create App Integration

    Okta Create App Integration Screenshot
  3. For the name, enter Zerto SAML, then click Next.

    Okta app general settings screenshot
  4. Under General, where it asks for the Single sign-on URL, enter the Redirect URI that was automatically created in Keycloak. Refer to step 5 above where you started setting up the SAML v2.0 provider in Keycloak.
  5. Enable the checkbox labeled “Use this for Recipient URL and Destination URL.”
  6. Leave everything else as default, then scroll down and click Next.

    Create SAML Integration Configure URLs screenshot
    Configure SAML Integration Next button screenshot
  7. The next page is for feedback, so select the following options and click Finish. You will be returned to the applications page.

    Okta Feedback screenshot
  8. On the applications page, click the gear icon to the right of the Zerto SAML app you just created, and select Assign to Users.

    Assign users to Okta app screenshot
  9. For each user that requires access to Zerto, click the Assign link to the right of their name to add them to the app. Without being assigned, they won’t be able to log in to Zerto using their Okta account. Optionally, you can create a group in Okta and assign your users to that instead of assigning them individually here.
  10. When you click Assign, another box will pop up with the user name in it. Click Assign, then go back to return to the main list of users. If there are more users to add, repeat the previous step; otherwise, you can close the window with the list of users.
  11. Back on the applications page, if you click on the app, you will see your added users/groups in the list.

    Okta app assigned users
  12. Now, download the signing certificate. Click on the Sign On tab at the top.

    Okta app sign on tab
  13. Scroll down to the SAML Signing Certificates section and find the active certificate. At the right of that active certificate, select Actions > Download Certificate. This is what you will be uploading to the ZVMA and importing to Keycloak, so keep track of it. Save the certificate as a .cert file (which should be what it defaults to).

    Download the Okta signing cert
  14. Now you need to get a couple of URLs from Okta to use in Keycloak. Click on the Sign On tab for the Okta application.
  15. Scroll down to the SAML 2.0 section. Beneath the Metadata details header, click on the link that says more details.

    Okta SAML Details for Keycloak
  16. Copy the Sign on URL and the Sign Out URL.

    Correct Okta URLs to copy to Keycloak
  17. Now return to Keycloak to continue the SAML v2.0 provider configuration.

Return to Keycloak

  1. In the SAML Settings section of the SAML v2.0 provider you’re configuring in Keycloak, find the Single Sign On Service URL field and enter the Sign on URL that you copied from Okta in the previous step.
  2. For the Single Logout Service URL, paste the Sign Out URL you copied from Okta in the previous step. When done, it will look similar to the image below:

    Correct URLs to put into Keycloak
  3. Leave all other fields as default. Click Save.
  4. Scroll down to the Advanced Settings and verify the following settings:
    • First login flow: first broker login
    • Post login flow: none
    • Sync mode: Import

      SAML v2.0 provider advanced settings
  5. Click Save.

Configure Mappers for Attribute Import From Okta to Keycloak on Login

Mappers will be used between Okta and Keycloak to easily import user attributes on login to Zerto. If you do not provide mappers, then on first login, the user will be prompted to enter their e-mail address, first name, and last name. The idea with configuring mappers is to bring those attributes over from Okta to populate the fields in Keycloak for the user automatically, so the login is much more seamless.

First we will configure the attribute mapping in Okta, followed by the mapper configurations in Keycloak.

Okta Mapper/Attribute Configuration

  1. Log onto the Okta administration page.
  2. Go to the SAML Application that you previously configured in Okta (probably named Zerto SAML).
  3. On the General tab of the application, scroll down to the section labeled SAML Settings and click Edit.

    SAML Settings Edit
  4. Click Next.
  5. On the Configure SAML step, scroll down to the Attribute Statements section and add the following attributes. These will map Okta user attributes to Keycloak user attributes for simpler login as mentioned above.

    Okta SAML Attribute Mapper
  6. Scroll down and click Next.
  7. Click Finish.

Keycloak Mapper Configuration

Configure mappers so that each user’s e-mail address, first name, and last name are brought over to their Keycloak account automatically on login.

  1. In Keycloak, click on the Okta SAML provider you configured.
  2. Click the Mappers tab at the top, then click Add Mapper.

    Add Mapper in Keycloak
  3. Add the mapper for the user’s first name. Complete the fields as shown in the image below, then click Save.

    Keycloak first name mapper settings
  4. Go back to the Mappers tab, and add another mapper for the user’s last name this time (see image below for values to use). Click Save.

    Keycloak Mapper for Last name
  5. Go back to the Mappers tab, and add another mapper for the user’s e-mail address this time (see image for values to use). Click Save.

    Keycloak Email Mapper

Upload and Import the Okta Signing Certificate to the ZVMA and Keycloak Trust Store

Update: I decided to include the certificate import steps here, but left the link to the original Zerto documentation below. Others have been asking for these steps, and I felt this post would be more “complete” with them inline.

  1. Upload the Okta certificate to the ZVMA. Put the file in the following location: /var/data/zerto/zkeycloak/certs/

    Upload Okta certificate file to /var/data/zerto/zkeycloak/certs/
  2. Use PuTTY or another SSH client to log onto the ZVMA. If you are doing this via the vSphere console, select 0 from the appliance manager menu to exit to the shell.
  3. Run the following command to add the certificate to Keycloak’s trust store:

    kubectl exec -i zkeycloak-0 -- /usr/bin/keytool -import -alias oktacert -file /opt/keycloak/conf/certs/[oktacertfilename].cert -keystore /opt/keycloak/conf/certs/truststore.jks
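    # [oktacertfilename].cert is the certificate file you uploaded in step 1; inside the Keycloak pod it is read from /opt/keycloak/conf/certs/ rather than from the upload path on the appliance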
  4. You will be prompted to enter the keystore password. Use the password below. If for some reason you are asked to change that password, use the same one, don’t change it.

    truststorepass
  5. When prompted to trust the certificate, type yes and press Enter.
  6. Finally, run the following command to kill the current pod so that a new one starts with the certificate in place:

    kubectl delete pod zkeycloak-0
  7. You can now end your SSH session and start logging in to Zerto via the Okta SAML login method.
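
Before you try that first login, it can be worth confirming the import actually landed. The commands below are a minimal sketch under the same assumptions as the steps above (the oktacert alias, the default truststorepass password, and the zkeycloak-0 pod name); keytool will prompt for the password if you leave off -storepass:

    # List the imported certificate by alias to confirm it is in the trust store
    kubectl exec -i zkeycloak-0 -- /usr/bin/keytool -list -alias oktacert -keystore /opt/keycloak/conf/certs/truststore.jks -storepass truststorepass

    # Confirm the Keycloak pod came back up after you deleted it
    kubectl get pod zkeycloak-0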

Original Zerto documentation for importing certificates into Keycloak’s truststore:

https://help.zerto.com/bundle/Linux.ZVM.HTML.10.0_U3/page/Importing_the_AD_FS_Certificate_to_Keycloak.htm

Next Steps

After you’ve completed all the steps prior to this section, you can start logging in to Zerto. One thing to note is that when you log in with your Okta credentials, the user logging in (if given access to the Zerto SAML app in Okta) will be logged into Zerto, and if you look in the Users section of the Keycloak Zerto realm, you will also see an account created there for that user.

By default, a user granted access through this method will have admin rights in Zerto. If you would like to scope access down, for example to read-only, you can visit the following URL where I have previously written about how Role-based Access Controls work within Zerto. Optionally, you can import group attributes from Okta the same way you mapped user attributes; however, that is out of scope here.

Zerto 10 Role-based Access Controls (RBAC) via Keycloak: https://www.genetorres.me/2023/10/13/zerto-10-role-based-access-controls-via-keycloak/

That’s all I’ve got for this time. I hope you’ve found this useful and if so, please share it with others who you feel will find it useful as well. For any questions, please leave a comment!

Update: Mapping Okta Groups to Keycloak Groups

After you’ve gone through this, you’re probably wondering how you can also automatically map Okta groups into Keycloak for Zerto access. Please see my follow-up blog post on Mapping Okta Groups to Keycloak (SAML 2.0) to continue from here and get your groups mapped over automatically. By doing this, you will avoid having to add users to Keycloak groups after their first login.


Update: Migrate VM from Hyper-V to vSphere with Pre-Installed VMware Tools (vSphere 7 and 8 Edition)

I had previously written a post in response to a problem a customer was facing when migrating from Microsoft Hyper-V to VMware vSphere.

You can find that previous post here: Migrate VM from Hyper-V to vSphere with Pre-Installed VMware Tools

I am writing this as a follow-up, because while the workaround I documented still works (for vSphere 6.x VMware Tools), something in VMware Tools changed when vSphere 7 went GA. Several attempts to manipulate the new .msi file were unsuccessful, and in the flurry of life, I hadn’t had a chance to really sit down and figure it out. So the workaround for “now” was to install the working 6.x version, migrate, and then upgrade VMware Tools; and that still works, by the way.

Then one day, while going through my blog comments, I saw that someone had responded saying they’d figured it out. @Chris, thank you very much for sharing your find!

So, since vSphere 8 recently went GA, I figured I’d also test this procedure on VMware Tools 12, and I’m happy to say, it also works.  So here’s what’s changed from the previous post when you’re trying to do the same using VMware Tools 11 (vSphere 7) or VMware Tools 12 (vSphere 8).

What You Will Need

Before you can get started, you’ll need to get a few things.  For details on how to get these requirements, refer to the original post mentioned above. 

  • Microsoft Orca (allows you to edit .msi files) – This is part of the Windows SDK, so if you don’t have it, see the post referenced above for the link to download as well as the procedure to only install Orca.
  • VMware Tools 11 or 12
  • Visual C++ 2017 Redistributable (if you’re following the procedure to get the VMware Tools from your own system, be sure to grab the vcredist_x64.exe)

If you would like to skip editing the VMware Tools MSI, you can download already “jailbroken” versions below. 

Note: These worked in the testing I performed, and I will not be making any changes to them, supporting them, or be responsible for what you download off of the Internet.  To be absolutely sure you have complete control over what you install in your environment (ESPECIALLY IN PRODUCTION), download from trusted sources and perform the edit to the MSI yourself.

Edit VMware Tools MSI with Orca (for VMware Tools 11 and VMware Tools 12)

  1. Launch Orca
  2. Click Open, and browse to where you saved VMware Tools64.msi, select it, and click Open.

    Launch Orca and Open VMware Tools MSI

  3. In the left window pane labeled Tables, scroll down and click on CustomAction.
  4. In the right window pane, look for the line that says VM_LogStart, right-click it, and select Drop Row.
  5. When prompted, click OK to confirm.


  6. In the left window pane labeled Tables, scroll down and click on InstallUISequence.
  7. In the right window pane, look for the line that says VM_CheckRequirements. Right-click on this entry, and select Drop Row.
  8. When prompted, click OK to confirm.

    InstallUISequence > VM_CheckRequirements > Drop Row

  9. Click save on the toolbar, and close the MSI file. You can also exit Orca now.

Next Steps

Now that you’ve successfully edited the MSI file so it can be installed on your Hyper-V Windows VMs, copy the installers (don’t forget vcredist_x64.exe) to those VMs and install. When it asks for a reboot, you can safely ignore it, because the reboot will effectively happen when the VM boots up in vSphere after migration. (One less disruption to your production Hyper-V virtual machine.)

Thanks for reading! GLHF

If you found this useful and know of any others looking to do the same, please share and comment.  I’d like to hear if/how it’s helped you out! If you’d like to reach me on social media, you can also follow me and DM me on Twitter @eugenejtorres


Reduce the Cost of Backup Storage with Zerto 8.5 and Amazon S3

When Zerto 7.0 was released with Long-Term Retention, it was only the beginning of the journey to provide what feels like traditional data protection to meet compliance/regulations for data retention in addition to the 30-day short term journal that Zerto uses for blazing fast recovery.

A few versions later, Zerto 8.5 has expanded that “local repository” concept to include “remote repositories” in the public cloud. Today that means Azure Blob (hot/cool tiers) and AWS S3 (with support for S3 Standard, S3 Standard-IA, and S3 One Zone-IA).

And to demonstrate how to do it, I’ve created some content, which includes video and a document that walks you through the process. In the video, I even go as far as running a retention job (backup) to AWS S3, and restoring data from S3 to test the recovery experience.

The published whitepaper can be found here: https://www.zerto.com/page/deploy-configure-zerto-long-term-retention-amazon-s3/

Update: I have just completed testing with S3 bucket encryption using the Amazon S3 key (SSE-S3), and the solution works without any changes to the IAM policy (https://github.com/gjvtorres/Zerto-LTR-IAM-Policy). There are two methods to encrypt the S3 bucket: the Amazon S3 key (SSE-S3), which is the recommended first option, and the AWS Key Management Service key (SSE-KMS) as the other. I suggest taking a look at the following AWS document, which provides pricing examples of both methods. According to what I’ve found, you can cut costs by up to 99% by using the Amazon S3 key. So go ahead, give it a read!

https://aws.amazon.com/kms/pricing/
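
If you’d rather turn on the bucket encryption from the command line than in the S3 console, here’s a minimal sketch using the AWS CLI. The bucket name is a placeholder, and it assumes your CLI profile has permission to change the bucket’s encryption settings:

    # Enable SSE-S3 (Amazon S3 key / AES256) default encryption on the LTR bucket
    aws s3api put-bucket-encryption --bucket my-zerto-ltr-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

    # Verify the configuration took effect
    aws s3api get-bucket-encryption --bucket my-zerto-ltr-bucket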

Now for the fun stuff…

The first option I have is the YouTube video below (or you can watch it on my YouTube channel).

I’ve also started branching out to live streaming of some of the work I’m doing on my Twitch channel.

If you find the information useful, I’d really appreciate a follow on both platforms, and hey, enable the notifications so when I post new content or go live, you can get notified and participate. I’m always working on producing new content, and feedback is definitely helpful to make sure I’m doing something that is beneficial for the community.

So, take a look, and let me know what you think. Please share, because information’s only useful if those who are looking for it are made aware.

Cheers!


Migrate VM from Hyper-V to vSphere with Pre-Installed VMware Tools

Note: This post is written specifically for VMware Tools 10. If you’re looking for a procedure that works with VMware Tools 11 or VMware Tools 12, you can see my latest blog post here.

One of the things I rarely get to do is work with Hyper-V; however, I’m starting to get more exposure to it as I encounter more organizations that are either running all Hyper-V or doing some type of migration between Hyper-V and vSphere.

One of the biggest challenges that I’ve both heard and encountered in my own testing is really around drivers. If you’re making the move from Hyper-V to vSphere, you’re going to have to figure out how to get your network settings migrated along with the virtual machines, whether manually or in a more automated way.

And yes! You can definitely use Zerto as the migration vehicle and take advantage of benefits like:

  • Non-disruptive replication
  • Automatic conversion of .vhdx to .vmdk (and vice versa)
  • Non-disruptive testing before migrating
  • Boot Order
  • Re-IP

For re-IP operations, Zerto requires that VMware Tools is installed and running on the VMs you want to protect.

Zerto Administration Guide for vSphere

There are two ways to accomplish a cross-hypervisor migration or failover with Zerto.

Installing the VMware Tools is going to be required either way. If you choose to install the VMware Tools before migrating or protecting, you are going to get much better results.

Installing VMware Tools after the migration prevents the ability to automatically re-IP or even keep the existing network settings; you will end up having to hand-IP every VM you migrate or fail over, which seriously cuts into any established recovery time objective (RTO) and leaves more room for human error.

Overview

We will walk through what you need to do in order to get VMware Tools prepared for installation on a Hyper-V virtual machine. After that, there is a video at the end of this post that demonstrates successful pre-installation of VMware Tools, replication, and migration of a VM from Hyper-V.

At the time of this writing, the versions of Zerto, Hyper-V, and vSphere I used to perform the steps that follow are:

  • Zerto 8.0
  • Hyper-V 2016
  • vSphere 6.7 (VMware Tools from 6.7 as well)

I also wanted to give a shout out to Justin Paul, who had written a similar blog post about this same subject back in 2018. You can find his original post here: https://bit.ly/3dfWKdm

Pre-Requisites

Like a recipe, you’re going to need a few things:

VMware Tools

You will need to obtain a copy of the VMware Tools, and it must be a version supported by your version of vSphere. You can use this handy >>VMware version mapping file<< to see what version of the tools you’d need.

You can get the tools package by mounting the VMware Tools ISO to any virtual machine in your vSphere environment, browsing the virtual CD-ROM, and copying all the files to your desktop. If you don’t have an environment available, you can also >>download the installer<< straight from VMware (requires a My VMware account).

Since you only need a few files from the installer package, start the installer on your desktop and wait for the welcome screen to load. Once that screen loads, if you’re on a physical machine (laptop, PC, etc…), you’re going to get a pop-up stating that you can only install VMware Tools inside a virtual machine. DO NOT dismiss this pop-up just yet.

  1. Go to Start > Run, type in %TEMP%, then press Enter.
  2. Look for a folder named with a GUID like {VVVVVVVV-WWWW-XXXX-YYYY-ZZZZZZZZZZZZ} with “-setup” appended to it, and open it.

    Open this folder and copy the 3 files out of it to your desktop.
  3. Copy the following 3 files to a folder on your desktop: vcredist_x64.exe, vcredist_x86.exe, and VMware Tools64.msi

    3 Required Files to Copy
  4. Once you’ve saved the files somewhere else, you can now dismiss the popup and exit the VMware Tools installer.

Microsoft Orca

Microsoft Orca is a database table editor that can be used for creating and editing Windows installer packages. We’re going to be using it to update the VMware Tools MSI file we just extracted in the previous steps, to allow it to be installed within a Hyper-V virtual machine.

Orca is part of the Windows SDK that can be downloaded from Microsoft (https://bit.ly/3d7aWoZ). Download the installer, and not the ISO (it’s easier to get exactly what you want this way).

Run the installer and when you get to the screen where you’ll need to Select the features you want to install, select only MSI tools and complete the installation.

After installation is completed, you can search your start menu for “orca” or browse to where it was installed to and launch Orca.

Edit VMware Tools MSI with Orca

Now that we’ve got the necessary files we need, and Orca installed, we’re going to need to edit the VMware Tools MSI to remove an installer pre-check that prevents installation on any other platform than vSphere.

  1. Launch Orca
  2. Click Open, and browse to where you saved VMware Tools64.msi, select it, and click Open.

    Launch Orca and Open VMware Tools MSI
  3. In the left window pane labeled Tables, scroll down and click on InstallUISequence.
  4. In the right window pane, look for the line that says VM_CheckRequirements. Right-click on this entry, and select Drop Row.

    InstallUISequence > VM_CheckRequirements > Drop Row
  5. Click save on the toolbar, and close the MSI file. You can also exit Orca now.

What next?

I’ve made you read all the way down to here to tell you that if you want to skip the previous steps and are looking to do this for vSphere 6.7, I have a copy of the MSI that is ready for installation on a Hyper-V virtual machine. If you need it, send me a message on Twitter: @eugenejtorres

Now that you’ve got an unrestricted copy of the VMware Tools MSI package, copy it along with the vcredist (x86/x64) installers to your target Hyper-V VMs (or a network share they can all reach), and start installing.

Important: When installing VMware Tools on the Hyper-V virtual machine, you may get the following error:

If you receive the error above, it means you’re missing Microsoft Visual C++ 2017 Redistributable (x64) on that VM.

If this is the case, click cancel and exit the VMware Tools installer. Run the vcredist_x64.exe installer that you copied earlier, and then retry the VMware Tools Installer.

Demo

Since you’ve gotten this far, the next step is to test to validate the procedure. Take a look at the video below to see what migration via Zerto looks like after you’ve taken the steps above.

If you have any questions or found this helpful, please comment. If you know someone that needs to see this, please share and socialize! Thanks for reading!


How To: Migrate Windows Server 2003 to Azure via Zerto, Easily

Since Microsoft officially ended extended support for Windows Server 2003 on July 14, 2015, you may no longer be able to get support or any software updates for it. While many enterprises are working toward migrating applications to more current versions of Windows, alongside initiatives to adopt more cloud services, being able to migrate the deprecated OS to Azure is an option that enables that strategy and provides a place for those applications to run in the meantime.

Be aware though that although Microsoft support (read this) may be able to help you troubleshoot running Windows Server 2003 in Azure, that doesn’t necessarily mean they will support the OS. That said, if you are running vSphere on-premises and still wish to get these legacy systems out of your data center and into Azure, keep reading and I’ll show you how to do it with Zerto.

Please note that I’ve only tested this with the 64-bit version of the OS (Windows Server 2003 R2). (EDIT: this has also been verified to work on the 32-bit version of the OS – thanks, Frank!)

The Other Options…

While the next options are totally doable, think about the amount of time involved, especially if you have to migrate VMs at scale. Once you’re done taking a look at these procedures, head to the next section. Trust me, it can be done more easily and efficiently.

  • Migrate your VMs from VMware to Hyper-V
    • … Then migrate them to Azure. Yes, it’s an option, but from what I’ve read, it’s really just so you can get the Hyper-V Integration Services onto the VM before you move it to Azure. From there, you’ll need to manually upload the VHDs to Azure using the command line, followed by creating instances and mounting them to the disks. Wait – there’s got to be a better way, right?
  • Why migrate when you can just do all the work from vSphere: run a bunch of PowerShell code, hack the registry, convert the disks to VHD, upload, etc… and then rinse and repeat for tens or hundreds of servers?
    • While this is another way to do it, take a look at the procedure and let me know if you would want to go through all that for even JUST ONE VM?!
  • Nested Virtualization in Azure
    • Here’s another way to do it, which I can see working; however, you’re talking about nesting a virtual environment in the cloud and perhaps running production that way? Even if you have Zerto and can technically do this, there would have to be a lot of consideration that goes into it… and likely headache.

Before You Start

Before you start walking through the steps below, this how-to assumes:

  1. You are running the latest version of Zerto at each site.
  2. You have already paired your Azure ZCA (Zerto Cloud Appliance) to your on-premises ZVM (Zerto Virtual Manager)
  3. You already know how to create a VPG in Zerto to replicate the workload(s) to your Azure subscription.

Understand that while this may work, this solution will not be supported by Zerto; this how-to was written solely by me, and although I have tested it and found it to work, it’s up to you to test it yourself.

Additionally, this is likely not going to get any support from Microsoft, so you should test this procedure on your own and get familiar with it.

This does require you to download files to install (if you don’t have a Hyper-V environment), so although I have provided a download link below, you are responsible for ensuring that you are following security policies, best practices, and requirements whenever downloading files from the internet. Please do the right thing and be sure to scan any files you download that don’t come directly from the manufacturer.

Finally – yeah, you should really test it to make sure it works for you.

Migrating Legacy OS Using Zerto

Alright, you’ve made it this far, and now you want to know how I ended up getting a Windows Server 2003 R2 VM from vSphere to Azure with a few simple steps.

Step 1: Prepare the VM(s)

First of all, you will need to download the Hyper-V Integration Services (think of them as VMware Tools, but for Hyper-V, which will contain the proper drivers for the VM to function in Azure).

I highly suggest you obtain the file directly from Microsoft if at all possible, or from a trustworthy source. At the least, deploy a Hyper-V server and extract the installer from it yourself.

If you have no way to get the installer files for the Hyper-V Integration Services, you can download at your own risk from here. It is the exact same copy I used in my testing, and will work with Windows Server 2003 R2.

  1. Obtain the Hyper-V Integration Services ISO file. (hint: look above)
  2. Once downloaded, you can mount the ISO to the target VM and explore the contents. (don’t run it, because it will not allow you to run the tools installation on a VMware-hosted workload).
  3. Extract the Support folder and all of its contents to the root of C: or somewhere easily accessible.
  4. Create a windows batch file (.bat) in the support folder that you have just extracted to your VM. I put the folder in the root of C:, so just be aware that I am working with the C:\Support folder on my system.
  5. For the contents of the batch file, change directory to the C:\Support\amd64 folder (use the x86 folder if on 32-bit), then on the next line type: setup.exe /quiet (see example below). The /quiet switch is very important, because you will need this to run without any intervention.

    Example of batch file contents and folder path
  6. Save the batch file.
  7. On the same VM, go to Control Panel > Scheduled Tasks > Add Scheduled Task. Doing so will open the Scheduled Task Wizard.

    Create a scheduled task
  8. Click Next
  9. Click browse and locate the batch file you created in step 5-6, and click open

    Browse to the batch file
  10. Select when my computer starts, and click next

    Select when my computer starts
  11. Enter local administrator credentials (will be required because you will not initially have network connectivity), and click next

    enter admin credentials
  12. Click Finish

Step 2: Create a VPG in Zerto

The previous steps have your system prepared to start replicating to Azure. What we just did will allow the Hyper-V Integration Services to install on the Azure instance on first boot, which enables network access so you can manage it. It’s that simple.

Create the VPG (Virtual Protection Group) in Zerto that contains the Windows Server 2003 R2 VM(s) that you’ve prepped, and for your replication target, select your Microsoft Azure site.

If you need to learn how to create a VPG in Zerto, please refer to the vSphere Administration Guide – Zerto Virtual Manager documentation.

Step 3: Run a Failover Test for the VPG

Once your VPG is in a “Meeting SLA” state, you’re ready to start testing in Azure before you actually execute the migration, to ensure that the VM(s) will boot and be available.

Using the Zerto Failover Test operation allows you to keep the systems running back on-premises while booting them up in Azure for testing, so you can get your results before you actually perform the Move operation to migrate them to their new home.

  1. In Zerto, select the VPG that contains the VM(s) you want to test in Azure (use the checkbox) and click the Test button.

    Select VPG, click Test
  2. Validate the VPG is still selected, and click Next.

    Validate VPG, click Next
  3. The latest checkpoint should already be selected for you. Click Next

    Verify Checkpoint, click Next
  4. Click Start Failover Test.

    Start Failover Test

After you click Start Failover Test, the testing operation will start. Once the VM is up in Azure, you can try pinging it. If it doesn’t ping the first time, reboot it, as the Integration Services may require a reboot before you can RDP to it (I had to reboot my test machine).

When you’re done testing, click the stop button in Zerto to stop the Failover Test, and wait for it to complete. At this point, if everything looks good, you’re ready to plan your migration.

If you did anything different than what I had done, remember to document it and make it repeatable :).

Next Steps

Once you’ve validated that your systems will successfully come up, you can schedule your migration. When you perform the migration into Azure, I recommend using the Move operation (see image below), as that is the cleanest way to get the system over to Azure in an application-consistent state with no data loss, as opposed to the seconds of data loss and crash-consistent state that the Failover Test or Failover Live operations will give you.

Note: Before you run the Move operation, it is beneficial to uninstall VMware Tools on the VM(s) that you are moving to Azure. It has been found that if you don’t, you won’t be able to uninstall them once the VM is running in Azure.



Move Operation


Recommendations before you migrate:

  • Document everything you do to make this work. (it may come in handy when you’re looking for others to help you out)
  • Be sure to test the migration beforehand using the Failover Test Operation.
  • Check your Commit settings in Zerto before you perform the Move Operation to ensure that you allow yourself enough time to test before committing the workload to Azure. Current versions of Zerto default the commit policy to 60 minutes, so should you need more time, increase the commit policy time to meet your needs.
  • Be sure to right-size your VMs before moving them to the cloud. If they are oversized, you could be paying way more in consumption than you need to with bigger instance sizes that you may not necessarily need.

That’s it! Pretty simple and straightforward. To be honest, obtaining a working copy of Windows Server 2003 R2 and the Hyper-V Integration Services took longer than getting through the actual process, which actually worked the first time I tried it.

If this works for you let me know by leaving a comment, and if you find this to be valuable information that others can benefit from, please socialize it!

Cheers!


Zerto: Can Failover Live Be Used for a Datacenter Migration, Consolidation, or HW Refresh?

The answer is yes, if you really wanted to… however, there’s another feature of Zerto that will allow you to perform a much “cleaner” migration of your VM(s) with a more planned approach.

This feature may not be easily located, as it’s found within the Actions menu in the Zerto UI, but it’s actually a very valuable one that basically allows you to migrate VMs from one location to another (cluster to cluster, vCenter to vCenter, vSphere <> Hyper-V, On-Prem to Public Cloud, Site to Site – even from one vendor’s hardware to another) with no data loss.  That’s right, an RPO of ZERO.

Failover Live (FOL)

First off, since the title of this blog post mentions “Failover Live,” or FOL as we abbreviate it, let’s talk about that method first. What is the FOL process, and how does it work?

The FOL process is an operation that should be used following a disaster to recover your protected VMs in a recovery site, or in the event the protected site ZVM is not available.  The main thing to note here is that when you execute a FOL, Zerto will default to the latest checkpoint, or you can select a previous checkpoint in time to recover to (usually within seconds of each other).  Additionally, you have the option to either leave the VMs in the group running, power them off, or force a shutdown.

Essentially what this means is that when using FOL, Zerto is expecting that there’s been an unplanned environment disruption of some sort and  you need to resume production as quickly as possible in your recovery site.

Here’s the workflow for a failover operation.  You can download a PDF version of this diagram here.

Zerto Virtual Replication Failover Live Workflow Diagram

Please note, that the workflow objects in yellow include some decisions you will need to make based on your type of disruption as it relates to the power state of the VMs in your protected site (Shutdown (gracefully), Leave Powered On, or Force Shutdown).

Regarding my earlier comment about ZERO data loss: the FOL method will only get you to the latest checkpoint from when the outage was detected, or an earlier checkpoint. Whichever point in time you choose to recover to, it will be a crash-consistent state, which may not be desired for something like a migration project.

For additional detail about the Failover Live (FOL) process and how it works, including considerations, see the Zerto Virtual Manager Administration Guide for vSphere.

Move VPG

As opposed to an unplanned disruption to your environment, the “Move VPG” operation in Zerto is recommended when you’re performing a planned migration whether it be your DR site, public cloud, new hardware, or other datacenter.  The difference here is that when you perform a planned migration of your virtual machine(s) to a recovery site, Zerto assumes that both sites are up and healthy and that you are performing a relocation of the virtual machine(s) in a controlled/orderly fashion – with the expectation of no data loss.

Here is the workflow for a Move VPG operation.  You can download a PDF version of this diagram here.

Zerto Virtual Replication Move VPG Workflow Diagram

So as you can see from the workflow above, the steps are a bit different than a failover live, as there are actually some steps taken in the protected site before VMs are brought up in the recovery site to ensure that what is booted is in the exact same state as the source copy.

For additional detail about the Move VPG process and how it works, see the Zerto Virtual Manager Administration Guide for vSphere.

Summary

While you can still use the FOL process to migrate VMs from one location to another, there is still going to be some level of data loss and a crash-consistent boot.

To ensure you don’t lose any data (even data that may be in memory at the time you perform a FOL), the “Move VPG” operation will take care of automating the safe/graceful shutdown of a VM and replicate any remaining data before powering up in the recovery site.

When performing either operation, be sure to verify your commit policy as well. You want to make sure the recovered/migrated VM is in a usable state before committing it to the recovery location, because once you commit the change, you must wait for promotion and reverse protection (delta sync) to take place before you can perform a failback. Both options allow you to roll back without committing, but they behave differently in terms of the expected state of the protected site.


Configuring AWS for Zerto Virtual Replication

By now, it’s no secret that Zerto’s IT Resilience Platform offers complete flexibility when it comes to multi-cloud agility. This agility allows businesses to accelerate their digital transformation and truly take advantage of what the public cloud offers – ensuring even more freedom to choose your cloud and to replicate workloads to, from, and even between public clouds. While there have been great improvements in Zerto’s any-to-any story, the one I’d like to focus on in this article is AWS (Amazon Web Services).

Starting with Zerto Virtual Replication 6.0, customers now have:

  • Orchestration allowing not only targeting AWS for DR or for workload migration, but now the ability to come back out of AWS to on-premises datacenters, or even the ability to replicate between public cloud providers (AWS, Microsoft Azure, IBM Public Cloud) and Cloud Service Providers (CSPs).
  • Zerto Analytics visibility between all sites, including public cloud, now with network statistics and 30-day history.

Now, while these improvements are exciting and offer even more cloud agility to customers, one can’t help but realize that before you can actually start taking advantage of ZVR 6.0 to achieve a hybrid cloud architecture or DR in the cloud (specifically AWS), there are some prerequisites to complete. And meeting those requirements may not be as intuitive as you’d hope at first glance.

While having a cloud use case is usually the first step, and is determined by business requirements, the challenge lies in understanding what exactly needs to be configured in AWS for ZVR functionality, and how to accomplish it. If you take a look below, the workflow itself is a multi-step process that may not be very easy to perform, until now.

ZVR AWS Workflow
Figure 1: Configuring AWS for ZVR – Workflow

In my usual fashion of wanting to know exactly how things are done and then sharing it with everyone else, I’ve written a how-to document for configuring AWS for Zerto Virtual Replication, which I am happy to say has been turned into an official Zerto whitepaper and is now available for download!

>> Whitepaper – Configuring AWS for Zerto Virtual Replication <<

As usual, feedback is welcomed with open arms. If you find this useful, please share and be social!


Zerto Virtual Manager Outage, Replication, and Self-Healing

I’ve decided to explore what happens when a ZVM (Zerto Virtual Manager) in either the protected site or the recovery site is down for a period of time, and what happens when it is back in service, and most importantly, how an outage of either ZVM affects replication, journal history, and the ability to recover a workload.

Before getting into it, I have to admit that I was happy to see how resilient the platform proved to be through this test, and how the ability to self-heal is a built-in “feature” that rarely gets talked about.

Questions:

  • Does ZVR still replicate when a ZVM goes down?
  • How does a ZVM being down affect checkpoint creation?
  • What can be recovered while the ZVM is down?
  • What happens when the ZVM is returned to service?
  • What happens if the ZVM is down longer than the configured Journal History setting?

Acronym Decoder & Explanations

ZVM – Zerto Virtual Manager
ZVR – Zerto Virtual Replication
VRA – Virtual Replication Appliance
VPG – Virtual Protection Group
RPO – Recovery Point Objective
RTO – Recovery Time Objective
BCDR – Business Continuity/Disaster Recovery
CSP – Cloud Service Provider
FOT – Failover Test
FOL – Failover Live

Does ZVR still replicate when a ZVM goes down?

The quick answer is yes.  Once a VPG is created, the VRAs handle all replication.    The ZVM takes care of inserting and tracking checkpoints in the journal, as well as automation and orchestration of Virtual Protection Groups (VPGs), whether it be for DR, workload mobility, or cloud adoption.

In the protected site, I took the ZVM down for over an hour via power-off to simulate a failure.  Prior to that, I made note of the last checkpoint created.  As the ZVM went down, within a few seconds, the protected site dashboard reported RPO as 0 (zero), VPG health went red, and I received an alert stating “The Zerto Virtual Manager is not connected to site Prod_Site…”

The Zerto Virtual Manager is not connected to site Prod_Site

 

Great, so the protected site ZVM is down now and the recovery site ZVM noticed.  The next step for me was to verify that despite the ZVM being down, the VRA continued to replicate my workload.  To prove this, I opened the file server and copied the fonts folder (C:\Windows\Fonts) to C:\Temp (total size of data ~500MB).

As the copy completed, I then opened the performance tab of the sending VRA and went straight to see if the network transmit rate went up, indicating data being sent:

VRA Performance in vSphere, showing data being transmitted to remote VRA in protected site.

Following that, I opened the performance monitor on the receiving VRA and looked at two stats: Data receive rate, and Disk write rate, both indicating activity at the same timeframe as the sending VRA stats above:

Data receive rate (Network) on receiving/recovery VRA Disk write rate on receiving/recovery VRA

As you can see, despite the ZVM being down, replication continues, with caveats though, that you need to be aware of:

  • No new checkpoints are being created in the journal
  • Existing checkpoints up to the last one created are all still recoverable, meaning you can still recover VMs (VPGs), Sites, or files.

Even though replication is still taking place, you will only be able to recover to the latest checkpoint recorded before the ZVM went down. When the ZVM returns, checkpoints are once again created; however, you will not see checkpoints for the time the ZVM was unavailable. In my testing, the same was true if the recovery site ZVM went down while the protected site ZVM was still up.

How does the ZVM being down affect checkpoint creation?

If I take a look at the journal history for the target workload (file server), I can see that since the ZVM went away, no new checkpoints have been created. So, while replication continues on, no new checkpoints are tracked while the ZVM is down, since one of its jobs is to track checkpoints.

Last checkpoint created over 30 minutes ago, right before the ZVM was powered off.

 

What can be recovered while the ZVM is down?

Despite no new checkpoints being created, FOT, FOL, VPG Clone, Move, and File Restore operations are all still available for the existing journal checkpoints. Given this was something I’d never tested before, this was really impressive.

One thing to keep in mind though is that this will all depend on how long your Journal history is configured for, and how long that ZVM is down.  I provide more information about this specific topic further down in this article.

What happens when the ZVM is returned to service?

So now that I’ve shown what is going on when the ZVM is down, let’s see what happens when it is back in service.  To do this, I just need to power it back up, and allow the services to start, then see what is reported in the ZVM UI on either site.

As soon as all services were back up on the protected site ZVM, the recovery site ZVM alerted that a Synchronization with site Prod_Site was initiated:

Synchronizing with site Prod_Site

Recovery site ZVM Dashboard during site synchronization.

The next step here is to see what our checkpoint history looks like.  Taking a look at the image below, we can see when the ZVM went down, and that there is a noticeable gap in checkpoints, however, as soon as the ZVM was back in service, checkpoint creation resumed, with only the time during the outage being unavailable.

Checkpoints resume

 

What happens if the ZVM is down longer than the configured Journal History setting?

In my lab, for the above testing, I set the VPG history to 1 hour. That said, if you take a look at the last screenshot, older checkpoints are still shown as available (405 checkpoints). When I first tried to run a failover test after this experiment, I was presented with checkpoints that went beyond an hour. When I selected the oldest checkpoint in the list, a failover test would not start, even though the “Next” button in the FOT wizard did not gray out. What this has led me to believe is that it may take a minute or two for the journal to be cleaned up.

Because I was not able to move forward with a failover test (FOT), I went back in to select another checkpoint, and this time the older checkpoints (from over an hour ago) were gone. Selecting the oldest remaining checkpoint allowed me to run a successful FOT because it was within the range of the journal history setting. Lesson learned here – note to self: give Zerto a minute to figure things out, you just disconnected the brain from the spine!

Updated Checkpoints within Journal History Setting

Running a failover test to validate successful usage of checkpoints after ZVM outage:

File Server FOT in progress, validating fonts folder made it over to recovery site.

And… a recovery report to prove it:

Recovery Report - Successful FOT Recovery Report - Successful FOT

 

Summary and Next Steps

So in summary, Zerto is self-healing and can recover from a ZVM being down for a period of time. That said, there are some things to watch out for, which include knowing what your configured journal history setting is, and how a ZVM being down longer than that setting affects your ability to recover.

You can still recover; however, you will start losing older checkpoints as time goes on while the ZVM is down. This is because of the first-in-first-out (FIFO) nature of how the journal works. You will still have the replica disks, with journal checkpoints committing to them as time goes on, so losing history doesn’t mean you’re lost; you will just end up breaching your SLA for history, which will rebuild over time once the ZVM is back up.

As a best practice, it is recommended you have a ZVM in each of your protected sites and in each of your recovery sites for full resilience, because if you lose one of the ZVMs, you will need at least the protected or recovery site ZVM available to perform a recovery. The case is different if you have a single ZVM. If you must have a single ZVM, put it in the recovery site, not the protected site, because chances are your protected site is what you’re accounting for going down in any planned or unplanned event.

In the next article, I’ll be exploring this very example of a single ZVM and how that going down affects your resiliency.  I’ll also be testing some ways to potentially protect that single ZVM in the event it is lost.

Thanks for reading!  Please comment and share, because I’d like to hear your thoughts, and am also interested in hearing how other solutions handle similar outages.


Zerto Automation with PowerShell and REST APIs

Zerto is simple to install and simple to use, but it gets better with automation!  While performing tasks within the UI can quickly become second nature, you can find yourself spending a lot of time repeating the same tasks over and over again.  I get it, repetition builds memory, but it gets old.  As your environment grows, so does the amount of time it takes to do things manually.  Why do things manually when there are better ways to spend your time?

Zerto provides great documentation for automation via PowerShell and REST APIs, along with Zerto cmdlets that you can download and install as an add-on to PowerShell to do more from the CLI.  One of my favorite things is that the team has provided functional sample scripts that are pretty much ready to go, so you don’t have to develop them yourself for common tasks, including:

  • Querying and Reporting
  • Automating Deployment
  • Automating VM Protection (including vRealize Orchestrator)
  • Bulk Edits to VPGs or even NIC settings, including Re-IP and PortGroup changes
  • Offsite Cloning

For automated failover testing, Zerto includes an Orchestrator for vSphere, which I will cover in a separate set of posts.

To get started with PowerShell and the RESTful APIs, head over to the Technical Documentation section of My Zerto and download the Zerto PowerShell Cmdlets (requires a MyZerto login) along with the following guides.  Stay tuned for future posts where I try these scripts out, offer a little insight into how to run them, and share how I’ve used them!

  • Rest APIs Online Help – Zerto Virtual Replication
    • The REST APIs provide a way to automate many DR related tasks without having to use the Zerto UI.
  • REST API Reference Guide – Zerto Virtual Replication
    • This guide will help you understand how to use the ZVR RESTful APIs.
  • REST API Reference Guide – Zerto Cloud Manager
    • This guide explains how to use the ZCM RESTful APIs.
  • PowerShell Cmdlets Guide – Zerto Virtual Replication
    • Installation and use guide for the ZVR Windows PowerShell cmdlets.
  • White Paper – Automating Zerto Virtual Replication with PowerShell and REST APIs
    • This document includes an overview of how to use ZVR REST APIs with PowerShell to automate your virtual infrastructure.  This is the document that also includes several functional scripts that take the hard work out of everyday tasks.
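
To give you a quick feel for the REST side before you dig into the guides, here’s a minimal sketch of the session-based authentication flow the API uses. It assumes a ZVM reachable on the default API port 9669 and uses placeholder credentials; the ZVM hands back a session token in the x-zerto-session response header, which you then include on every subsequent call:

    # Authenticate to the ZVM and capture the x-zerto-session token from the response headers
    TOKEN=$(curl -sk -X POST -u "administrator@vsphere.local:MyPassword" -D - -o /dev/null https://zvm.example.com:9669/v1/session/add | awk 'tolower($1)=="x-zerto-session:" {print $2}' | tr -d '\r')

    # Use the token to list all VPGs and their current status
    curl -sk -H "x-zerto-session: $TOKEN" https://zvm.example.com:9669/v1/vpgs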

If you’ve automated ZVR using PowerShell or REST APIs, I’d like to hear how you’re using it and how it’s changed your overall BCDR strategy.

I myself am still getting started with automating ZVR, but am really excited to share my experiences, and hopefully, help others along the way!  In fact, I’ve already been working with bulk VRA deployment, so check back or follow me on twitter @EugeneJTorres for updates!
