Sunday, June 28, 2020

Cloud Workload Migration

Migrating applications to the cloud requires planning and research. In this lesson, we’ll take a look at five steps to migrate your on-premises workload to the cloud.

Application discovery

Our first step is to discover, analyze, and categorize your on-premises applications. Not all applications are suitable to migrate to a cloud environment. Here are some items you need to consider:

  • Specialized hardware requirements: does your application run on specific hardware? Cloud providers offer different CPU types, including x86 and ARM. Most cloud providers also offer GPUs.
  • Operating system: does your application run on an operating system that the cloud provider supports, or can you make it work on another operating system?
  • Legacy databases: does your application run on old database server software that a cloud provider might not support?
  • Security: does the application have any security measures? Older applications might not get updates anymore, which might be acceptable in an isolated on-premises situation but not in the cloud.
  • Performance requirements: does your application have specific performance requirements? Can a cloud environment meet these requirements? Is your application sensitive to delay?

Application types

There are two kinds of applications: those designed for the cloud and those that are not. We call applications that were not designed for the cloud legacy applications; they were built to run on traditional (virtual) servers. Let’s discuss the difference between the two application types.

Legacy applications

Most applications are legacy applications, but with some modifications, we can make them work in the cloud. This is best explained with an example:

WordPress is a popular CMS; roughly 30% of all websites ran on WordPress in 2018. Even if you have never worked with WordPress or installed a web server before, you can probably follow this example.

If you want to host WordPress yourself in the “traditional” way, the process looks like this:

  1. Select a (virtual) server with a certain amount of CPU cores, memory, and storage.
  2. Install a Linux Distribution as the operating system.
  3. Install all required packages:
    1. Apache: our web server software.
    2. MySQL: the database server.
  4. Download the WordPress files and upload them to your server.
  5. Add the database name, username, and password to the wp-config.php file.
  6. Launch your website.

You can perform the above steps manually or use installation scripts to automate these steps.

When your website grows and attracts more visitors, you can scale up and add more CPU cores and/or increase the memory of the server. When your server dies, you have a problem, although you can always install a new server and restore a backup.

With traditional servers, our servers are like pets. We manually install, upgrade, and repair them. They run for years so we treat them with love and care. Perhaps we even give them special hostnames.

Another name for pet servers is snowflake servers.

Cloud applications

Cloud providers offer virtual servers, so you can repeat the above steps on a cloud provider’s virtual server. Running your website on a virtual server from a cloud provider is the same as regular Virtual Private Server (VPS) hosting: we end up with a “pet”.

In the cloud, we start new servers on-demand when we need them and we destroy them when no longer needed. Servers and other resources are disposable so we treat them like cattle. We want to spin up servers quickly so there is no time to manually configure a new server. To achieve this, we need to create a blueprint and configuration we can use for all servers.

To create a blueprint, we use a configuration management tool to define “infrastructure as code”. We configure the infrastructure we need in JSON/YAML files. Examples are Amazon AWS CloudFormation or Google Cloud Deployment Manager. Terraform is a configuration management tool that supports multiple cloud providers. To install and configure the servers, you can use deployment tools like Chef, Puppet, Ansible, and Amazon AWS CodeDeploy. Another option is to use containers which we discuss in another lesson.

When it comes to cloud computing, we want to keep cattle, not pets. If you are interested in building cloud-compatible applications, then you should take a look at the Twelve-Factor App Methodology. It’s a baseline that offers best practices to build scalable (web) applications.

Let’s walk through an example of how we can run our legacy WordPress application in the cloud:

Infrastructure as code

We create one or more configuration files that define our infrastructure resources:

  • Template for our web server(s).
  • Template for our database: we use a managed solution like Amazon AWS RDS, Google Cloud SQL, or Azure Database for MySQL for this.
  • Template for a load balancer: if you have more than one web server, you need a load balancer to distribute the incoming traffic.
  • Template for autoscaling alarm: when the CPU load of our web server exceeds 50%, we want to start a second web server.
  • Template for shared file storage: our web servers need shared access to certain files. I’ll explain this in a bit. You can use a managed solution like Amazon AWS Elastic File System (EFS), Amazon AWS S3, or Google Cloud Filestore for this.

Having your infrastructure as code makes it easy to create and replicate resources.
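
As an illustration, the resources above could be sketched in a CloudFormation-style YAML template. This is a simplified, hypothetical fragment, not a complete, deployable template; all names, the AMI ID, and the instance types are made up:

```yaml
# Hypothetical CloudFormation-style sketch of the WordPress infrastructure.
Resources:
  WebServerLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0   # hypothetical web server image
      InstanceType: t3.small
  WebServerGroup:                      # template for our web server(s)
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref WebServerLaunchConfig
      MinSize: "1"
      MaxSize: "10"
  Database:                            # managed MySQL instead of self-hosted
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.small
      AllocatedStorage: "20"
  CpuHighAlarm:                        # autoscaling alarm at 50% CPU
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      Threshold: 50
      ComparisonOperator: GreaterThanThreshold
```

Because the template is plain text, you can review it, version it, and reuse it to create identical environments.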

Version Control System (VCS)

You should store your configuration files in a Version Control System (VCS). The advantage of a VCS is that it keeps track of changes in your files, and it makes it easy to work on the same files with multiple people. Git is the most popular open-source VCS. GitHub is a popular website where you can create git repositories to save your files.

We also store our WordPress files in a git repository. Whenever you make changes to your website, you add them to the git repository. This makes your application stateless.

If you want to try git, I recommend GitLab. GitHub is popular for public git repositories, but GitLab allows you to create unlimited private git repositories for free.

Deployment tool

We configure a deployment tool to install the required packages for Apache, clone the WordPress git repository, and copy any other configuration files needed. When we launch a new web server, the deployment tool automatically installs and configures the new server.
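
As a sketch of what such a deployment tool configuration could look like, here is a hypothetical Ansible playbook; the group name, repository URL, and file paths are made up:

```yaml
# Hypothetical Ansible playbook: turn a fresh server into a WordPress web server.
- hosts: webservers
  become: true
  tasks:
    - name: Install Apache, PHP, and the MySQL PHP extension
      apt:
        name: [apache2, php, php-mysql]
        state: present
        update_cache: true

    - name: Clone the WordPress site from our git repository
      git:
        repo: https://git.example.com/our-wordpress-site.git   # hypothetical URL
        dest: /var/www/html

    - name: Copy wp-config.php with the database name and credentials
      copy:
        src: files/wp-config.php
        dest: /var/www/html/wp-config.php
```

Every new web server that runs this playbook ends up identically configured, with no manual steps.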

Shared file storage

WordPress uses different files and folders. We have PHP and CSS files for the website and we store these in a git repository. When we start a new web server, we copy these files from the git repository to the local storage of the web server.

There are two problems, though. The first is that many legacy applications expect that they can create or modify local files and folders.

If you want to update a WordPress plugin, the web server deletes the old plugin files and installs the new plugin files. We can work around this by using a local development web server that we use to install a new plugin. When the plugin works, we add the new files to our git repository and build a new web server with our deployment tool.

The other problem is file uploads. For example, when you upload an image to your WordPress website, the image is stored on the local storage of the web server. When we destroy the web server, the image is gone. To work around this, we redirect the WordPress uploads folder to shared file storage. You can do this at the operating system level with an NFS file share or with a WordPress plugin that uploads images directly to S3.
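
For the NFS approach, a single mount entry is conceptually enough. Here is a hypothetical /etc/fstab line that maps the WordPress uploads folder to an Amazon EFS share; the file system ID and region are made up:

```
# Hypothetical /etc/fstab entry: mount a shared EFS file system on the uploads folder.
fs-0123abcd.efs.us-east-1.amazonaws.com:/  /var/www/html/wp-content/uploads  nfs4  defaults,_netdev  0  0
```

Every web server mounts the same share, so an image uploaded through one server is visible to all of them and survives when a server is destroyed.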

Considerations

Besides figuring out if you can make your applications work in the cloud, here are some other items to consider:

  • Service model: which cloud resources (IaaS, PaaS, or SaaS) will you use?
  • Cloud deployment type: which deployment type (public, private, community, or hybrid) suits your requirements?
  • Availability: do you make your application available in one or multiple regions? How much redundancy do you need?
  • Complexity: how difficult is it to migrate the application to the cloud?
  • Security: are there any security considerations when you move the application to the cloud?
  • Environment: how will you implement your production, development, and staging environments?
  • Regulatory compliance: does this apply to your organization?
  • Dependencies: does your application have any dependencies that you can’t move to the cloud?
  • Business impact: how critical is the application? You might want to start with a less critical application when this is your first migration.
  • Networking: how will you access the application? Is your WAN connection sufficient?
  • Hardware dependencies: does your application have any hardware dependencies? Can you run the application in the cloud?
  • Cost: don’t underestimate costs. In the cloud, you pay for all resources and there are quite some costs you might not think about beforehand. Data transfer rates, log file storage, etc.

Migration strategies

There are different migration strategies if you want to move your on-premises applications to the cloud. Gartner describes five migration strategies:

  • Rehost: move the application from on-premises to a virtual server in the cloud. This is a quick way to migrate your application, but you’ll miss one of the most important cloud characteristics: scalability.
  • Refactor: move the application to a PaaS solution so you can re-use the frameworks, programming languages, and containers you currently use. An example is moving your Docker containers to Google Kubernetes Engine.
  • Revise: some applications require changes before you can rehost or refactor them in the cloud. An example is a complex Python application you need to break down and modify so you can run it on a serverless computing platform like Amazon AWS Lambda or Google App Engine.
  • Rebuild: get rid of the existing application code and re-create all code to use the cloud provider’s resources. Rebuilding your application from scratch with the cloud in mind allows you to use all cloud provider features but there is also a risk of vendor lock-in.
  • Replace: get rid of your application and replace it with a SaaS solution. You don’t have to invest time and money in developing your application to make it suitable for the cloud but migrating your data from your application to a SaaS application can also introduce problems.

Cloud providers also offer their own migration strategies and tools.

Evaluation and testing

Once you have analyzed your applications and decided on a migration strategy, evaluate and test your options:

  • Perform a pre-migration performance baseline of each application.
  • Research and evaluate different cloud providers, their services, and SLAs.
  • Evaluate automation and orchestration tools. Are you going to use tools from the cloud provider or tools that are not tied to a specific cloud provider?
  • Evaluate monitoring and logging tools. Do you use a tool like Amazon AWS CloudWatch or an external solution like Datadog or Elasticsearch?

Once you have evaluated your options, you can test your migration.

Execute and manage migration

If this is your first migration to the cloud, take it easy and start with the “low-hanging fruit”: a non-critical application that is easy to migrate. During this migration, you can become familiar with the process, fine-tune it, and document the lessons you have learned.

After migration

Once the migration is complete, perform a post-migration performance baseline and compare the results with your pre-migration performance baseline. Cisco has a product called Cisco AppDynamics that does the work for you. AppDynamics collects application metrics, establishes a performance baseline and reports when performance deviates from your baseline.

Conclusion

You have now learned about the steps required to migrate your on-premises workload (applications) to the cloud:

  • Application discovery: figure out the requirements of your applications and research if they are suitable to run in the cloud.
    • Legacy applications are designed for traditional servers.
      • Traditional servers are our “pets” or “snowflakes”. We install, update, and repair them mostly manually and care for them.
      • You can make changes to some legacy applications to make them work in the cloud.
    • In the cloud, we run resources “on-demand”; there is no time to manually install a server.
      • We define our infrastructure resources in configuration files with a configuration management tool. We call this “infrastructure as code”.
      • We use a VCS to store configuration files.
      • We use deployment tools to install new servers automatically.
  • Migration strategies: there are different strategies to migrate your applications to the cloud. The Gartner migration strategies are popular; cloud providers also offer their own migration strategies.
  • Evaluation and testing: once you have analyzed your applications and decided on a migration strategy, you need to evaluate all cloud providers and options.
  • Execute and manage migration: start with a non-critical application that is easy to migrate so you can fine-tune and learn the process.
  • After migration: compare your pre-migration performance baseline with your post-migration performance baseline.

I hope you enjoyed this lesson. If you have any questions, please leave a comment!

Cloud Security, Implications, and Policy

Cloud security is about securing the cloud and securing access to the cloud. In this lesson, we’ll look at security, implications, and policies of cloud computing.

Shared Responsibility

We have different cloud service models (IaaS, PaaS, and SaaS). In these different service models, there is a shared responsibility.

Cloud Service Models IaaS PaaS SaaS

With the IaaS service model, the cloud provider is responsible for the security of the lower layers. The customer is responsible for the security of the operating system and everything that runs on top of it. With PaaS, the cloud provider is responsible for everything except the data and application.

With a SaaS solution, the cloud provider is responsible for everything. The higher the cloud provider’s control of the service model, the more security responsibilities the cloud provider has.

Cloud Service Models Security Responsibility
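
The division of responsibilities described above can be summarized in a small lookup table. The layer split below follows this lesson’s figures and is of course a simplification:

```python
# Which layers the cloud provider secures in each service model (simplified).
# The customer is responsible for every layer the provider does not manage.
LAYERS = ["networking", "storage", "servers", "virtualization",
          "operating system", "middleware", "runtime", "application", "data"]

PROVIDER_MANAGES = {
    "IaaS": LAYERS[:4],   # up to and including virtualization
    "PaaS": LAYERS[:7],   # everything except the application and data
    "SaaS": LAYERS,       # the provider secures the entire stack
}

def customer_responsibilities(model):
    """Return the layers the customer must secure in the given service model."""
    provider = set(PROVIDER_MANAGES[model])
    return [layer for layer in LAYERS if layer not in provider]

print(customer_responsibilities("PaaS"))  # ['application', 'data']
print(customer_responsibilities("SaaS"))  # []
```

The higher the provider’s control, the shorter the customer’s list becomes, which is exactly the pattern the figure shows.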

The Cloud Security Alliance (CSA) is an organization that promotes best practices for cloud security. They offer a security guidance document that covers best practices and recommendations for all domains in cloud computing.

They have two recommendations for the shared responsibility model:

Cloud providers should clearly document their internal security controls and customer security
features so the cloud user can make an informed decision. Providers should also properly
design and implement those controls.

Cloud users should, for any given cloud project, build a responsibilities matrix to document
who is implementing which controls and how. This should also align with any necessary
compliance standards.

The CSA provides two tools to help meet these requirements:

  • Consensus Assessments Initiative Questionnaire (CAIQ): a template for cloud providers to document their security and compliance controls.
  • Cloud Controls Matrix (CCM): lists cloud security controls and maps them to multiple security and compliance standards. You can also use the CCM to document security responsibilities.

Compliance

Regulatory compliance means that an organization has to conform to a specification, policy, standard, or law relevant to its business processes. Violations of regulatory compliance often result in legal punishment, including fines.

Here are examples of regulatory compliance laws and regulations:

  • Payment Card Industry Data Security Standard (PCI DSS): an information security standard for organizations that handle credit cards.
  • Health Insurance Portability and Accountability Act (HIPAA): US legislation that provides data security and privacy to protect medical information.
  • General Data Protection Regulation (GDPR): EU legislation on data privacy and protection for individuals within the European Union.

There are also cloud-specific standards. Here are two ISO standards:

  • ISO 27017: ISO standard that provides guidelines on the security aspects of cloud computing.
  • ISO 27018: ISO standard that provides guidelines on protecting Personally Identifiable Information (PII) in cloud computing.

Threats and Risks

Cloud environments face the same threats as traditional (on-premises) IT infrastructures. The cloud runs on software. Software has vulnerabilities; attackers try to exploit these vulnerabilities.

The main difference between traditional IT infrastructures and cloud computing is that the cloud provider and customer share the responsibility for mitigating these threats. The customer has to understand who is responsible for what and trust that the cloud provider meets their responsibilities.

Let’s discuss some threats unique to cloud computing.

Limited visibility and control

The responsibility of the customer and cloud provider depends on the cloud service model. With the PaaS and SaaS service models, the cloud provider is responsible for most layers. This also means that the customer doesn’t have much visibility into what happens behind the scenes. For example, when the cloud provider manages the network layer, the customer can’t monitor and analyze the traffic to their applications at that layer.

Simplified unauthorized usage

Shadow IT refers to resources that users use without explicit approval from the organization. This is also a risk in traditional IT environments; for example, users that use Google Drive, OneDrive, or physical devices like USB sticks.

Cloud providers make it easy to provision new on-demand resources. It’s easy for staff to provision new cloud resources (especially PaaS and SaaS) without consent from the IT department. The IT department can’t protect something they don’t know about. Shadow IT reduces visibility and control.

Management APIs

We use management APIs to interact with cloud services, often through automation and orchestration tools. These management APIs are sometimes accessible over the Internet and can have vulnerabilities.

Separation between tenants

A multi-tenant cloud is a private or public cloud where customers use services on a shared infrastructure. Vulnerabilities in a multi-tenant cloud are a risk. An exploit of a vulnerability in an application, hypervisor, or hardware could overcome the logical isolation between customers, giving an attacker access to the data of other customers.

Incomplete data deletion

The customer has limited visibility because they don’t know where the cloud provider physically stores their data. The cloud provider might spread out data over multiple storage devices. This reduces the ability of the customer to verify whether data has been securely erased.

Stolen credentials

You can create and access cloud resources through the GUI, CLI, or APIs. When your API keys are exposed, an attacker might use them to create cloud resources. There are plenty of horror stories online where someone accidentally committed their Amazon AWS API keys to a public GitHub repository and ended up with a huge bill because hundreds of virtual machines were mining cryptocurrency. Cloud providers like AWS often waive these charges when it happens the first time.

Vendor lock-in

Vendor lock-in is an issue when you want to move to another cloud provider. Cloud providers offer different services and non-standard APIs. Using multiple services from one cloud provider is tempting since they integrate so well. The more services you use, the harder it becomes to switch to another cloud provider.

When a cloud provider goes bankrupt, it might be difficult to retrieve your data, and it’s difficult to switch quickly to another cloud provider. There are options to mitigate this. A good example is the Serverless Framework for serverless applications. Cloud providers like Amazon AWS and Google Cloud offer serverless computing natively; the Serverless Framework is a layer on top of it that makes it easier to move serverless applications from one cloud provider to another.

Another example is Terraform. Cloud providers offer tools to write your infrastructure as code: Amazon AWS has CloudFormation, and Google Cloud has the Cloud Deployment Manager. Terraform is a tool you can use to write infrastructure as code, and it supports multiple cloud providers.

Increased complexity

A migration to the cloud introduces complexity into IT operations. IT staff have to learn new skills and need the ability and time to pick up all this new technology, often while still maintaining their on-premises IT infrastructure.

The first time I logged into Amazon AWS, I felt overwhelmed with all the services (140+). It doesn’t help that they use cryptic names for their services. The cloud can be a rabbit hole where you dive into learning one service, only to discover ten other services that look interesting.

The CSA has good implementation guides about these threats and how to counter them.

Security

A cloud security architecture should protect everything:

  • Cloud
    • Public
    • Private
  • Endpoints
    • Mobile devices (smartphones and tablets)
    • Laptops
    • (Virtual) Servers
    • IoT (Internet of Things)
  • Network
    • Campus
    • Branch
    • Corporate DC

This lesson is intended for students preparing for the “evolving technologies” section of the Cisco CCIE/CCDE written exams, so let me give you an overview of three Cisco cloud security products:

  • Cisco Cloudlock: a Cloud Access Security Broker (CASB), a product that sits between the on-premises infrastructure and a cloud provider. A CASB provides visibility and control of cloud activities, protects against compromised accounts, identifies data exposures, and detects privacy/compliance violations.
  • Cisco Umbrella: Cisco purchased OpenDNS and rebranded the OpenDNS enterprise security products to Cisco Umbrella. It’s DNS-based, monitors all internet activity, and stops connections to malicious internet destinations.
  • Cisco Stealthwatch Cloud: this product gives visibility into network and cloud traffic. It uses flow logs to monitor cloud network traffic and reports suspicious activity.

Conclusion

In this lesson, you learned about cloud security, implications, and policy. If you have questions, please leave a comment!

Cloud Performance, Scalability, and High Availability

We often use Performance, Scalability, and High Availability interchangeably. There are, however, differences between these three items. In this lesson, we’ll look at their differences regarding the cloud.


Performance

Performance is the throughput of a system under a given workload for a specific time. For an application, this could be:

  • The time it takes for an application to finish a task. For example, running a query on a database server to fetch all staff records.
  • The response time for an application to act upon a user request. For example, a user that requests a webpage.
  • The load of a system, measured in the volume of transactions. For example, a web server that processes 500 requests per second.

In the cloud, we validate performance by measuring it and by testing scalability. Here are examples of items you should measure:

  • Resource usage:
    • CPU load
    • Memory usage
    • Disk I/O
    • Read/write database queries
  • Application statistics:
    • number of requests
    • response time
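
As a sketch of how you can turn raw measurements into such statistics, here is a short example that computes the request rate and a 95th-percentile response time; the sample numbers are made up:

```python
# Summarize application statistics from raw measurements (sample data is made up).
def requests_per_second(request_count, interval_seconds):
    return request_count / interval_seconds

def percentile(values, pct):
    """Return the pct-th percentile using the nearest-rank method."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# 500 requests handled in a 1-second window -> 500 requests per second.
print(requests_per_second(500, 1))          # 500.0

response_times_ms = [12, 15, 11, 180, 14, 13, 16, 12, 15, 210]
print(percentile(response_times_ms, 95))    # 210
```

A percentile is usually more useful than the average here: a handful of slow requests (the 180 ms and 210 ms samples) barely moves the average but shows up clearly in the 95th percentile.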

Performance measurement is an ongoing process; it never ends. You can use the cloud provider’s tools or external tools.

Performance requirements change when there are new business requirements or when you add new features to your application.

If you use a public cloud, you also need to consider the bandwidth and delay of your WAN connection to the public cloud.

Networklessons.com runs on Amazon AWS. Here are two screenshots of how we measure performance.

Aws Cloudwatch Ec2 Cpu Utilization
This screenshot shows the average CPU utilization of the EC2 instances (virtual machines) that host the networklessons.com website.


Aws Cloudwatch Rds Mysql Write Iops
This screenshot shows the average write IOPS of the networklessons.com website’s database.

Scalability

Scalability is the ability of a system to handle the increase in demand without impacting the application’s performance or availability.

When the demand is too high and there are not enough resources, then it impacts performance. There are two types of scalability:

  • Vertical: scale up or down:
    • Add or remove resources:
      • CPU
      • Memory
      • Storage
  • Horizontal: scale out or in:
    • Add or remove systems

For example, we can increase the number of CPU cores and memory in a web server (vertical) or we can increase the number of web servers (horizontal).

We can scale horizontally or vertically to prevent a lack of resources from affecting our performance and availability. Here is a screenshot of the Amazon AWS auto scaling policy we use for the networklessons.com web servers:

Aws Ec2 Spot Fleet Autoscaling
Networklessons.com runs on three EC2 instances (virtual machines). When there is more demand and the average CPU utilization exceeds 50%, we scale out to a maximum of 10 EC2 instances.
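
The scale-out policy above boils down to a simple decision rule. This sketch mirrors the thresholds mentioned in this lesson (50% CPU, three to ten instances); the actual AWS implementation is more sophisticated:

```python
# Illustrative horizontal autoscaling decision: scale out when average CPU
# exceeds the threshold, scale in when it drops well below it.
def desired_instances(current, avg_cpu, threshold=50.0,
                      min_instances=3, max_instances=10):
    if avg_cpu > threshold:
        return min(current + 1, max_instances)      # scale out
    if avg_cpu < threshold / 2:
        return max(current - 1, min_instances)      # scale in
    return current                                  # load is fine: no change

print(desired_instances(3, avg_cpu=72))   # 4  (scale out)
print(desired_instances(10, avg_cpu=90))  # 10 (already at the maximum)
print(desired_instances(4, avg_cpu=10))   # 3  (scale in)
```

The min/max bounds are what keep autoscaling safe: a traffic spike cannot grow the fleet (and the bill) without limit, and scale-in never removes the last servers.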

Elasticity

What if demand decreases after you scale up or out? The advantage of the cloud is that you can scale down or in whenever you want, so you only pay for the resources you need. We call this elasticity; most cloud providers call it autoscaling. For example:

  • Amazon AWS Auto Scaling
  • Microsoft Azure Autoscale
  • Google Cloud Platform Autoscaling

Public cloud providers seem to have an infinite capacity of compute and storage resources because providers like Amazon AWS, Azure, and Google Cloud keep enough resources in reserve for their customers. You can bid on their unused capacity with spot instances and save money. However, when someone bids more than you, you lose the instance.

High Availability

High availability (HA) means the application remains available with no interruption.

We achieve high availability when an application continues to operate when one or more underlying components fail. For example, a router, switch, firewall, or server that fails.

We achieve HA by implementing the same components on multiple instances (redundancy). For example:

  • Running two web servers instead of one
  • Running the same database on two servers: a master and a slave.

We also need a fail-over mechanism that is aware of the state of the other components. For example:

  • A load balancer that detects that a web server is offline.
  • A slave database server that detects that the master database server is offline.
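
Conceptually, such a fail-over mechanism is a health check that removes failed components from the pool. A minimal sketch (the server names are hypothetical):

```python
# Minimal sketch of load balancer failover: only healthy servers receive traffic.
def healthy_backends(servers, is_healthy):
    """Filter the server pool down to the backends that pass the health check."""
    return [s for s in servers if is_healthy(s)]

def pick_backend(servers, is_healthy):
    """Return the first healthy server, or None if the whole pool is down."""
    pool = healthy_backends(servers, is_healthy)
    return pool[0] if pool else None

# Hypothetical pool: web2 fails its health check, so traffic goes to web1.
status = {"web1": True, "web2": False}
print(pick_backend(["web1", "web2"], lambda s: status[s]))  # web1
```

Real load balancers add retry logic, periodic re-checks, and smarter selection than “first healthy server”, but the core idea is the same: redundancy only helps if something detects the failure and routes around it.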

In cloud computing, there are two things to consider:

  • Cloud Provider HA
  • Customer HA

Cloud Provider HA

When you run a virtual machine on a cloud provider (IaaS) then the cloud provider offers HA for all underlying layers:

  • Networking
  • Storage
  • Servers
  • Virtualization

The cloud provider ensures that the failure of one component (for example a physical server) does not take your virtual machine down.

Cloud providers offer multiple regions and availability zones. Worldwide, they have different regions. Within a region, there are multiple availability zones.

Cloud Provider Region Availability Zone

Customer HA

It’s up to the customer to use the cloud provider’s services to build an HA solution. You have zero redundancy if you install an application on a single virtual machine in a single region.

Take down the virtual machine and the application is unavailable. Instead of installing a database server on a virtual machine yourself, you might be better off with a PaaS solution that offers redundant database servers. For example:

  • Google Cloud SQL
  • Amazon AWS RDS

Conclusion

You learned about the differences between cloud performance, scalability, and high availability. I hope you enjoyed this lesson. If you have questions, please leave a comment.

Cloud Service Models

We often see the “as a service” terminology when we talk about cloud computing. The National Institute of Standards and Technology (NIST) describes three services models in their definition of cloud computing:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)

In this lesson, we’ll look at the different service models.

If you own and maintain your own infrastructure, you manage everything:

Cloud Service Models On Premises

You maintain the hardware, networking, storage, etc.

Service Models

Let’s look at the different cloud service models and how they relate to the traditional on-premises infrastructure.

Infrastructure as a Service (IaaS)

IaaS offers infrastructure as a service. Look at this model:

Cloud Service Models IaaS

The cloud provider controls the underlying infrastructure:

  • Virtualization
  • Servers
  • Storage
  • Networking

The customer maintains the upper layers. The customer chooses the operating system, software, application(s), and data.

This service model is popular. You can choose the operating system you want, which makes it easy to migrate on-premises servers to the public cloud.

Two examples of IaaS:

  • Amazon AWS EC2
  • Google Compute Engine

Platform as a Service (PaaS)

PaaS is like IaaS, but we go three steps further. Check out the following model:

Cloud Service Models PaaS

The cloud provider is also responsible for the operating system, middleware, and runtime. The customer maintains the data and application.

This is great for developers who want to run their PHP, Python, or Java applications without worrying about the underlying layers. You upload your application, and you are ready to go.

Two examples of PaaS:

  • Amazon AWS Elastic Beanstalk
  • Google App Engine

Software as a Service (SaaS)

In the SaaS service model, the cloud provider maintains and provides everything. Look at this model:

Cloud Service Models SaaS

This is the service model most of us are familiar with. Examples are Office 365, Gmail, Facebook, Twitter, etc.

Everything as a Service (XaaS)

The trend nowadays is that we can get everything as a service: cloud providers keep coming up with new services to offer through the cloud. The X in XaaS stands for an unknown value, meaning “everything as a service”.

To give you an idea of what kind of services there are, here is a screenshot of Amazon AWS:

Amazon AWS services

This is a big list of services; there are over 140. Amazon AWS uses cryptic service names, so it’s difficult to figure out what a service does by looking at its name.

Here are five XaaS examples:

  • Desktop as a Service
  • Database as a Service
  • Monitoring as a Service
  • IP Telephony as a Service
  • Blockchain as a Service

My favorite is probably AWS Ground Station, where you can get a ground station for satellites “as a service”.

Conclusion

You have now learned the differences between the different cloud service models: IaaS, PaaS, SaaS, and XaaS.

Cloud Deployment Models

According to the definition of the National Institute of Standards and Technology (NIST), there are four cloud deployment models:

  • Public cloud
  • Private cloud
  • Community cloud
  • Hybrid cloud

In this lesson, you will learn about these four cloud deployment models.


Public Cloud

The public cloud is the cloud that most people are familiar with. The cloud infrastructure is available to everyone over the Internet. The cloud service provider (CSP) owns, manages, and operates the public cloud. The infrastructure is at the premises of the CSP.

Public Clouds

You pay for the services, storage, or compute resources you use. Customers that use the public cloud use shared resources. One advantage of the public cloud is that you don’t need to buy and maintain the physical infrastructure. Your connection to the public cloud could be over the Internet or a private WAN connection.

Examples of public cloud providers:

  • Amazon AWS
  • Microsoft Azure
  • Google Cloud
  • IBM Cloud
  • Alibaba Cloud

Private Cloud

The private cloud is a cloud model where a single organization uses the cloud. The organization or a third party could own, manage, and operate the cloud. A combination of the two is also possible. This cloud can exist on-premises or off-premises.

Private Cloud On Or Off Premise

Organizations that want more control over their cloud, or that are bound by laws and regulations, often use private clouds.

One difference between traditional virtualization and cloud computing is how we deliver services. Traditional virtualization often uses manual intervention to deliver a service. Cloud computing uses orchestration and automation to deliver a service without manual intervention.

Here are four examples:

  • OpenStack
  • Microsoft Azure Stack
  • VMware vCloud Suite
  • Amazon AWS Outposts
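The orchestration point above can be sketched in a few lines of Python. This is a hypothetical sketch, not real provisioning code: all function names, resource sizes, and the placeholder IP address are made up. The idea is simply that a self-service request walks through a chain of automated steps with no manual intervention.

```python
# Hypothetical sketch of cloud-style orchestration: a self-service request
# is fulfilled by a chain of automated steps, with no manual intervention.

def allocate_vm(cpu, ram_gb):
    # Pretend to carve a VM out of a shared resource pool.
    return {"cpu": cpu, "ram_gb": ram_gb, "state": "running"}

def configure_network(vm):
    # Pretend to attach the VM to a network (placeholder address).
    vm["ip"] = "10.0.0.10"
    return vm

def provision(request):
    # Orchestration: every step runs automatically, in order.
    vm = allocate_vm(request["cpu"], request["ram_gb"])
    return configure_network(vm)

vm = provision({"cpu": 2, "ram_gb": 4})
print(vm["state"], vm["ip"])  # running 10.0.0.10
```

In traditional virtualization, each of these steps might be a ticket handled by an administrator; in a cloud, the whole chain runs on its own when a customer clicks “create”.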

Public cloud providers can also emulate a private cloud within a public cloud. We call this a virtual private cloud. Amazon AWS and Google Cloud call this a Virtual Private Cloud (VPC). Microsoft Azure calls it a Virtual Network (VNet). A VPC or VNet isolates your resources in a virtual network that other customers cannot reach.
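A minimal sketch of the isolation idea, using Python’s standard `ipaddress` module: each tenant gets its own address space, and a simple check refuses traffic that crosses tenant boundaries. The tenant names and address ranges are made up, and real VPCs enforce this in the network fabric rather than in application code.

```python
import ipaddress

# Hypothetical tenants, each with its own isolated address space.
tenants = {
    "tenant-a": ipaddress.ip_network("10.0.0.0/16"),
    "tenant-b": ipaddress.ip_network("10.1.0.0/16"),
}

def same_tenant(src_ip, dst_ip):
    # Allow traffic only when both addresses fall inside one tenant's network.
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    return any(src in net and dst in net for net in tenants.values())

print(same_tenant("10.0.1.5", "10.0.2.9"))  # True: both in tenant-a
print(same_tenant("10.0.1.5", "10.1.0.3"))  # False: crosses the boundary
```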

Community Cloud

The community cloud is a private cloud for organizations that share common interests, such as mission objectives, compliance policies, and security requirements. This cloud can exist on-premises or off-premises.

Community Cloud Options

Because of regulatory standards and legal requirements, government and public sector organizations often use community clouds. There are cloud providers that offer community clouds; here are some examples:

  • Amazon AWS GovCloud
  • Google Apps for Government
  • Microsoft Cloud for Government
  • Carpathia

Hybrid Cloud

A hybrid cloud combines a private and a public cloud. Using a separate private cloud and a separate public cloud doesn’t make a hybrid cloud; we only call it a hybrid cloud when the private and public cloud are integrated. For example, an organization might run a Microsoft Exchange server in their private cloud but use Microsoft Azure Active Directory in the public cloud for authentication.

Another example is an organization that runs most of its workloads on its private cloud. When the private cloud is at 100% capacity, it uses the public cloud. We call this cloud bursting. The scaling from the private cloud to the public cloud is seamless: you don’t even notice whether you are on the private or the public cloud.
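Cloud bursting boils down to a simple placement policy, which we can sketch in Python. The capacity numbers and workload names here are made up for illustration: workloads run on the private cloud until it is full, and the overflow goes to the public cloud.

```python
# Hypothetical sketch of cloud bursting as a placement policy.

PRIVATE_CAPACITY = 100  # arbitrary units of compute

def place(workloads):
    used = 0
    placement = {}
    for name, size in workloads:
        if used + size <= PRIVATE_CAPACITY:
            used += size
            placement[name] = "private"   # fits on the private cloud
        else:
            placement[name] = "public"    # burst to the public cloud
    return placement

placement = place([("web", 60), ("db", 30), ("batch", 40)])
print(placement)  # {'web': 'private', 'db': 'private', 'batch': 'public'}
```

In a real hybrid cloud, the orchestration layer makes this decision automatically, so applications never need to know which side they landed on.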

Hybrid Cloud Topology

Here are three examples of hybrid clouds:

  • Cisco Hybrid Cloud Platform for Google Cloud
  • IBM Hybrid Cloud Platform
  • Rackspace Hybrid Cloud

Multicloud

This cloud “model” is not in the NIST list of deployment models, but it is good to know since many organizations are interested in multicloud strategies. Some organizations use more than one public cloud provider. It’s easy to end up in this situation. One business unit might use Microsoft technology, so they pick Microsoft Azure. Another business unit is into machine learning, so they use some Amazon AWS services.

Cloud providers offer different services. One cloud provider might be strong in one area and weak in another.

Inter Cloud Topology

To connect to different public cloud providers, you can use an intercloud exchange. The intercloud exchange offers you a connection to one or more public cloud providers. This means you don’t need two or more private WAN connections to reach all your public cloud providers.

Here are four reasons to use multicloud:

  • No vendor lock-in: using the services of a single cloud provider is convenient since everything is in one place. Unfortunately, this makes you dependent on that cloud provider, so you might want to consider using multiple cloud providers.
  • The best solution for each business case: each business unit in your organization selects the cloud provider that matches its needs.
  • Cost: each cloud provider has a different pricing strategy. This can also be a disadvantage, since splitting your usage means you might miss the volume discounts most cloud providers offer.
  • Redundancy: there have been major cloud outages. AWS had an S3 outage in 2017 that took down quite a few websites and services.
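The volume discount point can be illustrated with some arithmetic. All prices and thresholds below are made up: the sketch just shows that splitting the same usage across two providers can forfeit a discount you would earn with a single provider.

```python
# Hypothetical pricing: 10 cents per unit, with a 20% discount on any
# usage above 500 units. All numbers are invented for illustration.

def cost(units):
    base = min(units, 500) * 10        # full price for the first 500 units
    discounted = max(units - 500, 0) * 8  # 20% off the rest
    return base + discounted           # price in cents

single = cost(1000)             # all 1000 units with one provider
split = cost(600) + cost(400)   # the same usage spread over two providers
print(single, split)  # 9000 9800 -- the split loses part of the discount
```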

One disadvantage of using multiple cloud providers is that you need IT staff with knowledge of each of them. With the number of services offered, it’s difficult to keep up with everything.

Conclusion

You have now learned about the different cloud deployment models:

  • Public cloud
  • Private cloud
  • Community cloud
  • Hybrid cloud

And some common reasons why you might want to use multicloud. I hope you enjoyed this lesson; if you have questions, please leave a comment.