
To the cloud or back from the cloud? These common architectural mistakes hold the answers...

Author: DBAplus Community

Foreword: According to a Citrix survey of 350 IT leaders implementing cloud computing strategies this year, 94% of respondents have participated in a cloud repatriation project in the past three years. The cloud, once prized for its low cost and high agility, was the default choice for modern architecture design. So why has "leaving the cloud" become a new trend for so many companies?

According to Eyal Estrin, a poor cloud experience often traces back to failed architectural decisions. How do you choose a cloud migration strategy? How do you control cloud costs? How do you avoid the vendor lock-in dilemma?

When an organization plans to move workloads to the cloud for the first time, it often mistakenly views the public cloud as a panacea for every IT challenge (scalability, availability, cost, and more), and can then make poor architectural decisions as the migration proceeds.

This article will review some of the common architectural mistakes organizations make.

The "lift-and-shift" strategy (also known as rehosting)

Migrating your entire on-premises legacy workload to the public cloud as-is may work fine (barring licensing or specific hardware requirements), but it can also backfire.

While virtual machines run flawlessly in the cloud, in most cases their performance must be measured over time and their instance sizes adjusted to match the workload actually running and real customer demand.

Choosing a lift-and-shift strategy as a temporary solution is feasible, especially if the organization doesn't have the time and resources to rebuild the workload and has no alternative architecture to move to.

But in the long run, lift-and-shift (compared to on-premises) will be a costly solution and won't capture the full power of the cloud, such as scale-to-zero and the elasticity of managed services.
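To see why lift-and-shift tends to cost more over time, consider a rough comparison between a VM that runs around the clock (sized for peak load) and a rearchitected setup that scales to zero when idle. All hourly rates and utilization figures here are hypothetical placeholders, not real cloud prices:

```python
# Rough comparison of a 24/7 lift-and-shift VM vs. a scale-to-zero setup.
# All prices and utilization figures are made-up placeholders.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, utilized_hours: float) -> float:
    """Cost of compute billed only for the hours actually used."""
    return hourly_rate * utilized_hours

# Lift-and-shift: the VM runs around the clock, sized for peak load.
lift_and_shift = monthly_cost(hourly_rate=0.40, utilized_hours=HOURS_PER_MONTH)

# Rearchitected: scale-to-zero outside busy periods (~250 busy hours/month).
rearchitected = monthly_cost(hourly_rate=0.40, utilized_hours=250)

print(f"lift-and-shift : ${lift_and_shift:.2f}/month")
print(f"scale-to-zero  : ${rearchitected:.2f}/month")
print(f"savings        : {1 - rearchitected / lift_and_shift:.0%}")
```

The point is not the specific numbers but the billing model: paying only for utilized hours is exactly what a lifted-and-shifted, always-on VM cannot do.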

Use Kubernetes for small/simple workloads

When designing modern applications, organizations tend to follow industry trends.

One of the hottest trends in the industry is choosing containers to deploy the various application components, and in quite a few cases, organizations choose Kubernetes as the container orchestration engine.

While Kubernetes does have many benefits, and all hyperscale cloud providers offer a managed Kubernetes control plane, Kubernetes also presents a number of challenges.

Kubernetes has a steep learning curve, and it takes a considerable amount of time to fully understand how to configure and maintain it.

For small or predictable applications built from a small number of different containers, cloud vendors offer better and easier alternatives to deploy and maintain, all of which are fully capable of running production workloads and are easier to learn and maintain than Kubernetes.

Use cloud storage for backup or disaster recovery

When an organization starts searching for its first public cloud use case, it immediately considers cloud storage for backup, perhaps even for a disaster recovery scenario.

While both are valid use cases, organizations often overlook what comes next: once object storage (or a managed NFS/CIFS storage service) holds the organization's backups, the recovery phase must also be planned. Pulling a large binary backup file back out of the cloud takes considerable time, not to mention the cost of egress data, the cost of read API calls against object storage, and so on.
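The restore-time and egress-cost concern above can be sketched with a quick back-of-the-envelope calculation. The backup size, link speed, and prices below are illustrative assumptions, not any provider's actual rates:

```python
# Back-of-the-envelope restore estimate for pulling a large backup back
# out of cloud object storage. All figures are illustrative assumptions,
# not any provider's real rates.

backup_size_gb = 10_000               # 10 TB of backup data
link_mbps = 1_000                     # 1 Gbps line between cloud and on-prem
egress_usd_per_gb = 0.09              # assumed egress price per GB
object_size_mb = 128                  # backups stored as 128 MB objects
usd_per_1k_get_calls = 0.0004         # assumed price per 1,000 read calls

# 1 GB ~= 8,000 megabits; divide by link speed, convert seconds to hours.
restore_hours = backup_size_gb * 8_000 / link_mbps / 3_600
egress_cost = backup_size_gb * egress_usd_per_gb
get_calls = backup_size_gb * 1_000 // object_size_mb
request_cost = get_calls / 1_000 * usd_per_1k_get_calls

print(f"restore time : {restore_hours:.1f} hours")
print(f"egress cost  : ${egress_cost:,.2f}")
print(f"read calls   : {get_calls:,} (${request_cost:.2f})")
```

Even with generous assumptions, a full restore over a single 1 Gbps link takes the better part of a day; that recovery window has to be part of the backup design, not an afterthought.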

The same is true for disaster recovery scenarios – if we back up our on-premises VMs or even databases to the cloud, but don't have a similar infrastructure environment in the cloud, how can a cold disaster recovery site help in the event of a catastrophic event?

Separate the application layer from the back-end data storage layer

Most applications are built from the front-end/application layer and the back-end persistent storage layer.

In traditional or tightly coupled architectures, low latency is required between the application layer and the data storage layer, especially when reading or writing to the back-end database.

One common mistake is to create a hybrid architecture: a front-end in the cloud pulling data from an on-premises database, or (in the rarer scenario) a legacy on-premises application connected to a managed database service in the cloud. Unless the application is insensitive to network latency, it is highly recommended to place all architectural components next to each other to reduce the network latency between them.
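The latency penalty of such a hybrid split compounds quickly for chatty applications that issue many sequential queries per request. A minimal sketch, using made-up round-trip times and query counts:

```python
# How round-trip latency compounds for a chatty application.
# Query count and latency figures are illustrative assumptions.

def total_latency_ms(round_trips: int, rtt_ms: float) -> float:
    """Network time spent on sequential database round trips."""
    return round_trips * rtt_ms

page_queries = 50  # sequential DB queries to render one page

same_site = total_latency_ms(page_queries, rtt_ms=0.5)   # app and DB co-located
hybrid = total_latency_ms(page_queries, rtt_ms=40.0)     # cloud app, on-prem DB

print(f"co-located : {same_site:.0f} ms of network time per page")
print(f"hybrid     : {hybrid:.0f} ms of network time per page")
```

A 40 ms WAN round trip is barely noticeable once, but multiplied by every query in a tightly coupled request path it turns a snappy page into a two-second one.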

Moving to multi-cloud in hopes of addressing vendor lock-in risk

A common risk that worries many organizations is vendor lock-in, where customers are locked into a specific cloud provider's ecosystem. More precisely, vendor lock-in comes down to the cost of switching between cloud providers.

Multi-cloud doesn't solve that risk; instead it creates more challenges, including skills gaps (teams must master the ecosystems of multiple cloud providers), centralized identity and access management, incident response across multiple cloud environments, egress traffic costs, and more.

Instead of designing complex architectures to mitigate theoretical or potential risks, design solutions that meet business needs and become deeply familiar with the ecosystem of a single public cloud provider. Over time, once your team has built up enough knowledge of multiple cloud providers, you can expand your architecture's reach, without having to go multi-cloud from day one.

Choose the cheapest region in the cloud

As a rule of thumb, unless you have specific data residency requirements, choose a region close to your customers to reduce network latency.

Cost is an important consideration, but you should still design an architecture where your applications and data are close to your customers.

If your application serves customers across the globe or in multiple locations, consider adding a CDN layer combined with multi-region solutions (cross-region replication, global databases, global load balancers, etc.) to bring static content closer to customers.

Failure to re-evaluate the existing architecture

In traditional data centers, an application's architecture is designed once and remains static throughout the application's lifecycle.

When designing modern applications in the cloud, we should embrace dynamic thinking, which means constantly re-evaluating architectures, looking at past decisions, and seeing if new technologies or services can provide a more appropriate solution for running applications.

The dynamic nature of the cloud and its evolving technologies provide innovative capabilities and ways to run applications faster, more resiliently, and more cost-effectively.

Biased architecture decisions

This is a trap many architects fall into: coming from a background in one particular cloud provider, they design the architecture around that provider's ecosystem, embedding biased decisions and service constraints into the architectural design.

Instead, the architect should have a good understanding of the business needs, the full range of available cloud solutions, and each service's cost and constraints before choosing the most appropriate services for the application's architecture.

Failing to factor cost into architectural decisions

Cost is an important consideration when using cloud services, and one of the factors that affects cost is the ability to use the service on demand (and not pay for unused services).

Every decision in the architectural design (choosing the right compute node, storage tier, database tier, etc.) has a cost impact. Once you know the pricing model for each service and the potential growth for a particular workload, you can estimate the potential cost.
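One simple way to make that cost impact visible is to attach an estimated monthly figure to each architectural decision and sum them. The line items and prices below are made-up placeholders that illustrate the estimation approach, not real cloud pricing:

```python
# Toy cost model: sum the estimated monthly cost impact of each
# architectural choice. All line items and figures are made-up
# placeholders illustrating the approach, not real prices.

architecture = {
    "compute (4 vCPU, autoscaled)": 210.0,
    "object storage, standard tier": 55.0,
    "managed database, small tier": 120.0,
    "egress (estimated 500 GB/month)": 45.0,
}

monthly_total = sum(architecture.values())

for item, cost in architecture.items():
    print(f"{item:32s} ${cost:8.2f}")
print(f"{'estimated monthly total':32s} ${monthly_total:8.2f}")
```

Re-running an estimate like this whenever a component changes (a larger compute node, a different storage tier) keeps the cost consequence of each decision explicit rather than discovered on the first invoice.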

The dynamic nature of the cloud can result in different costs from month to month, so we need to regularly assess the cost of our services, change them from time to time, and adjust them to fit specific workloads.

Summary

The public cloud brings its own challenges in choosing the right services and architectures for specific workload requirements and use cases. There is no absolute right or wrong in architecture design, but avoiding "bad" architectural decisions requires looking at the big picture when designing.

My personal advice to readers is to keep expanding your knowledge of cloud and architecture-related technologies, and to keep questioning your current architecture to determine whether an alternative would better suit the existing business.

About the Author:

Eyal Estrin, a Cloud and Information Security Architect and author of the books "Cloud Security Handbook" and "Cloud Native Application Security", has more than 20 years of experience in the IT industry.

Compiled by丨onehunnit

Source丨 aws.plainenglish.io/poor-architecture-decisions-when-migrating-to-the-cloud-04fd2b53f2ca

*This article is for reference and learning only, and does not represent the position of the DBAplus community. Technical contributions are welcome; submission email: [email protected]