Cloud migration is the process of moving applications, data, and other components hosted on servers inside an organization to a cloud-based infrastructure.
Some of the leading cloud providers are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. These not only provide the hardware but also offer a variety of rich apps and services for continuous integration, data analytics, artificial intelligence, and more. At Sumo Logic, our cloud-neutral products can easily integrate with most leading cloud-based solutions.
Organizations have traditionally been held back by the challenge of growing their information infrastructure. Moving to the cloud removes much of that constraint and adds tangible value. Here are a few benefits:
Agility and speed
With the cloud, procurement of new inventory and storage space is reduced to a matter of days or even hours, giving businesses the agility to respond to a rapidly changing technological environment.
The simplicity of cloud solutions makes teams more productive. In distributed teams, the cloud removes region-specific dependencies, creating a collaborative team setting.
Cloud providers package several useful features such as disaster recovery, automatic logging, monitoring, continuous deployment, and others as part of their solution.
Higher resource availability
Cloud providers back their services with high-availability commitments that increase the availability of resources, in turn leading to better asset utilization and customer satisfaction.
At large volumes, the unit price of servers comes down noticeably in comparison with native data centers. The pay-as-you-use model provides the flexibility that companies seek to counter seasonal demand and scale up or down as required by the business.
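To make the pay-as-you-use point concrete, here is a minimal sketch comparing fixed peak-capacity provisioning against monthly on-demand billing. All prices and demand figures are illustrative assumptions, not vendor quotes.

```python
# Hypothetical cost comparison: fixed capacity vs. pay-as-you-use cloud.
# Unit prices and demand figures below are made-up examples.

def fixed_capacity_cost(peak_demand_units: int, unit_price: float) -> float:
    """A native data center must be provisioned for peak demand year-round."""
    return peak_demand_units * unit_price * 12  # 12 months at peak capacity

def cloud_cost(monthly_demand_units: list[int], unit_price: float) -> float:
    """Pay-as-you-use: each month is billed at actual demand."""
    return sum(m * unit_price for m in monthly_demand_units)

# Seasonal demand: quiet most of the year, spiking at year-end.
demand = [40, 40, 45, 45, 50, 50, 50, 55, 60, 70, 100, 100]
print(fixed_capacity_cost(max(demand), 100.0))  # provisioned for the peak all year
print(cloud_cost(demand, 100.0))                # billed only for what was used
```

With these example numbers, the fixed model pays for peak capacity in every month, while the elastic model pays only for the demand actually served.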
Gartner’s 5R’s – Rehost, Refactor, Revise, Rebuild, and Replace – is a great starting point for deciding on a cloud migration strategy. Here is a quick synopsis:
Rehost
Also called ‘lift and shift,’ rehosting uses Infrastructure-as-a-Service (IaaS): existing applications and data are simply redeployed on cloud servers. This works well for teams not yet accustomed to a cloud environment, or for systems where code modifications are extremely difficult.
Refactor
Also called ‘lift, tinker, and shift,’ refactoring involves making some cloud-specific optimizations and changes and employing a Platform-as-a-Service (PaaS) model. Applications keep their core architecture unchanged but use cloud-based frameworks and tools that let developers take advantage of the cloud’s potential.
Revise
Adding another layer atop the previous two, this approach involves making architectural and code changes before migrating to the cloud. The objective is to optimize the application to take complete advantage of cloud services, which means introducing major changes to the code. Advanced knowledge is required to implement this strategy.
Rebuild
Similar to Revise in its big-bang approach, Rebuild discards the existing code base in favor of a new one, for example moving from Java to .NET. This is a time-consuming process, used only when there is consensus that the existing solution no longer suits changing business needs and requires a revamp.
Replace
This strategy means migrating from an existing native application to a third-party, vendor-based application. The existing application data must be migrated to the new system; everything else will be new.
The first step is to determine which applications (if any) make more sense in-house. Circumstances will vary, but these apps may include certain databases, applications for managing internal processes, or other applications that have special sensitivity to your organization.
The opposite of leaving apps in-house is making them portable, or ready to be ‘dragged and dropped’ into a cloud architecture and put straight to work. Portable apps offer many administrative advantages, such as easier disaster recovery, capabilities scaled across geographic cloud regions, faster turnaround for bringing versions to market, and cost leverage across new and existing providers.
However, packaging older apps for portability involves a major reworking of the operating code, performed by highly-skilled software engineers who understand the legion of technical interactions taking place in a cloud environment.
The most common environment today is the hybrid cloud, which is a combination of native and portable apps working together on a platform that blends private internal networks with cloud-hosted services like Amazon’s AWS. These environments present both traditional and new networking challenges, but they also leverage the power of the cloud and its services to expand the reach and interaction capabilities of existing infrastructures.
Analyze carefully before undertaking the tricky path to cloud migration. This will save you endless frustration and costs down the line and build a strong cloud foundation from which to grow.
Below is a checklist of items with important ramifications for your cloud infrastructure today and for its future growth.
1. Choose the right cloud platform
How much raw storage will you need to host and properly back up your main databases? How much overhead will you need for hypervisors like those used in VMware and Microsoft’s Azure environments? Build out the appropriate virtual workspace in advance and be selective when pricing cloud storage and hosting options.
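A back-of-the-envelope sizing calculation can anchor those pricing conversations. The sketch below is illustrative only; the backup-copy count, hypervisor overhead, and growth headroom ratios are assumptions you should replace with figures from your own environment.

```python
# Rough storage-sizing sketch. All ratios are illustrative assumptions.

def required_storage_gb(raw_db_gb: float,
                        backup_copies: int = 2,
                        hypervisor_overhead: float = 0.15,
                        growth_headroom: float = 0.30) -> float:
    """Raw data plus backup copies, hypervisor overhead, and growth headroom."""
    total = raw_db_gb * (1 + backup_copies)   # primary + backup copies
    total *= (1 + hypervisor_overhead)        # e.g., VMware/Azure overhead
    total *= (1 + growth_headroom)            # room to grow before resizing
    return round(total, 1)

print(required_storage_gb(500))  # 500 GB of databases to migrate
```

Even a crude model like this keeps you from buying a tier of cloud storage sized only for today's raw data.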
2. Check for hardware obsolescence
Many existing network devices employ hardware acceleration to power through heavy traffic. Not all simulated devices—like virtual routers, switches, and load balancers—currently support hardware acceleration in the cloud. Audit your traffic and processor demands and trends before assuming virtual replacement devices will perform all the duties of existing native hardware.
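That audit can be as simple as comparing observed peak throughput against what each virtual replacement can sustain. A minimal sketch, with made-up capacity and traffic numbers:

```python
# Sketch: flag devices whose observed peak load exceeds what a virtual
# replacement can sustain. All capacities and peaks are example figures.

virtual_capacity_gbps = {"router": 10.0, "switch": 25.0, "load_balancer": 5.0}

def audit_devices(observed_peaks: dict[str, float]) -> list[str]:
    """Return device types whose peak load exceeds the virtual equivalent."""
    return [dev for dev, peak in observed_peaks.items()
            if peak > virtual_capacity_gbps.get(dev, 0.0)]

peaks = {"router": 8.2, "switch": 31.0, "load_balancer": 4.9}
print(audit_devices(peaks))  # devices needing a hardware-acceleration review
```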
3. Research licensing issues
Native networking environments of yesterday usually licensed software by the user, the device, or by the enterprise. But the cloud changes these variables, hosting apps on machines with adjustable virtual processor counts and adding the ability to scale application services and servers. Factoring this impact on your licensing model could save huge sums of money.
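A quick model of the two licensing regimes makes the impact visible. The per-user and per-vCPU prices below are illustrative assumptions, not real vendor pricing:

```python
# Sketch comparing per-user licensing with per-vCPU licensing once an app
# runs on autoscaled cloud instances. Prices are made-up examples.

def per_user_cost(users: int, price_per_user: float) -> float:
    return users * price_per_user

def per_vcpu_cost(instances: int, vcpus_per_instance: int,
                  price_per_vcpu: float) -> float:
    return instances * vcpus_per_instance * price_per_vcpu

# 200 users served by 4 autoscaled instances with 8 vCPUs each:
print(per_user_cost(200, 30.0))
print(per_vcpu_cost(4, 8, 150.0))
```

Scaling instance counts or vCPU allocations up and down changes the second figure month to month, which is exactly the variable worth modeling before you sign.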
4. Mind your SSLs and certificates
Secure Sockets Layer (SSL/TLS) certificates are bound to specific hostnames and verify the identity of your endpoints. Changing a host location can invalidate them. It’s best to review where and how you use SSL certificates and prepare to renew or replace them before going live in a new hybrid cloud environment. Companies like DigiCert offer tools to assess your certificate chain.
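Part of that review is simply knowing how long each certificate has left. A small sketch using Python's standard-library `ssl` module, which parses the `notAfter` date format returned by `getpeercert()` (the date string below is a made-up example, not a real certificate):

```python
import ssl
import time

# Sketch: days remaining until a certificate's notAfter date expires.
# The expiry string uses the OpenSSL format, e.g. 'Jun 26 21:41:46 2025 GMT'.

def days_until_expiry(not_after: str, now=None) -> float:
    """Return days between `now` (epoch seconds, default current time)
    and the certificate's notAfter timestamp."""
    expiry = ssl.cert_time_to_seconds(not_after)
    current = now if now is not None else time.time()
    return (expiry - current) / 86400

# Example with a hypothetical expiry date:
print(days_until_expiry("Jan 11 00:00:00 2030 GMT"))
```

Running this across your inventory before cutover flags any certificate that would expire mid-migration.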
5. Audit IP addresses
IP addresses, usually statically or dynamically assigned and then forgotten about, take on another layer of challenge in the cloud. Old DHCP scopes and static addresses will likely change when moving to a cloud infrastructure, creating the need for a thorough IP address relationship audit prior to the move so that dependent sockets won’t be broken on migration.
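Python's standard-library `ipaddress` module makes the mechanical part of that audit straightforward. A minimal sketch, with example hosts and an assumed target subnet:

```python
import ipaddress

# Sketch: flag static IP assignments that fall outside the new cloud
# subnet and will need re-addressing. Subnet and hosts are examples.

def out_of_scope(static_ips: list[str], new_subnet: str) -> list[str]:
    """Return addresses that do not fit in the target network."""
    net = ipaddress.ip_network(new_subnet)
    return [ip for ip in static_ips if ipaddress.ip_address(ip) not in net]

hosts = ["10.0.1.10", "10.0.1.22", "192.168.5.40"]
print(out_of_scope(hosts, "10.0.0.0/16"))
```

Any address the function flags is one whose dependent sockets and config entries must be updated before migration day.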
6. Evaluate access control list dependencies
An access control list (ACL) regulates user and service activity across your network. As with IP addresses, migration will impact ACL dependencies. Customer traffic and system activities like backups, hypervisors, and monitoring will need to be reviewed for a shift to a full or hybrid cloud infrastructure to make sure everyone and everything can reach what it needs to in the new design.
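One way to frame the review: list every (source, destination, port) dependency your services need, then check each against the migrated rule set. A minimal sketch with hypothetical rules and dependencies:

```python
# Sketch: verify every dependency a service needs is still permitted by
# the migrated ACL. Rules and dependencies below are made-up examples.

acl = {("backup", "db", 5432), ("monitor", "web", 443), ("web", "db", 5432)}

def missing_rules(dependencies: set) -> set:
    """Return dependencies with no matching allow rule in the new ACL."""
    return dependencies - acl

needed = {("backup", "db", 5432), ("monitor", "db", 9100)}
print(missing_rules(needed))  # dependencies the new design would break
```

Anything the check surfaces is a rule to add (or a dependency to retire) before cutover.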
Advance planning and prep will make complying with audits much simpler.
Consider all the interactions that must be logged, analyzed, and reacted to every time traffic hits your site. IP traffic history, user tracking data, threat detection, security intrusion attempts, anomalies, and more collide in terabytes of data. Any single string in that data could have deep implications for your business.
Understandably, correctly analyzing all this raw data can be intimidating. Log analysis at that scale requires assistance, and software solutions that support or even guarantee compliance are often smart investments.
Not if, but when an event strikes your cloud environment, it is critical to run root-cause analysis and trace logs for culprit activity. It is an ever-evolving challenge, but a good compliance strategy addresses these requirements:
Centralized and immutable logging
All data interaction between all machines, virtual and physical, must be recorded and compiled into one or more highly secure locations.
Once collected, the data must be demonstrably immutable. Safeguards include strong encryption and a hard audit trail for the black box in which centralized logging data is locked.
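The immutability requirement is often met with a tamper-evident hash chain: each entry's digest covers the previous digest, so altering any record breaks every later link. A minimal illustration of the idea (production systems use hardened logging services, not hand-rolled code like this):

```python
import hashlib

# Sketch of a tamper-evident audit trail: each entry's SHA-256 digest
# incorporates the previous digest, so any alteration breaks the chain.

def chain(entries: list) -> list:
    """Return the hash chain over a list of log entry strings."""
    digests, prev = [], ""
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        digests.append(prev)
    return digests

def verify(entries: list, digests: list) -> bool:
    """True only if the entries reproduce the recorded chain exactly."""
    return chain(entries) == digests

log = ["user=alice action=login", "user=alice action=export"]
digests = chain(log)
print(verify(log, digests))                                    # intact log
print(verify(["user=mallory action=login", log[1]], digests))  # tampered log
```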
Daily infrastructure reviews
A smart cloud model includes daily probes for threats and vulnerabilities. By simulating attacks, outages, and other crises, good teams can respond with agility or even prevent service interruption.
Clear and rigid data retention policies
Industry standards for keeping a full history of cloud activity vary from three months to a year or more. It’s important to first define your policies, and then implement them. Inconsistencies between policy and procedure can lead to big compliance trouble fast, so data retention is a critical focus area for audit preparation.
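Keeping policy and procedure in sync can be automated: periodically flag records older than the retention window and act on them. A minimal sketch, with an assumed 12-month window and example timestamps:

```python
from datetime import datetime, timedelta

# Sketch: flag records older than the retention window so policy and
# procedure stay in sync. Window length and timestamps are examples.

def expired(records: dict, now: datetime, retention_days: int = 365) -> list:
    """Return IDs of records whose timestamp falls before the cutoff."""
    cutoff = now - timedelta(days=retention_days)
    return [rid for rid, ts in records.items() if ts < cutoff]

now = datetime(2024, 6, 1)
records = {"a": datetime(2023, 1, 15), "b": datetime(2024, 3, 2)}
print(expired(records, now))  # records past the retention window
```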
Sumo Logic helps enterprises accelerate their cloud strategy by offering an automated, easy, and rapid development and deployment process for cloud-based applications. We have developed the first truly next-generation machine data analytics platform, delivered as a cloud-based service. Some of the features that our platform offers are:
Continuous intelligence capabilities with built-in advanced analytics help uncover patterns and anomalies.
On-demand scaling to support rapid growth and cloud migration thanks to a multi-tenant architecture.
Security analytics and identification of any risks and threats within the cloud environment.