Barker: I think the biggest concerns, both of which turned out to be unfounded, have been around the security of using virtualization and the risks of having multiple virtual machines running within the same physical infrastructure. The recent disclosure of the Spectre and Meltdown vulnerabilities in CPU architectures reignited some of these concerns, but patches were released quickly, and the exploits required root or administrator access to the systems themselves; if an attacker already has that level of access to your private cloud, you have a far larger problem.
In general, resource isolation and virtual machine isolation have proven secure in practice; issues generally arise when these are misconfigured during deployment. A properly designed virtual environment, with network isolation and storage isolation where needed, is very secure.
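For readers who want to verify Barker's point about patching on their own hypervisor hosts, modern Linux kernels expose their speculative-execution mitigation status under sysfs. The following is a minimal sketch assuming a Linux host with kernel 4.15 or later; the directory path is standard, but the exact set of files present varies by kernel version.

```python
#!/usr/bin/env python3
"""Minimal sketch: report the kernel's CPU-vulnerability mitigation
status (Spectre, Meltdown, and relatives) on a Linux hypervisor host."""
from pathlib import Path

# Standard sysfs location on kernels >= 4.15; absent on older kernels.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_mitigations() -> None:
    if not VULN_DIR.is_dir():
        print("Kernel does not expose vulnerability status (too old?)")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file holds a one-line status, e.g. "Mitigation: PTI".
        print(f"{entry.name:30s} {entry.read_text().strip()}")

if __name__ == "__main__":
    report_mitigations()
```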
Rittwage: I suspect it's very difficult to program such a thing.

Barker: VMware has excellent management tools and a long track record in hardware virtualization, but it comes at a relatively hefty price, especially if you are putting a large deployment together. If you are primarily a Windows environment and most of the guest machines are going to be running Windows Server, then a Hyper-V environment may be preferable.
The licensing costs can be lower if deployed correctly with Windows Server Datacenter edition or the standalone Microsoft Hyper-V Server, and the management interfaces will be familiar to Windows users. KVM and Xen are both excellent open-source hypervisor platforms, but they lack comparable built-in management interfaces.
While there are options to address this, such as deploying an OpenStack environment or using a front end such as OnApp, these do add complexity to the design if you don't have prior experience with those tools or with open-source software in general.

Rittwage: I'm not sure I would deploy anything except the majors for any critical business role, but for practice, for learning the product, or for temporary disaster-recovery situations, I've seen VirtualBox used.
You can also deploy a physical server as a hypervisor running only a single virtual machine, which can be a good way to guarantee that the required resources are available to that application while keeping the benefits of management and migration that a virtualized environment brings.
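As an illustration of that single-guest pattern, the sketch below uses the libvirt Python bindings to confirm that a KVM host is running exactly one guest and to show how much of the host's CPU and RAM that guest has been allocated. It assumes libvirt-python is installed and a local qemu:///system connection; treat it as a sketch rather than production tooling.

```python
#!/usr/bin/env python3
"""Sketch: confirm a KVM host is dedicated to a single guest and show
the guest's share of host resources. Assumes libvirt-python and a
local qemu:///system socket."""
import libvirt  # pip install libvirt-python

conn = libvirt.open("qemu:///system")

# getInfo() -> [model, memory (MB), active CPUs, MHz, ...]
_, host_mem_mb, host_cpus, *_ = conn.getInfo()
domains = conn.listAllDomains()
print(f"Host: {host_cpus} CPUs, {host_mem_mb} MB RAM, {len(domains)} guest(s)")

for dom in domains:
    # info() -> [state, max memory (KB), memory (KB), vCPUs, CPU time]
    _, max_mem_kb, _, vcpus, _ = dom.info()
    print(f"  {dom.name()}: {vcpus} vCPUs, {max_mem_kb // 1024} MB RAM")

if len(domains) > 1:
    print("Warning: more than one guest on this dedicated host")

conn.close()
```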
Likewise, legacy applications can be problematic to move into a virtual environment--not all applications will sit happily with virtual CPUs or virtual NICs, because they were designed to talk directly to the physical hardware. Given the maturity of the virtualization market, such applications are becoming far fewer and less of a concern as time goes on.
Virtualization is about sharing the underlying hardware with other tasks.

Barker: Mostly I suspect this will be around a shift to more network virtualization on the physical network hardware in order to support workloads and virtual machines that regularly migrate between hypervisor nodes. It will mean ensuring that the physical network infrastructure supporting your virtual infrastructure is properly designed for SDN, scripting, and VXLANs.
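To make the VXLAN point concrete, here is a minimal sketch of the kind of overlay plumbing involved on a Linux hypervisor node, wrapping the standard iproute2 commands. The VNI, NIC name, multicast group, and bridge name are all illustrative assumptions, and the commands require root.

```python
#!/usr/bin/env python3
"""Sketch: create a VXLAN overlay interface on a Linux hypervisor node
so VM traffic can span L3 boundaries. Must run as root; names and IDs
below are illustrative assumptions."""
import subprocess

def sh(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

VNI = 42             # VXLAN Network Identifier (assumed)
PHYS_DEV = "eth0"    # underlay NIC (assumed)
GROUP = "239.1.1.1"  # multicast group for peer discovery (assumed)
BRIDGE = "br0"       # bridge carrying VM traffic (assumed)

# Create the VXLAN device on the IANA-assigned UDP port 4789.
sh(f"ip link add vxlan{VNI} type vxlan id {VNI} group {GROUP} "
   f"dev {PHYS_DEV} dstport 4789")
sh(f"ip link set vxlan{VNI} up")
# Attach the overlay to the VM bridge so guests can use it.
sh(f"ip link set vxlan{VNI} master {BRIDGE}")
```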
Another area will be the continued increase in the use of containerization within virtual machines--products such as Docker and Kubernetes provide OS-level and application virtualization inside the virtual machine itself (see the sketch following this exchange). In the right use cases, this brings massive benefits in speed of deployment, consistency of the environment, and the ability to migrate application workloads almost instantly between virtual machines.

Rittwage: It's pretty mature at this point, so I'm not sure what new challenges will show up in the next 5 years.
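As a minimal illustration of the container workflow Barker describes, the sketch below uses the Docker SDK for Python to start a throwaway container from inside a VM; the image name and command are just examples. The point is the speed: the container reuses the VM's already-running kernel instead of booting a guest OS.

```python
#!/usr/bin/env python3
"""Sketch: launch a short-lived container from inside a VM with the
Docker SDK for Python. Image and command are illustrative examples."""
import docker  # pip install docker

client = docker.from_env()

# Containers share the VM's running kernel, so this returns in
# seconds, versus the minutes a full guest boot can take.
output = client.containers.run(
    "alpine:3.19", "echo hello from a container", remove=True
)
print(output.decode().strip())
```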
Koblentz: Generally, what other advice do you have for people in charge of implementing and maintaining server virtualization projects?

Barker: Plan for growth. During the design phase, after you have benchmarked the existing environment, make sure to plan how you'll expand the platform with new hypervisors or additional storage in a way that minimizes impact on the environment.
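One way to act on that advice is to turn the benchmark numbers into a simple growth projection, so extra hypervisors or disk shelves are ordered before the headroom runs out. The sketch below is illustrative only; every figure in it is an assumption, not a number from this article.

```python
#!/usr/bin/env python3
"""Sketch: project when a cluster exhausts its headroom under steady
growth. All figures are illustrative assumptions."""

cluster_ram_gb = 1024    # usable RAM across all hypervisors (assumed)
used_ram_gb = 600.0      # today's allocation, from benchmarking (assumed)
monthly_growth = 0.04    # 4% growth per month (assumed)
headroom = 0.25          # keep 25% free for failover and maintenance

months = 0
while used_ram_gb < cluster_ram_gb * (1 - headroom):
    used_ram_gb *= 1 + monthly_growth
    months += 1

print(f"At {monthly_growth:.0%}/month growth, plan to expand "
      f"in about {months} months")
```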
With virtualized environments, there is an expectation of much higher availability, and you need to be able to add another set of disks or another four hypervisors without re-architecting the whole platform because there were only enough switch ports for the initial build. Also, make sure you still have a good backup strategy.
Although everything is now virtualized and likely a lot more resilient to the failure of a physical component of the infrastructure, things do still go wrong.
Having everything virtualized opens up other backup strategies, with snapshots of virtual machines and technologies such as [backup appliances], which can make taking backups, managing them, and restoring far easier than when everything ran on its own individual server (a scripted example appears after this interview).

Rittwage: Plan for performance, growth, and redundancy. People expect to be able to use an expensive server for 5 years or more.
Use a consultant who has successfully moved many companies to virtualization.
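As a concrete illustration of the snapshot-based backups Barker mentions above, the sketch below takes a dated snapshot of every running libvirt guest by wrapping the virsh CLI. It assumes virsh is installed, the guests use a disk format that supports snapshots (such as qcow2), and separate tooling prunes old snapshots.

```python
#!/usr/bin/env python3
"""Sketch: dated snapshots of all running libvirt guests via virsh.
Assumes qcow2-backed guests; snapshot pruning is left to other tooling."""
import subprocess
from datetime import date

def virsh(*args: str) -> str:
    result = subprocess.run(["virsh", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# "virsh list --name" prints one running domain name per line.
guests = [name for name in virsh("list", "--name").splitlines() if name]

for guest in guests:
    snap_name = f"{guest}-backup-{date.today().isoformat()}"
    virsh("snapshot-create-as", guest, snap_name)
    print(f"Created snapshot {snap_name}")
```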
In other cases, when it comes to assets that are rarely used, you may be fine sacrificing performance and speed for cost savings.
Regardless of which route you go, understanding your need for performance is crucial to getting the best experience from dedicated equipment or a virtualization vendor. Direct connections into cloud services have made utilizing cloud services such as AWS and Azure easier. Getting GigE and 10 GigE circuits through Atlantech Online Cloud Connect or similar services is far more dependable than relying on connections over the public Internet.
Since virtualization servers are located offsite, you have an immediate advantage in terms of disaster recovery. In many cases, vendors with appropriate risk-mitigation planning can significantly improve your business continuity.
Risk mitigation ultimately depends on the configuration of your dedicated or virtual servers. In many cases, companies are able to significantly mitigate risk by switching to virtualization vendors that offer appropriate safeguards against hardware failure and backups both on and offsite.
The security of your physical or virtual servers depends largely on configuration, staff knowledge, and environment. For many organizations with minimal budget or hardware, switching to virtualization can offer significant security gains. The physical environment matters as well: as your on-premises data assets grow, maintaining appropriate temperature and humidity becomes more challenging.
Does your staff have the knowledge and bandwidth to appropriately manage server acquisition, maintenance, configuration, and security? Perhaps more important, are they aware of best practices for increasing efficiency and realizing cost savings? Switching to virtualization can free your IT team from dealing with data storage and server management, allowing them to focus on other priorities and opportunities for cost savings.
Many organizations choose to migrate their workloads to virtualization slowly over time. If this is your intent, communicate with your vendor about their existing migration tools, and have a conversation about application compatibility. Most businesses find that migration to virtualization, even when performed gradually, is easier than they expect.

You may have certain data assets that do not contain payment, health, or other types of information subject to regulatory requirements.
In these cases, using a basic physical server that you already own could be the right choice. In a case study at Westminster College, the IT team decided to use physical servers to store camera footage while moving most of their overall workload to virtualization. The Westminster team felt they could absorb the responsibility and risk of storing this data on premises, using a basic server that was already owned. Both physical servers and virtualized servers must meet all standards set forth by law.
Typically, compliance is verified and measured by independent auditors. Depending on your use case, the right answer may be to use both virtual and physical servers in a colocation data center.
Virtual server hosting has matured, and we are seeing more hybrid approaches in which web-facing servers are virtual while back-end, data-crunching servers reside on dedicated hardware in a colocation data center.
Atlantech Online offers data center and colocation services to businesses of all sizes. If you would like to see how Atlantech can provide the right solution for your needs, click here to schedule a consultation.

But luckily for you, there are ways around all this tedious upkeep and those annoying repairs.
When you virtualize your servers, you reduce your need for physical, on-site servers. Instead, multiple virtual servers run on a single physical server. In theory, you should require less maintenance, upkeep, and IT support. With physical hardware, all that upkeep and all those repairs force you to spend money at one point or another; virtualize those servers, and the need for upkeep and repairs will all but disappear, ultimately reducing your overall IT-related expenses.
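The cost claim is easiest to see with rough consolidation arithmetic. Every figure in the sketch below is an illustrative assumption rather than a benchmark.

```python
#!/usr/bin/env python3
"""Sketch: back-of-the-envelope consolidation savings.
All figures are illustrative assumptions."""

physical_servers = 20     # standalone servers today (assumed)
vms_per_host = 10         # conservative consolidation ratio (assumed)
upkeep_per_server = 3000  # annual power/cooling/maintenance, USD (assumed)

hosts_needed = -(-physical_servers // vms_per_host)  # ceiling division
annual_savings = (physical_servers - hosts_needed) * upkeep_per_server

print(f"{physical_servers} servers consolidate onto {hosts_needed} hosts, "
      f"saving about ${annual_savings:,} per year in upkeep")
```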
No business is a friend of downtime.