RAM Type: DDR4 RDIMMs, LRDIMMs
Max RAM Capacity: RDIMMs: 6TB (64GB dual rank DIMMs), LRDIMMs: 12TB (128GB quad rank DIMMs)
Processor Socket Count: 4
Compatible Processor Series: E7-4800 v3, v4 or E7-8800 v3, v4
Number of RAM Slots: 96 (8 risers with 12 slots each)
Hard Drive Bays: 24 2.5" drives, up to 8 of which can be PCIe SSDs
BIOS Notes: Requires BIOS revision 2.0 or higher to run E7-4800 v4 or E7-8800 v4 processors
E7 v3 processors officially support only 1536GB per installed processor
The Dell PowerEdge R930 is a 4U rack-mountable server built around a quad-socket LGA 2011 motherboard. It accepts E7-8800 v3, E7-8800 v4, E7-4800 v3, or E7-4800 v4 series processors. The system features eight memory risers, two per processor, and each riser holds 12 DIMMs for a total of 96 slots. A riser's memory is only addressable if its accompanying processor is installed, so check the owner's manual to see which riser is associated with which processor. Fully populated with 64GB dual rank RDIMMs at either 2400MHz or 2133MHz, the system can address 6TB of memory; with 128GB quad rank LRDIMMs, that rises to 12TB. With two added PCIe risers it can hold up to 10 PCIe devices, although slots 6 through 10 require all four processors to be installed. It can be configured with either 4 or 24 2.5" hard drive bays, and you may install up to 8 PCIe storage devices in the expansion slots at the back of the system. Four gigabit networking ports are included by default. Note that E7-8800 v4 and E7-4800 v4 processors require BIOS revision 2.0 or higher to operate.
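The capacity figures above are simple slot arithmetic; a few lines of Python, using the slot count and DIMM sizes from the spec sheet, confirm them:

```python
# Dell PowerEdge R930 memory capacity sanity check.
# Slot count and DIMM sizes come from the spec sheet above.
RISERS = 8
SLOTS_PER_RISER = 12
total_slots = RISERS * SLOTS_PER_RISER  # 96 DIMM slots

rdimm_gb = 64    # largest supported dual rank RDIMM
lrdimm_gb = 128  # largest supported quad rank LRDIMM

print(f"Total slots: {total_slots}")                               # 96
print(f"Max with RDIMMs:  {total_slots * rdimm_gb // 1024} TB")    # 6 TB
print(f"Max with LRDIMMs: {total_slots * lrdimm_gb // 1024} TB")   # 12 TB
```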
Overall, this is an absolute beast of a machine. This single system can be filled with as much compute, memory, and fast storage as many people's entire server racks from just two generations ago.
RAM Type: DDR4 RDIMMs, LRDIMMs
Max RAM Capacity: RDIMMs: 512GB (32GB DIMMs), LRDIMMs: 1TB (64GB DIMMs)
Processor Socket Count: 2
Compatible Processor Series: E5-2600 v3, E5-2600 v4, E5-1600 v3, E5-1600 v4
Number of RAM Slots: 16
Hard Drive Bays: 4 3.5" drives or up to 8 2.5" drives
BIOS Notes: Requires BIOS Revision A12 or higher to run E5-2600 v4
Requires BIOS Revision A13 or higher to run E5-1600 v4
The Dell Precision T7910 is a workstation featuring an EATX motherboard with dual LGA 2011-3 sockets. It supports Xeon E5-2600 v3 processors in dual-processor configurations and Xeon E5-1600 v3 processors in single-processor configurations. With BIOS revision A12 or newer it will also support dual Xeon E5-2600 v4 processors, and with A13 or newer a single Xeon E5-1600 v4. This workstation uses the C612 chipset. It features 16 DIMM slots and can accept DDR4 LRDIMMs up to 64GB per stick for a total of 1TB in a dual-processor configuration, or RDIMMs up to 32GB per stick for a total of 512GB in a dual-processor configuration. In a single-processor configuration those numbers are halved. This may eventually be increased to officially support 128GB LRDIMMs, but that has not been confirmed yet.
The T7910 features six x16 PCIe slots and one PCI slot. The top two PCIe slots are linked to processor 1 and the bottom five are linked to processor 0, which means that without a second processor installed the top two PCIe slots will not be operational.
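The slot-to-processor dependency above can be sketched as a small lookup. The slot numbering here is purely illustrative; check the owner's manual for the actual slot labels on your chassis:

```python
# Sketch of the T7910 PCIe slot-to-processor mapping described above.
# Slot numbers are hypothetical labels for illustration only.
SLOT_TO_CPU = {
    1: 1, 2: 1,                     # top two slots depend on processor 1
    3: 0, 4: 0, 5: 0, 6: 0, 7: 0,   # bottom five run off processor 0
}

def operational_slots(installed_cpus):
    """Return the PCIe slots usable with the given set of installed CPUs."""
    return sorted(s for s, cpu in SLOT_TO_CPU.items() if cpu in installed_cpus)

print(operational_slots({0}))     # single-CPU build: [3, 4, 5, 6, 7]
print(operational_slots({0, 1}))  # dual-CPU build: [1, 2, 3, 4, 5, 6, 7]
```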
This workstation can house four hot-swappable 3.5" drives in the front or up to eight 2.5" drives. The processor socket uses the square ILM mounting pattern if you are looking for a heatsink replacement.
For additional information on this model please visit https://www.itconnected.tech/support/dell/dell-precision-t7910.html
Virtualization has become a massively important part of almost all server workloads, and there are lots of options to choose from. For this article, we will just focus on the two frontrunners, Microsoft Hyper-V and VMware ESXi, and why you might choose one over the other.
First I will talk about their similarities and then highlight how they differ. Both ESXi and Hyper-V are Type 1, or bare-metal, hypervisors. This means you can load VMs directly onto them without needing a base operating system underneath. Both feature live migration, high availability, and failover capabilities, and both support a large array of guest operating systems.
So how do they differ? Well, for starters, Hyper-V comes in a couple of different flavors. You can load Hyper-V Server as its own standalone setup, as you would with ESXi, but there is also the option to install Windows Server 2016 and simply add the Hyper-V role. This somewhat blurs the line between a Type 1 and Type 2 hypervisor, but it can be very handy, as it allows virtualization to be built on top of existing server stacks running other Windows Server services. Anyone already using Windows Server will likely be more comfortable with Hyper-V because it uses the same UI and design principles. ESXi, with its Unix-like command-line environment, will likely be quicker to pick up for those more comfortable with a shell and less GUI-heavy interaction.
One other thing that varies rather significantly between them is licensing. Both charge per physical CPU, so a quad-socket system requires four licenses, and the licenses are not cheap. I have to give Hyper-V the advantage here, though: with ESXi you have to buy the ESXi licenses as well as the Microsoft licenses for any Windows guests underneath, whereas Hyper-V can come as a free role bundled with your Windows Server license.
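To see why per-socket licensing matters, here is a toy comparison. The prices below are made-up placeholders, not real vendor pricing; substitute your actual quotes:

```python
# Toy per-socket licensing comparison. The prices below are hypothetical
# placeholders, NOT real vendor pricing -- plug in your own quotes.
sockets = 4  # e.g. a quad-socket host

esxi_per_socket = 1000     # hypothetical ESXi license per CPU
windows_per_socket = 800   # hypothetical Windows Server license per CPU

# ESXi host running Windows guests: you pay for both layers.
esxi_total = sockets * (esxi_per_socket + windows_per_socket)
# Hyper-V as a role: the hypervisor rides along with the Windows license.
hyperv_total = sockets * windows_per_socket

print(f"ESXi + Windows licensing: {esxi_total}")   # 7200
print(f"Hyper-V (role) licensing: {hyperv_total}") # 3200
```

The exact numbers are invented, but the structural point holds: with per-socket pricing, every additional CPU multiplies the gap between the two approaches.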
One huge win ESXi has over Hyper-V is that it has been around, in one form or another, for much longer. This means far more support material and guides are available, and many add-ons and additional software tools have been built over the years to aid in almost anything you could want to do with virtualization. This is largely why ESXi is still used in much larger corporations: it has greater flexibility and power when combined with all of the advanced tools created over the years. On the flip side, Hyper-V's Active Directory integration allows for easier setup of a secure environment.
The last thing to consider is the potential size of your deployment. I won't go into detail here because there are too many specifics to hash out, but I highly recommend reading through the documentation if you are planning very densely packed hosts, because you may run into the hypervisor's limits on RAM, VMs, or CPUs/vCPUs.
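The kind of check described above can be scripted before you buy hardware. This is a minimal sketch; the limit values below are illustrative placeholders, so look up the real configuration maximums for your hypervisor and version before relying on anything like this:

```python
# Pre-deployment sanity check against hypervisor maximums.
# The limits below are ILLUSTRATIVE placeholders, not real product limits.
HOST_LIMITS = {"max_vms": 1024, "max_vcpus": 4096, "max_ram_gb": 12 * 1024}

def fits(planned_vms, vcpus_per_vm, ram_gb_per_vm, limits=HOST_LIMITS):
    """Return a list of limits the planned deployment would exceed."""
    problems = []
    if planned_vms > limits["max_vms"]:
        problems.append("too many VMs")
    if planned_vms * vcpus_per_vm > limits["max_vcpus"]:
        problems.append("too many vCPUs")
    if planned_vms * ram_gb_per_vm > limits["max_ram_gb"]:
        problems.append("too much RAM")
    return problems

print(fits(200, 8, 32))   # modest plan, fits: []
print(fits(500, 16, 32))  # dense plan: ['too many vCPUs', 'too much RAM']
```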
So which one wins? Well, like most things, it depends on your use case, the environment you are currently running, and what you are willing to learn. Most people would agree that ESXi is the more complete solution, but choosing ESXi over Hyper-V means learning a new environment, giving up some creature comforts, and likely paying a bit more for licensing.
If you've looked into hypervisors at all, you have probably realized there are a ton of different options and packages; even a single vendor, like Citrix or VMware, may offer several. All of these can be separated into two groups: Type 1, commonly referred to as a bare-metal or native hypervisor, and Type 2, commonly referred to as a hosted hypervisor. Both are incredibly valuable, but your use case will largely determine which you want to use.
Type 1 hypervisors are installed directly onto your system as a standalone operating system, hence "bare-metal". Once this kind of hypervisor is installed, you can install guest OSes onto it, and it will run each operating system once you spin it up. These systems are typically accessed over a remote connection or managed from a command line, and you can have many virtual machines running on them at once. Here are some Type 1 hypervisors to look into (sorted by market share): ESXi, Hyper-V, and Xen.
Type 2 hypervisors, in contrast, do not run directly on the hardware and instead run inside a host operating system. If you are already running Windows Server 2016, for example, you might install a Type 2 hypervisor into that environment. This is very handy if you need to run some Linux-only software or swap between operating systems, as it saves you from dual booting and restarting every time you want to go back and forth. Type 2 hypervisors can be used much like Type 1: you load up a few different operating systems and let them run in the background doing server tasks. That was common before everyone virtualized their infrastructure, because it let organizations leave their existing equipment and setup in place and simply take on additional functionality inside a Type 2 hypervisor, but at this point most people go directly for Type 1 when building out their systems. Type 2 is now mainly used on individual systems that need another environment. Here are some Type 2 hypervisors you might be interested in: VMware Workstation, VirtualBox, and QEMU.
You may or may not have heard of a hypervisor before, but if you have ever used a virtual machine then you have probably used one. A hypervisor is a layer of software that sits between your hardware and the operating system, allowing you to run multiple operating systems on a single set of hardware simultaneously. When you start your computer, it first loads the hypervisor, and the hypervisor then loads your OS or OSes. Hypervisors are used for a lot of reasons. One of the major ones is consolidation: because a hypervisor lets you run multiple OSes on a single system, you don't need separate physical machines for tasks requiring different operating systems. Instead of buying one system for your Active Directory, another for your storage solution, and another for that pesky bit of software that still needs to run on Windows Server 2003, you can consolidate all of them onto a single machine, saving on energy, space, and hardware costs. Another reason is security. With a hypervisor it is easier to keep your various operating systems compartmentalized and to quarantine anything that may have been infected, and it is also very easy to revert to a previous state and start again. If a guest gets infected, you just spawn another copy of the instance and you're back with a fresh start in seconds.