When you have VMs in Azure that were built outside of an Azure VNet and you want that VM to talk to another VM, or you need to set security boundaries, you will need to migrate the VM into a VNet. In Azure (and even in PowerShell) there is no option to link an existing networkless VM to a new or existing VNet; this post describes how to work around that.
The first step is to log into the old or new portal and select the VM you wish to move.
Stop the VM and select Delete.
Fill in the name of the VM and pay attention here: it is crucial that you DO NOT select the disk(s) in the wizard.
Now that the VM is deleted, we also need to remove the ‘Cloud Service’.
Please note that if you had a VIP, you will be assigned a new one later.
Now we will create a new VM based upon the disks that are still in our storage account.
At the time of writing, the following action is not possible in the new portal, so we need to return to the old one.
Select New VM, select ‘From Gallery’ and go to ‘My Disks’ and select the disk from the deleted VM:
Give the VM its name back and select the appropriate size for your VM:
Select New or existing ‘Cloud Service’, Region and ‘Availability Set’
In the last wizard page, select any extensions if required.
Add additional Data disks to your VM if needed
Et voilà, your VM is now available in the VNet of your choice.
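The same flow can be sketched with the classic (Service Management) Azure PowerShell module. All names below are placeholders for your own subscription, disk, service and VNet; note that -VNetName is only accepted while the cloud service is being created:

```powershell
# Classic (Service Management) Azure PowerShell - all names are placeholders
Add-AzureAccount
Select-AzureSubscription -SubscriptionName "MySubscription"

# The OS disk survived the VM deletion; find it in the storage account
Get-AzureDisk | Where-Object { $_.AttachedTo -eq $null } |
    Select-Object DiskName, DiskSizeInGB

# Rebuild the VM from the existing OS disk and place it in a VNet subnet
$vm = New-AzureVMConfig -Name "MyVM" -InstanceSize "Medium" -DiskName "MyVM-os-disk" |
      Set-AzureSubnet -SubnetNames "Subnet-1"

# -VNetName only works when the cloud service is created in the same call,
# hence -Location is supplied here as well
New-AzureVM -ServiceName "MyCloudService" -Location "West Europe" -VNetName "MyVNet" -VMs $vm
```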
For my customer I had to set up the VMM Delegated Administrator role for their Tenant Operations team in order to gain ‘connect to console’ functionality (without providing too much additional rights in the VMM console).
Setting up delegated user roles is in fact very easy; the steps are described here: https://technet.microsoft.com/en-us/library/gg696971.aspx
Trying this out in the QA environment went really smoothly, so I reran the same actions in the Production environment,
tested it out, and things looked fine … until I had to perform the demo …
During the demo I could log into the VMM cluster and launch the VMM console; only it seemed to fail on the part that was of interest:
the VM console access.
I received errors claiming I did not have sufficient rights to perform that action, and no further clues were to be found in VMM or in the server logs …
Back on Monday with a fresh pair of brains, I thought about fixing it with mighty PowerShell.
Running Get-Help *connect* led me to the following command: Grant-VMConnectAccess
As I had already set up a security group called ‘CS-WS-Role-VMM_Console_Access’, I just needed to run the following command on the Hyper-V host holding the tenants to enable it on ALL deployed VMs:
Grant-VMConnectAccess -VMName * -UserName "CS-WS-Role-VMM_Console_Access" -Verbose
(I added the -verbose parameter so that I could see on what objects it was applied)
Rerunning the connect-to-console worked fine from here on 🙂
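For reference, the grant plus a quick verification could look like this on the Hyper-V host (the group name is the one from this post; note that the cmdlet only affects VMs that exist at the time it runs, so it will likely need rerunning for tenants deployed later):

```powershell
# Grant console access on ALL VMs currently deployed on this host
Grant-VMConnectAccess -VMName * -UserName "CS-WS-Role-VMM_Console_Access" -Verbose

# Verify which accounts now hold console access per VM
Get-VMConnectAccess -VMName *

# Undo, if ever needed:
# Revoke-VMConnectAccess -VMName * -UserName "CS-WS-Role-VMM_Console_Access"
```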
Preparing your AD properly can (and will) save you a lot of time.
Heating up Google’s servers told me that there is very little or no info to be found regarding this part, so here is my input for whoever wants to read up on it.
Design your infrastructure
Before you start with the configuration you will need to decide upon some design questions:
– Forest and domain to be used
– How many DCs will I deploy? (recommended to have at least 2)
– What roles will I install on them?
– Do I want to have my DCs virtual or physical?
– If virtual, where will I host them? On the fabric Hyper-V servers or outside of it?
– What’s the scope of my private cloud? Test lab, QA or Production environment?
For Production environments you might want to choose physical boxes, as these have the advantage that they will still be running if your Fabric goes down (for whatever reason). This seems to be the safest solution, but it brings additional license & hardware costs compared with running virtual DCs on the Fabric. For virtual DCs, the license cost is covered if you have installed Windows Server 2012 R2 Datacenter on your Fabric Hyper-V hosts.
See more on best practices regarding virtual DCs:
See more on Microsoft-Server-Virtualization-Licensing (e-book)
My personal preferred solution is one that makes use of the ‘best of both worlds’: 1 virtual DC running on the fabric and 1 DC on a physical host.
QA environments should reflect the Production environment as closely as possible; for DEV/TEST you could run everything virtual.
When all DCs are virtual, don’t place them on an SMB3 file share (or at least not all of them), as all incoming requests need to get authenticated and you might run into a chicken-and-egg problem.
In ANY of the above situations: ALWAYS make sure to have frequent backups! (remember that backups are only useful if they are tested from time to time)
ADFS design considerations
Remember that at some later stage you will also need to configure ADFS; here too you have some options to think about:
– Do I install the ADFS role on my DCs or will I have dedicated hosts?
– Physical or Virtual?
– Single or redundant host?
Note that ADFS requires a database and that ADFS is NOT part of the System Center Suite, so you cannot benefit from reduced license costs by installing the DB on the ‘core’ Cloud SQL (the one hosting all the SQL instances for System Center, which is included in the System Center license).
Again, several options are available: you could use an internal database, the free SQL Express, or a full-blown (expensive) SQL Server, running on a dedicated host or on the ADFS server itself.
If ADFS is crucial to your environment, make sure you have a second ADFS server available (NLB) and consider installing a dedicated SQL Server (cluster?) as well. Regarding the latter: an always-available SQL DB is only required when you need to perform ADFS configuration changes while one of the ADFS nodes is unavailable. You could install an internal DB on the first ADFS node; when that node is down, the second one will still be able to do what it is designed for (except that you will not be able to make changes, as the DB resides on the first node).
OS installation tips
Before installing the AD DS role on a physical box, make sure that you have installed the latest drivers/firmware from the hardware vendor and run Windows Update.
When you have decided to go for a VM: select a Gen2 VM during setup, as this has some advantages over Gen1 VMs. Note that it is important to select this in the VM creation wizard, as it cannot be changed after the VM has been created. Also double-check that the integration services are installed properly.
In both cases, the first thing to do is to properly configure the IP settings, as otherwise you will get prerequisite remarks during setup of the DC roles.
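As a sketch, a static IP for the first DC could be configured like this (the interface alias and all addresses are placeholders; the first DC typically points DNS at itself):

```powershell
# Placeholders: adjust interface alias and addresses to your own network
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress "10.0.0.10" `
    -PrefixLength 24 -DefaultGateway "10.0.0.1"

# The first DC usually points DNS at itself; later DCs point at an existing DC first
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses "10.0.0.10"
```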
Domain Controller configuration tips
Don’t forget about spreading the FSMO roles, configuring time servers, and especially make sure that DNS is behaving as expected (zone transfers, delegation, forwarders, …)!! When your DCs are virtual, configure them to be ‘highly available’.
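A hedged sketch of checking and spreading the FSMO roles with the ActiveDirectory module (the DC name and the NTP pool are placeholders):

```powershell
# Where do the five FSMO roles currently live?
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster

# Spread some roles to a second DC ("DC02" is a placeholder)
Move-ADDirectoryServerOperationMasterRole -Identity "DC02" `
    -OperationMasterRole RIDMaster, InfrastructureMaster

# On the PDC emulator: sync time from an external source
w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:yes /update
```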
Forest and domain
Installing your first Cloud domain controller requires its own domain, and preferably you want a dedicated forest/domain for this setup.
ADFS will make it possible to authenticate against other forests/domains, so don’t worry about that right now.
If you want to deploy into an existing forest, then at least make sure you have a dedicated subdomain.
After the AD DS role installation, you will have the option to promote to a domain controller.
For our setup you must choose between ‘add a new domain to an existing forest’ and ‘add a new forest’.
Next, select the functional level for the forest and/or domain (if your environment allows it, the preferred setting is Windows Server 2012 R2) and choose whether to add DNS to the DC capabilities (yes, you want that). Another wise thing to do is to run the Best Practices Analyzer (BPA) after adding the role(s).
Looking for PowerShell commands ? https://technet.microsoft.com/en-us/library/hh472162.aspx
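As a hedged sketch, promoting the first DC of a new dedicated forest could look like this (the domain name, NetBIOS name and functional levels are placeholders for your own design choices; Install-ADDSForest prompts for the DSRM password):

```powershell
# Add the AD DS role first
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote to the first DC of a new forest - all names are placeholders
Install-ADDSForest `
    -DomainName "cloud.contoso.local" `
    -DomainNetbiosName "CLOUD" `
    -ForestMode Win2012R2 `
    -DomainMode Win2012R2 `
    -InstallDns
```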
Prepare the environment
Once the AD Domain Services role is installed, we must prepare by:
– adding users to the domain
– adding service accounts to the domain
– adding computer objects to the domain
– adding security groups to the domain
– adding additional OUs to the domain
– adding GPO’s to the domain (firewall, remote desktop, admin rights, Windows update, security, …the more you can automate, the better!)
Whether you install the private cloud with PDT (see my first blog post) or by hand, the more you have prepared, the faster you will be able to finish.
users & service accounts: check following document here
security groups: check following document here
These are a ‘must have’; depending on the level of delegation, you might need to add more security groups to fit your needs.
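The preparation steps above can be sketched in PowerShell. All OU, group and account names below are illustrative assumptions (except the console-access group from this blog); adapt them to your own design:

```powershell
# All OU/group/user names below are illustrative - adapt to your own design
Import-Module ActiveDirectory
$base = "DC=cloud,DC=contoso,DC=local"

# A small OU structure for the cloud objects
New-ADOrganizationalUnit -Name "Cloud" -Path $base
New-ADOrganizationalUnit -Name "ServiceAccounts" -Path "OU=Cloud,$base"
New-ADOrganizationalUnit -Name "Groups" -Path "OU=Cloud,$base"

# A security group for delegation (name taken from this blog series)
New-ADGroup -Name "CS-WS-Role-VMM_Console_Access" -GroupScope Global `
    -Path "OU=Groups,OU=Cloud,$base"

# A service account, prompting for its password
New-ADUser -Name "svc-vmm" -SamAccountName "svc-vmm" `
    -Path "OU=ServiceAccounts,OU=Cloud,$base" `
    -AccountPassword (Read-Host -AsSecureString "Password for svc-vmm") -Enabled $true
```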
At last !
This was something I was missing in our private cloud setup: we imported Management Packs for most of the environment’s building blocks,
except for the Storage Spaces part, as Microsoft had not released anything until now.
As a workaround, we built and imported a custom-written MP (ours was based upon known event IDs listed in the Storage Spaces FAQ); however,
this was pretty limited in functionality, as it contained only a few SCOM rules for the listed event IDs.
The newly released MP does a lot more: storage spaces and file share health, monitors to reset state, and, also very welcome, health rollup.
Good to know is that this management pack supports up to:
- 16 storage nodes
- 12 storage pools
- 120 file shares
Before installing this new MP, please review the supported configurations and prerequisites:
- Virtual Machine Manager: 2012 R2 with Update Rollup 4 or later installed
- Windows Server File Servers: 2012 R2 with KB3000850 (November 2014 update rollup) or later
The following requirements must be met to run this management pack:
- The Operations Manager connector for Virtual Machine Manager is installed and configured (configuring this connection installs the required VMM Management Packs).
- The Storage Spaces are managed by Virtual Machine Manager.
- KB2913766 (“Hotfix improves storage enclosure management for Storage Spaces”) is installed on the VMM server and the file server nodes.
You can download the MP here: http://www.microsoft.com/en-us/download/confirmation.aspx?id=46832
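Once downloaded, the MP can be imported with the Operations Manager PowerShell module; a minimal sketch (the file path and MP file name below are placeholders for wherever you saved the download):

```powershell
# Path and file name are placeholders for the downloaded MP
Import-Module OperationsManager
Import-SCOMManagementPack -Fullname "C:\MPs\StorageSpaces.mp"
```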
After installation of the private cloud fabric, we noticed SMB Client 30308 alerts on one of our Hyper-V nodes when live migrating VMs.
Microsoft has decades-long experience building enterprise software and running some of the largest online services in the world. It has leveraged this to implement and continuously improve security-aware software development, operational management, and threat mitigation practices that are essential to the strong protection of data in the cloud.
Security is built into Azure from the ground up, starting with the Secure Development Lifecycle, a mandatory development process that embeds security requirements into every phase of the development process.
Microsoft ensures that the Azure infrastructure is resilient to attack by mandating that our operational activities follow the rigorous security guidelines laid out in the Operational Security Assurance (OSA) process.
It comes after an active week for Docker, which on Tuesday received a huge equity investment of $95 million, which the company said it will use in part to further its collaborations with partners including Microsoft, Amazon Web Services and IBM. Microsoft also just announced that Docker containers are coming to Hyper-V and Windows Server.
Today I received a customer notification that their Windows Azure Pack Portal for Tenants was no longer available and the webpage was showing a ‘500 Internal Server Error’.
This error code is pretty generic, but we soon noticed on the ADFS side that the certificate for signing the tokens had been automatically renewed. As the new ADFS signing certificate’s public key is embedded in the metadata file (https://server/federationmetadata/2007-06/federationmetadata.xml), the WAP portal could no longer verify the tokens, which resulted in the 500 error.
More info about automatic renewal of ADFS certificates can be found here: https://technet.microsoft.com/en-us/library/dn781426.aspx
As described on TechNet:
If AutoCertificateRollover is set to TRUE, the AD FS certificates will be renewed and configured in AD FS automatically.
Once the new certificate is configured, in order to avoid an outage, you must ensure that each federation partner is updated with this new certificate.
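To check whether automatic rollover is active and inspect the current token-signing certificates, the ADFS PowerShell module can be used; a minimal sketch:

```powershell
# Is automatic certificate rollover active, and how long do certificates live?
Get-AdfsProperties | Select-Object AutoCertificateRollover, CertificateDuration

# List the token-signing certificates (during rollover, both old and new appear)
Get-AdfsCertificate -CertificateType Token-Signing

# After a rollover, each relying party (such as the WAP portals) must pick up
# the new public key from the federation metadata again - for WAP this is done
# with the MgmtSvcConfig module (see Get-Help Set-MgmtSvcRelyingPartySettings)
```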
If you’re new to Azure, you will (at some point) have to deal with upgrade & fault domains.
Here is some information on what both are and what they are used for.
An upgrade domain is a logical unit used to group role instances in a cloud service for updating purposes. By default, a cloud service has up to five upgrade domains. As you increase your role instance count, instances will be allocated to the next subsequent upgrade domain. As an example, if you have 7 instances for a web role, then upgrade domains 0 through 1 will have two instances, and upgrade domains 2 through 4 will have one instance. When updates to the cloud service are applied, Azure will roll through the upgrade domains applying the update to one upgrade domain at a time. This insures that only a minimum number of instances are offline during an upgrade.
A fault domain is a physical unit used to avoid a single point of failure for the cloud service. When a cloud service role has more than one instance, Azure will provision the instances in multiple fault domains. In a datacenter, you can think of a fault domain as a rack of physical servers. By spreading the deployment for cloud service roles across multiple fault domains, Azure is better able to resolve hardware failures without your service being completely unavailable.
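The round-robin spread over upgrade domains can be modeled with a few lines of PowerShell. This is a toy illustration of the 7-instance example above, not an Azure API call:

```powershell
# Toy model of the default round-robin spread of role instances
# over the default 5 upgrade domains
$instanceCount  = 7
$upgradeDomains = 5

0..($instanceCount - 1) |
    Group-Object { $_ % $upgradeDomains } |
    ForEach-Object { "UD $($_.Name): $($_.Count) instance(s)" }
# UD 0 and UD 1 get two instances each; UD 2, 3 and 4 get one
```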
What is Nano server?
Nano Server is a new installation option in Windows Server vNext that provides the smallest possible Windows footprint, significantly smaller even than Server Core. This is made possible through a significant refactoring of the operating system and is focused on two key scenarios:
- Born-in-the-cloud applications
- Cloud platform – Hyper-V and Scale-out File Servers
For other scenarios you would continue to leverage Server Core.
Nano Server is selected during Windows installation (with the other option being Server Core/Server with a GUI), and the entire GUI stack is removed along with other components. There is no option to RDP into or even log on locally to a Nano Server deployment; instead, management is done via WMI and PowerShell. Some key metrics comparing Nano Server to a regular Windows Server deployment are:
- 93 percent lower VHD size
- 92 percent fewer critical bulletins
- 80 percent fewer reboots
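Since there is no RDP or local logon, day-to-day management would go through PowerShell remoting and WMI/CIM; a hedged sketch (the host name “nano01” is a placeholder):

```powershell
# "nano01" is a placeholder host name
$cred = Get-Credential

# Remote PowerShell session instead of RDP/local logon
Enter-PSSession -ComputerName "nano01" -Credential $cred

# Or query it via WMI/CIM without an interactive session
Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName "nano01" |
    Select-Object Caption, Version
```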