Work Folders – one of the most exciting new features in Windows Server 2012 R2. It creates a lot of new possibilities for Bring Your Own Device (BYOD) scenarios by providing controlled access to data stored on the corporate network. It provides the following benefits:
Users can access only their own Work Folders from their personal computers or various devices anywhere; the data is stored on centrally managed file servers on the corporate network
Enables users to access work files while offline and sync them with the central file server when a connection is available (each device keeps a local copy of the user's folder from the sync share, the server-side root that holds the user work folders)
Work Folders can co-exist with existing deployments of Folder Redirection, Offline Files, and home folders
Security policies can be configured to instruct PCs and devices to encrypt Work Folders and use a lock screen password
Failover Clustering is supported to provide a high-availability solution
Work Folders can be published to the Internet using the Web Application Proxy functionality (also new to Server 2012 R2), enabling users to synchronize their data whenever they have an Internet connection, without the need for a VPN or Remote Desktop
Work Folders Server – a server running Windows Server 2012 R2 for hosting sync shares with user files:
Install the File and Storage Services role
Work Folders is managed through Server Manager, which provides a centralized view of sync activity
Multiple sync shares can be created on a single Work Folders Server
You can grant sync access to groups (by default, administrators don’t have access to files on the sync share)
Device policies can be defined per sync share
User devices – full functionality is available with Windows 10, Windows RT 8.1, or Windows 8.1; Windows 7, iPad, and iPhone clients are also supported
Files remain in sync across all user devices
Users work with their Work Folders like with any other folder. The only difference is that when they right-click the Work Folders icon, they get the option to force synchronization with the server and, from there, to their other devices
Users can access and use Work Folders from different devices, irrespective of their domain membership
The Web Application Proxy (WAP) servers act as an SSL termination point towards the Internet. External connections that access the Active Directory Federation Services (ADFS) farm or internal applications published via the Web Application Proxy terminate their SSL sessions at the Web Application Proxy. Unfortunately, the Windows Server 2012 R2 default settings allow a number of SSL cipher suites that are publicly known to be weak or outdated, such as SSLv3, DES encryption, and key lengths below 128 bit. The preferred server ciphers of a freshly installed and updated Windows Server 2012 R2 machine are:
SSLv3 168 bits DES-CBC3-SHA
TLSv1 256 bits AES256-SHA
From a network security standpoint it is therefore mandatory to harden the SSL settings on the Web Application Proxy servers BEFORE exposing the WAP server in the DMZ to incoming Internet connections. The best option to harden the SSL settings on a standalone Windows Server 2012 R2 is to modify the Local Group Policy: from a command line, run “gpedit.msc”. In the Computer Configuration section navigate to “Administrative Templates – Network – SSL Configuration Settings” and edit the “SSL Cipher Suite Order”:
The listed cipher suites can be exported and adjusted to the actual security requirements by deleting the unwanted ciphers from the list. As a minimum, all combinations that contain SSL2, SSL3, DES, 3DES, or MD5 elements should be deleted, as well as all combinations with a cipher length below 128 bit.
The list needs to be sorted in a way that the preferred SSL ciphers are on top.
Afterwards, create a single string of the values from the list, separating the ciphers with “,” and no blanks. Do not leave a “,” at the end of the string. The input box in the GPO dialog has a limited size; make sure that your string fits into this limit. If it does not, delete further ciphers that are not widely used.
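The filter-and-join steps above can be sketched in a few lines of Python. The cipher names and the weak-suite markers below are examples, and the 1023-character cap mirrors the documented size limit of the SSL Cipher Suite Order policy value; substitute the list exported from your own GPO.

```python
# Sketch: filter weak suites from an exported cipher list and build the
# comma-separated GPO value (no blanks, no trailing comma).
WEAK_MARKERS = ("SSL2", "SSL3", "DES", "3DES", "MD5")  # example markers

def build_cipher_string(ciphers, max_len=1023):
    """Join preferred ciphers with commas and check the GPO size limit."""
    kept = [c for c in ciphers
            if not any(m in c.upper() for m in WEAK_MARKERS)]
    value = ",".join(kept)  # no blanks, no trailing comma
    if len(value) > max_len:
        raise ValueError("cipher string exceeds the %d-character GPO limit"
                         % max_len)
    return value

# Example input list (illustrative, not a complete export):
exported = [
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_RSA_WITH_3DES_EDE_CBC_SHA",        # weak: 3DES
    "TLS_RSA_WITH_RC4_128_MD5",             # weak: MD5
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
]
print(build_cipher_string(exported))
```

This keeps the preferred order of the input list intact, so sorting the strong ciphers to the top before joining is still up to you.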
Simply activating the new local GPO by running “gpupdate /force” appears not to be sufficient. Reboot the WAP servers one by one after setting the SSL cipher policy.
However, before opening the firewall, an internal test should be executed to validate the SSL hardening. You can run the sslscan tool from another computer in the DMZ or from the WAP server itself. DNS resolution of the federation or application name must point to the external load balancer or the external interface of the WAP server. Example of the SSL server ciphers before SSL hardening (left side) and after SSL hardening (right side):
Once the Web Application Proxy server is finally connected to the Internet, a second check can be performed using one of the proven Internet-based SSL scan tools, e.g. https://www.ssllabs.com/ssltest/ .
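As an additional quick sanity check of the cipher-exclusion idea, Python's standard `ssl` module can show which suites a hardened OpenSSL filter string leaves enabled. Note that this inspects the local OpenSSL build, not the Windows Schannel configuration set via GPO; it only illustrates the kind of check sslscan performs against the server.

```python
import ssl

# Sketch: list the cipher suites a hardened filter string leaves enabled.
# This checks the local OpenSSL library only - illustrative, not a
# replacement for scanning the WAP server itself.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!3DES:!DES:!MD5:!aNULL")  # exclude weak families
names = [c["name"] for c in ctx.get_ciphers()]
print(len(names), "cipher suites enabled")
```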
You can find a string of the recommended SSL ciphers for importing into the local GPO here.
We recently used Quest Migration Manager 8.10 in a project at a customer for a combined Active Directory and Exchange migration. The overall goal was to integrate a Windows 2003 domain, cross-forest and cross-org, into the central AD forest with several child domains. Since, from the mail perspective, our migration source was Exchange 2007 and our migration target Exchange 2013, we decided to use the Native Move Job option along with the Migration Manager for Exchange Agent (MAgE) services.
The customer environment looked like the following:
Source domain in a single-domain forest with domain controllers on Windows 2003 and Exchange 2007 as the mail system.
Target domain was one of several child domains in the central forest. All domain controllers were running Windows 2012 R2, and the mail system was Exchange 2013 SP1.
All Exchange 2013 servers had been deployed to the root domain, which also held all important system and admin accounts.
To limit complexity in the setup of Quest Migration Manager 8.10, we decided to use a single administrative account from the target forest's root domain and granted it all necessary permissions in the domains to run both the Active Directory and the Exchange migration. Only for access to the source Exchange 2007 when running the move requests did we use an account from the source domain with Org Admin permissions.
Installation of Migration Manager 8.10 on a member server in the target domain (a best-practice recommendation), including all cumulative hotfixes, went smoothly. After successful directory synchronization, we connected to the Exchange source and target organizations and finally deployed two instances of the MAgE agent for native mailbox move jobs on our agent host and console server. Note: Windows 2012 R2 is currently (May 2014) not supported for agent hosts. You have to stay with Windows 2008 R2 here.
However, after starting the agent services under our administrative account, we noticed that we could not open the agent's log file in the Log Panel of the Migration Manager for Exchange GUI. We searched for the log file and found it in the directory “C:\ProgramData\Quest Software\Migration Agent for Exchange\NativeMove”.
The log file showed that the agent did not start processing the migration collection due to missing settings and then went to sleep. The error lines:
Waiting for agent settings: Not found: (&(objectClass=serviceConnectionPoint) …..
Agent is not ready to start. Agent going to sleep at 1 minute.
repeated over and over.
Obviously the agent tried to execute an LDAP query to find a connection point in Active Directory. Note: QMM 8.10 currently uses three different systems to store configuration data: an AD LDS server, a SQL Server instance, and Active Directory (AD DS).
Service Connection Point (SCP):
We ran the query which was shown in the log file against the target domain and we could find the Service Connection Point (SCP) immediately in the System container of the domain naming context.
The Service Connection Point consists primarily of the keywords array attribute and the serviceBindingInformation attribute. The QMM MAgE agent looks up the serviceBindingInformation attribute to get its SQL connection properties. In SQL Server it finally finds all the information needed to process the collection.
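The lookup described above can be sketched as plain string handling: build the filter the agent logged, then split a binding value into SQL connection properties. The keyword value and the "server;database" binding format below are hypothetical placeholders, since the real values are internal to QMM.

```python
# Sketch of the agent's SCP lookup. The keyword "QMM-MAgE" and the
# binding format "sqlserver\instance;database" are hypothetical
# placeholders - the actual values are Quest-internal.

def scp_filter(keyword):
    # objectClass=serviceConnectionPoint appears verbatim in the agent log
    return "(&(objectClass=serviceConnectionPoint)(keywords=%s))" % keyword

def parse_binding(binding):
    """Split a hypothetical binding value into SQL connection properties."""
    server, database = binding.split(";", 1)
    return {"sql_server": server, "database": database}

print(scp_filter("QMM-MAgE"))
print(parse_binding(r"sqlhost\QMM;MigrationDB"))
```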
We do not know why the developers at Dell Software made this process so complex. In our setup, however, the agent could not find the Service Connection Point because it was looking in the domain where its service account was located (the root domain of the forest), while the agent host had created the SCP during installation in the child domain of which its computer account was a member.
Switching the agent host and agent service account to an account from the child domain would have been a solution, but it was not in compliance with the customer policy to host all system accounts in the root domain.
Moving the agent host and console to the root domain would not have met best practices and would have interfered with the running directory synchronization.
So we ended up giving the agent just what it requested:
We manually created a Service Connection Point in the root domain and copied all serviceBindingInformation values over.
The agent started immediately and worked without errors.
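One way to perform such a manual SCP creation is to describe the object in an LDIF file and import it, e.g. with the Windows `ldifde -i -f scp.ldif` tool. The sketch below generates such a file; all DNs, keywords, and binding values are hypothetical placeholders that would in practice be copied from the existing child-domain SCP.

```python
# Sketch: generate an LDIF describing a serviceConnectionPoint that
# mirrors the binding values of an existing SCP. All DNs and attribute
# values are hypothetical placeholders; on Windows the file could be
# imported with: ldifde -i -f scp.ldif

def scp_ldif(dn, keywords, binding_info):
    lines = ["dn: %s" % dn,
             "changetype: add",
             "objectClass: serviceConnectionPoint"]
    lines += ["keywords: %s" % k for k in keywords]
    lines += ["serviceBindingInformation: %s" % b for b in binding_info]
    return "\n".join(lines) + "\n"

ldif = scp_ldif(
    "CN=QmmMage,CN=System,DC=root,DC=example,DC=com",  # hypothetical DN
    ["QMM-MAgE"],                                      # hypothetical keyword
    [r"sqlhost\QMM;MigrationDB"],                      # hypothetical binding
)
print(ldif)
```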
For a future design we can only recommend storing the Service Connection Point in the Configuration partition, as Exchange and lots of other software do. Using the domain naming context will always lead to problems in a big enterprise environment with an Active Directory forest consisting of multiple domains.
The mixture of PowerShell versions we find in customer environments will get wider and wider. The same is true for modules like ActiveDirectory or snap-ins for Exchange. When a script is planned to be used universally, it will need to start with a lot of checks.
Following the communities on the web, we found surprising news from Microsoft. In a letter to the achievers of the Microsoft Certified Master program, Microsoft announced the decision to cancel (“pause”) the Master certification track while leaving the title valid for now. Find more information on Devin Ganger's blog. Microsoft's arguments for stopping the program include the costs of the track, poor contribution from the MCM community, and the obvious existence of “non-technical” barriers for many candidates, such as the extensive costs and the English-only approach. Microsoft's Tim Sneath: “We want it to be an elite community, certainly. But some of the non-technical barriers to entry run the risk of making it elitist for non-technical reasons. Having a program that costs candidates nearly $20,000 creates a non-technical barrier to entry. Having a program that is English-only and only offered in the USA creates a non-technical barrier to entry. Across all products, the Masters program certifies just a couple of hundred people each year, and yet the costs of running this program make it impossible to scale out any further.” Find the full text here.