Exchange Migration first & Mailbox Folder Permissions

In many inter-forest migration projects, the mailbox migration to the new Exchange organization in the new forest is performed first, and the user migration follows in a separate step at a later time. After the mailbox migration and before the user objects are migrated, you have a classic Resource Forest scenario: users are created as disabled user accounts in the target forest and receive a linked mailbox that is connected to the source Active Directory user. The important AD attribute in this scenario is msExchMasterAccountSID. This attribute of the disabled target user object holds the objectSID of the source user account, which allows the active source user object to access its own mailbox and shared mailbox resources (delegate permissions etc.).
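To see this relationship, a quick PowerShell check can help. A minimal sketch, assuming the ActiveDirectory module; the account name and DC names are hypothetical:

# Read the source user's objectSID and compare it with the
# msExchMasterAccountSID of the linked mailbox user in the target forest.
Import-Module ActiveDirectory

$sourceUser = Get-ADUser -Identity "UserA" -Server "dc01.sourcedomain.local"
$targetUser = Get-ADUser -Identity "UserA" -Server "dc01.targetdomain.local" -Properties msExchMasterAccountSID

# For a linked mailbox, both values should be identical:
$sourceUser.SID.Value
$targetUser.msExchMasterAccountSID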

[Screenshot: ExMigirst01]

Did you ever think about mailbox folder permissions in this scenario?

For every migrated folder permission (e.g. with QMM for Exchange), and also every time a user manually adds mailbox folder permissions for another (not yet Active Directory migrated) user in the target mailbox, the SID of the source user object is added to the mailbox folder permissions. In this example, we've selected the not yet migrated user UserA from the Global Address List and added him as delegate for the Inbox and the Calendar of the Info MBX: [Screenshot: ExMigirst02]
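From the Exchange Management Shell, the same delegation could be granted roughly like this (a sketch; mailbox name, user and access rights are examples, not the exact values from the project):

# Grant UserA delegate access on two folders of the Info mailbox
Add-MailboxFolderPermission -Identity "Info:\Inbox" -User UserA -AccessRights Reviewer
Add-MailboxFolderPermission -Identity "Info:\Calendar" -User UserA -AccessRights Editor

Because UserA's target account is still a disabled linked user at this point, the ACE written to the folder carries the source SID, exactly as described above.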

At some point, the Active Directory migration will start. During this process, the user account in the target domain is activated and the linked mailbox is converted to a user mailbox. This action clears the attribute msExchMasterAccountSID, which is necessary because the target account will now be used to access its own mailbox and the resources of other mailboxes. If a migrated user is now added to the mailbox folder permissions, the target SID is added instead of the source SID. Let's use the mailbox example above and add the migrated user UserB as an additional delegate for the Calendar of the Info MBX: [Screenshot: ExMigirst03]

In this example, UserB will of course not have any problems accessing the Info MBX. But what happens when UserA is migrated and starts accessing the Info MBX as TARGETDOMAIN\UserA? The SID of the target account has no permissions on the Inbox and Calendar folders. Will UserA lose access to these folders now? Generally, the answer is YES, UserA will lose access! But…

In Active Directory migration projects, it is best practice to migrate the SIDHistory to the target user account. In this case, the objectSID of the source user is copied to the attribute sIDHistory of the target account. For our example, it means that UserA will not lose access to the Info MBX, because his access token contains the source SID, which has permissions on the Inbox and Calendar folders of the Info MBX.
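A quick way to verify this (again a sketch with hypothetical names):

# Check that the target account carries the source objectSID in sIDHistory
Get-ADUser -Identity "UserA" -Server "dc01.targetdomain.local" -Properties sIDHistory |
    Select-Object -ExpandProperty sIDHistory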

SIDHistory Cleanup & Mailbox Folder Permissions?

Clearing SIDHistory is part of most migration projects. Before clearing the sIDHistory attribute of the target accounts, it is required to replace the source SIDs with the corresponding target SIDs inside the mailbox folder permissions. This process is called ReACLing. Without this step, many users will lose access to shared mailbox resources once the SIDHistory attribute is cleared.


Exchange Processing Wizard (Part of Dell Migration Manager for Active Directory)

Dell Migration Manager for Active Directory contains the Exchange Processing Wizard. This wizard is able to replace existing source SIDs with the matching target SIDs for permissions inside the Exchange environment. The wizard uses the matching information in the QMM AD LDS database created during the directory synchronization.

To ReACL permissions inside the mailboxes, we have to select the option “Update client permissions”:

[Screenshot: ExMigirst04]

Now we can choose to process all Public Folders and Mailboxes, select individual Mailboxes or Public Folders, or even skip Public Folders or Mailboxes completely:

[Screenshot: ExMigirst05]

The wizard provides the option to process only one server or to process multiple servers in parallel.

Known limitation of the Exchange Processing Wizard:

The wizard is unable to set the Free/Busy permission "Free/Busy time, subject, location". After processing, the permission is changed to "Free/Busy time" only:

[Screenshot: ExMigirst06]


Good to know – Check real SID behind folder permissions

Get-MailboxFolderPermission: Unfortunately, as long as SIDHistory is set for a user, Exchange will always resolve the permissions to the target account. So Exchange will always show TARGETDOMAIN\User, although in fact the source SID holds the permissions on the mailbox folder. You will see the same result if you query folder permissions via EWS (e.g. with EWSEditor).
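For illustration, a query against our example mailbox might look like this (a sketch, reusing the hypothetical names from above):

# Exchange resolves each ACE to the target account, even when the
# underlying SID in the ACE is still the source SID (kept alive via SIDHistory).
Get-MailboxFolderPermission -Identity "Info:\Calendar" |
    Select-Object FolderName, User, AccessRights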

MFCMAPI:

To check which SID is really behind the permission, you can use MFCMAPI to access the mailbox.

  1. Create a new profile for the mailbox and disable Exchange Cached Mode.
  2. Start MFCMAPI.
  3. Click Session -> Logon and choose the profile you created in step 1.
  4. Double-click the mailbox entry and navigate to the folder for which you want to display the permissions.
  5. On the right side, double-click PR_NT_SECURITY_DESCRIPTOR.

In the Smart View pane, you can see which SID is really behind each Access Control Entry.

QMM 8.10 error: Agent is not ready to start – SCP not found

We recently used Quest Migration Manager 8.10 in a project at a customer for a combined Active Directory and Exchange migration. The overall target was to integrate a Windows 2003 domain, cross-forest and cross-org, into the central AD forest with several child domains. Since, from the mail perspective, our migration source was Exchange 2007 and our migration target Exchange 2013, we decided to use the Native Move Job option along with the Migration Manager for Exchange Agent (MAgE) services.

Situation:

The customer environment looked like the following:
  • Source domain in a single-domain forest with domain controllers on Windows 2003 and Exchange 2007 as the mail system.
  • Target domain was one of several child domains in the central forest. All domain controllers ran Windows 2012 R2, and the mail system was Exchange 2013 SP1.
  • All Exchange 2013 servers had been deployed to the root domain, which also held all important system and admin accounts.
To limit complexity in the setup of Quest Migration Manager 8.10, we decided to use a single administrative account from the target forest's root domain and granted it all necessary permissions in the domains to run both the Active Directory and the Exchange migration. Only for access to the source Exchange 2007 when running the move requests did we use an account from the source domain with Org Admin permissions.

[Screenshot: Setup for Native Move Job]

Installation of Migration Manager 8.10 on a member server in the target domain (best practice recommendation), including all cumulative hotfixes, went smoothly. After successful directory synchronization, we connected to the source and target Exchange organizations and finally deployed 2 instances of the MAgE agent for native mailbox move jobs on our agent host and console server. Note: Windows 2012 R2 is currently (May 2014) not supported for agent hosts. You have to stay with Windows 2008 R2 here.

Problem:

However, after starting the agent services under our administrative account, we noticed that we could not open the agent's log file in the Log Panel inside the Migration Manager for Exchange GUI. We searched for the log file and found it in the "C:\ProgramData\Quest Software\Migration Agent for Exchange\NativeMove" directory.

[Screenshot: Log snippet from MAgE agent – scp not found]

The log file showed that the agent was not starting to process the migration collection due to missing settings, and then went to sleep. The error lines

 

Waiting for agent settings: Not found: (&(objectClass=serviceConnectionPoint) …..

Agent is not ready to start. Agent going to sleep at 1 minute.

repeated over and over.

Obviously the agent tried to execute an LDAP query to find a connection point in Active Directory.
Note: Currently QMM 8.10 uses 3 different systems to store configuration data: an AD LDS server, a SQL Server instance, and Active Directory (AD DS).

Service Connection Point (SCP):

We ran the query shown in the log file against the target domain and found the Service Connection Point (SCP) immediately in the System container of the domain naming context.

[Screenshot: QMM_8.10_SCP]

The Service Connection Point consists primarily of the keywords array attribute and the serviceBindingInformation attribute. The QMM MAgE looks for the serviceBindingInformation attribute to get its SQL connection properties; in SQL it finally finds all the information needed to process the collection.
[Screenshot: QMM_8.10_SCP_3]
We do not know why the developers at Dell Software made this process so complex. However, in our setup the agent could not find the Service Connection Point, because it was looking in the domain where its service account was located, which was the root domain of the forest, while the agent host had installed the SCP during installation in the child domain of which the computer account was a member.
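For reference, a search like the one in the agent log can be reproduced with the ActiveDirectory module. A sketch; the exact filter in the log snippet is truncated, so we simply search by object class, and the DC name is an example:

# Look for service connection points in the System container of a domain
$server   = "dc01.child.example.com"
$domainDN = (Get-ADDomain -Server $server).DistinguishedName

Get-ADObject -Server $server -SearchBase "CN=System,$domainDN" `
    -LDAPFilter "(objectClass=serviceConnectionPoint)" `
    -Properties keywords, serviceBindingInformation |
    Select-Object Name, serviceBindingInformation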

Solution:

Switching the agent host and agent service account to an account from the child domain would have been a solution, but it was not in compliance with the customer policy to host all system accounts in the root domain.
Moving the agent host and console to the root domain would not have met best practices and would have interfered with the running directory synchronization.

So we ended up giving the agent just what it requested:
We manually created a Service Connection Point in the root domain and copied all serviceBindingInformation values over.
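A sketch of that manual step in PowerShell terms (the DNs are hypothetical, and the SCP name placeholder must be replaced with the real object name from your installation):

# Read the SCP the installer created in the child domain ...
$src = Get-ADObject -Server "dc01.child.example.com" `
    -Identity "CN=<QMM SCP name>,CN=System,DC=child,DC=example,DC=com" `
    -Properties keywords, serviceBindingInformation

# ... and clone it into the System container of the root domain
New-ADObject -Server "dc01.root.example.com" -Name $src.Name `
    -Type serviceConnectionPoint `
    -Path "CN=System,DC=example,DC=com" `
    -OtherAttributes @{
        keywords                  = [string[]]$src.keywords
        serviceBindingInformation = [string[]]$src.serviceBindingInformation
    }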

The agent started immediately and worked without errors.

For a future design, we can only recommend storing the Service Connection Point in the Configuration partition, as Exchange and lots of other software do. Using the domain naming context will always lead to problems in a big enterprise environment with an Active Directory consisting of multiple domains in a forest.

 

How to write or migrate sidHistory with Powershell (3)

In our large-scale Active Directory cross-forest migration project, we have now migrated 40,000 user accounts globally. Our self-made scripting routine to migrate/write sidHistory into the target accounts has turned out to be a robust, reliable part of the process, and I feel safe now to share some experiences. We are running it on multiple migration servers around the globe as a scheduled task – which you can easily call a "service", as it runs every 5 minutes.
I will write about the whole mechanism of how we automated our large-scale Active Directory migration in another blog post, but here I will concentrate on sharing our way of managing the sidHistory part.
As you already know from part 2 of this blog post, we built our code on the examples that MSFT Jiri Formacek published here.

However, 2 main restrictions prevented us from using this code as is:

  1. We wanted to make sure that we really used the Domain Controller holding the PDC Emulator role in the source domain. Our source environment has 100+ domain controllers, and the PDC role is switched from one DC to another under certain conditions. Therefore, using a fixed name for the PDC Domain Controller was not acceptable.
  2. Our Active Directory account migration process was fully automated, and it was the user who started his/her migration, not us. Therefore the requirement was that we could only run the sidHistory migration (together with the account activation in the target domain) as a continuous background service. A session-based approach like the one found in ADMT or Dell Migration Manager for Active Directory would not have helped.
    Prepopulating sidHistory on the previously created disabled accounts in the target domain was not an option, since Exchange 2010 was giving errors for disabled users with the sidHistory of active source users under certain circumstances.

Solutions:
1) This was not a big thing. A small function could do the trick.

function getPDCE($domain) {
    # Build a directory context for the given DNS domain name and ask .NET
    # for the current holder of the PDC emulator FSMO role.
    $context = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext("Domain", $domain)
    $PDC = [System.DirectoryServices.ActiveDirectory.Domain]::GetDomain($context).PdcRoleOwner.Name
    return $PDC
}
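Calling the function then resolves the current PDC emulator name at run time, for example:

# Resolve the PDC emulator of the source domain right before cloning SIDs
$sourcePDC = getPDCE "sourcedomain.local"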

2) This was not that easy (for us). Running our account migration script as usual – as a scheduled task with admin credentials – did not work for the sidHistory part, since the credentials of the logged-on account were not handed over to the SIDCloner routine.
All the code we could find on Jiri's page asked for credential information interactively or needed explicit credentials in the script in some other way.
Although we package our Powershell scripts into an .exe file by using SAPIEN Powershell Studio and could hide the password from simple file editing, putting user name and password into the script was not an acceptable way for us to go.
After testing back and forth, someone came up with the idea of using the Windows Credential Manager to work around our deadlock situation.
The script would access the Credential Manager interface, get the credential information from there, and then pass it to the DsAddSidHistory function.
We created a function to retrieve credentials from the Credential Manager store, based on a very good script example to be found on TechNet here.
While this seemed to be a clever way of achieving a scheduled user account activation script with sidHistory functionality, we ran into errors again. Retrieving credentials from Credential Manager by script obviously fails when the script runs under exactly the credentials that you want to retrieve. This was true in our case, because the user account migration script was scheduled with that "big admin" account.

The solution finally was:
The user account migration script runs as a scheduled task with full admin credentials. When it comes to migrating (in our project setup: activating) a user account in the target domain, it does not (cannot) write sidHistory, but creates an input file with the user name and the target DC (the DC closest to the site the user logged in from – remember that the user triggers his/her migration in our project).
On the same migration server, a second script is scheduled with a server-local admin account. This script consists of 3 parts. The first part checks whether there are new input files. The second part retrieves the full admin credentials from Credential Manager and passes them on. The third part migrates sidHistory, which succeeds because now all the pieces for the SIDCloner routine are together:
  • the PDC Emulator DC of the source domain, found by query,
  • the target DC from the input file (but you can take any writable DC if replication delay does not matter),
  • the explicit credentials from Credential Manager.
Nowhere in either script is password information saved in clear text.
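To sketch the credential part of the second script: the TechNet example we used wraps the native Credential Manager API; with the CredentialManager module from the PowerShell Gallery, the same step would look roughly like this (the target name "QMM-BigAdmin" is a hypothetical entry created beforehand in the store of the account running the script):

# Assumes the CredentialManager module; the original project used a
# script-based wrapper around the native API instead.
Import-Module CredentialManager

# Returns a PSCredential that is then handed over, together with the
# source PDC and the target DC from the input file, to the SIDCloner routine.
$cred = Get-StoredCredential -Target "QMM-BigAdmin"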

Additional Information

Moving servers between domains with Powershell v3 add-computer commandlet

Background
Migrating computers as part of an Active Directory migration has 2 aspects. There is an Active Directory object migration, as with user and group objects. In addition, you have to disjoin the Windows computer from the old domain and join it to the new domain, which requires modifying the workstation or server OS.
The simplest way to move a computer between domains is of course to use the GUI and change the domain or workgroup association of the computer in the system property settings manually.
However, this is not a solution for migration projects where you want to move many computers at the same time without logging on to the machines interactively.

Requirements
In our current large-scale migration project, we have to deal with multiple thousands of servers that have to be migrated to the new domain. While the client computers run through an SCCM-controlled imaging process, the server domain move is the task of the server admins. Some of the server admins are responsible for large test and development environments with hundreds of servers.
To ease the one-time task of domain migration for those administrators, our idea was to implement a remote service utility which can migrate servers to the new domain in bulk mode.
While we would operate and maintain the service centrally, the server admins should decide which servers to migrate and when to migrate them. Another requirement was to leave the servers as they are, without installing QMM agents that could interfere with running applications.

[Diagram: server move automat]


Solution

To be as flexible as possible, we chose the add-computer Powershell commandlet for our scripting solution. (We ended up with a 320-line script combining multiple modifications during the server move.) Server owners place a config file on a share, and the script server periodically scans the share for new config files and processes the server names.
While the final script contains multiple functions, the core function with the add-computer commandlet to disjoin/join the computer can be found here:

function domain_move($compacc, $fqdn) {
    # Credentials for joining the target domain (encrypted password read from disk)
    $username_joinTarget = "TARGETDOMAIN\SERVICEACCOUNT"
    $password_joinTarget = Get-Content "d:\scripts\server_move\JoinTarget.txt" | ConvertTo-SecureString
    $cred_JoinTarget = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username_joinTarget, $password_joinTarget

    # Credentials for disjoining the source domain
    $username_unjoinSource = "SOURCEDOMAIN\SERVICEACCOUNT"
    $password_unjoinSource = Get-Content "d:\scripts\server_move\UnjoinSource.txt" | ConvertTo-SecureString
    $cred_UnjoinSource = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username_unjoinSource, $password_unjoinSource

    $Error.Clear()
    Try {
        Add-Computer -ComputerName $compacc -DomainName $TARGETDOMAIN -Credential $cred_JoinTarget `
            -UnjoinDomainCredential $cred_UnjoinSource -Server $TargetDC -PassThru -Verbose
    }
    Catch { return $false }

    # Give the join a moment to settle, then reboot the server in a separate step
    Start-Sleep -Seconds 10
    Restart-Computer -ComputerName $fqdn
    return $true
}

The variables $compacc and $fqdn come from the main part of the script as parameters when calling the function:
$compacc = "sAMAccountName of the computer to migrate"
$fqdn = "fully qualified domain name of the computer to migrate"
The text files with the encrypted passwords are located in the same directory as the executable or .ps1 script.

Discussion
The add-computer commandlet was introduced with Powershell 2, but had the restriction that you could only migrate the local computer; you needed Powershell Remoting to make it useful for other computers.
With Powershell version 3 the parameter –ComputerName was added, which allows you to address remote computers for the domain move.
Note: This parameter does not rely on Powershell Remoting.
Another important parameter for us is –Server, which defines the target DC that will control the domain join operation. Since we create the computer object in the target domain in a specific OU in advance in the same script, it is important not to run into replication delays (trying to join while the computer object has not yet replicated). The –Server parameter, which never worked properly in Powershell version 2, did its job for us as long as we used the FQDN of the Domain Controller as the value.
Note: If you cannot succeed with the domain\DC value for the –Server parameter as listed in the TechNet article, try the FQDN instead.
A remarkable caveat of the add-computer commandlet for a server domain move is the required explicit input of domain credentials for the disjoin and join actions. Even when the account running the script or scheduled task holds all necessary permissions, you still have to pass account and credentials to make the domain join work. We suppose that this is a WMI restriction and that WMI code underlies the commandlet; check out the WMI commands below. To overcome this limitation, we stored the encrypted passwords in 2 separate text files and only listed the service accounts in the script code. In the final version the script code was transformed into an .exe file by using Powershell Admin Studio by SAPIEN Technologies.
The add-computer commandlet also provides a parameter –Restart. We cannot recommend using this parameter, because it might trigger the reboot too fast, which can lead to RPC connection errors after the reboot. We recommend setting a sleep time of multiple seconds and triggering a separate restart-computer commandlet, which provides multiple options and restart dependencies.
We do not use the –Path parameter but create the computer account in a separate function.
For the full set of parameters, please check TechNet.

Alternatives to the Add-Computer commandlet

Quest Resource Updating Manager
If you have deployed Migration Manager for Active Directory in your migration project, you can create collections for computers that should undergo a domain move. The collections can be filled by import scripts, so that you can achieve a semi-automatic solution.
[Screenshot: QMM Resource Updating Manager]
While the main purpose of QMM Resource Updating Manager is to prepare the resources (file shares, local groups, registry, etc.) for the domain move (which requires either installing agents or deploying vmover scripts), it also has an option to move computers remotely without installing agents.
[Screenshot: QMM Resource Updating Manager]


NETDOM

Another option is the NETDOM JOIN legacy command, which has been around since Windows NT 4.
http://technet.microsoft.com/de-de/library/cc772217(v=ws.10).aspx
(To use netdom, you must run the netdom command from an elevated command prompt.)

WMI
Another way is to go WMI-native and use the commands that might underlie the add-computer commandlet. However, we find WMI a bit "clumsy" for this purpose (we like it easy).
Example:
$currentserver = gwmi -ComputerName $Computer -Class "Win32_ComputerSystem" -Authentication 6
# 33 = JOIN_DOMAIN (1) + ACCT_CREATE (32): join the domain and create the computer account
$currentserver.JoinDomainOrWorkgroup($newdomain, $password, $username, $Null, 33)

Quest Migration Manager for Active Directory – password error when synchronizing user objects – part 2

In part 1 of this post we explained why the QMM Directory Sync Agent (DSA) might run into problems when synchronizing user passwords that have been reset with administrative credentials to a value which is present in the password history. In this post we will show how to identify affected user accounts and how to work around the issue.
As we learned in the first part, there are 3 good methods to identify the password synchronization errors:

  • QMM AD GUI – the failed objects link on the Status page of the Active Directory synchronization
  • QMM Error Reporter Utility – a Quest utility you can download from the support site
  • DSA log file parsing – you can parse the log files with any good parser/scripting engine (see the sketch below)
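A minimal parsing sketch for the third method (the log path and message pattern are assumptions, based on the error snippet shown in part 1):

# Collect the source user names of all failed password syncs from the DSA log
$dsaLog = "C:\Program Files\Quest Software\Migration Manager\DSA\dsa.log"   # example path

Select-String -Path $dsaLog -Pattern 'Error 0x8007052d' | ForEach-Object {
    if ($_.Line -match 'source user: "([^"]+)"') { $Matches[1] }
} | Sort-Object -Unique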

Methods of resolving the password synchronization problem:

1. User changes password
The simplest approach to solving the problem is the user himself, maybe after contacting the user: the user changes the password of his Active Directory account the default way (e.g. via CTRL+ALT+DEL). Changing the password this way ensures that the password policy of the domain is enforced (instead of bypassed via admin reset). Assuming that the password policies of source and target domain are aligned, the Quest Active Directory Synchronization Agent (DSA) will successfully set the new password on the target user account.

2. User is forced to change password
Another method, similar to 1., is to force the user to change the password by setting the "User must change password at next logon" flag. This can be done with ADUC for single users.

[Screenshot: user_must_change_password]

However, when it comes to mass operations, you can achieve the same goal by setting the attribute pwdLastSet to 0 programmatically, using Powershell, VB, etc. (see the sketch below).
Approaches 1. and 2. have in common that you have to make sure that users do not call the help line and ask for an admin reset back to their "usual" password.
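A bulk variant could look like this (a sketch, assuming the ActiveDirectory module and a plain text file of sAMAccountNames):

# Force "User must change password at next logon" for a list of users
Import-Module ActiveDirectory

Get-Content "C:\temp\pwdsync_failed_users.txt" | ForEach-Object {
    Set-ADUser -Identity $_ -Replace @{pwdLastSet = 0}
}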

3. Temporary Fine Grained Password Policy controlled by a DSA parser script
Our customers often complain that they do not like to tell users to change their passwords with messages like "your current password is not compliant with corporate policies – please change it". Educated users will ask: "How come you know my password? We have been told admins do not know users' passwords…"
Well, to work around this situation, a new approach is possible if your target Active Directory domain is Windows 2008 or higher.
The plan:

  • Increase the DSA log file size to make sure you have a full DSA cycle in the log (optional). A full cycle will always work through the failed objects queue once and list the password sync errors.
  • Create a group in the target domain that will contain the user objects with password sync errors.
  • Create a Fine Grained Password Policy (FGPP) in the target domain that contains the same password settings as the default domain policy, with the exception of the password history, which is set to zero.
  • Assign the FGPP to the domain group.
  • Create a script that parses the DSA log and fills the group. Empty the group before filling it, to remove already processed accounts.

As you can see, the idea is to allow DSA, temporarily and only once, to bypass the password history setting for the users with password sync problems. This way the password transfer is possible, and a later user migration will not end in a logon error for these users.
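A sketch of the FGPP part of the plan (group and policy names are examples; align the non-history settings with your default domain policy):

# Group that the DSA log parser script fills with affected users
New-ADGroup -Name "QMM-PwdSync-Bypass" -GroupScope Global -Path "OU=Migration,DC=target,DC=local"

# FGPP that mirrors the default domain policy, but with password history = 0
New-ADFineGrainedPasswordPolicy -Name "QMM-PwdSync-FGPP" -Precedence 10 `
    -PasswordHistoryCount 0 -ComplexityEnabled $true -MinPasswordLength 8 `
    -MinPasswordAge "1.00:00:00" -MaxPasswordAge "90.00:00:00"

# Apply the FGPP to the group
Add-ADFineGrainedPasswordPolicySubject -Identity "QMM-PwdSync-FGPP" -Subjects "QMM-PwdSync-Bypass"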

From a security standpoint, one can argue that bypassing the password history setting is not advisable. We share this opinion, but we have to recognize that the bypassing already started in the source domain. We neither improve the situation during the migration, nor do we make it worse. But we will prevent user logon errors in the target domain later.

A scripting example (example, not more 😉 ) can be found here:

Powershell Script INPUT PWDUSER

Migration Manager for Active Directory (QMM/AD) – version 8.10 is available with Windows 2012 Domain Controller support

Dell has published version 8.10 of the migration software Quest Migration Manager for Active Directory. The new release 8.10 allows using Windows 2012 domains and domain controllers as the target infrastructure in inter-forest migration projects.

  • Migrate objects to Windows 2012 Active Directory
  • Synchronize objects with Windows 2012 Active Directory in both directions
  • Synchronize passwords with Windows 2012 Active Directory
  • Migrate SID-History to Windows 2012 Active Directory
  • Migrate computers to Windows 2012 Active Directory

However, there is one limitation to mention at this point: Windows 2012's breaking feature Dynamic Access Control (DAC) is not supported when trying to migrate from a non-DAC to a DAC permission model or from DAC to DAC.

(Source: Quest Migration Manager 8.10 Release Notes, last revised 5/7/2013)

Quest Migration Manager for Active Directory – password error when synchronizing user objects – part 1

One of the most useful features of QMM Active Directory synchronization is the ability to synchronize the passwords of user objects between Active Directory domains. While Microsoft's Forefront Identity Manager (FIM) first needs to capture the user password on the Domain Controller when the user actually changes the password, QMM can transport the password hash directly at any time. And while FIM needs to install an agent on every Domain Controller to capture the password changes, QMM places an agent "on the fly" on only one dedicated Domain Controller. This can make a big difference in large Active Directory infrastructures.
However, running a long-term, ongoing Active Directory synchronization often shows one or more errors like this (snippet from the Migration Manager GUI) and fails to update the password to the current value:

[Screenshot: pwdsync_error]

The error is a bit misleading here. QMM purely transports the password hash and therefore can neither measure the length of the user password nor prove its complexity. That means we have to deal with a password history problem. Assuming we have the same password policies in source and target domain and an ongoing password synchronization, this error should never come up, because the password history policy of the source domain would prevent the user from changing the password to a value that is still in the password history store.
But there is a second method of changing passwords: the admin reset. When an administrator changes (resets) the password on behalf of a user, he can set the password to a value that is in the password history store; an administrative reset can bypass the password policy. Our investigations showed that several users bypassed the password history policy by calling the help line…
After the administrative reset of the password in the source domain, the QMM Directory Synchronization Agent (DSA) recognizes a change of the user object's password and tries to replicate the password hash to the target domain user object. But the DSA has to go "through" the password policy check like a standard user password change, which finally results in the password error message above.

You can also find specific error codes in the DSA log file:
05/07/13 08:32:45 (GMT+01:00)     Common AcAdSwitches Error 0xe100004f. Cannot synchronize passwords, source user: “<user name>”, target user: “<user name>” Error 0x8007052d. Unable to update the password. The value provided for the new password does not meet the length, complexity, or history requirements of the domain.

In part 2 of this post, we will show ways to work around the password sync error.

QMM AD – Incorrect Directory Synchronization Agent Matching, Caching and Repairing

QMM AD stores matching data in AD LDS and in the cache DB

From our experience, the Directory Synchronization Agent (DSA) component of Quest Migration Manager for Active Directory is a reliable and powerful way to synchronize Active Directory objects and attribute data from one domain to another (and vice versa). It also has the ability to synchronize user passwords by installing a single agent on one domain controller. More than this, DSA is also responsible for mailbox creation in the Quest setup and synchronizes mailbox and Active Directory permissions.

The speed of delta synchronization (synchronizing changes of object attributes) is a combination of matching and caching. Quest DSA uses an AD LDS database to create matching objects that describe the synchronization relationship between an object in source domain and its peer in the target domain.

However, the AD LDS matching objects are most important when starting the synchronization and when performing a Full Resync. In the ongoing synchronization, DSA takes the matching information from its cache, which is a JET database located in the "…\DSA\CONFIG\Cache" directory. The cache database file can grow to a size which exceeds the size of the AD LDS database by far. If disk space matters on your DSA machine, keep an eye on the cache file size first.

[Screenshot: DSA cache files]

Solving incorrect matching

By default, Quest Active Directory sync knows 3 criteria for object matching (the way DSA decides whether it has to merge with an existing account in the target domain or create a new one): mail address, SID/sidHistory, and samAccountName. Both decisions (merge or create) have consequences, since DSA will create or modify the matching object and bind objects together that should form a unity (or not).

[Screenshot: matching_criteria]

However, we do not live in a perfect world, and situations occur where the matching goes wrong.

Real world scenario:

  1. Group A is created in the source domain, mail-enabled and filled with 10 members. It is part of the DSA migration scope.
  2. DSA picks up the group and evaluates the matching criteria. All 3 criteria are activated, and mail has the highest precedence. DSA does not find a peer, so it creates a new group A in the target domain with the e-mail address, and the link resolver fills the group membership with the target user objects. DSA also creates a matching object and updates the cache file. So far so good.
  3. Now somebody decides to create a new group B in the source domain and shifts the mail address from group A to group B, while the mail address on group A is renamed in the same step.
  4. DSA recognizes that group B exists and looks up the matching criteria. It finds a match for the mail address in group A of the target domain and sets up a matching of source group B to target group A. It also replicates the membership from source B to target A.
  5. We now have a lot of uncomfortable issues. Membership in the DLs looks different for users in the source and users in the target domain. Group A in the target has 2 entries in sidHistory, one for source group A and one for source group B. The matching attribute of group A in the target domain is now filled with the objectGUID of group B in the source domain, and the proxyAddresses including X.500 are mixed as well. Other attributes depend on your skip list settings.
  6. And we still have group A in the source. Since the sid-sidHistory matching criterion is still valid, you can end up with a flip-flop: DSA runs over the two accounts, and whenever there is a new attribute change on one of the source groups, either group A or group B is merged to group A in the target.

OK, we should try to clean up the confusion.

  1. We had better remove mail address matching in our setup, since it has problems with the domestic way of changing groups in this customer environment. We clean up all wrong values of group A in the target. We run a Full Resync (which is restricted to once per quarter).
  2. Same thing again, because the matching attribute was filled with the wrong value and the sid-sidHistory matching was still in place.
  3. We clean up again and delete the matching attribute. We modify group B in the source to trigger DSA and expect that a new group B is created in the target.
  4. Do we succeed? No. Of course not. There is a wrong matching object for group B (and group A) in AD LDS. OK. We clean up again and delete the matching objects in AD LDS.
  5. No way. The same thing happens again. No group B in the target, but a matching of group A and group B to group A in the target.
  6. This time we stop DSA, clean up group A in the target domain including the wrong entries in proxyAddresses and sidHistory, and delete the matching attributes. We delete the cache file and start with a Full Resync – and we succeed.

It's all about the cache. All the cleanup and repair actions can fail as long as the cache file still contains the wrong linking. Since a selective cleanup of the wrong object matching in the cache is not possible (anyone willing to try?), we will always need a Full Resync (of thousands of objects) to repair a single object pair with a wrong matching.

An alternative would have been to delete all 3 groups and create fresh objects. I would call it the "brute force method". Not acceptable in many cases, though.

Dell Quest Migration Tools: Readiness for Windows Server 2012 and Exchange 2013

Currently we can see a good market response to the new Microsoft server flagships, Windows Server 2012 and Exchange 2013.
Migration projects are still ahead and probably will not die out in 2013.

The following table shows the readiness of the Quest migration tool suite and answers the questions whether the tool can be installed on a Windows Server 2012, whether it can migrate Active Directory to Windows 2012, and whether it can migrate mail systems to Exchange 2013.

Microsoft has not yet released a new version of ADMT (the most recent version is still 3.2) that is fully compliant with Windows 2012 functional mode domains, nor can you install ADMT 3.2 on a Windows Server 2012 member server. Currently, a migration from a Windows 2008 R2 domain to a Windows 2012 domain with 2012 functional level can be achieved neither with the native tool (ADMT) nor with Quest Migration Manager for Active Directory.

Active Directory Product | Version | Tool installation on Windows Server 2012? | Can backup/restore AD data on / migrate data to Windows 2012 DCs?
Recovery Manager for Active Directory Forest Edition | 8.2.1 | yes | yes
Recovery Manager for Active Directory | 8.2.1 | yes | yes
Migration Manager for Active Directory | 8.9 | no | no

Migration Product | Version | Tool installation on Windows Server 2012? | Can migrate data to Exchange 2013? / Office 365?
Migration Manager for Exchange | 8.9 | no | no / yes
Migration Manager for Exchange IntraOrg Edition | 1.0.1 | no | yes
Notes Migrator for Exchange | 4.6.1 | no (no Windows 8 admin workstation) | yes / yes
Coexistence Manager for Notes | 3.4 | no | yes / yes
Groupwise Migrator for Exchange | 4.2 | no (no Windows 8 admin workstation) | yes / yes

 

Quest Migration Manager for Active Directory®: QMM vmover’s registry access blocked by security software

In our Exchange and Active Directory migration project, we recently deployed a vmover package on a large number of client computers, where QMM vmover.exe performs all resource updating locally without stressing the network. The results were quite positive, but after a while the client protection team of the customer, who run McAfee® security software on all client computers, complained about vmover activities. The security software identified vmover as an intruder and blocked its actions.
They said that vmover.exe was trying to add new keys in the McAfee® agent part of the registry. We could not believe that, but the AccessProtectionLog.txt of McAfee® provided clear evidence:

14.01.2013 14:49:37
Blocked by Access Protection rule          NT AUTHORITY\SYSTEM
C:\Program Files\Quest\vmover\Vmover.exe
\REGISTRY\MACHINE\SYSTEM\CurrentControlSet\services\McAfeeFramework\Security
Common Standard Protection:
Prevent modification of McAfee Common Management Agent files and settings
Action blocked :
Create

Our settings in vmover.ini did allow vmover to update registry keys, which means re-ACLing of permissions on registry keys, but we had no explanation why vmover would create something outside the user hive when updating user profiles.
Using Process Monitor, it was even more obvious that vmover.exe tries to create keys in the registry.

[Screenshot: vmover_createregkey]

The response from Quest Development came after a short time:

What we saw in Process Monitor did not necessarily mean that vmover actually tries to create anything there; rather, the RegCreateKeyEx function is used to enumerate the registry. There are two functions, RegOpenKeyEx and RegCreateKeyEx; both can be used to read information from a key, but the latter will create the key if it does not exist, depending on the parameters passed. RegCreateKeyEx is used by vmover for performance reasons. Also, the entire registry is processed when the process registry option is selected, and all services are enumerated this way in the registry when service processing is enabled.

With those arguments we got back to the client protection team, and after spending a beer or two, they agreed to put vmover on the McAfee® whitelist, which solved the access block problem.
Good to know how things work.