Poor man’s Active Directory backups (exports, really)

Sometimes I need a “copy” of an Active Directory domain, partition, or LDS instance. Usually this is when I remove decommissioned domains in a multi-domain forest and want to keep a record of what was in the domain when I deleted it. You can do this with LDIFDE.EXE. Here is an example command to make a full export of a domain partition:

ldifde.exe -f <path and name of export file>.ldf -d "<DN of partition>" -p subtree -x -l * -j <path to log folder>
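
For example, a full export of a (made-up) child domain partition might look like this:

ldifde.exe -f C:\Export\emea.ldf -d "DC=emea,DC=corp,DC=com" -p subtree -x -l * -j C:\Export\Logs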

I mean, you can never be too sure, right?

VirtualBox 4 and physical NICs

I use Oracle VirtualBox for much of my testing and demos. Yesterday I wanted to connect two machines in a way that let the VMs on one host talk to the VMs on the other, while keeping the traffic isolated. The easiest way I could think of was adding a second NIC to each machine (host) and connecting them with a crossover cable. One of the hosts already had a Realtek Gbps adapter, and the adapter I added used the same chip, and thus the same driver. Windows handled this fine, even though both NICs had the same device name (not connection name). Unfortunately, VirtualBox did not; it couldn’t tell the two cards apart. Looking in the config files of VMs configured for bridged networking, I discovered that VirtualBox references physical NICs on the host by device name, not by GUID. So the only fix I could think of was to rename the second NIC.

Like I said, the connection name was different and easily changed, but the device name required some tweaking. Here’s how I did it:

  1. Look up the Device Instance Path for the NIC in Device Manager.
    This is most easily done by opening the properties of the NIC from the Network Connections window; in Device Manager you cannot tell the NICs apart, since they have the same device name.
    The Device Instance Path will look something like this:
    PCI\VEN_10EC&DEV_8168&SUBSYS_816810EC&REV_01\4&98ECCA6&0&00E3
  2. Now open regedit as LOCAL SYSTEM
    This is most easily achieved by using PsExec.exe from Sysinternals. The command is:
    psexec.exe -i -s regedit.exe
  3. Navigate to the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum.
  4. Under the Enum key you will find the “path” from the Device Instance Path.
  5. Under the last part of the path, 4&98ECCA6&0&00E3 in this case, you will find a value called DeviceDesc. It will probably have a value something like this:
    @oem49.inf,%rtl8168.devicedesc%;Realtek PCIe GBE Family Controller
    The value first references the driver INF file and a variable within it. I do not know how this works in detail, but you can change it to any string value you want; the INF file will not contain the data you choose anyway. I changed mine to the name of the device with “#2” appended.
  6. Before the data in DeviceDesc can be changed, you need to give the local Administrators group (or another security principal of your choice) access to the key. With the last part of the Device Instance Path selected, hit Edit and select Permissions, then add the security principal you want.
  7. Change the value of DeviceDesc.
  8. Exit regedit.
  9. Open or refresh the Network Connections window and check that your device has the new name.
    I did not have to reboot for the change to take effect, but that may be necessary.
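
If you prefer to script steps 3 through 7, here is a rough PowerShell sketch. The instance path is the hypothetical example from step 1, and it assumes you are running as LOCAL SYSTEM (for example via psexec.exe -i -s powershell.exe) and have granted yourself access to the key as described in step 6:

$instance = 'PCI\VEN_10EC&DEV_8168&SUBSYS_816810EC&REV_01\4&98ECCA6&0&00E3'
$key = "HKLM:\SYSTEM\CurrentControlSet\Enum\$instance"

# Read and save the current value so you can revert later
$old = (Get-ItemProperty -Path $key -Name DeviceDesc).DeviceDesc
Write-Host "Current DeviceDesc: $old"

# Append "#2" after the display name, leaving the INF reference intact
Set-ItemProperty -Path $key -Name DeviceDesc -Value ($old + ' #2')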

After this procedure my “new” NIC instantly showed up in VirtualBox and was available for bridged networking. I filed a bug at the VirtualBox site, so maybe they will fix it. I do not yet know the consequences of making this change. It may break Windows networking in some subtle way, but I doubt it. You may also have trouble if you need to upgrade the device driver for the NIC, but again, I doubt it. To be safe, save the previous data of the DeviceDesc value so you can change it back later. You could also change the permissions back to what they were before. Consider this my disclaimer.

Troubleshooting Forefront Endpoint Protection 2010 Installations

I had a hand in rolling out Forefront Endpoint Protection (FEP) for a customer recently. Some of our clients did not get FEP installed even though the SCCM client was installed and working correctly; they had all prerequisites present and had successfully received the advertisement and downloaded the files from the distribution point (DP). It turned out that these clients were already running Microsoft Security Essentials (MSE), which FEP does not detect or uninstall. The solution was to manually uninstall MSE first and then wait for the next installation attempt from the SCCM client.
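
If you want to find clients that still run MSE before the next installation attempt, something like this PowerShell sketch can help; the registry locations are my assumption of where the MSE uninstall entry lives, so verify against your own clients:

# Look for an MSE uninstall entry in the usual uninstall keys
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
                 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*' -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName -like '*Security Essentials*' } |
    Select-Object DisplayName, UninstallString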

For future reference, these are the anti-malware products that FEP can detect and uninstall before it installs itself:

  • Symantec Endpoint Protection version 11
  • Symantec AntiVirus Corporate Edition version 10
  • McAfee VirusScan Enterprise version 8.5 and version 8.7
  • Trend Micro OfficeScan version 8.0 and version 10.0
  • Forefront Client Security version 1 including the Operations Manager agent

If you want to troubleshoot FEP deployments, here are some interesting log files:

  • %WINDIR%\Temp\FEP-ApplyPolicy-%COMPUTERNAME%.log
  • %ProgramData%\Microsoft\Microsoft Security Client\Support\EppSetup.log
    (This folder also contains other interesting files regarding the FEP install.)

An overview of all SCCM 2007 logfiles is available here: http://technet.microsoft.com/en-us/library/bb892800.aspx

Authentication errors on NLB cluster

I configured a two-node NLB cluster to load balance Remote Desktop Session Hosts on Windows Server 2008 R2. These were virtual servers running on VMware, so I chose multicast mode for the cluster. The cluster IP (.3) correctly resolved to the cluster multicast MAC address via ARP. The cluster formed and converged successfully.

After a few minutes I could no longer access node 2 (.2) from any other computer. The error given by Windows was “The target account name is incorrect”. This is a Kerberos authentication error (KRB5KRB_AP_ERR_MODIFIED), which means there is a mismatch between the name you entered in your request and the name of the server you are actually contacting. In other words, the name of the machine is different from the name you specified, yet the networking stack sent you to that machine. I also got another error when trying to remotely administer node 2: “The RPC server is unavailable”. How could this be?

It turns out the reason is IPv6 6to4. When both NLB nodes were given the same public IPv4 address as a secondary address on their adapters (the cluster address), they both also configured the corresponding IPv6 address for that IPv4 address on their 6to4 adapters; both machines had the same IPv6 6to4 address. Once they both registered this address in DNS, the game was up. Whenever I typed the name of node 2 in a UNC path, it randomly resolved to the IPv6 address of node 1. When the connection was attempted against node 1, the actual name of node 1 differed from what was specified in the Kerberos ticket the connecting machine got from the Domain Controller, producing the “mismatch” error, and in turn the RPC error.
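
A 6to4 address simply embeds the public IPv4 address in the prefix 2002:WWXX:YYZZ::/48, which is why two hosts holding the same IPv4 address derive the same 6to4 address. A quick PowerShell illustration, using the documentation address 192.0.2.3 as a stand-in for the cluster IP:

# Derive the 6to4 prefix for a public IPv4 address
$bytes = [System.Net.IPAddress]::Parse('192.0.2.3').GetAddressBytes()
'2002:{0:x2}{1:x2}:{2:x2}{3:x2}::/48' -f $bytes[0], $bytes[1], $bytes[2], $bytes[3]
# -> 2002:c000:0203::/48, identical on both nodes holding the cluster IP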

To work around the error I disabled 6to4 on both NLB nodes:

netsh interface ipv6 6to4 set state disabled

This is not a perfect solution, and I would much rather find a way to disable the registration of 6to4 addresses in DNS. I will have to look into this further…

Another approach would be to disable strict name checking on the hosts, in effect removing the requirement that the name of the machine match the name in the Kerberos ticket, but that is a major change with serious security implications. Also, the effect of having two machines with the same 6to4 address is unknown to me.

By the way, you might not see this error in your network or lab, since 6to4 addresses are only configured on machines with public IPv4 addresses. In other words, if you use any of the RFC 1918 ranges (10/8, 172.16/12 or 192.168/16) you will not have 6to4 addresses.

The Case of The Strange Folder Redirection Error

I was enabling Folder Redirection for some Windows 7 Professional machines, or rather, for the users of some Windows 7 Professional machines. The users already had a server-based home directory with a My Documents folder, which also contained data. The purpose of the operation was, firstly, to enable Folder Redirection, but also to merge the contents of the My Documents folder on the client machines with the My Documents folder on the network server. First, to see what kind of conflict resolution Folder Redirection had, I created a file with the same name (but different content) in both the local My Documents folder and the one on the server. After the first logon with the Folder Redirection policy active, I found this error event in the Application log on the client machine:

Log Name:      Application
Source:        Microsoft-Windows-Folder Redirection
Event ID:      502
Level:         Error
User:          <domain>\<username>
Computer:      <computername>
Description:   Failed to apply policy and redirect folder Documents to \\<servername>\<share>\<username>\Documents.
Redirection options=0x1001.
The following error occurred: Failed to copy files from C:\Users\<username>\Documents to \\<servername>\<share>\<username>\Documents.
Error details: This function is not supported on this system.

Originally I thought this was a problem with the NTFS file permissions on the file server, but these were OK. After all, other clients were redirecting their folders without problems. Since the error details didn’t give me any clue, I decided to remove my offending duplicate file. I deleted it from the client machine, and on the next logon the My Documents folder redirected without problems.

The error in the log should have a better explanation of what is happening. Folder Redirection is definitely supported on Windows 7 Professional. The Folder Redirection specific event logs didn’t contain any more information either. The error text should have said something along the lines of “File conflict; file X already exists”. Maybe in Windows 8.

Unfortunately I solved the problem before I had a chance to enable debug logging for the Folder Redirection Client Side Extension. Maybe that would have told me what the problem was. Should you want to enable debug logging for Folder Redirection you can do so with this command:

reg.exe add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics" /v FdeployDebugLevel /d 0x0f /t REG_DWORD

If you are on Windows XP/2003 or earlier this will give you a log file: %windir%\debug\usermode\fdeploy.log. If you are running Windows Vista/2008 or newer you will simply get more events in the Windows event logs.

So, should you find yourself staring at this error in the middle of the night (or any other time), see if you have any duplicate files in the folders you are trying to redirect.
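
A quick way to hunt for such duplicates is to compare file names between the local folder and the redirection target. Here is a small PowerShell sketch; the UNC path is a placeholder, and it compares names only, not relative paths:

# List file names present both locally and on the redirection target
$local  = (Get-ChildItem "$env:USERPROFILE\Documents" -Recurse -File).Name
$remote = (Get-ChildItem '\\servername\share\username\Documents' -Recurse -File).Name
Compare-Object $local $remote -IncludeEqual -ExcludeDifferent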

Happy redirecting!

UPDATE: The duplicate file I created was created by a different account than the user owning the client computer and the home directory on the server. That means the user actually owning the folders could not delete or move the duplicate file, which could also be the reason for this error. But the fact remains: it is still a very poor error message.

UPDATE 2: I have now had a chance to test the conflict resolution in Folder Redirection, and from my tests it seems that the client wins if the same file (but with different content) exists both on the server you are redirecting to and on the client. I performed the same experiment as outlined above: two files with the same name, one on the server and one on the client. This time they were both created by the user owning the client and the folder on the server. Upon the next logon the Folder Redirection policy took effect and the local files were copied to the server, merging them with the content that already existed there. But as I said, the copy of the identically named file on the server was silently overwritten by the file from the client. So now you know.

Script to find outdated computer objects in Active Directory

Computers have accounts in Active Directory and log on just as user accounts do. The “user name” of a computer is its name with a dollar sign appended, e.g. MYPC1$. The password is set by the machine when it is joined to the domain and changed by the machine every 30 days. Just as with user accounts, computer accounts get left behind in the domain when a machine is decommissioned or reinstalled under another name, etc. This leaves our directory service full of outdated user and computer objects. (Outdated in this context means “has not logged on to the domain for X days”.) This in turn makes it hard to know how many computers are actually active. So how do we battle this problem?

On Windows 2000, every DC maintained an attribute on each computer object called lastLogon. It stored the timestamp of the last logon for a computer on that DC. The “on that DC” part is important, because the lastLogon attribute is not replicated beyond the local DC. So to figure out when computer CORP-PC1 last logged on, you would have to query the lastLogon attribute on every DC in the domain and find the most recent value. This could be done, but it was tedious and time consuming. Enter Windows Server 2003.

In Windows Server 2003 a new attribute was introduced: lastLogonTimestamp. Just like lastLogon, it stores the timestamp of the last logon to the domain for a computer account, but lastLogonTimestamp is a replicated attribute, which means that every DC now knows the most recent logon time for a computer. So to figure out when CORP-PC1 last logged on to any DC in the domain, we can query any DC for the lastLogonTimestamp attribute of CORP-PC1. Very nice.
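
As an aside, lastLogon and lastLogonTimestamp both store the time as a Windows FILETIME (100-nanosecond intervals since January 1, 1601). Converting a raw value to a readable date in PowerShell looks like this (the value below is made up):

$raw = 129813217981258641   # example raw attribute value, not from a real directory
[DateTime]::FromFileTime($raw)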

I often find myself in a situation where I need to “clean house” in customers’ directories, so I created a script that uses lastLogonTimestamp to find all computers that have not logged on to the domain for X days. X varies greatly from customer to customer, but 90 days is a good start. The script is called FindOutdatedComputers.vbs, and in this post I would like to share it with you.

FindOutdatedComputers.vbs has three user-changeable variables: intTimeLimit, strDomain and strDC. These are defined at the very beginning of the code and should be the only variables you change.

intTimeLimit: the number of days since a computer last logged on to the domain

strDomain: the distinguished name of the domain; e.g. dc=mydomain,dc=com

strDC: the FQDN of a DC in the domain; e.g. dc1.mydomain.com

The script must be run with cscript.exe and takes one command line argument: a 0 (zero) or a 1 (one). 0 means just echo the machines that have not logged on since the defined limit, and 1 means echo them and also move the computers to the Outdated Computer Objects OU (which the script creates). It will never delete any information from your AD, only move computer accounts to the Outdated Computer Objects OU. If you omit the argument, the default action is to just display the results and not move anything.

For FindOutdatedComputers.vbs to work, your domain functional level must be at least Windows Server 2003. The script checks for this and warns you if you are below the needed level.

Here is the code:

https://gist.github.com/morgansimonsen/8040007
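
If you have the Active Directory PowerShell module available, a rough equivalent of the script’s report-only mode is a one-liner; consider this a sketch alongside the VBScript, not a replacement:

Import-Module ActiveDirectory
# List computers that have not logged on for 90 days
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -ComputersOnly |
    Select-Object Name, LastLogonDate
# To move them as well (the target OU must already exist):
# Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -ComputersOnly |
#     Move-ADObject -TargetPath 'OU=Outdated Computer Objects,DC=mydomain,DC=com'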

Desktop.ini customizations do not take effect

You copy a desktop.ini file into a folder to customize and maybe localize it. You have correctly set the file’s attributes to Hidden, System and Read-Only, but your customizations still do not work. To make them work you need to set the Read-Only or System attribute on the folder where the desktop.ini file resides. As I am sure you know, folders cannot really be read-only, nor can they “remember” your settings so that all new files placed in the folder become read-only as well. This flag is purely there to tell Windows that this is a special folder and that it should look for a desktop.ini file inside it.

To change the attributes from the command line type:

attrib +R +S <your folder>

From PowerShell:

Set-ItemProperty <your folder> -Name Attributes -Value "ReadOnly,System"
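
Putting the pieces together, here is a small end-to-end PowerShell sketch (the folder path and InfoTip text are just examples) that creates a folder, drops a minimal desktop.ini into it, and sets the attributes described above:

$folder = 'C:\Temp\MyFolder'
New-Item -ItemType Directory -Path $folder -Force | Out-Null

# Minimal desktop.ini: a custom tooltip for the folder
@"
[.ShellClassInfo]
InfoTip=My customized folder
"@ | Set-Content "$folder\desktop.ini"

# Hide and protect the ini file, then flag the folder itself as special
attrib +H +S "$folder\desktop.ini"
attrib +R $folder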

It took me a while to figure this out so hopefully someone can make use of this info.

Here is a link to another issue where your desktop.ini customizations do not work:

If you won’t translate RDS profiles, I will!

Out of pure frustration with the fact that the Active Directory Migration Tool (ADMT) is unable (unwilling, is my guess) to do security translation on users’ Remote Desktop Services (RDS) roaming profiles, I decided to take matters into my own hands and created the script below. It is not very refined just now, but I have a lot of ideas for future versions. In the meantime, if you can use it for something: great!

# TranslateRDSProfiles.ps1
# Morgan Simonsen
# http://morgansimonsen.wordpress.com
#
# Script to translate a user profile belonging to a migrated user.
# Primarily intended to solve the problem with ADMT not translating
# Remote Desktop Service roaming profiles.
#
# Version: 1.0 (13.02.2012)
#     Initial version.

#----------------------------------
# User changeable strings
# Only edit in this section!
#
# Root of the folder or share storing the profiles
$RDSProfileRootDirectory = "Z:\" # Include trailing backslash
# NetBIOS Name of source domain; the domain the user was migrated from
$SourceNBTDomainName = "DOMAIN_1\" # Include trailing backslash
# NetBIOS Name of target domain; the domain the user was migrated to
$TargetNBTDomainName = "DDS\" # Include trailing backslash
# DNS name of target domain
$TargetDNSDomainName = "dds.intern"
# FQDN of Domain Controller in target domain
$TargetDomainDC = "ddsdc1.dds.intern"
# Location of subinacl.exe
$SUBINACLLocation = "C:\Program Files (x86)\Windows Resource Kits\Tools\subinacl.exe"
# Location of echoArgs.exe (not actually used by the script)
$echoArgs = ($PSHome+"\Modules\Pscx\Apps\echoArgs.exe")
#----------------------------------

Get-ChildItem $RDSProfileRootDirectory | ForEach `
{
    Write-Host ("Processing folder: "+$_.name)
    If (Get-QADUser ($TargetNBTDomainName+$_.name) -service $TargetDomainDC)
    {
        $user = Get-QADUser ($TargetNBTDomainName+$_.name) -service $TargetDomainDC
        Write-Host (" Found matching user: "+$user.Userprincipalname)

        If ( (Test-Path ($_.FullName+"\NTUSER.DAT")) -or (Test-Path ($_.FullName+"\NTUSER.MAN")) )
        {
            Write-Host (" Found user registry hive: "+($_.FullName+"\NTUSER.DAT"))
            Write-Host (" Updating permissions...")
            $entireprofile = ($_.FullName+"\*.*")
            $completeusername = ($TargetNBTDomainName+$user.samaccountname)
            Write-Host (" SubInACL.exe File Output:")
            & $SUBINACLLocation /noverbose /subdirectories $entireprofile /grant="$completeusername=f" /setowner=$completeusername  > $null
            Write-Host (" SubInACL.exe Registry Output:")
            reg.exe load ("HKU\"+$_.Name) ($_.FullName+"\NTUSER.DAT") > $null
            $regkey = ("HKEY_USERS\"+$_.Name)
            & $SUBINACLLocation /noverbose /subkeyreg $regkey /grant="$completeusername=f"  > $null
            reg.exe unload ("HKU\"+$_.Name) > $null
        }
    }
    Else
    {
        Write-Host " Cannot find user in domain that matches folder name."
        Write-Host " Continuing with next user."
    }
}

Exploring the Global Catalog and examining the “universalness” of Universal Groups

Universal groups (UGs) are stored in the Global Catalog (GC). But what exactly is the Global Catalog, and how does it store objects? Does it store anything at all? And how do universal groups work anyway?

Active Directory Domain Controllers (DCs) have exactly one database. It is stored in %windir%\NTDS and is called NTDS.DIT. DIT stands for Directory Information Tree. It is an Extensible Storage Engine (ESE) database. This database is the only place a Domain Controller stores directory data. So that means the Global Catalog is not a separate database.

Domain Controller databases store naming contexts (NCs). These are also called directory partitions, or just partitions, and those names actually make it easier to understand what they are. By default a DC has at least three NCs. The first is the Domain NC, also called the Default Naming Context, which stores all the objects belonging to a domain: users, groups, computers, etc. The DC holds a copy of this NC because it is a DC for that domain. The second is the Schema Naming Context, which stores all the object definitions for a particular forest. And finally there is the Configuration Naming Context, which stores the common configuration for a forest, e.g. sites, subnets, forest-wide services, etc. In addition to these we can have application partitions, used by DNS for example, but they are not important here. The Configuration and Schema partitions are stored by every DC in a forest and contain the same data on all DCs; if you are a DC in a forest, you always store the full Configuration and Schema NCs. The Domain Naming Context is stored in full by every DC in the given domain. A DC can only be a DC for one domain, meaning it only ever stores one complete Domain Naming Context. In addition to its own full Domain Naming Context, the DCs that are designated as Global Catalogs also store partial naming contexts for all the other Domain Naming Contexts in the forest. So the Global Catalog is not a separate naming context either.
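
You can list the naming contexts held by the DC you are talking to by reading RootDSE; a quick PowerShell check:

# Lists the distinguished names of all NCs held by this DC,
# including any application partitions
([ADSI]'LDAP://RootDSE').namingContexts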

Now, the point of a Global Catalog is to have one or more places where you can get some information, but not all of it, about all the objects from all the domains in the entire forest. To be a GC, a machine must also be a DC; you cannot be just a GC. A GC knows a little bit about every object in every Domain NC in the forest. It also knows everything in the Configuration and Schema partitions, because the GC is also a DC, and every DC holds those partitions. A forest must have at least one Global Catalog, and every site in a forest should have one. A user must be able to talk to a Global Catalog to log on. You talk to a GC using LDAP or ADSI. A regular DC listens for LDAP traffic on port 389 TCP (and 636 TCP for LDAPS), and on these ports you can retrieve all the information about any object in any of its full NCs. The Global Catalog, which is not a separate database or naming context, remember, listens for LDAP traffic on port 3268 TCP (3269 TCP with SSL).

What controls the information the GC has about each object from the Domain NCs in the forest is called the Partial Attribute Set (PAS), and it is defined on the attributes in the schema. Some attributes are part of the PAS, but most are not. If you would like to add an attribute to the PAS, you can do so with the Active Directory Schema snap-in. So every attribute definition in the schema has a flag that says whether that particular attribute should be replicated to the Global Catalog. Which, remember, is not a separate database or naming context, but has its own port, though not its own protocol.
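
If you are curious whether a given attribute is in the PAS, you can read the flag straight from its schema definition; a small sketch using the member attribute (the same idea works for any attribute):

# isMemberOfPartialAttributeSet is the flag the Schema snap-in toggles
$schemaNC = ([ADSI]'LDAP://RootDSE').schemaNamingContext
([ADSI]"LDAP://CN=Member,$schemaNC").isMemberOfPartialAttributeSet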

So let’s try to sum this up. Objects in Active Directory domains are stored in naming contexts. An object is made up of several attributes, some of which are flagged as being part of the Partial Attribute Set. A DC designated as a GC stores a copy of every domain NC in the forest, containing all the objects in each NC, but only the attributes that are flagged as part of the PAS. That means the Global Catalog is just a copy of the regular domain NCs, but with a little less data. A DC/GC also has only one directory service process, LSASS.EXE, which listens on both port 389 TCP and port 3268 TCP. If your query comes in on port 389, you can ask about the Configuration, Schema or full Domain NCs. If it comes in on port 3268, you can only retrieve attributes that are part of the PAS, but from any domain NC on the DC/GC.

What I am trying to say here is that there really is no difference between the regular DC function and the GC function. To say that something is stored in the Global Catalog really does not make sense, since it is in the same database and in the same Domain Naming Context. The only difference is what you can “see” while looking at the GC, as opposed to looking at the DC. Think of this as having DC glasses and GC glasses. When viewed with the GC glasses you can see a fuzzy outline of an object, but when viewed with the DC glasses, the same object is clear and crisp.

When we say that a universal group is stored in the Global Catalog, what we are really saying is that a universal group is a group, stored in the Domain Naming Context of the domain where it was created, which has its member attribute in the Partial Attribute Set. To demonstrate this, let’s look at a global group in a full domain NC (with our DC glasses): you will see the member attribute populated with its members. Now look at the same global group on a GC (with GC glasses): the member attribute is missing, so the GC does not know who is a member of the global group. A universal group’s member attribute, on the other hand, will look identical whether viewed in the full Domain NC or in the partial Domain NC.
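
You can observe this difference yourself by binding to the same group over both ports; a minimal sketch, where the server name and group DN are placeholders:

# Fetch the same global group over the DC port (389) and the GC port (3268)
$dn = 'CN=MyGlobalGroup,OU=Groups,DC=corp,DC=com'
$dcView = [ADSI]"LDAP://dc1.corp.com:389/$dn"
$gcView = [ADSI]"LDAP://dc1.corp.com:3268/$dn"
"DC (389)  member count: $($dcView.member.Count)"   # populated for a global group
"GC (3268) member count: $($gcView.member.Count)"   # empty for a global group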

Also, think about this: if Domain B is a child domain of Domain A, what happens to a universal group created in Domain B when Domain B is decommissioned? It will disappear! Because it was just a group that had its member attribute in the PAS. If a universal group were really stored in a separate Global Catalog, it would not be visible in any domain in the forest; it would only be visible when you connected to the Global Catalog partition. Instead, in real life, you can see universal groups in Active Directory Users and Computers in the domain where you created them, but not in any other domain in your forest. Another effect of this is that when User A in Domain B is a member of a universal group created in Domain C, you cannot see that User A is a member of the universal group when you look at him in ADUC in Domain B. But you can see that he is a member when you look at the universal group’s members in Domain C.

So the Global Catalog is an illusion and saying that you replicate something to it is nonsense. What you are doing is applying a filter to objects.

(Some illustrations would probably be a good idea for this post, but it’s just too late. Sorry!)

Information wants to be free!