Moving Exchange mailboxes with more than 50 corrupted items


The maximum number of corrupted items a move request will let you enter in the GUI for skipping is 50. If you enter more, it will act like it is going to work and then not do the move.

If a mailbox move fails with too many errors, you will need to do the move from the Exchange Management Shell (make sure you run the Exchange Management Shell, not the general PowerShell). The command is as follows:

New-MoveRequest -Identity <USERNAME> -AcceptLargeDataLoss -BadItemLimit <max number of corrupted items> -TargetDatabase <GUID>

You can get your database GUIDs with this command:
Get-MailboxDatabase | fl Identity, GUID

So, for example, to move testuser to a database with GUID 89261a9a-ce53-41bb-a652-1361bc3616e0 and allow up to 999 corrupted items, you would use the following command:

New-MoveRequest -Identity testuser -AcceptLargeDataLoss -BadItemLimit '999' -TargetDatabase 89261a9a-ce53-41bb-a652-1361bc3616e0
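Once the request is submitted you can watch its progress from the same shell. A quick sketch using the standard move request cmdlets (the testuser name is from the example above):

```powershell
# Show the overall state of the move (Queued, InProgress, Completed, Failed)
Get-MoveRequest -Identity testuser

# More detail, including percent complete and how many bad items were skipped
Get-MoveRequest -Identity testuser | Get-MoveRequestStatistics |
    fl DisplayName, Status, PercentComplete, BadItemsEncountered

# Clear the completed request so a new one can be created later
Remove-MoveRequest -Identity testuser
```

BadItemsEncountered is handy here: it tells you how many corrupted items actually got skipped against the limit you set.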

Increasing Exchange 2010 local move request limit

Exchange 2010 SP1 reduced the number of mailboxes that can be moved at one time within the same database, which drastically slowed down our user migration. The limit was originally 5; it is now 2.

To increase this limit, edit %programfiles%\Exchange Server\V14\Bin\MSExchangeMailboxReplication.exe.config and change the value of MaxActiveMovesPerTargetMDB to the number you want. Make sure you change it in both locations where it appears. I would probably not go over 5.
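For reference, the setting lives on the MRSConfiguration element in that file. The excerpt below is roughly what to look for (the element has many other attributes, trimmed here), with the value raised from the SP1 default of 2 back to 5:

```xml
<!-- MSExchangeMailboxReplication.exe.config (excerpt, other attributes omitted) -->
<MRSConfiguration
    MaxActiveMovesPerTargetMDB = "5" />
```

The same setting also appears in the commented-out reference section of the file, which is why you need to change it in both locations.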

Once you do that, restart the Microsoft Exchange Mailbox Replication service and your moves should do more at a time.

Slow user logons

We experienced a problem this weekend with one of our domain controllers taking longer than usual to boot and log in. Doing some quick troubleshooting I was more than a bit confused as to why this was happening, since there were virtually no errors in the logs, just a couple of warnings that did not make much sense at the time. Since this was a virtual DC, and I had spent quite some time looking over logs and running the usual tests (dcdiag, nltest, etc.) with no errors, I decided what the heck, let's demote it, let it sit for an hour, and then promote it again.

Nice idea, but that is when the first really odd error came up: the DC said it could not talk to the other DC to offload its information. For some reason I can't recall now, we ended up rebooting the other DC, and odder still, I then had no problems demoting the affected DC or promoting it again some time later. It also replicated fine to all 15 of my other DCs. Everything seemed to be working, still no errors, and no visible issues. It was called good and chalked up to moon spots, black cats jumping on the server, or who knows what.

Monday everything still seemed OK. Then on Tuesday several users complained that it was taking 10 to 30 minutes to log in. This only seemed to happen when they got one specific DC in the site, the same one we had issues with before. The login process was hanging at applying your personal settings.

I noticed this warning showing up in the logs:

Log Name:      System
Source:        LsaSrv
Date:          9/23/2011 12:45:24 AM
Event ID:      40960
Task Category: None
Level:         Warning
Keywords:     
User:          SYSTEM
Computer:     DC1-2
Description:
The Security System detected an authentication error for the server LDAP/DC1.MYDOMAIN.local/MYDOMAIN. The failure code from authentication protocol Kerberos was "No authority could be contacted for authentication.
(0x80090311)".

Also, on the client that got the slow DC, I found that running gpupdate /force took a long time to come back and then produced the message below.

H:\>gpupdate /force
Refreshing Policy...

User Policy Refresh has not completed in the expected time. Exiting...
User Policy Refresh has completed.
Computer Policy Refresh has completed

I then tried pinging the server with a packet size of 1472 (ping <servername> -f -l 1472); this failed with a request timed out. I was able to ping with a packet size of 1450 from the client.
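Those two manual pings can be turned into a quick probe that walks the payload size down from 1472 until a don't-fragment ping gets through. A rough sketch using ping.exe (the server name is a placeholder):

```powershell
# Walk payload sizes down until a don't-fragment ping succeeds.
# 1472 bytes of payload + 28 bytes of ICMP/IP headers = a 1500-byte MTU.
$server = "servername"   # placeholder - use the problem DC's name
foreach ($size in 1472..1400) {
    ping.exe -n 1 -f -l $size $server > $null
    if ($LASTEXITCODE -eq 0) {
        "Largest unfragmented payload: $size bytes (MTU $($size + 28))"
        break
    }
}
```

ping.exe returns exit code 0 when a reply comes back, which is what the loop keys off of.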

During this same time, while troubleshooting, I tried to RDP from the DC that users had long login times on to the other DC in the same site that was working fine. When I did this it would connect, show a black screen, and then give the following error.

Remote Desktop Disconnected. Your Remote Desktop session has ended. The connection to the remote computer was lost, possibly due to network connectivity problems. Try connecting to the remote computer again. If the problem continues, contact your network administrator or technical support. 

So we were apparently having a network problem where the network could not deliver 1472-byte packets to this DC.

So on the two DCs in this site we changed the MTU size in the registry under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<AdapterID>

Create a new DWORD called MTU and set its decimal value to the MTU size you want. I used 1450 since that worked in the test ping.
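If you would rather script the registry change, something like this should work. The interface GUID below is a placeholder; list the keys first and pick the adapter carrying the traffic (the IPAddress value under each key helps identify it):

```powershell
# List the interface keys so you can pick the right adapter GUID
Get-ChildItem "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

# Create the MTU DWORD on the chosen interface (GUID below is a placeholder)
$if = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{00000000-0000-0000-0000-000000000000}"
New-ItemProperty -Path $if -Name MTU -PropertyType DWord -Value 1450
```

As in the manual steps, the change does not take effect until the machine is rebooted.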

After rebooting both DCs and the client machines, I am now able to log in at normal speed to both DCs and gpupdate works from the clients.

Obviously I have glossed over a ton of troubleshooting steps we took. The entire process actually took quite some time to nail down what was going on, which is why I felt this was an important article to share. I have included what I feel are the important details.

Synology NAS devices

I had been looking to replace some failing tape drives at a couple of remote offices, and after taking into consideration the cost of tapes and a replacement drive, I decided to look at disk-based storage as well. I have had issues with eSATA drives hanging my servers on startup before, though, so I was not sure how well this search would turn out. After looking at a ton of drives and NAS devices I decided to try out the Synology DS411slim. A huge point in its favor was cost ($399 plus up to four 2.5-inch drives) and that it supports iSCSI! The interface is very clean, easy to use, and easy to upgrade. For home users it supports streaming to the Xbox 360 as well as access from multiple computers at once (though neither option works if you are using it in iSCSI mode, which a home user generally won't). After a bunch of testing, it turns out the device works great for doing my backups as an iSCSI-connected drive. I ended up getting them for two more remote offices, bought one for myself, and plan to buy more as other tape drives get old.

Bottom line: if you are looking for a small, easy-to-use NAS device for a small business, a remote office, or just yourself, I have so far been highly impressed with the Synology devices we bought. This is not your NAS from five years ago; they are fast, have many connection options (network, USB, eSATA), and are easy to manage.

http://www.synology.com

Monitoring Cluster Shared Volumes without SCOM

Recently we had one of our SAN volumes attached to our VM cluster run out of space. This was a bit of a surprise, since I had thought our SAN would warn us when a volume was getting low, but it turns out it won't. Luckily, all that happened was our virtual guest paused itself, so there was no data loss, but this was a production server so we needed to make sure it would not happen again. Hunting for a way to monitor Cluster Shared Volume (CSV) status, I really could not find one other than Microsoft System Center Operations Manager (SCOM), which is not an option for us. So instead I started looking at scripting something in PowerShell. I knew I could get the available disk space via the VMM cluster console, so a script should be possible.

I was able to put this script together thanks to several great examples from others. It will check the CSV status and if it falls below a certain threshold it will send an email to the address specified. You will need to modify a few of the lines below for email server, to address, etc.

    #Load the FailoverClusters module
    Import-Module FailoverClusters

    $warninglevel = 15  # Send a warning when PercentFree drops below this value
    $objs = @()         # Collects volumes that fall below the threshold

    $csv_status = Get-ClusterSharedVolume
    foreach ( $csv in $csv_status )
    {
       $expanded_csv_info = $csv | select -Property Name -ExpandProperty SharedVolumeInfo
       foreach ( $csvinfo in $expanded_csv_info )
       {
          $obj = New-Object PSObject -Property @{
             Name        = $csvinfo.Name
             Path        = $csvinfo.FriendlyVolumeName
             Size        = $csvinfo.Partition.Size
             FreeSpace   = $csvinfo.Partition.FreeSpace
             UsedSpace   = $csvinfo.Partition.UsedSpace
             PercentFree = $csvinfo.Partition.PercentFree
          }
          if ($csvinfo.Partition.PercentFree -lt $warninglevel) { $objs += $obj }
       }
    }


    if ($objs.count -gt 0) {
        $smtpServer = "mailserver"
        $msg = new-object Net.Mail.MailMessage
        $smtp = new-object Net.Mail.SmtpClient($smtpServer)
        $msg.From = "from address"
        $msg.To.Add("to address")
        $msg.Priority = "high"
        $msg.Subject = "Warning Cluster Volume low on space"
        #Preamble text
        $msg.body = "Enter any explanatory message here or delete this line"              
        #Next line is what puts in the volume information
        $msg.body += $objs | ft -auto Name,Path,@{ Label = "Size(GB)" ; Expression = { "{0:N2}" -f ($_.Size/1024/1024/1024) } },@{ Label = "Free(GB)" ; Expression = { "{0:N2}" -f ($_.FreeSpace/1024/1024/1024) } },@{ Label = "Used(GB)" ; Expression = { "{0:N2}" -f ($_.UsedSpace/1024/1024/1024) } },@{ Label = "PercentFree" ; Expression = { "{0:N2}" -f ($_.PercentFree) } } | Out-String
        $smtp.Send($msg)

    }

To make this work, save the script as a .ps1 file, modify the settings noted above, and adjust the warning threshold to a percent you like. Then create a scheduled job to run it at whatever interval you wish; I run mine every 15 minutes. When creating the job, make sure the program you run is powershell.exe, and give it a command-line argument of your script path.
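As a sketch, the scheduled task can be created from a command prompt like this (the task name and script path are just examples):

```shell
schtasks /create /tn "Check CSV Space" /sc minute /mo 15 /ru SYSTEM ^
  /tr "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Check-CSVSpace.ps1"
```

Running it as SYSTEM avoids storing a password, and -ExecutionPolicy Bypass keeps the script from being blocked if your execution policy is restricted.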