Bye Bye, Microsoft UAG!

Posted by Ahmed Nabil | 1 comment»
Yesterday Microsoft officially announced that there will be no new full version of Microsoft UAG. Windows Server 2012 and 2012 R2 will cover some of the UAG features, such as DirectAccess and basic secure application publishing. This was quite expected, especially after the TMG retirement and the new Web Application Proxy role introduced in Server 2012 R2.


For more details on the latest changes to the Forefront family, please check the article below:

http://blogs.technet.com/b/server-cloud/archive/2013/12/17/important-changes-to-the-forefront-product-line.aspx

Windows XP support will end in April 2014. What about Windows Server 2003?

Posted by Ahmed Nabil | 2 comments»
A lot of companies nowadays are trying to move all their clients from the old Windows XP platform to either Windows 7 or Windows 8. On April 8, 2014 support for Microsoft Windows XP ends; this includes Windows updates, security updates, fixes, and normal support. Although XP was a great product, it's highly recommended to move to a newer, supported OS such as Windows 7 or Windows 8.1. It's not wise to run a 10-year-old OS nowadays with all these new security challenges.

Client migration is a real problem, but server migration is still a big issue as well. I was curious about the status of the Windows Server 2003 OS, which was released in the same time frame as Windows XP, and the good news is that its end of support is July 2015. This is a good opportunity for organizations to plan the move off their 2003 servers; I still see domain controllers, Exchange, file, and application servers running 2003 or 2003 R2.

There is a chance that some security updates or fixes released for 2003 servers might also work for XP (2003 is basically the server version of XP). However, no one should take that risk: start phasing out XP now and plan the 2003 phase-out before July 2015.


Microsoft Support Life Cycle Index:

http://support.microsoft.com/gp/lifeselectindex

System Center 2012 Products Don't Appear in WSUS

Posted by Ahmed Nabil | 1 comment»
While checking the WSUS Products and Classifications, I noticed that some products, such as System Center Service Manager, are not listed. I also noticed that the System Center 2012 R2 products are not listed at all, even with the latest updates (I am running WSUS on Windows Server 2012 R2).

Upon checking the Microsoft Update Catalog on the Internet, I noticed that Service Manager is not listed/published in the catalog either, which explains why SCSM updates are not listed in WSUS. In its current state, WSUS is not designed to show SCSM under Products and Classifications, as it's not yet in the catalog.

As for the System Center 2012 R2 products, they were released less than two months ago, and the next Update Rollup is expected mostly in the first quarter of 2014. I double-checked with the Microsoft Support team, and they replied that the product team is working on a WSUS hotfix/service pack/release, etc., that might list the 2012 R2 products.

Recommended links for WSUS

  1. Microsoft Update Catalog: http://catalog.update.microsoft.com
  2. Deploy WSUS 2012 and 2012 R2 in your organization: http://technet.microsoft.com/en-us/library/hh852340.aspx


   

Two DNS Records with the Same IP Address: Aging and Scavenging Problems with DHCP Lease Duration

Posted by Ahmed Nabil | 2 comments»
Aging and scavenging is crucial for an Active Directory-integrated zone and should be carefully planned and configured. We recently faced a problem when a system admin reported two DNS records with the same IP address in the DNS Active Directory-integrated zone.

The first thing that came to my mind was to check the scavenging settings; however, both intervals (Refresh and No-Refresh) seemed fine compared to the DHCP lease time. Always remember the main rule for this setting: the No-Refresh interval plus the Refresh interval should be greater than the DHCP lease time. You can tweak the values depending on your network, IP availability, and how busy your network is with computers coming and going, but always keep this main equation in mind.
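The main equation above can be sanity-checked with a few lines (an illustrative sketch only, not a DNS API; the interval values below are just examples):

```python
from datetime import timedelta

def scavenging_is_safe(no_refresh: timedelta, refresh: timedelta,
                       dhcp_lease: timedelta) -> bool:
    """Main rule: No-Refresh + Refresh must exceed the DHCP lease duration."""
    return no_refresh + refresh > dhcp_lease

# The default 7-day intervals against an 8-day DHCP lease satisfy the rule:
print(scavenging_is_safe(timedelta(days=7), timedelta(days=7), timedelta(days=8)))  # True
```

If the check returns False, records can be scavenged while the lease is still active, which is exactly the situation to avoid.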

The second thing to check was the DHCP scope properties, specifically the DNS tab. Upon checking this setting, I noticed that "Dynamically update DNS A and PTR records only if requested by the DHCP clients" was selected, as shown below.





It should be noted that with the setting above, the DNS record will be updated or removed from the DNS zone only if the client itself initiates a renew or release request, for example by running the ipconfig /release command. As per Microsoft Support, in most circumstances the DHCP client never initiates the release request (the client is simply removed from the network), so neither DHCP nor the DNS-integrated zone notices that the client is gone; they still consider it online.

After the DHCP lease duration ends, the DHCP server reclaims the IP, and another client may obtain the same IP and register itself in DNS. Now recall the main equation mentioned earlier: since the aging and scavenging interval has not yet elapsed (it is greater than the DHCP lease), the result is two records with the same IP address in the DNS zone.
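This symptom is easy to spot in a zone export; a small sketch of the duplicate check (the hostnames and addresses are made up for illustration):

```python
from collections import defaultdict

# A records as (hostname, ip) pairs, e.g. parsed from a DNS zone export.
records = [("pc-old", "10.0.0.5"), ("pc-new", "10.0.0.5"), ("srv1", "10.0.0.9")]

# Group hostnames by IP; any IP with more than one host is a duplicate record.
by_ip = defaultdict(list)
for host, ip in records:
    by_ip[ip].append(host)

dupes = {ip: hosts for ip, hosts in by_ip.items() if len(hosts) > 1}
print(dupes)  # {'10.0.0.5': ['pc-old', 'pc-new']}
```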

The solution to this issue is to ensure the DNS record is deleted once the lease expires: change the setting in the image above (Scope Properties - DNS) to "Always dynamically update DNS A and PTR records".

After changing this setting, you will need to restart both the DHCP Server and DNS Server services.


Reference Link:

http://blogs.technet.com/b/networking/archive/2008/03/19/don-t-be-afraid-of-dns-scavenging-just-be-patient.aspx



User Login Is Very Slow on Virtual Machines Connected to Broadcom Network Cards

Posted by Ahmed Nabil | 1 comment»
A weird behavior was noticed on a few Hyper-V host machines, mainly Dell rack servers such as the powerful R720 model. These servers come with a mix of network adapters (based on your configuration), such as the built-in Broadcom adapters plus extra Intel cards. The behavior was that virtual machines connected to the Broadcom NICs had poor performance, and user login took several minutes on those specific machines. VMs connected to the Intel network cards logged in normally and performed better.

When I moved the VMs connected to the Broadcom NICs over to the Intel NICs, they behaved normally and users could log in much faster. I then moved the VMs on the Intel cards to the Broadcom ones, and we got the poor performance again. We concluded it must be something related to the Broadcom network card.

Troubleshooting steps:


  1. Updated the Broadcom NICs to the latest driver. Still no change.
  2. Compared the options/properties of the Intel card vs. the Broadcom card: the Transmit Buffers value on the Broadcom card was 200 vs. 500 on the Intel cards, so I changed it to match the Intel setting. Still no change.

Resolution:

After further investigation, the problem turned out to be the VMQ (Virtual Machine Queue) setting, which was enabled on both the Broadcom and Intel cards. When it was disabled on the Broadcom card, the VM worked perfectly. It looks like the Broadcom card is not compatible with the VMQ feature.


Reference Links:




How to Update the Lync 2013 Standard Edition Backend Database?

Posted by Ahmed Nabil | 0 comments»
Microsoft releases frequent cumulative updates for Lync 2013 servers, and most of these updates require updating the Lync 2013 SQL backend database. In this article I will discuss how to update the SQL Express backend database after installing a new Lync 2013 cumulative update, or a specific rollup, that requires a DB version update.


First, to check the status of the database after installing the cumulative update and whether you need a DB update, run the below two commands from an elevated Lync Server Management Shell.

  1. PS C:\> Test-CsDatabase -ConfiguredDatabases -SqlServerFqdn servername.domain.com | FT DatabaseName, ExpectedVersion, Installedversion        
  2. PS C:\> Test-CsDatabase -CentralManagementDatabase | FT DatabaseName, ExpectedVersion, InstalledVersion

The output will be similar to the following (this is the output of the second command on one of my deployments):

DatabaseName    ExpectedVersion    InstalledVersion
------------    ---------------    ----------------
xds             10.13.2            10.13.1
lis             3.1.1              3.1.1

 
As you can see, there is a difference between the expected and installed versions of the xds database.
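Comparing dotted versions like these by eye is error-prone when there are many databases; a small sketch of the comparison logic (illustrative only, not part of the Lync tooling):

```python
# Flag databases whose installed version lags behind the expected version.
# Dotted versions such as "10.13.2" must be compared numerically, field by
# field -- plain string comparison would rank "10.2" above "10.13".

def needs_update(expected: str, installed: str) -> bool:
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(expected)

databases = {"xds": ("10.13.2", "10.13.1"), "lis": ("3.1.1", "3.1.1")}
stale = [name for name, (exp, inst) in databases.items() if needs_update(exp, inst)]
print(stale)  # ['xds']
```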
 
To update the Database we need to run the below commands in sequence:
 
  1. Install-CsDatabase -ConfiguredDatabases -SqlServerFqdn FEBE.FQDN -Verbose
  2. Install-CsDatabase -CentralManagementDatabase -SqlServerFqdn CMS.FQDN -SqlInstanceName DBInstanceName -Verbose
 
 
After that we need to enable the topology and run Bootstrapper as follows:
 
  1. Enable-CsTopology
  2. Bootstrapper.exe
  3. Reboot the server.

 
This should take care of updating the backend DB; you can double-check by running the previous PowerShell commands again and ensuring the expected and installed versions match.

Reference Links:

Lync 2013 Cumulative Update List

http://blogs.technet.com/b/nexthop/archive/2013/03/26/lync-2013-cumulative-updates-list.aspx

Lync Server Resources

http://blogs.technet.com/b/nexthop/p/links.aspx


 

UAG 2010 SP4 Released to support Windows 8.1 and IE 11

Posted by Ahmed Nabil | 0 comments»
Good news for all Microsoft UAG 2010 users: Service Pack 4 (SP4) was officially released yesterday to support Microsoft's latest operating system, Windows 8.1, and Internet Explorer 11 (IE 11). SP4 also provides other important features, such as support for Remote Desktop Connection (RDC) 8.1 and RemoteApps published from a Remote Desktop Session Host on Windows Server 2012 or 2012 R2.

The link to download the latest SP4 is as follows:

https://www.microsoft.com/en-us/download/details.aspx?id=41181


For more details about the features and fixes included in this service pack, please check the following link:

https://support.microsoft.com/kb/2861386


Service Pack 4 includes some other fixes as well as stability and performance enhancements. I will give it a try, and hopefully it will be a smooth operation.


Why Can DPM Only Take Offline Backups of Some VMs on a Hyper-V 2012 R2 Host? They Need SCSI!

Posted by Ahmed Nabil | 8 comments»
I published a recent article on migrating/moving VMs from a 2008 R2 to a 2012 R2 Hyper-V host (you can check it at http://itcalls.blogspot.com/2013/11/how-to-migratemove-virtual-machines.html ). After successfully moving these VMs, we faced another problem trying to back them up. We were using the latest DPM 2012 R2, and we noticed that it added these VMs as offline-only; for some reason it could not take an online backup of them.

After some investigation, we noticed an error on the Hyper-V host with Event ID 10103 (check the below image), which clearly mentions that the backup will fail because the VM doesn't have a SCSI controller.


So the solution was simply to add a SCSI controller, even if it is connected to nothing; after that, an online/hot backup was taken smoothly without any problem.

So what was the problem?

I discussed this issue with several Microsoft support engineers, and it turned out that an online (hot) backup of a VM on a 2012 R2 Hyper-V host requires mounting a new VHD inside the VM and dismounting it later. Since only the SCSI controller supports hot-plugging virtual disks, it became clear why the SCSI controller is needed.

Older versions of Hyper-V didn't work this way: they required the Hyper-V host to mount the guest VHD as part of the backup process, which is something Microsoft moved away from because it increases the attack surface on the host.


How to Migrate/Move Virtual Machines from a 2008 R2 Host to a 2012 R2 Host?

Posted by Ahmed Nabil | 3 comments»
My team and I were recently working on a project to move several 2008 R2 virtual machines from a 2008 R2 SP1 Hyper-V host to the latest Hyper-V 2012 R2 host, and we ran into a lot of trouble, since the old import/export process no longer works and is not supported.

Virtual machines exported from a Server 2008 R2 host used the version 1 (v1) WMI namespace, which produced an export file (.exp) representing the exported virtual machine. When Server 2012 was released, version 2 (v2) of the WMI namespace was introduced and v1 was deprecated (Server 2012 remains compatible with the old v1 namespace, but no new features will be added to it; Microsoft normally uses deprecation as a step toward fully removing a feature in the next version). The v1 namespace was then removed entirely in Server 2012 R2.

For a list of deprecated and removed features in 2012/2012R2, check this link

http://technet.microsoft.com/en-us/library/dn303411.aspx

After several trials and investigations by my team, we reached two recommended methods for moving the VMs to Hyper-V 2012 R2 hosts (both were later confirmed by the Microsoft Support team):

  1. The easiest way is to turn off the VM on the Hyper-V 2008 R2 host and also stop the VMMS service on the host server, to unlock all VM files. The next step is to copy all virtual machine files/folders, including the VHDs, XML, etc., to the Server 2012 R2 host and import them directly.
  2. Another method is to use an intermediate server (useful if you already exported your VM and deleted the original). In this method you export the VM from the 2008 R2 host, import it on a 2012 Hyper-V host (no need to start it up), then export it again from the 2012 host and import it on the 2012 R2 host.

For more information about Version 2 WMI Namespace, check the below link

http://blogs.msdn.com/b/virtual_pc_guy/archive/2012/05/30/the-v2-wmi-namespace-in-hyper-v-on-windows-8.aspx

An excellent resource on Hyper-V Generation 2 virtual machines is the 10-part blog series by John Howard:

http://blogs.technet.com/b/jhoward/archive/2013/10/24/hyper-v-generation-2-virtual-machines-part-1.aspx







WMI Unhealthy on 2008R2 Domain Controllers - WBEM_E_QUOTA_VIOLATION

Posted by Ahmed Nabil | 0 comments»
Windows Management Instrumentation (WMI) is a key core Windows management technology. It provides a consistent approach to carrying out day-to-day management operations through programming or scripting languages.

I recently started getting daily WMI failures on my 2008 R2 domain controllers, accompanied by several script failures and DNS performance degradation.


I also noticed that the Configuration Manager (SCCM) evaluation rules on the domain controller failed and SCCM reported errors. The policy request date in SCCM was a few hours behind, and the server would not report back to SCCM until the DC was rebooted.


Troubleshooting Steps:


  1. I started by running the WMI diagnosis tool from http://www.microsoft.com/en-us/download/details.aspx?id=7684
  2. The WMI diag log file reported WBEM_E_QUOTA_VIOLATION as follows:
.5265 16:34:02 (0) ** 981 error(s) 0x8004106C - (WBEM_E_QUOTA_VIOLATION) WMI is taking up too much memory
.5266 16:34:02 (0) ** => This error is typically due to the following major reasons:
.5267 16:34:02 (0) **    - The requested WMI operation is extremely costly in terms of resources and
.5268 16:34:02 (0) **      the WMI provider handling this operation has exceeded the authorized limits.

  3. I later tried to check whether basic WMI functionality was working by running the below test:

  1. From an elevated command prompt, run wbemtest and connect to the namespace root\cimv2
  2. Click Query… and enter the following query: "Select * from Win32_ComputerSystem"
  3. This test failed and the following error was reported:

0x80041017 Facility: WMI  Description: Invalid Query

  4. I tried fixing and rebuilding the WMI repository as follows:

  • Disable and stop the WMI service: sc config winmgmt start= disabled then net stop winmgmt
  • At a command prompt (cmd), change to the WBEM folder: cd %windir%\system32\wbem
  • Rename the repository folder: rename repository repository.old
  • Re-enable the WMI service: sc config winmgmt start= auto
  • From the same folder, run the following command to manually recompile all of the default WMI .mof and .mfl files: for /f %s in ('dir /b *.mof *.mfl') do mofcomp %s


The only way to get around this issue was to manually reboot the server. After rebooting, it worked for a few hours without a problem, then the failures started again. One more thing worth noting: the WmiPrvSE.exe process consumes a huge amount of memory while the problem occurs.

Resolution Steps:

  1. Increased the "MemoryPerHost" value to 1 GB (1073741824); by default it is 536870912, which means 512 MB, as per the attached article.
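The quota values above are raw byte counts, so it's worth double-checking the MB-to-bytes arithmetic before editing them (a quick sketch using the numbers from this post, not a live registry read):

```python
# The WMI provider quota values are raw byte counts.
DEFAULT_MEMORY_PER_HOST = 536870912      # the default quota from the post
NEW_MEMORY_PER_HOST = 1073741824         # the increased quota

print(DEFAULT_MEMORY_PER_HOST == 512 * 1024 ** 2)  # True -> exactly 512 MB
print(NEW_MEMORY_PER_HOST == 1024 ** 3)            # True -> exactly 1 GB
```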


2. Install the following WMI fixes (all languages, x64):

  • KB 2705357
  • KB 2692929
  • KB 2617858
  • KB 2465990
  • KB 2492536




For a list of suggested WMI hotfixes on different Windows platforms, please check this blog, which is maintained and updated regularly.








AD CS not configured for Revocation checking of all certificates

Posted by Ahmed Nabil | 0 comments»
Recently the SCOM server (one of your best friends on the network) started reporting the error "AD CS not configured for Revocation checking of all certificates"; SCOM reported Event 128 as a warning.

Event 128 is normally reported when someone tries to request a certificate using a certificate request signed with all time-valid CA certificates. However, by default the CA doesn't support such requests, and Event 128 gets reported.

After checking this issue and consulting Microsoft tech support: this issue normally occurs only after renewal of the CA certificate. When the CA certificate is renewed, the OCSP Response Signing certificate used to validate existing certificates must still be signed by the CA certificate that was used to issue those existing certificates, not only by the new CA certificate. However, by default the CA doesn't support renewing the OCSP Response Signing certificate with a previous CA certificate.

This issue/behavior can be fixed as follows:


  1. From an elevated CMD, run certutil -setreg ca\UseDefinedCACertInRequest 1
  2. Restart the CA service

This command enables CA support for certificate requests signed with the old certificate. If the OCSP certificate has not been renewed, go ahead and renew it as per the following articles:









Error 0x803100B7: Group Policy settings require the creation of a startup PIN, but a pre-boot keyboard is not available on this device

Posted by Ahmed Nabil | 0 comments»
A few weeks ago I purchased the Microsoft Surface Pro tablet. It's a very nice productivity tablet that really enables remote users to run their production applications and workloads, though there is still some room for improvement before it becomes the number one tablet choice for business users. From my point of view, the three main things that need improvement are battery life, a 3G/4G connectivity option, and a better camera.

Surface Pro comes with Windows 8 Professional, which is very nice and allows you to join your corporate network; however, it lacks a great feature: DirectAccess! So I decided to turn it into a fully productive device and install Windows Enterprise on it. It's as simple as building a fresh new computer.

I formatted the Surface drive but kept the recovery image (for any future need). After installing Windows Enterprise, I installed the latest Surface Pro firmware and driver pack: http://www.microsoft.com/en-eg/download/details.aspx?id=38826

Finally I got DirectAccess working on my Surface; that was really an exciting moment. The next challenge was joining my domain's MBAM/BitLocker policy, which requires the use of a PIN while booting the computer. When the MBAM encryption wizard started, I got error 0x803100B7 "Group Policy settings require the creation of a startup PIN, but a pre-boot keyboard is not available on this device", and checking the event viewer provided the details in the attached image.


To fix this issue, we need to change/enable a few settings in the Surface's local Group Policy.

Note: In order to use pre-boot authentication, you need a keyboard attached to the Surface during boot. You may use the Surface Touch/Type Cover or any external keyboard connected to the USB port.

  1. Type GPEDIT.MSC in the Run bar to access the local Group Policy Editor
  2. Drill down to Computer Configuration - Administrative Templates - Windows Components - BitLocker Drive Encryption - Operating System Drives
  3. Enable both "Require Additional Authentication at Startup" and "Enable use of BitLocker authentication requiring preboot keyboard input" - Check below image.





After that, restart the BitLocker Management Client Service to kick the MBAM wizard back in; it should complete normally without any problem.










Enable Auto Enrollment to Avoid Expiring Certificates

Posted by Ahmed Nabil | 2 comments»
It's common for admins to miss the renewal of some key certificates in their internal Microsoft PKI (Public Key Infrastructure), since renewal is a somewhat manual task: you need to set Outlook reminders manually (my favorite method) or run scheduled tasks to remind you before the certificate expiration date.

However, if there is a user that logs on frequently to the CA (Certification Authority) server, we can enable autoenrollment for that user. After configuring it, we don't need to worry about expiring certificates as long as that specific user keeps logging on to the CA.

To Enable Auto Enrollment you need to do the following:


  1. Right-click the certificate template where you need to enable the autoenrollment feature
  2. On the Security tab (check the below image), add a specific user or grant an existing user the Autoenroll permission (in my case I picked a normal low-privileged service account that logs on to the server periodically, at least monthly, for maintenance and installing the latest Windows updates)
  3. Publish the template and issue the needed certificate.
  4. Open Group Policy Management (on your domain controller) and either create a new group policy or simply edit the Default Domain Policy
  5. Navigate to User Configuration - Policies - Windows Settings - Security Settings - Public Key Policies and enable autoenrollment as shown below.

When this user with the Autoenroll permission logs on to the CA server, they will be notified, the certificate will be enrolled, and the certificate won't expire.




The Validity Period of an Issued Certificate is Shorter than Configured

Posted by Ahmed Nabil | 4 comments»
I recently came across a couple of scenarios where a certificate issued by a Microsoft PKI had a validity period shorter than the period configured on its template. The main reason for increasing the validity period of specific certificates is to avoid a frequent renewal process.

The scenario I came across recently: a user duplicated one of the templates, changed the validity from the default 2 years to 4 years, and issued a new certificate; however, the issued certificate still read 2 years. This can be due to one of two reasons:



  1. The CA certificate's remaining period (a CA cannot issue a certificate that outlives its own CA certificate) is less than the requested certificate period. You cannot issue a 10-year user certificate if your root CA certificate is only valid for 5 years.
  2. The default validity period allowed by the CA (defined in the CA registry) is shorter than the template's period.
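In other words, the issued lifetime works out to the shortest of the three limits. A small sketch of that rule (the function and its names are mine for illustration, not a CA API):

```python
from datetime import date, timedelta

YEAR = timedelta(days=365)  # rough year, good enough for the comparison

def effective_validity(template_years, registry_cap_years, ca_cert_expiry, today):
    """Issued lifetime = shortest of: the template period, the CA's
    ValidityPeriodUnits registry cap, and the CA certificate's remaining life."""
    ca_remaining = ca_cert_expiry - today
    return min(template_years * YEAR, registry_cap_years * YEAR, ca_remaining)

# Template asks for 4 years, but the registry caps issuance at 2 years:
got = effective_validity(4, 2, date(2019, 1, 1), date(2014, 1, 1))
print(got.days // 365)  # 2
```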


To check the CA certificate's period/duration, do the following:

  1. Open the CA Console
  2. Right Click on the CA - Properties
  3. From the General TAB click View Certificate and check the duration.





If the CA's remaining duration is less than the required user certificate duration, you need to increase the CA value and renew the CA certificate as follows:

  1. Configure CAPolicy.inf, which directly controls the CA certificate.
  2. Go to the C:\Windows folder and find the file CAPolicy.inf
  3. Change the "RenewalValidityPeriodUnits" value to the appropriate period (10 or 15 Years)
  4. Restart the CA Service
  5. Renew the CA Certificate (Right Click on the CA - All Tasks - Renew CA Certificate)




If the CA period/duration is fine and longer than the user certificate needs, then check the default validity period in the CA registry by doing the following:

  1. Open an admin CMD on the CA server and type certutil -getreg ca
  2. Check the ValidityPeriodUnits value, which is the maximum period for certificates this CA can issue. You can set this value according to your own requirements, but it cannot exceed the lifetime of the CA certificate.
  3. From the same CMD, run certutil -setreg ca\ValidityPeriodUnits 5 (this increases the maximum validity to 5 years)
  4. Stop and restart CA service.


Now try to enroll the certificate again from the client and check the validity period.

How to Publish New Certificate Revocation List (CRL) from Offline Root CA to Active Directory and Inetpub

Posted by Ahmed Nabil | 5 comments»
When building your Microsoft PKI (Public Key Infrastructure), it's highly recommended to take your root CA offline after issuing the enterprise subordinate CA certificates, and to minimize access to the offline root CA as much as possible. The root CA is not a domain-joined machine and can be turned off without any problem.

One key issue is the CRL generated by the root CA. Set the CRL publication interval to a large value so that you don't need to copy the CRL to an online location frequently, and do not implement delta CRLs, because publishing each delta CRL would require access to the offline root CA in order to copy it to an online publication location. To change the CRL interval:



  1. Turn on the Offline Root CA and login with Admin account
  2. Open the Certification Authority Console
  3. Right Click on the "Revoked Certificates" and click Properties.
  4. Set "CRL publish interval" to a large value (default is 26 weeks) and uncheck the "Publish Delta CRLs" check box.



In order to Publish a new CRL from the offline Root CA to the Enterprise Sub CA you need to do the following:

  1. Publish a new CRL on the root CA: right-click "Revoked Certificates" - All Tasks - Publish
  2. Copy the CRL file from the root CA, located under %systemroot%\system32\certsrv\certenroll, to the Sub CA server
  3. Turn off the root CA
  4. Copy the above file to the InetPub folder (HTTP path) on the Sub CA server, by default located under C:\inetpub\wwwroot\Certdata
  5. Open an admin command prompt and run the following command to publish it to Active Directory (LDAP path): certutil -f -dspublish "C:\Inetpub\wwwroot\certdata\RootCA.crl"


This process of renewing the CRL and publishing a new one is manual, since the root CA is offline; that's why it's better to make the CRL publish interval longer than the default so you won't do it frequently. You may also want to set an automated reminder before the next renewal date.
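One way to pick the reminder date is to work back from the publish interval; a quick sketch (the dates, interval, and two-week lead below are illustrative assumptions):

```python
from datetime import date, timedelta

def crl_reminder(published_on: date, interval_weeks: int, lead_days: int = 14) -> date:
    """Date to revisit the offline root CA: the next scheduled CRL publication
    minus a safety lead, so the CRL never lapses unnoticed."""
    next_publish = published_on + timedelta(weeks=interval_weeks)
    return next_publish - timedelta(days=lead_days)

# CRL published Dec 1, 2013 with a 26-week interval:
print(crl_reminder(date(2013, 12, 1), 26))  # 2014-05-18
```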




How to Manually Delete an Old/Empty WSUS Computer Group from the Database

Posted by Ahmed Nabil | 7 comments»
Recently I was trying to delete one of the old computer groups under WSUS Console - Computers - All Computers. It was an old group with no members/clients or pending approvals any more. I tried removing it from the GUI by right-clicking the object and choosing Delete, but the server hung and I got a connection error as shown below.




This problem can occur on WSUS servers using the Windows Internal Database, which has several limits; with the huge number of updates and many groups, you can hit this issue.

To solve this problem and manually remove the group, you need to work from the database and edit a couple of tables as follows:

  1. Make Sure to take a backup from your WSUS server and WSUS DB.
  2. Use SQL Management Studio to connect to Windows internal database \\.\pipe\MSSQL$Microsoft##SSEE\sql\query
  3. We will mainly remove the old records from the tbTargetGroup, tbFlattenedTargetGroup, and tbDeployment tables
  4. Run select * from tbTargetGroup to get the list of groups.
  5. Identify the TargetGroupID of the group that you want to delete
  6. Run delete from tbFlattenedTargetGroup where TargetGroupID = '<TargetGroupID of the group to be removed>'
  7. Run delete from tbDeployment where TargetGroupID = '<same ID as the previous step>'
  8. Run delete from tbTargetGroup where TargetGroupID = '<same ID>'

This should take care of deleting these groups. Again, always ensure you have a full backup of your DB first.


How to Clean the Microsoft WSUS Content Folder of Old and Unneeded Products

Posted by Ahmed Nabil | 10 comments»
Microsoft WSUS administrators sometimes select all available products (Options - Products and Classifications), and over time the WSUS content folder grows dramatically until it fills all disk space. If the WSUS administrator later unchecks the unneeded products, that alone won't reclaim the space already used.

So how do the WSUS updates get downloaded to the WSUS server?


  1. The WSUS server contacts the Microsoft Update servers and downloads only the metadata (not the complete update packages).
  2. The binaries (the actual downloads) are fetched only when you approve updates manually or when an auto-approval rule is configured.

In order to clean the WSUS content folder of old, unneeded, or unused products, do the following:

  1. Under Options - Update Files and Languages, clear the "Download express installation files" check box (an optional recommendation depending on your environment).
  2. Under Options - Products and Classifications, select only the needed products.
  3. In the WSUS console, decline all approved updates that were either already installed or not applicable.
  4. Delete the WsusContent folder.
  5. Navigate to C:\Program Files\Update Services\Tools on the WSUS server.
  6. Run wsusutil.exe reset


On the next download cycle, the server will download only the updates for the selected products and classifications that have not been declined.

It is also recommended to install the latest WSUS updates and hotfixes.
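To see how much space the cleanup actually reclaims, it helps to measure the content folder before and after the reset. A small helper that totals the size of every file under a folder (the WsusContent path in the usage comment is the default and may differ in your environment):

```python
import os

def folder_size_gb(path):
    """Total size of all files under `path`, in gigabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file removed mid-walk; skip it
    return total / (1024 ** 3)

# Example (hypothetical default path):
# print(folder_size_gb(r"D:\WSUS\WsusContent"))
```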


Windows 7 UAG DirectAccess Clients Cannot RDP to Server 2012 Domain Controllers

Posted by Ahmed Nabil In | 2 comments»
After upgrading our domain controllers, DNS, and DHCP servers to the latest Windows Server 2012, I noticed that our Windows 7 UAG DirectAccess clients were not able to RDP (Remote Desktop/MSTSC) to the new Server 2012 domain controllers. The same clients could ping the 2012 domain controller and DNS server without any problem; however, all RDP traffic failed.


The strange thing was that, at the same time, these clients (Windows 7 UAG DirectAccess clients) could ping and RDP to any other Windows Server 2012 member server without any problem at all.

I did some intensive checking and research on the Internet but could not find anything regarding Server 2012 domain controller issues with UAG-based DirectAccess. On the Microsoft escalation team's recommendation, we ran a full tracing scenario on both the UAG server and the Windows 7 client.

I reproduced the problem while running a simultaneous capture on both the client and the UAG server, using the following commands:


netsh wfp capture start

netsh trace start scenario=directaccess capture=yes report=yes

(Reproduce the issue: try both RDP and ping to the problematic Windows Server 2012 domain controller)

netsh trace stop
netsh wfp capture stop


After analyzing the captured logs, we noticed that the DA client was using an expired security association when it could not access the 2012 server. The trace from the UAG server showed the RDP traffic being dropped on the UAG server with the following error:

“STATUS_IPSEC_WRONG_SA”

Resolution:

This problem was fixed by installing the hotfix for IKEEXT.dll on the UAG 2010 DA server and on the Windows 7 DA client. Note that this fix applies only to Windows Server 2008 R2 servers and Windows 7 clients:

KB2801453 - Delay during IPsec renegotiation on a computer that is running Windows 7 or Windows Server 2008 R2


After installing this fix on both the UAG server and the Windows 7 client, I was able to RDP to the 2012 domain controller/DNS server without any problems.



The Card Supplied Requires Drivers that are not present on this System

Posted by Ahmed Nabil | 1 comments»
I recently started getting the above-mentioned logon warning message (see the screenshot below) when logging on to my old 2003 and 2003 R2 servers using Remote Desktop from a fairly new Windows 8 laptop.



This warning is mainly related to the attempt to redirect the smart card to the RDP session. The issue does not occur with Server 2008 or 2008 R2 because the driver store in 2008 and above is large and includes many drivers, while a 2003 system usually does not have the needed driver.

If you run into this issue, you have three options:


  1. Disable the smart card device on the laptop (if it is not used) so it will not be redirected.
  2. Install the smart card driver on the 2003 server (you may face driver availability and compatibility problems).
  3. Uncheck the smart card box in the MSTSC settings before establishing the RDP session.

I picked option 3, the safest and most convenient: in MSTSC, go to Show Options - Local Resources - Local Devices and Resources - More, and uncheck the Smart Card option.

After that you can RDP normally to any old 2003 server without getting this warning.


A new MVP is here from Egypt

Posted by Ahmed Nabil | 0 comments»
I am pleased to announce and share with you all that I have been awarded the prestigious Microsoft Most Valuable Professional (MVP) award in the Enterprise Security Area. It was a very exciting moment opening and going through the congratulations email.


Attached is the exact mail that I received.

Congratulations! We are pleased to present you with the 2013 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Enterprise Security technical communities during the past year.

I am really honored to be associated with a great group of technical people around the whole world. I'll be aiming to continue my efforts to support the IT community all over the world, and especially in my country and region, in the coming years.

Thanks to everyone who supported me over the last year: all my blog followers, readers, friends, and many more. I wouldn't have made it without your feedback and comments.

Thanks to Microsoft for their support to the IT community and for recognizing our efforts.

You may check my MVP Profile here.

Thanks again to everyone.

Running Lync 2013 client on DirectAccess Computer

Posted by Ahmed Nabil In , | 2 comments»
I am a big fan of Microsoft DirectAccess. For those who are not familiar with it, DirectAccess is Microsoft's remote connectivity solution: users on the Internet get intranet connectivity to their corporate network without installing a client or launching any software, unlike a traditional VPN.

DirectAccess is purely IPv6-based, and since Lync 2013 fully supports IPv6, Lync 2013 clients running over DirectAccess should work without any problem.

If you encounter problems making Lync calls or connecting to Lync on a DirectAccess computer, ensure that IPv6 is enabled on the Lync 2013 server, as shown in the image below.




This can be achieved as follows:

  1. Open the Lync 2013 Topology Builder from an existing file, or download the current topology.
  2. Edit the properties of the Lync server and, under the general properties, ensure that IPv6 is enabled.
  3. Publish the settings from the Action menu - Topology - Publish.

Now you can enjoy Lync 2013 over DirectAccess Connection.



Cracking Wireless WEP using BackTrack article published in Hack Insight Magazine

Posted by Ahmed Nabil | 1 comments»
A couple of months ago I was asked by the Hack Insight magazine editor to write an article on the famous BackTrack 5 R3. Hack Insight, a line of Hack Insight Press, is a new publication devoted to IT security. It is a good magazine with security articles covering different platforms.

It was a new experience and I was really excited about this kind of publication. I picked wireless WEP cracking because I still see many people using WEP; the article shows how easy it is to crack a WEP key using BackTrack.

My article was published in the March 2013 issue. The link to download/read the article is: http://www.hackinsight.org/articles.php?article_id=8



So take a look and let me know what you think :)

Microsoft Lync 2010 client / XP machines connectivity with Lync 2013 Server

Posted by Ahmed Nabil In | 1 comments»
After a successful implementation of Microsoft Lync 2013, we faced a problem with the legacy Windows XP machines still on the network that need to access the Lync 2013 server. The Lync 2013 client is only supported on Windows 7 and Windows 8, and the Lync web client does not support all features on XP; in particular, the audio/video conferencing features are greyed out.

As a workaround we tried the full Lync 2010 client on the Windows XP machines. However, when we tried to sign in with the Lync 2010 client, on Windows XP or even on Windows 7, we received the error below:


"Microsoft Lync 2010 is not a version that can be used to sign in to the server"

 
To allow backward compatibility for Lync 2010 clients on a Lync 2013 server, do the following:
 
  1. Open the Lync 2013 Control Panel.
  2. Click Clients - Client Version Policy.
  3. Click "Global Policy" - Edit - Show details.
  4. Double-click "UCCP".
  5. At the bottom, under Action, select "Allow" - OK.
  6. Back in the Global Policy settings, select OC version 4.0.7577.4103 and allow it in the same way (the default is Block).

 The below screen shot is provided for more elaboration.


 
 

Hopefully this should be helpful for anyone facing the same problem.
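Conceptually, the client version policy is just a numeric floor applied to the version the client reports at sign-in: anything at or above the allowed version gets in, anything below is blocked. A minimal sketch of that idea (the gating logic and function names here are illustrative, not Lync's actual implementation):

```python
def parse_version(version):
    """Turn a dotted version string like '4.0.7577.4103' into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def is_allowed(client_version, minimum="4.0.7577.4103"):
    """True if the client's version meets the hypothetical allowed floor."""
    return parse_version(client_version) >= parse_version(minimum)

print(is_allowed("4.0.7577.4103"))  # Lync 2010 at the floor -> True
print(is_allowed("3.5.6907.0"))     # older Office Communicator build -> False
```

Comparing tuples rather than raw strings matters: as strings, "4.0.10000.0" would sort below "4.0.7577.0".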


Microsoft Update List for Hyper-V

Posted by Ahmed Nabil In | 1 comments»

A lot of IT professionals are moving to Hyper-V and need to stay current with all Hyper-V hotfixes, updates, and service packs. The links below cover Hyper-V on both Windows Server 2012 and Windows Server 2008 R2. Some of these updates/fixes are intended to fix a specific problem, so do not apply them unless you are experiencing that issue.

1. Windows 2008 R2 Hyper-V

2. Windows 2012 Hyper-V
Thanks to the above wiki creators.

UAG 2010 File Access application fails to open/start after UAG 2010 SP3 implementation

Posted by Ahmed Nabil In | 1 comments»

After installing UAG 2010 SP3, all of the portal applications worked fine except for the File Access application. I tried several things, including removing the application and adding it back, but nothing worked until I asked my friend Ben Ari (UAG senior escalation engineer), as I thought the feature had been deprecated. It turned out to be a bug introduced by UAG 2010 SP3, and to fix it you need to do the following:
 
1. On the UAG server, navigate to the \Microsoft Forefront Unified Access Gateway\von\FileAccess folder under the UAG installation directory.

2. Edit the file "default.asp" (I took a backup before making any change).

3. On line 20, change the word cint to parseInt (you can also search the file; it appears only once). Make sure the "I" in "parseInt" is capitalized (check the screenshot below).

4. Save and activate your Forefront UAG configuration.

Note that if there is more than one UAG server in the array, this change must be applied manually on each server using the same steps above.
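Since the same one-word edit has to be repeated on every array member, a small script can apply it consistently. This is a hypothetical sketch of the manual steps above (back up the file, then replace the single occurrence of cint with parseInt), not an official tool:

```python
import shutil

def patch_default_asp(path):
    """Back up default.asp, then replace the single occurrence of 'cint'
    with 'parseInt' (case matters). Returns True if a change was made."""
    shutil.copy2(path, path + ".bak")  # keep a backup first, as in step 2
    with open(path, "r") as f:
        original = f.read()
    patched = original.replace("cint", "parseInt", 1)  # the token appears once
    if patched != original:
        with open(path, "w") as f:
            f.write(patched)
    return patched != original

# Usage (hypothetical default install path):
# patch_default_asp(r"C:\Program Files\Microsoft Forefront Unified Access Gateway"
#                   r"\von\FileAccess\default.asp")
```

Running it a second time is harmless: once the file already says parseInt, no further change is made.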

UAG 2010 Portal error 403.14 after applying UAG 2010 SP3

Posted by Ahmed Nabil In | 0 comments»
UAG 2010 SP3 was officially released on Feb 20, 2013, as promised by the UAG team, to resolve several issues and, mainly, to support Microsoft's new systems and applications (Windows 8, Server 2012, Exchange 2013, SharePoint 2013, etc.). Thanks, UAG team.

Download Link:  http://www.microsoft.com/en-us/download/details.aspx?id=36788

Release Notes: http://support.microsoft.com/kb/2744025


I will discuss my SP3 installation process and my tips for a successful implementation in another post. One of the main issues I faced after installing UAG 2010 SP3 was that all my users were unable to access the UAG portal page or any site published on the UAG (OWA, SharePoint, etc.), and the following error was displayed:

Forbidden Directory, Listing Denied Error code 403.14

Microsoft has a good KB article that deals with this error (http://support.microsoft.com/kb/961172); however, in my case I also needed to restart IIS afterwards. The steps to fix this issue are as follows:


  1. Open the Forefront UAG Management console on the UAG server.
  2. Expand the trunks under the HTTP and/or HTTPS connections.
  3. Right-click each trunk and select Disable.
  4. Save and activate the UAG configuration.
  5. Right-click the trunks again and select Enable.
  6. Save and activate the configuration again.
  7. Open the IIS Manager.
  8. Highlight your server name and, in the Actions pane, click Restart to restart IIS. This should do the trick.

Hopefully this should help anyone facing this problem after UAG SP3.

Microsoft UAG backup using DPM 2012

Posted by Ahmed Nabil In , | 0 comments»
I had been working on this issue for some time, and it kept failing with different kinds of errors; I consulted the Microsoft team as well, and it finally worked for me. I am using DPM 2012, but the process should not differ for an earlier DPM version such as 2010. The main blocking issue here is the TMG component on the UAG server. Remember that it is highly recommended not to touch the TMG configuration on a UAG server; however, this is one of the rare cases where the administrator needs to tweak some TMG settings. To enable DPM to back up UAG and install its agent, do the following (make sure to take a full backup of your TMG settings and rules first):



1. Ensure that File and Printer Sharing is enabled on the UAG internal network card.

2. In the TMG console, open Firewall Policy and, in the right pane, click Show System Policy Rules.

3. Disable system policy rule number 2 (Allow remote management from selected computers using MMC) by right-clicking the rule and editing the system policy. I am assuming the default TMG rules have not been modified before.

4. Disable system policy rule number 22 (Allow RPC from Forefront TMG to trusted servers).

5. In the right pane, in the Toolbox section, create a new user-defined protocol with the following parameters:

Primary connection: Type: TCP, Direction: Outbound, Port range: 135-135

Secondary connection: Type: TCP, Direction: Outbound, Port range: 1024-65535

6. Finally, create a new access rule and make sure to move it to the top: Allow - All outbound traffic except selected (choose RPC (all interfaces)) - From: the DPM server (create a computer object with the DPM IP address) - To: Local Host (the UAG server) - All users.
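The user-defined protocol in step 5 simply describes which outbound TCP connections the DPM agent traffic uses: the RPC endpoint mapper on port 135, plus the dynamic high ports for the actual RPC session. A small sketch modelling that definition (the data structure and names are mine, not TMG's):

```python
# Hypothetical model of the user-defined TMG protocol from step 5
DPM_AGENT_PROTOCOL = {
    "primary":   {"proto": "TCP", "direction": "outbound", "ports": range(135, 136)},
    "secondary": {"proto": "TCP", "direction": "outbound", "ports": range(1024, 65536)},
}

def port_matches(port, leg):
    """True if `port` falls in the given connection leg's allowed range."""
    return port in DPM_AGENT_PROTOCOL[leg]["ports"]

print(port_matches(135, "primary"))      # RPC endpoint mapper -> True
print(port_matches(50000, "secondary"))  # dynamic RPC high port -> True
print(port_matches(80, "secondary"))     # outside the defined range -> False
```

This is why both legs are needed: the agent first hits 135 to discover the endpoint, then reconnects on a port somewhere in the 1024-65535 range.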



Save the settings and ensure they are synced from the Monitoring tab. Now try to install the agent from DPM on the UAG server and take a simple test backup.