Howto: Disable a NIC when running Sysprep


Disabling a network card when sysprepping a Windows machine is easy.  Two things need to happen: 

1.  Add the following command to the [GuiRunOnce] section of your sysprep.inf file
 
Command0="C:\temp\disablenic.cmd"
 
2.  On the machine you are sysprepping, create a C:\temp\disablenic.cmd file that contains the following:
 
netsh interface set interface "Local Area Connection 2" DISABLED
 
Change the name of the interface you want disabled as needed.  To determine the names of all network interfaces on a system, run the following command:
 
netsh interface show interface
 
Proceed with sysprepping as normal. When the machine boots up, the specified network interface(s) will be disabled.

Network card configuration missing after P2V using VMware Converter


Last night I converted a physical Windows 2003 R2 server to a VMware virtual machine using VMware Converter Standalone version 4.0.1.  The entire process was extremely simple, requiring only four steps.  After the P2V conversion completed, the physical machine powered off and the newly created VM booted up.  Everything appeared to be normal until I realized I couldn’t RDP into the new VM.

I jumped on the server console via the Virtual Infrastructure client, and found that my VM was receiving an IP address from DHCP rather than the static address the physical server was configured with.  I attempted to assign the static IP to the NIC, and received a message that an existing NIC was already using that IP address.  No other NICs were visible in the Network Connections applet.

I immediately thought back to my post from earlier this summer titled “Fix: The IP address you have entered for this network adapter is already assigned to another adapter that is hidden from the Network Connections folder because it is not physically in the computer.”  That post details how to start Device Manager in a mode that shows hidden devices.  I followed the steps to remove the phantom NIC, then assigned the static IP address to the VM’s NIC, which allowed me to RDP into the server once again. 

The steps are:

  1. Click Start, click Run, type cmd.exe, and then press ENTER.
  2. Type set devmgr_show_nonpresent_devices=1, and then press ENTER.
  3. Type Start DEVMGMT.MSC, and then press ENTER.
  4. Click View, and then click Show Hidden Devices.
  5. Expand the Network adapters tree.
  6. Right-click the dimmed network adapter, and then click Uninstall.
Finally I configured the static IP on the NIC, and all was well.

Fix: The IP address you have entered for this network adapter is already assigned to another adapter that is hidden from the Network Connections folder because it is not physically in the computer


I brought up a snapshot of a Windows Server 2003 R2 guest today and could not login to the domain.  After further review I found the server had lost its static TCP/IP settings – both NICs were set to DHCP (they had previously been statically set).  When I attempted to add the TCP/IP addresses back to the NICs, I received the following error message:

 
“The IP address you have entered for this network adapter is already assigned to another adapter “Fast Ethernet Adapter #2”. “Fast Ethernet Adapter #2” is hidden from the Network Connections folder because it is not physically in the computer. If the same address is assigned to both adapters and they both become active, only one of them will use this address. This may result in incorrect system configuration. Do you want to enter a different IP address for this adapter in the list of IP addresses in the Advanced dialog box?”
 
Solutions: KB825826 outlines several potential fixes.  I ended up using Method #6 to remove the hidden network adapter.  To uninstall the ghosted network adapter from the registry, complete these steps:
  1. Click Start, click Run, type cmd.exe, and then press ENTER.
  2. Type set devmgr_show_nonpresent_devices=1, and then press ENTER.
  3. Type Start DEVMGMT.MSC, and then press ENTER.
  4. Click View, and then click Show Hidden Devices.
  5. Expand the Network adapters tree.
  6. Right-click the dimmed network adapter, and then click Uninstall.
Next I configured the static IP on the NIC and regained network connectivity.  A reboot was required in my case, only because services dependent on domain availability did not automatically start up.

Howto: Edit network card bindings in Windows Server 2008


Figuring out how to edit the order of NIC bindings on a Windows Server 2008 machine took quite a bit of Googling. It seems that you need to know a secret key combination to display the Advanced menu, where the option to edit the NIC bindings is located.

To edit the network card binding order in Windows Server 2008:

  1. Log in to the server with administrative credentials.
  2. Click Start > Control Panel > Network and Sharing Center.
  3. On the left-hand side, select Manage network connections.
  4. Press Alt+N to display the Advanced menu.
  5. Select Advanced Settings. On the Adapters and Bindings tab, highlight your NIC and use the arrows on the right-hand side to adjust its binding order.

You can also access the Network Connections screen directly by clicking Start > Run, typing ncpa.cpl, and pressing Enter.

Fix for make install / compiler issues with Intel e1000 NIC driver in SLES 10


How I was able to make and install the Intel e1000 NIC driver in SLES 10 Linux:

Steps 1 through 3 of the e1000-8.0.6.tar.gz readme file are simple enough to follow when building the Intel e1000 network card driver on SLES 10 SP2.  
 
1. Move the base driver tar file to the directory of your choice.  For example, /usr/local/src/e1000
 
2. From a terminal prompt, untar the archive:
 
    tar zxf e1000-8.0.6.tar.gz
 
3. Change to the driver src directory:
 
    cd e1000-8.0.6/src/
 
Step 4 was where I started having problems.
 
4.  make install
 
should have compiled the driver module.  Instead, I received the following error:
 
Linux kernel source not found in any of these locations:
*** Install the appropriate kernel development package, e.g.
*** kernel-devel, for building kernel modules and try again. Stop.
 
I opened YaST and searched for kernel-devel, but that package was not listed.  I did see a kernel-source package, which I installed.  I then ran make install again, and this time I received a different error message:
 
Makefile:131: *** Compiler not found.  Stop.
 
I went back into YaST, installed the gcc compiler, which added glibc-devel and libmudflap packages as dependencies, and ran make install once again.  This time it compiled successfully.
 
The binary was installed as /lib/modules/2.6.16.60-0.21-bigsmp/kernel/drivers/net/e1000/e1000.ko
 
5.  Make sure to remove any older existing drivers before loading the new driver:
 
rmmod e1000
 
6.  The module was then loaded using the following syntax:
 
insmod /lib/modules/2.6.16.60-0.21-bigsmp/kernel/drivers/net/e1000/e1000.ko
 
Once you assign an IP address, you should be able to use the interface.
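The two errors above boil down to a pre-flight check: the kernel source tree and a compiler must both be present before make install can succeed. Here is a minimal sketch of that check in POSIX shell; the function name check_prereqs and its ok/missing output format are my own invention, not part of the driver’s Makefile:

```shell
# Pre-flight check before building an out-of-tree driver such as e1000.
# Prints "ok" when both prerequisites exist, otherwise names what is missing.
check_prereqs() {
    ksrc=$1      # kernel build tree, e.g. /lib/modules/$(uname -r)/build
    cc=$2        # compiler to look for, e.g. gcc
    missing=""
    [ -d "$ksrc" ] || missing="$missing kernel-source"
    command -v "$cc" >/dev/null 2>&1 || missing="$missing compiler"
    if [ -z "$missing" ]; then
        echo "ok"
    else
        echo "missing:$missing"
    fi
}

# Example (on SLES 10, install kernel-source and gcc via YaST if this
# reports anything missing):
# check_prereqs "/lib/modules/$(uname -r)/build" gcc
```

Running this before make install would have flagged both missing packages up front instead of surfacing them one error at a time.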

Dell PowerEdge 1950 NIC teaming test results


I’ve completed testing of the NIC teaming on our new Dell PowerEdge 1950 servers.  I’m more than a little bit surprised by the results, which I’ll get to in a moment.  My initial assumptions were that the network adapters would perform in the following order, from best to worst performing: 

1)  Teamed Intel NICs
2)  Teamed Broadcom NICs
3)  Single Intel NIC
4)  Single Broadcom NIC
 
I tested each configuration by copying a large file or directory of files from server PO1 to server PO2.  Both servers booted from SAN and ran Windows 2003 with the latest patches and updates from our Patchlink server.  PO2 was cloned from PO1 after being sysprep’d.  The servers were configured identically, each plugged into the same module on the same HP Procurve 5304xl switch.  The switch was configured with 802.3ad link aggregation.
 
The NICs that were tested were:
 
1 quad port Intel VT 1000 gigabit NIC PCI-X
2 integrated Broadcom Netxtreme II BCM5708C gigabit NICs
 
The files I used to test were:
  • OM_5.4.1_SUU_A00.iso, a 1.85 GB ISO image file
  • gw700.iso, a 689MB ISO image file
  • A 2.23 GB directory of 509 text files, each averaging 5MB in size
The methodology I used to test with was:
  • Install the NIC drivers and configure the team’s static IP, subnet mask, default gateway, and DNS on each server.  Default team settings were used, including TCP Offload Engine (TOE), Large Send Offload (LSO), and Checksum Offload (CO)
  • Disable all unused NICs
  • Restart both servers
  • Copy the first test file from PO1 to PO2 using the following syntax:
  copy filename \\po2\c$\temp\test\
  • Time how many seconds it takes to copy the file from PO1 to PO2
  • Delete the copied file from PO2
  • Copy the test file again from PO1 to PO2 until 5 passes are completed
  • Repeat the process for the next test file(s)
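The copy-and-time loop in the methodology above can be sketched in shell form. This is a POSIX analogue of what I did by hand with the Windows copy command; time_copies and its arguments are placeholders, and the real destination was the \\po2\c$\temp\test\ share:

```shell
# Copy a file to a destination directory N times, timing each pass
# and deleting the copy between passes, mirroring the manual test above.
time_copies() {
    src=$1      # file to copy (e.g. the 1.85 GB ISO)
    dest=$2     # destination directory (stand-in for \\po2\c$\temp\test\)
    passes=$3   # number of passes (5 in the tests above)
    i=1
    while [ "$i" -le "$passes" ]; do
        start=$(date +%s)
        cp "$src" "$dest"
        end=$(date +%s)
        echo "pass $i: $((end - start)) seconds"
        rm -f "$dest/$(basename "$src")"
        i=$((i + 1))
    done
}
```

Scripting the loop keeps the timing consistent across passes, which matters when the per-pass differences are only a few seconds.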
The following configurations were tested:
 
1) Single Intel NIC to Single Intel NIC using driver dated 6/13/08
 
2) Single Broadcom NIC to Single Broadcom NIC using driver dated 2/21/08
 
3) Single Intel NIC using driver dated 6/13/08 to Single Broadcom NIC using driver dated 2/21/08
 
4) Teamed Intel NIC to Teamed Intel NIC using driver dated 6/13/08
 
5) Teamed Intel NIC using driver dated 8/23/07 to Teamed Intel NIC using driver dated 6/13/08
 
6) Teamed Intel NIC to Teamed Intel NIC using driver dated 8/23/07
 
7) Teamed Broadcom NIC to Teamed Broadcom NIC using driver dated 2/21/08
 
You can see the test results in the attached document, but to summarize:
 
1)  The teamed Intel NICs performed the worst – even worse than using single Intel NICs
2)  The single Broadcom NIC outperformed the single Intel NIC
3)  The teamed Broadcom NICs were the highest performing
 
I have no clue why the results are what they are.  In the past, I’ve experienced horrendous performance with the Broadcoms, and great performance from the Intels.  Does anyone have any idea as to why the teamed Intel NICs would perform so poorly?
 
The only real difference I could see was that when copying files, Windows Task Manager showed Network Utilization at ~51-56% for the Broadcom tests, and ~16-17% for the Intel tests.  Why, I’m not sure.
 
The data in the spreadsheet shows two figures: the actual average, which is the average number of seconds it took to copy a file over five passes, and what I call the adjusted average.  The adjusted average comes from a stats class I took long ago, which taught that it’s a best practice to disregard the lowest and highest values in your sample.  Either way you look at it, the findings are the same:  the Intel performance is horrible, while the Broadcoms perform great.
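The adjusted average described above can be sketched as a small shell function (the name adjusted_average is mine): sort the samples, drop the lowest and highest, and average the rest.

```shell
# "Adjusted average": discard the lowest and highest samples,
# then average the remaining values.
adjusted_average() {
    printf '%s\n' "$@" |
        sort -n |
        sed '1d;$d' |
        awk '{ sum += $1; n++ } END { printf "%.2f\n", sum / n }'
}

# adjusted_average 40 38 39 55 37  -> drops 37 and 55, averages 38, 39, and 40
```

Trimming the extremes this way keeps one unusually slow or fast pass from skewing a five-sample average.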
 
Based upon these tests I’m going to recommend going with the teamed Broadcom NICs in the new server deployment.