Add support for Broadcom NetXtreme II GigE.  Jumbo buffer support
is missing currently, which will be added later.

author     Sepherosa Ziehau <sephe@dragonflybsd.org>
           Sat, 26 May 2007 08:50:49 +0000 (08:50 +0000)
committer  Sepherosa Ziehau <sephe@dragonflybsd.org>
           Sat, 26 May 2007 08:50:49 +0000 (08:50 +0000)

Thanks to David Christensen <davidch@broadcom.com> for sending us two
sample NICs.
Thanks to dillon@ for providing a blazing fast machine and environment
to test the driver.
Many thanks also to Walter <wa1ter@myrealbox.com>, who contacted
Broadcom for me :)

Obtained-from: FreeBSD (w/ modification)

13 files changed:
share/man/man4/Makefile
share/man/man4/bce.4 [new file with mode: 0644]
sys/conf/files
sys/conf/options
sys/config/GENERIC
sys/config/LINT
sys/config/NATA
sys/dev/netif/Makefile
sys/dev/netif/bce/Makefile [new file with mode: 0644]
sys/dev/netif/bce/if_bce.c [new file with mode: 0644]
sys/dev/netif/bce/if_bcefw.h [new file with mode: 0644]
sys/dev/netif/bce/if_bcereg.h [new file with mode: 0644]
sys/dev/netif/mii_layer/brgphy.c

diff --git a/share/man/man4/Makefile b/share/man/man4/Makefile
index 4b9fcba..a2bfd1a 100644
@@ -1,6 +1,6 @@
 #      @(#)Makefile    8.1 (Berkeley) 6/18/93
 # $FreeBSD: src/share/man/man4/Makefile,v 1.83.2.66 2003/06/04 17:10:30 sam Exp $
-# $DragonFly: src/share/man/man4/Makefile,v 1.52 2007/04/01 13:19:49 swildner Exp $
+# $DragonFly: src/share/man/man4/Makefile,v 1.53 2007/05/26 08:50:49 sephe Exp $
 
 MAN=   aac.4 \
        acpi.4 \
@@ -26,6 +26,7 @@ MAN=  aac.4 \
        aue.4 \
        awi.4 \
        axe.4 \
+       bce.4 \
        bfe.4 \
        bge.4 \
        bktr.4 \
diff --git a/share/man/man4/bce.4 b/share/man/man4/bce.4
new file mode 100644
index 0000000..c7368d7
--- /dev/null
@@ -0,0 +1,295 @@
+.\" Copyright (c) 2006 Broadcom Corporation
+.\"  David Christensen <davidch@broadcom.com>.  All rights reserved.
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions
+.\" are met:
+.\"
+.\" 1. Redistributions of source code must retain the above copyright
+.\"    notice, this list of conditions and the following disclaimer.
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\"    notice, this list of conditions and the following disclaimer in the
+.\"    documentation and/or other materials provided with the distribution.
+.\" 3. Neither the name of Broadcom Corporation nor the name of its contributors
+.\"    may be used to endorse or promote products derived from this software
+.\"    without specific prior written consent.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS'
+.\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
+.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+.\" THE POSSIBILITY OF SUCH DAMAGE.
+.\"
+.\" $FreeBSD: src/share/man/man4/bce.4,v 1.7 2007/02/09 18:26:13 brueffer Exp $
+.\" $DragonFly: src/share/man/man4/bce.4,v 1.1 2007/05/26 08:50:49 sephe Exp $
+.\"
+.Dd May 26, 2007
+.Dt BCE 4
+.Os
+.Sh NAME
+.Nm bce
+.Nd "Broadcom NetXtreme II (BCM5706/BCM5708) PCI/PCIe Gigabit Ethernet adapter driver"
+.Sh SYNOPSIS
+To compile this driver into the kernel,
+place the following lines in your
+kernel configuration file:
+.Bd -ragged -offset indent
+.Cd "device miibus"
+.Cd "device bce"
+.Ed
+.Pp
+Alternatively, to load the driver as a
+module at boot time, place the following line in
+.Xr loader.conf 5 :
+.Bd -literal -offset indent
+if_bce_load="YES"
+.Ed
+.Sh DESCRIPTION
+The
+.Nm
+driver supports Broadcom's NetXtreme II product family, including the
+BCM5706 and BCM5708 Ethernet controllers.
+.Pp
+The NetXtreme II product family is composed of various Converged NIC (or CNIC)
+Ethernet controllers which support a TCP Offload Engine (TOE), Remote DMA (RDMA),
+and iSCSI acceleration, in addition to standard L2 Ethernet traffic, all on the
+same controller.
+The following features are supported in the
+.Nm
+driver under
+.Dx :
+.Pp
+.Bl -item -offset indent -compact
+.It
+.\"IP/TCP/UDP checksum offload
+TCP/UDP checksum offload
+.\".It
+.\"Jumbo frames (up to 9022 bytes)
+.It
+VLAN tag stripping
+.It
+Interrupt coalescing
+.It
+10/100/1000Mbps operation in full-duplex mode
+.It
+10/100Mbps operation in half-duplex mode
+.El
+.Pp
+The
+.Nm
+driver supports the following media types:
+.Bl -tag -width ".Cm 10baseT/UTP"
+.It Cm autoselect
+Enable autoselection of the media type and options.
+The user can manually override
+the autoselected mode by adding media options to
+.Xr rc.conf 5 .
+.It Cm 10baseT/UTP
+Set 10Mbps operation.
+The
+.Xr ifconfig 8
+.Cm mediaopt
+option can also be used to select either
+.Cm full-duplex
+or
+.Cm half-duplex
+modes.
+.It Cm 100baseTX
+Set 100Mbps (Fast Ethernet) operation.
+The
+.Xr ifconfig 8
+.Cm mediaopt
+option can also be used to select either
+.Cm full-duplex
+or
+.Cm half-duplex
+modes.
+.It Cm 1000baseT
+Set 1000baseT operation over twisted pair.
+Only
+.Cm full-duplex
+mode is supported.
+.El
+.Pp
+The
+.Nm
+driver supports the following media options:
+.Bl -tag -width ".Cm full-duplex"
+.It Cm full-duplex
+Force full duplex operation.
+.It Cm half-duplex
+Force half duplex operation.
+.El
+.Pp
+For more information on configuring this device, see
+.Xr ifconfig 8 .
+.Sh HARDWARE
+The
+.Nm
+driver provides support for various NICs based on the Broadcom NetXtreme II
+family of Gigabit Ethernet controllers, including the
+following:
+.Pp
+.Bl -bullet -compact
+.It
+HP NC370T Multifunction Gigabit Server Adapter
+.It
+HP NC370i Multifunction Gigabit Server Adapter
+.El
+.Sh DIAGNOSTICS
+.Bl -diag
+.It "bce%d: PCI memory allocation failed!"
+The driver has encountered a fatal initialization error.
+.It "bce%d: PCI map interrupt failed!"
+The driver has encountered a fatal initialization error.
+.It "bce%d: Unsupported controller revision (%c%d)"
+The driver does not support the controller revision in use.
+.It "bce%d: Controller initialization failed!"
+The driver has encountered a fatal initialization error.
+.It "bce%d: NVRAM test failed!"
+The driver could not access the controller NVRAM correctly.
+.It "bce%d: DMA resource allocation failed!"
+The driver could not allocate DMA memory to set up the controller's
+host memory data structures.
+.It "bce%d: Interface allocation failed!"
+The driver could not create a network interface for the controller.
+.It "bce%d: PHY probe failed!"
+The driver could not access the PHY used by the controller.
+.It "bce%d: Failed to setup IRQ!"
+The driver could not initialize the IRQ handler.
+.It "bce%d: Error: PHY read timeout!"
+The driver could not read a PHY register before the timeout period expired.
+.It "bce%d: PHY write timeout!"
+The driver could not write to the PHY register because a timeout occurred.
+.It "bce%d: Timeout error reading NVRAM at offset 0x%08X!"
+The driver could not read from NVRAM because a timeout occurred.
+.It "bce%d: Unknown Flash NVRAM found!"
+The driver does not recognize the NVRAM device being used and therefore
+cannot access it correctly.
+.It "bce%d: Invalid NVRAM magic value!"
+The driver cannot read NVRAM or the NVRAM is corrupt.
+.It "bce%d: Invalid Manufacturing Information NVRAM CRC!"
+The driver cannot read NVRAM or the NVRAM is corrupt.
+.It "bce%d: Invalid Feature Configuration Information NVRAM CRC!"
+The driver cannot read NVRAM or the NVRAM is corrupt.
+.It "bce%d: DMA mapping error!"
+The driver was unable to map memory into DMA addressable space required
+by the controller.
+.It "bce%d: Could not allocate parent DMA tag!"
+The driver could not allocate a PCI compatible DMA tag.
+.It "bce%d: Could not allocate status block DMA tag!"
+The driver could not allocate a DMA tag for the controller's
+status block.
+.It "bce%d: Could not allocate status block DMA memory!"
+The driver could not allocate DMA addressable memory for the controller's
+status block.
+.It "bce_d: Could not map status block DMA memory!"
+The driver could not map the status block memory into the controller's DMA
+address space.
+.It "bce%d: Could not allocate statistics block DMA tag!"
+The driver could not allocate a DMA tag for the controller's
+statistics block.
+.It "bce%d: Could not allocate statistics block DMA memory!"
+The driver could not allocate DMA addressable memory for the controller's
+statistics block.
+.It "bce%d: Could not map statistics block DMA memory!"
+The driver could not map the statistics block memory into the controller's DMA
+address space.
+.It "bce%d: Could not allocate TX descriptor chain DMA tag!"
+The driver could not allocate a DMA tag for the controller's
+TX chain.
+.It "bce%d: Could not allocate TX descriptor chain DMA memory!
+The driver could not allocate DMA addressable memory for the controller's
+TX chain.
+.It "bce%d: Could not map TX descriptor chain DMA memory!"
+The driver could not map the TX descriptor chain memory into the controller's DMA
+address space.
+.It "bce%d: Could not allocate TX mbuf DMA tag!"
+The driver could not allocate a DMA tag for the controller's
+TX mbuf memory.
+.It "bce%d: Unable to create TX mbuf DMA map!"
+The driver could not map the TX mbuf memory into the controller's DMA
+address space.
+.It "bce%d: Could not allocate RX descriptor chain DMA tag!"
+The driver could not allocate a DMA tag for the controller's
+RX chain.
+.It "bce%d: Could not allocate RX descriptor chain "
+The driver could not allocate DMA addressable memory for the controller's
+RX chain.
+.It "bce%d: Could not map RX descriptor chain DMA memory!"
+The driver could not map the RX descriptor chain memory into the controller's DMA
+address space.
+.It "bce%d: Could not allocate RX mbuf DMA tag!"
+The driver could not allocate a DMA tag for the controller's
+RX mbuf memory.
+.It "bce%d: Unable to create RX mbuf DMA map!"
+The driver could not map the RX mbuf memory into the controller's DMA
+address space.
+.It "bce%d: Firmware synchronization timeout!"
+The driver was not able to synchronize with the firmware running on the
+controller.
+The firmware may be stopped or hung.
+.It "bce%d: Invalid Ethernet address!"
+The driver was not able to read a valid Ethernet MAC address from NVRAM.
+.It "bce%d: Reset failed!"
+The driver has encountered a fatal initialization error.
+.It "bce%d: Byte swap is incorrect!"
+The driver has encountered a fatal initialization error.
+Contact the author
+with details of the CPU architecture and system chipset in use.
+.It "bce%d: Firmware did not complete initialization!"
+The driver has encountered a fatal initialization error.
+.It "bce%d: Bootcode not running!"
+The driver has encountered a fatal initialization error.
+.It "bce%d: Error mapping mbuf into RX chain!"
+The driver could not map a RX mbuf into DMA addressable memory.
+.It "bce%d: Error filling RX chain: rx_bd[0x%04X]!"
+The driver was unable to allocate enough mbufs to fill the RX chain
+during initialization.
+Try increasing the number of mbufs available in
+the system or increase system memory.
+.\"or if using jumbo frames, make sure enough 9KB mbufs are available.
+.It "bce%d: Failed to allocate new mbuf, incoming frame dropped!"
+The driver was unable to allocate a new mbuf for the RX chain and reused
+the mbuf for the received frame, dropping the incoming frame in the process.
+Try increasing the number of mbufs available in the system or increase system
+memory.
+.It "bce%d: Controller reset failed!"
+A fatal initialization error has occurred.
+.It "bce%d: Controller initialization failed!"
+A fatal initialization error has occurred.
+.It "bce%d: Block initialization failed!"
+A fatal initialization error has occurred.
+.It "bce%d: Error mapping mbuf into TX chain!"
+The driver could not map a TX mbuf into DMA addressable memory.
+.It "bce%d: Watchdog timeout occurred, resetting!"
+The device has stopped responding to the network, there is a problem
+with the cable connection, or a driver logic problem has occurred.
+.It "bce%d: Fatal attention detected: 0x%08X!"
+A controller hardware failure has occurred.
+If the problem continues, replace the controller.
+.El
+.Sh SEE ALSO
+.Xr altq 4 ,
+.Xr arp 4 ,
+.Xr miibus 4 ,
+.Xr netintro 4 ,
+.Xr ng_ether 4 ,
+.Xr vlan 4 ,
+.Xr ifconfig 8
+.Sh HISTORY
+The
+.Nm
+device driver first appeared in
+.Fx 6.1 .
+.Sh AUTHORS
+The
+.Nm
+driver was written by
+.An David Christensen Aq davidch@broadcom.com .
diff --git a/sys/conf/files b/sys/conf/files
index 92cd5b7..a4ee45d 100644
@@ -1,5 +1,5 @@
 # $FreeBSD: src/sys/conf/files,v 1.340.2.137 2003/06/04 17:10:30 sam Exp $
-# $DragonFly: src/sys/conf/files,v 1.159 2007/04/29 01:29:30 dillon Exp $
+# $DragonFly: src/sys/conf/files,v 1.160 2007/05/26 08:50:49 sephe Exp $
 #
 # The long compile-with and dependency lines are required because of
 # limitations in config: backslash-newline doesn't work in strings, and
@@ -179,6 +179,7 @@ dev/disk/aic7xxx/ahd_pci.c          optional ahd pci
 dev/disk/aic7xxx/aic79xx.c             optional ahd pci
 dev/disk/aic7xxx/aic79xx_osm.c         optional ahd pci
 dev/disk/aic7xxx/aic79xx_pci.c         optional ahd pci
+dev/netif/bce/if_bce.c                 optional bce
 dev/netif/bfe/if_bfe.c                 optional bfe
 dev/netif/bge/if_bge.c                 optional bge
 dev/disk/buslogic/bt.c                 optional bt
diff --git a/sys/conf/options b/sys/conf/options
index 184ec98..655625b 100644
@@ -1,5 +1,5 @@
 # $FreeBSD: src/sys/conf/options,v 1.191.2.53 2003/06/04 17:56:58 sam Exp $
-# $DragonFly: src/sys/conf/options,v 1.64 2007/04/26 02:10:57 dillon Exp $
+# $DragonFly: src/sys/conf/options,v 1.65 2007/05/26 08:50:49 sephe Exp $
 #
 #        On the handling of kernel options
 #
@@ -512,6 +512,10 @@ KTR_VERBOSE                        opt_ktr.h
 KTR_GIANT_CONTENTION           opt_ktr.h
 KTR_SPIN_CONTENTION            opt_ktr.h
 
+# bce driver
+BCE_DEBUG              opt_bce.h
+BCE_NVRAM_WRITE_SUPPORT        opt_bce.h
+
 # ed driver
 ED_NO_MIIBUS           opt_ed.h
 
diff --git a/sys/config/GENERIC b/sys/config/GENERIC
index 65e368b..40cee8b 100644
@@ -4,7 +4,7 @@
 # Check the LINT configuration file in sys/config, for an
 # exhaustive list of options.
 #
-# $DragonFly: src/sys/config/GENERIC,v 1.49 2007/04/01 13:59:41 sephe Exp $
+# $DragonFly: src/sys/config/GENERIC,v 1.50 2007/05/26 08:50:49 sephe Exp $
 
 platform       pc32
 machine                i386
@@ -192,6 +192,7 @@ device              vx              # 3Com 3c590, 3c595 (``Vortex'')
 # PCI Ethernet NICs that use the common MII bus controller code.
 # NOTE: Be sure to keep the 'device miibus' line in order to use these NICs!
 device         miibus          # MII bus support
+device         bce             # Broadcom NetXtreme II Gigabit Ethernet
 device         bfe             # Broadcom BCM440x 10/100 Ethernet
 device         dc              # DEC/Intel 21143 and various workalikes
 device         fxp             # Intel EtherExpress PRO/100B (82557, 82558)
diff --git a/sys/config/LINT b/sys/config/LINT
index c74deac..58adddd 100644
@@ -3,7 +3,7 @@
 #      as much of the source tree as it can.
 #
 # $FreeBSD: src/sys/i386/conf/LINT,v 1.749.2.144 2003/06/04 17:56:59 sam Exp $
-# $DragonFly: src/sys/config/LINT,v 1.114 2007/05/23 17:07:55 swildner Exp $
+# $DragonFly: src/sys/config/LINT,v 1.115 2007/05/26 08:50:49 sephe Exp $
 #
 # NB: You probably don't want to try running a kernel built from this
 # file.  Instead, you should start from GENERIC, and add options from
@@ -1961,6 +1961,7 @@ device            trm             # Tekram DC395U/UW/F and DC315U
 device         miibus
 
 # PCI Ethernet NICs that use the common MII bus controller code.
+device         bce             # Broadcom NetXtreme II Gigabit Ethernet
 device         bfe             # Broadcom BCM440x 10/100 Ethernet
 device         dc              # DEC/Intel 21143 and various workalikes
 device         fxp             # Intel EtherExpress PRO/100B (82557, 82558)
@@ -2698,6 +2699,8 @@ options   DEBUG_CRIT_SECTIONS
 options        DEBUG_INTERRUPTS
 options        DEVICE_SYSCTLS
 #options       DISABLE_PSE
+options                BCE_DEBUG
+options                BCE_NVRAM_WRITE_SUPPORT
 #options       ED_NO_MIIBUS
 options        ENABLE_ALART
 options        FB_DEBUG=2
diff --git a/sys/config/NATA b/sys/config/NATA
index 40c2ee6..ee62c0a 100644
@@ -6,7 +6,7 @@
 # Check the LINT configuration file in sys/config, for an
 # exhaustive list of options.
 #
-# $DragonFly: src/sys/config/Attic/NATA,v 1.3 2007/05/18 02:07:57 dillon Exp $
+# $DragonFly: src/sys/config/Attic/NATA,v 1.4 2007/05/26 08:50:49 sephe Exp $
 
 platform       pc32
 machine                i386
@@ -194,6 +194,7 @@ device              vx              # 3Com 3c590, 3c595 (``Vortex'')
 # PCI Ethernet NICs that use the common MII bus controller code.
 # NOTE: Be sure to keep the 'device miibus' line in order to use these NICs!
 device         miibus          # MII bus support
+device         bce             # Broadcom NetXtreme II Gigabit Ethernet
 device         bfe             # Broadcom BCM440x 10/100 Ethernet
 device         dc              # DEC/Intel 21143 and various workalikes
 device         fxp             # Intel EtherExpress PRO/100B (82557, 82558)
diff --git a/sys/dev/netif/Makefile b/sys/dev/netif/Makefile
index 7119422..7e476ff 100644
@@ -1,8 +1,8 @@
-# $DragonFly: src/sys/dev/netif/Makefile,v 1.24 2006/12/10 04:59:38 sephe Exp $
+# $DragonFly: src/sys/dev/netif/Makefile,v 1.25 2007/05/26 08:50:49 sephe Exp $
 #
 
-SUBDIR= an acx ar ath aue axe bfe bge cue dc ed em ep fwe fxp gx ipw iwi kue \
-       lge lnc mii_layer my nfe nge nv pcn ral ray re rl rtw rue rum sbni \
-       sbsh sf sis sk sr ste stge ti tl tx txp ural vge vr vx wb wi xe xl
+SUBDIR= an acx ar ath aue axe bce bfe bge cue dc ed em ep fwe fxp gx ipw iwi \
+       kue lge lnc mii_layer my nfe nge nv pcn ral ray re rl rtw rue rum \
+       sbni sbsh sf sis sk sr ste stge ti tl tx txp ural vge vr vx wb wi xe xl
 
 .include <bsd.subdir.mk>
diff --git a/sys/dev/netif/bce/Makefile b/sys/dev/netif/bce/Makefile
new file mode 100644
index 0000000..710d75d
--- /dev/null
@@ -0,0 +1,18 @@
+# $DragonFly: src/sys/dev/netif/bce/Makefile,v 1.1 2007/05/26 08:50:49 sephe Exp $
+
+KMOD=  if_bce
+SRCS=  if_bce.c
+SRCS+= miibus_if.h device_if.h bus_if.h pci_if.h
+SRCS+= opt_bce.h opt_polling.h
+
+KMODDEPS = miibus
+
+#ifndef BUILDING_WITH_KERNEL
+opt_polling.h:
+       echo '#define DEVICE_POLLING 1' > ${.OBJDIR}/${.TARGET}
+
+opt_bce.h:
+       touch ${.OBJDIR}/${.TARGET}
+#endif
+
+.include <bsd.kmod.mk>
diff --git a/sys/dev/netif/bce/if_bce.c b/sys/dev/netif/bce/if_bce.c
new file mode 100644
index 0000000..0517cfd
--- /dev/null
@@ -0,0 +1,7136 @@
+/*-
+ * Copyright (c) 2006-2007 Broadcom Corporation
+ *     David Christensen <davidch@broadcom.com>.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of Broadcom Corporation nor the name of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written consent.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS'
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $FreeBSD: src/sys/dev/bce/if_bce.c,v 1.31 2007/05/16 23:34:11 davidch Exp $
+ * $DragonFly: src/sys/dev/netif/bce/if_bce.c,v 1.1 2007/05/26 08:50:49 sephe Exp $
+ */
+
+/*
+ * The following controllers are supported by this driver:
+ *   BCM5706C A2, A3
+ *   BCM5708C B1, B2
+ *
+ * The following controllers are not supported by this driver:
+ *   BCM5706C A0, A1
+ *   BCM5706S A0, A1, A2, A3
+ *   BCM5708C A0, B0
+ *   BCM5708S A0, B0, B1, B2
+ */
+
+#include "opt_bce.h"
+#include "opt_polling.h"
+
+#include <sys/param.h>
+#include <sys/bus.h>
+#include <sys/endian.h>
+#include <sys/kernel.h>
+#include <sys/mbuf.h>
+#include <sys/malloc.h>
+#include <sys/queue.h>
+#ifdef BCE_DEBUG
+#include <sys/random.h>
+#endif
+#include <sys/rman.h>
+#include <sys/serialize.h>
+#include <sys/socket.h>
+#include <sys/sockio.h>
+#include <sys/sysctl.h>
+
+#include <net/bpf.h>
+#include <net/ethernet.h>
+#include <net/if.h>
+#include <net/if_arp.h>
+#include <net/if_dl.h>
+#include <net/if_media.h>
+#include <net/if_types.h>
+#include <net/ifq_var.h>
+#include <net/vlan/if_vlan_var.h>
+
+#include <dev/netif/mii_layer/mii.h>
+#include <dev/netif/mii_layer/miivar.h>
+
+#include <bus/pci/pcireg.h>
+#include <bus/pci/pcivar.h>
+
+#include "miibus_if.h"
+
+#include "if_bcereg.h"
+#include "if_bcefw.h"
+
+/****************************************************************************/
+/* BCE Debug Options                                                        */
+/****************************************************************************/
+#ifdef BCE_DEBUG
+
+static uint32_t        bce_debug = BCE_WARN;
+
+/*
+ *          0 = Never             
+ *          1 = 1 in 2,147,483,648
+ *        256 = 1 in     8,388,608
+ *       2048 = 1 in     1,048,576
+ *      65536 = 1 in        32,768
+ *    1048576 = 1 in         2,048
+ *  268435456 = 1 in             8
+ *  536870912 = 1 in             4
+ * 1073741824 = 1 in             2
+ *
+ * bce_debug_l2fhdr_status_check:
+ *     How often the l2_fhdr frame error check will fail.
+ *
+ * bce_debug_unexpected_attention:
+ *     How often the unexpected attention check will fail.
+ *
+ * bce_debug_mbuf_allocation_failure:
+ *     How often to simulate an mbuf allocation failure.
+ *
+ * bce_debug_dma_map_addr_failure:
+ *     How often to simulate a DMA mapping failure.
+ *
+ * bce_debug_bootcode_running_failure:
+ *     How often to simulate a bootcode failure.
+ */
+static int     bce_debug_l2fhdr_status_check = 0;
+static int     bce_debug_unexpected_attention = 0;
+static int     bce_debug_mbuf_allocation_failure = 0;
+static int     bce_debug_dma_map_addr_failure = 0;
+static int     bce_debug_bootcode_running_failure = 0;
+
+#endif /* BCE_DEBUG */
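
As a worked reading of the table above: the listed rates match a threshold
test against a uniform 31-bit random draw, since 1048576 / 2^31 is 1 in 2048
and 2^30 / 2^31 is 1 in 2.  The following standalone sketch illustrates that
mapping under this assumption only; it is not the driver's own debug macro,
and the helper name is hypothetical.

    /* Illustrative only: threshold test against a 31-bit random value. */
    #include <stdint.h>
    #include <stdlib.h>     /* arc4random() on BSD systems */

    static int
    bce_dbg_simulate_failure(uint32_t knob)     /* hypothetical helper */
    {
            uint32_t rnd = arc4random() & 0x7fffffffU;  /* [0, 2^31) */

            /* knob == 0 never fails; knob == 2^30 fails about 1 in 2. */
            return (knob != 0 && rnd < knob);
    }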
+
+
+/****************************************************************************/
+/* PCI Device ID Table                                                      */
+/*                                                                          */
+/* Used by bce_probe() to identify the devices supported by this driver.    */
+/****************************************************************************/
+#define BCE_DEVDESC_MAX                64
+
+static struct bce_type bce_devs[] = {
+       /* BCM5706C Controllers and OEM boards. */
+       { BRCM_VENDORID, BRCM_DEVICEID_BCM5706,  HP_VENDORID, 0x3101,
+               "HP NC370T Multifunction Gigabit Server Adapter" },
+       { BRCM_VENDORID, BRCM_DEVICEID_BCM5706,  HP_VENDORID, 0x3106,
+               "HP NC370i Multifunction Gigabit Server Adapter" },
+       { BRCM_VENDORID, BRCM_DEVICEID_BCM5706,  PCI_ANY_ID,  PCI_ANY_ID,
+               "Broadcom NetXtreme II BCM5706 1000Base-T" },
+
+       /* BCM5706S controllers and OEM boards. */
+       { BRCM_VENDORID, BRCM_DEVICEID_BCM5706S, HP_VENDORID, 0x3102,
+               "HP NC370F Multifunction Gigabit Server Adapter" },
+       { BRCM_VENDORID, BRCM_DEVICEID_BCM5706S, PCI_ANY_ID,  PCI_ANY_ID,
+               "Broadcom NetXtreme II BCM5706 1000Base-SX" },
+
+       /* BCM5708C controllers and OEM boards. */
+       { BRCM_VENDORID, BRCM_DEVICEID_BCM5708,  PCI_ANY_ID,  PCI_ANY_ID,
+               "Broadcom NetXtreme II BCM5708 1000Base-T" },
+
+       /* BCM5708S controllers and OEM boards. */
+       { BRCM_VENDORID, BRCM_DEVICEID_BCM5708S,  PCI_ANY_ID,  PCI_ANY_ID,
+               "Broadcom NetXtreme II BCM5708S 1000Base-SX" },
+       { 0, 0, 0, 0, NULL }
+};
+
+
+/****************************************************************************/
+/* Supported Flash NVRAM device data.                                       */
+/****************************************************************************/
+static const struct flash_spec flash_table[] =
+{
+       /* Slow EEPROM */
+       {0x00000000, 0x40830380, 0x009f0081, 0xa184a053, 0xaf000400,
+        1, SEEPROM_PAGE_BITS, SEEPROM_PAGE_SIZE,
+        SEEPROM_BYTE_ADDR_MASK, SEEPROM_TOTAL_SIZE,
+        "EEPROM - slow"},
+       /* Expansion entry 0001 */
+       {0x08000002, 0x4b808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, 0,
+        "Entry 0001"},
+       /* Saifun SA25F010 (non-buffered flash) */
+       /* strap, cfg1, & write1 need updates */
+       {0x04000001, 0x47808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, SAIFUN_FLASH_BASE_TOTAL_SIZE*2,
+        "Non-buffered flash (128kB)"},
+       /* Saifun SA25F020 (non-buffered flash) */
+       /* strap, cfg1, & write1 need updates */
+       {0x0c000003, 0x4f808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, SAIFUN_FLASH_BASE_TOTAL_SIZE*4,
+        "Non-buffered flash (256kB)"},
+       /* Expansion entry 0100 */
+       {0x11000000, 0x53808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, 0,
+        "Entry 0100"},
+       /* Entry 0101: ST M45PE10 (non-buffered flash, TetonII B0) */
+       {0x19000002, 0x5b808201, 0x000500db, 0x03840253, 0xaf020406,
+        0, ST_MICRO_FLASH_PAGE_BITS, ST_MICRO_FLASH_PAGE_SIZE,
+        ST_MICRO_FLASH_BYTE_ADDR_MASK, ST_MICRO_FLASH_BASE_TOTAL_SIZE*2,
+        "Entry 0101: ST M45PE10 (128kB non-buffered)"},
+       /* Entry 0110: ST M45PE20 (non-buffered flash)*/
+       {0x15000001, 0x57808201, 0x000500db, 0x03840253, 0xaf020406,
+        0, ST_MICRO_FLASH_PAGE_BITS, ST_MICRO_FLASH_PAGE_SIZE,
+        ST_MICRO_FLASH_BYTE_ADDR_MASK, ST_MICRO_FLASH_BASE_TOTAL_SIZE*4,
+        "Entry 0110: ST M45PE20 (256kB non-buffered)"},
+       /* Saifun SA25F005 (non-buffered flash) */
+       /* strap, cfg1, & write1 need updates */
+       {0x1d000003, 0x5f808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, SAIFUN_FLASH_BASE_TOTAL_SIZE,
+        "Non-buffered flash (64kB)"},
+       /* Fast EEPROM */
+       {0x22000000, 0x62808380, 0x009f0081, 0xa184a053, 0xaf000400,
+        1, SEEPROM_PAGE_BITS, SEEPROM_PAGE_SIZE,
+        SEEPROM_BYTE_ADDR_MASK, SEEPROM_TOTAL_SIZE,
+        "EEPROM - fast"},
+       /* Expansion entry 1001 */
+       {0x2a000002, 0x6b808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, 0,
+        "Entry 1001"},
+       /* Expansion entry 1010 */
+       {0x26000001, 0x67808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, 0,
+        "Entry 1010"},
+       /* ATMEL AT45DB011B (buffered flash) */
+       {0x2e000003, 0x6e808273, 0x00570081, 0x68848353, 0xaf000400,
+        1, BUFFERED_FLASH_PAGE_BITS, BUFFERED_FLASH_PAGE_SIZE,
+        BUFFERED_FLASH_BYTE_ADDR_MASK, BUFFERED_FLASH_TOTAL_SIZE,
+        "Buffered flash (128kB)"},
+       /* Expansion entry 1100 */
+       {0x33000000, 0x73808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, 0,
+        "Entry 1100"},
+       /* Expansion entry 1101 */
+       {0x3b000002, 0x7b808201, 0x00050081, 0x03840253, 0xaf020406,
+        0, SAIFUN_FLASH_PAGE_BITS, SAIFUN_FLASH_PAGE_SIZE,
+        SAIFUN_FLASH_BYTE_ADDR_MASK, 0,
+        "Entry 1101"},
+       /* Atmel Expansion entry 1110 */
+       {0x37000001, 0x76808273, 0x00570081, 0x68848353, 0xaf000400,
+        1, BUFFERED_FLASH_PAGE_BITS, BUFFERED_FLASH_PAGE_SIZE,
+        BUFFERED_FLASH_BYTE_ADDR_MASK, 0,
+        "Entry 1110 (Atmel)"},
+       /* ATMEL AT45DB021B (buffered flash) */
+       {0x3f000003, 0x7e808273, 0x00570081, 0x68848353, 0xaf000400,
+        1, BUFFERED_FLASH_PAGE_BITS, BUFFERED_FLASH_PAGE_SIZE,
+        BUFFERED_FLASH_BYTE_ADDR_MASK, BUFFERED_FLASH_TOTAL_SIZE*2,
+        "Buffered flash (256kB)"},
+};
+
+
+/****************************************************************************/
+/* DragonFly device entry points.                                           */
+/****************************************************************************/
+static int     bce_probe(device_t);
+static int     bce_attach(device_t);
+static int     bce_detach(device_t);
+static void    bce_shutdown(device_t);
+
+/****************************************************************************/
+/* BCE Debug Data Structure Dump Routines                                   */
+/****************************************************************************/
+#ifdef BCE_DEBUG
+static void    bce_dump_mbuf(struct bce_softc *, struct mbuf *);
+static void    bce_dump_tx_mbuf_chain(struct bce_softc *, int, int);
+static void    bce_dump_rx_mbuf_chain(struct bce_softc *, int, int);
+static void    bce_dump_txbd(struct bce_softc *, int, struct tx_bd *);
+static void    bce_dump_rxbd(struct bce_softc *, int, struct rx_bd *);
+static void    bce_dump_l2fhdr(struct bce_softc *, int,
+                               struct l2_fhdr *) __unused;
+static void    bce_dump_tx_chain(struct bce_softc *, int, int);
+static void    bce_dump_rx_chain(struct bce_softc *, int, int);
+static void    bce_dump_status_block(struct bce_softc *);
+static void    bce_dump_driver_state(struct bce_softc *);
+static void    bce_dump_stats_block(struct bce_softc *) __unused;
+static void    bce_dump_hw_state(struct bce_softc *);
+static void    bce_dump_txp_state(struct bce_softc *);
+static void    bce_dump_rxp_state(struct bce_softc *) __unused;
+static void    bce_dump_tpat_state(struct bce_softc *) __unused;
+static void    bce_freeze_controller(struct bce_softc *) __unused;
+static void    bce_unfreeze_controller(struct bce_softc *) __unused;
+static void    bce_breakpoint(struct bce_softc *);
+#endif /* BCE_DEBUG */
+
+
+/****************************************************************************/
+/* BCE Register/Memory Access Routines                                      */
+/****************************************************************************/
+static uint32_t        bce_reg_rd_ind(struct bce_softc *, uint32_t);
+static void    bce_reg_wr_ind(struct bce_softc *, uint32_t, uint32_t);
+static void    bce_ctx_wr(struct bce_softc *, uint32_t, uint32_t, uint32_t);
+static int     bce_miibus_read_reg(device_t, int, int);
+static int     bce_miibus_write_reg(device_t, int, int, int);
+static void    bce_miibus_statchg(device_t);
+
+
+/****************************************************************************/
+/* BCE NVRAM Access Routines                                                */
+/****************************************************************************/
+static int     bce_acquire_nvram_lock(struct bce_softc *);
+static int     bce_release_nvram_lock(struct bce_softc *);
+static void    bce_enable_nvram_access(struct bce_softc *);
+static void    bce_disable_nvram_access(struct bce_softc *);
+static int     bce_nvram_read_dword(struct bce_softc *, uint32_t, uint8_t *,
+                                    uint32_t);
+static int     bce_init_nvram(struct bce_softc *);
+static int     bce_nvram_read(struct bce_softc *, uint32_t, uint8_t *, int);
+static int     bce_nvram_test(struct bce_softc *);
+#ifdef BCE_NVRAM_WRITE_SUPPORT
+static int     bce_enable_nvram_write(struct bce_softc *);
+static void    bce_disable_nvram_write(struct bce_softc *);
+static int     bce_nvram_erase_page(struct bce_softc *, uint32_t);
+static int     bce_nvram_write_dword(struct bce_softc *, uint32_t, uint8_t *,
+                                     uint32_t);
+static int     bce_nvram_write(struct bce_softc *, uint32_t, uint8_t *,
+                               int) __unused;
+#endif
+
+/****************************************************************************/
+/* BCE DMA Allocate/Free Routines                                           */
+/****************************************************************************/
+static int     bce_dma_alloc(struct bce_softc *);
+static void    bce_dma_free(struct bce_softc *);
+static void    bce_dma_map_addr(void *, bus_dma_segment_t *, int, int);
+static void    bce_dma_map_mbuf(void *, bus_dma_segment_t *, int,
+                                bus_size_t, int);
+
+/****************************************************************************/
+/* BCE Firmware Synchronization and Load                                    */
+/****************************************************************************/
+static int     bce_fw_sync(struct bce_softc *, uint32_t);
+static void    bce_load_rv2p_fw(struct bce_softc *, uint32_t *,
+                                uint32_t, uint32_t);
+static void    bce_load_cpu_fw(struct bce_softc *, struct cpu_reg *,
+                               struct fw_info *);
+static void    bce_init_cpus(struct bce_softc *);
+
+static void    bce_stop(struct bce_softc *);
+static int     bce_reset(struct bce_softc *, uint32_t);
+static int     bce_chipinit(struct bce_softc *);
+static int     bce_blockinit(struct bce_softc *);
+static int     bce_newbuf_std(struct bce_softc *, struct mbuf *,
+                              uint16_t *, uint16_t *, uint32_t *);
+
+static int     bce_init_tx_chain(struct bce_softc *);
+static int     bce_init_rx_chain(struct bce_softc *);
+static void    bce_free_rx_chain(struct bce_softc *);
+static void    bce_free_tx_chain(struct bce_softc *);
+
+static int     bce_encap(struct bce_softc *, struct mbuf **);
+static void    bce_start(struct ifnet *);
+static int     bce_ioctl(struct ifnet *, u_long, caddr_t, struct ucred *);
+static void    bce_watchdog(struct ifnet *);
+static int     bce_ifmedia_upd(struct ifnet *);
+static void    bce_ifmedia_sts(struct ifnet *, struct ifmediareq *);
+static void    bce_init(void *);
+static void    bce_mgmt_init(struct bce_softc *);
+
+static void    bce_init_context(struct bce_softc *);
+static void    bce_get_mac_addr(struct bce_softc *);
+static void    bce_set_mac_addr(struct bce_softc *);
+static void    bce_phy_intr(struct bce_softc *);
+static void    bce_rx_intr(struct bce_softc *, int);
+static void    bce_tx_intr(struct bce_softc *);
+static void    bce_disable_intr(struct bce_softc *);
+static void    bce_enable_intr(struct bce_softc *);
+
+#ifdef DEVICE_POLLING
+static void    bce_poll(struct ifnet *, enum poll_cmd, int);
+#endif
+static void    bce_intr(void *);
+static void    bce_set_rx_mode(struct bce_softc *);
+static void    bce_stats_update(struct bce_softc *);
+static void    bce_tick(void *);
+static void    bce_tick_serialized(struct bce_softc *);
+static void    bce_add_sysctls(struct bce_softc *);
+
+
+/****************************************************************************/
+/* DragonFly device dispatch table.                                         */
+/****************************************************************************/
+static device_method_t bce_methods[] = {
+       /* Device interface */
+       DEVMETHOD(device_probe,         bce_probe),
+       DEVMETHOD(device_attach,        bce_attach),
+       DEVMETHOD(device_detach,        bce_detach),
+       DEVMETHOD(device_shutdown,      bce_shutdown),
+
+       /* bus interface */
+       DEVMETHOD(bus_print_child,      bus_generic_print_child),
+       DEVMETHOD(bus_driver_added,     bus_generic_driver_added),
+
+       /* MII interface */
+       DEVMETHOD(miibus_readreg,       bce_miibus_read_reg),
+       DEVMETHOD(miibus_writereg,      bce_miibus_write_reg),
+       DEVMETHOD(miibus_statchg,       bce_miibus_statchg),
+
+       { 0, 0 }
+};
+
+static driver_t bce_driver = {
+       "bce",
+       bce_methods,
+       sizeof(struct bce_softc)
+};
+
+static devclass_t bce_devclass;
+
+MODULE_DEPEND(bce, pci, 1, 1, 1);
+MODULE_DEPEND(bce, ether, 1, 1, 1);
+MODULE_DEPEND(bce, miibus, 1, 1, 1);
+
+DRIVER_MODULE(bce, pci, bce_driver, bce_devclass, 0, 0);
+DRIVER_MODULE(miibus, bce, miibus_driver, miibus_devclass, 0, 0);
+
+
+/****************************************************************************/
+/* Device probe function.                                                   */
+/*                                                                          */
+/* Compares the device to the driver's list of supported devices and        */
+/* reports back to the OS whether this is the right driver for the device.  */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   BUS_PROBE_DEFAULT on success, positive value on failure.               */
+/****************************************************************************/
+static int
+bce_probe(device_t dev)
+{
+       struct bce_type *t;
+       uint16_t vid, did, svid, sdid;
+
+       /* Get the data for the device to be probed. */
+       vid  = pci_get_vendor(dev);
+       did  = pci_get_device(dev);
+       svid = pci_get_subvendor(dev);
+       sdid = pci_get_subdevice(dev);
+
+       /* Look through the list of known devices for a match. */
+       for (t = bce_devs; t->bce_name != NULL; ++t) {
+               if (vid == t->bce_vid && did == t->bce_did && 
+                   (svid == t->bce_svid || t->bce_svid == PCI_ANY_ID) &&
+                   (sdid == t->bce_sdid || t->bce_sdid == PCI_ANY_ID)) {
+                       uint32_t revid = pci_read_config(dev, PCIR_REVID, 4);
+                       char *descbuf;
+
+                       descbuf = kmalloc(BCE_DEVDESC_MAX, M_TEMP, M_WAITOK);
+
+                       /* Print out the device identity. */
+                       ksnprintf(descbuf, BCE_DEVDESC_MAX, "%s (%c%d)",
+                                 t->bce_name,
+                                 ((revid & 0xf0) >> 4) + 'A', revid & 0xf);
+
+                       device_set_desc_copy(dev, descbuf);
+                       kfree(descbuf, M_TEMP);
+                       return 0;
+               }
+       }
+       return ENXIO;
+}
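
The revision suffix printed by bce_probe() comes straight from the PCI
revision ID: the upper nibble selects the letter and the lower nibble the
number, matching the supported-chip list above (BCM5706C A2/A3,
BCM5708C B1/B2).  A minimal standalone illustration, using a made-up
example value:

    /* Illustrative only: mirrors the ksnprintf() revision formatting above. */
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            uint32_t revid = 0x12;  /* example value, not read from a device */
            char letter = (char)(((revid & 0xf0) >> 4) + 'A');  /* upper nibble */
            int  step   = (int)(revid & 0x0f);                  /* lower nibble */

            printf("(%c%d)\n", letter, step);   /* prints "(B2)" */
            return 0;
    }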
+
+
+/****************************************************************************/
+/* Device attach function.                                                  */
+/*                                                                          */
+/* Allocates device resources, performs secondary chip identification,      */
+/* resets and initializes the hardware, and initializes driver instance     */
+/* variables.                                                               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_attach(device_t dev)
+{
+       struct bce_softc *sc = device_get_softc(dev);
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       uint32_t val;
+       int rid, rc = 0;
+#ifdef notyet
+       int count;
+#endif
+
+       sc->bce_dev = dev;
+       if_initname(ifp, device_get_name(dev), device_get_unit(dev));
+
+       pci_enable_busmaster(dev);
+
+       /* Allocate PCI memory resources. */
+       rid = PCIR_BAR(0);
+       sc->bce_res_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid,
+                                                RF_ACTIVE | PCI_RF_DENSE);
+       if (sc->bce_res_mem == NULL) {
+               device_printf(dev, "PCI memory allocation failed\n");
+               return ENXIO;
+       }
+       sc->bce_btag = rman_get_bustag(sc->bce_res_mem);
+       sc->bce_bhandle = rman_get_bushandle(sc->bce_res_mem);
+
+       /* Allocate PCI IRQ resources. */
+#ifdef notyet
+       count = pci_msi_count(dev);
+       if (count == 1 && pci_alloc_msi(dev, &count) == 0) {
+               rid = 1;
+               sc->bce_flags |= BCE_USING_MSI_FLAG;
+       } else
+#endif
+       rid = 0;
+       sc->bce_res_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
+                                                RF_SHAREABLE | RF_ACTIVE);
+       if (sc->bce_res_irq == NULL) {
+               device_printf(dev, "PCI map interrupt failed\n");
+               rc = ENXIO;
+               goto fail;
+       }
+
+       /*
+        * Configure byte swap and enable indirect register access.
+        * Rely on CPU to do target byte swapping on big endian systems.
+        * Access to registers outside of PCI configuration space is not
+        * valid until this is done.
+        */
+       pci_write_config(dev, BCE_PCICFG_MISC_CONFIG,
+                        BCE_PCICFG_MISC_CONFIG_REG_WINDOW_ENA |
+                        BCE_PCICFG_MISC_CONFIG_TARGET_MB_WORD_SWAP, 4);
+
+       /* Save ASIC revision info. */
+       sc->bce_chipid =  REG_RD(sc, BCE_MISC_ID);
+
+       /* Weed out any non-production controller revisions. */
+       switch(BCE_CHIP_ID(sc)) {
+       case BCE_CHIP_ID_5706_A0:
+       case BCE_CHIP_ID_5706_A1:
+       case BCE_CHIP_ID_5708_A0:
+       case BCE_CHIP_ID_5708_B0:
+               device_printf(dev, "Unsupported chip id 0x%08x!\n",
+                             BCE_CHIP_ID(sc));
+               rc = ENODEV;
+               goto fail;
+       }
+
+       /* 
+        * The embedded PCIe to PCI-X bridge (EPB) 
+        * in the 5708 cannot address memory above 
+        * 40 bits (E7_5708CB1_23043 & E6_5708SB1_23043). 
+        */
+       if (BCE_CHIP_NUM(sc) == BCE_CHIP_NUM_5708)
+               sc->max_bus_addr = BCE_BUS_SPACE_MAXADDR;
+       else
+               sc->max_bus_addr = BUS_SPACE_MAXADDR;
+
+       /*
+        * Find the base address for shared memory access.
+        * Newer versions of bootcode use a signature and offset
+        * while older versions use a fixed address.
+        */
+       val = REG_RD_IND(sc, BCE_SHM_HDR_SIGNATURE);
+       if ((val & BCE_SHM_HDR_SIGNATURE_SIG_MASK) == BCE_SHM_HDR_SIGNATURE_SIG)
+               sc->bce_shmem_base = REG_RD_IND(sc, BCE_SHM_HDR_ADDR_0);
+       else
+               sc->bce_shmem_base = HOST_VIEW_SHMEM_BASE;
+
+       DBPRINT(sc, BCE_INFO, "bce_shmem_base = 0x%08X\n", sc->bce_shmem_base);
+
+       /* Get PCI bus information (speed and type). */
+       val = REG_RD(sc, BCE_PCICFG_MISC_STATUS);
+       if (val & BCE_PCICFG_MISC_STATUS_PCIX_DET) {
+               uint32_t clkreg;
+
+               sc->bce_flags |= BCE_PCIX_FLAG;
+
+               clkreg = REG_RD(sc, BCE_PCICFG_PCI_CLOCK_CONTROL_BITS) &
+                        BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET;
+               switch (clkreg) {
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_133MHZ:
+                       sc->bus_speed_mhz = 133;
+                       break;
+
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_95MHZ:
+                       sc->bus_speed_mhz = 100;
+                       break;
+
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_66MHZ:
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_80MHZ:
+                       sc->bus_speed_mhz = 66;
+                       break;
+
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_48MHZ:
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_55MHZ:
+                       sc->bus_speed_mhz = 50;
+                       break;
+
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_LOW:
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_32MHZ:
+               case BCE_PCICFG_PCI_CLOCK_CONTROL_BITS_PCI_CLK_SPD_DET_38MHZ:
+                       sc->bus_speed_mhz = 33;
+                       break;
+               }
+       } else {
+               if (val & BCE_PCICFG_MISC_STATUS_M66EN)
+                       sc->bus_speed_mhz = 66;
+               else
+                       sc->bus_speed_mhz = 33;
+       }
+
+       if (val & BCE_PCICFG_MISC_STATUS_32BIT_DET)
+               sc->bce_flags |= BCE_PCI_32BIT_FLAG;
+
+       device_printf(dev, "ASIC ID 0x%08X; Revision (%c%d); PCI%s %s %dMHz\n",
+                     sc->bce_chipid,
+                     ((BCE_CHIP_ID(sc) & 0xf000) >> 12) + 'A',
+                     (BCE_CHIP_ID(sc) & 0x0ff0) >> 4,
+                     (sc->bce_flags & BCE_PCIX_FLAG) ? "-X" : "",
+                     (sc->bce_flags & BCE_PCI_32BIT_FLAG) ?
+                     "32-bit" : "64-bit", sc->bus_speed_mhz);
+
+       /* Reset the controller. */
+       rc = bce_reset(sc, BCE_DRV_MSG_CODE_RESET);
+       if (rc != 0)
+               goto fail;
+
+       /* Initialize the controller. */
+       rc = bce_chipinit(sc);
+       if (rc != 0) {
+               device_printf(dev, "Controller initialization failed!\n");
+               goto fail;
+       }
+
+       /* Perform NVRAM test. */
+       rc = bce_nvram_test(sc);
+       if (rc != 0) {
+               device_printf(dev, "NVRAM test failed!\n");
+               goto fail;
+       }
+
+       /* Fetch the permanent Ethernet MAC address. */
+       bce_get_mac_addr(sc);
+
+       /*
+        * Trip points control how many BDs
+        * should be ready before generating an
+        * interrupt while ticks control how long
+        * a BD can sit in the chain before
+        * generating an interrupt.  Set the default 
+        * values for the RX and TX rings.
+        */
+
+#ifdef BCE_DRBUG
+       /* Force more frequent interrupts. */
+       sc->bce_tx_quick_cons_trip_int = 1;
+       sc->bce_tx_quick_cons_trip     = 1;
+       sc->bce_tx_ticks_int           = 0;
+       sc->bce_tx_ticks               = 0;
+
+       sc->bce_rx_quick_cons_trip_int = 1;
+       sc->bce_rx_quick_cons_trip     = 1;
+       sc->bce_rx_ticks_int           = 0;
+       sc->bce_rx_ticks               = 0;
+#else
+       sc->bce_tx_quick_cons_trip_int = 20;
+       sc->bce_tx_quick_cons_trip     = 20;
+       sc->bce_tx_ticks_int           = 80;
+       sc->bce_tx_ticks               = 80;
+
+       sc->bce_rx_quick_cons_trip_int = 6;
+       sc->bce_rx_quick_cons_trip     = 6;
+       sc->bce_rx_ticks_int           = 18;
+       sc->bce_rx_ticks               = 18;
+#endif
+
+       /* Update statistics once every second. */
+       sc->bce_stats_ticks = 1000000 & 0xffff00;
+
+       /*
+        * The copper based NetXtreme II controllers
+        * use an integrated PHY at address 1 while
+        * the SerDes controllers use a PHY at
+        * address 2.
+        */
+       sc->bce_phy_addr = 1;
+
+       if (BCE_CHIP_BOND_ID(sc) & BCE_CHIP_BOND_ID_SERDES_BIT) {
+               sc->bce_phy_flags |= BCE_PHY_SERDES_FLAG;
+               sc->bce_flags |= BCE_NO_WOL_FLAG;
+               if (BCE_CHIP_NUM(sc) == BCE_CHIP_NUM_5708) {
+                       sc->bce_phy_addr = 2;
+                       val = REG_RD_IND(sc, sc->bce_shmem_base +
+                                        BCE_SHARED_HW_CFG_CONFIG);
+                       if (val & BCE_SHARED_HW_CFG_PHY_2_5G)
+                               sc->bce_phy_flags |= BCE_PHY_2_5G_CAPABLE_FLAG;
+               }
+       }
+
+       /* Allocate DMA memory resources. */
+       rc = bce_dma_alloc(sc);
+       if (rc != 0) {
+               device_printf(dev, "DMA resource allocation failed!\n");
+               goto fail;
+       }
+
+       /* Initialize the ifnet interface. */
+       ifp->if_softc = sc;
+       ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
+       ifp->if_ioctl = bce_ioctl;
+       ifp->if_start = bce_start;
+       ifp->if_init = bce_init;
+       ifp->if_watchdog = bce_watchdog;
+#ifdef DEVICE_POLLING
+       ifp->if_poll = bce_poll;
+#endif
+       ifp->if_mtu = ETHERMTU;
+       ifp->if_hwassist = BCE_IF_HWASSIST;
+       ifp->if_capabilities = BCE_IF_CAPABILITIES;
+       ifp->if_capenable = ifp->if_capabilities;
+       ifq_set_maxlen(&ifp->if_snd, USABLE_TX_BD);
+       ifq_set_ready(&ifp->if_snd);
+
+       if (sc->bce_phy_flags & BCE_PHY_2_5G_CAPABLE_FLAG)
+               ifp->if_baudrate = IF_Gbps(2.5);
+       else
+               ifp->if_baudrate = IF_Gbps(1);
+
+       /* Assume a standard 1500 byte MTU size for mbuf allocations. */
+       sc->mbuf_alloc_size  = MCLBYTES;
+
+       /* Look for our PHY. */
+       rc = mii_phy_probe(dev, &sc->bce_miibus,
+                          bce_ifmedia_upd, bce_ifmedia_sts);
+       if (rc != 0) {
+               device_printf(dev, "PHY probe failed!\n");
+               goto fail;
+       }
+
+       /* Attach to the Ethernet interface list. */
+       ether_ifattach(ifp, sc->eaddr, NULL);
+
+       callout_init(&sc->bce_stat_ch);
+
+       /* Hookup IRQ last. */
+       rc = bus_setup_intr(dev, sc->bce_res_irq, INTR_NETSAFE, bce_intr, sc,
+                           &sc->bce_intrhand, ifp->if_serializer);
+       if (rc != 0) {
+               device_printf(dev, "Failed to setup IRQ!\n");
+               ether_ifdetach(ifp);
+               goto fail;
+       }
+
+       /* Print some important debugging info. */
+       DBRUN(BCE_INFO, bce_dump_driver_state(sc));
+
+       /* Add the supported sysctls to the kernel. */
+       bce_add_sysctls(sc);
+
+       /* Get the firmware running so IPMI still works */
+       bce_mgmt_init(sc);
+
+       return 0;
+fail:
+       bce_detach(dev);
+       return(rc);
+}
+
+
+/****************************************************************************/
+/* Device detach function.                                                  */
+/*                                                                          */
+/* Stops the controller, resets the controller, and releases resources.     */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_detach(device_t dev)
+{
+       struct bce_softc *sc = device_get_softc(dev);
+
+       if (device_is_attached(dev)) {
+               struct ifnet *ifp = &sc->arpcom.ac_if;
+
+               /* Stop and reset the controller. */
+               lwkt_serialize_enter(ifp->if_serializer);
+               bce_stop(sc);
+               bce_reset(sc, BCE_DRV_MSG_CODE_RESET);
+               bus_teardown_intr(dev, sc->bce_res_irq, sc->bce_intrhand);
+               lwkt_serialize_exit(ifp->if_serializer);
+
+               ether_ifdetach(ifp);
+       }
+
+       /* If we have a child device on the MII bus remove it too. */
+       if (sc->bce_miibus)
+               device_delete_child(dev, sc->bce_miibus);
+       bus_generic_detach(dev);
+
+       if (sc->bce_res_irq != NULL) {
+               bus_release_resource(dev, SYS_RES_IRQ,
+                       sc->bce_flags & BCE_USING_MSI_FLAG ? 1 : 0,
+                       sc->bce_res_irq);
+       }
+
+#ifdef notyet
+       if (sc->bce_flags & BCE_USING_MSI_FLAG)
+               pci_release_msi(dev);
+#endif
+
+       if (sc->bce_res_mem != NULL) {
+               bus_release_resource(dev, SYS_RES_MEMORY, PCIR_BAR(0),
+                                    sc->bce_res_mem);
+       }
+
+       bce_dma_free(sc);
+
+       if (sc->bce_sysctl_tree != NULL)
+               sysctl_ctx_free(&sc->bce_sysctl_ctx);
+
+       return 0;
+}
+
+
+/****************************************************************************/
+/* Device shutdown function.                                                */
+/*                                                                          */
+/* Stops and resets the controller.                                         */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing                                                                */
+/****************************************************************************/
+static void
+bce_shutdown(device_t dev)
+{
+       struct bce_softc *sc = device_get_softc(dev);
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+
+       lwkt_serialize_enter(ifp->if_serializer);
+       bce_stop(sc);
+       bce_reset(sc, BCE_DRV_MSG_CODE_RESET);
+       lwkt_serialize_exit(ifp->if_serializer);
+}
+
+
+/****************************************************************************/
+/* Indirect register read.                                                  */
+/*                                                                          */
+/* Reads NetXtreme II registers using an index/data register pair in PCI    */
+/* configuration space.  Using this mechanism avoids issues with posted     */
+/* reads but is much slower than memory-mapped I/O.                         */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   The value of the register.                                             */
+/****************************************************************************/
+static uint32_t
+bce_reg_rd_ind(struct bce_softc *sc, uint32_t offset)
+{
+       device_t dev = sc->bce_dev;
+
+       pci_write_config(dev, BCE_PCICFG_REG_WINDOW_ADDRESS, offset, 4);
+#ifdef BCE_DEBUG
+       {
+               uint32_t val;
+               val = pci_read_config(dev, BCE_PCICFG_REG_WINDOW, 4);
+               DBPRINT(sc, BCE_EXCESSIVE,
+                       "%s(); offset = 0x%08X, val = 0x%08X\n",
+                       __func__, offset, val);
+               return val;
+       }
+#else
+       return pci_read_config(dev, BCE_PCICFG_REG_WINDOW, 4);
+#endif
+}
+
+
+/****************************************************************************/
+/* Indirect register write.                                                 */
+/*                                                                          */
+/* Writes NetXtreme II registers using an index/data register pair in PCI   */
+/* configuration space.  Using this mechanism avoids issues with posted     */
+/* writes but is much slower than memory-mapped I/O.                        */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_reg_wr_ind(struct bce_softc *sc, uint32_t offset, uint32_t val)
+{
+       device_t dev = sc->bce_dev;
+
+       DBPRINT(sc, BCE_EXCESSIVE, "%s(); offset = 0x%08X, val = 0x%08X\n",
+               __func__, offset, val);
+
+       pci_write_config(dev, BCE_PCICFG_REG_WINDOW_ADDRESS, offset, 4);
+       pci_write_config(dev, BCE_PCICFG_REG_WINDOW, val, 4);
+}
+
+
+/****************************************************************************/
+/* Context memory write.                                                    */
+/*                                                                          */
+/* The NetXtreme II controller uses context memory to track connection      */
+/* information for L2 and higher network protocols.                         */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_ctx_wr(struct bce_softc *sc, uint32_t cid_addr, uint32_t offset,
+          uint32_t val)
+{
+       DBPRINT(sc, BCE_EXCESSIVE, "%s(); cid_addr = 0x%08X, offset = 0x%08X, "
+               "val = 0x%08X\n", __func__, cid_addr, offset, val);
+
+       offset += cid_addr;
+       REG_WR(sc, BCE_CTX_DATA_ADR, offset);
+       REG_WR(sc, BCE_CTX_DATA, val);
+}
+
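+/*
+ * Illustrative note: cid_addr selects the context-memory region of one
+ * connection ID and offset selects a field within that region.  A
+ * hypothetical L2 chain setup could therefore look like
+ *
+ *     bce_ctx_wr(sc, GET_CID_ADDR(TX_CID), BCE_L2CTX_TX_BD_HADDR_HI, val);
+ *
+ * (the macro and field names in this example are assumptions made for the
+ * sake of illustration, not definitions from this file).
+ */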
+
+/****************************************************************************/
+/* PHY register read.                                                       */
+/*                                                                          */
+/* Implements register reads on the MII bus.                                */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   The value of the register.                                             */
+/****************************************************************************/
+static int
+bce_miibus_read_reg(device_t dev, int phy, int reg)
+{
+       struct bce_softc *sc = device_get_softc(dev);
+       uint32_t val;
+       int i;
+
+       /* Make sure we are accessing the correct PHY address. */
+       if (phy != sc->bce_phy_addr) {
+               DBPRINT(sc, BCE_VERBOSE,
+                       "Invalid PHY address %d for PHY read!\n", phy);
+               return 0;
+       }
+
+       if (sc->bce_phy_flags & BCE_PHY_INT_MODE_AUTO_POLLING_FLAG) {
+               val = REG_RD(sc, BCE_EMAC_MDIO_MODE);
+               val &= ~BCE_EMAC_MDIO_MODE_AUTO_POLL;
+
+               REG_WR(sc, BCE_EMAC_MDIO_MODE, val);
+               REG_RD(sc, BCE_EMAC_MDIO_MODE);
+
+               DELAY(40);
+       }
+
+       val = BCE_MIPHY(phy) | BCE_MIREG(reg) |
+             BCE_EMAC_MDIO_COMM_COMMAND_READ | BCE_EMAC_MDIO_COMM_DISEXT |
+             BCE_EMAC_MDIO_COMM_START_BUSY;
+       REG_WR(sc, BCE_EMAC_MDIO_COMM, val);
+
+       for (i = 0; i < BCE_PHY_TIMEOUT; i++) {
+               DELAY(10);
+
+               val = REG_RD(sc, BCE_EMAC_MDIO_COMM);
+               if (!(val & BCE_EMAC_MDIO_COMM_START_BUSY)) {
+                       DELAY(5);
+
+                       val = REG_RD(sc, BCE_EMAC_MDIO_COMM);
+                       val &= BCE_EMAC_MDIO_COMM_DATA;
+                       break;
+               }
+       }
+
+       if (val & BCE_EMAC_MDIO_COMM_START_BUSY) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Error: PHY read timeout! phy = %d, reg = 0x%04X\n",
+                         phy, reg);
+               val = 0x0;
+       } else {
+               val = REG_RD(sc, BCE_EMAC_MDIO_COMM);
+       }
+
+       DBPRINT(sc, BCE_EXCESSIVE,
+               "%s(): phy = %d, reg = 0x%04X, val = 0x%04X\n",
+               __func__, phy, (uint16_t)reg & 0xffff, (uint16_t) val & 0xffff);
+
+       if (sc->bce_phy_flags & BCE_PHY_INT_MODE_AUTO_POLLING_FLAG) {
+               uint32_t mdio_mode;
+
+               /* Use a local variable so 'val' keeps the value just read. */
+               mdio_mode = REG_RD(sc, BCE_EMAC_MDIO_MODE);
+               mdio_mode |= BCE_EMAC_MDIO_MODE_AUTO_POLL;
+
+               REG_WR(sc, BCE_EMAC_MDIO_MODE, mdio_mode);
+               REG_RD(sc, BCE_EMAC_MDIO_MODE);
+
+               DELAY(40);
+       }
+       return (val & 0xffff);
+}
+
+
+/****************************************************************************/
+/* PHY register write.                                                      */
+/*                                                                          */
+/* Implements register writes on the MII bus.                               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success (a write does not return a register value).               */
+/****************************************************************************/
+static int
+bce_miibus_write_reg(device_t dev, int phy, int reg, int val)
+{
+       struct bce_softc *sc = device_get_softc(dev);
+       uint32_t val1;
+       int i;
+
+       /* Make sure we are accessing the correct PHY address. */
+       if (phy != sc->bce_phy_addr) {
+               DBPRINT(sc, BCE_WARN,
+                       "Invalid PHY address %d for PHY write!\n", phy);
+               return(0);
+       }
+
+       DBPRINT(sc, BCE_EXCESSIVE,
+               "%s(): phy = %d, reg = 0x%04X, val = 0x%04X\n",
+               __func__, phy, (uint16_t)(reg & 0xffff),
+               (uint16_t)(val & 0xffff));
+
+       if (sc->bce_phy_flags & BCE_PHY_INT_MODE_AUTO_POLLING_FLAG) {
+               val1 = REG_RD(sc, BCE_EMAC_MDIO_MODE);
+               val1 &= ~BCE_EMAC_MDIO_MODE_AUTO_POLL;
+
+               REG_WR(sc, BCE_EMAC_MDIO_MODE, val1);
+               REG_RD(sc, BCE_EMAC_MDIO_MODE);
+
+               DELAY(40);
+       }
+
+       val1 = BCE_MIPHY(phy) | BCE_MIREG(reg) | val |
+               BCE_EMAC_MDIO_COMM_COMMAND_WRITE |
+               BCE_EMAC_MDIO_COMM_START_BUSY | BCE_EMAC_MDIO_COMM_DISEXT;
+       REG_WR(sc, BCE_EMAC_MDIO_COMM, val1);
+
+       for (i = 0; i < BCE_PHY_TIMEOUT; i++) {
+               DELAY(10);
+
+               val1 = REG_RD(sc, BCE_EMAC_MDIO_COMM);
+               if (!(val1 & BCE_EMAC_MDIO_COMM_START_BUSY)) {
+                       DELAY(5);
+                       break;
+               }
+       }
+
+       if (val1 & BCE_EMAC_MDIO_COMM_START_BUSY)
+               if_printf(&sc->arpcom.ac_if, "PHY write timeout!\n");
+
+       if (sc->bce_phy_flags & BCE_PHY_INT_MODE_AUTO_POLLING_FLAG) {
+               val1 = REG_RD(sc, BCE_EMAC_MDIO_MODE);
+               val1 |= BCE_EMAC_MDIO_MODE_AUTO_POLL;
+
+               REG_WR(sc, BCE_EMAC_MDIO_MODE, val1);
+               REG_RD(sc, BCE_EMAC_MDIO_MODE);
+
+               DELAY(40);
+       }
+       return 0;
+}
+
+
+/****************************************************************************/
+/* MII bus status change.                                                   */
+/*                                                                          */
+/* Called by the MII bus driver when the PHY establishes link to set the    */
+/* MAC interface registers.                                                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_miibus_statchg(device_t dev)
+{
+       struct bce_softc *sc = device_get_softc(dev);
+       struct mii_data *mii = device_get_softc(sc->bce_miibus);
+
+       DBPRINT(sc, BCE_INFO, "mii_media_active = 0x%08X\n",
+               mii->mii_media_active);
+
+#ifdef BCE_DEBUG
+       /* Decode the interface media flags. */
+       if_printf(&sc->arpcom.ac_if, "Media: ( ");
+       switch(IFM_TYPE(mii->mii_media_active)) {
+       case IFM_ETHER:
+               kprintf("Ethernet )");
+               break;
+       default:
+               kprintf("Unknown )");
+               break;
+       }
+
+       kprintf(" Media Options: ( ");
+       switch(IFM_SUBTYPE(mii->mii_media_active)) {
+       case IFM_AUTO:
+               kprintf("Autoselect )");
+               break;
+       case IFM_MANUAL:
+               kprintf("Manual )");
+               break;
+       case IFM_NONE:
+               kprintf("None )");
+               break;
+       case IFM_10_T:
+               kprintf("10Base-T )");
+               break;
+       case IFM_100_TX:
+               kprintf("100Base-TX )");
+               break;
+       case IFM_1000_SX:
+               kprintf("1000Base-SX )");
+               break;
+       case IFM_1000_T:
+               kprintf("1000Base-T )");
+               break;
+       default:
+               kprintf("Other )");
+               break;
+       }
+
+       kprintf(" Global Options: (");
+       if (mii->mii_media_active & IFM_FDX)
+               kprintf(" FullDuplex");
+       if (mii->mii_media_active & IFM_HDX)
+               kprintf(" HalfDuplex");
+       if (mii->mii_media_active & IFM_LOOP)
+               kprintf(" Loopback");
+       if (mii->mii_media_active & IFM_FLAG0)
+               kprintf(" Flag0");
+       if (mii->mii_media_active & IFM_FLAG1)
+               kprintf(" Flag1");
+       if (mii->mii_media_active & IFM_FLAG2)
+               kprintf(" Flag2");
+       kprintf(" )\n");
+#endif
+
+       BCE_CLRBIT(sc, BCE_EMAC_MODE, BCE_EMAC_MODE_PORT);
+
+       /*
+        * Set MII or GMII interface based on the speed negotiated
+        * by the PHY.
+        */
+       if (IFM_SUBTYPE(mii->mii_media_active) == IFM_1000_T || 
+           IFM_SUBTYPE(mii->mii_media_active) == IFM_1000_SX) {
+               DBPRINT(sc, BCE_INFO, "Setting GMII interface.\n");
+               BCE_SETBIT(sc, BCE_EMAC_MODE, BCE_EMAC_MODE_PORT_GMII);
+       } else {
+               DBPRINT(sc, BCE_INFO, "Setting MII interface.\n");
+               BCE_SETBIT(sc, BCE_EMAC_MODE, BCE_EMAC_MODE_PORT_MII);
+       }
+
+       /*
+        * Set half or full duplex based on the duplex mode negotiated
+        * by the PHY.
+        */
+       if ((mii->mii_media_active & IFM_GMASK) == IFM_FDX) {
+               DBPRINT(sc, BCE_INFO, "Setting Full-Duplex interface.\n");
+               BCE_CLRBIT(sc, BCE_EMAC_MODE, BCE_EMAC_MODE_HALF_DUPLEX);
+       } else {
+               DBPRINT(sc, BCE_INFO, "Setting Half-Duplex interface.\n");
+               BCE_SETBIT(sc, BCE_EMAC_MODE, BCE_EMAC_MODE_HALF_DUPLEX);
+       }
+}
+
+
+/****************************************************************************/
+/* Acquire NVRAM lock.                                                      */
+/*                                                                          */
+/* Before the NVRAM can be accessed the caller must acquire an NVRAM lock.  */
+/* Lock 1 is used by the firmware and lock 2 is used by the driver.         */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_acquire_nvram_lock(struct bce_softc *sc)
+{
+       uint32_t val;
+       int j;
+
+       DBPRINT(sc, BCE_VERBOSE, "Acquiring NVRAM lock.\n");
+
+       /* Request access to the flash interface. */
+       REG_WR(sc, BCE_NVM_SW_ARB, BCE_NVM_SW_ARB_ARB_REQ_SET2);
+       for (j = 0; j < NVRAM_TIMEOUT_COUNT; j++) {
+               val = REG_RD(sc, BCE_NVM_SW_ARB);
+               if (val & BCE_NVM_SW_ARB_ARB_ARB2)
+                       break;
+
+               DELAY(5);
+       }
+
+       if (j >= NVRAM_TIMEOUT_COUNT) {
+               DBPRINT(sc, BCE_WARN, "Timeout acquiring NVRAM lock!\n");
+               return EBUSY;
+       }
+       return 0;
+}
+
+
+/****************************************************************************/
+/* Release NVRAM lock.                                                      */
+/*                                                                          */
+/* When the caller is finished accessing NVRAM the lock must be released.   */
+/* Lock 1 is used by the firmware and lock 2 is used by the driver.         */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_release_nvram_lock(struct bce_softc *sc)
+{
+       int j;
+       uint32_t val;
+
+       DBPRINT(sc, BCE_VERBOSE, "Releasing NVRAM lock.\n");
+
+       /*
+        * Relinquish nvram interface.
+        */
+       REG_WR(sc, BCE_NVM_SW_ARB, BCE_NVM_SW_ARB_ARB_REQ_CLR2);
+
+       for (j = 0; j < NVRAM_TIMEOUT_COUNT; j++) {
+               val = REG_RD(sc, BCE_NVM_SW_ARB);
+               if (!(val & BCE_NVM_SW_ARB_ARB_ARB2))
+                       break;
+
+               DELAY(5);
+       }
+
+       if (j >= NVRAM_TIMEOUT_COUNT) {
+               DBPRINT(sc, BCE_WARN, "Timeout releasing NVRAM lock!\n");
+               return EBUSY;
+       }
+       return 0;
+}
+
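+/*
+ * The lock pairs with the access-enable helpers below; the canonical NVRAM
+ * access sequence, as used by bce_nvram_read() later in this file, is:
+ *
+ *     bce_acquire_nvram_lock(sc);
+ *     bce_enable_nvram_access(sc);
+ *     ... bce_nvram_read_dword() / bce_nvram_write_dword() calls ...
+ *     bce_disable_nvram_access(sc);
+ *     bce_release_nvram_lock(sc);
+ */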
+
+#ifdef BCE_NVRAM_WRITE_SUPPORT
+/****************************************************************************/
+/* Enable NVRAM write access.                                               */
+/*                                                                          */
+/* Before writing to NVRAM the caller must enable NVRAM writes.             */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_enable_nvram_write(struct bce_softc *sc)
+{
+       uint32_t val;
+
+       DBPRINT(sc, BCE_VERBOSE, "Enabling NVRAM write.\n");
+
+       val = REG_RD(sc, BCE_MISC_CFG);
+       REG_WR(sc, BCE_MISC_CFG, val | BCE_MISC_CFG_NVM_WR_EN_PCI);
+
+       if (!sc->bce_flash_info->buffered) {
+               int j;
+
+               REG_WR(sc, BCE_NVM_COMMAND, BCE_NVM_COMMAND_DONE);
+               REG_WR(sc, BCE_NVM_COMMAND,
+                      BCE_NVM_COMMAND_WREN | BCE_NVM_COMMAND_DOIT);
+
+               for (j = 0; j < NVRAM_TIMEOUT_COUNT; j++) {
+                       DELAY(5);
+
+                       val = REG_RD(sc, BCE_NVM_COMMAND);
+                       if (val & BCE_NVM_COMMAND_DONE)
+                               break;
+               }
+
+               if (j >= NVRAM_TIMEOUT_COUNT) {
+                       DBPRINT(sc, BCE_WARN, "Timeout writing NVRAM!\n");
+                       return EBUSY;
+               }
+       }
+       return 0;
+}
+
+
+/****************************************************************************/
+/* Disable NVRAM write access.                                              */
+/*                                                                          */
+/* When the caller is finished writing to NVRAM write access must be        */
+/* disabled.                                                                */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_disable_nvram_write(struct bce_softc *sc)
+{
+       uint32_t val;
+
+       DBPRINT(sc, BCE_VERBOSE, "Disabling NVRAM write.\n");
+
+       val = REG_RD(sc, BCE_MISC_CFG);
+       REG_WR(sc, BCE_MISC_CFG, val & ~BCE_MISC_CFG_NVM_WR_EN);
+}
+#endif /* BCE_NVRAM_WRITE_SUPPORT */
+
+
+/****************************************************************************/
+/* Enable NVRAM access.                                                     */
+/*                                                                          */
+/* Before accessing NVRAM for read or write operations the caller must      */
+/* enable NVRAM access.                                                     */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_enable_nvram_access(struct bce_softc *sc)
+{
+       uint32_t val;
+
+       DBPRINT(sc, BCE_VERBOSE, "Enabling NVRAM access.\n");
+
+       val = REG_RD(sc, BCE_NVM_ACCESS_ENABLE);
+       /* Enable both bits, even on read. */
+       REG_WR(sc, BCE_NVM_ACCESS_ENABLE,
+              val | BCE_NVM_ACCESS_ENABLE_EN | BCE_NVM_ACCESS_ENABLE_WR_EN);
+}
+
+
+/****************************************************************************/
+/* Disable NVRAM access.                                                    */
+/*                                                                          */
+/* When the caller is finished accessing NVRAM access must be disabled.     */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_disable_nvram_access(struct bce_softc *sc)
+{
+       uint32_t val;
+
+       DBPRINT(sc, BCE_VERBOSE, "Disabling NVRAM access.\n");
+
+       val = REG_RD(sc, BCE_NVM_ACCESS_ENABLE);
+
+       /* Disable both bits, even after read. */
+       REG_WR(sc, BCE_NVM_ACCESS_ENABLE,
+              val & ~(BCE_NVM_ACCESS_ENABLE_EN | BCE_NVM_ACCESS_ENABLE_WR_EN));
+}
+
+
+#ifdef BCE_NVRAM_WRITE_SUPPORT
+/****************************************************************************/
+/* Erase NVRAM page before writing.                                         */
+/*                                                                          */
+/* Non-buffered flash parts require that a page be erased before it is      */
+/* written.                                                                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_nvram_erase_page(struct bce_softc *sc, uint32_t offset)
+{
+       uint32_t cmd;
+       int j;
+
+       /* Buffered flash doesn't require an erase. */
+       if (sc->bce_flash_info->buffered)
+               return 0;
+
+       DBPRINT(sc, BCE_VERBOSE, "Erasing NVRAM page.\n");
+
+       /* Build an erase command. */
+       cmd = BCE_NVM_COMMAND_ERASE | BCE_NVM_COMMAND_WR |
+             BCE_NVM_COMMAND_DOIT;
+
+       /*
+        * Clear the DONE bit separately, set the NVRAM address to erase,
+        * and issue the erase command.
+        */
+       REG_WR(sc, BCE_NVM_COMMAND, BCE_NVM_COMMAND_DONE);
+       REG_WR(sc, BCE_NVM_ADDR, offset & BCE_NVM_ADDR_NVM_ADDR_VALUE);
+       REG_WR(sc, BCE_NVM_COMMAND, cmd);
+
+       /* Wait for completion. */
+       for (j = 0; j < NVRAM_TIMEOUT_COUNT; j++) {
+               uint32_t val;
+
+               DELAY(5);
+
+               val = REG_RD(sc, BCE_NVM_COMMAND);
+               if (val & BCE_NVM_COMMAND_DONE)
+                       break;
+       }
+
+       if (j >= NVRAM_TIMEOUT_COUNT) {
+               DBPRINT(sc, BCE_WARN, "Timeout erasing NVRAM.\n");
+               return EBUSY;
+       }
+       return 0;
+}
+#endif /* BCE_NVRAM_WRITE_SUPPORT */
+
+
+/****************************************************************************/
+/* Read a dword (32 bits) from NVRAM.                                       */
+/*                                                                          */
+/* Read a 32 bit word from NVRAM.  The caller is assumed to have already    */
+/* obtained the NVRAM lock and enabled the controller for NVRAM access.     */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success and the 32 bit value read, positive value on failure.     */
+/****************************************************************************/
+static int
+bce_nvram_read_dword(struct bce_softc *sc, uint32_t offset, uint8_t *ret_val,
+                    uint32_t cmd_flags)
+{
+       uint32_t cmd;
+       int i, rc = 0;
+
+       /* Build the command word. */
+       cmd = BCE_NVM_COMMAND_DOIT | cmd_flags;
+
+       /* Calculate the offset for buffered flash. */
+       if (sc->bce_flash_info->buffered) {
+               offset = ((offset / sc->bce_flash_info->page_size) <<
+                         sc->bce_flash_info->page_bits) +
+                        (offset % sc->bce_flash_info->page_size);
+       }
+
+       /*
+        * Clear the DONE bit separately, set the address to read,
+        * and issue the read.
+        */
+       REG_WR(sc, BCE_NVM_COMMAND, BCE_NVM_COMMAND_DONE);
+       REG_WR(sc, BCE_NVM_ADDR, offset & BCE_NVM_ADDR_NVM_ADDR_VALUE);
+       REG_WR(sc, BCE_NVM_COMMAND, cmd);
+
+       /* Wait for completion. */
+       for (i = 0; i < NVRAM_TIMEOUT_COUNT; i++) {
+               uint32_t val;
+
+               DELAY(5);
+
+               val = REG_RD(sc, BCE_NVM_COMMAND);
+               if (val & BCE_NVM_COMMAND_DONE) {
+                       val = REG_RD(sc, BCE_NVM_READ);
+
+                       val = be32toh(val);
+                       memcpy(ret_val, &val, 4);
+                       break;
+               }
+       }
+
+       /* Check for errors. */
+       if (i >= NVRAM_TIMEOUT_COUNT) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Timeout error reading NVRAM at offset 0x%08X!\n",
+                         offset);
+               rc = EBUSY;
+       }
+       return rc;
+}
+
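+/*
+ * Worked example of the buffered-flash offset translation above, assuming
+ * page_size = 264 and page_bits = 9 (typical values for the buffered
+ * entries in flash_table; treat them as assumptions here):
+ *
+ *     offset 600  ->  ((600 / 264) << 9) + (600 % 264)
+ *                 =   (2 << 9) + 72
+ *                 =   1096
+ */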
+
+#ifdef BCE_NVRAM_WRITE_SUPPORT
+/****************************************************************************/
+/* Write a dword (32 bits) to NVRAM.                                        */
+/*                                                                          */
+/* Write a 32 bit word to NVRAM.  The caller is assumed to have already     */
+/* obtained the NVRAM lock, enabled the controller for NVRAM access, and    */
+/* enabled NVRAM write access.                                              */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_nvram_write_dword(struct bce_softc *sc, uint32_t offset, uint8_t *val,
+                     uint32_t cmd_flags)
+{
+       uint32_t cmd, val32;
+       int j;
+
+       /* Build the command word. */
+       cmd = BCE_NVM_COMMAND_DOIT | BCE_NVM_COMMAND_WR | cmd_flags;
+
+       /* Calculate the offset for buffered flash. */
+       if (sc->bce_flash_info->buffered) {
+               offset = ((offset / sc->bce_flash_info->page_size) <<
+                         sc->bce_flash_info->page_bits) +
+                        (offset % sc->bce_flash_info->page_size);
+       }
+
+       /*
+        * Clear the DONE bit separately, convert NVRAM data to big-endian,
+        * set the NVRAM address to write, and issue the write command
+        */
+       REG_WR(sc, BCE_NVM_COMMAND, BCE_NVM_COMMAND_DONE);
+       memcpy(&val32, val, 4);
+       val32 = htobe32(val32);
+       REG_WR(sc, BCE_NVM_WRITE, val32);
+       REG_WR(sc, BCE_NVM_ADDR, offset & BCE_NVM_ADDR_NVM_ADDR_VALUE);
+       REG_WR(sc, BCE_NVM_COMMAND, cmd);
+
+       /* Wait for completion. */
+       for (j = 0; j < NVRAM_TIMEOUT_COUNT; j++) {
+               DELAY(5);
+
+               if (REG_RD(sc, BCE_NVM_COMMAND) & BCE_NVM_COMMAND_DONE)
+                       break;
+       }
+       if (j >= NVRAM_TIMEOUT_COUNT) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Timeout error writing NVRAM at offset 0x%08X\n",
+                         offset);
+               return EBUSY;
+       }
+       return 0;
+}
+#endif /* BCE_NVRAM_WRITE_SUPPORT */
+
+
+/****************************************************************************/
+/* Initialize NVRAM access.                                                 */
+/*                                                                          */
+/* Identify the NVRAM device in use and prepare the NVRAM interface to      */
+/* access that device.                                                      */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_init_nvram(struct bce_softc *sc)
+{
+       uint32_t val;
+       int j, entry_count, rc = 0;
+       const struct flash_spec *flash;
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Entering %s()\n", __func__);
+
+       /* Determine the selected interface. */
+       val = REG_RD(sc, BCE_NVM_CFG1);
+
+       entry_count = sizeof(flash_table) / sizeof(struct flash_spec);
+
+       /*
+        * Flash reconfiguration is required to support additional
+        * NVRAM devices not directly supported in hardware.
+        * Check if the flash interface was reconfigured
+        * by the bootcode.
+        */
+
+       if (val & 0x40000000) {
+               /* Flash interface reconfigured by bootcode. */
+
+               DBPRINT(sc, BCE_INFO_LOAD, 
+                       "%s(): Flash WAS reconfigured.\n", __func__);
+
+               for (j = 0, flash = flash_table; j < entry_count;
+                    j++, flash++) {
+                       if ((val & FLASH_BACKUP_STRAP_MASK) ==
+                           (flash->config1 & FLASH_BACKUP_STRAP_MASK)) {
+                               sc->bce_flash_info = flash;
+                               break;
+                       }
+               }
+       } else {
+               /* Flash interface not yet reconfigured. */
+               uint32_t mask;
+
+               DBPRINT(sc, BCE_INFO_LOAD, 
+                       "%s(): Flash was NOT reconfigured.\n", __func__);
+
+               if (val & (1 << 23))
+                       mask = FLASH_BACKUP_STRAP_MASK;
+               else
+                       mask = FLASH_STRAP_MASK;
+
+               /* Look for the matching NVRAM device configuration data. */
+               for (j = 0, flash = flash_table; j < entry_count;
+                    j++, flash++) {
+                       /* Check if the device matches any of the known devices. */
+                       if ((val & mask) == (flash->strapping & mask)) {
+                               /* Found a device match. */
+                               sc->bce_flash_info = flash;
+
+                               /* Request access to the flash interface. */
+                               rc = bce_acquire_nvram_lock(sc);
+                               if (rc != 0)
+                                       return rc;
+
+                               /* Reconfigure the flash interface. */
+                               bce_enable_nvram_access(sc);
+                               REG_WR(sc, BCE_NVM_CFG1, flash->config1);
+                               REG_WR(sc, BCE_NVM_CFG2, flash->config2);
+                               REG_WR(sc, BCE_NVM_CFG3, flash->config3);
+                               REG_WR(sc, BCE_NVM_WRITE1, flash->write1);
+                               bce_disable_nvram_access(sc);
+                               bce_release_nvram_lock(sc);
+                               break;
+                       }
+               }
+       }
+
+       /* Check if a matching device was found. */
+       if (j == entry_count) {
+               sc->bce_flash_info = NULL;
+               if_printf(&sc->arpcom.ac_if, "Unknown Flash NVRAM found!\n");
+               /* Return now; the code below dereferences bce_flash_info. */
+               return ENODEV;
+       }
+
+       /* Write the flash config data to the shared memory interface. */
+       val = REG_RD_IND(sc, sc->bce_shmem_base + BCE_SHARED_HW_CFG_CONFIG2) &
+             BCE_SHARED_HW_CFG2_NVM_SIZE_MASK;
+       if (val)
+               sc->bce_flash_size = val;
+       else
+               sc->bce_flash_size = sc->bce_flash_info->total_size;
+
+       DBPRINT(sc, BCE_INFO_LOAD, "%s() flash->total_size = 0x%08X\n",
+               __func__, sc->bce_flash_info->total_size);
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Exiting %s()\n", __func__);
+
+       return rc;
+}
+
+
+/****************************************************************************/
+/* Read an arbitrary range of data from NVRAM.                              */
+/*                                                                          */
+/* Prepares the NVRAM interface for access and reads the requested data     */
+/* into the supplied buffer.                                                */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success and the data read, positive value on failure.             */
+/****************************************************************************/
+static int
+bce_nvram_read(struct bce_softc *sc, uint32_t offset, uint8_t *ret_buf,
+              int buf_size)
+{
+       uint32_t cmd_flags, offset32, len32, extra;
+       int rc = 0;
+
+       if (buf_size == 0)
+               return 0;
+
+       /* Request access to the flash interface. */
+       rc = bce_acquire_nvram_lock(sc);
+       if (rc != 0)
+               return rc;
+
+       /* Enable access to flash interface */
+       bce_enable_nvram_access(sc);
+
+       len32 = buf_size;
+       offset32 = offset;
+       extra = 0;
+
+       cmd_flags = 0;
+
+       /* XXX should we release nvram lock if read_dword() fails? */
+       if (offset32 & 3) {
+               uint8_t buf[4];
+               uint32_t pre_len;
+
+               offset32 &= ~3;
+               pre_len = 4 - (offset & 3);
+
+               if (pre_len >= len32) {
+                       pre_len = len32;
+                       cmd_flags = BCE_NVM_COMMAND_FIRST | BCE_NVM_COMMAND_LAST;
+               } else {
+                       cmd_flags = BCE_NVM_COMMAND_FIRST;
+               }
+
+               rc = bce_nvram_read_dword(sc, offset32, buf, cmd_flags);
+               if (rc)
+                       return rc;
+
+               memcpy(ret_buf, buf + (offset & 3), pre_len);
+
+               offset32 += 4;
+               ret_buf += pre_len;
+               len32 -= pre_len;
+       }
+
+       if (len32 & 3) {
+               extra = 4 - (len32 & 3);
+               len32 = (len32 + 4) & ~3;
+       }
+
+       if (len32 == 4) {
+               uint8_t buf[4];
+
+               if (cmd_flags)
+                       cmd_flags = BCE_NVM_COMMAND_LAST;
+               else
+                       cmd_flags = BCE_NVM_COMMAND_FIRST |
+                                   BCE_NVM_COMMAND_LAST;
+
+               rc = bce_nvram_read_dword(sc, offset32, buf, cmd_flags);
+
+               memcpy(ret_buf, buf, 4 - extra);
+       } else if (len32 > 0) {
+               uint8_t buf[4];
+
+               /* Read the first word. */
+               if (cmd_flags)
+                       cmd_flags = 0;
+               else
+                       cmd_flags = BCE_NVM_COMMAND_FIRST;
+
+               rc = bce_nvram_read_dword(sc, offset32, ret_buf, cmd_flags);
+
+               /* Advance to the next dword. */
+               offset32 += 4;
+               ret_buf += 4;
+               len32 -= 4;
+
+               while (len32 > 4 && rc == 0) {
+                       rc = bce_nvram_read_dword(sc, offset32, ret_buf, 0);
+
+                       /* Advance to the next dword. */
+                       offset32 += 4;
+                       ret_buf += 4;
+                       len32 -= 4;
+               }
+
+               if (rc)
+                       return rc;
+
+               cmd_flags = BCE_NVM_COMMAND_LAST;
+               rc = bce_nvram_read_dword(sc, offset32, buf, cmd_flags);
+
+               memcpy(ret_buf, buf, 4 - extra);
+       }
+
+       /* Disable access to flash interface and release the lock. */
+       bce_disable_nvram_access(sc);
+       bce_release_nvram_lock(sc);
+
+       return rc;
+}
+
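+/*
+ * Usage note: callers may pass any byte offset and length; the alignment
+ * fix-ups above hide the dword-oriented NVRAM interface.  For example,
+ * bce_nvram_test() below fetches the NVRAM magic number with
+ * bce_nvram_read(sc, 0, data, 4).
+ */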
+
+#ifdef BCE_NVRAM_WRITE_SUPPORT
+/****************************************************************************/
+/* Write an arbitrary range of data to NVRAM.                               */
+/*                                                                          */
+/* Prepares the NVRAM interface for write access and writes the requested   */
+/* data from the supplied buffer.  The caller is responsible for            */
+/* calculating any appropriate CRCs.                                        */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_nvram_write(struct bce_softc *sc, uint32_t offset, uint8_t *data_buf,
+               int buf_size)
+{
+       uint32_t written, offset32, len32;
+       uint8_t *buf, start[4], end[4];
+       int rc = 0;
+       int align_start, align_end;
+
+       buf = data_buf;
+       offset32 = offset;
+       len32 = buf_size;
+       align_end = 0;
+       align_start = (offset32 & 3);
+
+       if (align_start) {
+               offset32 &= ~3;
+               len32 += align_start;
+               rc = bce_nvram_read(sc, offset32, start, 4);
+               if (rc)
+                       return rc;
+       }
+
+       if (len32 & 3) {
+               if (len32 > 4 || !align_start) {
+                       align_end = 4 - (len32 & 3);
+                       len32 += align_end;
+                       rc = bce_nvram_read(sc, offset32 + len32 - 4, end, 4);
+                       if (rc)
+                               return rc;
+               }
+       }
+
+       if (align_start || align_end) {
+               buf = kmalloc(len32, M_DEVBUF, M_NOWAIT);
+               if (buf == NULL)
+                       return ENOMEM;
+               if (align_start)
+                       memcpy(buf, start, 4);
+               if (align_end)
+                       memcpy(buf + len32 - 4, end, 4);
+               memcpy(buf + align_start, data_buf, buf_size);
+       }
+
+       written = 0;
+       while (written < len32 && rc == 0) {
+               uint32_t page_start, page_end, data_start, data_end;
+               uint32_t addr, cmd_flags;
+               int i;
+               uint8_t flash_buffer[264];
+
+               /* Find the page_start addr */
+               page_start = offset32 + written;
+               page_start -= (page_start % sc->bce_flash_info->page_size);
+               /* Find the page_end addr */
+               page_end = page_start + sc->bce_flash_info->page_size;
+               /* Find the data_start addr */
+               data_start = (written == 0) ? offset32 : page_start;
+               /* Find the data_end addr */
+               data_end = (page_end > offset32 + len32) ? (offset32 + len32)
+                                                        : page_end;
+
+               /* Request access to the flash interface. */
+               rc = bce_acquire_nvram_lock(sc);
+               if (rc != 0)
+                       goto nvram_write_end;
+
+               /* Enable access to flash interface */
+               bce_enable_nvram_access(sc);
+
+               cmd_flags = BCE_NVM_COMMAND_FIRST;
+               if (sc->bce_flash_info->buffered == 0) {
+                       int j;
+
+                       /*
+                        * Read the whole page into the buffer
+                        * (non-buffer flash only)
+                        */
+                       for (j = 0; j < sc->bce_flash_info->page_size; j += 4) {
+                               if (j == (sc->bce_flash_info->page_size - 4))
+                                       cmd_flags |= BCE_NVM_COMMAND_LAST;
+
+                               rc = bce_nvram_read_dword(sc, page_start + j,
+                                                         &flash_buffer[j],
+                                                         cmd_flags);
+                               if (rc)
+                                       goto nvram_write_end;
+
+                               cmd_flags = 0;
+                       }
+               }
+
+               /* Enable writes to flash interface (unlock write-protect) */
+               rc = bce_enable_nvram_write(sc);
+               if (rc != 0)
+                       goto nvram_write_end;
+
+               /* Erase the page */
+               rc = bce_nvram_erase_page(sc, page_start);
+               if (rc != 0)
+                       goto nvram_write_end;
+
+               /* Re-enable the write again for the actual write */
+               bce_enable_nvram_write(sc);
+
+               /* Loop to write back the buffer data from page_start to
+                * data_start */
+               i = 0;
+               if (sc->bce_flash_info->buffered == 0) {
+                       for (addr = page_start; addr < data_start;
+                            addr += 4, i += 4) {
+                               rc = bce_nvram_write_dword(sc, addr,
+                                                          &flash_buffer[i],
+                                                          cmd_flags);
+                               if (rc != 0)
+                                       goto nvram_write_end;
+
+                               cmd_flags = 0;
+                       }
+               }
+
+               /* Loop to write the new data from data_start to data_end */
+               for (addr = data_start; addr < data_end; addr += 4, i++) {
+                       if (addr == page_end - 4 ||
+                           (sc->bce_flash_info->buffered &&
+                            addr == data_end - 4))
+                               cmd_flags |= BCE_NVM_COMMAND_LAST;
+
+                       rc = bce_nvram_write_dword(sc, addr, buf, cmd_flags);
+                       if (rc != 0)
+                               goto nvram_write_end;
+
+                       cmd_flags = 0;
+                       buf += 4;
+               }
+
+               /* Loop to write back the buffer data from data_end
+                * to page_end */
+               if (sc->bce_flash_info->buffered == 0) {
+                       for (addr = data_end; addr < page_end;
+                            addr += 4, i += 4) {
+                               if (addr == page_end-4)
+                                       cmd_flags = BCE_NVM_COMMAND_LAST;
+
+                               rc = bce_nvram_write_dword(sc, addr,
+                                       &flash_buffer[i], cmd_flags);
+                               if (rc != 0)
+                                       goto nvram_write_end;
+
+                               cmd_flags = 0;
+                       }
+               }
+
+               /* Disable writes to flash interface (lock write-protect) */
+               bce_disable_nvram_write(sc);
+
+               /* Disable access to flash interface */
+               bce_disable_nvram_access(sc);
+               bce_release_nvram_lock(sc);
+
+               /* Increment written */
+               written += data_end - data_start;
+       }
+
+nvram_write_end:
+       if (align_start || align_end)
+               kfree(buf, M_DEVBUF);
+       return rc;
+}
+#endif /* BCE_NVRAM_WRITE_SUPPORT */
+
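+/*
+ * Worked example for the alignment handling in bce_nvram_write() above
+ * (compiled only with BCE_NVRAM_WRITE_SUPPORT): writing 5 bytes at offset 6
+ * gives align_start = 2 and align_end = 1, so the routine reads the dwords
+ * at offsets 4 and 8, rewrites offsets 4-11 from a temporary buffer, and
+ * thereby preserves the existing NVRAM bytes at offsets 4, 5 and 11.
+ */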
+
+/****************************************************************************/
+/* Verifies that NVRAM is accessible and contains valid data.               */
+/*                                                                          */
+/* Reads the configuration data from NVRAM and verifies that the CRC is     */
+/* correct.                                                                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 on success, positive value on failure.                               */
+/****************************************************************************/
+static int
+bce_nvram_test(struct bce_softc *sc)
+{
+       uint32_t buf[BCE_NVRAM_SIZE / 4];
+       uint32_t magic, csum;
+       uint8_t *data = (uint8_t *)buf;
+       int rc = 0;
+
+       /*
+        * Check that the device NVRAM is valid by reading
+        * the magic value at offset 0.
+        */
+       rc = bce_nvram_read(sc, 0, data, 4);
+       if (rc != 0)
+               return rc;
+
+       magic = be32toh(buf[0]);
+       if (magic != BCE_NVRAM_MAGIC) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Invalid NVRAM magic value! Expected: 0x%08X, "
+                         "Found: 0x%08X\n", BCE_NVRAM_MAGIC, magic);
+               return ENODEV;
+       }
+
+       /*
+        * Verify that the device NVRAM includes valid
+        * configuration data.
+        */
+       rc = bce_nvram_read(sc, 0x100, data, BCE_NVRAM_SIZE);
+       if (rc != 0)
+               return rc;
+
+       csum = ether_crc32_le(data, 0x100);
+       if (csum != BCE_CRC32_RESIDUAL) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Invalid Manufacturing Information NVRAM CRC! "
+                         "Expected: 0x%08X, Found: 0x%08X\n",
+                         BCE_CRC32_RESIDUAL, csum);
+               return ENODEV;
+       }
+
+       csum = ether_crc32_le(data + 0x100, 0x100);
+       if (csum != BCE_CRC32_RESIDUAL) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Invalid Feature Configuration Information "
+                         "NVRAM CRC! Expected: 0x%08X, Found: 0x%08X\n",
+                         BCE_CRC32_RESIDUAL, csum);
+               rc = ENODEV;
+       }
+       return rc;
+}
+
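+/*
+ * Note on the checks above: ether_crc32_le() is run over each 0x100-byte
+ * region including its stored CRC, so an intact region always reduces to
+ * the fixed remainder BCE_CRC32_RESIDUAL rather than having to be compared
+ * against a per-region checksum value.
+ */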
+
+/****************************************************************************/
+/* Free any DMA memory owned by the driver.                                 */
+/*                                                                          */
+/* Scans through each data structure that requires DMA memory and frees     */
+/* the memory if allocated.                                                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dma_free(struct bce_softc *sc)
+{
+       int i;
+
+       /* Destroy the status block. */
+       if (sc->status_tag != NULL) {
+               if (sc->status_block != NULL) {
+                       bus_dmamap_unload(sc->status_tag, sc->status_map);
+                       bus_dmamem_free(sc->status_tag, sc->status_block,
+                                       sc->status_map);
+               }
+               bus_dma_tag_destroy(sc->status_tag);
+       }
+
+
+       /* Destroy the statistics block. */
+       if (sc->stats_tag != NULL) {
+               if (sc->stats_block != NULL) {
+                       bus_dmamap_unload(sc->stats_tag, sc->stats_map);
+                       bus_dmamem_free(sc->stats_tag, sc->stats_block,
+                                       sc->stats_map);
+               }
+               bus_dma_tag_destroy(sc->stats_tag);
+       }
+
+       /* Destroy the TX buffer descriptor DMA resources. */
+       if (sc->tx_bd_chain_tag != NULL) {
+               for (i = 0; i < TX_PAGES; i++) {
+                       if (sc->tx_bd_chain[i] != NULL) {
+                               bus_dmamap_unload(sc->tx_bd_chain_tag,
+                                                 sc->tx_bd_chain_map[i]);
+                               bus_dmamem_free(sc->tx_bd_chain_tag,
+                                               sc->tx_bd_chain[i],
+                                               sc->tx_bd_chain_map[i]);
+                       }
+               }
+               bus_dma_tag_destroy(sc->tx_bd_chain_tag);
+       }
+
+       /* Destroy the RX buffer descriptor DMA resources. */
+       if (sc->rx_bd_chain_tag != NULL) {
+               for (i = 0; i < RX_PAGES; i++) {
+                       if (sc->rx_bd_chain[i] != NULL) {
+                               bus_dmamap_unload(sc->rx_bd_chain_tag,
+                                                 sc->rx_bd_chain_map[i]);
+                               bus_dmamem_free(sc->rx_bd_chain_tag,
+                                               sc->rx_bd_chain[i],
+                                               sc->rx_bd_chain_map[i]);
+                       }
+               }
+               bus_dma_tag_destroy(sc->rx_bd_chain_tag);
+       }
+
+       /* Destroy the TX mbuf DMA resources. */
+       if (sc->tx_mbuf_tag != NULL) {
+               for (i = 0; i < TOTAL_TX_BD; i++) {
+                       /* Must have been unloaded in bce_stop() */
+                       KKASSERT(sc->tx_mbuf_ptr[i] == NULL);
+                       bus_dmamap_destroy(sc->tx_mbuf_tag,
+                                          sc->tx_mbuf_map[i]);
+               }
+               bus_dma_tag_destroy(sc->tx_mbuf_tag);
+       }
+
+       /* Destroy the RX mbuf DMA resources. */
+       if (sc->rx_mbuf_tag != NULL) {
+               for (i = 0; i < TOTAL_RX_BD; i++) {
+                       /* Must have been unloaded in bce_stop() */
+                       KKASSERT(sc->rx_mbuf_ptr[i] == NULL);
+                       bus_dmamap_destroy(sc->rx_mbuf_tag,
+                                          sc->rx_mbuf_map[i]);
+               }
+               bus_dma_tag_destroy(sc->rx_mbuf_tag);
+       }
+
+       /* Destroy the parent tag */
+       if (sc->parent_tag != NULL)
+               bus_dma_tag_destroy(sc->parent_tag);
+}
+
+
+/****************************************************************************/
+/* Get DMA memory from the OS.                                              */
+/*                                                                          */
+/* Validates that the OS has provided DMA buffers in response to a          */
+/* bus_dmamap_load() call and saves the physical address of those buffers.  */
+/* If the load fails the callback simply returns without storing an         */
+/* address, leaving the caller-supplied bus_addr_t untouched.               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dma_map_addr(void *arg, bus_dma_segment_t *segs, int nseg, int error)
+{
+       bus_addr_t *busaddr = arg;
+
+       /*
+        * Simulate a mapping failure.
+        * XXX not correct.
+        */
+       DBRUNIF(DB_RANDOMTRUE(bce_debug_dma_map_addr_failure),
+               kprintf("bce: %s(%d): Simulating DMA mapping error.\n",
+                       __FILE__, __LINE__);
+               error = ENOMEM);
+               
+       /* Check for an error and signal the caller that an error occurred. */
+       if (error)
+               return;
+
+       KASSERT(nseg == 1, ("only one segment is allowed\n"));
+       *busaddr = segs->ds_addr;
+}
+
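+/*
+ * Usage sketch: bce_dma_alloc() below pairs this callback with
+ * bus_dmamap_load() to recover the physical address of a freshly allocated
+ * block, e.g.
+ *
+ *     bus_dmamap_load(sc->status_tag, sc->status_map, sc->status_block,
+ *         BCE_STATUS_BLK_SZ, bce_dma_map_addr, &busaddr, BUS_DMA_WAITOK);
+ *     sc->status_block_paddr = busaddr;
+ */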
+
+static void
+bce_dma_map_mbuf(void *arg, bus_dma_segment_t *segs, int nsegs,
+                bus_size_t mapsz __unused, int error)
+{
+       struct bce_dmamap_arg *ctx = arg;
+       int i;
+
+       if (error)
+               return;
+
+       if (nsegs > ctx->bce_maxsegs) {
+               ctx->bce_maxsegs = 0;
+               return;
+       }
+
+       ctx->bce_maxsegs = nsegs;
+       for (i = 0; i < nsegs; ++i)
+               ctx->bce_segs[i] = segs[i];
+}
+
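+/*
+ * Note: bce_maxsegs doubles as the error channel for the callback above;
+ * it is cleared to 0 when the mbuf maps to more segments than the caller
+ * allowed, and otherwise records the actual segment count next to the
+ * copied segment array.
+ */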
+
+/****************************************************************************/
+/* Allocate any DMA memory needed by the driver.                            */
+/*                                                                          */
+/* Allocates the DMA memory required by the various global structures used  */
+/* by the hardware.                                                         */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_dma_alloc(struct bce_softc *sc)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       int i, j, rc = 0;
+       bus_addr_t busaddr;
+
+       /*
+        * Allocate the parent bus DMA tag appropriate for PCI.
+        */
+       rc = bus_dma_tag_create(NULL, 1, BCE_DMA_BOUNDARY,
+                               sc->max_bus_addr, BUS_SPACE_MAXADDR,
+                               NULL, NULL,
+                               MAXBSIZE, BUS_SPACE_UNRESTRICTED,
+                               BUS_SPACE_MAXSIZE_32BIT,
+                               0, &sc->parent_tag);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate parent DMA tag!\n");
+               return rc;
+       }
+
+       /*
+        * Create a DMA tag for the status block, allocate and clear the
+        * memory, map the memory into DMA space, and fetch the physical 
+        * address of the block.
+        */
+       rc = bus_dma_tag_create(sc->parent_tag,
+                               BCE_DMA_ALIGN, BCE_DMA_BOUNDARY,
+                               sc->max_bus_addr, BUS_SPACE_MAXADDR,
+                               NULL, NULL,
+                               BCE_STATUS_BLK_SZ, 1, BCE_STATUS_BLK_SZ,
+                               0, &sc->status_tag);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate status block DMA tag!\n");
+               return rc;
+       }
+
+       rc = bus_dmamem_alloc(sc->status_tag, (void **)&sc->status_block,
+                             BUS_DMA_WAITOK | BUS_DMA_ZERO,
+                             &sc->status_map);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate status block DMA memory!\n");
+               return rc;
+       }
+
+       rc = bus_dmamap_load(sc->status_tag, sc->status_map,
+                            sc->status_block, BCE_STATUS_BLK_SZ,
+                            bce_dma_map_addr, &busaddr, BUS_DMA_WAITOK);
+       if (rc != 0) {
+               if_printf(ifp, "Could not map status block DMA memory!\n");
+               bus_dmamem_free(sc->status_tag, sc->status_block,
+                               sc->status_map);
+               sc->status_block = NULL;
+               return rc;
+       }
+
+       sc->status_block_paddr = busaddr;
+       /* DRC - Fix for 64 bit addresses. */
+       DBPRINT(sc, BCE_INFO, "status_block_paddr = 0x%08X\n",
+               (uint32_t)sc->status_block_paddr);
+
+       /*
+        * Create a DMA tag for the statistics block, allocate and clear the
+        * memory, map the memory into DMA space, and fetch the physical 
+        * address of the block.
+        */
+       rc = bus_dma_tag_create(sc->parent_tag,
+                               BCE_DMA_ALIGN, BCE_DMA_BOUNDARY,
+                               sc->max_bus_addr, BUS_SPACE_MAXADDR,
+                               NULL, NULL,
+                               BCE_STATS_BLK_SZ, 1, BCE_STATS_BLK_SZ,
+                               0, &sc->stats_tag);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate "
+                         "statistics block DMA tag!\n");
+               return rc;
+       }
+
+       rc = bus_dmamem_alloc(sc->stats_tag, (void **)&sc->stats_block,
+                             BUS_DMA_WAITOK | BUS_DMA_ZERO,
+                             &sc->stats_map);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate "
+                         "statistics block DMA memory!\n");
+               return rc;
+       }
+
+       rc = bus_dmamap_load(sc->stats_tag, sc->stats_map,
+                            sc->stats_block, BCE_STATS_BLK_SZ,
+                            bce_dma_map_addr, &busaddr, BUS_DMA_WAITOK);
+       if (rc != 0) {
+               if_printf(ifp, "Could not map statistics block DMA memory!\n");
+               bus_dmamem_free(sc->stats_tag, sc->stats_block, sc->stats_map);
+               sc->stats_block = NULL;
+               return rc;
+       }
+
+       sc->stats_block_paddr = busaddr;
+       /* DRC - Fix for 64 bit address. */
+       DBPRINT(sc, BCE_INFO, "stats_block_paddr = 0x%08X\n", 
+               (uint32_t)sc->stats_block_paddr);
+
+       /*
+        * Create a DMA tag for the TX buffer descriptor chain,
+        * allocate and clear the  memory, and fetch the
+        * physical address of the block.
+        */
+       rc = bus_dma_tag_create(sc->parent_tag,
+                               BCM_PAGE_SIZE, BCE_DMA_BOUNDARY,
+                               sc->max_bus_addr, BUS_SPACE_MAXADDR,
+                               NULL, NULL,
+                               BCE_TX_CHAIN_PAGE_SZ, 1, BCE_TX_CHAIN_PAGE_SZ,
+                               0, &sc->tx_bd_chain_tag);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate "
+                         "TX descriptor chain DMA tag!\n");
+               return rc;
+       }
+
+       for (i = 0; i < TX_PAGES; i++) {
+               rc = bus_dmamem_alloc(sc->tx_bd_chain_tag,
+                                     (void **)&sc->tx_bd_chain[i],
+                                     BUS_DMA_WAITOK, &sc->tx_bd_chain_map[i]);
+               if (rc != 0) {
+                       if_printf(ifp, "Could not allocate %dth TX descriptor "
+                                 "chain DMA memory!\n", i);
+                       return rc;
+               }
+
+               rc = bus_dmamap_load(sc->tx_bd_chain_tag,
+                                    sc->tx_bd_chain_map[i],
+                                    sc->tx_bd_chain[i], BCE_TX_CHAIN_PAGE_SZ,
+                                    bce_dma_map_addr, &busaddr,
+                                    BUS_DMA_WAITOK);
+               if (rc != 0) {
+                       if_printf(ifp, "Could not map %dth TX descriptor "
+                                 "chain DMA memory!\n", i);
+                       bus_dmamem_free(sc->tx_bd_chain_tag,
+                                       sc->tx_bd_chain[i],
+                                       sc->tx_bd_chain_map[i]);
+                       sc->tx_bd_chain[i] = NULL;
+                       return rc;
+               }
+
+               sc->tx_bd_chain_paddr[i] = busaddr;
+               /* DRC - Fix for 64 bit systems. */
+               DBPRINT(sc, BCE_INFO, "tx_bd_chain_paddr[%d] = 0x%08X\n", 
+                       i, (uint32_t)sc->tx_bd_chain_paddr[i]);
+       }
+
+       /* Create a DMA tag for TX mbufs. */
+       rc = bus_dma_tag_create(sc->parent_tag, 1, BCE_DMA_BOUNDARY,
+                               sc->max_bus_addr, BUS_SPACE_MAXADDR,
+                               NULL, NULL,
+                               MCLBYTES * BCE_MAX_SEGMENTS,
+                               BCE_MAX_SEGMENTS, MCLBYTES,
+                               0, &sc->tx_mbuf_tag);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate TX mbuf DMA tag!\n");
+               return rc;
+       }
+
+       /* Create DMA maps for the TX mbufs clusters. */
+       for (i = 0; i < TOTAL_TX_BD; i++) {
+               rc = bus_dmamap_create(sc->tx_mbuf_tag, BUS_DMA_WAITOK,
+                                      &sc->tx_mbuf_map[i]);
+               if (rc != 0) {
+                       for (j = 0; j < i; ++j) {
+                               bus_dmamap_destroy(sc->tx_mbuf_tag,
+                                                  sc->tx_mbuf_map[j]);
+                       }
+                       bus_dma_tag_destroy(sc->tx_mbuf_tag);
+                       sc->tx_mbuf_tag = NULL;
+
+                       if_printf(ifp, "Unable to create "
+                                 "%dth TX mbuf DMA map!\n", i);
+                       return rc;
+               }
+       }
+
+       /*
+        * Create a DMA tag for the RX buffer descriptor chain,
+        * allocate and clear the memory, and fetch the physical
+        * address of the blocks.
+        */
+       rc = bus_dma_tag_create(sc->parent_tag,
+                               BCM_PAGE_SIZE, BCE_DMA_BOUNDARY,
+                               sc->max_bus_addr, BUS_SPACE_MAXADDR,
+                               NULL, NULL,
+                               BCE_RX_CHAIN_PAGE_SZ, 1, BCE_RX_CHAIN_PAGE_SZ,
+                               0, &sc->rx_bd_chain_tag);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate "
+                         "RX descriptor chain DMA tag!\n");
+               return rc;
+       }
+
+       for (i = 0; i < RX_PAGES; i++) {
+               rc = bus_dmamem_alloc(sc->rx_bd_chain_tag,
+                                     (void **)&sc->rx_bd_chain[i],
+                                     BUS_DMA_WAITOK | BUS_DMA_ZERO,
+                                     &sc->rx_bd_chain_map[i]);
+               if (rc != 0) {
+                       if_printf(ifp, "Could not allocate %dth RX descriptor "
+                                 "chain DMA memory!\n", i);
+                       return rc;
+               }
+
+               rc = bus_dmamap_load(sc->rx_bd_chain_tag,
+                                    sc->rx_bd_chain_map[i],
+                                    sc->rx_bd_chain[i], BCE_RX_CHAIN_PAGE_SZ,
+                                    bce_dma_map_addr, &busaddr,
+                                    BUS_DMA_WAITOK);
+               if (rc != 0) {
+                       if_printf(ifp, "Could not map %dth RX descriptor "
+                                 "chain DMA memory!\n", i);
+                       bus_dmamem_free(sc->rx_bd_chain_tag,
+                                       sc->rx_bd_chain[i],
+                                       sc->rx_bd_chain_map[i]);
+                       sc->rx_bd_chain[i] = NULL;
+                       return rc;
+               }
+
+               sc->rx_bd_chain_paddr[i] = busaddr;
+               /* DRC - Fix for 64 bit systems. */
+               DBPRINT(sc, BCE_INFO, "rx_bd_chain_paddr[%d] = 0x%08X\n",
+                       i, (uint32_t)sc->rx_bd_chain_paddr[i]);
+       }
+
+       /* Create a DMA tag for RX mbufs. */
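+       /*
+        * NOTE: each RX mbuf cluster is loaded as a single DMA segment of
+        * at most MCLBYTES; the commented out BCE_MAX_SEGMENTS value would
+        * allow multi-segment loads instead.
+        */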
+       rc = bus_dma_tag_create(sc->parent_tag, 1, BCE_DMA_BOUNDARY,
+                               sc->max_bus_addr, BUS_SPACE_MAXADDR,
+                               NULL, NULL,
+                               MCLBYTES, 1/* BCE_MAX_SEGMENTS */, MCLBYTES,
+                               0, &sc->rx_mbuf_tag);
+       if (rc != 0) {
+               if_printf(ifp, "Could not allocate RX mbuf DMA tag!\n");
+               return rc;
+       }
+
+       /* Create DMA maps for the RX mbuf clusters. */
+       for (i = 0; i < TOTAL_RX_BD; i++) {
+               rc = bus_dmamap_create(sc->rx_mbuf_tag, BUS_DMA_WAITOK,
+                                      &sc->rx_mbuf_map[i]);
+               if (rc != 0) {
+                       for (j = 0; j < i; ++j) {
+                               bus_dmamap_destroy(sc->rx_mbuf_tag,
+                                                  sc->rx_mbuf_map[j]);
+                       }
+                       bus_dma_tag_destroy(sc->rx_mbuf_tag);
+                       sc->rx_mbuf_tag = NULL;
+
+                       if_printf(ifp, "Unable to create "
+                                 "%dth RX mbuf DMA map!\n", i);
+                       return rc;
+               }
+       }
+       return 0;
+}
+
+
+/****************************************************************************/
+/* Firmware synchronization.                                                */
+/*                                                                          */
+/* Before performing certain events such as a chip reset, synchronize with  */
+/* the firmware first.                                                      */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_fw_sync(struct bce_softc *sc, uint32_t msg_data)
+{
+       int i, rc = 0;
+       uint32_t val;
+
+       /* Don't waste any time if we've timed out before. */
+       if (sc->bce_fw_timed_out)
+               return EBUSY;
+
+       /* Increment the message sequence number. */
+       sc->bce_fw_wr_seq++;
+       msg_data |= sc->bce_fw_wr_seq;
+
+       DBPRINT(sc, BCE_VERBOSE, "bce_fw_sync(): msg_data = 0x%08X\n", msg_data);
+
+       /* Send the message to the bootcode driver mailbox. */
+       REG_WR_IND(sc, sc->bce_shmem_base + BCE_DRV_MB, msg_data);
+
+       /* Wait for the bootcode to acknowledge the message. */
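+       /*
+        * The bootcode acknowledges by echoing the sequence number in the
+        * BCE_FW_MB mailbox, so poll until its ACK field matches the
+        * sequence bits written above.
+        */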
+       for (i = 0; i < FW_ACK_TIME_OUT_MS; i++) {
+               /* Check for a response in the bootcode firmware mailbox. */
+               val = REG_RD_IND(sc, sc->bce_shmem_base + BCE_FW_MB);
+               if ((val & BCE_FW_MSG_ACK) == (msg_data & BCE_DRV_MSG_SEQ))
+                       break;
+               DELAY(1000);
+       }
+
+       /* If we've timed out, tell the bootcode that we've stopped waiting. */
+       if ((val & BCE_FW_MSG_ACK) != (msg_data & BCE_DRV_MSG_SEQ) &&
+           (msg_data & BCE_DRV_MSG_DATA) != BCE_DRV_MSG_DATA_WAIT0) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Firmware synchronization timeout! "
+                         "msg_data = 0x%08X\n", msg_data);
+
+               msg_data &= ~BCE_DRV_MSG_CODE;
+               msg_data |= BCE_DRV_MSG_CODE_FW_TIMEOUT;
+
+               REG_WR_IND(sc, sc->bce_shmem_base + BCE_DRV_MB, msg_data);
+
+               sc->bce_fw_timed_out = 1;
+               rc = EBUSY;
+       }
+       return rc;
+}
+
+
+/****************************************************************************/
+/* Load Receive Virtual 2 Physical (RV2P) processor firmware.               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_load_rv2p_fw(struct bce_softc *sc, uint32_t *rv2p_code,
+                uint32_t rv2p_code_len, uint32_t rv2p_proc)
+{
+       int i;
+       uint32_t val;
+
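+       /*
+        * The RV2P image is consumed as 64-bit instruction pairs: the high
+        * word is written first, then the low word, and each pair is
+        * committed to instruction slot (i / 8) through the processor's
+        * ADDR_CMD register.
+        */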
+       for (i = 0; i < rv2p_code_len; i += 8) {
+               REG_WR(sc, BCE_RV2P_INSTR_HIGH, *rv2p_code);
+               rv2p_code++;
+               REG_WR(sc, BCE_RV2P_INSTR_LOW, *rv2p_code);
+               rv2p_code++;
+
+               if (rv2p_proc == RV2P_PROC1) {
+                       val = (i / 8) | BCE_RV2P_PROC1_ADDR_CMD_RDWR;
+                       REG_WR(sc, BCE_RV2P_PROC1_ADDR_CMD, val);
+               } else {
+                       val = (i / 8) | BCE_RV2P_PROC2_ADDR_CMD_RDWR;
+                       REG_WR(sc, BCE_RV2P_PROC2_ADDR_CMD, val);
+               }
+       }
+
+       /* Reset the processor, un-stall is done later. */
+       if (rv2p_proc == RV2P_PROC1)
+               REG_WR(sc, BCE_RV2P_COMMAND, BCE_RV2P_COMMAND_PROC1_RESET);
+       else
+               REG_WR(sc, BCE_RV2P_COMMAND, BCE_RV2P_COMMAND_PROC2_RESET);
+}
+
+
+/****************************************************************************/
+/* Load RISC processor firmware.                                            */
+/*                                                                          */
+/* Loads firmware from the file if_bcefw.h into the scratchpad memory       */
+/* associated with a particular processor.                                  */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_load_cpu_fw(struct bce_softc *sc, struct cpu_reg *cpu_reg,
+               struct fw_info *fw)
+{
+       uint32_t offset, val;
+       int j;
+
+       /* Halt the CPU. */
+       val = REG_RD_IND(sc, cpu_reg->mode);
+       val |= cpu_reg->mode_value_halt;
+       REG_WR_IND(sc, cpu_reg->mode, val);
+       REG_WR_IND(sc, cpu_reg->state, cpu_reg->state_value_clear);
+
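+       /*
+        * Each firmware section below is copied a word at a time into the
+        * processor's scratchpad through the indirect register interface;
+        * the section load addresses are translated from the MIPS view
+        * into scratchpad offsets.
+        */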
+       /* Load the Text area. */
+       offset = cpu_reg->spad_base + (fw->text_addr - cpu_reg->mips_view_base);
+       if (fw->text) {
+               for (j = 0; j < (fw->text_len / 4); j++, offset += 4)
+                       REG_WR_IND(sc, offset, fw->text[j]);
+       }
+
+       /* Load the Data area. */
+       offset = cpu_reg->spad_base + (fw->data_addr - cpu_reg->mips_view_base);
+       if (fw->data) {
+               for (j = 0; j < (fw->data_len / 4); j++, offset += 4)
+                       REG_WR_IND(sc, offset, fw->data[j]);
+       }
+
+       /* Load the SBSS area. */
+       offset = cpu_reg->spad_base + (fw->sbss_addr - cpu_reg->mips_view_base);
+       if (fw->sbss) {
+               for (j = 0; j < (fw->sbss_len / 4); j++, offset += 4)
+                       REG_WR_IND(sc, offset, fw->sbss[j]);
+       }
+
+       /* Load the BSS area. */
+       offset = cpu_reg->spad_base + (fw->bss_addr - cpu_reg->mips_view_base);
+       if (fw->bss) {
+               for (j = 0; j < (fw->bss_len/4); j++, offset += 4)
+                       REG_WR_IND(sc, offset, fw->bss[j]);
+       }
+
+       /* Load the Read-Only area. */
+       offset = cpu_reg->spad_base +
+               (fw->rodata_addr - cpu_reg->mips_view_base);
+       if (fw->rodata) {
+               for (j = 0; j < (fw->rodata_len / 4); j++, offset += 4)
+                       REG_WR_IND(sc, offset, fw->rodata[j]);
+       }
+
+       /* Clear the pre-fetch instruction. */
+       REG_WR_IND(sc, cpu_reg->inst, 0);
+       REG_WR_IND(sc, cpu_reg->pc, fw->start_addr);
+
+       /* Start the CPU. */
+       val = REG_RD_IND(sc, cpu_reg->mode);
+       val &= ~cpu_reg->mode_value_halt;
+       REG_WR_IND(sc, cpu_reg->state, cpu_reg->state_value_clear);
+       REG_WR_IND(sc, cpu_reg->mode, val);
+}
+
+
+/****************************************************************************/
+/* Initialize the RV2P, RX, TX, TPAT, and COM CPUs.                         */
+/*                                                                          */
+/* Loads the firmware for each CPU and starts the CPU.                      */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_init_cpus(struct bce_softc *sc)
+{
+       struct cpu_reg cpu_reg;
+       struct fw_info fw;
+
+       /* Initialize the RV2P processor. */
+       bce_load_rv2p_fw(sc, bce_rv2p_proc1, sizeof(bce_rv2p_proc1), RV2P_PROC1);
+       bce_load_rv2p_fw(sc, bce_rv2p_proc2, sizeof(bce_rv2p_proc2), RV2P_PROC2);
+
+       /* Initialize the RX Processor. */
+       cpu_reg.mode = BCE_RXP_CPU_MODE;
+       cpu_reg.mode_value_halt = BCE_RXP_CPU_MODE_SOFT_HALT;
+       cpu_reg.mode_value_sstep = BCE_RXP_CPU_MODE_STEP_ENA;
+       cpu_reg.state = BCE_RXP_CPU_STATE;
+       cpu_reg.state_value_clear = 0xffffff;
+       cpu_reg.gpr0 = BCE_RXP_CPU_REG_FILE;
+       cpu_reg.evmask = BCE_RXP_CPU_EVENT_MASK;
+       cpu_reg.pc = BCE_RXP_CPU_PROGRAM_COUNTER;
+       cpu_reg.inst = BCE_RXP_CPU_INSTRUCTION;
+       cpu_reg.bp = BCE_RXP_CPU_HW_BREAKPOINT;
+       cpu_reg.spad_base = BCE_RXP_SCRATCH;
+       cpu_reg.mips_view_base = 0x8000000;
+
+       fw.ver_major = bce_RXP_b06FwReleaseMajor;
+       fw.ver_minor = bce_RXP_b06FwReleaseMinor;
+       fw.ver_fix = bce_RXP_b06FwReleaseFix;
+       fw.start_addr = bce_RXP_b06FwStartAddr;
+
+       fw.text_addr = bce_RXP_b06FwTextAddr;
+       fw.text_len = bce_RXP_b06FwTextLen;
+       fw.text_index = 0;
+       fw.text = bce_RXP_b06FwText;
+
+       fw.data_addr = bce_RXP_b06FwDataAddr;
+       fw.data_len = bce_RXP_b06FwDataLen;
+       fw.data_index = 0;
+       fw.data = bce_RXP_b06FwData;
+
+       fw.sbss_addr = bce_RXP_b06FwSbssAddr;
+       fw.sbss_len = bce_RXP_b06FwSbssLen;
+       fw.sbss_index = 0;
+       fw.sbss = bce_RXP_b06FwSbss;
+
+       fw.bss_addr = bce_RXP_b06FwBssAddr;
+       fw.bss_len = bce_RXP_b06FwBssLen;
+       fw.bss_index = 0;
+       fw.bss = bce_RXP_b06FwBss;
+
+       fw.rodata_addr = bce_RXP_b06FwRodataAddr;
+       fw.rodata_len = bce_RXP_b06FwRodataLen;
+       fw.rodata_index = 0;
+       fw.rodata = bce_RXP_b06FwRodata;
+
+       DBPRINT(sc, BCE_INFO_RESET, "Loading RX firmware.\n");
+       bce_load_cpu_fw(sc, &cpu_reg, &fw);
+
+       /* Initialize the TX Processor. */
+       cpu_reg.mode = BCE_TXP_CPU_MODE;
+       cpu_reg.mode_value_halt = BCE_TXP_CPU_MODE_SOFT_HALT;
+       cpu_reg.mode_value_sstep = BCE_TXP_CPU_MODE_STEP_ENA;
+       cpu_reg.state = BCE_TXP_CPU_STATE;
+       cpu_reg.state_value_clear = 0xffffff;
+       cpu_reg.gpr0 = BCE_TXP_CPU_REG_FILE;
+       cpu_reg.evmask = BCE_TXP_CPU_EVENT_MASK;
+       cpu_reg.pc = BCE_TXP_CPU_PROGRAM_COUNTER;
+       cpu_reg.inst = BCE_TXP_CPU_INSTRUCTION;
+       cpu_reg.bp = BCE_TXP_CPU_HW_BREAKPOINT;
+       cpu_reg.spad_base = BCE_TXP_SCRATCH;
+       cpu_reg.mips_view_base = 0x8000000;
+
+       fw.ver_major = bce_TXP_b06FwReleaseMajor;
+       fw.ver_minor = bce_TXP_b06FwReleaseMinor;
+       fw.ver_fix = bce_TXP_b06FwReleaseFix;
+       fw.start_addr = bce_TXP_b06FwStartAddr;
+
+       fw.text_addr = bce_TXP_b06FwTextAddr;
+       fw.text_len = bce_TXP_b06FwTextLen;
+       fw.text_index = 0;
+       fw.text = bce_TXP_b06FwText;
+
+       fw.data_addr = bce_TXP_b06FwDataAddr;
+       fw.data_len = bce_TXP_b06FwDataLen;
+       fw.data_index = 0;
+       fw.data = bce_TXP_b06FwData;
+
+       fw.sbss_addr = bce_TXP_b06FwSbssAddr;
+       fw.sbss_len = bce_TXP_b06FwSbssLen;
+       fw.sbss_index = 0;
+       fw.sbss = bce_TXP_b06FwSbss;
+
+       fw.bss_addr = bce_TXP_b06FwBssAddr;
+       fw.bss_len = bce_TXP_b06FwBssLen;
+       fw.bss_index = 0;
+       fw.bss = bce_TXP_b06FwBss;
+
+       fw.rodata_addr = bce_TXP_b06FwRodataAddr;
+       fw.rodata_len = bce_TXP_b06FwRodataLen;
+       fw.rodata_index = 0;
+       fw.rodata = bce_TXP_b06FwRodata;
+
+       DBPRINT(sc, BCE_INFO_RESET, "Loading TX firmware.\n");
+       bce_load_cpu_fw(sc, &cpu_reg, &fw);
+
+       /* Initialize the TX Patch-up Processor. */
+       cpu_reg.mode = BCE_TPAT_CPU_MODE;
+       cpu_reg.mode_value_halt = BCE_TPAT_CPU_MODE_SOFT_HALT;
+       cpu_reg.mode_value_sstep = BCE_TPAT_CPU_MODE_STEP_ENA;
+       cpu_reg.state = BCE_TPAT_CPU_STATE;
+       cpu_reg.state_value_clear = 0xffffff;
+       cpu_reg.gpr0 = BCE_TPAT_CPU_REG_FILE;
+       cpu_reg.evmask = BCE_TPAT_CPU_EVENT_MASK;
+       cpu_reg.pc = BCE_TPAT_CPU_PROGRAM_COUNTER;
+       cpu_reg.inst = BCE_TPAT_CPU_INSTRUCTION;
+       cpu_reg.bp = BCE_TPAT_CPU_HW_BREAKPOINT;
+       cpu_reg.spad_base = BCE_TPAT_SCRATCH;
+       cpu_reg.mips_view_base = 0x8000000;
+
+       fw.ver_major = bce_TPAT_b06FwReleaseMajor;
+       fw.ver_minor = bce_TPAT_b06FwReleaseMinor;
+       fw.ver_fix = bce_TPAT_b06FwReleaseFix;
+       fw.start_addr = bce_TPAT_b06FwStartAddr;
+
+       fw.text_addr = bce_TPAT_b06FwTextAddr;
+       fw.text_len = bce_TPAT_b06FwTextLen;
+       fw.text_index = 0;
+       fw.text = bce_TPAT_b06FwText;
+
+       fw.data_addr = bce_TPAT_b06FwDataAddr;
+       fw.data_len = bce_TPAT_b06FwDataLen;
+       fw.data_index = 0;
+       fw.data = bce_TPAT_b06FwData;
+
+       fw.sbss_addr = bce_TPAT_b06FwSbssAddr;
+       fw.sbss_len = bce_TPAT_b06FwSbssLen;
+       fw.sbss_index = 0;
+       fw.sbss = bce_TPAT_b06FwSbss;
+
+       fw.bss_addr = bce_TPAT_b06FwBssAddr;
+       fw.bss_len = bce_TPAT_b06FwBssLen;
+       fw.bss_index = 0;
+       fw.bss = bce_TPAT_b06FwBss;
+
+       fw.rodata_addr = bce_TPAT_b06FwRodataAddr;
+       fw.rodata_len = bce_TPAT_b06FwRodataLen;
+       fw.rodata_index = 0;
+       fw.rodata = bce_TPAT_b06FwRodata;
+
+       DBPRINT(sc, BCE_INFO_RESET, "Loading TPAT firmware.\n");
+       bce_load_cpu_fw(sc, &cpu_reg, &fw);
+
+       /* Initialize the Completion Processor. */
+       cpu_reg.mode = BCE_COM_CPU_MODE;
+       cpu_reg.mode_value_halt = BCE_COM_CPU_MODE_SOFT_HALT;
+       cpu_reg.mode_value_sstep = BCE_COM_CPU_MODE_STEP_ENA;
+       cpu_reg.state = BCE_COM_CPU_STATE;
+       cpu_reg.state_value_clear = 0xffffff;
+       cpu_reg.gpr0 = BCE_COM_CPU_REG_FILE;
+       cpu_reg.evmask = BCE_COM_CPU_EVENT_MASK;
+       cpu_reg.pc = BCE_COM_CPU_PROGRAM_COUNTER;
+       cpu_reg.inst = BCE_COM_CPU_INSTRUCTION;
+       cpu_reg.bp = BCE_COM_CPU_HW_BREAKPOINT;
+       cpu_reg.spad_base = BCE_COM_SCRATCH;
+       cpu_reg.mips_view_base = 0x8000000;
+
+       fw.ver_major = bce_COM_b06FwReleaseMajor;
+       fw.ver_minor = bce_COM_b06FwReleaseMinor;
+       fw.ver_fix = bce_COM_b06FwReleaseFix;
+       fw.start_addr = bce_COM_b06FwStartAddr;
+
+       fw.text_addr = bce_COM_b06FwTextAddr;
+       fw.text_len = bce_COM_b06FwTextLen;
+       fw.text_index = 0;
+       fw.text = bce_COM_b06FwText;
+
+       fw.data_addr = bce_COM_b06FwDataAddr;
+       fw.data_len = bce_COM_b06FwDataLen;
+       fw.data_index = 0;
+       fw.data = bce_COM_b06FwData;
+
+       fw.sbss_addr = bce_COM_b06FwSbssAddr;
+       fw.sbss_len = bce_COM_b06FwSbssLen;
+       fw.sbss_index = 0;
+       fw.sbss = bce_COM_b06FwSbss;
+
+       fw.bss_addr = bce_COM_b06FwBssAddr;
+       fw.bss_len = bce_COM_b06FwBssLen;
+       fw.bss_index = 0;
+       fw.bss = bce_COM_b06FwBss;
+
+       fw.rodata_addr = bce_COM_b06FwRodataAddr;
+       fw.rodata_len = bce_COM_b06FwRodataLen;
+       fw.rodata_index = 0;
+       fw.rodata = bce_COM_b06FwRodata;
+
+       DBPRINT(sc, BCE_INFO_RESET, "Loading COM firmware.\n");
+       bce_load_cpu_fw(sc, &cpu_reg, &fw);
+}
+
+
+/****************************************************************************/
+/* Initialize context memory.                                               */
+/*                                                                          */
+/* Clears the memory associated with each Context ID (CID).                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_init_context(struct bce_softc *sc)
+{
+       uint32_t vcid;
+
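+       /*
+        * Walk all 96 contexts; each one is selected through the
+        * BCE_CTX_VIRT_ADDR/BCE_CTX_PAGE_TBL register pair and its
+        * PHY_CTX_SIZE bytes are zeroed one 32-bit word at a time.
+        */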
+       vcid = 96;
+       while (vcid) {
+               uint32_t vcid_addr, pcid_addr, offset;
+
+               vcid--;
+
+               vcid_addr = GET_CID_ADDR(vcid);
+               pcid_addr = vcid_addr;
+
+               REG_WR(sc, BCE_CTX_VIRT_ADDR, 0x00);
+               REG_WR(sc, BCE_CTX_PAGE_TBL, pcid_addr);
+
+               /* Zero out the context. */
+               for (offset = 0; offset < PHY_CTX_SIZE; offset += 4)
+                       CTX_WR(sc, 0x00, offset, 0);
+
+               REG_WR(sc, BCE_CTX_VIRT_ADDR, vcid_addr);
+               REG_WR(sc, BCE_CTX_PAGE_TBL, pcid_addr);
+       }
+}
+
+
+/****************************************************************************/
+/* Fetch the permanent MAC address of the controller.                       */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_get_mac_addr(struct bce_softc *sc)
+{
+       uint32_t mac_lo = 0, mac_hi = 0;
+
+       /*
+        * The NetXtreme II bootcode populates various NIC
+        * power-on and runtime configuration items in a
+        * shared memory area.  The factory configured MAC
+        * address is available from both NVRAM and the
+        * shared memory area so we'll read the value from
+        * shared memory for speed.
+        */
+
+       mac_hi = REG_RD_IND(sc, sc->bce_shmem_base + BCE_PORT_HW_CFG_MAC_UPPER);
+       mac_lo = REG_RD_IND(sc, sc->bce_shmem_base + BCE_PORT_HW_CFG_MAC_LOWER);
+
+       if (mac_lo == 0 && mac_hi == 0) {
+               if_printf(&sc->arpcom.ac_if, "Invalid Ethernet address!\n");
+       } else {
+               sc->eaddr[0] = (u_char)(mac_hi >> 8);
+               sc->eaddr[1] = (u_char)(mac_hi >> 0);
+               sc->eaddr[2] = (u_char)(mac_lo >> 24);
+               sc->eaddr[3] = (u_char)(mac_lo >> 16);
+               sc->eaddr[4] = (u_char)(mac_lo >> 8);
+               sc->eaddr[5] = (u_char)(mac_lo >> 0);
+       }
+
+       DBPRINT(sc, BCE_INFO, "Permanent Ethernet address = %6D\n", sc->eaddr, ":");
+}
+
+
+/****************************************************************************/
+/* Program the MAC address.                                                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_set_mac_addr(struct bce_softc *sc)
+{
+       const uint8_t *mac_addr = sc->eaddr;
+       uint32_t val;
+
+       DBPRINT(sc, BCE_INFO, "Setting Ethernet address = %6D\n",
+               sc->eaddr, ":");
+
+       val = (mac_addr[0] << 8) | mac_addr[1];
+       REG_WR(sc, BCE_EMAC_MAC_MATCH0, val);
+
+       val = (mac_addr[2] << 24) |
+             (mac_addr[3] << 16) |
+             (mac_addr[4] << 8) |
+             mac_addr[5];
+       REG_WR(sc, BCE_EMAC_MAC_MATCH1, val);
+}
+
+
+/****************************************************************************/
+/* Stop the controller.                                                     */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_stop(struct bce_softc *sc)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       struct mii_data *mii = device_get_softc(sc->bce_miibus);
+       struct ifmedia_entry *ifm;
+       int mtmp, itmp;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       callout_stop(&sc->bce_stat_ch);
+
+       /* Disable the transmit/receive blocks. */
+       REG_WR(sc, BCE_MISC_ENABLE_CLR_BITS, 0x5ffffff);
+       REG_RD(sc, BCE_MISC_ENABLE_CLR_BITS);
+       DELAY(20);
+
+       bce_disable_intr(sc);
+
+       /* Tell firmware that the driver is going away. */
+       bce_reset(sc, BCE_DRV_MSG_CODE_SUSPEND_NO_WOL);
+
+       /* Free the RX lists. */
+       bce_free_rx_chain(sc);
+
+       /* Free TX buffers. */
+       bce_free_tx_chain(sc);
+
+       /*
+        * Isolate/power down the PHY, but leave the media selection
+        * unchanged so that things will be put back to normal when
+        * we bring the interface back up.
+        */
+       itmp = ifp->if_flags;
+       ifp->if_flags |= IFF_UP;
+       ifm = mii->mii_media.ifm_cur;
+       mtmp = ifm->ifm_media;
+       ifm->ifm_media = IFM_ETHER | IFM_NONE;
+       mii_mediachg(mii);
+       ifm->ifm_media = mtmp;
+       ifp->if_flags = itmp;
+
+       sc->bce_link = 0;
+
+       ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
+       ifp->if_timer = 0;
+
+       bce_mgmt_init(sc);
+}
+
+
+static int
+bce_reset(struct bce_softc *sc, uint32_t reset_code)
+{
+       uint32_t val;
+       int i, rc = 0;
+
+       /* Wait for pending PCI transactions to complete. */
+       REG_WR(sc, BCE_MISC_ENABLE_CLR_BITS,
+              BCE_MISC_ENABLE_CLR_BITS_TX_DMA_ENABLE |
+              BCE_MISC_ENABLE_CLR_BITS_DMA_ENGINE_ENABLE |
+              BCE_MISC_ENABLE_CLR_BITS_RX_DMA_ENABLE |
+              BCE_MISC_ENABLE_CLR_BITS_HOST_COALESCE_ENABLE);
+       val = REG_RD(sc, BCE_MISC_ENABLE_CLR_BITS);
+       DELAY(5);
+
+       /* Assume bootcode is running. */
+       sc->bce_fw_timed_out = 0;
+
+       /* Give the firmware a chance to prepare for the reset. */
+       rc = bce_fw_sync(sc, BCE_DRV_MSG_DATA_WAIT0 | reset_code);
+       if (rc) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Firmware is not ready for reset\n");
+               return rc;
+       }
+
+       /* Set a firmware reminder that this is a soft reset. */
+       REG_WR_IND(sc, sc->bce_shmem_base + BCE_DRV_RESET_SIGNATURE,
+                  BCE_DRV_RESET_SIGNATURE_MAGIC);
+
+       /* Dummy read to force the chip to complete all current transactions. */
+       val = REG_RD(sc, BCE_MISC_ID);
+
+       /* Chip reset. */
+       val = BCE_PCICFG_MISC_CONFIG_CORE_RST_REQ |
+             BCE_PCICFG_MISC_CONFIG_REG_WINDOW_ENA |
+             BCE_PCICFG_MISC_CONFIG_TARGET_MB_WORD_SWAP;
+       REG_WR(sc, BCE_PCICFG_MISC_CONFIG, val);
+
+       /* Allow up to 30us for reset to complete. */
+       for (i = 0; i < 10; i++) {
+               val = REG_RD(sc, BCE_PCICFG_MISC_CONFIG);
+               if ((val & (BCE_PCICFG_MISC_CONFIG_CORE_RST_REQ |
+                           BCE_PCICFG_MISC_CONFIG_CORE_RST_BSY)) == 0) {
+                       break;
+               }
+               DELAY(10);
+       }
+
+       /* Check that reset completed successfully. */
+       if (val & (BCE_PCICFG_MISC_CONFIG_CORE_RST_REQ |
+                  BCE_PCICFG_MISC_CONFIG_CORE_RST_BSY)) {
+               if_printf(&sc->arpcom.ac_if, "Reset failed!\n");
+               return EBUSY;
+       }
+
+       /* Make sure byte swapping is properly configured. */
+       val = REG_RD(sc, BCE_PCI_SWAP_DIAG0);
+       if (val != 0x01020304) {
+               if_printf(&sc->arpcom.ac_if, "Byte swap is incorrect!\n");
+               return ENODEV;
+       }
+
+       /* Just completed a reset, assume that firmware is running again. */
+       sc->bce_fw_timed_out = 0;
+
+       /* Wait for the firmware to finish its initialization. */
+       rc = bce_fw_sync(sc, BCE_DRV_MSG_DATA_WAIT1 | reset_code);
+       if (rc) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Firmware did not complete initialization!\n");
+       }
+       return rc;
+}
+
+
+static int
+bce_chipinit(struct bce_softc *sc)
+{
+       uint32_t val;
+       int rc = 0;
+
+       /* Make sure the interrupt is not active. */
+       REG_WR(sc, BCE_PCICFG_INT_ACK_CMD, BCE_PCICFG_INT_ACK_CMD_MASK_INT);
+
+       /*
+        * Initialize DMA byte/word swapping, configure the number of DMA
+        * channels and PCI clock compensation delay.
+        */
+       val = BCE_DMA_CONFIG_DATA_BYTE_SWAP |
+             BCE_DMA_CONFIG_DATA_WORD_SWAP |
+#if BYTE_ORDER == BIG_ENDIAN
+             BCE_DMA_CONFIG_CNTL_BYTE_SWAP |
+#endif
+             BCE_DMA_CONFIG_CNTL_WORD_SWAP |
+             DMA_READ_CHANS << 12 |
+             DMA_WRITE_CHANS << 16;
+
+       val |= (0x2 << 20) | BCE_DMA_CONFIG_CNTL_PCI_COMP_DLY;
+
+       if ((sc->bce_flags & BCE_PCIX_FLAG) && sc->bus_speed_mhz == 133)
+               val |= BCE_DMA_CONFIG_PCI_FAST_CLK_CMP;
+
+       /*
+        * This setting resolves a problem observed on certain Intel PCI
+        * chipsets that cannot handle multiple outstanding DMA operations.
+        * See errata E9_5706A1_65.
+        */
+       if (BCE_CHIP_NUM(sc) == BCE_CHIP_NUM_5706 &&
+           BCE_CHIP_ID(sc) != BCE_CHIP_ID_5706_A0 &&
+           !(sc->bce_flags & BCE_PCIX_FLAG))
+               val |= BCE_DMA_CONFIG_CNTL_PING_PONG_DMA;
+
+       REG_WR(sc, BCE_DMA_CONFIG, val);
+
+       /* Clear the PCI-X relaxed ordering bit. See errata E3_5708CA0_570. */
+       if (sc->bce_flags & BCE_PCIX_FLAG) {
+               uint16_t cmd;
+
+               cmd = pci_read_config(sc->bce_dev, BCE_PCI_PCIX_CMD, 2);
+               pci_write_config(sc->bce_dev, BCE_PCI_PCIX_CMD, cmd & ~0x2, 2);
+       }
+
+       /* Enable the RX_V2P and Context state machines before access. */
+       REG_WR(sc, BCE_MISC_ENABLE_SET_BITS,
+              BCE_MISC_ENABLE_SET_BITS_HOST_COALESCE_ENABLE |
+              BCE_MISC_ENABLE_STATUS_BITS_RX_V2P_ENABLE |
+              BCE_MISC_ENABLE_STATUS_BITS_CONTEXT_ENABLE);
+
+       /* Initialize context mapping and zero out the quick contexts. */
+       bce_init_context(sc);
+
+       /* Initialize the on-board CPUs. */
+       bce_init_cpus(sc);
+
+       /* Prepare NVRAM for access. */
+       rc = bce_init_nvram(sc);
+       if (rc != 0)
+               return rc;
+
+       /* Set the kernel bypass block size */
+       val = REG_RD(sc, BCE_MQ_CONFIG);
+       val &= ~BCE_MQ_CONFIG_KNL_BYP_BLK_SIZE;
+       val |= BCE_MQ_CONFIG_KNL_BYP_BLK_SIZE_256;
+       REG_WR(sc, BCE_MQ_CONFIG, val);
+
+       val = 0x10000 + (MAX_CID_CNT * MB_KERNEL_CTX_SIZE);
+       REG_WR(sc, BCE_MQ_KNL_BYP_WIND_START, val);
+       REG_WR(sc, BCE_MQ_KNL_WIND_END, val);
+
+       /* Set the page size and clear the RV2P processor stall bits. */
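+       /* The upper byte presumably encodes the page size as log2(BCM_PAGE_SIZE) - 8. */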
+       val = (BCM_PAGE_BITS - 8) << 24;
+       REG_WR(sc, BCE_RV2P_CONFIG, val);
+
+       /* Configure page size. */
+       val = REG_RD(sc, BCE_TBDR_CONFIG);
+       val &= ~BCE_TBDR_CONFIG_PAGE_SIZE;
+       val |= (BCM_PAGE_BITS - 8) << 24 | 0x40;
+       REG_WR(sc, BCE_TBDR_CONFIG, val);
+
+       return 0;
+}
+
+
+/****************************************************************************/
+/* Initialize the controller in preparation to send/receive traffic.        */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_blockinit(struct bce_softc *sc)
+{
+       uint32_t reg, val;
+       int rc = 0;
+
+       /* Load the hardware default MAC address. */
+       bce_set_mac_addr(sc);
+
+       /* Set the Ethernet backoff seed value */
+       val = sc->eaddr[0] + (sc->eaddr[1] << 8) + (sc->eaddr[2] << 16) +
+             sc->eaddr[3] + (sc->eaddr[4] << 8) + (sc->eaddr[5] << 16);
+       REG_WR(sc, BCE_EMAC_BACKOFF_SEED, val);
+
+       sc->last_status_idx = 0;
+       sc->rx_mode = BCE_EMAC_RX_MODE_SORT_MODE;
+
+       /* Set up link change interrupt generation. */
+       REG_WR(sc, BCE_EMAC_ATTENTION_ENA, BCE_EMAC_ATTENTION_ENA_LINK);
+
+       /* Program the physical address of the status block. */
+       REG_WR(sc, BCE_HC_STATUS_ADDR_L, BCE_ADDR_LO(sc->status_block_paddr));
+       REG_WR(sc, BCE_HC_STATUS_ADDR_H, BCE_ADDR_HI(sc->status_block_paddr));
+
+       /* Program the physical address of the statistics block. */
+       REG_WR(sc, BCE_HC_STATISTICS_ADDR_L,
+              BCE_ADDR_LO(sc->stats_block_paddr));
+       REG_WR(sc, BCE_HC_STATISTICS_ADDR_H,
+              BCE_ADDR_HI(sc->stats_block_paddr));
+
+       /* Program various host coalescing parameters. */
+       REG_WR(sc, BCE_HC_TX_QUICK_CONS_TRIP,
+              (sc->bce_tx_quick_cons_trip_int << 16) |
+              sc->bce_tx_quick_cons_trip);
+       REG_WR(sc, BCE_HC_RX_QUICK_CONS_TRIP,
+              (sc->bce_rx_quick_cons_trip_int << 16) |
+              sc->bce_rx_quick_cons_trip);
+       REG_WR(sc, BCE_HC_COMP_PROD_TRIP,
+              (sc->bce_comp_prod_trip_int << 16) | sc->bce_comp_prod_trip);
+       REG_WR(sc, BCE_HC_TX_TICKS,
+              (sc->bce_tx_ticks_int << 16) | sc->bce_tx_ticks);
+       REG_WR(sc, BCE_HC_RX_TICKS,
+              (sc->bce_rx_ticks_int << 16) | sc->bce_rx_ticks);
+       REG_WR(sc, BCE_HC_COM_TICKS,
+              (sc->bce_com_ticks_int << 16) | sc->bce_com_ticks);
+       REG_WR(sc, BCE_HC_CMD_TICKS,
+              (sc->bce_cmd_ticks_int << 16) | sc->bce_cmd_ticks);
+       REG_WR(sc, BCE_HC_STATS_TICKS, (sc->bce_stats_ticks & 0xffff00));
+       REG_WR(sc, BCE_HC_STAT_COLLECT_TICKS, 0xbb8);   /* 3ms */
+       REG_WR(sc, BCE_HC_CONFIG,
+              BCE_HC_CONFIG_RX_TMR_MODE |
+              BCE_HC_CONFIG_TX_TMR_MODE |
+              BCE_HC_CONFIG_COLLECT_STATS);
+
+       /* Clear the internal statistics counters. */
+       REG_WR(sc, BCE_HC_COMMAND, BCE_HC_COMMAND_CLR_STAT_NOW);
+
+       /* Verify that bootcode is running. */
+       reg = REG_RD_IND(sc, sc->bce_shmem_base + BCE_DEV_INFO_SIGNATURE);
+
+       DBRUNIF(DB_RANDOMTRUE(bce_debug_bootcode_running_failure),
+               if_printf(&sc->arpcom.ac_if,
+                         "%s(%d): Simulating bootcode failure.\n",
+                         __FILE__, __LINE__);
+               reg = 0);
+
+       if ((reg & BCE_DEV_INFO_SIGNATURE_MAGIC_MASK) !=
+           BCE_DEV_INFO_SIGNATURE_MAGIC) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Bootcode not running! Found: 0x%08X, "
+                         "Expected: 0x%08X\n",
+                         reg & BCE_DEV_INFO_SIGNATURE_MAGIC_MASK,
+                         BCE_DEV_INFO_SIGNATURE_MAGIC);
+               return ENODEV;
+       }
+
+       /* Check if any management firmware is running. */
+       reg = REG_RD_IND(sc, sc->bce_shmem_base + BCE_PORT_FEATURE);
+       if (reg & (BCE_PORT_FEATURE_ASF_ENABLED |
+                  BCE_PORT_FEATURE_IMD_ENABLED)) {
+               DBPRINT(sc, BCE_INFO, "Management F/W Enabled.\n");
+               sc->bce_flags |= BCE_MFW_ENABLE_FLAG;
+       }
+
+       sc->bce_fw_ver =
+               REG_RD_IND(sc, sc->bce_shmem_base + BCE_DEV_INFO_BC_REV);
+       DBPRINT(sc, BCE_INFO, "bootcode rev = 0x%08X\n", sc->bce_fw_ver);
+
+       /* Allow bootcode to apply any additional fixes before enabling MAC. */
+       rc = bce_fw_sync(sc, BCE_DRV_MSG_DATA_WAIT2 | BCE_DRV_MSG_CODE_RESET);
+
+       /* Enable link state change interrupt generation. */
+       REG_WR(sc, BCE_HC_ATTN_BITS_ENABLE, STATUS_ATTN_BITS_LINK_STATE);
+
+       /* Enable all remaining blocks in the MAC. */
+       REG_WR(sc, BCE_MISC_ENABLE_SET_BITS, 0x5ffffff);
+       REG_RD(sc, BCE_MISC_ENABLE_SET_BITS);
+       DELAY(20);
+
+       return 0;
+}
+
+
+/****************************************************************************/
+/* Encapsulate an mbuf cluster into the rx_bd chain.                        */
+/*                                                                          */
+/* The NetXtreme II can support Jumbo frames by using multiple rx_bd's.     */
+/* This routine will map an mbuf cluster into 1 or more rx_bd's as          */
+/* necessary.                                                               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_newbuf_std(struct bce_softc *sc, struct mbuf *m,
+              uint16_t *prod, uint16_t *chain_prod, uint32_t *prod_bseq)
+{
+       bus_dmamap_t map;
+       struct bce_dmamap_arg ctx;
+       bus_dma_segment_t seg;
+       struct mbuf *m_new;
+       struct rx_bd *rxbd;
+       int error;
+#ifdef BCE_DEBUG
+       uint16_t debug_chain_prod = *chain_prod;
+#endif
+
+       /* Make sure the inputs are valid. */
+       DBRUNIF((*chain_prod > MAX_RX_BD),
+               if_printf(&sc->arpcom.ac_if, "%s(%d): "
+                         "RX producer out of range: 0x%04X > 0x%04X\n",
+                         __FILE__, __LINE__,
+                         *chain_prod, (uint16_t)MAX_RX_BD));
+
+       DBPRINT(sc, BCE_VERBOSE_RECV, "%s(enter): prod = 0x%04X, chain_prod = 0x%04X, "
+               "prod_bseq = 0x%08X\n", __func__, *prod, *chain_prod, *prod_bseq);
+
+       if (m == NULL) {
+               DBRUNIF(DB_RANDOMTRUE(bce_debug_mbuf_allocation_failure),
+                       if_printf(&sc->arpcom.ac_if, "%s(%d): "
+                                 "Simulating mbuf allocation failure.\n",
+                                 __FILE__, __LINE__);
+                       sc->mbuf_alloc_failed++;
+                       return ENOBUFS);
+
+               /* This is a new mbuf allocation. */
+               m_new = m_getcl(MB_DONTWAIT, MT_DATA, M_PKTHDR);
+               if (m_new == NULL)
+                       return ENOBUFS;
+               DBRUNIF(1, sc->rx_mbuf_alloc++);
+       } else {
+               m_new = m;
+               m_new->m_data = m_new->m_ext.ext_buf;
+       }
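+
+       /*
+        * Use the full cluster size here; the actual frame length is
+        * recovered from the l2_fhdr that the hardware prepends to each
+        * received frame (see bce_rx_intr()).
+        */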
+       m_new->m_len = m_new->m_pkthdr.len = MCLBYTES;
+
+       /* Map the mbuf cluster into device memory. */
+       map = sc->rx_mbuf_map[*chain_prod];
+
+       ctx.bce_maxsegs = 1;
+       ctx.bce_segs = &seg;
+       error = bus_dmamap_load_mbuf(sc->rx_mbuf_tag, map, m_new,
+                                    bce_dma_map_mbuf, &ctx, BUS_DMA_NOWAIT);
+       if (error || ctx.bce_maxsegs == 0) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Error mapping mbuf into RX chain!\n");
+
+               if (m == NULL)
+                       m_freem(m_new);
+
+               DBRUNIF(1, sc->rx_mbuf_alloc--);
+               return ENOBUFS;
+       }
+
+       /* Watch for overflow. */
+       DBRUNIF((sc->free_rx_bd > USABLE_RX_BD),
+               if_printf(&sc->arpcom.ac_if, "%s(%d): "
+                         "Too many free rx_bd (0x%04X > 0x%04X)!\n",
+                         __FILE__, __LINE__, sc->free_rx_bd,
+                         (uint16_t)USABLE_RX_BD));
+
+       /* Update some debug statistic counters */
+       DBRUNIF((sc->free_rx_bd < sc->rx_low_watermark),
+               sc->rx_low_watermark = sc->free_rx_bd);
+       DBRUNIF((sc->free_rx_bd == 0), sc->rx_empty_count++);
+
+       /* Setup the rx_bd for the first segment. */
+       rxbd = &sc->rx_bd_chain[RX_PAGE(*chain_prod)][RX_IDX(*chain_prod)];
+
+       rxbd->rx_bd_haddr_lo = htole32(BCE_ADDR_LO(seg.ds_addr));
+       rxbd->rx_bd_haddr_hi = htole32(BCE_ADDR_HI(seg.ds_addr));
+       rxbd->rx_bd_len = htole32(seg.ds_len);
+       rxbd->rx_bd_flags = htole32(RX_BD_FLAGS_START);
+       *prod_bseq += seg.ds_len;
+
+       rxbd->rx_bd_flags |= htole32(RX_BD_FLAGS_END);
+
+       /* Save the mbuf and update our counter. */
+       sc->rx_mbuf_ptr[*chain_prod] = m_new;
+       sc->free_rx_bd--;
+
+       DBRUN(BCE_VERBOSE_RECV,
+             bce_dump_rx_mbuf_chain(sc, debug_chain_prod, 1));
+
+       DBPRINT(sc, BCE_VERBOSE_RECV, "%s(exit): prod = 0x%04X, chain_prod = 0x%04X, "
+               "prod_bseq = 0x%08X\n", __func__, *prod, *chain_prod, *prod_bseq);
+
+       return 0;
+}
+
+
+/****************************************************************************/
+/* Allocate memory and initialize the TX data structures.                   */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_init_tx_chain(struct bce_softc *sc)
+{
+       struct tx_bd *txbd;
+       uint32_t val;
+       int i, rc = 0;
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Entering %s()\n", __func__);
+
+       /* Set the initial TX producer/consumer indices. */
+       sc->tx_prod = 0;
+       sc->tx_cons = 0;
+       sc->tx_prod_bseq   = 0;
+       sc->used_tx_bd = 0;
+       sc->max_tx_bd = USABLE_TX_BD;
+       DBRUNIF(1, sc->tx_hi_watermark = USABLE_TX_BD);
+       DBRUNIF(1, sc->tx_full_count = 0);
+
+       /*
+        * The NetXtreme II supports a linked-list structure called
+        * a Buffer Descriptor Chain (or BD chain).  A BD chain
+        * consists of a series of 1 or more chain pages, each of which
+        * consists of a fixed number of BD entries.
+        * The last BD entry on each page is a pointer to the next page
+        * in the chain, and the last pointer in the BD chain
+        * points back to the beginning of the chain.
+        */
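+       /*
+        * With TX_PAGES pages the chain looks like:
+        *
+        *   page 0 -> page 1 -> ... -> page (TX_PAGES - 1) --+
+        *      ^                                             |
+        *      +---------------------------------------------+
+        *
+        * where each arrow is the next-page pointer held in the last BD
+        * entry of a page, as programmed by the loop below.
+        */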
+
+       /* Set the TX next pointer chain entries. */
+       for (i = 0; i < TX_PAGES; i++) {
+               int j;
+
+               txbd = &sc->tx_bd_chain[i][USABLE_TX_BD_PER_PAGE];
+
+               /* Check if we've reached the last page. */
+               if (i == (TX_PAGES - 1))
+                       j = 0;
+               else
+                       j = i + 1;
+
+               txbd->tx_bd_haddr_hi =
+                       htole32(BCE_ADDR_HI(sc->tx_bd_chain_paddr[j]));
+               txbd->tx_bd_haddr_lo =
+                       htole32(BCE_ADDR_LO(sc->tx_bd_chain_paddr[j]));
+       }
+
+       for (i = 0; i < TX_PAGES; ++i) {
+               bus_dmamap_sync(sc->tx_bd_chain_tag, sc->tx_bd_chain_map[i],
+                               BUS_DMASYNC_PREWRITE);
+       }
+
+       /* Initialize the context ID for an L2 TX chain. */
+       val = BCE_L2CTX_TYPE_TYPE_L2;
+       val |= BCE_L2CTX_TYPE_SIZE_L2;
+       CTX_WR(sc, GET_CID_ADDR(TX_CID), BCE_L2CTX_TYPE, val);
+
+       val = BCE_L2CTX_CMD_TYPE_TYPE_L2 | (8 << 16);
+       CTX_WR(sc, GET_CID_ADDR(TX_CID), BCE_L2CTX_CMD_TYPE, val);
+
+       /* Point the hardware to the first page in the chain. */
+       val = BCE_ADDR_HI(sc->tx_bd_chain_paddr[0]);
+       CTX_WR(sc, GET_CID_ADDR(TX_CID), BCE_L2CTX_TBDR_BHADDR_HI, val);
+       val = BCE_ADDR_LO(sc->tx_bd_chain_paddr[0]);
+       CTX_WR(sc, GET_CID_ADDR(TX_CID), BCE_L2CTX_TBDR_BHADDR_LO, val);
+
+       DBRUN(BCE_VERBOSE_SEND, bce_dump_tx_chain(sc, 0, TOTAL_TX_BD));
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Exiting %s()\n", __func__);
+
+       return(rc);
+}
+
+
+/****************************************************************************/
+/* Free memory and clear the TX data structures.                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_free_tx_chain(struct bce_softc *sc)
+{
+       int i;
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Entering %s()\n", __func__);
+
+       /* Unmap, unload, and free any mbufs still in the TX mbuf chain. */
+       for (i = 0; i < TOTAL_TX_BD; i++) {
+               if (sc->tx_mbuf_ptr[i] != NULL) {
+                       bus_dmamap_sync(sc->tx_mbuf_tag, sc->tx_mbuf_map[i],
+                                       BUS_DMASYNC_POSTWRITE);
+                       bus_dmamap_unload(sc->tx_mbuf_tag, sc->tx_mbuf_map[i]);
+                       m_freem(sc->tx_mbuf_ptr[i]);
+                       sc->tx_mbuf_ptr[i] = NULL;
+                       DBRUNIF(1, sc->tx_mbuf_alloc--);
+               }
+       }
+
+       /* Clear each TX chain page. */
+       for (i = 0; i < TX_PAGES; i++)
+               bzero(sc->tx_bd_chain[i], BCE_TX_CHAIN_PAGE_SZ);
+
+       /* Check if we lost any mbufs in the process. */
+       DBRUNIF((sc->tx_mbuf_alloc),
+               if_printf(&sc->arpcom.ac_if,
+                         "%s(%d): Memory leak! "
+                         "Lost %d mbufs from tx chain!\n",
+                         __FILE__, __LINE__, sc->tx_mbuf_alloc));
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Exiting %s()\n", __func__);
+}
+
+
+/****************************************************************************/
+/* Allocate memory and initialize the RX data structures.                   */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_init_rx_chain(struct bce_softc *sc)
+{
+       struct rx_bd *rxbd;
+       int i, rc = 0;
+       uint16_t prod, chain_prod;
+       uint32_t prod_bseq, val;
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Entering %s()\n", __func__);
+
+       /* Initialize the RX producer and consumer indices. */
+       sc->rx_prod = 0;
+       sc->rx_cons = 0;
+       sc->rx_prod_bseq = 0;
+       sc->free_rx_bd = USABLE_RX_BD;
+       sc->max_rx_bd = USABLE_RX_BD;
+       DBRUNIF(1, sc->rx_low_watermark = USABLE_RX_BD);
+       DBRUNIF(1, sc->rx_empty_count = 0);
+
+       /* Initialize the RX next pointer chain entries. */
+       for (i = 0; i < RX_PAGES; i++) {
+               int j;
+
+               rxbd = &sc->rx_bd_chain[i][USABLE_RX_BD_PER_PAGE];
+
+               /* Check if we've reached the last page. */
+               if (i == (RX_PAGES - 1))
+                       j = 0;
+               else
+                       j = i + 1;
+
+               /* Setup the chain page pointers. */
+               rxbd->rx_bd_haddr_hi =
+                       htole32(BCE_ADDR_HI(sc->rx_bd_chain_paddr[j]));
+               rxbd->rx_bd_haddr_lo =
+                       htole32(BCE_ADDR_LO(sc->rx_bd_chain_paddr[j]));
+       }
+
+       /* Initialize the context ID for an L2 RX chain. */
+       val = BCE_L2CTX_CTX_TYPE_CTX_BD_CHN_TYPE_VALUE;
+       val |= BCE_L2CTX_CTX_TYPE_SIZE_L2;
+       val |= 0x02 << 8;
+       CTX_WR(sc, GET_CID_ADDR(RX_CID), BCE_L2CTX_CTX_TYPE, val);
+
+       /* Point the hardware to the first page in the chain. */
+       /* XXX shouldn't this come after RX descriptor initialization? */
+       val = BCE_ADDR_HI(sc->rx_bd_chain_paddr[0]);
+       CTX_WR(sc, GET_CID_ADDR(RX_CID), BCE_L2CTX_NX_BDHADDR_HI, val);
+       val = BCE_ADDR_LO(sc->rx_bd_chain_paddr[0]);
+       CTX_WR(sc, GET_CID_ADDR(RX_CID), BCE_L2CTX_NX_BDHADDR_LO, val);
+
+       /* Allocate mbuf clusters for the rx_bd chain. */
+       prod = prod_bseq = 0;
+       while (prod < TOTAL_RX_BD) {
+               chain_prod = RX_CHAIN_IDX(prod);
+               if (bce_newbuf_std(sc, NULL, &prod, &chain_prod, &prod_bseq)) {
+                       if_printf(&sc->arpcom.ac_if,
+                                 "Error filling RX chain: rx_bd[0x%04X]!\n",
+                                 chain_prod);
+                       rc = ENOBUFS;
+                       break;
+               }
+               prod = NEXT_RX_BD(prod);
+       }
+
+       /* Save the RX chain producer index. */
+       sc->rx_prod = prod;
+       sc->rx_prod_bseq = prod_bseq;
+
+       for (i = 0; i < RX_PAGES; i++) {
+               bus_dmamap_sync(sc->rx_bd_chain_tag, sc->rx_bd_chain_map[i],
+                               BUS_DMASYNC_PREWRITE);
+       }
+
+       /* Tell the chip about the waiting rx_bd's. */
+       REG_WR16(sc, MB_RX_CID_ADDR + BCE_L2CTX_HOST_BDIDX, sc->rx_prod);
+       REG_WR(sc, MB_RX_CID_ADDR + BCE_L2CTX_HOST_BSEQ, sc->rx_prod_bseq);
+
+       DBRUN(BCE_VERBOSE_RECV, bce_dump_rx_chain(sc, 0, TOTAL_RX_BD));
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Exiting %s()\n", __func__);
+
+       return(rc);
+}
+
+
+/****************************************************************************/
+/* Free memory and clear the RX data structures.                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_free_rx_chain(struct bce_softc *sc)
+{
+       int i;
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Entering %s()\n", __func__);
+
+       /* Free any mbufs still in the RX mbuf chain. */
+       for (i = 0; i < TOTAL_RX_BD; i++) {
+               if (sc->rx_mbuf_ptr[i] != NULL) {
+                       bus_dmamap_sync(sc->rx_mbuf_tag, sc->rx_mbuf_map[i],
+                                       BUS_DMASYNC_POSTREAD);
+                       bus_dmamap_unload(sc->rx_mbuf_tag, sc->rx_mbuf_map[i]);
+                       m_freem(sc->rx_mbuf_ptr[i]);
+                       sc->rx_mbuf_ptr[i] = NULL;
+                       DBRUNIF(1, sc->rx_mbuf_alloc--);
+               }
+       }
+
+       /* Clear each RX chain page. */
+       for (i = 0; i < RX_PAGES; i++)
+               bzero(sc->rx_bd_chain[i], BCE_RX_CHAIN_PAGE_SZ);
+
+       /* Check if we lost any mbufs in the process. */
+       DBRUNIF((sc->rx_mbuf_alloc),
+               if_printf(&sc->arpcom.ac_if,
+                         "%s(%d): Memory leak! "
+                         "Lost %d mbufs from rx chain!\n",
+                         __FILE__, __LINE__, sc->rx_mbuf_alloc));
+
+       DBPRINT(sc, BCE_VERBOSE_RESET, "Exiting %s()\n", __func__);
+}
+
+
+/****************************************************************************/
+/* Set media options.                                                       */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_ifmedia_upd(struct ifnet *ifp)
+{
+       struct bce_softc *sc = ifp->if_softc;
+       struct mii_data *mii = device_get_softc(sc->bce_miibus);
+
+       /*
+        * 'mii' will be NULL when this function is called on the following
+        * code path: bce_attach() -> bce_mgmt_init()
+        */
+       if (mii != NULL) {
+               /* Make sure the MII bus has been enumerated. */
+               sc->bce_link = 0;
+               if (mii->mii_instance) {
+                       struct mii_softc *miisc;
+
+                       LIST_FOREACH(miisc, &mii->mii_phys, mii_list)
+                               mii_phy_reset(miisc);
+               }
+               mii_mediachg(mii);
+       }
+       return 0;
+}
+
+
+/****************************************************************************/
+/* Reports current media status.                                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr)
+{
+       struct bce_softc *sc = ifp->if_softc;
+       struct mii_data *mii = device_get_softc(sc->bce_miibus);
+
+       mii_pollstat(mii);
+       ifmr->ifm_active = mii->mii_media_active;
+       ifmr->ifm_status = mii->mii_media_status;
+}
+
+
+/****************************************************************************/
+/* Handles PHY generated interrupt events.                                  */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_phy_intr(struct bce_softc *sc)
+{
+       uint32_t new_link_state, old_link_state;
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
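+       /*
+        * status_attn_bits reports the current link state while
+        * status_attn_bits_ack records the last state acknowledged by the
+        * driver; a mismatch means the link has changed since then.
+        */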
+       new_link_state = sc->status_block->status_attn_bits &
+                        STATUS_ATTN_BITS_LINK_STATE;
+       old_link_state = sc->status_block->status_attn_bits_ack &
+                        STATUS_ATTN_BITS_LINK_STATE;
+
+       /* Handle any changes if the link state has changed. */
+       if (new_link_state != old_link_state) { /* XXX redundant? */
+               DBRUN(BCE_VERBOSE_INTR, bce_dump_status_block(sc));
+
+               sc->bce_link = 0;
+               callout_stop(&sc->bce_stat_ch);
+               bce_tick_serialized(sc);
+
+               /* Update the status_attn_bits_ack field in the status block. */
+               if (new_link_state) {
+                       REG_WR(sc, BCE_PCICFG_STATUS_BIT_SET_CMD,
+                              STATUS_ATTN_BITS_LINK_STATE);
+                       if (bootverbose)
+                               if_printf(ifp, "Link is now UP.\n");
+               } else {
+                       REG_WR(sc, BCE_PCICFG_STATUS_BIT_CLEAR_CMD,
+                              STATUS_ATTN_BITS_LINK_STATE);
+                       if (bootverbose)
+                               if_printf(ifp, "Link is now DOWN.\n");
+               }
+       }
+
+       /* Acknowledge the link change interrupt. */
+       REG_WR(sc, BCE_EMAC_STATUS, BCE_EMAC_STATUS_LINK_CHANGE);
+}
+
+
+/****************************************************************************/
+/* Handles received frame interrupt events.                                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_rx_intr(struct bce_softc *sc, int count)
+{
+       struct status_block *sblk = sc->status_block;
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       uint16_t hw_cons, sw_cons, sw_chain_cons, sw_prod, sw_chain_prod;
+       uint32_t sw_prod_bseq;
+       int i;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       DBRUNIF(1, sc->rx_interrupts++);
+
+       /* Prepare the RX chain pages to be accessed by the host CPU. */
+       for (i = 0; i < RX_PAGES; i++) {
+               bus_dmamap_sync(sc->rx_bd_chain_tag,
+                               sc->rx_bd_chain_map[i], BUS_DMASYNC_POSTREAD);
+       }
+
+       /* Get the hardware's view of the RX consumer index. */
+       hw_cons = sc->hw_rx_cons = sblk->status_rx_quick_consumer_index0;
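+       /*
+        * The last BD of every page is a next-page pointer, not a frame,
+        * so step over it when the consumer index lands on it.
+        */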
+       if ((hw_cons & USABLE_RX_BD_PER_PAGE) == USABLE_RX_BD_PER_PAGE)
+               hw_cons++;
+
+       /* Get working copies of the driver's view of the RX indices. */
+       sw_cons = sc->rx_cons;
+       sw_prod = sc->rx_prod;
+       sw_prod_bseq = sc->rx_prod_bseq;
+
+       DBPRINT(sc, BCE_INFO_RECV, "%s(enter): sw_prod = 0x%04X, "
+               "sw_cons = 0x%04X, sw_prod_bseq = 0x%08X\n",
+               __func__, sw_prod, sw_cons, sw_prod_bseq);
+
+       /* Prevent speculative reads from getting ahead of the status block. */
+       bus_space_barrier(sc->bce_btag, sc->bce_bhandle, 0, 0,
+                         BUS_SPACE_BARRIER_READ);
+
+       /* Update some debug statistics counters */
+       DBRUNIF((sc->free_rx_bd < sc->rx_low_watermark),
+               sc->rx_low_watermark = sc->free_rx_bd);
+       DBRUNIF((sc->free_rx_bd == 0), sc->rx_empty_count++);
+
+       /* Scan through the receive chain as long as there is work to do. */
+       while (sw_cons != hw_cons) {
+               struct mbuf *m = NULL;
+               struct l2_fhdr *l2fhdr = NULL;
+               struct rx_bd *rxbd;
+               unsigned int len;
+               uint32_t status = 0;
+
+#ifdef foo /* DEVICE_POLLING */
+               /*
+                * 'count' RX descriptors and leave.  It seems that the RX
+                * engine would be left in a wired state if we broke out
+                * of the loop in the middle.
+                * out the loop in the middle.
+                */
+               if (count >= 0 && count-- == 0)
+                       break;
+#endif
+
+               /*
+                * Convert the producer/consumer indices
+                * to an actual rx_bd index.
+                */
+               sw_chain_cons = RX_CHAIN_IDX(sw_cons);
+               sw_chain_prod = RX_CHAIN_IDX(sw_prod);
+
+               /* Get the used rx_bd. */
+               rxbd = &sc->rx_bd_chain[RX_PAGE(sw_chain_cons)]
+                                      [RX_IDX(sw_chain_cons)];
+               sc->free_rx_bd++;
+
+               DBRUN(BCE_VERBOSE_RECV,
+                     if_printf(ifp, "%s(): ", __func__);
+                     bce_dump_rxbd(sc, sw_chain_cons, rxbd));
+
+               /* The mbuf is stored with the last rx_bd entry of a packet. */
+               if (sc->rx_mbuf_ptr[sw_chain_cons] != NULL) {
+                       /* Validate that this is the last rx_bd. */
+                       DBRUNIF((!(rxbd->rx_bd_flags & RX_BD_FLAGS_END)),
+                               if_printf(ifp, "%s(%d): "
+                               "Unexpected mbuf found in rx_bd[0x%04X]!\n",
+                               __FILE__, __LINE__, sw_chain_cons);
+                               bce_breakpoint(sc));
+
+                       /*
+                        * ToDo: If the received packet is small enough
+                        * to fit into a single, non-M_EXT mbuf,
+                        * allocate a new mbuf here, copy the data to
+                        * that mbuf, and recycle the mapped jumbo frame.
+                        */
+
+                       /* Unmap the mbuf from DMA space. */
+                       bus_dmamap_sync(sc->rx_mbuf_tag,
+                                       sc->rx_mbuf_map[sw_chain_cons],
+                                       BUS_DMASYNC_POSTREAD);
+                       bus_dmamap_unload(sc->rx_mbuf_tag,
+                                         sc->rx_mbuf_map[sw_chain_cons]);
+
+                       /* Remove the mbuf from the driver's chain. */
+                       m = sc->rx_mbuf_ptr[sw_chain_cons];
+                       sc->rx_mbuf_ptr[sw_chain_cons] = NULL;
+
+                       /*
+                        * Frames received on the NetXtreme II are prepended
+                        * with an l2_fhdr structure which provides status
+                        * information about the received frame (including
+                        * VLAN tags and checksum info).  The frames are also
+                        * automatically adjusted to align the IP header
+                        * (i.e. two null bytes are inserted before the 
+                        * Ethernet header).
+                        */
+                       l2fhdr = mtod(m, struct l2_fhdr *);
+
+                       len = l2fhdr->l2_fhdr_pkt_len;
+                       status = l2fhdr->l2_fhdr_status;
+
+                       DBRUNIF(DB_RANDOMTRUE(bce_debug_l2fhdr_status_check),
+                               if_printf(ifp,
+                               "Simulating l2_fhdr status error.\n");
+                               status = status | L2_FHDR_ERRORS_PHY_DECODE);
+
+                       /* Watch for unusual sized frames. */
+                       DBRUNIF((len < BCE_MIN_MTU ||
+                                len > BCE_MAX_JUMBO_ETHER_MTU_VLAN),
+                               if_printf(ifp,
+                               "%s(%d): Unusual frame size found. "
+                               "Min(%d), Actual(%d), Max(%d)\n",
+                               __FILE__, __LINE__,
+                               (int)BCE_MIN_MTU, len,
+                               (int)BCE_MAX_JUMBO_ETHER_MTU_VLAN);
+                               bce_dump_mbuf(sc, m);
+                               bce_breakpoint(sc));
+
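+                       /* The hardware length includes the FCS; strip it. */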
+                       len -= ETHER_CRC_LEN;
+
+                       /* Check the received frame for errors. */
+                       if (status & (L2_FHDR_ERRORS_BAD_CRC |
+                                     L2_FHDR_ERRORS_PHY_DECODE |
+                                     L2_FHDR_ERRORS_ALIGNMENT |
+                                     L2_FHDR_ERRORS_TOO_SHORT |
+                                     L2_FHDR_ERRORS_GIANT_FRAME)) {
+                               ifp->if_ierrors++;
+                               DBRUNIF(1, sc->l2fhdr_status_errors++);
+
+                               /* Reuse the mbuf for a new frame. */
+                               if (bce_newbuf_std(sc, m, &sw_prod,
+                                                  &sw_chain_prod,
+                                                  &sw_prod_bseq)) {
+                                       DBRUNIF(1, bce_breakpoint(sc));
+                                       /* XXX */
+                                       panic("%s: Can't reuse RX mbuf!\n",
+                                             ifp->if_xname);
+                               }
+                               m = NULL;
+                               goto bce_rx_int_next_rx;
+                       }
+
+                       /* 
+                        * Get a new mbuf for the rx_bd.   If no new
+                        * mbufs are available then reuse the current mbuf,
+                        * log an ierror on the interface, and generate
+                        * an error in the system log.
+                        */
+                       if (bce_newbuf_std(sc, NULL, &sw_prod, &sw_chain_prod,
+                                          &sw_prod_bseq)) {
+                               DBRUN(BCE_WARN,
+                                     if_printf(ifp,
+                                     "%s(%d): Failed to allocate new mbuf, "
+                                     "incoming frame dropped!\n",
+                                     __FILE__, __LINE__));
+
+                               ifp->if_ierrors++;
+
+                               /* Try to reuse the existing mbuf. */
+                               if (bce_newbuf_std(sc, m, &sw_prod,
+                                                  &sw_chain_prod,
+                                                  &sw_prod_bseq)) {
+                                       DBRUNIF(1, bce_breakpoint(sc));
+                                       /* XXX */
+                                       panic("%s: Double mbuf allocation "
+                                             "failure!", ifp->if_xname);
+                               }
+                               m = NULL;
+                               goto bce_rx_int_next_rx;
+                       }
+
+                       /*
+                        * Skip over the l2_fhdr when passing
+                        * the data up the stack.
+                        */
+                       m_adj(m, sizeof(struct l2_fhdr) + ETHER_ALIGN);
+
+                       m->m_pkthdr.len = m->m_len = len;
+                       m->m_pkthdr.rcvif = ifp;
+
+                       DBRUN(BCE_VERBOSE_RECV,
+                             struct ether_header *eh;
+                             eh = mtod(m, struct ether_header *);
+                             if_printf(ifp, "%s(): to: %6D, from: %6D, "
+                                       "type: 0x%04X\n", __func__,
+                                       eh->ether_dhost, ":", 
+                                       eh->ether_shost, ":",
+                                       htons(eh->ether_type)));
+
+                       /* Validate the checksum if offload enabled. */
+                       if (ifp->if_capenable & IFCAP_RXCSUM) {
+                               /* Check for an IP datagram. */
+                               if (status & L2_FHDR_STATUS_IP_DATAGRAM) {
+                                       m->m_pkthdr.csum_flags |=
+                                               CSUM_IP_CHECKED;
+
+                                       /* Check if the IP checksum is valid. */
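+                                       /* 0xffff means a valid checksum. */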
+                                       if ((l2fhdr->l2_fhdr_ip_xsum ^
+                                            0xffff) == 0) {
+                                               m->m_pkthdr.csum_flags |=
+                                                       CSUM_IP_VALID;
+                                       } else {
+                                               DBPRINT(sc, BCE_WARN_RECV, 
+                                                       "%s(): Invalid IP checksum = 0x%04X!\n",
+                                                       __func__, l2fhdr->l2_fhdr_ip_xsum);
+                                       }
+                               }
+
+                               /* Check for a valid TCP/UDP frame. */
+                               if (status & (L2_FHDR_STATUS_TCP_SEGMENT |
+                                             L2_FHDR_STATUS_UDP_DATAGRAM)) {
+
+                                       /* Check for a good TCP/UDP checksum. */
+                                       if ((status &
+                                            (L2_FHDR_ERRORS_TCP_XSUM |
+                                             L2_FHDR_ERRORS_UDP_XSUM)) == 0) {
+                                               m->m_pkthdr.csum_data =
+                                               l2fhdr->l2_fhdr_tcp_udp_xsum;
+                                               m->m_pkthdr.csum_flags |=
+                                                       CSUM_DATA_VALID |
+                                                       CSUM_PSEUDO_HDR;
+                                       } else {
+                                               DBPRINT(sc, BCE_WARN_RECV,
+                                                       "%s(): Invalid TCP/UDP checksum = 0x%04X!\n",
+                                                       __func__, l2fhdr->l2_fhdr_tcp_udp_xsum);
+                                       }
+                               }
+                       }
+
+                       ifp->if_ipackets++;
+bce_rx_int_next_rx:
+                       sw_prod = NEXT_RX_BD(sw_prod);
+               }
+
+               sw_cons = NEXT_RX_BD(sw_cons);
+
+               /* If we have a packet, pass it up the stack */
+               if (m) {
+                       DBPRINT(sc, BCE_VERBOSE_RECV,
+                               "%s(): Passing received frame up.\n", __func__);
+
+                       if (status & L2_FHDR_STATUS_L2_VLAN_TAG)
+                               VLAN_INPUT_TAG(m, l2fhdr->l2_fhdr_vlan_tag);
+                       else
+                               ifp->if_input(ifp, m);
+
+                       DBRUNIF(1, sc->rx_mbuf_alloc--);
+               }
+
+               /*
+                * If polling(4) is not enabled, refresh hw_cons to see
+                * whether there's new work.
+                *
+                * If polling(4) is enabled, i.e. count >= 0, skip the
+                * refresh so that we do not spend too much time in
+                * RX processing.
+                */
+               if (count < 0 && sw_cons == hw_cons) {
+                       hw_cons = sc->hw_rx_cons =
+                               sblk->status_rx_quick_consumer_index0;
+                       if ((hw_cons & USABLE_RX_BD_PER_PAGE) ==
+                           USABLE_RX_BD_PER_PAGE)
+                               hw_cons++;
+               }
+
+               /*
+                * Prevent speculative reads from getting ahead
+                * of the status block.
+                */
+               bus_space_barrier(sc->bce_btag, sc->bce_bhandle, 0, 0,
+                                 BUS_SPACE_BARRIER_READ);
+       }
+
+       for (i = 0; i < RX_PAGES; i++) {
+               bus_dmamap_sync(sc->rx_bd_chain_tag,
+                               sc->rx_bd_chain_map[i], BUS_DMASYNC_PREWRITE);
+       }
+
+       sc->rx_cons = sw_cons;
+       sc->rx_prod = sw_prod;
+       sc->rx_prod_bseq = sw_prod_bseq;
+
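+       /* Tell the controller about the new RX producer index and bseq. */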
+       REG_WR16(sc, MB_RX_CID_ADDR + BCE_L2CTX_HOST_BDIDX, sc->rx_prod);
+       REG_WR(sc, MB_RX_CID_ADDR + BCE_L2CTX_HOST_BSEQ, sc->rx_prod_bseq);
+
+       DBPRINT(sc, BCE_INFO_RECV, "%s(exit): rx_prod = 0x%04X, "
+               "rx_cons = 0x%04X, rx_prod_bseq = 0x%08X\n",
+               __func__, sc->rx_prod, sc->rx_cons, sc->rx_prod_bseq);
+}
+
+
+/****************************************************************************/
+/* Handles transmit completion interrupt events.                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_tx_intr(struct bce_softc *sc)
+{
+       struct status_block *sblk = sc->status_block;
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       uint16_t hw_tx_cons, sw_tx_cons, sw_tx_chain_cons;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       DBRUNIF(1, sc->tx_interrupts++);
+
+       /* Get the hardware's view of the TX consumer index. */
+       hw_tx_cons = sc->hw_tx_cons = sblk->status_tx_quick_consumer_index0;
+
+       /* Skip to the next entry if this is a chain page pointer. */
+       if ((hw_tx_cons & USABLE_TX_BD_PER_PAGE) == USABLE_TX_BD_PER_PAGE)
+               hw_tx_cons++;
+
+       sw_tx_cons = sc->tx_cons;
+
+       /* Prevent speculative reads from getting ahead of the status block. */
+       bus_space_barrier(sc->bce_btag, sc->bce_bhandle, 0, 0,
+                         BUS_SPACE_BARRIER_READ);
+
+       /* Cycle through any completed TX chain page entries. */
+       while (sw_tx_cons != hw_tx_cons) {
+#ifdef BCE_DEBUG
+               struct tx_bd *txbd = NULL;
+#endif
+               sw_tx_chain_cons = TX_CHAIN_IDX(sw_tx_cons);
+
+               DBPRINT(sc, BCE_INFO_SEND,
+                       "%s(): hw_tx_cons = 0x%04X, sw_tx_cons = 0x%04X, "
+                       "sw_tx_chain_cons = 0x%04X\n",
+                       __func__, hw_tx_cons, sw_tx_cons, sw_tx_chain_cons);
+
+               DBRUNIF((sw_tx_chain_cons > MAX_TX_BD),
+                       if_printf(ifp, "%s(%d): "
+                                 "TX chain consumer out of range! "
+                                 " 0x%04X > 0x%04X\n",
+                                 __FILE__, __LINE__, sw_tx_chain_cons,
+                                 (int)MAX_TX_BD);
+                       bce_breakpoint(sc));
+
+               DBRUNIF(1, txbd = &sc->tx_bd_chain[TX_PAGE(sw_tx_chain_cons)]
+                               [TX_IDX(sw_tx_chain_cons)]);
+
+               DBRUNIF((txbd == NULL),
+                       if_printf(ifp, "%s(%d): "
+                                 "Unexpected NULL tx_bd[0x%04X]!\n",
+                                 __FILE__, __LINE__, sw_tx_chain_cons);
+                       bce_breakpoint(sc));
+
+               DBRUN(BCE_INFO_SEND,
+                     if_printf(ifp, "%s(): ", __func__);
+                     bce_dump_txbd(sc, sw_tx_chain_cons, txbd));
+
+               /*
+                * Free the associated mbuf. Remember
+                * that only the last tx_bd of a packet
+                * has an mbuf pointer and DMA map.
+                */
+               if (sc->tx_mbuf_ptr[sw_tx_chain_cons] != NULL) {
+                       /* Validate that this is the last tx_bd. */
+                       DBRUNIF((!(txbd->tx_bd_flags & TX_BD_FLAGS_END)),
+                               if_printf(ifp, "%s(%d): "
+                               "tx_bd END flag not set but "
+                               "txmbuf == NULL!\n", __FILE__, __LINE__);
+                               bce_breakpoint(sc));
+
+                       DBRUN(BCE_INFO_SEND,
+                             if_printf(ifp, "%s(): Unloading map/freeing mbuf "
+                                       "from tx_bd[0x%04X]\n", __func__,
+                                       sw_tx_chain_cons));
+
+                       /* Unmap the mbuf. */
+                       bus_dmamap_unload(sc->tx_mbuf_tag,
+                                         sc->tx_mbuf_map[sw_tx_chain_cons]);
+
+                       /* Free the mbuf. */
+                       m_freem(sc->tx_mbuf_ptr[sw_tx_chain_cons]);
+                       sc->tx_mbuf_ptr[sw_tx_chain_cons] = NULL;
+                       DBRUNIF(1, sc->tx_mbuf_alloc--);
+
+                       ifp->if_opackets++;
+               }
+
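+               /* Account for the freed tx_bd and advance the consumer. */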
+               sc->used_tx_bd--;
+               sw_tx_cons = NEXT_TX_BD(sw_tx_cons);
+
+               if (sw_tx_cons == hw_tx_cons) {
+                       /* Refresh hw_cons to see if there's new work. */
+                       hw_tx_cons = sc->hw_tx_cons =
+                               sblk->status_tx_quick_consumer_index0;
+                       if ((hw_tx_cons & USABLE_TX_BD_PER_PAGE) ==
+                           USABLE_TX_BD_PER_PAGE)
+                               hw_tx_cons++;
+               }
+
+               /*
+                * Prevent speculative reads from getting
+                * ahead of the status block.
+                */
+               bus_space_barrier(sc->bce_btag, sc->bce_bhandle, 0, 0,
+                                 BUS_SPACE_BARRIER_READ);
+       }
+
+       if (sc->used_tx_bd == 0) {
+               /* Clear the TX timeout timer. */
+               ifp->if_timer = 0;
+       }
+
+       /* Clear the tx hardware queue full flag. */
+       if (sc->max_tx_bd - sc->used_tx_bd >= BCE_TX_SPARE_SPACE) {
+               DBRUNIF((ifp->if_flags & IFF_OACTIVE),
+                       DBPRINT(sc, BCE_WARN_SEND,
+                               "%s(): Open TX chain! %d/%d (used/total)\n", 
+                               __func__, sc->used_tx_bd, sc->max_tx_bd));
+               ifp->if_flags &= ~IFF_OACTIVE;
+       }
+       sc->tx_cons = sw_tx_cons;
+}
+
+
+/****************************************************************************/
+/* Disables interrupt generation.                                           */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_disable_intr(struct bce_softc *sc)
+{
+       REG_WR(sc, BCE_PCICFG_INT_ACK_CMD, BCE_PCICFG_INT_ACK_CMD_MASK_INT);
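+       /* Read back to flush the write before disabling the handler. */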
+       REG_RD(sc, BCE_PCICFG_INT_ACK_CMD);
+       lwkt_serialize_handler_disable(sc->arpcom.ac_if.if_serializer);
+}
+
+
+/****************************************************************************/
+/* Enables interrupt generation.                                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_enable_intr(struct bce_softc *sc)
+{
+       uint32_t val;
+
+       lwkt_serialize_handler_enable(sc->arpcom.ac_if.if_serializer);
+
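+       /* Ack the last status index, first masked and then unmasked. */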
+       REG_WR(sc, BCE_PCICFG_INT_ACK_CMD,
+              BCE_PCICFG_INT_ACK_CMD_INDEX_VALID |
+              BCE_PCICFG_INT_ACK_CMD_MASK_INT | sc->last_status_idx);
+
+       REG_WR(sc, BCE_PCICFG_INT_ACK_CMD,
+              BCE_PCICFG_INT_ACK_CMD_INDEX_VALID | sc->last_status_idx);
+
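+       /* Kick the host coalescing block to service any pending events. */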
+       val = REG_RD(sc, BCE_HC_COMMAND);
+       REG_WR(sc, BCE_HC_COMMAND, val | BCE_HC_COMMAND_COAL_NOW);
+}
+
+
+/****************************************************************************/
+/* Handles controller initialization.                                       */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_init(void *xsc)
+{
+       struct bce_softc *sc = xsc;
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       uint32_t ether_mtu;
+       int error;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       /* Check if the driver is still running and bail out if it is. */
+       if (ifp->if_flags & IFF_RUNNING)
+               return;
+
+       bce_stop(sc);
+
+       error = bce_reset(sc, BCE_DRV_MSG_CODE_RESET);
+       if (error) {
+               if_printf(ifp, "Controller reset failed!\n");
+               goto back;
+       }
+
+       error = bce_chipinit(sc);
+       if (error) {
+               if_printf(ifp, "Controller initialization failed!\n");
+               goto back;
+       }
+
+       error = bce_blockinit(sc);
+       if (error) {
+               if_printf(ifp, "Block initialization failed!\n");
+               goto back;
+       }
+
+       /* Load our MAC address. */
+       bcopy(IF_LLADDR(ifp), sc->eaddr, ETHER_ADDR_LEN);
+       bce_set_mac_addr(sc);
+
+       /* Calculate and program the Ethernet MTU size. */
+       ether_mtu = ETHER_HDR_LEN + EVL_ENCAPLEN + ifp->if_mtu + ETHER_CRC_LEN;
+
+       DBPRINT(sc, BCE_INFO, "%s(): setting mtu = %d\n", __func__, ether_mtu);
+
+       /*
+        * Program the MTU, enabling jumbo frame
+        * support if necessary.  Also set the mbuf
+        * allocation size for RX frames.
+        */
+       if (ether_mtu > ETHER_MAX_LEN + EVL_ENCAPLEN) {
+#ifdef notyet
+               REG_WR(sc, BCE_EMAC_RX_MTU_SIZE,
+                      min(ether_mtu, BCE_MAX_JUMBO_ETHER_MTU) |
+                      BCE_EMAC_RX_MTU_SIZE_JUMBO_ENA);
+               sc->mbuf_alloc_size = MJUM9BYTES;
+#else
+               panic("jumbo buffer is not supported yet\n");
+#endif
+       } else {
+               REG_WR(sc, BCE_EMAC_RX_MTU_SIZE, ether_mtu);
+               sc->mbuf_alloc_size = MCLBYTES;
+       }
+
+       /* Calculate the RX Ethernet frame size for rx_bd's. */
+       sc->max_frame_size = sizeof(struct l2_fhdr) + 2 + ether_mtu + 8;
+
+       DBPRINT(sc, BCE_INFO,
+               "%s(): mclbytes = %d, mbuf_alloc_size = %d, "
+               "max_frame_size = %d\n",
+               __func__, (int)MCLBYTES, sc->mbuf_alloc_size,
+               sc->max_frame_size);
+
+       /* Program appropriate promiscuous/multicast filtering. */
+       bce_set_rx_mode(sc);
+
+       /* Init RX buffer descriptor chain. */
+       bce_init_rx_chain(sc);  /* XXX return value */
+
+       /* Init TX buffer descriptor chain. */
+       bce_init_tx_chain(sc);  /* XXX return value */
+
+#ifdef DEVICE_POLLING
+       /* Disable interrupts if we are polling. */
+       if (ifp->if_flags & IFF_POLLING) {
+               bce_disable_intr(sc);
+
+               REG_WR(sc, BCE_HC_RX_QUICK_CONS_TRIP,
+                      (1 << 16) | sc->bce_rx_quick_cons_trip);
+               REG_WR(sc, BCE_HC_TX_QUICK_CONS_TRIP,
+                      (1 << 16) | sc->bce_tx_quick_cons_trip);
+       } else
+#endif
+       /* Enable host interrupts. */
+       bce_enable_intr(sc);
+
+       bce_ifmedia_upd(ifp);
+
+       ifp->if_flags |= IFF_RUNNING;
+       ifp->if_flags &= ~IFF_OACTIVE;
+
+       callout_reset(&sc->bce_stat_ch, hz, bce_tick, sc);
+back:
+       if (error)
+               bce_stop(sc);
+}
+
+
+/****************************************************************************/
+/* Initialize the controller just enough so that any management firmware    */
+/* running on the device will continue to operate correctly.                */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_mgmt_init(struct bce_softc *sc)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       uint32_t val;
+
+       /* Check if the driver is still running and bail out if it is. */
+       if (ifp->if_flags & IFF_RUNNING)
+               return;
+
+       /* Initialize the on-board CPUs. */
+       bce_init_cpus(sc);
+
+       /* Set the page size and clear the RV2P processor stall bits. */
+       val = (BCM_PAGE_BITS - 8) << 24;
+       REG_WR(sc, BCE_RV2P_CONFIG, val);
+
+       /* Enable all critical blocks in the MAC. */
+       REG_WR(sc, BCE_MISC_ENABLE_SET_BITS,
+              BCE_MISC_ENABLE_SET_BITS_RX_V2P_ENABLE |
+              BCE_MISC_ENABLE_SET_BITS_RX_DMA_ENABLE |
+              BCE_MISC_ENABLE_SET_BITS_COMPLETION_ENABLE);
+       REG_RD(sc, BCE_MISC_ENABLE_SET_BITS);
+       DELAY(20);
+
+       bce_ifmedia_upd(ifp);
+}
+
+
+/****************************************************************************/
+/* Encapsulates an mbuf cluster in the tx_bd chain structure and makes the  */
+/* memory visible to the controller.                                        */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_encap(struct bce_softc *sc, struct mbuf **m_head)
+{
+       struct bce_dmamap_arg ctx;
+       bus_dma_segment_t segs[BCE_MAX_SEGMENTS];
+       bus_dmamap_t map, tmp_map;
+       struct mbuf *m0 = *m_head;
+       struct tx_bd *txbd = NULL;
+       uint16_t vlan_tag = 0, flags = 0;
+       uint16_t chain_prod, chain_prod_start, prod;
+       uint32_t prod_bseq;
+       int i, error, maxsegs;
+#ifdef BCE_DEBUG
+       uint16_t debug_prod;
+#endif
+
+       /* Transfer any checksum offload flags to the bd. */
+       if (m0->m_pkthdr.csum_flags) {
+               if (m0->m_pkthdr.csum_flags & CSUM_IP)
+                       flags |= TX_BD_FLAGS_IP_CKSUM;
+               if (m0->m_pkthdr.csum_flags & (CSUM_TCP | CSUM_UDP))
+                       flags |= TX_BD_FLAGS_TCP_UDP_CKSUM;
+       }
+
+       /* Transfer any VLAN tags to the bd. */
+       if ((m0->m_flags & (M_PROTO1 | M_PKTHDR)) == (M_PROTO1 | M_PKTHDR) &&
+           m0->m_pkthdr.rcvif != NULL &&
+           m0->m_pkthdr.rcvif->if_type == IFT_L2VLAN) {
+               struct ifvlan *ifv = m0->m_pkthdr.rcvif->if_softc;
+
+               flags |= TX_BD_FLAGS_VLAN_TAG;
+               vlan_tag = ifv->ifv_tag;
+       }
+
+       prod = sc->tx_prod;
+       chain_prod_start = chain_prod = TX_CHAIN_IDX(prod);
+
+       /* Map the mbuf into DMAable memory. */
+       map = sc->tx_mbuf_map[chain_prod_start];
+
+       maxsegs = sc->max_tx_bd - sc->used_tx_bd;
+       KASSERT(maxsegs >= BCE_TX_SPARE_SPACE,
+               ("not enough segments %d\n", maxsegs));
+       if (maxsegs > BCE_MAX_SEGMENTS)
+               maxsegs = BCE_MAX_SEGMENTS;
+
+       /* Map the mbuf into our DMA address space. */
+       ctx.bce_maxsegs = maxsegs;
+       ctx.bce_segs = segs;
+       error = bus_dmamap_load_mbuf(sc->tx_mbuf_tag, map, m0,
+                                    bce_dma_map_mbuf, &ctx, BUS_DMA_NOWAIT);
+       if (error == EFBIG || ctx.bce_maxsegs == 0) {
+               DBPRINT(sc, BCE_WARN, "%s(): fragmented mbuf\n", __func__);
+               DBRUNIF(1, bce_dump_mbuf(sc, m0););
+
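+               /* Too many segments; defragment the mbuf and retry the load. */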
+               m0 = m_defrag(*m_head, MB_DONTWAIT);
+               if (m0 == NULL) {
+                       error = ENOBUFS;
+                       goto back;
+               }
+               *m_head = m0;
+
+               ctx.bce_maxsegs = maxsegs;
+               ctx.bce_segs = segs;
+               error = bus_dmamap_load_mbuf(sc->tx_mbuf_tag, map, m0,
+                                            bce_dma_map_mbuf, &ctx,
+                                            BUS_DMA_NOWAIT);
+               if (error || ctx.bce_maxsegs == 0) {
+                       if_printf(&sc->arpcom.ac_if,
+                                 "Error mapping mbuf into TX chain\n");
+                       if (error == 0)
+                               error = EFBIG;
+                       goto back;
+               }
+       } else if (error) {
+               if_printf(&sc->arpcom.ac_if,
+                         "Error mapping mbuf into TX chain\n");
+               goto back;
+       }
+
+       /* prod points to an empty tx_bd at this point. */
+       prod_bseq  = sc->tx_prod_bseq;
+
+#ifdef BCE_DEBUG
+       debug_prod = chain_prod;
+#endif
+
+       DBPRINT(sc, BCE_INFO_SEND,
+               "%s(): Start: prod = 0x%04X, chain_prod = %04X, "
+               "prod_bseq = 0x%08X\n",
+               __func__, prod, chain_prod, prod_bseq);
+
+       /*
+        * Cycle through each mbuf segment that makes up
+        * the outgoing frame, gathering the mapping info
+        * for that segment and creating a tx_bd for
+        * the mbuf.
+        */
+       for (i = 0; i < ctx.bce_maxsegs; i++) {
+               chain_prod = TX_CHAIN_IDX(prod);
+               txbd = &sc->tx_bd_chain[TX_PAGE(chain_prod)]
+                                      [TX_IDX(chain_prod)];
+
+               txbd->tx_bd_haddr_lo = htole32(BCE_ADDR_LO(segs[i].ds_addr));
+               txbd->tx_bd_haddr_hi = htole32(BCE_ADDR_HI(segs[i].ds_addr));
+               txbd->tx_bd_mss_nbytes = htole16(segs[i].ds_len);
+               txbd->tx_bd_vlan_tag = htole16(vlan_tag);
+               txbd->tx_bd_flags = htole16(flags);
+               prod_bseq += segs[i].ds_len;
+               if (i == 0)
+                       txbd->tx_bd_flags |= htole16(TX_BD_FLAGS_START);
+               prod = NEXT_TX_BD(prod);
+       }
+
+       /* Set the END flag on the last TX buffer descriptor. */
+       txbd->tx_bd_flags |= htole16(TX_BD_FLAGS_END);
+
+       DBRUN(BCE_EXCESSIVE_SEND,
+             bce_dump_tx_chain(sc, debug_prod, ctx.bce_maxsegs));
+
+       DBPRINT(sc, BCE_INFO_SEND,
+               "%s(): End: prod = 0x%04X, chain_prod = %04X, "
+               "prod_bseq = 0x%08X\n",
+               __func__, prod, chain_prod, prod_bseq);
+
+       bus_dmamap_sync(sc->tx_mbuf_tag, map, BUS_DMASYNC_PREWRITE);
+
+       /*
+        * Ensure that the mbuf pointer for this transmission
+        * is placed at the array index of the last
+        * descriptor in this chain.  This is done
+        * because a single map is used for all 
+        * segments of the mbuf and we don't want to
+        * unload the map before all of the segments
+        * have been freed.
+        */
+       sc->tx_mbuf_ptr[chain_prod] = m0;
+
+       tmp_map = sc->tx_mbuf_map[chain_prod];
+       sc->tx_mbuf_map[chain_prod] = map;
+       sc->tx_mbuf_map[chain_prod_start] = tmp_map;
+
+       sc->used_tx_bd += ctx.bce_maxsegs;
+
+       /* Update some debug statistic counters */
+       DBRUNIF((sc->used_tx_bd > sc->tx_hi_watermark),
+               sc->tx_hi_watermark = sc->used_tx_bd);
+       DBRUNIF((sc->used_tx_bd == sc->max_tx_bd), sc->tx_full_count++);
+       DBRUNIF(1, sc->tx_mbuf_alloc++);
+
+       DBRUN(BCE_VERBOSE_SEND,
+             bce_dump_tx_mbuf_chain(sc, chain_prod, ctx.bce_maxsegs));
+
+       /* prod points to the next free tx_bd at this point. */
+       sc->tx_prod = prod;
+       sc->tx_prod_bseq = prod_bseq;
+back:
+       if (error) {
+               m_freem(*m_head);
+               *m_head = NULL;
+       }
+       return error;
+}
+
+
+/****************************************************************************/
+/* Main transmit routine when called from another routine with a lock.      */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_start(struct ifnet *ifp)
+{
+       struct bce_softc *sc = ifp->if_softc;
+       int count = 0;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       /* If there's no link or the transmit queue is empty then just exit. */
+       if ((ifp->if_flags & (IFF_RUNNING | IFF_OACTIVE)) != IFF_RUNNING ||
+           !sc->bce_link)
+               return;
+
+       DBPRINT(sc, BCE_INFO_SEND,
+               "%s(): Start: tx_prod = 0x%04X, tx_chain_prod = %04X, "
+               "tx_prod_bseq = 0x%08X\n",
+               __func__,
+               sc->tx_prod, TX_CHAIN_IDX(sc->tx_prod), sc->tx_prod_bseq);
+
+       for (;;) {
+               struct mbuf *m_head;
+
+               /*
+                * We keep BCE_TX_SPARE_SPACE entries, so bce_encap() is
+                * unlikely to fail.
+                */
+               if (sc->max_tx_bd - sc->used_tx_bd < BCE_TX_SPARE_SPACE) {
+                       ifp->if_flags |= IFF_OACTIVE;
+                       break;
+               }
+
+               /* Check for any frames to send. */
+               m_head = ifq_dequeue(&ifp->if_snd, NULL);
+               if (m_head == NULL)
+                       break;
+
+               /*
+                * Pack the data into the transmit ring. If we
+                * don't have room, place the mbuf back at the
+                * head of the queue and set the OACTIVE flag
+                * to wait for the NIC to drain the chain.
+                */
+               if (bce_encap(sc, &m_head)) {
+                       ifp->if_flags |= IFF_OACTIVE;
+                       DBPRINT(sc, BCE_INFO_SEND,
+                               "TX chain is closed for business! "
+                               "Total tx_bd used = %d\n", 
+                               sc->used_tx_bd);
+                       break;
+               }
+
+               count++;
+
+               /* Send a copy of the frame to any BPF listeners. */
+               BPF_MTAP(ifp, m_head);
+       }
+
+       if (count == 0) {
+               /* no packets were dequeued */
+               DBPRINT(sc, BCE_VERBOSE_SEND,
+                       "%s(): No packets were dequeued\n", __func__);
+               return;
+       }
+
+       DBPRINT(sc, BCE_INFO_SEND,
+               "%s(): End: tx_prod = 0x%04X, tx_chain_prod = 0x%04X, "
+               "tx_prod_bseq = 0x%08X\n",
+               __func__,
+               sc->tx_prod, TX_CHAIN_IDX(sc->tx_prod), sc->tx_prod_bseq);
+
+       /* Start the transmit. */
+       REG_WR16(sc, MB_TX_CID_ADDR + BCE_L2CTX_TX_HOST_BIDX, sc->tx_prod);
+       REG_WR(sc, MB_TX_CID_ADDR + BCE_L2CTX_TX_HOST_BSEQ, sc->tx_prod_bseq);
+
+       /* Set the tx timeout. */
+       ifp->if_timer = BCE_TX_TIMEOUT;
+}
+
+
+/****************************************************************************/
+/* Handles any IOCTL calls from the operating system.                       */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_ioctl(struct ifnet *ifp, u_long command, caddr_t data, struct ucred *cr)
+{
+       struct bce_softc *sc = ifp->if_softc;
+       struct ifreq *ifr = (struct ifreq *)data;
+       struct mii_data *mii;
+       int mask, error = 0;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       switch(command) {
+       case SIOCSIFMTU:
+               /* Check that the MTU setting is supported. */
+               if (ifr->ifr_mtu < BCE_MIN_MTU ||
+#ifdef notyet
+                   ifr->ifr_mtu > BCE_MAX_JUMBO_MTU
+#else
+                   ifr->ifr_mtu > ETHERMTU
+#endif
+                  ) {
+                       error = EINVAL;
+                       break;
+               }
+
+               DBPRINT(sc, BCE_INFO, "Setting new MTU of %d\n", ifr->ifr_mtu);
+
+               ifp->if_mtu = ifr->ifr_mtu;
+               ifp->if_flags &= ~IFF_RUNNING;  /* Force reinitialize */
+               bce_init(sc);
+               break;
+
+       case SIOCSIFFLAGS:
+               if (ifp->if_flags & IFF_UP) {
+                       if (ifp->if_flags & IFF_RUNNING) {
+                               mask = ifp->if_flags ^ sc->bce_if_flags;
+
+                               if (mask & (IFF_PROMISC | IFF_ALLMULTI))
+                                       bce_set_rx_mode(sc);
+                       } else {
+                               bce_init(sc);
+                       }
+               } else if (ifp->if_flags & IFF_RUNNING) {
+                       bce_stop(sc);
+               }
+               sc->bce_if_flags = ifp->if_flags;
+               break;
+
+       case SIOCADDMULTI:
+       case SIOCDELMULTI:
+               if (ifp->if_flags & IFF_RUNNING)
+                       bce_set_rx_mode(sc);
+               break;
+
+       case SIOCSIFMEDIA:
+       case SIOCGIFMEDIA:
+               DBPRINT(sc, BCE_VERBOSE, "bce_phy_flags = 0x%08X\n",
+                       sc->bce_phy_flags);
+               DBPRINT(sc, BCE_VERBOSE, "Copper media set/get\n");
+
+               mii = device_get_softc(sc->bce_miibus);
+               error = ifmedia_ioctl(ifp, ifr, &mii->mii_media, command);
+               break;
+
+       case SIOCSIFCAP:
+               mask = ifr->ifr_reqcap ^ ifp->if_capenable;
+               DBPRINT(sc, BCE_INFO, "Received SIOCSIFCAP = 0x%08X\n",
+                       (uint32_t) mask);
+
+               if (mask & IFCAP_HWCSUM) {
+                       ifp->if_capenable ^= IFCAP_HWCSUM;
+                       if (IFCAP_HWCSUM & ifp->if_capenable)
+                               ifp->if_hwassist = BCE_IF_HWASSIST;
+                       else
+                               ifp->if_hwassist = 0;
+               }
+               break;
+
+       default:
+               error = ether_ioctl(ifp, command, data);
+               break;
+       }
+       return error;
+}
+
+
+/****************************************************************************/
+/* Transmit timeout handler.                                                */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_watchdog(struct ifnet *ifp)
+{
+       struct bce_softc *sc = ifp->if_softc;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       DBRUN(BCE_VERBOSE_SEND,
+             bce_dump_driver_state(sc);
+             bce_dump_status_block(sc));
+
+       /*
+        * If we are in this routine because of pause frames, then
+        * don't reset the hardware.
+        */
+       if (REG_RD(sc, BCE_EMAC_TX_STATUS) & BCE_EMAC_TX_STATUS_XOFFED) 
+               return;
+
+       if_printf(ifp, "Watchdog timeout occurred, resetting!\n");
+
+       /* DBRUN(BCE_FATAL, bce_breakpoint(sc)); */
+
+       ifp->if_flags &= ~IFF_RUNNING;  /* Force reinitialize */
+       bce_init(sc);
+
+       ifp->if_oerrors++;
+
+       if (!ifq_is_empty(&ifp->if_snd))
+               ifp->if_start(ifp);
+}
+
+
+#ifdef DEVICE_POLLING
+
+static void
+bce_poll(struct ifnet *ifp, enum poll_cmd cmd, int count)
+{
+       struct bce_softc *sc = ifp->if_softc;
+       struct status_block *sblk = sc->status_block;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       switch (cmd) {
+       case POLL_REGISTER:
+               bce_disable_intr(sc);
+
+               REG_WR(sc, BCE_HC_RX_QUICK_CONS_TRIP,
+                      (1 << 16) | sc->bce_rx_quick_cons_trip);
+               REG_WR(sc, BCE_HC_TX_QUICK_CONS_TRIP,
+                      (1 << 16) | sc->bce_tx_quick_cons_trip);
+               return;
+       case POLL_DEREGISTER:
+               bce_enable_intr(sc);
+
+               REG_WR(sc, BCE_HC_TX_QUICK_CONS_TRIP,
+                      (sc->bce_tx_quick_cons_trip_int << 16) |
+                      sc->bce_tx_quick_cons_trip);
+               REG_WR(sc, BCE_HC_RX_QUICK_CONS_TRIP,
+                      (sc->bce_rx_quick_cons_trip_int << 16) |
+                      sc->bce_rx_quick_cons_trip);
+               return;
+       default:
+               break;
+       }
+
+       bus_dmamap_sync(sc->status_tag, sc->status_map, BUS_DMASYNC_POSTREAD);
+
+       if (cmd == POLL_AND_CHECK_STATUS) {
+               uint32_t status_attn_bits;
+
+               status_attn_bits = sblk->status_attn_bits;
+
+               DBRUNIF(DB_RANDOMTRUE(bce_debug_unexpected_attention),
+                       if_printf(ifp,
+                       "Simulating unexpected status attention bit set.\n");
+                       status_attn_bits |= STATUS_ATTN_BITS_PARITY_ERROR);
+
+               /* Was it a link change interrupt? */
+               if ((status_attn_bits & STATUS_ATTN_BITS_LINK_STATE) !=
+                   (sblk->status_attn_bits_ack & STATUS_ATTN_BITS_LINK_STATE))
+                       bce_phy_intr(sc);
+
+               /*
+                * If any other attention is asserted then
+                * the chip is toast.
+                */
+               if ((status_attn_bits & ~STATUS_ATTN_BITS_LINK_STATE) !=
+                    (sblk->status_attn_bits_ack &
+                     ~STATUS_ATTN_BITS_LINK_STATE)) {
+                       DBRUN(1, sc->unexpected_attentions++);
+
+                       if_printf(ifp, "Fatal attention detected: 0x%08X\n",
+                                 sblk->status_attn_bits);
+
+                       DBRUN(BCE_FATAL,
+                       if (bce_debug_unexpected_attention == 0)
+                               bce_breakpoint(sc));
+
+                       bce_init(sc);
+                       return;
+               }
+       }
+
+       /* Check for any completed RX frames. */
+       if (sblk->status_rx_quick_consumer_index0 != sc->hw_rx_cons)
+               bce_rx_intr(sc, count);
+
+       /* Check for any completed TX frames. */
+       if (sblk->status_tx_quick_consumer_index0 != sc->hw_tx_cons)
+               bce_tx_intr(sc);
+
+       bus_dmamap_sync(sc->status_tag, sc->status_map, BUS_DMASYNC_PREWRITE);
+
+       /* Check for new frames to transmit. */
+       if (!ifq_is_empty(&ifp->if_snd))
+               ifp->if_start(ifp);
+}
+
+#endif /* DEVICE_POLLING */
+
+
+#if 0
+static inline int
+bce_has_work(struct bce_softc *sc)
+{
+       struct status_block *stat = sc->status_block;
+
+       if ((stat->status_rx_quick_consumer_index0 != sc->hw_rx_cons) ||
+           (stat->status_tx_quick_consumer_index0 != sc->hw_tx_cons))
+               return 1;
+
+       if (((stat->status_attn_bits & STATUS_ATTN_BITS_LINK_STATE) != 0) !=
+           sc->bce_link)
+               return 1;
+
+       return 0;
+}
+#endif
+
+
+/*
+ * Interrupt handler.
+ */
+/****************************************************************************/
+/* Main interrupt entry point.  Verifies that the controller generated the  */
+/* interrupt and then calls a separate routine to handle the various        */
+/* interrupt causes (PHY, TX, RX).                                          */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_intr(void *xsc)
+{
+       struct bce_softc *sc = xsc;
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       struct status_block *sblk;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       DBPRINT(sc, BCE_EXCESSIVE, "Entering %s()\n", __func__);
+       DBRUNIF(1, sc->interrupts_generated++);
+
+       bus_dmamap_sync(sc->status_tag, sc->status_map, BUS_DMASYNC_POSTREAD);
+       sblk = sc->status_block;
+
+       /*
+        * If the hardware status block index matches the last value
+        * read by the driver and we haven't asserted our interrupt
+        * then there's nothing to do.
+        */
+       if (sblk->status_idx == sc->last_status_idx &&
+           (REG_RD(sc, BCE_PCICFG_MISC_STATUS) &
+            BCE_PCICFG_MISC_STATUS_INTA_VALUE))
+               return;
+
+       /* Ack the interrupt and stop others from occurring. */
+       REG_WR(sc, BCE_PCICFG_INT_ACK_CMD,
+              BCE_PCICFG_INT_ACK_CMD_USE_INT_HC_PARAM |
+              BCE_PCICFG_INT_ACK_CMD_MASK_INT);
+
+       /* Keep processing data as long as there is work to do. */
+       for (;;) {
+               uint32_t status_attn_bits;
+
+               status_attn_bits = sblk->status_attn_bits;
+
+               DBRUNIF(DB_RANDOMTRUE(bce_debug_unexpected_attention),
+                       if_printf(ifp,
+                       "Simulating unexpected status attention bit set.\n");
+                       status_attn_bits |= STATUS_ATTN_BITS_PARITY_ERROR);
+
+               /* Was it a link change interrupt? */
+               if ((status_attn_bits & STATUS_ATTN_BITS_LINK_STATE) !=
+                   (sblk->status_attn_bits_ack & STATUS_ATTN_BITS_LINK_STATE))
+                       bce_phy_intr(sc);
+
+               /*
+                * If any other attention is asserted then
+                * the chip is toast.
+                */
+               if ((status_attn_bits & ~STATUS_ATTN_BITS_LINK_STATE) !=
+                    (sblk->status_attn_bits_ack &
+                     ~STATUS_ATTN_BITS_LINK_STATE)) {
+                       DBRUN(1, sc->unexpected_attentions++);
+
+                       if_printf(ifp, "Fatal attention detected: 0x%08X\n",
+                                 sblk->status_attn_bits);
+
+                       DBRUN(BCE_FATAL,
+                       if (bce_debug_unexpected_attention == 0)
+                               bce_breakpoint(sc));
+
+                       bce_init(sc);
+                       return;
+               }
+
+               /* Check for any completed RX frames. */
+               if (sblk->status_rx_quick_consumer_index0 != sc->hw_rx_cons)
+                       bce_rx_intr(sc, -1);
+
+               /* Check for any completed TX frames. */
+               if (sblk->status_tx_quick_consumer_index0 != sc->hw_tx_cons)
+                       bce_tx_intr(sc);
+
+               /*
+                * Save the status block index value
+                * for use during the next interrupt.
+                */
+               sc->last_status_idx = sblk->status_idx;
+
+               /*
+                * Prevent speculative reads from getting
+                * ahead of the status block.
+                */
+               bus_space_barrier(sc->bce_btag, sc->bce_bhandle, 0, 0,
+                                 BUS_SPACE_BARRIER_READ);
+
+               /*
+                * If there's no work left then exit the
+                * interrupt service routine.
+                */
+               if (sblk->status_rx_quick_consumer_index0 == sc->hw_rx_cons &&
+                   sblk->status_tx_quick_consumer_index0 == sc->hw_tx_cons)
+                       break;
+       }
+
+       bus_dmamap_sync(sc->status_tag, sc->status_map, BUS_DMASYNC_PREWRITE);
+
+       /* Re-enable interrupts. */
+       REG_WR(sc, BCE_PCICFG_INT_ACK_CMD,
+              BCE_PCICFG_INT_ACK_CMD_INDEX_VALID | sc->last_status_idx |
+              BCE_PCICFG_INT_ACK_CMD_MASK_INT);
+       REG_WR(sc, BCE_PCICFG_INT_ACK_CMD,
+              BCE_PCICFG_INT_ACK_CMD_INDEX_VALID | sc->last_status_idx);
+
+       /* Handle any frames that arrived while handling the interrupt. */
+       if (!ifq_is_empty(&ifp->if_snd))
+               ifp->if_start(ifp);
+}
+
+
+/****************************************************************************/
+/* Programs the various packet receive modes (broadcast and multicast).     */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_set_rx_mode(struct bce_softc *sc)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       struct ifmultiaddr *ifma;
+       uint32_t hashes[NUM_MC_HASH_REGISTERS] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+       uint32_t rx_mode, sort_mode;
+       int h, i;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       /* Initialize receive mode default settings. */
+       rx_mode = sc->rx_mode &
+                 ~(BCE_EMAC_RX_MODE_PROMISCUOUS |
+                   BCE_EMAC_RX_MODE_KEEP_VLAN_TAG);
+       sort_mode = 1 | BCE_RPM_SORT_USER0_BC_EN;
+
+       /*
+        * ASF/IPMI/UMP firmware requires that VLAN tag stripping
+        * be enabled.
+        */
+       if (!(BCE_IF_CAPABILITIES & IFCAP_VLAN_HWTAGGING) &&
+           !(sc->bce_flags & BCE_MFW_ENABLE_FLAG))
+               rx_mode |= BCE_EMAC_RX_MODE_KEEP_VLAN_TAG;
+
+       /*
+        * Check for promiscuous, all multicast, or selected
+        * multicast address filtering.
+        */
+       if (ifp->if_flags & IFF_PROMISC) {
+               DBPRINT(sc, BCE_INFO, "Enabling promiscuous mode.\n");
+
+               /* Enable promiscuous mode. */
+               rx_mode |= BCE_EMAC_RX_MODE_PROMISCUOUS;
+               sort_mode |= BCE_RPM_SORT_USER0_PROM_EN;
+       } else if (ifp->if_flags & IFF_ALLMULTI) {
+               DBPRINT(sc, BCE_INFO, "Enabling all multicast mode.\n");
+
+               /* Enable all multicast addresses. */
+               for (i = 0; i < NUM_MC_HASH_REGISTERS; i++) {
+                       REG_WR(sc, BCE_EMAC_MULTICAST_HASH0 + (i * 4),
+                              0xffffffff);
+               }
+               sort_mode |= BCE_RPM_SORT_USER0_MC_EN;
+       } else {
+               /* Accept one or more multicast(s). */
+               DBPRINT(sc, BCE_INFO, "Enabling selective multicast mode.\n");
+
+               LIST_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) {
+                       if (ifma->ifma_addr->sa_family != AF_LINK)
+                               continue;
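+                       /* The low 8 CRC bits select one of 256 hash bits. */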
+                       h = ether_crc32_le(
+                           LLADDR((struct sockaddr_dl *)ifma->ifma_addr),
+                           ETHER_ADDR_LEN) & 0xFF;
+                       hashes[(h & 0xE0) >> 5] |= 1 << (h & 0x1F);
+               }
+
+               for (i = 0; i < NUM_MC_HASH_REGISTERS; i++) {
+                       REG_WR(sc, BCE_EMAC_MULTICAST_HASH0 + (i * 4),
+                              hashes[i]);
+               }
+               sort_mode |= BCE_RPM_SORT_USER0_MC_HSH_EN;
+       }
+
+       /* Only make changes if the receive mode has actually changed. */
+       if (rx_mode != sc->rx_mode) {
+               DBPRINT(sc, BCE_VERBOSE, "Enabling new receive mode: 0x%08X\n",
+                       rx_mode);
+
+               sc->rx_mode = rx_mode;
+               REG_WR(sc, BCE_EMAC_RX_MODE, rx_mode);
+       }
+
+       /* Disable and clear the existing sort before enabling a new sort. */
+       REG_WR(sc, BCE_RPM_SORT_USER0, 0x0);
+       REG_WR(sc, BCE_RPM_SORT_USER0, sort_mode);
+       REG_WR(sc, BCE_RPM_SORT_USER0, sort_mode | BCE_RPM_SORT_USER0_ENA);
+}
+
+
+/****************************************************************************/
+/* Called periodically to update statistics from the controller's           */
+/* statistics block.                                                        */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_stats_update(struct bce_softc *sc)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       struct statistics_block *stats = sc->stats_block;
+
+       DBPRINT(sc, BCE_EXCESSIVE, "Entering %s()\n", __func__);
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       /* 
+        * Update the interface statistics from the hardware statistics.
+        */
+       ifp->if_collisions = (u_long)stats->stat_EtherStatsCollisions;
+
+       ifp->if_ierrors = (u_long)stats->stat_EtherStatsUndersizePkts +
+                         (u_long)stats->stat_EtherStatsOverrsizePkts +
+                         (u_long)stats->stat_IfInMBUFDiscards +
+                         (u_long)stats->stat_Dot3StatsAlignmentErrors +
+                         (u_long)stats->stat_Dot3StatsFCSErrors;
+
+       ifp->if_oerrors =
+       (u_long)stats->stat_emac_tx_stat_dot3statsinternalmactransmiterrors +
+       (u_long)stats->stat_Dot3StatsExcessiveCollisions +
+       (u_long)stats->stat_Dot3StatsLateCollisions;
+
+       /* 
+        * Certain controllers don't report carrier sense errors correctly.
+        * See errata E11_5708CA0_1165.
+        */
+       if (!(BCE_CHIP_NUM(sc) == BCE_CHIP_NUM_5706) &&
+           !(BCE_CHIP_ID(sc) == BCE_CHIP_ID_5708_A0)) {
+               ifp->if_oerrors +=
+                       (u_long)stats->stat_Dot3StatsCarrierSenseErrors;
+       }
+
+       /*
+        * Update the sysctl statistics from the hardware statistics.
+        */
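+       /* 64-bit counters are split into _hi/_lo halves by the hardware. */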
+       sc->stat_IfHCInOctets =
+               ((uint64_t)stats->stat_IfHCInOctets_hi << 32) +
+                (uint64_t)stats->stat_IfHCInOctets_lo;
+
+       sc->stat_IfHCInBadOctets =
+               ((uint64_t)stats->stat_IfHCInBadOctets_hi << 32) +
+                (uint64_t)stats->stat_IfHCInBadOctets_lo;
+
+       sc->stat_IfHCOutOctets =
+               ((uint64_t)stats->stat_IfHCOutOctets_hi << 32) +
+                (uint64_t)stats->stat_IfHCOutOctets_lo;
+
+       sc->stat_IfHCOutBadOctets =
+               ((uint64_t)stats->stat_IfHCOutBadOctets_hi << 32) +
+                (uint64_t)stats->stat_IfHCOutBadOctets_lo;
+
+       sc->stat_IfHCInUcastPkts =
+               ((uint64_t)stats->stat_IfHCInUcastPkts_hi << 32) +
+                (uint64_t)stats->stat_IfHCInUcastPkts_lo;
+
+       sc->stat_IfHCInMulticastPkts =
+               ((uint64_t)stats->stat_IfHCInMulticastPkts_hi << 32) +
+                (uint64_t)stats->stat_IfHCInMulticastPkts_lo;
+
+       sc->stat_IfHCInBroadcastPkts =
+               ((uint64_t)stats->stat_IfHCInBroadcastPkts_hi << 32) +
+                (uint64_t)stats->stat_IfHCInBroadcastPkts_lo;
+
+       sc->stat_IfHCOutUcastPkts =
+               ((uint64_t)stats->stat_IfHCOutUcastPkts_hi << 32) +
+                (uint64_t)stats->stat_IfHCOutUcastPkts_lo;
+
+       sc->stat_IfHCOutMulticastPkts =
+               ((uint64_t)stats->stat_IfHCOutMulticastPkts_hi << 32) +
+                (uint64_t)stats->stat_IfHCOutMulticastPkts_lo;
+
+       sc->stat_IfHCOutBroadcastPkts =
+               ((uint64_t)stats->stat_IfHCOutBroadcastPkts_hi << 32) +
+                (uint64_t)stats->stat_IfHCOutBroadcastPkts_lo;
+
+       sc->stat_emac_tx_stat_dot3statsinternalmactransmiterrors =
+               stats->stat_emac_tx_stat_dot3statsinternalmactransmiterrors;
+
+       sc->stat_Dot3StatsCarrierSenseErrors =
+               stats->stat_Dot3StatsCarrierSenseErrors;
+
+       sc->stat_Dot3StatsFCSErrors =
+               stats->stat_Dot3StatsFCSErrors;
+
+       sc->stat_Dot3StatsAlignmentErrors =
+               stats->stat_Dot3StatsAlignmentErrors;
+
+       sc->stat_Dot3StatsSingleCollisionFrames =
+               stats->stat_Dot3StatsSingleCollisionFrames;
+
+       sc->stat_Dot3StatsMultipleCollisionFrames =
+               stats->stat_Dot3StatsMultipleCollisionFrames;
+
+       sc->stat_Dot3StatsDeferredTransmissions =
+               stats->stat_Dot3StatsDeferredTransmissions;
+
+       sc->stat_Dot3StatsExcessiveCollisions =
+               stats->stat_Dot3StatsExcessiveCollisions;
+
+       sc->stat_Dot3StatsLateCollisions =
+               stats->stat_Dot3StatsLateCollisions;
+
+       sc->stat_EtherStatsCollisions =
+               stats->stat_EtherStatsCollisions;
+
+       sc->stat_EtherStatsFragments =
+               stats->stat_EtherStatsFragments;
+
+       sc->stat_EtherStatsJabbers =
+               stats->stat_EtherStatsJabbers;
+
+       sc->stat_EtherStatsUndersizePkts =
+               stats->stat_EtherStatsUndersizePkts;
+
+       sc->stat_EtherStatsOverrsizePkts =
+               stats->stat_EtherStatsOverrsizePkts;
+
+       sc->stat_EtherStatsPktsRx64Octets =
+               stats->stat_EtherStatsPktsRx64Octets;
+
+       sc->stat_EtherStatsPktsRx65Octetsto127Octets =
+               stats->stat_EtherStatsPktsRx65Octetsto127Octets;
+
+       sc->stat_EtherStatsPktsRx128Octetsto255Octets =
+               stats->stat_EtherStatsPktsRx128Octetsto255Octets;
+
+       sc->stat_EtherStatsPktsRx256Octetsto511Octets =
+               stats->stat_EtherStatsPktsRx256Octetsto511Octets;
+
+       sc->stat_EtherStatsPktsRx512Octetsto1023Octets =
+               stats->stat_EtherStatsPktsRx512Octetsto1023Octets;
+
+       sc->stat_EtherStatsPktsRx1024Octetsto1522Octets =
+               stats->stat_EtherStatsPktsRx1024Octetsto1522Octets;
+
+       sc->stat_EtherStatsPktsRx1523Octetsto9022Octets =
+               stats->stat_EtherStatsPktsRx1523Octetsto9022Octets;
+
+       sc->stat_EtherStatsPktsTx64Octets =
+               stats->stat_EtherStatsPktsTx64Octets;
+
+       sc->stat_EtherStatsPktsTx65Octetsto127Octets =
+               stats->stat_EtherStatsPktsTx65Octetsto127Octets;
+
+       sc->stat_EtherStatsPktsTx128Octetsto255Octets =
+               stats->stat_EtherStatsPktsTx128Octetsto255Octets;
+
+       sc->stat_EtherStatsPktsTx256Octetsto511Octets =
+               stats->stat_EtherStatsPktsTx256Octetsto511Octets;
+
+       sc->stat_EtherStatsPktsTx512Octetsto1023Octets =
+               stats->stat_EtherStatsPktsTx512Octetsto1023Octets;
+
+       sc->stat_EtherStatsPktsTx1024Octetsto1522Octets =
+               stats->stat_EtherStatsPktsTx1024Octetsto1522Octets;
+
+       sc->stat_EtherStatsPktsTx1523Octetsto9022Octets =
+               stats->stat_EtherStatsPktsTx1523Octetsto9022Octets;
+
+       sc->stat_XonPauseFramesReceived =
+               stats->stat_XonPauseFramesReceived;
+
+       sc->stat_XoffPauseFramesReceived =
+               stats->stat_XoffPauseFramesReceived;
+
+       sc->stat_OutXonSent =
+               stats->stat_OutXonSent;
+
+       sc->stat_OutXoffSent =
+               stats->stat_OutXoffSent;
+
+       sc->stat_FlowControlDone =
+               stats->stat_FlowControlDone;
+
+       sc->stat_MacControlFramesReceived =
+               stats->stat_MacControlFramesReceived;
+
+       sc->stat_XoffStateEntered =
+               stats->stat_XoffStateEntered;
+
+       sc->stat_IfInFramesL2FilterDiscards =
+               stats->stat_IfInFramesL2FilterDiscards;
+
+       sc->stat_IfInRuleCheckerDiscards =
+               stats->stat_IfInRuleCheckerDiscards;
+
+       sc->stat_IfInFTQDiscards =
+               stats->stat_IfInFTQDiscards;
+
+       sc->stat_IfInMBUFDiscards =
+               stats->stat_IfInMBUFDiscards;
+
+       sc->stat_IfInRuleCheckerP4Hit =
+               stats->stat_IfInRuleCheckerP4Hit;
+
+       sc->stat_CatchupInRuleCheckerDiscards =
+               stats->stat_CatchupInRuleCheckerDiscards;
+
+       sc->stat_CatchupInFTQDiscards =
+               stats->stat_CatchupInFTQDiscards;
+
+       sc->stat_CatchupInMBUFDiscards =
+               stats->stat_CatchupInMBUFDiscards;
+
+       sc->stat_CatchupInRuleCheckerP4Hit =
+               stats->stat_CatchupInRuleCheckerP4Hit;
+
+       sc->com_no_buffers = REG_RD_IND(sc, 0x120084);
+
+       DBPRINT(sc, BCE_EXCESSIVE, "Exiting %s()\n", __func__);
+}
+
+
+/****************************************************************************/
+/* Periodic function to perform maintenance tasks.                          */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
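+/*
+ * Called once a second via the callout below: it bumps the firmware
+ * driver-alive pulse, copies the hardware statistics, and, while the link
+ * is down, polls the PHY through mii_tick() so pending TX traffic can be
+ * restarted as soon as the link comes up.
+ */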
+static void
+bce_tick_serialized(struct bce_softc *sc)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       struct mii_data *mii;
+       uint32_t msg;
+
+       ASSERT_SERIALIZED(ifp->if_serializer);
+
+       /* Tell the firmware that the driver is still running. */
+#ifdef BCE_DEBUG
+       msg = (uint32_t)BCE_DRV_MSG_DATA_PULSE_CODE_ALWAYS_ALIVE;
+#else
+       msg = (uint32_t)++sc->bce_fw_drv_pulse_wr_seq;
+#endif
+       REG_WR_IND(sc, sc->bce_shmem_base + BCE_DRV_PULSE_MB, msg);
+
+       /* Update the statistics from the hardware statistics block. */
+       bce_stats_update(sc);
+
+       /* Schedule the next tick. */
+       callout_reset(&sc->bce_stat_ch, hz, bce_tick, sc);
+
+       /* If the link is already up then we're done. */
+       if (sc->bce_link)
+               return;
+
+       mii = device_get_softc(sc->bce_miibus);
+       mii_tick(mii);
+
+       /* Check if the link has come up. */
+       if (!sc->bce_link && (mii->mii_media_status & IFM_ACTIVE) &&
+           IFM_SUBTYPE(mii->mii_media_active) != IFM_NONE) {
+               sc->bce_link++;
+               /* Now that link is up, handle any outstanding TX traffic. */
+               if (!ifq_is_empty(&ifp->if_snd))
+                       ifp->if_start(ifp);
+       }
+}
+
+
+static void
+bce_tick(void *xsc)
+{
+       struct bce_softc *sc = xsc;
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+
+       lwkt_serialize_enter(ifp->if_serializer);
+       bce_tick_serialized(sc);
+       lwkt_serialize_exit(ifp->if_serializer);
+}
+
+
+#ifdef BCE_DEBUG
+/****************************************************************************/
+/* Allows the driver state to be dumped through the sysctl interface.       */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_sysctl_driver_state(SYSCTL_HANDLER_ARGS)
+{
+        int error;
+        int result;
+        struct bce_softc *sc;
+
+        result = -1;
+        error = sysctl_handle_int(oidp, &result, 0, req);
+
+        if (error || !req->newptr)
+                return (error);
+
+        if (result == 1) {
+                sc = (struct bce_softc *)arg1;
+                bce_dump_driver_state(sc);
+        }
+
+        return error;
+}
+
+
+/****************************************************************************/
+/* Allows the hardware state to be dumped through the sysctl interface.     */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_sysctl_hw_state(SYSCTL_HANDLER_ARGS)
+{
+        int error;
+        int result;
+        struct bce_softc *sc;
+
+        result = -1;
+        error = sysctl_handle_int(oidp, &result, 0, req);
+
+        if (error || !req->newptr)
+                return (error);
+
+        if (result == 1) {
+                sc = (struct bce_softc *)arg1;
+                bce_dump_hw_state(sc);
+        }
+
+        return error;
+}
+
+
+/****************************************************************************/
+/* Provides a sysctl interface to allow dumping the RX chain.               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_sysctl_dump_rx_chain(SYSCTL_HANDLER_ARGS)
+{
+        int error;
+        int result;
+        struct bce_softc *sc;
+
+        result = -1;
+        error = sysctl_handle_int(oidp, &result, 0, req);
+
+        if (error || !req->newptr)
+                return (error);
+
+        if (result == 1) {
+                sc = (struct bce_softc *)arg1;
+                bce_dump_rx_chain(sc, 0, USABLE_RX_BD);
+        }
+
+        return error;
+}
+
+
+/****************************************************************************/
+/* Provides a sysctl interface to allow dumping the TX chain.               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_sysctl_dump_tx_chain(SYSCTL_HANDLER_ARGS)
+{
+        int error;
+        int result;
+        struct bce_softc *sc;
+
+        result = -1;
+        error = sysctl_handle_int(oidp, &result, 0, req);
+
+        if (error || !req->newptr)
+                return (error);
+
+        if (result == 1) {
+                sc = (struct bce_softc *)arg1;
+                bce_dump_tx_chain(sc, 0, USABLE_TX_BD);
+        }
+
+        return error;
+}
+
+
+/****************************************************************************/
+/* Provides a sysctl interface to allow reading arbitrary registers in the  */
+/* device.  DO NOT ENABLE ON PRODUCTION SYSTEMS!                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
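+/*
+ * Usage sketch (the unit number and offset below are examples only):
+ * writing a register offset to the node, e.g.
+ *     sysctl hw.bce0.reg_read=4096
+ * reads offset 0x1000 and prints its contents on the console.  Offsets
+ * below 0x8000 are read directly, those up to 0x0280000 through
+ * REG_RD_IND(); anything larger is silently ignored.
+ */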
+static int
+bce_sysctl_reg_read(SYSCTL_HANDLER_ARGS)
+{
+       struct bce_softc *sc;
+       int error;
+       uint32_t val, result;
+
+       result = -1;
+       error = sysctl_handle_int(oidp, &result, 0, req);
+       if (error || (req->newptr == NULL))
+               return (error);
+
+       /* Make sure the register is accessible. */
+       if (result < 0x8000) {
+               sc = (struct bce_softc *)arg1;
+               val = REG_RD(sc, result);
+               if_printf(&sc->arpcom.ac_if, "reg 0x%08X = 0x%08X\n",
+                         result, val);
+       } else if (result < 0x0280000) {
+               sc = (struct bce_softc *)arg1;
+               val = REG_RD_IND(sc, result);
+               if_printf(&sc->arpcom.ac_if, "reg 0x%08X = 0x%08X\n",
+                         result, val);
+       }
+       return (error);
+}
+
+
+/****************************************************************************/
+/* Provides a sysctl interface to allow reading arbitrary PHY registers in  */
+/* the device.  DO NOT ENABLE ON PRODUCTION SYSTEMS!                        */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
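+/*
+ * Usage sketch (unit number is an example only): writing a PHY register
+ * index below 0x20 to the node, e.g. "sysctl hw.bce0.phy_read=1", reads
+ * that register through bce_miibus_read_reg() and prints the result.
+ */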
+static int
+bce_sysctl_phy_read(SYSCTL_HANDLER_ARGS)
+{
+       struct bce_softc *sc;
+       device_t dev;
+       int error, result;
+       uint16_t val;
+
+       result = -1;
+       error = sysctl_handle_int(oidp, &result, 0, req);
+       if (error || (req->newptr == NULL))
+               return (error);
+
+       /* Make sure the register is accessible. */
+       if (result < 0x20) {
+               sc = (struct bce_softc *)arg1;
+               dev = sc->bce_dev;
+               val = bce_miibus_read_reg(dev, sc->bce_phy_addr, result);
+               if_printf(&sc->arpcom.ac_if,
+                         "phy 0x%02X = 0x%04X\n", result, val);
+       }
+       return (error);
+}
+
+
+/****************************************************************************/
+/* Provides a sysctl interface to force the driver to dump state and        */
+/* enter the debugger.  DO NOT ENABLE ON PRODUCTION SYSTEMS!                */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   0 for success, positive value for failure.                             */
+/****************************************************************************/
+static int
+bce_sysctl_breakpoint(SYSCTL_HANDLER_ARGS)
+{
+        int error;
+        int result;
+        struct bce_softc *sc;
+
+        result = -1;
+        error = sysctl_handle_int(oidp, &result, 0, req);
+
+        if (error || !req->newptr)
+                return (error);
+
+        if (result == 1) {
+                sc = (struct bce_softc *)arg1;
+                bce_breakpoint(sc);
+        }
+
+        return error;
+}
+#endif
+
+
+/****************************************************************************/
+/* Adds any sysctl parameters for tuning or debugging purposes.             */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_add_sysctls(struct bce_softc *sc)
+{
+       struct sysctl_ctx_list *ctx;
+       struct sysctl_oid_list *children;
+
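+       /*
+        * All nodes are rooted at hw.<nameunit> (e.g. hw.bce0 for the
+        * first device).  The statistics counters are always exported;
+        * the debug knobs below are compiled in only under BCE_DEBUG.
+        */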
+       sysctl_ctx_init(&sc->bce_sysctl_ctx);
+       sc->bce_sysctl_tree = SYSCTL_ADD_NODE(&sc->bce_sysctl_ctx,
+                                             SYSCTL_STATIC_CHILDREN(_hw),
+                                             OID_AUTO,
+                                             device_get_nameunit(sc->bce_dev),
+                                             CTLFLAG_RD, 0, "");
+       if (sc->bce_sysctl_tree == NULL) {
+               device_printf(sc->bce_dev, "can't add sysctl node\n");
+               return;
+       }
+
+       ctx = &sc->bce_sysctl_ctx;
+       children = SYSCTL_CHILDREN(sc->bce_sysctl_tree);
+
+#ifdef BCE_DEBUG
+       SYSCTL_ADD_INT(ctx, children, OID_AUTO, 
+               "rx_low_watermark",
+               CTLFLAG_RD, &sc->rx_low_watermark,
+               0, "Lowest level of free rx_bd's");
+
+       SYSCTL_ADD_INT(ctx, children, OID_AUTO, 
+               "rx_empty_count",
+               CTLFLAG_RD, &sc->rx_empty_count,
+               0, "Number of times the RX chain was empty");
+
+       SYSCTL_ADD_INT(ctx, children, OID_AUTO, 
+               "tx_hi_watermark",
+               CTLFLAG_RD, &sc->tx_hi_watermark,
+               0, "Highest level of used tx_bd's");
+
+       SYSCTL_ADD_INT(ctx, children, OID_AUTO, 
+               "tx_full_count",
+               CTLFLAG_RD, &sc->tx_full_count,
+               0, "Number of times the TX chain was full");
+
+       SYSCTL_ADD_INT(ctx, children, OID_AUTO, 
+               "l2fhdr_status_errors",
+               CTLFLAG_RD, &sc->l2fhdr_status_errors,
+               0, "l2_fhdr status errors");
+
+       SYSCTL_ADD_INT(ctx, children, OID_AUTO, 
+               "unexpected_attentions",
+               CTLFLAG_RD, &sc->unexpected_attentions,
+               0, "unexpected attentions");
+
+       SYSCTL_ADD_INT(ctx, children, OID_AUTO, 
+               "lost_status_block_updates",
+               CTLFLAG_RD, &sc->lost_status_block_updates,
+               0, "lost status block updates");
+
+       SYSCTL_ADD_INT(ctx, children, OID_AUTO, 
+               "mbuf_alloc_failed",
+               CTLFLAG_RD, &sc->mbuf_alloc_failed,
+               0, "mbuf cluster allocation failures");
+#endif
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCInOctets",
+               CTLFLAG_RD, &sc->stat_IfHCInOctets,
+               "Bytes received");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCInBadOctets",
+               CTLFLAG_RD, &sc->stat_IfHCInBadOctets,
+               "Bad bytes received");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCOutOctets",
+               CTLFLAG_RD, &sc->stat_IfHCOutOctets,
+               "Bytes sent");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCOutBadOctets",
+               CTLFLAG_RD, &sc->stat_IfHCOutBadOctets,
+               "Bad bytes sent");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCInUcastPkts",
+               CTLFLAG_RD, &sc->stat_IfHCInUcastPkts,
+               "Unicast packets received");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCInMulticastPkts",
+               CTLFLAG_RD, &sc->stat_IfHCInMulticastPkts,
+               "Multicast packets received");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCInBroadcastPkts",
+               CTLFLAG_RD, &sc->stat_IfHCInBroadcastPkts,
+               "Broadcast packets received");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCOutUcastPkts",
+               CTLFLAG_RD, &sc->stat_IfHCOutUcastPkts,
+               "Unicast packets sent");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCOutMulticastPkts",
+               CTLFLAG_RD, &sc->stat_IfHCOutMulticastPkts,
+               "Multicast packets sent");
+
+       SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, 
+               "stat_IfHCOutBroadcastPkts",
+               CTLFLAG_RD, &sc->stat_IfHCOutBroadcastPkts,
+               "Broadcast packets sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_emac_tx_stat_dot3statsinternalmactransmiterrors",
+               CTLFLAG_RD, &sc->stat_emac_tx_stat_dot3statsinternalmactransmiterrors,
+               0, "Internal MAC transmit errors");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_Dot3StatsCarrierSenseErrors",
+               CTLFLAG_RD, &sc->stat_Dot3StatsCarrierSenseErrors,
+               0, "Carrier sense errors");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_Dot3StatsFCSErrors",
+               CTLFLAG_RD, &sc->stat_Dot3StatsFCSErrors,
+               0, "Frame check sequence errors");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_Dot3StatsAlignmentErrors",
+               CTLFLAG_RD, &sc->stat_Dot3StatsAlignmentErrors,
+               0, "Alignment errors");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_Dot3StatsSingleCollisionFrames",
+               CTLFLAG_RD, &sc->stat_Dot3StatsSingleCollisionFrames,
+               0, "Single Collision Frames");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_Dot3StatsMultipleCollisionFrames",
+               CTLFLAG_RD, &sc->stat_Dot3StatsMultipleCollisionFrames,
+               0, "Multiple Collision Frames");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_Dot3StatsDeferredTransmissions",
+               CTLFLAG_RD, &sc->stat_Dot3StatsDeferredTransmissions,
+               0, "Deferred Transmissions");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_Dot3StatsExcessiveCollisions",
+               CTLFLAG_RD, &sc->stat_Dot3StatsExcessiveCollisions,
+               0, "Excessive Collisions");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_Dot3StatsLateCollisions",
+               CTLFLAG_RD, &sc->stat_Dot3StatsLateCollisions,
+               0, "Late Collisions");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsCollisions",
+               CTLFLAG_RD, &sc->stat_EtherStatsCollisions,
+               0, "Collisions");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsFragments",
+               CTLFLAG_RD, &sc->stat_EtherStatsFragments,
+               0, "Fragments");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsJabbers",
+               CTLFLAG_RD, &sc->stat_EtherStatsJabbers,
+               0, "Jabbers");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsUndersizePkts",
+               CTLFLAG_RD, &sc->stat_EtherStatsUndersizePkts,
+               0, "Undersize packets");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsOverrsizePkts",
+               CTLFLAG_RD, &sc->stat_EtherStatsOverrsizePkts,
+               0, "Oversize packets received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsRx64Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsRx64Octets,
+               0, "64 byte packets received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsRx65Octetsto127Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsRx65Octetsto127Octets,
+               0, "65 to 127 byte packets received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsRx128Octetsto255Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsRx128Octetsto255Octets,
+               0, "128 to 255 byte packets received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsRx256Octetsto511Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsRx256Octetsto511Octets,
+               0, "256 to 511 byte packets received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsRx512Octetsto1023Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsRx512Octetsto1023Octets,
+               0, "512 to 1023 byte packets received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsRx1024Octetsto1522Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsRx1024Octetsto1522Octets,
+               0, "1024 to 1522 byte packets received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsRx1523Octetsto9022Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsRx1523Octetsto9022Octets,
+               0, "1523 to 9022 byte packets received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsTx64Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsTx64Octets,
+               0, "64 byte packets sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsTx65Octetsto127Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsTx65Octetsto127Octets,
+               0, "65 to 127 byte packets sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsTx128Octetsto255Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsTx128Octetsto255Octets,
+               0, "128 to 255 byte packets sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsTx256Octetsto511Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsTx256Octetsto511Octets,
+               0, "256 to 511 byte packets sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsTx512Octetsto1023Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsTx512Octetsto1023Octets,
+               0, "512 to 1023 byte packets sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsTx1024Octetsto1522Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsTx1024Octetsto1522Octets,
+               0, "1024 to 1522 byte packets sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_EtherStatsPktsTx1523Octetsto9022Octets",
+               CTLFLAG_RD, &sc->stat_EtherStatsPktsTx1523Octetsto9022Octets,
+               0, "1523 to 9022 byte packets sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_XonPauseFramesReceived",
+               CTLFLAG_RD, &sc->stat_XonPauseFramesReceived,
+               0, "XON pause frames received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_XoffPauseFramesReceived",
+               CTLFLAG_RD, &sc->stat_XoffPauseFramesReceived,
+               0, "XOFF pause frames received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_OutXonSent",
+               CTLFLAG_RD, &sc->stat_OutXonSent,
+               0, "XON pause frames sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_OutXoffSent",
+               CTLFLAG_RD, &sc->stat_OutXoffSent,
+               0, "XOFF pause frames sent");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_FlowControlDone",
+               CTLFLAG_RD, &sc->stat_FlowControlDone,
+               0, "Flow control done");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_MacControlFramesReceived",
+               CTLFLAG_RD, &sc->stat_MacControlFramesReceived,
+               0, "MAC control frames received");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_XoffStateEntered",
+               CTLFLAG_RD, &sc->stat_XoffStateEntered,
+               0, "XOFF state entered");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_IfInFramesL2FilterDiscards",
+               CTLFLAG_RD, &sc->stat_IfInFramesL2FilterDiscards,
+               0, "Received L2 packets discarded");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_IfInRuleCheckerDiscards",
+               CTLFLAG_RD, &sc->stat_IfInRuleCheckerDiscards,
+               0, "Received packets discarded by rule");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_IfInFTQDiscards",
+               CTLFLAG_RD, &sc->stat_IfInFTQDiscards,
+               0, "Received packet FTQ discards");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_IfInMBUFDiscards",
+               CTLFLAG_RD, &sc->stat_IfInMBUFDiscards,
+               0, "Received packets discarded due to lack of controller buffer memory");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_IfInRuleCheckerP4Hit",
+               CTLFLAG_RD, &sc->stat_IfInRuleCheckerP4Hit,
+               0, "Received packets rule checker hits");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_CatchupInRuleCheckerDiscards",
+               CTLFLAG_RD, &sc->stat_CatchupInRuleCheckerDiscards,
+               0, "Received packets discarded in Catchup path");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_CatchupInFTQDiscards",
+               CTLFLAG_RD, &sc->stat_CatchupInFTQDiscards,
+               0, "Received packets discarded in FTQ in Catchup path");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_CatchupInMBUFDiscards",
+               CTLFLAG_RD, &sc->stat_CatchupInMBUFDiscards,
+               0, "Received packets discarded in controller buffer memory in Catchup path");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "stat_CatchupInRuleCheckerP4Hit",
+               CTLFLAG_RD, &sc->stat_CatchupInRuleCheckerP4Hit,
+               0, "Received packets rule checker hits in Catchup path");
+
+       SYSCTL_ADD_UINT(ctx, children, OID_AUTO, 
+               "com_no_buffers",
+               CTLFLAG_RD, &sc->com_no_buffers,
+               0, "Valid packets received but no RX buffers available");
+
+#ifdef BCE_DEBUG
+       SYSCTL_ADD_PROC(ctx, children, OID_AUTO,
+               "driver_state", CTLTYPE_INT | CTLFLAG_RW,
+               (void *)sc, 0,
+               bce_sysctl_driver_state, "I", "Driver state information");
+
+       SYSCTL_ADD_PROC(ctx, children, OID_AUTO,
+               "hw_state", CTLTYPE_INT | CTLFLAG_RW,
+               (void *)sc, 0,
+               bce_sysctl_hw_state, "I", "Hardware state information");
+
+       SYSCTL_ADD_PROC(ctx, children, OID_AUTO,
+               "dump_rx_chain", CTLTYPE_INT | CTLFLAG_RW,
+               (void *)sc, 0,
+               bce_sysctl_dump_rx_chain, "I", "Dump rx_bd chain");
+
+       SYSCTL_ADD_PROC(ctx, children, OID_AUTO,
+               "dump_tx_chain", CTLTYPE_INT | CTLFLAG_RW,
+               (void *)sc, 0,
+               bce_sysctl_dump_tx_chain, "I", "Dump tx_bd chain");
+
+       SYSCTL_ADD_PROC(ctx, children, OID_AUTO,
+               "breakpoint", CTLTYPE_INT | CTLFLAG_RW,
+               (void *)sc, 0,
+               bce_sysctl_breakpoint, "I", "Driver breakpoint");
+
+       SYSCTL_ADD_PROC(ctx, children, OID_AUTO,
+               "reg_read", CTLTYPE_INT | CTLFLAG_RW,
+               (void *)sc, 0,
+               bce_sysctl_reg_read, "I", "Register read");
+
+       SYSCTL_ADD_PROC(ctx, children, OID_AUTO,
+               "phy_read", CTLTYPE_INT | CTLFLAG_RW,
+               (void *)sc, 0,
+               bce_sysctl_phy_read, "I", "PHY register read");
+
+#endif
+
+}
+
+
+/****************************************************************************/
+/* BCE Debug Routines                                                       */
+/****************************************************************************/
+#ifdef BCE_DEBUG
+
+/****************************************************************************/
+/* Freezes the controller to allow for a consistent state dump.             */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_freeze_controller(struct bce_softc *sc)
+{
+       uint32_t val;
+
+       val = REG_RD(sc, BCE_MISC_COMMAND);
+       val |= BCE_MISC_COMMAND_DISABLE_ALL;
+       REG_WR(sc, BCE_MISC_COMMAND, val);
+}
+
+
+/****************************************************************************/
+/* Unfreezes the controller after a freeze operation.  This may not always  */
+/* work and the controller will require a reset!                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_unfreeze_controller(struct bce_softc *sc)
+{
+       uint32_t val;
+
+       val = REG_RD(sc, BCE_MISC_COMMAND);
+       val |= BCE_MISC_COMMAND_ENABLE_ALL;
+       REG_WR(sc, BCE_MISC_COMMAND, val);
+}
+
+
+/****************************************************************************/
+/* Prints out information about an mbuf.                                    */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_mbuf(struct bce_softc *sc, struct mbuf *m)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       uint32_t val_hi, val_lo;
+       struct mbuf *mp = m;
+
+       if (m == NULL) {
+               /* NULL mbuf pointer. */
+               if_printf(ifp, "mbuf: null pointer\n");
+               return;
+       }
+
+       while (mp) {
+               val_hi = BCE_ADDR_HI(mp);
+               val_lo = BCE_ADDR_LO(mp);
+               if_printf(ifp, "mbuf: vaddr = 0x%08X:%08X, m_len = %d, "
+                         "m_flags = ( ", val_hi, val_lo, mp->m_len);
+
+               if (mp->m_flags & M_EXT)
+                       kprintf("M_EXT ");
+               if (mp->m_flags & M_PKTHDR)
+                       kprintf("M_PKTHDR ");
+               if (mp->m_flags & M_EOR)
+                       kprintf("M_EOR ");
+#ifdef M_RDONLY
+               if (mp->m_flags & M_RDONLY)
+                       kprintf("M_RDONLY ");
+#endif
+
+               val_hi = BCE_ADDR_HI(mp->m_data);
+               val_lo = BCE_ADDR_LO(mp->m_data);
+               kprintf(") m_data = 0x%08X:%08X\n", val_hi, val_lo);
+
+               if (mp->m_flags & M_PKTHDR) {
+                       if_printf(ifp, "- m_pkthdr: flags = ( ");
+                       if (mp->m_flags & M_BCAST) 
+                               kprintf("M_BCAST ");
+                       if (mp->m_flags & M_MCAST)
+                               kprintf("M_MCAST ");
+                       if (mp->m_flags & M_FRAG)
+                               kprintf("M_FRAG ");
+                       if (mp->m_flags & M_FIRSTFRAG)
+                               kprintf("M_FIRSTFRAG ");
+                       if (mp->m_flags & M_LASTFRAG)
+                               kprintf("M_LASTFRAG ");
+#ifdef M_VLANTAG
+                       if (mp->m_flags & M_VLANTAG)
+                               kprintf("M_VLANTAG ");
+#endif
+#ifdef M_PROMISC
+                       if (mp->m_flags & M_PROMISC)
+                               kprintf("M_PROMISC ");
+#endif
+                       kprintf(") csum_flags = ( ");
+                       if (mp->m_pkthdr.csum_flags & CSUM_IP)
+                               kprintf("CSUM_IP ");
+                       if (mp->m_pkthdr.csum_flags & CSUM_TCP)
+                               kprintf("CSUM_TCP ");
+                       if (mp->m_pkthdr.csum_flags & CSUM_UDP)
+                               kprintf("CSUM_UDP ");
+                       if (mp->m_pkthdr.csum_flags & CSUM_IP_FRAGS)
+                               kprintf("CSUM_IP_FRAGS ");
+                       if (mp->m_pkthdr.csum_flags & CSUM_FRAGMENT)
+                               kprintf("CSUM_FRAGMENT ");
+#ifdef CSUM_TSO
+                       if (mp->m_pkthdr.csum_flags & CSUM_TSO)
+                               kprintf("CSUM_TSO ");
+#endif
+                       if (mp->m_pkthdr.csum_flags & CSUM_IP_CHECKED)
+                               kprintf("CSUM_IP_CHECKED ");
+                       if (mp->m_pkthdr.csum_flags & CSUM_IP_VALID)
+                               kprintf("CSUM_IP_VALID ");
+                       if (mp->m_pkthdr.csum_flags & CSUM_DATA_VALID)
+                               kprintf("CSUM_DATA_VALID ");
+                       kprintf(")\n");
+               }
+
+               if (mp->m_flags & M_EXT) {
+                       val_hi = BCE_ADDR_HI(mp->m_ext.ext_buf);
+                       val_lo = BCE_ADDR_LO(mp->m_ext.ext_buf);
+                       if_printf(ifp, "- m_ext: vaddr = 0x%08X:%08X, "
+                                 "ext_size = %d\n",
+                                 val_hi, val_lo, mp->m_ext.ext_size);
+               }
+               mp = mp->m_next;
+       }
+}
+
+
+/****************************************************************************/
+/* Prints out the mbufs in the TX mbuf chain.                               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_tx_mbuf_chain(struct bce_softc *sc, int chain_prod, int count)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       int i;
+
+       if_printf(ifp,
+       "----------------------------"
+       "  tx mbuf data  "
+       "----------------------------\n");
+
+       for (i = 0; i < count; i++) {
+               if_printf(ifp, "txmbuf[%d]\n", chain_prod);
+               bce_dump_mbuf(sc, sc->tx_mbuf_ptr[chain_prod]);
+               chain_prod = TX_CHAIN_IDX(NEXT_TX_BD(chain_prod));
+       }
+
+       if_printf(ifp,
+       "----------------------------"
+       "----------------"
+       "----------------------------\n");
+}
+
+
+/****************************************************************************/
+/* Prints out the mbufs in the RX mbuf chain.                               */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_rx_mbuf_chain(struct bce_softc *sc, int chain_prod, int count)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       int i;
+
+       if_printf(ifp,
+       "----------------------------"
+       "  rx mbuf data  "
+       "----------------------------\n");
+
+       for (i = 0; i < count; i++) {
+               if_printf(ifp, "rxmbuf[0x%04X]\n", chain_prod);
+               bce_dump_mbuf(sc, sc->rx_mbuf_ptr[chain_prod]);
+               chain_prod = RX_CHAIN_IDX(NEXT_RX_BD(chain_prod));
+       }
+
+       if_printf(ifp,
+       "----------------------------"
+       "----------------"
+       "----------------------------\n");
+}
+
+
+/****************************************************************************/
+/* Prints out a tx_bd structure.                                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_txbd(struct bce_softc *sc, int idx, struct tx_bd *txbd)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+
+       if (idx > MAX_TX_BD) {
+               /* Index out of range. */
+               if_printf(ifp, "tx_bd[0x%04X]: Invalid tx_bd index!\n", idx);
+       } else if ((idx & USABLE_TX_BD_PER_PAGE) == USABLE_TX_BD_PER_PAGE) {
+               /* TX Chain page pointer. */
+               if_printf(ifp, "tx_bd[0x%04X]: haddr = 0x%08X:%08X, "
+                         "chain page pointer\n",
+                         idx, txbd->tx_bd_haddr_hi, txbd->tx_bd_haddr_lo);
+       } else {
+               /* Normal tx_bd entry. */
+               if_printf(ifp, "tx_bd[0x%04X]: haddr = 0x%08X:%08X, "
+                         "nbytes = 0x%08X, "
+                         "vlan tag= 0x%04X, flags = 0x%04X (",
+                         idx, txbd->tx_bd_haddr_hi, txbd->tx_bd_haddr_lo,
+                         txbd->tx_bd_mss_nbytes,
+                         txbd->tx_bd_vlan_tag, txbd->tx_bd_flags);
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_CONN_FAULT)
+                       kprintf(" CONN_FAULT");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_TCP_UDP_CKSUM)
+                       kprintf(" TCP_UDP_CKSUM");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_IP_CKSUM)
+                       kprintf(" IP_CKSUM");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_VLAN_TAG)
+                       kprintf("  VLAN");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_COAL_NOW)
+                       kprintf(" COAL_NOW");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_DONT_GEN_CRC)
+                       kprintf(" DONT_GEN_CRC");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_START)
+                       kprintf(" START");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_END)
+                       kprintf(" END");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_SW_LSO)
+                       kprintf(" LSO");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_SW_OPTION_WORD)
+                       kprintf(" OPTION_WORD");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_SW_FLAGS)
+                       kprintf(" FLAGS");
+
+               if (txbd->tx_bd_flags & TX_BD_FLAGS_SW_SNAP)
+                       kprintf(" SNAP");
+
+               kprintf(" )\n");
+       }
+}
+
+
+/****************************************************************************/
+/* Prints out a rx_bd structure.                                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_rxbd(struct bce_softc *sc, int idx, struct rx_bd *rxbd)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+
+       if (idx > MAX_RX_BD) {
+               /* Index out of range. */
+               if_printf(ifp, "rx_bd[0x%04X]: Invalid rx_bd index!\n", idx);
+       } else if ((idx & USABLE_RX_BD_PER_PAGE) == USABLE_RX_BD_PER_PAGE) {
+               /* RX Chain page pointer. */
+               if_printf(ifp, "rx_bd[0x%04X]: haddr = 0x%08X:%08X, "
+                         "chain page pointer\n",
+                         idx, rxbd->rx_bd_haddr_hi, rxbd->rx_bd_haddr_lo);
+       } else {
+               /* Normal rx_bd entry. */
+               if_printf(ifp, "rx_bd[0x%04X]: haddr = 0x%08X:%08X, "
+                         "nbytes = 0x%08X, flags = 0x%08X\n",
+                         idx, rxbd->rx_bd_haddr_hi, rxbd->rx_bd_haddr_lo,
+                         rxbd->rx_bd_len, rxbd->rx_bd_flags);
+       }
+}
+
+
+/****************************************************************************/
+/* Prints out a l2_fhdr structure.                                          */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_l2fhdr(struct bce_softc *sc, int idx, struct l2_fhdr *l2fhdr)
+{
+       if_printf(&sc->arpcom.ac_if, "l2_fhdr[0x%04X]: status = 0x%08X, "
+                 "pkt_len = 0x%04X, vlan = 0x%04x, "
+                 "ip_xsum = 0x%04X, tcp_udp_xsum = 0x%04X\n",
+                 idx, l2fhdr->l2_fhdr_status,
+                 l2fhdr->l2_fhdr_pkt_len, l2fhdr->l2_fhdr_vlan_tag,
+                 l2fhdr->l2_fhdr_ip_xsum, l2fhdr->l2_fhdr_tcp_udp_xsum);
+}
+
+
+/****************************************************************************/
+/* Prints out the tx chain.                                                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_tx_chain(struct bce_softc *sc, int tx_prod, int count)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       int i;
+
+       /* First some info about the tx_bd chain structure. */
+       if_printf(ifp,
+       "----------------------------"
+       "  tx_bd  chain  "
+       "----------------------------\n");
+
+       if_printf(ifp, "page size      = 0x%08X, "
+                 "tx chain pages        = 0x%08X\n",
+                 (uint32_t)BCM_PAGE_SIZE, (uint32_t)TX_PAGES);
+
+       if_printf(ifp, "tx_bd per page = 0x%08X, "
+                 "usable tx_bd per page = 0x%08X\n",
+                 (uint32_t)TOTAL_TX_BD_PER_PAGE,
+                 (uint32_t)USABLE_TX_BD_PER_PAGE);
+
+       if_printf(ifp, "total tx_bd    = 0x%08X\n", (uint32_t)TOTAL_TX_BD);
+
+       if_printf(ifp,
+       "----------------------------"
+       "  tx_bd data    "
+       "----------------------------\n");
+
+       /* Now print out the tx_bd's themselves. */
+       for (i = 0; i < count; i++) {
+               struct tx_bd *txbd;
+
+               txbd = &sc->tx_bd_chain[TX_PAGE(tx_prod)][TX_IDX(tx_prod)];
+               bce_dump_txbd(sc, tx_prod, txbd);
+               tx_prod = TX_CHAIN_IDX(NEXT_TX_BD(tx_prod));
+       }
+
+       if_printf(ifp,
+       "----------------------------"
+       "----------------"
+       "----------------------------\n");
+}
+
+
+/****************************************************************************/
+/* Prints out the rx chain.                                                 */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_rx_chain(struct bce_softc *sc, int rx_prod, int count)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       int i;
+
+       /* First some info about the rx_bd chain structure. */
+       if_printf(ifp,
+       "----------------------------"
+       "  rx_bd  chain  "
+       "----------------------------\n");
+
+       if_printf(ifp, "page size      = 0x%08X, "
+                 "rx chain pages        = 0x%08X\n",
+                 (uint32_t)BCM_PAGE_SIZE, (uint32_t)RX_PAGES);
+
+       if_printf(ifp, "rx_bd per page = 0x%08X, "
+                 "usable rx_bd per page = 0x%08X\n",
+                 (uint32_t)TOTAL_RX_BD_PER_PAGE,
+                 (uint32_t)USABLE_RX_BD_PER_PAGE);
+
+       if_printf(ifp, "total rx_bd    = 0x%08X\n", (uint32_t)TOTAL_RX_BD);
+
+       if_printf(ifp,
+       "----------------------------"
+       "   rx_bd data   "
+       "----------------------------\n");
+
+       /* Now print out the rx_bd's themselves. */
+       for (i = 0; i < count; i++) {
+               struct rx_bd *rxbd;
+
+               rxbd = &sc->rx_bd_chain[RX_PAGE(rx_prod)][RX_IDX(rx_prod)];
+               bce_dump_rxbd(sc, rx_prod, rxbd);
+               rx_prod = RX_CHAIN_IDX(NEXT_RX_BD(rx_prod));
+       }
+
+       if_printf(ifp,
+       "----------------------------"
+       "----------------"
+       "----------------------------\n");
+}
+
+
+/****************************************************************************/
+/* Prints out the status block from host memory.                            */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_status_block(struct bce_softc *sc)
+{
+       struct status_block *sblk = sc->status_block;
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+
+       if_printf(ifp,
+       "----------------------------"
+       "  Status Block  "
+       "----------------------------\n");
+
+       if_printf(ifp, "    0x%08X - attn_bits\n", sblk->status_attn_bits);
+
+       if_printf(ifp, "    0x%08X - attn_bits_ack\n",
+                 sblk->status_attn_bits_ack);
+
+       if_printf(ifp, "0x%04X(0x%04X) - rx_cons0\n",
+           sblk->status_rx_quick_consumer_index0,
+           (uint16_t)RX_CHAIN_IDX(sblk->status_rx_quick_consumer_index0));
+
+       if_printf(ifp, "0x%04X(0x%04X) - tx_cons0\n",
+           sblk->status_tx_quick_consumer_index0,
+           (uint16_t)TX_CHAIN_IDX(sblk->status_tx_quick_consumer_index0));
+
+       if_printf(ifp, "        0x%04X - status_idx\n", sblk->status_idx);
+
+       /* These indices are not used for normal L2 drivers. */
+       if (sblk->status_rx_quick_consumer_index1) {
+               if_printf(ifp, "0x%04X(0x%04X) - rx_cons1\n",
+               sblk->status_rx_quick_consumer_index1,
+               (uint16_t)RX_CHAIN_IDX(sblk->status_rx_quick_consumer_index1));
+       }
+
+       if (sblk->status_tx_quick_consumer_index1) {
+               if_printf(ifp, "0x%04X(0x%04X) - tx_cons1\n",
+               sblk->status_tx_quick_consumer_index1,
+               (uint16_t)TX_CHAIN_IDX(sblk->status_tx_quick_consumer_index1));
+       }
+
+       if (sblk->status_rx_quick_consumer_index2) {
+               if_printf(ifp, "0x%04X(0x%04X)- rx_cons2\n",
+               sblk->status_rx_quick_consumer_index2,
+               (uint16_t)RX_CHAIN_IDX(sblk->status_rx_quick_consumer_index2));
+       }
+
+       if (sblk->status_tx_quick_consumer_index2) {
+               if_printf(ifp, "0x%04X(0x%04X) - tx_cons2\n",
+               sblk->status_tx_quick_consumer_index2,
+               (uint16_t)TX_CHAIN_IDX(sblk->status_tx_quick_consumer_index2));
+       }
+
+       if (sblk->status_rx_quick_consumer_index3) {
+               if_printf(ifp, "0x%04X(0x%04X) - rx_cons3\n",
+               sblk->status_rx_quick_consumer_index3,
+               (uint16_t)RX_CHAIN_IDX(sblk->status_rx_quick_consumer_index3));
+       }
+
+       if (sblk->status_tx_quick_consumer_index3) {
+               if_printf(ifp, "0x%04X(0x%04X) - tx_cons3\n",
+               sblk->status_tx_quick_consumer_index3,
+               (uint16_t)TX_CHAIN_IDX(sblk->status_tx_quick_consumer_index3));
+       }
+
+       if (sblk->status_rx_quick_consumer_index4 ||
+           sblk->status_rx_quick_consumer_index5) {
+               if_printf(ifp, "rx_cons4  = 0x%08X, rx_cons5      = 0x%08X\n",
+                         sblk->status_rx_quick_consumer_index4,
+                         sblk->status_rx_quick_consumer_index5);
+       }
+
+       if (sblk->status_rx_quick_consumer_index6 ||
+           sblk->status_rx_quick_consumer_index7) {
+               if_printf(ifp, "rx_cons6  = 0x%08X, rx_cons7      = 0x%08X\n",
+                         sblk->status_rx_quick_consumer_index6,
+                         sblk->status_rx_quick_consumer_index7);
+       }
+
+       if (sblk->status_rx_quick_consumer_index8 ||
+           sblk->status_rx_quick_consumer_index9) {
+               if_printf(ifp, "rx_cons8  = 0x%08X, rx_cons9      = 0x%08X\n",
+                         sblk->status_rx_quick_consumer_index8,
+                         sblk->status_rx_quick_consumer_index9);
+       }
+
+       if (sblk->status_rx_quick_consumer_index10 ||
+           sblk->status_rx_quick_consumer_index11) {
+               if_printf(ifp, "rx_cons10 = 0x%08X, rx_cons11     = 0x%08X\n",
+                         sblk->status_rx_quick_consumer_index10,
+                         sblk->status_rx_quick_consumer_index11);
+       }
+
+       if (sblk->status_rx_quick_consumer_index12 ||
+           sblk->status_rx_quick_consumer_index13) {
+               if_printf(ifp, "rx_cons12 = 0x%08X, rx_cons13     = 0x%08X\n",
+                         sblk->status_rx_quick_consumer_index12,
+                         sblk->status_rx_quick_consumer_index13);
+       }
+
+       if (sblk->status_rx_quick_consumer_index14 ||
+           sblk->status_rx_quick_consumer_index15) {
+               if_printf(ifp, "rx_cons14 = 0x%08X, rx_cons15     = 0x%08X\n",
+                         sblk->status_rx_quick_consumer_index14,
+                         sblk->status_rx_quick_consumer_index15);
+       }
+
+       if (sblk->status_completion_producer_index ||
+           sblk->status_cmd_consumer_index) {
+               if_printf(ifp, "com_prod  = 0x%08X, cmd_cons      = 0x%08X\n",
+                         sblk->status_completion_producer_index,
+                         sblk->status_cmd_consumer_index);
+       }
+
+       if_printf(ifp,
+       "----------------------------"
+       "----------------"
+       "----------------------------\n");
+}
+
+
+/****************************************************************************/
+/* Prints out the statistics block.                                         */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_stats_block(struct bce_softc *sc)
+{
+       struct statistics_block *sblk = sc->stats_block;
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+
+       if_printf(ifp,
+       "---------------"
+       " Stats Block  (All Stats Not Shown Are 0) "
+       "---------------\n");
+
+       if (sblk->stat_IfHCInOctets_hi || sblk->stat_IfHCInOctets_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcInOctets\n",
+                         sblk->stat_IfHCInOctets_hi,
+                         sblk->stat_IfHCInOctets_lo);
+       }
+
+       if (sblk->stat_IfHCInBadOctets_hi || sblk->stat_IfHCInBadOctets_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcInBadOctets\n",
+                         sblk->stat_IfHCInBadOctets_hi,
+                         sblk->stat_IfHCInBadOctets_lo);
+       }
+
+       if (sblk->stat_IfHCOutOctets_hi || sblk->stat_IfHCOutOctets_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcOutOctets\n",
+                         sblk->stat_IfHCOutOctets_hi,
+                         sblk->stat_IfHCOutOctets_lo);
+       }
+
+       if (sblk->stat_IfHCOutBadOctets_hi || sblk->stat_IfHCOutBadOctets_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcOutBadOctets\n",
+                         sblk->stat_IfHCOutBadOctets_hi,
+                         sblk->stat_IfHCOutBadOctets_lo);
+       }
+
+       if (sblk->stat_IfHCInUcastPkts_hi || sblk->stat_IfHCInUcastPkts_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcInUcastPkts\n",
+                         sblk->stat_IfHCInUcastPkts_hi,
+                         sblk->stat_IfHCInUcastPkts_lo);
+       }
+
+       if (sblk->stat_IfHCInBroadcastPkts_hi ||
+           sblk->stat_IfHCInBroadcastPkts_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcInBroadcastPkts\n",
+                         sblk->stat_IfHCInBroadcastPkts_hi,
+                         sblk->stat_IfHCInBroadcastPkts_lo);
+       }
+
+       if (sblk->stat_IfHCInMulticastPkts_hi ||
+           sblk->stat_IfHCInMulticastPkts_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcInMulticastPkts\n",
+                         sblk->stat_IfHCInMulticastPkts_hi,
+                         sblk->stat_IfHCInMulticastPkts_lo);
+       }
+
+       if (sblk->stat_IfHCOutUcastPkts_hi || sblk->stat_IfHCOutUcastPkts_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcOutUcastPkts\n",
+                         sblk->stat_IfHCOutUcastPkts_hi,
+                         sblk->stat_IfHCOutUcastPkts_lo);
+       }
+
+       if (sblk->stat_IfHCOutBroadcastPkts_hi ||
+           sblk->stat_IfHCOutBroadcastPkts_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcOutBroadcastPkts\n",
+                         sblk->stat_IfHCOutBroadcastPkts_hi,
+                         sblk->stat_IfHCOutBroadcastPkts_lo);
+       }
+
+       if (sblk->stat_IfHCOutMulticastPkts_hi ||
+           sblk->stat_IfHCOutMulticastPkts_lo) {
+               if_printf(ifp, "0x%08X:%08X : IfHcOutMulticastPkts\n",
+                         sblk->stat_IfHCOutMulticastPkts_hi,
+                         sblk->stat_IfHCOutMulticastPkts_lo);
+       }
+
+       if (sblk->stat_emac_tx_stat_dot3statsinternalmactransmiterrors) {
+               if_printf(ifp, "         0x%08X : "
+               "emac_tx_stat_dot3statsinternalmactransmiterrors\n", 
+               sblk->stat_emac_tx_stat_dot3statsinternalmactransmiterrors);
+       }
+
+       if (sblk->stat_Dot3StatsCarrierSenseErrors) {
+               if_printf(ifp, "         0x%08X : "
+                         "Dot3StatsCarrierSenseErrors\n",
+                         sblk->stat_Dot3StatsCarrierSenseErrors);
+       }
+
+       if (sblk->stat_Dot3StatsFCSErrors) {
+               if_printf(ifp, "         0x%08X : Dot3StatsFCSErrors\n",
+                         sblk->stat_Dot3StatsFCSErrors);
+       }
+
+       if (sblk->stat_Dot3StatsAlignmentErrors) {
+               if_printf(ifp, "         0x%08X : Dot3StatsAlignmentErrors\n",
+                         sblk->stat_Dot3StatsAlignmentErrors);
+       }
+
+       if (sblk->stat_Dot3StatsSingleCollisionFrames) {
+               if_printf(ifp, "         0x%08X : "
+                         "Dot3StatsSingleCollisionFrames\n",
+                         sblk->stat_Dot3StatsSingleCollisionFrames);
+       }
+
+       if (sblk->stat_Dot3StatsMultipleCollisionFrames) {
+               if_printf(ifp, "         0x%08X : "
+                         "Dot3StatsMultipleCollisionFrames\n",
+                         sblk->stat_Dot3StatsMultipleCollisionFrames);
+       }
+
+       if (sblk->stat_Dot3StatsDeferredTransmissions) {
+               if_printf(ifp, "         0x%08X : "
+                         "Dot3StatsDeferredTransmissions\n",
+                         sblk->stat_Dot3StatsDeferredTransmissions);
+       }
+
+       if (sblk->stat_Dot3StatsExcessiveCollisions) {
+               if_printf(ifp, "         0x%08X : "
+                         "Dot3StatsExcessiveCollisions\n",
+                         sblk->stat_Dot3StatsExcessiveCollisions);
+       }
+
+       if (sblk->stat_Dot3StatsLateCollisions) {
+               if_printf(ifp, "         0x%08X : Dot3StatsLateCollisions\n",
+                         sblk->stat_Dot3StatsLateCollisions);
+       }
+
+       if (sblk->stat_EtherStatsCollisions) {
+               if_printf(ifp, "         0x%08X : EtherStatsCollisions\n",
+                         sblk->stat_EtherStatsCollisions);
+       }
+
+       if (sblk->stat_EtherStatsFragments) {
+               if_printf(ifp, "         0x%08X : EtherStatsFragments\n",
+                         sblk->stat_EtherStatsFragments);
+       }
+
+       if (sblk->stat_EtherStatsJabbers) {
+               if_printf(ifp, "         0x%08X : EtherStatsJabbers\n",
+                         sblk->stat_EtherStatsJabbers);
+       }
+
+       if (sblk->stat_EtherStatsUndersizePkts) {
+               if_printf(ifp, "         0x%08X : EtherStatsUndersizePkts\n",
+                         sblk->stat_EtherStatsUndersizePkts);
+       }
+
+       if (sblk->stat_EtherStatsOverrsizePkts) {
+               if_printf(ifp, "         0x%08X : EtherStatsOverrsizePkts\n",
+                         sblk->stat_EtherStatsOverrsizePkts);
+       }
+
+       if (sblk->stat_EtherStatsPktsRx64Octets) {
+               if_printf(ifp, "         0x%08X : EtherStatsPktsRx64Octets\n",
+                         sblk->stat_EtherStatsPktsRx64Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsRx65Octetsto127Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsRx65Octetsto127Octets\n",
+                         sblk->stat_EtherStatsPktsRx65Octetsto127Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsRx128Octetsto255Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsRx128Octetsto255Octets\n",
+                         sblk->stat_EtherStatsPktsRx128Octetsto255Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsRx256Octetsto511Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsRx256Octetsto511Octets\n",
+                         sblk->stat_EtherStatsPktsRx256Octetsto511Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsRx512Octetsto1023Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsRx512Octetsto1023Octets\n",
+                         sblk->stat_EtherStatsPktsRx512Octetsto1023Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsRx1024Octetsto1522Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsRx1024Octetsto1522Octets\n",
+                         sblk->stat_EtherStatsPktsRx1024Octetsto1522Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsRx1523Octetsto9022Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsRx1523Octetsto9022Octets\n",
+                         sblk->stat_EtherStatsPktsRx1523Octetsto9022Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsTx64Octets) {
+               if_printf(ifp, "         0x%08X : EtherStatsPktsTx64Octets\n",
+                         sblk->stat_EtherStatsPktsTx64Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsTx65Octetsto127Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsTx65Octetsto127Octets\n",
+                         sblk->stat_EtherStatsPktsTx65Octetsto127Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsTx128Octetsto255Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsTx128Octetsto255Octets\n",
+                         sblk->stat_EtherStatsPktsTx128Octetsto255Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsTx256Octetsto511Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsTx256Octetsto511Octets\n",
+                         sblk->stat_EtherStatsPktsTx256Octetsto511Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsTx512Octetsto1023Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsTx512Octetsto1023Octets\n",
+                         sblk->stat_EtherStatsPktsTx512Octetsto1023Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsTx1024Octetsto1522Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsTx1024Octetsto1522Octets\n",
+                         sblk->stat_EtherStatsPktsTx1024Octetsto1522Octets);
+       }
+
+       if (sblk->stat_EtherStatsPktsTx1523Octetsto9022Octets) {
+               if_printf(ifp, "         0x%08X : "
+                         "EtherStatsPktsTx1523Octetsto9022Octets\n",
+                         sblk->stat_EtherStatsPktsTx1523Octetsto9022Octets);
+       }
+
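+       /* 802.3x flow control (PAUSE frame) and MAC control statistics. */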
+       if (sblk->stat_XonPauseFramesReceived) {
+               if_printf(ifp, "         0x%08X : XonPauseFramesReceived\n",
+                         sblk->stat_XonPauseFramesReceived);
+       }
+
+       if (sblk->stat_XoffPauseFramesReceived) {
+               if_printf(ifp, "          0x%08X : XoffPauseFramesReceived\n",
+                         sblk->stat_XoffPauseFramesReceived);
+       }
+
+       if (sblk->stat_OutXonSent) {
+               if_printf(ifp, "         0x%08X : OutXoffSent\n",
+                         sblk->stat_OutXonSent);
+       }
+
+       if (sblk->stat_OutXoffSent) {
+               if_printf(ifp, "         0x%08X : OutXoffSent\n",
+                         sblk->stat_OutXoffSent);
+       }
+
+       if (sblk->stat_FlowControlDone) {
+               if_printf(ifp, "         0x%08X : FlowControlDone\n",
+                         sblk->stat_FlowControlDone);
+       }
+
+       if (sblk->stat_MacControlFramesReceived) {
+               if_printf(ifp, "         0x%08X : MacControlFramesReceived\n",
+                         sblk->stat_MacControlFramesReceived);
+       }
+
+       if (sblk->stat_XoffStateEntered) {
+               if_printf(ifp, "         0x%08X : XoffStateEntered\n",
+                         sblk->stat_XoffStateEntered);
+       }
+
+       if (sblk->stat_IfInFramesL2FilterDiscards) {
+               if_printf(ifp, "         0x%08X : IfInFramesL2FilterDiscards\n",                          sblk->stat_IfInFramesL2FilterDiscards);
+       }
+
+       if (sblk->stat_IfInRuleCheckerDiscards) {
+               if_printf(ifp, "         0x%08X : IfInRuleCheckerDiscards\n",
+                         sblk->stat_IfInRuleCheckerDiscards);
+       }
+
+       if (sblk->stat_IfInFTQDiscards) {
+               if_printf(ifp, "         0x%08X : IfInFTQDiscards\n",
+                         sblk->stat_IfInFTQDiscards);
+       }
+
+       if (sblk->stat_IfInMBUFDiscards) {
+               if_printf(ifp, "         0x%08X : IfInMBUFDiscards\n",
+                         sblk->stat_IfInMBUFDiscards);
+       }
+
+       if (sblk->stat_IfInRuleCheckerP4Hit) {
+               if_printf(ifp, "         0x%08X : IfInRuleCheckerP4Hit\n",
+                         sblk->stat_IfInRuleCheckerP4Hit);
+       }
+
+       if (sblk->stat_CatchupInRuleCheckerDiscards) {
+               if_printf(ifp, "         0x%08X : "
+                         "CatchupInRuleCheckerDiscards\n",
+                         sblk->stat_CatchupInRuleCheckerDiscards);
+       }
+
+       if (sblk->stat_CatchupInFTQDiscards) {
+               if_printf(ifp, "         0x%08X : CatchupInFTQDiscards\n",
+                         sblk->stat_CatchupInFTQDiscards);
+       }
+
+       if (sblk->stat_CatchupInMBUFDiscards) {
+               if_printf(ifp, "         0x%08X : CatchupInMBUFDiscards\n",
+                         sblk->stat_CatchupInMBUFDiscards);
+       }
+
+       if (sblk->stat_CatchupInRuleCheckerP4Hit) {
+               if_printf(ifp, "         0x%08X : CatchupInRuleCheckerP4Hit\n",
+                         sblk->stat_CatchupInRuleCheckerP4Hit);
+       }
+
+       if_printf(ifp,
+       "----------------------------"
+       "----------------"
+       "----------------------------\n");
+}
+
+
+/****************************************************************************/
+/* Prints out a summary of the driver state.                                */
+/*                                                                          */
+/* Returns:                                                                 */
+/*   Nothing.                                                               */
+/****************************************************************************/
+static void
+bce_dump_driver_state(struct bce_softc *sc)
+{
+       struct ifnet *ifp = &sc->arpcom.ac_if;
+       uint32_t val_hi, val_lo;
+
+       if_printf(ifp,
+       "-----------------------------"
+       " Driver State "
+       "-----------------------------\n");
+
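+       /*
+        * BCE_ADDR_HI()/BCE_ADDR_LO() split a (possibly 64-bit) kernel
+        * virtual address into two 32-bit halves for the 0x%08X:%08X output.
+        */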
+       val_hi = BCE_ADDR_HI(sc);
+       val_lo = BCE_ADDR_LO(sc);
+       if_printf(ifp, "0x%08X:%08X - (sc) driver softc structure "
+                 "virtual address\n", val_hi, val_lo);
+
+       val_hi = BCE_ADDR_HI(sc->status_block);
+       val_lo = BCE_ADDR_LO(sc->status_block);
+       if_printf(ifp, "0x%08X:%08X - (sc->status_block) status block "
+                 "virtual address\n", val_hi, val_lo);
+
+       val_hi = BCE_ADDR_HI(sc->stats_block);
+       val_lo = BCE_ADDR_LO(sc->stats_block);
+       if_printf(ifp, "0x%08X:%08X - (sc->stats_block) statistics block "
+                 "virtual address\n", val_hi, val_lo);
+
+       val_hi = BCE_ADDR_HI(sc->tx_bd_chain);
+       val_lo = BCE_ADDR_LO(sc->tx_bd_chain);
+       if_printf(ifp, "0x%08X:%08X - (sc->tx_bd_chain) tx_bd chain "
+                 "virtual adddress\n", val_hi, val_lo);
+
+       val_hi = BCE_ADDR_HI(sc->rx_bd_chain);
+       val_lo = BCE_ADDR_LO(sc->rx_bd_chain);
+       if_printf(ifp, "0x%08X:%08X - (sc->rx_bd_chain) rx_bd chain "
+                 "virtual address\n", val_hi, val_lo);
+
+       val_hi = BCE_ADDR_HI(sc->tx_mbuf_ptr);
+       val_lo = BCE_ADDR_LO(sc->tx_mbuf_ptr);
+       if_printf(ifp, "0x%08X:%08X - (sc->tx_mbuf_ptr) tx mbuf chain "
+                 "virtual address\n", val_hi, val_lo);
+
+       val_hi = BCE_ADDR_HI(sc->rx_mbuf_ptr);
+       val_lo = BCE_ADDR_LO(sc->rx_mbuf_ptr);
+       if_printf(ifp, "0x%08X:%08X - (sc->rx_mbuf_ptr) rx mbuf chain "
+                 "virtual address\n", val_hi, val_lo);
+
+       if_printf(ifp, "         0x%08X - (sc->interrupts_generated) "
+                 "h/w intrs\n", sc->interrupts_generated);
+
+       if_printf(ifp, "         0x%08X - (sc->rx_interrupts) "
+                 "rx interrupts handled\n", sc->rx_interrupts);
+
+       if_printf(ifp, "         0x%08X - (sc->tx_interrupts) "
+                 "tx interrupts handled\n", sc->tx_interrupts);
+
+       if_printf(ifp, "         0x%08X - (sc->last_status_idx) "
+                 "status block index\n", sc->last_status_idx);
+
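+       /*
+        * Ring indices are printed both raw and masked through
+        * TX_CHAIN_IDX()/RX_CHAIN_IDX(), i.e. the position within the chain.
+        */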
+       if_printf(ifp, "     0x%04X(0x%04X) - (sc->tx_prod) "
+                 "tx producer index\n",
+                 sc->tx_prod, (uint16_t)TX_CHAIN_IDX(sc->tx_prod));
+
+       if_printf(ifp, "     0x%04X(0x%04X) - (sc->tx_cons) "
+                 "tx consumer index\n",
+                 sc->tx_cons, (uint16_t)TX_CHAIN_IDX(sc->tx_cons));
+
+       if_printf(ifp, "         0x%08X - (sc->tx_prod_bseq) "
+                 "tx producer bseq index\n", sc->tx_prod_bseq);
+
+       if_printf(ifp, "     0x%04X(0x%04X) - (sc->rx_prod) "
+                 "rx producer index\n",
+                 sc->rx_prod, (uint16_t)RX_CHAIN_IDX(sc->rx_prod));
+
+       if_printf(ifp, "     0x%04X(0x%04X) - (sc->rx_cons) "
+                 "rx consumer index\n",
+                 sc->rx_cons, (uint16_t)RX_CHAIN_IDX(sc->rx_cons));
+
+       if_printf(ifp, "         0x%08X - (sc->rx_prod_bseq) "
+                 "rx producer bseq index\n", sc->rx_prod_bseq);
+
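+       /*
+        * mbuf and buffer descriptor accounting; watermarks are shown
+        * against the ring size (max_rx_bd/max_tx_bd).
+        */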
+       if_printf(ifp, "         0x%08X - (sc->rx_mbuf_alloc) "
+                 "rx mbufs allocated\n", sc->rx_mbuf_alloc);
+
+       if_printf(ifp, "         0x%08X - (sc->free_rx_bd) "
+                 "free rx_bd's\n", sc->free_rx_bd);
+
+       if_printf(ifp, "0x%08X/%08X - (sc->rx_low_watermark) rx "
+                 "low watermark\n", sc->rx_low_watermark, sc->max_rx_bd);
+
+       if_printf(ifp, "         0x%08X - (sc->txmbuf_alloc) "
+                 "tx mbufs allocated\n", sc->tx_mbuf_alloc);
+
+       if_printf(ifp, "         0x%08X - (sc->rx_mbuf_alloc) "
+                 "rx mbufs allocated\n", sc->rx_mbuf_alloc);
+
+       if_printf(ifp, "         0x%08X - (sc->used_tx_bd) used tx_bd's\n",
+                 sc->used_tx_bd);
+
+       if_printf(ifp, "0x%08X/%08X - (sc->tx_hi_watermark) tx hi watermark\n",
+                 sc->tx_hi_watermark, sc->max_tx_bd);
+
+       if_printf(ifp, "         0x%08X - (sc->mbuf_alloc_failed) "
+                 "failed mbuf alloc\n", sc->mbuf_alloc_failed);
+
+       if_printf(ifp,