author	Linus Torvalds <torvalds@ppc970.osdl.org>	2005-04-16 15:20:36 -0700
committer	Linus Torvalds <torvalds@ppc970.osdl.org>	2005-04-16 15:20:36 -0700
commit	1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 (patch)
tree	0bba044c4ce775e45a88a51686b5d9f90697ea9d /drivers/atm
tags	Linux-2.6.12-rc2, v2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!
Diffstat (limited to 'drivers/atm')
46 files changed, 46840 insertions, 0 deletions
diff --git a/drivers/atm/Kconfig b/drivers/atm/Kconfig new file mode 100644 index 000000000000..489de81ea609 --- /dev/null +++ b/drivers/atm/Kconfig @@ -0,0 +1,448 @@ +# +# ATM device configuration +# + +menu "ATM drivers" + depends on NETDEVICES && ATM + +config ATM_TCP + tristate "ATM over TCP" + depends on INET && ATM + help + ATM over TCP driver. Useful mainly for development and for + experiments. If unsure, say N. + +config ATM_LANAI + tristate "Efficient Networks Speedstream 3010" + depends on PCI && ATM + help + Supports ATM cards based on the Efficient Networks "Lanai" + chipset such as the Speedstream 3010 and the ENI-25p. The + Speedstream 3060 is currently not supported since we don't + have the code to drive the on-board Alcatel DSL chipset (yet). + +config ATM_ENI + tristate "Efficient Networks ENI155P" + depends on PCI && ATM + ---help--- + Driver for the Efficient Networks ENI155p series and SMC ATM + Power155 155 Mbps ATM adapters. Both, the versions with 512KB and + 2MB on-board RAM (Efficient calls them "C" and "S", respectively), + and the FPGA and the ASIC Tonga versions of the board are supported. + The driver works with MMF (-MF or ...F) and UTP-5 (-U5 or ...D) + adapters. + + To compile this driver as a module, choose M here: the module will + be called eni. + +config ATM_ENI_DEBUG + bool "Enable extended debugging" + depends on ATM_ENI + help + Extended debugging records various events and displays that list + when an inconsistency is detected. This mechanism is faster than + generally using printks, but still has some impact on performance. + Note that extended debugging may create certain race conditions + itself. Enable this ONLY if you suspect problems with the driver. + +config ATM_ENI_TUNE_BURST + bool "Fine-tune burst settings" + depends on ATM_ENI + ---help--- + In order to obtain good throughput, the ENI NIC can transfer + multiple words of data per PCI bus access cycle. Such a multi-word + transfer is called a burst. 
+ + The default settings for the burst sizes are suitable for most PCI + chipsets. However, in some cases, large bursts may overrun buffers + in the PCI chipset and cause data corruption. In such cases, large + bursts must be disabled and only (slower) small bursts can be used. + The burst sizes can be set independently in the send (TX) and + receive (RX) direction. + + Note that enabling many different burst sizes in the same direction + may increase the cost of setting up a transfer such that the + resulting throughput is lower than when using only the largest + available burst size. + + Also, sometimes larger bursts lead to lower throughput, e.g. on an + Intel 440FX board, a drop from 135 Mbps to 103 Mbps was observed + when going from 8W to 16W bursts. + +config ATM_ENI_BURST_TX_16W + bool "Enable 16W TX bursts (discouraged)" + depends on ATM_ENI_TUNE_BURST + help + Burst sixteen words at once in the send direction. This may work + with recent PCI chipsets, but is known to fail with older chipsets. + +config ATM_ENI_BURST_TX_8W + bool "Enable 8W TX bursts (recommended)" + depends on ATM_ENI_TUNE_BURST + help + Burst eight words at once in the send direction. This is the default + setting. + +config ATM_ENI_BURST_TX_4W + bool "Enable 4W TX bursts (optional)" + depends on ATM_ENI_TUNE_BURST + help + Burst four words at once in the send direction. You may want to try + this if you have disabled 8W bursts. Enabling 4W if 8W is also set + may or may not improve throughput. + +config ATM_ENI_BURST_TX_2W + bool "Enable 2W TX bursts (optional)" + depends on ATM_ENI_TUNE_BURST + help + Burst two words at once in the send direction. You may want to try + this if you have disabled 4W and 8W bursts. Enabling 2W if 4W or 8W + are also set may or may not improve throughput. + +config ATM_ENI_BURST_RX_16W + bool "Enable 16W RX bursts (discouraged)" + depends on ATM_ENI_TUNE_BURST + help + Burst sixteen words at once in the receive direction. 
This may work + with recent PCI chipsets, but is known to fail with older chipsets. + +config ATM_ENI_BURST_RX_8W + bool "Enable 8W RX bursts (discouraged)" + depends on ATM_ENI_TUNE_BURST + help + Burst eight words at once in the receive direction. This may work + with recent PCI chipsets, but is known to fail with older chipsets, + such as the Intel Neptune series. + +config ATM_ENI_BURST_RX_4W + bool "Enable 4W RX bursts (recommended)" + depends on ATM_ENI_TUNE_BURST + help + Burst four words at once in the receive direction. This is the + default setting. Enabling 4W if 8W is also set may or may not + improve throughput. + +config ATM_ENI_BURST_RX_2W + bool "Enable 2W RX bursts (optional)" + depends on ATM_ENI_TUNE_BURST + help + Burst two words at once in the receive direction. You may want to + try this if you have disabled 4W and 8W bursts. Enabling 2W if 4W or + 8W are also set may or may not improve throughput. + +config ATM_FIRESTREAM + tristate "Fujitsu FireStream (FS50/FS155) " + depends on PCI && ATM + help + Driver for the Fujitsu FireStream 155 (MB86697) and + FireStream 50 (MB86695) ATM PCI chips. + + To compile this driver as a module, choose M here: the module will + be called firestream. + +config ATM_ZATM + tristate "ZeitNet ZN1221/ZN1225" + depends on PCI && ATM + help + Driver for the ZeitNet ZN1221 (MMF) and ZN1225 (UTP-5) 155 Mbps ATM + adapters. + + To compile this driver as a module, choose M here: the module will + be called zatm. + +config ATM_ZATM_DEBUG + bool "Enable extended debugging" + depends on ATM_ZATM + help + Extended debugging records various events and displays that list + when an inconsistency is detected. This mechanism is faster than + generally using printks, but still has some impact on performance. + Note that extended debugging may create certain race conditions + itself. Enable this ONLY if you suspect problems with the driver. 
+ +# bool 'Rolfs TI TNETA1570' CONFIG_ATM_TNETA1570 y +# if [ "$CONFIG_ATM_TNETA1570" = "y" ]; then +# bool ' Enable extended debugging' CONFIG_ATM_TNETA1570_DEBUG n +# fi +config ATM_NICSTAR + tristate "IDT 77201 (NICStAR) (ForeRunnerLE)" + depends on PCI && ATM && !64BIT + help + The NICStAR chipset family is used in a large number of ATM NICs for + 25 and for 155 Mbps, including IDT cards and the Fore ForeRunnerLE + series. Say Y if you have one of those. + + To compile this driver as a module, choose M here: the module will + be called nicstar. + +config ATM_NICSTAR_USE_SUNI + bool "Use suni PHY driver (155Mbps)" + depends on ATM_NICSTAR + help + Support for the S-UNI and compatible PHYsical layer chips. These are + found in most 155Mbps NICStAR based ATM cards, namely in the + ForeRunner LE155 cards. This driver provides detection of cable + removal and reinsertion and provides some statistics. This driver + doesn't have removal capability when compiled as a module, so if you + need that capability don't include S-UNI support (it's not needed to + make the card work). + +config ATM_NICSTAR_USE_IDT77105 + bool "Use IDT77105 PHY driver (25Mbps)" + depends on ATM_NICSTAR + help + Support for the PHYsical layer chip in ForeRunner LE25 cards. In + addition to cable removal/reinsertion detection, this driver allows + you to control the loopback mode of the chip via a dedicated IOCTL. + This driver is required for proper handling of temporary carrier + loss, so if you have a 25Mbps NICStAR based ATM card you must say Y. + +config ATM_IDT77252 + tristate "IDT 77252 (NICStAR II)" + depends on PCI && ATM + help + Driver for the IDT 77252 ATM PCI chips. + + To compile this driver as a module, choose M here: the module will + be called idt77252. + +config ATM_IDT77252_DEBUG + bool "Enable debugging messages" + depends on ATM_IDT77252 + help + Somewhat useful debugging messages are available. The choice of + messages is controlled by a bitmap.
This may be specified as a + module argument. See the file <file:drivers/atm/idt77252.h> for + the meanings of the bits in the mask. + + When active, these messages can have a significant impact on the + speed of the driver, and the size of your syslog files! When + inactive, they will have only a modest impact on performance. + +config ATM_IDT77252_RCV_ALL + bool "Receive ALL cells in raw queue" + depends on ATM_IDT77252 + help + Enable receiving of all cells on the ATM link, that do not match + an open connection in the raw cell queue of the driver. Useful + for debugging or special applications only, so the safe answer is N. + +config ATM_IDT77252_USE_SUNI + bool + depends on ATM_IDT77252 + default y + +config ATM_AMBASSADOR + tristate "Madge Ambassador (Collage PCI 155 Server)" + depends on PCI && ATM + help + This is a driver for ATMizer based ATM card produced by Madge + Networks Ltd. Say Y (or M to compile as a module named ambassador) + here if you have one of these cards. + +config ATM_AMBASSADOR_DEBUG + bool "Enable debugging messages" + depends on ATM_AMBASSADOR + ---help--- + Somewhat useful debugging messages are available. The choice of + messages is controlled by a bitmap. This may be specified as a + module argument (kernel command line argument as well?), changed + dynamically using an ioctl (not yet) or changed by sending the + string "Dxxxx" to VCI 1023 (where x is a hex digit). See the file + <file:drivers/atm/ambassador.h> for the meanings of the bits in the + mask. + + When active, these messages can have a significant impact on the + speed of the driver, and the size of your syslog files! When + inactive, they will have only a modest impact on performance. + +config ATM_HORIZON + tristate "Madge Horizon [Ultra] (Collage PCI 25 and Collage PCI 155 Client)" + depends on PCI && ATM + help + This is a driver for the Horizon chipset ATM adapter cards once + produced by Madge Networks Ltd. 
Say Y (or M to compile as a module + named horizon) here if you have one of these cards. + +config ATM_HORIZON_DEBUG + bool "Enable debugging messages" + depends on ATM_HORIZON + ---help--- + Somewhat useful debugging messages are available. The choice of + messages is controlled by a bitmap. This may be specified as a + module argument (kernel command line argument as well?), changed + dynamically using an ioctl (not yet) or changed by sending the + string "Dxxxx" to VCI 1023 (where x is a hex digit). See the file + <file:drivers/atm/horizon.h> for the meanings of the bits in the + mask. + + When active, these messages can have a significant impact on the + speed of the driver, and the size of your syslog files! When + inactive, they will have only a modest impact on performance. + +config ATM_IA + tristate "Interphase ATM PCI x575/x525/x531" + depends on PCI && ATM && !64BIT + ---help--- + This is a driver for the Interphase (i)ChipSAR adapter cards + which include a variety of variants in term of the size of the + control memory (128K-1KVC, 512K-4KVC), the size of the packet + memory (128K, 512K, 1M), and the PHY type (Single/Multi mode OC3, + UTP155, UTP25, DS3 and E3). Go to: + <http://www.iphase.com/products/ClassSheet.cfm?ClassID=ATM> + for more info about the cards. Say Y (or M to compile as a module + named iphase) here if you have one of these cards. + + See the file <file:Documentation/networking/iphase.txt> for further + details. + +config ATM_IA_DEBUG + bool "Enable debugging messages" + depends on ATM_IA + ---help--- + Somewhat useful debugging messages are available. The choice of + messages is controlled by a bitmap. This may be specified as a + module argument (kernel command line argument as well?), changed + dynamically using an ioctl (Get the debug utility, iadbg, from + <ftp://ftp.iphase.com/pub/atm/pci/>). + + See the file <file:drivers/atm/iphase.h> for the meanings of the + bits in the mask. 
+ + When active, these messages can have a significant impact on the + speed of the driver, and the size of your syslog files! When + inactive, they will have only a modest impact on performance. + +config ATM_FORE200E_MAYBE + tristate "FORE Systems 200E-series" + depends on (PCI || SBUS) && ATM + ---help--- + This is a driver for the FORE Systems 200E-series ATM adapter + cards. It simultaneously supports PCA-200E and SBA-200E models + on PCI and SBUS hosts. Say Y (or M to compile as a module + named fore_200e) here if you have one of these ATM adapters. + + Note that the driver will actually be compiled only if you + additionally enable the support for PCA-200E and/or SBA-200E + cards. + + See the file <file:Documentation/networking/fore200e.txt> for + further details. + +config ATM_FORE200E_PCA + bool "PCA-200E support" + depends on ATM_FORE200E_MAYBE && PCI + help + Say Y here if you want your PCA-200E cards to be probed. + +config ATM_FORE200E_PCA_DEFAULT_FW + bool "Use default PCA-200E firmware (normally enabled)" + depends on ATM_FORE200E_PCA + help + Use the default PCA-200E firmware data shipped with the driver. + + Normal users do not have to deal with the firmware stuff, so + they should say Y here. + +config ATM_FORE200E_PCA_FW + string "Pathname of user-supplied binary firmware" + depends on ATM_FORE200E_PCA && !ATM_FORE200E_PCA_DEFAULT_FW + default "" + help + This defines the pathname of an alternative PCA-200E binary + firmware image supplied by the user. This pathname may be + absolute or relative to the drivers/atm directory. + + The driver comes with an adequate firmware image, so normal users do + not have to supply an alternative one. They just say Y to "Use + default PCA-200E firmware" instead. + +config ATM_FORE200E_SBA + bool "SBA-200E support" + depends on ATM_FORE200E_MAYBE && SBUS + help + Say Y here if you want your SBA-200E cards to be probed. 
+ +config ATM_FORE200E_SBA_DEFAULT_FW + bool "Use default SBA-200E firmware (normally enabled)" + depends on ATM_FORE200E_SBA + help + Use the default SBA-200E firmware data shipped with the driver. + + Normal users do not have to deal with the firmware stuff, so + they should say Y here. + +config ATM_FORE200E_SBA_FW + string "Pathname of user-supplied binary firmware" + depends on ATM_FORE200E_SBA && !ATM_FORE200E_SBA_DEFAULT_FW + default "" + help + This defines the pathname of an alternative SBA-200E binary + firmware image supplied by the user. This pathname may be + absolute or relative to the drivers/atm directory. + + The driver comes with an adequate firmware image, so normal users do + not have to supply an alternative one. They just say Y to "Use + default SBA-200E firmware", above. + +config ATM_FORE200E_USE_TASKLET + bool "Defer interrupt work to a tasklet" + depends on (PCI || SBUS) && (ATM_FORE200E_PCA || ATM_FORE200E_SBA) + default n + help + This defers work to be done by the interrupt handler to a + tasklet instead of handling everything at interrupt time. This + may improve the responsiveness of the host. + +config ATM_FORE200E_TX_RETRY + int "Maximum number of tx retries" + depends on (PCI || SBUS) && (ATM_FORE200E_PCA || ATM_FORE200E_SBA) + default "16" + ---help--- + Specifies the number of times the driver attempts to transmit + a message before giving up, if the transmit queue of the ATM card + is transiently saturated. + + Saturation of the transmit queue may occur only under extreme + conditions, e.g. when a fast host continuously submits very small + frames (<64 bytes) or raw AAL0 cells (48 bytes) to the ATM adapter. + + Note that under common conditions, it is unlikely that you encounter + a saturation of the transmit queue, so the retry mechanism never + comes into play.
+ +config ATM_FORE200E_DEBUG + int "Debugging level (0-3)" + depends on (PCI || SBUS) && (ATM_FORE200E_PCA || ATM_FORE200E_SBA) + default "0" + help + Specifies the level of debugging messages issued by the driver. + The verbosity of the driver increases with the value of this + parameter. + + When active, these messages can have a significant impact on + the performance of the driver, and the size of your syslog files! + Keep the debugging level to 0 during normal operations. + +config ATM_FORE200E + tristate + depends on (PCI || SBUS) && (ATM_FORE200E_PCA || ATM_FORE200E_SBA) + default m if ATM_FORE200E_MAYBE!=y + default y if ATM_FORE200E_MAYBE=y + +config ATM_HE + tristate "ForeRunner HE Series" + depends on PCI && ATM + help + This is a driver for the Marconi ForeRunner HE-series ATM adapter + cards. It simultaneously supports the 155 and 622 versions. + +config ATM_HE_USE_SUNI + bool "Use S/UNI PHY driver" + depends on ATM_HE + help + Support for the S/UNI-Ultra and S/UNI-622 found in the ForeRunner + HE cards. This driver provides carrier detection and some statistics. + +endmenu + diff --git a/drivers/atm/Makefile b/drivers/atm/Makefile new file mode 100644 index 000000000000..d1dcd8eae3c9 --- /dev/null +++ b/drivers/atm/Makefile @@ -0,0 +1,71 @@ +# +# Makefile for the Linux network (ATM) device drivers.
+# + +fore_200e-objs := fore200e.o +hostprogs-y := fore200e_mkfirm + +# Files generated that shall be removed upon make clean +clean-files := atmsar11.bin atmsar11.bin1 atmsar11.bin2 pca200e.bin \ + pca200e.bin1 pca200e.bin2 pca200e_ecd.bin pca200e_ecd.bin1 \ + pca200e_ecd.bin2 sba200e_ecd.bin sba200e_ecd.bin1 sba200e_ecd.bin2 +# Firmware generated that shall be removed upon make clean +clean-files += fore200e_pca_fw.c fore200e_sba_fw.c + +obj-$(CONFIG_ATM_ZATM) += zatm.o uPD98402.o +obj-$(CONFIG_ATM_NICSTAR) += nicstar.o +obj-$(CONFIG_ATM_AMBASSADOR) += ambassador.o +obj-$(CONFIG_ATM_HORIZON) += horizon.o +obj-$(CONFIG_ATM_IA) += iphase.o suni.o +obj-$(CONFIG_ATM_FORE200E) += fore_200e.o +obj-$(CONFIG_ATM_ENI) += eni.o suni.o +obj-$(CONFIG_ATM_IDT77252) += idt77252.o + +ifeq ($(CONFIG_ATM_NICSTAR_USE_SUNI),y) + obj-$(CONFIG_ATM_NICSTAR) += suni.o +endif +ifeq ($(CONFIG_ATM_NICSTAR_USE_IDT77105),y) + obj-$(CONFIG_ATM_NICSTAR) += idt77105.o +endif +ifeq ($(CONFIG_ATM_IDT77252_USE_SUNI),y) + obj-$(CONFIG_ATM_IDT77252) += suni.o +endif + +obj-$(CONFIG_ATM_TCP) += atmtcp.o +obj-$(CONFIG_ATM_FIRESTREAM) += firestream.o +obj-$(CONFIG_ATM_LANAI) += lanai.o + +ifeq ($(CONFIG_ATM_FORE200E_PCA),y) + fore_200e-objs += fore200e_pca_fw.o + # guess the target endianess to choose the right PCA-200E firmware image + ifeq ($(CONFIG_ATM_FORE200E_PCA_DEFAULT_FW),y) + CONFIG_ATM_FORE200E_PCA_FW = $(shell if test -n "`$(CC) -E -dM $(src)/../../include/asm/byteorder.h | grep ' __LITTLE_ENDIAN '`"; then echo $(obj)/pca200e.bin; else echo $(obj)/pca200e_ecd.bin2; fi) + endif +endif + +ifeq ($(CONFIG_ATM_FORE200E_SBA),y) + fore_200e-objs += fore200e_sba_fw.o + ifeq ($(CONFIG_ATM_FORE200E_SBA_DEFAULT_FW),y) + CONFIG_ATM_FORE200E_SBA_FW := $(obj)/sba200e_ecd.bin2 + endif +endif +obj-$(CONFIG_ATM_HE) += he.o +ifeq ($(CONFIG_ATM_HE_USE_SUNI),y) + obj-$(CONFIG_ATM_HE) += suni.o +endif + +# FORE Systems 200E-series firmware magic +$(obj)/fore200e_pca_fw.c: $(patsubst "%", %, 
$(CONFIG_ATM_FORE200E_PCA_FW)) \ + $(obj)/fore200e_mkfirm + $(obj)/fore200e_mkfirm -k -b _fore200e_pca_fw \ + -i $(CONFIG_ATM_FORE200E_PCA_FW) -o $@ + +$(obj)/fore200e_sba_fw.c: $(patsubst "%", %, $(CONFIG_ATM_FORE200E_SBA_FW)) \ + $(obj)/fore200e_mkfirm + $(obj)/fore200e_mkfirm -k -b _fore200e_sba_fw \ + -i $(CONFIG_ATM_FORE200E_SBA_FW) -o $@ + +# deal with the various suffixes of the binary firmware images +$(obj)/%.bin $(obj)/%.bin1 $(obj)/%.bin2: $(src)/%.data + objcopy -Iihex $< -Obinary $@.gz + gzip -n -df $@.gz diff --git a/drivers/atm/ambassador.c b/drivers/atm/ambassador.c new file mode 100644 index 000000000000..c46d9520c5a7 --- /dev/null +++ b/drivers/atm/ambassador.c @@ -0,0 +1,2463 @@ +/* + Madge Ambassador ATM Adapter driver. + Copyright (C) 1995-1999 Madge Networks Ltd. + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + + The GNU GPL is contained in /usr/doc/copyright/GPL on a Debian + system and in the file COPYING in the Linux kernel source. 
+*/ + +/* * dedicated to the memory of Graham Gordon 1971-1998 * */ + +#include <linux/module.h> +#include <linux/types.h> +#include <linux/pci.h> +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/ioport.h> +#include <linux/atmdev.h> +#include <linux/delay.h> +#include <linux/interrupt.h> + +#include <asm/atomic.h> +#include <asm/io.h> +#include <asm/byteorder.h> + +#include "ambassador.h" + +#define maintainer_string "Giuliano Procida at Madge Networks <gprocida@madge.com>" +#define description_string "Madge ATM Ambassador driver" +#define version_string "1.2.4" + +static inline void __init show_version (void) { + printk ("%s version %s\n", description_string, version_string); +} + +/* + + Theory of Operation + + I Hardware, detection, initialisation and shutdown. + + 1. Supported Hardware + + This driver is for the PCI ATMizer-based Ambassador card (except + very early versions). It is not suitable for the similar EISA "TR7" + card. Commercially, both cards are known as Collage Server ATM + adapters. + + The loader supports image transfer to the card, image start and few + other miscellaneous commands. + + Only AAL5 is supported with vpi = 0 and vci in the range 0 to 1023. + + The cards are big-endian. + + 2. Detection + + Standard PCI stuff, the early cards are detected and rejected. + + 3. Initialisation + + The cards are reset and the self-test results are checked. The + microcode image is then transferred and started. This waits for a + pointer to a descriptor containing details of the host-based queues + and buffers and various parameters etc. Once they are processed + normal operations may begin. The BIA is read using a microcode + command. + + 4. Shutdown + + This may be accomplished either by a card reset or via the microcode + shutdown command. Further investigation required. + + 5. 
Persistent state + + The card reset does not affect PCI configuration (good) or the + contents of several other "shared run-time registers" (bad) which + include doorbell and interrupt control as well as EEPROM and PCI + control. The driver must be careful when modifying these registers + not to touch bits it does not use and to undo any changes at exit. + + II Driver software + + 0. Generalities + + The adapter is quite intelligent (fast) and has a simple interface + (few features). VPI is always zero, 1024 VCIs are supported. There + is limited cell rate support. UBR channels can be capped and ABR + (explicit rate, but not EFCI) is supported. There is no CBR or VBR + support. + + 1. Driver <-> Adapter Communication + + Apart from the basic loader commands, the driver communicates + through three entities: the command queue (CQ), the transmit queue + pair (TXQ) and the receive queue pairs (RXQ). These three entities + are set up by the host and passed to the microcode just after it has + been started. + + All queues are host-based circular queues. They are contiguous and + (due to hardware limitations) have some restrictions as to their + locations in (bus) memory. They are of the "full means the same as + empty so don't do that" variety since the adapter uses pointers + internally. + + The queue pairs work as follows: one queue is for supply to the + adapter, items in it are pending and are owned by the adapter; the + other is the queue for return from the adapter, items in it have + been dealt with by the adapter. The host adds items to the supply + (TX descriptors and free RX buffer descriptors) and removes items + from the return (TX and RX completions). The adapter deals with out + of order completions. + + Interrupts (card to host) and the doorbell (host to card) are used + for signalling. + + 1. CQ + + This is to communicate "open VC", "close VC", "get stats" etc. to + the adapter. At most one command is retired every millisecond by the + card. 
There is no out of order completion or notification. The + driver needs to check the return code of the command, waiting as + appropriate. + + 2. TXQ + + TX supply items are of variable length (scatter gather support) and + so the queue items are (more or less) pointers to the real thing. + Each TX supply item contains a unique, host-supplied handle (the skb + bus address seems most sensible as this works for Alphas as well, + there is no need to do any endian conversions on the handles). + + TX return items consist of just the handles above. + + 3. RXQ (up to 4 of these with different lengths and buffer sizes) + + RX supply items consist of a unique, host-supplied handle (the skb + bus address again) and a pointer to the buffer data area. + + RX return items consist of the handle above, the VC, length and a + status word. This just screams "oh so easy" doesn't it? + + Note on RX pool sizes: + + Each pool should have enough buffers to handle a back-to-back stream + of minimum sized frames on a single VC. For example: + + frame spacing = 3us (about right) + + delay = IRQ lat + RX handling + RX buffer replenish = 20 (us) (a guess) + + min number of buffers for one VC = 1 + delay/spacing (buffers) + + delay/spacing = latency = (20+2)/3 = 7 (buffers) (rounding up) + + The 20us delay assumes that there is no need to sleep; if we need to + sleep to get buffers we are going to drop frames anyway. + + In fact, each pool should have enough buffers to support the + simultaneous reassembly of a separate frame on each VC and cope with + the case in which frames complete in round robin cell fashion on + each VC. + + Only one frame can complete at each cell arrival, so if "n" VCs are + open, the worst case is to have them all complete frames together + followed by all starting new frames together. + + desired number of buffers = n + delay/spacing + + These are the extreme requirements, however, they are "n+k" for some + "k" so we have only the constant to choose. 
This is the argument + rx_lats which current defaults to 7. + + Actually, "n ? n+k : 0" is better and this is what is implemented, + subject to the limit given by the pool size. + + 4. Driver locking + + Simple spinlocks are used around the TX and RX queue mechanisms. + Anyone with a faster, working method is welcome to implement it. + + The adapter command queue is protected with a spinlock. We always + wait for commands to complete. + + A more complex form of locking is used around parts of the VC open + and close functions. There are three reasons for a lock: 1. we need + to do atomic rate reservation and release (not used yet), 2. Opening + sometimes involves two adapter commands which must not be separated + by another command on the same VC, 3. the changes to RX pool size + must be atomic. The lock needs to work over context switches, so we + use a semaphore. + + III Hardware Features and Microcode Bugs + + 1. Byte Ordering + + *%^"$&%^$*&^"$(%^$#&^%$(&#%$*(&^#%!"!"!*! + + 2. Memory access + + All structures that are not accessed using DMA must be 4-byte + aligned (not a problem) and must not cross 4MB boundaries. + + There is a DMA memory hole at E0000000-E00000FF (groan). + + TX fragments (DMA read) must not cross 4MB boundaries (would be 16MB + but for a hardware bug). + + RX buffers (DMA write) must not cross 16MB boundaries and must + include spare trailing bytes up to the next 4-byte boundary; they + will be written with rubbish. + + The PLX likes to prefetch; if reading up to 4 u32 past the end of + each TX fragment is not a problem, then TX can be made to go a + little faster by passing a flag at init that disables a prefetch + workaround. We do not pass this flag. (new microcode only) + + Now we: + . Note that alloc_skb rounds up size to a 16byte boundary. + . Ensure all areas do not traverse 4MB boundaries. + . Ensure all areas do not start at a E00000xx bus address. + (I cannot be certain, but this may always hold with Linux) + . 
Make all failures cause a loud message. + . Discard non-conforming SKBs (causes TX failure or RX fill delay). + . Discard non-conforming TX fragment descriptors (the TX fails). + In the future we could: + . Allow RX areas that traverse 4MB (but not 16MB) boundaries. + . Segment TX areas into some/more fragments, when necessary. + . Relax checks for non-DMA items (ignore hole). + . Give scatter-gather (iovec) requirements using ???. (?) + + 3. VC close is broken (only for new microcode) + + The VC close adapter microcode command fails to do anything if any + frames have been received on the VC but none have been transmitted. + Frames continue to be reassembled and passed (with IRQ) to the + driver. + + IV To Do List + + . Fix bugs! + + . Timer code may be broken. + + . Deal with buggy VC close (somehow) in microcode 12. + + . Handle interrupted and/or non-blocking writes - is this a job for + the protocol layer? + + . Add code to break up TX fragments when they span 4MB boundaries. + + . Add SUNI phy layer (need to know where SUNI lives on card). + + . Implement a tx_alloc fn to (a) satisfy TX alignment etc. and (b) + leave extra headroom space for Ambassador TX descriptors. + + . Understand these elements of struct atm_vcc: recvq (proto?), + sleep, callback, listenq, backlog_quota, reply and user_back. + + . Adjust TX/RX skb allocation to favour IP with LANE/CLIP (configurable). + + . Impose a TX-pending limit (2?) on each VC, help avoid TX q overflow. + + . Decide whether RX buffer recycling is or can be made completely safe; + turn it back on. It looks like Werner is going to axe this. + + . Implement QoS changes on open VCs (involves extracting parts of VC open + and close into separate functions and using them to make changes). + + . Hack on command queue so that someone can issue multiple commands and wait + on the last one (OR only "no-op" or "wait" commands are waited for). + + . Eliminate need for while-schedule around do_command. 
+ +*/ + +/********** microcode **********/ + +#ifdef AMB_NEW_MICROCODE +#define UCODE(x) UCODE2(atmsar12.x) +#else +#define UCODE(x) UCODE2(atmsar11.x) +#endif +#define UCODE2(x) #x + +static u32 __devinitdata ucode_start = +#include UCODE(start) +; + +static region __devinitdata ucode_regions[] = { +#include UCODE(regions) + { 0, 0 } +}; + +static u32 __devinitdata ucode_data[] = { +#include UCODE(data) + 0xdeadbeef +}; + +static void do_housekeeping (unsigned long arg); +/********** globals **********/ + +static unsigned short debug = 0; +static unsigned int cmds = 8; +static unsigned int txs = 32; +static unsigned int rxs[NUM_RX_POOLS] = { 64, 64, 64, 64 }; +static unsigned int rxs_bs[NUM_RX_POOLS] = { 4080, 12240, 36720, 65535 }; +static unsigned int rx_lats = 7; +static unsigned char pci_lat = 0; + +static const unsigned long onegigmask = -1 << 30; + +/********** access to adapter **********/ + +static inline void wr_plain (const amb_dev * dev, size_t addr, u32 data) { + PRINTD (DBG_FLOW|DBG_REGS, "wr: %08zx <- %08x", addr, data); +#ifdef AMB_MMIO + dev->membase[addr / sizeof(u32)] = data; +#else + outl (data, dev->iobase + addr); +#endif +} + +static inline u32 rd_plain (const amb_dev * dev, size_t addr) { +#ifdef AMB_MMIO + u32 data = dev->membase[addr / sizeof(u32)]; +#else + u32 data = inl (dev->iobase + addr); +#endif + PRINTD (DBG_FLOW|DBG_REGS, "rd: %08zx -> %08x", addr, data); + return data; +} + +static inline void wr_mem (const amb_dev * dev, size_t addr, u32 data) { + __be32 be = cpu_to_be32 (data); + PRINTD (DBG_FLOW|DBG_REGS, "wr: %08zx <- %08x b[%08x]", addr, data, be); +#ifdef AMB_MMIO + dev->membase[addr / sizeof(u32)] = be; +#else + outl (be, dev->iobase + addr); +#endif +} + +static inline u32 rd_mem (const amb_dev * dev, size_t addr) { +#ifdef AMB_MMIO + __be32 be = dev->membase[addr / sizeof(u32)]; +#else + __be32 be = inl (dev->iobase + addr); +#endif + u32 data = be32_to_cpu (be); + PRINTD (DBG_FLOW|DBG_REGS, "rd: %08zx -> %08x b[%08x]", 
addr, data, be); + return data; +} + +/********** dump routines **********/ + +static inline void dump_registers (const amb_dev * dev) { +#ifdef DEBUG_AMBASSADOR + if (debug & DBG_REGS) { + size_t i; + PRINTD (DBG_REGS, "reading PLX control: "); + for (i = 0x00; i < 0x30; i += sizeof(u32)) + rd_mem (dev, i); + PRINTD (DBG_REGS, "reading mailboxes: "); + for (i = 0x40; i < 0x60; i += sizeof(u32)) + rd_mem (dev, i); + PRINTD (DBG_REGS, "reading doorb irqev irqen reset:"); + for (i = 0x60; i < 0x70; i += sizeof(u32)) + rd_mem (dev, i); + } +#else + (void) dev; +#endif + return; +} + +static inline void dump_loader_block (volatile loader_block * lb) { +#ifdef DEBUG_AMBASSADOR + unsigned int i; + PRINTDB (DBG_LOAD, "lb @ %p; res: %d, cmd: %d, pay:", + lb, be32_to_cpu (lb->result), be32_to_cpu (lb->command)); + for (i = 0; i < MAX_COMMAND_DATA; ++i) + PRINTDM (DBG_LOAD, " %08x", be32_to_cpu (lb->payload.data[i])); + PRINTDE (DBG_LOAD, ", vld: %08x", be32_to_cpu (lb->valid)); +#else + (void) lb; +#endif + return; +} + +static inline void dump_command (command * cmd) { +#ifdef DEBUG_AMBASSADOR + unsigned int i; + PRINTDB (DBG_CMD, "cmd @ %p, req: %08x, pars:", + cmd, /*be32_to_cpu*/ (cmd->request)); + for (i = 0; i < 3; ++i) + PRINTDM (DBG_CMD, " %08x", /*be32_to_cpu*/ (cmd->args.par[i])); + PRINTDE (DBG_CMD, ""); +#else + (void) cmd; +#endif + return; +} + +static inline void dump_skb (char * prefix, unsigned int vc, struct sk_buff * skb) { +#ifdef DEBUG_AMBASSADOR + unsigned int i; + unsigned char * data = skb->data; + PRINTDB (DBG_DATA, "%s(%u) ", prefix, vc); + for (i=0; i<skb->len && i < 256;i++) + PRINTDM (DBG_DATA, "%02x ", data[i]); + PRINTDE (DBG_DATA,""); +#else + (void) prefix; + (void) vc; + (void) skb; +#endif + return; +} + +/********** check memory areas for use by Ambassador **********/ + +/* see limitations under Hardware Features */ + +static inline int check_area (void * start, size_t length) { + // assumes length > 0 + const u32 fourmegmask = -1 << 22; 
+ const u32 twofivesixmask = -1 << 8; + const u32 starthole = 0xE0000000; + u32 startaddress = virt_to_bus (start); + u32 lastaddress = startaddress+length-1; + if ((startaddress ^ lastaddress) & fourmegmask || + (startaddress & twofivesixmask) == starthole) { + PRINTK (KERN_ERR, "check_area failure: [%x,%x] - mail maintainer!", + startaddress, lastaddress); + return -1; + } else { + return 0; + } +} + +/********** free an skb (as per ATM device driver documentation) **********/ + +static inline void amb_kfree_skb (struct sk_buff * skb) { + if (ATM_SKB(skb)->vcc->pop) { + ATM_SKB(skb)->vcc->pop (ATM_SKB(skb)->vcc, skb); + } else { + dev_kfree_skb_any (skb); + } +} + +/********** TX completion **********/ + +static inline void tx_complete (amb_dev * dev, tx_out * tx) { + tx_simple * tx_descr = bus_to_virt (tx->handle); + struct sk_buff * skb = tx_descr->skb; + + PRINTD (DBG_FLOW|DBG_TX, "tx_complete %p %p", dev, tx); + + // VC layer stats + atomic_inc(&ATM_SKB(skb)->vcc->stats->tx); + + // free the descriptor + kfree (tx_descr); + + // free the skb + amb_kfree_skb (skb); + + dev->stats.tx_ok++; + return; +} + +/********** RX completion **********/ + +static void rx_complete (amb_dev * dev, rx_out * rx) { + struct sk_buff * skb = bus_to_virt (rx->handle); + u16 vc = be16_to_cpu (rx->vc); + // unused: u16 lec_id = be16_to_cpu (rx->lec_id); + u16 status = be16_to_cpu (rx->status); + u16 rx_len = be16_to_cpu (rx->length); + + PRINTD (DBG_FLOW|DBG_RX, "rx_complete %p %p (len=%hu)", dev, rx, rx_len); + + // XXX move this in and add to VC stats ??? 
+ if (!status) {
+ struct atm_vcc * atm_vcc = dev->rxer[vc];
+ dev->stats.rx.ok++;
+
+ if (atm_vcc) {
+
+ if (rx_len <= atm_vcc->qos.rxtp.max_sdu) {
+
+ if (atm_charge (atm_vcc, skb->truesize)) {
+
+ // prepare socket buffer
+ ATM_SKB(skb)->vcc = atm_vcc;
+ skb_put (skb, rx_len);
+
+ dump_skb ("<<<", vc, skb);
+
+ // VC layer stats
+ atomic_inc(&atm_vcc->stats->rx);
+ do_gettimeofday(&skb->stamp);
+ // end of our responsibility
+ atm_vcc->push (atm_vcc, skb);
+ return;
+
+ } else {
+ // someone fix this (message), please!
+ PRINTD (DBG_INFO|DBG_RX, "dropped thanks to atm_charge (vc %hu, truesize %u)", vc, skb->truesize);
+ // drop stats incremented in atm_charge
+ }
+
+ } else {
+ PRINTK (KERN_INFO, "dropped over-size frame");
+ // should we count this?
+ atomic_inc(&atm_vcc->stats->rx_drop);
+ }
+
+ } else {
+ PRINTD (DBG_WARN|DBG_RX, "got frame but RX closed for channel %hu", vc);
+ // this is an adapter bug, only in new versions of the microcode
+ }
+
+ } else {
+ dev->stats.rx.error++;
+ if (status & CRC_ERR)
+ dev->stats.rx.badcrc++;
+ if (status & LEN_ERR)
+ dev->stats.rx.toolong++;
+ if (status & ABORT_ERR)
+ dev->stats.rx.aborted++;
+ if (status & UNUSED_ERR)
+ dev->stats.rx.unused++;
+ }
+
+ dev_kfree_skb_any (skb);
+ return;
+}
+
+/*
+
+ Note on queue handling.
+
+ Here "give" and "take" refer to queue entries and a queue (pair)
+ rather than frames to or from the host or adapter. Empty frame
+ buffers are given to the RX queue pair and returned unused or
+ containing RX frames. TX frames (well, pointers to TX fragment
+ lists) are given to the TX queue pair, completions are returned.
+ +*/ + +/********** command queue **********/ + +// I really don't like this, but it's the best I can do at the moment + +// also, the callers are responsible for byte order as the microcode +// sometimes does 16-bit accesses (yuk yuk yuk) + +static int command_do (amb_dev * dev, command * cmd) { + amb_cq * cq = &dev->cq; + volatile amb_cq_ptrs * ptrs = &cq->ptrs; + command * my_slot; + + PRINTD (DBG_FLOW|DBG_CMD, "command_do %p", dev); + + if (test_bit (dead, &dev->flags)) + return 0; + + spin_lock (&cq->lock); + + // if not full... + if (cq->pending < cq->maximum) { + // remember my slot for later + my_slot = ptrs->in; + PRINTD (DBG_CMD, "command in slot %p", my_slot); + + dump_command (cmd); + + // copy command in + *ptrs->in = *cmd; + cq->pending++; + ptrs->in = NEXTQ (ptrs->in, ptrs->start, ptrs->limit); + + // mail the command + wr_mem (dev, offsetof(amb_mem, mb.adapter.cmd_address), virt_to_bus (ptrs->in)); + + if (cq->pending > cq->high) + cq->high = cq->pending; + spin_unlock (&cq->lock); + + // these comments were in a while-loop before, msleep removes the loop + // go to sleep + // PRINTD (DBG_CMD, "wait: sleeping %lu for command", timeout); + msleep(cq->pending); + + // wait for my slot to be reached (all waiters are here or above, until...) + while (ptrs->out != my_slot) { + PRINTD (DBG_CMD, "wait: command slot (now at %p)", ptrs->out); + set_current_state(TASK_UNINTERRUPTIBLE); + schedule(); + } + + // wait on my slot (... one gets to its slot, and... ) + while (ptrs->out->request != cpu_to_be32 (SRB_COMPLETE)) { + PRINTD (DBG_CMD, "wait: command slot completion"); + set_current_state(TASK_UNINTERRUPTIBLE); + schedule(); + } + + PRINTD (DBG_CMD, "command complete"); + // update queue (... 
moves the queue along to the next slot) + spin_lock (&cq->lock); + cq->pending--; + // copy command out + *cmd = *ptrs->out; + ptrs->out = NEXTQ (ptrs->out, ptrs->start, ptrs->limit); + spin_unlock (&cq->lock); + + return 0; + } else { + cq->filled++; + spin_unlock (&cq->lock); + return -EAGAIN; + } + +} + +/********** TX queue pair **********/ + +static inline int tx_give (amb_dev * dev, tx_in * tx) { + amb_txq * txq = &dev->txq; + unsigned long flags; + + PRINTD (DBG_FLOW|DBG_TX, "tx_give %p", dev); + + if (test_bit (dead, &dev->flags)) + return 0; + + spin_lock_irqsave (&txq->lock, flags); + + if (txq->pending < txq->maximum) { + PRINTD (DBG_TX, "TX in slot %p", txq->in.ptr); + + *txq->in.ptr = *tx; + txq->pending++; + txq->in.ptr = NEXTQ (txq->in.ptr, txq->in.start, txq->in.limit); + // hand over the TX and ring the bell + wr_mem (dev, offsetof(amb_mem, mb.adapter.tx_address), virt_to_bus (txq->in.ptr)); + wr_mem (dev, offsetof(amb_mem, doorbell), TX_FRAME); + + if (txq->pending > txq->high) + txq->high = txq->pending; + spin_unlock_irqrestore (&txq->lock, flags); + return 0; + } else { + txq->filled++; + spin_unlock_irqrestore (&txq->lock, flags); + return -EAGAIN; + } +} + +static inline int tx_take (amb_dev * dev) { + amb_txq * txq = &dev->txq; + unsigned long flags; + + PRINTD (DBG_FLOW|DBG_TX, "tx_take %p", dev); + + spin_lock_irqsave (&txq->lock, flags); + + if (txq->pending && txq->out.ptr->handle) { + // deal with TX completion + tx_complete (dev, txq->out.ptr); + // mark unused again + txq->out.ptr->handle = 0; + // remove item + txq->pending--; + txq->out.ptr = NEXTQ (txq->out.ptr, txq->out.start, txq->out.limit); + + spin_unlock_irqrestore (&txq->lock, flags); + return 0; + } else { + + spin_unlock_irqrestore (&txq->lock, flags); + return -1; + } +} + +/********** RX queue pairs **********/ + +static inline int rx_give (amb_dev * dev, rx_in * rx, unsigned char pool) { + amb_rxq * rxq = &dev->rxq[pool]; + unsigned long flags; + + PRINTD 
(DBG_FLOW|DBG_RX, "rx_give %p[%hu]", dev, pool); + + spin_lock_irqsave (&rxq->lock, flags); + + if (rxq->pending < rxq->maximum) { + PRINTD (DBG_RX, "RX in slot %p", rxq->in.ptr); + + *rxq->in.ptr = *rx; + rxq->pending++; + rxq->in.ptr = NEXTQ (rxq->in.ptr, rxq->in.start, rxq->in.limit); + // hand over the RX buffer + wr_mem (dev, offsetof(amb_mem, mb.adapter.rx_address[pool]), virt_to_bus (rxq->in.ptr)); + + spin_unlock_irqrestore (&rxq->lock, flags); + return 0; + } else { + spin_unlock_irqrestore (&rxq->lock, flags); + return -1; + } +} + +static inline int rx_take (amb_dev * dev, unsigned char pool) { + amb_rxq * rxq = &dev->rxq[pool]; + unsigned long flags; + + PRINTD (DBG_FLOW|DBG_RX, "rx_take %p[%hu]", dev, pool); + + spin_lock_irqsave (&rxq->lock, flags); + + if (rxq->pending && (rxq->out.ptr->status || rxq->out.ptr->length)) { + // deal with RX completion + rx_complete (dev, rxq->out.ptr); + // mark unused again + rxq->out.ptr->status = 0; + rxq->out.ptr->length = 0; + // remove item + rxq->pending--; + rxq->out.ptr = NEXTQ (rxq->out.ptr, rxq->out.start, rxq->out.limit); + + if (rxq->pending < rxq->low) + rxq->low = rxq->pending; + spin_unlock_irqrestore (&rxq->lock, flags); + return 0; + } else { + if (!rxq->pending && rxq->buffers_wanted) + rxq->emptied++; + spin_unlock_irqrestore (&rxq->lock, flags); + return -1; + } +} + +/********** RX Pool handling **********/ + +/* pre: buffers_wanted = 0, post: pending = 0 */ +static inline void drain_rx_pool (amb_dev * dev, unsigned char pool) { + amb_rxq * rxq = &dev->rxq[pool]; + + PRINTD (DBG_FLOW|DBG_POOL, "drain_rx_pool %p %hu", dev, pool); + + if (test_bit (dead, &dev->flags)) + return; + + /* we are not quite like the fill pool routines as we cannot just + remove one buffer, we have to remove all of them, but we might as + well pretend... 
*/ + if (rxq->pending > rxq->buffers_wanted) { + command cmd; + cmd.request = cpu_to_be32 (SRB_FLUSH_BUFFER_Q); + cmd.args.flush.flags = cpu_to_be32 (pool << SRB_POOL_SHIFT); + while (command_do (dev, &cmd)) + schedule(); + /* the pool may also be emptied via the interrupt handler */ + while (rxq->pending > rxq->buffers_wanted) + if (rx_take (dev, pool)) + schedule(); + } + + return; +} + +static void drain_rx_pools (amb_dev * dev) { + unsigned char pool; + + PRINTD (DBG_FLOW|DBG_POOL, "drain_rx_pools %p", dev); + + for (pool = 0; pool < NUM_RX_POOLS; ++pool) + drain_rx_pool (dev, pool); +} + +static inline void fill_rx_pool (amb_dev * dev, unsigned char pool, int priority) { + rx_in rx; + amb_rxq * rxq; + + PRINTD (DBG_FLOW|DBG_POOL, "fill_rx_pool %p %hu %x", dev, pool, priority); + + if (test_bit (dead, &dev->flags)) + return; + + rxq = &dev->rxq[pool]; + while (rxq->pending < rxq->maximum && rxq->pending < rxq->buffers_wanted) { + + struct sk_buff * skb = alloc_skb (rxq->buffer_size, priority); + if (!skb) { + PRINTD (DBG_SKB|DBG_POOL, "failed to allocate skb for RX pool %hu", pool); + return; + } + if (check_area (skb->data, skb->truesize)) { + dev_kfree_skb_any (skb); + return; + } + // cast needed as there is no %? 
for pointer differences + PRINTD (DBG_SKB, "allocated skb at %p, head %p, area %li", + skb, skb->head, (long) (skb->end - skb->head)); + rx.handle = virt_to_bus (skb); + rx.host_address = cpu_to_be32 (virt_to_bus (skb->data)); + if (rx_give (dev, &rx, pool)) + dev_kfree_skb_any (skb); + + } + + return; +} + +// top up all RX pools (can also be called as a bottom half) +static void fill_rx_pools (amb_dev * dev) { + unsigned char pool; + + PRINTD (DBG_FLOW|DBG_POOL, "fill_rx_pools %p", dev); + + for (pool = 0; pool < NUM_RX_POOLS; ++pool) + fill_rx_pool (dev, pool, GFP_ATOMIC); + + return; +} + +/********** enable host interrupts **********/ + +static inline void interrupts_on (amb_dev * dev) { + wr_plain (dev, offsetof(amb_mem, interrupt_control), + rd_plain (dev, offsetof(amb_mem, interrupt_control)) + | AMB_INTERRUPT_BITS); +} + +/********** disable host interrupts **********/ + +static inline void interrupts_off (amb_dev * dev) { + wr_plain (dev, offsetof(amb_mem, interrupt_control), + rd_plain (dev, offsetof(amb_mem, interrupt_control)) + &~ AMB_INTERRUPT_BITS); +} + +/********** interrupt handling **********/ + +static irqreturn_t interrupt_handler(int irq, void *dev_id, + struct pt_regs *pt_regs) { + amb_dev * dev = (amb_dev *) dev_id; + (void) pt_regs; + + PRINTD (DBG_IRQ|DBG_FLOW, "interrupt_handler: %p", dev_id); + + if (!dev_id) { + PRINTD (DBG_IRQ|DBG_ERR, "irq with NULL dev_id: %d", irq); + return IRQ_NONE; + } + + { + u32 interrupt = rd_plain (dev, offsetof(amb_mem, interrupt)); + + // for us or someone else sharing the same interrupt + if (!interrupt) { + PRINTD (DBG_IRQ, "irq not for me: %d", irq); + return IRQ_NONE; + } + + // definitely for us + PRINTD (DBG_IRQ, "FYI: interrupt was %08x", interrupt); + wr_plain (dev, offsetof(amb_mem, interrupt), -1); + } + + { + unsigned int irq_work = 0; + unsigned char pool; + for (pool = 0; pool < NUM_RX_POOLS; ++pool) + while (!rx_take (dev, pool)) + ++irq_work; + while (!tx_take (dev)) + ++irq_work; + + if 
(irq_work) { +#ifdef FILL_RX_POOLS_IN_BH + schedule_work (&dev->bh); +#else + fill_rx_pools (dev); +#endif + + PRINTD (DBG_IRQ, "work done: %u", irq_work); + } else { + PRINTD (DBG_IRQ|DBG_WARN, "no work done"); + } + } + + PRINTD (DBG_IRQ|DBG_FLOW, "interrupt_handler done: %p", dev_id); + return IRQ_HANDLED; +} + +/********** make rate (not quite as much fun as Horizon) **********/ + +static unsigned int make_rate (unsigned int rate, rounding r, + u16 * bits, unsigned int * actual) { + unsigned char exp = -1; // hush gcc + unsigned int man = -1; // hush gcc + + PRINTD (DBG_FLOW|DBG_QOS, "make_rate %u", rate); + + // rates in cells per second, ITU format (nasty 16-bit floating-point) + // given 5-bit e and 9-bit m: + // rate = EITHER (1+m/2^9)*2^e OR 0 + // bits = EITHER 1<<14 | e<<9 | m OR 0 + // (bit 15 is "reserved", bit 14 "non-zero") + // smallest rate is 0 (special representation) + // largest rate is (1+511/512)*2^31 = 4290772992 (< 2^32-1) + // smallest non-zero rate is (1+0/512)*2^0 = 1 (> 0) + // simple algorithm: + // find position of top bit, this gives e + // remove top bit and shift (rounding if feeling clever) by 9-e + + // ucode bug: please don't set bit 14! so 0 rate not representable + + if (rate > 0xffc00000U) { + // larger than largest representable rate + + if (r == round_up) { + return -EINVAL; + } else { + exp = 31; + man = 511; + } + + } else if (rate) { + // representable rate + + exp = 31; + man = rate; + + // invariant: rate = man*2^(exp-31) + while (!(man & (1<<31))) { + exp = exp - 1; + man = man<<1; + } + + // man has top bit set + // rate = (2^31+(man-2^31))*2^(exp-31) + // rate = (1+(man-2^31)/2^31)*2^exp + man = man<<1; + man &= 0xffffffffU; // a nop on 32-bit systems + // rate = (1+man/2^32)*2^exp + + // exp is in the range 0 to 31, man is in the range 0 to 2^32-1 + // time to lose significance... we want m in the range 0 to 2^9-1 + // rounding presents a minor problem... 
we first decide which way + // we are rounding (based on given rounding direction and possibly + // the bits of the mantissa that are to be discarded). + + switch (r) { + case round_down: { + // just truncate + man = man>>(32-9); + break; + } + case round_up: { + // check all bits that we are discarding + if (man & (-1>>9)) { + man = (man>>(32-9)) + 1; + if (man == (1<<9)) { + // no need to check for round up outside of range + man = 0; + exp += 1; + } + } else { + man = (man>>(32-9)); + } + break; + } + case round_nearest: { + // check msb that we are discarding + if (man & (1<<(32-9-1))) { + man = (man>>(32-9)) + 1; + if (man == (1<<9)) { + // no need to check for round up outside of range + man = 0; + exp += 1; + } + } else { + man = (man>>(32-9)); + } + break; + } + } + + } else { + // zero rate - not representable + + if (r == round_down) { + return -EINVAL; + } else { + exp = 0; + man = 0; + } + + } + + PRINTD (DBG_QOS, "rate: man=%u, exp=%hu", man, exp); + + if (bits) + *bits = /* (1<<14) | */ (exp<<9) | man; + + if (actual) + *actual = (exp >= 9) + ? 
(1 << exp) + (man << (exp-9)) + : (1 << exp) + ((man + (1<<(9-exp-1))) >> (9-exp)); + + return 0; +} + +/********** Linux ATM Operations **********/ + +// some are not yet implemented while others do not make sense for +// this device + +/********** Open a VC **********/ + +static int amb_open (struct atm_vcc * atm_vcc) +{ + int error; + + struct atm_qos * qos; + struct atm_trafprm * txtp; + struct atm_trafprm * rxtp; + u16 tx_rate_bits; + u16 tx_vc_bits = -1; // hush gcc + u16 tx_frame_bits = -1; // hush gcc + + amb_dev * dev = AMB_DEV(atm_vcc->dev); + amb_vcc * vcc; + unsigned char pool = -1; // hush gcc + short vpi = atm_vcc->vpi; + int vci = atm_vcc->vci; + + PRINTD (DBG_FLOW|DBG_VCC, "amb_open %x %x", vpi, vci); + +#ifdef ATM_VPI_UNSPEC + // UNSPEC is deprecated, remove this code eventually + if (vpi == ATM_VPI_UNSPEC || vci == ATM_VCI_UNSPEC) { + PRINTK (KERN_WARNING, "rejecting open with unspecified VPI/VCI (deprecated)"); + return -EINVAL; + } +#endif + + if (!(0 <= vpi && vpi < (1<<NUM_VPI_BITS) && + 0 <= vci && vci < (1<<NUM_VCI_BITS))) { + PRINTD (DBG_WARN|DBG_VCC, "VPI/VCI out of range: %hd/%d", vpi, vci); + return -EINVAL; + } + + qos = &atm_vcc->qos; + + if (qos->aal != ATM_AAL5) { + PRINTD (DBG_QOS, "AAL not supported"); + return -EINVAL; + } + + // traffic parameters + + PRINTD (DBG_QOS, "TX:"); + txtp = &qos->txtp; + if (txtp->traffic_class != ATM_NONE) { + switch (txtp->traffic_class) { + case ATM_UBR: { + // we take "the PCR" as a rate-cap + int pcr = atm_pcr_goal (txtp); + if (!pcr) { + // no rate cap + tx_rate_bits = 0; + tx_vc_bits = TX_UBR; + tx_frame_bits = TX_FRAME_NOTCAP; + } else { + rounding r; + if (pcr < 0) { + r = round_down; + pcr = -pcr; + } else { + r = round_up; + } + error = make_rate (pcr, r, &tx_rate_bits, NULL); + tx_vc_bits = TX_UBR_CAPPED; + tx_frame_bits = TX_FRAME_CAPPED; + } + break; + } +#if 0 + case ATM_ABR: { + pcr = atm_pcr_goal (txtp); + PRINTD (DBG_QOS, "pcr goal = %d", pcr); + break; + } +#endif + default: { + // 
PRINTD (DBG_QOS, "request for non-UBR/ABR denied"); + PRINTD (DBG_QOS, "request for non-UBR denied"); + return -EINVAL; + } + } + PRINTD (DBG_QOS, "tx_rate_bits=%hx, tx_vc_bits=%hx", + tx_rate_bits, tx_vc_bits); + } + + PRINTD (DBG_QOS, "RX:"); + rxtp = &qos->rxtp; + if (rxtp->traffic_class == ATM_NONE) { + // do nothing + } else { + // choose an RX pool (arranged in increasing size) + for (pool = 0; pool < NUM_RX_POOLS; ++pool) + if ((unsigned int) rxtp->max_sdu <= dev->rxq[pool].buffer_size) { + PRINTD (DBG_VCC|DBG_QOS|DBG_POOL, "chose pool %hu (max_sdu %u <= %u)", + pool, rxtp->max_sdu, dev->rxq[pool].buffer_size); + break; + } + if (pool == NUM_RX_POOLS) { + PRINTD (DBG_WARN|DBG_VCC|DBG_QOS|DBG_POOL, + "no pool suitable for VC (RX max_sdu %d is too large)", + rxtp->max_sdu); + return -EINVAL; + } + + switch (rxtp->traffic_class) { + case ATM_UBR: { + break; + } +#if 0 + case ATM_ABR: { + pcr = atm_pcr_goal (rxtp); + PRINTD (DBG_QOS, "pcr goal = %d", pcr); + break; + } +#endif + default: { + // PRINTD (DBG_QOS, "request for non-UBR/ABR denied"); + PRINTD (DBG_QOS, "request for non-UBR denied"); + return -EINVAL; + } + } + } + + // get space for our vcc stuff + vcc = kmalloc (sizeof(amb_vcc), GFP_KERNEL); + if (!vcc) { + PRINTK (KERN_ERR, "out of memory!"); + return -ENOMEM; + } + atm_vcc->dev_data = (void *) vcc; + + // no failures beyond this point + + // we are not really "immediately before allocating the connection + // identifier in hardware", but it will just have to do! + set_bit(ATM_VF_ADDR,&atm_vcc->flags); + + if (txtp->traffic_class != ATM_NONE) { + command cmd; + + vcc->tx_frame_bits = tx_frame_bits; + + down (&dev->vcc_sf); + if (dev->rxer[vci]) { + // RXer on the channel already, just modify rate... + cmd.request = cpu_to_be32 (SRB_MODIFY_VC_RATE); + cmd.args.modify_rate.vc = cpu_to_be32 (vci); // vpi 0 + cmd.args.modify_rate.rate = cpu_to_be32 (tx_rate_bits << SRB_RATE_SHIFT); + while (command_do (dev, &cmd)) + schedule(); + // ... 
and TX flags, preserving the RX pool + cmd.request = cpu_to_be32 (SRB_MODIFY_VC_FLAGS); + cmd.args.modify_flags.vc = cpu_to_be32 (vci); // vpi 0 + cmd.args.modify_flags.flags = cpu_to_be32 + ( (AMB_VCC(dev->rxer[vci])->rx_info.pool << SRB_POOL_SHIFT) + | (tx_vc_bits << SRB_FLAGS_SHIFT) ); + while (command_do (dev, &cmd)) + schedule(); + } else { + // no RXer on the channel, just open (with pool zero) + cmd.request = cpu_to_be32 (SRB_OPEN_VC); + cmd.args.open.vc = cpu_to_be32 (vci); // vpi 0 + cmd.args.open.flags = cpu_to_be32 (tx_vc_bits << SRB_FLAGS_SHIFT); + cmd.args.open.rate = cpu_to_be32 (tx_rate_bits << SRB_RATE_SHIFT); + while (command_do (dev, &cmd)) + schedule(); + } + dev->txer[vci].tx_present = 1; + up (&dev->vcc_sf); + } + + if (rxtp->traffic_class != ATM_NONE) { + command cmd; + + vcc->rx_info.pool = pool; + + down (&dev->vcc_sf); + /* grow RX buffer pool */ + if (!dev->rxq[pool].buffers_wanted) + dev->rxq[pool].buffers_wanted = rx_lats; + dev->rxq[pool].buffers_wanted += 1; + fill_rx_pool (dev, pool, GFP_KERNEL); + + if (dev->txer[vci].tx_present) { + // TXer on the channel already + // switch (from pool zero) to this pool, preserving the TX bits + cmd.request = cpu_to_be32 (SRB_MODIFY_VC_FLAGS); + cmd.args.modify_flags.vc = cpu_to_be32 (vci); // vpi 0 + cmd.args.modify_flags.flags = cpu_to_be32 + ( (pool << SRB_POOL_SHIFT) + | (dev->txer[vci].tx_vc_bits << SRB_FLAGS_SHIFT) ); + } else { + // no TXer on the channel, open the VC (with no rate info) + cmd.request = cpu_to_be32 (SRB_OPEN_VC); + cmd.args.open.vc = cpu_to_be32 (vci); // vpi 0 + cmd.args.open.flags = cpu_to_be32 (pool << SRB_POOL_SHIFT); + cmd.args.open.rate = cpu_to_be32 (0); + } + while (command_do (dev, &cmd)) + schedule(); + // this link allows RX frames through + dev->rxer[vci] = atm_vcc; + up (&dev->vcc_sf); + } + + // indicate readiness + set_bit(ATM_VF_READY,&atm_vcc->flags); + + return 0; +} + +/********** Close a VC **********/ + +static void amb_close (struct atm_vcc * atm_vcc) { 
+ amb_dev * dev = AMB_DEV (atm_vcc->dev); + amb_vcc * vcc = AMB_VCC (atm_vcc); + u16 vci = atm_vcc->vci; + + PRINTD (DBG_VCC|DBG_FLOW, "amb_close"); + + // indicate unreadiness + clear_bit(ATM_VF_READY,&atm_vcc->flags); + + // disable TXing + if (atm_vcc->qos.txtp.traffic_class != ATM_NONE) { + command cmd; + + down (&dev->vcc_sf); + if (dev->rxer[vci]) { + // RXer still on the channel, just modify rate... XXX not really needed + cmd.request = cpu_to_be32 (SRB_MODIFY_VC_RATE); + cmd.args.modify_rate.vc = cpu_to_be32 (vci); // vpi 0 + cmd.args.modify_rate.rate = cpu_to_be32 (0); + // ... and clear TX rate flags (XXX to stop RM cell output?), preserving RX pool + } else { + // no RXer on the channel, close channel + cmd.request = cpu_to_be32 (SRB_CLOSE_VC); + cmd.args.close.vc = cpu_to_be32 (vci); // vpi 0 + } + dev->txer[vci].tx_present = 0; + while (command_do (dev, &cmd)) + schedule(); + up (&dev->vcc_sf); + } + + // disable RXing + if (atm_vcc->qos.rxtp.traffic_class != ATM_NONE) { + command cmd; + + // this is (the?) one reason why we need the amb_vcc struct + unsigned char pool = vcc->rx_info.pool; + + down (&dev->vcc_sf); + if (dev->txer[vci].tx_present) { + // TXer still on the channel, just go to pool zero XXX not really needed + cmd.request = cpu_to_be32 (SRB_MODIFY_VC_FLAGS); + cmd.args.modify_flags.vc = cpu_to_be32 (vci); // vpi 0 + cmd.args.modify_flags.flags = cpu_to_be32 + (dev->txer[vci].tx_vc_bits << SRB_FLAGS_SHIFT); + } else { + // no TXer on the channel, close the VC + cmd.request = cpu_to_be32 (SRB_CLOSE_VC); + cmd.args.close.vc = cpu_to_be32 (vci); // vpi 0 + } + // forget the rxer - no more skbs will be pushed + if (atm_vcc != dev->rxer[vci]) + PRINTK (KERN_ERR, "%s vcc=%p rxer[vci]=%p", + "arghhh! 
we're going to die!", + vcc, dev->rxer[vci]); + dev->rxer[vci] = NULL; + while (command_do (dev, &cmd)) + schedule(); + + /* shrink RX buffer pool */ + dev->rxq[pool].buffers_wanted -= 1; + if (dev->rxq[pool].buffers_wanted == rx_lats) { + dev->rxq[pool].buffers_wanted = 0; + drain_rx_pool (dev, pool); + } + up (&dev->vcc_sf); + } + + // free our structure + kfree (vcc); + + // say the VPI/VCI is free again + clear_bit(ATM_VF_ADDR,&atm_vcc->flags); + + return; +} + +/********** Set socket options for a VC **********/ + +// int amb_getsockopt (struct atm_vcc * atm_vcc, int level, int optname, void * optval, int optlen); + +/********** Set socket options for a VC **********/ + +// int amb_setsockopt (struct atm_vcc * atm_vcc, int level, int optname, void * optval, int optlen); + +/********** Send **********/ + +static int amb_send (struct atm_vcc * atm_vcc, struct sk_buff * skb) { + amb_dev * dev = AMB_DEV(atm_vcc->dev); + amb_vcc * vcc = AMB_VCC(atm_vcc); + u16 vc = atm_vcc->vci; + unsigned int tx_len = skb->len; + unsigned char * tx_data = skb->data; + tx_simple * tx_descr; + tx_in tx; + + if (test_bit (dead, &dev->flags)) + return -EIO; + + PRINTD (DBG_FLOW|DBG_TX, "amb_send vc %x data %p len %u", + vc, tx_data, tx_len); + + dump_skb (">>>", vc, skb); + + if (!dev->txer[vc].tx_present) { + PRINTK (KERN_ERR, "attempt to send on RX-only VC %x", vc); + return -EBADFD; + } + + // this is a driver private field so we have to set it ourselves, + // despite the fact that we are _required_ to use it to check for a + // pop function + ATM_SKB(skb)->vcc = atm_vcc; + + if (skb->len > (size_t) atm_vcc->qos.txtp.max_sdu) { + PRINTK (KERN_ERR, "sk_buff length greater than agreed max_sdu, dropping..."); + return -EIO; + } + + if (check_area (skb->data, skb->len)) { + atomic_inc(&atm_vcc->stats->tx_err); + return -ENOMEM; // ? 
+ }
+
+ // allocate memory for fragments
+ tx_descr = kmalloc (sizeof(tx_simple), GFP_KERNEL);
+ if (!tx_descr) {
+ PRINTK (KERN_ERR, "could not allocate TX descriptor");
+ return -ENOMEM;
+ }
+ if (check_area (tx_descr, sizeof(tx_simple))) {
+ kfree (tx_descr);
+ return -ENOMEM;
+ }
+ PRINTD (DBG_TX, "fragment list allocated at %p", tx_descr);
+
+ tx_descr->skb = skb;
+
+ tx_descr->tx_frag.bytes = cpu_to_be32 (tx_len);
+ tx_descr->tx_frag.address = cpu_to_be32 (virt_to_bus (tx_data));
+
+ tx_descr->tx_frag_end.handle = virt_to_bus (tx_descr);
+ tx_descr->tx_frag_end.vc = 0;
+ tx_descr->tx_frag_end.next_descriptor_length = 0;
+ tx_descr->tx_frag_end.next_descriptor = 0;
+#ifdef AMB_NEW_MICROCODE
+ tx_descr->tx_frag_end.cpcs_uu = 0;
+ tx_descr->tx_frag_end.cpi = 0;
+ tx_descr->tx_frag_end.pad = 0;
+#endif
+
+ tx.vc = cpu_to_be16 (vcc->tx_frame_bits | vc);
+ tx.tx_descr_length = cpu_to_be16 (sizeof(tx_frag)+sizeof(tx_frag_end));
+ tx.tx_descr_addr = cpu_to_be32 (virt_to_bus (&tx_descr->tx_frag));
+
+ while (tx_give (dev, &tx))
+ schedule();
+ return 0;
+}
+
+/********** Change QoS on a VC **********/
+
+// int amb_change_qos (struct atm_vcc * atm_vcc, struct atm_qos * qos, int flags);
+
+/********** Free RX Socket Buffer **********/
+
+#if 0
+static void amb_free_rx_skb (struct atm_vcc * atm_vcc, struct sk_buff * skb) {
+ amb_dev * dev = AMB_DEV (atm_vcc->dev);
+ amb_vcc * vcc = AMB_VCC (atm_vcc);
+ unsigned char pool = vcc->rx_info.pool;
+ rx_in rx;
+
+ // This may be unsafe for various reasons that I cannot really guess
+ // at. However, I note that the ATM layer calls kfree_skb rather
+ // than dev_kfree_skb at this point so we are at least covered as far
+ // as buffer locking goes. There may be bugs if pcap clones RX skbs.
+ + PRINTD (DBG_FLOW|DBG_SKB, "amb_rx_free skb %p (atm_vcc %p, vcc %p)", + skb, atm_vcc, vcc); + + rx.handle = virt_to_bus (skb); + rx.host_address = cpu_to_be32 (virt_to_bus (skb->data)); + + skb->data = skb->head; + skb->tail = skb->head; + skb->len = 0; + + if (!rx_give (dev, &rx, pool)) { + // success + PRINTD (DBG_SKB|DBG_POOL, "recycled skb for pool %hu", pool); + return; + } + + // just do what the ATM layer would have done + dev_kfree_skb_any (skb); + + return; +} +#endif + +/********** Proc File Output **********/ + +static int amb_proc_read (struct atm_dev * atm_dev, loff_t * pos, char * page) { + amb_dev * dev = AMB_DEV (atm_dev); + int left = *pos; + unsigned char pool; + + PRINTD (DBG_FLOW, "amb_proc_read"); + + /* more diagnostics here? */ + + if (!left--) { + amb_stats * s = &dev->stats; + return sprintf (page, + "frames: TX OK %lu, RX OK %lu, RX bad %lu " + "(CRC %lu, long %lu, aborted %lu, unused %lu).\n", + s->tx_ok, s->rx.ok, s->rx.error, + s->rx.badcrc, s->rx.toolong, + s->rx.aborted, s->rx.unused); + } + + if (!left--) { + amb_cq * c = &dev->cq; + return sprintf (page, "cmd queue [cur/hi/max]: %u/%u/%u. 
", + c->pending, c->high, c->maximum); + } + + if (!left--) { + amb_txq * t = &dev->txq; + return sprintf (page, "TX queue [cur/max high full]: %u/%u %u %u.\n", + t->pending, t->maximum, t->high, t->filled); + } + + if (!left--) { + unsigned int count = sprintf (page, "RX queues [cur/max/req low empty]:"); + for (pool = 0; pool < NUM_RX_POOLS; ++pool) { + amb_rxq * r = &dev->rxq[pool]; + count += sprintf (page+count, " %u/%u/%u %u %u", + r->pending, r->maximum, r->buffers_wanted, r->low, r->emptied); + } + count += sprintf (page+count, ".\n"); + return count; + } + + if (!left--) { + unsigned int count = sprintf (page, "RX buffer sizes:"); + for (pool = 0; pool < NUM_RX_POOLS; ++pool) { + amb_rxq * r = &dev->rxq[pool]; + count += sprintf (page+count, " %u", r->buffer_size); + } + count += sprintf (page+count, ".\n"); + return count; + } + +#if 0 + if (!left--) { + // suni block etc? + } +#endif + + return 0; +} + +/********** Operation Structure **********/ + +static const struct atmdev_ops amb_ops = { + .open = amb_open, + .close = amb_close, + .send = amb_send, + .proc_read = amb_proc_read, + .owner = THIS_MODULE, +}; + +/********** housekeeping **********/ +static void do_housekeeping (unsigned long arg) { + amb_dev * dev = (amb_dev *) arg; + + // could collect device-specific (not driver/atm-linux) stats here + + // last resort refill once every ten seconds + fill_rx_pools (dev); + mod_timer(&dev->housekeeping, jiffies + 10*HZ); + + return; +} + +/********** creation of communication queues **********/ + +static int __devinit create_queues (amb_dev * dev, unsigned int cmds, + unsigned int txs, unsigned int * rxs, + unsigned int * rx_buffer_sizes) { + unsigned char pool; + size_t total = 0; + void * memory; + void * limit; + + PRINTD (DBG_FLOW, "create_queues %p", dev); + + total += cmds * sizeof(command); + + total += txs * (sizeof(tx_in) + sizeof(tx_out)); + + for (pool = 0; pool < NUM_RX_POOLS; ++pool) + total += rxs[pool] * (sizeof(rx_in) + sizeof(rx_out)); 
+ + memory = kmalloc (total, GFP_KERNEL); + if (!memory) { + PRINTK (KERN_ERR, "could not allocate queues"); + return -ENOMEM; + } + if (check_area (memory, total)) { + PRINTK (KERN_ERR, "queues allocated in nasty area"); + kfree (memory); + return -ENOMEM; + } + + limit = memory + total; + PRINTD (DBG_INIT, "queues from %p to %p", memory, limit); + + PRINTD (DBG_CMD, "command queue at %p", memory); + + { + command * cmd = memory; + amb_cq * cq = &dev->cq; + + cq->pending = 0; + cq->high = 0; + cq->maximum = cmds - 1; + + cq->ptrs.start = cmd; + cq->ptrs.in = cmd; + cq->ptrs.out = cmd; + cq->ptrs.limit = cmd + cmds; + + memory = cq->ptrs.limit; + } + + PRINTD (DBG_TX, "TX queue pair at %p", memory); + + { + tx_in * in = memory; + tx_out * out; + amb_txq * txq = &dev->txq; + + txq->pending = 0; + txq->high = 0; + txq->filled = 0; + txq->maximum = txs - 1; + + txq->in.start = in; + txq->in.ptr = in; + txq->in.limit = in + txs; + + memory = txq->in.limit; + out = memory; + + txq->out.start = out; + txq->out.ptr = out; + txq->out.limit = out + txs; + + memory = txq->out.limit; + } + + PRINTD (DBG_RX, "RX queue pairs at %p", memory); + + for (pool = 0; pool < NUM_RX_POOLS; ++pool) { + rx_in * in = memory; + rx_out * out; + amb_rxq * rxq = &dev->rxq[pool]; + + rxq->buffer_size = rx_buffer_sizes[pool]; + rxq->buffers_wanted = 0; + + rxq->pending = 0; + rxq->low = rxs[pool] - 1; + rxq->emptied = 0; + rxq->maximum = rxs[pool] - 1; + + rxq->in.start = in; + rxq->in.ptr = in; + rxq->in.limit = in + rxs[pool]; + + memory = rxq->in.limit; + out = memory; + + rxq->out.start = out; + rxq->out.ptr = out; + rxq->out.limit = out + rxs[pool]; + + memory = rxq->out.limit; + } + + if (memory == limit) { + return 0; + } else { + PRINTK (KERN_ERR, "bad queue alloc %p != %p (tell maintainer)", memory, limit); + kfree (limit - total); + return -ENOMEM; + } + +} + +/********** destruction of communication queues **********/ + +static void destroy_queues (amb_dev * dev) { + // all queues 
assumed empty + void * memory = dev->cq.ptrs.start; + // includes txq.in, txq.out, rxq[].in and rxq[].out + + PRINTD (DBG_FLOW, "destroy_queues %p", dev); + + PRINTD (DBG_INIT, "freeing queues at %p", memory); + kfree (memory); + + return; +} + +/********** basic loader commands and error handling **********/ +// centisecond timeouts - guessing away here +static unsigned int command_timeouts [] = { + [host_memory_test] = 15, + [read_adapter_memory] = 2, + [write_adapter_memory] = 2, + [adapter_start] = 50, + [get_version_number] = 10, + [interrupt_host] = 1, + [flash_erase_sector] = 1, + [adap_download_block] = 1, + [adap_erase_flash] = 1, + [adap_run_in_iram] = 1, + [adap_end_download] = 1 +}; + + +static unsigned int command_successes [] = { + [host_memory_test] = COMMAND_PASSED_TEST, + [read_adapter_memory] = COMMAND_READ_DATA_OK, + [write_adapter_memory] = COMMAND_WRITE_DATA_OK, + [adapter_start] = COMMAND_COMPLETE, + [get_version_number] = COMMAND_COMPLETE, + [interrupt_host] = COMMAND_COMPLETE, + [flash_erase_sector] = COMMAND_COMPLETE, + [adap_download_block] = COMMAND_COMPLETE, + [adap_erase_flash] = COMMAND_COMPLETE, + [adap_run_in_iram] = COMMAND_COMPLETE, + [adap_end_download] = COMMAND_COMPLETE +}; + +static int decode_loader_result (loader_command cmd, u32 result) +{ + int res; + const char *msg; + + if (result == command_successes[cmd]) + return 0; + + switch (result) { + case BAD_COMMAND: + res = -EINVAL; + msg = "bad command"; + break; + case COMMAND_IN_PROGRESS: + res = -ETIMEDOUT; + msg = "command in progress"; + break; + case COMMAND_PASSED_TEST: + res = 0; + msg = "command passed test"; + break; + case COMMAND_FAILED_TEST: + res = -EIO; + msg = "command failed test"; + break; + case COMMAND_READ_DATA_OK: + res = 0; + msg = "command read data ok"; + break; + case COMMAND_READ_BAD_ADDRESS: + res = -EINVAL; + msg = "command read bad address"; + break; + case COMMAND_WRITE_DATA_OK: + res = 0; + msg = "command write data ok"; + break; + case 
COMMAND_WRITE_BAD_ADDRESS: + res = -EINVAL; + msg = "command write bad address"; + break; + case COMMAND_WRITE_FLASH_FAILURE: + res = -EIO; + msg = "command write flash failure"; + break; + case COMMAND_COMPLETE: + res = 0; + msg = "command complete"; + break; + case COMMAND_FLASH_ERASE_FAILURE: + res = -EIO; + msg = "command flash erase failure"; + break; + case COMMAND_WRITE_BAD_DATA: + res = -EINVAL; + msg = "command write bad data"; + break; + default: + res = -EINVAL; + msg = "unknown error"; + PRINTD (DBG_LOAD|DBG_ERR, + "decode_loader_result got %d=%x !", + result, result); + break; + } + + PRINTK (KERN_ERR, "%s", msg); + return res; +} + +static int __devinit do_loader_command (volatile loader_block * lb, + const amb_dev * dev, loader_command cmd) { + + unsigned long timeout; + + PRINTD (DBG_FLOW|DBG_LOAD, "do_loader_command"); + + /* do a command + + Set the return value to zero, set the command type and set the + valid entry to the right magic value. The payload is already + correctly byte-ordered so we leave it alone. Hit the doorbell + with the bus address of this structure. + + */ + + lb->result = 0; + lb->command = cpu_to_be32 (cmd); + lb->valid = cpu_to_be32 (DMA_VALID); + // dump_registers (dev); + // dump_loader_block (lb); + wr_mem (dev, offsetof(amb_mem, doorbell), virt_to_bus (lb) & ~onegigmask); + + timeout = command_timeouts[cmd] * 10; + + while (!lb->result || lb->result == cpu_to_be32 (COMMAND_IN_PROGRESS)) + if (timeout) { + timeout = msleep_interruptible(timeout); + } else { + PRINTD (DBG_LOAD|DBG_ERR, "command %d timed out", cmd); + dump_registers (dev); + dump_loader_block (lb); + return -ETIMEDOUT; + } + + if (cmd == adapter_start) { + // wait for start command to acknowledge... 
+ timeout = 100; + while (rd_plain (dev, offsetof(amb_mem, doorbell))) + if (timeout) { + timeout = msleep_interruptible(timeout); + } else { + PRINTD (DBG_LOAD|DBG_ERR, "start command did not clear doorbell, res=%08x", + be32_to_cpu (lb->result)); + dump_registers (dev); + return -ETIMEDOUT; + } + return 0; + } else { + return decode_loader_result (cmd, be32_to_cpu (lb->result)); + } + +} + +/* loader: determine loader version */ + +static int __devinit get_loader_version (loader_block * lb, + const amb_dev * dev, u32 * version) { + int res; + + PRINTD (DBG_FLOW|DBG_LOAD, "get_loader_version"); + + res = do_loader_command (lb, dev, get_version_number); + if (res) + return res; + if (version) + *version = be32_to_cpu (lb->payload.version); + return 0; +} + +/* loader: write memory data blocks */ + +static int __devinit loader_write (loader_block * lb, + const amb_dev * dev, const u32 * data, + u32 address, unsigned int count) { + unsigned int i; + transfer_block * tb = &lb->payload.transfer; + + PRINTD (DBG_FLOW|DBG_LOAD, "loader_write"); + + if (count > MAX_TRANSFER_DATA) + return -EINVAL; + tb->address = cpu_to_be32 (address); + tb->count = cpu_to_be32 (count); + for (i = 0; i < count; ++i) + tb->data[i] = cpu_to_be32 (data[i]); + return do_loader_command (lb, dev, write_adapter_memory); +} + +/* loader: verify memory data blocks */ + +static int __devinit loader_verify (loader_block * lb, + const amb_dev * dev, const u32 * data, + u32 address, unsigned int count) { + unsigned int i; + transfer_block * tb = &lb->payload.transfer; + int res; + + PRINTD (DBG_FLOW|DBG_LOAD, "loader_verify"); + + if (count > MAX_TRANSFER_DATA) + return -EINVAL; + tb->address = cpu_to_be32 (address); + tb->count = cpu_to_be32 (count); + res = do_loader_command (lb, dev, read_adapter_memory); + if (!res) + for (i = 0; i < count; ++i) + if (tb->data[i] != cpu_to_be32 (data[i])) { + res = -EINVAL; + break; + } + return res; +} + +/* loader: start microcode */ + +static int __devinit 
loader_start (loader_block * lb, + const amb_dev * dev, u32 address) { + PRINTD (DBG_FLOW|DBG_LOAD, "loader_start"); + + lb->payload.start = cpu_to_be32 (address); + return do_loader_command (lb, dev, adapter_start); +} + +/********** reset card **********/ + +static inline void sf (const char * msg) +{ + PRINTK (KERN_ERR, "self-test failed: %s", msg); +} + +static int amb_reset (amb_dev * dev, int diags) { + u32 word; + + PRINTD (DBG_FLOW|DBG_LOAD, "amb_reset"); + + word = rd_plain (dev, offsetof(amb_mem, reset_control)); + // put card into reset state + wr_plain (dev, offsetof(amb_mem, reset_control), word | AMB_RESET_BITS); + // wait a short while + udelay (10); +#if 1 + // put card into known good state + wr_plain (dev, offsetof(amb_mem, interrupt_control), AMB_DOORBELL_BITS); + // clear all interrupts just in case + wr_plain (dev, offsetof(amb_mem, interrupt), -1); +#endif + // clear self-test done flag + wr_plain (dev, offsetof(amb_mem, mb.loader.ready), 0); + // take card out of reset state + wr_plain (dev, offsetof(amb_mem, reset_control), word &~ AMB_RESET_BITS); + + if (diags) { + unsigned long timeout; + // 4.2 second wait + msleep(4200); + // half second time-out + timeout = 500; + while (!rd_plain (dev, offsetof(amb_mem, mb.loader.ready))) + if (timeout) { + timeout = msleep_interruptible(timeout); + } else { + PRINTD (DBG_LOAD|DBG_ERR, "reset timed out"); + return -ETIMEDOUT; + } + + // get results of self-test + // XXX double check byte-order + word = rd_mem (dev, offsetof(amb_mem, mb.loader.result)); + if (word & SELF_TEST_FAILURE) { + if (word & GPINT_TST_FAILURE) + sf ("interrupt"); + if (word & SUNI_DATA_PATTERN_FAILURE) + sf ("SUNI data pattern"); + if (word & SUNI_DATA_BITS_FAILURE) + sf ("SUNI data bits"); + if (word & SUNI_UTOPIA_FAILURE) + sf ("SUNI UTOPIA interface"); + if (word & SUNI_FIFO_FAILURE) + sf ("SUNI cell buffer FIFO"); + if (word & SRAM_FAILURE) + sf ("bad SRAM"); + // better return value? 
+ return -EIO; + } + + } + return 0; +} + +/********** transfer and start the microcode **********/ + +static int __devinit ucode_init (loader_block * lb, amb_dev * dev) { + unsigned int i = 0; + unsigned int total = 0; + const u32 * pointer = ucode_data; + u32 address; + unsigned int count; + int res; + + PRINTD (DBG_FLOW|DBG_LOAD, "ucode_init"); + + while (address = ucode_regions[i].start, + count = ucode_regions[i].count) { + PRINTD (DBG_LOAD, "starting region (%x, %u)", address, count); + while (count) { + unsigned int words; + if (count <= MAX_TRANSFER_DATA) + words = count; + else + words = MAX_TRANSFER_DATA; + total += words; + res = loader_write (lb, dev, pointer, address, words); + if (res) + return res; + res = loader_verify (lb, dev, pointer, address, words); + if (res) + return res; + count -= words; + address += sizeof(u32) * words; + pointer += words; + } + i += 1; + } + if (*pointer == 0xdeadbeef) { + return loader_start (lb, dev, ucode_start); + } else { + // cast needed as there is no %? 
for pointer differences + PRINTD (DBG_LOAD|DBG_ERR, + "offset=%li, *pointer=%x, address=%x, total=%u", + (long) (pointer - ucode_data), *pointer, address, total); + PRINTK (KERN_ERR, "incorrect microcode data"); + return -ENOMEM; + } +} + +/********** give adapter parameters **********/ + +static inline __be32 bus_addr(void * addr) { + return cpu_to_be32 (virt_to_bus (addr)); +} + +static int __devinit amb_talk (amb_dev * dev) { + adap_talk_block a; + unsigned char pool; + unsigned long timeout; + + PRINTD (DBG_FLOW, "amb_talk %p", dev); + + a.command_start = bus_addr (dev->cq.ptrs.start); + a.command_end = bus_addr (dev->cq.ptrs.limit); + a.tx_start = bus_addr (dev->txq.in.start); + a.tx_end = bus_addr (dev->txq.in.limit); + a.txcom_start = bus_addr (dev->txq.out.start); + a.txcom_end = bus_addr (dev->txq.out.limit); + + for (pool = 0; pool < NUM_RX_POOLS; ++pool) { + // the other "a" items are set up by the adapter + a.rec_struct[pool].buffer_start = bus_addr (dev->rxq[pool].in.start); + a.rec_struct[pool].buffer_end = bus_addr (dev->rxq[pool].in.limit); + a.rec_struct[pool].rx_start = bus_addr (dev->rxq[pool].out.start); + a.rec_struct[pool].rx_end = bus_addr (dev->rxq[pool].out.limit); + a.rec_struct[pool].buffer_size = cpu_to_be32 (dev->rxq[pool].buffer_size); + } + +#ifdef AMB_NEW_MICROCODE + // disable fast PLX prefetching + a.init_flags = 0; +#endif + + // pass the structure + wr_mem (dev, offsetof(amb_mem, doorbell), virt_to_bus (&a)); + + // 2.2 second wait (must not touch doorbell during 2 second DMA test) + msleep(2200); + // give the adapter another half second?
+ timeout = 500; + while (rd_plain (dev, offsetof(amb_mem, doorbell))) + if (timeout) { + timeout = msleep_interruptible(timeout); + } else { + PRINTD (DBG_INIT|DBG_ERR, "adapter init timed out"); + return -ETIMEDOUT; + } + + return 0; +} + +// get microcode version +static void __devinit amb_ucode_version (amb_dev * dev) { + u32 major; + u32 minor; + command cmd; + cmd.request = cpu_to_be32 (SRB_GET_VERSION); + while (command_do (dev, &cmd)) { + set_current_state(TASK_UNINTERRUPTIBLE); + schedule(); + } + major = be32_to_cpu (cmd.args.version.major); + minor = be32_to_cpu (cmd.args.version.minor); + PRINTK (KERN_INFO, "microcode version is %u.%u", major, minor); +} + +// swap bits within byte to get Ethernet ordering +static u8 bit_swap (u8 byte) +{ + const u8 swap[] = { + 0x0, 0x8, 0x4, 0xc, + 0x2, 0xa, 0x6, 0xe, + 0x1, 0x9, 0x5, 0xd, + 0x3, 0xb, 0x7, 0xf + }; + return ((swap[byte & 0xf]<<4) | swap[byte>>4]); +} + +// get end station address +static void __devinit amb_esi (amb_dev * dev, u8 * esi) { + u32 lower4; + u16 upper2; + command cmd; + + cmd.request = cpu_to_be32 (SRB_GET_BIA); + while (command_do (dev, &cmd)) { + set_current_state(TASK_UNINTERRUPTIBLE); + schedule(); + } + lower4 = be32_to_cpu (cmd.args.bia.lower4); + upper2 = be32_to_cpu (cmd.args.bia.upper2); + PRINTD (DBG_LOAD, "BIA: lower4: %08x, upper2 %04x", lower4, upper2); + + if (esi) { + unsigned int i; + + PRINTDB (DBG_INIT, "ESI:"); + for (i = 0; i < ESI_LEN; ++i) { + if (i < 4) + esi[i] = bit_swap (lower4>>(8*i)); + else + esi[i] = bit_swap (upper2>>(8*(i-4))); + PRINTDM (DBG_INIT, " %02x", esi[i]); + } + + PRINTDE (DBG_INIT, ""); + } + + return; +} + +static void fixup_plx_window (amb_dev *dev, loader_block *lb) +{ + // fix up the PLX-mapped window base address to match the block + unsigned long blb; + u32 mapreg; + blb = virt_to_bus(lb); + // the kernel stack had better not ever cross a 1Gb boundary! 
+ mapreg = rd_plain (dev, offsetof(amb_mem, stuff[10])); + mapreg &= ~onegigmask; + mapreg |= blb & onegigmask; + wr_plain (dev, offsetof(amb_mem, stuff[10]), mapreg); + return; +} + +static int __devinit amb_init (amb_dev * dev) +{ + loader_block lb; + + u32 version; + + if (amb_reset (dev, 1)) { + PRINTK (KERN_ERR, "card reset failed!"); + } else { + fixup_plx_window (dev, &lb); + + if (get_loader_version (&lb, dev, &version)) { + PRINTK (KERN_INFO, "failed to get loader version"); + } else { + PRINTK (KERN_INFO, "loader version is %08x", version); + + if (ucode_init (&lb, dev)) { + PRINTK (KERN_ERR, "microcode failure"); + } else if (create_queues (dev, cmds, txs, rxs, rxs_bs)) { + PRINTK (KERN_ERR, "failed to get memory for queues"); + } else { + + if (amb_talk (dev)) { + PRINTK (KERN_ERR, "adapter did not accept queues"); + } else { + + amb_ucode_version (dev); + return 0; + + } /* amb_talk */ + + destroy_queues (dev); + } /* create_queues, ucode_init */ + + amb_reset (dev, 0); + } /* get_loader_version */ + + } /* amb_reset */ + + return -EINVAL; +} + +static void setup_dev(amb_dev *dev, struct pci_dev *pci_dev) +{ + unsigned char pool; + memset (dev, 0, sizeof(amb_dev)); + + // set up known dev items straight away + dev->pci_dev = pci_dev; + pci_set_drvdata(pci_dev, dev); + + dev->iobase = pci_resource_start (pci_dev, 1); + dev->irq = pci_dev->irq; + dev->membase = bus_to_virt(pci_resource_start(pci_dev, 0)); + + // flags (currently only dead) + dev->flags = 0; + + // Allocate cell rates (fibre) + // ATM_OC3_PCR = 155520000/8/270*260/53 - 29/53 + // to be really pedantic, this should be ATM_OC3c_PCR + dev->tx_avail = ATM_OC3_PCR; + dev->rx_avail = ATM_OC3_PCR; + +#ifdef FILL_RX_POOLS_IN_BH + // initialise bottom half + INIT_WORK(&dev->bh, (void (*)(void *)) fill_rx_pools, dev); +#endif + + // semaphore for txer/rxer modifications - we cannot use a + // spinlock as the critical region needs to switch processes + init_MUTEX (&dev->vcc_sf); + // queue
manipulation spinlocks; we want atomic reads and + // writes to the queue descriptors (handles IRQ and SMP) + // consider replacing "int pending" -> "atomic_t available" + // => problem related to who gets to move queue pointers + spin_lock_init (&dev->cq.lock); + spin_lock_init (&dev->txq.lock); + for (pool = 0; pool < NUM_RX_POOLS; ++pool) + spin_lock_init (&dev->rxq[pool].lock); +} + +static void setup_pci_dev(struct pci_dev *pci_dev) +{ + unsigned char lat; + + // enable bus master accesses + pci_set_master(pci_dev); + + // frobnicate latency (upwards, usually) + pci_read_config_byte (pci_dev, PCI_LATENCY_TIMER, &lat); + + if (!pci_lat) + pci_lat = (lat < MIN_PCI_LATENCY) ? MIN_PCI_LATENCY : lat; + + if (lat != pci_lat) { + PRINTK (KERN_INFO, "Changing PCI latency timer from %hu to %hu", + lat, pci_lat); + pci_write_config_byte(pci_dev, PCI_LATENCY_TIMER, pci_lat); + } +} + +static int __devinit amb_probe(struct pci_dev *pci_dev, const struct pci_device_id *pci_ent) +{ + amb_dev * dev; + int err; + unsigned int irq; + + err = pci_enable_device(pci_dev); + if (err < 0) { + PRINTK (KERN_ERR, "failed to enable PCI device"); + goto out; + } + + // read resources from PCI configuration space + irq = pci_dev->irq; + + if (pci_dev->device == PCI_DEVICE_ID_MADGE_AMBASSADOR_BAD) { + PRINTK (KERN_ERR, "skipped broken (PLX rev 2) card"); + err = -EINVAL; + goto out_disable; + } + + PRINTD (DBG_INFO, "found Madge ATM adapter (amb) at" + " IO %lx, IRQ %u, MEM %p", pci_resource_start(pci_dev, 1), + irq, bus_to_virt(pci_resource_start(pci_dev, 0))); + + // check IO region + err = pci_request_region(pci_dev, 1, DEV_LABEL); + if (err < 0) { + PRINTK (KERN_ERR, "IO range already in use!"); + goto out_disable; + } + + dev = kmalloc (sizeof(amb_dev), GFP_KERNEL); + if (!dev) { + PRINTK (KERN_ERR, "out of memory!"); + err = -ENOMEM; + goto out_release; + } + + setup_dev(dev, pci_dev); + + err = amb_init(dev); + if (err < 0) { + PRINTK (KERN_ERR, "adapter initialisation
failure"); + goto out_free; + } + + setup_pci_dev(pci_dev); + + // grab (but share) IRQ and install handler + err = request_irq(irq, interrupt_handler, SA_SHIRQ, DEV_LABEL, dev); + if (err < 0) { + PRINTK (KERN_ERR, "request IRQ failed!"); + goto out_reset; + } + + dev->atm_dev = atm_dev_register (DEV_LABEL, &amb_ops, -1, NULL); + if (!dev->atm_dev) { + PRINTD (DBG_ERR, "failed to register Madge ATM adapter"); + err = -EINVAL; + goto out_free_irq; + } + + PRINTD (DBG_INFO, "registered Madge ATM adapter (no. %d) (%p) at %p", + dev->atm_dev->number, dev, dev->atm_dev); + dev->atm_dev->dev_data = (void *) dev; + + // register our address + amb_esi (dev, dev->atm_dev->esi); + + // 0 bits for vpi, 10 bits for vci + dev->atm_dev->ci_range.vpi_bits = NUM_VPI_BITS; + dev->atm_dev->ci_range.vci_bits = NUM_VCI_BITS; + + init_timer(&dev->housekeeping); + dev->housekeeping.function = do_housekeeping; + dev->housekeeping.data = (unsigned long) dev; + mod_timer(&dev->housekeeping, jiffies); + + // enable host interrupts + interrupts_on (dev); + +out: + return err; + +out_free_irq: + free_irq(irq, dev); +out_reset: + amb_reset(dev, 0); +out_free: + kfree(dev); +out_release: + pci_release_region(pci_dev, 1); +out_disable: + pci_disable_device(pci_dev); + goto out; +} + + +static void __devexit amb_remove_one(struct pci_dev *pci_dev) +{ + struct amb_dev *dev; + + dev = pci_get_drvdata(pci_dev); + + PRINTD(DBG_INFO|DBG_INIT, "closing %p (atm_dev = %p)", dev, dev->atm_dev); + del_timer_sync(&dev->housekeeping); + // the drain should not be necessary + drain_rx_pools(dev); + interrupts_off(dev); + amb_reset(dev, 0); + free_irq(dev->irq, dev); + pci_disable_device(pci_dev); + destroy_queues(dev); + atm_dev_deregister(dev->atm_dev); + kfree(dev); + pci_release_region(pci_dev, 1); +} + +static void __init amb_check_args (void) { + unsigned char pool; + unsigned int max_rx_size; + +#ifdef DEBUG_AMBASSADOR + PRINTK (KERN_NOTICE, "debug bitmap is %hx", debug &= DBG_MASK); +#else + if 
(debug) + PRINTK (KERN_NOTICE, "no debugging support"); +#endif + + if (cmds < MIN_QUEUE_SIZE) + PRINTK (KERN_NOTICE, "cmds has been raised to %u", + cmds = MIN_QUEUE_SIZE); + + if (txs < MIN_QUEUE_SIZE) + PRINTK (KERN_NOTICE, "txs has been raised to %u", + txs = MIN_QUEUE_SIZE); + + for (pool = 0; pool < NUM_RX_POOLS; ++pool) + if (rxs[pool] < MIN_QUEUE_SIZE) + PRINTK (KERN_NOTICE, "rxs[%hu] has been raised to %u", + pool, rxs[pool] = MIN_QUEUE_SIZE); + + // buffer sizes should be greater than zero and strictly increasing + max_rx_size = 0; + for (pool = 0; pool < NUM_RX_POOLS; ++pool) + if (rxs_bs[pool] <= max_rx_size) + PRINTK (KERN_NOTICE, "useless pool (rxs_bs[%hu] = %u)", + pool, rxs_bs[pool]); + else + max_rx_size = rxs_bs[pool]; + + if (rx_lats < MIN_RX_BUFFERS) + PRINTK (KERN_NOTICE, "rx_lats has been raised to %u", + rx_lats = MIN_RX_BUFFERS); + + return; +} + +/********** module stuff **********/ + +MODULE_AUTHOR(maintainer_string); +MODULE_DESCRIPTION(description_string); +MODULE_LICENSE("GPL"); +module_param(debug, ushort, 0644); +module_param(cmds, uint, 0); +module_param(txs, uint, 0); +module_param_array(rxs, uint, NULL, 0); +module_param_array(rxs_bs, uint, NULL, 0); +module_param(rx_lats, uint, 0); +module_param(pci_lat, byte, 0); +MODULE_PARM_DESC(debug, "debug bitmap, see .h file"); +MODULE_PARM_DESC(cmds, "number of command queue entries"); +MODULE_PARM_DESC(txs, "number of TX queue entries"); +MODULE_PARM_DESC(rxs, "number of RX queue entries [" __MODULE_STRING(NUM_RX_POOLS) "]"); +MODULE_PARM_DESC(rxs_bs, "size of RX buffers [" __MODULE_STRING(NUM_RX_POOLS) "]"); +MODULE_PARM_DESC(rx_lats, "number of extra buffers to cope with RX latencies"); +MODULE_PARM_DESC(pci_lat, "PCI latency in bus cycles"); + +/********** module entry **********/ + +static struct pci_device_id amb_pci_tbl[] = { + { PCI_VENDOR_ID_MADGE, PCI_DEVICE_ID_MADGE_AMBASSADOR, PCI_ANY_ID, PCI_ANY_ID, + 0, 0, 0 }, + { PCI_VENDOR_ID_MADGE, PCI_DEVICE_ID_MADGE_AMBASSADOR_BAD,
PCI_ANY_ID, PCI_ANY_ID, + 0, 0, 0 }, + { 0, } +}; + +MODULE_DEVICE_TABLE(pci, amb_pci_tbl); + +static struct pci_driver amb_driver = { + .name = "amb", + .probe = amb_probe, + .remove = __devexit_p(amb_remove_one), + .id_table = amb_pci_tbl, +}; + +static int __init amb_module_init (void) +{ + PRINTD (DBG_FLOW|DBG_INIT, "init_module"); + + // sanity check - cast needed as printk does not support %Zu + if (sizeof(amb_mem) != 4*16 + 4*12) { + PRINTK (KERN_ERR, "Fix amb_mem (is %lu words).", + (unsigned long) sizeof(amb_mem)); + return -ENOMEM; + } + + show_version(); + + amb_check_args(); + + // get the juice + return pci_register_driver(&amb_driver); +} + +/********** module exit **********/ + +static void __exit amb_module_exit (void) +{ + PRINTD (DBG_FLOW|DBG_INIT, "cleanup_module"); + + return pci_unregister_driver(&amb_driver); +} + +module_init(amb_module_init); +module_exit(amb_module_exit); diff --git a/drivers/atm/ambassador.h b/drivers/atm/ambassador.h new file mode 100644 index 000000000000..84a93063cfe1 --- /dev/null +++ b/drivers/atm/ambassador.h @@ -0,0 +1,679 @@ +/* + Madge Ambassador ATM Adapter driver. + Copyright (C) 1995-1999 Madge Networks Ltd. + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. 
+ + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + + The GNU GPL is contained in /usr/doc/copyright/GPL on a Debian + system and in the file COPYING in the Linux kernel source. +*/ + +#ifndef AMBASSADOR_H +#define AMBASSADOR_H + +#include <linux/config.h> + +#ifdef CONFIG_ATM_AMBASSADOR_DEBUG +#define DEBUG_AMBASSADOR +#endif + +#define DEV_LABEL "amb" + +#ifndef PCI_VENDOR_ID_MADGE +#define PCI_VENDOR_ID_MADGE 0x10B6 +#endif +#ifndef PCI_DEVICE_ID_MADGE_AMBASSADOR +#define PCI_DEVICE_ID_MADGE_AMBASSADOR 0x1001 +#endif +#ifndef PCI_DEVICE_ID_MADGE_AMBASSADOR_BAD +#define PCI_DEVICE_ID_MADGE_AMBASSADOR_BAD 0x1002 +#endif + +// diagnostic output + +#define PRINTK(severity,format,args...) \ + printk(severity DEV_LABEL ": " format "\n" , ## args) + +#ifdef DEBUG_AMBASSADOR + +#define DBG_ERR 0x0001 +#define DBG_WARN 0x0002 +#define DBG_INFO 0x0004 +#define DBG_INIT 0x0008 +#define DBG_LOAD 0x0010 +#define DBG_VCC 0x0020 +#define DBG_QOS 0x0040 +#define DBG_CMD 0x0080 +#define DBG_TX 0x0100 +#define DBG_RX 0x0200 +#define DBG_SKB 0x0400 +#define DBG_POOL 0x0800 +#define DBG_IRQ 0x1000 +#define DBG_FLOW 0x2000 +#define DBG_REGS 0x4000 +#define DBG_DATA 0x8000 +#define DBG_MASK 0xffff + +/* the ## prevents the annoying double expansion of the macro arguments */ +/* KERN_INFO is used since KERN_DEBUG often does not make it to the console */ +#define PRINTDB(bits,format,args...) \ + ( (debug & (bits)) ? printk (KERN_INFO DEV_LABEL ": " format , ## args) : 1 ) +#define PRINTDM(bits,format,args...) \ + ( (debug & (bits)) ? printk (format , ## args) : 1 ) +#define PRINTDE(bits,format,args...) \ + ( (debug & (bits)) ? printk (format "\n" , ## args) : 1 ) +#define PRINTD(bits,format,args...) \ + ( (debug & (bits)) ? printk (KERN_INFO DEV_LABEL ": " format "\n" , ## args) : 1 ) + +#else + +#define PRINTD(bits,format,args...)
+#define PRINTDB(bits,format,args...) +#define PRINTDM(bits,format,args...) +#define PRINTDE(bits,format,args...) + +#endif + +#define PRINTDD(bits,format,args...) +#define PRINTDDB(sec,fmt,args...) +#define PRINTDDM(sec,fmt,args...) +#define PRINTDDE(sec,fmt,args...) + +// tunable values (?) + +/* MUST be powers of two -- why ? */ +#define COM_Q_ENTRIES 8 +#define TX_Q_ENTRIES 32 +#define RX_Q_ENTRIES 64 + +// fixed values + +// guessing +#define AMB_EXTENT 0x80 + +// Minimum allowed size for an Ambassador queue +#define MIN_QUEUE_SIZE 2 + +// Ambassador microcode allows 1 to 4 pools, we use 4 (simpler) +#define NUM_RX_POOLS 4 + +// minimum RX buffers required to cope with replenishing delay +#define MIN_RX_BUFFERS 1 + +// minimum PCI latency we will tolerate (32 IS TOO SMALL) +#define MIN_PCI_LATENCY 64 // 255 + +// VCs supported by card (VPI always 0) +#define NUM_VPI_BITS 0 +#define NUM_VCI_BITS 10 +#define NUM_VCS 1024 + +/* The status field bits defined so far. */ +#define RX_ERR 0x8000 // always present if there is an error (hmm) +#define CRC_ERR 0x4000 // AAL5 CRC error +#define LEN_ERR 0x2000 // overlength frame +#define ABORT_ERR 0x1000 // zero length field in received frame +#define UNUSED_ERR 0x0800 // buffer returned unused + +// Adaptor commands + +#define SRB_OPEN_VC 0 +/* par_0: dwordswap(VC_number) */ +/* par_1: dwordswap(flags<<16) or wordswap(flags)*/ +/* flags: */ + +/* LANE: 0x0004 */ +/* NOT_UBR: 0x0008 */ +/* ABR: 0x0010 */ + +/* RxPool0: 0x0000 */ +/* RxPool1: 0x0020 */ +/* RxPool2: 0x0040 */ +/* RxPool3: 0x0060 */ + +/* par_2: dwordswap(fp_rate<<16) or wordswap(fp_rate) */ + +#define SRB_CLOSE_VC 1 +/* par_0: dwordswap(VC_number) */ + +#define SRB_GET_BIA 2 +/* returns */ +/* par_0: dwordswap(half BIA) */ +/* par_1: dwordswap(half BIA) */ + +#define SRB_GET_SUNI_STATS 3 +/* par_0: dwordswap(physical_host_address) */ + +#define SRB_SET_BITS_8 4 +#define SRB_SET_BITS_16 5 +#define SRB_SET_BITS_32 6 +#define SRB_CLEAR_BITS_8 7 +#define 
SRB_CLEAR_BITS_16 8 +#define SRB_CLEAR_BITS_32 9 +/* par_0: dwordswap(ATMizer address) */ +/* par_1: dwordswap(mask) */ + +#define SRB_SET_8 10 +#define SRB_SET_16 11 +#define SRB_SET_32 12 +/* par_0: dwordswap(ATMizer address) */ +/* par_1: dwordswap(data) */ + +#define SRB_GET_32 13 +/* par_0: dwordswap(ATMizer address) */ +/* returns */ +/* par_1: dwordswap(ATMizer data) */ + +#define SRB_GET_VERSION 14 +/* returns */ +/* par_0: dwordswap(Major Version) */ +/* par_1: dwordswap(Minor Version) */ + +#define SRB_FLUSH_BUFFER_Q 15 +/* Only flags to define which buffer pool; all others must be zero */ +/* par_0: dwordswap(flags<<16) or wordswap(flags)*/ + +#define SRB_GET_DMA_SPEEDS 16 +/* returns */ +/* par_0: dwordswap(Read speed (bytes/sec)) */ +/* par_1: dwordswap(Write speed (bytes/sec)) */ + +#define SRB_MODIFY_VC_RATE 17 +/* par_0: dwordswap(VC_number) */ +/* par_1: dwordswap(fp_rate<<16) or wordswap(fp_rate) */ + +#define SRB_MODIFY_VC_FLAGS 18 +/* par_0: dwordswap(VC_number) */ +/* par_1: dwordswap(flags<<16) or wordswap(flags)*/ + +/* flags: */ + +/* LANE: 0x0004 */ +/* NOT_UBR: 0x0008 */ +/* ABR: 0x0010 */ + +/* RxPool0: 0x0000 */ +/* RxPool1: 0x0020 */ +/* RxPool2: 0x0040 */ +/* RxPool3: 0x0060 */ + +#define SRB_RATE_SHIFT 16 +#define SRB_POOL_SHIFT (SRB_FLAGS_SHIFT+5) +#define SRB_FLAGS_SHIFT 16 + +#define SRB_STOP_TASKING 19 +#define SRB_START_TASKING 20 +#define SRB_SHUT_DOWN 21 +#define MAX_SRB 21 + +#define SRB_COMPLETE 0xffffffff + +#define TX_FRAME 0x80000000 + +// number of types of SRB MUST be a power of two -- why? 
+#define NUM_OF_SRB 32 + +// number of bits of period info for rate +#define MAX_RATE_BITS 6 + +#define TX_UBR 0x0000 +#define TX_UBR_CAPPED 0x0008 +#define TX_ABR 0x0018 +#define TX_FRAME_NOTCAP 0x0000 +#define TX_FRAME_CAPPED 0x8000 + +#define FP_155_RATE 0x24b1 +#define FP_25_RATE 0x1f9d + +/* #define VERSION_NUMBER 0x01000000 // initial release */ +/* #define VERSION_NUMBER 0x01010000 // fixed startup probs PLX MB0 not cleared */ +/* #define VERSION_NUMBER 0x01020000 // changed SUNI reset timings; allowed r/w onchip */ + +/* #define VERSION_NUMBER 0x01030000 // clear local doorbell int reg on reset */ +/* #define VERSION_NUMBER 0x01040000 // PLX bug work around version PLUS */ +/* remove race conditions on basic interface */ +/* indicate to the host that diagnostics */ +/* have finished; if failed, how and what */ +/* failed */ +/* fix host memory test to fix PLX bug */ +/* allow flash upgrade and BIA upgrade directly */ +/* */ +#define VERSION_NUMBER 0x01050025 /* Jason's first hacked version. 
*/ +/* Change in download algorithm */ + +#define DMA_VALID 0xb728e149 /* completely random */ + +#define FLASH_BASE 0xa0c00000 +#define FLASH_SIZE 0x00020000 /* 128K */ +#define BIA_BASE (FLASH_BASE+0x0001c000) /* Flash Sector 7 */ +#define BIA_ADDRESS ((void *)0xa0c1c000) +#define PLX_BASE 0xe0000000 + +typedef enum { + host_memory_test = 1, + read_adapter_memory, + write_adapter_memory, + adapter_start, + get_version_number, + interrupt_host, + flash_erase_sector, + adap_download_block = 0x20, + adap_erase_flash, + adap_run_in_iram, + adap_end_download +} loader_command; + +#define BAD_COMMAND (-1) +#define COMMAND_IN_PROGRESS 1 +#define COMMAND_PASSED_TEST 2 +#define COMMAND_FAILED_TEST 3 +#define COMMAND_READ_DATA_OK 4 +#define COMMAND_READ_BAD_ADDRESS 5 +#define COMMAND_WRITE_DATA_OK 6 +#define COMMAND_WRITE_BAD_ADDRESS 7 +#define COMMAND_WRITE_FLASH_FAILURE 8 +#define COMMAND_COMPLETE 9 +#define COMMAND_FLASH_ERASE_FAILURE 10 +#define COMMAND_WRITE_BAD_DATA 11 + +/* bit fields for mailbox[0] return values */ + +#define GPINT_TST_FAILURE 0x00000001 +#define SUNI_DATA_PATTERN_FAILURE 0x00000002 +#define SUNI_DATA_BITS_FAILURE 0x00000004 +#define SUNI_UTOPIA_FAILURE 0x00000008 +#define SUNI_FIFO_FAILURE 0x00000010 +#define SRAM_FAILURE 0x00000020 +#define SELF_TEST_FAILURE 0x0000003f + +/* mailbox[1] = 0 in progress, -1 on completion */ +/* mailbox[2] = current test 00 00 test(8 bit) phase(8 bit) */ +/* mailbox[3] = last failure, 00 00 test(8 bit) phase(8 bit) */ +/* mailbox[4],mailbox[5],mailbox[6] random failure values */ + +/* PLX/etc. 
memory map including command structure */ + +/* These registers may also be memory mapped in PCI memory */ + +#define UNUSED_LOADER_MAILBOXES 6 + +typedef struct { + u32 stuff[16]; + union { + struct { + u32 result; + u32 ready; + u32 stuff[UNUSED_LOADER_MAILBOXES]; + } loader; + struct { + u32 cmd_address; + u32 tx_address; + u32 rx_address[NUM_RX_POOLS]; + u32 gen_counter; + u32 spare; + } adapter; + } mb; + u32 doorbell; + u32 interrupt; + u32 interrupt_control; + u32 reset_control; +} amb_mem; + +/* RESET bit, IRQ (card to host) and doorbell (host to card) enable bits */ +#define AMB_RESET_BITS 0x40000000 +#define AMB_INTERRUPT_BITS 0x00000300 +#define AMB_DOORBELL_BITS 0x00030000 + +/* loader commands */ + +#define MAX_COMMAND_DATA 13 +#define MAX_TRANSFER_DATA 11 + +typedef struct { + __be32 address; + __be32 count; + __be32 data[MAX_TRANSFER_DATA]; +} transfer_block; + +typedef struct { + __be32 result; + __be32 command; + union { + transfer_block transfer; + __be32 version; + __be32 start; + __be32 data[MAX_COMMAND_DATA]; + } payload; + __be32 valid; +} loader_block; + +/* command queue */ + +/* Again all data are BIG ENDIAN */ + +typedef struct { + union { + struct { + __be32 vc; + __be32 flags; + __be32 rate; + } open; + struct { + __be32 vc; + __be32 rate; + } modify_rate; + struct { + __be32 vc; + __be32 flags; + } modify_flags; + struct { + __be32 vc; + } close; + struct { + __be32 lower4; + __be32 upper2; + } bia; + struct { + __be32 address; + } suni; + struct { + __be32 major; + __be32 minor; + } version; + struct { + __be32 read; + __be32 write; + } speed; + struct { + __be32 flags; + } flush; + struct { + __be32 address; + __be32 data; + } memory; + __be32 par[3]; + } args; + __be32 request; +} command; + +/* transmit queues and associated structures */ + +/* The hosts transmit structure. All BIG ENDIAN; host address + restricted to first 1GByte, but address passed to the card must + have the top MS bit or'ed in. 
-- check this */ + +/* TX is described by 1+ tx_frags followed by a tx_frag_end */ + +typedef struct { + __be32 bytes; + __be32 address; +} tx_frag; + +/* apart from handle the fields here are for the adapter to play with + and should be set to zero */ + +typedef struct { + u32 handle; + u16 vc; + u16 next_descriptor_length; + u32 next_descriptor; +#ifdef AMB_NEW_MICROCODE + u8 cpcs_uu; + u8 cpi; + u16 pad; +#endif +} tx_frag_end; + +typedef struct { + tx_frag tx_frag; + tx_frag_end tx_frag_end; + struct sk_buff * skb; +} tx_simple; + +#if 0 +typedef union { + tx_frag fragment; + tx_frag_end end_of_list; +} tx_descr; +#endif + +/* this "points" to the sequence of fragments and trailer */ + +typedef struct { + __be16 vc; + __be16 tx_descr_length; + __be32 tx_descr_addr; +} tx_in; + +/* handle is the handle from tx_in */ + +typedef struct { + u32 handle; +} tx_out; + +/* receive frame structure */ + +/* All BIG ENDIAN; handle is as passed from host; length is zero for + aborted frames, and frames with errors. Header is actually VC + number, lec-id is NOT yet supported. */ + +typedef struct { + u32 handle; + __be16 vc; + __be16 lec_id; // unused + __be16 status; + __be16 length; +} rx_out; + +/* buffer supply structure */ + +typedef struct { + u32 handle; + __be32 host_address; +} rx_in; + +/* This first structure is the area in host memory where the adapter + writes its pointer values. These pointer values are BIG ENDIAN and + reside in the same 4MB 'page' as this structure. The host gives the + adapter the address of this block by sending a doorbell interrupt + to the adapter after downloading the code and setting it going. The + addresses have the top 10 bits set to 1010000010b -- really? + + The host must initialise these before handing the block to the + adapter. 
*/
+
+typedef struct {
+  __be32 command_start; /* SRB commands completions */
+  __be32 command_end; /* SRB commands completions */
+  __be32 tx_start;
+  __be32 tx_end;
+  __be32 txcom_start; /* tx completions */
+  __be32 txcom_end; /* tx completions */
+  struct {
+    __be32 buffer_start;
+    __be32 buffer_end;
+    u32 buffer_q_get;
+    u32 buffer_q_end;
+    u32 buffer_aptr;
+    __be32 rx_start; /* rx completions */
+    __be32 rx_end;
+    u32 rx_ptr;
+    __be32 buffer_size; /* size of host buffer */
+  } rec_struct[NUM_RX_POOLS];
+#ifdef AMB_NEW_MICROCODE
+  u16 init_flags;
+  u16 talk_block_spare;
+#endif
+} adap_talk_block;
+
+/* This structure must be kept in line with the vcr image in sarmain.h
+
+   This is the structure in the host filled in by the adapter by
+   GET_SUNI_STATS */
+
+typedef struct {
+  u8 racp_chcs;
+  u8 racp_uhcs;
+  u16 spare;
+  u32 racp_rcell;
+  u32 tacp_tcell;
+  u32 flags;
+  u32 dropped_cells;
+  u32 dropped_frames;
+} suni_stats;
+
+typedef enum {
+  dead
+} amb_flags;
+
+#define NEXTQ(current,start,limit) \
+  ( (current)+1 < (limit) ?
(current)+1 : (start) )
+
+typedef struct {
+  command * start;
+  command * in;
+  command * out;
+  command * limit;
+} amb_cq_ptrs;
+
+typedef struct {
+  spinlock_t lock;
+  unsigned int pending;
+  unsigned int high;
+  unsigned int filled;
+  unsigned int maximum; // size - 1 (q implementation)
+  amb_cq_ptrs ptrs;
+} amb_cq;
+
+typedef struct {
+  spinlock_t lock;
+  unsigned int pending;
+  unsigned int high;
+  unsigned int filled;
+  unsigned int maximum; // size - 1 (q implementation)
+  struct {
+    tx_in * start;
+    tx_in * ptr;
+    tx_in * limit;
+  } in;
+  struct {
+    tx_out * start;
+    tx_out * ptr;
+    tx_out * limit;
+  } out;
+} amb_txq;
+
+typedef struct {
+  spinlock_t lock;
+  unsigned int pending;
+  unsigned int low;
+  unsigned int emptied;
+  unsigned int maximum; // size - 1 (q implementation)
+  struct {
+    rx_in * start;
+    rx_in * ptr;
+    rx_in * limit;
+  } in;
+  struct {
+    rx_out * start;
+    rx_out * ptr;
+    rx_out * limit;
+  } out;
+  unsigned int buffers_wanted;
+  unsigned int buffer_size;
+} amb_rxq;
+
+typedef struct {
+  unsigned long tx_ok;
+  struct {
+    unsigned long ok;
+    unsigned long error;
+    unsigned long badcrc;
+    unsigned long toolong;
+    unsigned long aborted;
+    unsigned long unused;
+  } rx;
+} amb_stats;
+
+// a single struct pointed to by atm_vcc->dev_data
+
+typedef struct {
+  u8 tx_vc_bits:7;
+  u8 tx_present:1;
+} amb_tx_info;
+
+typedef struct {
+  unsigned char pool;
+} amb_rx_info;
+
+typedef struct {
+  amb_rx_info rx_info;
+  u16 tx_frame_bits;
+  unsigned int tx_rate;
+  unsigned int rx_rate;
+} amb_vcc;
+
+struct amb_dev {
+  u8 irq;
+  long flags;
+  u32 iobase;
+  u32 * membase;
+
+#ifdef FILL_RX_POOLS_IN_BH
+  struct work_struct bh;
+#endif
+
+  amb_cq cq;
+  amb_txq txq;
+  amb_rxq rxq[NUM_RX_POOLS];
+
+  struct semaphore vcc_sf;
+  amb_tx_info txer[NUM_VCS];
+  struct atm_vcc * rxer[NUM_VCS];
+  unsigned int tx_avail;
+  unsigned int rx_avail;
+
+  amb_stats stats;
+
+  struct atm_dev * atm_dev;
+  struct pci_dev * pci_dev;
+  struct timer_list housekeeping;
+};
+
+typedef struct amb_dev amb_dev;
+
+#define AMB_DEV(atm_dev) ((amb_dev *) (atm_dev)->dev_data)
+#define AMB_VCC(atm_vcc) ((amb_vcc *) (atm_vcc)->dev_data)
+
+/* the microcode */
+
+typedef struct {
+  u32 start;
+  unsigned int count;
+} region;
+
+static region ucode_regions[];
+static u32 ucode_data[];
+static u32 ucode_start;
+
+/* rate rounding */
+
+typedef enum {
+  round_up,
+  round_down,
+  round_nearest
+} rounding;
+
+#endif
diff --git a/drivers/atm/atmdev_init.c b/drivers/atm/atmdev_init.c
new file mode 100644
index 000000000000..0e09e5c28e3f
--- /dev/null
+++ b/drivers/atm/atmdev_init.c
@@ -0,0 +1,54 @@
+/* drivers/atm/atmdev_init.c - ATM device driver initialization */
+
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
+
+
+#include <linux/config.h>
+#include <linux/init.h>
+
+
+#ifdef CONFIG_ATM_ZATM
+extern int zatm_detect(void);
+#endif
+#ifdef CONFIG_ATM_AMBASSADOR
+extern int amb_detect(void);
+#endif
+#ifdef CONFIG_ATM_HORIZON
+extern int hrz_detect(void);
+#endif
+#ifdef CONFIG_ATM_FORE200E
+extern int fore200e_detect(void);
+#endif
+#ifdef CONFIG_ATM_LANAI
+extern int lanai_detect(void);
+#endif
+
+
+/*
+ * For historical reasons, atmdev_init returns the number of devices found.
+ * Note that some detections may not go via atmdev_init (e.g. eni.c), so this
+ * number is meaningless.
+ */
+
+int __init atmdev_init(void)
+{
+	int devs;
+
+	devs = 0;
+#ifdef CONFIG_ATM_ZATM
+	devs += zatm_detect();
+#endif
+#ifdef CONFIG_ATM_AMBASSADOR
+	devs += amb_detect();
+#endif
+#ifdef CONFIG_ATM_HORIZON
+	devs += hrz_detect();
+#endif
+#ifdef CONFIG_ATM_FORE200E
+	devs += fore200e_detect();
+#endif
+#ifdef CONFIG_ATM_LANAI
+	devs += lanai_detect();
+#endif
+	return devs;
+}
diff --git a/drivers/atm/atmsar11.data b/drivers/atm/atmsar11.data
new file mode 100644
index 000000000000..5dc8a7613f57
--- /dev/null
+++ b/drivers/atm/atmsar11.data
@@ -0,0 +1,2063 @@
+/*
+  Madge Ambassador ATM Adapter microcode.
+  Copyright (C) 1995-1999 Madge Networks Ltd.
+ + This microcode data is placed under the terms of the GNU General + Public License. The GPL is contained in /usr/doc/copyright/GPL on a + Debian system and in the file COPYING in the Linux kernel source. + + We would prefer you not to distribute modified versions without + consultation and not to ask for assembly/other microcode source. +*/ + + 0x401a6800, + 0x00000000, + 0x335b007c, + 0x13600005, + 0x335b1000, + 0x3c1aa0c0, + 0x375a0180, + 0x03400008, + 0x00000000, + 0x1760fffb, + 0x335b4000, + 0x401a7000, + 0x13600003, + 0x241b0fc0, + 0xaf9b4500, + 0x25080008, + 0x03400008, + 0x42000010, + 0x8f810c90, + 0x32220002, + 0x10400003, + 0x3c03a0d1, + 0x2463f810, + 0x0060f809, + 0x24210001, + 0x1000001a, + 0xaf810c90, + 0x82020011, + 0xaf900c48, + 0x0441000a, + 0x34420080, + 0x967d0002, + 0x96020012, + 0x00000000, + 0x105d0011, + 0x00000000, + 0x04110161, + 0xa6620002, + 0x1000000d, + 0xae62000c, + 0x34848000, + 0xa2020011, + 0x4d01ffff, + 0x00000000, + 0x8f834c00, + 0x00000000, + 0xaf830fec, + 0x00e0f809, + 0x03e03821, + 0x00041400, + 0x0440fff7, + 0x00000000, + 0xaf80460c, + 0x8e100008, + 0x4d01ffff, + 0x00000000, + 0x8f834c00, + 0x4900001d, + 0xaf830fec, + 0x8f820cbc, + 0x8f9d0c4c, + 0x24420001, + 0x97be0000, + 0xaf820cbc, + 0x13c00009, + 0xaca200d8, + 0xa7a00000, + 0x3c0100d1, + 0x003e0825, + 0x9422002c, + 0x0411013f, + 0xa4220002, + 0xac22000c, + 0xac200010, + 0x8f9e0c54, + 0x27bd0002, + 0x17be0002, + 0x8ca200c0, + 0x8f9d0c50, + 0x8f970fc8, + 0xaf9d0c4c, + 0x12e20005, + 0x87804002, + 0x3c02a0d1, + 0x2442f94c, + 0x0040f809, + 0x00000000, + 0x00e0f809, + 0x03e03821, + 0x4500ffdc, + 0x8e11000c, + 0x3c1300d1, + 0x00111102, + 0x2c430400, + 0x1060ffb9, + 0x00021180, + 0x02629821, + 0x8e76003c, + 0x32220008, + 0x1440ffb7, + 0x8e770034, + 0x8e750030, + 0x3c03cfb0, + 0x16c00003, + 0x02d5102b, + 0x041100be, + 0x00000000, + 0x1040ffa6, + 0x00701826, + 0x4d01ffff, + 0x00000000, + 0x8f824c00, + 0xaf974c00, + 0xaf820fec, + 0xac760010, + 0x02609021, + 0x32220002, + 0x10400007, 
+ 0x8f944a00, + 0x9602003a, + 0x34840004, + 0x14400003, + 0xaf820fbc, + 0x3c029000, + 0xaf820fbc, + 0x8e100008, + 0x32943f00, + 0x8e11000c, + 0x2694ff00, + 0x12800073, + 0x3c1300d1, + 0x49010071, + 0x32370008, + 0x16e0006f, + 0x00111102, + 0x2c430400, + 0x1060006c, + 0x0002b980, + 0x00041740, + 0x0440003a, + 0x02779821, + 0x12720023, + 0x26d60030, + 0xae56003c, + 0x8e76003c, + 0x8e770034, + 0x8e750030, + 0x3c03cfb0, + 0x16c00003, + 0x02d5102b, + 0x04110091, + 0x00000000, + 0x10400060, + 0x2e821000, + 0x14400009, + 0x00701826, + 0x4d01ffff, + 0x00000000, + 0x8f824c00, + 0xaf974c00, + 0xac760010, + 0xae420034, + 0x1000ffd0, + 0xaf80460c, + 0x00e0f809, + 0x03e03821, + 0x3c03cfb0, + 0x00701826, + 0xae460034, + 0x4d01ffff, + 0x00000000, + 0x8f824c00, + 0xaf974c00, + 0xaf820fec, + 0xac760010, + 0x1000ffc3, + 0xaf80460c, + 0x02d5102b, + 0x10400042, + 0x3c17cfb0, + 0x2e821000, + 0x14400006, + 0x02f0b826, + 0x4d01ffff, + 0x00000000, + 0xaef60010, + 0x1000ffb8, + 0xaf80460c, + 0x00e0f809, + 0x03e03821, + 0x4d01ffff, + 0x00000000, + 0x8f824c00, + 0xaf864c00, + 0xaef60010, + 0xaf820fec, + 0x1000ffae, + 0xaf80460c, + 0x3084fffb, + 0x8e570038, + 0x3242ffc0, + 0x00021182, + 0xa7820fb8, + 0xaf970fb4, + 0x865d002a, + 0x865e0008, + 0xa79d0fba, + 0x279d0f18, + 0x33de0060, + 0x03bee821, + 0x001ef0c2, + 0x03bee821, + 0x8f970c58, + 0x4d01ffff, + 0x00000000, + 0x8f834c00, + 0x8fa2001c, + 0x12e30003, + 0x3c030c40, + 0x3c1ec000, + 0xaf9e0fbc, + 0xac620fb4, + 0x8fa30018, + 0x2442000c, + 0x14430002, + 0xaf80460c, + 0x8fa20014, + 0xae40003c, + 0xafa2001c, + 0x8e76003c, + 0x8e770034, + 0x8e750030, + 0x3c03cfb0, + 0x16c00003, + 0x02d5102b, + 0x0411003c, + 0x00000000, + 0x00701826, + 0x4d01ffff, + 0x00000000, + 0xaca500e4, + 0x10400032, + 0xaf974c00, + 0x1000ff7f, + 0xac760010, + 0x00041740, + 0x04400007, + 0x26d60030, + 0xae56003c, + 0x00e0f809, + 0x03e03821, + 0xaf80460c, + 0x1000ff39, + 0xae460034, + 0x8e570038, + 0x3242ffc0, + 0x00021182, + 0xa7820fb8, + 0xaf970fb4, + 0x8f970c58, + 
0x00e0f809, + 0x03e03821, + 0x12e60003, + 0x3c030c40, + 0x3c02c000, + 0xaf820fbc, + 0x865d002a, + 0x865e0008, + 0xa79d0fba, + 0x279d0f18, + 0x33de0060, + 0x03bee821, + 0x001ef0c2, + 0x03bee821, + 0x8fa2001c, + 0x4d01ffff, + 0x00000000, + 0x8f974c00, + 0xac620fb4, + 0x3084fffb, + 0x8fa30018, + 0x2442000c, + 0x14430002, + 0xaf80460c, + 0x8fa20014, + 0xae40003c, + 0xafa2001c, + 0x4d01ffff, + 0x00000000, + 0xaca500e4, + 0x1000ff13, + 0xaf974c00, + 0x00e0f809, + 0x03e03821, + 0x1000ff0f, + 0x00000000, + 0x1040005b, + 0x867e0008, + 0x279d0f18, + 0x33de0060, + 0x03bee821, + 0x001e10c2, + 0x03a2e821, + 0x8fb70008, + 0x8fa2000c, + 0x8ef60004, + 0x12e20028, + 0x86620008, + 0x82030010, + 0x00021740, + 0x04410019, + 0x24630001, + 0x10600017, + 0x3c02d1b0, + 0x00501026, + 0x4d01ffff, + 0x00000000, + 0x8f9e4c00, + 0xac560010, + 0x26d6fffe, + 0x86020010, + 0x3c03cfb0, + 0x34632000, + 0xa662002a, + 0x8ee20000, + 0x26f70008, + 0xae620038, + 0x8fa20020, + 0xafb70008, + 0x2417ffff, + 0x02c2a821, + 0x4d01ffff, + 0x00000000, + 0xaf9e4c00, + 0x03e00008, + 0xae750030, + 0x8ee20000, + 0x26f70008, + 0xae620038, + 0x8fa20020, + 0xafb70008, + 0x2417ffff, + 0xa677002a, + 0x02c2a821, + 0x3c03cfb0, + 0x03e00008, + 0xae750030, + 0x001e18c2, + 0x00651821, + 0x8c6300c8, + 0x8fa20010, + 0x00000000, + 0x0062b023, + 0x1ec00003, + 0x8fa10004, + 0x12c0001b, + 0x0022b023, + 0x2ec30041, + 0x14600002, + 0x3c150040, + 0x24160040, + 0x00161e80, + 0x00031882, + 0x00751825, + 0x4d01ffff, + 0x00000000, + 0x8f954c00, + 0x001eb840, + 0x00771821, + 0xac624d00, + 0x00561021, + 0x14410002, + 0x27830d00, + 0x8fa20000, + 0x02e3b821, + 0xafa20010, + 0x02d71821, + 0xafa3000c, + 0x4d01ffff, + 0x00000000, + 0x8ef60004, + 0x1000ffb5, + 0xaf954c00, + 0x3c16dead, + 0xae76003c, + 0xae600038, + 0x26d5ffff, + 0x00001021, + 0x03e00008, + 0xae750030, + 0x2c430ab2, + 0x10600005, + 0x2c4324b2, + 0x10000004, + 0x24020ab2, + 0x10000002, + 0x240224b1, + 0x1060fffd, + 0x304301ff, + 0x00031840, + 0x3c1da0d1, + 0x27bdd6cc, + 0x007d1821, 
+ 0x94630000, + 0x0002ea42, + 0x00031c00, + 0x27bdfffb, + 0x03e00008, + 0x03a31006, + 0x24030fc0, + 0xaf834500, + 0x10000002, + 0x01206021, + 0x3c0ccfb0, + 0x11e00056, + 0x01896026, + 0x85fe0000, + 0x00000000, + 0x13c00047, + 0x3c02cfb0, + 0x07c0002d, + 0x001e1f80, + 0x04610034, + 0x001e1fc0, + 0x04600009, + 0x3c02d3b0, + 0x00e0f809, + 0x03e03821, + 0x4d01ffff, + 0x00000000, + 0x8f864c00, + 0x8f990fec, + 0x1000000b, + 0xaf994c00, + 0x01e27826, + 0x00e0f809, + 0x03e03821, + 0x4d01ffff, + 0x00000000, + 0x8f864c00, + 0xaf994c00, + 0xadef2010, + 0x3c02d3b0, + 0x01e27826, + 0x8f820fc0, + 0x8f830fc4, + 0xaf824d00, + 0x8de20004, + 0xa5e00000, + 0xac620000, + 0x8c620000, + 0x24020380, + 0xaf824d00, + 0x8f824d00, + 0x8f820f14, + 0x24630004, + 0x14620002, + 0x2419ffff, + 0x8f830f10, + 0xaca500e4, + 0xaf830fc4, + 0x4d01ffff, + 0x00000000, + 0x8f824c80, + 0x1000001f, + 0xade2003c, + 0x00e0f809, + 0x03e03821, + 0x4d01ffff, + 0x00000000, + 0xa5e00000, + 0x8f864c00, + 0x15800022, + 0xaf8f4540, + 0x10000017, + 0x01e27826, + 0x00e0f809, + 0x03e03821, + 0x4d01ffff, + 0x00000000, + 0x8f864c00, + 0xaf994c00, + 0xadef2010, + 0x3c02cfb0, + 0x01e27826, + 0xa5e00000, + 0x4d01ffff, + 0x00000000, + 0x10000007, + 0x8f994c00, + 0x00e0f809, + 0x03e03821, + 0x4d01ffff, + 0x00000000, + 0x8f864c00, + 0x8f990fec, + 0x1580000a, + 0xaf8f4500, + 0x00007821, + 0x10000014, + 0xaf190014, + 0x00e0f809, + 0x03e03821, + 0x4d01ffff, + 0x00000000, + 0x1180fff8, + 0x8f864c00, + 0x85220000, + 0x01207821, + 0x0440000a, + 0x8d290008, + 0x130b0004, + 0x000c1602, + 0xaf190014, + 0x8d790014, + 0x0160c021, + 0xaf994c00, + 0xad8e4010, + 0x3042003f, + 0x01c27021, + 0x00041780, + 0x0440018b, + 0x8f824a00, + 0x30818000, + 0x30420004, + 0x1440ff8d, + 0x8d4b0000, + 0x1020000c, + 0x30847fff, + 0x8f820c48, + 0x0120f021, + 0x24430034, + 0x8c5d000c, + 0x24420004, + 0xafdd000c, + 0x1462fffc, + 0x27de0004, + 0xa5210000, + 0x1000ff82, + 0x25080008, + 0x11600058, + 0x00000000, + 0x857d0008, + 0x8d63000c, + 0x9562000a, + 
0x8d410004, + 0x07a10026, + 0x00621821, + 0xa563000a, + 0x00031c02, + 0x041101a0, + 0x000318c0, + 0x001d16c0, + 0x0441001f, + 0x27a20080, + 0x00021cc0, + 0x0461000e, + 0x0040e821, + 0x27bd0080, + 0x95620000, + 0x95630002, + 0x3442000c, + 0xad22000c, + 0x24020100, + 0xa5220010, + 0x9562002c, + 0xa5230014, + 0xa5220012, + 0xa5200016, + 0x34028000, + 0xa5220000, + 0xa57d0008, + 0x07a0000c, + 0x8f820c4c, + 0x8f830c50, + 0x2441ffe8, + 0x0023f02b, + 0x13c00002, + 0x00201021, + 0x24420400, + 0x945e0000, + 0x2441fffe, + 0x17c0fff9, + 0xad620010, + 0xa44b0000, + 0x142b001c, + 0xad400000, + 0xad400004, + 0x254a0008, + 0x3142007f, + 0x1440000e, + 0x00041780, + 0x04410003, + 0x8f820fe0, + 0x10000006, + 0x34840001, + 0x34840002, + 0x24420008, + 0x34421000, + 0x38421000, + 0xaf820fe0, + 0x354a0100, + 0x394a0100, + 0x39420080, + 0xaf820fe4, + 0x001d14c0, + 0x04410003, + 0x33a2efff, + 0x1000ff3c, + 0xa5620008, + 0x07a0009f, + 0x33a2fffe, + 0x10000021, + 0xa5620008, + 0x8d620024, + 0x001d1cc0, + 0x04610004, + 0xad420000, + 0x33a3efff, + 0x1000ff31, + 0xa5630008, + 0x07a00005, + 0x33a3fffe, + 0xa5630008, + 0x8d4b0000, + 0x1000ffaa, + 0x00000000, + 0x1000008e, + 0x25080008, + 0x254a0008, + 0x3142007f, + 0x1440000e, + 0x00041780, + 0x04410003, + 0x8f820fe0, + 0x10000006, + 0x34840001, + 0x34840002, + 0x24420008, + 0x34421000, + 0x38421000, + 0xaf820fe0, + 0x354a0100, + 0x394a0100, + 0x39420080, + 0xaf820fe4, + 0x11000003, + 0x8d4b0000, + 0x1000ff93, + 0x2508fff8, + 0x8f820fd8, + 0x8f830fdc, + 0x8f810fd4, + 0x1062001d, + 0x24620008, + 0x4d01ffff, + 0x00000000, + 0x8f8c4c00, + 0x847f0000, + 0x3c1e00d1, + 0x33fd03ff, + 0x001d5980, + 0x017e5821, + 0x857e0008, + 0x001de900, + 0x001e0f00, + 0x03e1f825, + 0x07e00003, + 0xaf820fdc, + 0x879e0ca0, + 0x278b0c98, + 0x07c10042, + 0x3c020840, + 0x3c01f7b0, + 0x8d620020, + 0x00230826, + 0xac220000, + 0x8c620004, + 0x94630002, + 0x2442fff8, + 0x00431021, + 0x1000004e, + 0xad620020, + 0x8f820fd0, + 0x87830ca0, + 0x14220007, + 0x278b0c98, + 0x41000051, 
+ 0x3c018000, + 0xaca100e0, + 0x8ca100c4, + 0x00000000, + 0x1022004c, + 0x0022e823, + 0x8f9f0f0c, + 0x07a10002, + 0xaf810fd4, + 0x03e2e823, + 0x2fa30041, + 0x14600002, + 0x3c1e0040, + 0x241d0040, + 0x001d1e80, + 0x00031882, + 0x007e1825, + 0x4d01ffff, + 0x00000000, + 0x8f8c4c00, + 0xac624cc0, + 0x005d1021, + 0x145f0002, + 0x27830cc0, + 0x8f820f08, + 0x03a3f021, + 0xaf820fd0, + 0xaf9e0fd8, + 0x4d01ffff, + 0x00000000, + 0x1000ffc3, + 0x24620008, + 0x8d63000c, + 0x8d7d0010, + 0xa563000a, + 0x13a00002, + 0x00031c02, + 0xa7a00000, + 0x000318c0, + 0x041100ef, + 0x00681821, + 0x4d01ffff, + 0x00000000, + 0x8f820c44, + 0x8f830c40, + 0xad620010, + 0xa5630004, + 0xa5630006, + 0x10000021, + 0xaf8c4c00, + 0xa57d0000, + 0x8c7d0004, + 0x94630002, + 0xac5d4c40, + 0x27a20008, + 0xad620018, + 0x03a3e821, + 0x27bdfff4, + 0xad7d001c, + 0x27bd0004, + 0xad7d0020, + 0x37c18001, + 0x001e17c0, + 0x0441ffe0, + 0xa5610008, + 0x4d01ffff, + 0x00000000, + 0x8f820c44, + 0x8f830c40, + 0xad620010, + 0xa5630004, + 0xa5630006, + 0x8f820fd8, + 0x8f830fdc, + 0x4d01ffff, + 0x00000000, + 0x1462ff95, + 0x24620008, + 0xaf8c4c00, + 0x87830ca0, + 0x278b0c98, + 0x0461fe97, + 0x00041700, + 0x04400005, + 0x95620000, + 0x11780006, + 0x00000000, + 0xaf0e0010, + 0xa70d0004, + 0x3084fff7, + 0x956d0004, + 0x8d6e0010, + 0x25adffd0, + 0x05a1fe8f, + 0xad22000c, + 0x3c0cffb0, + 0x01896026, + 0x000d1822, + 0x25ad0030, + 0x8d7e0018, + 0x8d61001c, + 0x4d01ffff, + 0x00000000, + 0x103e0036, + 0x8f9d4c00, + 0x3c010840, + 0xac3e4c40, + 0x27de0008, + 0x11a00017, + 0xad7e0018, + 0x000df600, + 0x019e6025, + 0x4d01ffff, + 0x00000000, + 0xad8e4010, + 0x8f8d0c40, + 0x957e0006, + 0x8f8e0c44, + 0x03cdf021, + 0xa57e0006, + 0x000cf782, + 0x000c0e02, + 0x03c1f021, + 0x001e0f80, + 0x000c6200, + 0x000c6202, + 0x01816025, + 0x33de003c, + 0x019e6021, + 0x34010001, + 0x10000008, + 0xa5210000, + 0x957e0006, + 0x4d01ffff, + 0x00000000, + 0x8f8d0c40, + 0x8f8e0c44, + 0x03cdf021, + 0xa57e0006, + 0x4d01ffff, + 0x00000000, + 0x01a3f02b, + 
0x17c00008, + 0x0003f600, + 0x01a36823, + 0x019e6025, + 0x01896026, + 0x4d01fff7, + 0x00000000, + 0x1000fe58, + 0xaf9d4c00, + 0x8d7e0018, + 0x8d61001c, + 0x00000000, + 0x143effce, + 0x006d1823, + 0x4d01ffff, + 0x00000000, + 0x2c610008, + 0x10200017, + 0x95610008, + 0x00000000, + 0x0001ff80, + 0x07e0000b, + 0x34210002, + 0x006d1821, + 0x00031e00, + 0x01836025, + 0x01896026, + 0x240d002c, + 0xa5610008, + 0x4d01ffff, + 0x00000000, + 0x1000fe40, + 0xaf9d4c00, + 0x3c1f0c40, + 0xaffe4fa8, + 0x3021fffd, + 0xa5610008, + 0x3c0cd3cf, + 0x358ce000, + 0x10000008, + 0x34030002, + 0x3c1f0c40, + 0xaffe4fa8, + 0x11a0fff9, + 0x000df600, + 0x34030003, + 0x019e6025, + 0x01896026, + 0x34840008, + 0x34420002, + 0xad22000c, + 0x95620006, + 0xa5230000, + 0xad220038, + 0x4d01ffff, + 0x00000000, + 0x857e0008, + 0x8f820fa8, + 0x97830fac, + 0xad220004, + 0x33c17fff, + 0xad600010, + 0xa5610008, + 0x1060fe20, + 0xaf9d4c00, + 0xa57e0008, + 0x00031900, + 0x30633ff0, + 0xa5630000, + 0x8f820fb0, + 0x3c030840, + 0xac624c40, + 0x24430008, + 0xad630018, + 0x97830fae, + 0x2442fff4, + 0x00621821, + 0xad63001c, + 0x4d01ffff, + 0x00000000, + 0x8f8d0c40, + 0x8f830c44, + 0xa56d0004, + 0xa56d0006, + 0xad630010, + 0x1000fe0a, + 0xaf9d4c00, + 0x8f820fe0, + 0x00040fc0, + 0x8c430000, + 0x0421001b, + 0x8f9f0fe4, + 0x8c5d0004, + 0xac400004, + 0x1060000e, + 0xac400000, + 0x00000000, + 0x94620028, + 0x00000000, + 0x005f1020, + 0x8c410004, + 0x00000000, + 0x10200003, + 0xac430004, + 0x10000002, + 0xac230024, + 0xac430000, + 0x17a3fff4, + 0x8c630024, + 0x8f820fe0, + 0x3bff0080, + 0x24420008, + 0x34421000, + 0x38421000, + 0xaf820fe0, + 0xaf9f0fe4, + 0x1000fe57, + 0x3084fffe, + 0x10600010, + 0x00000000, + 0x947d0028, + 0x00000000, + 0x03bfe820, + 0x8fa10004, + 0xafa30004, + 0x10200003, + 0x8c5e0004, + 0x10000002, + 0xac230024, + 0xafa30000, + 0x8c610024, + 0x17c3fe48, + 0xac410000, + 0xac400004, + 0xac400000, + 0x1000fe44, + 0x3084fffd, + 0x2c620100, + 0x1440000e, + 0x006a1021, + 0x3143007f, + 0x01431823, + 0x00431823, 
+ 0x3062007f, + 0xa5620028, + 0x00621823, + 0x00031902, + 0x8f820fe0, + 0x2463fff8, + 0x00621821, + 0x34631000, + 0x10000003, + 0x38631000, + 0x34430100, + 0x38630100, + 0x8c620004, + 0x00000000, + 0x10400003, + 0xac6b0004, + 0x03e00008, + 0xac4b0024, + 0x03e00008, + 0xac6b0000, + 0x00000002, + 0xa0d0e000, + 0x00000000, + 0x00001000, + 0x00000006, + 0x00000008, + 0x00000000, + 0x00000008, + 0x00000002, + 0xa0d0d648, + 0x00000000, + 0x00000888, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x24313200, + 0x24313200, + 0x24313200, + 0x00000000, + 0x244d4352, + 0x2420436f, + 0x70797269, + 0x67687420, + 0x28632920, + 0x4d616467, + 0x65204e65, + 0x74776f72, + 0x6b73204c, + 0x74642031, + 0x3939352e, + 0x20416c6c, + 0x20726967, + 0x68747320, + 0x72657365, + 0x72766564, + 0x2e004d61, + 0x64676520, + 0x416d6261, + 0x73736164, + 0x6f722076, + 0x312e3031, + 0x00000000, + 0x00000001, + 0x00000001, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0xfff04000, + 0x00000000, + 0x0c343e2d, + 0x00000000, + 0x3c1ca0d1, + 0x279c5638, + 0x3c1da0d1, + 0x27bddfd0, + 0x3c08a0d1, + 0x2508dfd0, + 0xaf878008, + 0x0c343c13, + 0x00000000, + 0x24040003, + 0x0097000d, + 0x3c08bfc0, + 0x35080230, + 0x8d080000, + 0x00000000, + 0x01000008, + 0x00000000, + 0x27bdffd0, + 0xafbf001c, + 0xafb10018, + 0xafb00014, + 0x3c11fff0, + 0x00008021, + 0x3c180056, + 0x37183b79, + 0x26190200, + 0x17200002, + 0x0319001a, + 0x0007000d, + 0x2401ffff, + 0x17210005, + 0x00000000, + 0x3c018000, + 0x17010002, + 0x00000000, + 0x0006000d, + 0x00001012, + 0x00101840, + 0x3c05a0d1, + 0x24a5d6cc, + 
0x00a32021, + 0xa4820000, + 0x26100001, + 0x2a010200, + 0x1420ffea, + 0x00000000, + 0x3c06a0d1, + 0x24c6f9e4, + 0x3c07a0d1, + 0x24e7d648, + 0xace60000, + 0x3c08a0d1, + 0x2508fb14, + 0xace80004, + 0x3c09a0d1, + 0x2529fc94, + 0xace90008, + 0x3c0aa0d1, + 0x254afcd4, + 0xacea000c, + 0x3c0ba0d1, + 0x256bfba8, + 0xaceb0010, + 0x3c0ca0d1, + 0x258cfbc4, + 0xacec0014, + 0x3c0da0d1, + 0x25adfbe0, + 0xaced0018, + 0x3c0ea0d1, + 0x25cefbfc, + 0xacee001c, + 0x3c0fa0d1, + 0x25effc18, + 0xacef0020, + 0x3c18a0d1, + 0x2718fc34, + 0xacf80024, + 0x3c19a0d1, + 0x2739fc50, + 0xacf90028, + 0x3c02a0d1, + 0x2442fc60, + 0xace2002c, + 0x3c03a0d1, + 0x2463fc70, + 0xace30030, + 0x3c04a0d1, + 0x2484fc80, + 0xace40034, + 0x3c05a0d1, + 0x24a5fcb4, + 0xace50038, + 0x3c06a0d1, + 0x24c6fe08, + 0xace6003c, + 0x3c08a0d1, + 0x2508fe90, + 0xace80040, + 0x3c09a0d1, + 0x2529fa38, + 0xace90044, + 0x3c0aa0d1, + 0x254afa74, + 0xacea0048, + 0x24100013, + 0x3c0ba0d1, + 0x256bf9d8, + 0x00106080, + 0x3c0ea0d1, + 0x25ced648, + 0x01cc6821, + 0xadab0000, + 0x26100001, + 0x2a010020, + 0x1420fff6, + 0x00000000, + 0x8f988000, + 0x00000000, + 0xaf000100, + 0x8f828000, + 0x241903ff, + 0xa4590202, + 0x00008021, + 0x8f868000, + 0x24030fff, + 0x00102040, + 0x24c70380, + 0x00e42821, + 0xa4a30000, + 0x26100001, + 0x2a010008, + 0x1420fff7, + 0x00000000, + 0x8f898000, + 0x34089c40, + 0xad2803a0, + 0x8f8b8000, + 0x3c0a00ff, + 0x354affff, + 0xad6a03a4, + 0x00008021, + 0x8f8f8000, + 0x240c0fff, + 0x00106840, + 0x25f80300, + 0x030d7021, + 0xa5cc0000, + 0x26100001, + 0x2a010008, + 0x1420fff7, + 0x00000000, + 0x8f828000, + 0x34199c40, + 0xac590320, + 0x8f848000, + 0x3c0300ff, + 0x3463ffff, + 0xac830324, + 0x8f868000, + 0x240502ff, + 0xa4c50202, + 0x3c08a0c0, + 0x35080180, + 0x3c09a0d1, + 0x2529d5b8, + 0x250a0028, + 0x8d0b0000, + 0x8d0c0004, + 0xad2b0000, + 0xad2c0004, + 0x25080008, + 0x150afffa, + 0x25290008, + 0x40026000, + 0x00000000, + 0xafa20028, + 0x24030022, + 0x3c04a0e0, + 0x34840014, + 0xac830000, + 0x8fa50028, + 0x00000000, 
+ 0x34a61001, + 0x00c01021, + 0xafa60028, + 0x3c07ffbf, + 0x34e7ffff, + 0x00c73824, + 0x00e01021, + 0xafa70028, + 0x40876000, + 0x00000000, + 0x3c080002, + 0x3508d890, + 0x3c09fffe, + 0x35290130, + 0xad280000, + 0x8faa0028, + 0x3c0bf000, + 0x014b5825, + 0x01601021, + 0xafab0028, + 0x01606021, + 0x408c6000, + 0x00000000, + 0x00008021, + 0x00107080, + 0x022e7821, + 0xade00000, + 0x26100001, + 0x2a010400, + 0x1420fffa, + 0x00000000, + 0x24180001, + 0x3c19a0e8, + 0xaf380000, + 0x24020011, + 0x3c03a0f0, + 0x34630017, + 0xa0620000, + 0x3c04f0eb, + 0x34840070, + 0x3c05fff0, + 0x34a54a00, + 0xaca40000, + 0x3c06fceb, + 0x34c60070, + 0xaca60000, + 0x3c07fff0, + 0x34e74700, + 0xace00000, + 0x00008021, + 0x3c08fff0, + 0x35080fc0, + 0x3c09fff0, + 0x35294500, + 0xad280000, + 0x26100001, + 0x2a010004, + 0x1420fff8, + 0x00000000, + 0x00008021, + 0x3c0adead, + 0x00105980, + 0x3c0100d1, + 0x002b0821, + 0xac2a003c, + 0x3c0100d1, + 0x002b0821, + 0xac200030, + 0x3c0100d1, + 0x002b0821, + 0xac200038, + 0x240dffff, + 0x3c0100d1, + 0x002b0821, + 0xac2d0014, + 0x00107100, + 0x3c0100d1, + 0x002b0821, + 0xa42e0000, + 0x3c0100d1, + 0x002b0821, + 0xa4200004, + 0x24180020, + 0x3c0100d1, + 0x002b0821, + 0xa4380008, + 0x3c0100d1, + 0x002b0821, + 0xac200010, + 0x26100001, + 0x2a010400, + 0x1420ffe0, + 0x00000000, + 0x00008021, + 0x001018c0, + 0x3c05a0d1, + 0x24a5e000, + 0x00a32021, + 0xac800000, + 0x3c07a0d1, + 0x24e7e000, + 0x24e80004, + 0x01033021, + 0xacc00000, + 0x26100001, + 0x2a010009, + 0x1420fff3, + 0x00000000, + 0x24090380, + 0x3c0afff0, + 0x354a4d00, + 0xad490000, + 0x3c0ca080, + 0x358c009c, + 0xad800000, + 0x3c0da080, + 0x35ad00a0, + 0xada00000, + 0x3c0e1100, + 0x3c0fa080, + 0x35ef00a8, + 0xadee0000, + 0x41010003, + 0x00000000, + 0x4100ffff, + 0x00000000, + 0x3c18a080, + 0x371800e0, + 0x8f190000, + 0x3c01a0d1, + 0xac39d6c8, + 0x0c343d43, + 0x03202021, + 0x8fb00014, + 0x8fbf001c, + 0x8fb10018, + 0x03e00008, + 0x27bd0030, + 0x0080b821, + 0x3c1cfff0, + 0xa3800c84, + 0xa3800c88, + 
0x8f904400, + 0x00002021, + 0xaf800cbc, + 0x240200a8, + 0x27830f00, + 0x2c5d0040, + 0x17a0000c, + 0x3c1dffb0, + 0x03a3e826, + 0xafb74000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x4d01ffff, + 0x00000000, + 0x2442ffc0, + 0x24630040, + 0x1000fff3, + 0x26f70040, + 0x1040000d, + 0x00000000, + 0x0002ee00, + 0x3c010040, + 0x03a1e825, + 0x3c01fff0, + 0x03a1e826, + 0x03a3e826, + 0xafb74000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x4d01ffff, + 0x00000000, + 0x3c05a080, + 0x8f820f08, + 0x00000000, + 0xaf820fd4, + 0xaf820fd0, + 0xaca200c4, + 0x8f820f10, + 0x00000000, + 0x00021d82, + 0xaf830fc0, + 0x00031d80, + 0x00431023, + 0x3c01a080, + 0x00411025, + 0xaf820fc4, + 0xaf820f10, + 0x8f820f14, + 0x00000000, + 0x00431023, + 0x3c01a080, + 0x00411025, + 0xaf820f14, + 0x24030003, + 0x279d0f18, + 0x24be00c8, + 0x27810d00, + 0x8fa20000, + 0x00000000, + 0xafa20010, + 0xafc20000, + 0xafa10008, + 0xafa1000c, + 0x8fa20014, + 0x00000000, + 0xafa2001c, + 0x27bd0024, + 0x27de0004, + 0x24210040, + 0x1460fff3, + 0x2463ffff, + 0x8f820f00, + 0x00000000, + 0xaf820fc8, + 0xaca200c0, + 0x27820800, + 0x2403000f, + 0xac400000, + 0x24420004, + 0x1460fffd, + 0x2463ffff, + 0x8f830fc0, + 0x00000000, + 0xaf834d00, + 0x8f834d00, + 0x8f830f14, + 0x8f820f10, + 0x2463fffc, + 0xac400000, + 0x1443fffe, + 0x24420004, + 0x24020380, + 0xaf824d00, + 0x279d0f18, + 0x27a10090, + 0x8fa20014, + 0x8fa30018, + 0x00000000, + 0x00621823, + 0x2c7f0040, + 0x17e00009, + 0x3c1f0040, + 0x37ff0800, + 0x03a0f021, + 0x4d01ffff, + 0x00000000, + 0xafe20000, + 0x24420040, + 0x1000fff6, + 0x2463ffc0, + 0x10600006, + 0x37ff0800, + 0x00031e00, + 0x03e3f825, + 0x4d01ffff, + 0x00000000, + 0xafe20000, + 0x27bd0024, + 0x17a1ffe8, + 0x00000000, + 0x00003821, + 0x8fc20014, + 0x8fc30018, + 0x00000000, + 0x00621823, + 0x2c7f0040, + 0x13e00004, + 0x3c1f0040, + 0x00030e00, + 0x10000002, + 0x03e1f825, + 0x24030040, + 0x37ff0800, + 0x241e03e7, + 0x00000821, + 0x4d01ffff, + 0x00000000, + 0xafe20000, + 0x00230821, + 0x4900fffb, + 0x00000000, 
+ 0x87804002, + 0x17c0fff8, + 0x27deffff, + 0x14e00004, + 0x34e74000, + 0x03e7f825, + 0x1000fff0, + 0xaf810c60, + 0xaf810c5c, + 0x3c01a0d1, + 0x8c22d6c8, + 0x00000000, + 0x3c01a080, + 0xac2200e0, + 0x3c01a080, + 0x8c2000e0, + 0xaf800fb4, + 0xa7800fb8, + 0xa7800fba, + 0xa7800fbc, + 0xa7800fbe, + 0x27820cc0, + 0xaf820fdc, + 0xaf820fd8, + 0x3c02a0d1, + 0x2442dacc, + 0xaf820c4c, + 0xaf820c50, + 0x24420400, + 0xaf820c54, + 0x2402001e, + 0x3c03fff0, + 0x247d0040, + 0xac7d0008, + 0x03a01821, + 0x1440fffc, + 0x2442ffff, + 0x3c1dfff0, + 0xac7d0008, + 0x3c02c704, + 0x3442dd7b, + 0xaf820c58, + 0x3c070000, + 0x24e70158, + 0x08343fa9, + 0x00000000, + 0x8e620038, + 0x00000000, + 0x14400005, + 0x8f830c94, + 0x12a00022, + 0x24630001, + 0x10000020, + 0xaf830c94, + 0xaf820fb4, + 0x3262ffc0, + 0x00021182, + 0x8663002a, + 0xa7820fb8, + 0x3c02a000, + 0xaf820fbc, + 0xa7830fba, + 0x867e0008, + 0x279d0f18, + 0x33de0060, + 0x03bee821, + 0x001ef0c2, + 0x03bee821, + 0x8fa2001c, + 0x3c030c40, + 0x4d01ffff, + 0x00000000, + 0x8f974c00, + 0xac620fb4, + 0x8fa30018, + 0x2442000c, + 0x14430003, + 0x00000000, + 0x8fa20014, + 0x00000000, + 0xafa2001c, + 0x4d01ffff, + 0x00000000, + 0xaca500e4, + 0xaf974c00, + 0x03e00008, + 0xae60003c, + 0x3c0da0d1, + 0x25add500, + 0x11a00021, + 0x00000000, + 0x8da90000, + 0x00000000, + 0x1120001d, + 0x00000000, + 0x8daa0004, + 0x8dab0008, + 0x8dac000c, + 0x00094740, + 0x05010004, + 0x00000000, + 0x3c08a0d1, + 0x2508d638, + 0x01485021, + 0x00094780, + 0x05010007, + 0x00000000, + 0x1180000d, + 0x00000000, + 0xad400000, + 0x254a0004, + 0x1000fffb, + 0x258cfffc, + 0x11800007, + 0x00000000, + 0x8d6e0000, + 0x256b0004, + 0xad4e0000, + 0x254a0004, + 0x1000fff9, + 0x258cfffc, + 0x1000ffe1, + 0x25ad0010, + 0x03e00008, + 0x00000000, + 0x3c021040, + 0xac574ff0, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x4d01ffff, + 0x00000000, + 0x8f820ffc, + 0x00000000, + 0x3042001f, + 0x00021080, + 0x3c17a0d1, + 0x02e2b821, + 0x26f7d648, + 0x8ef70000, + 0x00000000, + 
0x02e00008, + 0x00000000, + 0x2402ffff, + 0xaf820ffc, + 0x8f970fc8, + 0x3c021040, + 0xac570ff0, + 0x8f820f04, + 0x26f70010, + 0x16e20004, + 0xaf970fc8, + 0x8f970f00, + 0x00000000, + 0xaf970fc8, + 0x4d01ffff, + 0x00000000, + 0x03e00008, + 0x00000000, + 0x3c1fa0d1, + 0x27fff02c, + 0x1000ffed, + 0x8f970ff0, + 0x3c0200d1, + 0x32f703ff, + 0x0017b980, + 0x02e2b825, + 0xaee0003c, + 0x2402ffff, + 0xaee20030, + 0xaee20014, + 0x97830ff4, + 0x97820ff8, + 0x3c1d0000, + 0x27bd0698, + 0xa6e30008, + 0xa6e20002, + 0xaf9f0fe8, + 0x03a0f809, + 0xa6e2002c, + 0x8f9f0fe8, + 0x1000ffd9, + 0xaee2000c, + 0x8f970ff0, + 0x3c0200d1, + 0x32f703ff, + 0x0017b980, + 0x02e2b825, + 0x97820ff4, + 0x3c030000, + 0x24630698, + 0xa6e20002, + 0xaf9f0fe8, + 0x0060f809, + 0xa6e2002c, + 0x8f9f0fe8, + 0x1000ffca, + 0xaee2000c, + 0x8f970ff0, + 0x3c0200d1, + 0x32f703ff, + 0x0017b980, + 0x02e2b825, + 0x97820ff4, + 0x00000000, + 0x96e30008, + 0xa6e20008, + 0x00431026, + 0x30420060, + 0x1040ffbd, + 0x8ee2003c, + 0xaee0003c, + 0x1040ffba, + 0x3c028800, + 0xaf820fbc, + 0x8ee20038, + 0xaee00038, + 0x30630060, + 0x279d0f18, + 0x03a3e821, + 0x000318c2, + 0x03a3e821, + 0x8fa3001c, + 0x1040ffaf, + 0xaf820fb4, + 0x3c020c40, + 0xac430fb4, + 0x8fa20018, + 0x2463000c, + 0x14430003, + 0x00000000, + 0x8fa30014, + 0x00000000, + 0xafa3001c, + 0x4d01ffff, + 0x00000000, + 0x1000ffa2, + 0x00000000, + 0x8f970ff0, + 0x3c0200d1, + 0xa7970fb8, + 0x0017b980, + 0x32f7ffc0, + 0x02e2b821, + 0xaee00030, + 0x3c02dead, + 0x8ee3003c, + 0xaee2003c, + 0x8ee20038, + 0x1060ff95, + 0xaee00038, + 0x3c038800, + 0xaf830fbc, + 0x86e30008, + 0x27970f18, + 0x30630060, + 0x02e3b821, + 0x000318c2, + 0x02e3b821, + 0x8ee3001c, + 0x1040ff8a, + 0xaf820fb4, + 0x3c020c40, + 0xac430fb4, + 0x8ee20018, + 0x2463000c, + 0x14430003, + 0x00000000, + 0x8ee30014, + 0x00000000, + 0xaee3001c, + 0x4d01ffff, + 0x00000000, + 0x1000ff7d, + 0x00000000, + 0x8f820ff0, + 0x8f970ff4, + 0x90410000, + 0x00000000, + 0x00370825, + 0x1000ff76, + 0xa0410000, + 0x8f820ff0, + 0x8f970ff4, 
+ 0x94410000, + 0x00000000, + 0x00370825, + 0x1000ff6f, + 0xa4410000, + 0x8f820ff0, + 0x8f970ff4, + 0x8c410000, + 0x00000000, + 0x00370825, + 0x1000ff68, + 0xac410000, + 0x8f820ff0, + 0x8f970ff4, + 0x90410000, + 0x02e0b827, + 0x00370824, + 0x1000ff61, + 0xa0410000, + 0x8f820ff0, + 0x8f970ff4, + 0x94410000, + 0x02e0b827, + 0x00370824, + 0x1000ff5a, + 0xa4410000, + 0x8f820ff0, + 0x8f970ff4, + 0x8c410000, + 0x02e0b827, + 0x00370824, + 0x1000ff53, + 0xac410000, + 0x8f820ff0, + 0x8f970ff4, + 0x1000ff4f, + 0xa0570000, + 0x8f820ff0, + 0x8f970ff4, + 0x1000ff4b, + 0xa4570000, + 0x8f820ff0, + 0x8f970ff4, + 0x1000ff47, + 0xac570000, + 0x8f820ff0, + 0x00000000, + 0x8c420000, + 0x1000ff42, + 0xaf820ff4, + 0x3c01a0c2, + 0x8c22c000, + 0x00000000, + 0xaf820ff0, + 0x3c01a0c2, + 0x8c22c004, + 0x1000ff3a, + 0xaf820ff4, + 0x3c01a0d1, + 0x8c22d5ac, + 0x00000000, + 0xaf820ff0, + 0x3c01a0d1, + 0x8c22d5b0, + 0x1000ff32, + 0xaf820ff4, + 0x3c02a0f0, + 0xac400000, + 0x90570153, + 0x00000000, + 0xa3970c80, + 0x90570157, + 0x00000000, + 0xa3970c81, + 0x9057015b, + 0x00000000, + 0xa3970c87, + 0x9057015f, + 0x00000000, + 0xa3970c86, + 0x90570163, + 0x00000000, + 0x32f70007, + 0xa3970c85, + 0x90570193, + 0x00000000, + 0xa3970c8b, + 0x90570197, + 0x00000000, + 0xa3970c8a, + 0x9057019b, + 0x00000000, + 0x32f70007, + 0xa3970c89, + 0x9057000b, + 0x00000000, + 0x32f700e0, + 0x00170942, + 0x90570047, + 0x00000000, + 0x32f70078, + 0x00370825, + 0x90570067, + 0x00000000, + 0x32f7000f, + 0x0017b9c0, + 0x00370825, + 0x905700c7, + 0x00000000, + 0x32f7002f, + 0x0017bac0, + 0x00370825, + 0x90570147, + 0x00000000, + 0x32f7001e, + 0x0017bc00, + 0x00370825, + 0x90570183, + 0x00000000, + 0x32f70060, + 0x0017bc00, + 0x00370825, + 0xaf810c8c, + 0x3c021840, + 0x8f970fc8, + 0x00000000, + 0x8f970ff0, + 0x00000000, + 0xac570c80, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x00000000, + 0x4d01ffff, + 0x00000000, + 0x3c02a0d1, + 0x2442f998, + 0xaf800c90, + 0xaf800c94, + 0x00400008, + 
0x00000000, + 0x87970ff0, + 0x3c1300d1, + 0xa6770008, + 0x3c030000, + 0x24630520, + 0xaf9f0fe8, + 0x0060f809, + 0x24020001, + 0x8f9f0fe8, + 0x1040feda, + 0x97970ff0, + 0x27830f18, + 0x00771821, + 0x0017b8c2, + 0x02e3b821, + 0x3c028800, + 0xaf820fbc, + 0x8e620038, + 0xa7800fb8, + 0xaf820fb4, + 0x8ee3001c, + 0x3c020c40, + 0xac430fb4, + 0x8ee20018, + 0x2463000c, + 0x14430004, + 0xaee3001c, + 0x8ee30014, + 0x00000000, + 0xaee3001c, + 0x4d01ffff, + 0x00000000, + 0x1000ffdf, + 0x00000000, + 0x8f820c5c, + 0x8f830c60, + 0xaf820ff0, + 0x1000febe, + 0xaf830ff4, + 0x23890800, + 0x01201821, + 0x2402000f, + 0x206c0040, + 0xac6c0008, + 0x01801821, + 0x1440fffc, + 0x2042ffff, + 0xac690008, + 0x278b0c98, + 0xa5600000, + 0x2403ffff, + 0xad630014, + 0x34020001, + 0x34420020, + 0xa5620008, + 0x278a0e00, + 0x01401021, + 0x00001821, + 0xac400000, + 0x24630004, + 0x2c6c0100, + 0x1580fffc, + 0x24420004, + 0x3c02a0d1, + 0x2442e000, + 0xaf820fe0, + 0x3c1800d1, + 0x01206021, + 0x00006821, + 0x00007821, + 0x00005821, + 0x00004021, + 0x40026000, + 0x00000000, + 0x34424001, + 0x40826000, + 0x3c020000, + 0x244206f8, + 0x00400008, + 0x00000000, diff --git a/drivers/atm/atmsar11.regions b/drivers/atm/atmsar11.regions new file mode 100644 index 000000000000..42252b7c0de3 --- /dev/null +++ b/drivers/atm/atmsar11.regions @@ -0,0 +1,6 @@ +/* + See copyright and licensing conditions in ambassador.* files. +*/ + { 0x00000080, 993, }, + { 0xa0d0d500, 80, }, + { 0xa0d0f000, 978, }, diff --git a/drivers/atm/atmsar11.start b/drivers/atm/atmsar11.start new file mode 100644 index 000000000000..dba55e77d8fd --- /dev/null +++ b/drivers/atm/atmsar11.start @@ -0,0 +1,4 @@ +/* + See copyright and licensing conditions in ambassador.* files. 
+*/ + 0xa0d0f000 diff --git a/drivers/atm/atmtcp.c b/drivers/atm/atmtcp.c new file mode 100644 index 000000000000..f2f01cb82cb4 --- /dev/null +++ b/drivers/atm/atmtcp.c @@ -0,0 +1,505 @@ +/* drivers/atm/atmtcp.c - ATM over TCP "device" driver */ + +/* Written 1997-2000 by Werner Almesberger, EPFL LRC/ICA */ + + +#include <linux/module.h> +#include <linux/wait.h> +#include <linux/atmdev.h> +#include <linux/atm_tcp.h> +#include <linux/bitops.h> +#include <linux/init.h> +#include <asm/uaccess.h> +#include <asm/atomic.h> + + +extern int atm_init_aal5(struct atm_vcc *vcc); /* "raw" AAL5 transport */ + + +#define PRIV(dev) ((struct atmtcp_dev_data *) ((dev)->dev_data)) + + +struct atmtcp_dev_data { + struct atm_vcc *vcc; /* control VCC; NULL if detached */ + int persist; /* non-zero if persistent */ +}; + + +#define DEV_LABEL "atmtcp" + +#define MAX_VPI_BITS 8 /* simplifies life */ +#define MAX_VCI_BITS 16 + + +/* + * Hairy code ahead: the control VCC may be closed while we're still + * waiting for an answer, so we need to re-validate out_vcc every once + * in a while. + */ + + +static int atmtcp_send_control(struct atm_vcc *vcc,int type, + const struct atmtcp_control *msg,int flag) +{ + DECLARE_WAITQUEUE(wait,current); + struct atm_vcc *out_vcc; + struct sk_buff *skb; + struct atmtcp_control *new_msg; + int old_test; + int error = 0; + + out_vcc = PRIV(vcc->dev) ? PRIV(vcc->dev)->vcc : NULL; + if (!out_vcc) return -EUNATCH; + skb = alloc_skb(sizeof(*msg),GFP_KERNEL); + if (!skb) return -ENOMEM; + mb(); + out_vcc = PRIV(vcc->dev) ? 
PRIV(vcc->dev)->vcc : NULL; + if (!out_vcc) { + dev_kfree_skb(skb); + return -EUNATCH; + } + atm_force_charge(out_vcc,skb->truesize); + new_msg = (struct atmtcp_control *) skb_put(skb,sizeof(*new_msg)); + *new_msg = *msg; + new_msg->hdr.length = ATMTCP_HDR_MAGIC; + new_msg->type = type; + memset(&new_msg->vcc,0,sizeof(atm_kptr_t)); + *(struct atm_vcc **) &new_msg->vcc = vcc; + old_test = test_bit(flag,&vcc->flags); + out_vcc->push(out_vcc,skb); + add_wait_queue(sk_atm(vcc)->sk_sleep, &wait); + while (test_bit(flag,&vcc->flags) == old_test) { + mb(); + out_vcc = PRIV(vcc->dev) ? PRIV(vcc->dev)->vcc : NULL; + if (!out_vcc) { + error = -EUNATCH; + break; + } + set_current_state(TASK_UNINTERRUPTIBLE); + schedule(); + } + set_current_state(TASK_RUNNING); + remove_wait_queue(sk_atm(vcc)->sk_sleep, &wait); + return error; +} + + +static int atmtcp_recv_control(const struct atmtcp_control *msg) +{ + struct atm_vcc *vcc = *(struct atm_vcc **) &msg->vcc; + + vcc->vpi = msg->addr.sap_addr.vpi; + vcc->vci = msg->addr.sap_addr.vci; + vcc->qos = msg->qos; + sk_atm(vcc)->sk_err = -msg->result; + switch (msg->type) { + case ATMTCP_CTRL_OPEN: + change_bit(ATM_VF_READY,&vcc->flags); + break; + case ATMTCP_CTRL_CLOSE: + change_bit(ATM_VF_ADDR,&vcc->flags); + break; + default: + printk(KERN_ERR "atmtcp_recv_control: unknown type %d\n", + msg->type); + return -EINVAL; + } + wake_up(sk_atm(vcc)->sk_sleep); + return 0; +} + + +static void atmtcp_v_dev_close(struct atm_dev *dev) +{ + /* Nothing.... 
Isn't this simple :-) -- REW */ +} + + +static int atmtcp_v_open(struct atm_vcc *vcc) +{ + struct atmtcp_control msg; + int error; + short vpi = vcc->vpi; + int vci = vcc->vci; + + memset(&msg,0,sizeof(msg)); + msg.addr.sap_family = AF_ATMPVC; + msg.hdr.vpi = htons(vpi); + msg.addr.sap_addr.vpi = vpi; + msg.hdr.vci = htons(vci); + msg.addr.sap_addr.vci = vci; + if (vpi == ATM_VPI_UNSPEC || vci == ATM_VCI_UNSPEC) return 0; + msg.type = ATMTCP_CTRL_OPEN; + msg.qos = vcc->qos; + set_bit(ATM_VF_ADDR,&vcc->flags); + clear_bit(ATM_VF_READY,&vcc->flags); /* just in case ... */ + error = atmtcp_send_control(vcc,ATMTCP_CTRL_OPEN,&msg,ATM_VF_READY); + if (error) return error; + return -sk_atm(vcc)->sk_err; +} + + +static void atmtcp_v_close(struct atm_vcc *vcc) +{ + struct atmtcp_control msg; + + memset(&msg,0,sizeof(msg)); + msg.addr.sap_family = AF_ATMPVC; + msg.addr.sap_addr.vpi = vcc->vpi; + msg.addr.sap_addr.vci = vcc->vci; + clear_bit(ATM_VF_READY,&vcc->flags); + (void) atmtcp_send_control(vcc,ATMTCP_CTRL_CLOSE,&msg,ATM_VF_ADDR); +} + + +static int atmtcp_v_ioctl(struct atm_dev *dev,unsigned int cmd,void __user *arg) +{ + struct atm_cirange ci; + struct atm_vcc *vcc; + struct hlist_node *node; + struct sock *s; + int i; + + if (cmd != ATM_SETCIRANGE) return -ENOIOCTLCMD; + if (copy_from_user(&ci, arg,sizeof(ci))) return -EFAULT; + if (ci.vpi_bits == ATM_CI_MAX) ci.vpi_bits = MAX_VPI_BITS; + if (ci.vci_bits == ATM_CI_MAX) ci.vci_bits = MAX_VCI_BITS; + if (ci.vpi_bits > MAX_VPI_BITS || ci.vpi_bits < 0 || + ci.vci_bits > MAX_VCI_BITS || ci.vci_bits < 0) return -EINVAL; + read_lock(&vcc_sklist_lock); + for(i = 0; i < VCC_HTABLE_SIZE; ++i) { + struct hlist_head *head = &vcc_hash[i]; + + sk_for_each(s, node, head) { + vcc = atm_sk(s); + if (vcc->dev != dev) + continue; + if ((vcc->vpi >> ci.vpi_bits) || + (vcc->vci >> ci.vci_bits)) { + read_unlock(&vcc_sklist_lock); + return -EBUSY; + } + } + } + read_unlock(&vcc_sklist_lock); + dev->ci_range = ci; + return 0; +} + + +static 
int atmtcp_v_send(struct atm_vcc *vcc,struct sk_buff *skb) +{ + struct atmtcp_dev_data *dev_data; + struct atm_vcc *out_vcc=NULL; /* Initializer quietens GCC warning */ + struct sk_buff *new_skb; + struct atmtcp_hdr *hdr; + int size; + + if (vcc->qos.txtp.traffic_class == ATM_NONE) { + if (vcc->pop) vcc->pop(vcc,skb); + else dev_kfree_skb(skb); + return -EINVAL; + } + dev_data = PRIV(vcc->dev); + if (dev_data) out_vcc = dev_data->vcc; + if (!dev_data || !out_vcc) { + if (vcc->pop) vcc->pop(vcc,skb); + else dev_kfree_skb(skb); + if (dev_data) return 0; + atomic_inc(&vcc->stats->tx_err); + return -ENOLINK; + } + size = skb->len+sizeof(struct atmtcp_hdr); + new_skb = atm_alloc_charge(out_vcc,size,GFP_ATOMIC); + if (!new_skb) { + if (vcc->pop) vcc->pop(vcc,skb); + else dev_kfree_skb(skb); + atomic_inc(&vcc->stats->tx_err); + return -ENOBUFS; + } + hdr = (void *) skb_put(new_skb,sizeof(struct atmtcp_hdr)); + hdr->vpi = htons(vcc->vpi); + hdr->vci = htons(vcc->vci); + hdr->length = htonl(skb->len); + memcpy(skb_put(new_skb,skb->len),skb->data,skb->len); + if (vcc->pop) vcc->pop(vcc,skb); + else dev_kfree_skb(skb); + out_vcc->push(out_vcc,new_skb); + atomic_inc(&vcc->stats->tx); + atomic_inc(&out_vcc->stats->rx); + return 0; +} + + +static int atmtcp_v_proc(struct atm_dev *dev,loff_t *pos,char *page) +{ + struct atmtcp_dev_data *dev_data = PRIV(dev); + + if (*pos) return 0; + if (!dev_data->persist) return sprintf(page,"ephemeral\n"); + return sprintf(page,"persistent, %sconnected\n", + dev_data->vcc ? 
"" : "dis"); +} + + +static void atmtcp_c_close(struct atm_vcc *vcc) +{ + struct atm_dev *atmtcp_dev; + struct atmtcp_dev_data *dev_data; + struct sock *s; + struct hlist_node *node; + struct atm_vcc *walk; + int i; + + atmtcp_dev = (struct atm_dev *) vcc->dev_data; + dev_data = PRIV(atmtcp_dev); + dev_data->vcc = NULL; + if (dev_data->persist) return; + atmtcp_dev->dev_data = NULL; + kfree(dev_data); + shutdown_atm_dev(atmtcp_dev); + vcc->dev_data = NULL; + read_lock(&vcc_sklist_lock); + for(i = 0; i < VCC_HTABLE_SIZE; ++i) { + struct hlist_head *head = &vcc_hash[i]; + + sk_for_each(s, node, head) { + walk = atm_sk(s); + if (walk->dev != atmtcp_dev) + continue; + wake_up(s->sk_sleep); + } + } + read_unlock(&vcc_sklist_lock); + module_put(THIS_MODULE); +} + + +static struct atm_vcc *find_vcc(struct atm_dev *dev, short vpi, int vci) +{ + struct hlist_head *head; + struct atm_vcc *vcc; + struct hlist_node *node; + struct sock *s; + + head = &vcc_hash[vci & (VCC_HTABLE_SIZE -1)]; + + sk_for_each(s, node, head) { + vcc = atm_sk(s); + if (vcc->dev == dev && + vcc->vci == vci && vcc->vpi == vpi && + vcc->qos.rxtp.traffic_class != ATM_NONE) { + return vcc; + } + } + return NULL; +} + + +static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb) +{ + struct atm_dev *dev; + struct atmtcp_hdr *hdr; + struct atm_vcc *out_vcc; + struct sk_buff *new_skb; + int result = 0; + + if (!skb->len) return 0; + dev = vcc->dev_data; + hdr = (struct atmtcp_hdr *) skb->data; + if (hdr->length == ATMTCP_HDR_MAGIC) { + result = atmtcp_recv_control( + (struct atmtcp_control *) skb->data); + goto done; + } + read_lock(&vcc_sklist_lock); + out_vcc = find_vcc(dev, ntohs(hdr->vpi), ntohs(hdr->vci)); + read_unlock(&vcc_sklist_lock); + if (!out_vcc) { + atomic_inc(&vcc->stats->tx_err); + goto done; + } + skb_pull(skb,sizeof(struct atmtcp_hdr)); + new_skb = atm_alloc_charge(out_vcc,skb->len,GFP_KERNEL); + if (!new_skb) { + result = -ENOBUFS; + goto done; + } + 
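+ /*
+  * The payload is copied into a buffer charged to the receiving VCC
+  * (the original skb still belongs to the control connection), then
+  * timestamped and pushed; tx is counted on the control VCC and rx on
+  * the target VCC.
+  */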
do_gettimeofday(&new_skb->stamp); + memcpy(skb_put(new_skb,skb->len),skb->data,skb->len); + out_vcc->push(out_vcc,new_skb); + atomic_inc(&vcc->stats->tx); + atomic_inc(&out_vcc->stats->rx); +done: + if (vcc->pop) vcc->pop(vcc,skb); + else dev_kfree_skb(skb); + return result; +} + + +/* + * Device operations for the virtual ATM devices created by ATMTCP. + */ + + +static struct atmdev_ops atmtcp_v_dev_ops = { + .dev_close = atmtcp_v_dev_close, + .open = atmtcp_v_open, + .close = atmtcp_v_close, + .ioctl = atmtcp_v_ioctl, + .send = atmtcp_v_send, + .proc_read = atmtcp_v_proc, + .owner = THIS_MODULE +}; + + +/* + * Device operations for the ATMTCP control device. + */ + + +static struct atmdev_ops atmtcp_c_dev_ops = { + .close = atmtcp_c_close, + .send = atmtcp_c_send +}; + + +static struct atm_dev atmtcp_control_dev = { + .ops = &atmtcp_c_dev_ops, + .type = "atmtcp", + .number = 999, + .lock = SPIN_LOCK_UNLOCKED +}; + + +static int atmtcp_create(int itf,int persist,struct atm_dev **result) +{ + struct atmtcp_dev_data *dev_data; + struct atm_dev *dev; + + dev_data = kmalloc(sizeof(*dev_data),GFP_KERNEL); + if (!dev_data) + return -ENOMEM; + + dev = atm_dev_register(DEV_LABEL,&atmtcp_v_dev_ops,itf,NULL); + if (!dev) { + kfree(dev_data); + return itf == -1 ? 
-ENOMEM : -EBUSY; + } + dev->ci_range.vpi_bits = MAX_VPI_BITS; + dev->ci_range.vci_bits = MAX_VCI_BITS; + dev->dev_data = dev_data; + PRIV(dev)->vcc = NULL; + PRIV(dev)->persist = persist; + if (result) *result = dev; + return 0; +} + + +static int atmtcp_attach(struct atm_vcc *vcc,int itf) +{ + struct atm_dev *dev; + + dev = NULL; + if (itf != -1) dev = atm_dev_lookup(itf); + if (dev) { + if (dev->ops != &atmtcp_v_dev_ops) { + atm_dev_put(dev); + return -EMEDIUMTYPE; + } + if (PRIV(dev)->vcc) return -EBUSY; + } + else { + int error; + + error = atmtcp_create(itf,0,&dev); + if (error) return error; + } + PRIV(dev)->vcc = vcc; + vcc->dev = &atmtcp_control_dev; + vcc_insert_socket(sk_atm(vcc)); + set_bit(ATM_VF_META,&vcc->flags); + set_bit(ATM_VF_READY,&vcc->flags); + vcc->dev_data = dev; + (void) atm_init_aal5(vcc); /* @@@ losing AAL in transit ... */ + vcc->stats = &atmtcp_control_dev.stats.aal5; + return dev->number; +} + + +static int atmtcp_create_persistent(int itf) +{ + return atmtcp_create(itf,1,NULL); +} + + +static int atmtcp_remove_persistent(int itf) +{ + struct atm_dev *dev; + struct atmtcp_dev_data *dev_data; + + dev = atm_dev_lookup(itf); + if (!dev) return -ENODEV; + if (dev->ops != &atmtcp_v_dev_ops) { + atm_dev_put(dev); + return -EMEDIUMTYPE; + } + dev_data = PRIV(dev); + if (!dev_data->persist) return 0; + dev_data->persist = 0; + if (PRIV(dev)->vcc) return 0; + kfree(dev_data); + atm_dev_put(dev); + shutdown_atm_dev(dev); + return 0; +} + +static int atmtcp_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) +{ + int err = 0; + struct atm_vcc *vcc = ATM_SD(sock); + + if (cmd != SIOCSIFATMTCP && cmd != ATMTCP_CREATE && cmd != ATMTCP_REMOVE) + return -ENOIOCTLCMD; + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + switch (cmd) { + case SIOCSIFATMTCP: + err = atmtcp_attach(vcc, (int) arg); + if (err >= 0) { + sock->state = SS_CONNECTED; + __module_get(THIS_MODULE); + } + break; + case ATMTCP_CREATE: + err = 
atmtcp_create_persistent((int) arg); + break; + case ATMTCP_REMOVE: + err = atmtcp_remove_persistent((int) arg); + break; + } + return err; +} + +static struct atm_ioctl atmtcp_ioctl_ops = { + .owner = THIS_MODULE, + .ioctl = atmtcp_ioctl, +}; + +static __init int atmtcp_init(void) +{ + register_atm_ioctl(&atmtcp_ioctl_ops); + return 0; +} + + +static void __exit atmtcp_exit(void) +{ + deregister_atm_ioctl(&atmtcp_ioctl_ops); +} + +MODULE_LICENSE("GPL"); +module_init(atmtcp_init); +module_exit(atmtcp_exit); diff --git a/drivers/atm/eni.c b/drivers/atm/eni.c new file mode 100644 index 000000000000..78e34ee79df8 --- /dev/null +++ b/drivers/atm/eni.c @@ -0,0 +1,2299 @@ +/* drivers/atm/eni.c - Efficient Networks ENI155P device driver */ + +/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */ + + +#include <linux/module.h> +#include <linux/config.h> +#include <linux/kernel.h> +#include <linux/mm.h> +#include <linux/pci.h> +#include <linux/errno.h> +#include <linux/atm.h> +#include <linux/atmdev.h> +#include <linux/sonet.h> +#include <linux/skbuff.h> +#include <linux/time.h> +#include <linux/delay.h> +#include <linux/uio.h> +#include <linux/init.h> +#include <linux/atm_eni.h> +#include <linux/bitops.h> +#include <asm/system.h> +#include <asm/io.h> +#include <asm/atomic.h> +#include <asm/uaccess.h> +#include <asm/string.h> +#include <asm/byteorder.h> + +#include "tonga.h" +#include "midway.h" +#include "suni.h" +#include "eni.h" + +#if !defined(__i386__) && !defined(__x86_64__) +#ifndef ioremap_nocache +#define ioremap_nocache(X,Y) ioremap(X,Y) +#endif +#endif + +/* + * TODO: + * + * Show stoppers + * none + * + * Minor + * - OAM support + * - fix bugs listed below + */ + +/* + * KNOWN BUGS: + * + * - may run into JK-JK bug and deadlock + * - should allocate UBR channel first + * - buffer space allocation algorithm is stupid + * (RX: should be maxSDU+maxdelay*rate + * TX: should be maxSDU+min(maxSDU,maxdelay*rate) ) + * - doesn't support OAM cells + * - 
eni_put_free may hang if not putting memory fragments that _complete_ + * 2^n block (never happens in real life, though) + * - keeps IRQ even if initialization fails + */ + + +#if 0 +#define DPRINTK(format,args...) printk(KERN_DEBUG format,##args) +#else +#define DPRINTK(format,args...) +#endif + + +#ifndef CONFIG_ATM_ENI_TUNE_BURST +#define CONFIG_ATM_ENI_BURST_TX_8W +#define CONFIG_ATM_ENI_BURST_RX_4W +#endif + + +#ifndef CONFIG_ATM_ENI_DEBUG + + +#define NULLCHECK(x) + +#define EVENT(s,a,b) + + +static void event_dump(void) +{ +} + + +#else + + +/* + * NULL pointer checking + */ + +#define NULLCHECK(x) \ + if ((unsigned long) (x) < 0x30) \ + printk(KERN_CRIT #x "==0x%lx\n",(unsigned long) (x)) + +/* + * Very extensive activity logging. Greatly improves bug detection speed but + * costs a few Mbps if enabled. + */ + +#define EV 64 + +static const char *ev[EV]; +static unsigned long ev_a[EV],ev_b[EV]; +static int ec = 0; + + +static void EVENT(const char *s,unsigned long a,unsigned long b) +{ + ev[ec] = s; + ev_a[ec] = a; + ev_b[ec] = b; + ec = (ec+1) % EV; +} + + +static void event_dump(void) +{ + int n,i; + + for (n = 0; n < EV; n++) { + i = (ec+n) % EV; + printk(KERN_NOTICE); + printk(ev[i] ? ev[i] : "(null)",ev_a[i],ev_b[i]); + } +} + + +#endif /* CONFIG_ATM_ENI_DEBUG */ + + +/* + * NExx must not be equal at end + * EExx may be equal at end + * xxPJOK verify validity of pointer jumps + * xxPMOK operating on a circular buffer of "c" words + */ + +#define NEPJOK(a0,a1,b) \ + ((a0) < (a1) ? (b) <= (a0) || (b) > (a1) : (b) <= (a0) && (b) > (a1)) +#define EEPJOK(a0,a1,b) \ + ((a0) < (a1) ? 
(b) < (a0) || (b) >= (a1) : (b) < (a0) && (b) >= (a1)) +#define NEPMOK(a0,d,b,c) NEPJOK(a0,(a0+d) & (c-1),b) +#define EEPMOK(a0,d,b,c) EEPJOK(a0,(a0+d) & (c-1),b) + + +static int tx_complete = 0,dma_complete = 0,queued = 0,requeued = 0, + backlogged = 0,rx_enqueued = 0,rx_dequeued = 0,pushed = 0,submitted = 0, + putting = 0; + +static struct atm_dev *eni_boards = NULL; + +static u32 *cpu_zeroes = NULL; /* aligned "magic" zeroes */ +static dma_addr_t zeroes; + +/* Read/write registers on card */ +#define eni_in(r) readl(eni_dev->reg+(r)*4) +#define eni_out(v,r) writel((v),eni_dev->reg+(r)*4) + + +/*-------------------------------- utilities --------------------------------*/ + + +static void dump_mem(struct eni_dev *eni_dev) +{ + int i; + + for (i = 0; i < eni_dev->free_len; i++) + printk(KERN_DEBUG " %d: %p %d\n",i, + eni_dev->free_list[i].start, + 1 << eni_dev->free_list[i].order); +} + + +static void dump(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + + int i; + + eni_dev = ENI_DEV(dev); + printk(KERN_NOTICE "Free memory\n"); + dump_mem(eni_dev); + printk(KERN_NOTICE "TX buffers\n"); + for (i = 0; i < NR_CHAN; i++) + if (eni_dev->tx[i].send) + printk(KERN_NOTICE " TX %d @ %p: %ld\n",i, + eni_dev->tx[i].send,eni_dev->tx[i].words*4); + printk(KERN_NOTICE "RX buffers\n"); + for (i = 0; i < 1024; i++) + if (eni_dev->rx_map[i] && ENI_VCC(eni_dev->rx_map[i])->rx) + printk(KERN_NOTICE " RX %d @ %p: %ld\n",i, + ENI_VCC(eni_dev->rx_map[i])->recv, + ENI_VCC(eni_dev->rx_map[i])->words*4); + printk(KERN_NOTICE "----\n"); +} + + +static void eni_put_free(struct eni_dev *eni_dev, void __iomem *start, + unsigned long size) +{ + struct eni_free *list; + int len,order; + + DPRINTK("init 0x%lx+%ld(0x%lx)\n",start,size,size); + start += eni_dev->base_diff; + list = eni_dev->free_list; + len = eni_dev->free_len; + while (size) { + if (len >= eni_dev->free_list_size) { + printk(KERN_CRIT "eni_put_free overflow (%p,%ld)\n", + start,size); + break; + } + for (order = 0; 
!(((unsigned long)start | size) & (1 << order)); order++); + if (MID_MIN_BUF_SIZE > (1 << order)) { + printk(KERN_CRIT "eni_put_free: order %d too small\n", + order); + break; + } + list[len].start = (void __iomem *) start; + list[len].order = order; + len++; + start += 1 << order; + size -= 1 << order; + } + eni_dev->free_len = len; + /*dump_mem(eni_dev);*/ +} + + +static void __iomem *eni_alloc_mem(struct eni_dev *eni_dev, unsigned long *size) +{ + struct eni_free *list; + void __iomem *start; + int len,i,order,best_order,index; + + list = eni_dev->free_list; + len = eni_dev->free_len; + if (*size < MID_MIN_BUF_SIZE) *size = MID_MIN_BUF_SIZE; + if (*size > MID_MAX_BUF_SIZE) return NULL; + for (order = 0; (1 << order) < *size; order++); + DPRINTK("trying: %ld->%d\n",*size,order); + best_order = 65; /* we don't have more than 2^64 of anything ... */ + index = 0; /* silence GCC */ + for (i = 0; i < len; i++) + if (list[i].order == order) { + best_order = order; + index = i; + break; + } + else if (best_order > list[i].order && list[i].order > order) { + best_order = list[i].order; + index = i; + } + if (best_order == 65) return NULL; + start = list[index].start-eni_dev->base_diff; + list[index] = list[--len]; + eni_dev->free_len = len; + *size = 1 << order; + eni_put_free(eni_dev,start+*size,(1 << best_order)-*size); + DPRINTK("%ld bytes (order %d) at 0x%lx\n",*size,order,start); + memset_io(start,0,*size); /* never leak data */ + /*dump_mem(eni_dev);*/ + return start; +} + + +static void eni_free_mem(struct eni_dev *eni_dev, void __iomem *start, + unsigned long size) +{ + struct eni_free *list; + int len,i,order; + + start += eni_dev->base_diff; + list = eni_dev->free_list; + len = eni_dev->free_len; + for (order = -1; size; order++) size >>= 1; + DPRINTK("eni_free_mem: %p+0x%lx (order %d)\n",start,size,order); + for (i = 0; i < len; i++) + if (((unsigned long) list[i].start) == ((unsigned long)start^(1 << order)) && + list[i].order == order) { + 
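+ /*
+  * Found the buddy: a free block of the same order whose address
+  * differs from ours only in the 1 << order bit. Merge with it and
+  * rescan the list for the buddy of the doubled block.
+  */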
DPRINTK("match[%d]: 0x%lx/0x%lx(0x%x), %d/%d\n",i, + list[i].start,start,1 << order,list[i].order,order); + list[i] = list[--len]; + start = (void __iomem *) ((unsigned long) start & ~(unsigned long) (1 << order)); + order++; + i = -1; + continue; + } + if (len >= eni_dev->free_list_size) { + printk(KERN_ALERT "eni_free_mem overflow (%p,%d)\n",start, + order); + return; + } + list[len].start = start; + list[len].order = order; + eni_dev->free_len = len+1; + /*dump_mem(eni_dev);*/ +} + + +/*----------------------------------- RX ------------------------------------*/ + + +#define ENI_VCC_NOS ((struct atm_vcc *) 1) + + +static void rx_ident_err(struct atm_vcc *vcc) +{ + struct atm_dev *dev; + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + + dev = vcc->dev; + eni_dev = ENI_DEV(dev); + /* immediately halt adapter */ + eni_out(eni_in(MID_MC_S) & + ~(MID_DMA_ENABLE | MID_TX_ENABLE | MID_RX_ENABLE),MID_MC_S); + /* dump useful information */ + eni_vcc = ENI_VCC(vcc); + printk(KERN_ALERT DEV_LABEL "(itf %d): driver error - RX ident " + "mismatch\n",dev->number); + printk(KERN_ALERT " VCI %d, rxing %d, words %ld\n",vcc->vci, + eni_vcc->rxing,eni_vcc->words); + printk(KERN_ALERT " host descr 0x%lx, rx pos 0x%lx, descr value " + "0x%x\n",eni_vcc->descr,eni_vcc->rx_pos, + (unsigned) readl(eni_vcc->recv+eni_vcc->descr*4)); + printk(KERN_ALERT " last %p, servicing %d\n",eni_vcc->last, + eni_vcc->servicing); + EVENT("---dump ends here---\n",0,0); + printk(KERN_NOTICE "---recent events---\n"); + event_dump(); + ENI_DEV(dev)->fast = NULL; /* really stop it */ + ENI_DEV(dev)->slow = NULL; + skb_queue_head_init(&ENI_DEV(dev)->rx_queue); +} + + +static int do_rx_dma(struct atm_vcc *vcc,struct sk_buff *skb, + unsigned long skip,unsigned long size,unsigned long eff) +{ + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + u32 dma_rd,dma_wr; + u32 dma[RX_DMA_BUF*2]; + dma_addr_t paddr; + unsigned long here; + int i,j; + + eni_dev = ENI_DEV(vcc->dev); + eni_vcc = ENI_VCC(vcc); + 
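+ /*
+  * The list built in dma[] below mixes three descriptor types: JK
+  * entries that merely advance the NIC's read pointer (used to skip
+  * the descriptor word and, for truncated PDUs, the unread tail),
+  * burst entries (2/4/8/16 words, as selected by the
+  * CONFIG_ATM_ENI_BURST_RX_* options) for the 16-byte-aligned bulk of
+  * the buffer, and single-word entries for the unaligned head and
+  * tail.
+  */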
paddr = 0; /* GCC, shut up */ + if (skb) { + paddr = pci_map_single(eni_dev->pci_dev,skb->data,skb->len, + PCI_DMA_FROMDEVICE); + ENI_PRV_PADDR(skb) = paddr; + if (paddr & 3) + printk(KERN_CRIT DEV_LABEL "(itf %d): VCI %d has " + "mis-aligned RX data (0x%lx)\n",vcc->dev->number, + vcc->vci,(unsigned long) paddr); + ENI_PRV_SIZE(skb) = size+skip; + /* PDU plus descriptor */ + ATM_SKB(skb)->vcc = vcc; + } + j = 0; + if ((eff && skip) || 1) { /* @@@ actually, skip is always == 1 ... */ + here = (eni_vcc->descr+skip) & (eni_vcc->words-1); + dma[j++] = (here << MID_DMA_COUNT_SHIFT) | (vcc->vci + << MID_DMA_VCI_SHIFT) | MID_DT_JK; + j++; + } + here = (eni_vcc->descr+size+skip) & (eni_vcc->words-1); + if (!eff) size += skip; + else { + unsigned long words; + + if (!size) { + DPRINTK("strange things happen ...\n"); + EVENT("strange things happen ... (skip=%ld,eff=%ld)\n", + size,eff); + } + words = eff; + if (paddr & 15) { + unsigned long init; + + init = 4-((paddr & 15) >> 2); + if (init > words) init = words; + dma[j++] = MID_DT_WORD | (init << MID_DMA_COUNT_SHIFT) | + (vcc->vci << MID_DMA_VCI_SHIFT); + dma[j++] = paddr; + paddr += init << 2; + words -= init; + } +#ifdef CONFIG_ATM_ENI_BURST_RX_16W /* may work with some PCI chipsets ... */ + if (words & ~15) { + dma[j++] = MID_DT_16W | ((words >> 4) << + MID_DMA_COUNT_SHIFT) | (vcc->vci << + MID_DMA_VCI_SHIFT); + dma[j++] = paddr; + paddr += (words & ~15) << 2; + words &= 15; + } +#endif +#ifdef CONFIG_ATM_ENI_BURST_RX_8W /* works only with *some* PCI chipsets ... 
*/ + if (words & ~7) { + dma[j++] = MID_DT_8W | ((words >> 3) << + MID_DMA_COUNT_SHIFT) | (vcc->vci << + MID_DMA_VCI_SHIFT); + dma[j++] = paddr; + paddr += (words & ~7) << 2; + words &= 7; + } +#endif +#ifdef CONFIG_ATM_ENI_BURST_RX_4W /* recommended */ + if (words & ~3) { + dma[j++] = MID_DT_4W | ((words >> 2) << + MID_DMA_COUNT_SHIFT) | (vcc->vci << + MID_DMA_VCI_SHIFT); + dma[j++] = paddr; + paddr += (words & ~3) << 2; + words &= 3; + } +#endif +#ifdef CONFIG_ATM_ENI_BURST_RX_2W /* probably useless if RX_4W, RX_8W, ... */ + if (words & ~1) { + dma[j++] = MID_DT_2W | ((words >> 1) << + MID_DMA_COUNT_SHIFT) | (vcc->vci << + MID_DMA_VCI_SHIFT); + dma[j++] = paddr; + paddr += (words & ~1) << 2; + words &= 1; + } +#endif + if (words) { + dma[j++] = MID_DT_WORD | (words << MID_DMA_COUNT_SHIFT) + | (vcc->vci << MID_DMA_VCI_SHIFT); + dma[j++] = paddr; + } + } + if (size != eff) { + dma[j++] = (here << MID_DMA_COUNT_SHIFT) | + (vcc->vci << MID_DMA_VCI_SHIFT) | MID_DT_JK; + j++; + } + if (!j || j > 2*RX_DMA_BUF) { + printk(KERN_CRIT DEV_LABEL "!j or j too big!!!\n"); + goto trouble; + } + dma[j-2] |= MID_DMA_END; + j = j >> 1; + dma_wr = eni_in(MID_DMA_WR_RX); + dma_rd = eni_in(MID_DMA_RD_RX); + /* + * Can I move the dma_wr pointer by 2j+1 positions without overwriting + * data that hasn't been read (position of dma_rd) yet ? 
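+ * NEPMOK checks exactly that: the new write position (dma_wr advanced
+ * by 2j+1, modulo NR_DMA_RX) must neither sweep past nor land on
+ * dma_rd; if it would, the queue is treated as full and the PDU is
+ * dropped.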
+ */ + if (!NEPMOK(dma_wr,j+j+1,dma_rd,NR_DMA_RX)) { /* @@@ +1 is ugly */ + printk(KERN_WARNING DEV_LABEL "(itf %d): RX DMA full\n", + vcc->dev->number); + goto trouble; + } + for (i = 0; i < j; i++) { + writel(dma[i*2],eni_dev->rx_dma+dma_wr*8); + writel(dma[i*2+1],eni_dev->rx_dma+dma_wr*8+4); + dma_wr = (dma_wr+1) & (NR_DMA_RX-1); + } + if (skb) { + ENI_PRV_POS(skb) = eni_vcc->descr+size+1; + skb_queue_tail(&eni_dev->rx_queue,skb); + eni_vcc->last = skb; +rx_enqueued++; + } + eni_vcc->descr = here; + eni_out(dma_wr,MID_DMA_WR_RX); + return 0; + +trouble: + if (paddr) + pci_unmap_single(eni_dev->pci_dev,paddr,skb->len, + PCI_DMA_FROMDEVICE); + if (skb) dev_kfree_skb_irq(skb); + return -1; +} + + +static void discard(struct atm_vcc *vcc,unsigned long size) +{ + struct eni_vcc *eni_vcc; + + eni_vcc = ENI_VCC(vcc); + EVENT("discard (size=%ld)\n",size,0); + while (do_rx_dma(vcc,NULL,1,size,0)) EVENT("BUSY LOOP",0,0); + /* could do a full fallback, but that might be more expensive */ + if (eni_vcc->rxing) ENI_PRV_POS(eni_vcc->last) += size+1; + else eni_vcc->rx_pos = (eni_vcc->rx_pos+size+1) & (eni_vcc->words-1); +} + + +/* + * TODO: should check whether direct copies (without DMA setup, dequeuing on + * interrupt, etc.) aren't much faster for AAL0 + */ + +static int rx_aal0(struct atm_vcc *vcc) +{ + struct eni_vcc *eni_vcc; + unsigned long descr; + unsigned long length; + struct sk_buff *skb; + + DPRINTK(">rx_aal0\n"); + eni_vcc = ENI_VCC(vcc); + descr = readl(eni_vcc->recv+eni_vcc->descr*4); + if ((descr & MID_RED_IDEN) != (MID_RED_RX_ID << MID_RED_SHIFT)) { + rx_ident_err(vcc); + return 1; + } + if (descr & MID_RED_T) { + DPRINTK(DEV_LABEL "(itf %d): trashing empty cell\n", + vcc->dev->number); + length = 0; + atomic_inc(&vcc->stats->rx_err); + } + else { + length = ATM_CELL_SIZE-1; /* no HEC */ + } + skb = length ? 
atm_alloc_charge(vcc,length,GFP_ATOMIC) : NULL; + if (!skb) { + discard(vcc,length >> 2); + return 0; + } + skb_put(skb,length); + skb->stamp = eni_vcc->timestamp; + DPRINTK("got len %ld\n",length); + if (do_rx_dma(vcc,skb,1,length >> 2,length >> 2)) return 1; + eni_vcc->rxing++; + return 0; +} + + +static int rx_aal5(struct atm_vcc *vcc) +{ + struct eni_vcc *eni_vcc; + unsigned long descr; + unsigned long size,eff,length; + struct sk_buff *skb; + + EVENT("rx_aal5\n",0,0); + DPRINTK(">rx_aal5\n"); + eni_vcc = ENI_VCC(vcc); + descr = readl(eni_vcc->recv+eni_vcc->descr*4); + if ((descr & MID_RED_IDEN) != (MID_RED_RX_ID << MID_RED_SHIFT)) { + rx_ident_err(vcc); + return 1; + } + if (descr & (MID_RED_T | MID_RED_CRC_ERR)) { + if (descr & MID_RED_T) { + EVENT("empty cell (descr=0x%lx)\n",descr,0); + DPRINTK(DEV_LABEL "(itf %d): trashing empty cell\n", + vcc->dev->number); + size = 0; + } + else { + static unsigned long silence = 0; + + if (time_after(jiffies, silence) || silence == 0) { + printk(KERN_WARNING DEV_LABEL "(itf %d): " + "discarding PDU(s) with CRC error\n", + vcc->dev->number); + silence = (jiffies+2*HZ)|1; + } + size = (descr & MID_RED_COUNT)*(ATM_CELL_PAYLOAD >> 2); + EVENT("CRC error (descr=0x%lx,size=%ld)\n",descr, + size); + } + eff = length = 0; + atomic_inc(&vcc->stats->rx_err); + } + else { + size = (descr & MID_RED_COUNT)*(ATM_CELL_PAYLOAD >> 2); + DPRINTK("size=%ld\n",size); + length = readl(eni_vcc->recv+(((eni_vcc->descr+size-1) & + (eni_vcc->words-1)))*4) & 0xffff; + /* -trailer(2)+header(1) */ + if (length && length <= (size << 2)-8 && length <= + ATM_MAX_AAL5_PDU) eff = (length+3) >> 2; + else { /* ^ trailer length (8) */ + EVENT("bad PDU (descr=0x08%lx,length=%ld)\n",descr, + length); + printk(KERN_ERR DEV_LABEL "(itf %d): bad AAL5 PDU " + "(VCI=%d,length=%ld,size=%ld (descr 0x%lx))\n", + vcc->dev->number,vcc->vci,length,size << 2,descr); + length = eff = 0; + atomic_inc(&vcc->stats->rx_err); + } + } + skb = eff ? 
atm_alloc_charge(vcc,eff << 2,GFP_ATOMIC) : NULL; + if (!skb) { + discard(vcc,size); + return 0; + } + skb_put(skb,length); + DPRINTK("got len %ld\n",length); + if (do_rx_dma(vcc,skb,1,size,eff)) return 1; + eni_vcc->rxing++; + return 0; +} + + +static inline int rx_vcc(struct atm_vcc *vcc) +{ + void __iomem *vci_dsc; + unsigned long tmp; + struct eni_vcc *eni_vcc; + + eni_vcc = ENI_VCC(vcc); + vci_dsc = ENI_DEV(vcc->dev)->vci+vcc->vci*16; + EVENT("rx_vcc(1)\n",0,0); + while (eni_vcc->descr != (tmp = (readl(vci_dsc+4) & MID_VCI_DESCR) >> + MID_VCI_DESCR_SHIFT)) { + EVENT("rx_vcc(2: host dsc=0x%lx, nic dsc=0x%lx)\n", + eni_vcc->descr,tmp); + DPRINTK("CB_DESCR %ld REG_DESCR %d\n",ENI_VCC(vcc)->descr, + (((unsigned) readl(vci_dsc+4) & MID_VCI_DESCR) >> + MID_VCI_DESCR_SHIFT)); + if (ENI_VCC(vcc)->rx(vcc)) return 1; + } + /* clear IN_SERVICE flag */ + writel(readl(vci_dsc) & ~MID_VCI_IN_SERVICE,vci_dsc); + /* + * If new data has arrived between evaluating the while condition and + * clearing IN_SERVICE, we wouldn't be notified until additional data + * follows. So we have to loop again to be sure. 
+ */ + EVENT("rx_vcc(3)\n",0,0); + while (ENI_VCC(vcc)->descr != (tmp = (readl(vci_dsc+4) & MID_VCI_DESCR) + >> MID_VCI_DESCR_SHIFT)) { + EVENT("rx_vcc(4: host dsc=0x%lx, nic dsc=0x%lx)\n", + eni_vcc->descr,tmp); + DPRINTK("CB_DESCR %ld REG_DESCR %d\n",ENI_VCC(vcc)->descr, + (((unsigned) readl(vci_dsc+4) & MID_VCI_DESCR) >> + MID_VCI_DESCR_SHIFT)); + if (ENI_VCC(vcc)->rx(vcc)) return 1; + } + return 0; +} + + +static void poll_rx(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + struct atm_vcc *curr; + + eni_dev = ENI_DEV(dev); + while ((curr = eni_dev->fast)) { + EVENT("poll_rx.fast\n",0,0); + if (rx_vcc(curr)) return; + eni_dev->fast = ENI_VCC(curr)->next; + ENI_VCC(curr)->next = ENI_VCC_NOS; + barrier(); + ENI_VCC(curr)->servicing--; + } + while ((curr = eni_dev->slow)) { + EVENT("poll_rx.slow\n",0,0); + if (rx_vcc(curr)) return; + eni_dev->slow = ENI_VCC(curr)->next; + ENI_VCC(curr)->next = ENI_VCC_NOS; + barrier(); + ENI_VCC(curr)->servicing--; + } +} + + +static void get_service(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + struct atm_vcc *vcc; + unsigned long vci; + + DPRINTK(">get_service\n"); + eni_dev = ENI_DEV(dev); + while (eni_in(MID_SERV_WRITE) != eni_dev->serv_read) { + vci = readl(eni_dev->service+eni_dev->serv_read*4); + eni_dev->serv_read = (eni_dev->serv_read+1) & (NR_SERVICE-1); + vcc = eni_dev->rx_map[vci & 1023]; + if (!vcc) { + printk(KERN_CRIT DEV_LABEL "(itf %d): VCI %ld not " + "found\n",dev->number,vci); + continue; /* nasty but we try to go on anyway */ + /* @@@ nope, doesn't work */ + } + EVENT("getting from service\n",0,0); + if (ENI_VCC(vcc)->next != ENI_VCC_NOS) { + EVENT("double service\n",0,0); + DPRINTK("Grr, servicing VCC %ld twice\n",vci); + continue; + } + do_gettimeofday(&ENI_VCC(vcc)->timestamp); + ENI_VCC(vcc)->next = NULL; + if (vcc->qos.rxtp.traffic_class == ATM_CBR) { + if (eni_dev->fast) + ENI_VCC(eni_dev->last_fast)->next = vcc; + else eni_dev->fast = vcc; + eni_dev->last_fast = vcc; + } + else { + if 
(eni_dev->slow) + ENI_VCC(eni_dev->last_slow)->next = vcc; + else eni_dev->slow = vcc; + eni_dev->last_slow = vcc; + } +putting++; + ENI_VCC(vcc)->servicing++; + } +} + + +static void dequeue_rx(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + struct atm_vcc *vcc; + struct sk_buff *skb; + void __iomem *vci_dsc; + int first; + + eni_dev = ENI_DEV(dev); + first = 1; + while (1) { + skb = skb_dequeue(&eni_dev->rx_queue); + if (!skb) { + if (first) { + DPRINTK(DEV_LABEL "(itf %d): RX but not " + "rxing\n",dev->number); + EVENT("nothing to dequeue\n",0,0); + } + break; + } + EVENT("dequeued (size=%ld,pos=0x%lx)\n",ENI_PRV_SIZE(skb), + ENI_PRV_POS(skb)); +rx_dequeued++; + vcc = ATM_SKB(skb)->vcc; + eni_vcc = ENI_VCC(vcc); + first = 0; + vci_dsc = eni_dev->vci+vcc->vci*16; + if (!EEPMOK(eni_vcc->rx_pos,ENI_PRV_SIZE(skb), + (readl(vci_dsc+4) & MID_VCI_READ) >> MID_VCI_READ_SHIFT, + eni_vcc->words)) { + EVENT("requeuing\n",0,0); + skb_queue_head(&eni_dev->rx_queue,skb); + break; + } + eni_vcc->rxing--; + eni_vcc->rx_pos = ENI_PRV_POS(skb) & (eni_vcc->words-1); + pci_unmap_single(eni_dev->pci_dev,ENI_PRV_PADDR(skb),skb->len, + PCI_DMA_FROMDEVICE); + if (!skb->len) dev_kfree_skb_irq(skb); + else { + EVENT("pushing (len=%ld)\n",skb->len,0); + if (vcc->qos.aal == ATM_AAL0) + *(unsigned long *) skb->data = + ntohl(*(unsigned long *) skb->data); + memset(skb->cb,0,sizeof(struct eni_skb_prv)); + vcc->push(vcc,skb); + pushed++; + } + atomic_inc(&vcc->stats->rx); + } + wake_up(&eni_dev->rx_wait); +} + + +static int open_rx_first(struct atm_vcc *vcc) +{ + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + unsigned long size; + + DPRINTK("open_rx_first\n"); + eni_dev = ENI_DEV(vcc->dev); + eni_vcc = ENI_VCC(vcc); + eni_vcc->rx = NULL; + if (vcc->qos.rxtp.traffic_class == ATM_NONE) return 0; + size = vcc->qos.rxtp.max_sdu*eni_dev->rx_mult/100; + if (size > MID_MAX_BUF_SIZE && vcc->qos.rxtp.max_sdu <= + MID_MAX_BUF_SIZE) + size = MID_MAX_BUF_SIZE; + 
eni_vcc->recv = eni_alloc_mem(eni_dev,&size); + DPRINTK("rx at 0x%lx\n",eni_vcc->recv); + eni_vcc->words = size >> 2; + if (!eni_vcc->recv) return -ENOBUFS; + eni_vcc->rx = vcc->qos.aal == ATM_AAL5 ? rx_aal5 : rx_aal0; + eni_vcc->descr = 0; + eni_vcc->rx_pos = 0; + eni_vcc->rxing = 0; + eni_vcc->servicing = 0; + eni_vcc->next = ENI_VCC_NOS; + return 0; +} + + +static int open_rx_second(struct atm_vcc *vcc) +{ + void __iomem *here; + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + unsigned long size; + int order; + + DPRINTK("open_rx_second\n"); + eni_dev = ENI_DEV(vcc->dev); + eni_vcc = ENI_VCC(vcc); + if (!eni_vcc->rx) return 0; + /* set up VCI descriptor */ + here = eni_dev->vci+vcc->vci*16; + DPRINTK("loc 0x%x\n",(unsigned) (eni_vcc->recv-eni_dev->ram)/4); + size = eni_vcc->words >> 8; + for (order = -1; size; order++) size >>= 1; + writel(0,here+4); /* descr, read = 0 */ + writel(0,here+8); /* write, state, count = 0 */ + if (eni_dev->rx_map[vcc->vci]) + printk(KERN_CRIT DEV_LABEL "(itf %d): BUG - VCI %d already " + "in use\n",vcc->dev->number,vcc->vci); + eni_dev->rx_map[vcc->vci] = vcc; /* now it counts */ + writel(((vcc->qos.aal != ATM_AAL5 ? 
MID_MODE_RAW : MID_MODE_AAL5) << + MID_VCI_MODE_SHIFT) | MID_VCI_PTI_MODE | + (((eni_vcc->recv-eni_dev->ram) >> (MID_LOC_SKIP+2)) << + MID_VCI_LOCATION_SHIFT) | (order << MID_VCI_SIZE_SHIFT),here); + return 0; +} + + +static void close_rx(struct atm_vcc *vcc) +{ + DECLARE_WAITQUEUE(wait,current); + void __iomem *here; + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + + eni_vcc = ENI_VCC(vcc); + if (!eni_vcc->rx) return; + eni_dev = ENI_DEV(vcc->dev); + if (vcc->vpi != ATM_VPI_UNSPEC && vcc->vci != ATM_VCI_UNSPEC) { + here = eni_dev->vci+vcc->vci*16; + /* block receiver */ + writel((readl(here) & ~MID_VCI_MODE) | (MID_MODE_TRASH << + MID_VCI_MODE_SHIFT),here); + /* wait for receiver to become idle */ + udelay(27); + /* discard pending cell */ + writel(readl(here) & ~MID_VCI_IN_SERVICE,here); + /* don't accept any new ones */ + eni_dev->rx_map[vcc->vci] = NULL; + /* wait for RX queue to drain */ + DPRINTK("eni_close: waiting for RX ...\n"); + EVENT("RX closing\n",0,0); + add_wait_queue(&eni_dev->rx_wait,&wait); + set_current_state(TASK_UNINTERRUPTIBLE); + barrier(); + for (;;) { + /* transition service->rx: rxing++, servicing-- */ + if (!eni_vcc->servicing) { + barrier(); + if (!eni_vcc->rxing) break; + } + EVENT("drain PDUs (rx %ld, serv %ld)\n",eni_vcc->rxing, + eni_vcc->servicing); + printk(KERN_INFO "%d+%d RX left\n",eni_vcc->servicing, + eni_vcc->rxing); + schedule(); + set_current_state(TASK_UNINTERRUPTIBLE); + } + for (;;) { + int at_end; + u32 tmp; + + tasklet_disable(&eni_dev->task); + tmp = readl(eni_dev->vci+vcc->vci*16+4) & MID_VCI_READ; + at_end = eni_vcc->rx_pos == tmp >> MID_VCI_READ_SHIFT; + tasklet_enable(&eni_dev->task); + if (at_end) break; + EVENT("drain discard (host 0x%lx, nic 0x%lx)\n", + eni_vcc->rx_pos,tmp); + printk(KERN_INFO "draining RX: host 0x%lx, nic 0x%x\n", + eni_vcc->rx_pos,tmp); + schedule(); + set_current_state(TASK_UNINTERRUPTIBLE); + } + set_current_state(TASK_RUNNING); + remove_wait_queue(&eni_dev->rx_wait,&wait); + } + 
eni_free_mem(eni_dev,eni_vcc->recv,eni_vcc->words << 2); + eni_vcc->rx = NULL; +} + + +static int start_rx(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + + eni_dev = ENI_DEV(dev); + eni_dev->rx_map = (struct atm_vcc **) get_zeroed_page(GFP_KERNEL); + if (!eni_dev->rx_map) { + printk(KERN_ERR DEV_LABEL "(itf %d): couldn't get free page\n", + dev->number); + kfree(eni_dev->free_list); + return -ENOMEM; + } + memset(eni_dev->rx_map,0,PAGE_SIZE); + eni_dev->rx_mult = DEFAULT_RX_MULT; + eni_dev->fast = eni_dev->last_fast = NULL; + eni_dev->slow = eni_dev->last_slow = NULL; + init_waitqueue_head(&eni_dev->rx_wait); + skb_queue_head_init(&eni_dev->rx_queue); + eni_dev->serv_read = eni_in(MID_SERV_WRITE); + eni_out(0,MID_DMA_WR_RX); + return 0; +} + + +/*----------------------------------- TX ------------------------------------*/ + + +enum enq_res { enq_ok,enq_next,enq_jam }; + + +static inline void put_dma(int chan,u32 *dma,int *j,dma_addr_t paddr, + u32 size) +{ + u32 init,words; + + DPRINTK("put_dma: 0x%lx+0x%x\n",(unsigned long) paddr,size); + EVENT("put_dma: 0x%lx+0x%lx\n",(unsigned long) paddr,size); +#if 0 /* don't complain anymore */ + if (paddr & 3) + printk(KERN_ERR "put_dma: unaligned addr (0x%lx)\n",paddr); + if (size & 3) + printk(KERN_ERR "put_dma: unaligned size (0x%lx)\n",size); +#endif + if (paddr & 3) { + init = 4-(paddr & 3); + if (init > size || size < 7) init = size; + DPRINTK("put_dma: %lx DMA: %d/%d bytes\n", + (unsigned long) paddr,init,size); + dma[(*j)++] = MID_DT_BYTE | (init << MID_DMA_COUNT_SHIFT) | + (chan << MID_DMA_CHAN_SHIFT); + dma[(*j)++] = paddr; + paddr += init; + size -= init; + } + words = size >> 2; + size &= 3; + if (words && (paddr & 31)) { + init = 8-((paddr & 31) >> 2); + if (init > words) init = words; + DPRINTK("put_dma: %lx DMA: %d/%d words\n", + (unsigned long) paddr,init,words); + dma[(*j)++] = MID_DT_WORD | (init << MID_DMA_COUNT_SHIFT) | + (chan << MID_DMA_CHAN_SHIFT); + dma[(*j)++] = paddr; + 
paddr += init << 2; + words -= init; + } +#ifdef CONFIG_ATM_ENI_BURST_TX_16W /* may work with some PCI chipsets ... */ + if (words & ~15) { + DPRINTK("put_dma: %lx DMA: %d*16/%d words\n", + (unsigned long) paddr,words >> 4,words); + dma[(*j)++] = MID_DT_16W | ((words >> 4) << MID_DMA_COUNT_SHIFT) + | (chan << MID_DMA_CHAN_SHIFT); + dma[(*j)++] = paddr; + paddr += (words & ~15) << 2; + words &= 15; + } +#endif +#ifdef CONFIG_ATM_ENI_BURST_TX_8W /* recommended */ + if (words & ~7) { + DPRINTK("put_dma: %lx DMA: %d*8/%d words\n", + (unsigned long) paddr,words >> 3,words); + dma[(*j)++] = MID_DT_8W | ((words >> 3) << MID_DMA_COUNT_SHIFT) + | (chan << MID_DMA_CHAN_SHIFT); + dma[(*j)++] = paddr; + paddr += (words & ~7) << 2; + words &= 7; + } +#endif +#ifdef CONFIG_ATM_ENI_BURST_TX_4W /* probably useless if TX_8W or TX_16W */ + if (words & ~3) { + DPRINTK("put_dma: %lx DMA: %d*4/%d words\n", + (unsigned long) paddr,words >> 2,words); + dma[(*j)++] = MID_DT_4W | ((words >> 2) << MID_DMA_COUNT_SHIFT) + | (chan << MID_DMA_CHAN_SHIFT); + dma[(*j)++] = paddr; + paddr += (words & ~3) << 2; + words &= 3; + } +#endif +#ifdef CONFIG_ATM_ENI_BURST_TX_2W /* probably useless if TX_4W, TX_8W, ... 
*/ + if (words & ~1) { + DPRINTK("put_dma: %lx DMA: %d*2/%d words\n", + (unsigned long) paddr,words >> 1,words); + dma[(*j)++] = MID_DT_2W | ((words >> 1) << MID_DMA_COUNT_SHIFT) + | (chan << MID_DMA_CHAN_SHIFT); + dma[(*j)++] = paddr; + paddr += (words & ~1) << 2; + words &= 1; + } +#endif + if (words) { + DPRINTK("put_dma: %lx DMA: %d words\n",(unsigned long) paddr, + words); + dma[(*j)++] = MID_DT_WORD | (words << MID_DMA_COUNT_SHIFT) | + (chan << MID_DMA_CHAN_SHIFT); + dma[(*j)++] = paddr; + paddr += words << 2; + } + if (size) { + DPRINTK("put_dma: %lx DMA: %d bytes\n",(unsigned long) paddr, + size); + dma[(*j)++] = MID_DT_BYTE | (size << MID_DMA_COUNT_SHIFT) | + (chan << MID_DMA_CHAN_SHIFT); + dma[(*j)++] = paddr; + } +} + + +static enum enq_res do_tx(struct sk_buff *skb) +{ + struct atm_vcc *vcc; + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + struct eni_tx *tx; + dma_addr_t paddr; + u32 dma_rd,dma_wr; + u32 size; /* in words */ + int aal5,dma_size,i,j; + + DPRINTK(">do_tx\n"); + NULLCHECK(skb); + EVENT("do_tx: skb=0x%lx, %ld bytes\n",(unsigned long) skb,skb->len); + vcc = ATM_SKB(skb)->vcc; + NULLCHECK(vcc); + eni_dev = ENI_DEV(vcc->dev); + NULLCHECK(eni_dev); + eni_vcc = ENI_VCC(vcc); + tx = eni_vcc->tx; + NULLCHECK(tx); +#if 0 /* Enable this for testing with the "align" program */ + { + unsigned int hack = *((char *) skb->data)-'0'; + + if (hack < 8) { + skb->data += hack; + skb->len -= hack; + } + } +#endif +#if 0 /* should work now */ + if ((unsigned long) skb->data & 3) + printk(KERN_ERR DEV_LABEL "(itf %d): VCI %d has mis-aligned " + "TX data\n",vcc->dev->number,vcc->vci); +#endif + /* + * Potential future IP speedup: make hard_header big enough to put + * segmentation descriptor directly into PDU. 
Saves: 4 slave writes, + * 1 DMA xfer & 2 DMA'ed bytes (protocol layering is for wimps :-) + */ + + aal5 = vcc->qos.aal == ATM_AAL5; + /* check space in buffer */ + if (!aal5) + size = (ATM_CELL_PAYLOAD >> 2)+TX_DESCR_SIZE; + /* cell without HEC plus segmentation header (includes + four-byte cell header) */ + else { + size = skb->len+4*AAL5_TRAILER+ATM_CELL_PAYLOAD-1; + /* add AAL5 trailer */ + size = ((size-(size % ATM_CELL_PAYLOAD)) >> 2)+TX_DESCR_SIZE; + /* add segmentation header */ + } + /* + * Can I move tx_pos by size bytes without getting closer than TX_GAP + * to the read pointer ? TX_GAP means to leave some space for what + * the manual calls "too close". + */ + if (!NEPMOK(tx->tx_pos,size+TX_GAP, + eni_in(MID_TX_RDPTR(tx->index)),tx->words)) { + DPRINTK(DEV_LABEL "(itf %d): TX full (size %d)\n", + vcc->dev->number,size); + return enq_next; + } + /* check DMA */ + dma_wr = eni_in(MID_DMA_WR_TX); + dma_rd = eni_in(MID_DMA_RD_TX); + dma_size = 3; /* JK for descriptor and final fill, plus final size + mis-alignment fix */ +DPRINTK("iovcnt = %d\n",skb_shinfo(skb)->nr_frags); + if (!skb_shinfo(skb)->nr_frags) dma_size += 5; + else dma_size += 5*(skb_shinfo(skb)->nr_frags+1); + if (dma_size > TX_DMA_BUF) { + printk(KERN_CRIT DEV_LABEL "(itf %d): needs %d DMA entries " + "(got only %d)\n",vcc->dev->number,dma_size,TX_DMA_BUF); + } + DPRINTK("dma_wr is %d, tx_pos is %ld\n",dma_wr,tx->tx_pos); + if (dma_wr != dma_rd && ((dma_rd+NR_DMA_TX-dma_wr) & (NR_DMA_TX-1)) < + dma_size) { + printk(KERN_WARNING DEV_LABEL "(itf %d): TX DMA full\n", + vcc->dev->number); + return enq_jam; + } + paddr = pci_map_single(eni_dev->pci_dev,skb->data,skb->len, + PCI_DMA_TODEVICE); + ENI_PRV_PADDR(skb) = paddr; + /* prepare DMA queue entries */ + j = 0; + eni_dev->dma[j++] = (((tx->tx_pos+TX_DESCR_SIZE) & (tx->words-1)) << + MID_DMA_COUNT_SHIFT) | (tx->index << MID_DMA_CHAN_SHIFT) | + MID_DT_JK; + j++; + if (!skb_shinfo(skb)->nr_frags) + if (aal5) 
put_dma(tx->index,eni_dev->dma,&j,paddr,skb->len); + else put_dma(tx->index,eni_dev->dma,&j,paddr+4,skb->len-4); + else { +DPRINTK("doing direct send\n"); /* @@@ well, this doesn't work anyway */ + for (i = -1; i < skb_shinfo(skb)->nr_frags; i++) + if (i == -1) + put_dma(tx->index,eni_dev->dma,&j,(unsigned long) + skb->data, + skb->len - skb->data_len); + else + put_dma(tx->index,eni_dev->dma,&j,(unsigned long) + skb_shinfo(skb)->frags[i].page + skb_shinfo(skb)->frags[i].page_offset, + skb_shinfo(skb)->frags[i].size); + } + if (skb->len & 3) + put_dma(tx->index,eni_dev->dma,&j,zeroes,4-(skb->len & 3)); + /* JK for AAL5 trailer - AAL0 doesn't need it, but who cares ... */ + eni_dev->dma[j++] = (((tx->tx_pos+size) & (tx->words-1)) << + MID_DMA_COUNT_SHIFT) | (tx->index << MID_DMA_CHAN_SHIFT) | + MID_DMA_END | MID_DT_JK; + j++; + DPRINTK("DMA at end: %d\n",j); + /* store frame */ + writel((MID_SEG_TX_ID << MID_SEG_ID_SHIFT) | + (aal5 ? MID_SEG_AAL5 : 0) | (tx->prescaler << MID_SEG_PR_SHIFT) | + (tx->resolution << MID_SEG_RATE_SHIFT) | + (size/(ATM_CELL_PAYLOAD/4)),tx->send+tx->tx_pos*4); +/*printk("dsc = 0x%08lx\n",(unsigned long) readl(tx->send+tx->tx_pos*4));*/ + writel((vcc->vci << MID_SEG_VCI_SHIFT) | + (aal5 ? 0 : (skb->data[3] & 0xf)) | + (ATM_SKB(skb)->atm_options & ATM_ATMOPT_CLP ? 
MID_SEG_CLP : 0), + tx->send+((tx->tx_pos+1) & (tx->words-1))*4); + DPRINTK("size: %d, len:%d\n",size,skb->len); + if (aal5) + writel(skb->len,tx->send+ + ((tx->tx_pos+size-AAL5_TRAILER) & (tx->words-1))*4); + j = j >> 1; + for (i = 0; i < j; i++) { + writel(eni_dev->dma[i*2],eni_dev->tx_dma+dma_wr*8); + writel(eni_dev->dma[i*2+1],eni_dev->tx_dma+dma_wr*8+4); + dma_wr = (dma_wr+1) & (NR_DMA_TX-1); + } + ENI_PRV_POS(skb) = tx->tx_pos; + ENI_PRV_SIZE(skb) = size; + ENI_VCC(vcc)->txing += size; + tx->tx_pos = (tx->tx_pos+size) & (tx->words-1); + DPRINTK("dma_wr set to %d, tx_pos is now %ld\n",dma_wr,tx->tx_pos); + eni_out(dma_wr,MID_DMA_WR_TX); + skb_queue_tail(&eni_dev->tx_queue,skb); +queued++; + return enq_ok; +} + + +static void poll_tx(struct atm_dev *dev) +{ + struct eni_tx *tx; + struct sk_buff *skb; + enum enq_res res; + int i; + + DPRINTK(">poll_tx\n"); + for (i = NR_CHAN-1; i >= 0; i--) { + tx = &ENI_DEV(dev)->tx[i]; + if (tx->send) + while ((skb = skb_dequeue(&tx->backlog))) { + res = do_tx(skb); + if (res == enq_ok) continue; + DPRINTK("re-queuing TX PDU\n"); + skb_queue_head(&tx->backlog,skb); +requeued++; + if (res == enq_jam) return; + break; + } + } +} + + +static void dequeue_tx(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + struct atm_vcc *vcc; + struct sk_buff *skb; + struct eni_tx *tx; + + NULLCHECK(dev); + eni_dev = ENI_DEV(dev); + NULLCHECK(eni_dev); + while ((skb = skb_dequeue(&eni_dev->tx_queue))) { + vcc = ATM_SKB(skb)->vcc; + NULLCHECK(vcc); + tx = ENI_VCC(vcc)->tx; + NULLCHECK(ENI_VCC(vcc)->tx); + DPRINTK("dequeue_tx: next 0x%lx curr 0x%x\n",ENI_PRV_POS(skb), + (unsigned) eni_in(MID_TX_DESCRSTART(tx->index))); + if (ENI_VCC(vcc)->txing < tx->words && ENI_PRV_POS(skb) == + eni_in(MID_TX_DESCRSTART(tx->index))) { + skb_queue_head(&eni_dev->tx_queue,skb); + break; + } + ENI_VCC(vcc)->txing -= ENI_PRV_SIZE(skb); + pci_unmap_single(eni_dev->pci_dev,ENI_PRV_PADDR(skb),skb->len, + PCI_DMA_TODEVICE); + if (vcc->pop) vcc->pop(vcc,skb); + else 
dev_kfree_skb_irq(skb); + atomic_inc(&vcc->stats->tx); + wake_up(&eni_dev->tx_wait); +dma_complete++; + } +} + + +static struct eni_tx *alloc_tx(struct eni_dev *eni_dev,int ubr) +{ + int i; + + for (i = !ubr; i < NR_CHAN; i++) + if (!eni_dev->tx[i].send) return eni_dev->tx+i; + return NULL; +} + + +static int comp_tx(struct eni_dev *eni_dev,int *pcr,int reserved,int *pre, + int *res,int unlimited) +{ + static const int pre_div[] = { 4,16,128,2048 }; + /* 2^(((x+2)^2-(x+2))/2+1) */ + + if (unlimited) *pre = *res = 0; + else { + if (*pcr > 0) { + int div; + + for (*pre = 0; *pre < 3; (*pre)++) + if (TS_CLOCK/pre_div[*pre]/64 <= *pcr) break; + div = pre_div[*pre]**pcr; + DPRINTK("min div %d\n",div); + *res = TS_CLOCK/div-1; + } + else { + int div; + + if (!*pcr) *pcr = eni_dev->tx_bw+reserved; + for (*pre = 3; *pre >= 0; (*pre)--) + if (TS_CLOCK/pre_div[*pre]/64 > -*pcr) break; + if (*pre < 3) (*pre)++; /* else fail later */ + div = pre_div[*pre]*-*pcr; + DPRINTK("max div %d\n",div); + *res = (TS_CLOCK+div-1)/div-1; + } + if (*res < 0) *res = 0; + if (*res > MID_SEG_MAX_RATE) *res = MID_SEG_MAX_RATE; + } + *pcr = TS_CLOCK/pre_div[*pre]/(*res+1); + DPRINTK("out pcr: %d (%d:%d)\n",*pcr,*pre,*res); + return 0; +} + + +static int reserve_or_set_tx(struct atm_vcc *vcc,struct atm_trafprm *txtp, + int set_rsv,int set_shp) +{ + struct eni_dev *eni_dev = ENI_DEV(vcc->dev); + struct eni_vcc *eni_vcc = ENI_VCC(vcc); + struct eni_tx *tx; + unsigned long size; + void __iomem *mem; + int rate,ubr,unlimited,new_tx; + int pre,res,order; + int error; + + rate = atm_pcr_goal(txtp); + ubr = txtp->traffic_class == ATM_UBR; + unlimited = ubr && (!rate || rate <= -ATM_OC3_PCR || + rate >= ATM_OC3_PCR); + if (!unlimited) { + size = txtp->max_sdu*eni_dev->tx_mult/100; + if (size > MID_MAX_BUF_SIZE && txtp->max_sdu <= + MID_MAX_BUF_SIZE) + size = MID_MAX_BUF_SIZE; + } + else { + if (eni_dev->ubr) { + eni_vcc->tx = eni_dev->ubr; + txtp->pcr = ATM_OC3_PCR; + return 0; + } + size = UBR_BUFFER; + 
} + new_tx = !eni_vcc->tx; + mem = NULL; /* for gcc */ + if (!new_tx) tx = eni_vcc->tx; + else { + mem = eni_alloc_mem(eni_dev,&size); + if (!mem) return -ENOBUFS; + tx = alloc_tx(eni_dev,unlimited); + if (!tx) { + eni_free_mem(eni_dev,mem,size); + return -EBUSY; + } + DPRINTK("got chan %d\n",tx->index); + tx->reserved = tx->shaping = 0; + tx->send = mem; + tx->words = size >> 2; + skb_queue_head_init(&tx->backlog); + for (order = 0; size > (1 << (order+10)); order++); + eni_out((order << MID_SIZE_SHIFT) | + ((tx->send-eni_dev->ram) >> (MID_LOC_SKIP+2)), + MID_TX_PLACE(tx->index)); + tx->tx_pos = eni_in(MID_TX_DESCRSTART(tx->index)) & + MID_DESCR_START; + } + error = comp_tx(eni_dev,&rate,tx->reserved,&pre,&res,unlimited); + if (!error && txtp->min_pcr > rate) error = -EINVAL; + if (!error && txtp->max_pcr && txtp->max_pcr != ATM_MAX_PCR && + txtp->max_pcr < rate) error = -EINVAL; + if (!error && !ubr && rate > eni_dev->tx_bw+tx->reserved) + error = -EINVAL; + if (!error && set_rsv && !set_shp && rate < tx->shaping) + error = -EINVAL; + if (!error && !set_rsv && rate > tx->reserved && !ubr) + error = -EINVAL; + if (error) { + if (new_tx) { + tx->send = NULL; + eni_free_mem(eni_dev,mem,size); + } + return error; + } + txtp->pcr = rate; + if (set_rsv && !ubr) { + eni_dev->tx_bw += tx->reserved; + tx->reserved = rate; + eni_dev->tx_bw -= rate; + } + if (set_shp || (unlimited && new_tx)) { + if (unlimited && new_tx) eni_dev->ubr = tx; + tx->prescaler = pre; + tx->resolution = res; + tx->shaping = rate; + } + if (set_shp) eni_vcc->tx = tx; + DPRINTK("rsv %d shp %d\n",tx->reserved,tx->shaping); + return 0; +} + + +static int open_tx_first(struct atm_vcc *vcc) +{ + ENI_VCC(vcc)->tx = NULL; + if (vcc->qos.txtp.traffic_class == ATM_NONE) return 0; + ENI_VCC(vcc)->txing = 0; + return reserve_or_set_tx(vcc,&vcc->qos.txtp,1,1); +} + + +static int open_tx_second(struct atm_vcc *vcc) +{ + return 0; /* nothing to do */ +} + + +static void close_tx(struct atm_vcc *vcc) +{ + 
DECLARE_WAITQUEUE(wait,current); + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + + eni_vcc = ENI_VCC(vcc); + if (!eni_vcc->tx) return; + eni_dev = ENI_DEV(vcc->dev); + /* wait for TX queue to drain */ + DPRINTK("eni_close: waiting for TX ...\n"); + add_wait_queue(&eni_dev->tx_wait,&wait); + set_current_state(TASK_UNINTERRUPTIBLE); + for (;;) { + int txing; + + tasklet_disable(&eni_dev->task); + txing = skb_peek(&eni_vcc->tx->backlog) || eni_vcc->txing; + tasklet_enable(&eni_dev->task); + if (!txing) break; + DPRINTK("%d TX left\n",eni_vcc->txing); + schedule(); + set_current_state(TASK_UNINTERRUPTIBLE); + } + set_current_state(TASK_RUNNING); + remove_wait_queue(&eni_dev->tx_wait,&wait); + if (eni_vcc->tx != eni_dev->ubr) { + /* + * Looping a few times in here is probably far cheaper than + * keeping track of TX completions all the time, so let's poll + * a bit ... + */ + while (eni_in(MID_TX_RDPTR(eni_vcc->tx->index)) != + eni_in(MID_TX_DESCRSTART(eni_vcc->tx->index))) + schedule(); + eni_free_mem(eni_dev,eni_vcc->tx->send,eni_vcc->tx->words << 2); + eni_vcc->tx->send = NULL; + eni_dev->tx_bw += eni_vcc->tx->reserved; + } + eni_vcc->tx = NULL; +} + + +static int start_tx(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + int i; + + eni_dev = ENI_DEV(dev); + eni_dev->lost = 0; + eni_dev->tx_bw = ATM_OC3_PCR; + eni_dev->tx_mult = DEFAULT_TX_MULT; + init_waitqueue_head(&eni_dev->tx_wait); + eni_dev->ubr = NULL; + skb_queue_head_init(&eni_dev->tx_queue); + eni_out(0,MID_DMA_WR_TX); + for (i = 0; i < NR_CHAN; i++) { + eni_dev->tx[i].send = NULL; + eni_dev->tx[i].index = i; + } + return 0; +} + + +/*--------------------------------- common ----------------------------------*/ + + +#if 0 /* may become useful again when tuning things */ + +static void foo(void) +{ +printk(KERN_INFO + "tx_complete=%d,dma_complete=%d,queued=%d,requeued=%d,sub=%d,\n" + "backlogged=%d,rx_enqueued=%d,rx_dequeued=%d,putting=%d,pushed=%d\n", + 
tx_complete,dma_complete,queued,requeued,submitted,backlogged, + rx_enqueued,rx_dequeued,putting,pushed); +if (eni_boards) printk(KERN_INFO "loss: %ld\n",ENI_DEV(eni_boards)->lost); +} + +#endif + + +static void bug_int(struct atm_dev *dev,unsigned long reason) +{ + struct eni_dev *eni_dev; + + DPRINTK(">bug_int\n"); + eni_dev = ENI_DEV(dev); + if (reason & MID_DMA_ERR_ACK) + printk(KERN_CRIT DEV_LABEL "(itf %d): driver error - DMA " + "error\n",dev->number); + if (reason & MID_TX_IDENT_MISM) + printk(KERN_CRIT DEV_LABEL "(itf %d): driver error - ident " + "mismatch\n",dev->number); + if (reason & MID_TX_DMA_OVFL) + printk(KERN_CRIT DEV_LABEL "(itf %d): driver error - DMA " + "overflow\n",dev->number); + EVENT("---dump ends here---\n",0,0); + printk(KERN_NOTICE "---recent events---\n"); + event_dump(); +} + + +static irqreturn_t eni_int(int irq,void *dev_id,struct pt_regs *regs) +{ + struct atm_dev *dev; + struct eni_dev *eni_dev; + u32 reason; + + DPRINTK(">eni_int\n"); + dev = dev_id; + eni_dev = ENI_DEV(dev); + reason = eni_in(MID_ISA); + DPRINTK(DEV_LABEL ": int 0x%lx\n",(unsigned long) reason); + /* + * Must handle these two right now, because reading ISA doesn't clear + * them, so they re-occur and we never make it to the tasklet. Since + * they're rare, we don't mind the occasional invocation of eni_tasklet + * with eni_dev->events == 0. 
+ */ + if (reason & MID_STAT_OVFL) { + EVENT("stat overflow\n",0,0); + eni_dev->lost += eni_in(MID_STAT) & MID_OVFL_TRASH; + } + if (reason & MID_SUNI_INT) { + EVENT("SUNI int\n",0,0); + dev->phy->interrupt(dev); +#if 0 + foo(); +#endif + } + spin_lock(&eni_dev->lock); + eni_dev->events |= reason; + spin_unlock(&eni_dev->lock); + tasklet_schedule(&eni_dev->task); + return IRQ_HANDLED; +} + + +static void eni_tasklet(unsigned long data) +{ + struct atm_dev *dev = (struct atm_dev *) data; + struct eni_dev *eni_dev = ENI_DEV(dev); + unsigned long flags; + u32 events; + + DPRINTK("eni_tasklet (dev %p)\n",dev); + spin_lock_irqsave(&eni_dev->lock,flags); + events = xchg(&eni_dev->events,0); + spin_unlock_irqrestore(&eni_dev->lock,flags); + if (events & MID_RX_DMA_COMPLETE) { + EVENT("INT: RX DMA complete, starting dequeue_rx\n",0,0); + dequeue_rx(dev); + EVENT("dequeue_rx done, starting poll_rx\n",0,0); + poll_rx(dev); + EVENT("poll_rx done\n",0,0); + /* poll_tx ? */ + } + if (events & MID_SERVICE) { + EVENT("INT: service, starting get_service\n",0,0); + get_service(dev); + EVENT("get_service done, starting poll_rx\n",0,0); + poll_rx(dev); + EVENT("poll_rx done\n",0,0); + } + if (events & MID_TX_DMA_COMPLETE) { + EVENT("INT: TX DMA COMPLETE\n",0,0); + dequeue_tx(dev); + } + if (events & MID_TX_COMPLETE) { + EVENT("INT: TX COMPLETE\n",0,0); +tx_complete++; + wake_up(&eni_dev->tx_wait); + /* poll_rx ? 
*/ + } + if (events & (MID_DMA_ERR_ACK | MID_TX_IDENT_MISM | MID_TX_DMA_OVFL)) { + EVENT("bug interrupt\n",0,0); + bug_int(dev,events); + } + poll_tx(dev); +} + + +/*--------------------------------- entries ---------------------------------*/ + + +static const char *media_name[] __devinitdata = { + "MMF", "SMF", "MMF", "03?", /* 0- 3 */ + "UTP", "05?", "06?", "07?", /* 4- 7 */ + "TAXI","09?", "10?", "11?", /* 8-11 */ + "12?", "13?", "14?", "15?", /* 12-15 */ + "MMF", "SMF", "18?", "19?", /* 16-19 */ + "UTP", "21?", "22?", "23?", /* 20-23 */ + "24?", "25?", "26?", "27?", /* 24-27 */ + "28?", "29?", "30?", "31?" /* 28-31 */ +}; + + +#define SET_SEPROM \ + ({ if (!error && !pci_error) { \ + pci_error = pci_write_config_byte(eni_dev->pci_dev,PCI_TONGA_CTRL,tonga); \ + udelay(10); /* 10 usecs */ \ + } }) +#define GET_SEPROM \ + ({ if (!error && !pci_error) { \ + pci_error = pci_read_config_byte(eni_dev->pci_dev,PCI_TONGA_CTRL,&tonga); \ + udelay(10); /* 10 usecs */ \ + } }) + + +static int __devinit get_esi_asic(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + unsigned char tonga; + int error,failed,pci_error; + int address,i,j; + + eni_dev = ENI_DEV(dev); + error = pci_error = 0; + tonga = SEPROM_MAGIC | SEPROM_DATA | SEPROM_CLK; + SET_SEPROM; + for (i = 0; i < ESI_LEN && !error && !pci_error; i++) { + /* start operation */ + tonga |= SEPROM_DATA; + SET_SEPROM; + tonga |= SEPROM_CLK; + SET_SEPROM; + tonga &= ~SEPROM_DATA; + SET_SEPROM; + tonga &= ~SEPROM_CLK; + SET_SEPROM; + /* send address */ + address = ((i+SEPROM_ESI_BASE) << 1)+1; + for (j = 7; j >= 0; j--) { + tonga = (address >> j) & 1 ? 
tonga | SEPROM_DATA : + tonga & ~SEPROM_DATA; + SET_SEPROM; + tonga |= SEPROM_CLK; + SET_SEPROM; + tonga &= ~SEPROM_CLK; + SET_SEPROM; + } + /* get ack */ + tonga |= SEPROM_DATA; + SET_SEPROM; + tonga |= SEPROM_CLK; + SET_SEPROM; + GET_SEPROM; + failed = tonga & SEPROM_DATA; + tonga &= ~SEPROM_CLK; + SET_SEPROM; + tonga |= SEPROM_DATA; + SET_SEPROM; + if (failed) error = -EIO; + else { + dev->esi[i] = 0; + for (j = 7; j >= 0; j--) { + dev->esi[i] <<= 1; + tonga |= SEPROM_DATA; + SET_SEPROM; + tonga |= SEPROM_CLK; + SET_SEPROM; + GET_SEPROM; + if (tonga & SEPROM_DATA) dev->esi[i] |= 1; + tonga &= ~SEPROM_CLK; + SET_SEPROM; + tonga |= SEPROM_DATA; + SET_SEPROM; + } + /* get ack */ + tonga |= SEPROM_DATA; + SET_SEPROM; + tonga |= SEPROM_CLK; + SET_SEPROM; + GET_SEPROM; + if (!(tonga & SEPROM_DATA)) error = -EIO; + tonga &= ~SEPROM_CLK; + SET_SEPROM; + tonga |= SEPROM_DATA; + SET_SEPROM; + } + /* stop operation */ + tonga &= ~SEPROM_DATA; + SET_SEPROM; + tonga |= SEPROM_CLK; + SET_SEPROM; + tonga |= SEPROM_DATA; + SET_SEPROM; + } + if (pci_error) { + printk(KERN_ERR DEV_LABEL "(itf %d): error reading ESI " + "(0x%02x)\n",dev->number,pci_error); + error = -EIO; + } + return error; +} + + +#undef SET_SEPROM +#undef GET_SEPROM + + +static int __devinit get_esi_fpga(struct atm_dev *dev, void __iomem *base) +{ + void __iomem *mac_base; + int i; + + mac_base = base+EPROM_SIZE-sizeof(struct midway_eprom); + for (i = 0; i < ESI_LEN; i++) dev->esi[i] = readb(mac_base+(i^3)); + return 0; +} + + +static int __devinit eni_do_init(struct atm_dev *dev) +{ + struct midway_eprom __iomem *eprom; + struct eni_dev *eni_dev; + struct pci_dev *pci_dev; + unsigned long real_base; + void __iomem *base; + unsigned char revision; + int error,i,last; + + DPRINTK(">eni_init\n"); + dev->ci_range.vpi_bits = 0; + dev->ci_range.vci_bits = NR_VCI_LD; + dev->link_rate = ATM_OC3_PCR; + eni_dev = ENI_DEV(dev); + pci_dev = eni_dev->pci_dev; + real_base = pci_resource_start(pci_dev, 0); + eni_dev->irq = 
pci_dev->irq; + error = pci_read_config_byte(pci_dev,PCI_REVISION_ID,&revision); + if (error) { + printk(KERN_ERR DEV_LABEL "(itf %d): init error 0x%02x\n", + dev->number,error); + return -EINVAL; + } + if ((error = pci_write_config_word(pci_dev,PCI_COMMAND, + PCI_COMMAND_MEMORY | + (eni_dev->asic ? PCI_COMMAND_PARITY | PCI_COMMAND_SERR : 0)))) { + printk(KERN_ERR DEV_LABEL "(itf %d): can't enable memory " + "(0x%02x)\n",dev->number,error); + return -EIO; + } + printk(KERN_NOTICE DEV_LABEL "(itf %d): rev.%d,base=0x%lx,irq=%d,", + dev->number,revision,real_base,eni_dev->irq); + if (!(base = ioremap_nocache(real_base,MAP_MAX_SIZE))) { + printk("\n"); + printk(KERN_ERR DEV_LABEL "(itf %d): can't set up page " + "mapping\n",dev->number); + return -ENOMEM; + } + eni_dev->base_diff = real_base - (unsigned long) base; + /* id may not be present in ASIC Tonga boards - check this @@@ */ + if (!eni_dev->asic) { + eprom = (base+EPROM_SIZE-sizeof(struct midway_eprom)); + if (readl(&eprom->magic) != ENI155_MAGIC) { + printk("\n"); + printk(KERN_ERR DEV_LABEL "(itf %d): bad " + "magic - expected 0x%x, got 0x%x\n",dev->number, + ENI155_MAGIC,(unsigned) readl(&eprom->magic)); + return -EINVAL; + } + } + eni_dev->phy = base+PHY_BASE; + eni_dev->reg = base+REG_BASE; + eni_dev->ram = base+RAM_BASE; + last = MAP_MAX_SIZE-RAM_BASE; + for (i = last-RAM_INCREMENT; i >= 0; i -= RAM_INCREMENT) { + writel(0x55555555,eni_dev->ram+i); + if (readl(eni_dev->ram+i) != 0x55555555) last = i; + else { + writel(0xAAAAAAAA,eni_dev->ram+i); + if (readl(eni_dev->ram+i) != 0xAAAAAAAA) last = i; + else writel(i,eni_dev->ram+i); + } + } + for (i = 0; i < last; i += RAM_INCREMENT) + if (readl(eni_dev->ram+i) != i) break; + eni_dev->mem = i; + memset_io(eni_dev->ram,0,eni_dev->mem); + /* TODO: should shrink allocation now */ + printk("mem=%dkB (",eni_dev->mem >> 10); + /* TODO: check for non-SUNI, check for TAXI ? 
*/ + if (!(eni_in(MID_RES_ID_MCON) & 0x200) != !eni_dev->asic) { + printk(")\n"); + printk(KERN_ERR DEV_LABEL "(itf %d): ERROR - wrong id 0x%x\n", + dev->number,(unsigned) eni_in(MID_RES_ID_MCON)); + return -EINVAL; + } + error = eni_dev->asic ? get_esi_asic(dev) : get_esi_fpga(dev,base); + if (error) return error; + for (i = 0; i < ESI_LEN; i++) + printk("%s%02X",i ? "-" : "",dev->esi[i]); + printk(")\n"); + printk(KERN_NOTICE DEV_LABEL "(itf %d): %s,%s\n",dev->number, + eni_in(MID_RES_ID_MCON) & 0x200 ? "ASIC" : "FPGA", + media_name[eni_in(MID_RES_ID_MCON) & DAUGTHER_ID]); + return suni_init(dev); +} + + +static int __devinit eni_start(struct atm_dev *dev) +{ + struct eni_dev *eni_dev; + + void __iomem *buf; + unsigned long buffer_mem; + int error; + + DPRINTK(">eni_start\n"); + eni_dev = ENI_DEV(dev); + if (request_irq(eni_dev->irq,&eni_int,SA_SHIRQ,DEV_LABEL,dev)) { + printk(KERN_ERR DEV_LABEL "(itf %d): IRQ%d is already in use\n", + dev->number,eni_dev->irq); + return -EAGAIN; + } + /* @@@ should release IRQ on error */ + pci_set_master(eni_dev->pci_dev); + if ((error = pci_write_config_word(eni_dev->pci_dev,PCI_COMMAND, + PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | + (eni_dev->asic ? 
PCI_COMMAND_PARITY | PCI_COMMAND_SERR : 0)))) { + printk(KERN_ERR DEV_LABEL "(itf %d): can't enable memory+" + "master (0x%02x)\n",dev->number,error); + return error; + } + if ((error = pci_write_config_byte(eni_dev->pci_dev,PCI_TONGA_CTRL, + END_SWAP_DMA))) { + printk(KERN_ERR DEV_LABEL "(itf %d): can't set endian swap " + "(0x%02x)\n",dev->number,error); + return error; + } + /* determine addresses of internal tables */ + eni_dev->vci = eni_dev->ram; + eni_dev->rx_dma = eni_dev->ram+NR_VCI*16; + eni_dev->tx_dma = eni_dev->rx_dma+NR_DMA_RX*8; + eni_dev->service = eni_dev->tx_dma+NR_DMA_TX*8; + buf = eni_dev->service+NR_SERVICE*4; + DPRINTK("vci 0x%lx,rx 0x%lx, tx 0x%lx,srv 0x%lx,buf 0x%lx\n", + eni_dev->vci,eni_dev->rx_dma,eni_dev->tx_dma, + eni_dev->service,buf); + spin_lock_init(&eni_dev->lock); + tasklet_init(&eni_dev->task,eni_tasklet,(unsigned long) dev); + eni_dev->events = 0; + /* initialize memory management */ + buffer_mem = eni_dev->mem - (buf - eni_dev->ram); + eni_dev->free_list_size = buffer_mem/MID_MIN_BUF_SIZE/2; + eni_dev->free_list = (struct eni_free *) kmalloc( + sizeof(struct eni_free)*(eni_dev->free_list_size+1),GFP_KERNEL); + if (!eni_dev->free_list) { + printk(KERN_ERR DEV_LABEL "(itf %d): couldn't get free page\n", + dev->number); + return -ENOMEM; + } + eni_dev->free_len = 0; + eni_put_free(eni_dev,buf,buffer_mem); + memset_io(eni_dev->vci,0,16*NR_VCI); /* clear VCI table */ + /* + * byte_addr free (k) + * 0x00000000 512 VCI table + * 0x00004000 496 RX DMA + * 0x00005000 492 TX DMA + * 0x00006000 488 service list + * 0x00007000 484 buffers + * 0x00080000 0 end (512kB) + */ + eni_out(0xffffffff,MID_IE); + error = start_tx(dev); + if (error) return error; + error = start_rx(dev); + if (error) return error; + error = dev->phy->start(dev); + if (error) return error; + eni_out(eni_in(MID_MC_S) | (1 << MID_INT_SEL_SHIFT) | + MID_TX_LOCK_MODE | MID_DMA_ENABLE | MID_TX_ENABLE | MID_RX_ENABLE, + MID_MC_S); + /* Tonga uses SBus INTReq1 */ + (void) 
eni_in(MID_ISA); /* clear Midway interrupts */ + return 0; +} + + +static void eni_close(struct atm_vcc *vcc) +{ + DPRINTK(">eni_close\n"); + if (!ENI_VCC(vcc)) return; + clear_bit(ATM_VF_READY,&vcc->flags); + close_rx(vcc); + close_tx(vcc); + DPRINTK("eni_close: done waiting\n"); + /* deallocate memory */ + kfree(ENI_VCC(vcc)); + vcc->dev_data = NULL; + clear_bit(ATM_VF_ADDR,&vcc->flags); + /*foo();*/ +} + + +static int eni_open(struct atm_vcc *vcc) +{ + struct eni_dev *eni_dev; + struct eni_vcc *eni_vcc; + int error; + short vpi = vcc->vpi; + int vci = vcc->vci; + + DPRINTK(">eni_open\n"); + EVENT("eni_open\n",0,0); + if (!test_bit(ATM_VF_PARTIAL,&vcc->flags)) + vcc->dev_data = NULL; + eni_dev = ENI_DEV(vcc->dev); + if (vpi != ATM_VPI_UNSPEC && vci != ATM_VCI_UNSPEC) + set_bit(ATM_VF_ADDR,&vcc->flags); + if (vcc->qos.aal != ATM_AAL0 && vcc->qos.aal != ATM_AAL5) + return -EINVAL; + DPRINTK(DEV_LABEL "(itf %d): open %d.%d\n",vcc->dev->number,vcc->vpi, + vcc->vci); + if (!test_bit(ATM_VF_PARTIAL,&vcc->flags)) { + eni_vcc = kmalloc(sizeof(struct eni_vcc),GFP_KERNEL); + if (!eni_vcc) return -ENOMEM; + vcc->dev_data = eni_vcc; + eni_vcc->tx = NULL; /* for eni_close after open_rx */ + if ((error = open_rx_first(vcc))) { + eni_close(vcc); + return error; + } + if ((error = open_tx_first(vcc))) { + eni_close(vcc); + return error; + } + } + if (vpi == ATM_VPI_UNSPEC || vci == ATM_VCI_UNSPEC) return 0; + if ((error = open_rx_second(vcc))) { + eni_close(vcc); + return error; + } + if ((error = open_tx_second(vcc))) { + eni_close(vcc); + return error; + } + set_bit(ATM_VF_READY,&vcc->flags); + /* should power down SUNI while !ref_count @@@ */ + return 0; +} + + +static int eni_change_qos(struct atm_vcc *vcc,struct atm_qos *qos,int flgs) +{ + struct eni_dev *eni_dev = ENI_DEV(vcc->dev); + struct eni_tx *tx = ENI_VCC(vcc)->tx; + struct sk_buff *skb; + int error,rate,rsv,shp; + + if (qos->txtp.traffic_class == ATM_NONE) return 0; + if (tx == eni_dev->ubr) return -EBADFD; + rate 
= atm_pcr_goal(&qos->txtp); + if (rate < 0) rate = -rate; + rsv = shp = 0; + if ((flgs & ATM_MF_DEC_RSV) && rate && rate < tx->reserved) rsv = 1; + if ((flgs & ATM_MF_INC_RSV) && (!rate || rate > tx->reserved)) rsv = 1; + if ((flgs & ATM_MF_DEC_SHP) && rate && rate < tx->shaping) shp = 1; + if ((flgs & ATM_MF_INC_SHP) && (!rate || rate > tx->shaping)) shp = 1; + if (!rsv && !shp) return 0; + error = reserve_or_set_tx(vcc,&qos->txtp,rsv,shp); + if (error) return error; + if (shp && !(flgs & ATM_MF_IMMED)) return 0; + /* + * Walk through the send buffer and patch the rate information in all + * segmentation buffer descriptors of this VCC. + */ + tasklet_disable(&eni_dev->task); + skb_queue_walk(&eni_dev->tx_queue, skb) { + void __iomem *dsc; + + if (ATM_SKB(skb)->vcc != vcc) continue; + dsc = tx->send+ENI_PRV_POS(skb)*4; + writel((readl(dsc) & ~(MID_SEG_RATE | MID_SEG_PR)) | + (tx->prescaler << MID_SEG_PR_SHIFT) | + (tx->resolution << MID_SEG_RATE_SHIFT), dsc); + } + tasklet_enable(&eni_dev->task); + return 0; +} + + +static int eni_ioctl(struct atm_dev *dev,unsigned int cmd,void __user *arg) +{ + struct eni_dev *eni_dev = ENI_DEV(dev); + + if (cmd == ENI_MEMDUMP) { + if (!capable(CAP_NET_ADMIN)) return -EPERM; + printk(KERN_WARNING "Please use /proc/atm/" DEV_LABEL ":%d " + "instead of obsolete ioctl ENI_MEMDUMP\n",dev->number); + dump(dev); + return 0; + } + if (cmd == ENI_SETMULT) { + struct eni_multipliers mult; + + if (!capable(CAP_NET_ADMIN)) return -EPERM; + if (copy_from_user(&mult, arg, + sizeof(struct eni_multipliers))) + return -EFAULT; + if ((mult.tx && mult.tx <= 100) || (mult.rx && mult.rx <= 100) || + mult.tx > 65536 || mult.rx > 65536) + return -EINVAL; + if (mult.tx) eni_dev->tx_mult = mult.tx; + if (mult.rx) eni_dev->rx_mult = mult.rx; + return 0; + } + if (cmd == ATM_SETCIRANGE) { + struct atm_cirange ci; + + if (copy_from_user(&ci, arg,sizeof(struct atm_cirange))) + return -EFAULT; + if ((ci.vpi_bits == 0 || ci.vpi_bits == ATM_CI_MAX) && + 
(ci.vci_bits == NR_VCI_LD || ci.vci_bits == ATM_CI_MAX)) + return 0; + return -EINVAL; + } + if (!dev->phy->ioctl) return -ENOIOCTLCMD; + return dev->phy->ioctl(dev,cmd,arg); +} + + +static int eni_getsockopt(struct atm_vcc *vcc,int level,int optname, + void __user *optval,int optlen) +{ + return -EINVAL; +} + + +static int eni_setsockopt(struct atm_vcc *vcc,int level,int optname, + void __user *optval,int optlen) +{ + return -EINVAL; +} + + +static int eni_send(struct atm_vcc *vcc,struct sk_buff *skb) +{ + enum enq_res res; + + DPRINTK(">eni_send\n"); + if (!ENI_VCC(vcc)->tx) { + if (vcc->pop) vcc->pop(vcc,skb); + else dev_kfree_skb(skb); + return -EINVAL; + } + if (!skb) { + printk(KERN_CRIT "!skb in eni_send ?\n"); + if (vcc->pop) vcc->pop(vcc,skb); + return -EINVAL; + } + if (vcc->qos.aal == ATM_AAL0) { + if (skb->len != ATM_CELL_SIZE-1) { + if (vcc->pop) vcc->pop(vcc,skb); + else dev_kfree_skb(skb); + return -EINVAL; + } + *(u32 *) skb->data = htonl(*(u32 *) skb->data); + } +submitted++; + ATM_SKB(skb)->vcc = vcc; + tasklet_disable(&ENI_DEV(vcc->dev)->task); + res = do_tx(skb); + tasklet_enable(&ENI_DEV(vcc->dev)->task); + if (res == enq_ok) return 0; + skb_queue_tail(&ENI_VCC(vcc)->tx->backlog,skb); +backlogged++; + tasklet_schedule(&ENI_DEV(vcc->dev)->task); + return 0; +} + +static void eni_phy_put(struct atm_dev *dev,unsigned char value, + unsigned long addr) +{ + writel(value,ENI_DEV(dev)->phy+addr*4); +} + + + +static unsigned char eni_phy_get(struct atm_dev *dev,unsigned long addr) +{ + return readl(ENI_DEV(dev)->phy+addr*4); +} + + +static int eni_proc_read(struct atm_dev *dev,loff_t *pos,char *page) +{ + struct hlist_node *node; + struct sock *s; + static const char *signal[] = { "LOST","unknown","okay" }; + struct eni_dev *eni_dev = ENI_DEV(dev); + struct atm_vcc *vcc; + int left,i; + + left = *pos; + if (!left) + return sprintf(page,DEV_LABEL "(itf %d) signal %s, %dkB, " + "%d cps remaining\n",dev->number,signal[(int) dev->signal], + eni_dev->mem >> 
10,eni_dev->tx_bw); + if (!--left) + return sprintf(page,"%4sBursts: TX" +#if !defined(CONFIG_ATM_ENI_BURST_TX_16W) && \ + !defined(CONFIG_ATM_ENI_BURST_TX_8W) && \ + !defined(CONFIG_ATM_ENI_BURST_TX_4W) && \ + !defined(CONFIG_ATM_ENI_BURST_TX_2W) + " none" +#endif +#ifdef CONFIG_ATM_ENI_BURST_TX_16W + " 16W" +#endif +#ifdef CONFIG_ATM_ENI_BURST_TX_8W + " 8W" +#endif +#ifdef CONFIG_ATM_ENI_BURST_TX_4W + " 4W" +#endif +#ifdef CONFIG_ATM_ENI_BURST_TX_2W + " 2W" +#endif + ", RX" +#if !defined(CONFIG_ATM_ENI_BURST_RX_16W) && \ + !defined(CONFIG_ATM_ENI_BURST_RX_8W) && \ + !defined(CONFIG_ATM_ENI_BURST_RX_4W) && \ + !defined(CONFIG_ATM_ENI_BURST_RX_2W) + " none" +#endif +#ifdef CONFIG_ATM_ENI_BURST_RX_16W + " 16W" +#endif +#ifdef CONFIG_ATM_ENI_BURST_RX_8W + " 8W" +#endif +#ifdef CONFIG_ATM_ENI_BURST_RX_4W + " 4W" +#endif +#ifdef CONFIG_ATM_ENI_BURST_RX_2W + " 2W" +#endif +#ifndef CONFIG_ATM_ENI_TUNE_BURST + " (default)" +#endif + "\n",""); + if (!--left) + return sprintf(page,"%4sBuffer multipliers: tx %d%%, rx %d%%\n", + "",eni_dev->tx_mult,eni_dev->rx_mult); + for (i = 0; i < NR_CHAN; i++) { + struct eni_tx *tx = eni_dev->tx+i; + + if (!tx->send) continue; + if (!--left) { + return sprintf(page,"tx[%d]: 0x%ld-0x%ld " + "(%6ld bytes), rsv %d cps, shp %d cps%s\n",i, + (unsigned long) (tx->send - eni_dev->ram), + tx->send-eni_dev->ram+tx->words*4-1,tx->words*4, + tx->reserved,tx->shaping, + tx == eni_dev->ubr ? 
" (UBR)" : ""); + } + if (--left) continue; + return sprintf(page,"%10sbacklog %u packets\n","", + skb_queue_len(&tx->backlog)); + } + read_lock(&vcc_sklist_lock); + for(i = 0; i < VCC_HTABLE_SIZE; ++i) { + struct hlist_head *head = &vcc_hash[i]; + + sk_for_each(s, node, head) { + struct eni_vcc *eni_vcc; + int length; + + vcc = atm_sk(s); + if (vcc->dev != dev) + continue; + eni_vcc = ENI_VCC(vcc); + if (--left) continue; + length = sprintf(page,"vcc %4d: ",vcc->vci); + if (eni_vcc->rx) { + length += sprintf(page+length,"0x%ld-0x%ld " + "(%6ld bytes)", + (unsigned long) (eni_vcc->recv - eni_dev->ram), + eni_vcc->recv-eni_dev->ram+eni_vcc->words*4-1, + eni_vcc->words*4); + if (eni_vcc->tx) length += sprintf(page+length,", "); + } + if (eni_vcc->tx) + length += sprintf(page+length,"tx[%d], txing %d bytes", + eni_vcc->tx->index,eni_vcc->txing); + page[length] = '\n'; + read_unlock(&vcc_sklist_lock); + return length+1; + } + } + read_unlock(&vcc_sklist_lock); + for (i = 0; i < eni_dev->free_len; i++) { + struct eni_free *fe = eni_dev->free_list+i; + unsigned long offset; + + if (--left) continue; + offset = (unsigned long) eni_dev->ram+eni_dev->base_diff; + return sprintf(page,"free %p-%p (%6d bytes)\n", + fe->start-offset,fe->start-offset+(1 << fe->order)-1, + 1 << fe->order); + } + return 0; +} + + +static const struct atmdev_ops ops = { + .open = eni_open, + .close = eni_close, + .ioctl = eni_ioctl, + .getsockopt = eni_getsockopt, + .setsockopt = eni_setsockopt, + .send = eni_send, + .phy_put = eni_phy_put, + .phy_get = eni_phy_get, + .change_qos = eni_change_qos, + .proc_read = eni_proc_read +}; + + +static int __devinit eni_init_one(struct pci_dev *pci_dev, + const struct pci_device_id *ent) +{ + struct atm_dev *dev; + struct eni_dev *eni_dev; + int error = -ENOMEM; + + DPRINTK("eni_init_one\n"); + + if (pci_enable_device(pci_dev)) { + error = -EIO; + goto out0; + } + + eni_dev = (struct eni_dev *) kmalloc(sizeof(struct eni_dev),GFP_KERNEL); + if (!eni_dev) goto 
out0; + if (!cpu_zeroes) { + cpu_zeroes = pci_alloc_consistent(pci_dev,ENI_ZEROES_SIZE, + &zeroes); + if (!cpu_zeroes) goto out1; + } + dev = atm_dev_register(DEV_LABEL,&ops,-1,NULL); + if (!dev) goto out2; + pci_set_drvdata(pci_dev, dev); + eni_dev->pci_dev = pci_dev; + dev->dev_data = eni_dev; + eni_dev->asic = ent->driver_data; + error = eni_do_init(dev); + if (error) goto out3; + error = eni_start(dev); + if (error) goto out3; + eni_dev->more = eni_boards; + eni_boards = dev; + return 0; +out3: + atm_dev_deregister(dev); +out2: + pci_free_consistent(eni_dev->pci_dev,ENI_ZEROES_SIZE,cpu_zeroes,zeroes); + cpu_zeroes = NULL; +out1: + kfree(eni_dev); +out0: + return error; +} + + +static struct pci_device_id eni_pci_tbl[] = { + { PCI_VENDOR_ID_EF, PCI_DEVICE_ID_EF_ATM_FPGA, PCI_ANY_ID, PCI_ANY_ID, + 0, 0, 0 /* FPGA */ }, + { PCI_VENDOR_ID_EF, PCI_DEVICE_ID_EF_ATM_ASIC, PCI_ANY_ID, PCI_ANY_ID, + 0, 0, 1 /* ASIC */ }, + { 0, } +}; +MODULE_DEVICE_TABLE(pci,eni_pci_tbl); + + +static void __devexit eni_remove_one(struct pci_dev *pci_dev) +{ + /* grrr */ +} + + +static struct pci_driver eni_driver = { + .name = DEV_LABEL, + .id_table = eni_pci_tbl, + .probe = eni_init_one, + .remove = __devexit_p(eni_remove_one), +}; + + +static int __init eni_init(void) +{ + struct sk_buff *skb; /* dummy for sizeof */ + + if (sizeof(skb->cb) < sizeof(struct eni_skb_prv)) { + printk(KERN_ERR "eni_detect: skb->cb is too small (%Zd < %Zd)\n", + sizeof(skb->cb),sizeof(struct eni_skb_prv)); + return -EIO; + } + return pci_register_driver(&eni_driver); +} + + +module_init(eni_init); +/* @@@ since exit routine not defined, this module can not be unloaded */ + +MODULE_LICENSE("GPL"); diff --git a/drivers/atm/eni.h b/drivers/atm/eni.h new file mode 100644 index 000000000000..385090c2a580 --- /dev/null +++ b/drivers/atm/eni.h @@ -0,0 +1,130 @@ +/* drivers/atm/eni.h - Efficient Networks ENI155P device driver declarations */ + +/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */ + + 
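The `sizeof(skb->cb)` check that `eni_init` performs before registering the driver guards the idiom the `ENI_PRV_*` macros (declared further down in eni.h) depend on: per-packet driver state is overlaid on the fixed `skb->cb` scratch area, so the private struct must be proven to fit at init time. A minimal user-space sketch of the idiom — `CB_SIZE`, `fake_skb`, `prv_data`, `PRV` and `prv_fits` are illustrative stand-ins, not the kernel's names:

```c
#include <assert.h>
#include <stddef.h>

#define CB_SIZE 48                 /* stand-in for sizeof(skb->cb) */

struct fake_skb {                  /* stand-in for struct sk_buff */
	unsigned char cb[CB_SIZE]; /* opaque per-packet scratch area */
};

struct prv_data {                  /* driver-private view of cb[], cf. struct eni_skb_prv */
	unsigned long pos;         /* cf. ENI_PRV_POS() */
	int size;                  /* cf. ENI_PRV_SIZE() */
};

/* accessor in the style of the ENI_PRV_* macros */
#define PRV(skb) ((struct prv_data *) (skb)->cb)

/* the same guard eni_init() performs: refuse to load if the
   private struct does not fit in the scratch area */
static int prv_fits(void)
{
	return sizeof(struct prv_data) <= CB_SIZE;
}
```

Because the fit is verified once up front, every later `PRV()` access can cast `cb` unconditionally, which is exactly how `ENI_PRV_POS`/`ENI_PRV_SIZE`/`ENI_PRV_PADDR` are used throughout the driver.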
+#ifndef DRIVER_ATM_ENI_H +#define DRIVER_ATM_ENI_H + +#include <linux/atm.h> +#include <linux/atmdev.h> +#include <linux/sonet.h> +#include <linux/skbuff.h> +#include <linux/time.h> +#include <linux/pci.h> +#include <linux/spinlock.h> +#include <asm/atomic.h> + +#include "midway.h" + + +#define KERNEL_OFFSET 0xC0000000 /* kernel 0x0 is at phys 0xC0000000 */ +#define DEV_LABEL "eni" + +#define UBR_BUFFER (128*1024) /* UBR buffer size */ + +#define RX_DMA_BUF 8 /* burst and skip a few things */ +#define TX_DMA_BUF 100 /* should be enough for 64 kB */ + +#define DEFAULT_RX_MULT 300 /* max_sdu*3 */ +#define DEFAULT_TX_MULT 300 /* max_sdu*3 */ + +#define ENI_ZEROES_SIZE 4 /* need that many DMA-able zero bytes */ + + +struct eni_free { + void __iomem *start; /* counting in bytes */ + int order; +}; + +struct eni_tx { + void __iomem *send; /* base, 0 if unused */ + int prescaler; /* shaping prescaler */ + int resolution; /* shaping divider */ + unsigned long tx_pos; /* current TX write position */ + unsigned long words; /* size of TX queue */ + int index; /* TX channel number */ + int reserved; /* reserved peak cell rate */ + int shaping; /* shaped peak cell rate */ + struct sk_buff_head backlog; /* queue of waiting TX buffers */ +}; + +struct eni_vcc { + int (*rx)(struct atm_vcc *vcc); /* RX function, NULL if none */ + void __iomem *recv; /* receive buffer */ + unsigned long words; /* its size in words */ + unsigned long descr; /* next descriptor (RX) */ + unsigned long rx_pos; /* current RX descriptor pos */ + struct eni_tx *tx; /* TXer, NULL if none */ + int rxing; /* number of pending PDUs */ + int servicing; /* number of waiting VCs (0 or 1) */ + int txing; /* number of pending TX bytes */ + struct timeval timestamp; /* for RX timing */ + struct atm_vcc *next; /* next pending RX */ + struct sk_buff *last; /* last PDU being DMAed (used to carry + discard information) */ +}; + +struct eni_dev { + /*-------------------------------- spinlock */ + spinlock_t lock; /* 
sync with interrupt */ + struct tasklet_struct task; /* tasklet for interrupt work */ + u32 events; /* pending events */ + /*-------------------------------- base pointers into Midway address + space */ + void __iomem *phy; /* PHY interface chip registers */ + void __iomem *reg; /* register base */ + void __iomem *ram; /* RAM base */ + void __iomem *vci; /* VCI table */ + void __iomem *rx_dma; /* RX DMA queue */ + void __iomem *tx_dma; /* TX DMA queue */ + void __iomem *service; /* service list */ + /*-------------------------------- TX part */ + struct eni_tx tx[NR_CHAN]; /* TX channels */ + struct eni_tx *ubr; /* UBR channel */ + struct sk_buff_head tx_queue; /* PDUs currently being TX DMAed*/ + wait_queue_head_t tx_wait; /* for close */ + int tx_bw; /* remaining bandwidth */ + u32 dma[TX_DMA_BUF*2]; /* DMA request scratch area */ + int tx_mult; /* buffer size multiplier (percent) */ + /*-------------------------------- RX part */ + u32 serv_read; /* host service read index */ + struct atm_vcc *fast,*last_fast;/* queues of VCCs with pending PDUs */ + struct atm_vcc *slow,*last_slow; + struct atm_vcc **rx_map; /* for fast lookups */ + struct sk_buff_head rx_queue; /* PDUs currently being RX-DMAed */ + wait_queue_head_t rx_wait; /* for close */ + int rx_mult; /* buffer size multiplier (percent) */ + /*-------------------------------- statistics */ + unsigned long lost; /* number of lost cells (RX) */ + /*-------------------------------- memory management */ + unsigned long base_diff; /* virtual-real base address */ + int free_len; /* free list length */ + struct eni_free *free_list; /* free list */ + int free_list_size; /* maximum size of free list */ + /*-------------------------------- ENI links */ + struct atm_dev *more; /* other ENI devices */ + /*-------------------------------- general information */ + int mem; /* RAM on board (in bytes) */ + int asic; /* PCI interface type, 0 for FPGA */ + unsigned int irq; /* IRQ */ + struct pci_dev *pci_dev; /* PCI stuff 
*/ +}; + + +#define ENI_DEV(d) ((struct eni_dev *) (d)->dev_data) +#define ENI_VCC(d) ((struct eni_vcc *) (d)->dev_data) + + +struct eni_skb_prv { + struct atm_skb_data _; /* reserved */ + unsigned long pos; /* position of next descriptor */ + int size; /* PDU size in reassembly buffer */ + dma_addr_t paddr; /* DMA handle */ +}; + +#define ENI_PRV_SIZE(skb) (((struct eni_skb_prv *) (skb)->cb)->size) +#define ENI_PRV_POS(skb) (((struct eni_skb_prv *) (skb)->cb)->pos) +#define ENI_PRV_PADDR(skb) (((struct eni_skb_prv *) (skb)->cb)->paddr) + +#endif diff --git a/drivers/atm/firestream.c b/drivers/atm/firestream.c new file mode 100644 index 000000000000..101f0cc33d10 --- /dev/null +++ b/drivers/atm/firestream.c @@ -0,0 +1,2053 @@ + +/* drivers/atm/firestream.c - FireStream 155 (MB86697) and + * FireStream 50 (MB86695) device driver + */ + +/* Written & (C) 2000 by R.E.Wolff@BitWizard.nl + * Copied snippets from zatm.c by Werner Almesberger, EPFL LRC/ICA + * and ambassador.c Copyright (C) 1995-1999 Madge Networks Ltd + */ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + + The GNU GPL is contained in /usr/doc/copyright/GPL on a Debian + system and in the file COPYING in the Linux kernel source. 
+*/ + + +#include <linux/module.h> +#include <linux/sched.h> +#include <linux/kernel.h> +#include <linux/mm.h> +#include <linux/pci.h> +#include <linux/errno.h> +#include <linux/atm.h> +#include <linux/atmdev.h> +#include <linux/sonet.h> +#include <linux/skbuff.h> +#include <linux/netdevice.h> +#include <linux/delay.h> +#include <linux/ioport.h> /* for request_region */ +#include <linux/uio.h> +#include <linux/init.h> +#include <linux/capability.h> +#include <linux/bitops.h> +#include <asm/byteorder.h> +#include <asm/system.h> +#include <asm/string.h> +#include <asm/io.h> +#include <asm/atomic.h> +#include <asm/uaccess.h> +#include <linux/wait.h> + +#include "firestream.h" + +static int loopback = 0; +static int num=0x5a; + +/* According to measurements (but they look suspicious to me!) done in + * '97, 37% of the packets are one cell in size. So it pays to have + * buffers allocated at that size. A large jump in percentage of + * packets occurs at packets around 536 bytes in length. So it also + * pays to have those pre-allocated. Unfortunately, we can't fully + * take advantage of this as the majority of the packets are likely to + * be TCP/IP (which is obviously where the measurement comes from). There the + * link would be opened with say a 1500 byte MTU, and we can't handle + * smaller buffers more efficiently than the larger ones. -- REW + */ + +/* Due to the way Linux memory management works, specifying "576" as + * an allocation size here isn't going to help. They are allocated + * from 1024-byte regions anyway. With the size of the sk_buffs (quite + * large), it doesn't pay to allocate the smallest size (64) -- REW */ + +/* This is all guesswork. Hard numbers to back this up or disprove this + * are appreciated. -- REW */ + +/* The last entry should be about 64k. However, the "buffer size" is + * passed to the chip in a 16 bit field. I don't know how "65536" + * would be interpreted. 
-- REW */ + +#define NP FS_NR_FREE_POOLS +static int rx_buf_sizes[NP] = {128, 256, 512, 1024, 2048, 4096, 16384, 65520}; +/* log2: 7 8 9 10 11 12 14 16 */ + +#if 0 +static int rx_pool_sizes[NP] = {1024, 1024, 512, 256, 128, 64, 32, 32}; +#else +/* debug */ +static int rx_pool_sizes[NP] = {128, 128, 128, 64, 64, 64, 32, 32}; +#endif +/* log2: 10 10 9 8 7 6 5 5 */ +/* sumlog2: 17 18 18 18 18 18 19 21 */ +/* mem allocated: 128k 256k 256k 256k 256k 256k 512k 2M */ +/* tot mem: almost 4M */ + +/* NP is shorter, so that it fits on a single line. */ +#undef NP + + +/* Small hardware gotcha: + + The FS50 CAM (VP/VC match registers) always take the lowest channel + number that matches. This is not a problem. + + However, they also ignore whether the channel is enabled or + not. This means that if you allocate channel 0 to 1.2 and then + channel 1 to 0.0, then disabling channel 0 and writing 0 to the + match channel for channel 0 will "steal" the traffic from channel + 1, even if you correctly disable channel 0. + + Workaround: + + - When disabling channels, write an invalid VP/VC value to the + match register. (We use 0xffffffff, which in the worst case + matches VP/VC = <maxVP>/<maxVC>, but I expect it not to match + anything as some "when not in use, program to 0" bits are now + programmed to 1...) + + - Don't initialize the match registers to 0, as 0.0 is a valid + channel. +*/ + + +/* Optimization hints and tips. + + The FireStream chips are very capable of reducing the amount of + "interrupt-traffic" for the CPU. This driver requests an interrupt on EVERY + action. You could try to minimize this a bit. + + Besides that, the userspace->kernel copy and the PCI bus are the + performance limiting issues for this driver. + + You could queue up a bunch of outgoing packets without telling the + FireStream. I'm not sure that's going to win you much though. The + Linux layer won't tell us in advance when it's not going to give us + any more packets in a while. 
So this is tricky to implement right without + introducing extra delays. + + -- REW + */ + + + + +/* The strings that define what the RX queue entry is all about. */ +/* Fujitsu: Please tell me which ones can have a pointer to a + freepool descriptor! */ +static char *res_strings[] = { + "RX OK: streaming not EOP", + "RX OK: streaming EOP", + "RX OK: Single buffer packet", + "RX OK: packet mode", + "RX OK: F4 OAM (end to end)", + "RX OK: F4 OAM (Segment)", + "RX OK: F5 OAM (end to end)", + "RX OK: F5 OAM (Segment)", + "RX OK: RM cell", + "RX OK: TRANSP cell", + "RX OK: TRANSPC cell", + "Unmatched cell", + "reserved 12", + "reserved 13", + "reserved 14", + "Unrecognized cell", + "reserved 16", + "reassembly abort: AAL5 abort", + "packet purged", + "packet ageing timeout", + "channel ageing timeout", + "calculated length error", + "programmed length limit error", + "aal5 crc32 error", + "oam transp or transpc crc10 error", + "reserved 25", + "reserved 26", + "reserved 27", + "reserved 28", + "reserved 29", + "reserved 30", + "reassembly abort: no buffers", + "receive buffer overflow", + "change in GFC", + "receive buffer full", + "low priority discard - no receive descriptor", + "low priority discard - missing end of packet", + "reserved 37", + "reserved 38", + "reserved 39", + "reserved 40", + "reserved 41", + "reserved 42", + "reserved 43", + "reserved 44", + "reserved 45", + "reserved 46", + "reserved 47", + "reserved 48", + "reserved 49", + "reserved 50", + "reserved 51", + "reserved 52", + "reserved 53", + "reserved 54", + "reserved 55", + "reserved 56", + "reserved 57", + "reserved 58", + "reserved 59", + "reserved 60", + "reserved 61", + "reserved 62", + "reserved 63", +}; + +static char *irq_bitname[] = { + "LPCO", + "DPCO", + "RBRQ0_W", + "RBRQ1_W", + "RBRQ2_W", + "RBRQ3_W", + "RBRQ0_NF", + "RBRQ1_NF", + "RBRQ2_NF", + "RBRQ3_NF", + "BFP_SC", + "INIT", + "INIT_ERR", + "USCEO", + "UPEC0", + "VPFCO", + "CRCCO", + "HECO", + "TBRQ_W", + "TBRQ_NF", + "CTPQ_E", + "GFC_C0", + "PCI_FTL", + "CSQ_W", + "CSQ_NF", + "EXT_INT", + "RXDMA_S" 
+}; + + +#define PHY_EOF -1 +#define PHY_CLEARALL -2 + +struct reginit_item { + int reg, val; +}; + + +static struct reginit_item PHY_NTC_INIT[] __devinitdata = { + { PHY_CLEARALL, 0x40 }, + { 0x12, 0x0001 }, + { 0x13, 0x7605 }, + { 0x1A, 0x0001 }, + { 0x1B, 0x0005 }, + { 0x38, 0x0003 }, + { 0x39, 0x0006 }, /* changed here to make loopback */ + { 0x01, 0x5262 }, + { 0x15, 0x0213 }, + { 0x00, 0x0003 }, + { PHY_EOF, 0}, /* -1 signals end of list */ +}; + + +/* Safety feature: If the card interrupts more than this number of times + in a jiffy (1/100th of a second), then we just disable the interrupt and + print a message. This prevents the system from hanging. + + 150000 packets per second is close to the limit a PC is going to have + anyway. We therefore have to disable this for production. -- REW */ +#undef IRQ_RATE_LIMIT // 100 + +/* Interrupts work now. Unlike serial cards, ATM cards don't work all + that great without interrupts. -- REW */ +#undef FS_POLL_FREQ // 100 + +/* + This driver can spew a whole lot of debugging output at you. If you + need maximum performance, you should disable the DEBUG define. To + aid in debugging in the field, I'm leaving the compile-time debug + features enabled, and disable them "runtime". That allows me to + instruct people with problems to enable debugging without requiring + them to recompile... -- REW +*/ +#define DEBUG + +#ifdef DEBUG +#define fs_dprintk(f, str...) if (fs_debug & f) printk (str) +#else +#define fs_dprintk(f, str...) /* nothing */ +#endif + + +static int fs_keystream = 0; + +#ifdef DEBUG +/* I didn't forget to set this to zero before shipping. Hit me with a stick + if you get this with the debug default not set to zero again. 
-- REW */ +static int fs_debug = 0; +#else +#define fs_debug 0 +#endif + +#ifdef MODULE +#ifdef DEBUG +module_param(fs_debug, int, 0644); +#endif +module_param(loopback, int, 0); +module_param(num, int, 0); +module_param(fs_keystream, int, 0); +/* XXX Add rx_buf_sizes, and rx_pool_sizes As per request Amar. -- REW */ +#endif + + +#define FS_DEBUG_FLOW 0x00000001 +#define FS_DEBUG_OPEN 0x00000002 +#define FS_DEBUG_QUEUE 0x00000004 +#define FS_DEBUG_IRQ 0x00000008 +#define FS_DEBUG_INIT 0x00000010 +#define FS_DEBUG_SEND 0x00000020 +#define FS_DEBUG_PHY 0x00000040 +#define FS_DEBUG_CLEANUP 0x00000080 +#define FS_DEBUG_QOS 0x00000100 +#define FS_DEBUG_TXQ 0x00000200 +#define FS_DEBUG_ALLOC 0x00000400 +#define FS_DEBUG_TXMEM 0x00000800 +#define FS_DEBUG_QSIZE 0x00001000 + + +#define func_enter() fs_dprintk (FS_DEBUG_FLOW, "fs: enter %s\n", __FUNCTION__) +#define func_exit() fs_dprintk (FS_DEBUG_FLOW, "fs: exit %s\n", __FUNCTION__) + + +static struct fs_dev *fs_boards = NULL; + +#ifdef DEBUG + +static void my_hd (void *addr, int len) +{ + int j, ch; + unsigned char *ptr = addr; + + while (len > 0) { + printk ("%p ", ptr); + for (j=0;j < ((len < 16)?len:16);j++) { + printk ("%02x %s", ptr[j], (j==7)?" ":""); + } + for ( ;j < 16;j++) { + printk (" %s", (j==7)?" ":""); + } + for (j=0;j < ((len < 16)?len:16);j++) { + ch = ptr[j]; + printk ("%c", (ch < 0x20)?'.':((ch > 0x7f)?'.':ch)); + } + printk ("\n"); + ptr += 16; + len -= 16; + } +} +#else /* DEBUG */ +static void my_hd (void *addr, int len){} +#endif /* DEBUG */ + +/********** free an skb (as per ATM device driver documentation) **********/ + +/* Hmm. If this is ATM specific, why isn't there an ATM routine for this? + * I copied it over from the ambassador driver. 
-- REW */ + +static inline void fs_kfree_skb (struct sk_buff * skb) +{ + if (ATM_SKB(skb)->vcc->pop) + ATM_SKB(skb)->vcc->pop (ATM_SKB(skb)->vcc, skb); + else + dev_kfree_skb_any (skb); +} + + + + +/* It seems the ATM forum recommends this horribly complicated 16bit + * floating point format. Turns out the Ambassador uses the exact same + * encoding. I just copied it over. If Mitch agrees, I'll move it over + * to the atm_misc file or something like that. (and remove it from + * here and the ambassador driver) -- REW + */ + +/* The good thing about this format is that it is monotonic. So, + a conversion routine need not be very complicated. To be able to + round "nearest" we need to take along a few extra bits. Let's + put these after 16 bits, so that we can just return the top 16 + bits of the 32bit number as the result: + + int mr (unsigned int rate, int r) + { + int e = 16+9; + static int round[4]={0, 0, 0xffff, 0x8000}; + if (!rate) return 0; + while (rate & 0xfc000000) { + rate >>= 1; + e++; + } + while (! (rate & 0xfe000000)) { + rate <<= 1; + e--; + } + +// Now the mantissa is in positions bit 16-25. Except for the "hidden 1" that's in bit 26. + rate &= ~0x02000000; +// Next add in the exponent + rate |= e << (16+9); +// And perform the rounding: + return (rate + round[r]) >> 16; + } + + 14 lines-of-code. Compare that with the 120 that the Ambassador + guys needed. (would be 8 lines shorter if I'd try to really reduce + the number of lines: + + int mr (unsigned int rate, int r) + { + int e = 16+9; + static int round[4]={0, 0, 0xffff, 0x8000}; + if (!rate) return 0; + for (; rate & 0xfc000000 ;rate >>= 1, e++); + for (;!(rate & 0xfe000000);rate <<= 1, e--); + return ((rate & ~0x02000000) | (e << (16+9)) + round[r]) >> 16; + } + + Exercise for the reader: Remove one more line-of-code, without + cheating. (Just joining two lines is cheating). (I know it's + possible, don't think you've beat me if you found it... 
If you + manage to lose two lines or more, keep me updated! ;-) + + -- REW */ + + +#define ROUND_UP 1 +#define ROUND_DOWN 2 +#define ROUND_NEAREST 3 +/********** make rate (not quite as much fun as Horizon) **********/ + +static unsigned int make_rate (unsigned int rate, int r, + u16 * bits, unsigned int * actual) +{ + unsigned char exp = -1; /* hush gcc */ + unsigned int man = -1; /* hush gcc */ + + fs_dprintk (FS_DEBUG_QOS, "make_rate %u", rate); + + /* rates in cells per second, ITU format (nasty 16-bit floating-point) + given 5-bit e and 9-bit m: + rate = EITHER (1+m/2^9)*2^e OR 0 + bits = EITHER 1<<14 | e<<9 | m OR 0 + (bit 15 is "reserved", bit 14 "non-zero") + smallest rate is 0 (special representation) + largest rate is (1+511/512)*2^31 = 4290772992 (< 2^32-1) + smallest non-zero rate is (1+0/512)*2^0 = 1 (> 0) + simple algorithm: + find position of top bit, this gives e + remove top bit and shift (rounding if feeling clever) by 9-e + */ + /* Ambassador ucode bug: please don't set bit 14! so 0 rate not + representable. // This should move into the ambassador driver + when properly merged. -- REW */ + + if (rate > 0xffc00000U) { + /* larger than largest representable rate */ + + if (r == ROUND_UP) { + return -EINVAL; + } else { + exp = 31; + man = 511; + } + + } else if (rate) { + /* representable rate */ + + exp = 31; + man = rate; + + /* invariant: rate = man*2^(exp-31) */ + while (!(man & (1<<31))) { + exp = exp - 1; + man = man<<1; + } + + /* man has top bit set + rate = (2^31+(man-2^31))*2^(exp-31) + rate = (1+(man-2^31)/2^31)*2^exp + */ + man = man<<1; + man &= 0xffffffffU; /* a nop on 32-bit systems */ + /* rate = (1+man/2^32)*2^exp + + exp is in the range 0 to 31, man is in the range 0 to 2^32-1 + time to lose significance... we want m in the range 0 to 2^9-1 + rounding presents a minor problem... we first decide which way + we are rounding (based on given rounding direction and possibly + the bits of the mantissa that are to be discarded). 
+ */ + + switch (r) { + case ROUND_DOWN: { + /* just truncate */ + man = man>>(32-9); + break; + } + case ROUND_UP: { + /* check all bits that we are discarding */ + if (man & (-1>>9)) { + man = (man>>(32-9)) + 1; + if (man == (1<<9)) { + /* no need to check for round up outside of range */ + man = 0; + exp += 1; + } + } else { + man = (man>>(32-9)); + } + break; + } + case ROUND_NEAREST: { + /* check msb that we are discarding */ + if (man & (1<<(32-9-1))) { + man = (man>>(32-9)) + 1; + if (man == (1<<9)) { + /* no need to check for round up outside of range */ + man = 0; + exp += 1; + } + } else { + man = (man>>(32-9)); + } + break; + } + } + + } else { + /* zero rate - not representable */ + + if (r == ROUND_DOWN) { + return -EINVAL; + } else { + exp = 0; + man = 0; + } + } + + fs_dprintk (FS_DEBUG_QOS, "rate: man=%u, exp=%hu", man, exp); + + if (bits) + *bits = /* (1<<14) | */ (exp<<9) | man; + + if (actual) + *actual = (exp >= 9) + ? (1 << exp) + (man << (exp-9)) + : (1 << exp) + ((man + (1<<(9-exp-1))) >> (9-exp)); + + return 0; +} + + + + +/* FireStream access routines */ +/* For DEEP-DOWN debugging these can be rigged to intercept accesses to + certain registers or to just log all accesses. */ + +static inline void write_fs (struct fs_dev *dev, int offset, u32 val) +{ + writel (val, dev->base + offset); +} + + +static inline u32 read_fs (struct fs_dev *dev, int offset) +{ + return readl (dev->base + offset); +} + + + +static inline struct FS_QENTRY *get_qentry (struct fs_dev *dev, struct queue *q) +{ + return bus_to_virt (read_fs (dev, Q_WP(q->offset)) & Q_ADDR_MASK); +} + + +static void submit_qentry (struct fs_dev *dev, struct queue *q, struct FS_QENTRY *qe) +{ + u32 wp; + struct FS_QENTRY *cqe; + + /* XXX Sanity check: the write pointer can be checked to be + still the same as the value passed as qe... -- REW */ + /* udelay (5); */ + while ((wp = read_fs (dev, Q_WP (q->offset))) & Q_FULL) { + fs_dprintk (FS_DEBUG_TXQ, "Found queue at %x full. 
Waiting.\n", + q->offset); + schedule (); + } + + wp &= ~0xf; + cqe = bus_to_virt (wp); + if (qe != cqe) { + fs_dprintk (FS_DEBUG_TXQ, "q mismatch! %p %p\n", qe, cqe); + } + + write_fs (dev, Q_WP(q->offset), Q_INCWRAP); + + { + static int c; + if (!(c++ % 100)) + { + int rp, wp; + rp = read_fs (dev, Q_RP(q->offset)); + wp = read_fs (dev, Q_WP(q->offset)); + fs_dprintk (FS_DEBUG_TXQ, "q at %d: %x-%x: %x entries.\n", + q->offset, rp, wp, wp-rp); + } + } +} + +#ifdef DEBUG_EXTRA +static struct FS_QENTRY pq[60]; +static int qp; + +static struct FS_BPENTRY dq[60]; +static int qd; +static void *da[60]; +#endif + +static void submit_queue (struct fs_dev *dev, struct queue *q, + u32 cmd, u32 p1, u32 p2, u32 p3) +{ + struct FS_QENTRY *qe; + + qe = get_qentry (dev, q); + qe->cmd = cmd; + qe->p0 = p1; + qe->p1 = p2; + qe->p2 = p3; + submit_qentry (dev, q, qe); + +#ifdef DEBUG_EXTRA + pq[qp].cmd = cmd; + pq[qp].p0 = p1; + pq[qp].p1 = p2; + pq[qp].p2 = p3; + qp++; + if (qp >= 60) qp = 0; +#endif +} + +/* Test the "other" way one day... -- REW */ +#if 1 +#define submit_command submit_queue +#else + +static void submit_command (struct fs_dev *dev, struct queue *q, + u32 cmd, u32 p1, u32 p2, u32 p3) +{ + write_fs (dev, CMDR0, cmd); + write_fs (dev, CMDR1, p1); + write_fs (dev, CMDR2, p2); + write_fs (dev, CMDR3, p3); +} +#endif + + + +static void process_return_queue (struct fs_dev *dev, struct queue *q) +{ + long rq; + struct FS_QENTRY *qe; + void *tc; + + while (!((rq = read_fs (dev, Q_RP(q->offset))) & Q_EMPTY)) { + fs_dprintk (FS_DEBUG_QUEUE, "reaping return queue entry at %lx\n", rq); + qe = bus_to_virt (rq); + + fs_dprintk (FS_DEBUG_QUEUE, "queue entry: %08x %08x %08x %08x. 
(%d)\n", + qe->cmd, qe->p0, qe->p1, qe->p2, STATUS_CODE (qe)); + + switch (STATUS_CODE (qe)) { + case 5: + tc = bus_to_virt (qe->p0); + fs_dprintk (FS_DEBUG_ALLOC, "Free tc: %p\n", tc); + kfree (tc); + break; + } + + write_fs (dev, Q_RP(q->offset), Q_INCWRAP); + } +} + + +static void process_txdone_queue (struct fs_dev *dev, struct queue *q) +{ + long rq; + long tmp; + struct FS_QENTRY *qe; + struct sk_buff *skb; + struct FS_BPENTRY *td; + + while (!((rq = read_fs (dev, Q_RP(q->offset))) & Q_EMPTY)) { + fs_dprintk (FS_DEBUG_QUEUE, "reaping txdone entry at %lx\n", rq); + qe = bus_to_virt (rq); + + fs_dprintk (FS_DEBUG_QUEUE, "queue entry: %08x %08x %08x %08x: %d\n", + qe->cmd, qe->p0, qe->p1, qe->p2, STATUS_CODE (qe)); + + if (STATUS_CODE (qe) != 2) + fs_dprintk (FS_DEBUG_TXMEM, "queue entry: %08x %08x %08x %08x: %d\n", + qe->cmd, qe->p0, qe->p1, qe->p2, STATUS_CODE (qe)); + + + switch (STATUS_CODE (qe)) { + case 0x01: /* This is for AAL0 where we put the chip in streaming mode */ + /* Fall through */ + case 0x02: + /* Process a real txdone entry. 
*/ + tmp = qe->p0; + if (tmp & 0x0f) + printk (KERN_WARNING "td not aligned: %ld\n", tmp); + tmp &= ~0x0f; + td = bus_to_virt (tmp); + + fs_dprintk (FS_DEBUG_QUEUE, "Pool entry: %08x %08x %08x %08x %p.\n", + td->flags, td->next, td->bsa, td->aal_bufsize, td->skb ); + + skb = td->skb; + if (skb == FS_VCC (ATM_SKB(skb)->vcc)->last_skb) { + wake_up_interruptible (& FS_VCC (ATM_SKB(skb)->vcc)->close_wait); + FS_VCC (ATM_SKB(skb)->vcc)->last_skb = NULL; + } + td->dev->ntxpckts--; + + { + static int c=0; + + if (!(c++ % 100)) { + fs_dprintk (FS_DEBUG_QSIZE, "[%d]", td->dev->ntxpckts); + } + } + + atomic_inc(&ATM_SKB(skb)->vcc->stats->tx); + + fs_dprintk (FS_DEBUG_TXMEM, "i"); + fs_dprintk (FS_DEBUG_ALLOC, "Free t-skb: %p\n", skb); + fs_kfree_skb (skb); + + fs_dprintk (FS_DEBUG_ALLOC, "Free trans-d: %p\n", td); + memset (td, 0x12, sizeof (struct FS_BPENTRY)); + kfree (td); + break; + default: + /* Here we get the tx purge inhibit command ... */ + /* Action, I believe, is "don't do anything". -- REW */ + ; + } + + write_fs (dev, Q_RP(q->offset), Q_INCWRAP); + } +} + + +static void process_incoming (struct fs_dev *dev, struct queue *q) +{ + long rq; + struct FS_QENTRY *qe; + struct FS_BPENTRY *pe; + struct sk_buff *skb; + unsigned int channo; + struct atm_vcc *atm_vcc; + + while (!((rq = read_fs (dev, Q_RP(q->offset))) & Q_EMPTY)) { + fs_dprintk (FS_DEBUG_QUEUE, "reaping incoming queue entry at %lx\n", rq); + qe = bus_to_virt (rq); + + fs_dprintk (FS_DEBUG_QUEUE, "queue entry: %08x %08x %08x %08x. 
", + qe->cmd, qe->p0, qe->p1, qe->p2); + + fs_dprintk (FS_DEBUG_QUEUE, "-> %x: %s\n", + STATUS_CODE (qe), + res_strings[STATUS_CODE(qe)]); + + pe = bus_to_virt (qe->p0); + fs_dprintk (FS_DEBUG_QUEUE, "Pool entry: %08x %08x %08x %08x %p %p.\n", + pe->flags, pe->next, pe->bsa, pe->aal_bufsize, + pe->skb, pe->fp); + + channo = qe->cmd & 0xffff; + + if (channo < dev->nchannels) + atm_vcc = dev->atm_vccs[channo]; + else + atm_vcc = NULL; + + /* Single buffer packet */ + switch (STATUS_CODE (qe)) { + case 0x1: + /* Fall through for streaming mode */ + case 0x2:/* Packet received OK.... */ + if (atm_vcc) { + skb = pe->skb; + pe->fp->n--; +#if 0 + fs_dprintk (FS_DEBUG_QUEUE, "Got skb: %p\n", skb); + if (FS_DEBUG_QUEUE & fs_debug) my_hd (bus_to_virt (pe->bsa), 0x20); +#endif + skb_put (skb, qe->p1 & 0xffff); + ATM_SKB(skb)->vcc = atm_vcc; + atomic_inc(&atm_vcc->stats->rx); + do_gettimeofday(&skb->stamp); + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-skb: %p (pushed)\n", skb); + atm_vcc->push (atm_vcc, skb); + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-d: %p\n", pe); + kfree (pe); + } else { + printk (KERN_ERR "Got a receive on a non-open channel %d.\n", channo); + } + break; + case 0x17:/* AAL 5 CRC32 error. IFF the length field is nonzero, a buffer + has been consumed and needs to be processed. -- REW */ + if (qe->p1 & 0xffff) { + pe = bus_to_virt (qe->p0); + pe->fp->n--; + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-skb: %p\n", pe->skb); + dev_kfree_skb_any (pe->skb); + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-d: %p\n", pe); + kfree (pe); + } + if (atm_vcc) + atomic_inc(&atm_vcc->stats->rx_drop); + break; + case 0x1f: /* Reassembly abort: no buffers. */ + /* Silently increment error counter. */ + if (atm_vcc) + atomic_inc(&atm_vcc->stats->rx_drop); + break; + default: /* Hmm. Haven't written the code to handle the others yet... 
-- REW */
+ printk (KERN_WARNING "Don't know what to do with RX status %x: %s.\n",
+ STATUS_CODE(qe), res_strings[STATUS_CODE (qe)]);
+ }
+ write_fs (dev, Q_RP(q->offset), Q_INCWRAP);
+ }
+}
+
+
+
+#define DO_DIRECTION(tp) ((tp)->traffic_class != ATM_NONE)
+
+static int fs_open(struct atm_vcc *atm_vcc)
+{
+ struct fs_dev *dev;
+ struct fs_vcc *vcc;
+ struct fs_transmit_config *tc;
+ struct atm_trafprm * txtp;
+ struct atm_trafprm * rxtp;
+ /* struct fs_receive_config *rc;*/
+ /* struct FS_QENTRY *qe; */
+ int error;
+ int bfp;
+ int to;
+ unsigned short tmc0;
+ short vpi = atm_vcc->vpi;
+ int vci = atm_vcc->vci;
+
+ func_enter ();
+
+ dev = FS_DEV(atm_vcc->dev);
+ fs_dprintk (FS_DEBUG_OPEN, "fs: open on dev: %p, vcc at %p\n",
+ dev, atm_vcc);
+
+ if (vpi != ATM_VPI_UNSPEC && vci != ATM_VCI_UNSPEC)
+ set_bit(ATM_VF_ADDR, &atm_vcc->flags);
+
+ if ((atm_vcc->qos.aal != ATM_AAL5) &&
+ (atm_vcc->qos.aal != ATM_AAL2))
+ return -EINVAL; /* XXX AAL0 */
+
+ fs_dprintk (FS_DEBUG_OPEN, "fs: (itf %d): open %d.%d\n",
+ atm_vcc->dev->number, atm_vcc->vpi, atm_vcc->vci);
+
+ /* XXX handle qos parameters (rate limiting) ? */
+
+ vcc = kmalloc(sizeof(struct fs_vcc), GFP_KERNEL);
+ fs_dprintk (FS_DEBUG_ALLOC, "Alloc VCC: %p(%Zd)\n", vcc, sizeof(struct fs_vcc));
+ if (!vcc) {
+ clear_bit(ATM_VF_ADDR, &atm_vcc->flags);
+ return -ENOMEM;
+ }
+
+ atm_vcc->dev_data = vcc;
+ vcc->last_skb = NULL;
+
+ init_waitqueue_head (&vcc->close_wait);
+
+ txtp = &atm_vcc->qos.txtp;
+ rxtp = &atm_vcc->qos.rxtp;
+
+ if (!test_bit(ATM_VF_PARTIAL, &atm_vcc->flags)) {
+ if (IS_FS50(dev)) {
+ /* Increment the channel number: take a free one next time. 
*/
+ for (to=33;to;to--, dev->channo++) {
+ /* We only have 32 channels */
+ if (dev->channo >= 32)
+ dev->channo = 0;
+ /* If we need to do RX, AND the RX is inuse, try the next */
+ if (DO_DIRECTION(rxtp) && dev->atm_vccs[dev->channo])
+ continue;
+ /* If we need to do TX, AND the TX is inuse, try the next */
+ if (DO_DIRECTION(txtp) && test_bit (dev->channo, dev->tx_inuse))
+ continue;
+ /* Ok, both are free! (or not needed) */
+ break;
+ }
+ if (!to) {
+ printk ("No more free channels for FS50..\n");
+ kfree (vcc);
+ return -EBUSY;
+ }
+ vcc->channo = dev->channo;
+ dev->channo &= dev->channel_mask;
+
+ } else {
+ vcc->channo = (vpi << FS155_VCI_BITS) | (vci);
+ if (((DO_DIRECTION(rxtp) && dev->atm_vccs[vcc->channo])) ||
+ ( DO_DIRECTION(txtp) && test_bit (vcc->channo, dev->tx_inuse))) {
+ printk ("Channel is in use for FS155.\n");
+ kfree (vcc);
+ return -EBUSY;
+ }
+ }
+ fs_dprintk (FS_DEBUG_OPEN, "OK. Allocated channel %x(%d).\n",
+ vcc->channo, vcc->channo);
+ }
+
+ if (DO_DIRECTION (txtp)) {
+ tc = kmalloc (sizeof (struct fs_transmit_config), GFP_KERNEL);
+ fs_dprintk (FS_DEBUG_ALLOC, "Alloc tc: %p(%Zd)\n",
+ tc, sizeof (struct fs_transmit_config));
+ if (!tc) {
+ fs_dprintk (FS_DEBUG_OPEN, "fs: can't alloc transmit_config.\n");
+ kfree (vcc);
+ return -ENOMEM;
+ }
+
+ /* Allocate the "open" entry from the high priority txq. This makes
+ it most likely that the chip will notice it. It also prevents us
+ from having to wait for completion. On the other hand, we may
+ need to wait for completion anyway, to see if it completed
+ successfully. */
+
+ switch (atm_vcc->qos.aal) {
+ case ATM_AAL2:
+ case ATM_AAL0:
+ tc->flags = 0
+ | TC_FLAGS_TRANSPARENT_PAYLOAD
+ | TC_FLAGS_PACKET
+ | (1 << 28)
+ | TC_FLAGS_TYPE_UBR /* XXX Change to VBR -- PVDL */
+ | TC_FLAGS_CAL0;
+ break;
+ case ATM_AAL5:
+ tc->flags = 0
+ | TC_FLAGS_AAL5
+ | TC_FLAGS_PACKET /* ??? 
*/
+ | TC_FLAGS_TYPE_CBR
+ | TC_FLAGS_CAL0;
+ break;
+ default:
+ printk ("Unknown aal: %d\n", atm_vcc->qos.aal);
+ tc->flags = 0;
+ }
+ /* Docs are vague about this atm_hdr field. By the way, the FS
+ * chip makes odd errors if lower bits are set.... -- REW */
+ tc->atm_hdr = (vpi << 20) | (vci << 4);
+ {
+ int pcr = atm_pcr_goal (txtp);
+
+ fs_dprintk (FS_DEBUG_OPEN, "pcr = %d.\n", pcr);
+
+ /* XXX Hmm. officially we're only allowed to do this if rounding
+ is round_down -- REW */
+ if (IS_FS50(dev)) {
+ if (pcr > 51840000/53/8) pcr = 51840000/53/8;
+ } else {
+ if (pcr > 155520000/53/8) pcr = 155520000/53/8;
+ }
+ if (!pcr) {
+ /* no rate cap */
+ tmc0 = IS_FS50(dev)?0x61BE:0x64c9; /* Just copied over the bits from Fujitsu -- REW */
+ } else {
+ int r;
+ if (pcr < 0) {
+ r = ROUND_DOWN;
+ pcr = -pcr;
+ } else {
+ r = ROUND_UP;
+ }
+ error = make_rate (pcr, r, &tmc0, NULL);
+ if (error) {
+ /* Without this check, tmc0 would be used uninitialized
+ when make_rate fails. */
+ kfree (tc);
+ return error;
+ }
+ }
+ fs_dprintk (FS_DEBUG_OPEN, "pcr = %d.\n", pcr);
+ }
+
+ tc->TMC[0] = tmc0 | 0x4000;
+ tc->TMC[1] = 0; /* Unused */
+ tc->TMC[2] = 0; /* Unused */
+ tc->TMC[3] = 0; /* Unused */
+
+ tc->spec = 0; /* UTOPIA address, UDF, HEC: Unused -> 0 */
+ tc->rtag[0] = 0; /* What should I do with routing tags???
+ -- Not used -- AS -- Thanks -- REW*/
+ tc->rtag[1] = 0;
+ tc->rtag[2] = 0;
+
+ if (fs_debug & FS_DEBUG_OPEN) {
+ fs_dprintk (FS_DEBUG_OPEN, "TX config record:\n");
+ my_hd (tc, sizeof (*tc));
+ }
+
+ /* We now use the "submit_command" function to submit commands to
+ the firestream. There is a define up near the definition of
+ that routine that switches this routine between immediate write
+ to the immediate command registers and queuing the commands in
+ the HPTXQ for execution. This last technique might be more
+ efficient if we know we're going to submit a whole lot of
+ commands in one go, but this driver is not setup to be able to
+ use such a construct. So it probably doesn't matter much right
+ now. -- REW */
+
+ /* The command is IMMediate and INQueue. The parameters are out-of-line.. 
*/ + submit_command (dev, &dev->hp_txq, + QE_CMD_CONFIG_TX | QE_CMD_IMM_INQ | vcc->channo, + virt_to_bus (tc), 0, 0); + + submit_command (dev, &dev->hp_txq, + QE_CMD_TX_EN | QE_CMD_IMM_INQ | vcc->channo, + 0, 0, 0); + set_bit (vcc->channo, dev->tx_inuse); + } + + if (DO_DIRECTION (rxtp)) { + dev->atm_vccs[vcc->channo] = atm_vcc; + + for (bfp = 0;bfp < FS_NR_FREE_POOLS; bfp++) + if (atm_vcc->qos.rxtp.max_sdu <= dev->rx_fp[bfp].bufsize) break; + if (bfp >= FS_NR_FREE_POOLS) { + fs_dprintk (FS_DEBUG_OPEN, "No free pool fits sdu: %d.\n", + atm_vcc->qos.rxtp.max_sdu); + /* XXX Cleanup? -- Would just calling fs_close work??? -- REW */ + + /* XXX clear tx inuse. Close TX part? */ + dev->atm_vccs[vcc->channo] = NULL; + kfree (vcc); + return -EINVAL; + } + + switch (atm_vcc->qos.aal) { + case ATM_AAL0: + case ATM_AAL2: + submit_command (dev, &dev->hp_txq, + QE_CMD_CONFIG_RX | QE_CMD_IMM_INQ | vcc->channo, + RC_FLAGS_TRANSP | + RC_FLAGS_BFPS_BFP * bfp | + RC_FLAGS_RXBM_PSB, 0, 0); + break; + case ATM_AAL5: + submit_command (dev, &dev->hp_txq, + QE_CMD_CONFIG_RX | QE_CMD_IMM_INQ | vcc->channo, + RC_FLAGS_AAL5 | + RC_FLAGS_BFPS_BFP * bfp | + RC_FLAGS_RXBM_PSB, 0, 0); + break; + }; + if (IS_FS50 (dev)) { + submit_command (dev, &dev->hp_txq, + QE_CMD_REG_WR | QE_CMD_IMM_INQ, + 0x80 + vcc->channo, + (vpi << 16) | vci, 0 ); /* XXX -- Use defines. */ + } + submit_command (dev, &dev->hp_txq, + QE_CMD_RX_EN | QE_CMD_IMM_INQ | vcc->channo, + 0, 0, 0); + } + + /* Indicate we're done! 
*/ + set_bit(ATM_VF_READY, &atm_vcc->flags); + + func_exit (); + return 0; +} + + +static void fs_close(struct atm_vcc *atm_vcc) +{ + struct fs_dev *dev = FS_DEV (atm_vcc->dev); + struct fs_vcc *vcc = FS_VCC (atm_vcc); + struct atm_trafprm * txtp; + struct atm_trafprm * rxtp; + + func_enter (); + + clear_bit(ATM_VF_READY, &atm_vcc->flags); + + fs_dprintk (FS_DEBUG_QSIZE, "--==**[%d]**==--", dev->ntxpckts); + if (vcc->last_skb) { + fs_dprintk (FS_DEBUG_QUEUE, "Waiting for skb %p to be sent.\n", + vcc->last_skb); + /* We're going to wait for the last packet to get sent on this VC. It would + be impolite not to send them don't you think? + XXX + We don't know which packets didn't get sent. So if we get interrupted in + this sleep_on, we'll lose any reference to these packets. Memory leak! + On the other hand, it's awfully convenient that we can abort a "close" that + is taking too long. Maybe just use non-interruptible sleep on? -- REW */ + interruptible_sleep_on (& vcc->close_wait); + } + + txtp = &atm_vcc->qos.txtp; + rxtp = &atm_vcc->qos.rxtp; + + + /* See App note XXX (Unpublished as of now) for the reason for the + removal of the "CMD_IMM_INQ" part of the TX_PURGE_INH... -- REW */ + + if (DO_DIRECTION (txtp)) { + submit_command (dev, &dev->hp_txq, + QE_CMD_TX_PURGE_INH | /*QE_CMD_IMM_INQ|*/ vcc->channo, 0,0,0); + clear_bit (vcc->channo, dev->tx_inuse); + } + + if (DO_DIRECTION (rxtp)) { + submit_command (dev, &dev->hp_txq, + QE_CMD_RX_PURGE_INH | QE_CMD_IMM_INQ | vcc->channo, 0,0,0); + dev->atm_vccs [vcc->channo] = NULL; + + /* This means that this is configured as a receive channel */ + if (IS_FS50 (dev)) { + /* Disable the receive filter. Is 0/0 indeed an invalid receive + channel? -- REW. Yes it is. -- Hang. Ok. I'll use -1 + (0xfff...) 
-- REW */ + submit_command (dev, &dev->hp_txq, + QE_CMD_REG_WR | QE_CMD_IMM_INQ, + 0x80 + vcc->channo, -1, 0 ); + } + } + + fs_dprintk (FS_DEBUG_ALLOC, "Free vcc: %p\n", vcc); + kfree (vcc); + + func_exit (); +} + + +static int fs_send (struct atm_vcc *atm_vcc, struct sk_buff *skb) +{ + struct fs_dev *dev = FS_DEV (atm_vcc->dev); + struct fs_vcc *vcc = FS_VCC (atm_vcc); + struct FS_BPENTRY *td; + + func_enter (); + + fs_dprintk (FS_DEBUG_TXMEM, "I"); + fs_dprintk (FS_DEBUG_SEND, "Send: atm_vcc %p skb %p vcc %p dev %p\n", + atm_vcc, skb, vcc, dev); + + fs_dprintk (FS_DEBUG_ALLOC, "Alloc t-skb: %p (atm_send)\n", skb); + + ATM_SKB(skb)->vcc = atm_vcc; + + vcc->last_skb = skb; + + td = kmalloc (sizeof (struct FS_BPENTRY), GFP_ATOMIC); + fs_dprintk (FS_DEBUG_ALLOC, "Alloc transd: %p(%Zd)\n", td, sizeof (struct FS_BPENTRY)); + if (!td) { + /* Oops out of mem */ + return -ENOMEM; + } + + fs_dprintk (FS_DEBUG_SEND, "first word in buffer: %x\n", + *(int *) skb->data); + + td->flags = TD_EPI | TD_DATA | skb->len; + td->next = 0; + td->bsa = virt_to_bus (skb->data); + td->skb = skb; + td->dev = dev; + dev->ntxpckts++; + +#ifdef DEBUG_EXTRA + da[qd] = td; + dq[qd].flags = td->flags; + dq[qd].next = td->next; + dq[qd].bsa = td->bsa; + dq[qd].skb = td->skb; + dq[qd].dev = td->dev; + qd++; + if (qd >= 60) qd = 0; +#endif + + submit_queue (dev, &dev->hp_txq, + QE_TRANSMIT_DE | vcc->channo, + virt_to_bus (td), 0, + virt_to_bus (td)); + + fs_dprintk (FS_DEBUG_QUEUE, "in send: txq %d txrq %d\n", + read_fs (dev, Q_EA (dev->hp_txq.offset)) - + read_fs (dev, Q_SA (dev->hp_txq.offset)), + read_fs (dev, Q_EA (dev->tx_relq.offset)) - + read_fs (dev, Q_SA (dev->tx_relq.offset))); + + func_exit (); + return 0; +} + + +/* Some function placeholders for functions we don't yet support. 
*/
+
+#if 0
+static int fs_ioctl(struct atm_dev *dev,unsigned int cmd,void __user *arg)
+{
+ func_enter ();
+ func_exit ();
+ return -ENOIOCTLCMD;
+}
+
+
+static int fs_getsockopt(struct atm_vcc *vcc,int level,int optname,
+ void __user *optval,int optlen)
+{
+ func_enter ();
+ func_exit ();
+ return 0;
+}
+
+
+static int fs_setsockopt(struct atm_vcc *vcc,int level,int optname,
+ void __user *optval,int optlen)
+{
+ func_enter ();
+ func_exit ();
+ return 0;
+}
+
+
+static void fs_phy_put(struct atm_dev *dev,unsigned char value,
+ unsigned long addr)
+{
+ func_enter ();
+ func_exit ();
+}
+
+
+static unsigned char fs_phy_get(struct atm_dev *dev,unsigned long addr)
+{
+ func_enter ();
+ func_exit ();
+ return 0;
+}
+
+
+static int fs_change_qos(struct atm_vcc *vcc,struct atm_qos *qos,int flags)
+{
+ func_enter ();
+ func_exit ();
+ return 0;
+}
+
+#endif
+
+
+static const struct atmdev_ops ops = {
+ .open = fs_open,
+ .close = fs_close,
+ .send = fs_send,
+ .owner = THIS_MODULE,
+ /* ioctl: fs_ioctl, */
+ /* getsockopt: fs_getsockopt, */
+ /* setsockopt: fs_setsockopt, */
+ /* change_qos: fs_change_qos, */
+
+ /* For now implement these internally here... */
+ /* phy_put: fs_phy_put, */
+ /* phy_get: fs_phy_get, */
+};
+
+
+static void __devinit undocumented_pci_fix (struct pci_dev *pdev)
+{
+ u32 tint;
+
+ /* The Windows driver says: */
+ /* Switch off FireStream Retry Limit Threshold
+ */
+
+ /* The register at 0x28 is documented as "reserved", no further
+ comments. 
*/
+
+ pci_read_config_dword (pdev, 0x28, &tint);
+ if (tint != 0x80) {
+ tint = 0x80;
+ pci_write_config_dword (pdev, 0x28, tint);
+ }
+}
+
+
+
+/**************************************************************************
+ * PHY routines *
+ **************************************************************************/
+
+static void __devinit write_phy (struct fs_dev *dev, int regnum, int val)
+{
+ submit_command (dev, &dev->hp_txq, QE_CMD_PRP_WR | QE_CMD_IMM_INQ,
+ regnum, val, 0);
+}
+
+static int __devinit init_phy (struct fs_dev *dev, struct reginit_item *reginit)
+{
+ int i;
+
+ func_enter ();
+ while (reginit->reg != PHY_EOF) {
+ if (reginit->reg == PHY_CLEARALL) {
+ /* PHY_CLEARALL means clear all registers. The number of registers is in "val". */
+ for (i=0;i<reginit->val;i++) {
+ write_phy (dev, i, 0);
+ }
+ } else {
+ write_phy (dev, reginit->reg, reginit->val);
+ }
+ reginit++;
+ }
+ func_exit ();
+ return 0;
+}
+
+static void reset_chip (struct fs_dev *dev)
+{
+ int i;
+
+ write_fs (dev, SARMODE0, SARMODE0_SRTS0);
+
+ /* Undocumented delay */
+ udelay (128);
+
+ /* The internal registers are documented to all reset to zero, but
+ comments & code in the Windows driver indicate that the pools are
+ NOT reset. */
+ for (i=0;i < FS_NR_FREE_POOLS;i++) {
+ write_fs (dev, FP_CNF (RXB_FP(i)), 0);
+ write_fs (dev, FP_SA (RXB_FP(i)), 0);
+ write_fs (dev, FP_EA (RXB_FP(i)), 0);
+ write_fs (dev, FP_CNT (RXB_FP(i)), 0);
+ write_fs (dev, FP_CTU (RXB_FP(i)), 0);
+ }
+
+ /* The same goes for the match channel registers, although those are
+ NOT documented that way in the Windows driver. -- REW */
+ /* The Windows driver DOES write 0 to these registers somewhere in
+ the init sequence. However, a small hardware feature will
+ prevent reception of data on VPI/VCI = 0/0 (unless the channel
+ allocated happens to have no disabled channels that have a lower
+ number). -- REW */
+
+ /* Clear the match channel registers. 
*/
+ if (IS_FS50 (dev)) {
+ for (i=0;i<FS50_NR_CHANNELS;i++) {
+ write_fs (dev, 0x200 + i * 4, -1);
+ }
+ }
+}
+
+static void __devinit *aligned_kmalloc (int size, int flags, int alignment)
+{
+ void *t;
+
+ if (alignment <= 0x10) {
+ t = kmalloc (size, flags);
+ if ((unsigned long)t & (alignment-1)) {
+ printk ("Kmalloc doesn't align things correctly! %p\n", t);
+ kfree (t);
+ return aligned_kmalloc (size, flags, alignment * 4);
+ }
+ return t;
+ }
+ printk (KERN_ERR "Request for > 0x10 alignment not yet implemented (hard!)\n");
+ return NULL;
+}
+
+static int __devinit init_q (struct fs_dev *dev,
+ struct queue *txq, int queue, int nentries, int is_rq)
+{
+ int sz = nentries * sizeof (struct FS_QENTRY);
+ struct FS_QENTRY *p;
+
+ func_enter ();
+
+ fs_dprintk (FS_DEBUG_INIT, "Initializing queue at %x: %d entries:\n",
+ queue, nentries);
+
+ p = aligned_kmalloc (sz, GFP_KERNEL, 0x10);
+ fs_dprintk (FS_DEBUG_ALLOC, "Alloc queue: %p(%d)\n", p, sz);
+
+ if (!p) return 0;
+
+ write_fs (dev, Q_SA(queue), virt_to_bus(p));
+ write_fs (dev, Q_EA(queue), virt_to_bus(p+nentries-1));
+ write_fs (dev, Q_WP(queue), virt_to_bus(p));
+ write_fs (dev, Q_RP(queue), virt_to_bus(p));
+ if (is_rq) {
+ /* Configuration for the receive queue: 0: interrupt immediately,
+ no pre-warning to empty queues: We do our best to keep the
+ queue filled anyway. 
*/
+ write_fs (dev, Q_CNF(queue), 0 );
+ }
+
+ txq->sa = p;
+ txq->ea = p;
+ txq->offset = queue;
+
+ func_exit ();
+ return 1;
+}
+
+
+static int __devinit init_fp (struct fs_dev *dev,
+ struct freepool *fp, int queue, int bufsize, int nr_buffers)
+{
+ func_enter ();
+
+ fs_dprintk (FS_DEBUG_INIT, "Initializing free pool at %x:\n", queue);
+
+ write_fs (dev, FP_CNF(queue), (bufsize * RBFP_RBS) | RBFP_RBSVAL | RBFP_CME);
+ write_fs (dev, FP_SA(queue), 0);
+ write_fs (dev, FP_EA(queue), 0);
+ write_fs (dev, FP_CTU(queue), 0);
+ write_fs (dev, FP_CNT(queue), 0);
+
+ fp->offset = queue;
+ fp->bufsize = bufsize;
+ fp->nr_buffers = nr_buffers;
+
+ func_exit ();
+ return 1;
+}
+
+
+static inline int nr_buffers_in_freepool (struct fs_dev *dev, struct freepool *fp)
+{
+#if 0
+ /* This seems to be unreliable.... */
+ return read_fs (dev, FP_CNT (fp->offset));
+#else
+ return fp->n;
+#endif
+}
+
+
+/* Check if this gets going again if a pool ever runs out. -- Yes, it
+ does. I've seen "receive abort: no buffers" and things started
+ working again after that... 
-- REW */ + +static void top_off_fp (struct fs_dev *dev, struct freepool *fp, int gfp_flags) +{ + struct FS_BPENTRY *qe, *ne; + struct sk_buff *skb; + int n = 0; + + fs_dprintk (FS_DEBUG_QUEUE, "Topping off queue at %x (%d-%d/%d)\n", + fp->offset, read_fs (dev, FP_CNT (fp->offset)), fp->n, + fp->nr_buffers); + while (nr_buffers_in_freepool(dev, fp) < fp->nr_buffers) { + + skb = alloc_skb (fp->bufsize, gfp_flags); + fs_dprintk (FS_DEBUG_ALLOC, "Alloc rec-skb: %p(%d)\n", skb, fp->bufsize); + if (!skb) break; + ne = kmalloc (sizeof (struct FS_BPENTRY), gfp_flags); + fs_dprintk (FS_DEBUG_ALLOC, "Alloc rec-d: %p(%Zd)\n", ne, sizeof (struct FS_BPENTRY)); + if (!ne) { + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-skb: %p\n", skb); + dev_kfree_skb_any (skb); + break; + } + + fs_dprintk (FS_DEBUG_QUEUE, "Adding skb %p desc %p -> %p(%p) ", + skb, ne, skb->data, skb->head); + n++; + ne->flags = FP_FLAGS_EPI | fp->bufsize; + ne->next = virt_to_bus (NULL); + ne->bsa = virt_to_bus (skb->data); + ne->aal_bufsize = fp->bufsize; + ne->skb = skb; + ne->fp = fp; + + qe = (struct FS_BPENTRY *) (read_fs (dev, FP_EA(fp->offset))); + fs_dprintk (FS_DEBUG_QUEUE, "link at %p\n", qe); + if (qe) { + qe = bus_to_virt ((long) qe); + qe->next = virt_to_bus(ne); + qe->flags &= ~FP_FLAGS_EPI; + } else + write_fs (dev, FP_SA(fp->offset), virt_to_bus(ne)); + + write_fs (dev, FP_EA(fp->offset), virt_to_bus (ne)); + fp->n++; /* XXX Atomic_inc? */ + write_fs (dev, FP_CTU(fp->offset), 1); + } + + fs_dprintk (FS_DEBUG_QUEUE, "Added %d entries. \n", n); +} + +static void __devexit free_queue (struct fs_dev *dev, struct queue *txq) +{ + func_enter (); + + write_fs (dev, Q_SA(txq->offset), 0); + write_fs (dev, Q_EA(txq->offset), 0); + write_fs (dev, Q_RP(txq->offset), 0); + write_fs (dev, Q_WP(txq->offset), 0); + /* Configuration ? 
*/
+
+ fs_dprintk (FS_DEBUG_ALLOC, "Free queue: %p\n", txq->sa);
+ kfree (txq->sa);
+
+ func_exit ();
+}
+
+static void __devexit free_freepool (struct fs_dev *dev, struct freepool *fp)
+{
+ func_enter ();
+
+ write_fs (dev, FP_CNF(fp->offset), 0);
+ write_fs (dev, FP_SA (fp->offset), 0);
+ write_fs (dev, FP_EA (fp->offset), 0);
+ write_fs (dev, FP_CNT(fp->offset), 0);
+ write_fs (dev, FP_CTU(fp->offset), 0);
+
+ func_exit ();
+}
+
+
+
+static irqreturn_t fs_irq (int irq, void *dev_id, struct pt_regs * pt_regs)
+{
+ int i;
+ u32 status;
+ struct fs_dev *dev = dev_id;
+
+ status = read_fs (dev, ISR);
+ if (!status)
+ return IRQ_NONE;
+
+ func_enter ();
+
+#ifdef IRQ_RATE_LIMIT
+ /* Aaargh! I'm ashamed. This costs more lines-of-code than the actual
+ interrupt routine! (Well, it used to when I wrote that comment) -- REW */
+ {
+ static unsigned long lastjif;
+ static int nintr=0;
+
+ if (lastjif == jiffies) {
+ if (++nintr > IRQ_RATE_LIMIT) {
+ free_irq (dev->irq, dev_id);
+ printk (KERN_ERR "fs: Too many interrupts. Turning off interrupt %d.\n",
+ dev->irq);
+ }
+ } else {
+ lastjif = jiffies;
+ nintr = 0;
+ }
+ }
+#endif
+ fs_dprintk (FS_DEBUG_QUEUE, "in intr: txq %d txrq %d\n",
+ read_fs (dev, Q_EA (dev->hp_txq.offset)) -
+ read_fs (dev, Q_SA (dev->hp_txq.offset)),
+ read_fs (dev, Q_EA (dev->tx_relq.offset)) -
+ read_fs (dev, Q_SA (dev->tx_relq.offset)));
+
+ /* print the bits in the ISR register. */
+ if (fs_debug & FS_DEBUG_IRQ) {
+ /* The FS_DEBUG things are unnecessary here. But this way it is
+ clear for grep that these are debug prints. */
+ fs_dprintk (FS_DEBUG_IRQ, "IRQ status:");
+ for (i=0;i<27;i++)
+ if (status & (1 << i))
+ fs_dprintk (FS_DEBUG_IRQ, " %s", irq_bitname[i]);
+ fs_dprintk (FS_DEBUG_IRQ, "\n");
+ }
+
+ if (status & ISR_RBRQ0_W) {
+ fs_dprintk (FS_DEBUG_IRQ, "Iiiin-coming (0)!!!!\n");
+ process_incoming (dev, &dev->rx_rq[0]);
+ /* items mentioned on RBRQ0 are from FP 0 or 1. 
*/
+ top_off_fp (dev, &dev->rx_fp[0], GFP_ATOMIC);
+ top_off_fp (dev, &dev->rx_fp[1], GFP_ATOMIC);
+ }
+
+ if (status & ISR_RBRQ1_W) {
+ fs_dprintk (FS_DEBUG_IRQ, "Iiiin-coming (1)!!!!\n");
+ process_incoming (dev, &dev->rx_rq[1]);
+ top_off_fp (dev, &dev->rx_fp[2], GFP_ATOMIC);
+ top_off_fp (dev, &dev->rx_fp[3], GFP_ATOMIC);
+ }
+
+ if (status & ISR_RBRQ2_W) {
+ fs_dprintk (FS_DEBUG_IRQ, "Iiiin-coming (2)!!!!\n");
+ process_incoming (dev, &dev->rx_rq[2]);
+ top_off_fp (dev, &dev->rx_fp[4], GFP_ATOMIC);
+ top_off_fp (dev, &dev->rx_fp[5], GFP_ATOMIC);
+ }
+
+ if (status & ISR_RBRQ3_W) {
+ fs_dprintk (FS_DEBUG_IRQ, "Iiiin-coming (3)!!!!\n");
+ process_incoming (dev, &dev->rx_rq[3]);
+ top_off_fp (dev, &dev->rx_fp[6], GFP_ATOMIC);
+ top_off_fp (dev, &dev->rx_fp[7], GFP_ATOMIC);
+ }
+
+ if (status & ISR_CSQ_W) {
+ fs_dprintk (FS_DEBUG_IRQ, "Command executed ok!\n");
+ process_return_queue (dev, &dev->st_q);
+ }
+
+ if (status & ISR_TBRQ_W) {
+ fs_dprintk (FS_DEBUG_IRQ, "Data transmitted!\n");
+ process_txdone_queue (dev, &dev->tx_relq);
+ }
+
+ func_exit ();
+ return IRQ_HANDLED;
+}
+
+
+#ifdef FS_POLL_FREQ
+static void fs_poll (unsigned long data)
+{
+ struct fs_dev *dev = (struct fs_dev *) data;
+
+ fs_irq (0, dev, NULL);
+ dev->timer.expires = jiffies + FS_POLL_FREQ;
+ add_timer (&dev->timer);
+}
+#endif
+
+static int __devinit fs_init (struct fs_dev *dev)
+{
+ struct pci_dev *pci_dev;
+ int isr, to;
+ int i;
+
+ func_enter ();
+ pci_dev = dev->pci_dev;
+
+ printk (KERN_INFO "found a FireStream %d card, base %08lx, irq %d.\n",
+ IS_FS50(dev)?50:155,
+ pci_resource_start(pci_dev, 0), dev->pci_dev->irq);
+
+ if (fs_debug & FS_DEBUG_INIT)
+ my_hd ((unsigned char *) dev, sizeof (*dev));
+
+ undocumented_pci_fix (pci_dev);
+
+ dev->hw_base = pci_resource_start(pci_dev, 0);
+
+ dev->base = ioremap(dev->hw_base, 0x1000);
+
+ reset_chip (dev);
+
+ write_fs (dev, SARMODE0, 0
+ | (0 * SARMODE0_SHADEN) /* We don't use shadow registers. 
*/
+ | (1 * SARMODE0_INTMODE_READCLEAR)
+ | (1 * SARMODE0_CWRE)
+ | (IS_FS50(dev)?SARMODE0_PRPWT_FS50_5:
+ SARMODE0_PRPWT_FS155_3)
+ | (1 * SARMODE0_CALSUP_1)
+ | (IS_FS50 (dev)?(0
+ | SARMODE0_RXVCS_32
+ | SARMODE0_ABRVCS_32
+ | SARMODE0_TXVCS_32):
+ (0
+ | SARMODE0_RXVCS_1k
+ | SARMODE0_ABRVCS_1k
+ | SARMODE0_TXVCS_1k)));
+
+ /* 10ms * 100 is 1 second. That should be enough, as AN3:9 says it takes
+ 1ms. */
+ to = 100;
+ while (--to) {
+ isr = read_fs (dev, ISR);
+
+ /* This bit is documented as "RESERVED" */
+ if (isr & ISR_INIT_ERR) {
+ printk (KERN_ERR "Error initializing the FS... \n");
+ return 1;
+ }
+ if (isr & ISR_INIT) {
+ fs_dprintk (FS_DEBUG_INIT, "Ha! Initialized OK!\n");
+ break;
+ }
+
+ /* Try again after 10ms. */
+ msleep(10);
+ }
+
+ if (!to) {
+ printk (KERN_ERR "timeout initializing the FS... \n");
+ return 1;
+ }
+
+ /* XXX fix for fs155 */
+ dev->channel_mask = 0x1f;
+ dev->channo = 0;
+
+ /* AN3: 10 */
+ write_fs (dev, SARMODE1, 0
+ | (fs_keystream * SARMODE1_DEFHEC) /* XXX PHY */
+ | ((loopback == 1) * SARMODE1_TSTLP) /* XXX Loopback mode enable... */
+ | (1 * SARMODE1_DCRM)
+ | (1 * SARMODE1_DCOAM)
+ | (0 * SARMODE1_OAMCRC)
+ | (0 * SARMODE1_DUMPE)
+ | (0 * SARMODE1_GPLEN)
+ | (0 * SARMODE1_GNAM)
+ | (0 * SARMODE1_GVAS)
+ | (0 * SARMODE1_GPAS)
+ | (1 * SARMODE1_GPRI)
+ | (0 * SARMODE1_PMS)
+ | (0 * SARMODE1_GFCR)
+ | (1 * SARMODE1_HECM2)
+ | (1 * SARMODE1_HECM1)
+ | (1 * SARMODE1_HECM0)
+ | (1 << 12) /* That's what Hang's driver does. Program to 0 */
+ | (0 * 0xff) /* XXX FS155 */);
+
+
+ /* Cal prescale etc */
+
+ /* AN3: 11 */
+ write_fs (dev, TMCONF, 0x0000000f);
+ write_fs (dev, CALPRESCALE, 0x01010101 * num);
+ write_fs (dev, 0x80, 0x000F00E4);
+
+ /* AN3: 12 */
+ write_fs (dev, CELLOSCONF, 0
+ | ( 0 * CELLOSCONF_CEN)
+ | ( CELLOSCONF_SC1)
+ | (0x80 * CELLOSCONF_COBS)
+ | (num * CELLOSCONF_COPK) /* Changed from 0xff to 0x5a */
+ | (num * CELLOSCONF_COST));/* after a hint from Hang.
+ * performance jumped 50->70... 
*/
+
+ /* Magic value by Hang */
+ write_fs (dev, CELLOSCONF_COST, 0x0B809191);
+
+ if (IS_FS50 (dev)) {
+ write_fs (dev, RAS0, RAS0_DCD_XHLT);
+ dev->atm_dev->ci_range.vpi_bits = 12;
+ dev->atm_dev->ci_range.vci_bits = 16;
+ dev->nchannels = FS50_NR_CHANNELS;
+ } else {
+ write_fs (dev, RAS0, RAS0_DCD_XHLT
+ | (((1 << FS155_VPI_BITS) - 1) * RAS0_VPSEL)
+ | (((1 << FS155_VCI_BITS) - 1) * RAS0_VCSEL));
+ /* We can choose the split arbitrarily. We might be able to
+ support more. Whatever. This should do for now. */
+ dev->atm_dev->ci_range.vpi_bits = FS155_VPI_BITS;
+ dev->atm_dev->ci_range.vci_bits = FS155_VCI_BITS;
+
+ /* Address bits we can't use should be compared to 0. */
+ write_fs (dev, RAC, 0);
+
+ /* Manual (AN9, page 6) says ASF1=0 means compare Utopia address
+ * too. I can't find ASF1 anywhere. Anyway, we AND with just the
+ * other bits, then compare with 0, which is exactly what we
+ * want. */
+ write_fs (dev, RAM, (1 << (28 - FS155_VPI_BITS - FS155_VCI_BITS)) - 1);
+ dev->nchannels = FS155_NR_CHANNELS;
+ }
+ dev->atm_vccs = kmalloc (dev->nchannels * sizeof (struct atm_vcc *),
+ GFP_KERNEL);
+ fs_dprintk (FS_DEBUG_ALLOC, "Alloc atmvccs: %p(%Zd)\n",
+ dev->atm_vccs, dev->nchannels * sizeof (struct atm_vcc *));
+
+ if (!dev->atm_vccs) {
+ printk (KERN_WARNING "Couldn't allocate memory for VCC buffers. Woops!\n");
+ /* XXX Clean up..... */
+ return 1;
+ }
+ memset (dev->atm_vccs, 0, dev->nchannels * sizeof (struct atm_vcc *));
+
+ dev->tx_inuse = kmalloc (dev->nchannels / 8 /* bits/byte */ , GFP_KERNEL);
+ fs_dprintk (FS_DEBUG_ALLOC, "Alloc tx_inuse: %p(%d)\n",
+ dev->tx_inuse, dev->nchannels / 8);
+
+ if (!dev->tx_inuse) {
+ printk (KERN_WARNING "Couldn't allocate memory for tx_inuse bits!\n");
+ /* XXX Clean up..... */
+ return 1;
+ }
+ memset (dev->tx_inuse, 0, dev->nchannels / 8);
+
+ /* -- RAS1 : FS155 and 50 differ. Default (0) should be OK for both */
+ /* -- RAS2 : FS50 only: Default is OK. */
+
+ /* DMAMODE, default should be OK. 
-- REW */ + write_fs (dev, DMAMR, DMAMR_TX_MODE_FULL); + + init_q (dev, &dev->hp_txq, TX_PQ(TXQ_HP), TXQ_NENTRIES, 0); + init_q (dev, &dev->lp_txq, TX_PQ(TXQ_LP), TXQ_NENTRIES, 0); + init_q (dev, &dev->tx_relq, TXB_RQ, TXQ_NENTRIES, 1); + init_q (dev, &dev->st_q, ST_Q, TXQ_NENTRIES, 1); + + for (i=0;i < FS_NR_FREE_POOLS;i++) { + init_fp (dev, &dev->rx_fp[i], RXB_FP(i), + rx_buf_sizes[i], rx_pool_sizes[i]); + top_off_fp (dev, &dev->rx_fp[i], GFP_KERNEL); + } + + + for (i=0;i < FS_NR_RX_QUEUES;i++) + init_q (dev, &dev->rx_rq[i], RXB_RQ(i), RXRQ_NENTRIES, 1); + + dev->irq = pci_dev->irq; + if (request_irq (dev->irq, fs_irq, SA_SHIRQ, "firestream", dev)) { + printk (KERN_WARNING "couldn't get irq %d for firestream.\n", pci_dev->irq); + /* XXX undo all previous stuff... */ + return 1; + } + fs_dprintk (FS_DEBUG_INIT, "Grabbed irq %d for dev at %p.\n", dev->irq, dev); + + /* We want to be notified of most things. Just the statistics count + overflows are not interesting */ + write_fs (dev, IMR, 0 + | ISR_RBRQ0_W + | ISR_RBRQ1_W + | ISR_RBRQ2_W + | ISR_RBRQ3_W + | ISR_TBRQ_W + | ISR_CSQ_W); + + write_fs (dev, SARMODE0, 0 + | (0 * SARMODE0_SHADEN) /* We don't use shadow registers. 
*/ + | (1 * SARMODE0_GINT) + | (1 * SARMODE0_INTMODE_READCLEAR) + | (0 * SARMODE0_CWRE) + | (IS_FS50(dev)?SARMODE0_PRPWT_FS50_5: + SARMODE0_PRPWT_FS155_3) + | (1 * SARMODE0_CALSUP_1) + | (IS_FS50 (dev)?(0 + | SARMODE0_RXVCS_32 + | SARMODE0_ABRVCS_32 + | SARMODE0_TXVCS_32): + (0 + | SARMODE0_RXVCS_1k + | SARMODE0_ABRVCS_1k + | SARMODE0_TXVCS_1k)) + | (1 * SARMODE0_RUN)); + + init_phy (dev, PHY_NTC_INIT); + + if (loopback == 2) { + write_phy (dev, 0x39, 0x000e); + } + +#ifdef FS_POLL_FREQ + init_timer (&dev->timer); + dev->timer.data = (unsigned long) dev; + dev->timer.function = fs_poll; + dev->timer.expires = jiffies + FS_POLL_FREQ; + add_timer (&dev->timer); +#endif + + dev->atm_dev->dev_data = dev; + + func_exit (); + return 0; +} + +static int __devinit firestream_init_one (struct pci_dev *pci_dev, + const struct pci_device_id *ent) +{ + struct atm_dev *atm_dev; + struct fs_dev *fs_dev; + + if (pci_enable_device(pci_dev)) + goto err_out; + + fs_dev = kmalloc (sizeof (struct fs_dev), GFP_KERNEL); + fs_dprintk (FS_DEBUG_ALLOC, "Alloc fs-dev: %p(%Zd)\n", + fs_dev, sizeof (struct fs_dev)); + if (!fs_dev) + goto err_out; + + memset (fs_dev, 0, sizeof (struct fs_dev)); + + atm_dev = atm_dev_register("fs", &ops, -1, NULL); + if (!atm_dev) + goto err_out_free_fs_dev; + + fs_dev->pci_dev = pci_dev; + fs_dev->atm_dev = atm_dev; + fs_dev->flags = ent->driver_data; + + if (fs_init(fs_dev)) + goto err_out_free_atm_dev; + + fs_dev->next = fs_boards; + fs_boards = fs_dev; + return 0; + + err_out_free_atm_dev: + atm_dev_deregister(atm_dev); + err_out_free_fs_dev: + kfree(fs_dev); + err_out: + return -ENODEV; +} + +static void __devexit firestream_remove_one (struct pci_dev *pdev) +{ + int i; + struct fs_dev *dev, *nxtdev; + struct fs_vcc *vcc; + struct FS_BPENTRY *fp, *nxt; + + func_enter (); + +#if 0 + printk ("hptxq:\n"); + for (i=0;i<60;i++) { + printk ("%d: %08x %08x %08x %08x \n", + i, pq[qp].cmd, pq[qp].p0, pq[qp].p1, pq[qp].p2); + qp++; + if (qp >= 60) qp = 0; + } + + 
printk ("descriptors:\n"); + for (i=0;i<60;i++) { + printk ("%d: %p: %08x %08x %p %p\n", + i, da[qd], dq[qd].flags, dq[qd].bsa, dq[qd].skb, dq[qd].dev); + qd++; + if (qd >= 60) qd = 0; + } +#endif + + for (dev = fs_boards;dev != NULL;dev=nxtdev) { + fs_dprintk (FS_DEBUG_CLEANUP, "Releasing resources for dev at %p.\n", dev); + + /* XXX Hit all the tx channels too! */ + + for (i=0;i < dev->nchannels;i++) { + if (dev->atm_vccs[i]) { + vcc = FS_VCC (dev->atm_vccs[i]); + submit_command (dev, &dev->hp_txq, + QE_CMD_TX_PURGE_INH | QE_CMD_IMM_INQ | vcc->channo, 0,0,0); + submit_command (dev, &dev->hp_txq, + QE_CMD_RX_PURGE_INH | QE_CMD_IMM_INQ | vcc->channo, 0,0,0); + + } + } + + /* XXX Wait a while for the chip to release all buffers. */ + + for (i=0;i < FS_NR_FREE_POOLS;i++) { + for (fp=bus_to_virt (read_fs (dev, FP_SA(dev->rx_fp[i].offset))); + !(fp->flags & FP_FLAGS_EPI);fp = nxt) { + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-skb: %p\n", fp->skb); + dev_kfree_skb_any (fp->skb); + nxt = bus_to_virt (fp->next); + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-d: %p\n", fp); + kfree (fp); + } + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-skb: %p\n", fp->skb); + dev_kfree_skb_any (fp->skb); + fs_dprintk (FS_DEBUG_ALLOC, "Free rec-d: %p\n", fp); + kfree (fp); + } + + /* Hang the chip in "reset", prevent it clobbering memory that is + no longer ours. 
*/ + reset_chip (dev); + + fs_dprintk (FS_DEBUG_CLEANUP, "Freeing irq%d.\n", dev->irq); + free_irq (dev->irq, dev); + del_timer (&dev->timer); + + atm_dev_deregister(dev->atm_dev); + free_queue (dev, &dev->hp_txq); + free_queue (dev, &dev->lp_txq); + free_queue (dev, &dev->tx_relq); + free_queue (dev, &dev->st_q); + + fs_dprintk (FS_DEBUG_ALLOC, "Free atmvccs: %p\n", dev->atm_vccs); + kfree (dev->atm_vccs); + + for (i=0;i< FS_NR_FREE_POOLS;i++) + free_freepool (dev, &dev->rx_fp[i]); + + for (i=0;i < FS_NR_RX_QUEUES;i++) + free_queue (dev, &dev->rx_rq[i]); + + fs_dprintk (FS_DEBUG_ALLOC, "Free fs-dev: %p\n", dev); + nxtdev = dev->next; + kfree (dev); + } + + func_exit (); +} + +static struct pci_device_id firestream_pci_tbl[] = { + { PCI_VENDOR_ID_FUJITSU_ME, PCI_DEVICE_ID_FUJITSU_FS50, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, FS_IS50}, + { PCI_VENDOR_ID_FUJITSU_ME, PCI_DEVICE_ID_FUJITSU_FS155, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, FS_IS155}, + { 0, } +}; + +MODULE_DEVICE_TABLE(pci, firestream_pci_tbl); + +static struct pci_driver firestream_driver = { + .name = "firestream", + .id_table = firestream_pci_tbl, + .probe = firestream_init_one, + .remove = __devexit_p(firestream_remove_one), +}; + +static int __init firestream_init_module (void) +{ + int error; + + func_enter (); + error = pci_register_driver(&firestream_driver); + func_exit (); + return error; +} + +static void __exit firestream_cleanup_module(void) +{ + pci_unregister_driver(&firestream_driver); +} + +module_init(firestream_init_module); +module_exit(firestream_cleanup_module); + +MODULE_LICENSE("GPL"); + + + diff --git a/drivers/atm/firestream.h b/drivers/atm/firestream.h new file mode 100644 index 000000000000..49e783e35ee9 --- /dev/null +++ b/drivers/atm/firestream.h @@ -0,0 +1,518 @@ +/* drivers/atm/firestream.h - FireStream 155 (MB86697) and + * FireStream 50 (MB86695) device driver + */ + +/* Written & (C) 2000 by R.E.Wolff@BitWizard.nl + * Copied snippets from zatm.c by Werner Almesberger, EPFL LRC/ICA + * 
and ambassador.c Copyright (C) 1995-1999 Madge Networks Ltd + */ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + + The GNU GPL is contained in /usr/doc/copyright/GPL on a Debian + system and in the file COPYING in the Linux kernel source. +*/ + + +/*********************************************************************** + * first the defines for the chip. * + ***********************************************************************/ + + +/********************* General chip parameters. ************************/ + +#define FS_NR_FREE_POOLS 8 +#define FS_NR_RX_QUEUES 4 + + +/********************* queues and queue access macros ******************/ + + +/* A queue entry. */ +struct FS_QENTRY { + u32 cmd; + u32 p0, p1, p2; +}; + + +/* A freepool entry. */ +struct FS_BPENTRY { + u32 flags; + u32 next; + u32 bsa; + u32 aal_bufsize; + + /* The hardware doesn't look at this, but we need the SKB somewhere... */ + struct sk_buff *skb; + struct freepool *fp; + struct fs_dev *dev; +}; + + +#define STATUS_CODE(qe) ((qe->cmd >> 22) & 0x3f) + + +/* OFFSETS against the base of a QUEUE... */ +#define QSA 0x00 +#define QEA 0x04 +#define QRP 0x08 +#define QWP 0x0c +#define QCNF 0x10 /* Only for Release queues! */ +/* Not for the transmit pending queue. */ + + +/* OFFSETS against the base of a FREE POOL... 
*/ +#define FPCNF 0x00 +#define FPSA 0x04 +#define FPEA 0x08 +#define FPCNT 0x0c +#define FPCTU 0x10 + +#define Q_SA(b) (b + QSA ) +#define Q_EA(b) (b + QEA ) +#define Q_RP(b) (b + QRP ) +#define Q_WP(b) (b + QWP ) +#define Q_CNF(b) (b + QCNF) + +#define FP_CNF(b) (b + FPCNF) +#define FP_SA(b) (b + FPSA) +#define FP_EA(b) (b + FPEA) +#define FP_CNT(b) (b + FPCNT) +#define FP_CTU(b) (b + FPCTU) + +/* bits in a queue register. */ +#define Q_FULL 0x1 +#define Q_EMPTY 0x2 +#define Q_INCWRAP 0x4 +#define Q_ADDR_MASK 0xfffffff0 + +/* bits in a FreePool config register */ +#define RBFP_RBS (0x1 << 16) +#define RBFP_RBSVAL (0x1 << 15) +#define RBFP_CME (0x1 << 12) +#define RBFP_DLP (0x1 << 11) +#define RBFP_BFPWT (0x1 << 0) + + + + +/* FireStream commands. */ +#define QE_CMD_NULL (0x00 << 22) +#define QE_CMD_REG_RD (0x01 << 22) +#define QE_CMD_REG_RDM (0x02 << 22) +#define QE_CMD_REG_WR (0x03 << 22) +#define QE_CMD_REG_WRM (0x04 << 22) +#define QE_CMD_CONFIG_TX (0x05 << 22) +#define QE_CMD_CONFIG_RX (0x06 << 22) +#define QE_CMD_PRP_RD (0x07 << 22) +#define QE_CMD_PRP_RDM (0x2a << 22) +#define QE_CMD_PRP_WR (0x09 << 22) +#define QE_CMD_PRP_WRM (0x2b << 22) +#define QE_CMD_RX_EN (0x0a << 22) +#define QE_CMD_RX_PURGE (0x0b << 22) +#define QE_CMD_RX_PURGE_INH (0x0c << 22) +#define QE_CMD_TX_EN (0x0d << 22) +#define QE_CMD_TX_PURGE (0x0e << 22) +#define QE_CMD_TX_PURGE_INH (0x0f << 22) +#define QE_CMD_RST_CG (0x10 << 22) +#define QE_CMD_SET_CG (0x11 << 22) +#define QE_CMD_RST_CLP (0x12 << 22) +#define QE_CMD_SET_CLP (0x13 << 22) +#define QE_CMD_OVERRIDE (0x14 << 22) +#define QE_CMD_ADD_BFP (0x15 << 22) +#define QE_CMD_DUMP_TX (0x16 << 22) +#define QE_CMD_DUMP_RX (0x17 << 22) +#define QE_CMD_LRAM_RD (0x18 << 22) +#define QE_CMD_LRAM_RDM (0x28 << 22) +#define QE_CMD_LRAM_WR (0x19 << 22) +#define QE_CMD_LRAM_WRM (0x29 << 22) +#define QE_CMD_LRAM_BSET (0x1a << 22) +#define QE_CMD_LRAM_BCLR (0x1b << 22) +#define QE_CMD_CONFIG_SEGM (0x1c << 22) +#define QE_CMD_READ_SEGM (0x1d << 22) 
+#define QE_CMD_CONFIG_ROUT (0x1e << 22) +#define QE_CMD_READ_ROUT (0x1f << 22) +#define QE_CMD_CONFIG_TM (0x20 << 22) +#define QE_CMD_READ_TM (0x21 << 22) +#define QE_CMD_CONFIG_TXBM (0x22 << 22) +#define QE_CMD_READ_TXBM (0x23 << 22) +#define QE_CMD_CONFIG_RXBM (0x24 << 22) +#define QE_CMD_READ_RXBM (0x25 << 22) +#define QE_CMD_CONFIG_REAS (0x26 << 22) +#define QE_CMD_READ_REAS (0x27 << 22) + +#define QE_TRANSMIT_DE (0x0 << 30) +#define QE_CMD_LINKED (0x1 << 30) +#define QE_CMD_IMM (0x2 << 30) +#define QE_CMD_IMM_INQ (0x3 << 30) + +#define TD_EPI (0x1 << 27) +#define TD_COMMAND (0x1 << 28) + +#define TD_DATA (0x0 << 29) +#define TD_RM_CELL (0x1 << 29) +#define TD_OAM_CELL (0x2 << 29) +#define TD_OAM_CELL_SEGMENT (0x3 << 29) + +#define TD_BPI (0x1 << 20) + +#define FP_FLAGS_EPI (0x1 << 27) + + +#define TX_PQ(i) (0x00 + (i) * 0x10) +#define TXB_RQ (0x20) +#define ST_Q (0x48) +#define RXB_FP(i) (0x90 + (i) * 0x14) +#define RXB_RQ(i) (0x134 + (i) * 0x14) + + +#define TXQ_HP 0 +#define TXQ_LP 1 + +/* Phew. You don't want to know how many revisions these simple queue + * address macros went through before I got them nice and compact as + * they are now. -- REW + */ + + +/* And now for something completely different: + * The rest of the registers... 
*/ + + +#define CMDR0 0x34 +#define CMDR1 0x38 +#define CMDR2 0x3c +#define CMDR3 0x40 + + +#define SARMODE0 0x5c + +#define SARMODE0_TXVCS_0 (0x0 << 0) +#define SARMODE0_TXVCS_1k (0x1 << 0) +#define SARMODE0_TXVCS_2k (0x2 << 0) +#define SARMODE0_TXVCS_4k (0x3 << 0) +#define SARMODE0_TXVCS_8k (0x4 << 0) +#define SARMODE0_TXVCS_16k (0x5 << 0) +#define SARMODE0_TXVCS_32k (0x6 << 0) +#define SARMODE0_TXVCS_64k (0x7 << 0) +#define SARMODE0_TXVCS_32 (0x8 << 0) + +#define SARMODE0_ABRVCS_0 (0x0 << 4) +#define SARMODE0_ABRVCS_512 (0x1 << 4) +#define SARMODE0_ABRVCS_1k (0x2 << 4) +#define SARMODE0_ABRVCS_2k (0x3 << 4) +#define SARMODE0_ABRVCS_4k (0x4 << 4) +#define SARMODE0_ABRVCS_8k (0x5 << 4) +#define SARMODE0_ABRVCS_16k (0x6 << 4) +#define SARMODE0_ABRVCS_32k (0x7 << 4) +#define SARMODE0_ABRVCS_32 (0x9 << 4) /* The others are "8", this one really has to + be 9. Tell me you don't believe me. -- REW */ + +#define SARMODE0_RXVCS_0 (0x0 << 8) +#define SARMODE0_RXVCS_1k (0x1 << 8) +#define SARMODE0_RXVCS_2k (0x2 << 8) +#define SARMODE0_RXVCS_4k (0x3 << 8) +#define SARMODE0_RXVCS_8k (0x4 << 8) +#define SARMODE0_RXVCS_16k (0x5 << 8) +#define SARMODE0_RXVCS_32k (0x6 << 8) +#define SARMODE0_RXVCS_64k (0x7 << 8) +#define SARMODE0_RXVCS_32 (0x8 << 8) + +#define SARMODE0_CALSUP_1 (0x0 << 12) +#define SARMODE0_CALSUP_2 (0x1 << 12) +#define SARMODE0_CALSUP_3 (0x2 << 12) +#define SARMODE0_CALSUP_4 (0x3 << 12) + +#define SARMODE0_PRPWT_FS50_0 (0x0 << 14) +#define SARMODE0_PRPWT_FS50_2 (0x1 << 14) +#define SARMODE0_PRPWT_FS50_5 (0x2 << 14) +#define SARMODE0_PRPWT_FS50_11 (0x3 << 14) + +#define SARMODE0_PRPWT_FS155_0 (0x0 << 14) +#define SARMODE0_PRPWT_FS155_1 (0x1 << 14) +#define SARMODE0_PRPWT_FS155_2 (0x2 << 14) +#define SARMODE0_PRPWT_FS155_3 (0x3 << 14) + +#define SARMODE0_SRTS0 (0x1 << 23) +#define SARMODE0_SRTS1 (0x1 << 24) + +#define SARMODE0_RUN (0x1 << 25) + +#define SARMODE0_UNLOCK (0x1 << 26) +#define SARMODE0_CWRE (0x1 << 27) + + +#define SARMODE0_INTMODE_READCLEAR (0x0 << 
28) +#define SARMODE0_INTMODE_READNOCLEAR (0x1 << 28) +#define SARMODE0_INTMODE_READNOCLEARINHIBIT (0x2 << 28) +#define SARMODE0_INTMODE_READCLEARINHIBIT (0x3 << 28) /* Tell me you don't believe me. */ + +#define SARMODE0_GINT (0x1 << 30) +#define SARMODE0_SHADEN (0x1 << 31) + + +#define SARMODE1 0x60 + + +#define SARMODE1_TRTL_SHIFT 0 /* Program to 0 */ +#define SARMODE1_RRTL_SHIFT 4 /* Program to 0 */ + +#define SARMODE1_TAGM (0x1 << 8) /* Program to 0 */ + +#define SARMODE1_HECM0 (0x1 << 9) +#define SARMODE1_HECM1 (0x1 << 10) +#define SARMODE1_HECM2 (0x1 << 11) + +#define SARMODE1_GFCE (0x1 << 14) +#define SARMODE1_GFCR (0x1 << 15) +#define SARMODE1_PMS (0x1 << 18) +#define SARMODE1_GPRI (0x1 << 19) +#define SARMODE1_GPAS (0x1 << 20) +#define SARMODE1_GVAS (0x1 << 21) +#define SARMODE1_GNAM (0x1 << 22) +#define SARMODE1_GPLEN (0x1 << 23) +#define SARMODE1_DUMPE (0x1 << 24) +#define SARMODE1_OAMCRC (0x1 << 25) +#define SARMODE1_DCOAM (0x1 << 26) +#define SARMODE1_DCRM (0x1 << 27) +#define SARMODE1_TSTLP (0x1 << 28) +#define SARMODE1_DEFHEC (0x1 << 29) + + +#define ISR 0x64 +#define IUSR 0x68 +#define IMR 0x6c + +#define ISR_LPCO (0x1 << 0) +#define ISR_DPCO (0x1 << 1) +#define ISR_RBRQ0_W (0x1 << 2) +#define ISR_RBRQ1_W (0x1 << 3) +#define ISR_RBRQ2_W (0x1 << 4) +#define ISR_RBRQ3_W (0x1 << 5) +#define ISR_RBRQ0_NF (0x1 << 6) +#define ISR_RBRQ1_NF (0x1 << 7) +#define ISR_RBRQ2_NF (0x1 << 8) +#define ISR_RBRQ3_NF (0x1 << 9) +#define ISR_BFP_SC (0x1 << 10) +#define ISR_INIT (0x1 << 11) +#define ISR_INIT_ERR (0x1 << 12) /* Documented as "reserved" */ +#define ISR_USCEO (0x1 << 13) +#define ISR_UPEC0 (0x1 << 14) +#define ISR_VPFCO (0x1 << 15) +#define ISR_CRCCO (0x1 << 16) +#define ISR_HECO (0x1 << 17) +#define ISR_TBRQ_W (0x1 << 18) +#define ISR_TBRQ_NF (0x1 << 19) +#define ISR_CTPQ_E (0x1 << 20) +#define ISR_GFC_C0 (0x1 << 21) +#define ISR_PCI_FTL (0x1 << 22) +#define ISR_CSQ_W (0x1 << 23) +#define ISR_CSQ_NF (0x1 << 24) +#define ISR_EXT_INT (0x1 << 25) +#define 
ISR_RXDMA_S (0x1 << 26) + + +#define TMCONF 0x78 +/* Bits? */ + + +#define CALPRESCALE 0x7c +/* Bits? */ + +#define CELLOSCONF 0x84 +#define CELLOSCONF_COTS (0x1 << 28) +#define CELLOSCONF_CEN (0x1 << 27) +#define CELLOSCONF_SC8 (0x3 << 24) +#define CELLOSCONF_SC4 (0x2 << 24) +#define CELLOSCONF_SC2 (0x1 << 24) +#define CELLOSCONF_SC1 (0x0 << 24) + +#define CELLOSCONF_COBS (0x1 << 16) +#define CELLOSCONF_COPK (0x1 << 8) +#define CELLOSCONF_COST (0x1 << 0) +/* Bits? */ + +#define RAS0 0x1bc +#define RAS0_DCD_XHLT (0x1 << 31) + +#define RAS0_VPSEL (0x1 << 16) +#define RAS0_VCSEL (0x1 << 0) + +#define RAS1 0x1c0 +#define RAS1_UTREG (0x1 << 5) + + +#define DMAMR 0x1cc +#define DMAMR_TX_MODE_FULL (0x0 << 0) +#define DMAMR_TX_MODE_PART (0x1 << 0) +#define DMAMR_TX_MODE_NONE (0x2 << 0) /* And 3 */ + + + +#define RAS2 0x280 + +#define RAS2_NNI (0x1 << 0) +#define RAS2_USEL (0x1 << 1) +#define RAS2_UBS (0x1 << 2) + + + +struct fs_transmit_config { + u32 flags; + u32 atm_hdr; + u32 TMC[4]; + u32 spec; + u32 rtag[3]; +}; + +#define TC_FLAGS_AAL5 (0x0 << 29) +#define TC_FLAGS_TRANSPARENT_PAYLOAD (0x1 << 29) +#define TC_FLAGS_TRANSPARENT_CELL (0x2 << 29) +#define TC_FLAGS_STREAMING (0x1 << 28) +#define TC_FLAGS_PACKET (0x0) +#define TC_FLAGS_TYPE_ABR (0x0 << 22) +#define TC_FLAGS_TYPE_CBR (0x1 << 22) +#define TC_FLAGS_TYPE_VBR (0x2 << 22) +#define TC_FLAGS_TYPE_UBR (0x3 << 22) +#define TC_FLAGS_CAL0 (0x0 << 20) +#define TC_FLAGS_CAL1 (0x1 << 20) +#define TC_FLAGS_CAL2 (0x2 << 20) +#define TC_FLAGS_CAL3 (0x3 << 20) + + +#define RC_FLAGS_NAM (0x1 << 13) +#define RC_FLAGS_RXBM_PSB (0x0 << 14) +#define RC_FLAGS_RXBM_CIF (0x1 << 14) +#define RC_FLAGS_RXBM_PMB (0x2 << 14) +#define RC_FLAGS_RXBM_STR (0x4 << 14) +#define RC_FLAGS_RXBM_SAF (0x6 << 14) +#define RC_FLAGS_RXBM_POS (0x6 << 14) +#define RC_FLAGS_BFPS (0x1 << 17) + +#define RC_FLAGS_BFPS_BFP (0x1 << 17) + +#define RC_FLAGS_BFPS_BFP0 (0x0 << 17) +#define RC_FLAGS_BFPS_BFP1 (0x1 << 17) +#define RC_FLAGS_BFPS_BFP2 (0x2 << 17) 
+#define RC_FLAGS_BFPS_BFP3 (0x3 << 17) +#define RC_FLAGS_BFPS_BFP4 (0x4 << 17) +#define RC_FLAGS_BFPS_BFP5 (0x5 << 17) +#define RC_FLAGS_BFPS_BFP6 (0x6 << 17) +#define RC_FLAGS_BFPS_BFP7 (0x7 << 17) +#define RC_FLAGS_BFPS_BFP01 (0x8 << 17) +#define RC_FLAGS_BFPS_BFP23 (0x9 << 17) +#define RC_FLAGS_BFPS_BFP45 (0xa << 17) +#define RC_FLAGS_BFPS_BFP67 (0xb << 17) +#define RC_FLAGS_BFPS_BFP07 (0xc << 17) +#define RC_FLAGS_BFPS_BFP27 (0xd << 17) +#define RC_FLAGS_BFPS_BFP47 (0xe << 17) + +#define RC_FLAGS_BFPS (0x1 << 17) +#define RC_FLAGS_BFPP (0x1 << 21) +#define RC_FLAGS_TEVC (0x1 << 22) +#define RC_FLAGS_TEP (0x1 << 23) +#define RC_FLAGS_AAL5 (0x0 << 24) +#define RC_FLAGS_TRANSP (0x1 << 24) +#define RC_FLAGS_TRANSC (0x2 << 24) +#define RC_FLAGS_ML (0x1 << 27) +#define RC_FLAGS_TRBRM (0x1 << 28) +#define RC_FLAGS_PRI (0x1 << 29) +#define RC_FLAGS_HOAM (0x1 << 30) +#define RC_FLAGS_CRC10 (0x1 << 31) + + +#define RAC 0x1c8 +#define RAM 0x1c4 + + + +/************************************************************************ + * Then the datastructures that the DRIVER uses. 
* + ************************************************************************/ + +#define TXQ_NENTRIES 32 +#define RXRQ_NENTRIES 1024 + + +struct fs_vcc { + int channo; + wait_queue_head_t close_wait; + struct sk_buff *last_skb; +}; + + +struct queue { + struct FS_QENTRY *sa, *ea; + int offset; +}; + +struct freepool { + int offset; + int bufsize; + int nr_buffers; + int n; +}; + + +struct fs_dev { + struct fs_dev *next; /* other FS devices */ + int flags; + + unsigned char irq; /* IRQ */ + struct pci_dev *pci_dev; /* PCI stuff */ + struct atm_dev *atm_dev; + struct timer_list timer; + + unsigned long hw_base; /* mem base address */ + void __iomem *base; /* Mapping of base address */ + int channo; + unsigned long channel_mask; + + struct queue hp_txq, lp_txq, tx_relq, st_q; + struct freepool rx_fp[FS_NR_FREE_POOLS]; + struct queue rx_rq[FS_NR_RX_QUEUES]; + + int nchannels; + struct atm_vcc **atm_vccs; + void *tx_inuse; + int ntxpckts; +}; + + + + +/* Number of channels that the FS50 supports. */ +#define FS50_CHANNEL_BITS 5 +#define FS50_NR_CHANNELS (1 << FS50_CHANNEL_BITS) + + +#define FS_DEV(atm_dev) ((struct fs_dev *) (atm_dev)->dev_data) +#define FS_VCC(atm_vcc) ((struct fs_vcc *) (atm_vcc)->dev_data) + + +#define FS_IS50 0x1 +#define FS_IS155 0x2 + +#define IS_FS50(dev) (dev->flags & FS_IS50) +#define IS_FS155(dev) (dev->flags & FS_IS155) + +/* Within limits this is user-configurable. */ +/* Note: Currently the sum (10 -> 1k channels) is hardcoded in the driver. */ +#define FS155_VPI_BITS 4 +#define FS155_VCI_BITS 6 + +#define FS155_CHANNEL_BITS (FS155_VPI_BITS + FS155_VCI_BITS) +#define FS155_NR_CHANNELS (1 << FS155_CHANNEL_BITS) diff --git a/drivers/atm/fore200e.c b/drivers/atm/fore200e.c new file mode 100644 index 000000000000..196b33644627 --- /dev/null +++ b/drivers/atm/fore200e.c @@ -0,0 +1,3249 @@ +/* + $Id: fore200e.c,v 1.5 2000/04/14 10:10:34 davem Exp $ + + A FORE Systems 200E-series driver for ATM on Linux. 
+ Christophe Lizzi (lizzi@cnam.fr), October 1999-March 2003. + + Based on the PCA-200E driver from Uwe Dannowski (Uwe.Dannowski@inf.tu-dresden.de). + + This driver simultaneously supports PCA-200E and SBA-200E adapters + on i386, alpha (untested), powerpc, sparc and sparc64 architectures. + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ + + +#include <linux/config.h> +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/init.h> +#include <linux/capability.h> +#include <linux/sched.h> +#include <linux/interrupt.h> +#include <linux/bitops.h> +#include <linux/pci.h> +#include <linux/module.h> +#include <linux/atmdev.h> +#include <linux/sonet.h> +#include <linux/atm_suni.h> +#include <linux/dma-mapping.h> +#include <linux/delay.h> +#include <asm/io.h> +#include <asm/string.h> +#include <asm/page.h> +#include <asm/irq.h> +#include <asm/dma.h> +#include <asm/byteorder.h> +#include <asm/uaccess.h> +#include <asm/atomic.h> + +#ifdef CONFIG_ATM_FORE200E_SBA +#include <asm/idprom.h> +#include <asm/sbus.h> +#include <asm/openprom.h> +#include <asm/oplib.h> +#include <asm/pgtable.h> +#endif + +#if defined(CONFIG_ATM_FORE200E_USE_TASKLET) /* defer interrupt work to a tasklet */ +#define FORE200E_USE_TASKLET +#endif + +#if 0 /* enable the debugging code of the buffer supply queues */ +#define FORE200E_BSQ_DEBUG +#endif + 
+#if 1 /* ensure correct handling of 52-byte AAL0 SDUs expected by atmdump-like apps */ +#define FORE200E_52BYTE_AAL0_SDU +#endif + +#include "fore200e.h" +#include "suni.h" + +#define FORE200E_VERSION "0.3e" + +#define FORE200E "fore200e: " + +#if 0 /* override .config */ +#define CONFIG_ATM_FORE200E_DEBUG 1 +#endif +#if defined(CONFIG_ATM_FORE200E_DEBUG) && (CONFIG_ATM_FORE200E_DEBUG > 0) +#define DPRINTK(level, format, args...) do { if (CONFIG_ATM_FORE200E_DEBUG >= (level)) \ + printk(FORE200E format, ##args); } while (0) +#else +#define DPRINTK(level, format, args...) do {} while (0) +#endif + + +#define FORE200E_ALIGN(addr, alignment) \ + ((((unsigned long)(addr) + (alignment - 1)) & ~(alignment - 1)) - (unsigned long)(addr)) + +#define FORE200E_DMA_INDEX(dma_addr, type, index) ((dma_addr) + (index) * sizeof(type)) + +#define FORE200E_INDEX(virt_addr, type, index) (&((type *)(virt_addr))[ index ]) + +#define FORE200E_NEXT_ENTRY(index, modulo) (index = ++(index) % (modulo)) + +#if 1 +#define ASSERT(expr) if (!(expr)) { \ + printk(FORE200E "assertion failed! 
%s[%d]: %s\n", \ + __FUNCTION__, __LINE__, #expr); \ + panic(FORE200E "%s", __FUNCTION__); \ + } +#else +#define ASSERT(expr) do {} while (0) +#endif + + +static const struct atmdev_ops fore200e_ops; +static const struct fore200e_bus fore200e_bus[]; + +static LIST_HEAD(fore200e_boards); + + +MODULE_AUTHOR("Christophe Lizzi - credits to Uwe Dannowski and Heikki Vatiainen"); +MODULE_DESCRIPTION("FORE Systems 200E-series ATM driver - version " FORE200E_VERSION); +MODULE_SUPPORTED_DEVICE("PCA-200E, SBA-200E"); + + +static const int fore200e_rx_buf_nbr[ BUFFER_SCHEME_NBR ][ BUFFER_MAGN_NBR ] = { + { BUFFER_S1_NBR, BUFFER_L1_NBR }, + { BUFFER_S2_NBR, BUFFER_L2_NBR } +}; + +static const int fore200e_rx_buf_size[ BUFFER_SCHEME_NBR ][ BUFFER_MAGN_NBR ] = { + { BUFFER_S1_SIZE, BUFFER_L1_SIZE }, + { BUFFER_S2_SIZE, BUFFER_L2_SIZE } +}; + + +#if defined(CONFIG_ATM_FORE200E_DEBUG) && (CONFIG_ATM_FORE200E_DEBUG > 0) +static const char* fore200e_traffic_class[] = { "NONE", "UBR", "CBR", "VBR", "ABR", "ANY" }; +#endif + + +#if 0 /* currently unused */ +static int +fore200e_fore2atm_aal(enum fore200e_aal aal) +{ + switch(aal) { + case FORE200E_AAL0: return ATM_AAL0; + case FORE200E_AAL34: return ATM_AAL34; + case FORE200E_AAL5: return ATM_AAL5; + } + + return -EINVAL; +} +#endif + + +static enum fore200e_aal +fore200e_atm2fore_aal(int aal) +{ + switch(aal) { + case ATM_AAL0: return FORE200E_AAL0; + case ATM_AAL34: return FORE200E_AAL34; + case ATM_AAL1: + case ATM_AAL2: + case ATM_AAL5: return FORE200E_AAL5; + } + + return -EINVAL; +} + + +static char* +fore200e_irq_itoa(int irq) +{ +#if defined(__sparc_v9__) + return __irq_itoa(irq); +#else + static char str[8]; + sprintf(str, "%d", irq); + return str; +#endif +} + + +static void* +fore200e_kmalloc(int size, int flags) +{ + void* chunk = kmalloc(size, flags); + + if (chunk) + memset(chunk, 0x00, size); + else + printk(FORE200E "kmalloc() failed, requested size = %d, flags = 0x%x\n", size, flags); + + return chunk; +} + + +static 
void +fore200e_kfree(void* chunk) +{ + kfree(chunk); +} + + +/* allocate and align a chunk of memory intended to hold the data being exchanged + between the driver and the adapter (using streaming DVMA) */ + +static int +fore200e_chunk_alloc(struct fore200e* fore200e, struct chunk* chunk, int size, int alignment, int direction) +{ + unsigned long offset = 0; + + if (alignment <= sizeof(int)) + alignment = 0; + + chunk->alloc_size = size + alignment; + chunk->align_size = size; + chunk->direction = direction; + + chunk->alloc_addr = fore200e_kmalloc(chunk->alloc_size, GFP_KERNEL | GFP_DMA); + if (chunk->alloc_addr == NULL) + return -ENOMEM; + + if (alignment > 0) + offset = FORE200E_ALIGN(chunk->alloc_addr, alignment); + + chunk->align_addr = chunk->alloc_addr + offset; + + chunk->dma_addr = fore200e->bus->dma_map(fore200e, chunk->align_addr, chunk->align_size, direction); + + return 0; +} + + +/* free a chunk of memory */ + +static void +fore200e_chunk_free(struct fore200e* fore200e, struct chunk* chunk) +{ + fore200e->bus->dma_unmap(fore200e, chunk->dma_addr, chunk->dma_size, chunk->direction); + + fore200e_kfree(chunk->alloc_addr); +} + + +static void +fore200e_spin(int msecs) +{ + unsigned long timeout = jiffies + msecs_to_jiffies(msecs); + while (time_before(jiffies, timeout)); +} + + +static int +fore200e_poll(struct fore200e* fore200e, volatile u32* addr, u32 val, int msecs) +{ + unsigned long timeout = jiffies + msecs_to_jiffies(msecs); + int ok; + + mb(); + do { + if ((ok = (*addr == val)) || (*addr & STATUS_ERROR)) + break; + + } while (time_before(jiffies, timeout)); + +#if 1 + if (!ok) { + printk(FORE200E "cmd polling failed, got status 0x%08x, expected 0x%08x\n", + *addr, val); + } +#endif + + return ok; +} + + +static int +fore200e_io_poll(struct fore200e* fore200e, volatile u32 __iomem *addr, u32 val, int msecs) +{ + unsigned long timeout = jiffies + msecs_to_jiffies(msecs); + int ok; + + do { + if ((ok = (fore200e->bus->read(addr) == val))) + break; 
+ + } while (time_before(jiffies, timeout)); + +#if 1 + if (!ok) { + printk(FORE200E "I/O polling failed, got status 0x%08x, expected 0x%08x\n", + fore200e->bus->read(addr), val); + } +#endif + + return ok; +} + + +static void +fore200e_free_rx_buf(struct fore200e* fore200e) +{ + int scheme, magn, nbr; + struct buffer* buffer; + + for (scheme = 0; scheme < BUFFER_SCHEME_NBR; scheme++) { + for (magn = 0; magn < BUFFER_MAGN_NBR; magn++) { + + if ((buffer = fore200e->host_bsq[ scheme ][ magn ].buffer) != NULL) { + + for (nbr = 0; nbr < fore200e_rx_buf_nbr[ scheme ][ magn ]; nbr++) { + + struct chunk* data = &buffer[ nbr ].data; + + if (data->alloc_addr != NULL) + fore200e_chunk_free(fore200e, data); + } + } + } + } +} + + +static void +fore200e_uninit_bs_queue(struct fore200e* fore200e) +{ + int scheme, magn; + + for (scheme = 0; scheme < BUFFER_SCHEME_NBR; scheme++) { + for (magn = 0; magn < BUFFER_MAGN_NBR; magn++) { + + struct chunk* status = &fore200e->host_bsq[ scheme ][ magn ].status; + struct chunk* rbd_block = &fore200e->host_bsq[ scheme ][ magn ].rbd_block; + + if (status->alloc_addr) + fore200e->bus->dma_chunk_free(fore200e, status); + + if (rbd_block->alloc_addr) + fore200e->bus->dma_chunk_free(fore200e, rbd_block); + } + } +} + + +static int +fore200e_reset(struct fore200e* fore200e, int diag) +{ + int ok; + + fore200e->cp_monitor = fore200e->virt_base + FORE200E_CP_MONITOR_OFFSET; + + fore200e->bus->write(BSTAT_COLD_START, &fore200e->cp_monitor->bstat); + + fore200e->bus->reset(fore200e); + + if (diag) { + ok = fore200e_io_poll(fore200e, &fore200e->cp_monitor->bstat, BSTAT_SELFTEST_OK, 1000); + if (ok == 0) { + + printk(FORE200E "device %s self-test failed\n", fore200e->name); + return -ENODEV; + } + + printk(FORE200E "device %s self-test passed\n", fore200e->name); + + fore200e->state = FORE200E_STATE_RESET; + } + + return 0; +} + + +static void +fore200e_shutdown(struct fore200e* fore200e) +{ + printk(FORE200E "removing device %s at 0x%lx, IRQ %s\n", + 
fore200e->name, fore200e->phys_base, + fore200e_irq_itoa(fore200e->irq)); + + if (fore200e->state > FORE200E_STATE_RESET) { + /* first, reset the board to prevent further interrupts or data transfers */ + fore200e_reset(fore200e, 0); + } + + /* then, release all allocated resources */ + switch(fore200e->state) { + + case FORE200E_STATE_COMPLETE: + if (fore200e->stats) + kfree(fore200e->stats); + + case FORE200E_STATE_IRQ: + free_irq(fore200e->irq, fore200e->atm_dev); + + case FORE200E_STATE_ALLOC_BUF: + fore200e_free_rx_buf(fore200e); + + case FORE200E_STATE_INIT_BSQ: + fore200e_uninit_bs_queue(fore200e); + + case FORE200E_STATE_INIT_RXQ: + fore200e->bus->dma_chunk_free(fore200e, &fore200e->host_rxq.status); + fore200e->bus->dma_chunk_free(fore200e, &fore200e->host_rxq.rpd); + + case FORE200E_STATE_INIT_TXQ: + fore200e->bus->dma_chunk_free(fore200e, &fore200e->host_txq.status); + fore200e->bus->dma_chunk_free(fore200e, &fore200e->host_txq.tpd); + + case FORE200E_STATE_INIT_CMDQ: + fore200e->bus->dma_chunk_free(fore200e, &fore200e->host_cmdq.status); + + case FORE200E_STATE_INITIALIZE: + /* nothing to do for that state */ + + case FORE200E_STATE_START_FW: + /* nothing to do for that state */ + + case FORE200E_STATE_LOAD_FW: + /* nothing to do for that state */ + + case FORE200E_STATE_RESET: + /* nothing to do for that state */ + + case FORE200E_STATE_MAP: + fore200e->bus->unmap(fore200e); + + case FORE200E_STATE_CONFIGURE: + /* nothing to do for that state */ + + case FORE200E_STATE_REGISTER: + /* XXX shouldn't we *start* by deregistering the device? 
*/ + atm_dev_deregister(fore200e->atm_dev); + + case FORE200E_STATE_BLANK: + /* nothing to do for that state */ + break; + } +} + + +#ifdef CONFIG_ATM_FORE200E_PCA + +static u32 fore200e_pca_read(volatile u32 __iomem *addr) +{ + /* on big-endian hosts, the board is configured to convert + the endianness of slave RAM accesses */ + return le32_to_cpu(readl(addr)); +} + + +static void fore200e_pca_write(u32 val, volatile u32 __iomem *addr) +{ + /* on big-endian hosts, the board is configured to convert + the endianness of slave RAM accesses */ + writel(cpu_to_le32(val), addr); +} + + +static u32 +fore200e_pca_dma_map(struct fore200e* fore200e, void* virt_addr, int size, int direction) +{ + u32 dma_addr = pci_map_single((struct pci_dev*)fore200e->bus_dev, virt_addr, size, direction); + + DPRINTK(3, "PCI DVMA mapping: virt_addr = 0x%p, size = %d, direction = %d, --> dma_addr = 0x%08x\n", + virt_addr, size, direction, dma_addr); + + return dma_addr; +} + + +static void +fore200e_pca_dma_unmap(struct fore200e* fore200e, u32 dma_addr, int size, int direction) +{ + DPRINTK(3, "PCI DVMA unmapping: dma_addr = 0x%08x, size = %d, direction = %d\n", + dma_addr, size, direction); + + pci_unmap_single((struct pci_dev*)fore200e->bus_dev, dma_addr, size, direction); +} + + +static void +fore200e_pca_dma_sync_for_cpu(struct fore200e* fore200e, u32 dma_addr, int size, int direction) +{ + DPRINTK(3, "PCI DVMA sync: dma_addr = 0x%08x, size = %d, direction = %d\n", dma_addr, size, direction); + + pci_dma_sync_single_for_cpu((struct pci_dev*)fore200e->bus_dev, dma_addr, size, direction); +} + +static void +fore200e_pca_dma_sync_for_device(struct fore200e* fore200e, u32 dma_addr, int size, int direction) +{ + DPRINTK(3, "PCI DVMA sync: dma_addr = 0x%08x, size = %d, direction = %d\n", dma_addr, size, direction); + + pci_dma_sync_single_for_device((struct pci_dev*)fore200e->bus_dev, dma_addr, size, direction); +} + + +/* allocate a DMA consistent chunk of memory intended to act as a 
communication mechanism + (to hold descriptors, status, queues, etc.) shared by the driver and the adapter */ + +static int +fore200e_pca_dma_chunk_alloc(struct fore200e* fore200e, struct chunk* chunk, + int size, int nbr, int alignment) +{ + /* returned chunks are page-aligned */ + chunk->alloc_size = size * nbr; + chunk->alloc_addr = pci_alloc_consistent((struct pci_dev*)fore200e->bus_dev, + chunk->alloc_size, + &chunk->dma_addr); + + if ((chunk->alloc_addr == NULL) || (chunk->dma_addr == 0)) + return -ENOMEM; + + chunk->align_addr = chunk->alloc_addr; + + return 0; +} + + +/* free a DMA consistent chunk of memory */ + +static void +fore200e_pca_dma_chunk_free(struct fore200e* fore200e, struct chunk* chunk) +{ + pci_free_consistent((struct pci_dev*)fore200e->bus_dev, + chunk->alloc_size, + chunk->alloc_addr, + chunk->dma_addr); +} + + +static int +fore200e_pca_irq_check(struct fore200e* fore200e) +{ + /* this is a 1 bit register */ + int irq_posted = readl(fore200e->regs.pca.psr); + +#if defined(CONFIG_ATM_FORE200E_DEBUG) && (CONFIG_ATM_FORE200E_DEBUG == 2) + if (irq_posted && (readl(fore200e->regs.pca.hcr) & PCA200E_HCR_OUTFULL)) { + DPRINTK(2,"FIFO OUT full, device %d\n", fore200e->atm_dev->number); + } +#endif + + return irq_posted; +} + + +static void +fore200e_pca_irq_ack(struct fore200e* fore200e) +{ + writel(PCA200E_HCR_CLRINTR, fore200e->regs.pca.hcr); +} + + +static void +fore200e_pca_reset(struct fore200e* fore200e) +{ + writel(PCA200E_HCR_RESET, fore200e->regs.pca.hcr); + fore200e_spin(10); + writel(0, fore200e->regs.pca.hcr); +} + + +static int __init +fore200e_pca_map(struct fore200e* fore200e) +{ + DPRINTK(2, "device %s being mapped in memory\n", fore200e->name); + + fore200e->virt_base = ioremap(fore200e->phys_base, PCA200E_IOSPACE_LENGTH); + + if (fore200e->virt_base == NULL) { + printk(FORE200E "can't map device %s\n", fore200e->name); + return -EFAULT; + } + + DPRINTK(1, "device %s mapped to 0x%p\n", fore200e->name, fore200e->virt_base); + + /* 
gain access to the PCA specific registers */ + fore200e->regs.pca.hcr = fore200e->virt_base + PCA200E_HCR_OFFSET; + fore200e->regs.pca.imr = fore200e->virt_base + PCA200E_IMR_OFFSET; + fore200e->regs.pca.psr = fore200e->virt_base + PCA200E_PSR_OFFSET; + + fore200e->state = FORE200E_STATE_MAP; + return 0; +} + + +static void +fore200e_pca_unmap(struct fore200e* fore200e) +{ + DPRINTK(2, "device %s being unmapped from memory\n", fore200e->name); + + if (fore200e->virt_base != NULL) + iounmap(fore200e->virt_base); +} + + +static int __init +fore200e_pca_configure(struct fore200e* fore200e) +{ + struct pci_dev* pci_dev = (struct pci_dev*)fore200e->bus_dev; + u8 master_ctrl, latency; + + DPRINTK(2, "device %s being configured\n", fore200e->name); + + if ((pci_dev->irq == 0) || (pci_dev->irq == 0xFF)) { + printk(FORE200E "incorrect IRQ setting - misconfigured PCI-PCI bridge?\n"); + return -EIO; + } + + pci_read_config_byte(pci_dev, PCA200E_PCI_MASTER_CTRL, &master_ctrl); + + master_ctrl = master_ctrl +#if defined(__BIG_ENDIAN) + /* request the PCA board to convert the endianness of slave RAM accesses */ + | PCA200E_CTRL_CONVERT_ENDIAN +#endif +#if 0 + | PCA200E_CTRL_DIS_CACHE_RD + | PCA200E_CTRL_DIS_WRT_INVAL + | PCA200E_CTRL_ENA_CONT_REQ_MODE + | PCA200E_CTRL_2_CACHE_WRT_INVAL +#endif + | PCA200E_CTRL_LARGE_PCI_BURSTS; + + pci_write_config_byte(pci_dev, PCA200E_PCI_MASTER_CTRL, master_ctrl); + + /* raise latency from 32 (default) to 192, as this seems to prevent NIC + lockups (under heavy rx loads) due to continuous 'FIFO OUT full' condition. 
+ this may impact the performance of other PCI devices on the same bus, though */ + latency = 192; + pci_write_config_byte(pci_dev, PCI_LATENCY_TIMER, latency); + + fore200e->state = FORE200E_STATE_CONFIGURE; + return 0; +} + + +static int __init +fore200e_pca_prom_read(struct fore200e* fore200e, struct prom_data* prom) +{ + struct host_cmdq* cmdq = &fore200e->host_cmdq; + struct host_cmdq_entry* entry = &cmdq->host_entry[ cmdq->head ]; + struct prom_opcode opcode; + int ok; + u32 prom_dma; + + FORE200E_NEXT_ENTRY(cmdq->head, QUEUE_SIZE_CMD); + + opcode.opcode = OPCODE_GET_PROM; + opcode.pad = 0; + + prom_dma = fore200e->bus->dma_map(fore200e, prom, sizeof(struct prom_data), DMA_FROM_DEVICE); + + fore200e->bus->write(prom_dma, &entry->cp_entry->cmd.prom_block.prom_haddr); + + *entry->status = STATUS_PENDING; + + fore200e->bus->write(*(u32*)&opcode, (u32 __iomem *)&entry->cp_entry->cmd.prom_block.opcode); + + ok = fore200e_poll(fore200e, entry->status, STATUS_COMPLETE, 400); + + *entry->status = STATUS_FREE; + + fore200e->bus->dma_unmap(fore200e, prom_dma, sizeof(struct prom_data), DMA_FROM_DEVICE); + + if (ok == 0) { + printk(FORE200E "unable to get PROM data from device %s\n", fore200e->name); + return -EIO; + } + +#if defined(__BIG_ENDIAN) + +#define swap_here(addr) (*((u32*)(addr)) = swab32( *((u32*)(addr)) )) + + /* MAC address is stored as little-endian */ + swap_here(&prom->mac_addr[0]); + swap_here(&prom->mac_addr[4]); +#endif + + return 0; +} + + +static int +fore200e_pca_proc_read(struct fore200e* fore200e, char *page) +{ + struct pci_dev* pci_dev = (struct pci_dev*)fore200e->bus_dev; + + return sprintf(page, " PCI bus/slot/function:\t%d/%d/%d\n", + pci_dev->bus->number, PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn)); +} + +#endif /* CONFIG_ATM_FORE200E_PCA */ + + +#ifdef CONFIG_ATM_FORE200E_SBA + +static u32 +fore200e_sba_read(volatile u32 __iomem *addr) +{ + return sbus_readl(addr); +} + + +static void +fore200e_sba_write(u32 val, volatile u32 
__iomem *addr) +{ + sbus_writel(val, addr); +} + + +static u32 +fore200e_sba_dma_map(struct fore200e* fore200e, void* virt_addr, int size, int direction) +{ + u32 dma_addr = sbus_map_single((struct sbus_dev*)fore200e->bus_dev, virt_addr, size, direction); + + DPRINTK(3, "SBUS DVMA mapping: virt_addr = 0x%p, size = %d, direction = %d --> dma_addr = 0x%08x\n", + virt_addr, size, direction, dma_addr); + + return dma_addr; +} + + +static void +fore200e_sba_dma_unmap(struct fore200e* fore200e, u32 dma_addr, int size, int direction) +{ + DPRINTK(3, "SBUS DVMA unmapping: dma_addr = 0x%08x, size = %d, direction = %d,\n", + dma_addr, size, direction); + + sbus_unmap_single((struct sbus_dev*)fore200e->bus_dev, dma_addr, size, direction); +} + + +static void +fore200e_sba_dma_sync_for_cpu(struct fore200e* fore200e, u32 dma_addr, int size, int direction) +{ + DPRINTK(3, "SBUS DVMA sync: dma_addr = 0x%08x, size = %d, direction = %d\n", dma_addr, size, direction); + + sbus_dma_sync_single_for_cpu((struct sbus_dev*)fore200e->bus_dev, dma_addr, size, direction); +} + +static void +fore200e_sba_dma_sync_for_device(struct fore200e* fore200e, u32 dma_addr, int size, int direction) +{ + DPRINTK(3, "SBUS DVMA sync: dma_addr = 0x%08x, size = %d, direction = %d\n", dma_addr, size, direction); + + sbus_dma_sync_single_for_device((struct sbus_dev*)fore200e->bus_dev, dma_addr, size, direction); +} + + +/* allocate a DVMA consistent chunk of memory intended to act as a communication mechanism + (to hold descriptors, status, queues, etc.) 
shared by the driver and the adapter */ + +static int +fore200e_sba_dma_chunk_alloc(struct fore200e* fore200e, struct chunk* chunk, + int size, int nbr, int alignment) +{ + chunk->alloc_size = chunk->align_size = size * nbr; + + /* returned chunks are page-aligned */ + chunk->alloc_addr = sbus_alloc_consistent((struct sbus_dev*)fore200e->bus_dev, + chunk->alloc_size, + &chunk->dma_addr); + + if ((chunk->alloc_addr == NULL) || (chunk->dma_addr == 0)) + return -ENOMEM; + + chunk->align_addr = chunk->alloc_addr; + + return 0; +} + + +/* free a DVMA consistent chunk of memory */ + +static void +fore200e_sba_dma_chunk_free(struct fore200e* fore200e, struct chunk* chunk) +{ + sbus_free_consistent((struct sbus_dev*)fore200e->bus_dev, + chunk->alloc_size, + chunk->alloc_addr, + chunk->dma_addr); +} + + +static void +fore200e_sba_irq_enable(struct fore200e* fore200e) +{ + u32 hcr = fore200e->bus->read(fore200e->regs.sba.hcr) & SBA200E_HCR_STICKY; + fore200e->bus->write(hcr | SBA200E_HCR_INTR_ENA, fore200e->regs.sba.hcr); +} + + +static int +fore200e_sba_irq_check(struct fore200e* fore200e) +{ + return fore200e->bus->read(fore200e->regs.sba.hcr) & SBA200E_HCR_INTR_REQ; +} + + +static void +fore200e_sba_irq_ack(struct fore200e* fore200e) +{ + u32 hcr = fore200e->bus->read(fore200e->regs.sba.hcr) & SBA200E_HCR_STICKY; + fore200e->bus->write(hcr | SBA200E_HCR_INTR_CLR, fore200e->regs.sba.hcr); +} + + +static void +fore200e_sba_reset(struct fore200e* fore200e) +{ + fore200e->bus->write(SBA200E_HCR_RESET, fore200e->regs.sba.hcr); + fore200e_spin(10); + fore200e->bus->write(0, fore200e->regs.sba.hcr); +} + + +static int __init +fore200e_sba_map(struct fore200e* fore200e) +{ + struct sbus_dev* sbus_dev = (struct sbus_dev*)fore200e->bus_dev; + unsigned int bursts; + + /* gain access to the SBA specific registers */ + fore200e->regs.sba.hcr = sbus_ioremap(&sbus_dev->resource[0], 0, SBA200E_HCR_LENGTH, "SBA HCR"); + fore200e->regs.sba.bsr = sbus_ioremap(&sbus_dev->resource[1], 0, 
SBA200E_BSR_LENGTH, "SBA BSR"); + fore200e->regs.sba.isr = sbus_ioremap(&sbus_dev->resource[2], 0, SBA200E_ISR_LENGTH, "SBA ISR"); + fore200e->virt_base = sbus_ioremap(&sbus_dev->resource[3], 0, SBA200E_RAM_LENGTH, "SBA RAM"); + + if (fore200e->virt_base == NULL) { + printk(FORE200E "unable to map RAM of device %s\n", fore200e->name); + return -EFAULT; + } + + DPRINTK(1, "device %s mapped to 0x%p\n", fore200e->name, fore200e->virt_base); + + fore200e->bus->write(0x02, fore200e->regs.sba.isr); /* XXX hardwired interrupt level */ + + /* get the supported DVMA burst sizes */ + bursts = prom_getintdefault(sbus_dev->bus->prom_node, "burst-sizes", 0x00); + + if (sbus_can_dma_64bit(sbus_dev)) + sbus_set_sbus64(sbus_dev, bursts); + + fore200e->state = FORE200E_STATE_MAP; + return 0; +} + + +static void +fore200e_sba_unmap(struct fore200e* fore200e) +{ + sbus_iounmap(fore200e->regs.sba.hcr, SBA200E_HCR_LENGTH); + sbus_iounmap(fore200e->regs.sba.bsr, SBA200E_BSR_LENGTH); + sbus_iounmap(fore200e->regs.sba.isr, SBA200E_ISR_LENGTH); + sbus_iounmap(fore200e->virt_base, SBA200E_RAM_LENGTH); +} + + +static int __init +fore200e_sba_configure(struct fore200e* fore200e) +{ + fore200e->state = FORE200E_STATE_CONFIGURE; + return 0; +} + + +static struct fore200e* __init +fore200e_sba_detect(const struct fore200e_bus* bus, int index) +{ + struct fore200e* fore200e; + struct sbus_bus* sbus_bus; + struct sbus_dev* sbus_dev = NULL; + + unsigned int count = 0; + + for_each_sbus (sbus_bus) { + for_each_sbusdev (sbus_dev, sbus_bus) { + if (strcmp(sbus_dev->prom_name, SBA200E_PROM_NAME) == 0) { + if (count >= index) + goto found; + count++; + } + } + } + return NULL; + + found: + if (sbus_dev->num_registers != 4) { + printk(FORE200E "this %s device has %d instead of 4 registers\n", + bus->model_name, sbus_dev->num_registers); + return NULL; + } + + fore200e = fore200e_kmalloc(sizeof(struct fore200e), GFP_KERNEL); + if (fore200e == NULL) + return NULL; + + fore200e->bus = bus; + 
fore200e->bus_dev = sbus_dev; + fore200e->irq = sbus_dev->irqs[ 0 ]; + + fore200e->phys_base = (unsigned long)sbus_dev; + + sprintf(fore200e->name, "%s-%d", bus->model_name, index - 1); + + return fore200e; +} + + +static int __init +fore200e_sba_prom_read(struct fore200e* fore200e, struct prom_data* prom) +{ + struct sbus_dev* sbus_dev = (struct sbus_dev*) fore200e->bus_dev; + int len; + + len = prom_getproperty(sbus_dev->prom_node, "macaddrlo2", &prom->mac_addr[ 4 ], 4); + if (len < 0) + return -EBUSY; + + len = prom_getproperty(sbus_dev->prom_node, "macaddrhi4", &prom->mac_addr[ 2 ], 4); + if (len < 0) + return -EBUSY; + + prom_getproperty(sbus_dev->prom_node, "serialnumber", + (char*)&prom->serial_number, sizeof(prom->serial_number)); + + prom_getproperty(sbus_dev->prom_node, "promversion", + (char*)&prom->hw_revision, sizeof(prom->hw_revision)); + + return 0; +} + + +static int +fore200e_sba_proc_read(struct fore200e* fore200e, char *page) +{ + struct sbus_dev* sbus_dev = (struct sbus_dev*)fore200e->bus_dev; + + return sprintf(page, " SBUS slot/device:\t\t%d/'%s'\n", sbus_dev->slot, sbus_dev->prom_name); +} +#endif /* CONFIG_ATM_FORE200E_SBA */ + + +static void +fore200e_tx_irq(struct fore200e* fore200e) +{ + struct host_txq* txq = &fore200e->host_txq; + struct host_txq_entry* entry; + struct atm_vcc* vcc; + struct fore200e_vc_map* vc_map; + + if (fore200e->host_txq.txing == 0) + return; + + for (;;) { + + entry = &txq->host_entry[ txq->tail ]; + + if ((*entry->status & STATUS_COMPLETE) == 0) { + break; + } + + DPRINTK(3, "TX COMPLETED: entry = %p [tail = %d], vc_map = %p, skb = %p\n", + entry, txq->tail, entry->vc_map, entry->skb); + + /* free copy of misaligned data */ + if (entry->data) + kfree(entry->data); + + /* remove DMA mapping */ + fore200e->bus->dma_unmap(fore200e, entry->tpd->tsd[ 0 ].buffer, entry->tpd->tsd[ 0 ].length, + DMA_TO_DEVICE); + + vc_map = entry->vc_map; + + /* vcc closed since the time the entry was submitted for tx? 
*/ + if ((vc_map->vcc == NULL) || + (test_bit(ATM_VF_READY, &vc_map->vcc->flags) == 0)) { + + DPRINTK(1, "no ready vcc found for PDU sent on device %d\n", + fore200e->atm_dev->number); + + dev_kfree_skb_any(entry->skb); + } + else { + ASSERT(vc_map->vcc); + + /* vcc closed then immediately re-opened? */ + if (vc_map->incarn != entry->incarn) { + + /* when a vcc is closed, some PDUs may be still pending in the tx queue. + if the same vcc is immediately re-opened, those pending PDUs must + not be popped after the completion of their emission, as they refer + to the prior incarnation of that vcc. otherwise, sk_atm(vcc)->sk_wmem_alloc + would be decremented by the size of the (unrelated) skb, possibly + leading to a negative sk->sk_wmem_alloc count, ultimately freezing the vcc. + we thus bind the tx entry to the current incarnation of the vcc + when the entry is submitted for tx. When the tx later completes, + if the incarnation number of the tx entry does not match the one + of the vcc, then this implies that the vcc has been closed then re-opened. + we thus just drop the skb here. */ + + DPRINTK(1, "vcc closed-then-re-opened; dropping PDU sent on device %d\n", + fore200e->atm_dev->number); + + dev_kfree_skb_any(entry->skb); + } + else { + vcc = vc_map->vcc; + ASSERT(vcc); + + /* notify tx completion */ + if (vcc->pop) { + vcc->pop(vcc, entry->skb); + } + else { + dev_kfree_skb_any(entry->skb); + } +#if 1 + /* race fixed by the above incarnation mechanism, but... 
*/ + if (atomic_read(&sk_atm(vcc)->sk_wmem_alloc) < 0) { + atomic_set(&sk_atm(vcc)->sk_wmem_alloc, 0); + } +#endif + /* check error condition */ + if (*entry->status & STATUS_ERROR) + atomic_inc(&vcc->stats->tx_err); + else + atomic_inc(&vcc->stats->tx); + } + } + + *entry->status = STATUS_FREE; + + fore200e->host_txq.txing--; + + FORE200E_NEXT_ENTRY(txq->tail, QUEUE_SIZE_TX); + } +} + + +#ifdef FORE200E_BSQ_DEBUG +int bsq_audit(int where, struct host_bsq* bsq, int scheme, int magn) +{ + struct buffer* buffer; + int count = 0; + + buffer = bsq->freebuf; + while (buffer) { + + if (buffer->supplied) { + printk(FORE200E "bsq_audit(%d): queue %d.%d, buffer %ld supplied but in free list!\n", + where, scheme, magn, buffer->index); + } + + if (buffer->magn != magn) { + printk(FORE200E "bsq_audit(%d): queue %d.%d, buffer %ld, unexpected magn = %d\n", + where, scheme, magn, buffer->index, buffer->magn); + } + + if (buffer->scheme != scheme) { + printk(FORE200E "bsq_audit(%d): queue %d.%d, buffer %ld, unexpected scheme = %d\n", + where, scheme, magn, buffer->index, buffer->scheme); + } + + if ((buffer->index < 0) || (buffer->index >= fore200e_rx_buf_nbr[ scheme ][ magn ])) { + printk(FORE200E "bsq_audit(%d): queue %d.%d, out of range buffer index = %ld !\n", + where, scheme, magn, buffer->index); + } + + count++; + buffer = buffer->next; + } + + if (count != bsq->freebuf_count) { + printk(FORE200E "bsq_audit(%d): queue %d.%d, %d bufs in free list, but freebuf_count = %d\n", + where, scheme, magn, count, bsq->freebuf_count); + } + return 0; +} +#endif + + +static void +fore200e_supply(struct fore200e* fore200e) +{ + int scheme, magn, i; + + struct host_bsq* bsq; + struct host_bsq_entry* entry; + struct buffer* buffer; + + for (scheme = 0; scheme < BUFFER_SCHEME_NBR; scheme++) { + for (magn = 0; magn < BUFFER_MAGN_NBR; magn++) { + + bsq = &fore200e->host_bsq[ scheme ][ magn ]; + +#ifdef FORE200E_BSQ_DEBUG + bsq_audit(1, bsq, scheme, magn); +#endif + while (bsq->freebuf_count 
>= RBD_BLK_SIZE) { + + DPRINTK(2, "supplying %d rx buffers to queue %d / %d, freebuf_count = %d\n", + RBD_BLK_SIZE, scheme, magn, bsq->freebuf_count); + + entry = &bsq->host_entry[ bsq->head ]; + + for (i = 0; i < RBD_BLK_SIZE; i++) { + + /* take the first buffer in the free buffer list */ + buffer = bsq->freebuf; + if (!buffer) { + printk(FORE200E "no more free bufs in queue %d.%d, but freebuf_count = %d\n", + scheme, magn, bsq->freebuf_count); + return; + } + bsq->freebuf = buffer->next; + +#ifdef FORE200E_BSQ_DEBUG + if (buffer->supplied) + printk(FORE200E "queue %d.%d, buffer %lu already supplied\n", + scheme, magn, buffer->index); + buffer->supplied = 1; +#endif + entry->rbd_block->rbd[ i ].buffer_haddr = buffer->data.dma_addr; + entry->rbd_block->rbd[ i ].handle = FORE200E_BUF2HDL(buffer); + } + + FORE200E_NEXT_ENTRY(bsq->head, QUEUE_SIZE_BS); + + /* decrease accordingly the number of free rx buffers */ + bsq->freebuf_count -= RBD_BLK_SIZE; + + *entry->status = STATUS_PENDING; + fore200e->bus->write(entry->rbd_block_dma, &entry->cp_entry->rbd_block_haddr); + } + } + } +} + + +static int +fore200e_push_rpd(struct fore200e* fore200e, struct atm_vcc* vcc, struct rpd* rpd) +{ + struct sk_buff* skb; + struct buffer* buffer; + struct fore200e_vcc* fore200e_vcc; + int i, pdu_len = 0; +#ifdef FORE200E_52BYTE_AAL0_SDU + u32 cell_header = 0; +#endif + + ASSERT(vcc); + + fore200e_vcc = FORE200E_VCC(vcc); + ASSERT(fore200e_vcc); + +#ifdef FORE200E_52BYTE_AAL0_SDU + if ((vcc->qos.aal == ATM_AAL0) && (vcc->qos.rxtp.max_sdu == ATM_AAL0_SDU)) { + + cell_header = (rpd->atm_header.gfc << ATM_HDR_GFC_SHIFT) | + (rpd->atm_header.vpi << ATM_HDR_VPI_SHIFT) | + (rpd->atm_header.vci << ATM_HDR_VCI_SHIFT) | + (rpd->atm_header.plt << ATM_HDR_PTI_SHIFT) | + rpd->atm_header.clp; + pdu_len = 4; + } +#endif + + /* compute total PDU length */ + for (i = 0; i < rpd->nseg; i++) + pdu_len += rpd->rsd[ i ].length; + + skb = alloc_skb(pdu_len, GFP_ATOMIC); + if (skb == NULL) { + DPRINTK(2, 
"unable to alloc new skb, rx PDU length = %d\n", pdu_len); + + atomic_inc(&vcc->stats->rx_drop); + return -ENOMEM; + } + + do_gettimeofday(&skb->stamp); + +#ifdef FORE200E_52BYTE_AAL0_SDU + if (cell_header) { + *((u32*)skb_put(skb, 4)) = cell_header; + } +#endif + + /* reassemble segments */ + for (i = 0; i < rpd->nseg; i++) { + + /* rebuild rx buffer address from rsd handle */ + buffer = FORE200E_HDL2BUF(rpd->rsd[ i ].handle); + + /* Make device DMA transfer visible to CPU. */ + fore200e->bus->dma_sync_for_cpu(fore200e, buffer->data.dma_addr, rpd->rsd[ i ].length, DMA_FROM_DEVICE); + + memcpy(skb_put(skb, rpd->rsd[ i ].length), buffer->data.align_addr, rpd->rsd[ i ].length); + + /* Now let the device get at it again. */ + fore200e->bus->dma_sync_for_device(fore200e, buffer->data.dma_addr, rpd->rsd[ i ].length, DMA_FROM_DEVICE); + } + + DPRINTK(3, "rx skb: len = %d, truesize = %d\n", skb->len, skb->truesize); + + if (pdu_len < fore200e_vcc->rx_min_pdu) + fore200e_vcc->rx_min_pdu = pdu_len; + if (pdu_len > fore200e_vcc->rx_max_pdu) + fore200e_vcc->rx_max_pdu = pdu_len; + fore200e_vcc->rx_pdu++; + + /* push PDU */ + if (atm_charge(vcc, skb->truesize) == 0) { + + DPRINTK(2, "receive buffers saturated for %d.%d.%d - PDU dropped\n", + vcc->itf, vcc->vpi, vcc->vci); + + dev_kfree_skb_any(skb); + + atomic_inc(&vcc->stats->rx_drop); + return -ENOMEM; + } + + ASSERT(atomic_read(&sk_atm(vcc)->sk_wmem_alloc) >= 0); + + vcc->push(vcc, skb); + atomic_inc(&vcc->stats->rx); + + ASSERT(atomic_read(&sk_atm(vcc)->sk_wmem_alloc) >= 0); + + return 0; +} + + +static void +fore200e_collect_rpd(struct fore200e* fore200e, struct rpd* rpd) +{ + struct host_bsq* bsq; + struct buffer* buffer; + int i; + + for (i = 0; i < rpd->nseg; i++) { + + /* rebuild rx buffer address from rsd handle */ + buffer = FORE200E_HDL2BUF(rpd->rsd[ i ].handle); + + bsq = &fore200e->host_bsq[ buffer->scheme ][ buffer->magn ]; + +#ifdef FORE200E_BSQ_DEBUG + bsq_audit(2, bsq, buffer->scheme, buffer->magn); + + if 
(buffer->supplied == 0) + printk(FORE200E "queue %d.%d, buffer %ld was not supplied\n", + buffer->scheme, buffer->magn, buffer->index); + buffer->supplied = 0; +#endif + + /* re-insert the buffer into the free buffer list */ + buffer->next = bsq->freebuf; + bsq->freebuf = buffer; + + /* then increment the number of free rx buffers */ + bsq->freebuf_count++; + } +} + + +static void +fore200e_rx_irq(struct fore200e* fore200e) +{ + struct host_rxq* rxq = &fore200e->host_rxq; + struct host_rxq_entry* entry; + struct atm_vcc* vcc; + struct fore200e_vc_map* vc_map; + + for (;;) { + + entry = &rxq->host_entry[ rxq->head ]; + + /* no more received PDUs */ + if ((*entry->status & STATUS_COMPLETE) == 0) + break; + + vc_map = FORE200E_VC_MAP(fore200e, entry->rpd->atm_header.vpi, entry->rpd->atm_header.vci); + + if ((vc_map->vcc == NULL) || + (test_bit(ATM_VF_READY, &vc_map->vcc->flags) == 0)) { + + DPRINTK(1, "no ready VC found for PDU received on %d.%d.%d\n", + fore200e->atm_dev->number, + entry->rpd->atm_header.vpi, entry->rpd->atm_header.vci); + } + else { + vcc = vc_map->vcc; + ASSERT(vcc); + + if ((*entry->status & STATUS_ERROR) == 0) { + + fore200e_push_rpd(fore200e, vcc, entry->rpd); + } + else { + DPRINTK(2, "damaged PDU on %d.%d.%d\n", + fore200e->atm_dev->number, + entry->rpd->atm_header.vpi, entry->rpd->atm_header.vci); + atomic_inc(&vcc->stats->rx_err); + } + } + + FORE200E_NEXT_ENTRY(rxq->head, QUEUE_SIZE_RX); + + fore200e_collect_rpd(fore200e, entry->rpd); + + /* rewrite the rpd address to ack the received PDU */ + fore200e->bus->write(entry->rpd_dma, &entry->cp_entry->rpd_haddr); + *entry->status = STATUS_FREE; + + fore200e_supply(fore200e); + } +} + + +#ifndef FORE200E_USE_TASKLET +static void +fore200e_irq(struct fore200e* fore200e) +{ + unsigned long flags; + + spin_lock_irqsave(&fore200e->q_lock, flags); + fore200e_rx_irq(fore200e); + spin_unlock_irqrestore(&fore200e->q_lock, flags); + + spin_lock_irqsave(&fore200e->q_lock, flags); + 
fore200e_tx_irq(fore200e); + spin_unlock_irqrestore(&fore200e->q_lock, flags); +} +#endif + + +static irqreturn_t +fore200e_interrupt(int irq, void* dev, struct pt_regs* regs) +{ + struct fore200e* fore200e = FORE200E_DEV((struct atm_dev*)dev); + + if (fore200e->bus->irq_check(fore200e) == 0) { + + DPRINTK(3, "interrupt NOT triggered by device %d\n", fore200e->atm_dev->number); + return IRQ_NONE; + } + DPRINTK(3, "interrupt triggered by device %d\n", fore200e->atm_dev->number); + +#ifdef FORE200E_USE_TASKLET + tasklet_schedule(&fore200e->tx_tasklet); + tasklet_schedule(&fore200e->rx_tasklet); +#else + fore200e_irq(fore200e); +#endif + + fore200e->bus->irq_ack(fore200e); + return IRQ_HANDLED; +} + + +#ifdef FORE200E_USE_TASKLET +static void +fore200e_tx_tasklet(unsigned long data) +{ + struct fore200e* fore200e = (struct fore200e*) data; + unsigned long flags; + + DPRINTK(3, "tx tasklet scheduled for device %d\n", fore200e->atm_dev->number); + + spin_lock_irqsave(&fore200e->q_lock, flags); + fore200e_tx_irq(fore200e); + spin_unlock_irqrestore(&fore200e->q_lock, flags); +} + + +static void +fore200e_rx_tasklet(unsigned long data) +{ + struct fore200e* fore200e = (struct fore200e*) data; + unsigned long flags; + + DPRINTK(3, "rx tasklet scheduled for device %d\n", fore200e->atm_dev->number); + + spin_lock_irqsave(&fore200e->q_lock, flags); + fore200e_rx_irq((struct fore200e*) data); + spin_unlock_irqrestore(&fore200e->q_lock, flags); +} +#endif + + +static int +fore200e_select_scheme(struct atm_vcc* vcc) +{ + /* fairly balance the VCs over (identical) buffer schemes */ + int scheme = vcc->vci % 2 ? 
BUFFER_SCHEME_ONE : BUFFER_SCHEME_TWO; + + DPRINTK(1, "VC %d.%d.%d uses buffer scheme %d\n", + vcc->itf, vcc->vpi, vcc->vci, scheme); + + return scheme; +} + + +static int +fore200e_activate_vcin(struct fore200e* fore200e, int activate, struct atm_vcc* vcc, int mtu) +{ + struct host_cmdq* cmdq = &fore200e->host_cmdq; + struct host_cmdq_entry* entry = &cmdq->host_entry[ cmdq->head ]; + struct activate_opcode activ_opcode; + struct deactivate_opcode deactiv_opcode; + struct vpvc vpvc; + int ok; + enum fore200e_aal aal = fore200e_atm2fore_aal(vcc->qos.aal); + + FORE200E_NEXT_ENTRY(cmdq->head, QUEUE_SIZE_CMD); + + if (activate) { + FORE200E_VCC(vcc)->scheme = fore200e_select_scheme(vcc); + + activ_opcode.opcode = OPCODE_ACTIVATE_VCIN; + activ_opcode.aal = aal; + activ_opcode.scheme = FORE200E_VCC(vcc)->scheme; + activ_opcode.pad = 0; + } + else { + deactiv_opcode.opcode = OPCODE_DEACTIVATE_VCIN; + deactiv_opcode.pad = 0; + } + + vpvc.vci = vcc->vci; + vpvc.vpi = vcc->vpi; + + *entry->status = STATUS_PENDING; + + if (activate) { + +#ifdef FORE200E_52BYTE_AAL0_SDU + mtu = 48; +#endif + /* the MTU is not used by the cp, except in the case of AAL0 */ + fore200e->bus->write(mtu, &entry->cp_entry->cmd.activate_block.mtu); + fore200e->bus->write(*(u32*)&vpvc, (u32 __iomem *)&entry->cp_entry->cmd.activate_block.vpvc); + fore200e->bus->write(*(u32*)&activ_opcode, (u32 __iomem *)&entry->cp_entry->cmd.activate_block.opcode); + } + else { + fore200e->bus->write(*(u32*)&vpvc, (u32 __iomem *)&entry->cp_entry->cmd.deactivate_block.vpvc); + fore200e->bus->write(*(u32*)&deactiv_opcode, (u32 __iomem *)&entry->cp_entry->cmd.deactivate_block.opcode); + } + + ok = fore200e_poll(fore200e, entry->status, STATUS_COMPLETE, 400); + + *entry->status = STATUS_FREE; + + if (ok == 0) { + printk(FORE200E "unable to %s VC %d.%d.%d\n", + activate ? "open" : "close", vcc->itf, vcc->vpi, vcc->vci); + return -EIO; + } + + DPRINTK(1, "VC %d.%d.%d %sed\n", vcc->itf, vcc->vpi, vcc->vci, + activate ? 
"open" : "clos"); + + return 0; +} + + +#define FORE200E_MAX_BACK2BACK_CELLS 255 /* XXX depends on CDVT */ + +static void +fore200e_rate_ctrl(struct atm_qos* qos, struct tpd_rate* rate) +{ + if (qos->txtp.max_pcr < ATM_OC3_PCR) { + + /* compute the data cells to idle cells ratio from the tx PCR */ + rate->data_cells = qos->txtp.max_pcr * FORE200E_MAX_BACK2BACK_CELLS / ATM_OC3_PCR; + rate->idle_cells = FORE200E_MAX_BACK2BACK_CELLS - rate->data_cells; + } + else { + /* disable rate control */ + rate->data_cells = rate->idle_cells = 0; + } +} + + +static int +fore200e_open(struct atm_vcc *vcc) +{ + struct fore200e* fore200e = FORE200E_DEV(vcc->dev); + struct fore200e_vcc* fore200e_vcc; + struct fore200e_vc_map* vc_map; + unsigned long flags; + int vci = vcc->vci; + short vpi = vcc->vpi; + + ASSERT((vpi >= 0) && (vpi < 1<<FORE200E_VPI_BITS)); + ASSERT((vci >= 0) && (vci < 1<<FORE200E_VCI_BITS)); + + spin_lock_irqsave(&fore200e->q_lock, flags); + + vc_map = FORE200E_VC_MAP(fore200e, vpi, vci); + if (vc_map->vcc) { + + spin_unlock_irqrestore(&fore200e->q_lock, flags); + + printk(FORE200E "VC %d.%d.%d already in use\n", + fore200e->atm_dev->number, vpi, vci); + + return -EINVAL; + } + + vc_map->vcc = vcc; + + spin_unlock_irqrestore(&fore200e->q_lock, flags); + + fore200e_vcc = fore200e_kmalloc(sizeof(struct fore200e_vcc), GFP_ATOMIC); + if (fore200e_vcc == NULL) { + vc_map->vcc = NULL; + return -ENOMEM; + } + + DPRINTK(2, "opening %d.%d.%d:%d QoS = (tx: cl=%s, pcr=%d-%d, cdv=%d, max_sdu=%d; " + "rx: cl=%s, pcr=%d-%d, cdv=%d, max_sdu=%d)\n", + vcc->itf, vcc->vpi, vcc->vci, fore200e_atm2fore_aal(vcc->qos.aal), + fore200e_traffic_class[ vcc->qos.txtp.traffic_class ], + vcc->qos.txtp.min_pcr, vcc->qos.txtp.max_pcr, vcc->qos.txtp.max_cdv, vcc->qos.txtp.max_sdu, + fore200e_traffic_class[ vcc->qos.rxtp.traffic_class ], + vcc->qos.rxtp.min_pcr, vcc->qos.rxtp.max_pcr, vcc->qos.rxtp.max_cdv, vcc->qos.rxtp.max_sdu); + + /* pseudo-CBR bandwidth requested? 
*/ + if ((vcc->qos.txtp.traffic_class == ATM_CBR) && (vcc->qos.txtp.max_pcr > 0)) { + + down(&fore200e->rate_sf); + if (fore200e->available_cell_rate < vcc->qos.txtp.max_pcr) { + up(&fore200e->rate_sf); + + fore200e_kfree(fore200e_vcc); + vc_map->vcc = NULL; + return -EAGAIN; + } + + /* reserve bandwidth */ + fore200e->available_cell_rate -= vcc->qos.txtp.max_pcr; + up(&fore200e->rate_sf); + } + + vcc->itf = vcc->dev->number; + + set_bit(ATM_VF_PARTIAL,&vcc->flags); + set_bit(ATM_VF_ADDR, &vcc->flags); + + vcc->dev_data = fore200e_vcc; + + if (fore200e_activate_vcin(fore200e, 1, vcc, vcc->qos.rxtp.max_sdu) < 0) { + + vc_map->vcc = NULL; + + clear_bit(ATM_VF_ADDR, &vcc->flags); + clear_bit(ATM_VF_PARTIAL,&vcc->flags); + + vcc->dev_data = NULL; + + fore200e->available_cell_rate += vcc->qos.txtp.max_pcr; + + fore200e_kfree(fore200e_vcc); + return -EINVAL; + } + + /* compute rate control parameters */ + if ((vcc->qos.txtp.traffic_class == ATM_CBR) && (vcc->qos.txtp.max_pcr > 0)) { + + fore200e_rate_ctrl(&vcc->qos, &fore200e_vcc->rate); + set_bit(ATM_VF_HASQOS, &vcc->flags); + + DPRINTK(3, "tx on %d.%d.%d:%d, tx PCR = %d, rx PCR = %d, data_cells = %u, idle_cells = %u\n", + vcc->itf, vcc->vpi, vcc->vci, fore200e_atm2fore_aal(vcc->qos.aal), + vcc->qos.txtp.max_pcr, vcc->qos.rxtp.max_pcr, + fore200e_vcc->rate.data_cells, fore200e_vcc->rate.idle_cells); + } + + fore200e_vcc->tx_min_pdu = fore200e_vcc->rx_min_pdu = MAX_PDU_SIZE + 1; + fore200e_vcc->tx_max_pdu = fore200e_vcc->rx_max_pdu = 0; + fore200e_vcc->tx_pdu = fore200e_vcc->rx_pdu = 0; + + /* new incarnation of the vcc */ + vc_map->incarn = ++fore200e->incarn_count; + + /* VC unusable before this flag is set */ + set_bit(ATM_VF_READY, &vcc->flags); + + return 0; +} + + +static void +fore200e_close(struct atm_vcc* vcc) +{ + struct fore200e* fore200e = FORE200E_DEV(vcc->dev); + struct fore200e_vcc* fore200e_vcc; + struct fore200e_vc_map* vc_map; + unsigned long flags; + + ASSERT(vcc); + ASSERT((vcc->vpi >= 0) && (vcc->vpi 
< 1<<FORE200E_VPI_BITS)); + ASSERT((vcc->vci >= 0) && (vcc->vci < 1<<FORE200E_VCI_BITS)); + + DPRINTK(2, "closing %d.%d.%d:%d\n", vcc->itf, vcc->vpi, vcc->vci, fore200e_atm2fore_aal(vcc->qos.aal)); + + clear_bit(ATM_VF_READY, &vcc->flags); + + fore200e_activate_vcin(fore200e, 0, vcc, 0); + + spin_lock_irqsave(&fore200e->q_lock, flags); + + vc_map = FORE200E_VC_MAP(fore200e, vcc->vpi, vcc->vci); + + /* the vc is no longer considered as "in use" by fore200e_open() */ + vc_map->vcc = NULL; + + vcc->itf = vcc->vci = vcc->vpi = 0; + + fore200e_vcc = FORE200E_VCC(vcc); + vcc->dev_data = NULL; + + spin_unlock_irqrestore(&fore200e->q_lock, flags); + + /* release reserved bandwidth, if any */ + if ((vcc->qos.txtp.traffic_class == ATM_CBR) && (vcc->qos.txtp.max_pcr > 0)) { + + down(&fore200e->rate_sf); + fore200e->available_cell_rate += vcc->qos.txtp.max_pcr; + up(&fore200e->rate_sf); + + clear_bit(ATM_VF_HASQOS, &vcc->flags); + } + + clear_bit(ATM_VF_ADDR, &vcc->flags); + clear_bit(ATM_VF_PARTIAL,&vcc->flags); + + ASSERT(fore200e_vcc); + fore200e_kfree(fore200e_vcc); +} + + +static int +fore200e_send(struct atm_vcc *vcc, struct sk_buff *skb) +{ + struct fore200e* fore200e = FORE200E_DEV(vcc->dev); + struct fore200e_vcc* fore200e_vcc = FORE200E_VCC(vcc); + struct fore200e_vc_map* vc_map; + struct host_txq* txq = &fore200e->host_txq; + struct host_txq_entry* entry; + struct tpd* tpd; + struct tpd_haddr tpd_haddr; + int retry = CONFIG_ATM_FORE200E_TX_RETRY; + int tx_copy = 0; + int tx_len = skb->len; + u32* cell_header = NULL; + unsigned char* skb_data; + int skb_len; + unsigned char* data; + unsigned long flags; + + ASSERT(vcc); + ASSERT(atomic_read(&sk_atm(vcc)->sk_wmem_alloc) >= 0); + ASSERT(fore200e); + ASSERT(fore200e_vcc); + + if (!test_bit(ATM_VF_READY, &vcc->flags)) { + DPRINTK(1, "VC %d.%d.%d not ready for tx\n", vcc->itf, vcc->vpi, vcc->vci); + dev_kfree_skb_any(skb); + return -EINVAL; + } + +#ifdef FORE200E_52BYTE_AAL0_SDU + if ((vcc->qos.aal == ATM_AAL0) && 
(vcc->qos.txtp.max_sdu == ATM_AAL0_SDU)) { + cell_header = (u32*) skb->data; + skb_data = skb->data + 4; /* skip 4-byte cell header */ + skb_len = tx_len = skb->len - 4; + + DPRINTK(3, "user-supplied cell header = 0x%08x\n", *cell_header); + } + else +#endif + { + skb_data = skb->data; + skb_len = skb->len; + } + + if (((unsigned long)skb_data) & 0x3) { + + DPRINTK(2, "misaligned tx PDU on device %s\n", fore200e->name); + tx_copy = 1; + tx_len = skb_len; + } + + if ((vcc->qos.aal == ATM_AAL0) && (skb_len % ATM_CELL_PAYLOAD)) { + + /* this simply NUKES the PCA board */ + DPRINTK(2, "incomplete tx AAL0 PDU on device %s\n", fore200e->name); + tx_copy = 1; + tx_len = ((skb_len / ATM_CELL_PAYLOAD) + 1) * ATM_CELL_PAYLOAD; + } + + if (tx_copy) { + data = kmalloc(tx_len, GFP_ATOMIC | GFP_DMA); + if (data == NULL) { + if (vcc->pop) { + vcc->pop(vcc, skb); + } + else { + dev_kfree_skb_any(skb); + } + return -ENOMEM; + } + + memcpy(data, skb_data, skb_len); + if (skb_len < tx_len) + memset(data + skb_len, 0x00, tx_len - skb_len); + } + else { + data = skb_data; + } + + vc_map = FORE200E_VC_MAP(fore200e, vcc->vpi, vcc->vci); + ASSERT(vc_map->vcc == vcc); + + retry_here: + + spin_lock_irqsave(&fore200e->q_lock, flags); + + entry = &txq->host_entry[ txq->head ]; + + if ((*entry->status != STATUS_FREE) || (txq->txing >= QUEUE_SIZE_TX - 2)) { + + /* try to free completed tx queue entries */ + fore200e_tx_irq(fore200e); + + if (*entry->status != STATUS_FREE) { + + spin_unlock_irqrestore(&fore200e->q_lock, flags); + + /* retry once again? 
*/ + if (--retry > 0) { + udelay(50); + goto retry_here; + } + + atomic_inc(&vcc->stats->tx_err); + + fore200e->tx_sat++; + DPRINTK(2, "tx queue of device %s is saturated, PDU dropped - heartbeat is %08x\n", + fore200e->name, fore200e->cp_queues->heartbeat); + if (vcc->pop) { + vcc->pop(vcc, skb); + } + else { + dev_kfree_skb_any(skb); + } + + if (tx_copy) + kfree(data); + + return -ENOBUFS; + } + } + + entry->incarn = vc_map->incarn; + entry->vc_map = vc_map; + entry->skb = skb; + entry->data = tx_copy ? data : NULL; + + tpd = entry->tpd; + tpd->tsd[ 0 ].buffer = fore200e->bus->dma_map(fore200e, data, tx_len, DMA_TO_DEVICE); + tpd->tsd[ 0 ].length = tx_len; + + FORE200E_NEXT_ENTRY(txq->head, QUEUE_SIZE_TX); + txq->txing++; + + /* The dma_map call above implies a dma_sync so the device can use it, + * thus no explicit dma_sync call is necessary here. + */ + + DPRINTK(3, "tx on %d.%d.%d:%d, len = %u (%u)\n", + vcc->itf, vcc->vpi, vcc->vci, fore200e_atm2fore_aal(vcc->qos.aal), + tpd->tsd[0].length, skb_len); + + if (skb_len < fore200e_vcc->tx_min_pdu) + fore200e_vcc->tx_min_pdu = skb_len; + if (skb_len > fore200e_vcc->tx_max_pdu) + fore200e_vcc->tx_max_pdu = skb_len; + fore200e_vcc->tx_pdu++; + + /* set tx rate control information */ + tpd->rate.data_cells = fore200e_vcc->rate.data_cells; + tpd->rate.idle_cells = fore200e_vcc->rate.idle_cells; + + if (cell_header) { + tpd->atm_header.clp = (*cell_header & ATM_HDR_CLP); + tpd->atm_header.plt = (*cell_header & ATM_HDR_PTI_MASK) >> ATM_HDR_PTI_SHIFT; + tpd->atm_header.vci = (*cell_header & ATM_HDR_VCI_MASK) >> ATM_HDR_VCI_SHIFT; + tpd->atm_header.vpi = (*cell_header & ATM_HDR_VPI_MASK) >> ATM_HDR_VPI_SHIFT; + tpd->atm_header.gfc = (*cell_header & ATM_HDR_GFC_MASK) >> ATM_HDR_GFC_SHIFT; + } + else { + /* set the ATM header, common to all cells conveying the PDU */ + tpd->atm_header.clp = 0; + tpd->atm_header.plt = 0; + tpd->atm_header.vci = vcc->vci; + tpd->atm_header.vpi = vcc->vpi; + tpd->atm_header.gfc = 0; + } + + 
tpd->spec.length = tx_len; + tpd->spec.nseg = 1; + tpd->spec.aal = fore200e_atm2fore_aal(vcc->qos.aal); + tpd->spec.intr = 1; + + tpd_haddr.size = sizeof(struct tpd) / (1<<TPD_HADDR_SHIFT); /* size is expressed in 32 byte blocks */ + tpd_haddr.pad = 0; + tpd_haddr.haddr = entry->tpd_dma >> TPD_HADDR_SHIFT; /* shift the address, as we are in a bitfield */ + + *entry->status = STATUS_PENDING; + fore200e->bus->write(*(u32*)&tpd_haddr, (u32 __iomem *)&entry->cp_entry->tpd_haddr); + + spin_unlock_irqrestore(&fore200e->q_lock, flags); + + return 0; +} + + +static int +fore200e_getstats(struct fore200e* fore200e) +{ + struct host_cmdq* cmdq = &fore200e->host_cmdq; + struct host_cmdq_entry* entry = &cmdq->host_entry[ cmdq->head ]; + struct stats_opcode opcode; + int ok; + u32 stats_dma_addr; + + if (fore200e->stats == NULL) { + fore200e->stats = fore200e_kmalloc(sizeof(struct stats), GFP_KERNEL | GFP_DMA); + if (fore200e->stats == NULL) + return -ENOMEM; + } + + stats_dma_addr = fore200e->bus->dma_map(fore200e, fore200e->stats, + sizeof(struct stats), DMA_FROM_DEVICE); + + FORE200E_NEXT_ENTRY(cmdq->head, QUEUE_SIZE_CMD); + + opcode.opcode = OPCODE_GET_STATS; + opcode.pad = 0; + + fore200e->bus->write(stats_dma_addr, &entry->cp_entry->cmd.stats_block.stats_haddr); + + *entry->status = STATUS_PENDING; + + fore200e->bus->write(*(u32*)&opcode, (u32 __iomem *)&entry->cp_entry->cmd.stats_block.opcode); + + ok = fore200e_poll(fore200e, entry->status, STATUS_COMPLETE, 400); + + *entry->status = STATUS_FREE; + + fore200e->bus->dma_unmap(fore200e, stats_dma_addr, sizeof(struct stats), DMA_FROM_DEVICE); + + if (ok == 0) { + printk(FORE200E "unable to get statistics from device %s\n", fore200e->name); + return -EIO; + } + + return 0; +} + + +static int +fore200e_getsockopt(struct atm_vcc* vcc, int level, int optname, void __user *optval, int optlen) +{ + /* struct fore200e* fore200e = FORE200E_DEV(vcc->dev); */ + + DPRINTK(2, "getsockopt %d.%d.%d, level = %d, optname = 0x%x, optval = 
0x%p, optlen = %d\n", + vcc->itf, vcc->vpi, vcc->vci, level, optname, optval, optlen); + + return -EINVAL; +} + + +static int +fore200e_setsockopt(struct atm_vcc* vcc, int level, int optname, void __user *optval, int optlen) +{ + /* struct fore200e* fore200e = FORE200E_DEV(vcc->dev); */ + + DPRINTK(2, "setsockopt %d.%d.%d, level = %d, optname = 0x%x, optval = 0x%p, optlen = %d\n", + vcc->itf, vcc->vpi, vcc->vci, level, optname, optval, optlen); + + return -EINVAL; +} + + +#if 0 /* currently unused */ +static int +fore200e_get_oc3(struct fore200e* fore200e, struct oc3_regs* regs) +{ + struct host_cmdq* cmdq = &fore200e->host_cmdq; + struct host_cmdq_entry* entry = &cmdq->host_entry[ cmdq->head ]; + struct oc3_opcode opcode; + int ok; + u32 oc3_regs_dma_addr; + + oc3_regs_dma_addr = fore200e->bus->dma_map(fore200e, regs, sizeof(struct oc3_regs), DMA_FROM_DEVICE); + + FORE200E_NEXT_ENTRY(cmdq->head, QUEUE_SIZE_CMD); + + opcode.opcode = OPCODE_GET_OC3; + opcode.reg = 0; + opcode.value = 0; + opcode.mask = 0; + + fore200e->bus->write(oc3_regs_dma_addr, &entry->cp_entry->cmd.oc3_block.regs_haddr); + + *entry->status = STATUS_PENDING; + + fore200e->bus->write(*(u32*)&opcode, (u32*)&entry->cp_entry->cmd.oc3_block.opcode); + + ok = fore200e_poll(fore200e, entry->status, STATUS_COMPLETE, 400); + + *entry->status = STATUS_FREE; + + fore200e->bus->dma_unmap(fore200e, oc3_regs_dma_addr, sizeof(struct oc3_regs), DMA_FROM_DEVICE); + + if (ok == 0) { + printk(FORE200E "unable to get OC-3 regs of device %s\n", fore200e->name); + return -EIO; + } + + return 0; +} +#endif + + +static int +fore200e_set_oc3(struct fore200e* fore200e, u32 reg, u32 value, u32 mask) +{ + struct host_cmdq* cmdq = &fore200e->host_cmdq; + struct host_cmdq_entry* entry = &cmdq->host_entry[ cmdq->head ]; + struct oc3_opcode opcode; + int ok; + + DPRINTK(2, "set OC-3 reg = 0x%02x, value = 0x%02x, mask = 0x%02x\n", reg, value, mask); + + FORE200E_NEXT_ENTRY(cmdq->head, QUEUE_SIZE_CMD); + + opcode.opcode = 
OPCODE_SET_OC3; + opcode.reg = reg; + opcode.value = value; + opcode.mask = mask; + + fore200e->bus->write(0, &entry->cp_entry->cmd.oc3_block.regs_haddr); + + *entry->status = STATUS_PENDING; + + fore200e->bus->write(*(u32*)&opcode, (u32 __iomem *)&entry->cp_entry->cmd.oc3_block.opcode); + + ok = fore200e_poll(fore200e, entry->status, STATUS_COMPLETE, 400); + + *entry->status = STATUS_FREE; + + if (ok == 0) { + printk(FORE200E "unable to set OC-3 reg 0x%02x of device %s\n", reg, fore200e->name); + return -EIO; + } + + return 0; +} + + +static int +fore200e_setloop(struct fore200e* fore200e, int loop_mode) +{ + u32 mct_value, mct_mask; + int error; + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + switch (loop_mode) { + + case ATM_LM_NONE: + mct_value = 0; + mct_mask = SUNI_MCT_DLE | SUNI_MCT_LLE; + break; + + case ATM_LM_LOC_PHY: + mct_value = mct_mask = SUNI_MCT_DLE; + break; + + case ATM_LM_RMT_PHY: + mct_value = mct_mask = SUNI_MCT_LLE; + break; + + default: + return -EINVAL; + } + + error = fore200e_set_oc3(fore200e, SUNI_MCT, mct_value, mct_mask); + if (error == 0) + fore200e->loop_mode = loop_mode; + + return error; +} + + +static inline unsigned int +fore200e_swap(unsigned int in) +{ +#if defined(__LITTLE_ENDIAN) + return swab32(in); +#else + return in; +#endif +} + + +static int +fore200e_fetch_stats(struct fore200e* fore200e, struct sonet_stats __user *arg) +{ + struct sonet_stats tmp; + + if (fore200e_getstats(fore200e) < 0) + return -EIO; + + tmp.section_bip = fore200e_swap(fore200e->stats->oc3.section_bip8_errors); + tmp.line_bip = fore200e_swap(fore200e->stats->oc3.line_bip24_errors); + tmp.path_bip = fore200e_swap(fore200e->stats->oc3.path_bip8_errors); + tmp.line_febe = fore200e_swap(fore200e->stats->oc3.line_febe_errors); + tmp.path_febe = fore200e_swap(fore200e->stats->oc3.path_febe_errors); + tmp.corr_hcs = fore200e_swap(fore200e->stats->oc3.corr_hcs_errors); + tmp.uncorr_hcs = fore200e_swap(fore200e->stats->oc3.ucorr_hcs_errors); + 
tmp.tx_cells = fore200e_swap(fore200e->stats->aal0.cells_transmitted) + + fore200e_swap(fore200e->stats->aal34.cells_transmitted) + + fore200e_swap(fore200e->stats->aal5.cells_transmitted); + tmp.rx_cells = fore200e_swap(fore200e->stats->aal0.cells_received) + + fore200e_swap(fore200e->stats->aal34.cells_received) + + fore200e_swap(fore200e->stats->aal5.cells_received); + + if (arg) + return copy_to_user(arg, &tmp, sizeof(struct sonet_stats)) ? -EFAULT : 0; + + return 0; +} + + +static int +fore200e_ioctl(struct atm_dev* dev, unsigned int cmd, void __user * arg) +{ + struct fore200e* fore200e = FORE200E_DEV(dev); + + DPRINTK(2, "ioctl cmd = 0x%x (%u), arg = 0x%p (%lu)\n", cmd, cmd, arg, (unsigned long)arg); + + switch (cmd) { + + case SONET_GETSTAT: + return fore200e_fetch_stats(fore200e, (struct sonet_stats __user *)arg); + + case SONET_GETDIAG: + return put_user(0, (int __user *)arg) ? -EFAULT : 0; + + case ATM_SETLOOP: + return fore200e_setloop(fore200e, (int)(unsigned long)arg); + + case ATM_GETLOOP: + return put_user(fore200e->loop_mode, (int __user *)arg) ? -EFAULT : 0; + + case ATM_QUERYLOOP: + return put_user(ATM_LM_LOC_PHY | ATM_LM_RMT_PHY, (int __user *)arg) ? 
-EFAULT : 0;
+ }
+
+ return -ENOSYS; /* not implemented */
+}
+
+
+static int
+fore200e_change_qos(struct atm_vcc* vcc,struct atm_qos* qos, int flags)
+{
+ struct fore200e_vcc* fore200e_vcc = FORE200E_VCC(vcc);
+ struct fore200e* fore200e = FORE200E_DEV(vcc->dev);
+
+ if (!test_bit(ATM_VF_READY, &vcc->flags)) {
+ DPRINTK(1, "VC %d.%d.%d not ready for QoS change\n", vcc->itf, vcc->vpi, vcc->vci);
+ return -EINVAL;
+ }
+
+ DPRINTK(2, "change_qos %d.%d.%d, "
+ "(tx: cl=%s, pcr=%d-%d, cdv=%d, max_sdu=%d; "
+ "rx: cl=%s, pcr=%d-%d, cdv=%d, max_sdu=%d), flags = 0x%x\n"
+ "available_cell_rate = %u",
+ vcc->itf, vcc->vpi, vcc->vci,
+ fore200e_traffic_class[ qos->txtp.traffic_class ],
+ qos->txtp.min_pcr, qos->txtp.max_pcr, qos->txtp.max_cdv, qos->txtp.max_sdu,
+ fore200e_traffic_class[ qos->rxtp.traffic_class ],
+ qos->rxtp.min_pcr, qos->rxtp.max_pcr, qos->rxtp.max_cdv, qos->rxtp.max_sdu,
+ flags, fore200e->available_cell_rate);
+
+ if ((qos->txtp.traffic_class == ATM_CBR) && (qos->txtp.max_pcr > 0)) {
+
+ down(&fore200e->rate_sf);
+ if (fore200e->available_cell_rate + vcc->qos.txtp.max_pcr < qos->txtp.max_pcr) {
+ up(&fore200e->rate_sf);
+ return -EAGAIN;
+ }
+
+ fore200e->available_cell_rate += vcc->qos.txtp.max_pcr;
+ fore200e->available_cell_rate -= qos->txtp.max_pcr;
+
+ up(&fore200e->rate_sf);
+
+ memcpy(&vcc->qos, qos, sizeof(struct atm_qos));
+
+ /* update rate control parameters */
+ fore200e_rate_ctrl(qos, &fore200e_vcc->rate);
+
+ set_bit(ATM_VF_HASQOS, &vcc->flags);
+
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+
+static int __init
+fore200e_irq_request(struct fore200e* fore200e)
+{
+ if (request_irq(fore200e->irq, fore200e_interrupt, SA_SHIRQ, fore200e->name, fore200e->atm_dev) < 0) {
+
+ printk(FORE200E "unable to reserve IRQ %s for device %s\n",
+ fore200e_irq_itoa(fore200e->irq), fore200e->name);
+ return -EBUSY;
+ }
+
+ printk(FORE200E "IRQ %s reserved for device %s\n",
+ fore200e_irq_itoa(fore200e->irq), fore200e->name);
+
+#ifdef FORE200E_USE_TASKLET
+
tasklet_init(&fore200e->tx_tasklet, fore200e_tx_tasklet, (unsigned long)fore200e); + tasklet_init(&fore200e->rx_tasklet, fore200e_rx_tasklet, (unsigned long)fore200e); +#endif + + fore200e->state = FORE200E_STATE_IRQ; + return 0; +} + + +static int __init +fore200e_get_esi(struct fore200e* fore200e) +{ + struct prom_data* prom = fore200e_kmalloc(sizeof(struct prom_data), GFP_KERNEL | GFP_DMA); + int ok, i; + + if (!prom) + return -ENOMEM; + + ok = fore200e->bus->prom_read(fore200e, prom); + if (ok < 0) { + fore200e_kfree(prom); + return -EBUSY; + } + + printk(FORE200E "device %s, rev. %c, S/N: %d, ESI: %02x:%02x:%02x:%02x:%02x:%02x\n", + fore200e->name, + (prom->hw_revision & 0xFF) + '@', /* probably meaningless with SBA boards */ + prom->serial_number & 0xFFFF, + prom->mac_addr[ 2 ], prom->mac_addr[ 3 ], prom->mac_addr[ 4 ], + prom->mac_addr[ 5 ], prom->mac_addr[ 6 ], prom->mac_addr[ 7 ]); + + for (i = 0; i < ESI_LEN; i++) { + fore200e->esi[ i ] = fore200e->atm_dev->esi[ i ] = prom->mac_addr[ i + 2 ]; + } + + fore200e_kfree(prom); + + return 0; +} + + +static int __init +fore200e_alloc_rx_buf(struct fore200e* fore200e) +{ + int scheme, magn, nbr, size, i; + + struct host_bsq* bsq; + struct buffer* buffer; + + for (scheme = 0; scheme < BUFFER_SCHEME_NBR; scheme++) { + for (magn = 0; magn < BUFFER_MAGN_NBR; magn++) { + + bsq = &fore200e->host_bsq[ scheme ][ magn ]; + + nbr = fore200e_rx_buf_nbr[ scheme ][ magn ]; + size = fore200e_rx_buf_size[ scheme ][ magn ]; + + DPRINTK(2, "rx buffers %d / %d are being allocated\n", scheme, magn); + + /* allocate the array of receive buffers */ + buffer = bsq->buffer = fore200e_kmalloc(nbr * sizeof(struct buffer), GFP_KERNEL); + + if (buffer == NULL) + return -ENOMEM; + + bsq->freebuf = NULL; + + for (i = 0; i < nbr; i++) { + + buffer[ i ].scheme = scheme; + buffer[ i ].magn = magn; +#ifdef FORE200E_BSQ_DEBUG + buffer[ i ].index = i; + buffer[ i ].supplied = 0; +#endif + + /* allocate the receive buffer body */ + if 
(fore200e_chunk_alloc(fore200e, + &buffer[ i ].data, size, fore200e->bus->buffer_alignment, + DMA_FROM_DEVICE) < 0) { + + while (i > 0) + fore200e_chunk_free(fore200e, &buffer[ --i ].data); + fore200e_kfree(buffer); + + return -ENOMEM; + } + + /* insert the buffer into the free buffer list */ + buffer[ i ].next = bsq->freebuf; + bsq->freebuf = &buffer[ i ]; + } + /* all the buffers are free, initially */ + bsq->freebuf_count = nbr; + +#ifdef FORE200E_BSQ_DEBUG + bsq_audit(3, bsq, scheme, magn); +#endif + } + } + + fore200e->state = FORE200E_STATE_ALLOC_BUF; + return 0; +} + + +static int __init +fore200e_init_bs_queue(struct fore200e* fore200e) +{ + int scheme, magn, i; + + struct host_bsq* bsq; + struct cp_bsq_entry __iomem * cp_entry; + + for (scheme = 0; scheme < BUFFER_SCHEME_NBR; scheme++) { + for (magn = 0; magn < BUFFER_MAGN_NBR; magn++) { + + DPRINTK(2, "buffer supply queue %d / %d is being initialized\n", scheme, magn); + + bsq = &fore200e->host_bsq[ scheme ][ magn ]; + + /* allocate and align the array of status words */ + if (fore200e->bus->dma_chunk_alloc(fore200e, + &bsq->status, + sizeof(enum status), + QUEUE_SIZE_BS, + fore200e->bus->status_alignment) < 0) { + return -ENOMEM; + } + + /* allocate and align the array of receive buffer descriptors */ + if (fore200e->bus->dma_chunk_alloc(fore200e, + &bsq->rbd_block, + sizeof(struct rbd_block), + QUEUE_SIZE_BS, + fore200e->bus->descr_alignment) < 0) { + + fore200e->bus->dma_chunk_free(fore200e, &bsq->status); + return -ENOMEM; + } + + /* get the base address of the cp resident buffer supply queue entries */ + cp_entry = fore200e->virt_base + + fore200e->bus->read(&fore200e->cp_queues->cp_bsq[ scheme ][ magn ]); + + /* fill the host resident and cp resident buffer supply queue entries */ + for (i = 0; i < QUEUE_SIZE_BS; i++) { + + bsq->host_entry[ i ].status = + FORE200E_INDEX(bsq->status.align_addr, enum status, i); + bsq->host_entry[ i ].rbd_block = + FORE200E_INDEX(bsq->rbd_block.align_addr, struct 
rbd_block, i); + bsq->host_entry[ i ].rbd_block_dma = + FORE200E_DMA_INDEX(bsq->rbd_block.dma_addr, struct rbd_block, i); + bsq->host_entry[ i ].cp_entry = &cp_entry[ i ]; + + *bsq->host_entry[ i ].status = STATUS_FREE; + + fore200e->bus->write(FORE200E_DMA_INDEX(bsq->status.dma_addr, enum status, i), + &cp_entry[ i ].status_haddr); + } + } + } + + fore200e->state = FORE200E_STATE_INIT_BSQ; + return 0; +} + + +static int __init +fore200e_init_rx_queue(struct fore200e* fore200e) +{ + struct host_rxq* rxq = &fore200e->host_rxq; + struct cp_rxq_entry __iomem * cp_entry; + int i; + + DPRINTK(2, "receive queue is being initialized\n"); + + /* allocate and align the array of status words */ + if (fore200e->bus->dma_chunk_alloc(fore200e, + &rxq->status, + sizeof(enum status), + QUEUE_SIZE_RX, + fore200e->bus->status_alignment) < 0) { + return -ENOMEM; + } + + /* allocate and align the array of receive PDU descriptors */ + if (fore200e->bus->dma_chunk_alloc(fore200e, + &rxq->rpd, + sizeof(struct rpd), + QUEUE_SIZE_RX, + fore200e->bus->descr_alignment) < 0) { + + fore200e->bus->dma_chunk_free(fore200e, &rxq->status); + return -ENOMEM; + } + + /* get the base address of the cp resident rx queue entries */ + cp_entry = fore200e->virt_base + fore200e->bus->read(&fore200e->cp_queues->cp_rxq); + + /* fill the host resident and cp resident rx entries */ + for (i=0; i < QUEUE_SIZE_RX; i++) { + + rxq->host_entry[ i ].status = + FORE200E_INDEX(rxq->status.align_addr, enum status, i); + rxq->host_entry[ i ].rpd = + FORE200E_INDEX(rxq->rpd.align_addr, struct rpd, i); + rxq->host_entry[ i ].rpd_dma = + FORE200E_DMA_INDEX(rxq->rpd.dma_addr, struct rpd, i); + rxq->host_entry[ i ].cp_entry = &cp_entry[ i ]; + + *rxq->host_entry[ i ].status = STATUS_FREE; + + fore200e->bus->write(FORE200E_DMA_INDEX(rxq->status.dma_addr, enum status, i), + &cp_entry[ i ].status_haddr); + + fore200e->bus->write(FORE200E_DMA_INDEX(rxq->rpd.dma_addr, struct rpd, i), + &cp_entry[ i ].rpd_haddr); + } + + /* set 
the head entry of the queue */ + rxq->head = 0; + + fore200e->state = FORE200E_STATE_INIT_RXQ; + return 0; +} + + +static int __init +fore200e_init_tx_queue(struct fore200e* fore200e) +{ + struct host_txq* txq = &fore200e->host_txq; + struct cp_txq_entry __iomem * cp_entry; + int i; + + DPRINTK(2, "transmit queue is being initialized\n"); + + /* allocate and align the array of status words */ + if (fore200e->bus->dma_chunk_alloc(fore200e, + &txq->status, + sizeof(enum status), + QUEUE_SIZE_TX, + fore200e->bus->status_alignment) < 0) { + return -ENOMEM; + } + + /* allocate and align the array of transmit PDU descriptors */ + if (fore200e->bus->dma_chunk_alloc(fore200e, + &txq->tpd, + sizeof(struct tpd), + QUEUE_SIZE_TX, + fore200e->bus->descr_alignment) < 0) { + + fore200e->bus->dma_chunk_free(fore200e, &txq->status); + return -ENOMEM; + } + + /* get the base address of the cp resident tx queue entries */ + cp_entry = fore200e->virt_base + fore200e->bus->read(&fore200e->cp_queues->cp_txq); + + /* fill the host resident and cp resident tx entries */ + for (i=0; i < QUEUE_SIZE_TX; i++) { + + txq->host_entry[ i ].status = + FORE200E_INDEX(txq->status.align_addr, enum status, i); + txq->host_entry[ i ].tpd = + FORE200E_INDEX(txq->tpd.align_addr, struct tpd, i); + txq->host_entry[ i ].tpd_dma = + FORE200E_DMA_INDEX(txq->tpd.dma_addr, struct tpd, i); + txq->host_entry[ i ].cp_entry = &cp_entry[ i ]; + + *txq->host_entry[ i ].status = STATUS_FREE; + + fore200e->bus->write(FORE200E_DMA_INDEX(txq->status.dma_addr, enum status, i), + &cp_entry[ i ].status_haddr); + + /* although there is a one-to-one mapping of tx queue entries and tpds, + we do not write here the DMA (physical) base address of each tpd into + the related cp resident entry, because the cp relies on this write + operation to detect that a new pdu has been submitted for tx */ + } + + /* set the head and tail entries of the queue */ + txq->head = 0; + txq->tail = 0; + + fore200e->state = FORE200E_STATE_INIT_TXQ; 
+ return 0; +} + + +static int __init +fore200e_init_cmd_queue(struct fore200e* fore200e) +{ + struct host_cmdq* cmdq = &fore200e->host_cmdq; + struct cp_cmdq_entry __iomem * cp_entry; + int i; + + DPRINTK(2, "command queue is being initialized\n"); + + /* allocate and align the array of status words */ + if (fore200e->bus->dma_chunk_alloc(fore200e, + &cmdq->status, + sizeof(enum status), + QUEUE_SIZE_CMD, + fore200e->bus->status_alignment) < 0) { + return -ENOMEM; + } + + /* get the base address of the cp resident cmd queue entries */ + cp_entry = fore200e->virt_base + fore200e->bus->read(&fore200e->cp_queues->cp_cmdq); + + /* fill the host resident and cp resident cmd entries */ + for (i=0; i < QUEUE_SIZE_CMD; i++) { + + cmdq->host_entry[ i ].status = + FORE200E_INDEX(cmdq->status.align_addr, enum status, i); + cmdq->host_entry[ i ].cp_entry = &cp_entry[ i ]; + + *cmdq->host_entry[ i ].status = STATUS_FREE; + + fore200e->bus->write(FORE200E_DMA_INDEX(cmdq->status.dma_addr, enum status, i), + &cp_entry[ i ].status_haddr); + } + + /* set the head entry of the queue */ + cmdq->head = 0; + + fore200e->state = FORE200E_STATE_INIT_CMDQ; + return 0; +} + + +static void __init +fore200e_param_bs_queue(struct fore200e* fore200e, + enum buffer_scheme scheme, enum buffer_magn magn, + int queue_length, int pool_size, int supply_blksize) +{ + struct bs_spec __iomem * bs_spec = &fore200e->cp_queues->init.bs_spec[ scheme ][ magn ]; + + fore200e->bus->write(queue_length, &bs_spec->queue_length); + fore200e->bus->write(fore200e_rx_buf_size[ scheme ][ magn ], &bs_spec->buffer_size); + fore200e->bus->write(pool_size, &bs_spec->pool_size); + fore200e->bus->write(supply_blksize, &bs_spec->supply_blksize); +} + + +static int __init +fore200e_initialize(struct fore200e* fore200e) +{ + struct cp_queues __iomem * cpq; + int ok, scheme, magn; + + DPRINTK(2, "device %s being initialized\n", fore200e->name); + + init_MUTEX(&fore200e->rate_sf); + spin_lock_init(&fore200e->q_lock); + + cpq = 
fore200e->cp_queues = fore200e->virt_base + FORE200E_CP_QUEUES_OFFSET; + + /* enable cp to host interrupts */ + fore200e->bus->write(1, &cpq->imask); + + if (fore200e->bus->irq_enable) + fore200e->bus->irq_enable(fore200e); + + fore200e->bus->write(NBR_CONNECT, &cpq->init.num_connect); + + fore200e->bus->write(QUEUE_SIZE_CMD, &cpq->init.cmd_queue_len); + fore200e->bus->write(QUEUE_SIZE_RX, &cpq->init.rx_queue_len); + fore200e->bus->write(QUEUE_SIZE_TX, &cpq->init.tx_queue_len); + + fore200e->bus->write(RSD_EXTENSION, &cpq->init.rsd_extension); + fore200e->bus->write(TSD_EXTENSION, &cpq->init.tsd_extension); + + for (scheme = 0; scheme < BUFFER_SCHEME_NBR; scheme++) + for (magn = 0; magn < BUFFER_MAGN_NBR; magn++) + fore200e_param_bs_queue(fore200e, scheme, magn, + QUEUE_SIZE_BS, + fore200e_rx_buf_nbr[ scheme ][ magn ], + RBD_BLK_SIZE); + + /* issue the initialize command */ + fore200e->bus->write(STATUS_PENDING, &cpq->init.status); + fore200e->bus->write(OPCODE_INITIALIZE, &cpq->init.opcode); + + ok = fore200e_io_poll(fore200e, &cpq->init.status, STATUS_COMPLETE, 3000); + if (ok == 0) { + printk(FORE200E "device %s initialization failed\n", fore200e->name); + return -ENODEV; + } + + printk(FORE200E "device %s initialized\n", fore200e->name); + + fore200e->state = FORE200E_STATE_INITIALIZE; + return 0; +} + + +static void __init +fore200e_monitor_putc(struct fore200e* fore200e, char c) +{ + struct cp_monitor __iomem * monitor = fore200e->cp_monitor; + +#if 0 + printk("%c", c); +#endif + fore200e->bus->write(((u32) c) | FORE200E_CP_MONITOR_UART_AVAIL, &monitor->soft_uart.send); +} + + +static int __init +fore200e_monitor_getc(struct fore200e* fore200e) +{ + struct cp_monitor __iomem * monitor = fore200e->cp_monitor; + unsigned long timeout = jiffies + msecs_to_jiffies(50); + int c; + + while (time_before(jiffies, timeout)) { + + c = (int) fore200e->bus->read(&monitor->soft_uart.recv); + + if (c & FORE200E_CP_MONITOR_UART_AVAIL) { + + 
fore200e->bus->write(FORE200E_CP_MONITOR_UART_FREE, &monitor->soft_uart.recv); +#if 0 + printk("%c", c & 0xFF); +#endif + return c & 0xFF; + } + } + + return -1; +} + + +static void __init +fore200e_monitor_puts(struct fore200e* fore200e, char* str) +{ + while (*str) { + + /* the i960 monitor doesn't accept any new character if it has something to say */ + while (fore200e_monitor_getc(fore200e) >= 0); + + fore200e_monitor_putc(fore200e, *str++); + } + + while (fore200e_monitor_getc(fore200e) >= 0); +} + + +static int __init +fore200e_start_fw(struct fore200e* fore200e) +{ + int ok; + char cmd[ 48 ]; + struct fw_header* fw_header = (struct fw_header*) fore200e->bus->fw_data; + + DPRINTK(2, "device %s firmware being started\n", fore200e->name); + +#if defined(__sparc_v9__) + /* reported to be required by SBA cards on some sparc64 hosts */ + fore200e_spin(100); +#endif + + sprintf(cmd, "\rgo %x\r", le32_to_cpu(fw_header->start_offset)); + + fore200e_monitor_puts(fore200e, cmd); + + ok = fore200e_io_poll(fore200e, &fore200e->cp_monitor->bstat, BSTAT_CP_RUNNING, 1000); + if (ok == 0) { + printk(FORE200E "device %s firmware didn't start\n", fore200e->name); + return -ENODEV; + } + + printk(FORE200E "device %s firmware started\n", fore200e->name); + + fore200e->state = FORE200E_STATE_START_FW; + return 0; +} + + +static int __init +fore200e_load_fw(struct fore200e* fore200e) +{ + u32* fw_data = (u32*) fore200e->bus->fw_data; + u32 fw_size = (u32) *fore200e->bus->fw_size / sizeof(u32); + + struct fw_header* fw_header = (struct fw_header*) fw_data; + + u32 __iomem *load_addr = fore200e->virt_base + le32_to_cpu(fw_header->load_offset); + + DPRINTK(2, "device %s firmware being loaded at 0x%p (%d words)\n", + fore200e->name, load_addr, fw_size); + + if (le32_to_cpu(fw_header->magic) != FW_HEADER_MAGIC) { + printk(FORE200E "corrupted %s firmware image\n", fore200e->bus->model_name); + return -ENODEV; + } + + for (; fw_size--; fw_data++, load_addr++) + 
fore200e->bus->write(le32_to_cpu(*fw_data), load_addr); + + fore200e->state = FORE200E_STATE_LOAD_FW; + return 0; +} + + +static int __init +fore200e_register(struct fore200e* fore200e) +{ + struct atm_dev* atm_dev; + + DPRINTK(2, "device %s being registered\n", fore200e->name); + + atm_dev = atm_dev_register(fore200e->bus->proc_name, &fore200e_ops, -1, + NULL); + if (atm_dev == NULL) { + printk(FORE200E "unable to register device %s\n", fore200e->name); + return -ENODEV; + } + + atm_dev->dev_data = fore200e; + fore200e->atm_dev = atm_dev; + + atm_dev->ci_range.vpi_bits = FORE200E_VPI_BITS; + atm_dev->ci_range.vci_bits = FORE200E_VCI_BITS; + + fore200e->available_cell_rate = ATM_OC3_PCR; + + fore200e->state = FORE200E_STATE_REGISTER; + return 0; +} + + +static int __init +fore200e_init(struct fore200e* fore200e) +{ + if (fore200e_register(fore200e) < 0) + return -ENODEV; + + if (fore200e->bus->configure(fore200e) < 0) + return -ENODEV; + + if (fore200e->bus->map(fore200e) < 0) + return -ENODEV; + + if (fore200e_reset(fore200e, 1) < 0) + return -ENODEV; + + if (fore200e_load_fw(fore200e) < 0) + return -ENODEV; + + if (fore200e_start_fw(fore200e) < 0) + return -ENODEV; + + if (fore200e_initialize(fore200e) < 0) + return -ENODEV; + + if (fore200e_init_cmd_queue(fore200e) < 0) + return -ENOMEM; + + if (fore200e_init_tx_queue(fore200e) < 0) + return -ENOMEM; + + if (fore200e_init_rx_queue(fore200e) < 0) + return -ENOMEM; + + if (fore200e_init_bs_queue(fore200e) < 0) + return -ENOMEM; + + if (fore200e_alloc_rx_buf(fore200e) < 0) + return -ENOMEM; + + if (fore200e_get_esi(fore200e) < 0) + return -EIO; + + if (fore200e_irq_request(fore200e) < 0) + return -EBUSY; + + fore200e_supply(fore200e); + + /* all done, board initialization is now complete */ + fore200e->state = FORE200E_STATE_COMPLETE; + return 0; +} + + +static int __devinit +fore200e_pca_detect(struct pci_dev *pci_dev, const struct pci_device_id *pci_ent) +{ + const struct fore200e_bus* bus = (struct 
fore200e_bus*) pci_ent->driver_data;
+ struct fore200e* fore200e;
+ int err = 0;
+ static int index = 0;
+
+ if (pci_enable_device(pci_dev)) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ fore200e = fore200e_kmalloc(sizeof(struct fore200e), GFP_KERNEL);
+ if (fore200e == NULL) {
+ err = -ENOMEM;
+ goto out_disable;
+ }
+
+ fore200e->bus = bus;
+ fore200e->bus_dev = pci_dev;
+ fore200e->irq = pci_dev->irq;
+ fore200e->phys_base = pci_resource_start(pci_dev, 0);
+
+ pci_set_master(pci_dev);
+
+ printk(FORE200E "device %s found at 0x%lx, IRQ %s\n",
+ fore200e->bus->model_name,
+ fore200e->phys_base, fore200e_irq_itoa(fore200e->irq));
+
+ sprintf(fore200e->name, "%s-%d", bus->model_name, index);
+
+ err = fore200e_init(fore200e);
+ if (err < 0) {
+ fore200e_shutdown(fore200e);
+ goto out_free;
+ }
+
+ ++index;
+ pci_set_drvdata(pci_dev, fore200e);
+
+out:
+ return err;
+
+out_free:
+ kfree(fore200e);
+out_disable:
+ pci_disable_device(pci_dev);
+ goto out;
+}
+
+
+static void __devexit fore200e_pca_remove_one(struct pci_dev *pci_dev)
+{
+ struct fore200e *fore200e;
+
+ fore200e = pci_get_drvdata(pci_dev);
+
+ list_del(&fore200e->entry);
+
+ fore200e_shutdown(fore200e);
+ kfree(fore200e);
+ pci_disable_device(pci_dev);
+}
+
+
+#ifdef CONFIG_ATM_FORE200E_PCA
+static struct pci_device_id fore200e_pca_tbl[] = {
+ { PCI_VENDOR_ID_FORE, PCI_DEVICE_ID_FORE_PCA200E, PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, (unsigned long) &fore200e_bus[0] },
+ { 0, }
+};
+
+MODULE_DEVICE_TABLE(pci, fore200e_pca_tbl);
+
+static struct pci_driver fore200e_pca_driver = {
+ .name = "fore_200e",
+ .probe = fore200e_pca_detect,
+ .remove = __devexit_p(fore200e_pca_remove_one),
+ .id_table = fore200e_pca_tbl,
+};
+#endif
+
+
+static int __init
+fore200e_module_init(void)
+{
+ const struct fore200e_bus* bus;
+ struct fore200e* fore200e;
+ int index;
+
+ printk(FORE200E "FORE Systems 200E-series ATM driver - version " FORE200E_VERSION "\n");
+
+ /* for
each configured bus interface */ + for (bus = fore200e_bus; bus->model_name; bus++) { + + /* detect all boards present on that bus */ + for (index = 0; bus->detect && (fore200e = bus->detect(bus, index)); index++) { + + printk(FORE200E "device %s found at 0x%lx, IRQ %s\n", + fore200e->bus->model_name, + fore200e->phys_base, fore200e_irq_itoa(fore200e->irq)); + + sprintf(fore200e->name, "%s-%d", bus->model_name, index); + + if (fore200e_init(fore200e) < 0) { + + fore200e_shutdown(fore200e); + break; + } + + list_add(&fore200e->entry, &fore200e_boards); + } + } + +#ifdef CONFIG_ATM_FORE200E_PCA + if (!pci_module_init(&fore200e_pca_driver)) + return 0; +#endif + + if (!list_empty(&fore200e_boards)) + return 0; + + return -ENODEV; +} + + +static void __exit +fore200e_module_cleanup(void) +{ + struct fore200e *fore200e, *next; + +#ifdef CONFIG_ATM_FORE200E_PCA + pci_unregister_driver(&fore200e_pca_driver); +#endif + + list_for_each_entry_safe(fore200e, next, &fore200e_boards, entry) { + fore200e_shutdown(fore200e); + kfree(fore200e); + } + DPRINTK(1, "module being removed\n"); +} + + +static int +fore200e_proc_read(struct atm_dev *dev, loff_t* pos, char* page) +{ + struct fore200e* fore200e = FORE200E_DEV(dev); + struct fore200e_vcc* fore200e_vcc; + struct atm_vcc* vcc; + int i, len, left = *pos; + unsigned long flags; + + if (!left--) { + + if (fore200e_getstats(fore200e) < 0) + return -EIO; + + len = sprintf(page,"\n" + " device:\n" + " internal name:\t\t%s\n", fore200e->name); + + /* print bus-specific information */ + if (fore200e->bus->proc_read) + len += fore200e->bus->proc_read(fore200e, page + len); + + len += sprintf(page + len, + " interrupt line:\t\t%s\n" + " physical base address:\t0x%p\n" + " virtual base address:\t0x%p\n" + " factory address (ESI):\t%02x:%02x:%02x:%02x:%02x:%02x\n" + " board serial number:\t\t%d\n\n", + fore200e_irq_itoa(fore200e->irq), + (void*)fore200e->phys_base, + fore200e->virt_base, + fore200e->esi[0], fore200e->esi[1], 
fore200e->esi[2], + fore200e->esi[3], fore200e->esi[4], fore200e->esi[5], + fore200e->esi[4] * 256 + fore200e->esi[5]); + + return len; + } + + if (!left--) + return sprintf(page, + " free small bufs, scheme 1:\t%d\n" + " free large bufs, scheme 1:\t%d\n" + " free small bufs, scheme 2:\t%d\n" + " free large bufs, scheme 2:\t%d\n", + fore200e->host_bsq[ BUFFER_SCHEME_ONE ][ BUFFER_MAGN_SMALL ].freebuf_count, + fore200e->host_bsq[ BUFFER_SCHEME_ONE ][ BUFFER_MAGN_LARGE ].freebuf_count, + fore200e->host_bsq[ BUFFER_SCHEME_TWO ][ BUFFER_MAGN_SMALL ].freebuf_count, + fore200e->host_bsq[ BUFFER_SCHEME_TWO ][ BUFFER_MAGN_LARGE ].freebuf_count); + + if (!left--) { + u32 hb = fore200e->bus->read(&fore200e->cp_queues->heartbeat); + + len = sprintf(page,"\n\n" + " cell processor:\n" + " heartbeat state:\t\t"); + + if (hb >> 16 != 0xDEAD) + len += sprintf(page + len, "0x%08x\n", hb); + else + len += sprintf(page + len, "*** FATAL ERROR %04x ***\n", hb & 0xFFFF); + + return len; + } + + if (!left--) { + static const char* media_name[] = { + "unshielded twisted pair", + "multimode optical fiber ST", + "multimode optical fiber SC", + "single-mode optical fiber ST", + "single-mode optical fiber SC", + "unknown" + }; + + static const char* oc3_mode[] = { + "normal operation", + "diagnostic loopback", + "line loopback", + "unknown" + }; + + u32 fw_release = fore200e->bus->read(&fore200e->cp_queues->fw_release); + u32 mon960_release = fore200e->bus->read(&fore200e->cp_queues->mon960_release); + u32 oc3_revision = fore200e->bus->read(&fore200e->cp_queues->oc3_revision); + u32 media_index = FORE200E_MEDIA_INDEX(fore200e->bus->read(&fore200e->cp_queues->media_type)); + u32 oc3_index; + + if ((media_index < 0) || (media_index > 4)) + media_index = 5; + + switch (fore200e->loop_mode) { + case ATM_LM_NONE: oc3_index = 0; + break; + case ATM_LM_LOC_PHY: oc3_index = 1; + break; + case ATM_LM_RMT_PHY: oc3_index = 2; + break; + default: oc3_index = 3; + } + + return sprintf(page, + " firmware 
release:\t\t%d.%d.%d\n" + " monitor release:\t\t%d.%d\n" + " media type:\t\t\t%s\n" + " OC-3 revision:\t\t0x%x\n" + " OC-3 mode:\t\t\t%s", + fw_release >> 16, fw_release << 16 >> 24, fw_release << 24 >> 24, + mon960_release >> 16, mon960_release << 16 >> 16, + media_name[ media_index ], + oc3_revision, + oc3_mode[ oc3_index ]); + } + + if (!left--) { + struct cp_monitor __iomem * cp_monitor = fore200e->cp_monitor; + + return sprintf(page, + "\n\n" + " monitor:\n" + " version number:\t\t%d\n" + " boot status word:\t\t0x%08x\n", + fore200e->bus->read(&cp_monitor->mon_version), + fore200e->bus->read(&cp_monitor->bstat)); + } + + if (!left--) + return sprintf(page, + "\n" + " device statistics:\n" + " 4b5b:\n" + " crc_header_errors:\t\t%10u\n" + " framing_errors:\t\t%10u\n", + fore200e_swap(fore200e->stats->phy.crc_header_errors), + fore200e_swap(fore200e->stats->phy.framing_errors)); + + if (!left--) + return sprintf(page, "\n" + " OC-3:\n" + " section_bip8_errors:\t%10u\n" + " path_bip8_errors:\t\t%10u\n" + " line_bip24_errors:\t\t%10u\n" + " line_febe_errors:\t\t%10u\n" + " path_febe_errors:\t\t%10u\n" + " corr_hcs_errors:\t\t%10u\n" + " ucorr_hcs_errors:\t\t%10u\n", + fore200e_swap(fore200e->stats->oc3.section_bip8_errors), + fore200e_swap(fore200e->stats->oc3.path_bip8_errors), + fore200e_swap(fore200e->stats->oc3.line_bip24_errors), + fore200e_swap(fore200e->stats->oc3.line_febe_errors), + fore200e_swap(fore200e->stats->oc3.path_febe_errors), + fore200e_swap(fore200e->stats->oc3.corr_hcs_errors), + fore200e_swap(fore200e->stats->oc3.ucorr_hcs_errors)); + + if (!left--) + return sprintf(page,"\n" + " ATM:\t\t\t\t cells\n" + " TX:\t\t\t%10u\n" + " RX:\t\t\t%10u\n" + " vpi out of range:\t\t%10u\n" + " vpi no conn:\t\t%10u\n" + " vci out of range:\t\t%10u\n" + " vci no conn:\t\t%10u\n", + fore200e_swap(fore200e->stats->atm.cells_transmitted), + fore200e_swap(fore200e->stats->atm.cells_received), + fore200e_swap(fore200e->stats->atm.vpi_bad_range), + 
fore200e_swap(fore200e->stats->atm.vpi_no_conn), + fore200e_swap(fore200e->stats->atm.vci_bad_range), + fore200e_swap(fore200e->stats->atm.vci_no_conn)); + + if (!left--) + return sprintf(page,"\n" + " AAL0:\t\t\t cells\n" + " TX:\t\t\t%10u\n" + " RX:\t\t\t%10u\n" + " dropped:\t\t\t%10u\n", + fore200e_swap(fore200e->stats->aal0.cells_transmitted), + fore200e_swap(fore200e->stats->aal0.cells_received), + fore200e_swap(fore200e->stats->aal0.cells_dropped)); + + if (!left--) + return sprintf(page,"\n" + " AAL3/4:\n" + " SAR sublayer:\t\t cells\n" + " TX:\t\t\t%10u\n" + " RX:\t\t\t%10u\n" + " dropped:\t\t\t%10u\n" + " CRC errors:\t\t%10u\n" + " protocol errors:\t\t%10u\n\n" + " CS sublayer:\t\t PDUs\n" + " TX:\t\t\t%10u\n" + " RX:\t\t\t%10u\n" + " dropped:\t\t\t%10u\n" + " protocol errors:\t\t%10u\n", + fore200e_swap(fore200e->stats->aal34.cells_transmitted), + fore200e_swap(fore200e->stats->aal34.cells_received), + fore200e_swap(fore200e->stats->aal34.cells_dropped), + fore200e_swap(fore200e->stats->aal34.cells_crc_errors), + fore200e_swap(fore200e->stats->aal34.cells_protocol_errors), + fore200e_swap(fore200e->stats->aal34.cspdus_transmitted), + fore200e_swap(fore200e->stats->aal34.cspdus_received), + fore200e_swap(fore200e->stats->aal34.cspdus_dropped), + fore200e_swap(fore200e->stats->aal34.cspdus_protocol_errors)); + + if (!left--) + return sprintf(page,"\n" + " AAL5:\n" + " SAR sublayer:\t\t cells\n" + " TX:\t\t\t%10u\n" + " RX:\t\t\t%10u\n" + " dropped:\t\t\t%10u\n" + " congestions:\t\t%10u\n\n" + " CS sublayer:\t\t PDUs\n" + " TX:\t\t\t%10u\n" + " RX:\t\t\t%10u\n" + " dropped:\t\t\t%10u\n" + " CRC errors:\t\t%10u\n" + " protocol errors:\t\t%10u\n", + fore200e_swap(fore200e->stats->aal5.cells_transmitted), + fore200e_swap(fore200e->stats->aal5.cells_received), + fore200e_swap(fore200e->stats->aal5.cells_dropped), + fore200e_swap(fore200e->stats->aal5.congestion_experienced), + fore200e_swap(fore200e->stats->aal5.cspdus_transmitted), + 
fore200e_swap(fore200e->stats->aal5.cspdus_received), + fore200e_swap(fore200e->stats->aal5.cspdus_dropped), + fore200e_swap(fore200e->stats->aal5.cspdus_crc_errors), + fore200e_swap(fore200e->stats->aal5.cspdus_protocol_errors)); + + if (!left--) + return sprintf(page,"\n" + " AUX:\t\t allocation failures\n" + " small b1:\t\t\t%10u\n" + " large b1:\t\t\t%10u\n" + " small b2:\t\t\t%10u\n" + " large b2:\t\t\t%10u\n" + " RX PDUs:\t\t\t%10u\n" + " TX PDUs:\t\t\t%10lu\n", + fore200e_swap(fore200e->stats->aux.small_b1_failed), + fore200e_swap(fore200e->stats->aux.large_b1_failed), + fore200e_swap(fore200e->stats->aux.small_b2_failed), + fore200e_swap(fore200e->stats->aux.large_b2_failed), + fore200e_swap(fore200e->stats->aux.rpd_alloc_failed), + fore200e->tx_sat); + + if (!left--) + return sprintf(page,"\n" + " receive carrier:\t\t\t%s\n", + fore200e->stats->aux.receive_carrier ? "ON" : "OFF!"); + + if (!left--) { + return sprintf(page,"\n" + " VCCs:\n address VPI VCI AAL " + "TX PDUs TX min/max size RX PDUs RX min/max size\n"); + } + + for (i = 0; i < NBR_CONNECT; i++) { + + vcc = fore200e->vc_map[i].vcc; + + if (vcc == NULL) + continue; + + spin_lock_irqsave(&fore200e->q_lock, flags); + + if (vcc && test_bit(ATM_VF_READY, &vcc->flags) && !left--) { + + fore200e_vcc = FORE200E_VCC(vcc); + ASSERT(fore200e_vcc); + + len = sprintf(page, + " %08x %03d %05d %1d %09lu %05d/%05d %09lu %05d/%05d\n", + (u32)(unsigned long)vcc, + vcc->vpi, vcc->vci, fore200e_atm2fore_aal(vcc->qos.aal), + fore200e_vcc->tx_pdu, + fore200e_vcc->tx_min_pdu > 0xFFFF ? 0 : fore200e_vcc->tx_min_pdu, + fore200e_vcc->tx_max_pdu, + fore200e_vcc->rx_pdu, + fore200e_vcc->rx_min_pdu > 0xFFFF ? 
0 : fore200e_vcc->rx_min_pdu, + fore200e_vcc->rx_max_pdu); + + spin_unlock_irqrestore(&fore200e->q_lock, flags); + return len; + } + + spin_unlock_irqrestore(&fore200e->q_lock, flags); + } + + return 0; +} + +module_init(fore200e_module_init); +module_exit(fore200e_module_cleanup); + + +static const struct atmdev_ops fore200e_ops = +{ + .open = fore200e_open, + .close = fore200e_close, + .ioctl = fore200e_ioctl, + .getsockopt = fore200e_getsockopt, + .setsockopt = fore200e_setsockopt, + .send = fore200e_send, + .change_qos = fore200e_change_qos, + .proc_read = fore200e_proc_read, + .owner = THIS_MODULE +}; + + +#ifdef CONFIG_ATM_FORE200E_PCA +extern const unsigned char _fore200e_pca_fw_data[]; +extern const unsigned int _fore200e_pca_fw_size; +#endif +#ifdef CONFIG_ATM_FORE200E_SBA +extern const unsigned char _fore200e_sba_fw_data[]; +extern const unsigned int _fore200e_sba_fw_size; +#endif + +static const struct fore200e_bus fore200e_bus[] = { +#ifdef CONFIG_ATM_FORE200E_PCA + { "PCA-200E", "pca200e", 32, 4, 32, + _fore200e_pca_fw_data, &_fore200e_pca_fw_size, + fore200e_pca_read, + fore200e_pca_write, + fore200e_pca_dma_map, + fore200e_pca_dma_unmap, + fore200e_pca_dma_sync_for_cpu, + fore200e_pca_dma_sync_for_device, + fore200e_pca_dma_chunk_alloc, + fore200e_pca_dma_chunk_free, + NULL, + fore200e_pca_configure, + fore200e_pca_map, + fore200e_pca_reset, + fore200e_pca_prom_read, + fore200e_pca_unmap, + NULL, + fore200e_pca_irq_check, + fore200e_pca_irq_ack, + fore200e_pca_proc_read, + }, +#endif +#ifdef CONFIG_ATM_FORE200E_SBA + { "SBA-200E", "sba200e", 32, 64, 32, + _fore200e_sba_fw_data, &_fore200e_sba_fw_size, + fore200e_sba_read, + fore200e_sba_write, + fore200e_sba_dma_map, + fore200e_sba_dma_unmap, + fore200e_sba_dma_sync_for_cpu, + fore200e_sba_dma_sync_for_device, + fore200e_sba_dma_chunk_alloc, + fore200e_sba_dma_chunk_free, + fore200e_sba_detect, + fore200e_sba_configure, + fore200e_sba_map, + fore200e_sba_reset, + fore200e_sba_prom_read, + 
fore200e_sba_unmap, + fore200e_sba_irq_enable, + fore200e_sba_irq_check, + fore200e_sba_irq_ack, + fore200e_sba_proc_read, + }, +#endif + {} +}; + +#ifdef MODULE_LICENSE +MODULE_LICENSE("GPL"); +#endif diff --git a/drivers/atm/fore200e.h b/drivers/atm/fore200e.h new file mode 100644 index 000000000000..2558eb853235 --- /dev/null +++ b/drivers/atm/fore200e.h @@ -0,0 +1,985 @@ +/* $Id: fore200e.h,v 1.4 2000/04/14 10:10:34 davem Exp $ */ +#ifndef _FORE200E_H +#define _FORE200E_H + +#ifdef __KERNEL__ +#include <linux/config.h> + +/* rx buffer sizes */ + +#define SMALL_BUFFER_SIZE 384 /* size of small buffers (multiple of 48 (PCA) and 64 (SBA) bytes) */ +#define LARGE_BUFFER_SIZE 4032 /* size of large buffers (multiple of 48 (PCA) and 64 (SBA) bytes) */ + + +#define RBD_BLK_SIZE 32 /* nbr of supplied rx buffers per rbd */ + + +#define MAX_PDU_SIZE 65535 /* maximum PDU size supported by AALs */ + + +#define BUFFER_S1_SIZE SMALL_BUFFER_SIZE /* size of small buffers, scheme 1 */ +#define BUFFER_L1_SIZE LARGE_BUFFER_SIZE /* size of large buffers, scheme 1 */ + +#define BUFFER_S2_SIZE SMALL_BUFFER_SIZE /* size of small buffers, scheme 2 */ +#define BUFFER_L2_SIZE LARGE_BUFFER_SIZE /* size of large buffers, scheme 2 */ + +#define BUFFER_S1_NBR (RBD_BLK_SIZE * 6) +#define BUFFER_L1_NBR (RBD_BLK_SIZE * 4) + +#define BUFFER_S2_NBR (RBD_BLK_SIZE * 6) +#define BUFFER_L2_NBR (RBD_BLK_SIZE * 4) + + +#define QUEUE_SIZE_CMD 16 /* command queue capacity */ +#define QUEUE_SIZE_RX 64 /* receive queue capacity */ +#define QUEUE_SIZE_TX 256 /* transmit queue capacity */ +#define QUEUE_SIZE_BS 32 /* buffer supply queue capacity */ + +#define FORE200E_VPI_BITS 0 +#define FORE200E_VCI_BITS 10 +#define NBR_CONNECT (1 << (FORE200E_VPI_BITS + FORE200E_VCI_BITS)) /* number of connections */ + + +#define TSD_FIXED 2 +#define TSD_EXTENSION 0 +#define TSD_NBR (TSD_FIXED + TSD_EXTENSION) + + +/* the cp starts putting a received PDU into one *small* buffer, + then it uses a number of *large* buffers 
for the trailing data. + we compute here the total number of receive segment descriptors + required to hold the largest possible PDU */ + +#define RSD_REQUIRED (((MAX_PDU_SIZE - SMALL_BUFFER_SIZE + LARGE_BUFFER_SIZE) / LARGE_BUFFER_SIZE) + 1) + +#define RSD_FIXED 3 + +/* RSD_REQUIRED receive segment descriptors are enough to describe a max-sized PDU, + but we have to keep the size of the receive PDU descriptor a multiple of 32 bytes, + so we add one extra RSD to RSD_EXTENSION + (WARNING: THIS MAY CHANGE IF BUFFER SIZES ARE MODIFIED) */ + +#define RSD_EXTENSION ((RSD_REQUIRED - RSD_FIXED) + 1) +#define RSD_NBR (RSD_FIXED + RSD_EXTENSION) + + +#define FORE200E_DEV(d) ((struct fore200e*)((d)->dev_data)) +#define FORE200E_VCC(d) ((struct fore200e_vcc*)((d)->dev_data)) + +/* bitfields endian games */ + +#if defined(__LITTLE_ENDIAN_BITFIELD) +#define BITFIELD2(b1, b2) b1; b2; +#define BITFIELD3(b1, b2, b3) b1; b2; b3; +#define BITFIELD4(b1, b2, b3, b4) b1; b2; b3; b4; +#define BITFIELD5(b1, b2, b3, b4, b5) b1; b2; b3; b4; b5; +#define BITFIELD6(b1, b2, b3, b4, b5, b6) b1; b2; b3; b4; b5; b6; +#elif defined(__BIG_ENDIAN_BITFIELD) +#define BITFIELD2(b1, b2) b2; b1; +#define BITFIELD3(b1, b2, b3) b3; b2; b1; +#define BITFIELD4(b1, b2, b3, b4) b4; b3; b2; b1; +#define BITFIELD5(b1, b2, b3, b4, b5) b5; b4; b3; b2; b1; +#define BITFIELD6(b1, b2, b3, b4, b5, b6) b6; b5; b4; b3; b2; b1; +#else +#error unknown bitfield endianness +#endif + + +/* ATM cell header (minus HEC byte) */ + +typedef struct atm_header { + BITFIELD5( + u32 clp : 1, /* cell loss priority */ + u32 plt : 3, /* payload type */ + u32 vci : 16, /* virtual channel identifier */ + u32 vpi : 8, /* virtual path identifier */ + u32 gfc : 4 /* generic flow control */ + ) +} atm_header_t; + + +/* ATM adaptation layer id */ + +typedef enum fore200e_aal { + FORE200E_AAL0 = 0, + FORE200E_AAL34 = 4, + FORE200E_AAL5 = 5, +} fore200e_aal_t; + + +/* transmit PDU descriptor specification */ + +typedef struct tpd_spec { + 
BITFIELD4( + u32 length : 16, /* total PDU length */ + u32 nseg : 8, /* number of transmit segments */ + enum fore200e_aal aal : 4, /* adaptation layer */ + u32 intr : 4 /* interrupt requested */ + ) +} tpd_spec_t; + + +/* transmit PDU rate control */ + +typedef struct tpd_rate +{ + BITFIELD2( + u32 idle_cells : 16, /* number of idle cells to insert */ + u32 data_cells : 16 /* number of data cells to transmit */ + ) +} tpd_rate_t; + + +/* transmit segment descriptor */ + +typedef struct tsd { + u32 buffer; /* transmit buffer DMA address */ + u32 length; /* number of bytes in buffer */ +} tsd_t; + + +/* transmit PDU descriptor */ + +typedef struct tpd { + struct atm_header atm_header; /* ATM header minus HEC byte */ + struct tpd_spec spec; /* tpd specification */ + struct tpd_rate rate; /* tpd rate control */ + u32 pad; /* reserved */ + struct tsd tsd[ TSD_NBR ]; /* transmit segment descriptors */ +} tpd_t; + + +/* receive segment descriptor */ + +typedef struct rsd { + u32 handle; /* host supplied receive buffer handle */ + u32 length; /* number of bytes in buffer */ +} rsd_t; + + +/* receive PDU descriptor */ + +typedef struct rpd { + struct atm_header atm_header; /* ATM header minus HEC byte */ + u32 nseg; /* number of receive segments */ + struct rsd rsd[ RSD_NBR ]; /* receive segment descriptors */ +} rpd_t; + + +/* buffer scheme */ + +typedef enum buffer_scheme { + BUFFER_SCHEME_ONE, + BUFFER_SCHEME_TWO, + BUFFER_SCHEME_NBR /* always last */ +} buffer_scheme_t; + + +/* buffer magnitude */ + +typedef enum buffer_magn { + BUFFER_MAGN_SMALL, + BUFFER_MAGN_LARGE, + BUFFER_MAGN_NBR /* always last */ +} buffer_magn_t; + + +/* receive buffer descriptor */ + +typedef struct rbd { + u32 handle; /* host supplied handle */ + u32 buffer_haddr; /* host DMA address of host buffer */ +} rbd_t; + + +/* receive buffer descriptor block */ + +typedef struct rbd_block { + struct rbd rbd[ RBD_BLK_SIZE ]; /* receive buffer descriptor */ +} rbd_block_t; + + +/* tpd DMA address */ + 
+typedef struct tpd_haddr { + BITFIELD3( + u32 size : 4, /* tpd size expressed in 32 byte blocks */ + u32 pad : 1, /* reserved */ + u32 haddr : 27 /* tpd DMA addr aligned on 32 byte boundary */ + ) +} tpd_haddr_t; + +#define TPD_HADDR_SHIFT 5 /* addr aligned on 32 byte boundary */ + +/* cp resident transmit queue entry */ + +typedef struct cp_txq_entry { + struct tpd_haddr tpd_haddr; /* host DMA address of tpd */ + u32 status_haddr; /* host DMA address of completion status */ +} cp_txq_entry_t; + + +/* cp resident receive queue entry */ + +typedef struct cp_rxq_entry { + u32 rpd_haddr; /* host DMA address of rpd */ + u32 status_haddr; /* host DMA address of completion status */ +} cp_rxq_entry_t; + + +/* cp resident buffer supply queue entry */ + +typedef struct cp_bsq_entry { + u32 rbd_block_haddr; /* host DMA address of rbd block */ + u32 status_haddr; /* host DMA address of completion status */ +} cp_bsq_entry_t; + + +/* completion status */ + +typedef volatile enum status { + STATUS_PENDING = (1<<0), /* initial status (written by host) */ + STATUS_COMPLETE = (1<<1), /* completion status (written by cp) */ + STATUS_FREE = (1<<2), /* initial status (written by host) */ + STATUS_ERROR = (1<<3) /* completion status (written by cp) */ +} status_t; + + +/* cp operation code */ + +typedef enum opcode { + OPCODE_INITIALIZE = 1, /* initialize board */ + OPCODE_ACTIVATE_VCIN, /* activate incoming VCI */ + OPCODE_ACTIVATE_VCOUT, /* activate outgoing VCI */ + OPCODE_DEACTIVATE_VCIN, /* deactivate incoming VCI */ + OPCODE_DEACTIVATE_VCOUT, /* deactivate outgoing VCI */ + OPCODE_GET_STATS, /* get board statistics */ + OPCODE_SET_OC3, /* set OC-3 registers */ + OPCODE_GET_OC3, /* get OC-3 registers */ + OPCODE_RESET_STATS, /* reset board statistics */ + OPCODE_GET_PROM, /* get expansion PROM data (PCI specific) */ + OPCODE_SET_VPI_BITS, /* set x bits of those decoded by the + firmware to be low order bits from + the VPI field of the ATM cell header */ + OPCODE_REQUEST_INTR = 
(1<<7) /* request interrupt */ +} opcode_t; + + +/* virtual path / virtual channel identifiers */ + +typedef struct vpvc { + BITFIELD3( + u32 vci : 16, /* virtual channel identifier */ + u32 vpi : 8, /* virtual path identifier */ + u32 pad : 8 /* reserved */ + ) +} vpvc_t; + + +/* activate VC command opcode */ + +typedef struct activate_opcode { + BITFIELD4( + enum opcode opcode : 8, /* cp opcode */ + enum fore200e_aal aal : 8, /* adaptation layer */ + enum buffer_scheme scheme : 8, /* buffer scheme */ + u32 pad : 8 /* reserved */ + ) +} activate_opcode_t; + + +/* activate VC command block */ + +typedef struct activate_block { + struct activate_opcode opcode; /* activate VC command opcode */ + struct vpvc vpvc; /* VPI/VCI */ + u32 mtu; /* for AAL0 only */ + +} activate_block_t; + + +/* deactivate VC command opcode */ + +typedef struct deactivate_opcode { + BITFIELD2( + enum opcode opcode : 8, /* cp opcode */ + u32 pad : 24 /* reserved */ + ) +} deactivate_opcode_t; + + +/* deactivate VC command block */ + +typedef struct deactivate_block { + struct deactivate_opcode opcode; /* deactivate VC command opcode */ + struct vpvc vpvc; /* VPI/VCI */ +} deactivate_block_t; + + +/* OC-3 registers */ + +typedef struct oc3_regs { + u32 reg[ 128 ]; /* see the PMC Sierra PC5346 S/UNI-155-Lite + Saturn User Network Interface documentation + for a description of the OC-3 chip registers */ +} oc3_regs_t; + + +/* set/get OC-3 regs command opcode */ + +typedef struct oc3_opcode { + BITFIELD4( + enum opcode opcode : 8, /* cp opcode */ + u32 reg : 8, /* register index */ + u32 value : 8, /* register value */ + u32 mask : 8 /* register mask that specifies which + bits of the register value field + are significant */ + ) +} oc3_opcode_t; + + +/* set/get OC-3 regs command block */ + +typedef struct oc3_block { + struct oc3_opcode opcode; /* set/get OC-3 regs command opcode */ + u32 regs_haddr; /* host DMA address of OC-3 regs buffer */ +} oc3_block_t; + + +/* physical encoding statistics 
*/ + +typedef struct stats_phy { + u32 crc_header_errors; /* cells received with bad header CRC */ + u32 framing_errors; /* cells received with bad framing */ + u32 pad[ 2 ]; /* i960 padding */ +} stats_phy_t; + + +/* OC-3 statistics */ + +typedef struct stats_oc3 { + u32 section_bip8_errors; /* section 8 bit interleaved parity */ + u32 path_bip8_errors; /* path 8 bit interleaved parity */ + u32 line_bip24_errors; /* line 24 bit interleaved parity */ + u32 line_febe_errors; /* line far end block errors */ + u32 path_febe_errors; /* path far end block errors */ + u32 corr_hcs_errors; /* correctable header check sequence */ + u32 ucorr_hcs_errors; /* uncorrectable header check sequence */ + u32 pad[ 1 ]; /* i960 padding */ +} stats_oc3_t; + + +/* ATM statistics */ + +typedef struct stats_atm { + u32 cells_transmitted; /* cells transmitted */ + u32 cells_received; /* cells received */ + u32 vpi_bad_range; /* cell drops: VPI out of range */ + u32 vpi_no_conn; /* cell drops: no connection for VPI */ + u32 vci_bad_range; /* cell drops: VCI out of range */ + u32 vci_no_conn; /* cell drops: no connection for VCI */ + u32 pad[ 2 ]; /* i960 padding */ +} stats_atm_t; + +/* AAL0 statistics */ + +typedef struct stats_aal0 { + u32 cells_transmitted; /* cells transmitted */ + u32 cells_received; /* cells received */ + u32 cells_dropped; /* cells dropped */ + u32 pad[ 1 ]; /* i960 padding */ +} stats_aal0_t; + + +/* AAL3/4 statistics */ + +typedef struct stats_aal34 { + u32 cells_transmitted; /* cells transmitted from segmented PDUs */ + u32 cells_received; /* cells reassembled into PDUs */ + u32 cells_crc_errors; /* payload CRC error count */ + u32 cells_protocol_errors; /* SAR or CS layer protocol errors */ + u32 cells_dropped; /* cells dropped: partial reassembly */ + u32 cspdus_transmitted; /* CS PDUs transmitted */ + u32 cspdus_received; /* CS PDUs received */ + u32 cspdus_protocol_errors; /* CS layer protocol errors */ + u32 cspdus_dropped; /* reassembled PDUs dropped (in 
cells) */ + u32 pad[ 3 ]; /* i960 padding */ +} stats_aal34_t; + + +/* AAL5 statistics */ + +typedef struct stats_aal5 { + u32 cells_transmitted; /* cells transmitted from segmented SDUs */ + u32 cells_received; /* cells reassembled into SDUs */ + u32 cells_dropped; /* reassembled PDUs dropped (in cells) */ + u32 congestion_experienced; /* CRC error and length wrong */ + u32 cspdus_transmitted; /* CS PDUs transmitted */ + u32 cspdus_received; /* CS PDUs received */ + u32 cspdus_crc_errors; /* CS PDUs CRC errors */ + u32 cspdus_protocol_errors; /* CS layer protocol errors */ + u32 cspdus_dropped; /* reassembled PDUs dropped */ + u32 pad[ 3 ]; /* i960 padding */ +} stats_aal5_t; + + +/* auxiliary statistics */ + +typedef struct stats_aux { + u32 small_b1_failed; /* receive BD allocation failures */ + u32 large_b1_failed; /* receive BD allocation failures */ + u32 small_b2_failed; /* receive BD allocation failures */ + u32 large_b2_failed; /* receive BD allocation failures */ + u32 rpd_alloc_failed; /* receive PDU allocation failures */ + u32 receive_carrier; /* no carrier = 0, carrier = 1 */ + u32 pad[ 2 ]; /* i960 padding */ +} stats_aux_t; + + +/* whole statistics buffer */ + +typedef struct stats { + struct stats_phy phy; /* physical encoding statistics */ + struct stats_oc3 oc3; /* OC-3 statistics */ + struct stats_atm atm; /* ATM statistics */ + struct stats_aal0 aal0; /* AAL0 statistics */ + struct stats_aal34 aal34; /* AAL3/4 statistics */ + struct stats_aal5 aal5; /* AAL5 statistics */ + struct stats_aux aux; /* auxiliary statistics */ +} stats_t; + + +/* get statistics command opcode */ + +typedef struct stats_opcode { + BITFIELD2( + enum opcode opcode : 8, /* cp opcode */ + u32 pad : 24 /* reserved */ + ) +} stats_opcode_t; + + +/* get statistics command block */ + +typedef struct stats_block { + struct stats_opcode opcode; /* get statistics command opcode */ + u32 stats_haddr; /* host DMA address of stats buffer */ +} stats_block_t; + + +/* expansion PROM 
data (PCI specific) */ + +typedef struct prom_data { + u32 hw_revision; /* hardware revision */ + u32 serial_number; /* board serial number */ + u8 mac_addr[ 8 ]; /* board MAC address */ +} prom_data_t; + + +/* get expansion PROM data command opcode */ + +typedef struct prom_opcode { + BITFIELD2( + enum opcode opcode : 8, /* cp opcode */ + u32 pad : 24 /* reserved */ + ) +} prom_opcode_t; + + +/* get expansion PROM data command block */ + +typedef struct prom_block { + struct prom_opcode opcode; /* get PROM data command opcode */ + u32 prom_haddr; /* host DMA address of PROM buffer */ +} prom_block_t; + + +/* cp command */ + +typedef union cmd { + enum opcode opcode; /* operation code */ + struct activate_block activate_block; /* activate VC */ + struct deactivate_block deactivate_block; /* deactivate VC */ + struct stats_block stats_block; /* get statistics */ + struct prom_block prom_block; /* get expansion PROM data */ + struct oc3_block oc3_block; /* get/set OC-3 registers */ + u32 pad[ 4 ]; /* i960 padding */ +} cmd_t; + + +/* cp resident command queue */ + +typedef struct cp_cmdq_entry { + union cmd cmd; /* command */ + u32 status_haddr; /* host DMA address of completion status */ + u32 pad[ 3 ]; /* i960 padding */ +} cp_cmdq_entry_t; + + +/* host resident transmit queue entry */ + +typedef struct host_txq_entry { + struct cp_txq_entry __iomem *cp_entry; /* addr of cp resident tx queue entry */ + enum status* status; /* addr of host resident status */ + struct tpd* tpd; /* addr of transmit PDU descriptor */ + u32 tpd_dma; /* DMA address of tpd */ + struct sk_buff* skb; /* related skb */ + void* data; /* copy of misaligned data */ + unsigned long incarn; /* vc_map incarnation when submitted for tx */ + struct fore200e_vc_map* vc_map; + +} host_txq_entry_t; + + +/* host resident receive queue entry */ + +typedef struct host_rxq_entry { + struct cp_rxq_entry __iomem *cp_entry; /* addr of cp resident rx queue entry */ + enum status* status; /* addr of host 
resident status */ + struct rpd* rpd; /* addr of receive PDU descriptor */ + u32 rpd_dma; /* DMA address of rpd */ +} host_rxq_entry_t; + + +/* host resident buffer supply queue entry */ + +typedef struct host_bsq_entry { + struct cp_bsq_entry __iomem *cp_entry; /* addr of cp resident buffer supply queue entry */ + enum status* status; /* addr of host resident status */ + struct rbd_block* rbd_block; /* addr of receive buffer descriptor block */ + u32 rbd_block_dma; /* DMA address of rbd block */ +} host_bsq_entry_t; + + +/* host resident command queue entry */ + +typedef struct host_cmdq_entry { + struct cp_cmdq_entry __iomem *cp_entry; /* addr of cp resident cmd queue entry */ + enum status *status; /* addr of host resident status */ +} host_cmdq_entry_t; + + +/* chunk of memory */ + +typedef struct chunk { + void* alloc_addr; /* base address of allocated chunk */ + void* align_addr; /* base address of aligned chunk */ + dma_addr_t dma_addr; /* DMA address of aligned chunk */ + int direction; /* direction of DMA mapping */ + u32 alloc_size; /* length of allocated chunk */ + u32 align_size; /* length of aligned chunk */ +} chunk_t; + +#define dma_size align_size /* DMA usable size */ + + +/* host resident receive buffer */ + +typedef struct buffer { + struct buffer* next; /* next receive buffer */ + enum buffer_scheme scheme; /* buffer scheme */ + enum buffer_magn magn; /* buffer magnitude */ + struct chunk data; /* data buffer */ +#ifdef FORE200E_BSQ_DEBUG + unsigned long index; /* buffer # in queue */ + int supplied; /* 'buffer supplied' flag */ +#endif +} buffer_t; + + +#if (BITS_PER_LONG == 32) +#define FORE200E_BUF2HDL(buffer) ((u32)(buffer)) +#define FORE200E_HDL2BUF(handle) ((struct buffer*)(handle)) +#else /* deal with 64 bit pointers */ +#define FORE200E_BUF2HDL(buffer) ((u32)((u64)(buffer))) +#define FORE200E_HDL2BUF(handle) ((struct buffer*)(((u64)(handle)) | PAGE_OFFSET)) +#endif + + +/* host resident command queue */ + +typedef struct host_cmdq { + struct 
host_cmdq_entry host_entry[ QUEUE_SIZE_CMD ]; /* host resident cmd queue entries */ + int head; /* head of cmd queue */ + struct chunk status; /* array of completion status */ +} host_cmdq_t; + + +/* host resident transmit queue */ + +typedef struct host_txq { + struct host_txq_entry host_entry[ QUEUE_SIZE_TX ]; /* host resident tx queue entries */ + int head; /* head of tx queue */ + int tail; /* tail of tx queue */ + struct chunk tpd; /* array of tpds */ + struct chunk status; /* array of completion status */ + int txing; /* number of pending PDUs in tx queue */ +} host_txq_t; + + +/* host resident receive queue */ + +typedef struct host_rxq { + struct host_rxq_entry host_entry[ QUEUE_SIZE_RX ]; /* host resident rx queue entries */ + int head; /* head of rx queue */ + struct chunk rpd; /* array of rpds */ + struct chunk status; /* array of completion status */ +} host_rxq_t; + + +/* host resident buffer supply queues */ + +typedef struct host_bsq { + struct host_bsq_entry host_entry[ QUEUE_SIZE_BS ]; /* host resident buffer supply queue entries */ + int head; /* head of buffer supply queue */ + struct chunk rbd_block; /* array of rbds */ + struct chunk status; /* array of completion status */ + struct buffer* buffer; /* array of rx buffers */ + struct buffer* freebuf; /* list of free rx buffers */ + volatile int freebuf_count; /* count of free rx buffers */ +} host_bsq_t; + + +/* header of the firmware image */ + +typedef struct fw_header { + u32 magic; /* magic number */ + u32 version; /* firmware version id */ + u32 load_offset; /* fw load offset in board memory */ + u32 start_offset; /* fw execution start address in board memory */ +} fw_header_t; + +#define FW_HEADER_MAGIC 0x65726f66 /* 'fore' */ + + +/* receive buffer supply queues scheme specification */ + +typedef struct bs_spec { + u32 queue_length; /* queue capacity */ + u32 buffer_size; /* host buffer size */ + u32 pool_size; /* number of rbds */ + u32 supply_blksize; /* num of rbds in I/O block 
(multiple + of 4 between 4 and 124 inclusive) */ +} bs_spec_t; + + +/* initialization command block (one-time command, not in cmd queue) */ + +typedef struct init_block { + enum opcode opcode; /* initialize command */ + enum status status; /* related status word */ + u32 receive_threshold; /* not used */ + u32 num_connect; /* ATM connections */ + u32 cmd_queue_len; /* length of command queue */ + u32 tx_queue_len; /* length of transmit queue */ + u32 rx_queue_len; /* length of receive queue */ + u32 rsd_extension; /* number of extra 32 byte blocks */ + u32 tsd_extension; /* number of extra 32 byte blocks */ + u32 conless_vpvc; /* not used */ + u32 pad[ 2 ]; /* force quad alignment */ + struct bs_spec bs_spec[ BUFFER_SCHEME_NBR ][ BUFFER_MAGN_NBR ]; /* buffer supply queues spec */ +} init_block_t; + + +typedef enum media_type { + MEDIA_TYPE_CAT5_UTP = 0x06, /* unshielded twisted pair */ + MEDIA_TYPE_MM_OC3_ST = 0x16, /* multimode fiber ST */ + MEDIA_TYPE_MM_OC3_SC = 0x26, /* multimode fiber SC */ + MEDIA_TYPE_SM_OC3_ST = 0x36, /* single-mode fiber ST */ + MEDIA_TYPE_SM_OC3_SC = 0x46 /* single-mode fiber SC */ +} media_type_t; + +#define FORE200E_MEDIA_INDEX(media_type) ((media_type)>>4) + + +/* cp resident queues */ + +typedef struct cp_queues { + u32 cp_cmdq; /* command queue */ + u32 cp_txq; /* transmit queue */ + u32 cp_rxq; /* receive queue */ + u32 cp_bsq[ BUFFER_SCHEME_NBR ][ BUFFER_MAGN_NBR ]; /* buffer supply queues */ + u32 imask; /* 1 enables cp to host interrupts */ + u32 istat; /* 1 for interrupt posted */ + u32 heap_base; /* offset from beginning of ram */ + u32 heap_size; /* space available for queues */ + u32 hlogger; /* non zero for host logging */ + u32 heartbeat; /* cp heartbeat */ + u32 fw_release; /* firmware version */ + u32 mon960_release; /* i960 monitor version */ + u32 tq_plen; /* transmit throughput measurements */ + /* make sure the init block remains on a quad word boundary */ + struct init_block init; /* one time cmd, not in cmd queue */ 
+ enum media_type media_type; /* media type id */ + u32 oc3_revision; /* OC-3 revision number */ +} cp_queues_t; + + +/* boot status */ + +typedef enum boot_status { + BSTAT_COLD_START = (u32) 0xc01dc01d, /* cold start */ + BSTAT_SELFTEST_OK = (u32) 0x02201958, /* self-test ok */ + BSTAT_SELFTEST_FAIL = (u32) 0xadbadbad, /* self-test failed */ + BSTAT_CP_RUNNING = (u32) 0xce11feed, /* cp is running */ + BSTAT_MON_TOO_BIG = (u32) 0x10aded00 /* i960 monitor is too big */ +} boot_status_t; + + +/* software UART */ + +typedef struct soft_uart { + u32 send; /* write register */ + u32 recv; /* read register */ +} soft_uart_t; + +#define FORE200E_CP_MONITOR_UART_FREE 0x00000000 +#define FORE200E_CP_MONITOR_UART_AVAIL 0x01000000 + + +/* i960 monitor */ + +typedef struct cp_monitor { + struct soft_uart soft_uart; /* software UART */ + enum boot_status bstat; /* boot status */ + u32 app_base; /* application base offset */ + u32 mon_version; /* i960 monitor version */ +} cp_monitor_t; + + +/* device state */ + +typedef enum fore200e_state { + FORE200E_STATE_BLANK, /* initial state */ + FORE200E_STATE_REGISTER, /* device registered */ + FORE200E_STATE_CONFIGURE, /* bus interface configured */ + FORE200E_STATE_MAP, /* board space mapped in host memory */ + FORE200E_STATE_RESET, /* board reset */ + FORE200E_STATE_LOAD_FW, /* firmware loaded */ + FORE200E_STATE_START_FW, /* firmware started */ + FORE200E_STATE_INITIALIZE, /* initialize command successful */ + FORE200E_STATE_INIT_CMDQ, /* command queue initialized */ + FORE200E_STATE_INIT_TXQ, /* transmit queue initialized */ + FORE200E_STATE_INIT_RXQ, /* receive queue initialized */ + FORE200E_STATE_INIT_BSQ, /* buffer supply queue initialized */ + FORE200E_STATE_ALLOC_BUF, /* receive buffers allocated */ + FORE200E_STATE_IRQ, /* host interrupt requested */ + FORE200E_STATE_COMPLETE /* initialization completed */ +} fore200e_state; + + +/* PCA-200E registers */ + +typedef struct fore200e_pca_regs { + volatile u32 __iomem * 
hcr; /* address of host control register */ + volatile u32 __iomem * imr; /* address of host interrupt mask register */ + volatile u32 __iomem * psr; /* address of PCI specific register */ +} fore200e_pca_regs_t; + + +/* SBA-200E registers */ + +typedef struct fore200e_sba_regs { + volatile u32 __iomem *hcr; /* address of host control register */ + volatile u32 __iomem *bsr; /* address of burst transfer size register */ + volatile u32 __iomem *isr; /* address of interrupt level selection register */ +} fore200e_sba_regs_t; + + +/* model-specific registers */ + +typedef union fore200e_regs { + struct fore200e_pca_regs pca; /* PCA-200E registers */ + struct fore200e_sba_regs sba; /* SBA-200E registers */ +} fore200e_regs; + + +struct fore200e; + +/* bus-dependent data */ + +typedef struct fore200e_bus { + char* model_name; /* board model name */ + char* proc_name; /* board name under /proc/atm */ + int descr_alignment; /* tpd/rpd/rbd DMA alignment requirement */ + int buffer_alignment; /* rx buffers DMA alignment requirement */ + int status_alignment; /* status words DMA alignment requirement */ + const unsigned char* fw_data; /* address of firmware data start */ + const unsigned int* fw_size; /* address of firmware data size */ + u32 (*read)(volatile u32 __iomem *); + void (*write)(u32, volatile u32 __iomem *); + u32 (*dma_map)(struct fore200e*, void*, int, int); + void (*dma_unmap)(struct fore200e*, u32, int, int); + void (*dma_sync_for_cpu)(struct fore200e*, u32, int, int); + void (*dma_sync_for_device)(struct fore200e*, u32, int, int); + int (*dma_chunk_alloc)(struct fore200e*, struct chunk*, int, int, int); + void (*dma_chunk_free)(struct fore200e*, struct chunk*); + struct fore200e* (*detect)(const struct fore200e_bus*, int); + int (*configure)(struct fore200e*); + int (*map)(struct fore200e*); + void (*reset)(struct fore200e*); + int (*prom_read)(struct fore200e*, struct prom_data*); + void (*unmap)(struct fore200e*); + void (*irq_enable)(struct fore200e*); + 
int (*irq_check)(struct fore200e*); + void (*irq_ack)(struct fore200e*); + int (*proc_read)(struct fore200e*, char*); +} fore200e_bus_t; + +/* vc mapping */ + +typedef struct fore200e_vc_map { + struct atm_vcc* vcc; /* vcc entry */ + unsigned long incarn; /* vcc incarnation number */ +} fore200e_vc_map_t; + +#define FORE200E_VC_MAP(fore200e, vpi, vci) \ + (& (fore200e)->vc_map[ ((vpi) << FORE200E_VCI_BITS) | (vci) ]) + + +/* per-device data */ + +typedef struct fore200e { + struct list_head entry; /* next device */ + const struct fore200e_bus* bus; /* bus-dependent code and data */ + union fore200e_regs regs; /* bus-dependent registers */ + struct atm_dev* atm_dev; /* ATM device */ + + enum fore200e_state state; /* device state */ + + char name[16]; /* device name */ + void* bus_dev; /* bus-specific kernel data */ + int irq; /* irq number */ + unsigned long phys_base; /* physical base address */ + void __iomem * virt_base; /* virtual base address */ + + unsigned char esi[ ESI_LEN ]; /* end system identifier */ + + struct cp_monitor __iomem * cp_monitor; /* i960 monitor address */ + struct cp_queues __iomem * cp_queues; /* cp resident queues */ + struct host_cmdq host_cmdq; /* host resident cmd queue */ + struct host_txq host_txq; /* host resident tx queue */ + struct host_rxq host_rxq; /* host resident rx queue */ + /* host resident buffer supply queues */ + struct host_bsq host_bsq[ BUFFER_SCHEME_NBR ][ BUFFER_MAGN_NBR ]; + + u32 available_cell_rate; /* remaining pseudo-CBR bw on link */ + + int loop_mode; /* S/UNI loopback mode */ + + struct stats* stats; /* last snapshot of the stats */ + + struct semaphore rate_sf; /* protects rate reservation ops */ + spinlock_t q_lock; /* protects queue ops */ +#ifdef FORE200E_USE_TASKLET + struct tasklet_struct tx_tasklet; /* performs tx interrupt work */ + struct tasklet_struct rx_tasklet; /* performs rx interrupt work */ +#endif + unsigned long tx_sat; /* tx queue saturation count */ + + unsigned long incarn_count; + 
struct fore200e_vc_map vc_map[ NBR_CONNECT ]; /* vc mapping */ +} fore200e_t; + + +/* per-vcc data */ + +typedef struct fore200e_vcc { + enum buffer_scheme scheme; /* rx buffer scheme */ + struct tpd_rate rate; /* tx rate control data */ + int rx_min_pdu; /* size of smallest PDU received */ + int rx_max_pdu; /* size of largest PDU received */ + int tx_min_pdu; /* size of smallest PDU transmitted */ + int tx_max_pdu; /* size of largest PDU transmitted */ + unsigned long tx_pdu; /* nbr of tx pdus */ + unsigned long rx_pdu; /* nbr of rx pdus */ +} fore200e_vcc_t; + + + +/* 200E-series common memory layout */ + +#define FORE200E_CP_MONITOR_OFFSET 0x00000400 /* i960 monitor interface */ +#define FORE200E_CP_QUEUES_OFFSET 0x00004d40 /* cp resident queues */ + + +/* PCA-200E memory layout */ + +#define PCA200E_IOSPACE_LENGTH 0x00200000 + +#define PCA200E_HCR_OFFSET 0x00100000 /* board control register */ +#define PCA200E_IMR_OFFSET 0x00100004 /* host IRQ mask register */ +#define PCA200E_PSR_OFFSET 0x00100008 /* PCI specific register */ + + +/* PCA-200E host control register */ + +#define PCA200E_HCR_RESET (1<<0) /* read / write */ +#define PCA200E_HCR_HOLD_LOCK (1<<1) /* read / write */ +#define PCA200E_HCR_I960FAIL (1<<2) /* read */ +#define PCA200E_HCR_INTRB (1<<2) /* write */ +#define PCA200E_HCR_HOLD_ACK (1<<3) /* read */ +#define PCA200E_HCR_INTRA (1<<3) /* write */ +#define PCA200E_HCR_OUTFULL (1<<4) /* read */ +#define PCA200E_HCR_CLRINTR (1<<4) /* write */ +#define PCA200E_HCR_ESPHOLD (1<<5) /* read */ +#define PCA200E_HCR_INFULL (1<<6) /* read */ +#define PCA200E_HCR_TESTMODE (1<<7) /* read */ + + +/* PCA-200E PCI bus interface regs (offsets in PCI config space) */ + +#define PCA200E_PCI_LATENCY 0x40 /* maximum slave latency */ +#define PCA200E_PCI_MASTER_CTRL 0x41 /* master control */ +#define PCA200E_PCI_THRESHOLD 0x42 /* burst / continuous req threshold */ + +/* PBI master control register */ + +#define PCA200E_CTRL_DIS_CACHE_RD (1<<0) /* disable cache-line 
reads */ +#define PCA200E_CTRL_DIS_WRT_INVAL (1<<1) /* disable writes and invalidates */ +#define PCA200E_CTRL_2_CACHE_WRT_INVAL (1<<2) /* require 2 cache-lines for writes and invalidates */ +#define PCA200E_CTRL_IGN_LAT_TIMER (1<<3) /* ignore the latency timer */ +#define PCA200E_CTRL_ENA_CONT_REQ_MODE (1<<4) /* enable continuous request mode */ +#define PCA200E_CTRL_LARGE_PCI_BURSTS (1<<5) /* force large PCI bus bursts */ +#define PCA200E_CTRL_CONVERT_ENDIAN (1<<6) /* convert endianness of slave RAM accesses */ + + + +#define SBA200E_PROM_NAME "FORE,sba-200e" /* device name in openprom tree */ + + +/* size of SBA-200E registers */ + +#define SBA200E_HCR_LENGTH 4 +#define SBA200E_BSR_LENGTH 4 +#define SBA200E_ISR_LENGTH 4 +#define SBA200E_RAM_LENGTH 0x40000 + + +/* SBA-200E SBUS burst transfer size register */ + +#define SBA200E_BSR_BURST4 0x04 +#define SBA200E_BSR_BURST8 0x08 +#define SBA200E_BSR_BURST16 0x10 + + +/* SBA-200E host control register */ + +#define SBA200E_HCR_RESET (1<<0) /* read / write (sticky) */ +#define SBA200E_HCR_HOLD_LOCK (1<<1) /* read / write (sticky) */ +#define SBA200E_HCR_I960FAIL (1<<2) /* read */ +#define SBA200E_HCR_I960SETINTR (1<<2) /* write */ +#define SBA200E_HCR_OUTFULL (1<<3) /* read */ +#define SBA200E_HCR_INTR_CLR (1<<3) /* write */ +#define SBA200E_HCR_INTR_ENA (1<<4) /* read / write (sticky) */ +#define SBA200E_HCR_ESPHOLD (1<<5) /* read */ +#define SBA200E_HCR_INFULL (1<<6) /* read */ +#define SBA200E_HCR_TESTMODE (1<<7) /* read */ +#define SBA200E_HCR_INTR_REQ (1<<8) /* read */ + +#define SBA200E_HCR_STICKY (SBA200E_HCR_RESET | SBA200E_HCR_HOLD_LOCK | SBA200E_HCR_INTR_ENA) + + +#endif /* __KERNEL__ */ +#endif /* _FORE200E_H */ diff --git a/drivers/atm/fore200e_firmware_copyright b/drivers/atm/fore200e_firmware_copyright new file mode 100644 index 000000000000..d58e6490836e --- /dev/null +++ b/drivers/atm/fore200e_firmware_copyright @@ -0,0 +1,31 @@ + +These microcode data are placed under the terms of the GNU General 
Public License. + +We would prefer you not to distribute modified versions of it and not to ask +for assembly or other microcode source. + +Copyright (c) 1995-2000 FORE Systems, Inc., as an unpublished work. This +notice does not imply unrestricted or public access to these materials which +are a trade secret of FORE Systems, Inc. or its subsidiaries or affiliates +(together referred to as "FORE"), and which may not be reproduced, used, sold +or transferred to any third party without FORE's prior written consent. All +rights reserved. + +U.S. Government Restricted Rights. If you are licensing the Software on +behalf of the U.S. Government ("Government"), the following provisions apply +to you. If the software is supplied to the Department of Defense ("DoD"), it +is classified as "Commercial Computer Software" under paragraph 252.227-7014 +of the DoD Supplement to the Federal Acquisition Regulations ("DFARS") (or any +successor regulations) and the Government is acquiring only the license +rights granted herein (the license rights customarily provided to non-Government +users). If the Software is supplied to any unit or agency of the Government +other than the DoD, it is classified as "Restricted Computer Software" and +the Government's rights in the Software are defined in paragraph 52.227-19 of +the Federal Acquisition Regulations ("FAR") (or any successor regulations) or, +in the cases of NASA, in paragraph 18.52.227-86 of the NASA Supplement to the FAR +(or any successor regulations). + +FORE Systems is a registered trademark, and ForeRunner, ForeRunnerLE, and +ForeThought are trademarks of FORE Systems, Inc. All other brands or product +names are trademarks or registered trademarks of their respective holders. 
+ diff --git a/drivers/atm/fore200e_mkfirm.c b/drivers/atm/fore200e_mkfirm.c new file mode 100644 index 000000000000..2ebe1a1e6f8b --- /dev/null +++ b/drivers/atm/fore200e_mkfirm.c @@ -0,0 +1,156 @@ +/* + $Id: fore200e_mkfirm.c,v 1.1 2000/02/21 16:04:32 davem Exp $ + + mkfirm.c: generates a C readable file from a binary firmware image + + Christophe Lizzi (lizzi@{csti.fr, cnam.fr}), June 1999. + + This software may be used and distributed according to the terms + of the GNU General Public License, incorporated herein by reference. +*/ + +#include <stdio.h> +#include <stdlib.h> +#include <sys/types.h> +#include <time.h> + +char* default_basename = "pca200e"; /* was initially written for the PCA-200E firmware */ +char* default_infname = "<stdin>"; +char* default_outfname = "<stdout>"; + +char* progname; +int verbose = 0; +int inkernel = 0; + + +void usage(void) +{ + fprintf(stderr, + "%s: [-v] [-k] [-b basename ] [-i firmware.bin] [-o firmware.c]\n", + progname); + exit(-1); +} + + +int main(int argc, char** argv) +{ + time_t now; + char* infname = NULL; + char* outfname = NULL; + char* basename = NULL; + FILE* infile; + FILE* outfile; + unsigned firmsize; + int c; + + progname = *(argv++); + + while (argc > 1) { + if ((*argv)[0] == '-') { + switch ((*argv)[1]) { + case 'i': + if (argc-- < 3) + usage(); + infname = *(++argv); + break; + case 'o': + if (argc-- < 3) + usage(); + outfname = *(++argv); + break; + case 'b': + if (argc-- < 3) + usage(); + basename = *(++argv); + break; + case 'v': + verbose = 1; + break; + case 'k': + inkernel = 1; + break; + default: + usage(); + } + } + else { + usage(); + } + argc--; + argv++; + } + + if (infname != NULL) { + infile = fopen(infname, "r"); + if (infile == NULL) { + fprintf(stderr, "%s: can't open %s for reading\n", + progname, infname); + exit(-2); + } + } + else { + infile = stdin; + infname = default_infname; + } + + if (outfname) { + outfile = fopen(outfname, "w"); + if (outfile == NULL) { + fprintf(stderr, "%s: can't 
open %s for writing\n", + progname, outfname); + exit(-3); + } + } + else { + outfile = stdout; + outfname = default_outfname; + } + + if (basename == NULL) + basename = default_basename; + + if (verbose) { + fprintf(stderr, "%s: input file = %s\n", progname, infname ); + fprintf(stderr, "%s: output file = %s\n", progname, outfname ); + fprintf(stderr, "%s: firmware basename = %s\n", progname, basename ); + } + + time(&now); + fprintf(outfile, "/*\n generated by %s from %s on %s" + " DO NOT EDIT!\n*/\n\n", + progname, infname, ctime(&now)); + + if (inkernel) + fprintf(outfile, "#include <linux/init.h>\n\n" ); + + /* XXX force 32 bit alignment? */ + fprintf(outfile, "const unsigned char%s %s_data[] = {\n", + inkernel ? " __initdata" : "", basename ); + + c = getc(infile); + fprintf(outfile,"\t0x%02x", c); + firmsize = 1; + + while ((c = getc(infile)) >= 0) { + + if (firmsize++ % 8) + fprintf(outfile,", 0x%02x", c); + else + fprintf(outfile,",\n\t0x%02x", c); + } + + fprintf(outfile, "\n};\n\n"); + + fprintf(outfile, "const unsigned int%s %s_size = %u;\n", + inkernel ? " __initdata" : "", basename, firmsize ); + + if (infile != stdin) + fclose(infile); + if (outfile != stdout) + fclose(outfile); + + if(verbose) + fprintf(stderr, "%s: firmware size = %u\n", progname, firmsize); + + exit(0); +} diff --git a/drivers/atm/he.c b/drivers/atm/he.c new file mode 100644 index 000000000000..c2c31a5f4513 --- /dev/null +++ b/drivers/atm/he.c @@ -0,0 +1,3091 @@ +/* $Id: he.c,v 1.18 2003/05/06 22:57:15 chas Exp $ */ + +/* + + he.c + + ForeRunnerHE ATM Adapter driver for ATM on Linux + Copyright (C) 1999-2001 Naval Research Laboratory + + This library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. 
+ + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with this library; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +*/ + +/* + + he.c + + ForeRunnerHE ATM Adapter driver for ATM on Linux + Copyright (C) 1999-2001 Naval Research Laboratory + + Permission to use, copy, modify and distribute this software and its + documentation is hereby granted, provided that both the copyright + notice and this permission notice appear in all copies of the software, + derivative works or modified versions, and any portions thereof, and + that both notices appear in supporting documentation. + + NRL ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS" CONDITION AND + DISCLAIMS ANY LIABILITY OF ANY KIND FOR ANY DAMAGES WHATSOEVER + RESULTING FROM THE USE OF THIS SOFTWARE. + + This driver was written using the "Programmer's Reference Manual for + ForeRunnerHE(tm)", MANU0361-01 - Rev. A, 08/21/98. 
+ + AUTHORS: + chas williams <chas@cmf.nrl.navy.mil> + eric kinzie <ekinzie@cmf.nrl.navy.mil> + + NOTES: + 4096 supported 'connections' + group 0 is used for all traffic + interrupt queue 0 is used for all interrupts + aal0 support (based on work from ulrich.u.muller@nokia.com) + + */ + +#include <linux/config.h> +#include <linux/module.h> +#include <linux/version.h> +#include <linux/kernel.h> +#include <linux/skbuff.h> +#include <linux/pci.h> +#include <linux/errno.h> +#include <linux/types.h> +#include <linux/string.h> +#include <linux/delay.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/sched.h> +#include <linux/timer.h> +#include <linux/interrupt.h> +#include <asm/io.h> +#include <asm/byteorder.h> +#include <asm/uaccess.h> + +#include <linux/atmdev.h> +#include <linux/atm.h> +#include <linux/sonet.h> + +#define USE_TASKLET +#undef USE_SCATTERGATHER +#undef USE_CHECKSUM_HW /* still confused about this */ +#define USE_RBPS +#undef USE_RBPS_POOL /* if memory is tight try this */ +#undef USE_RBPL_POOL /* if memory is tight try this */ +#define USE_TPD_POOL +/* #undef CONFIG_ATM_HE_USE_SUNI */ +/* #undef HE_DEBUG */ + +#include "he.h" +#include "suni.h" +#include <linux/atm_he.h> + +#define hprintk(fmt,args...) printk(KERN_ERR DEV_LABEL "%d: " fmt, he_dev->number , ##args) + +#ifdef HE_DEBUG +#define HPRINTK(fmt,args...) printk(KERN_DEBUG DEV_LABEL "%d: " fmt, he_dev->number , ##args) +#else /* !HE_DEBUG */ +#define HPRINTK(fmt,args...) 
do { } while (0) +#endif /* HE_DEBUG */ + +/* version definition */ + +static char *version = "$Id: he.c,v 1.18 2003/05/06 22:57:15 chas Exp $"; + +/* declarations */ + +static int he_open(struct atm_vcc *vcc); +static void he_close(struct atm_vcc *vcc); +static int he_send(struct atm_vcc *vcc, struct sk_buff *skb); +static int he_ioctl(struct atm_dev *dev, unsigned int cmd, void __user *arg); +static irqreturn_t he_irq_handler(int irq, void *dev_id, struct pt_regs *regs); +static void he_tasklet(unsigned long data); +static int he_proc_read(struct atm_dev *dev,loff_t *pos,char *page); +static int he_start(struct atm_dev *dev); +static void he_stop(struct he_dev *dev); +static void he_phy_put(struct atm_dev *, unsigned char, unsigned long); +static unsigned char he_phy_get(struct atm_dev *, unsigned long); + +static u8 read_prom_byte(struct he_dev *he_dev, int addr); + +/* globals */ + +static struct he_dev *he_devs; +static int disable64; +static short nvpibits = -1; +static short nvcibits = -1; +static short rx_skb_reserve = 16; +static int irq_coalesce = 1; +static int sdh = 0; + +/* Read from EEPROM = 0000 0011b */ +static unsigned int readtab[] = { + CS_HIGH | CLK_HIGH, + CS_LOW | CLK_LOW, + CLK_HIGH, /* 0 */ + CLK_LOW, + CLK_HIGH, /* 0 */ + CLK_LOW, + CLK_HIGH, /* 0 */ + CLK_LOW, + CLK_HIGH, /* 0 */ + CLK_LOW, + CLK_HIGH, /* 0 */ + CLK_LOW, + CLK_HIGH, /* 0 */ + CLK_LOW | SI_HIGH, + CLK_HIGH | SI_HIGH, /* 1 */ + CLK_LOW | SI_HIGH, + CLK_HIGH | SI_HIGH /* 1 */ +}; + +/* Clock to read from/write to the EEPROM */ +static unsigned int clocktab[] = { + CLK_LOW, + CLK_HIGH, + CLK_LOW, + CLK_HIGH, + CLK_LOW, + CLK_HIGH, + CLK_LOW, + CLK_HIGH, + CLK_LOW, + CLK_HIGH, + CLK_LOW, + CLK_HIGH, + CLK_LOW, + CLK_HIGH, + CLK_LOW, + CLK_HIGH, + CLK_LOW +}; + +static struct atmdev_ops he_ops = +{ + .open = he_open, + .close = he_close, + .ioctl = he_ioctl, + .send = he_send, + .phy_put = he_phy_put, + .phy_get = he_phy_get, + .proc_read = he_proc_read, + .owner = THIS_MODULE 
+}; + +#define he_writel(dev, val, reg) do { writel(val, (dev)->membase + (reg)); wmb(); } while (0) +#define he_readl(dev, reg) readl((dev)->membase + (reg)) + +/* section 2.12 connection memory access */ + +static __inline__ void +he_writel_internal(struct he_dev *he_dev, unsigned val, unsigned addr, + unsigned flags) +{ + he_writel(he_dev, val, CON_DAT); + (void) he_readl(he_dev, CON_DAT); /* flush posted writes */ + he_writel(he_dev, flags | CON_CTL_WRITE | CON_CTL_ADDR(addr), CON_CTL); + while (he_readl(he_dev, CON_CTL) & CON_CTL_BUSY); +} + +#define he_writel_rcm(dev, val, reg) \ + he_writel_internal(dev, val, reg, CON_CTL_RCM) + +#define he_writel_tcm(dev, val, reg) \ + he_writel_internal(dev, val, reg, CON_CTL_TCM) + +#define he_writel_mbox(dev, val, reg) \ + he_writel_internal(dev, val, reg, CON_CTL_MBOX) + +static unsigned +he_readl_internal(struct he_dev *he_dev, unsigned addr, unsigned flags) +{ + he_writel(he_dev, flags | CON_CTL_READ | CON_CTL_ADDR(addr), CON_CTL); + while (he_readl(he_dev, CON_CTL) & CON_CTL_BUSY); + return he_readl(he_dev, CON_DAT); +} + +#define he_readl_rcm(dev, reg) \ + he_readl_internal(dev, reg, CON_CTL_RCM) + +#define he_readl_tcm(dev, reg) \ + he_readl_internal(dev, reg, CON_CTL_TCM) + +#define he_readl_mbox(dev, reg) \ + he_readl_internal(dev, reg, CON_CTL_MBOX) + + +/* figure 2.2 connection id */ + +#define he_mkcid(dev, vpi, vci) (((vpi << (dev)->vcibits) | vci) & 0x1fff) + +/* 2.5.1 per connection transmit state registers */ + +#define he_writel_tsr0(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRA | (cid << 3) | 0) +#define he_readl_tsr0(dev, cid) \ + he_readl_tcm(dev, CONFIG_TSRA | (cid << 3) | 0) + +#define he_writel_tsr1(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRA | (cid << 3) | 1) + +#define he_writel_tsr2(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRA | (cid << 3) | 2) + +#define he_writel_tsr3(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRA | (cid << 3) | 3) + +#define 
he_writel_tsr4(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRA | (cid << 3) | 4) + + /* from page 2-20 + * + * NOTE While the transmit connection is active, bits 23 through 0 + * of this register must not be written by the host. Byte + * enables should be used during normal operation when writing + * the most significant byte. + */ + +#define he_writel_tsr4_upper(dev, val, cid) \ + he_writel_internal(dev, val, CONFIG_TSRA | (cid << 3) | 4, \ + CON_CTL_TCM \ + | CON_BYTE_DISABLE_2 \ + | CON_BYTE_DISABLE_1 \ + | CON_BYTE_DISABLE_0) + +#define he_readl_tsr4(dev, cid) \ + he_readl_tcm(dev, CONFIG_TSRA | (cid << 3) | 4) + +#define he_writel_tsr5(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRA | (cid << 3) | 5) + +#define he_writel_tsr6(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRA | (cid << 3) | 6) + +#define he_writel_tsr7(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRA | (cid << 3) | 7) + + +#define he_writel_tsr8(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRB | (cid << 2) | 0) + +#define he_writel_tsr9(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRB | (cid << 2) | 1) + +#define he_writel_tsr10(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRB | (cid << 2) | 2) + +#define he_writel_tsr11(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRB | (cid << 2) | 3) + + +#define he_writel_tsr12(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRC | (cid << 1) | 0) + +#define he_writel_tsr13(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRC | (cid << 1) | 1) + + +#define he_writel_tsr14(dev, val, cid) \ + he_writel_tcm(dev, val, CONFIG_TSRD | cid) + +#define he_writel_tsr14_upper(dev, val, cid) \ + he_writel_internal(dev, val, CONFIG_TSRD | cid, \ + CON_CTL_TCM \ + | CON_BYTE_DISABLE_2 \ + | CON_BYTE_DISABLE_1 \ + | CON_BYTE_DISABLE_0) + +/* 2.7.1 per connection receive state registers */ + +#define he_writel_rsr0(dev, val, cid) \ + he_writel_rcm(dev, val, 0x00000 | (cid << 3) | 0) +#define he_readl_rsr0(dev, cid) \ + 
he_readl_rcm(dev, 0x00000 | (cid << 3) | 0) + +#define he_writel_rsr1(dev, val, cid) \ + he_writel_rcm(dev, val, 0x00000 | (cid << 3) | 1) + +#define he_writel_rsr2(dev, val, cid) \ + he_writel_rcm(dev, val, 0x00000 | (cid << 3) | 2) + +#define he_writel_rsr3(dev, val, cid) \ + he_writel_rcm(dev, val, 0x00000 | (cid << 3) | 3) + +#define he_writel_rsr4(dev, val, cid) \ + he_writel_rcm(dev, val, 0x00000 | (cid << 3) | 4) + +#define he_writel_rsr5(dev, val, cid) \ + he_writel_rcm(dev, val, 0x00000 | (cid << 3) | 5) + +#define he_writel_rsr6(dev, val, cid) \ + he_writel_rcm(dev, val, 0x00000 | (cid << 3) | 6) + +#define he_writel_rsr7(dev, val, cid) \ + he_writel_rcm(dev, val, 0x00000 | (cid << 3) | 7) + +static __inline__ struct atm_vcc* +__find_vcc(struct he_dev *he_dev, unsigned cid) +{ + struct hlist_head *head; + struct atm_vcc *vcc; + struct hlist_node *node; + struct sock *s; + short vpi; + int vci; + + vpi = cid >> he_dev->vcibits; + vci = cid & ((1 << he_dev->vcibits) - 1); + head = &vcc_hash[vci & (VCC_HTABLE_SIZE -1)]; + + sk_for_each(s, node, head) { + vcc = atm_sk(s); + if (vcc->dev == he_dev->atm_dev && + vcc->vci == vci && vcc->vpi == vpi && + vcc->qos.rxtp.traffic_class != ATM_NONE) { + return vcc; + } + } + return NULL; +} + +static int __devinit +he_init_one(struct pci_dev *pci_dev, const struct pci_device_id *pci_ent) +{ + struct atm_dev *atm_dev = NULL; + struct he_dev *he_dev = NULL; + int err = 0; + + printk(KERN_INFO "he: %s\n", version); + + if (pci_enable_device(pci_dev)) + return -EIO; + if (pci_set_dma_mask(pci_dev, HE_DMA_MASK) != 0) { + printk(KERN_WARNING "he: no suitable dma available\n"); + err = -EIO; + goto init_one_failure; + } + + atm_dev = atm_dev_register(DEV_LABEL, &he_ops, -1, NULL); + if (!atm_dev) { + err = -ENODEV; + goto init_one_failure; + } + pci_set_drvdata(pci_dev, atm_dev); + + he_dev = (struct he_dev *) kmalloc(sizeof(struct he_dev), + GFP_KERNEL); + if (!he_dev) { + err = -ENOMEM; + goto init_one_failure; + } + 
memset(he_dev, 0, sizeof(struct he_dev)); + + he_dev->pci_dev = pci_dev; + he_dev->atm_dev = atm_dev; + he_dev->atm_dev->dev_data = he_dev; + atm_dev->dev_data = he_dev; + he_dev->number = atm_dev->number; + if (he_start(atm_dev)) { + he_stop(he_dev); + err = -ENODEV; + goto init_one_failure; + } + he_dev->next = NULL; + if (he_devs) + he_dev->next = he_devs; + he_devs = he_dev; + return 0; + +init_one_failure: + if (atm_dev) + atm_dev_deregister(atm_dev); + if (he_dev) + kfree(he_dev); + pci_disable_device(pci_dev); + return err; +} + +static void __devexit +he_remove_one (struct pci_dev *pci_dev) +{ + struct atm_dev *atm_dev; + struct he_dev *he_dev; + + atm_dev = pci_get_drvdata(pci_dev); + he_dev = HE_DEV(atm_dev); + + /* need to remove from he_devs */ + + he_stop(he_dev); + atm_dev_deregister(atm_dev); + kfree(he_dev); + + pci_set_drvdata(pci_dev, NULL); + pci_disable_device(pci_dev); +} + + +static unsigned +rate_to_atmf(unsigned rate) /* cps to atm forum format */ +{ +#define NONZERO (1 << 14) + + unsigned exp = 0; + + if (rate == 0) + return 0; + + rate <<= 9; + while (rate > 0x3ff) { + ++exp; + rate >>= 1; + } + + return (NONZERO | (exp << 9) | (rate & 0x1ff)); +} + +static void __init +he_init_rx_lbfp0(struct he_dev *he_dev) +{ + unsigned i, lbm_offset, lbufd_index, lbuf_addr, lbuf_count; + unsigned lbufs_per_row = he_dev->cells_per_row / he_dev->cells_per_lbuf; + unsigned lbuf_bufsize = he_dev->cells_per_lbuf * ATM_CELL_PAYLOAD; + unsigned row_offset = he_dev->r0_startrow * he_dev->bytes_per_row; + + lbufd_index = 0; + lbm_offset = he_readl(he_dev, RCMLBM_BA); + + he_writel(he_dev, lbufd_index, RLBF0_H); + + for (i = 0, lbuf_count = 0; i < he_dev->r0_numbuffs; ++i) { + lbufd_index += 2; + lbuf_addr = (row_offset + (lbuf_count * lbuf_bufsize)) / 32; + + he_writel_rcm(he_dev, lbuf_addr, lbm_offset); + he_writel_rcm(he_dev, lbufd_index, lbm_offset + 1); + + if (++lbuf_count == lbufs_per_row) { + lbuf_count = 0; + row_offset += he_dev->bytes_per_row; + } + 
lbm_offset += 4; + } + + he_writel(he_dev, lbufd_index - 2, RLBF0_T); + he_writel(he_dev, he_dev->r0_numbuffs, RLBF0_C); +} + +static void __init +he_init_rx_lbfp1(struct he_dev *he_dev) +{ + unsigned i, lbm_offset, lbufd_index, lbuf_addr, lbuf_count; + unsigned lbufs_per_row = he_dev->cells_per_row / he_dev->cells_per_lbuf; + unsigned lbuf_bufsize = he_dev->cells_per_lbuf * ATM_CELL_PAYLOAD; + unsigned row_offset = he_dev->r1_startrow * he_dev->bytes_per_row; + + lbufd_index = 1; + lbm_offset = he_readl(he_dev, RCMLBM_BA) + (2 * lbufd_index); + + he_writel(he_dev, lbufd_index, RLBF1_H); + + for (i = 0, lbuf_count = 0; i < he_dev->r1_numbuffs; ++i) { + lbufd_index += 2; + lbuf_addr = (row_offset + (lbuf_count * lbuf_bufsize)) / 32; + + he_writel_rcm(he_dev, lbuf_addr, lbm_offset); + he_writel_rcm(he_dev, lbufd_index, lbm_offset + 1); + + if (++lbuf_count == lbufs_per_row) { + lbuf_count = 0; + row_offset += he_dev->bytes_per_row; + } + lbm_offset += 4; + } + + he_writel(he_dev, lbufd_index - 2, RLBF1_T); + he_writel(he_dev, he_dev->r1_numbuffs, RLBF1_C); +} + +static void __init +he_init_tx_lbfp(struct he_dev *he_dev) +{ + unsigned i, lbm_offset, lbufd_index, lbuf_addr, lbuf_count; + unsigned lbufs_per_row = he_dev->cells_per_row / he_dev->cells_per_lbuf; + unsigned lbuf_bufsize = he_dev->cells_per_lbuf * ATM_CELL_PAYLOAD; + unsigned row_offset = he_dev->tx_startrow * he_dev->bytes_per_row; + + lbufd_index = he_dev->r0_numbuffs + he_dev->r1_numbuffs; + lbm_offset = he_readl(he_dev, RCMLBM_BA) + (2 * lbufd_index); + + he_writel(he_dev, lbufd_index, TLBF_H); + + for (i = 0, lbuf_count = 0; i < he_dev->tx_numbuffs; ++i) { + lbufd_index += 1; + lbuf_addr = (row_offset + (lbuf_count * lbuf_bufsize)) / 32; + + he_writel_rcm(he_dev, lbuf_addr, lbm_offset); + he_writel_rcm(he_dev, lbufd_index, lbm_offset + 1); + + if (++lbuf_count == lbufs_per_row) { + lbuf_count = 0; + row_offset += he_dev->bytes_per_row; + } + lbm_offset += 2; + } + + he_writel(he_dev, lbufd_index - 1, 
TLBF_T); +} + +static int __init +he_init_tpdrq(struct he_dev *he_dev) +{ + he_dev->tpdrq_base = pci_alloc_consistent(he_dev->pci_dev, + CONFIG_TPDRQ_SIZE * sizeof(struct he_tpdrq), &he_dev->tpdrq_phys); + if (he_dev->tpdrq_base == NULL) { + hprintk("failed to alloc tpdrq\n"); + return -ENOMEM; + } + memset(he_dev->tpdrq_base, 0, + CONFIG_TPDRQ_SIZE * sizeof(struct he_tpdrq)); + + he_dev->tpdrq_tail = he_dev->tpdrq_base; + he_dev->tpdrq_head = he_dev->tpdrq_base; + + he_writel(he_dev, he_dev->tpdrq_phys, TPDRQ_B_H); + he_writel(he_dev, 0, TPDRQ_T); + he_writel(he_dev, CONFIG_TPDRQ_SIZE - 1, TPDRQ_S); + + return 0; +} + +static void __init +he_init_cs_block(struct he_dev *he_dev) +{ + unsigned clock, rate, delta; + int reg; + + /* 5.1.7 cs block initialization */ + + for (reg = 0; reg < 0x20; ++reg) + he_writel_mbox(he_dev, 0x0, CS_STTIM0 + reg); + + /* rate grid timer reload values */ + + clock = he_is622(he_dev) ? 66667000 : 50000000; + rate = he_dev->atm_dev->link_rate; + delta = rate / 16 / 2; + + for (reg = 0; reg < 0x10; ++reg) { + /* 2.4 internal transmit function + * + * we initialize the first row in the rate grid. 
+ * values are period (in clock cycles) of timer + */ + unsigned period = clock / rate; + + he_writel_mbox(he_dev, period, CS_TGRLD0 + reg); + rate -= delta; + } + + if (he_is622(he_dev)) { + /* table 5.2 (4 cells per lbuf) */ + he_writel_mbox(he_dev, 0x000800fa, CS_ERTHR0); + he_writel_mbox(he_dev, 0x000c33cb, CS_ERTHR1); + he_writel_mbox(he_dev, 0x0010101b, CS_ERTHR2); + he_writel_mbox(he_dev, 0x00181dac, CS_ERTHR3); + he_writel_mbox(he_dev, 0x00280600, CS_ERTHR4); + + /* table 5.3, 5.4, 5.5, 5.6, 5.7 */ + he_writel_mbox(he_dev, 0x023de8b3, CS_ERCTL0); + he_writel_mbox(he_dev, 0x1801, CS_ERCTL1); + he_writel_mbox(he_dev, 0x68b3, CS_ERCTL2); + he_writel_mbox(he_dev, 0x1280, CS_ERSTAT0); + he_writel_mbox(he_dev, 0x68b3, CS_ERSTAT1); + he_writel_mbox(he_dev, 0x14585, CS_RTFWR); + + he_writel_mbox(he_dev, 0x4680, CS_RTATR); + + /* table 5.8 */ + he_writel_mbox(he_dev, 0x00159ece, CS_TFBSET); + he_writel_mbox(he_dev, 0x68b3, CS_WCRMAX); + he_writel_mbox(he_dev, 0x5eb3, CS_WCRMIN); + he_writel_mbox(he_dev, 0xe8b3, CS_WCRINC); + he_writel_mbox(he_dev, 0xdeb3, CS_WCRDEC); + he_writel_mbox(he_dev, 0x68b3, CS_WCRCEIL); + + /* table 5.9 */ + he_writel_mbox(he_dev, 0x5, CS_OTPPER); + he_writel_mbox(he_dev, 0x14, CS_OTWPER); + } else { + /* table 5.1 (4 cells per lbuf) */ + he_writel_mbox(he_dev, 0x000400ea, CS_ERTHR0); + he_writel_mbox(he_dev, 0x00063388, CS_ERTHR1); + he_writel_mbox(he_dev, 0x00081018, CS_ERTHR2); + he_writel_mbox(he_dev, 0x000c1dac, CS_ERTHR3); + he_writel_mbox(he_dev, 0x0014051a, CS_ERTHR4); + + /* table 5.3, 5.4, 5.5, 5.6, 5.7 */ + he_writel_mbox(he_dev, 0x0235e4b1, CS_ERCTL0); + he_writel_mbox(he_dev, 0x4701, CS_ERCTL1); + he_writel_mbox(he_dev, 0x64b1, CS_ERCTL2); + he_writel_mbox(he_dev, 0x1280, CS_ERSTAT0); + he_writel_mbox(he_dev, 0x64b1, CS_ERSTAT1); + he_writel_mbox(he_dev, 0xf424, CS_RTFWR); + + he_writel_mbox(he_dev, 0x4680, CS_RTATR); + + /* table 5.8 */ + he_writel_mbox(he_dev, 0x000563b7, CS_TFBSET); + he_writel_mbox(he_dev, 0x64b1, 
CS_WCRMAX); + he_writel_mbox(he_dev, 0x5ab1, CS_WCRMIN); + he_writel_mbox(he_dev, 0xe4b1, CS_WCRINC); + he_writel_mbox(he_dev, 0xdab1, CS_WCRDEC); + he_writel_mbox(he_dev, 0x64b1, CS_WCRCEIL); + + /* table 5.9 */ + he_writel_mbox(he_dev, 0x6, CS_OTPPER); + he_writel_mbox(he_dev, 0x1e, CS_OTWPER); + } + + he_writel_mbox(he_dev, 0x8, CS_OTTLIM); + + for (reg = 0; reg < 0x8; ++reg) + he_writel_mbox(he_dev, 0x0, CS_HGRRT0 + reg); + +} + +static int __init +he_init_cs_block_rcm(struct he_dev *he_dev) +{ + unsigned (*rategrid)[16][16]; + unsigned rate, delta; + int i, j, reg; + + unsigned rate_atmf, exp, man; + unsigned long long rate_cps; + int mult, buf, buf_limit = 4; + + rategrid = kmalloc( sizeof(unsigned) * 16 * 16, GFP_KERNEL); + if (!rategrid) + return -ENOMEM; + + /* initialize rate grid group table */ + + for (reg = 0x0; reg < 0xff; ++reg) + he_writel_rcm(he_dev, 0x0, CONFIG_RCMABR + reg); + + /* initialize rate controller groups */ + + for (reg = 0x100; reg < 0x1ff; ++reg) + he_writel_rcm(he_dev, 0x0, CONFIG_RCMABR + reg); + + /* initialize tNrm lookup table */ + + /* the manual makes reference to a routine in a sample driver + for proper configuration; fortunately, we only need this + in order to support abr connection */ + + /* initialize rate to group table */ + + rate = he_dev->atm_dev->link_rate; + delta = rate / 32; + + /* + * 2.4 transmit internal functions + * + * we construct a copy of the rate grid used by the scheduler + * in order to construct the rate to group table below + */ + + for (j = 0; j < 16; j++) { + (*rategrid)[0][j] = rate; + rate -= delta; + } + + for (i = 1; i < 16; i++) + for (j = 0; j < 16; j++) + if (i > 14) + (*rategrid)[i][j] = (*rategrid)[i - 1][j] / 4; + else + (*rategrid)[i][j] = (*rategrid)[i - 1][j] / 2; + + /* + * 2.4 transmit internal function + * + * this table maps the upper 5 bits of exponent and mantissa + * of the atm forum representation of the rate into an index + * on rate grid + */ + + rate_atmf = 0; + while 
(rate_atmf < 0x400) { + man = (rate_atmf & 0x1f) << 4; + exp = rate_atmf >> 5; + + /* + instead of '/ 512', use '>> 9' to prevent a call + to divdu3 on x86 platforms + */ + rate_cps = (unsigned long long) (1 << exp) * (man + 512) >> 9; + + if (rate_cps < 10) + rate_cps = 10; /* 2.2.1 minimum payload rate is 10 cps */ + + for (i = 255; i > 0; i--) + if ((*rategrid)[i/16][i%16] >= rate_cps) + break; /* pick nearest rate instead? */ + + /* + * each table entry is 16 bits: (rate grid index (8 bits) + * and a buffer limit (8 bits) + * there are two table entries in each 32-bit register + */ + +#ifdef notdef + buf = rate_cps * he_dev->tx_numbuffs / + (he_dev->atm_dev->link_rate * 2); +#else + /* this is pretty, but avoids _divdu3 and is mostly correct */ + mult = he_dev->atm_dev->link_rate / ATM_OC3_PCR; + if (rate_cps > (272 * mult)) + buf = 4; + else if (rate_cps > (204 * mult)) + buf = 3; + else if (rate_cps > (136 * mult)) + buf = 2; + else if (rate_cps > (68 * mult)) + buf = 1; + else + buf = 0; +#endif + if (buf > buf_limit) + buf = buf_limit; + reg = (reg << 16) | ((i << 8) | buf); + +#define RTGTBL_OFFSET 0x400 + + if (rate_atmf & 0x1) + he_writel_rcm(he_dev, reg, + CONFIG_RCMABR + RTGTBL_OFFSET + (rate_atmf >> 1)); + + ++rate_atmf; + } + + kfree(rategrid); + return 0; +} + +static int __init +he_init_group(struct he_dev *he_dev, int group) +{ + int i; + +#ifdef USE_RBPS + /* small buffer pool */ +#ifdef USE_RBPS_POOL + he_dev->rbps_pool = pci_pool_create("rbps", he_dev->pci_dev, + CONFIG_RBPS_BUFSIZE, 8, 0); + if (he_dev->rbps_pool == NULL) { + hprintk("unable to create rbps pages\n"); + return -ENOMEM; + } +#else /* !USE_RBPS_POOL */ + he_dev->rbps_pages = pci_alloc_consistent(he_dev->pci_dev, + CONFIG_RBPS_SIZE * CONFIG_RBPS_BUFSIZE, &he_dev->rbps_pages_phys); + if (he_dev->rbps_pages == NULL) { + hprintk("unable to create rbps page pool\n"); + return -ENOMEM; + } +#endif /* USE_RBPS_POOL */ + + he_dev->rbps_base = pci_alloc_consistent(he_dev->pci_dev, + 
CONFIG_RBPS_SIZE * sizeof(struct he_rbp), &he_dev->rbps_phys); + if (he_dev->rbps_base == NULL) { + hprintk("failed to alloc rbps\n"); + return -ENOMEM; + } + memset(he_dev->rbps_base, 0, CONFIG_RBPS_SIZE * sizeof(struct he_rbp)); + he_dev->rbps_virt = kmalloc(CONFIG_RBPS_SIZE * sizeof(struct he_virt), GFP_KERNEL); + + for (i = 0; i < CONFIG_RBPS_SIZE; ++i) { + dma_addr_t dma_handle; + void *cpuaddr; + +#ifdef USE_RBPS_POOL + cpuaddr = pci_pool_alloc(he_dev->rbps_pool, SLAB_KERNEL|SLAB_DMA, &dma_handle); + if (cpuaddr == NULL) + return -ENOMEM; +#else + cpuaddr = he_dev->rbps_pages + (i * CONFIG_RBPS_BUFSIZE); + dma_handle = he_dev->rbps_pages_phys + (i * CONFIG_RBPS_BUFSIZE); +#endif + + he_dev->rbps_virt[i].virt = cpuaddr; + he_dev->rbps_base[i].status = RBP_LOANED | RBP_SMALLBUF | (i << RBP_INDEX_OFF); + he_dev->rbps_base[i].phys = dma_handle; + + } + he_dev->rbps_tail = &he_dev->rbps_base[CONFIG_RBPS_SIZE - 1]; + + he_writel(he_dev, he_dev->rbps_phys, G0_RBPS_S + (group * 32)); + he_writel(he_dev, RBPS_MASK(he_dev->rbps_tail), + G0_RBPS_T + (group * 32)); + he_writel(he_dev, CONFIG_RBPS_BUFSIZE/4, + G0_RBPS_BS + (group * 32)); + he_writel(he_dev, + RBP_THRESH(CONFIG_RBPS_THRESH) | + RBP_QSIZE(CONFIG_RBPS_SIZE - 1) | + RBP_INT_ENB, + G0_RBPS_QI + (group * 32)); +#else /* !USE_RBPS */ + he_writel(he_dev, 0x0, G0_RBPS_S + (group * 32)); + he_writel(he_dev, 0x0, G0_RBPS_T + (group * 32)); + he_writel(he_dev, 0x0, G0_RBPS_QI + (group * 32)); + he_writel(he_dev, RBP_THRESH(0x1) | RBP_QSIZE(0x0), + G0_RBPS_BS + (group * 32)); +#endif /* USE_RBPS */ + + /* large buffer pool */ +#ifdef USE_RBPL_POOL + he_dev->rbpl_pool = pci_pool_create("rbpl", he_dev->pci_dev, + CONFIG_RBPL_BUFSIZE, 8, 0); + if (he_dev->rbpl_pool == NULL) { + hprintk("unable to create rbpl pool\n"); + return -ENOMEM; + } +#else /* !USE_RBPL_POOL */ + he_dev->rbpl_pages = (void *) pci_alloc_consistent(he_dev->pci_dev, + CONFIG_RBPL_SIZE * CONFIG_RBPL_BUFSIZE, &he_dev->rbpl_pages_phys); + if 
(he_dev->rbpl_pages == NULL) { + hprintk("unable to create rbpl pages\n"); + return -ENOMEM; + } +#endif /* USE_RBPL_POOL */ + + he_dev->rbpl_base = pci_alloc_consistent(he_dev->pci_dev, + CONFIG_RBPL_SIZE * sizeof(struct he_rbp), &he_dev->rbpl_phys); + if (he_dev->rbpl_base == NULL) { + hprintk("failed to alloc rbpl\n"); + return -ENOMEM; + } + memset(he_dev->rbpl_base, 0, CONFIG_RBPL_SIZE * sizeof(struct he_rbp)); + he_dev->rbpl_virt = kmalloc(CONFIG_RBPL_SIZE * sizeof(struct he_virt), GFP_KERNEL); + + for (i = 0; i < CONFIG_RBPL_SIZE; ++i) { + dma_addr_t dma_handle; + void *cpuaddr; + +#ifdef USE_RBPL_POOL + cpuaddr = pci_pool_alloc(he_dev->rbpl_pool, SLAB_KERNEL|SLAB_DMA, &dma_handle); + if (cpuaddr == NULL) + return -ENOMEM; +#else + cpuaddr = he_dev->rbpl_pages + (i * CONFIG_RBPL_BUFSIZE); + dma_handle = he_dev->rbpl_pages_phys + (i * CONFIG_RBPL_BUFSIZE); +#endif + + he_dev->rbpl_virt[i].virt = cpuaddr; + he_dev->rbpl_base[i].status = RBP_LOANED | (i << RBP_INDEX_OFF); + he_dev->rbpl_base[i].phys = dma_handle; + } + he_dev->rbpl_tail = &he_dev->rbpl_base[CONFIG_RBPL_SIZE - 1]; + + he_writel(he_dev, he_dev->rbpl_phys, G0_RBPL_S + (group * 32)); + he_writel(he_dev, RBPL_MASK(he_dev->rbpl_tail), + G0_RBPL_T + (group * 32)); + he_writel(he_dev, CONFIG_RBPL_BUFSIZE/4, + G0_RBPL_BS + (group * 32)); + he_writel(he_dev, + RBP_THRESH(CONFIG_RBPL_THRESH) | + RBP_QSIZE(CONFIG_RBPL_SIZE - 1) | + RBP_INT_ENB, + G0_RBPL_QI + (group * 32)); + + /* rx buffer ready queue */ + + he_dev->rbrq_base = pci_alloc_consistent(he_dev->pci_dev, + CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq), &he_dev->rbrq_phys); + if (he_dev->rbrq_base == NULL) { + hprintk("failed to allocate rbrq\n"); + return -ENOMEM; + } + memset(he_dev->rbrq_base, 0, CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq)); + + he_dev->rbrq_head = he_dev->rbrq_base; + he_writel(he_dev, he_dev->rbrq_phys, G0_RBRQ_ST + (group * 16)); + he_writel(he_dev, 0, G0_RBRQ_H + (group * 16)); + he_writel(he_dev, + 
RBRQ_THRESH(CONFIG_RBRQ_THRESH) | RBRQ_SIZE(CONFIG_RBRQ_SIZE - 1), + G0_RBRQ_Q + (group * 16)); + if (irq_coalesce) { + hprintk("coalescing interrupts\n"); + he_writel(he_dev, RBRQ_TIME(768) | RBRQ_COUNT(7), + G0_RBRQ_I + (group * 16)); + } else + he_writel(he_dev, RBRQ_TIME(0) | RBRQ_COUNT(1), + G0_RBRQ_I + (group * 16)); + + /* tx buffer ready queue */ + + he_dev->tbrq_base = pci_alloc_consistent(he_dev->pci_dev, + CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq), &he_dev->tbrq_phys); + if (he_dev->tbrq_base == NULL) { + hprintk("failed to allocate tbrq\n"); + return -ENOMEM; + } + memset(he_dev->tbrq_base, 0, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq)); + + he_dev->tbrq_head = he_dev->tbrq_base; + + he_writel(he_dev, he_dev->tbrq_phys, G0_TBRQ_B_T + (group * 16)); + he_writel(he_dev, 0, G0_TBRQ_H + (group * 16)); + he_writel(he_dev, CONFIG_TBRQ_SIZE - 1, G0_TBRQ_S + (group * 16)); + he_writel(he_dev, CONFIG_TBRQ_THRESH, G0_TBRQ_THRESH + (group * 16)); + + return 0; +} + +static int __init +he_init_irq(struct he_dev *he_dev) +{ + int i; + + /* 2.9.3.5 tail offset for each interrupt queue is located after the + end of the interrupt queue */ + + he_dev->irq_base = pci_alloc_consistent(he_dev->pci_dev, + (CONFIG_IRQ_SIZE+1) * sizeof(struct he_irq), &he_dev->irq_phys); + if (he_dev->irq_base == NULL) { + hprintk("failed to allocate irq\n"); + return -ENOMEM; + } + he_dev->irq_tailoffset = (unsigned *) + &he_dev->irq_base[CONFIG_IRQ_SIZE]; + *he_dev->irq_tailoffset = 0; + he_dev->irq_head = he_dev->irq_base; + he_dev->irq_tail = he_dev->irq_base; + + for (i = 0; i < CONFIG_IRQ_SIZE; ++i) + he_dev->irq_base[i].isw = ITYPE_INVALID; + + he_writel(he_dev, he_dev->irq_phys, IRQ0_BASE); + he_writel(he_dev, + IRQ_SIZE(CONFIG_IRQ_SIZE) | IRQ_THRESH(CONFIG_IRQ_THRESH), + IRQ0_HEAD); + he_writel(he_dev, IRQ_INT_A | IRQ_TYPE_LINE, IRQ0_CNTL); + he_writel(he_dev, 0x0, IRQ0_DATA); + + he_writel(he_dev, 0x0, IRQ1_BASE); + he_writel(he_dev, 0x0, IRQ1_HEAD); + he_writel(he_dev, 0x0, 
IRQ1_CNTL); + he_writel(he_dev, 0x0, IRQ1_DATA); + + he_writel(he_dev, 0x0, IRQ2_BASE); + he_writel(he_dev, 0x0, IRQ2_HEAD); + he_writel(he_dev, 0x0, IRQ2_CNTL); + he_writel(he_dev, 0x0, IRQ2_DATA); + + he_writel(he_dev, 0x0, IRQ3_BASE); + he_writel(he_dev, 0x0, IRQ3_HEAD); + he_writel(he_dev, 0x0, IRQ3_CNTL); + he_writel(he_dev, 0x0, IRQ3_DATA); + + /* 2.9.3.2 interrupt queue mapping registers */ + + he_writel(he_dev, 0x0, GRP_10_MAP); + he_writel(he_dev, 0x0, GRP_32_MAP); + he_writel(he_dev, 0x0, GRP_54_MAP); + he_writel(he_dev, 0x0, GRP_76_MAP); + + if (request_irq(he_dev->pci_dev->irq, he_irq_handler, SA_INTERRUPT|SA_SHIRQ, DEV_LABEL, he_dev)) { + hprintk("irq %d already in use\n", he_dev->pci_dev->irq); + return -EINVAL; + } + + he_dev->irq = he_dev->pci_dev->irq; + + return 0; +} + +static int __init +he_start(struct atm_dev *dev) +{ + struct he_dev *he_dev; + struct pci_dev *pci_dev; + unsigned long membase; + + u16 command; + u32 gen_cntl_0, host_cntl, lb_swap; + u8 cache_size, timer; + + unsigned err; + unsigned int status, reg; + int i, group; + + he_dev = HE_DEV(dev); + pci_dev = he_dev->pci_dev; + + membase = pci_resource_start(pci_dev, 0); + HPRINTK("membase = 0x%lx irq = %d.\n", membase, pci_dev->irq); + + /* + * pci bus controller initialization + */ + + /* 4.3 pci bus controller-specific initialization */ + if (pci_read_config_dword(pci_dev, GEN_CNTL_0, &gen_cntl_0) != 0) { + hprintk("can't read GEN_CNTL_0\n"); + return -EINVAL; + } + gen_cntl_0 |= (MRL_ENB | MRM_ENB | IGNORE_TIMEOUT); + if (pci_write_config_dword(pci_dev, GEN_CNTL_0, gen_cntl_0) != 0) { + hprintk("can't write GEN_CNTL_0.\n"); + return -EINVAL; + } + + if (pci_read_config_word(pci_dev, PCI_COMMAND, &command) != 0) { + hprintk("can't read PCI_COMMAND.\n"); + return -EINVAL; + } + + command |= (PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | PCI_COMMAND_INVALIDATE); + if (pci_write_config_word(pci_dev, PCI_COMMAND, command) != 0) { + hprintk("can't enable memory.\n"); + return -EINVAL; + } 
+ + if (pci_read_config_byte(pci_dev, PCI_CACHE_LINE_SIZE, &cache_size)) { + hprintk("can't read cache line size?\n"); + return -EINVAL; + } + + if (cache_size < 16) { + cache_size = 16; + if (pci_write_config_byte(pci_dev, PCI_CACHE_LINE_SIZE, cache_size)) + hprintk("can't set cache line size to %d\n", cache_size); + } + + if (pci_read_config_byte(pci_dev, PCI_LATENCY_TIMER, &timer)) { + hprintk("can't read latency timer?\n"); + return -EINVAL; + } + + /* from table 3.9 + * + * LAT_TIMER = 1 + AVG_LAT + BURST_SIZE/BUS_SIZE + * + * AVG_LAT: The average first data read/write latency [maximum 16 clock cycles] + * BURST_SIZE: 1536 bytes (read) for 622, 768 bytes (read) for 155 [192 clock cycles] + * + */ +#define LAT_TIMER 209 + if (timer < LAT_TIMER) { + HPRINTK("latency timer was %d, setting to %d\n", timer, LAT_TIMER); + timer = LAT_TIMER; + if (pci_write_config_byte(pci_dev, PCI_LATENCY_TIMER, timer)) + hprintk("can't set latency timer to %d\n", timer); + } + + if (!(he_dev->membase = ioremap(membase, HE_REGMAP_SIZE))) { + hprintk("can't set up page mapping\n"); + return -EINVAL; + } + + /* 4.4 card reset */ + he_writel(he_dev, 0x0, RESET_CNTL); + he_writel(he_dev, 0xff, RESET_CNTL); + + udelay(16*1000); /* 16 ms */ + status = he_readl(he_dev, RESET_CNTL); + if ((status & BOARD_RST_STATUS) == 0) { + hprintk("reset failed\n"); + return -EINVAL; + } + + /* 4.5 set bus width */ + host_cntl = he_readl(he_dev, HOST_CNTL); + if (host_cntl & PCI_BUS_SIZE64) + gen_cntl_0 |= ENBL_64; + else + gen_cntl_0 &= ~ENBL_64; + + if (disable64 == 1) { + hprintk("disabling 64-bit pci bus transfers\n"); + gen_cntl_0 &= ~ENBL_64; + } + + if (gen_cntl_0 & ENBL_64) + hprintk("64-bit transfers enabled\n"); + + pci_write_config_dword(pci_dev, GEN_CNTL_0, gen_cntl_0); + + /* 4.7 read prom contents */ + for (i = 0; i < PROD_ID_LEN; ++i) + he_dev->prod_id[i] = read_prom_byte(he_dev, PROD_ID + i); + + he_dev->media = read_prom_byte(he_dev, MEDIA); + + for (i = 0; i < 6; ++i) + dev->esi[i] = 
read_prom_byte(he_dev, MAC_ADDR + i); + + hprintk("%s%s, %x:%x:%x:%x:%x:%x\n", + he_dev->prod_id, + he_dev->media & 0x40 ? "SM" : "MM", + dev->esi[0], + dev->esi[1], + dev->esi[2], + dev->esi[3], + dev->esi[4], + dev->esi[5]); + he_dev->atm_dev->link_rate = he_is622(he_dev) ? + ATM_OC12_PCR : ATM_OC3_PCR; + + /* 4.6 set host endianess */ + lb_swap = he_readl(he_dev, LB_SWAP); + if (he_is622(he_dev)) + lb_swap &= ~XFER_SIZE; /* 4 cells */ + else + lb_swap |= XFER_SIZE; /* 8 cells */ +#ifdef __BIG_ENDIAN + lb_swap |= DESC_WR_SWAP | INTR_SWAP | BIG_ENDIAN_HOST; +#else + lb_swap &= ~(DESC_WR_SWAP | INTR_SWAP | BIG_ENDIAN_HOST | + DATA_WR_SWAP | DATA_RD_SWAP | DESC_RD_SWAP); +#endif /* __BIG_ENDIAN */ + he_writel(he_dev, lb_swap, LB_SWAP); + + /* 4.8 sdram controller initialization */ + he_writel(he_dev, he_is622(he_dev) ? LB_64_ENB : 0x0, SDRAM_CTL); + + /* 4.9 initialize rnum value */ + lb_swap |= SWAP_RNUM_MAX(0xf); + he_writel(he_dev, lb_swap, LB_SWAP); + + /* 4.10 initialize the interrupt queues */ + if ((err = he_init_irq(he_dev)) != 0) + return err; + +#ifdef USE_TASKLET + tasklet_init(&he_dev->tasklet, he_tasklet, (unsigned long) he_dev); +#endif + spin_lock_init(&he_dev->global_lock); + + /* 4.11 enable pci bus controller state machines */ + host_cntl |= (OUTFF_ENB | CMDFF_ENB | + QUICK_RD_RETRY | QUICK_WR_RETRY | PERR_INT_ENB); + he_writel(he_dev, host_cntl, HOST_CNTL); + + gen_cntl_0 |= INT_PROC_ENBL|INIT_ENB; + pci_write_config_dword(pci_dev, GEN_CNTL_0, gen_cntl_0); + + /* + * atm network controller initialization + */ + + /* 5.1.1 generic configuration state */ + + /* + * local (cell) buffer memory map + * + * HE155 HE622 + * + * 0 ____________1023 bytes 0 _______________________2047 bytes + * | | | | | + * | utility | | rx0 | | + * 5|____________| 255|___________________| u | + * 6| | 256| | t | + * | | | | i | + * | rx0 | row | tx | l | + * | | | | i | + * | | 767|___________________| t | + * 517|____________| 768| | y | + * row 518| | | rx1 | | + * | | 
1023|___________________|___| + * | | + * | tx | + * | | + * | | + * 1535|____________| + * 1536| | + * | rx1 | + * 2047|____________| + * + */ + + /* total 4096 connections */ + he_dev->vcibits = CONFIG_DEFAULT_VCIBITS; + he_dev->vpibits = CONFIG_DEFAULT_VPIBITS; + + if (nvpibits != -1 && nvcibits != -1 && nvpibits+nvcibits != HE_MAXCIDBITS) { + hprintk("nvpibits + nvcibits != %d\n", HE_MAXCIDBITS); + return -ENODEV; + } + + if (nvpibits != -1) { + he_dev->vpibits = nvpibits; + he_dev->vcibits = HE_MAXCIDBITS - nvpibits; + } + + if (nvcibits != -1) { + he_dev->vcibits = nvcibits; + he_dev->vpibits = HE_MAXCIDBITS - nvcibits; + } + + + if (he_is622(he_dev)) { + he_dev->cells_per_row = 40; + he_dev->bytes_per_row = 2048; + he_dev->r0_numrows = 256; + he_dev->tx_numrows = 512; + he_dev->r1_numrows = 256; + he_dev->r0_startrow = 0; + he_dev->tx_startrow = 256; + he_dev->r1_startrow = 768; + } else { + he_dev->cells_per_row = 20; + he_dev->bytes_per_row = 1024; + he_dev->r0_numrows = 512; + he_dev->tx_numrows = 1018; + he_dev->r1_numrows = 512; + he_dev->r0_startrow = 6; + he_dev->tx_startrow = 518; + he_dev->r1_startrow = 1536; + } + + he_dev->cells_per_lbuf = 4; + he_dev->buffer_limit = 4; + he_dev->r0_numbuffs = he_dev->r0_numrows * + he_dev->cells_per_row / he_dev->cells_per_lbuf; + if (he_dev->r0_numbuffs > 2560) + he_dev->r0_numbuffs = 2560; + + he_dev->r1_numbuffs = he_dev->r1_numrows * + he_dev->cells_per_row / he_dev->cells_per_lbuf; + if (he_dev->r1_numbuffs > 2560) + he_dev->r1_numbuffs = 2560; + + he_dev->tx_numbuffs = he_dev->tx_numrows * + he_dev->cells_per_row / he_dev->cells_per_lbuf; + if (he_dev->tx_numbuffs > 5120) + he_dev->tx_numbuffs = 5120; + + /* 5.1.2 configure hardware dependent registers */ + + he_writel(he_dev, + SLICE_X(0x2) | ARB_RNUM_MAX(0xf) | TH_PRTY(0x3) | + RH_PRTY(0x3) | TL_PRTY(0x2) | RL_PRTY(0x1) | + (he_is622(he_dev) ? BUS_MULTI(0x28) : BUS_MULTI(0x46)) | + (he_is622(he_dev) ? 
NET_PREF(0x50) : NET_PREF(0x8c)), + LBARB); + + he_writel(he_dev, BANK_ON | + (he_is622(he_dev) ? (REF_RATE(0x384) | WIDE_DATA) : REF_RATE(0x150)), + SDRAMCON); + + he_writel(he_dev, + (he_is622(he_dev) ? RM_BANK_WAIT(1) : RM_BANK_WAIT(0)) | + RM_RW_WAIT(1), RCMCONFIG); + he_writel(he_dev, + (he_is622(he_dev) ? TM_BANK_WAIT(2) : TM_BANK_WAIT(1)) | + TM_RW_WAIT(1), TCMCONFIG); + + he_writel(he_dev, he_dev->cells_per_lbuf * ATM_CELL_PAYLOAD, LB_CONFIG); + + he_writel(he_dev, + (he_is622(he_dev) ? UT_RD_DELAY(8) : UT_RD_DELAY(0)) | + (he_is622(he_dev) ? RC_UT_MODE(0) : RC_UT_MODE(1)) | + RX_VALVP(he_dev->vpibits) | + RX_VALVC(he_dev->vcibits), RC_CONFIG); + + he_writel(he_dev, DRF_THRESH(0x20) | + (he_is622(he_dev) ? TX_UT_MODE(0) : TX_UT_MODE(1)) | + TX_VCI_MASK(he_dev->vcibits) | + LBFREE_CNT(he_dev->tx_numbuffs), TX_CONFIG); + + he_writel(he_dev, 0x0, TXAAL5_PROTO); + + he_writel(he_dev, PHY_INT_ENB | + (he_is622(he_dev) ? PTMR_PRE(67 - 1) : PTMR_PRE(50 - 1)), + RH_CONFIG); + + /* 5.1.3 initialize connection memory */ + + for (i = 0; i < TCM_MEM_SIZE; ++i) + he_writel_tcm(he_dev, 0, i); + + for (i = 0; i < RCM_MEM_SIZE; ++i) + he_writel_rcm(he_dev, 0, i); + + /* + * transmit connection memory map + * + * tx memory + * 0x0 ___________________ + * | | + * | | + * | TSRa | + * | | + * | | + * 0x8000|___________________| + * | | + * | TSRb | + * 0xc000|___________________| + * | | + * | TSRc | + * 0xe000|___________________| + * | TSRd | + * 0xf000|___________________| + * | tmABR | + * 0x10000|___________________| + * | | + * | tmTPD | + * |___________________| + * | | + * .... 
+ * 0x1ffff|___________________| + * + * + */ + + he_writel(he_dev, CONFIG_TSRB, TSRB_BA); + he_writel(he_dev, CONFIG_TSRC, TSRC_BA); + he_writel(he_dev, CONFIG_TSRD, TSRD_BA); + he_writel(he_dev, CONFIG_TMABR, TMABR_BA); + he_writel(he_dev, CONFIG_TPDBA, TPD_BA); + + + /* + * receive connection memory map + * + * 0x0 ___________________ + * | | + * | | + * | RSRa | + * | | + * | | + * 0x8000|___________________| + * | | + * | rx0/1 | + * | LBM | link lists of local + * | tx | buffer memory + * | | + * 0xd000|___________________| + * | | + * | rmABR | + * 0xe000|___________________| + * | | + * | RSRb | + * |___________________| + * | | + * .... + * 0xffff|___________________| + */ + + he_writel(he_dev, 0x08000, RCMLBM_BA); + he_writel(he_dev, 0x0e000, RCMRSRB_BA); + he_writel(he_dev, 0x0d800, RCMABR_BA); + + /* 5.1.4 initialize local buffer free pools linked lists */ + + he_init_rx_lbfp0(he_dev); + he_init_rx_lbfp1(he_dev); + + he_writel(he_dev, 0x0, RLBC_H); + he_writel(he_dev, 0x0, RLBC_T); + he_writel(he_dev, 0x0, RLBC_H2); + + he_writel(he_dev, 512, RXTHRSH); /* 10% of r0+r1 buffers */ + he_writel(he_dev, 256, LITHRSH); /* 5% of r0+r1 buffers */ + + he_init_tx_lbfp(he_dev); + + he_writel(he_dev, he_is622(he_dev) ? 
0x104780 : 0x800, UBUFF_BA); + + /* 5.1.5 initialize intermediate receive queues */ + + if (he_is622(he_dev)) { + he_writel(he_dev, 0x000f, G0_INMQ_S); + he_writel(he_dev, 0x200f, G0_INMQ_L); + + he_writel(he_dev, 0x001f, G1_INMQ_S); + he_writel(he_dev, 0x201f, G1_INMQ_L); + + he_writel(he_dev, 0x002f, G2_INMQ_S); + he_writel(he_dev, 0x202f, G2_INMQ_L); + + he_writel(he_dev, 0x003f, G3_INMQ_S); + he_writel(he_dev, 0x203f, G3_INMQ_L); + + he_writel(he_dev, 0x004f, G4_INMQ_S); + he_writel(he_dev, 0x204f, G4_INMQ_L); + + he_writel(he_dev, 0x005f, G5_INMQ_S); + he_writel(he_dev, 0x205f, G5_INMQ_L); + + he_writel(he_dev, 0x006f, G6_INMQ_S); + he_writel(he_dev, 0x206f, G6_INMQ_L); + + he_writel(he_dev, 0x007f, G7_INMQ_S); + he_writel(he_dev, 0x207f, G7_INMQ_L); + } else { + he_writel(he_dev, 0x0000, G0_INMQ_S); + he_writel(he_dev, 0x0008, G0_INMQ_L); + + he_writel(he_dev, 0x0001, G1_INMQ_S); + he_writel(he_dev, 0x0009, G1_INMQ_L); + + he_writel(he_dev, 0x0002, G2_INMQ_S); + he_writel(he_dev, 0x000a, G2_INMQ_L); + + he_writel(he_dev, 0x0003, G3_INMQ_S); + he_writel(he_dev, 0x000b, G3_INMQ_L); + + he_writel(he_dev, 0x0004, G4_INMQ_S); + he_writel(he_dev, 0x000c, G4_INMQ_L); + + he_writel(he_dev, 0x0005, G5_INMQ_S); + he_writel(he_dev, 0x000d, G5_INMQ_L); + + he_writel(he_dev, 0x0006, G6_INMQ_S); + he_writel(he_dev, 0x000e, G6_INMQ_L); + + he_writel(he_dev, 0x0007, G7_INMQ_S); + he_writel(he_dev, 0x000f, G7_INMQ_L); + } + + /* 5.1.6 application tunable parameters */ + + he_writel(he_dev, 0x0, MCC); + he_writel(he_dev, 0x0, OEC); + he_writel(he_dev, 0x0, DCC); + he_writel(he_dev, 0x0, CEC); + + /* 5.1.7 cs block initialization */ + + he_init_cs_block(he_dev); + + /* 5.1.8 cs block connection memory initialization */ + + if (he_init_cs_block_rcm(he_dev) < 0) + return -ENOMEM; + + /* 5.1.10 initialize host structures */ + + he_init_tpdrq(he_dev); + +#ifdef USE_TPD_POOL + he_dev->tpd_pool = pci_pool_create("tpd", he_dev->pci_dev, + sizeof(struct he_tpd), TPD_ALIGNMENT, 0); + if 
(he_dev->tpd_pool == NULL) { + hprintk("unable to create tpd pci_pool\n"); + return -ENOMEM; + } + + INIT_LIST_HEAD(&he_dev->outstanding_tpds); +#else + he_dev->tpd_base = (void *) pci_alloc_consistent(he_dev->pci_dev, + CONFIG_NUMTPDS * sizeof(struct he_tpd), &he_dev->tpd_base_phys); + if (!he_dev->tpd_base) + return -ENOMEM; + + for (i = 0; i < CONFIG_NUMTPDS; ++i) { + he_dev->tpd_base[i].status = (i << TPD_ADDR_SHIFT); + he_dev->tpd_base[i].inuse = 0; + } + + he_dev->tpd_head = he_dev->tpd_base; + he_dev->tpd_end = &he_dev->tpd_base[CONFIG_NUMTPDS - 1]; +#endif + + if (he_init_group(he_dev, 0) != 0) + return -ENOMEM; + + for (group = 1; group < HE_NUM_GROUPS; ++group) { + he_writel(he_dev, 0x0, G0_RBPS_S + (group * 32)); + he_writel(he_dev, 0x0, G0_RBPS_T + (group * 32)); + he_writel(he_dev, 0x0, G0_RBPS_QI + (group * 32)); + he_writel(he_dev, RBP_THRESH(0x1) | RBP_QSIZE(0x0), + G0_RBPS_BS + (group * 32)); + + he_writel(he_dev, 0x0, G0_RBPL_S + (group * 32)); + he_writel(he_dev, 0x0, G0_RBPL_T + (group * 32)); + he_writel(he_dev, RBP_THRESH(0x1) | RBP_QSIZE(0x0), + G0_RBPL_QI + (group * 32)); + he_writel(he_dev, 0x0, G0_RBPL_BS + (group * 32)); + + he_writel(he_dev, 0x0, G0_RBRQ_ST + (group * 16)); + he_writel(he_dev, 0x0, G0_RBRQ_H + (group * 16)); + he_writel(he_dev, RBRQ_THRESH(0x1) | RBRQ_SIZE(0x0), + G0_RBRQ_Q + (group * 16)); + he_writel(he_dev, 0x0, G0_RBRQ_I + (group * 16)); + + he_writel(he_dev, 0x0, G0_TBRQ_B_T + (group * 16)); + he_writel(he_dev, 0x0, G0_TBRQ_H + (group * 16)); + he_writel(he_dev, TBRQ_THRESH(0x1), + G0_TBRQ_THRESH + (group * 16)); + he_writel(he_dev, 0x0, G0_TBRQ_S + (group * 16)); + } + + /* host status page */ + + he_dev->hsp = pci_alloc_consistent(he_dev->pci_dev, + sizeof(struct he_hsp), &he_dev->hsp_phys); + if (he_dev->hsp == NULL) { + hprintk("failed to allocate host status page\n"); + return -ENOMEM; + } + memset(he_dev->hsp, 0, sizeof(struct he_hsp)); + he_writel(he_dev, he_dev->hsp_phys, HSP_BA); + + /* initialize framer */ 
+ +#ifdef CONFIG_ATM_HE_USE_SUNI + suni_init(he_dev->atm_dev); + if (he_dev->atm_dev->phy && he_dev->atm_dev->phy->start) + he_dev->atm_dev->phy->start(he_dev->atm_dev); +#endif /* CONFIG_ATM_HE_USE_SUNI */ + + if (sdh) { + /* this really should be in suni.c but for now... */ + int val; + + val = he_phy_get(he_dev->atm_dev, SUNI_TPOP_APM); + val = (val & ~SUNI_TPOP_APM_S) | (SUNI_TPOP_S_SDH << SUNI_TPOP_APM_S_SHIFT); + he_phy_put(he_dev->atm_dev, val, SUNI_TPOP_APM); + } + + /* 5.1.12 enable transmit and receive */ + + reg = he_readl_mbox(he_dev, CS_ERCTL0); + reg |= TX_ENABLE|ER_ENABLE; + he_writel_mbox(he_dev, reg, CS_ERCTL0); + + reg = he_readl(he_dev, RC_CONFIG); + reg |= RX_ENABLE; + he_writel(he_dev, reg, RC_CONFIG); + + for (i = 0; i < HE_NUM_CS_STPER; ++i) { + he_dev->cs_stper[i].inuse = 0; + he_dev->cs_stper[i].pcr = -1; + } + he_dev->total_bw = 0; + + + /* atm linux initialization */ + + he_dev->atm_dev->ci_range.vpi_bits = he_dev->vpibits; + he_dev->atm_dev->ci_range.vci_bits = he_dev->vcibits; + + he_dev->irq_peak = 0; + he_dev->rbrq_peak = 0; + he_dev->rbpl_peak = 0; + he_dev->tbrq_peak = 0; + + HPRINTK("hell bent for leather!\n"); + + return 0; +} + +static void +he_stop(struct he_dev *he_dev) +{ + u16 command; + u32 gen_cntl_0, reg; + struct pci_dev *pci_dev; + + pci_dev = he_dev->pci_dev; + + /* disable interrupts */ + + if (he_dev->membase) { + pci_read_config_dword(pci_dev, GEN_CNTL_0, &gen_cntl_0); + gen_cntl_0 &= ~(INT_PROC_ENBL | INIT_ENB); + pci_write_config_dword(pci_dev, GEN_CNTL_0, gen_cntl_0); + +#ifdef USE_TASKLET + tasklet_disable(&he_dev->tasklet); +#endif + + /* disable recv and transmit */ + + reg = he_readl_mbox(he_dev, CS_ERCTL0); + reg &= ~(TX_ENABLE|ER_ENABLE); + he_writel_mbox(he_dev, reg, CS_ERCTL0); + + reg = he_readl(he_dev, RC_CONFIG); + reg &= ~(RX_ENABLE); + he_writel(he_dev, reg, RC_CONFIG); + } + +#ifdef CONFIG_ATM_HE_USE_SUNI + if (he_dev->atm_dev->phy && he_dev->atm_dev->phy->stop) + 
he_dev->atm_dev->phy->stop(he_dev->atm_dev); +#endif /* CONFIG_ATM_HE_USE_SUNI */ + + if (he_dev->irq) + free_irq(he_dev->irq, he_dev); + + if (he_dev->irq_base) + pci_free_consistent(he_dev->pci_dev, (CONFIG_IRQ_SIZE+1) + * sizeof(struct he_irq), he_dev->irq_base, he_dev->irq_phys); + + if (he_dev->hsp) + pci_free_consistent(he_dev->pci_dev, sizeof(struct he_hsp), + he_dev->hsp, he_dev->hsp_phys); + + if (he_dev->rbpl_base) { +#ifdef USE_RBPL_POOL + for (i = 0; i < CONFIG_RBPL_SIZE; ++i) { + void *cpuaddr = he_dev->rbpl_virt[i].virt; + dma_addr_t dma_handle = he_dev->rbpl_base[i].phys; + + pci_pool_free(he_dev->rbpl_pool, cpuaddr, dma_handle); + } +#else + pci_free_consistent(he_dev->pci_dev, CONFIG_RBPL_SIZE + * CONFIG_RBPL_BUFSIZE, he_dev->rbpl_pages, he_dev->rbpl_pages_phys); +#endif + pci_free_consistent(he_dev->pci_dev, CONFIG_RBPL_SIZE + * sizeof(struct he_rbp), he_dev->rbpl_base, he_dev->rbpl_phys); + } + +#ifdef USE_RBPL_POOL + if (he_dev->rbpl_pool) + pci_pool_destroy(he_dev->rbpl_pool); +#endif + +#ifdef USE_RBPS + if (he_dev->rbps_base) { +#ifdef USE_RBPS_POOL + for (i = 0; i < CONFIG_RBPS_SIZE; ++i) { + void *cpuaddr = he_dev->rbps_virt[i].virt; + dma_addr_t dma_handle = he_dev->rbps_base[i].phys; + + pci_pool_free(he_dev->rbps_pool, cpuaddr, dma_handle); + } +#else + pci_free_consistent(he_dev->pci_dev, CONFIG_RBPS_SIZE + * CONFIG_RBPS_BUFSIZE, he_dev->rbps_pages, he_dev->rbps_pages_phys); +#endif + pci_free_consistent(he_dev->pci_dev, CONFIG_RBPS_SIZE + * sizeof(struct he_rbp), he_dev->rbps_base, he_dev->rbps_phys); + } + +#ifdef USE_RBPS_POOL + if (he_dev->rbps_pool) + pci_pool_destroy(he_dev->rbps_pool); +#endif + +#endif /* USE_RBPS */ + + if (he_dev->rbrq_base) + pci_free_consistent(he_dev->pci_dev, CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq), + he_dev->rbrq_base, he_dev->rbrq_phys); + + if (he_dev->tbrq_base) + pci_free_consistent(he_dev->pci_dev, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq), + he_dev->tbrq_base, he_dev->tbrq_phys); + + if 
(he_dev->tpdrq_base) + pci_free_consistent(he_dev->pci_dev, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq), + he_dev->tpdrq_base, he_dev->tpdrq_phys); + +#ifdef USE_TPD_POOL + if (he_dev->tpd_pool) + pci_pool_destroy(he_dev->tpd_pool); +#else + if (he_dev->tpd_base) + pci_free_consistent(he_dev->pci_dev, CONFIG_NUMTPDS * sizeof(struct he_tpd), + he_dev->tpd_base, he_dev->tpd_base_phys); +#endif + + if (he_dev->pci_dev) { + pci_read_config_word(he_dev->pci_dev, PCI_COMMAND, &command); + command &= ~(PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER); + pci_write_config_word(he_dev->pci_dev, PCI_COMMAND, command); + } + + if (he_dev->membase) + iounmap(he_dev->membase); +} + +static struct he_tpd * +__alloc_tpd(struct he_dev *he_dev) +{ +#ifdef USE_TPD_POOL + struct he_tpd *tpd; + dma_addr_t dma_handle; + + tpd = pci_pool_alloc(he_dev->tpd_pool, SLAB_ATOMIC|SLAB_DMA, &dma_handle); + if (tpd == NULL) + return NULL; + + tpd->status = TPD_ADDR(dma_handle); + tpd->reserved = 0; + tpd->iovec[0].addr = 0; tpd->iovec[0].len = 0; + tpd->iovec[1].addr = 0; tpd->iovec[1].len = 0; + tpd->iovec[2].addr = 0; tpd->iovec[2].len = 0; + + return tpd; +#else + int i; + + for (i = 0; i < CONFIG_NUMTPDS; ++i) { + ++he_dev->tpd_head; + if (he_dev->tpd_head > he_dev->tpd_end) { + he_dev->tpd_head = he_dev->tpd_base; + } + + if (!he_dev->tpd_head->inuse) { + he_dev->tpd_head->inuse = 1; + he_dev->tpd_head->status &= TPD_MASK; + he_dev->tpd_head->iovec[0].addr = 0; he_dev->tpd_head->iovec[0].len = 0; + he_dev->tpd_head->iovec[1].addr = 0; he_dev->tpd_head->iovec[1].len = 0; + he_dev->tpd_head->iovec[2].addr = 0; he_dev->tpd_head->iovec[2].len = 0; + return he_dev->tpd_head; + } + } + hprintk("out of tpds -- increase CONFIG_NUMTPDS (%d)\n", CONFIG_NUMTPDS); + return NULL; +#endif +} + +#define AAL5_LEN(buf,len) \ + ((((unsigned char *)(buf))[(len)-6] << 8) | \ + (((unsigned char *)(buf))[(len)-5])) + +/* 2.10.1.2 receive + * + * aal5 packets can optionally return the tcp checksum in the lower + * 16 bits 
of the crc (RSR0_TCP_CKSUM) + */ + +#define TCP_CKSUM(buf,len) \ + ((((unsigned char *)(buf))[(len)-2] << 8) | \ + (((unsigned char *)(buf))[(len-1)])) + +static int +he_service_rbrq(struct he_dev *he_dev, int group) +{ + struct he_rbrq *rbrq_tail = (struct he_rbrq *) + ((unsigned long)he_dev->rbrq_base | + he_dev->hsp->group[group].rbrq_tail); + struct he_rbp *rbp = NULL; + unsigned cid, lastcid = -1; + unsigned buf_len = 0; + struct sk_buff *skb; + struct atm_vcc *vcc = NULL; + struct he_vcc *he_vcc; + struct he_iovec *iov; + int pdus_assembled = 0; + int updated = 0; + + read_lock(&vcc_sklist_lock); + while (he_dev->rbrq_head != rbrq_tail) { + ++updated; + + HPRINTK("%p rbrq%d 0x%x len=%d cid=0x%x %s%s%s%s%s%s\n", + he_dev->rbrq_head, group, + RBRQ_ADDR(he_dev->rbrq_head), + RBRQ_BUFLEN(he_dev->rbrq_head), + RBRQ_CID(he_dev->rbrq_head), + RBRQ_CRC_ERR(he_dev->rbrq_head) ? " CRC_ERR" : "", + RBRQ_LEN_ERR(he_dev->rbrq_head) ? " LEN_ERR" : "", + RBRQ_END_PDU(he_dev->rbrq_head) ? " END_PDU" : "", + RBRQ_AAL5_PROT(he_dev->rbrq_head) ? " AAL5_PROT" : "", + RBRQ_CON_CLOSED(he_dev->rbrq_head) ? " CON_CLOSED" : "", + RBRQ_HBUF_ERR(he_dev->rbrq_head) ? " HBUF_ERR" : ""); + +#ifdef USE_RBPS + if (RBRQ_ADDR(he_dev->rbrq_head) & RBP_SMALLBUF) + rbp = &he_dev->rbps_base[RBP_INDEX(RBRQ_ADDR(he_dev->rbrq_head))]; + else +#endif + rbp = &he_dev->rbpl_base[RBP_INDEX(RBRQ_ADDR(he_dev->rbrq_head))]; + + buf_len = RBRQ_BUFLEN(he_dev->rbrq_head) * 4; + cid = RBRQ_CID(he_dev->rbrq_head); + + if (cid != lastcid) + vcc = __find_vcc(he_dev, cid); + lastcid = cid; + + if (vcc == NULL) { + hprintk("vcc == NULL (cid 0x%x)\n", cid); + if (!RBRQ_HBUF_ERR(he_dev->rbrq_head)) + rbp->status &= ~RBP_LOANED; + + goto next_rbrq_entry; + } + + he_vcc = HE_VCC(vcc); + if (he_vcc == NULL) { + hprintk("he_vcc == NULL (cid 0x%x)\n", cid); + if (!RBRQ_HBUF_ERR(he_dev->rbrq_head)) + rbp->status &= ~RBP_LOANED; + goto next_rbrq_entry; + } + + if (RBRQ_HBUF_ERR(he_dev->rbrq_head)) { + hprintk("HBUF_ERR! 
(cid 0x%x)\n", cid); + atomic_inc(&vcc->stats->rx_drop); + goto return_host_buffers; + } + + he_vcc->iov_tail->iov_base = RBRQ_ADDR(he_dev->rbrq_head); + he_vcc->iov_tail->iov_len = buf_len; + he_vcc->pdu_len += buf_len; + ++he_vcc->iov_tail; + + if (RBRQ_CON_CLOSED(he_dev->rbrq_head)) { + lastcid = -1; + HPRINTK("wake_up rx_waitq (cid 0x%x)\n", cid); + wake_up(&he_vcc->rx_waitq); + goto return_host_buffers; + } + +#ifdef notdef + if ((he_vcc->iov_tail - he_vcc->iov_head) > HE_MAXIOV) { + hprintk("iovec full! cid 0x%x\n", cid); + goto return_host_buffers; + } +#endif + if (!RBRQ_END_PDU(he_dev->rbrq_head)) + goto next_rbrq_entry; + + if (RBRQ_LEN_ERR(he_dev->rbrq_head) + || RBRQ_CRC_ERR(he_dev->rbrq_head)) { + HPRINTK("%s%s (%d.%d)\n", + RBRQ_CRC_ERR(he_dev->rbrq_head) + ? "CRC_ERR " : "", + RBRQ_LEN_ERR(he_dev->rbrq_head) + ? "LEN_ERR" : "", + vcc->vpi, vcc->vci); + atomic_inc(&vcc->stats->rx_err); + goto return_host_buffers; + } + + skb = atm_alloc_charge(vcc, he_vcc->pdu_len + rx_skb_reserve, + GFP_ATOMIC); + if (!skb) { + HPRINTK("charge failed (%d.%d)\n", vcc->vpi, vcc->vci); + goto return_host_buffers; + } + + if (rx_skb_reserve > 0) + skb_reserve(skb, rx_skb_reserve); + + do_gettimeofday(&skb->stamp); + + for (iov = he_vcc->iov_head; + iov < he_vcc->iov_tail; ++iov) { +#ifdef USE_RBPS + if (iov->iov_base & RBP_SMALLBUF) + memcpy(skb_put(skb, iov->iov_len), + he_dev->rbps_virt[RBP_INDEX(iov->iov_base)].virt, iov->iov_len); + else +#endif + memcpy(skb_put(skb, iov->iov_len), + he_dev->rbpl_virt[RBP_INDEX(iov->iov_base)].virt, iov->iov_len); + } + + switch (vcc->qos.aal) { + case ATM_AAL0: + /* 2.10.1.5 raw cell receive */ + skb->len = ATM_AAL0_SDU; + skb->tail = skb->data + skb->len; + break; + case ATM_AAL5: + /* 2.10.1.2 aal5 receive */ + + skb->len = AAL5_LEN(skb->data, he_vcc->pdu_len); + skb->tail = skb->data + skb->len; +#ifdef USE_CHECKSUM_HW + if (vcc->vpi == 0 && vcc->vci >= ATM_NOT_RSV_VCI) { + skb->ip_summed = CHECKSUM_HW; + skb->csum = 
TCP_CKSUM(skb->data, + he_vcc->pdu_len); + } +#endif + break; + } + +#ifdef should_never_happen + if (skb->len > vcc->qos.rxtp.max_sdu) + hprintk("pdu_len (%d) > vcc->qos.rxtp.max_sdu (%d)! cid 0x%x\n", skb->len, vcc->qos.rxtp.max_sdu, cid); +#endif + +#ifdef notdef + ATM_SKB(skb)->vcc = vcc; +#endif + vcc->push(vcc, skb); + + atomic_inc(&vcc->stats->rx); + +return_host_buffers: + ++pdus_assembled; + + for (iov = he_vcc->iov_head; + iov < he_vcc->iov_tail; ++iov) { +#ifdef USE_RBPS + if (iov->iov_base & RBP_SMALLBUF) + rbp = &he_dev->rbps_base[RBP_INDEX(iov->iov_base)]; + else +#endif + rbp = &he_dev->rbpl_base[RBP_INDEX(iov->iov_base)]; + + rbp->status &= ~RBP_LOANED; + } + + he_vcc->iov_tail = he_vcc->iov_head; + he_vcc->pdu_len = 0; + +next_rbrq_entry: + he_dev->rbrq_head = (struct he_rbrq *) + ((unsigned long) he_dev->rbrq_base | + RBRQ_MASK(++he_dev->rbrq_head)); + + } + read_unlock(&vcc_sklist_lock); + + if (updated) { + if (updated > he_dev->rbrq_peak) + he_dev->rbrq_peak = updated; + + he_writel(he_dev, RBRQ_MASK(he_dev->rbrq_head), + G0_RBRQ_H + (group * 16)); + } + + return pdus_assembled; +} + +static void +he_service_tbrq(struct he_dev *he_dev, int group) +{ + struct he_tbrq *tbrq_tail = (struct he_tbrq *) + ((unsigned long)he_dev->tbrq_base | + he_dev->hsp->group[group].tbrq_tail); + struct he_tpd *tpd; + int slot, updated = 0; +#ifdef USE_TPD_POOL + struct he_tpd *__tpd; +#endif + + /* 2.1.6 transmit buffer return queue */ + + while (he_dev->tbrq_head != tbrq_tail) { + ++updated; + + HPRINTK("tbrq%d 0x%x%s%s\n", + group, + TBRQ_TPD(he_dev->tbrq_head), + TBRQ_EOS(he_dev->tbrq_head) ? " EOS" : "", + TBRQ_MULTIPLE(he_dev->tbrq_head) ? 
" MULTIPLE" : ""); +#ifdef USE_TPD_POOL + tpd = NULL; + list_for_each_entry(__tpd, &he_dev->outstanding_tpds, entry) { + if (TPD_ADDR(__tpd->status) == TBRQ_TPD(he_dev->tbrq_head)) { + tpd = __tpd; + list_del(&__tpd->entry); + break; + } + } + + if (tpd == NULL) { + hprintk("unable to locate tpd for dma buffer %x\n", + TBRQ_TPD(he_dev->tbrq_head)); + goto next_tbrq_entry; + } +#else + tpd = &he_dev->tpd_base[ TPD_INDEX(TBRQ_TPD(he_dev->tbrq_head)) ]; +#endif + + if (TBRQ_EOS(he_dev->tbrq_head)) { + HPRINTK("wake_up(tx_waitq) cid 0x%x\n", + he_mkcid(he_dev, tpd->vcc->vpi, tpd->vcc->vci)); + if (tpd->vcc) + wake_up(&HE_VCC(tpd->vcc)->tx_waitq); + + goto next_tbrq_entry; + } + + for (slot = 0; slot < TPD_MAXIOV; ++slot) { + if (tpd->iovec[slot].addr) + pci_unmap_single(he_dev->pci_dev, + tpd->iovec[slot].addr, + tpd->iovec[slot].len & TPD_LEN_MASK, + PCI_DMA_TODEVICE); + if (tpd->iovec[slot].len & TPD_LST) + break; + + } + + if (tpd->skb) { /* && !TBRQ_MULTIPLE(he_dev->tbrq_head) */ + if (tpd->vcc && tpd->vcc->pop) + tpd->vcc->pop(tpd->vcc, tpd->skb); + else + dev_kfree_skb_any(tpd->skb); + } + +next_tbrq_entry: +#ifdef USE_TPD_POOL + if (tpd) + pci_pool_free(he_dev->tpd_pool, tpd, TPD_ADDR(tpd->status)); +#else + tpd->inuse = 0; +#endif + he_dev->tbrq_head = (struct he_tbrq *) + ((unsigned long) he_dev->tbrq_base | + TBRQ_MASK(++he_dev->tbrq_head)); + } + + if (updated) { + if (updated > he_dev->tbrq_peak) + he_dev->tbrq_peak = updated; + + he_writel(he_dev, TBRQ_MASK(he_dev->tbrq_head), + G0_TBRQ_H + (group * 16)); + } +} + + +static void +he_service_rbpl(struct he_dev *he_dev, int group) +{ + struct he_rbp *newtail; + struct he_rbp *rbpl_head; + int moved = 0; + + rbpl_head = (struct he_rbp *) ((unsigned long)he_dev->rbpl_base | + RBPL_MASK(he_readl(he_dev, G0_RBPL_S))); + + for (;;) { + newtail = (struct he_rbp *) ((unsigned long)he_dev->rbpl_base | + RBPL_MASK(he_dev->rbpl_tail+1)); + + /* table 3.42 -- rbpl_tail should never be set to rbpl_head */ + if ((newtail 
== rbpl_head) || (newtail->status & RBP_LOANED)) + break; + + newtail->status |= RBP_LOANED; + he_dev->rbpl_tail = newtail; + ++moved; + } + + if (moved) + he_writel(he_dev, RBPL_MASK(he_dev->rbpl_tail), G0_RBPL_T); +} + +#ifdef USE_RBPS +static void +he_service_rbps(struct he_dev *he_dev, int group) +{ + struct he_rbp *newtail; + struct he_rbp *rbps_head; + int moved = 0; + + rbps_head = (struct he_rbp *) ((unsigned long)he_dev->rbps_base | + RBPS_MASK(he_readl(he_dev, G0_RBPS_S))); + + for (;;) { + newtail = (struct he_rbp *) ((unsigned long)he_dev->rbps_base | + RBPS_MASK(he_dev->rbps_tail+1)); + + /* table 3.42 -- rbps_tail should never be set to rbps_head */ + if ((newtail == rbps_head) || (newtail->status & RBP_LOANED)) + break; + + newtail->status |= RBP_LOANED; + he_dev->rbps_tail = newtail; + ++moved; + } + + if (moved) + he_writel(he_dev, RBPS_MASK(he_dev->rbps_tail), G0_RBPS_T); +} +#endif /* USE_RBPS */ + +static void +he_tasklet(unsigned long data) +{ + unsigned long flags; + struct he_dev *he_dev = (struct he_dev *) data; + int group, type; + int updated = 0; + + HPRINTK("tasklet (0x%lx)\n", data); +#ifdef USE_TASKLET + spin_lock_irqsave(&he_dev->global_lock, flags); +#endif + + while (he_dev->irq_head != he_dev->irq_tail) { + ++updated; + + type = ITYPE_TYPE(he_dev->irq_head->isw); + group = ITYPE_GROUP(he_dev->irq_head->isw); + + switch (type) { + case ITYPE_RBRQ_THRESH: + HPRINTK("rbrq%d threshold\n", group); + /* fall through */ + case ITYPE_RBRQ_TIMER: + if (he_service_rbrq(he_dev, group)) { + he_service_rbpl(he_dev, group); +#ifdef USE_RBPS + he_service_rbps(he_dev, group); +#endif /* USE_RBPS */ + } + break; + case ITYPE_TBRQ_THRESH: + HPRINTK("tbrq%d threshold\n", group); + /* fall through */ + case ITYPE_TPD_COMPLETE: + he_service_tbrq(he_dev, group); + break; + case ITYPE_RBPL_THRESH: + he_service_rbpl(he_dev, group); + break; + case ITYPE_RBPS_THRESH: +#ifdef USE_RBPS + he_service_rbps(he_dev, group); +#endif /* USE_RBPS */ + break; + case 
ITYPE_PHY: + HPRINTK("phy interrupt\n"); +#ifdef CONFIG_ATM_HE_USE_SUNI + spin_unlock_irqrestore(&he_dev->global_lock, flags); + if (he_dev->atm_dev->phy && he_dev->atm_dev->phy->interrupt) + he_dev->atm_dev->phy->interrupt(he_dev->atm_dev); + spin_lock_irqsave(&he_dev->global_lock, flags); +#endif + break; + case ITYPE_OTHER: + switch (type|group) { + case ITYPE_PARITY: + hprintk("parity error\n"); + break; + case ITYPE_ABORT: + hprintk("abort 0x%x\n", he_readl(he_dev, ABORT_ADDR)); + break; + } + break; + case ITYPE_TYPE(ITYPE_INVALID): + /* see 8.1.1 -- check all queues */ + + HPRINTK("isw not updated 0x%x\n", he_dev->irq_head->isw); + + he_service_rbrq(he_dev, 0); + he_service_rbpl(he_dev, 0); +#ifdef USE_RBPS + he_service_rbps(he_dev, 0); +#endif /* USE_RBPS */ + he_service_tbrq(he_dev, 0); + break; + default: + hprintk("bad isw 0x%x?\n", he_dev->irq_head->isw); + } + + he_dev->irq_head->isw = ITYPE_INVALID; + + he_dev->irq_head = (struct he_irq *) NEXT_ENTRY(he_dev->irq_base, he_dev->irq_head, IRQ_MASK); + } + + if (updated) { + if (updated > he_dev->irq_peak) + he_dev->irq_peak = updated; + + he_writel(he_dev, + IRQ_SIZE(CONFIG_IRQ_SIZE) | + IRQ_THRESH(CONFIG_IRQ_THRESH) | + IRQ_TAIL(he_dev->irq_tail), IRQ0_HEAD); + (void) he_readl(he_dev, INT_FIFO); /* 8.1.2 controller errata; flush posted writes */ + } +#ifdef USE_TASKLET + spin_unlock_irqrestore(&he_dev->global_lock, flags); +#endif +} + +static irqreturn_t +he_irq_handler(int irq, void *dev_id, struct pt_regs *regs) +{ + unsigned long flags; + struct he_dev *he_dev = (struct he_dev * )dev_id; + int handled = 0; + + if (he_dev == NULL) + return IRQ_NONE; + + spin_lock_irqsave(&he_dev->global_lock, flags); + + he_dev->irq_tail = (struct he_irq *) (((unsigned long)he_dev->irq_base) | + (*he_dev->irq_tailoffset << 2)); + + if (he_dev->irq_tail == he_dev->irq_head) { + HPRINTK("tailoffset not updated?\n"); + he_dev->irq_tail = (struct he_irq *) ((unsigned long)he_dev->irq_base | + ((he_readl(he_dev, 
IRQ0_BASE) & IRQ_MASK) << 2)); + (void) he_readl(he_dev, INT_FIFO); /* 8.1.2 controller errata */ + } + +#ifdef DEBUG + if (he_dev->irq_head == he_dev->irq_tail /* && !IRQ_PENDING */) + hprintk("spurious (or shared) interrupt?\n"); +#endif + + if (he_dev->irq_head != he_dev->irq_tail) { + handled = 1; +#ifdef USE_TASKLET + tasklet_schedule(&he_dev->tasklet); +#else + he_tasklet((unsigned long) he_dev); +#endif + he_writel(he_dev, INT_CLEAR_A, INT_FIFO); /* clear interrupt */ + (void) he_readl(he_dev, INT_FIFO); /* flush posted writes */ + } + spin_unlock_irqrestore(&he_dev->global_lock, flags); + return IRQ_RETVAL(handled); + +} + +static __inline__ void +__enqueue_tpd(struct he_dev *he_dev, struct he_tpd *tpd, unsigned cid) +{ + struct he_tpdrq *new_tail; + + HPRINTK("tpdrq %p cid 0x%x -> tpdrq_tail %p\n", + tpd, cid, he_dev->tpdrq_tail); + + /* new_tail = he_dev->tpdrq_tail; */ + new_tail = (struct he_tpdrq *) ((unsigned long) he_dev->tpdrq_base | + TPDRQ_MASK(he_dev->tpdrq_tail+1)); + + /* + * check to see if we are about to set the tail == head + * if true, update the head pointer from the adapter + * to see if this is really the case (reading the queue + * head for every enqueue would be unnecessarily slow) + */ + + if (new_tail == he_dev->tpdrq_head) { + he_dev->tpdrq_head = (struct he_tpdrq *) + (((unsigned long)he_dev->tpdrq_base) | + TPDRQ_MASK(he_readl(he_dev, TPDRQ_B_H))); + + if (new_tail == he_dev->tpdrq_head) { + hprintk("tpdrq full (cid 0x%x)\n", cid); + /* + * FIXME + * push tpd onto a transmit backlog queue + * after service_tbrq, service the backlog + * for now, we just drop the pdu + */ + if (tpd->skb) { + if (tpd->vcc->pop) + tpd->vcc->pop(tpd->vcc, tpd->skb); + else + dev_kfree_skb_any(tpd->skb); + atomic_inc(&tpd->vcc->stats->tx_err); + } +#ifdef USE_TPD_POOL + pci_pool_free(he_dev->tpd_pool, tpd, TPD_ADDR(tpd->status)); +#else + tpd->inuse = 0; +#endif + return; + } + } + + /* 2.1.5 transmit packet descriptor ready queue */ +#ifdef 
USE_TPD_POOL + list_add_tail(&tpd->entry, &he_dev->outstanding_tpds); + he_dev->tpdrq_tail->tpd = TPD_ADDR(tpd->status); +#else + he_dev->tpdrq_tail->tpd = he_dev->tpd_base_phys + + (TPD_INDEX(tpd->status) * sizeof(struct he_tpd)); +#endif + he_dev->tpdrq_tail->cid = cid; + wmb(); + + he_dev->tpdrq_tail = new_tail; + + he_writel(he_dev, TPDRQ_MASK(he_dev->tpdrq_tail), TPDRQ_T); + (void) he_readl(he_dev, TPDRQ_T); /* flush posted writes */ +} + +static int +he_open(struct atm_vcc *vcc) +{ + unsigned long flags; + struct he_dev *he_dev = HE_DEV(vcc->dev); + struct he_vcc *he_vcc; + int err = 0; + unsigned cid, rsr0, rsr1, rsr4, tsr0, tsr0_aal, tsr4, period, reg, clock; + short vpi = vcc->vpi; + int vci = vcc->vci; + + if (vci == ATM_VCI_UNSPEC || vpi == ATM_VPI_UNSPEC) + return 0; + + HPRINTK("open vcc %p %d.%d\n", vcc, vpi, vci); + + set_bit(ATM_VF_ADDR, &vcc->flags); + + cid = he_mkcid(he_dev, vpi, vci); + + he_vcc = (struct he_vcc *) kmalloc(sizeof(struct he_vcc), GFP_ATOMIC); + if (he_vcc == NULL) { + hprintk("unable to allocate he_vcc during open\n"); + return -ENOMEM; + } + + he_vcc->iov_tail = he_vcc->iov_head; + he_vcc->pdu_len = 0; + he_vcc->rc_index = -1; + + init_waitqueue_head(&he_vcc->rx_waitq); + init_waitqueue_head(&he_vcc->tx_waitq); + + vcc->dev_data = he_vcc; + + if (vcc->qos.txtp.traffic_class != ATM_NONE) { + int pcr_goal; + + pcr_goal = atm_pcr_goal(&vcc->qos.txtp); + if (pcr_goal == 0) + pcr_goal = he_dev->atm_dev->link_rate; + if (pcr_goal < 0) /* means round down, technically */ + pcr_goal = -pcr_goal; + + HPRINTK("open tx cid 0x%x pcr_goal %d\n", cid, pcr_goal); + + switch (vcc->qos.aal) { + case ATM_AAL5: + tsr0_aal = TSR0_AAL5; + tsr4 = TSR4_AAL5; + break; + case ATM_AAL0: + tsr0_aal = TSR0_AAL0_SDU; + tsr4 = TSR4_AAL0_SDU; + break; + default: + err = -EINVAL; + goto open_failed; + } + + spin_lock_irqsave(&he_dev->global_lock, flags); + tsr0 = he_readl_tsr0(he_dev, cid); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + if 
(TSR0_CONN_STATE(tsr0) != 0) { + hprintk("cid 0x%x not idle (tsr0 = 0x%x)\n", cid, tsr0); + err = -EBUSY; + goto open_failed; + } + + switch (vcc->qos.txtp.traffic_class) { + case ATM_UBR: + /* 2.3.3.1 open connection ubr */ + + tsr0 = TSR0_UBR | TSR0_GROUP(0) | tsr0_aal | + TSR0_USE_WMIN | TSR0_UPDATE_GER; + break; + + case ATM_CBR: + /* 2.3.3.2 open connection cbr */ + + /* 8.2.3 cbr scheduler wrap problem -- limit to 90% total link rate */ + if ((he_dev->total_bw + pcr_goal) + > (he_dev->atm_dev->link_rate * 9 / 10)) + { + err = -EBUSY; + goto open_failed; + } + + spin_lock_irqsave(&he_dev->global_lock, flags); /* also protects he_dev->cs_stper[] */ + + /* find an unused cs_stper register */ + for (reg = 0; reg < HE_NUM_CS_STPER; ++reg) + if (he_dev->cs_stper[reg].inuse == 0 || + he_dev->cs_stper[reg].pcr == pcr_goal) + break; + + if (reg == HE_NUM_CS_STPER) { + err = -EBUSY; + spin_unlock_irqrestore(&he_dev->global_lock, flags); + goto open_failed; + } + + he_dev->total_bw += pcr_goal; + + he_vcc->rc_index = reg; + ++he_dev->cs_stper[reg].inuse; + he_dev->cs_stper[reg].pcr = pcr_goal; + + clock = he_is622(he_dev) ? 
66667000 : 50000000; + period = clock / pcr_goal; + + HPRINTK("rc_index = %d period = %d\n", + reg, period); + + he_writel_mbox(he_dev, rate_to_atmf(period/2), + CS_STPER0 + reg); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + tsr0 = TSR0_CBR | TSR0_GROUP(0) | tsr0_aal | + TSR0_RC_INDEX(reg); + + break; + default: + err = -EINVAL; + goto open_failed; + } + + spin_lock_irqsave(&he_dev->global_lock, flags); + + he_writel_tsr0(he_dev, tsr0, cid); + he_writel_tsr4(he_dev, tsr4 | 1, cid); + he_writel_tsr1(he_dev, TSR1_MCR(rate_to_atmf(0)) | + TSR1_PCR(rate_to_atmf(pcr_goal)), cid); + he_writel_tsr2(he_dev, TSR2_ACR(rate_to_atmf(pcr_goal)), cid); + he_writel_tsr9(he_dev, TSR9_OPEN_CONN, cid); + + he_writel_tsr3(he_dev, 0x0, cid); + he_writel_tsr5(he_dev, 0x0, cid); + he_writel_tsr6(he_dev, 0x0, cid); + he_writel_tsr7(he_dev, 0x0, cid); + he_writel_tsr8(he_dev, 0x0, cid); + he_writel_tsr10(he_dev, 0x0, cid); + he_writel_tsr11(he_dev, 0x0, cid); + he_writel_tsr12(he_dev, 0x0, cid); + he_writel_tsr13(he_dev, 0x0, cid); + he_writel_tsr14(he_dev, 0x0, cid); + (void) he_readl_tsr0(he_dev, cid); /* flush posted writes */ + spin_unlock_irqrestore(&he_dev->global_lock, flags); + } + + if (vcc->qos.rxtp.traffic_class != ATM_NONE) { + unsigned aal; + + HPRINTK("open rx cid 0x%x (rx_waitq %p)\n", cid, + &HE_VCC(vcc)->rx_waitq); + + switch (vcc->qos.aal) { + case ATM_AAL5: + aal = RSR0_AAL5; + break; + case ATM_AAL0: + aal = RSR0_RAWCELL; + break; + default: + err = -EINVAL; + goto open_failed; + } + + spin_lock_irqsave(&he_dev->global_lock, flags); + + rsr0 = he_readl_rsr0(he_dev, cid); + if (rsr0 & RSR0_OPEN_CONN) { + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + hprintk("cid 0x%x not idle (rsr0 = 0x%x)\n", cid, rsr0); + err = -EBUSY; + goto open_failed; + } + +#ifdef USE_RBPS + rsr1 = RSR1_GROUP(0); + rsr4 = RSR4_GROUP(0); +#else /* !USE_RBPS */ + rsr1 = RSR1_GROUP(0)|RSR1_RBPL_ONLY; + rsr4 = RSR4_GROUP(0)|RSR4_RBPL_ONLY; +#endif /* USE_RBPS */ + rsr0 = 
vcc->qos.rxtp.traffic_class == ATM_UBR ? + (RSR0_EPD_ENABLE|RSR0_PPD_ENABLE) : 0; + +#ifdef USE_CHECKSUM_HW + if (vpi == 0 && vci >= ATM_NOT_RSV_VCI) + rsr0 |= RSR0_TCP_CKSUM; +#endif + + he_writel_rsr4(he_dev, rsr4, cid); + he_writel_rsr1(he_dev, rsr1, cid); + /* 5.1.11 last parameter initialized should be + the open/closed indication in rsr0 */ + he_writel_rsr0(he_dev, + rsr0 | RSR0_START_PDU | RSR0_OPEN_CONN | aal, cid); + (void) he_readl_rsr0(he_dev, cid); /* flush posted writes */ + + spin_unlock_irqrestore(&he_dev->global_lock, flags); + } + +open_failed: + + if (err) { + if (he_vcc) + kfree(he_vcc); + clear_bit(ATM_VF_ADDR, &vcc->flags); + } + else + set_bit(ATM_VF_READY, &vcc->flags); + + return err; +} + +static void +he_close(struct atm_vcc *vcc) +{ + unsigned long flags; + DECLARE_WAITQUEUE(wait, current); + struct he_dev *he_dev = HE_DEV(vcc->dev); + struct he_tpd *tpd; + unsigned cid; + struct he_vcc *he_vcc = HE_VCC(vcc); +#define MAX_RETRY 30 + int retry = 0, sleep = 1, tx_inuse; + + HPRINTK("close vcc %p %d.%d\n", vcc, vcc->vpi, vcc->vci); + + clear_bit(ATM_VF_READY, &vcc->flags); + cid = he_mkcid(he_dev, vcc->vpi, vcc->vci); + + if (vcc->qos.rxtp.traffic_class != ATM_NONE) { + int timeout; + + HPRINTK("close rx cid 0x%x\n", cid); + + /* 2.7.2.2 close receive operation */ + + /* wait for previous close (if any) to finish */ + + spin_lock_irqsave(&he_dev->global_lock, flags); + while (he_readl(he_dev, RCC_STAT) & RCC_BUSY) { + HPRINTK("close cid 0x%x RCC_BUSY\n", cid); + udelay(250); + } + + set_current_state(TASK_UNINTERRUPTIBLE); + add_wait_queue(&he_vcc->rx_waitq, &wait); + + he_writel_rsr0(he_dev, RSR0_CLOSE_CONN, cid); + (void) he_readl_rsr0(he_dev, cid); /* flush posted writes */ + he_writel_mbox(he_dev, cid, RXCON_CLOSE); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + timeout = schedule_timeout(30*HZ); + + remove_wait_queue(&he_vcc->rx_waitq, &wait); + set_current_state(TASK_RUNNING); + + if (timeout == 0) + hprintk("close rx 
timeout cid 0x%x\n", cid); + + HPRINTK("close rx cid 0x%x complete\n", cid); + + } + + if (vcc->qos.txtp.traffic_class != ATM_NONE) { + volatile unsigned tsr4, tsr0; + int timeout; + + HPRINTK("close tx cid 0x%x\n", cid); + + /* 2.1.2 + * + * ... the host must first stop queueing packets to the TPDRQ + * on the connection to be closed, then wait for all outstanding + * packets to be transmitted and their buffers returned to the + * TBRQ. When the last packet on the connection arrives in the + * TBRQ, the host issues the close command to the adapter. + */ + + while (((tx_inuse = atomic_read(&sk_atm(vcc)->sk_wmem_alloc)) > 0) && + (retry < MAX_RETRY)) { + msleep(sleep); + if (sleep < 250) + sleep = sleep * 2; + + ++retry; + } + + if (tx_inuse) + hprintk("close tx cid 0x%x tx_inuse = %d\n", cid, tx_inuse); + + /* 2.3.1.1 generic close operations with flush */ + + spin_lock_irqsave(&he_dev->global_lock, flags); + he_writel_tsr4_upper(he_dev, TSR4_FLUSH_CONN, cid); + /* also clears TSR4_SESSION_ENDED */ + + switch (vcc->qos.txtp.traffic_class) { + case ATM_UBR: + he_writel_tsr1(he_dev, + TSR1_MCR(rate_to_atmf(200000)) + | TSR1_PCR(0), cid); + break; + case ATM_CBR: + he_writel_tsr14_upper(he_dev, TSR14_DELETE, cid); + break; + } + (void) he_readl_tsr4(he_dev, cid); /* flush posted writes */ + + tpd = __alloc_tpd(he_dev); + if (tpd == NULL) { + hprintk("close tx he_alloc_tpd failed cid 0x%x\n", cid); + goto close_tx_incomplete; + } + tpd->status |= TPD_EOS | TPD_INT; + tpd->skb = NULL; + tpd->vcc = vcc; + wmb(); + + set_current_state(TASK_UNINTERRUPTIBLE); + add_wait_queue(&he_vcc->tx_waitq, &wait); + __enqueue_tpd(he_dev, tpd, cid); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + timeout = schedule_timeout(30*HZ); + + remove_wait_queue(&he_vcc->tx_waitq, &wait); + set_current_state(TASK_RUNNING); + + spin_lock_irqsave(&he_dev->global_lock, flags); + + if (timeout == 0) { + hprintk("close tx timeout cid 0x%x\n", cid); + goto close_tx_incomplete; + } + + while 
(!((tsr4 = he_readl_tsr4(he_dev, cid)) & TSR4_SESSION_ENDED)) { + HPRINTK("close tx cid 0x%x !TSR4_SESSION_ENDED (tsr4 = 0x%x)\n", cid, tsr4); + udelay(250); + } + + while (TSR0_CONN_STATE(tsr0 = he_readl_tsr0(he_dev, cid)) != 0) { + HPRINTK("close tx cid 0x%x TSR0_CONN_STATE != 0 (tsr0 = 0x%x)\n", cid, tsr0); + udelay(250); + } + +close_tx_incomplete: + + if (vcc->qos.txtp.traffic_class == ATM_CBR) { + int reg = he_vcc->rc_index; + + HPRINTK("cs_stper reg = %d\n", reg); + + if (he_dev->cs_stper[reg].inuse == 0) + hprintk("cs_stper[%d].inuse = 0!\n", reg); + else + --he_dev->cs_stper[reg].inuse; + + he_dev->total_bw -= he_dev->cs_stper[reg].pcr; + } + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + HPRINTK("close tx cid 0x%x complete\n", cid); + } + + kfree(he_vcc); + + clear_bit(ATM_VF_ADDR, &vcc->flags); +} + +static int +he_send(struct atm_vcc *vcc, struct sk_buff *skb) +{ + unsigned long flags; + struct he_dev *he_dev = HE_DEV(vcc->dev); + unsigned cid = he_mkcid(he_dev, vcc->vpi, vcc->vci); + struct he_tpd *tpd; +#ifdef USE_SCATTERGATHER + int i, slot = 0; +#endif + +#define HE_TPD_BUFSIZE 0xffff + + HPRINTK("send %d.%d\n", vcc->vpi, vcc->vci); + + if ((skb->len > HE_TPD_BUFSIZE) || + ((vcc->qos.aal == ATM_AAL0) && (skb->len != ATM_AAL0_SDU))) { + hprintk("buffer too large (or small) -- %d bytes\n", skb->len ); + if (vcc->pop) + vcc->pop(vcc, skb); + else + dev_kfree_skb_any(skb); + atomic_inc(&vcc->stats->tx_err); + return -EINVAL; + } + +#ifndef USE_SCATTERGATHER + if (skb_shinfo(skb)->nr_frags) { + hprintk("no scatter/gather support\n"); + if (vcc->pop) + vcc->pop(vcc, skb); + else + dev_kfree_skb_any(skb); + atomic_inc(&vcc->stats->tx_err); + return -EINVAL; + } +#endif + spin_lock_irqsave(&he_dev->global_lock, flags); + + tpd = __alloc_tpd(he_dev); + if (tpd == NULL) { + if (vcc->pop) + vcc->pop(vcc, skb); + else + dev_kfree_skb_any(skb); + atomic_inc(&vcc->stats->tx_err); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + return -ENOMEM; + 
} + + if (vcc->qos.aal == ATM_AAL5) + tpd->status |= TPD_CELLTYPE(TPD_USERCELL); + else { + char *pti_clp = (void *) (skb->data + 3); + int clp, pti; + + pti = (*pti_clp & ATM_HDR_PTI_MASK) >> ATM_HDR_PTI_SHIFT; + clp = (*pti_clp & ATM_HDR_CLP); + tpd->status |= TPD_CELLTYPE(pti); + if (clp) + tpd->status |= TPD_CLP; + + skb_pull(skb, ATM_AAL0_SDU - ATM_CELL_PAYLOAD); + } + +#ifdef USE_SCATTERGATHER + tpd->iovec[slot].addr = pci_map_single(he_dev->pci_dev, skb->data, + skb->len - skb->data_len, PCI_DMA_TODEVICE); + tpd->iovec[slot].len = skb->len - skb->data_len; + ++slot; + + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + + if (slot == TPD_MAXIOV) { /* queue tpd; start new tpd */ + tpd->vcc = vcc; + tpd->skb = NULL; /* not the last fragment + so dont ->push() yet */ + wmb(); + + __enqueue_tpd(he_dev, tpd, cid); + tpd = __alloc_tpd(he_dev); + if (tpd == NULL) { + if (vcc->pop) + vcc->pop(vcc, skb); + else + dev_kfree_skb_any(skb); + atomic_inc(&vcc->stats->tx_err); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + return -ENOMEM; + } + tpd->status |= TPD_USERCELL; + slot = 0; + } + + tpd->iovec[slot].addr = pci_map_single(he_dev->pci_dev, + (void *) page_address(frag->page) + frag->page_offset, + frag->size, PCI_DMA_TODEVICE); + tpd->iovec[slot].len = frag->size; + ++slot; + + } + + tpd->iovec[slot - 1].len |= TPD_LST; +#else + tpd->address0 = pci_map_single(he_dev->pci_dev, skb->data, skb->len, PCI_DMA_TODEVICE); + tpd->length0 = skb->len | TPD_LST; +#endif + tpd->status |= TPD_INT; + + tpd->vcc = vcc; + tpd->skb = skb; + wmb(); + ATM_SKB(skb)->vcc = vcc; + + __enqueue_tpd(he_dev, tpd, cid); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + atomic_inc(&vcc->stats->tx); + + return 0; +} + +static int +he_ioctl(struct atm_dev *atm_dev, unsigned int cmd, void __user *arg) +{ + unsigned long flags; + struct he_dev *he_dev = HE_DEV(atm_dev); + struct he_ioctl_reg reg; + int err = 0; + + switch (cmd) 
{ + case HE_GET_REG: + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + if (copy_from_user(&reg, arg, + sizeof(struct he_ioctl_reg))) + return -EFAULT; + + spin_lock_irqsave(&he_dev->global_lock, flags); + switch (reg.type) { + case HE_REGTYPE_PCI: + reg.val = he_readl(he_dev, reg.addr); + break; + case HE_REGTYPE_RCM: + reg.val = + he_readl_rcm(he_dev, reg.addr); + break; + case HE_REGTYPE_TCM: + reg.val = + he_readl_tcm(he_dev, reg.addr); + break; + case HE_REGTYPE_MBOX: + reg.val = + he_readl_mbox(he_dev, reg.addr); + break; + default: + err = -EINVAL; + break; + } + spin_unlock_irqrestore(&he_dev->global_lock, flags); + if (err == 0) + if (copy_to_user(arg, &reg, + sizeof(struct he_ioctl_reg))) + return -EFAULT; + break; + default: +#ifdef CONFIG_ATM_HE_USE_SUNI + if (atm_dev->phy && atm_dev->phy->ioctl) + err = atm_dev->phy->ioctl(atm_dev, cmd, arg); +#else /* CONFIG_ATM_HE_USE_SUNI */ + err = -EINVAL; +#endif /* CONFIG_ATM_HE_USE_SUNI */ + break; + } + + return err; +} + +static void +he_phy_put(struct atm_dev *atm_dev, unsigned char val, unsigned long addr) +{ + unsigned long flags; + struct he_dev *he_dev = HE_DEV(atm_dev); + + HPRINTK("phy_put(val 0x%x, addr 0x%lx)\n", val, addr); + + spin_lock_irqsave(&he_dev->global_lock, flags); + he_writel(he_dev, val, FRAMER + (addr*4)); + (void) he_readl(he_dev, FRAMER + (addr*4)); /* flush posted writes */ + spin_unlock_irqrestore(&he_dev->global_lock, flags); +} + + +static unsigned char +he_phy_get(struct atm_dev *atm_dev, unsigned long addr) +{ + unsigned long flags; + struct he_dev *he_dev = HE_DEV(atm_dev); + unsigned reg; + + spin_lock_irqsave(&he_dev->global_lock, flags); + reg = he_readl(he_dev, FRAMER + (addr*4)); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + HPRINTK("phy_get(addr 0x%lx) =0x%x\n", addr, reg); + return reg; +} + +static int +he_proc_read(struct atm_dev *dev, loff_t *pos, char *page) +{ + unsigned long flags; + struct he_dev *he_dev = HE_DEV(dev); + int left, i; +#ifdef notdef + 
struct he_rbrq *rbrq_tail; + struct he_tpdrq *tpdrq_head; + int rbpl_head, rbpl_tail; +#endif + static long mcc = 0, oec = 0, dcc = 0, cec = 0; + + + left = *pos; + if (!left--) + return sprintf(page, "%s\n", version); + + if (!left--) + return sprintf(page, "%s%s\n\n", + he_dev->prod_id, he_dev->media & 0x40 ? "SM" : "MM"); + + if (!left--) + return sprintf(page, "Mismatched Cells VPI/VCI Not Open Dropped Cells RCM Dropped Cells\n"); + + spin_lock_irqsave(&he_dev->global_lock, flags); + mcc += he_readl(he_dev, MCC); + oec += he_readl(he_dev, OEC); + dcc += he_readl(he_dev, DCC); + cec += he_readl(he_dev, CEC); + spin_unlock_irqrestore(&he_dev->global_lock, flags); + + if (!left--) + return sprintf(page, "%16ld %16ld %13ld %17ld\n\n", + mcc, oec, dcc, cec); + + if (!left--) + return sprintf(page, "irq_size = %d inuse = ? peak = %d\n", + CONFIG_IRQ_SIZE, he_dev->irq_peak); + + if (!left--) + return sprintf(page, "tpdrq_size = %d inuse = ?\n", + CONFIG_TPDRQ_SIZE); + + if (!left--) + return sprintf(page, "rbrq_size = %d inuse = ? 
peak = %d\n", + CONFIG_RBRQ_SIZE, he_dev->rbrq_peak); + + if (!left--) + return sprintf(page, "tbrq_size = %d peak = %d\n", + CONFIG_TBRQ_SIZE, he_dev->tbrq_peak); + + +#ifdef notdef + rbpl_head = RBPL_MASK(he_readl(he_dev, G0_RBPL_S)); + rbpl_tail = RBPL_MASK(he_readl(he_dev, G0_RBPL_T)); + + inuse = rbpl_head - rbpl_tail; + if (inuse < 0) + inuse += CONFIG_RBPL_SIZE * sizeof(struct he_rbp); + inuse /= sizeof(struct he_rbp); + + if (!left--) + return sprintf(page, "rbpl_size = %d inuse = %d\n\n", + CONFIG_RBPL_SIZE, inuse); +#endif + + if (!left--) + return sprintf(page, "rate controller periods (cbr)\n pcr #vc\n"); + + for (i = 0; i < HE_NUM_CS_STPER; ++i) + if (!left--) + return sprintf(page, "cs_stper%-2d %8ld %3d\n", i, + he_dev->cs_stper[i].pcr, + he_dev->cs_stper[i].inuse); + + if (!left--) + return sprintf(page, "total bw (cbr): %d (limit %d)\n", + he_dev->total_bw, he_dev->atm_dev->link_rate * 9 / 10); + + return 0; +} + +/* eeprom routines -- see 4.7 */ + +u8 +read_prom_byte(struct he_dev *he_dev, int addr) +{ + u32 val = 0, tmp_read = 0; + int i, j = 0; + u8 byte_read = 0; + + val = readl(he_dev->membase + HOST_CNTL); + val &= 0xFFFFE0FF; + + /* Turn on write enable */ + val |= 0x800; + he_writel(he_dev, val, HOST_CNTL); + + /* Send READ instruction */ + for (i = 0; i < sizeof(readtab)/sizeof(readtab[0]); i++) { + he_writel(he_dev, val | readtab[i], HOST_CNTL); + udelay(EEPROM_DELAY); + } + + /* Next, we need to send the byte address to read from */ + for (i = 7; i >= 0; i--) { + he_writel(he_dev, val | clocktab[j++] | (((addr >> i) & 1) << 9), HOST_CNTL); + udelay(EEPROM_DELAY); + he_writel(he_dev, val | clocktab[j++] | (((addr >> i) & 1) << 9), HOST_CNTL); + udelay(EEPROM_DELAY); + } + + j = 0; + + val &= 0xFFFFF7FF; /* Turn off write enable */ + he_writel(he_dev, val, HOST_CNTL); + + /* Now, we can read data from the EEPROM by clocking it in */ + for (i = 7; i >= 0; i--) { + he_writel(he_dev, val | clocktab[j++], HOST_CNTL); + udelay(EEPROM_DELAY); + 
tmp_read = he_readl(he_dev, HOST_CNTL); + byte_read |= (unsigned char) + ((tmp_read & ID_DOUT) >> ID_DOFFSET << i); + he_writel(he_dev, val | clocktab[j++], HOST_CNTL); + udelay(EEPROM_DELAY); + } + + he_writel(he_dev, val | ID_CS, HOST_CNTL); + udelay(EEPROM_DELAY); + + return byte_read; +} + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("chas williams <chas@cmf.nrl.navy.mil>"); +MODULE_DESCRIPTION("ForeRunnerHE ATM Adapter driver"); +module_param(disable64, bool, 0); +MODULE_PARM_DESC(disable64, "disable 64-bit pci bus transfers"); +module_param(nvpibits, short, 0); +MODULE_PARM_DESC(nvpibits, "numbers of bits for vpi (default 0)"); +module_param(nvcibits, short, 0); +MODULE_PARM_DESC(nvcibits, "numbers of bits for vci (default 12)"); +module_param(rx_skb_reserve, short, 0); +MODULE_PARM_DESC(rx_skb_reserve, "padding for receive skb (default 16)"); +module_param(irq_coalesce, bool, 0); +MODULE_PARM_DESC(irq_coalesce, "use interrupt coalescing (default 1)"); +module_param(sdh, bool, 0); +MODULE_PARM_DESC(sdh, "use SDH framing (default 0)"); + +static struct pci_device_id he_pci_tbl[] = { + { PCI_VENDOR_ID_FORE, PCI_DEVICE_ID_FORE_HE, PCI_ANY_ID, PCI_ANY_ID, + 0, 0, 0 }, + { 0, } +}; + +MODULE_DEVICE_TABLE(pci, he_pci_tbl); + +static struct pci_driver he_driver = { + .name = "he", + .probe = he_init_one, + .remove = __devexit_p(he_remove_one), + .id_table = he_pci_tbl, +}; + +static int __init he_init(void) +{ + return pci_register_driver(&he_driver); +} + +static void __exit he_cleanup(void) +{ + pci_unregister_driver(&he_driver); +} + +module_init(he_init); +module_exit(he_cleanup); diff --git a/drivers/atm/he.h b/drivers/atm/he.h new file mode 100644 index 000000000000..1a903859343a --- /dev/null +++ b/drivers/atm/he.h @@ -0,0 +1,895 @@ +/* $Id: he.h,v 1.4 2003/05/06 22:48:00 chas Exp $ */ + +/* + + he.h + + ForeRunnerHE ATM Adapter driver for ATM on Linux + Copyright (C) 1999-2001 Naval Research Laboratory + + This library is free software; you can redistribute it 
and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with this library; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +*/ + +/* + + he.h + + ForeRunnerHE ATM Adapter driver for ATM on Linux + Copyright (C) 1999-2000 Naval Research Laboratory + + Permission to use, copy, modify and distribute this software and its + documentation is hereby granted, provided that both the copyright + notice and this permission notice appear in all copies of the software, + derivative works or modified versions, and any portions thereof, and + that both notices appear in supporting documentation. + + NRL ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS" CONDITION AND + DISCLAIMS ANY LIABILITY OF ANY KIND FOR ANY DAMAGES WHATSOEVER + RESULTING FROM THE USE OF THIS SOFTWARE. 
+ + */ + +#ifndef _HE_H_ +#define _HE_H_ + +#define DEV_LABEL "he" + +#define CONFIG_DEFAULT_VCIBITS 12 +#define CONFIG_DEFAULT_VPIBITS 0 + +#define CONFIG_IRQ_SIZE 128 +#define CONFIG_IRQ_THRESH (CONFIG_IRQ_SIZE/2) + +#define CONFIG_NUMTPDS 256 + +#define CONFIG_TPDRQ_SIZE 512 +#define TPDRQ_MASK(x) (((unsigned long)(x))&((CONFIG_TPDRQ_SIZE<<3)-1)) + +#define CONFIG_RBRQ_SIZE 512 +#define CONFIG_RBRQ_THRESH 400 +#define RBRQ_MASK(x) (((unsigned long)(x))&((CONFIG_RBRQ_SIZE<<3)-1)) + +#define CONFIG_TBRQ_SIZE 512 +#define CONFIG_TBRQ_THRESH 400 +#define TBRQ_MASK(x) (((unsigned long)(x))&((CONFIG_TBRQ_SIZE<<2)-1)) + +#define CONFIG_RBPL_SIZE 512 +#define CONFIG_RBPL_THRESH 64 +#define CONFIG_RBPL_BUFSIZE 4096 +#define RBPL_MASK(x) (((unsigned long)(x))&((CONFIG_RBPL_SIZE<<3)-1)) + +#define CONFIG_RBPS_SIZE 1024 +#define CONFIG_RBPS_THRESH 64 +#define CONFIG_RBPS_BUFSIZE 128 +#define RBPS_MASK(x) (((unsigned long)(x))&((CONFIG_RBPS_SIZE<<3)-1)) + +/* 5.1.3 initialize connection memory */ + +#define CONFIG_RSRA 0x00000 +#define CONFIG_RCMLBM 0x08000 +#define CONFIG_RCMABR 0x0d800 +#define CONFIG_RSRB 0x0e000 + +#define CONFIG_TSRA 0x00000 +#define CONFIG_TSRB 0x08000 +#define CONFIG_TSRC 0x0c000 +#define CONFIG_TSRD 0x0e000 +#define CONFIG_TMABR 0x0f000 +#define CONFIG_TPDBA 0x10000 + +#define HE_MAXCIDBITS 12 + +/* 2.9.3.3 interrupt encodings */ + +struct he_irq { + volatile u32 isw; +}; + +#define IRQ_ALIGNMENT 0x1000 + +#define NEXT_ENTRY(base, tail, mask) \ + (((unsigned long)base)|(((unsigned long)(tail+1))&mask)) + +#define ITYPE_INVALID 0xffffffff +#define ITYPE_TBRQ_THRESH (0<<3) +#define ITYPE_TPD_COMPLETE (1<<3) +#define ITYPE_RBPS_THRESH (2<<3) +#define ITYPE_RBPL_THRESH (3<<3) +#define ITYPE_RBRQ_THRESH (4<<3) +#define ITYPE_RBRQ_TIMER (5<<3) +#define ITYPE_PHY (6<<3) +#define ITYPE_OTHER 0x80 +#define ITYPE_PARITY 0x81 +#define ITYPE_ABORT 0x82 + +#define ITYPE_GROUP(x) (x & 0x7) +#define ITYPE_TYPE(x) (x & 0xf8) + +#define HE_NUM_GROUPS 8 + +/* 2.1.4 
transmit packet descriptor */ + +struct he_tpd { + + /* read by the adapter */ + + volatile u32 status; + volatile u32 reserved; + +#define TPD_MAXIOV 3 + struct { + u32 addr, len; + } iovec[TPD_MAXIOV]; + +#define address0 iovec[0].addr +#define length0 iovec[0].len + + /* linux-atm extensions */ + + struct sk_buff *skb; + struct atm_vcc *vcc; + +#ifdef USE_TPD_POOL + struct list_head entry; +#else + u32 inuse; + char padding[32 - sizeof(u32) - (2*sizeof(void*))]; +#endif +}; + +#define TPD_ALIGNMENT 64 +#define TPD_LEN_MASK 0xffff + +#define TPD_ADDR_SHIFT 6 +#define TPD_MASK 0xffffffc0 +#define TPD_ADDR(x) ((x) & TPD_MASK) +#define TPD_INDEX(x) (TPD_ADDR(x) >> TPD_ADDR_SHIFT) + + +/* table 2.3 transmit buffer return elements */ + +struct he_tbrq { + volatile u32 tbre; +}; + +#define TBRQ_ALIGNMENT CONFIG_TBRQ_SIZE + +#define TBRQ_TPD(tbrq) ((tbrq)->tbre & 0xffffffc0) +#define TBRQ_EOS(tbrq) ((tbrq)->tbre & (1<<3)) +#define TBRQ_MULTIPLE(tbrq) ((tbrq)->tbre & (1)) + +/* table 2.21 receive buffer return queue element field organization */ + +struct he_rbrq { + volatile u32 addr; + volatile u32 cidlen; +}; + +#define RBRQ_ALIGNMENT CONFIG_RBRQ_SIZE + +#define RBRQ_ADDR(rbrq) ((rbrq)->addr & 0xffffffc0) +#define RBRQ_CRC_ERR(rbrq) ((rbrq)->addr & (1<<5)) +#define RBRQ_LEN_ERR(rbrq) ((rbrq)->addr & (1<<4)) +#define RBRQ_END_PDU(rbrq) ((rbrq)->addr & (1<<3)) +#define RBRQ_AAL5_PROT(rbrq) ((rbrq)->addr & (1<<2)) +#define RBRQ_CON_CLOSED(rbrq) ((rbrq)->addr & (1<<1)) +#define RBRQ_HBUF_ERR(rbrq) ((rbrq)->addr & 1) +#define RBRQ_CID(rbrq) (((rbrq)->cidlen >> 16) & 0x1fff) +#define RBRQ_BUFLEN(rbrq) ((rbrq)->cidlen & 0xffff) + +/* figure 2.3 transmit packet descriptor ready queue */ + +struct he_tpdrq { + volatile u32 tpd; + volatile u32 cid; +}; + +#define TPDRQ_ALIGNMENT CONFIG_TPDRQ_SIZE + +/* table 2.30 host status page detail */ + +#define HSP_ALIGNMENT 0x400 /* must align on 1k boundary */ + +struct he_hsp { + struct he_hsp_entry { + volatile u32 tbrq_tail; + 
volatile u32 reserved1[15]; + volatile u32 rbrq_tail; + volatile u32 reserved2[15]; + } group[HE_NUM_GROUPS]; +}; + +/* figure 2.9 receive buffer pools */ + +struct he_rbp { + volatile u32 phys; + volatile u32 status; +}; + +/* NOTE: it is suggested that virt be the virtual address of the host + buffer. on a 64-bit machine, this would not work. Instead, we + store the real virtual address in another list, and store an index + (and buffer status) in the virt member. +*/ + +#define RBP_INDEX_OFF 6 +#define RBP_INDEX(x) (((long)(x) >> RBP_INDEX_OFF) & 0xffff) +#define RBP_LOANED 0x80000000 +#define RBP_SMALLBUF 0x40000000 + +struct he_virt { + void *virt; +}; + +#define RBPL_ALIGNMENT CONFIG_RBPL_SIZE +#define RBPS_ALIGNMENT CONFIG_RBPS_SIZE + +#ifdef notyet +struct he_group { + u32 rpbs_size, rpbs_qsize; + struct he_rbp rbps_ba; + + u32 rpbl_size, rpbl_qsize; + struct he_rpb_entry *rbpl_ba; +}; +#endif + +#define HE_LOOKUP_VCC(dev, cid) ((dev)->he_vcc_table[(cid)].vcc) + +struct he_vcc_table +{ + struct atm_vcc *vcc; +}; + +struct he_cs_stper +{ + long pcr; + int inuse; +}; + +#define HE_NUM_CS_STPER 16 + +struct he_dev { + unsigned int number; + unsigned int irq; + void __iomem *membase; + + char prod_id[30]; + char mac_addr[6]; + int media; /* + * 0x26 = HE155 MM + * 0x27 = HE622 MM + * 0x46 = HE155 SM + * 0x47 = HE622 SM + */ + + + unsigned int vcibits, vpibits; + unsigned int cells_per_row; + unsigned int bytes_per_row; + unsigned int cells_per_lbuf; + unsigned int r0_numrows, r0_startrow, r0_numbuffs; + unsigned int r1_numrows, r1_startrow, r1_numbuffs; + unsigned int tx_numrows, tx_startrow, tx_numbuffs; + unsigned int buffer_limit; + + struct he_vcc_table *he_vcc_table; + +#ifdef notyet + struct he_group group[HE_NUM_GROUPS]; +#endif + struct he_cs_stper cs_stper[HE_NUM_CS_STPER]; + unsigned total_bw; + + dma_addr_t irq_phys; + struct he_irq *irq_base, *irq_head, *irq_tail; + volatile unsigned *irq_tailoffset; + int irq_peak; + +#ifdef USE_TASKLET + struct 
tasklet_struct tasklet; +#endif +#ifdef USE_TPD_POOL + struct pci_pool *tpd_pool; + struct list_head outstanding_tpds; +#else + struct he_tpd *tpd_head, *tpd_base, *tpd_end; + dma_addr_t tpd_base_phys; +#endif + + dma_addr_t tpdrq_phys; + struct he_tpdrq *tpdrq_base, *tpdrq_tail, *tpdrq_head; + + spinlock_t global_lock; /* 8.1.5 pci transaction ordering + error problem */ + dma_addr_t rbrq_phys; + struct he_rbrq *rbrq_base, *rbrq_head; + int rbrq_peak; + +#ifdef USE_RBPL_POOL + struct pci_pool *rbpl_pool; +#else + void *rbpl_pages; + dma_addr_t rbpl_pages_phys; +#endif + dma_addr_t rbpl_phys; + struct he_rbp *rbpl_base, *rbpl_tail; + struct he_virt *rbpl_virt; + int rbpl_peak; + +#ifdef USE_RBPS +#ifdef USE_RBPS_POOL + struct pci_pool *rbps_pool; +#else + void *rbps_pages; + dma_addr_t rbps_pages_phys; +#endif +#endif + dma_addr_t rbps_phys; + struct he_rbp *rbps_base, *rbps_tail; + struct he_virt *rbps_virt; + int rbps_peak; + + dma_addr_t tbrq_phys; + struct he_tbrq *tbrq_base, *tbrq_head; + int tbrq_peak; + + dma_addr_t hsp_phys; + struct he_hsp *hsp; + + struct pci_dev *pci_dev; + struct atm_dev *atm_dev; + struct he_dev *next; +}; + +struct he_iovec +{ + u32 iov_base; + u32 iov_len; +}; + +#define HE_MAXIOV 20 + +struct he_vcc +{ + struct he_iovec iov_head[HE_MAXIOV]; + struct he_iovec *iov_tail; + int pdu_len; + + int rc_index; + + wait_queue_head_t rx_waitq; + wait_queue_head_t tx_waitq; +}; + +#define HE_VCC(vcc) ((struct he_vcc *)(vcc->dev_data)) + +#define PCI_VENDOR_ID_FORE 0x1127 +#define PCI_DEVICE_ID_FORE_HE 0x400 + +#define HE_DMA_MASK 0xffffffff + +#define GEN_CNTL_0 0x40 +#define INT_PROC_ENBL (1<<25) +#define SLAVE_ENDIAN_MODE (1<<16) +#define MRL_ENB (1<<5) +#define MRM_ENB (1<<4) +#define INIT_ENB (1<<2) +#define IGNORE_TIMEOUT (1<<1) +#define ENBL_64 (1<<0) + +#define MIN_PCI_LATENCY 32 /* errata 8.1.3 */ + +#define HE_DEV(dev) ((struct he_dev *) (dev)->dev_data) + +#define he_is622(dev) ((dev)->media & 0x1) + +#define HE_REGMAP_SIZE 0x100000 + 
+#define RESET_CNTL 0x80000 +#define BOARD_RST_STATUS (1<<6) + +#define HOST_CNTL 0x80004 +#define PCI_BUS_SIZE64 (1<<27) +#define DESC_RD_STATIC_64 (1<<26) +#define DATA_RD_STATIC_64 (1<<25) +#define DATA_WR_STATIC_64 (1<<24) +#define ID_CS (1<<12) +#define ID_WREN (1<<11) +#define ID_DOUT (1<<10) +#define ID_DOFFSET 10 +#define ID_DIN (1<<9) +#define ID_CLOCK (1<<8) +#define QUICK_RD_RETRY (1<<7) +#define QUICK_WR_RETRY (1<<6) +#define OUTFF_ENB (1<<5) +#define CMDFF_ENB (1<<4) +#define PERR_INT_ENB (1<<2) +#define IGNORE_INTR (1<<0) + +#define LB_SWAP 0x80008 +#define SWAP_RNUM_MAX(x) (x<<27) +#define DATA_WR_SWAP (1<<20) +#define DESC_RD_SWAP (1<<19) +#define DATA_RD_SWAP (1<<18) +#define INTR_SWAP (1<<17) +#define DESC_WR_SWAP (1<<16) +#define SDRAM_INIT (1<<15) +#define BIG_ENDIAN_HOST (1<<14) +#define XFER_SIZE (1<<7) + +#define LB_MEM_ADDR 0x8000c +#define LB_MEM_DATA 0x80010 + +#define LB_MEM_ACCESS 0x80014 +#define LB_MEM_HNDSHK (1<<30) +#define LM_MEM_WRITE (0x7) +#define LM_MEM_READ (0x3) + +#define SDRAM_CTL 0x80018 +#define LB_64_ENB (1<<3) +#define LB_TWR (1<<2) +#define LB_TRP (1<<1) +#define LB_TRAS (1<<0) + +#define INT_FIFO 0x8001c +#define INT_MASK_D (1<<15) +#define INT_MASK_C (1<<14) +#define INT_MASK_B (1<<13) +#define INT_MASK_A (1<<12) +#define INT_CLEAR_D (1<<11) +#define INT_CLEAR_C (1<<10) +#define INT_CLEAR_B (1<<9) +#define INT_CLEAR_A (1<<8) + +#define ABORT_ADDR 0x80020 + +#define IRQ0_BASE 0x80080 +#define IRQ_BASE(x) (x<<12) +#define IRQ_MASK ((CONFIG_IRQ_SIZE<<2)-1) /* was 0x3ff */ +#define IRQ_TAIL(x) (((unsigned long)(x)) & IRQ_MASK) +#define IRQ0_HEAD 0x80084 +#define IRQ_SIZE(x) (x<<22) +#define IRQ_THRESH(x) (x<<12) +#define IRQ_HEAD(x) (x<<2) +/* #define IRQ_PENDING (1) conflict with linux/irq.h */ +#define IRQ0_CNTL 0x80088 +#define IRQ_ADDRSEL(x) (x<<2) +#define IRQ_INT_A (0<<2) +#define IRQ_INT_B (1<<2) +#define IRQ_INT_C (2<<2) +#define IRQ_INT_D (3<<2) +#define IRQ_TYPE_ADDR 0x1 +#define IRQ_TYPE_LINE 0x0 +#define 
IRQ0_DATA 0x8008c + +#define IRQ1_BASE 0x80090 +#define IRQ1_HEAD 0x80094 +#define IRQ1_CNTL 0x80098 +#define IRQ1_DATA 0x8009c + +#define IRQ2_BASE 0x800a0 +#define IRQ2_HEAD 0x800a4 +#define IRQ2_CNTL 0x800a8 +#define IRQ2_DATA 0x800ac + +#define IRQ3_BASE 0x800b0 +#define IRQ3_HEAD 0x800b4 +#define IRQ3_CNTL 0x800b8 +#define IRQ3_DATA 0x800bc + +#define GRP_10_MAP 0x800c0 +#define GRP_32_MAP 0x800c4 +#define GRP_54_MAP 0x800c8 +#define GRP_76_MAP 0x800cc + +#define G0_RBPS_S 0x80400 +#define G0_RBPS_T 0x80404 +#define RBP_TAIL(x) ((x)<<3) +#define RBP_MASK(x) ((x)|0x1fff) +#define G0_RBPS_QI 0x80408 +#define RBP_QSIZE(x) ((x)<<14) +#define RBP_INT_ENB (1<<13) +#define RBP_THRESH(x) (x) +#define G0_RBPS_BS 0x8040c +#define G0_RBPL_S 0x80410 +#define G0_RBPL_T 0x80414 +#define G0_RBPL_QI 0x80418 +#define G0_RBPL_BS 0x8041c + +#define G1_RBPS_S 0x80420 +#define G1_RBPS_T 0x80424 +#define G1_RBPS_QI 0x80428 +#define G1_RBPS_BS 0x8042c +#define G1_RBPL_S 0x80430 +#define G1_RBPL_T 0x80434 +#define G1_RBPL_QI 0x80438 +#define G1_RBPL_BS 0x8043c + +#define G2_RBPS_S 0x80440 +#define G2_RBPS_T 0x80444 +#define G2_RBPS_QI 0x80448 +#define G2_RBPS_BS 0x8044c +#define G2_RBPL_S 0x80450 +#define G2_RBPL_T 0x80454 +#define G2_RBPL_QI 0x80458 +#define G2_RBPL_BS 0x8045c + +#define G3_RBPS_S 0x80460 +#define G3_RBPS_T 0x80464 +#define G3_RBPS_QI 0x80468 +#define G3_RBPS_BS 0x8046c +#define G3_RBPL_S 0x80470 +#define G3_RBPL_T 0x80474 +#define G3_RBPL_QI 0x80478 +#define G3_RBPL_BS 0x8047c + +#define G4_RBPS_S 0x80480 +#define G4_RBPS_T 0x80484 +#define G4_RBPS_QI 0x80488 +#define G4_RBPS_BS 0x8048c +#define G4_RBPL_S 0x80490 +#define G4_RBPL_T 0x80494 +#define G4_RBPL_QI 0x80498 +#define G4_RBPL_BS 0x8049c + +#define G5_RBPS_S 0x804a0 +#define G5_RBPS_T 0x804a4 +#define G5_RBPS_QI 0x804a8 +#define G5_RBPS_BS 0x804ac +#define G5_RBPL_S 0x804b0 +#define G5_RBPL_T 0x804b4 +#define G5_RBPL_QI 0x804b8 +#define G5_RBPL_BS 0x804bc + +#define G6_RBPS_S 0x804c0 +#define G6_RBPS_T 
0x804c4 +#define G6_RBPS_QI 0x804c8 +#define G6_RBPS_BS 0x804cc +#define G6_RBPL_S 0x804d0 +#define G6_RBPL_T 0x804d4 +#define G6_RBPL_QI 0x804d8 +#define G6_RBPL_BS 0x804dc + +#define G7_RBPS_S 0x804e0 +#define G7_RBPS_T 0x804e4 +#define G7_RBPS_QI 0x804e8 +#define G7_RBPS_BS 0x804ec + +#define G7_RBPL_S 0x804f0 +#define G7_RBPL_T 0x804f4 +#define G7_RBPL_QI 0x804f8 +#define G7_RBPL_BS 0x804fc + +#define G0_RBRQ_ST 0x80500 +#define G0_RBRQ_H 0x80504 +#define G0_RBRQ_Q 0x80508 +#define RBRQ_THRESH(x) ((x)<<13) +#define RBRQ_SIZE(x) (x) +#define G0_RBRQ_I 0x8050c +#define RBRQ_TIME(x) ((x)<<8) +#define RBRQ_COUNT(x) (x) + +/* fill in 1 ... 7 later */ + +#define G0_TBRQ_B_T 0x80600 +#define G0_TBRQ_H 0x80604 +#define G0_TBRQ_S 0x80608 +#define G0_TBRQ_THRESH 0x8060c +#define TBRQ_THRESH(x) (x) + +/* fill in 1 ... 7 later */ + +#define RH_CONFIG 0x805c0 +#define PHY_INT_ENB (1<<10) +#define OAM_GID(x) (x<<7) +#define PTMR_PRE(x) (x) + +#define G0_INMQ_S 0x80580 +#define G0_INMQ_L 0x80584 +#define G1_INMQ_S 0x80588 +#define G1_INMQ_L 0x8058c +#define G2_INMQ_S 0x80590 +#define G2_INMQ_L 0x80594 +#define G3_INMQ_S 0x80598 +#define G3_INMQ_L 0x8059c +#define G4_INMQ_S 0x805a0 +#define G4_INMQ_L 0x805a4 +#define G5_INMQ_S 0x805a8 +#define G5_INMQ_L 0x805ac +#define G6_INMQ_S 0x805b0 +#define G6_INMQ_L 0x805b4 +#define G7_INMQ_S 0x805b8 +#define G7_INMQ_L 0x805bc + +#define TPDRQ_B_H 0x80680 +#define TPDRQ_T 0x80684 +#define TPDRQ_S 0x80688 + +#define UBUFF_BA 0x8068c + +#define RLBF0_H 0x806c0 +#define RLBF0_T 0x806c4 +#define RLBF1_H 0x806c8 +#define RLBF1_T 0x806cc +#define RLBC_H 0x806d0 +#define RLBC_T 0x806d4 +#define RLBC_H2 0x806d8 +#define TLBF_H 0x806e0 +#define TLBF_T 0x806e4 +#define RLBF0_C 0x806e8 +#define RLBF1_C 0x806ec +#define RXTHRSH 0x806f0 +#define LITHRSH 0x806f4 + +#define LBARB 0x80700 +#define SLICE_X(x) (x<<28) +#define ARB_RNUM_MAX(x) (x<<23) +#define TH_PRTY(x) (x<<21) +#define RH_PRTY(x) (x<<19) +#define TL_PRTY(x) (x<<17) +#define RL_PRTY(x) 
(x<<15) +#define BUS_MULTI(x) (x<<8) +#define NET_PREF(x) (x) + +#define SDRAMCON 0x80704 +#define BANK_ON (1<<14) +#define WIDE_DATA (1<<13) +#define TWR_WAIT (1<<12) +#define TRP_WAIT (1<<11) +#define TRAS_WAIT (1<<10) +#define REF_RATE(x) (x) + +#define LBSTAT 0x80708 + +#define RCC_STAT 0x8070c +#define RCC_BUSY (1) + +#define TCMCONFIG 0x80740 +#define TM_DESL2 (1<<10) +#define TM_BANK_WAIT(x) (x<<6) +#define TM_ADD_BANK4(x) (x<<4) +#define TM_PAR_CHECK(x) (x<<3) +#define TM_RW_WAIT(x) (x<<2) +#define TM_SRAM_TYPE(x) (x) + +#define TSRB_BA 0x80744 +#define TSRC_BA 0x80748 +#define TMABR_BA 0x8074c +#define TPD_BA 0x80750 +#define TSRD_BA 0x80758 + +#define TX_CONFIG 0x80760 +#define DRF_THRESH(x) (x<<22) +#define TX_UT_MODE(x) (x<<21) +#define TX_VCI_MASK(x) (x<<17) +#define LBFREE_CNT(x) (x) + +#define TXAAL5_PROTO 0x80764 +#define CPCS_UU(x) (x<<8) +#define CPI(x) (x) + +#define RCMCONFIG 0x80780 +#define RM_DESL2(x) (x<<10) +#define RM_BANK_WAIT(x) (x<<6) +#define RM_ADD_BANK(x) (x<<4) +#define RM_PAR_CHECK(x) (x<<3) +#define RM_RW_WAIT(x) (x<<2) +#define RM_SRAM_TYPE(x) (x) + +#define RCMRSRB_BA 0x80784 +#define RCMLBM_BA 0x80788 +#define RCMABR_BA 0x8078c + +#define RC_CONFIG 0x807c0 +#define UT_RD_DELAY(x) (x<<11) +#define WRAP_MODE(x) (x<<10) +#define RC_UT_MODE(x) (x<<9) +#define RX_ENABLE (1<<8) +#define RX_VALVP(x) (x<<4) +#define RX_VALVC(x) (x) + +#define MCC 0x807c4 +#define OEC 0x807c8 +#define DCC 0x807cc +#define CEC 0x807d0 + +#define HSP_BA 0x807f0 + +#define LB_CONFIG 0x807f4 +#define LB_SIZE(x) (x) + +#define CON_DAT 0x807f8 +#define CON_CTL 0x807fc +#define CON_CTL_MBOX (2<<30) +#define CON_CTL_TCM (1<<30) +#define CON_CTL_RCM (0<<30) +#define CON_CTL_WRITE (1<<29) +#define CON_CTL_READ (0<<29) +#define CON_CTL_BUSY (1<<28) +#define CON_BYTE_DISABLE_3 (1<<22) /* 24..31 */ +#define CON_BYTE_DISABLE_2 (1<<21) /* 16..23 */ +#define CON_BYTE_DISABLE_1 (1<<20) /* 8..15 */ +#define CON_BYTE_DISABLE_0 (1<<19) /* 0..7 */ +#define CON_CTL_ADDR(x) 
(x) + +#define FRAMER 0x80800 /* to 0x80bfc */ + +/* 3.3 network controller (internal) mailbox registers */ + +#define CS_STPER0 0x0 + /* ... */ +#define CS_STPER31 0x01f + +#define CS_STTIM0 0x020 + /* ... */ +#define CS_STTIM31 0x03f + +#define CS_TGRLD0 0x040 + /* ... */ +#define CS_TGRLD15 0x04f + +#define CS_ERTHR0 0x050 +#define CS_ERTHR1 0x051 +#define CS_ERTHR2 0x052 +#define CS_ERTHR3 0x053 +#define CS_ERTHR4 0x054 +#define CS_ERCTL0 0x055 +#define TX_ENABLE (1<<28) +#define ER_ENABLE (1<<27) +#define CS_ERCTL1 0x056 +#define CS_ERCTL2 0x057 +#define CS_ERSTAT0 0x058 +#define CS_ERSTAT1 0x059 + +#define CS_RTCCT 0x060 +#define CS_RTFWC 0x061 +#define CS_RTFWR 0x062 +#define CS_RTFTC 0x063 +#define CS_RTATR 0x064 + +#define CS_TFBSET 0x070 +#define CS_TFBADD 0x071 +#define CS_TFBSUB 0x072 +#define CS_WCRMAX 0x073 +#define CS_WCRMIN 0x074 +#define CS_WCRINC 0x075 +#define CS_WCRDEC 0x076 +#define CS_WCRCEIL 0x077 +#define CS_BWDCNT 0x078 + +#define CS_OTPPER 0x080 +#define CS_OTWPER 0x081 +#define CS_OTTLIM 0x082 +#define CS_OTTCNT 0x083 + +#define CS_HGRRT0 0x090 + /* ... 
*/ +#define CS_HGRRT7 0x097 + +#define CS_ORPTRS 0x0a0 + +#define RXCON_CLOSE 0x100 + + +#define RCM_MEM_SIZE 0x10000 /* 1M of 32-bit registers */ +#define TCM_MEM_SIZE 0x20000 /* 2M of 32-bit registers */ + +/* 2.5 transmit connection memory registers */ + +#define TSR0_CONN_STATE(x) ((x>>28) & 0x7) +#define TSR0_USE_WMIN (1<<23) +#define TSR0_GROUP(x) ((x & 0x7)<<18) +#define TSR0_ABR (2<<16) +#define TSR0_UBR (1<<16) +#define TSR0_CBR (0<<16) +#define TSR0_PROT (1<<15) +#define TSR0_AAL0_SDU (2<<12) +#define TSR0_AAL0 (1<<12) +#define TSR0_AAL5 (0<<12) +#define TSR0_HALT_ER (1<<11) +#define TSR0_MARK_CI (1<<10) +#define TSR0_MARK_ER (1<<9) +#define TSR0_UPDATE_GER (1<<8) +#define TSR0_RC_INDEX(x) (x & 0x1F) + +#define TSR1_PCR(x) ((x & 0x7FFF)<<16) +#define TSR1_MCR(x) (x & 0x7FFF) + +#define TSR2_ACR(x) ((x & 0x7FFF)<<16) + +#define TSR3_NRM_CNT(x) ((x & 0xFF)<<24) +#define TSR3_CRM_CNT(x) (x & 0xFFFF) + +#define TSR4_FLUSH_CONN (1<<31) +#define TSR4_SESSION_ENDED (1<<30) +#define TSR4_CRC10 (1<<28) +#define TSR4_NULL_CRC10 (1<<27) +#define TSR4_PROT (1<<26) +#define TSR4_AAL0_SDU (2<<23) +#define TSR4_AAL0 (1<<23) +#define TSR4_AAL5 (0<<23) + +#define TSR9_OPEN_CONN (1<<20) + +#define TSR11_ICR(x) ((x & 0x7FFF)<<16) +#define TSR11_TRM(x) ((x & 0x7)<<13) +#define TSR11_NRM(x) ((x & 0x7)<<10) +#define TSR11_ADTF(x) (x & 0x3FF) + +#define TSR13_RDF(x) ((x & 0xF)<<23) +#define TSR13_RIF(x) ((x & 0xF)<<19) +#define TSR13_CDF(x) ((x & 0x7)<<16) +#define TSR13_CRM(x) (x & 0xFFFF) + +#define TSR14_DELETE (1<<31) +#define TSR14_ABR_CLOSE (1<<16) + +/* 2.7.1 per connection receive state registers */ + +#define RSR0_START_PDU (1<<10) +#define RSR0_OPEN_CONN (1<<6) +#define RSR0_CLOSE_CONN (0<<6) +#define RSR0_PPD_ENABLE (1<<5) +#define RSR0_EPD_ENABLE (1<<4) +#define RSR0_TCP_CKSUM (1<<3) +#define RSR0_AAL5 (0) +#define RSR0_AAL0 (1) +#define RSR0_AAL0_SDU (2) +#define RSR0_RAWCELL (3) +#define RSR0_RAWCELL_CRC10 (4) + +#define RSR1_AQI_ENABLE (1<<20) +#define 
RSR1_RBPL_ONLY (1<<19) +#define RSR1_GROUP(x) ((x)<<16) + +#define RSR4_AQI_ENABLE (1<<30) +#define RSR4_GROUP(x) ((x)<<27) +#define RSR4_RBPL_ONLY (1<<26) + +/* 2.1.4 transmit packet descriptor */ + +#define TPD_USERCELL 0x0 +#define TPD_SEGMENT_OAMF5 0x4 +#define TPD_END2END_OAMF5 0x5 +#define TPD_RMCELL 0x6 +#define TPD_CELLTYPE(x) (x<<3) +#define TPD_EOS (1<<2) +#define TPD_CLP (1<<1) +#define TPD_INT (1<<0) +#define TPD_LST (1<<31) + +/* table 4.3 serial eeprom information */ + +#define PROD_ID 0x08 /* char[] */ +#define PROD_ID_LEN 30 +#define HW_REV 0x26 /* char[] */ +#define M_SN 0x3a /* integer */ +#define MEDIA 0x3e /* integer */ +#define HE155MM 0x26 +#define HE155SM 0x27 +#define HE622MM 0x46 +#define HE622SM 0x47 +#define MAC_ADDR 0x42 /* char[] */ + +#define CS_LOW 0x0 +#define CS_HIGH ID_CS /* HOST_CNTL_ID_PROM_SEL */ +#define CLK_LOW 0x0 +#define CLK_HIGH ID_CLOCK /* HOST_CNTL_ID_PROM_CLOCK */ +#define SI_HIGH ID_DIN /* HOST_CNTL_ID_PROM_DATA_IN */ +#define EEPROM_DELAY 400 /* microseconds */ + +#endif /* _HE_H_ */ diff --git a/drivers/atm/horizon.c b/drivers/atm/horizon.c new file mode 100644 index 000000000000..924a2c8988bd --- /dev/null +++ b/drivers/atm/horizon.c @@ -0,0 +1,2953 @@ +/* + Madge Horizon ATM Adapter driver. + Copyright (C) 1995-1999 Madge Networks Ltd. + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. 
+ + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + + The GNU GPL is contained in /usr/doc/copyright/GPL on a Debian + system and in the file COPYING in the Linux kernel source. +*/ + +/* + IMPORTANT NOTE: Madge Networks no longer makes the adapters + supported by this driver and makes no commitment to maintain it. +*/ + +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/mm.h> +#include <linux/pci.h> +#include <linux/errno.h> +#include <linux/atm.h> +#include <linux/atmdev.h> +#include <linux/sonet.h> +#include <linux/skbuff.h> +#include <linux/time.h> +#include <linux/delay.h> +#include <linux/uio.h> +#include <linux/init.h> +#include <linux/ioport.h> +#include <linux/wait.h> + +#include <asm/system.h> +#include <asm/io.h> +#include <asm/atomic.h> +#include <asm/uaccess.h> +#include <asm/string.h> +#include <asm/byteorder.h> + +#include "horizon.h" + +#define maintainer_string "Giuliano Procida at Madge Networks <gprocida@madge.com>" +#define description_string "Madge ATM Horizon [Ultra] driver" +#define version_string "1.2.1" + +static inline void __init show_version (void) { + printk ("%s version %s\n", description_string, version_string); +} + +/* + + CREDITS + + Driver and documentation by: + + Chris Aston Madge Networks + Giuliano Procida Madge Networks + Simon Benham Madge Networks + Simon Johnson Madge Networks + Various Others Madge Networks + + Some inspiration taken from other drivers by: + + Alexandru Cucos UTBv + Kari Mettinen University of Helsinki + Werner Almesberger EPFL LRC + + Theory of Operation + + I Hardware, detection, initialisation and shutdown. + + 1. Supported Hardware + + This driver should handle all variants of the PCI Madge ATM adapters + with the Horizon chipset. 
These are all PCI cards supporting PIO, BM + DMA and a form of MMIO (registers only, not internal RAM). + + The driver is only known to work with SONET and UTP Horizon Ultra + cards at 155Mb/s. However, code is in place to deal with both the + original Horizon and 25Mb/s operation. + + There are two revisions of the Horizon ASIC: the original and the + Ultra. Details of hardware bugs are in section III. + + The ASIC version can be distinguished by chip markings but is NOT + indicated by the PCI revision (all adapters seem to have PCI rev 1). + + I believe that: + + Horizon => Collage 25 PCI Adapter (UTP and STP) + Horizon Ultra => Collage 155 PCI Client (UTP or SONET) + Ambassador x => Collage 155 PCI Server (completely different) + + Horizon (25Mb/s) is fitted with UTP and STP connectors. It seems to + have a Madge B154 plus glue logic serializer. I have also found a + really ancient version of this with slightly different glue. It + comes with the revision 0 (140-025-01) ASIC. + + Horizon Ultra (155Mb/s) is fitted with either a Pulse Medialink + output (UTP) or an HP HFBR 5205 output (SONET). It has either + Madge's SAMBA framer or a SUNI-lite device (early versions). It + comes with the revision 1 (140-027-01) ASIC. + + 2. Detection + + All Horizon-based cards present with the same PCI Vendor and Device + IDs. The standard Linux 2.2 PCI API is used to locate any cards and + to enable bus-mastering (with appropriate latency). + + ATM_LAYER_STATUS in the control register distinguishes between the + two possible physical layers (25 and 155). It is not clear whether + the 155 cards can also operate at 25Mbps. We rely on the fact that a + card operates at 155 if and only if it has the newer Horizon Ultra + ASIC. + + For 155 cards the two possible framers are probed for and then set + up for loop-timing. + + 3. Initialisation + + The card is reset and then put into a known state. 
The physical + layer is configured for normal operation at the appropriate speed; + in the case of the 155 cards, the framer is initialised with + line-based timing; the internal RAM is zeroed and the allocation of + buffers for RX and TX is made; the Burnt In Address is read and + copied to the ATM ESI; various policy settings for RX (VPI bits, + unknown VCs, oam cells) are made. Ideally all policy items should be + configurable at module load (if not actually on-demand), however, + only the vpi vs vci bit allocation can be specified at insmod. + + 4. Shutdown + + This is in response to module_cleanup. No VCs are in use and the card + should be idle; it is reset. + + II Driver software (as it should be) + + 0. Traffic Parameters + + The traffic classes (not an enumeration) are currently: ATM_NONE (no + traffic), ATM_UBR, ATM_CBR, ATM_VBR and ATM_ABR, ATM_ANYCLASS + (compatible with everything). Together with (perhaps only some of) + the following items they make up the traffic specification. + + struct atm_trafprm { + unsigned char traffic_class; traffic class (ATM_UBR, ...) + int max_pcr; maximum PCR in cells per second + int pcr; desired PCR in cells per second + int min_pcr; minimum PCR in cells per second + int max_cdv; maximum CDV in microseconds + int max_sdu; maximum SDU in bytes + }; + + Note that these denote bandwidth available not bandwidth used; the + possibilities according to ATMF are: + + Real Time (cdv and max CDT given) + + CBR(pcr) pcr bandwidth always available + rtVBR(pcr,scr,mbs) scr bandwidth always available, up to pcr at mbs too + + Non Real Time + + nrtVBR(pcr,scr,mbs) scr bandwidth always available, up to pcr at mbs too + UBR() + ABR(mcr,pcr) mcr bandwidth always available, up to pcr (depending) too + + mbs is max burst size (bucket) + pcr and scr have associated cdvt values + mcr is like scr but has no cdvt + cdvt may differ at each hop + + Some of the above items are qos items (as opposed to traffic + parameters). 
We have nothing to do with qos. All except ABR can have + their traffic parameters converted to GCRA parameters. The GCRA may + be implemented as a (real-number) leaky bucket. The GCRA can be |