Both VMESlave and DMA_dev need access to system RAM. This is handled by an extra kernel module called uiodrv, which allocates, maps and releases system RAM on user request. The memory granted is always contiguous and is temporarily removed from the internal Linux memory management, so local and remote applications can safely use DMAs to move data from/to the PC RAM. The maximum size granted by uiodrv is 128 Kbytes due to internal Linux kernel constraints; however, up to 128 devices can be opened through the driver if more memory is needed. The current uiodrv design only provides virtual pointers to the calling process. I should be very grateful if somebody could think about how to share the memory granted by uiodrv between external processes ;-). Perhaps the shm_xxx POSIX.4 calls will make this possible in the 3.0 kernel development.

The libVMEBit3.a library is designed with multithreaded applications in mind: each hardware device contains a POSIX.1 semaphore which acts as a "stopper" against multiple reentrant requests on a single device.

Bit3PCI.h provides a bit-field user interface to the internal PCI/VME registers. The header defines a complete set of structures whose behaviour should be carefully checked the first time you run the software. If the bit-field structures are still functional, you only need to index a structure to monitor/set a register flag or to program a DMA PCI address.
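As an illustration of this bit-field style of access, the sketch below shows how a register flag could be monitored and cleared simply by indexing a structure member. The structure and field names are hypothetical, not the actual declarations from Bit3PCI.h, whose layout must be verified against the hardware.

#include <stdint.h>

/* Hypothetical bit-field mapping of a Bit3 status register; the real
 * declarations live in Bit3PCI.h.  Bit-field ordering is
 * implementation-defined, which is why the structures must be checked
 * the first time the software is run. */
struct bit3_status_reg {
    uint8_t parity_error   : 1;  /* interface parity error flag */
    uint8_t remote_bus_err : 1;  /* remote (VME) bus error flag */
    uint8_t irq_pending    : 1;  /* interrupt pending           */
    uint8_t reserved       : 5;
};

/* Pointer obtained after mapping the Bit3 register block into user space. */
volatile struct bit3_status_reg *status;

void check_and_clear_errors(void)
{
    /* Monitoring or setting a flag is a plain structure member access. */
    if (status->remote_bus_err)
        status->remote_bus_err = 0;  /* assumed write-to-clear semantics */
}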
The Bit3extras object provides dump/monitor functions for each of the Bit3 register blocks: Local DMA, Remote DMA, Remote CSR (VME), Local CSR (PCI) and the Local PCI Configuration Space. Each function returns a decoded, fixed-format data stream which can easily be parsed by other kinds of applications, such as TCP/IP servers or GUIs. An optional terminal dump can also be requested through a control parameter. An example of use is the (still incomplete) GUI tool Bit3Tree, built with gtk-1.2.1 for monitoring purposes. Complete applications are Bit3Util (a terminal version of Bit3Tree) and Bit3Dump, which dumps to the terminal the state of a user-selected range of page descriptors.
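A minimal usage sketch of how such a dump routine might be driven follows; the function name, its argument and its return type are assumptions made for illustration, the real prototypes being the ones declared by Bit3extras.

#include <stdio.h>

/* Hypothetical prototype: dump the Remote CSR (VME) register block.
 * 'to_terminal' stands for the optional terminal-dump control parameter;
 * the returned string is the fixed-format stream meant for parsers. */
extern const char *bit3_dump_remote_csr(int to_terminal);

int main(void)
{
    /* Request the decoded register dump and also echo it to the terminal. */
    const char *stream = bit3_dump_remote_csr(1);

    /* The fixed format makes it easy to forward over TCP/IP or to a GUI. */
    printf("%s", stream);
    return 0;
}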
The ROD code is completely developed and linked against the ific-1.0 driver and the Bit3-1.0 VME library. The 1.0 version corresponds to the ROD driver design under Linux-2.2.x, which gathers the Memory/Bit3 and SLink management on the kernel side. However, a public distribution, ific-1.1, is available which splits the original design into three drivers: slinkdrv, uiodrv and bit3drv. The Bit3-1.1 library has been built on top of two of them (bit3drv and uiodrv). There are no dramatic changes in the internals of the library and applications, because the services offered by the driver layer are the same; we considered that being logically linked against other modules was reason enough to increase the version number. The only changes made are four defines which redeclare the file-system nodes where each user object opens a file descriptor.
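In practice this amounts to a small set of device-node path defines along the following lines; the macro names and paths are purely illustrative, the actual values being the ones set in the 1.1 headers.

/* Illustrative redeclaration of the file-system nodes opened by the 1.1
 * user objects (names and paths are assumptions, not the real defines). */
#define BIT3_LOCAL_NODE   "/dev/bit3drv0"   /* Local CSR / Local DMA object  */
#define BIT3_REMOTE_NODE  "/dev/bit3drv1"   /* Remote CSR (VME) object       */
#define UIO_MEM_NODE      "/dev/uiodrv0"    /* contiguous system RAM object  */
#define SLINK_NODE        "/dev/slinkdrv0"  /* SLink object                  */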
Raw DMA bandwidths are measured by taking the time between a falling edge and a rising edge of AS in a single DMA block transfer cycle. Acting as master device, the performance is close to 21 Mbytes/sec for read cycles and 15 Mbytes/sec for write cycles, the VME target being the system RAM of the PowerPC 2604. Using the MVME167 card as DMA target, read cycles burst at 22 Mbytes/sec and write cycles take 1 ms to transfer 16 Kbytes (about 16 Mbytes/sec), so the numbers correlate well. In the Bit3 slave mode, the PowerPC reads data from PC memory using DMAs at a rate of 10 Mbytes/sec, and the same figure is obtained when the MVME167 performs the DMA transfer. Write DMA cycles from the native VME hosts show that 16 Kbytes are moved in 1250 usec on the MVME167 and in 1216 usec on the PowerPC, in other words roughly 12.5 Mbytes/sec are obtained.
TileCal ROD
Maintained by Juanba, IFIC, University of Valencia