DMA
Direct Memory Access (DMA) is a method of allowing data to be moved from one location to another in a computer without intervention from the central processor (CPU). The way the DMA function is implemented varies between computer architectures, so this discussion limits itself to the implementation and workings of the DMA subsystem on the IBM Personal Computer (PC), the IBM PC/AT, and all of their successors and clones.

The PC DMA subsystem is based on the Intel 8237 DMA controller. The 8237 contains four DMA channels that can be programmed independently, and any one of the channels may be active at any moment. These channels are numbered 0, 1, 2 and 3. Starting with the PC/AT, IBM added a second 8237 chip and numbered those channels 4, 5, 6 and 7. The original DMA controller (channels 0, 1, 2 and 3) moves one byte in each transfer. The second DMA controller (channels 4, 5, 6 and 7) moves 16 bits from two adjacent memory locations in each transfer, with the first byte always coming from an even-numbered address. The two controllers are identical components, and the difference in transfer size is caused by the way the second controller is wired into the system.

The 8237 has two electrical signals for each channel, named DRQ and -DACK. There are additional signals with the names HRQ (Hold Request), HLDA (Hold Acknowledge), -EOP (End of Process), and the bus control signals -MEMR (Memory Read), -MEMW (Memory Write), -IOR (I/O Read), and -IOW (I/O Write).

The 8237 DMA is known as a "fly-by" DMA controller. This means that the data being moved from one location to another does not pass through the DMA chip and is not stored in the DMA chip. Consequently, the DMA can only transfer data between an I/O port and a memory address, but not between two I/O ports or two memory locations.

Note: The 8237 does allow two channels to be connected together to allow memory-to-memory DMA operations in a non-fly-by mode, but nobody in the PC industry uses this scarce resource this way, since it is faster to move data between memory locations using the CPU.

In the PC architecture, each DMA channel is normally activated only when the hardware that uses a given DMA channel requests a transfer by asserting the DRQ line for that channel.
How it Works
Processors provide one or two levels of DMA support. Since the dawn of the micro age just about every CPU has had the basic bus exchange support. This is quite simple, and usually consists of just a pair of pins. "Bus Request" (AKA "Hold" on Intel CPUs) is an input that, when asserted by some external device, causes the CPU to tri-state its pins at the completion of the next instruction. "Bus Grant" (AKA "Bus Acknowledge" or "Hold Acknowledge") signals that the processor is indeed tri-stated. This means any other device can put addresses, data, and control signals on the bus. The idea is that a DMA controller can cause the CPU to yield control, at which point the controller takes over the bus and initiates bus cycles. Obviously, the DMA controller must be pretty intelligent to properly handle the timing and to drive external devices through the bus.

Modern high-integration processors often include DMA controllers built right on the processor's silicon. This is part of the vendors' never-ending quest to move more and more of the support silicon onto the processor itself, greatly reducing the cost and complexity of building an embedded system. In this case the Bus Request and Bus Grant pins are connected to the onboard controller inside of the CPU package, though they usually come out to pins as well, so really complex systems can run multiple DMA controllers. It's a scary thought....

Every DMA transfer starts with the software programming the DMA controller, the device (either on-board a high-integration CPU chip or a discrete component) that manages these transactions. The code must typically set up destination and source pointers to tell the controller where the data is coming from, and where it is going to. A counter must be programmed to track the number of bytes in the transfer. Finally, numerous bits setting the DMA mode and type must be set. These may include source or destination type (I/O or memory), action to take on completion of the transaction (generate an interrupt, restart the controller, etc.), wait states for each CPU cycle, and so on.

Now the DMA controller waits for some action to start the transfer. Perhaps an external I/O device signals it is ready by toggling a bit. Sometimes the software simply sets a "start now" flag. Regardless, the controller takes over the bus and starts making transfers. Each DMA transfer looks just like a pair of normal CPU cycles. A memory or I/O read from the DMA source address is followed by a corresponding write to the destination. The source and destination devices cannot tell if the CPU is doing the accesses or if the DMA controller is doing them.

During each DMA cycle the CPU is dead - it's waiting patiently for access to the bus, but cannot perform any useful computation during the transfer. The DMA controller is the bus master. It has control until it chooses to release the bus back to the CPU.
Depending on the type of DMA transfer and the characteristics of the controller, a single pair of cycles may terminate to allow the CPU to run for a while, or a complete block of data may be moved without bringing the processor back from the land of the idle. Once the entire transfer is complete the DMA controller may quietly go to sleep, or it may restart the transfer when the I/O is ready for another block, or it may signal the processor that the action is complete. In my experience this is always signaled via an interrupt. The controller interrupts the CPU so the firmware can take the appropriate action. To summarize, the processor programs the DMA controller with parameters about the transfers, the controller tri-states the CPU and moves data over the CPU bus, and then when the entire block is done the controller signals completion via an interrupt.
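To make the sequence concrete, here is a minimal sketch in C of the programming steps just described. It assumes a hypothetical memory-mapped controller with source, destination, count, mode and control registers; the register names, addresses and bit values are invented purely for illustration, and a real controller's register map will differ:

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers (illustrative only). */
    #define DMA_BASE   0x40001000u
    #define DMA_SRC    (*(volatile uint32_t *)(DMA_BASE + 0x00)) /* source address   */
    #define DMA_DST    (*(volatile uint32_t *)(DMA_BASE + 0x04)) /* destination addr */
    #define DMA_COUNT  (*(volatile uint32_t *)(DMA_BASE + 0x08)) /* bytes to move    */
    #define DMA_MODE   (*(volatile uint32_t *)(DMA_BASE + 0x0c)) /* mode/type bits   */
    #define DMA_CTRL   (*(volatile uint32_t *)(DMA_BASE + 0x10)) /* start/status     */

    #define DMA_MODE_MEM_TO_IO   (1u << 0)  /* source is memory, destination is I/O */
    #define DMA_MODE_IRQ_ON_DONE (1u << 1)  /* interrupt the CPU when count hits 0  */
    #define DMA_CTRL_START       (1u << 0)

    /* Arm the controller: set pointers, count and mode, then kick it off. */
    static void dma_start_transfer(const void *src, uint32_t io_port, uint32_t len)
    {
        DMA_SRC   = (uint32_t)(uintptr_t)src;
        DMA_DST   = io_port;
        DMA_COUNT = len;
        DMA_MODE  = DMA_MODE_MEM_TO_IO | DMA_MODE_IRQ_ON_DONE;
        DMA_CTRL  = DMA_CTRL_START;   /* controller now takes the bus as needed */
        /* The CPU carries on; completion is signalled by the controller's interrupt. */
    }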
On the PC, a transfer begins when the peripheral (in this example, the floppy disk controller) asserts its DRQ line; the DMA responds by asserting HRQ to the CPU, the CPU finishes its current bus cycle, tri-states its bus signals and asserts HLDA, and the DMA takes over the bus, driving the memory address and asserting -MEMW and -IOR for an I/O-to-memory transfer. The DMA will then let the device that requested the DMA transfer know that the transfer is commencing. This is done by asserting the -DACK signal, or in the case of the floppy disk controller, -DACK2 is asserted. The floppy disk controller is now responsible for placing the byte to be transferred on the bus data lines. Unless the floppy controller needs more time to get the data byte on the bus (and if the peripheral does need more time it alerts the DMA via the READY signal), the DMA will wait one DMA clock, and then de-assert the -MEMW and -IOR signals so that the memory will latch and store the byte that was on the bus, and the FDC will know that the byte has been transferred.

Since the DMA cycle only transfers a single byte at a time, the FDC now drops the DRQ2 signal, so the DMA knows that it is no longer needed. The DMA will de-assert the -DACK2 signal, so that the FDC knows it must stop placing data on the bus.

The DMA will now check to see if any of the other DMA channels have any work to do. If none of the channels have their DRQ lines asserted, the DMA controller has completed its work and will now tri-state the -MEMR, -MEMW, -IOR, -IOW and address signals. Finally, the DMA will de-assert the HRQ signal. The CPU sees this, and de-asserts the HLDA signal. Now the CPU activates its -MEMR, -MEMW, -IOR, -IOW and address lines, and it resumes executing instructions and accessing main memory and the peripherals.

For a typical floppy disk sector, the above process is repeated 512 times, once for each byte. Each time a byte is transferred, the address register in the DMA is incremented and the counter in the DMA that shows how many bytes are to be transferred is decremented. When the counter reaches zero, the DMA asserts the EOP signal, which indicates that the counter has reached zero and no more data will be transferred until the DMA controller is reprogrammed by the CPU. This event is also called the Terminal Count (TC). There is only one EOP signal, and since only one DMA channel can be active at any instant, the DMA channel that is currently active must be the DMA channel that just completed its task.

If a peripheral wants to generate an interrupt when the transfer of a buffer is complete, it can test for its -DACKn signal and the EOP signal both being asserted at the same time. When that happens, it means the DMA will not transfer any more information for that peripheral without intervention by the CPU. The peripheral can then assert one of the interrupt signals to get the processor's attention. In the PC architecture, the DMA chip itself is not capable of generating an interrupt. The peripheral and its associated hardware are responsible for generating any interrupt that occurs. Consequently, it is possible to have a peripheral that uses DMA but does not use interrupts.

It is important to understand that although the CPU always releases the bus to the DMA when the DMA makes the request, this action is invisible to both applications and the operating system, except for slight changes in the amount of time the processor takes to execute instructions when the DMA is active. Consequently, the processor must poll the peripheral, poll the registers in the DMA chip, or receive an interrupt from the peripheral to know for certain when a DMA transfer has completed.
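As a sketch of the polling option, a driver can read the DMA status register and check the terminal-count bit for its channel. The port (0x08 for the first controller) and the bit layout follow the standard 8237 register map, but treat this as an illustrative fragment rather than production driver code; the inb() helper is assumed to be provided by the environment, and a real driver would also handle timeouts and errors:

    #include <stdint.h>

    /* I/O port access helper; on x86 this would be the usual inb instruction,
     * provided by the OS or firmware environment (assumed here, not defined). */
    extern uint8_t inb(uint16_t port);

    #define DMA1_STATUS_REG 0x08   /* first 8237: a read returns the status register */

    /* Returns non-zero once channel 'ch' (0-3) has reached terminal count.
     * Note: reading the status register clears the TC bits, so cache the
     * result if it is needed again. */
    static int dma_channel_done(unsigned ch)
    {
        uint8_t status = inb(DMA1_STATUS_REG);
        return (status >> ch) & 1;   /* bits 0-3: TC reached for channels 0-3 */
    }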
The DMA controller and its external page registers can only present a 24-bit address, so on ISA systems DMA transfers are limited to the first 16Meg of physical memory. When a peripheral needs to deliver data to an address above 16Meg, the operating system instead programs the DMA to use a buffer located below 16Meg; once the DMA has moved the data into this buffer, the operating system will then copy the data from the buffer to the address where the data is really supposed to be stored. When writing data from an address above 16Meg to a DMA-based peripheral, the data must first be copied from where it resides into a buffer located below 16Meg, and then the DMA can copy the data from the buffer to the hardware. In FreeBSD, these reserved buffers are called Bounce Buffers. In the MS-DOS world, they are sometimes called Smart Buffers.

Note: A new implementation of the 8237, called the 82374, allows 16 bits of page register to be specified and enables access to the entire 32-bit address space, without the use of bounce buffers.
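A driver's bounce-buffer decision, in sketch form. The helper names (phys_addr_of, isa_dma_write) and the statically reserved buffer are invented for illustration; FreeBSD's real implementation lives in its bus DMA machinery:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define ISA_DMA_LIMIT 0x1000000u   /* 16Meg: highest address the ISA DMA can reach */

    /* Hypothetical helpers: translate to a physical address and start a transfer. */
    extern uintptr_t phys_addr_of(const void *p);
    extern void      isa_dma_write(unsigned channel, uintptr_t phys, size_t len);

    /* A buffer reserved below 16Meg at boot time (a "bounce buffer"). */
    extern uint8_t bounce_buf[64 * 1024];

    /* Write 'len' bytes from 'data' to a peripheral on 'channel'. */
    void dma_write_with_bounce(unsigned channel, const void *data, size_t len)
    {
        uintptr_t phys = phys_addr_of(data);

        if (phys + len <= ISA_DMA_LIMIT) {
            /* Buffer is already reachable by the DMA: transfer directly. */
            isa_dma_write(channel, phys, len);
        } else {
            /* Buffer lies above 16Meg: copy it into the bounce buffer first,
             * then let the DMA transfer from there. */
            memcpy(bounce_buf, data, len);
            isa_dma_write(channel, phys_addr_of(bounce_buf), len);
        }
    }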
Single
A single byte (or word) is transferred. The DMA must release and re-acquire the bus for each additional byte. This is commonly used by devices that cannot transfer the entire block of data immediately. The peripheral will request the DMA each time it is ready for another transfer. The standard PC-compatible floppy disk controller (NEC 765) only has a one-byte buffer, so it uses this mode.
Block/Demand
Once the DMA acquires the system bus, an entire block of data is transferred, up to a maximum of 64K. If the peripheral needs additional time, it can assert the READY signal to suspend the transfer briefly. READY should not be used excessively, and for slow peripheral transfers the Single Transfer Mode should be used instead. The difference between Block and Demand is that once a Block transfer is started, it runs until the transfer count reaches zero; DRQ only needs to be asserted until -DACK is asserted. Demand Mode keeps transferring bytes as long as DRQ remains asserted; when DRQ is de-asserted, the DMA suspends the transfer and releases the bus back to the CPU. When DRQ is asserted again later, the transfer resumes where it was suspended.
Older hard disk controllers used Demand Mode until CPU speeds increased to the point that it was more efficient to transfer the data using the CPU, particularly if the memory locations used in the transfer were above the 16Meg mark.
Cascade
This mechanism allows a DMA channel to request the bus, but then the attached peripheral device is responsible for placing the addressing information on the bus instead of the DMA. This is also used to implement a technique known as Bus Mastering.

When a DMA channel in Cascade Mode receives control of the bus, the DMA does not place addresses and I/O control signals on the bus like the DMA normally does when it is active. Instead, the DMA only asserts the -DACK signal for the active DMA channel. At this point it is up to the peripheral connected to that DMA channel to provide address and bus control signals. The peripheral has complete control over the system bus, and can do reads and/or writes to any address below 16Meg. When the peripheral is finished with the bus, it de-asserts the DRQ line, and the DMA controller can then return control to the CPU or to some other DMA channel.

Cascade Mode can be used to chain multiple DMA controllers together, and this is exactly what DMA Channel 4 is used for in the PC architecture. When a peripheral requests the bus on DMA channels 0, 1, 2 or 3, the slave DMA controller asserts HLDREQ, but this wire is actually connected to DRQ4 on the primary DMA controller instead of to the CPU. The primary DMA controller, thinking it has work to do on Channel 4, requests the bus from the CPU using its HLDREQ signal. Once the CPU grants the bus to the primary DMA controller, -DACK4 is asserted, and that wire is actually connected to the HLDA signal on the slave DMA controller. The slave DMA controller then transfers data for the DMA channel that requested it (0, 1, 2 or 3), or the slave DMA may grant the bus to a peripheral that wants to perform its own bus-mastering, such as a SCSI controller.

Because of this wiring arrangement, only DMA channels 0, 1, 2, 3, 5, 6 and 7 are usable with peripherals on PC/AT systems.

Note: DMA channel 0 was reserved for refresh operations in early IBM PC computers, but is generally available for use by peripherals in modern systems.

When a peripheral is performing Bus Mastering, it is important that the peripheral transmit data to or from memory constantly while it holds the system bus. If the peripheral cannot do this, it must release the bus frequently so that the system can perform refresh operations on main memory.
The Dynamic RAM used in all PCs for main memory must be accessed frequently to keep the bits stored in the components charged. Dynamic RAM essentially consists of millions of capacitors with each one holding one bit of data. These capacitors are charged with power to represent a 1 or drained to represent a 0. Because all capacitors leak, power must be added at regular intervals to keep the 1 values intact. The RAM chips actually handle the task of pumping power back into all of the appropriate locations in RAM, but they must be told when to do it by the rest of the computer so that the refresh activity will not interfere with the computer wanting to access RAM normally. If the computer is unable to refresh memory, the contents of memory will become corrupted in just a few milliseconds.

Since memory read and write cycles count as refresh cycles (a dynamic RAM refresh cycle is actually an incomplete memory read cycle), as long as the peripheral controller continues reading or writing data to sequential memory locations, that action will refresh all of memory. Bus-mastering is found in some SCSI host interfaces and other high-performance peripheral controllers.
Autoinitialize
This mode causes the DMA to perform Single, Block or Demand transfers, but when the DMA transfer counter reaches zero, the counter and address are set back to where they were when the DMA channel was originally programmed. This means that as long as the peripheral requests transfers, they will be granted. It is up to the CPU to move new data into the fixed buffer ahead of where the DMA is about to transfer it when doing output operations, and to read new data out of the buffer behind where the DMA is writing when doing input operations. This technique is frequently used on audio devices that have small or no hardware sample buffers. There is additional CPU overhead to manage this circular buffer, but in some cases this may be the only way to eliminate the latency that occurs when the DMA counter reaches zero and the DMA stops transfers until it is reprogrammed.
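For reference, these transfer modes correspond to bit fields in the 8237 mode register (I/O port 0x0b for the first controller, 0xd6 for the second). The following C defines sketch the usual encoding; consult the 8237 data sheet before relying on them:

    /* 8237 mode register bit fields (written to port 0x0b or 0xd6). */
    #define DMA_MODE_CHANNEL(n)     ((n) & 0x03)  /* bits 1-0: channel select        */

    #define DMA_MODE_VERIFY         0x00          /* bits 3-2: transfer type         */
    #define DMA_MODE_WRITE_TO_MEM   0x04          /*   device -> memory              */
    #define DMA_MODE_READ_FROM_MEM  0x08          /*   memory -> device              */

    #define DMA_MODE_AUTOINIT       0x10          /* bit 4: reload address and count */
    #define DMA_MODE_ADDR_DECREMENT 0x20          /* bit 5: decrement address        */

    #define DMA_MODE_DEMAND         0x00          /* bits 7-6: operating mode        */
    #define DMA_MODE_SINGLE         0x40
    #define DMA_MODE_BLOCK          0x80
    #define DMA_MODE_CASCADE        0xc0

    /* Example: a single-mode, device-to-memory transfer on channel 2 (the
     * floppy controller) uses the mode byte 0x46, i.e.
     * DMA_MODE_SINGLE | DMA_MODE_WRITE_TO_MEM | DMA_MODE_CHANNEL(2). */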
To program a channel, the DMA channel to be used should first be masked so that it will not respond to its DRQ line while it is being set up. Then the mode is set (Single, Block, Demand, Cascade, etc), and finally the address and length of the transfer are loaded. The length that is loaded is one less than the amount you expect the DMA to transfer. The LSB and MSB of the address and length are written to the same 8-bit I/O port, so another port (the flip-flop clear port) must be written to first to guarantee that the DMA accepts the first byte as the LSB and the second byte as the MSB of the length and address. Then, be sure to update the Page Register, which is external to the DMA and is accessed through a different set of I/O ports.

Once all the settings are ready, the DMA channel can be un-masked. That DMA channel is now considered to be armed, and will respond when the DRQ line for that channel is asserted.

Refer to a hardware data book for precise programming details for the 8237. You will also need to refer to the I/O port map for the PC system, which describes where the DMA and Page Register ports are located. A complete port map table is located below.
DMA Command Registers (first DMA controller, channels 0-3)

    Port   Access   Description
    0x08   write    Command Register
    0x08   read     Status Register
    0x09   write    Request Register
    0x09   read     -
    0x0a   write    Single Mask Register Bit
    0x0a   read     -
    0x0b   write    Mode Register
    0x0b   read     -
    0x0c   write    Clear LSB/MSB Flip-Flop
    0x0c   read     -
    0x0d   write    Master Clear/Reset
    0x0d   read     Temporary Register (not available on newer versions)
    0x0e   write    Clear Mask Register
    0x0e   read     -
    0x0f   write    Write All Mask Register Bits
    0x0f   read     Read All Mask Register Bits (only in Intel 82374)

DMA Command Registers (second DMA controller, channels 4-7)

    Port   Access   Description
    0xd0   write    Command Register
    0xd0   read     Status Register
    0xd2   write    Request Register
    0xd2   read     -
    0xd4   write    Single Mask Register Bit
    0xd4   read     -
    0xd6   write    Mode Register
    0xd6   read     -
    0xd8   write    Clear LSB/MSB Flip-Flop
    0xd8   read     -
    0xda   write    Master Clear/Reset
    0xda   read     Temporary Register (not present in Intel 82374)
    0xdc   write    Clear Mask Register
    0xdc   read     -
    0xde   write    Write All Mask Register Bits
    0xdf   read     Read All Mask Register Bits (only in Intel 82374)

DMA Page Registers

    Port   Access   Description
    0x87   r/w      Channel 0 Low byte (bits 23-16) Page Register
    0x83   r/w      Channel 1 Low byte (bits 23-16) Page Register
    0x81   r/w      Channel 2 Low byte (bits 23-16) Page Register
    0x82   r/w      Channel 3 Low byte (bits 23-16) Page Register
    0x8b   r/w      Channel 5 Low byte (bits 23-16) Page Register
    0x89   r/w      Channel 6 Low byte (bits 23-16) Page Register
    0x8a   r/w      Channel 7 Low byte (bits 23-16) Page Register
    0x8f   r/w      Low byte Page Register (refresh)
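Putting the programming steps and the port map together, here is a sketch of arming DMA channel 2 for a single-mode, device-to-memory transfer from the floppy controller. The port numbers come from the tables above; the outb() helper and the physical-address handling are assumed to be provided by the environment, and a real driver would add locking, timeouts and error handling:

    #include <stdint.h>

    extern void outb(uint16_t port, uint8_t value);   /* assumed I/O port helper */

    #define DMA1_MASK_REG     0x0a   /* Single Mask Register Bit    */
    #define DMA1_MODE_REG     0x0b   /* Mode Register               */
    #define DMA1_CLEAR_FF     0x0c   /* Clear LSB/MSB Flip-Flop     */
    #define DMA1_CHAN2_ADDR   0x04   /* Channel 2 address register  */
    #define DMA1_CHAN2_COUNT  0x05   /* Channel 2 count register    */
    #define DMA_PAGE_CHAN2    0x81   /* Channel 2 page register     */

    /* Arm channel 2 to receive 'len' bytes from the FDC into physical address
     * 'phys' (which must lie below 16Meg). */
    void dma_setup_floppy_read(uint32_t phys, uint16_t len)
    {
        uint16_t count = len - 1;              /* the 8237 is programmed with length - 1 */

        outb(DMA1_MASK_REG, 0x04 | 0x02);      /* mask channel 2 while reprogramming     */

        outb(DMA1_CLEAR_FF, 0x00);             /* reset LSB/MSB flip-flop                */
        outb(DMA1_CHAN2_ADDR, phys & 0xff);            /* address LSB                    */
        outb(DMA1_CHAN2_ADDR, (phys >> 8) & 0xff);     /* address MSB                    */
        outb(DMA_PAGE_CHAN2, (phys >> 16) & 0xff);     /* page register: bits 23-16      */

        outb(DMA1_CLEAR_FF, 0x00);             /* reset flip-flop again for the count    */
        outb(DMA1_CHAN2_COUNT, count & 0xff);          /* count LSB                      */
        outb(DMA1_CHAN2_COUNT, (count >> 8) & 0xff);   /* count MSB                      */

        outb(DMA1_MODE_REG, 0x46);             /* single mode, write to memory, channel 2 */

        outb(DMA1_MASK_REG, 0x02);             /* un-mask channel 2: the channel is armed */
    }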
Many embedded systems make use of DMA controllers. How many of us remember that DMA stands for Direct Memory Access? What idiot invented this meaningless phrase? I figure any CPU cycle directly accesses memory. This is a case of an acronym conveniently sweeping an embarrassing piece of verbal pomposity under the rug where it belongs.

Regardless, DMA is nothing more than a way to bypass the CPU to get to system memory and/or I/O. DMA is usually associated with an I/O device that needs very rapid access to large chunks of RAM. For example - a data logger may need to save a massive burst of data when some event occurs.

DMA requires an extensive amount of special hardware to manage the data transfers and to arbitrate access to the system bus. This might seem to violate our desire to use software wherever possible. However, DMA makes sense when the transfer rates exceed anything possible with software.

Even the fastest loop in assembly language comes burdened with lots of baggage. A short code fragment that reads from a port, stores to memory, increments pointers, decrements a loop counter, and then repeats based on the value of the counter takes quite a few clock cycles per byte copied. A hardware DMA controller can do the same with no wasted cycles and no CPU intervention. Admittedly, modern processors often have blindingly fast looping instructions. The 386's REP MOVS (repeated string move) shifts data much faster than most applications will ever need. However, the latency between a hardware event coming true, and the code being ready to execute the string move, will surely be many microseconds even in the most carefully crafted program - far too much time in many applications.
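For comparison, the per-byte software loop being described looks something like the sketch below (read_port() is an assumed helper that reads one byte from an I/O port). Every iteration burns cycles on the pointer arithmetic, the counter and the branch - exactly the overhead a DMA controller eliminates:

    #include <stdint.h>
    #include <stddef.h>

    extern uint8_t read_port(uint16_t port);   /* assumed helper: read one byte from a port */

    /* Copy 'count' bytes from an I/O port into a buffer, one byte per iteration. */
    void copy_from_port(uint16_t port, uint8_t *dst, size_t count)
    {
        while (count != 0) {
            *dst = read_port(port);   /* read from the port           */
            dst++;                    /* increment the memory pointer */
            count--;                  /* decrement the loop counter   */
        }
    }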
DMA controllers are wondrous and scary things, each with dozens of registers you must program just right to get any sort of response. I think these things are designed by committee, with each member throwing every possible feature into the chip. Though it's nice to have so much capability, writing the code can be a trial. You must first have a very clear idea of exactly what sort of DMA transfers your system needs. DMA was invented to move data between I/O and memory, but now people use it for a variety of other reasons as well.

Traditional Synchronous DMA moves a byte or word at a time between system memory and a peripheral, handshaking with the I/O port for each transfer. This sort of transfer recognizes that the port may not always be in a ready condition; the handshaking is a hardware mechanism to throttle the transactions. With this sort of transfer, the program sets up the controller and then carries on, oblivious to the state of the DMA transaction. The hardware moves one byte or word between memory and I/O each time the I/O port signals it is ready for another transaction. On each ready indication, the DMA controller asserts Bus Request, waits for a Bus Acknowledge in response, and then takes over the bus for a single cycle. Then the DMA controller goes idle again, waiting for another ready signal from the port. Thus, the program and DMA cycles share bus cycles, with the controller winning any contest for control of the bus. Sometimes this is called "Cycle Stealing".

Burst Mode DMA, in contrast, generally assumes that the destination and source addresses can take transfers as fast as the controller can generate them. The program sets up the controller, and then (perhaps after a single ready indication from a port occurs) the entire source block is copied to the destination. The DMA controller gains exclusive access to the bus for the duration of the transfer, during which time the program is effectively shut down. Burst mode DMA can transfer data very rapidly indeed.

Flyby DMA, something that is not supported on many controllers, is a beast of a different color. The DMA controller gains access to the bus and puts the source or destination address out. Then, it initiates what is in effect a read and a write cycle simultaneously. The data is read from the source address and written to the destination at the same time. This implies that either the source or the destination does not require an address, since it is very unlikely that both would use the same one. An example might be copying data from memory to a FIFO port: the source address (a pointer to memory) increments on each transfer, while the destination is always the same FIFO. Flyby transactions are very fast since the read/write cycle pair is reduced to a single cycle. Both burst and synchronous types of transfers can be supported.
Typical Uses
The original IBM PC, that 8088-based monstrosity we all once yearned for but now snicker at, used a DMA controller to generate dynamic RAM refresh addresses. It simply ran a null transfer every few milliseconds to generate the addresses needed by the DRAMs. This was a very clever design - a normal refresh controller is a mess of logic. The only down side was that the PC's RAM was non-functional until the powerup code properly programmed the DMA controller.

Both floppy and hard disk controllers often use DMA to transfer data to and from the drive. This is a natural and perfect application. The software arms the controller and then carries on. The hardware waits for the drive to get to the correct position and then performs the transfer without further reliance on the system software.

If I'm working on a microprocessor with a "free" DMA controller (one built onto the chip), I'll sometimes use it for large memory transfers. This is especially useful with processors with segmented address spaces, like the 80188 or Z180 (the Z180's space is not actually segmented, but is limited to 64k without software intervention to program the MMU). Both of these CPUs include on-board DMA controllers that support transfers over the entire 1 Mb address space of the part. Judicious programming of the controller lets you do a simple and easy memory copy of any size to any address - all without worrying about segment registers or the MMU. This is yet another argument for encapsulation: write a DMA routine once, debug it thoroughly, and then reuse it even for mundane tasks.

Over the years I've profiled a lot of embedded code, and in many instances have found that execution time seems to be really burned up by string copy and move routines inside the C runtime library. If you have a spare DMA controller channel, why not recode portions of the library to use a DMA channel to do the moves? Depending on the processor, a tremendous speed improvement can result. A sketch of that idea appears at the end of this section.

I'll never forget the one time I should have used DMA, but didn't. As a consultant, rushed to get a job done, I carelessly threw together a hardware design figuring somehow I could make things work by tuning the software. For some inexplicable reason I did not put a DMA controller on the system, and suffered for weeks, tuning a few instructions to move data off a tape drive without missing bytes. A little more forethought would have made a big difference.
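Here is that library idea in sketch form. The register names below belong to a hypothetical on-chip DMA controller - they are invented for illustration, not taken from any particular part. The point is simply that the routine falls back to an ordinary copy for short moves and hands long ones to the hardware:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical on-chip DMA controller registers (illustrative only). */
    #define XDMA_SRC    (*(volatile uint32_t *)0x40002000u)
    #define XDMA_DST    (*(volatile uint32_t *)0x40002004u)
    #define XDMA_COUNT  (*(volatile uint32_t *)0x40002008u)
    #define XDMA_CTRL   (*(volatile uint32_t *)0x4000200cu)
    #define XDMA_START  (1u << 0)
    #define XDMA_BUSY   (1u << 1)

    #define DMA_THRESHOLD 256   /* below this size the setup cost outweighs the gain */

    /* memcpy() replacement that offloads large moves to the DMA controller. */
    void *dma_memcpy(void *dst, const void *src, size_t n)
    {
        if (n < DMA_THRESHOLD) {
            return memcpy(dst, src, n);       /* small move: not worth programming the DMA */
        }

        XDMA_SRC   = (uint32_t)(uintptr_t)src;
        XDMA_DST   = (uint32_t)(uintptr_t)dst;
        XDMA_COUNT = (uint32_t)n;
        XDMA_CTRL  = XDMA_START;              /* kick off a memory-to-memory transfer */

        while (XDMA_CTRL & XDMA_BUSY) {
            /* spin until the controller reports completion */
        }
        return dst;
    }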
Troubleshooting DMA
This is a case where a thorough knowledge of the hardware is essential to making the software work. DMA is almost impossible to troubleshoot without using a logic analyzer.

No matter what mode the transfers will ultimately use, and no matter what the source and destination devices are, I always first write a routine to do a memory-to-memory DMA transfer. This is much easier to troubleshoot than DMA to a complex I/O port. You can use your ICE to see if the transfer happened (by looking at the destination block), and to see if exactly the right number of bytes were transferred. A sketch of such a test routine appears at the end of this section.

At some point you'll have to recode to direct the transfer to your device. Hook up a logic analyzer to the DMA signals on the chip to be sure that the addresses and byte count are correct. Check this even if things seem to work - a slight mistake might trash part of your stack or data space.

Some high integration CPUs with internal DMA controllers do not produce any sort of cycle that you can flag as being associated with DMA. This drives me nuts - one lousy extra pin would greatly ease debugging. The only way to track these transfers is to trigger the logic analyzer on address ranges associated with the transfer, but unfortunately these ranges may also have non-DMA activity in them.

Be aware that DMA will destroy your timing calculations. Bit-banging UARTs will not be reliable; carefully crafted timing loops will run slower than expected. In the old days we all counted T-states to figure how long a loop ran, but DMA, prefetchers, cache, and all sorts of modern exoticness make it almost impossible to calculate real execution time.
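As a sketch of that bring-up test: fill the source with a known pattern, run the transfer, and verify every byte of the destination. The dma_memcpy() used here is the hypothetical routine sketched earlier; any memory-to-memory DMA primitive for your controller would do:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    extern void *dma_memcpy(void *dst, const void *src, size_t n);  /* hypothetical DMA copy */

    #define TEST_LEN 1024

    static uint8_t src_buf[TEST_LEN];
    static uint8_t dst_buf[TEST_LEN];

    /* Returns 0 on success, -1 on the first mismatch. */
    int dma_selftest(void)
    {
        size_t i;

        for (i = 0; i < TEST_LEN; i++) {
            src_buf[i] = (uint8_t)(i ^ 0xa5);   /* recognizable, non-trivial pattern */
            dst_buf[i] = 0;
        }

        dma_memcpy(dst_buf, src_buf, TEST_LEN);

        for (i = 0; i < TEST_LEN; i++) {
            if (dst_buf[i] != src_buf[i]) {
                printf("DMA self-test failed at offset %u\n", (unsigned)i);
                return -1;
            }
        }
        return 0;   /* destination matches: exactly the right bytes were moved */
    }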
Scopes
On another subject, some time ago I wrote about using oscilloscopes to debug software. The subject is really too big to do justice in a couple of short magazine pieces. However, I just received a booklet from Tektronix that does justice to the subject. It's called "Basic Concepts - XYZ of Analog and Digital Oscilloscopes", and is their publication number 070869001. Highly recommended. Get the book, borrow a scope, and play around for a while. It's fun and tremendously worthwhile.