Only if it's system RAM being allocated as a mountable volume via a software layer.
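For example, on Linux that software layer can be as thin as a tmpfs mount: ordinary system memory exposed by the kernel as a mountable volume. The mount point and size below are arbitrary choices for illustration:

```shell
# Allocate 512 MB of system RAM as a mountable volume (Linux tmpfs).
# Contents vanish on unmount or power loss; it's just RAM behind a filesystem.
mkdir -p /mnt/ramdrive
mount -t tmpfs -o size=512m tmpfs /mnt/ramdrive

# Use it like any other volume:
cp somefile /mnt/ramdrive/
umount /mnt/ramdrive   # everything stored there is gone
```

That is a RAM drive in the software sense: no dedicated hardware, no persistence, just an allocation out of main memory.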
Wikipedia, which you seem to hold to a higher standard than your fellow community members, agrees with me by the way: http://en.wikipedia.org/wiki/RAM_disk
If you'll notice, right at the top of the article, it even says "For hardware storage devices using RAM, see solid-state drive."
It matters quite a bit; it completely changes the definition. A software solution carving a volume out of RAM on the motherboard would be a RAM drive, while a hardware solution like the ACard counts as an SSD.
The iRAM as a whole is non-volatile thanks to its battery. The type of memory modules it uses doesn't change the fact that, in this configuration, it is an SSD.
I've spelled out the distinction; if you refuse to acknowledge it, then I can't help but pity you as you wallow in your own ignorance.
Erm, the faster the drive gets, the less important DMA buffer size becomes. Something as fast as a DRAM-based SSD with a custom disk controller hooked up to PCIe would be fine with current average DMA buffer sizes.
All DMA is designed to do is keep the processor from getting hung up waiting for a disk operation to complete. A storage device as fast as the one we're talking about would render DMA nearly obsolete: if you loaded the SSD with RAM as fast as your system RAM and used a PCIe slot with enough lanes (i.e. enough bandwidth), you could theoretically switch it over to PIO mode and not notice any performance impact.
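To put rough numbers on that claim (the link speeds below are illustrative assumptions, not the specs of any particular device):

```shell
# Back-of-envelope: time to move one 4 KiB block at different link speeds.
# The stall a CPU would eat busy-waiting in PIO mode shrinks as the link
# gets faster, which is why DMA matters less on a very fast drive.
awk 'BEGIN {
  blk    = 4 * 1024   # bytes per transfer
  sata2  = 300e6      # ~300 MB/s usable (assumed SATA II class link)
  pcie8x = 4e9        # ~4 GB/s (assumed wide PCIe link)
  printf "SATA II : %.1f us per 4 KiB block\n", blk / sata2  * 1e6
  printf "PCIe x8 : %.1f us per 4 KiB block\n", blk / pcie8x * 1e6
}'
```

At microsecond-scale stalls the waiting the CPU would do in PIO mode starts to look like noise, which is the point being argued above.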
Wrong again: they use SATA because it lets the device work with the controller of your choice. It takes supporting the wide range of existing operating systems off their plate and leaves it all up to the SATA controller manufacturer. It also allows simple configuration of advanced drive setups like RAID.
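For instance, because such a drive just presents itself as another SATA disk, striping two of them needs nothing vendor-specific. A hypothetical Linux sketch, with device names assumed for illustration:

```shell
# Hypothetical: stripe two SATA-attached RAM-based drives with Linux md RAID.
# No vendor driver involved; the stock SATA controller stack sees plain disks.
# /dev/sda and /dev/sdb are assumed device names -- check yours first.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/fastraid
```

The same would work with a motherboard's own RAID BIOS, which is exactly the "leave it to the controller manufacturer" point.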