Archived

This topic is now archived and is closed to further replies.

moonlit

RAD Disk

Recommended Posts

This is probably a bit more extreme than most of the posts in here, but I figured I'd throw it out and see what happens...

Way back when Commodore was still alive (the first time around, not the back-from-the-dead-Commodore-in-name-only performance gaming PC company), the Amiga OS had a RAD disk, which was a RAM disk that could survive a soft reset. It could store random data or even the OS itself once an OS had been installed to it. I don't know of any current implementation of this so I got to thinking. Basically I imagine this as an area in RAM which isn't touched by the OS during normal use. That is, no running code, no stored data, just straight untouched.

The easiest way of protecting a chunk of RAM that I know of without rewriting the OS is to use RAM which isn't/can't be addressed by the OS during normal use. Nowadays the most obvious example of this is a 32bit OS on hardware with more than 4GB RAM. Provided nothing erases the contents of this RAM, it should survive as long as power is applied to the RAM (much like the original Amiga implementation).

Now I don't think it'd be impossible to add the same functionality to systems today, given that addresses above 4GB are not used by a 32bit OS and are left untouched. That makes it a perfect place to attempt the above.

If 64bit code exists beneath the OS (hypervisor, BIOS, EFI, bootloader) then that RAM is still addressable and I believe it could be used as a giant RAD disk as long as the OS has some way to read and write to it. So imagine some form of address translation in that 64bit code, perhaps even a disk emulator of some kind, so the OS either sees the space as a hard disk drive or some other storage device which could be accessed using a driver to talk to the underlying code, whether it be in the BIOS or a hypervisor.

If the space was presented as a disk, the OS could be booted from it. Once the OS is installed or imaged to the RAM it would run there believing it was running on a real HDD (again mimicking the RAD disk's ability to support an OS). It would be unaware of the real world situation because it would be unable to "see" the RAM above the 4GB limit, it would only "see" the simulated HDD it's installed on as a real drive according to the hypervisor or BIOS beneath it.

To continue the HDD idea, the unaddressed RAM would be able to hold a straight image so would not require specifically written filesystems, it would work in much the same way as a virtual HDD used in an emulator or virtual machine would, a flat image is presented to the guest OS as a real drive.
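To make the flat-image idea concrete, here's a rough sketch (all names hypothetical, and a bytearray merely standing in for the reserved RAM region a hypervisor or BIOS shim would own) of presenting a flat image through a sector-based interface, the way the guest OS would see a "real" drive:

```python
# Hypothetical sketch (names invented): a flat in-memory image exposed
# through a sector-based interface, the way a hypervisor or BIOS shim
# might present reserved RAM to the guest OS as a plain disk.
SECTOR_SIZE = 512

class RamBackedDisk:
    """A fake block device whose backing store is a flat RAM image."""

    def __init__(self, size_bytes):
        # Stands in for the untouched RAM above the 4GB line.
        self.image = bytearray(size_bytes)

    def read_sector(self, lba):
        off = lba * SECTOR_SIZE
        return bytes(self.image[off:off + SECTOR_SIZE])

    def write_sector(self, lba, data):
        assert len(data) == SECTOR_SIZE
        off = lba * SECTOR_SIZE
        self.image[off:off + SECTOR_SIZE] = data

disk = RamBackedDisk(1024 * 1024)           # a 1MB "drive"
disk.write_sector(3, b"\xab" * SECTOR_SIZE)
assert disk.read_sector(3) == b"\xab" * SECTOR_SIZE
```

The guest never needs a special filesystem: whatever it writes through the sector interface lands in the flat image, exactly as with a virtual machine's disk image file.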

My (lack of) technical knowledge prevents me from exploring this idea any further but I'd be interested in thoughts or opinions on the topic. If anyone believes they could put this down in code form I'd be very interested to see it come to life.

(4:17am, 09/02/09): This could also be used to copy a live OS image, for example a liveCD image, to the RAM of a machine using something like OpenBIOS (previously LinuxBIOS). The image could be copied to a machine's RAM at the start of a day from a USB stick or via a network connection by a small piece of code written into OpenBIOS and, provided the power remains on, the machine could reboot at each logoff and require no media to boot again. This would result in a completely medialess (and potentially USB-less, if network-driven) machine which could be used as a totally clean public terminal with no external connections or drives accessible.

(4:36am, 09/02/09): If the image is downloaded via the network, the OpenBIOS could checksum to ensure that the image is present, correct and untampered. If the image is missing, corrupt or has been tampered with, download a fresh copy before booting. If the checksum is correct, boot as normal.
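The checksum step above is easy to sketch. This assumes SHA-256 and Python purely for illustration; whatever actually runs inside OpenBIOS would need its own small hash implementation:

```python
# Illustrative only: verify an in-RAM boot image against a known-good
# checksum before booting, re-downloading on mismatch.
import hashlib

def image_is_valid(image_bytes, expected_sha256):
    """True if the in-RAM boot image matches the known-good checksum."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_sha256

image = b"pretend this is a liveCD image held in RAM"
known_good = hashlib.sha256(image).hexdigest()  # published alongside the image

assert image_is_valid(image, known_good)             # boot as normal
assert not image_is_valid(image + b"!", known_good)  # fetch a fresh copy first
```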

Share this post


Link to post
Share on other sites

I simply don't understand the benefit from doing this.

Memory keeps its data for a relatively short period of time, but corruption happens at a steady pace. If motherboards came with the option of keeping main memory powered while the computer is off, then there would be lots of benefits to having more memory which could be accessed like a drive, something like the RAM-SATA drives available. But without that, you're still going to have problems with memory addressing, and reading data into the RAM disk from a hard drive is slow.


One of the benefits would be that you could, as long as the machine is never (or rarely) powered down, run your OS from the drive. For example, you could have a machine with 8GB RAM which could run XP in the first 4GB and boot from the second 4GB. Granted, there are better ways to use 8GB of RAM, but given that RAM is much faster than HDDs, disk bottlenecks would disappear and reboots after a failure (BSoD, security updates, etc.) would be ridiculously fast.

If the machine is powered down occasionally but not often, the RAM could be imaged to a HDD before powering down for hardware upgrades or relocation and reimaged to the RAM upon boot. Copying the RAM drive into RAM once would still be quicker than relying on a HDD and the advantages are extended by the uptime of the machine. Obviously if you power down every night then you're not going to need this, but if a machine has weeks or months of uptime it could be beneficial.

If it's never powered down, there's no problem.

It could also be used for data which needs to be accessed very quickly rather than the OS itself.

Edit: This would also be much cheaper than a SATA-RAM drive or similar (potentially free), and doesn't suffer from the SATA bottleneck given fast enough RAM.


Say you have 8GB of memory installed. You're using a 32-bit OS and want to install the OS to the 4GB of space unaddressable by that operating system, allowing you a very fast boot.

You're going to need a bootloader that can support the full memory size, which will copy the data into the area usable by the OS. The OS then needs to be able to read that data to know what it's got. You can't turn the partitioned space into a disk drive to read and write to without a hypervisor, which is going to make everything funky. Because the OS now needs to be aware of what is in main memory, its boot sequence needs altering, which means you can't get Windows to do this as you can't alter the code.

The next problem is that the area of memory which is not used by the OS is not contiguous. Virtual-to-physical address translation in the processor is interleaved across all the DIMMs, so if you had 4 x 2GB modules the OS would be using the first 1GB of each DIMM. The bootloader now needs to know which areas of memory it can read and write, and that's going to be non-trivial to implement.
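The firmware's memory map at least tells you where the holes are. A toy illustration (e820-style entries with made-up values, not real firmware output), filtering for the usable RAM above the 4GB line:

```python
# Toy illustration with invented values: walk an e820-style memory map
# and keep only the usable regions (or the parts of regions) above 4GB.
FOUR_GB = 4 * 1024 ** 3

memory_map = [
    (0x000000000, 0x0009FC00, "usable"),
    (0x000100000, 0xBFF00000, "usable"),     # entirely below 4GB
    (0x100000000, 0x100000000, "usable"),    # 4GB sitting above the line
]

def regions_above_4gb(mmap):
    regions = []
    for start, length, kind in mmap:
        if kind != "usable":
            continue
        lo = max(start, FOUR_GB)      # clip to the 4GB boundary
        hi = start + length
        if hi > lo:
            regions.append((lo, hi - lo))
    return regions

assert regions_above_4gb(memory_map) == [(0x100000000, 0x100000000)]
```

Even with the map in hand, interleaving across DIMMs means "one flat disk" is really a scatter-gather list of ranges, which is where the non-trivial part comes in.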

I think the only practical solution to this is a hypervisor, but even with a very small and lightweight hypervisor you're still going to lose performance for what I would consider not a great benefit. This is of course unless you wanted to do a hardware solution, either on the motherboard or perhaps a PCIe FPGA memory controller.


While I agree with moonlite that the feasibility of this project is a bit questionable, gigabyte does make a pci card which allow you to use some ram as a hard disk. http://techreport.com/articles.x/9312

You could conceivably install the os on the ram disk and then connect to it from a floopy or flash drive loaded with grub or lilo.

Though, performance per dollar, I would say it would be a better value just to buy a raptor hard drive.

While I agree with Moonlit, I think that the feasibility of this project is a bit questionable. Gigabyte makes a PCI card which would allow you to use some ram as a hard disk. http://techreport.com/articles.x/9312

You could conceivably install an OS on the RAM Disk and then connect to it from a floppy or flash drive loaded with grub or lilo.

Though, performance per dollar, it looks to me as if it would be a better value to just buy a Raptor hard drive.

OK... let's take that one line at a time...

While I agree with Moonlit, I think that the feasibility of this project is a bit questionable. Gigabyte makes a PCI card which would allow you to use some ram as a hard disk. http://techreport.com/articles.x/9312
That's not quite what Moonlit had in mind, he wants to use main system RAM as a RAM Drive, not build a SSD out of yet more RAM.

You could conceivably install an OS on the RAM Disk and connect to it from a floppy or flash drive loaded with grub or lilo.
The Gigabyte iRAM is not a RAM Drive, it's a SSD (Solid State Drive).

You also don't need any special bootloader setup for the Gigabyte iRAM to work either; it uses a trickle-charge from the PCI bus (which is available even when the computer is off) as well as a battery pack (in case of power failure) to keep the information stored in the RAM alive, and it connects via SATA, so it acts just like a regular hard disk... just a whole lot faster.

Though, performance per dollar, it looks to me as if it would be a better value to just buy a Raptor hard drive.
I'm sorry, but the Gigabyte iRAM wins "Performance per dollar" hands down, you would need a pile of Raptors in RAID to match it across the board.

Now, capacity is another matter, a Raptor offers a lot more space per dollar than the iRAM.

The Gigabyte iRAM is not a RAM Drive, it's a SSD (Solid State Drive).

Actually the Gigabyte iRAM is a RAM Drive; if you hadn't noticed, it is made up of RAM.

A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. A SSD emulates a hard disk drive interface, thus easily replacing it in most applications. An SSD using SRAM or DRAM (instead of flash memory) is often called a RAM-drive.

From Wikipedia.

Also, the ACard ANS-9010 is the RAM disk currently available; if you read reviews of it you'll find that it is not particularly faster than a conventional flash-based SSD, while the SSD benefits from considerably more capacity and persistence. The ACard requires a Compact Flash card to back up to, which takes about 20 mins, which is horrible. The SSD and the ACard are also about the same price, making the SSD a no-brainer.

The only way a RAM disk is going to be useful now is if it transfers across the PCI-E bus, but the problem is that I believe you're going to have to use Direct Memory Access for that, so what you could end up with is having no system memory available as it's all mapped to external devices. In fact I really wonder what happens there? Hopefully the architects have been clever and said that a certain amount of system memory can't be mapped away.

Actually the Gigabyte iRAM is a RAM Drive; if you hadn't noticed, it is made up of RAM.

Incorrect, you're missing a key distinction...A RAM disk, in the traditional sense, is a software layer that enables applications to transparently use RAM (often a segment of main memory) as if it were a hard disk or other secondary storage. Quite obviously, this does not describe the Gigabyte iRAM.

A solid-state drive (SSD) is any data storage device that uses solid-state memory to store persistent data. DRAM is solid state memory, and is made persistent via the enclosed battery, ergo the Gigabyte iRAM is a SSD. Wikipedia calling such SSDs based on DRAM "RAM Drives" is a misnomer (just as SSDs based on NAND Flash are sometimes referred to as "Flash Drives" though they bear little resemblance to their tiny USB cousins).

If that wasn't clear enough, a SSD is hardware, a RAM Disk is software.

The only way a DRAM based SSD is going to be useful now is if it transfers across the PCI-E bus, but the problem is that I believe you're going to have to use Direct Memory Access for that, so what you could end up with is having no system memory available as it's all mapped to external devices. In fact I really wonder what happens there? Hopefully the architects have been clever and said that a certain amount of system memory can't be mapped away.

That part you got right, to take advantage of a DRAM based SSD you would need the speeds offered by PCI Express. I imagine a 4x slot would be adequate to start seeing some serious performance gains.
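Rough numbers for that claim (approximate period figures): PCIe 1.x moves about 250MB/s per lane after 8b/10b encoding overhead, so a 4x slot already clears SATA II's 300MB/s ceiling several times over:

```python
# Back-of-envelope comparison using approximate period figures.
PCIE1_LANE_MB_S = 250   # ~2.5GT/s per lane minus 8b/10b overhead
SATA2_MB_S = 300        # SATA II interface ceiling

x4_bandwidth = 4 * PCIE1_LANE_MB_S
assert x4_bandwidth == 1000             # ~1GB/s aggregate
assert x4_bandwidth > 3 * SATA2_MB_S    # over 3x the SATA II ceiling
```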

As for the DMA problem, you're over thinking it. All the PCIe card has to do is represent itself to the OS as a disk controller (with an attached drive). Problem solved.


Because arguing semantics is always productive. Guess I'm the only Amiga fan then. Oh well, can't say I didn't expect the idea to get trashed on here.

Incorrect, you're missing a key distinction...A RAM disk, in the traditional sense, is a software layer that enables applications to transparently use RAM (often a segment of main memory) as if it were a hard disk or other secondary storage. Quite obviously, this does not describe the Gigabyte iRAM.

A solid-state drive (SSD) is any data storage device that uses solid-state memory to store persistent data. DRAM is solid state memory, and is made persistent via the enclosed battery, ergo the Gigabyte iRAM is a SSD. Wikipedia calling such SSDs based on DRAM "RAM Drives" is a misnomer (just as SSDs based on NAND Flash are sometimes referred to as "Flash Drives" though they bear little resemblance to their tiny USB cousins).

If that wasn't clear enough, a SSD is hardware, a RAM Disk is software.

That part you got right, to take advantage of a DRAM based SSD you would need the speeds offered by PCI Express. I imagine a 4x slot would be adequate to start seeing some serious performance gains.

As for the DMA problem, you're over thinking it. All the PCIe card has to do is represent itself to the OS as a disk controller (with an attached drive). Problem solved.

In industry, if RAM, as in DRAM or SRAM, is used as a disk, as in you can read and write to it like a hard drive, it is referred to as a RAM disk. Whether it is on the motherboard as part of system memory or on a device like the ACard doesn't matter. It is referred to as a RAM disk because of features of the RAM, like being volatile.

A software version uses system memory: a driver takes a portion of memory and mounts a filesystem on it, and these are pretty trivial to implement. A hardware version is the ACard, but they are both RAM disks, because they both use RAM to store a filesystem.
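To show just how trivial the software flavour is at its core, here's a toy (not how tmpfs or the Amiga RAM: handler actually work, since real implementations plug into the OS's filesystem layer rather than living inside one process):

```python
# Toy software RAM disk: a dict in main memory pretending to be a
# filesystem. Real implementations (tmpfs, AmigaOS RAM:) hook into
# the kernel's filesystem layer instead of living in one process.
class RamFs:
    def __init__(self):
        self.files = {}            # path -> bytes, all held in RAM

    def write(self, path, data):
        self.files[path] = bytes(data)

    def read(self, path):
        return self.files[path]

fs = RamFs()
fs.write("RAM:boot.img", b"\x90" * 16)
assert fs.read("RAM:boot.img") == b"\x90" * 16
```

Everything, naming and persistence included, is only as durable as the power to the RAM, which is exactly the volatility point above.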

The reason why I raise the problem with DMA is that you can't use the RAM behind a disk controller at full performance; a disk controller uses DMA to transfer data to the processor. Disk controllers only use a small amount of memory because normally they are slow and the CPU will take the data before there is more to load. In this case it won't be; your bottleneck is now going to be waiting for the CPU to take the data before you can read any more into system memory. You've now lost all the benefits of a RAM disk because you're spending half your time waiting. This is why you need a huge DMA area, which is impractical.

This is why the Gigabyte iRAM and the ACard use the SATA interface: people don't want to lose all of their system memory to a device which uses all the addressable space as DMA.

In industry, if RAM, as in DRAM or SRAM, is used as a disk, as in you can read and write to it like a hard drive, it is referred to as a RAM disk.

Only if it's system RAM being allocated as a mountable volume via a software layer.

Wikipedia, which you seem to hold to a higher standard than your fellow community members, agrees with me by the way: http://en.wikipedia.org/wiki/RAM_disk

If you'll notice, right at the top of the article, it even says "For hardware storage devices using RAM, see solid-state drive."

Whether it is on the motherboard as part of system memory or on a device like the ACard doesn't matter.

It matters quite a bit, it completely changes the definition. A software solution involving RAM on the motherboard would be a RAM Drive, a hardware solution like the ACard counts as a SSD.

It is referred to as a RAM disk because of features of the RAM, like being volatile.

As a whole, the iRAM is not volatile because of its battery. The type of memory modules used does not change the fact that, in this configuration, it is an SSD.

I've spelled out the distinction, if you refuse to acknowledge it then I can't help but pity you as you wallow in your own ignorance.

The reason why I raise the problem with DMA is that you can't use the RAM behind a disk controller at full performance; a disk controller uses DMA to transfer data to the processor. Disk controllers only use a small amount of memory because normally they are slow and the CPU will take the data before there is more to load. In this case it won't be; your bottleneck is now going to be waiting for the CPU to take the data before you can read any more into system memory. You've now lost all the benefits of a RAM disk because you're spending half your time waiting. This is why you need a huge DMA area, which is impractical.

Erm, the faster the drive gets, the less important DMA buffer size becomes. Something as fast as a DRAM based SSD with a custom disk controller hooked up to PCIe would be fine on current average DMA buffer sizes.

All DMA is designed to do is keep the processor from getting hung up waiting for a disk operation to complete. A storage device as fast as the one we're talking about would render DMA nearly obsolete. If you were able to load up the SSD with RAM as fast as your system RAM and used a wide enough (as in bandwidth) PCIe slot, you could theoretically switch it over to PIO mode and not notice any performance impact.

This is why the Gigabyte iRAM and the ACard use the SATA interface: people don't want to lose all of their system memory to a device which uses all the addressable space as DMA.

Wrong again, they use SATA because it allows the device to use the controller of your choice. It takes supporting the wide range of existing operating systems off of their plate and leaves it all up to the SATA controller manufacturer. It also allows simple configuration of advanced drive setups like RAID.


I'm not going to argue the toss any further because there's no point. I've implemented software RAM disks and work with hardware implementations, and my employer and the manufacturers that I work with are quite happy with the definitions that I have previously used.


I saw a magazine ad once where a company made HDDs from RAM, replacing the need for a SAN or large RAID setups. This was meant for data centres with redundant power, expected to never be turned off, but the idea was just what Moonlit is getting at: running from a portion of RAM while sharing the rest for normal purposes, or it could be partitioned off, or whatever. They had some mechanism (battery backup, I think) which allowed you to freeze a state in memory in the event of a power failure, similar I guess to the i-RAM disks. The advertisement made it seem like this was the next big thing for cheap server space, but I never heard any more about it after that. Guess it never caught on. It wasn't meant for home users though, more for terabytes of storage space with fast disk access, like that of a SAN on fibre channel.


This idea of an integrated chip with an OS on it sounds a lot like what you're implying, which no doubt sounds like a great idea to improve performance, though the advantage of the integrated chip rather than RAM would be that it would remain after power-off. What I don't get is how you can expect faster boot times from RAM when any memory that stores the OS is flushed on power-down. The OS would need to be reloaded to this special allocation of RAM from a hard drive before it could boot.

Really it boils down to developing motherboard-sized ICs with enough storage capacity for a modern-day OS. I can see SSDs one day getting small enough to integrate on a motherboard. Until that time it just isn't feasible.

What I don't get is how you can expect faster boot times from RAM when any memory that stores the OS is flushed on power-down. The OS would need to be reloaded to this special allocation of RAM from a hard drive before it could boot.

True, but Moonlit wasn't talking about a full power down, he was talking about a soft-reset (also known as a "warm boot").

the Amiga OS had a RAD disk, which was a RAM disk that could survive a soft reset.

This is a situation where the OS restarts but the computer itself stays running (POST and BIOS aren't re-run). The RAM never loses power, so the information stored there is preserved.

Similar functionality can be seen on modern hardware when dropping into ACPI mode S3 (suspend to RAM), though in the case of S3 mode, the OS is being suspended and resumed rather than restarted.

Edit: After some discussion with Moonlit elsewhere, it seems there's already a commercial product that does what he's after:

http://www.superspeed.com/desktop/ramdisk.php


It wasn't so much something I was after, rather something I was sure should exist by now. If it was possible to a reasonable degree nigh on 20 years ago, it should be very easy now. Not so much, it seems, but at least now I know it's possible and apparently it's a marketable product.

Back to the drawing board for me, but I'm happy with the result, it exists as I knew it should.


The problem with things like the iRAM or ACard is that while the memory will happily run at speeds rated in GB/s, it's hooked into a SATA controller rated in MB/s, so until these can be plugged into something other than SATA they are never going to live up to their full potential.


While I do agree that there is an obvious bottleneck created by running the RAM disk through a SATA controller, it still generates a large enough performance increase to outperform a multi-disk RAID 0 setup. Though, I only recommended the device because it was the closest product representing a modern version of Moonlit's idea without using the RAM that the processor uses. I still maintain that the performance-per-dollar ratio of a hard drive is much better than that of a RAM disk.

