Sc00bz Posted January 28, 2009

I had this problem: I wanted to write random data to a hard drive before setting it up with encryption. The problem is that I have 6 TB of disk and /dev/urandom was running at around 5.5 MB/s, which means it will take almost two weeks to finish. I came up with a solution, but I don't know if it's that good, since I have no formal training in cryptography. I wrote my own random number generator that takes a seed from /dev/urandom, hashes it with SHA-256, increments the seed, hashes again, and so on. Kinda like this, except I wrote SHA-256 in SSE2 (64-bit x86), which gave me over 100 MB/s per core on a Pentium D.

SHA256_CTX ctx;                        /* from the custom SSE2 SHA-256 implementation */
int seed[16];                          /* 64-byte counter block */
FILE *pFile = fopen("/dev/urandom", "rb");
while (1)
{
    fread(seed, 4, 16, pFile);         /* reseed with 64 bytes from /dev/urandom */
    for (int a = 0; a < 1048576; a++)
    {
        sha256_init(&ctx);
        sha256_update(&ctx, seed, 64); /* hash the 64-byte counter block */
        fwrite(&ctx, 4, 8, stdout);    /* write the first 32 bytes of the context (the hash state) */
        seed[0]++;                     /* increment the counter */
    }
}

What is a faster way than /dev/urandom that is still cryptographically secure? Also, I'd like to know whether my solution is cryptographically secure.
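For reference, here is a minimal sketch of the same counter-mode idea built on OpenSSL's SHA-256 instead of the hand-written SSE2 code (an assumption on my part; it needs libcrypto and is compiled with -lcrypto). Unlike the snippet above, it writes the finalized 32-byte digest rather than the raw context:

#include <stdio.h>
#include <stdint.h>
#include <openssl/sha.h>

int main(void)
{
    uint32_t seed[16];                              /* 64-byte counter block */
    unsigned char digest[SHA256_DIGEST_LENGTH];     /* 32 bytes of output per iteration */
    FILE *urnd = fopen("/dev/urandom", "rb");
    if (urnd == NULL)
        return 1;

    for (;;)
    {
        /* Reseed from /dev/urandom every 1048576 blocks (32 MiB of output). */
        if (fread(seed, sizeof seed, 1, urnd) != 1)
            return 1;
        for (uint32_t a = 0; a < 1048576; a++)
        {
            SHA256((const unsigned char *)seed, sizeof seed, digest);
            if (fwrite(digest, sizeof digest, 1, stdout) != 1)
                return 0;                           /* downstream closed the pipe */
            seed[0]++;                              /* increment the counter */
        }
    }
}

It pipes into dd the same way as the original; how close it gets to 100 MB/s will depend on how fast libcrypto's SHA-256 is on the machine.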
Sparda Posted January 28, 2009

What command did you use to write urandom to the disk? There may be some performance-improving options you could add, such as increasing the block size. As another suggestion, you could break the RAID (if possible) and then randomize the disks separately.
Sc00bz Posted January 28, 2009 (Author)

dd if=/dev/urandom of=/dev/hda

I'm doing that for each hard drive.

dd if=/dev/urandom of=/dev/null
97724+0 records in
97723+0 records out
50034176 bytes (50 MB) copied, 8.85633 seconds, 5.6 MB/s

./rndsha256 | dd of=/dev/null
2553448+0 records in
2553448+0 records out
1307365376 bytes (1.3 GB) copied, 9.67982 seconds, 135 MB/s
Sparda Posted January 28, 2009

Try playing with the ibs and obs options (bytes read/written at once). For example, try [i,o]bs=1k, [i,o]bs=512b, [i,o]bs=2k. Might make a difference.
WhollyMindless Posted January 29, 2009

> Try playing with the ibs and obs options (bytes read/written at once). For example, try [i,o]bs=1k, [i,o]bs=512b, [i,o]bs=2k. Might make a difference.

That should make a huge difference. I'd definitely use the same block size in and out, but it might also be worth trying a very big block size (maybe a third of physical memory) to do a few big reads and writes. Your original might have been doing it a single byte at a time, and writing in blocks that small is never going to be fast. Measure and verify. (And tell us what you find. At 6 TB you're probably using some hardware to manage the array, and it might have preferences as to block size.)
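One way to compare block sizes (arbitrary example sizes and counts, chosen so each run copies roughly 50 MB, with /dev/null as the output so only the read side is measured) might look like:

dd if=/dev/urandom of=/dev/null bs=64k count=800
dd if=/dev/urandom of=/dev/null bs=1M count=50
dd if=/dev/urandom of=/dev/null bs=16M count=3

The reported MB/s figures are then directly comparable, and the winning size can be reused as bs= when writing to the actual drives.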
Sc00bz Posted February 2, 2009 (Author)

I tested it with a bunch of block sizes and it looks like 64k wins at 5.8 MB/s. I'm also outputting to /dev/null, which is faster than writing to disk. I ran an analysis program for "detecting non-randomness in binary sequences constructed using random number generators and pseudo-random number generators utilized in cryptographic applications" and it looks like the program I wrote passed more tests than /dev/urandom. Also, I don't think changing the block size to anything else will magically increase speed, since I am just dumping the data into /dev/null.

Oh, also "bs=512b" is not 512 bytes; it is actually 512*512 bytes, since b stands for block (512 bytes).

Something weird that I found: if you pipe something into dd and set the block size to something like 1 MiB, it messes up. For some reason these don't work:

cat /dev/zero | dd of=/dev/null bs=1024k count=1k
7+1017 records in
7+1017 records out
12115968 bytes (12 MB) copied, 0.01437 seconds, 843 MB/s

dd if=/dev/zero | dd of=/dev/null bs=1024k count=1k
1+1023 records in
1+1023 records out
1648128 bytes (1.6 MB) copied, 0.006307 seconds, 261 MB/s

But these work:

cat /dev/zero | dd of=/dev/null bs=1k count=1024k
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 2.18704 seconds, 491 MB/s

dd if=/dev/zero bs=1024k | dd of=/dev/null bs=1024k count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.677182 seconds, 1.6 GB/s

The analysis program can be found here: http://csrc.nist.gov/groups/ST/toolkit/rng/index.html
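The "don't work" cases are most likely partial reads from the pipe rather than dd corrupting anything: when its input is a pipe, dd's read() can return fewer than bs bytes, each short read is counted as a partial record (the "+1017" part), and count=1k stops after 1024 reads of either kind, so far less than 1 GiB gets copied. If the dd build is recent enough, GNU dd's iflag=fullblock tells it to keep reading until each input block is full, for example:

cat /dev/zero | dd of=/dev/null bs=1024k count=1k iflag=fullblock

Setting the same large bs on the producing dd, as in the last working example above, apparently avoids the problem as well.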