mathematics e+110


When your calculator can't display an insanely large number, you might see something like 1.10000101110011e+110

I'm doing some math with large numbers and floats... how do I solve the equation above? The answer is about 111 digits long, something like this

11000010111001101110011011010000.... and so on

Edited by i8igmac

Based on what you're saying, I'd start looking at GMP, the GNU Multiple Precision library, which is specifically designed to work with arbitrarily long numbers. I used it at one point to naively try to brute-force a 2048-bit RSA key... no, I didn't find it (shocker, I know), but the code was nice.


I'll share my solution to this math problem using the Ruby interpreter... it's probably similar in Python...

"%d" % 1.1000010101e+110

The e+110 means the number is scaled by ten to the 110th power; it's like moving the decimal point 110 places to the right.

For math on numbers this large, a library like BigDecimal is required, otherwise the interpreter will give an inaccurate approximation.

val = BigDecimal.new(10101010010101010101010100404040)

val*val

Instead of outputting an unusable approximation or an Infinity error, you will see the multiplication happen and get an incredibly large, exact number.
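A minimal sketch of the difference, using Ruby's stdlib bigdecimal (note that newer Rubies spell the constructor BigDecimal("...") rather than BigDecimal.new):

```ruby
require 'bigdecimal'

# Plain Floats round off after ~15-16 significant digits
puts 0.1 + 0.2                 # 0.30000000000000004

# BigDecimal keeps every decimal digit exactly
sum = BigDecimal("0.1") + BigDecimal("0.2")
puts sum.to_s("F")             # 0.3

# Squaring a huge value stays exact too
v  = BigDecimal("10101010010101010101010100404040")
sq = v * v
puts sq.to_s("F")
```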

Edited by i8igmac

If what you're doing is integer-based, Python should be able to do it without any additional libraries:

python -c 'print (110000101010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 * 10)';

It also seems that the upper limit for floats in Python is ~1.79e+308 (at least on my system), which might be enough for what you're trying to do.
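Ruby has the same split, for what it's worth: Float tops out at the IEEE 754 double limit (~1.8e308), while Integer is arbitrary precision. A quick check:

```ruby
# Floats are 64-bit doubles: bounded and approximate
puts Float::MAX              # 1.7976931348623157e+308
puts Float::MAX * 2          # Infinity

# Integers grow without bound and stay exact
digits = (2**1000).to_s.length
puts digits                  # 302
```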

Edited by fugu

Try %lld which, at least to C's printf function, means that the argument isn't a 32-bit signed integer but instead "an at least 64-bit wide signed integer" (and a quick test on my 64-bit machine proves 64-bit is gcc's width of a long long int - Python might use something else though).

Oh, and also in C for floats you should use %f or the number would potentially be misinterpreted.
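Ruby's format operator isn't constrained by C integer widths, incidentally, since %d formats through Integer:

```ruby
# No 32/64-bit cap: %d handles bignums directly
puts "%d" % (2**70)    # 1180591620717411303424

# Floats still want %f (or %e for scientific notation)
puts "%f" % 1.5        # 1.500000
```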

Edited by cooper

OK, so. I was bored, and here's what I was trying to do...

Take a file's contents and convert them to numbers using pack and unpack, via the ASCII values of each byte.

"hello".unpack("c*").join => "104101108108111"

File.read(path).unpack("c*").join => "104101108108111"
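Note that unpack("c*") actually returns an array of byte values; the single concatenated number comes from joining them, and that digit-concatenation isn't reversible in general (you can't tell where one byte's digits stop). Treating the bytes as one big base-256 integer is reversible; a sketch:

```ruby
bytes = "hello".unpack("c*")             # [104, 101, 108, 108, 111]
puts bytes.join                          # 104101108108111

# Reversible alternative: the bytes as a single base-256 integer
n = "hello".bytes.inject(0) { |acc, b| acc * 256 + b }

decoded = []
while n > 0
  decoded.unshift(n % 256)               # peel off the lowest byte
  n /= 256
end
puts decoded.pack("c*")                  # hello
```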

So, if we divide this big number by 2 (35 times), we can shrink it down to a pretty small piece of data

104101108108111/(2**35)=>3029

And then if you reverse this process with multiplication

3029*(2**35)=> 104101108108111

We have just compressed a file down to a smaller size, to save hard drive space or speed up file transfers.
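A quick check of the numbers above in irb shows the catch, though: integer division discards a remainder that the multiplication can't restore.

```ruby
n = 104101108108111
q = n / (2**35)          # integer division truncates
puts q                   # 3029
puts q * (2**35)         # 104075647516672 -- not the original n
puts n % (2**35)         # the discarded remainder
```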

Ummm... but wait... I lied about that last math equation. With large numbers you must use floats, which include a long trailing list of decimals, which actually makes the content grow in size...

This has been fun thinking about... any thoughts?


Heh. Well, let's just say that that's not how this works... Compression algorithms work by looking for repeating sequences and referencing them using fewer characters than the original. The problem is that there are only so many sequences you can represent in, say, 3 bits, so you're going to end up using more than the original amount of bits for some uncommon sequences, and the trick is to come out ahead.

This is how a float is represented in memory. Take a good look at it: you'll find there's a number of bits for the actual number (the mantissa) and a number of bits for the exponent, the number of times you need to multiply or, if it's negative, divide by 2 to get the actual value. So you went from 32 bits representing an unsigned number (after all, it won't be negative) to 23 bits representing a signed number (there's no unsigned float, so you lose 1 bit on account of that already), plus 8 bits for the exponent, which is oddly represented but in effect signed as well, so you drop another bit there too.

You'll hopefully also notice that they don't use an equals sign for a float but those 2 squiggly lines (I'm sure there's a name for it but I don't know it - if you do, please inform me, as I genuinely want to know), meaning it's an approximation. Floats are known (and designed) to be non-exact: you lose precision to be able to represent a (much) larger range of numbers.

The one thing you can't have when dealing with compression is losing precision, because that precision is actual data. Lossy compression works (somewhat) for JPEG and MP3 because you don't notice the loss in the end result for certain types of images/sounds - though there too, certain inputs exist that really show how much the result is altered. General-purpose compression is required to return an exact copy of the original, so losing data won't be acceptable.
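To poke at that layout concretely, Ruby can dump a float's raw bits; a sketch using pack/unpack, where "g" is the directive for a big-endian single-precision float:

```ruby
# The 32 raw bits of single-precision 1.0
bits = [1.0].pack("g").unpack1("B*")
puts bits                 # 00111111100000000000000000000000

sign     = bits[0]        # "0"        -> positive
exponent = bits[1, 8]     # "01111111" -> biased 127, i.e. 2**0
mantissa = bits[9, 23]    # all zeros  -> implicit leading 1, so 1.0
puts [sign, exponent, mantissa].inspect
```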


So. I'll try to ask a new question...

x=999999999999999

y=9999999999999999

So, when I perform math with a number 15 digits long or less, the results are proper and accurate...

Math.sqrt(999*999)=>999

Math.sqrt(x*x)=>999999999999999

Math.sqrt(y*y)=>1000000000000000

So when I try a number 16 digits long, the precision overflows and the output is inaccurate...

I know people here are not Ruby fans, but any ideas on how I can perform accurate math on much larger numbers... or at least on numbers longer than 16 digits?
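In Ruby specifically, the culprit is that Math.sqrt converts through Float (a 64-bit double, good for ~15-17 significant digits). Staying in Integer sidesteps the problem; Integer.sqrt has been built in since Ruby 2.5:

```ruby
y = 9_999_999_999_999_999       # 16 digits: no exact Float exists for this

# Math.sqrt round-trips through Float and loses the last digit
puts Math.sqrt(y * y)

# Integer.sqrt works in arbitrary-precision integers, so it's exact
puts Integer.sqrt(y * y)        # 9999999999999999
```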


You need a specialist 'bignum' library. That GMP thing I linked to is one such example, and I suppose you're looking for something in Ruby that exposes it. The thing to avoid here is C's native types, specifically because they're too restrictive.
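Worth knowing: for integer math specifically, Ruby already does this natively. Integer transparently switches to a bignum representation under the hood, so no GMP binding is needed until you want exact decimal fractions (BigDecimal) or raw speed:

```ruby
x = 2**2048                  # far past any native C integer type
puts x.bit_length            # 2049

exact = (x * x) == 2**4096   # no overflow, no rounding
puts exact                   # true
```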


It also depends on how you're doing your math; if you can avoid floats in Ruby, you can do things like this:

```
ruby -e 'x=999999999999999; y=9999999999999999; b = x*x; puts b; puts b/x; c = y*y; puts c; puts c/y'
999999999999998000000000000001
999999999999999
99999999999999980000000000000001
9999999999999999
```
Edited by fugu

I have found the proper methods for BigDecimal... working with arbitrarily large numbers... kinda cool...

So... I think it was bzip2 that holds the best compression rates for reducing file size? Maybe a little more time consuming?

Let's say I have a file with 1000 random chars... what's the smallest we can make the output file, and with what tool?



If it's actually, genuinely random, compression is going to suck. That's why you can't repeat-compress a file to get it even smaller, and why something like a movie or MP3 file won't compress further.

The best compression I think was provided by xz, which is a variation of LZMA. Most compression algorithms try to find a mid-way between speed and compression so keep an eye on that - you might be able to tweak invocation parameters to get more compression.
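The random-data point is easy to demonstrate with Ruby's stdlib zlib (the same Deflate family gzip uses):

```ruby
require 'zlib'
require 'securerandom'

random     = SecureRandom.random_bytes(1000)
repetitive = "a" * 1000

# Random bytes don't shrink; deflate even adds a little overhead
puts Zlib::Deflate.deflate(random, Zlib::BEST_COMPRESSION).bytesize

# Repetitive input collapses to a handful of bytes
puts Zlib::Deflate.deflate(repetitive, Zlib::BEST_COMPRESSION).bytesize
```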


• 2 weeks later...

I have done some testing.

I created a directory called gzip

and another directory called mine

gzip -9 input_file /gzip

Just for testing purposes, I chose about 6 files: png, jpg, gz, pdf, zip and 7z...

I compressed them with gzip -9 to the destination gzip dir... the contents of this folder add up to about 1000K.

I ran the same files through my compression Ruby script, then gzip -9, to the destination directory mine... 700K.

I win!

Edited by i8igmac

By that logic the 'rm' compressor has both of them beat. Restore everything to its original form and do a 'diff' to ensure it's in fact bit-for-bit identical. Only when that's true do you actually win.
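Scripted, the test looks like this; Zlib stands in here as a placeholder for whatever encode/decode pair is being judged:

```ruby
require 'zlib'

data = File.binread(__FILE__)   # any file will do as input

compressed = Zlib::Deflate.deflate(data, Zlib::BEST_COMPRESSION)
restored   = Zlib::Inflate.inflate(compressed)

# A compressor only counts if the round trip is bit-for-bit identical
raise "not lossless!" unless restored == data
puts "lossless: #{data.bytesize} -> #{compressed.bytesize} bytes"
```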


It is true :-) I win.

Care to share some compression examples to achieve the smallest output files?

tar -cf - foo/ | xz -9 -c - > foo.tar.xz

I'll test this when I get home...

A friend told me to start watching a TV show called Silicon Valley; after he explained the plot, my brain started turning Ruby circles lol...


I just ran another test...

ls -slh gzip_only

```
 40K -rw-r--r-- 1 root root  40K Nov  3 06:42 10k most common.zip.gz
60K -rw-r--r-- 1 root root  53K Nov  3 06:42 AutoProxy.7z.gz
12K -rw-r--r-- 1 root root  12K Nov  3 06:42 changeset_29615.zip.gz
4.0K -rw-r--r-- 1 root root  548 Nov  3 06:42 Donate.png.gz
24K -rw-r--r-- 1 root root  22K Nov  3 06:42 index.png.gz
156K -rw-r--r-- 1 root root 151K Nov  3 06:42 Long Biquad Yagi Template.pdf.gz
872K -rw-r--r-- 1 root root 867K Nov  3 07:27 raspberry.jpg.gz
```

tar -cf - gzip_only/ | xz -9 -c - > gzip.tar.xz

ls -sh gzip.tar.xz

```
1.2M gzip.tar.xz
```

And now the same files run through my compression!

tar -cf - mine_then_gzip/ | xz -9 -c - > mine.tar.xz

ls -sh mine.tar.xz

```
456K mine.tar.xz
```

Edited by i8igmac

If this is verified you'll have made something that's quite the game changer. Even if it takes really long to process that data, institutions would rally around you and declare you their new god.

Imagine the cost savings something like The Internet Archive could achieve if they could cut their storage costs in half because your compressor allows it?


I'll have to make a demo... you will be my way into the party...

It took my machine maybe 6 hours to generate the table, and I need to make corrections... I hope to find a working CUDA GPU example... people outside the hacking community don't want to accept the idea of a GPU running Ruby threads, so when I ask the Ruby geniuses for help on this subject they talk down to me...

This stuff is a hobby to me, and my wife doesn't let me on my computers... it will take me a little time.

Edited by i8igmac

Running some more tests.

I am very close to successfully undoing this process: an encode function and a decode function. There is a very small mistake I need to correct, but I can assure you my method works.

I'm running late for work. I only get 30 minutes in the morning to mess around... I'll be late for work!

If there are better compression tools I can compare my results to, let me know. I'm currently testing against xz -9.

Edited by i8igmac

Try "xz -9e" where the extra e stands for "extreme". Makes the compressor go slower in an effort to compress more.
