
aNoNoMoose

Active Members
  • Posts: 6
  • Joined
  • Last visited

Everything posted by aNoNoMoose

  1. I, too, would like to join this! I'm currently studying circuit design and device programming at the Royal Institute of Technology in Sweden, and have been a Java developer for 6 years.
  2. From what I understand, the NTLM hash is always stored. The LM hash is only stored for backwards compatibility, and can be disabled as others have said. LM hashes do not preserve case (but NTLM hashes do), so if the authentication procedure is coded properly (checking against NTLM whenever possible and ignoring the LM hash), cracking the LM hash still does not tell you which case each letter needs to be in for a successful login (a quick sketch of that case search follows at the end of this post).

     As for the size of tables and so on, this depends on what kind of performance you want. Here's a quick run-down of the variables:

     • Chain length: Increase this and you cover more passwords in the same amount of disk space, but the tables become slower to operate on.
     • Chain count: Increasing this makes each table in the set larger, but fewer tables are required for the same coverage. Experimentation suggests that fewer but bigger individual tables (same total size) are faster to operate on: http://img65.imageshack.us/img65/7299/tablespeedri7.png (note the max cryptanalysis time at the bottom).
     • Number of tables: Increasing this raises the share of passwords in the specified range that the table set covers, and also increases the total size.
     • Min/max password length: Fairly obvious; raising the max length rapidly increases the storage needed and/or the complexity of the tables (see chain length).
     • Character set: Defines which characters make up the passwords you pre-compute. Adding characters increases the disk space needed and/or the table complexity. If a password contains a character outside your defined set, its hash will not be in the tables.

     Selecting the values to use is all about knowing your priorities (speed? coverage? size?) and balancing them; the back-of-the-envelope calculation at the end of this post illustrates the trade-off. Both NTLM and MD5 are 16-byte hashes, so the values selected for one can be reused for the other to build an equivalent table set with the same resulting disk size. The screenshot above uses upper- and lowercase alpha, digits, space, and 14 symbol characters in its character set, but those are not all the characters that can be produced, even on an English keyboard. Including 32 symbols rather than just 14 immediately makes the same coverage harder to achieve without annoying losses in search time, disk space, or both: http://img441.imageshack.us/img441/7327/tablespeed2cx8.png. Again, it is all about priorities.
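     To make the case point concrete, here is a minimal Python sketch (names and the usage hash are hypothetical) of recovering the correct casing once the LM hash is cracked: the LM plaintext comes back all uppercase, so you try every upper/lower combination of its letters against the stored NTLM hash. NTLM is the MD4 digest of the UTF-16LE password; this assumes your Python's hashlib still exposes MD4 (newer OpenSSL builds relegate it to the legacy provider).

     ```python
     import hashlib
     from itertools import product

     def ntlm_hash(password):
         # NTLM is MD4 over the password encoded as UTF-16LE.
         # Assumes the underlying OpenSSL build still exposes MD4.
         return hashlib.new("md4", password.encode("utf-16le")).hexdigest()

     def recover_case(lm_plaintext, target_ntlm):
         # Text cracked from an LM hash is all uppercase; brute-force the
         # upper/lower choice for each letter until the NTLM hash matches.
         choices = [(c.upper(), c.lower()) if c.isalpha() else (c,)
                    for c in lm_plaintext]
         for combo in product(*choices):
             candidate = "".join(combo)
             if ntlm_hash(candidate) == target_ntlm.lower():
                 return candidate
         return None  # no case variant matched

     # Hypothetical usage:
     # recover_case("PASSWORD1", ntlm_hash("PassWord1"))  ->  "PassWord1"
     ```

     At most 2^14 variants exist for a full 14-character LM password, so this search is instant next to cracking the hash itself.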
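     And to illustrate the size/coverage trade-off, a back-of-the-envelope sketch, assuming the usual approximation that one table of m chains of length t "touches" about m*t keys out of a keyspace of N (real coverage is lower, since chains merge), and that only the start and end point of each chain are stored on disk (roughly 16 bytes per chain in the common .rt format). All concrete numbers here are made up for illustration.

     ```python
     from math import exp

     def keyspace(charset_size, min_len, max_len):
         # Number of candidate passwords across the whole length range.
         return sum(charset_size ** n for n in range(min_len, max_len + 1))

     def coverage(chain_len, chain_count, tables, n):
         # Per-table success chance, ignoring chain merges; tables are
         # generated independently, so their failure chances multiply.
         per_table = 1 - exp(-chain_len * chain_count / n)
         return 1 - (1 - per_table) ** tables

     def disk_bytes(chain_count, tables, bytes_per_chain=16):
         # Only each chain's start and end point hit the disk.
         return chain_count * tables * bytes_per_chain

     # 77 chars = upper/lower alpha, digits, space, 14 symbols (as in the
     # screenshot); 95 = the same set with all 32 symbols. The chain
     # parameters and the 1-7 length range are illustrative assumptions.
     for cs in (77, 95):
         n = keyspace(cs, 1, 7)
         print(cs, round(coverage(40_000, 40_000_000, 4, n), 3),
               round(disk_bytes(40_000_000, 4) / 2**30, 2), "GiB")
     ```

     With the same chain parameters and the same ~2.4 GiB on disk, widening the character set from 77 to 95 characters cuts the hit rate roughly fourfold, which is exactly the loss the second screenshot is showing.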