
Question: Make Computer Act Like a USB Rubber Ducky



So I was wondering: would it be possible to take a Raspberry Pi or some other small computer and alter its USB protocols so that it would essentially act like a USB Rubber Ducky, as well as a mouse? One would essentially be altering system files/libraries on the device.

If that were possible, we could add image processing to the mix via a Raspberry Pi camera and essentially create a little autonomous robot that actually accepts input from the screen. Think about it: with some image processing and optical character recognition we could literally use the mouse to click buttons on the connected computer, and could read the screen and turn off AV, etc. The idea would be to connect the sawed-off Pi to the computer via a male-to-male USB cable, set it up on the desk looking at the computer screen, and have it do its scripted magic.

This might not be possible; I don't know nearly enough about messing around in the egg salad that is operating systems/protocols. The Raspberry Pi might not have enough processing power either, so one might need to use an actual laptop. Does anybody know whether something like this is actually possible? If there were some device that actually accepted monitor input, it could parse the screen data as well. This would not be something you could easily pass off as a flash drive, but it would be interesting if it were actually possible, and it might open lots of doors in terms of automated physical attacks.


Adult Attention Deficit much? OK, so you say you don't really need to tweak OS libraries etc., you use a Teensy? Do you have a specific setup/links I could check out? And to be clear, you can send data to the Teensy while it is plugged into a computer, and alter the data while it is running? I don't really know anything about the Teensy, so I would be starting from scratch.

Which part did you say you did, the Teensy or the image processing via cable?

The OCR and image processing would be something a programmer would do; don't worry about it. I could probably figure it out if this works exactly as you claim it does.

Edited by overwraith

My brain just can't handle massive impenetrable blocks of text in this heat :)

Yep, a Teensy has a USB port and the FTDI cable adds a second one, so you can hook that into your PC. You get a ducky that can be sent keystrokes in real time (plus mouse movements).

I don't have all the details to hand but you will need:

Teensy LC

3.3V FTDI cable

Perma-proto board

A bit of wire

Micro USB to USB A Male adapter

An Arduino sketch runs on the Teensy, with (in my case) a Python script (using the pyserial library) running on the PC.

Total hardware cost about $40. More if you want to get fancy and add OLED displays, switches, etc.
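To make the PC-side half concrete, here is a minimal sketch of that kind of control script in Python with pyserial. The port name and the "TYPE:"/"MOUSE:" command strings are assumptions for illustration only; the real protocol is whatever your Arduino sketch chooses to parse from the serial port the FTDI cable provides.

# PC-side control script: a minimal sketch, assuming the Teensy sketch reads
# newline-terminated commands like "TYPE:<text>" and "MOUSE:<dx>,<dy>" from the
# serial pins wired to the FTDI cable. Port name and protocol are illustrative.
import time
import serial  # pyserial

PORT = "COM3"   # or "/dev/ttyUSB0" on Linux, whatever the FTDI cable shows up as
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as link:
    time.sleep(2)  # give the Teensy a moment after the port opens

    link.write(b"TYPE:hello from the control script\n")  # type a string on the target
    link.write(b"MOUSE:50,20\n")                          # nudge the mouse 50 px right, 20 px down

    print(link.readline().decode(errors="replace").strip())  # read any acknowledgement back

On the Teensy side the sketch would parse those lines and turn them into the Teensyduino Keyboard/Mouse USB calls; that is the part that makes the host see a plain HID keyboard and mouse.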


Well, this looks like a pretty good start; if anything else comes to you, this thread will still be here. Also, if anybody else has good tutorials or anything on this, please post them.

The thing that most concerns me at the moment is that I want to be able to use the output from the monitor to make decisions based on where windows and buttons pop up. That means either a webcam with recognition software, or a special monitor cable or something with screen-scraping software (is that what they call it?). Ideally I should be able to use a script to recognize where windows are on the screen and move the mouse the way a person would. The script could run on the laptop/other computer.
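For what it's worth, the "find the button, then move the mouse like a person" part can be prototyped on its own with the pyautogui library, before any ducky hardware is in the picture. A rough sketch; the reference image filename is hypothetical and assumes you have saved a screenshot of the button you want to find.

# Local prototype: locate a button by image and glide the cursor to it.
# Runs on the machine whose screen it is; no HID hardware involved yet.
import pyautogui

try:
    # confidence= needs opencv-python installed; drop it for an exact pixel match
    target = pyautogui.locateCenterOnScreen("ok_button.png", confidence=0.8)
except pyautogui.ImageNotFoundException:
    target = None

if target is None:
    print("Button not found on screen")
else:
    pyautogui.moveTo(target.x, target.y, duration=2.0)  # two-second glide, human-ish
    pyautogui.click()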


@overwraith Yes, it theoretically should be possible with a Pi as long as your model has USB OTG.

Using a camera to capture and analyse the screen would be inconsistent: lighting, focus, angle, screen type and other external factors would all need to be accounted for. It would also put more load on the CPU/GPU.

What you could consider instead is using USB to capture the screen. For that you would need to turn your device into a USB Monitor Class device.

This is a standard in the USB specification. http://www.usb.org/developers/hidpage/usbmon10.pdf

Your device would need to switch back and forth between acting as a keyboard device and as a monitor device.


Oh no! Things just got real! How would one do that? It seems like it would require a lot of hardware experience that I just don't have. The "computer" in the equation seems like it would already need to be upgraded to something more like a laptop.

I am thinking one could get the optical recognition to work, though, if one had a robust enough API.

Edited by overwraith

Your "computer" needs to be able to act as a USB slave device. Devices with USB OTG can do that. Your laptop cannot...

A rpi model A would theoretically be able to do what you want because it can do USB OTG. ( A model B has the same chip but the usb port runs through a hub making OTG inaccessible).

A rpi is powerful enough to do what you want it to do, BUT, you would have to code the USB drivers from scratch, which would be no easy task!

A model A rpi would require no hardware changes (apart from the usb cable adapter). Software would be the hard part!
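One hedge on the "from scratch" part: recent Raspberry Pi kernels ship the Linux USB gadget framework (libcomposite/configfs), which can present the Pi as a HID keyboard without writing a driver yourself. Once the gadget is configured, injecting a keystroke is just writing an 8-byte boot-keyboard report to the gadget device. A minimal sketch, assuming the one-time configfs setup has already been done so that /dev/hidg0 exists (run as root):

# Inject keystrokes through a Linux HID keyboard gadget on a USB OTG Pi.
# Assumes /dev/hidg0 already exists (configfs/libcomposite setup done beforehand).
import time

# Boot keyboard report layout: [modifiers, reserved, key1, key2, key3, key4, key5, key6]
KEY_A = 0x04         # HID usage ID for the letter 'a'
LEFT_SHIFT = 0x02    # modifier bit for left shift
RELEASE = bytes(8)   # all-zero report = every key released

def press(key_code, modifiers=0):
    report = bytes([modifiers, 0, key_code, 0, 0, 0, 0, 0])
    with open("/dev/hidg0", "rb+") as hid:
        hid.write(report)    # key down
        hid.write(RELEASE)   # key up

press(KEY_A)              # host sees a lowercase 'a'
time.sleep(0.1)
press(KEY_A, LEFT_SHIFT)  # host sees an uppercase 'A'

The one-time configfs setup (report descriptor, binding to the UDC) is a short shell script and is well documented for the Pi; the monitor-class half of the idea remains the genuinely hard part.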

Edited by Polisher

No, basically I just want to figure out how to build a device that can do keyboard and mouse and be effective at both. The mouse part requires a visual aspect. I can try to accomplish at least half of it with the Teensy; perhaps I can learn something along the way. It will be a while until I can actually get some hardware development books.

I am basically an entry-level web/C# programmer. I might be able to grok the C/C++ code, but it might take some time.

If there is some kind of curriculum for learning about breadboards and such, please post it, but that too costs money, which will take a while.

Edited by overwraith

You could control the mouse using PowerShell.

There are plenty of scripts and examples you can Google!

About the visual aspect: you do not necessarily need any visual feedback. You could move the mouse and click on things that you know will always be in the same positions. If they are not always in the same position, you can position the window so the buttons you are looking for end up where you want them.

Have a look at the following video. The guy did just that, but on a Mac.

The idea is the same, though.
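The same fixed-coordinate trick, sketched here in Python through the standard user32 calls rather than PowerShell (the coordinates are made-up examples):

# Click at a known, fixed screen position on Windows; no visual feedback needed.
import ctypes
import time

user32 = ctypes.windll.user32
MOUSEEVENTF_LEFTDOWN = 0x0002
MOUSEEVENTF_LEFTUP = 0x0004

def click_at(x, y):
    user32.SetCursorPos(x, y)                             # jump to the target position
    user32.mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)  # press the left button
    user32.mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)    # release it

time.sleep(2)       # time to bring the target window to the front
click_at(740, 480)  # e.g. an "OK" button that always lands in the same spot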


Maybe just get a Teensy and program that before veering into (essentially) advanced robotics.

The rest of the stuff is certainly non-trivial and pretty pointless when a remote shell would normally be 100x easier and more efficient.


Well, optical character recognition has already been implemented in plenty of products, and once you recognize that "OK" has been printed on the screen, presumably on a button, and you have the coordinates of that text, you should be able to make a script that moves the mouse pixel by pixel to the "OK" button (a for loop or something).

I have a page scanner that uses optical character recognition. It isn't the best, but if you have the actual source of the information, the monitor feed, you would essentially have a flawless picture to work from; there wouldn't be the imperfections in lighting that traditional scanners have to deal with.

We are not finding things on the screen anywhere near as complex-looking as birds; a prompt is a box with text and buttons on it. Even if the prompts change from time to time, one could theoretically screenshot them and send them to a centralized DB, to be pushed out to other devices once the scripts stop working. If one were using OCR, this prompt upload probably wouldn't even be necessary much of the time.
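That loop (capture the screen, OCR it, find the word, derive a click point) is short to sketch with Pillow and pytesseract. The "OK" target and the coordinate hand-off are just illustrations of the idea above, and it assumes the Tesseract engine itself is installed:

# Find the word "OK" in a screen capture and compute a point to click.
# Assumes the Tesseract OCR engine is installed and on PATH.
from PIL import ImageGrab
import pytesseract

screen = ImageGrab.grab()  # full-screen capture (Windows/macOS; Linux needs extra setup)
data = pytesseract.image_to_data(screen, output_type=pytesseract.Output.DICT)

for i, word in enumerate(data["text"]):
    if word.strip().lower() == "ok":
        # Centre of the recognized word's bounding box = candidate click point.
        x = data["left"][i] + data["width"][i] // 2
        y = data["top"][i] + data["height"][i] // 2
        print("Found 'OK' at ({}, {})".format(x, y))
        break
else:
    print("No 'OK' text recognized on screen")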

Edited by overwraith
