kavastudios
Posts posted by kavastudios
-
Hi.
I am trying to use a 3G USB dongle, an Alcatel X602A. When I run lsusb I get this:

Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 058f:6254 Alcor Micro Corp. USB Hub
Bus 001 Device 003: ID 0bda:8187 Realtek Semiconductor Corp. RTL8187 Wireless Adapter
Bus 001 Device 006: ID 1bbb:022c T & A Mobile Phones
Bus 001 Device 005: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader

So the dongle is recognized as "Bus 001 Device 006: ID 1bbb:022c T & A Mobile Phones". I checked the usb-modeswitch-data package (I downloaded the tar from the website) and it ships a 1bbb:022c file with the following contents:

# Alcatel X602D
Configuration=2

Then I checked /etc/usb_modeswitch.d/ on my router, but the 1bbb:022c file was missing, so I created it with the contents above. Now lsusb shows the same output as before, but dmesg gives:

[ 22.170000] scsi 2:0:0:0: CD-ROM USBModem Mass Storage 2.31 PQ: 0 ANSI: 2
[ 38.240000] usbcore: registered new interface driver cdc_acm
[ 38.250000] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
[ 38.470000] usbcore: registered new interface driver usbserial
[ 38.480000] USB Serial support registered for generic
[ 38.480000] usbcore: registered new interface driver usbserial_generic
[ 38.490000] usbserial: USB Serial Driver core
[ 38.630000] usbcore: registered new interface driver asix
[ 38.710000] usbcore: registered new interface driver cdc_ether
[ 38.950000] Error: Driver 'gpio-keys-polled' is already registered, aborting...
[ 39.050000] usbcore: registered new interface driver rndis_host
[ 39.160000] sd 1:0:0:0: Attached scsi generic sg0 type 0
[ 39.170000] scsi 2:0:0:0: Attached scsi generic sg1 type 5
[ 39.190000] USB Serial support registered for GSM modem (1-port)
[ 39.200000] usbcore: registered new interface driver option
[ 39.200000] option: v0.7.2:USB Driver for GSM modems

Also, when I run cat /proc/bus/usb/devices I get this:

T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=02 Dev#= 6 Spd=480 MxCh= 0
D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 2
P: Vendor=1bbb ProdID=022c Rev= 0.00
S: Manufacturer=SCD
S: Product=HSPA+ USB Modem
C: #Ifs= 1 Cfg#= 1 Atr=80 MxPwr=250mA
I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=
E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=125us
C:* #Ifs= 4 Cfg#= 2 Atr=80 MxPwr=200mA
I:* If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=(none)
E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=125us
I:* If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=(none)
E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=125us
I:* If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=(none)
E: Ad=83(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=125us
I:* If#= 3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=(none)
E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=125us

I am a bit lost: I don't know whether the 3G dongle was properly switched, or which /dev/ node it was attached to. What am I missing? Thanks in advance for any guidance.
-
UPDATE:
I have found it's possible to filter directly from the sniff function:
def main():
    print "[%s] Starting scan" % datetime.now()
    sniff(iface=sys.argv[1], prn=PacketHandler, filter='link[26] = 0x40', store=0)
With that, the CPU consumption when I run it on my PC is between 1% and 3%, but when I run it on the Pineapple, the script crashes and throws this error:
Traceback (most recent call last):
  File "snrV2.py", line 66, in <module>
    main()
  File "snrV2.py", line 63, in main
    sniff(iface=sys.argv[1],prn=PacketHandler, filter='link[26] = 0x40', store=0)
  File "/usr/lib/python2.7/site-packages/scapy/sendrecv.py", line 550, in sniff
    s = L2socket(type=ETH_P_ALL, *arg, **karg)
  File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 460, in __init__
    attach_filter(self.ins, filter)
  File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 132, in attach_filter
    s.setsockopt(SOL_SOCKET, SO_ATTACH_FILTER, bpfh)
  File "/usr/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 99] Protocol not available
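As a stopgap (just a sketch, not tested on the Pineapple), the same test the kernel refuses to attach can be done in Python. The offset in 'link[26] = 0x40' implies a fixed 26-byte radiotap header, and 0x40 is the 802.11 frame-control byte for a probe request:

```python
RADIOTAP_LEN = 26  # assumption: fixed-length radiotap header, as 'link[26]' implies

def is_probe_request(frame):
    """Pure-Python equivalent of the BPF filter 'link[26] = 0x40'."""
    # 0x40 = frame control: version 0, type 0 (management), subtype 4 (probe request)
    return len(frame) > RADIOTAP_LEN and frame[RADIOTAP_LEN] == 0x40
```

With scapy this check could run inside the prn callback (e.g. on the raw frame bytes), trading the kernel-side filter for a Python test per packet, so CPU usage will be higher than with a working BPF filter.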
I think I should update libpcap on the Pineapple, but how do I do that?
On my PC I have libpcap version 1.5.3,
and the Pineapple has libpcap version 1.1.1. -
Hi
I am trying to build something similar, but using python.
I am facing very high CPU consumption. Maybe I should try C (I am not very experienced in C), but could you give me a hint about which libraries I should use, or an overview of how to code it?
Thanks
-
Hi @schuchwun
That's precisely what I want to do: place at least 3 Pineapples in a closed space. I've also seen Meraki routers providing those heatmaps with just one router, even in an open space where there's no other Meraki router nearby (not even routers that don't belong to me). I tried to figure out how they do that, and the only answer I can come up with is that each antenna in the router may internally be a separate radio, so they can check which antenna received the packet first (microseconds of difference, and also differences in milliwatts of the perceived power).
I will modify my code to call fewer functions and remove the sleep (I added it in case of a momentary loss of connection, to keep the info in memory).
Also, how could I implement a buffer? Maybe an array: when the array reaches X packets, call a function to send the info to the server.
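To sketch the buffer idea (all names here are hypothetical; flush_fn would be whatever function does the HTTP send):

```python
BATCH_SIZE = 50  # assumption: tune to the probe volume you see

class PacketBuffer:
    """Collect packet records and flush them in batches."""

    def __init__(self, flush_fn, batch_size=BATCH_SIZE):
        self.items = []
        self.flush_fn = flush_fn
        self.batch_size = batch_size

    def add(self, item):
        self.items.append(item)
        if len(self.items) >= self.batch_size:
            self.flush()

    def flush(self):
        # Swap the list out first so new packets can accumulate while sending
        if self.items:
            batch, self.items = self.items, []
            self.flush_fn(batch)  # e.g. one HTTP POST carrying the whole batch
```

One POST per batch instead of one per probe should also cut the per-packet overhead considerably.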
I'll keep you posted, this could maybe be the beginning of an open source rtls project.
Thanks.
-
Hi! Thank you for your help.
Before running the script on the Pineapple, I wrote the code on my laptop and, in comparison, my PC executes the code much faster and seems to capture more packets than the Pineapple.
I am aware of the several prints to the screen; those were for debugging, but even if I remove them the consumption is still high.
I am creating a thread to send the info from each packet so that if the internet connection is slow, the script keeps working; the thread can take as long as required to successfully send the packet info to my server. (I am using Google App Engine for the webservice, so on the server side I am fully covered.)
Do you think that instead of creating a thread I should do it serially? Could that create a bottleneck?
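For comparison, a middle ground between a thread per packet and fully serial sending is a single worker thread draining a queue. A sketch (Python 3 module names; none of this is from the original script):

```python
import queue
import threading

q = queue.Queue()

def worker(send_fn):
    """Drain the queue forever; slow network I/O stays off the capture path."""
    while True:
        item = q.get()
        if item is None:      # sentinel: stop the worker
            break
        send_fn(item)         # e.g. the HTTP request to the webservice
        q.task_done()

def start_worker(send_fn):
    t = threading.Thread(target=worker, args=(send_fn,), daemon=True)
    t.start()
    return t
```

The capture callback then just does q.put(record), which is cheap, and only one thread ever touches the network, so a slow link can't spawn an unbounded number of threads.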
Thanks
-
Hi.
Based on this script: http://edwardkeeble.com/2014/02/passive-wifi-tracking/
I have created my own version, which detects and filters the probes coming from nearby devices and reports this information to a central webservice in charge of storing it in a database.
My main intention is to replicate the RTLS features offered by Meraki's routers (https://meraki.cisco.com/lib/pdf/meraki_datasheet_location.pdf).
My main concern and problem is that the script uses between 95% and 100% of the CPU. I have only tested the script with a couple of nearby WiFi devices broadcasting probes, but I'm afraid that if I use it in crowded places, it will fail or not report the data correctly.
Any idea how I could improve the CPU usage?
This is the code:
#!/usr/bin/python
from scapy.all import *
import sys
import time
import thread
import requests

PROBE_REQUEST_TYPE = 0
PROBE_REQUEST_SUBTYPE = 4
flag = 0
buf = {'arrival': 0, 'source': 0, 'dest': 0, 'pwr': 0, 'probe': 0}
uuid = '1A2B3'

def PacketHandler(pkt):
    if pkt.haslayer(Dot11):
        if pkt.type == PROBE_REQUEST_TYPE and pkt.subtype == PROBE_REQUEST_SUBTYPE:
            PrintPacket(pkt)

def PrintPacket(pkt):
    #global flag
    arrival = int(time.mktime(time.localtime()))
    print "Probe Request Captured:"
    try:
        extra = pkt.notdecoded
    except:
        extra = None
    if extra != None:
        signal_strength = -(256 - ord(extra[-4:-3]))
    else:
        signal_strength = -100
        print "No signal strength found"
    print arrival, pkt.addr2, pkt.addr3, signal_strength, pkt.getlayer(Dot11).info
    launcher(arrival, pkt.addr2, pkt.addr3, signal_strength, pkt.getlayer(Dot11).info)

def launcher(arrival, source, dest, pwr, probe):
    global buf
    if buf['source'] == source and buf['probe'] == probe:
        print 'do not report'
    else:
        print 'do report'
        buf = {'arrival': arrival, 'source': source, 'dest': dest, 'pwr': pwr, 'probe': probe}
        try:
            thread.start_new_thread(exporter, (arrival, source, dest, pwr, probe))
            print 'start the thread'
        except:
            print 'error launching the thread'

def exporter(arrival, source, dest, pwr, probe):
    global uuid
    print 'this is the thread %r' % source
    urlg = ('http://webservice.com/?arrival=' + str(arrival) + '&source=' + str(source)
            + '&dest=' + str(dest) + '&pwr=' + str(pwr) + '&probe=' + str(probe)
            + '&uuid=' + uuid)
    try:
        r = requests.get(urlg)
        print r.status_code
        #print r.headers
        print r.content
    except:
        print 'ERROR IN THREAD:::::: %r' % source
        print 'wait 2 secs'
        time.sleep(2)
        r = requests.get(urlg)
        print r.status_code
        print r.content

def main():
    from datetime import datetime
    print "[%s] Starting scan" % datetime.now()
    print "Scanning for:"
    sniff(iface=sys.argv[1], prn=PacketHandler, store=0)

if __name__ == "__main__":
    main()
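One small cleanup, for what it's worth: the long hand-built query string in exporter can be replaced with the standard library's URL encoder, which also percent-encodes the MAC addresses correctly. A sketch (Python 3 shown; http://webservice.com/ is just the placeholder host from the script):

```python
from urllib.parse import urlencode

def build_report_url(arrival, source, dest, pwr, probe, uuid,
                     base='http://webservice.com/'):
    """Build the report URL with proper percent-encoding."""
    params = {'arrival': arrival, 'source': source, 'dest': dest,
              'pwr': pwr, 'probe': probe, 'uuid': uuid}
    return base + '?' + urlencode(params)
```

The colons in MAC addresses become %3A instead of being sent raw, which some servers are picky about.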
Any guidance would be really appreciated.
-
I've implemented a Mark V + airodump setup to sniff the WiFi-enabled devices inside a big room. I'm using the information gathered to calculate the number of people inside the room at any given time (I analyze the CSV files generated by airodump and dump them to a DB).
My problem comes from the fact that the airodump CSV files sometimes do not correctly reflect the first and last time a probe was transmitted by a device; they also only show the last transmitted power detected for each device (you can't see how the power fluctuates over time).
Besides these issues, the CSV files also include the information of nearby access points as well as the detected devices, and when you have a large number of devices or access points, processing the CSV file is a little bit resource intensive.
I'd like to know if there's any other tool that gives me the information from the sniffed probes in the same format used by Meraki's routers (they provide an API where you can get the info of the probes detected by the router and dump it directly into a DB without any ETL process). They use the following format:
{
"deviceID":"UUIDofTheAP",<-you can define it using an external cfg file
"mac":"mac address of the device detected",
"timestamp":"timestamp when the probe was transmitted",
"pwr":"transmitted power detected",
"ssid":"name of the ssid the device was looking for" <-if present
}
Having this info directly posted to a server using POST would be great; otherwise, just having a plain text file with the JSON would solve my problem. I've been looking at the scapy documentation, but I don't know if it's possible to develop something using Python + scapy to get the probes in the format I need.
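In case it helps anyone building this: producing records in that shape from Python only needs the standard library. A sketch (DEVICE_ID and the field names mirror the format above; the ssid key is only emitted when the probe actually carried one):

```python
import json
import time

DEVICE_ID = 'UUIDofTheAP'  # placeholder; in practice load this from a cfg file

def probe_record(mac, pwr, ssid=None):
    """Serialize one sniffed probe in the Meraki-like JSON format."""
    rec = {
        'deviceID': DEVICE_ID,
        'mac': mac,
        'timestamp': int(time.time()),
        'pwr': pwr,
    }
    if ssid:                 # "ssid" is optional, per the format above
        rec['ssid'] = ssid
    return json.dumps(rec)
```

Each record could then be appended to a plain text file (one JSON object per line) or sent in a POST body.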
Any guidance would be really appreciated.
-
Hi
I have successfully enabled autossh on my Mark V, but something is causing my device to reboot, and it only happens while I'm connected over SSH remotely.
I've checked the memory and CPU to see if the device is overloaded, but everything seems normal. What could be the cause?
-
Oh, I've just noticed a typo in the post title... lol, sorry
-
Hi.
I'm doing research on the digital footprint we leave without noticing it (cookies, retargeting pixels, browser fingerprinting, MAC address scanning, keystroke patterns, etc.).
I'm using airodump to capture and analyze the information coming from clients, specifically the broadcast probes. But I've noticed that not all phones broadcast probes for previously known networks or probes looking for a specific SSID.
For example, iOS devices broadcast just one SSID probe (for the last network they connected to), some Android devices broadcast the last 10, and depending on the brand and Android version some broadcast anywhere from 2 down to none.
My guess is that these devices are doing a passive scan, but is there any way I can force the devices to broadcast probes for all the networks they know?
-
Hi.
I have a couple of Mark Vs doing some sniffing using airodump. I'm using wlan1 to connect to a WiFi network (to provide internet access) and wlan0 (as mon0) to sniff (wlan0 is also used as an AP so I can connect to the device).
The problem is that when the internet connection drops in the middle of the day (sometimes the router providing that connection fails), the devices won't reconnect when the connection becomes available again.
It's been a bit of a problem, as I need the dump file every couple of hours (I have a script uploading the file to my servers).
What can I do so the device reconnects automatically once the WiFi connection is available? (Some kind of script checking the internet connection and, if it's not available, trying to reconnect to the WiFi network.)
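Something along these lines is what I have in mind. An untested sketch: the ping target and the reconnect action are assumptions (on OpenWrt the reconnect could be subprocess.call(['wifi']) or an ifup, which I haven't verified on the Mark V); both hooks are injectable so the loop itself can be tested:

```python
import subprocess
import time

def link_is_up(host='8.8.8.8', runner=subprocess.call):
    """One ping with a 3-second timeout; exit status 0 means the link works."""
    return runner(['ping', '-c', '1', '-W', '3', host]) == 0

def watchdog(reconnect, check=link_is_up, interval=60, rounds=None):
    """Check connectivity every `interval` seconds; run forever if rounds is None."""
    n = 0
    while rounds is None or n < rounds:
        if not check():
            reconnect()  # e.g. subprocess.call(['wifi']) on OpenWrt (assumption)
        n += 1
        time.sleep(interval)
```

Launched from rc.local or a cron @reboot entry, this would keep retrying until the WiFi network comes back.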
-
Thank you all for your answers!
In the end, the problem is that if your script (sh or php) produces output, you should redirect that output to a file instead of /dev/null; otherwise the job doesn't run.
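For example, the php-cgi job from the crontab would become something like this (the log paths are just examples):

```
*/2 * * * * /www/res.sh >> /tmp/res.log 2>&1
5 * * * * php-cgi /www/up.php >> /tmp/up.log 2>&1
```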
This is the first time I've seen this on an OpenWrt-based system. Do you think this is a feature or a glitch of the Mark V? -
Hi!
I have some tasks scheduled to be executed by cron jobs.
Here's an example of my tasks:
*/2 * * * * /www/res.sh
5 * * * * php-cgi /www/up.php > /dev/null 2>&1
59 23 * * * reboot -f
and the tasks are never executed.
I don't know if it's the way I wrote the tasks, or whether I need to enable the cron jobs somehow (the admin panel shows cron as enabled).
thanks
-
Hi.
But I've been using wifidog for a long time with different routers and different versions of OpenWrt.
With other routers running the same OpenWrt version I had no trouble installing wifidog.
-
Hi
I want to install the wifidog package, but every time I get this:

root@Pineapple:~# opkg install wifidog
Installing wifidog (20090925-1) to root...
Collected errors:
* satisfy_dependencies_for: Cannot satisfy the following dependencies for wifidog:
* kernel (= 3.3.8-1-d6597ebf6203328d3519ea3c3371a493) *
* opkg_install_cmd: Cannot install package wifidog.

I've updated my Mark V to the 1.4.1 firmware and I'm still getting the same result. What can I do? Thanks.
SLA battery for the pineapple
in WiFi Pineapple Mark V
Hi.
I'm thinking about using this type of battery for the Pineapple:
http://www.amazon.com/dp/B00GYHBBSS?psc=1
They are 12 V, 4 Ah. I've used this type of battery in the past with other APs and they provide stable power. Do you think this battery can provide enough juice to run for at least 8 hours?
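Rough math on the runtime (the 0.5 A average draw and the 80% usable depth of discharge are assumptions, not measured values for the Mark V):

```python
# Back-of-the-envelope runtime estimate; draw and derating are guesses
capacity_ah = 4.0   # from the listing: 12 V, 4 Ah SLA
draw_a = 0.5        # assumption: average current draw of the device + radios
derate = 0.8        # SLA batteries shouldn't be run to full depth of discharge

runtime_h = capacity_ah * derate / draw_a
print(runtime_h)  # 6.4 hours under these assumed numbers
```

So at an assumed half-amp draw it would fall a little short of 8 hours; if the real average draw is lower, or with a bigger battery, 8 hours looks reachable.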