UPDATE 2021: This was fun, but nowadays I have to direct people to a more complete Python solution here.
I got a few Amcrest Wifi security cameras for my mom's house at her request. They're pretty nice overall (my only complaint is that the web interface doesn't fully support Linux). I set one up to save a jpg snapshot to memory every minute and then flew across the country. When I wanted to access the snapshots, I couldn't just put the SD card in a computer or anything, and clicking all 14,000 of them seemed like a pain, so I decided to figure out how to get them with a Python script.
Most of the information you need to know to do this is in the Amcrest CGI SDK API Documentation. With that, I was able to figure out how to list the file names of all the jpgs on the disk and then download them one by one to my local drive.
To get a listing of file names you have to first set up authentication, create a Media File Finder instance, and save the number that is returned:
import os
import requests
from requests.auth import HTTPBasicAuth

auth = HTTPBasicAuth('admin', 'yourpassword')
factory = requests.get('http://yourip/cgi-bin/mediaFileFind.cgi?action=factory.create', auth=auth)
factory = factory.text.split('=')[1].strip()  # strip the trailing newline
Then, you have to do a search for a bunch of files (optionally within a date range). Here I am looking for all jpgs made on July 15th:
requests.get('http://yourip/cgi-bin/mediaFileFind.cgi?action=findFile&object={}&condition.Channel=0&condition.StartTime=2017-7-15%2000:01:00&condition.EndTime=2017-7-15%2023:59:00&condition.Types[0]=jpg'.format(factory), auth=auth)
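As an aside, if hand-encoding those timestamps gets error-prone, the query string can be built programmatically. Here's a minimal sketch using the standard library; the factory number is a placeholder, and note that this also percent-encodes the colons and brackets, which the camera should decode the same way, though I haven't verified the firmware accepts it:

```python
# Sketch: build the findFile query string instead of hand-encoding %20s.
# The object value is a placeholder for the factory number from earlier.
from urllib.parse import urlencode, quote

params = {
    'action': 'findFile',
    'object': '12345',
    'condition.Channel': '0',
    'condition.StartTime': '2017-7-15 00:01:00',
    'condition.EndTime': '2017-7-15 23:59:00',
    'condition.Types[0]': 'jpg',
}
# quote_via=quote encodes spaces as %20 (urlencode's default uses '+')
url = 'http://yourip/cgi-bin/mediaFileFind.cgi?' + urlencode(params, quote_via=quote)
print(url)
```

Passing the same dict as `params=` to `requests.get` would do roughly the same encoding for you.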
The %20s are just the URL encoding for a space. Note that we had to pass the number from the factory step above for this to work. Ok! Now we should be able to list the files found. At first I tried doing this without dealing with cookies: I was able to list files, but I could not get them to download, even though I was using the same commands the web interface uses. So I used the Firefox developer tools to watch the network while downloading files from the browser, noticed some authentication cookies that I thought might help, and just copied and pasted them into the code (cheating a bit, but whatever).
cookies = {'DHLangCookie30': 'English',
           'DhWebClientSessionID': '1234567890',
           'DhWebCookie': 'long goofy string'}
Then, to list and download a file, I used this:
files = requests.get('http://yourip/cgi-bin/mediaFileFind.cgi?action=findNextFile&object={}&count=1'.format(factory), auth=auth, cookies=cookies)
for line in files.iter_lines():
    line = line.decode()
    if 'FilePath' in line:
        path = line.split('=')[1]
        cmd = 'http://yourip/RPC_Loadfile' + path
        print('Running: [' + cmd + ']')
        resp = requests.get(cmd, auth=auth, cookies=cookies)
        # keep the last six path pieces for the local file name
        # (split on '/', the camera's path separator, not os.path.sep)
        with open('-'.join(cmd.split('/')[-6:]), 'wb') as out:
            out.write(resp.content)
        print(resp.status_code)
ok = requests.get('http://yourip/cgi-bin/mediaFileFind.cgi?action=close&object={}'.format(factory), auth=auth)
Totally worked! The loop over files.iter_lines just searches through the roughly 15 lines of information returned for each file to find the one containing the path. The name mangling in the open line strips the root folder names off the camera path, leaving a nice flat file name in my directory, such as: 17-07-10-001-jpg-17-48-07[R][0@0][0].jpg
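For the curious, the name mangling works like this. The camera-side path below is a hypothetical reconstruction based on the resulting file name; the real path comes back in the FilePath line of the findNextFile response:

```python
# Hypothetical example of the camera's on-disk path layout.
path = '/mnt/sd/17-07-10/001/jpg/17/48/07[R][0@0][0].jpg'
# keep the last six '/'-separated components and join them with '-'
name = '-'.join(path.split('/')[-6:])
print(name)  # 17-07-10-001-jpg-17-48-07[R][0@0][0].jpg
```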
So this got me all my files from my vacation. I transferred 2GB of photos by running this in a loop (with security provided by a VPN on the router). It would be better if I could grab those cookies directly from the script instead of cheating with the web browser, so if you know how to do that with requests, let me know.
Thank you for writing this! I’d like to do this exact same thing for the 3 Amcrest IP cams I just put up, only with videos and snapshots. This blog is 2.5 years old. Is it still working for you? Have you found an easier way to do it since writing this?
I appreciate any advice/guidance you have! I'm not a programmer by nature but could probably figure it out with enough time.
It totally still works and I haven’t found a better way. But I haven’t really looked for a better way since writing this so something may be out there.
In general I think you could manage the cookies by using a Session object:
https://requests.readthedocs.io/en/master/user/advanced/#session-objects
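For what it's worth, here's a minimal sketch of what that might look like. I haven't tested this against a camera; 'yourip', the credentials, and the cookie name are just the placeholders from the post above:

```python
import requests

# A Session keeps cookies from earlier responses and sends them on
# later requests automatically, which could replace the browser
# copy-paste step.
s = requests.Session()
s.auth = ('admin', 'yourpassword')
# cookies set manually (or by any response) persist on the session:
s.cookies.set('DhWebClientSessionID', '1234567890')
# subsequent calls, e.g. s.get('http://yourip/cgi-bin/...'), now carry
# both the auth header and the stored cookies
```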
But I don't see any cookies being used when I create a file finder instance and use the findFile and then findNextFile actions for some .dav files. I do get mostly valid data back from findNextFile, but a blank/empty value for FilePath, so I can't even initiate RPC_Loadfile.
This package has a download feature that works and is a bit nicer to be honest.
https://github.com/tchellomello/python-amcrest/blob/master/examples/get-files.py
Thanks for this, it helped a lot in resolving issues with my IP2M-846 camera.
I did figure out a way of getting the session id using RPC2_Login:
auth = HTTPBasicAuth(user, password)
proxies = {'http': 'http://'}
cookies = {
    'DhLangCookie30': 'English',
    'DhWebCookie': '%7B%22username%22%3A%22{}%22%2C%22pswd%22%3A%22%22%2C%22talktype%22%3A1%2C%22logintype%22%3A0%7D'.format(user),
}
# log on to device using RPC login
jdata = {"method": "global.login", "params": {"userName": str(user), "password": str(password), "clientType": "Web3.0"}, "id": 10000}
base_url = "http://{}/".format(ip)
url = base_url + "RPC2_Login"
# first request fails, but returns the session id
web_resp = requests.post(
    url,
    json=jdata,
    auth=auth,
    cookies=cookies,
    timeout=timeout,
    proxies=proxies,
)
jresult = json.loads(web_resp.content.decode("utf-8"))
sessionid = jresult["session"]
print("session_id = {}".format(sessionid))
# add session id to cookies and json data
cookies["DhWebClientSessionID"] = str(sessionid)
jdata["session"] = sessionid
# sign on again to get full authorization to download
web_resp = requests.post(
    url,
    json=jdata,
    auth=auth,
    cookies=cookies,
    timeout=timeout,
    proxies=proxies,
)