ImportError: No module named 'rest_framework_swagger'

Summary

Building our Django app locally (i.e. no Docker container wrapping it) works great. Building the same app in Docker fails. Hint: make sure you know which requirements.txt file you’re using to build the app.  (And get familiar with the -f parameter for Docker commands.)

Problem

When I first started building the Docker container, I was getting this ImportError after the container successfully built:

ImportError: No module named 'rest_framework_swagger'

Research

The only half-useful hit on StackOverflow was this one, and it didn’t seem like it explicitly addressed my issue in Docker:

http://stackoverflow.com/questions/27369314/django-rest-framework-swagger-ui-importerror-no-module-named-rest-framework

…And The Lightning Bolt Struck

However, with enough time and desperation I finally understood that the article wasn’t wrong either.  I wasn’t using the /requirements.txt that contained all the dependencies – I was using the incomplete/abandoned /budget_proj/requirements.txt file, which lacked a key dependency.

Aside

I wasn’t watching the results of pip install closely enough – and when running docker-compose up --build multiple times, the layer of interest won’t rebuild if there are no changes to that layer’s inputs. (Plus this is a case where no error message is thrown, just one or two fewer pip installs – and who notices that until they’ve spent the better part of two days on the problem?)
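A quick way to catch this class of problem is to compare the package names in the two requirements files directly. This is only a sketch – the function names and file contents below are my own, not part of any tool we used:

```python
# Compare two requirements.txt files and report packages present in one
# but missing from the other.
def parse_requirements(lines):
    """Return the set of package names, ignoring versions and comments."""
    names = set()
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip any version specifier (==, >=, ~=, etc.)
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            if sep in line:
                line = line.split(sep)[0].strip()
                break
        names.add(line.lower())
    return names


def missing_packages(full_reqs, partial_reqs):
    """Packages listed in full_reqs but absent from partial_reqs."""
    return parse_requirements(full_reqs) - parse_requirements(partial_reqs)
```

Running missing_packages() over the root file and /budget_proj/requirements.txt would have flagged django-rest-swagger immediately, instead of two days later.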

Detailed Diagnostics

If you look closely at our project from that time, you’ll notice there are actually two copies of requirements.txt – one at the repo root and one in the /budget_proj/ folder.

Developers who are just testing Django locally will simply run pip install -r requirements.txt from the root directory of their clone of the repo.  This is fine and good.  Here is the result of pip install -r requirements.txt when using the expected file:

$ pip install -r requirements.txt 
Collecting appdirs==1.4.0 (from -r requirements.txt (line 1))
 Using cached appdirs-1.4.0-py2.py3-none-any.whl
Collecting Django==1.10.5 (from -r requirements.txt (line 2))
 Using cached Django-1.10.5-py2.py3-none-any.whl
Collecting django-filter==1.0.1 (from -r requirements.txt (line 3))
 Using cached django_filter-1.0.1-py2.py3-none-any.whl
Collecting django-rest-swagger==2.1.1 (from -r requirements.txt (line 4))
 Using cached django_rest_swagger-2.1.1-py2.py3-none-any.whl
Collecting djangorestframework==3.5.4 (from -r requirements.txt (line 5))
 Using cached djangorestframework-3.5.4-py2.py3-none-any.whl
Requirement already satisfied: packaging==16.8 in ./budget_venv/lib/python3.5/site-packages (from -r requirements.txt (line 6))
Collecting psycopg2==2.7 (from -r requirements.txt (line 7))
 Using cached psycopg2-2.7-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
Collecting pyparsing==2.1.10 (from -r requirements.txt (line 8))
 Using cached pyparsing-2.1.10-py2.py3-none-any.whl
Collecting requests==2.13.0 (from -r requirements.txt (line 9))
 Using cached requests-2.13.0-py2.py3-none-any.whl
Requirement already satisfied: six==1.10.0 in ./budget_venv/lib/python3.5/site-packages (from -r requirements.txt (line 10))
Collecting gunicorn (from -r requirements.txt (line 12))
 Using cached gunicorn-19.7.0-py2.py3-none-any.whl
Collecting openapi-codec>=1.2.1 (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting coreapi>=2.1.1 (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting simplejson (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached simplejson-3.10.0-cp35-cp35m-macosx_10_11_x86_64.whl
Collecting uritemplate (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached uritemplate-3.0.0-py2.py3-none-any.whl
Collecting coreschema (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting itypes (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting jinja2 (from coreschema->coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached Jinja2-2.9.5-py2.py3-none-any.whl
Collecting MarkupSafe>=0.23 (from jinja2->coreschema->coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Installing collected packages: appdirs, Django, django-filter, uritemplate, requests, MarkupSafe, jinja2, coreschema, itypes, coreapi, openapi-codec, simplejson, djangorestframework, django-rest-swagger, psycopg2, pyparsing, gunicorn
 Found existing installation: appdirs 1.4.3
 Uninstalling appdirs-1.4.3:
 Successfully uninstalled appdirs-1.4.3
 Found existing installation: pyparsing 2.2.0
 Uninstalling pyparsing-2.2.0:
 Successfully uninstalled pyparsing-2.2.0
Successfully installed Django-1.10.5 MarkupSafe-1.0 appdirs-1.4.0 coreapi-2.3.0 coreschema-0.0.4 django-filter-1.0.1 django-rest-swagger-2.1.1 djangorestframework-3.5.4 gunicorn-19.7.0 itypes-1.1.0 jinja2-2.9.5 openapi-codec-1.3.1 psycopg2-2.7 pyparsing-2.1.10 requests-2.13.0 simplejson-3.10.0 uritemplate-3.0.0

However, our Django application (and the related Docker files) is contained in a subdirectory off the repo root (i.e. in the /budget_proj/ folder).  And because I was an idiot at the time and didn’t know about the -f parameter for docker-compose, I was convinced I had to run docker-compose from the same directory as docker-compose.yml – so docker-compose didn’t have access to files in the parent directory of wherever it was launched.  Apparently Docker effectively “chroots” its commands, so it doesn’t have access to ../bin/requirements.txt for example.
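Knowing now about -f, one sketch of how this could be avoided (the paths here are illustrative, not necessarily our actual repo layout):

```shell
# Launch docker-compose from the repo root, pointing -f at the compose
# file that lives in the subdirectory:
cd ~/src/budget_proj_repo      # hypothetical clone location
docker-compose -f budget_proj/docker-compose.yml up --build
```

One caveat: -f only tells docker-compose where its YAML lives; relative paths inside that file (including the build: context) still resolve against the compose file’s own directory. So the build context would need to point back at the repo root (e.g. build: ..) for the Dockerfile to COPY the root requirements.txt.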

So when docker-compose launched pip install -r requirements.txt, it could only access the incomplete file, which gave us this result instead:

Step 12/12 : WORKDIR /code
 ---> 8626fa515a0a
Removing intermediate container 05badf699f66
Successfully built 8626fa515a0a
Recreating budgetproj_budget-service_1
Attaching to budgetproj_budget-service_1
web_1 | Running docker-entrypoint.sh...
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Starting gunicorn 19.7.0
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Listening at: http://0.0.0.0:8000 (5)
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Using worker: sync
web_1 | [2017-03-16 00:31:34 +0000] [8] [INFO] Booting worker with pid: 8
web_1 | [2017-03-16 00:31:35 +0000] [8] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
web_1 | return util.import_app(self.app_uri)
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/util.py", line 376, in import_app
web_1 | __import__(module)
web_1 | File "/code/budget_proj/wsgi.py", line 16, in <module>
web_1 | application = get_wsgi_application()
web_1 | File "/usr/local/lib/python3.5/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
web_1 | django.setup(set_prefix=False)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/__init__.py", line 27, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/apps/registry.py", line 85, in populate
web_1 | app_config = AppConfig.create(entry)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/apps/config.py", line 90, in create
web_1 | module = import_module(entry)
web_1 | File "/usr/local/lib/python3.5/importlib/__init__.py", line 126, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | ImportError: No module named 'rest_framework_swagger'
web_1 | [2017-03-16 00:31:35 +0000] [8] [INFO] Worker exiting (pid: 8)
web_1 | [2017-03-16 00:31:35 +0000] [5] [INFO] Shutting down: Master
web_1 | [2017-03-16 00:31:35 +0000] [5] [INFO] Reason: Worker failed to boot.
budgetproj_web_1 exited with code 3

Coda

It has been pointed out that not only is it redundant for the project to have two requirements.txt files (I agree, and when we find the poor soul who inadvertently added the second file, they’ll be sacked…from our volunteer project ;)…

…but also that if we’re encapsulating our project’s core application in a subdirectory (called budget_proj), then logically that is where the “legit” requirements.txt file belongs – not at the project’s root, just because that’s where you normally find requirements.txt in a repo.

Notes to self: merging my fork with upstream

It’s supposed to be as natural as breathing, right?  See a neat repository on Github, decide you want to use the code and make some minor changes to it right?  So you fork the sucker, commit some change, maybe push a PR back to the original repo?

Then, you want to keep your repo around – I dunno, maybe it’s for vanity, or maybe you’re continuing to make changes or use the project (and maybe, just maybe, you’ll find yourself wanting to push another PR in the future?).  Or maybe messages like this just bother your OCD:

(Screenshot: GitHub’s “This branch is xx commits behind…” banner)

Eventually, most developers will run into a situation in which they wish to re-sync their forked version of a project with the updates that have been made in “upstream”.

Should be dead easy, yes?  People are doing this all the time, yes?  Well, crap.  If that’s the case, then I’m an idiot because I’d tried this a half-dozen times and never before arrived at the beautiful message “This branch is even with…”.  So I figured I’d write it out (talk to the duck), and in so doing stumble on the solution.

GitHub help is supposed to help, e.g. Syncing a fork.  Which depends on Configuring a remote for a fork, and which is followed by Pushing to a remote.

Which for a foreign repo named e.g. “hackers/hackit” means the following stream of commands (after I’ve forked the repo on GitHub.com and git clone’d the repo on my local machine):

git remote add upstream git@github.com:hackers/hackit.git
git fetch upstream
git checkout master
git merge upstream/master

That last command will often result in a bunch of conflicts, if you’ve made any changes, e.g.:

git merge upstream/master
Auto-merging package.json
CONFLICT (content): Merge conflict in package.json
Auto-merging README.md
Auto-merging .travis.yml
CONFLICT (content): Merge conflict in .travis.yml
Auto-merging .babelrc
Automatic merge failed; fix conflicts and then commit the result.

At this point I temporarily abandon the command line and dive into my favourite editor (Visual Studio Code with a handful of extensions) to resolve the conflicting files.

Once I’d merged changes from both sources (mine and upstream), then it was a simple matter of the usual commands:

git add .
git commit -m "merged changes from upstream"
git push

And the result is…

(Screenshot: GitHub’s “This branch is xx commits ahead…” banner)

(No it wasn’t quite the “even” paradise, but I’ll take it.)

Aside

I somehow got myself into a state where I couldn’t get the normal commands to work.  For example, when I ran git push origin master, I got nowhere:

git push origin master
fatal: 'origin' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

Or git push:

git push
ERROR: Permission to hackers/hackit.git denied to MikeTheCanuck.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

Then when I added upstream…:

git remote add upstream git@github.com:hackers/hackit.git

…and ran git remote -v…:

git remote -v
upstream git@github.com:hackers/hackit.git (fetch)
upstream git@github.com:hackers/hackit.git (push)

…it appears I no longer had a reference to origin. (No idea how that happened, but hopefully these notes will help me not go astray again.)  Adding back the reference to origin seemed the most likely solution, but I didn’t get the kind of results I wanted:

git remote add origin git@github.com:mikethecanuck/hackit.git
git remote -v
origin git@github.com:mikethecanuck/hackit.git (fetch)
origin git@github.com:mikethecanuck/hackit.git (push)
upstream git@github.com:hackers/hackit.git (fetch)
upstream git@github.com:hackers/hackit.git (push)
git push origin master
To github.com:mikethecanuck/hackit.git
 ! [rejected]        master -> master (fetch first)
error: failed to push some refs to 'git@github.com:mikethecanuck/hackit.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.


And when I pushed with no params, I went right back to the starting place:

git push
ERROR: Permission to hackers/hackit.git denied to MikeTheCanuck.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

(I finally rm -rf’d my forked repo, cloned it again, and started over – that’s how I got to the first part of the article.)

Update my Contacts with Python: using pyobjc, Contacts.app & vCards, Swift or my own two hands?

I’m still on a mission to update my iCloud Contacts using PyiCloud to consolidate the data I’ve retrieved from LinkedIn.  Last time I convinced myself to add an update_contact() function to a fork of PyiCloud’s contacts module, and so far I haven’t had any nibbles on the issue I filed in the PyiCloud project a couple of days ago.

I was looking further at the one possibly-working pattern in the PyiCloud project that appears to implement a write back to the iCloud APIs: the reminders module with its post() method.  What’s interesting to me is that in that method, the JSON submitted in the data parameter includes the key:value pair “etag”: None.  I gnashed my teeth over how to construct a valid etag in my last post, and this code implies to me (assuming it’s still valid and working against the Reminders API) that the etag value is optional (well, the key must be specified, but the complicated value may not be needed).

Knowing that this sounds too easy, I watched a new Reminder getting created through the icloud.com web client, and sure enough Chrome Dev Tools shows me that in the Request Payload, etag is set to null.  Which really tells me nothing now about the requirement for the Contacts API…
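For what it’s worth, that “etag”: None really does go out as a JSON null on the wire – it’s just Python’s serialization at work:

```python
import json

# A dict value of None serializes to JSON null, matching what Chrome
# Dev Tools shows in the web client's Request Payload. (The payload
# fields here are illustrative, not the full Reminders schema.)
payload = {"title": "Test reminder", "etag": None}
print(json.dumps(payload))  # → {"title": "Test reminder", "etag": null}
```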

Arrested Development

Knowing that this was going to be a painful brick wall to climb, I decided to pair up with a python expert to look for ways to dig out from this deep, dark hole.  Lucky me, I have a good relationship with the instructor from my python class from late last year.  We talked about where I am stuck and what he’d recommend I do to try to break through this issue.

His thinking?  He immediately abandoned the notion of deciphering an undocumented API and went looking around the web for docs and alternatives.  Turns out there are a couple of options:

  1. Apple has in its SDKs a Contacts framework that supports Swift and Objective-C
  2. There are many implementations of Python & other languages that access the MacOS Contacts application (Contacts.app)

Contacts via Objective-C on MacOS

  • Contacts Framework is available in XCode
  • There appears to be a bidirectional bridge between Python and Objective-C
  • There is further a wrapper for the Contacts framework (which gets installed when you run pip install pyobjc)
  • But sadly, there is nothing even resembling a starter kit example script for instantiating and using the Contacts framework wrapper

Contacts via Contacts.app on MacOS

  • We found a decent-looking project (VObject) that purports to access vCard files, which is the underlying data layout for import/export from Contacts.app
  • And another long-lived project (vcard) for validating vCards
  • This means I would have to manually import vCard file(s) into Contacts.app, and would still have to figure out how Contacts.app knows how to match/overwrite an imported Contact with an existing Contact (or I’ll always be backing up, deleting and importing)
  • HOWEVER, in exploring the content of my Contacts.app and comparing to what I have in my iPhone Contacts, there’s definitely something extra going on here
    • I have at least one contact displayed in Contacts.app who is neither listed in my iPhone/iCloud contacts nor Google Contacts – given the well-formed LinkedIn data in the contact record, I’m guessing this is being implicitly included via Internet Accounts (the LinkedIn account configured here):
      (Screenshot: the LinkedIn account configured in MacOS Internet Accounts)
    • What would happen if I imported a vCard with the same UID (the iCloud UUID)?
    • What would happen if I imported a vCard that exists in both iCloud and LinkedIn – would the iCloud (U)UID correctly match and merge the vCard to the right contact, or would we get a duplicate?
  • Here at least I see others acknowledge that it’s possible to create non-standard types for ADR, TEL (and presumably email and URL types, if they’re different).
  • Watch out: if you have any non-ASCII characters in your Address Book, exporting will generate the output as UTF-16.
  • Watch out: here’s a VObject gotcha.
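The UTF-16 export gotcha above can be demonstrated without any vCard library at all (the sample card is made up):

```python
# Contacts.app exports vCards as UTF-16 when any field contains
# non-ASCII characters; a reader that assumes UTF-8 will choke.
sample = "BEGIN:VCARD\nVERSION:3.0\nFN:Zoë Example\nEND:VCARD\n"
exported = sample.encode("utf-16")  # what the Address Book export may hand you

try:
    exported.decode("utf-8")
except UnicodeDecodeError:
    pass  # the naive UTF-8 assumption fails on the UTF-16 BOM

# Decoding with the right codec recovers the card intact.
assert exported.decode("utf-16") == sample
```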

Crazy Talk: Swift?

  • I *could* go learn enough Swift to interface with the JSON data I construct in Python
  • There’s certainly a plethora of articles (iOS-focused) and tutorials to help folks use the Contacts framework via Swift – which all seem to assume you want to build a UI app (not just a script) – I guess I understand the bias, but boy do I feel left out just wanting to create a one-time-use script to make sure I don’t fat-finger something and lose precious data (my wetware memory is lossy enough as it is)

Conclusion: Park it for now

What started out as a finite-looking piece of work to pull LinkedIn data into my current contacts of record turned into a never-ending series of questions, murky code pathfinding, and band-aiding multiple technologies together to do something that should ostensibly be fairly straightforward.

Given that the best options I have at this point are (a) reverse-engineer an undocumented Apple API, (b) try to leverage an Objective-C bridge that *no one* else has tried to use for this new Contacts framework, or (c) decipher how Contacts.app interacts in the presence of vCards and all the interlocking contacts services (iCloud, Google, LinkedIn, Facebook)…I’m going to step away from this for a bit, let my brain tease apart what I’m willing to do for fun and how much more effort I’m willing to put in for fun/”the community”, and whether I’ve crossed The Line for a one-time effort and should just manually enter the data myself.

Occupied Neurons, February edition

Threaded messaging comes to Slack

https://slackhq.com/threaded-messaging-comes-to-slack-417ffba054bd#.no3gqihm5

Not a freakin’ moment too soon. One of my all-time top complaints about IRC-style chat (HipChat, Slack) is the impossible-to-skim-for-relevancy problem – when there are dozens of messages a day, all of them treated with the same flat hierarchy of information, how do you figure out which to ignore without reading each one (or just declaring bankruptcy on a regular basis)?

Docker in Production: a History of Failure

https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/

I read this article a couple of months ago and had the hardest time tracking it down again. It’s inflammatory, strident and probably over-emphasizes the problems vs. benefits…BUT, I still think it’s a good read. We technologists need to pursue new technologies with both eyes wide open, so we can mitigate the risks, especially when problems arise.

Top JavaScript Frameworks and Topics to Learn in 2017

https://medium.com/javascript-scene/top-javascript-frameworks-topics-to-learn-in-2017-700a397b711

Simon Sinek on Millennials in the Workplace

What Makes a Team Great

http://www.barryovereem.com/what-makes-a-team-great/

The Greatest Sales Deck I’ve Ever Seen

https://www.linkedin.com/pulse/greatest-sales-deck-ive-ever-seen-andy-raskin

The Most Popular DevOps Stories of 2016

https://medium.com/@eon01/the-most-popular-devops-stories-in-2016-954d10698d67#.7c13xaimk

Update my Contacts with Python: thinking about how far to extend PyiCloud to enable PUT request?

I’m on a mission to use PyiCloud to update my iCloud Contacts with data I’m scraping out of LinkedIn, as you see in my last post.

From what I can tell, PyiCloud doesn’t currently implement support for editing existing Contacts.  I’m a little out of my depth here (constructing lower-level requests against an undocumented API) and while I’ve opened an issue with PyiCloud (on the off-chance someone else has dug into this), I’ll likely have to roll up my sleeves and brute force this on my own.

[What the hell does “roll up my sleeves” refer to anyway?  I mean, I get the translation, but where exactly did this start?  Was this something that blacksmiths did, so they didn’t burn the cuffs of their shirts?  Who wears a cuffed shirt when blacksmithing?  Why wouldn’t you go shirtless when you’re going to be dripping with sweat?  Why does one question always lead to a half-dozen more…?]

Summary: What Do I Know?

  • LinkedIn’s Contacts API can dump most of the useful data about each of your own Connections – connectionDate, profileImageUrl, company, title, phoneNumbers plus Tags (until this data gets EOL’d)
  • LinkedIn’s User Data Archive can supplement with email address (for the foreseeable) and Notes and Tags (until this data gets EOL’d)
  • I’ve figured out enough code to extract all the Contacts API data, and I’m confident it’ll be trivial to match the User Data Archive info (slightly less trivial when those fields are already populated in the iCloud Contact)
  • PyiCloud makes it darned easy to successfully authenticate and read in data from the iCloud contacts – which means I have access to the contactID for existing iCloud Contacts
  • iCloud appears to use an idempotent PUT request to write changes to existing Contacts, so that as long as all required data/metadata is submitted in the request, it should be technically feasible to push additional data into my existing Contacts
  • It appears there are few if any required fields in any iCloud Contact object – the fields I have seen submitted for an existing Contact include firstName, middleName, lastName, prefix, suffix, isCompany, contactId and etag – and I’m not convinced that any but contactId are truly necessary (but instead merely sent by the iCloud.com web client out of “habit”)
  • The PUT operation includes a number of parameters on the request’s querystring:
    • clientBuildNumber
    • clientId
    • clientMasteringNumber
    • clientVersion
    • dsid
    • method
    • prefToken
    • syncToken
  • There are a large number of cookies sent in the request:
    • X_APPLE_WEB_KB–QNQ-TAKYCIDWSAXU3JXP7DXMBG
    • X-APPLE-WEBAUTH-HSA-TRUST
    • X-APPLE-WEBAUTH-LOGIN
    • X-APPLE-WEBAUTH-USER
    • X-APPLE-WEBAUTH-PCS-Cloudkit
    • X-APPLE-WEBAUTH-PCS-Documents
    • X-APPLE-WEBAUTH-PCS-Mail
    • X-APPLE-WEBAUTH-PCS-News
    • X-APPLE-WEBAUTH-PCS-Notes
    • X-APPLE-WEBAUTH-PCS-Photos
    • X-APPLE-WEBAUTH-PCS-Sharing
    • X-APPLE-WEBAUTH-VALIDATE
    • X-APPLE-WEB-ID
    • X-APPLE-WEBAUTH-TOKEN

Questions I have that (I believe) need an answer

  1. Are any of the PUT request’s querystring parameters established per-session, or are they all long-lived “static” values that only change either per-user or per-version of the API?
  2. How many of the cookies are established per-user vs per-session?
  3. How many of the cookies are being marshalled already by PyiCloud?
  4. How many of the cookies are necessary to successfully PUT a Contact?
  5. How do I properly add the request payload to a web request using the PyiCloud functions?  How’s about if I have to drop down to the requests package?

So let’s run these down one by one (to the best of my analytic ability to spot the details).

(1) PUT request querystring parameter lifetime

When I examine the request parameters submitted on two different days (but using the same Chrome process) or across two different browsers (but on the same day), I see the following:

  1. clientBuildNumber is the same (16HProject79)
  2. clientMasteringNumber is the same (16H71)
  3. clientVersion is the same (2.1)
  4. dsid is the same (197715384)
  5. method is obviously the same (PUT)
  6. prefToken is the same (914266d4-387b-4e13-a814-7e1b29e001c3)
  7. clientId uses a different UUID (C1D3EB4C-2300-4F3C-8219-F7951580D3FD vs. 792EFA4A-5A0D-47E9-A1A5-2FF8FFAF603A)
  8. syncToken is somewhat different (DAVST-V1-p28-FT%3D-%40RU%3Dafe27ad8-80ce-4ba8-985e-ec4e365bc6d3%40S%3D1432 vs. DAVST-V1-p28-FT%3D-%40RU%3Dafe27ad8-80ce-4ba8-985e-ec4e365bc6d3%40S%3D1427)
    • which if iCloud is using standard URL encoding translates to DAVST-V1-p28-FT=-@RU=afe27ad8-80ce-4ba8-985e-ec4e365bc6d3@S=1427
    • which means the S variable varies and nothing else
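That decoding is just standard percent-decoding, e.g. with Python’s urllib (the token value is the one captured above):

```python
from urllib.parse import unquote

# The syncToken arrives percent-encoded on the querystring;
# unquoting it exposes the RU and S fields.
raw = "DAVST-V1-p28-FT%3D-%40RU%3Dafe27ad8-80ce-4ba8-985e-ec4e365bc6d3%40S%3D1427"
decoded = unquote(raw)
print(decoded)
# → DAVST-V1-p28-FT=-@RU=afe27ad8-80ce-4ba8-985e-ec4e365bc6d3@S=1427
```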

Looking at the PyiCloud source, I can find places where PyiCloud generates nearly all the params:

  • base.py: clientBuildNumber (14E45), dsid (from server’s authentication response), clientId (a fresh UUID on each session)
  • contacts.py: clientVersion (2.1), prefToken (from the refresh_service() function), syncToken (from the refresh_service() function)

Since the others (clientMasteringNumber, method) are static values, there are no mysteries to infer in generating the querystring params, just code to construct.
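As a sketch of that construction – the static values are the ones captured above, but the function name and shape are my own, not PyiCloud’s:

```python
import uuid

def build_contacts_params(dsid, pref_token, sync_token):
    """Assemble the querystring params observed on the Contacts PUT.

    dsid, prefToken and syncToken come from the authenticated session;
    the rest appear static per client version (values as captured).
    """
    return {
        "clientBuildNumber": "16HProject79",
        "clientMasteringNumber": "16H71",
        "clientVersion": "2.1",
        "clientId": str(uuid.uuid4()).upper(),  # fresh UUID each session
        "dsid": dsid,
        "method": "PUT",
        "prefToken": pref_token,
        "syncToken": sync_token,
    }
```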

Further, I notice that the contents of syncToken is nearly identical to the etag in the request payload:

syncToken: DAVST-V1-p28-FT=-@RU=afe27ad8-80ce-4ba8-985e-ec4e365bc6d3@S=1436
etag: C=1435@U=afe27ad8-80ce-4ba8-985e-ec4e365bc6d3

This means not only that (a) the client and/or server are incrementing some value on some unknown cadence or stepping function, but also that (b) the headers and the payload have to both contain this value.  I don’t know if any code in PyiCloud has performed this (b) kind of coordination elsewhere, but I haven’t noticed evidence of it in my reviews of the code so far.

It should be easy enough to extract the RU and S param values from syncToken and plop them into the etag’s U and C params.
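A hedged sketch of that extraction – the offset subtracted from S is a guess, since the exact relationship between S and C is still unknown:

```python
import re

def etag_from_synctoken(sync_token, offset=1):
    """Build an etag candidate from a decoded syncToken.

    Extracts the RU (a UUID) and S (an integer) fields and recombines
    them as C=<S - offset>@U=<RU>. The right offset has been observed
    anywhere from 1 to 7, so this is a guess, not a verified rule.
    """
    match = re.search(r"@RU=([0-9a-f-]+)@S=(\d+)", sync_token)
    if not match:
        raise ValueError("unrecognized syncToken format: %r" % sync_token)
    ru, s = match.group(1), int(match.group(2))
    return "C=%d@U=%s" % (s - offset, ru)
```

Fed the syncToken captured above, it reproduces the paired etag exactly (with offset=1).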

ISSUE

The only remaining question is, does etag’s C param get strongly validated at the server (i.e. not only that it exists, and is a four-digit number, but that its value is strongly related to syncToken’s S param)?  And if so, what exactly is the algorithm that relates C to S?  In my anecdotal observations, I’ve noticed they’re always slightly different, from off-by-one to as much as a difference of 7.

(2) How many cookies are established per-session?

Of all the cookies being tracked, only these are identical from session to session:

  • X-APPLE-WEBAUTH-USER
  • X-APPLE-WEB-ID

The rest seem to start with the same string but diverge somewhere in the middle, so it’s safe to say each cookie changes from session to session.


(3) How many cookies are marshalled by PyiCloud?

I can’t find any of these cookies being generated explicitly, but I did notice the base.py module mentions X-APPLE-WEBAUTH-HSA-TRUST in a comment (“Re-authenticate, which will both update the 2FA data, and ensure that we save the X-APPLE-WEBAUTH-HSA-TRUST cookie.”) and fingers X-APPLE-WEBAUTH-TOKEN in an exception thrower (“reason == ‘Missing X-APPLE-WEBAUTH-TOKEN cookie'”), so presumably most or all of these are being similarly handled.

I tried for a bit to get PyiCloud to cough up the cookies sent down from iCloud during initial session setup, but didn’t get anywhere.  I also tried to figure out where they’re being cached on my filesystem, but I haven’t yet figured out where the user’s tmp directory lives on MacOS.

(4) How many cookies are necessary to successfully PUT a Contact?

This’ll have to wait to be answered until we actually start throwing code at the endpoint.

For now, it’s probably reasonable to assume that PyiCloud is able to automatically capture and replay all cookies needed by the Contacts endpoint, until we run into otherwise-unexplained errors.

(5) How to add the request payload to endpoint requests?

I can’t seem to find any pattern in the PyiCloud code that already POSTs or PUTs a dictionary of data payload back to the iCloud services, so that may be out.

I can see that it should be trivial to attach the payload data to a requests.put() call, if we ignore the cookies and preceding authentication for a second.  If I’m reading the requests quickstart correctly, the PUT request could be formed like this (params carries the querystring, data carries the body):

import requests
import json
url = 'https://p28-contactsws.icloud.com/co/contacts/card/'
url_params = {"key1" : "value1", "key2" : "value2",  ...}
data_payload = {"contacts":[{contact_attributes_dictionary}]}
r = requests.put(url, params = url_params, data = json.dumps(data_payload))

Where key(#s) includes clientBuildNumber, clientId, clientMasteringNumber, clientVersion, dsid, method, prefToken and syncToken (these ride on the request’s querystring), and contact_attributes_dictionary includes whichever fields exist or are being added to my Contacts (e.g. firstName, lastName, phones, emailAddresses, contactId) plus the possibly-troublesome etag (these form the request body).

What feels tricky to me is to try to leverage PyiCloud as far as I can and then drop to the requests package only for generating the PUT requests back to the server.  I have a bad feeling I might have to re-implement much of the contacts.py and/or base.py modules to actually complete authentication + cookies + PUT request successfully.

I do see the same pattern used for the authentication POST, for example (in base.py’s PyiCloudService class’ authenticate() function):

req = self.session.post(
 self._base_login_url,
 params=self.params,
 data=json.dumps(data)
 )

Extension ideas

This all leads me to the conclusion that, if PyiCloud is already handling authentication & cookies correctly, it shouldn’t be too hard to add a new function to the contacts.py module and generate the URL params and the data payload.

update_contact()

e.g. define an update_contact() function:

def update_contact(self, contact_dict):
    # Sketch only (untested). Assumes re and json are imported at module
    # scope, and that refresh_service() has populated syncToken in self.params.
    sync_token = self.params["syncToken"]
    # Pull out the values of the RU and S params
    ru_param, s_param = re.search(r"@RU=([0-9a-f-]+)@S=(\d+)", sync_token).groups()
    # Generate the etag; the exact offset between S and C is still unknown
    contact_dict["etag"] = "C=" + str(int(s_param) - 1) + "@U=" + ru_param
    contacts_url = "https://p28-contactsws.icloud.com/co/contacts/card/"
    return self.session.post(
        contacts_url,
        params=self.params,
        data=json.dumps({"contacts": [contact_dict]}),
    )

The most interesting/scary part of all this is that if the user [i.e. anyone but me, and probably even me as well] wasn’t careful, they could easily overwrite the contents of an existing iCloud Contact with a PUT that wiped out existing attributes of the Contact, or overwrote attributes with the wrong data.  For example, what if in generating the contact_dict, they forgot to add the lastName attribute, or they mistakenly swapped the lastName attribute for the firstName attribute?

It makes me want to wrap this function in all sorts of warnings and caveats, which are mostly ignored and aren’t much help to those who fat-finger their code.  And even to generate an offline, client-side backup of all the existing Contacts before making any changes to iCloud, so that if things went horribly wrong, the user could simply restore the backup of their Contacts and at least be no worse than when they started.

edit_contact()

It might also be advisable to write an edit_contact(self, contact_dict, attribute_changes_dict) helper function that at least:

  • takes in the existing Contact (presumably as retrieved from iCloud)
  • enumerates the existing attributes of the contact
  • simplifies the formatting of some of the inner array data like emailAddresses and phones, so that these especially don’t get accidentally wiped out
  • (applies some other validation rules – e.g. limits the attributes written to contact_dict to those non-custom attributes already available in iCloud, i.e. tries to help the user not overwrite existing data unless they explicitly set a flag)
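As a sketch of that last validation rule (the function name and behavior here are my own invention, not PyiCloud’s): refuse to clobber a populated attribute unless the caller explicitly opts in:

```python
def edit_contact(contact_dict, attribute_changes_dict, allow_overwrite=False):
    # Merge the requested changes into a copy of the existing record,
    # refusing to overwrite populated attributes unless explicitly allowed.
    merged = dict(contact_dict)
    for key, value in attribute_changes_dict.items():
        if merged.get(key) and merged[key] != value and not allow_overwrite:
            raise ValueError("refusing to overwrite existing %r" % key)
        merged[key] = value
    return merged
```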

And all of this hand-wringing and risk management would be reduced if the added code implemented some kind of visual UI so that the user could see exactly what they were about to irreversibly commit to their contacts.  It wouldn’t eliminate the risk, and it would be terribly irritating to page through dozens of screens of data for a bulk update (in the hopes of noticing one problem among dozens of false positives), but it would be great to see a side-by-side comparison between “data already in iCloud” and “changes you’re about to make”.

At which point, it might just be easier for the user to manually update their Contacts using iCloud.com.

Conclusion

I’m not about to re-implement much of the logic already available in iCloud.com.

I don’t even necessarily want to see my code PR’d into PyiCloud – at least and especially not without a serious discussion of the foreseeable consequences *and* how to address them without completely blowing up downstream users’ iCloud data.

But at the same time, I can’t see a way to insulate my update_contact() function from the existing PyiCloud package, so it looks like I’m going to have to fork it and make changes to the contacts module.

Update my Contacts with Python: exploring LinkedIn’s and iCloud’s Contact APIs

TL;DR Wow is it an adventure to decipher how to interact with undocumented web services like I found on LinkedIn and iCloud.  Migrating data from LinkedIn to iCloud looks possible, but I got stuck at implementing the PUT operation to iCloud using Python.

Background: Because I have a shoddy memory for details about all the people I meet, and because LinkedIn appears to be de-prioritizing their role as a professional contact manager, I want to make my iPhone Contacts my system of record for all data about people I meet professionally.  Which means scraping as much useful data as possible from LinkedIn and uploading it to iCloud Contacts (since my people-centric data is currently centered more around my iPhone than a Google Contacts approach).

In our last adventure, I stumbled across a surprisingly well-formed and useful API for pulling data from LinkedIn about my Connections:

https://www.linkedin.com/connected/api/v2/contacts?start=40&count=10&fields=id%2Cname%2CfirstName%2ClastName%2Ccompany%2Ctitle%2Clocation%2Ctags%2Cemails%2Csources%2CdisplaySources%2CconnectionDate%2CsecureProfileImageUrl&sort=CREATED_DESC&_=1481999304007

Available Data

Which upon inspection of the results, gives me a lot of the data I was hoping to import into my iCloud Contacts:

  • crucial: Date we first connected on LinkedIn (“connectionDate” as time-since-epoch), Tags (“tags” as list of dictionaries), Picture (“profileImageUrl” as URI), first name (“firstName” as string), last name (“lastName” as string)
  • want: current company (“company” as dictionary), current title (“title” as string)
  • metadata: phone number (“phoneNumbers” as dictionary)

What doesn’t it give?  Notes, Twitter ID, web site addresses, previous companies, email address.  [What else does it give that could be useful?  LinkedIn profile URL (“profileUrl” as the permanent URL, not the “friendly URL” that many of us have generated such as https://www.linkedin.com/in/mikelonergan.  I can see how it would be helpful at a meetup to browse through my iPhone contacts to their LinkedIn profile to refresh myself on their work history.  Creepy, desperate, but something I’ve done a few times when I’m completely blanking.]

What can I get from the User Data Archive?  Notes are found in the Contacts.csv, and email address is found in Connections.csv.  Matching those two files’ data together with what I can pull from the Contacts API shouldn’t be a challenge (concat firstName + lastName, and among the data set of my 684 contacts, I doubt I’ll find any collisions).  Then matching those records to my iCloud Contacts *should* be just a little harder (I expect to match 50% of my existing contacts by emailAddress, then another fraction by phone number; the rest will likely be new records for my Contacts, with maybe one or two that I’ll have to merge by hand at the end).
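That matching cascade (email address first, then phone number, then concatenated name) might look like the sketch below; the record shapes are simplified assumptions based on the fields described above, not the actual API structures:

```python
def match_contact(linkedin_record, icloud_contacts):
    # Try to match a LinkedIn record against iCloud contacts by email
    # address, then phone number, then firstName + lastName.
    emails = set(linkedin_record.get("emails", []))
    phones = set(linkedin_record.get("phones", []))
    name = (linkedin_record.get("firstName", ""), linkedin_record.get("lastName", ""))
    for contact in icloud_contacts:
        if emails & set(contact.get("emailAddresses", [])):
            return contact
    for contact in icloud_contacts:
        if phones & set(contact.get("phoneNumbers", [])):
            return contact
    for contact in icloud_contacts:
        if (contact.get("firstName", ""), contact.get("lastName", "")) == name:
            return contact
    return None  # likely a new record for my Contacts
```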

Planning the “tracer bullet”

So what’s the smallest piece of code I can pull together to prove this scenario actually works?  It’ll need at least these features (assumes Python):

  1. can authenticate to LinkedIn via at least one supported protocol (e.g. OAuth 2.0)
  2. can pull down the first 10 JSON records from Contacts API and hold them in a list
  3. can enumerate the First + Last Name and pull out “title” for that record
  4. can authenticate to iCloud
    • Note: I may need to disable 2-factor authentication that is currently enabled on my account
  5. can find a matching First + Last Name in my iCloud Contacts
  6. can write the title field to the iCloud contact
    • Note: I’m worried least about existing data for the title field
  7. can upload the revised record to iCloud so that it replicates successfully to my iPhone

That should cover all the essential operations for the least-complicated data, without having to worry about edge cases like “what if the contact doesn’t exist in iCloud” or “what if there’s already data in the field I want to fill”.

Step 1: authenticate to LinkedIn

There are plenty of packages and modules on Github for accessing LinkedIn, but the ones I’ve evaluated all use the REST APIs, with their dual-secrets authentication mechanism, to get at the data.  (e.g. this one, this one, that one, another one).

Or am I making this more complicated than it is?  This python module simply used username + password in its call to an HTTP ‘endpoint’.  Let’s assume that judicious use of the requests package is sufficient for my needs.

I thought I’d build an anaconda kernel and a jupyter notebook to experiment with the modules I’m looking at.  But when I attempted to install the requests package in my new Anaconda environment, I got back this error:

LinkError:
Link error: Error: post-link failed for: openssl-1.0.2j-0

Quick search turns up a couple of open conda issues that don’t give me any immediate relief. OK, forget this for a bit – the “root” kernel will do fine for the moment.

Next let’s try this code and see what we get back:

import requests
r = requests.get('https://www.linkedin.com/connected/api/v2/contacts?start=40&count=10&fields=id%2Cname%2CfirstName%2ClastName%2Ccompany%2Ctitle%2Clocation%2Ctags%2Cemails%2Csources%2CdisplaySources%2CconnectionDate%2CsecureProfileImageUrl&sort=CREATED_DESC&_=1481999304007', auth=('mikethecanuck@gmail.com', 'linkthis'))
r.status_code

Output is simply “401”.  Dang, authentication wasn’t *quite* that easy.

So I tried that URL in an incognito tab, and it displays this to me without an existing auth cookie:

{"status":"Member is not Logged in."}

And as soon as I open another tab in that incognito window and authenticate to the linkedin.com site, the first tab with that contacts query returns the detailed JSON I was expecting.

Digging deeper, it appears that when I authenticate to https://www.linkedin.com through the incognito tab, I receive back one cookie labelled “lidc”, and that an “lidc” cookie is also sent to the server on the successful request to the contacts API.

But setting the cookie manually with the value returned from a previous request still leads to a 401 response:

url = 'https://www.linkedin.com/connected/api/v2/contacts?start=40&count=10&fields=id%2Cname%2CfirstName%2ClastName%2Ccompany%2Ctitle%2Clocation%2Ctags%2Cemails%2Csources%2CdisplaySources%2CconnectionDate%2CsecureProfileImageUrl&sort=CREATED_DESC&_=1481999304007'
cookies = dict(lidc="b=OGST00:g=43:u=1:i=1482261556:t=1482347956:s=AQGoGetJeZPEDz3sJhm_2rQayX5ZsILo")
r2 = requests.get(url, cookies=cookies)

I tried two other approaches that people have used in the past (some even successfully, with certain pages on LinkedIn), but eventually I decided that I was getting ratholed on reverse-engineering an undocumented (and more than likely unusually-constructed) API, when I could quite easily dump the data out of the API by hand and then do the rest of my work successfully.  (Yes, I know that disqualifies me as a ‘real coder’, but I think we both know I was never going to win that medal anyway; I’ll settle for the “results-oriented” medal rather than the one for “pedantically chasing my tail”.)

Thus, knowing that I’ve got 684 connections on LinkedIn (saw that in the footer of a response), I submitted the following queries and copy-pasted the results into 4 separate .JSON files for offline processing:

https://www.linkedin.com/connected/api/v2/contacts?start=0&count=200&fields=id%2Cname%2CfirstName%2ClastName%2Ccompany%2Ctitle%2Clocation%2Ctags%2Cemails%2Csources%2CdisplaySources%2CconnectionDate%2CsecureProfileImageUrl&sort=CREATED_DESC&_=1481999304007

https://www.linkedin.com/connected/api/v2/contacts?start=200&count=200&fields=id%2Cname%2CfirstName%2ClastName%2Ccompany%2Ctitle%2Clocation%2Ctags%2Cemails%2Csources%2CdisplaySources%2CconnectionDate%2CsecureProfileImageUrl&sort=CREATED_DESC&_=1481999304007

https://www.linkedin.com/connected/api/v2/contacts?start=400&count=200&fields=id%2Cname%2CfirstName%2ClastName%2Ccompany%2Ctitle%2Clocation%2Ctags%2Cemails%2Csources%2CdisplaySources%2CconnectionDate%2CsecureProfileImageUrl&sort=CREATED_DESC&_=1481999304007

https://www.linkedin.com/connected/api/v2/contacts?start=600&count=200&fields=id%2Cname%2CfirstName%2ClastName%2Ccompany%2Ctitle%2Clocation%2Ctags%2Cemails%2Csources%2CdisplaySources%2CconnectionDate%2CsecureProfileImageUrl&sort=CREATED_DESC&_=1481999304007

Oddly, the four sets of results contain 196, 198, 200 and 84 items – they assert that I have 684 connections, but can only return 678 of them?  I guess that’s one of the consequences of dealing with a “free” data repository (even if it started out as mine).
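The offsets I used, and the merge/dedupe step for the four result sets, can be sketched as pure helpers (deduping by the “id” field, on the assumption that it is stable across pages):

```python
def page_offsets(total, page_size=200):
    # The start= offsets needed to page through `total` records.
    return list(range(0, total, page_size))

def dedupe_by_id(pages):
    # Flatten the per-page record lists, dropping duplicates by 'id'.
    seen, merged = set(), []
    for page in pages:
        for record in page:
            if record["id"] not in seen:
                seen.add(record["id"])
                merged.append(record)
    return merged
```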

Step 2: read the JSON file and parse a list of connections

I’m sure I could be more efficient than this, but as far as getting a working result, here’s the arrangement of code I used to start accessing structured list data from the Contacts API output I shunted to a file:

import json

contacts_file = open("Connections-API-results.json")
contacts_data = contacts_file.read()
contacts_json = json.loads(contacts_data)
contacts_list = contacts_json['values']
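Since the API output actually landed in four separate files, a small helper can concatenate their ‘values’ lists (the filenames are placeholders for whatever the saved files are called):

```python
import json

def load_connections(paths):
    # Concatenate the 'values' lists from each saved JSON response file.
    contacts = []
    for path in paths:
        with open(path) as f:
            contacts.extend(json.load(f)["values"])
    return contacts
```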

Step 3: pulling data out of the list of connections

It turns out this is pretty easy, e.g.:

for contact in contacts_list:
    print(contact['name'], contact['title'])

Messing around a little further, trying to make sense of the connectionDate value from each record, I found that this returns an ISO 8601-style date string that I can use later:

import time
print(time.strftime("%Y-%m-%d", time.localtime(contacts_list[15]['connectionDate'] / 1000)))

e.g. for the record at index “15”, that returned 2007-03-15.

Data issue: it turns out that not all records have a profileImageUrl key (e.g. for those oddball security geeks among my contacts who refuse to publish a photo on their LinkedIn profile), so I got to handle my first expected exception 🙂

Assembling all the useful data I wanted for all my Connections into a single list of dictionaries, I was able to make the following work (as you can find in my repo):

stripped_down_connections_list = []

for contact in contacts_list:
    name = contact['name']
    first_name = contact['firstName']
    last_name = contact['lastName']
    title = contact['title']
    company = contact['company']['name']
    date_first_connected = time.strftime("%Y-%m-%d", time.localtime(contact['connectionDate'] / 1000))

    picture_url = ""
    try:
        picture_url = contact['profileImageUrl']
    except KeyError:
        pass

    tags = []
    for tag in contact['tags']:
        tags.append(tag['name'])

    phone_number = ""
    try:
        phone_number = {"type": contact['phoneNumbers'][0]['type'],
                        "number": contact['phoneNumbers'][0]['number']}
    except IndexError:
        pass

    stripped_down_connections_list.append({"firstName": first_name,
                                           "lastName": last_name,
                                           "title": title,
                                           "company": company,
                                           "connectionDate": date_first_connected,
                                           "profileImageUrl": picture_url,
                                           "tags": tags,
                                           "phoneNumber": phone_number})

Step 4: Authenticate to iCloud

For this step, I’m working with the pyicloud package, hoping that they’ve worked out both (a) Apple’s two-factor authentication and (b) read/write operations on iCloud Contacts.

I setup yet another jupyter notebook and tried out a couple of methods to import PyiCloud (based on these suggestions here), at least one of which does a fine job.  With picklepete’s suggested 2FA code added to the mix, I appear to be able to complete the authentication sequence to iCloud.

APPLE_ID = 'REPLACE@ME.COM'
APPLE_PASSWORD = 'REPLACEME'

import sys
from importlib.machinery import SourceFileLoader

foo = SourceFileLoader("pyicloud", "/Users/mike/code/pyicloud/pyicloud/__init__.py").load_module()
api = foo.PyiCloudService(APPLE_ID, APPLE_PASSWORD)

if api.requires_2fa:
    import click
    print("Two-factor authentication required. Your trusted devices are:")

    devices = api.trusted_devices
    for i, device in enumerate(devices):
        print(" %s: %s" % (i, device.get('deviceName',
        "SMS to %s" % device.get('phoneNumber'))))

    device = click.prompt('Which device would you like to use?', default=0)
    device = devices[device]
    if not api.send_verification_code(device):
        print("Failed to send verification code")
        sys.exit(1)

    code = click.prompt('Please enter validation code')
    if not api.validate_verification_code(device, code):
        print("Failed to verify verification code")
        sys.exit(1)

Step 5: matching on First + Last with iCloud

Caveat: there are a number of my contacts who have appended titles, certifications etc to their lastName field in LinkedIn, such that I won’t be able to match them exactly against my cloud-based contacts.
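One mitigation is to normalize the lastName before comparing.  The regex below is a hypothetical heuristic of my own (strip everything after the first comma or semicolon, e.g. “, PhD, CISSP”), not anything either service provides:

```python
import re

def normalize_last_name(last_name):
    # Strip appended titles/certifications and lowercase for comparison.
    return re.sub(r"[,;].*$", "", last_name).strip().lower()
```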

I’m not even worried about this step, because I quickly got worried about…

Step 6: write to the iCloud contacts (?)

Here’s where I’m stumped: I don’t think the PyiCloud package has any support for non-GET operations against the iCloud Contacts service.  There appears to be support for POST in the Reminders module, but not in any of the other services modules (including Contacts).

So I sniffed the wire traffic in Chrome Dev Tools to see what’s being done when I make an update to any iCloud.com contact.  There are two possible operations: a POST method call for a new contact, or a PUT method call for an update to an existing contact.

Here’s the Request Payload for a new contact:

{"contacts":[{"contactId":"2EC49301-671B-431B-BC8C-9DE6AE15D21D","firstName":"Tony","lastName":"Stank","companyName":"Stark Enterprises","isCompany":false}]}

Here’s the Request Payload for an update to that existing contact (I added homepage URL):

{"contacts":[{"firstName":"Tony","lastName":"Stank","contactId":"2EC49301-671B-431B-BC8C-9DE6AE15D21D","prefix":"","companyName":"Stark Enterprises","etag":"C=1432@U=afe27ad8-80ce-4ba8-985e-ec4e365bc6d3","middleName":"","isCompany":false,"suffix":"","urls":[{"label":"HOMEPAGE","field":"http://stark.com"}]}]}

There are four requests being made for either type of change to iCloud contacts (at least via the iCloud.com web interface that I am using as a model for what the code should be doing):

  1. https://p28-contactsws.icloud.com/co/contacts/card/
  2. https://webcourier.push.apple.com/aps
  3. https://p28-contactsws.icloud.com/co/changeset
  4. https://feedbackws.icloud.com/reportStats

Here’s the details for these calls when I create a new Contact:

  1. Request URL: https://p28-contactsws.icloud.com/co/contacts/card/?clientBuildNumber=16HProject79&clientId=63D7078B-F94B-4AB6-A64D-EDFCEAEA6EEA&clientMasteringNumber=16H71&clientVersion=2.1&dsid=197715384&prefToken=914266d4-387b-4e13-a814-7e1b29e001c3&syncToken=DAVST-V1-p28-FT%3D-%40RU%3Dafe27ad8-80ce-4ba8-985e-ec4e365bc6d3%40S%3D1426
    Request Payload: {"contacts":[{"contactId":"E2DDB4F8-0594-476B-AED7-C2E537AFED4C","urls":[{"label":"HOMEPAGE","field":"http://apple.com"}],"phones":[{"label":"MOBILE","field":"(212) 555-1212"}],"emailAddresses":[{"label":"WORK","field":"johnny.appleseed@apple.com"}],"firstName":"Johnny","lastName":"Appleseed","companyName":"Apple","notes":"Dummy contact for iCloud automation experiments","isCompany":false}]}
  2. Request URL: https://p28-contactsws.icloud.com/co/changeset?clientBuildNumber=16HProject79&clientId=63D7078B-F94B-4AB6-A64D-EDFCEAEA6EEA&clientMasteringNumber=16H71&clientVersion=2.1&dsid=197715384&prefToken=914266d4-387b-4e13-a814-7e1b29e001c3&syncToken=DAVST-V1-p28-FT%3D-%40RU%3Dafe27ad8-80ce-4ba8-985e-ec4e365bc6d3%40S%3D1427
  3. Request URL: https://webcourier.push.apple.com/aps?tok=bc3dd94e754fd732ade052eead87a09098d3309e5bba05ed24272ede5601ae8e&ttl=43200
  4. Request URL: https://feedbackws.icloud.com/reportStats
    Request Payload: {"stats":[{"httpMethod":"POST","statusCode":200,"hostname":"www.icloud.com","urlPath":"/co/contacts/card/","clientTiming":395,"uncompressedResponseSize":14469,"region":"OR","country":"US","time":"Wed Dec 28 2016 12:13:48 GMT-0800 (PST) (1482956028436)","timezone":"PST","browserLocale":"en-us","statName":"contactsRequestInfo","sessionID":"63D7078B-F94B-4AB6-A64D-EDFCEAEA6EEA","platform":"desktop","appName":"contacts","isLiteAccount":false},{"httpMethod":"POST","statusCode":200,"hostname":"www.icloud.com","urlPath":"/co/changeset","clientTiming":237,"uncompressedResponseSize":2,"region":"OR","country":"US","time":"Wed Dec 28 2016 12:13:48 GMT-0800 (PST) (1482956028675)","timezone":"PST","browserLocale":"en-us","statName":"contactsRequestInfo","sessionID":"63D7078B-F94B-4AB6-A64D-EDFCEAEA6EEA","platform":"desktop","appName":"contacts","isLiteAccount":false}]}

I am 99% sure that the only request that actually changes the Contact data is the first one (https://p28-contactsws.icloud.com/co/contacts/card/), so I’ll ignore the other three calls from here on out.

Here’s the details of the first request when I edit an existing Contact:

Request URL: https://p28-contactsws.icloud.com/co/contacts/card/?clientBuildNumber=16HProject79&clientId=792EFA4A-5A0D-47E9-A1A5-2FF8FFAF603A&clientMasteringNumber=16H71&clientVersion=2.1&dsid=197715384&method=PUT&prefToken=914266d4-387b-4e13-a814-7e1b29e001c3&syncToken=DAVST-V1-p28-FT%3D-%40RU%3Dafe27ad8-80ce-4ba8-985e-ec4e365bc6d3%40S%3D1427
Request Payload: {"contacts":[{"lastName":"Appleseed","notes":"Dummy contact for iCloud automation experiments","contactId":"E2DDB4F8-0594-476B-AED7-C2E537AFED4C","prefix":"","companyName":"Apple","phones":[{"field":"(212) 555-1212","label":"MOBILE"}],"isCompany":false,"suffix":"","firstName":"Johnny","urls":[{"field":"http://apple.com","label":"HOMEPAGE"},{"label":"HOME","field":"http://johnny.name"}],"emailAddresses":[{"field":"johnny.appleseed@apple.com","label":"WORK"}],"etag":"C=1427@U=afe27ad8-80ce-4ba8-985e-ec4e365bc6d3","middleName":""}]}
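Notably, the etag in that payload looks derivable from the S and RU values embedded in the URL’s syncToken.  A hedged sketch of that parsing, inferred only from the captured traffic (the token format is not documented anywhere I can find):

```python
from urllib.parse import unquote

def etag_from_synctoken(sync_token):
    # URL-decoded, the token looks like "DAVST-V1-p28-FT=-@RU=<uuid>@S=<int>"
    # (an inference from the captured requests, not a documented format).
    parts = dict(p.split("=", 1) for p in unquote(sync_token).split("@") if "=" in p)
    return "C=%s@U=%s" % (parts["S"], parts["RU"])
```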

So here’s what’s puzzling me so far: both the POST (create) and PUT (edit) operations include a contactId parameter.  Its value is the same from POST to PUT (i.e. I believe that means it’s referencing the same record).  When I create a second new Contact, the contactId is different than the contactId submitted in the Request Payload for the first new Contact (so it’s presumably not a dummy value).  And yet when I look at the request/response for the initial page load when I click “+” and “New Contact”, I don’t see a request sent from the browser to the server (so the server isn’t sending down a contactId – not at that moment at least – perhaps it’s cached earlier?).

Explained another way, this is how I believe the sequence works (based on repeated analysis of the network traffic from Chrome to the iCloud endpoint and back):

  1. User loads icloud.com, Contacts page (#contacts), clicks “+” and selects “New Contact”
    • Browser sends no request, but rather builds the New Contact form from cached code
  2. User adds data and clicks the Done button for the new Contact
    • Browser sends POST request to https://p28-contactsws.icloud.com/co/contacts/card/ with a bunch of form data on the URL, a whole raft of cookies and the JSON request payload [including contactId=x]
    • Server sends response
  3. User clicks Edit on that new contact, updates some data and clicks Done
    • Browser sends PUT request to https://p28-contactsws.icloud.com/co/contacts/card/ with form data, cookies and JSON request payload [including the same contactId=x]
    • Server sends response

So the question is: if I’m creating a net-new Contact, how does the web client get a valid contactId that iCloud will accept?  Near as I can figure, digging through the javascript-packed.js this page uses, this is the function that generates a UUID at the client:

Contacts.Contact = Contacts.Record.extend({
    primaryKey: "contactId",
    contactId: CW.Record.attr(String, {
        defaultValue: function() {
            return CW.upperCaseUUID()
        }
    })
});

Using this function (IIUC):

UUID: function() {
    var e = new Array(36),
        t = 0,
        n = ["8", "9", "a", "b"];
    if (window.crypto && window.crypto.getRandomValues) {
        var r = new Uint8Array(18);
        crypto.getRandomValues(r);
        for (t = 0; t < 18; t++) e[t * 2 + 1] = (r[t] >> 4).toString(16), e[t * 2] = (r[t] & 15).toString(16);
        e[19] = n[r[9] >> 6]
    } else {
        while (t < 36) e[t] = (Math.random() * 16 | 0).toString(16), t++;
        e[19] = n[Math.random() * 4 | 0]
    }
    return e[8] = e[13] = e[18] = e[23] = "-", e[14] = "4", e.join("")
}

[Aside: I sincerely hope this is a standard library for UUID, not something Apple wrote themselves – just in case I ever need to generate iCloud-compatible UUIDs myself.]
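Reading it closely, the function above is just a hand-rolled version-4 UUID generator (note the hard-coded "4" and the 8/9/a/b variant digit), so Python’s standard library should produce an equivalent value:

```python
import uuid

def upper_case_uuid():
    # Python stand-in for the client-side CW.upperCaseUUID() call:
    # a random version-4 UUID, uppercased.
    return str(uuid.uuid4()).upper()
```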

Whoa – Pause

I need to take a step back and re-examine my goals and what I can specifically address.  I have learned a lot about both LinkedIn and iCloud, but I didn’t set out to recreate them, just find a way to make consistent use of the data I already have.

Parsing PDFs using Python

I’m part of a project that has a need to import tabular data into a structured database, from PDF files that are based on digital or analog inputs.  [Digital input = PDF generated from computer applications; analog input = PDF generated from scanned paper documents.]

These are the preliminary research notes I made for myself a while ago that I am now publishing for reference by other project members.  These are neither conclusive nor comprehensive, but they are directionally relevant.

I.e. the amount of work it takes to parse structured data from analog-input PDFs is a significant hurdle, not to be underestimated (this blog post was the single most awe-inspiring find I made).  The strongest possible recommendation from this research: GET AS MUCH OF THE DATA FROM DIGITAL SOURCES AS YOU CAN.

Packages/libraries/guidance

Evaluation of Packages

Possible issues

  • Encryption of the file
  • Compression of the file
  • Vector images, charts, graphs, other image formats
  • Form XObjects
  • Text contained in figures
  • Does text always appear in the same place on the page, or different every page/document?

PDF examples I tried parsing, to evaluate the packages

  • IRS 1040A
  • 2015-16-prelim-doc-web.pdf (Bellingham city budget)
    • Tabular data begins on page 30 (labelled Page 28)
    • PyPDF2 Parsing result: None of the tabular data is exported
    • SCARY: some financial tables are split across two pages
  • 2016-budget-highlights.pdf (Seattle city budget summary)
    • Tabular data begins on page 15-16 (labelled 15-16)
    • PyPDF2 Parsing result: this data parses out
  • FY2017 Proposed Budget-Lowell-MA (Lowell)
    • Financial tabular data starts at page 95-104, then 129-130, 138-139
    • More interesting are the small breakouts on subsequent pages e.g. 149, 151, 152, 162; 193, 195, 197
    • PyPDF2 Parsing result: all data I sampled appears to parse out

Experiment ideas

  • Build an example PDF for myself with XLS tables, and then see what comes out when the contents are parsed using one of these libraries
  • Build a script that spits out useful metadata about the document: which app/library generated it (e.g. Producer, Creator), size, # of pages
  • Build another script to verify there’s a non-trivial amount of ASCII/Unicode text in the document (I.e. so we confirm it doesn’t have to be OCR’d)
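The last experiment idea (“does this need OCR?”) reduces to a heuristic over whatever text the parsing library extracts from a page.  This sketch is library-agnostic (feed it the extracted text), and the character threshold is an arbitrary assumption:

```python
def needs_ocr(extracted_text, min_chars=100):
    # If a page yields too few printable, non-whitespace characters,
    # assume it's a scanned image that would need OCR.
    printable = [c for c in extracted_text if c.isprintable() and not c.isspace()]
    return len(printable) < min_chars
```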

Experiments tried