Tuesday, August 4, 2015

Custom elements for Chrome Apps APIs

Continuing with my Polymer Chrome App, I needed access to the Chrome Storage API. A quick search revealed only elements that hadn't been updated to Polymer 1.0, so I started creating my own element based on iron-localstorage.

The "problem" with elements that depend on Chrome Apps APIs is that you can't test/use them outside of Chrome Apps, so I went ahead and created some gulp tasks to make things easier for me.

The main idea of these tasks is to put the contents of demo or test, which you would normally run directly, together with all dependencies into one Chrome App that then uses demo/index.html or test/index.html as its main page.

First I take all the files relevant to the element itself plus the test/demo files, run the HTML files through crisper and put the result into the output folder components/my-element/ (following the layout of gh-pages for Polymer elements).
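A minimal sketch of what such a task can look like, assuming gulp-if and gulp-crisper are installed; the task name, globs and output path are illustrative, not the exact ones I use:

var gulp = require('gulp');
var gulpif = require('gulp-if');
var crisper = require('gulp-crisper');

// Copy the element plus its demo/test files, splitting every HTML file
// into .html + .js via crisper on the way (to satisfy the Chrome Apps CSP).
gulp.task('copy-element', function () {
  return gulp.src(['*.html', 'demo/**/*', 'test/**/*'], {base: '.'})
    .pipe(gulpif('*.html', crisper()))
    .pipe(gulp.dest('build/components/my-element'));
});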
All bower dependencies are run through crisper as well and put into the components folder.
For the Chrome App itself only two files are important.

manifest.json defines the necessary permissions (e.g. to use chrome.storage the "storage" permission is required).
main.js launches the test or demo page.
The gulp task copies those two files to the main output folder and changes the chrome.app.window.create call to point to the right file, e.g. at components/my-element/test/index.html
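To give an idea of what those two files contain, here's a hedged sketch; names and window options are assumptions:

manifest.json:

{
  "name": "my-element test app",
  "version": "0.1",
  "manifest_version": 2,
  "app": {
    "background": {"scripts": ["main.js"]}
  },
  "permissions": ["storage"]
}

main.js:

// Open the test (or demo) page as the app's main window on launch.
chrome.app.runtime.onLaunched.addListener(function () {
  chrome.app.window.create('components/my-element/test/index.html', {
    id: 'main',
    bounds: {width: 800, height: 600}
  });
});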
The Chrome demo and test apps created that way can then be loaded as unpacked extensions.



This works nicely for the demo app, but unfortunately the test app reveals this in the console when starting it:

Uncaught Error: document.write() is not available in packaged apps.

Investigating the problem reveals the culprit to be a line in web-component-tester that uses document.write() to make sure that all dependencies are loaded before WCT is actually started.

To work around this issue you have to include the necessary scripts in the test files explicitly, and tell WCT not to load any scripts itself, before loading web-component-tester/browser.js on all the test pages.
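A sketch of what the head of each test page then contains; the exact list of environment scripts and the environmentScripts config option depend on the WCT version of the time, so treat this as an assumption to check against your setup:

<script src="../../mocha/mocha.js"></script>
<script src="../../chai/chai.js"></script>
<!-- empty list = don't document.write() any scripts -->
<script>WCT = {environmentScripts: []};</script>
<script src="../../web-component-tester/browser.js"></script>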

So that I don't have to copy the same couple of lines into each of the test files separately, I extended the gulp copy task to automatically insert the necessary lines into all files that include a reference to web-component-tester/browser.js.
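One possible way to do that with gulp-replace; again only a sketch, with placeholder paths and an injected snippet matching the workaround above:

var replace = require('gulp-replace');

var WCT_SETUP =
    '<script src="../../mocha/mocha.js"></script>\n' +
    '<script src="../../chai/chai.js"></script>\n' +
    '<script>WCT = {environmentScripts: []};</script>\n';

// Prepend the explicit setup right before the browser.js reference.
gulp.task('fix-wct', function () {
  return gulp.src('build/components/**/test/*.html')
    .pipe(replace(
        '<script src="../../web-component-tester/browser.js"></script>',
        WCT_SETUP +
        '<script src="../../web-component-tester/browser.js"></script>'))
    .pipe(gulp.dest('build/components'));
});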
And with this change tests can be run in the Chrome App.


Following the idea of my previous article I also wanted to enable live-reload for this workflow.

As opposed to my article, where livereload.js is removed for the production build, I did it the other way round here, adding it to test/index.html or demo/index.html when running the gulp live task.
Watching for changes, rebuilding the app if necessary and triggering the reload works basically the same though.
And with this I can leave the test and/or demo apps running and see right away if all tests still pass after making changes and if the demos work as expected.

And now back to actually working on my app. All those distractions you run into while traversing (mostly) uncharted waters ☺

Thursday, July 30, 2015

Live-reload for Polymer Chrome Apps

While working on a new Chrome App using Polymer (details of which shall remain secret for now) I've encountered the following annoying repetitive steps:
  1. Change some code
  2. Run the code through crisper because the Content Security Policy for Chrome Apps doesn't allow inline scripts
  3. Reload the Chrome App from chrome://extensions/
  4. Repeat
The Polymer Starter Kit offers a nice solution for this using Browsersync, which automatically updates all connected browser instances when the source code changes, but that only works for "normal" web apps meant to be hosted on a server, which isn't the case for Chrome Apps.

After a bit of googling I found this nice article by Konstantin Raev that deals with the problem of live-reload for (non-Polymer) Chrome Apps and offers a straightforward, working solution,
using tiny-lr and his own adaptation of livereload.js (to work around some Chrome Apps security restrictions).

Using this gulp task will update or reload the Chrome App whenever any of the source files change, and you can just load your source folder as an unpacked extension, launch your app and start developing/testing:
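In essence the task looks something like this; a minimal sketch, assuming the adapted livereload.js is already included in the app and the default livereload port is used:

var gulp = require('gulp');
var tinylr = require('tiny-lr')();

gulp.task('watch', function () {
  // start the livereload server...
  tinylr.listen(35729);
  // ...and notify it about every source change so the livereload.js
  // inside the app can update/reload the window
  gulp.watch('app/**/*', function (event) {
    tinylr.changed({body: {files: [event.path]}});
  });
});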

While this works great for "normal" Chrome Apps, the issue with Polymer Chrome Apps is that they need at least one extra crisper step to get all the JavaScript out of the .html files as separate .js files.

My first lazy approach was to listen for any changes in the source folders, then run a full build and use the dist folder as unpacked extension.

A full build as per the Polymer Starter Kit involves quite a few steps, like minifying the CSS/JS/HTML, optimizing the images and vulcanizing the elements:

The only extra step you have to add, in addition to what the Polymer Starter Kit does, is running crisper after vulcanize:
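Roughly like this, assuming gulp-vulcanize and gulp-crisper; source and destination paths follow the Polymer Starter Kit layout but are assumptions:

var vulcanize = require('gulp-vulcanize');
var crisper = require('gulp-crisper');

gulp.task('vulcanize', function () {
  return gulp.src('app/elements/elements.html')
    .pipe(vulcanize({
      stripComments: true,
      inlineCss: true,
      inlineScripts: true
    }))
    // split the inlined scripts back out into a separate .js file
    .pipe(crisper())
    .pipe(gulp.dest('dist/elements'));
});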

In the dev task (to be started with gulp dev) I first run a build, and then repeat the build step whenever something changes in the app folder. The build creates files in the dist folder (which is loaded as unpacked app), and livereload is triggered by listening to changes in this folder.
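As a sketch (task and folder names assumed), the dev task could be wired up like this:

gulp.task('dev', ['build'], function () {
  tinylr.listen(35729);
  // rebuild on source changes...
  gulp.watch('app/**/*', ['build']);
  // ...and livereload on changes in the build output
  gulp.watch('dist/**/*', function (event) {
    tinylr.changed({body: {files: [event.path]}});
  });
});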

Of course this approach has several issues. Not only can the build sometimes take quite a while for even small changes, but you also get a minimized, vulcanized app, which can be terrible for debugging.

So instead I added a simplified dev build that basically only copies all the files to a `dev` folder (to be loaded as unpacked app) and runs crisper on all .html files to get the .js parts out of the elements and their dependencies.
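Something along these lines; the globs are assumptions:

gulp.task('dev-build', function () {
  return gulp.src(['app/**/*', 'bower_components/**/*'], {base: '.'})
    .pipe(gulpif('*.html', crisper()))
    .pipe(gulp.dest('dev'));
});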

While working on that part I encountered an issue where gulp-crisper would ignore the folder structure and e.g. put all files directly into dev/bower_components/ instead of dev/bower_components/polymer/. This issue is now fixed, so make sure to use the newest version 0.0.5 of gulp-crisper.

When watching for changes I also don't update everything every time something changes, but listen for specific changes and only update the necessary parts.

And to prevent live-reload from triggering for each single file change (the build process creates several file changes for each source change), I'm using gulp-batch, collecting all changes in a batch before sending the info to tiny-lr.
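Sketched with gulp-watch and gulp-batch, following the gulp-batch README pattern (paths assumed):

var watch = require('gulp-watch');
var batch = require('gulp-batch');

// collect all changes of one build run and send a single notification
watch('dev/**/*', batch(function (events, done) {
  var files = [];
  events.on('data', function (file) {
    files.push(file.path);
  }).on('end', function () {
    tinylr.changed({body: {files: files}});
    done();
  });
}));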


Here's a quick video of what this process looks like now.


So with all of this done I can now proceed to work on my Polymer Chrome App after having learnt far more about gulp than I originally intended ☺



Thursday, June 18, 2015

Data Binding vs. Event Handling

Refactoring some of my projects from Polymer 0.5 (or earlier) to Polymer 1.0, I found myself using data binding and computed properties in situations where I previously had event handlers doing the "hard work". Since I think this is a rather nice and clean pattern, I thought I'd give some examples of it.


Let's take a look at a simple login sample. Using the google-signin element you could wait for the google-signin-success event to trigger and then retrieve/display information about the authenticated user and toggle the UI accordingly. Of course then you also have to handle the reverse case if a user signs out by listening for the google-signed-out event.
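In its simplest form the event-based version looks roughly like this (a sketch):

<google-signin client-id="MY_CLIENT_ID"></google-signin>

var signin = document.querySelector('google-signin');
signin.addEventListener('google-signin-success', function () {
  // show the signed-in UI, fetch and display user info
});
signin.addEventListener('google-signed-out', function () {
  // hide the signed-in UI again
});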


But the google-signin element also offers an is-authorized attribute / isAuthorized property that you can bind to and observe. Toggling the UI based on this property is as simple as adding hidden$="[[!isAuthorized]]" and hidden$="[[isAuthorized]]" to elements you want to show/hide. No extra JS necessary for this, as opposed to before where you had to set isAuthorized in the event handlers.
To retrieve user information once authorization has been granted you could add an observer to isAuthorized, but I think the much nicer solution is to make user a computed property that depends on isAuthorized. Whenever the value of isAuthorized changes this will re-evaluate the function and set the user property accordingly.
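A sketch of the data-binding version inside a Polymer element; how exactly you extract the user information depends on the element/API version, so _computeUser is just one possible (assumed) implementation using gapi.auth2:

<google-signin client-id="MY_CLIENT_ID"
               is-authorized="{{isAuthorized}}"></google-signin>
<div hidden$="[[!isAuthorized]]">Welcome, [[user.name]]!</div>

properties: {
  isAuthorized: Boolean,
  user: {
    type: Object,
    computed: '_computeUser(isAuthorized)'
  }
},
_computeUser: function (isAuthorized) {
  if (!isAuthorized) {
    return null;
  }
  var profile = gapi.auth2.getAuthInstance()
      .currentUser.get().getBasicProfile();
  return {name: profile.getName(), image: profile.getImageUrl()};
}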

Let's take this sample a bit further. In many cases you will have to retrieve some more information from your server or elsewhere about the authenticated user. So you would need to trigger some request once the user is signed in and handle the response once it is available. In this sample I'm using my discovery-api-elements to fetch information about the user from Google+, but you can do something similar using iron-ajax or any other data-fetching element.


Instead of triggering the request manually, what you can do is bind the auto property (at least for discovery-api-elements or iron-ajax) to the isAuthorized property. Once isAuthorized, and with it auto, becomes true, the request is triggered automatically and you just have to handle the response.
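With iron-ajax (instead of my plus-activities-list) such a binding could look like this; the URL is a placeholder:

<iron-ajax auto="[[isAuthorized]]"
           url="https://example.com/api/activities"
           last-response="{{response}}"></iron-ajax>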


But this won't remove the data in case the user signs out. To achieve this we make the data that is displayed (activities) a computed property that depends on both the response from the data-fetching element and the isAuthorized property.
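A sketch of that computed property, matching the flow described below:

activities: {
  type: Array,
  computed: '_parseActivities(response, isAuthorized)'
},
_parseActivities: function (response, isAuthorized) {
  // empty list while signed out or before the first response arrives
  if (!isAuthorized || !response || !response.items) {
    return [];
  }
  return response.items;
}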


Here's what happens now, when a user signs in:
  1. google-signin sets isAuthorized to true.
  2. This sets the auto property on the data element which triggers the request.
  3. Once the request completes plus-activities-list sets the response property accordingly.
  4. This change triggers recomputing activities with the _parseActivities function.
  5. Once there are items in the activities array, they will be displayed by the dom-repeat.
When the user signs out again:
  1. google-signin sets isAuthorized to false.
  2. This triggers recomputing activities which will be set to an empty list.
All of this without having to explicitly care about any event handlers, and, provided another data or sign-in element offers similar properties you can bind to, you can simply swap those elements in.

Friday, June 12, 2015

Polymer Quicktip - Attributes vs. Properties

A recurring problem that people starting with Polymer 1.0 or migrating from earlier versions seem to have is the new property name to attribute name mapping.

This issue imho comes mainly from the fact that the element docs generated via iron-component-page only list the JS property names, but in many/most cases you will use the HTML attribute names in your markup, and those aren't listed anywhere.

Example from the google-signin element:
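The docs show the JS property name, declared in the element roughly like this (a sketch from memory, not the element's exact source):

properties: {
  /** The application's client ID for the Google APIs */
  clientId: {
    type: String
  },
  ...
}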


If you try to include this element in your page like this

<google-signin clientId="MY_CLIENT_ID"></google-signin>

it won't work because the clientId attribute will be mapped to a clientid property that doesn't exist and clientId will stay undefined.

The correct way to use the element would be:

<google-signin client-id="MY_CLIENT_ID"></google-signin>

So if you encounter issues with properties not getting the value you intended make sure your attribute names are correct.

Essentially the attribute name is converted to lower case first, and then dash-separated parts are converted to camelCase: SoMeThInG becomes something and SoMeThInG-ElSe becomes somethingElse.

For those interested, here's the part of the Polymer library that takes care of translation between attribute names and property names:
https://github.com/Polymer/polymer/blob/master/src/lib/case-map.html

And if you are really curious you can have a look at Polymer.CaseMap._caseMap to see what mappings are being used on your site.


Thursday, June 11, 2015

Polymer Quicktip - debounce

One of the more hidden features of Polymer is the possibility to "debounce" multiple requests into one function invocation.

This is useful if you have a compute- or time-heavy function that depends on several (published) properties and needs to be executed when those properties get a new value, e.g. if you need to create a new ajax call depending on several parameters.

Here's a simple sample to demonstrate this behaviour.
First the element without debounce:
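A minimal reconstruction along the lines of the original sample, using one observer per property:

<dom-module id="without-debounce">
  <script>
    Polymer({
      is: 'without-debounce',
      properties: {
        property1: {type: String, observer: '_propertyChanged'},
        property2: {type: String, observer: '_propertyChanged'}
      },
      _propertyChanged: function () {
        // imagine an expensive computation or ajax call here
        console.log(this.property1, this.property2);
      }
    });
  </script>
</dom-module>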
Including this element as <without-debounce property1="foo" property2="bar"></without-debounce> will trigger the function twice when the element is first loaded, and even if you change both properties at the same time you still get two function calls.


Here's the same element with the debounce functionality added:
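Again as a reconstruction, the only change being the this.debounce wrapper:

Polymer({
  is: 'with-debounce',
  properties: {
    property1: {type: String, observer: '_propertyChanged'},
    property2: {type: String, observer: '_propertyChanged'}
  },
  _propertyChanged: function () {
    // collapse all observer calls within 300ms into one invocation
    this.debounce('propertiesChanged', function () {
      console.log(this.property1, this.property2);
    }.bind(this), 300);
  }
});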
Using this element, console.log will only be called once when the element loads, and also only once when the properties change within a definable time window (300ms in this case). This causes a small but mostly negligible delay before the actual execution of the function.

One element that uses this functionality is iron-ajax, which debounces to prevent executing the actual request until all properties have "finalised".

I'm using the same behavior for the same reason in my discovery-api-elements.

Monday, June 1, 2015

The Photos Dilemma

While the new Google Photos has some pretty interesting features for users (and several problems as well which I won't discuss here) the situation for developers wanting to do anything with photos gets increasingly depressing. Let's have a look at a little bit of history of how things evolved, where we are today, and what I would wish for the future.

In the beginning there was Picasa

Picasa Web Albums, which is still available today, comes with a fully-fledged API with read & write access to fully manipulate and organize photos. Admittedly the old GData APIs aren't the nicest to work with compared to modern APIs, especially for client-side applications in JS, but the API still does its job today.

Here are probably the most useful API calls for read access, since the documentation can be a bit confusing:

Request a list of albums:
https://picasaweb.google.com/data/feed/api/user/{{userid}}

Info about one album:
https://picasaweb.google.com/data/entry/api/user/{{userid}}/albumid/{{albumid}}

List photos in an album:
https://picasaweb.google.com/data/feed/api/user/{{userid}}/albumid/{{albumid}}
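For client-side use, the feeds can also be requested as JSON via the alt=json parameter; a sketch, assuming you already have an OAuth access token with the https://picasaweb.google.com/data/ scope:

var xhr = new XMLHttpRequest();
xhr.open('GET',
    'https://picasaweb.google.com/data/feed/api/user/default?alt=json');
xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
xhr.onload = function () {
  var feed = JSON.parse(xhr.responseText).feed;
  console.log(feed.entry.length + ' albums');
};
xhr.send();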

Along came Buzz

Here's a blog post for those who still remember the good old times:
http://googlephotos.blogspot.co.at/2010/02/photos-in-google-buzz.html

Google Buzz didn't really change much about how Picasa Web Albums and the associated API worked; it mostly seemed like Buzz was using the API itself to achieve all its features.

One feature that was introduced was the concept of "Photos from Posts" that automatically created special albums in Picasa for each post with photos you shared to Buzz. Those albums could be recognized in the Picasa Web Albums API with the <gphoto:albumType>Buzz</gphoto:albumType> tag they had assigned in the album description.

Funnily enough, photos shared directly in posts on Google+ today still generate "Buzz" albums.


On the plus side...

With Google+ we got a new UI for managing photos that in many ways is still more cumbersome to use than the old Picasa Web Albums UI. But the album and photo IDs matched, so it was easy to use the Picasa Web Albums API for programmatic management of your Google+ photos:

https://plus.google.com/.../6155360510478436241/6155360509668083954

https://picasaweb.google.com/data/.../6155360510478436241/.../6155360509668083954

https://picasaweb.google.com/...#6155360509668083954


With Google+ the new concept of sharing albums to circles was introduced. Those albums would show up with <gphoto:access>private</gphoto:access> and you could (and still can) retrieve the information about what people and circles albums were shared with by requesting the acl of an album:
https://picasaweb.google.com/data/feed/api/user/{{userid}}/albumid/{{albumid}}/acl

This would show information like this:

<entry>
  <gAcl:scope type='group' value='...'/>
  <gphoto:nickname>Photo Share Test</gphoto:nickname>
</entry>

<entry>
  <gAcl:scope type='user' value='...'/>
  <gphoto:user>116...</gphoto:user>
  <gphoto:nickname>Scarygami Test</gphoto:nickname>
</entry>

Google+ also introduced Instant Upload (or Auto Backup as it is called now), creating a new automatic album with the <gphoto:albumType>InstantUpload</gphoto:albumType> tag. As with "Buzz", the "InstantUpload" name stayed in the API even after the name was changed in the front end.

At the Drive-In

Things started to get a little bit weird with Google Drive integration.

It began with the feature to show (some but not all) photos stored on Google Drive in the Google+ Photos UI, with each Drive folder that contained photos getting its own album.

Those albums wouldn't show up when requesting the list of albums from the Picasa API, but you could request some information and the photos inside if you copied the album ID from the corresponding Google+ URL (https://plus.google.com/photos/.../albums/{{albumId}})

Things got even more confusing when Google(+) Photos were added to Google Drive. This allowed you to add a folder to your Drive which would include all the photos you uploaded and shared on Google+, sorted by year and month. You can then go ahead and re-arrange/edit the photos as you want, but... the sync is one-way and one-time only, meaning that changes done in Google Drive won't be reflected back to Google Photos, and you only get the originally uploaded photo in Google Drive, without any changes you might make in Google Photos at a later point.

You can access those photos via the Drive API's files.get and files.list methods, and you also have write access through the insert/update/patch methods; the Drive API, being one of the newer discovery-based APIs, is much nicer to work with than the antiquated Picasa API. But it won't help you in managing your Google+ Photos since the data isn't synced, and there is no indication whatsoever in the file meta-information that the files originally came from Google+. The photos in Google Drive also have completely different IDs than the ones you can use in the Picasa API; they are completely decoupled.

New and shiny?

And so we reach the present with the new Google Photos UI to replace the Google+ Photos UI.

Since there are several essential features missing, like the possibility to add geotags, I've been thinking about creating some extensions/scripts to do some of those things via the Picasa API. The problem is that Google Photos invented completely new IDs for photos and albums that don't match the corresponding IDs in the Picasa API, even though the photos and albums still show up there.

The Picasa IDs show up nowhere in the page source where they could be parsed, and the Google Photos IDs don't show up anywhere in the Picasa API, which makes finding a matching photo to work with in the Picasa API nearly impossible. You could parse some meta information (like date/filename) from the Google Photos page and try to find a match in the Picasa API, but that is (a) bound to break regularly as the Google Photos page gets updated and (b) potentially requires a lot of API requests until you get where you want. Still, that seems to be the only possibility at the moment to get some programmatic access to your photos. Or you could completely forget about Google Photos and continue using Picasa Web Albums and the API to manage your photos, only using Google Photos for uploading/backing up/editing/sharing photos.


Talking about sharing: with Google Photos the main way of sharing albums is to create a "secret link" that can be shared and viewed by anyone who has the link. That also means that all albums created with Google Photos now will always show up with <gphoto:access>private</gphoto:access>.

Sharing to Google+ still allows you to share to circles/people without creating the shareable link, and those access permissions are still visible in the Picasa API.

The Picasa API gets a little bit confused though when sharing publicly to Google+. Those albums show up as private in the API, and are shown as "Limited, anyone with the link" in the Picasa Web Albums UI. To make things a little bit more confusing those publicly shared private albums show up in the API even when not authenticated as the owner of the album:
Example of a public private album in the API

A New Hope

It's been almost 4 years now since a blog post about a potential Google+ Photos API was leaked.
While being read-only (as most of the Google+ API is), this seemed like a promising start to replace the antiquated Picasa Web Albums Data API. But nothing ever happened there, and with Google Photos now getting decoupled from Google+, the Plus API doesn't seem to be the right place to add such an API.

As discussed above the Google Drive API probably won't be a good home for new photos features either since there is no sync happening after the initial upload, even though it would be possible to represent most metadata related to albums/sharing/editing using custom file properties.

So it seems that we still have to wait for a separate Photos API and try to use the Picasa Web Albums UI and API for now, as long as they are still working. The minimal functionality I would wish for is a way to map Google Photos IDs to Picasa IDs...

For lack of a better place you might want to star this feature request and maybe add a comment about what you would want to do with a Photos API and what features you are expecting to see in such an API.

Alternatively/additionally you can also use the feedback option in the new Google Photos site/app to tell Google you care about such an API.

Monday, May 18, 2015

Preparing for Polymer 1.0 - hangout-app

It must have been shortly after the Chrome Dev Summit in 2013 that I first started looking into Polymer. A lot has changed since then and most of the code I had written for the early versions of Polymer looks completely different now and went through a lot of re-write stages, but that's what you get for living on the bleeding edge ☺

Now Polymer has reached beta state with the 0.9 release, and 1.0 is expected to come out at I/O, so the time of breaking changes is slowly coming to an end. Some of my projects will probably forever remain as they are now, but I thought it was about time to start updating some of my more important (imho) elements, starting with my <hangout-app> element, which makes developing Hangout Apps easier.

While migration is generally easy thanks to the migration guide, there are still some things I've stumbled over (mostly because I flipped through the migration guide too quickly...).

No inheritance from custom elements (for now)

When I first created this element I had the idea (which seemed brilliant and completely logical at the time) to let people inherit from the hangout-app element to create their own hangout apps, so they could depend on the loaded property of the parent element to know when the Hangouts API is ready to be used.
With inheritance from custom elements not being supported (for now) I had to rethink this idea and I think the new solution is actually much clearer. You can now include the hangout-app element anywhere in your project and either wait for its ready event to fire or bind to its loaded property. Alternatively you can also include any of your markup as content of the hangout-app element and this content won't be rendered until the Hangouts API is ready to be used.
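As a sketch of the new usage (the child element is hypothetical):

<hangout-app loaded="{{loaded}}">
  <!-- this content is only rendered once the Hangouts API is ready -->
  <my-hangout-ui></my-hangout-ui>
</hangout-app>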

Conditional templates

The old <template if="{{condition}}">...</template> implementation removed/added DOM elements completely when the condition changed, which could have a negative effect on performance if used excessively, and I have to admit that I used it way too much in my projects, simply because it was easy to use and made the code somewhat clearer.

As I wrote a while ago the much better solution in most cases is to simply hide/show by conditional attribute binding to the hidden attribute: <div hidden$="[[!condition]]">...</div>

In the case of the hangout-app element I wanted to make sure that none of the content that might depend on the Hangouts API is part of the DOM until the API is ready, e.g. when using the hangout-shared-state element which tries to call the Hangouts API as soon as it is attached. For that reason I used the new implementation of conditional templates in the form of dom-if.
<template is="dom-if" if="[[loaded]]">
  <content></content>
</template>
This new implementation by default adds the content the first time the condition becomes true and afterwards only shows/hides the elements as necessary.

Layout attributes > Layout classes

I completely missed this part of the migration guide and was very surprised when my layout didn't look the way I expected it to.

The change from attributes to classes is easy enough though; just make sure to include PolymerElements/iron-flex-layout in your dependencies.
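For example (a sketch; the import path assumes the classes variant of iron-flex-layout from that release):

Before (0.5):

<div layout horizontal center>...</div>

After (1.0):

<link rel="import"
      href="bower_components/iron-flex-layout/classes/iron-flex-layout.html">
<div class="layout horizontal center">...</div>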


That's it for now, more coming as I upgrade more of my elements ☺

Tuesday, April 21, 2015

Google Sign-In 2.0 - Server-side

There have been a couple of questions lately about server-side access when using the new Google Sign-In functionalities, so I've put together this article to cover some possible use-cases.

User verification

Probably the simplest case is when you only want to verify on the server-side who the currently signed-in user is, e.g. to load user-specific data/settings for them. For this you can use the most basic sign-in implementation, securely send the ID token to the server and use one of the Google API Client Libraries to verify the token and get user information from it.


On the client side you wait for the sign-in success event to trigger, get the id_token from the authenticated user and send it to your server. You should always send the id_token via HTTPS for security reasons. On the server side (in this case using Python with Flask) you use the Google API Client Library to verify the id_token and then use the information you get in whatever way you need. Please note that in this case you won't be able to make calls to Google APIs on behalf of the user. The id_token itself carries the information you can get about the user, and I would highly recommend reading this article about ID tokens.
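A sketch of the client side; the /verify endpoint is a placeholder for wherever your server expects the token:

document.querySelector('google-signin')
    .addEventListener('google-signin-success', function () {
      var idToken = gapi.auth2.getAuthInstance()
          .currentUser.get().getAuthResponse().id_token;
      // always send the token over HTTPS
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/verify');
      xhr.setRequestHeader('Content-Type',
          'application/x-www-form-urlencoded');
      xhr.send('id_token=' + encodeURIComponent(idToken));
    });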

Optional server-side offline access

If you offer a web service that will do something on behalf of the user while they are not online, I would recommend making this an opt-in service after the user has signed in.

E.g. if your service offers sending news to a user via the Google Glass Mirror API, they could sign in to your website first, pick the news categories they are interested in and then "flip a switch" to enable "offline access".

For this you would have the normal basic sign-in flow on the client side. You can then use the ID token as before to check whether the user has already authorized offline access (i.e. you already have credentials stored for their user ID). If there is no offline access yet, you can display an extra button to go through the grantOfflineAccess flow to get a one-time code, which can be exchanged for access and refresh tokens on the server side. On the server side you can then use the client library to exchange the code for credentials that can be stored to act on behalf of the user at any point. grantOfflineAccess will always cause a pop-up requesting offline access to be shown to the user. This is the only way to get a refresh token, including the case where you lost a previous one.
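A sketch of the opt-in button handler; the button ID and the /store-code endpoint are placeholders:

document.getElementById('enable-offline')
    .addEventListener('click', function () {
      gapi.auth2.getAuthInstance().grantOfflineAccess()
          .then(function (resp) {
            // resp.code is the one-time code to exchange server-side
            var xhr = new XMLHttpRequest();
            xhr.open('POST', '/store-code');
            xhr.setRequestHeader('Content-Type',
                'application/x-www-form-urlencoded');
            xhr.send('code=' + encodeURIComponent(resp.code));
          });
    });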

Necessary server-side offline access

If your service won't work without offline access (I would be curious to hear your use-cases here) and you don't want your users to go through two sign-in steps, things get a little bit more difficult on the client side (while you can still use the same server.py as above). You can't use the default sign-in button for this, since that flow always runs without granting offline access. Instead you have to use your own custom button (make sure to create it following the branding guidelines) which calls grantOfflineAccess.

For "old" users that come to your website again calls gapi.auth2.init will initalize an immediate sign-in flow which you can catch with the isSignedIn listener to check for existing credentials as before (just in case you lost them). For "new" users the grantOfflineAccess flow will return a code which you can exchange as above, and at the same time authenticate the user on the client side as well (calling your isSignedIn listener).

I hope this answers some of the questions you have, feel free to comment if you have more :)