Wednesday, August 10, 2016

PolymerCubed

I've recently started using Microsoft SQL Server Analysis Services for reporting purposes at work, and while querying the cubes worked great, the visualization options were rather unsatisfactory, especially when it came to building web-based dashboards.

And while there are several vendors offering solutions in this area, I decided to give it a try myself and started creating a suite of Polymer elements.

Wednesday, May 25, 2016

I/O 2016

I/O 2016 is already over again, so it's time to sum up my thoughts on what it brought for developers and which announcements and updates you should check out.



Monday, April 18, 2016

Polymer and the [hidden] attribute

The hidden attribute is a "fairly new" convenience attribute (fairly new = not implemented in IE<=10) to hide page elements that are not relevant in the current context/state of the website.

It is especially useful in a Polymer web app, since you can use attribute binding to show/hide elements based on (computed) properties, without having to write your own display: none; styles.
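
A minimal sketch (Polymer 1.x syntax; the loading property is just a made-up example):

<template>
  <!-- hidden$= is an attribute binding: the native [hidden] attribute
       is set while "loading" is true and removed once it becomes false -->
  <div class="results" hidden$="[[loading]]">
    <!-- actual content goes here -->
  </div>
</template>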

There are several cases where you will have to be careful with this attribute though.

Tuesday, March 8, 2016

Polymer on Blogger

I've had some fun over the past few weeks forcing Polymer to work on Blogger, or rather forcing Blogger to work with Polymer, and here are my results, some of which might be more useful than others.

A quick disclaimer before we get started:
This post definitely falls more into the "because I can" category than into the "because you should" category, and it would need some extensive testing and tweaking before being used out in the wild.

Thursday, February 11, 2016

I See... People

To get a better understanding of the new Google People API, I created a small (of course Polymer-based) demo, since playing around with the API Explorer can be cumbersome.

https://scarygami.github.io/people-api-demo/

This demo will fetch one "page" of results from the people.connections.list method and display the raw JSON for each contact.

You can then click on "load full data" to fetch the rest of the contact information via people.get for each contact.
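
In essence the demo boils down to something like this. This is just a rough sketch with plain fetch() calls, not the element-based code the demo actually uses; it assumes you already have an OAuth 2.0 access token with a contacts (read) scope and leaves out the field-selection parameters the API supports:

var API = 'https://people.googleapis.com/v1/';

// Fetch one "page" of connections (the signed-in user's contacts)
function listConnections(token) {
  return fetch(API + 'people/me/connections?pageSize=50', {
    headers: {'Authorization': 'Bearer ' + token}
  }).then(function (response) {
    return response.json();
  }).then(function (data) {
    return data.connections || [];
  });
}

// "Load full data" for a single contact; resourceName looks like "people/1234567890"
function getPerson(token, resourceName) {
  return fetch(API + resourceName, {
    headers: {'Authorization': 'Bearer ' + token}
  }).then(function (response) {
    return response.json();
  });
}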



Source code for the demo is available if you want to play around some more.

The demo makes use of my discovery-api-elements element, which I should really get around to writing a more detailed article about, since, if I may say so myself, it is an awesome way to easily access all of Google's discovery-based APIs and your own Google Cloud Endpoints.



Some takeaways from what I've seen so far, in no particular order.

The "resourceNames" are interesting

If you want to fetch the data for a Google(+) account, you will have to use people/ID instead of just the numeric ID.
This is easy enough to get used to, but it makes you wonder what other resources they might be planning to include in this API.
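
For example, using the getPerson() sketch from above (with token as before and a made-up ID):

// numeric Google(+) profile ID (made up for illustration)
var profileId = '112233445566778899000';

// the People API wants the "people/<id>" resourceName instead,
// i.e. this results in GET https://people.googleapis.com/v1/people/112233445566778899000
getPerson(token, 'people/' + profileId);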

The data structure is confusing to look at

To be fair, JSON isn't really meant for human consumption, but to work with it programmatically you first have to understand it.

Each person has one or more sources where the data comes from.
In most cases there will be two:
  • CONTACT from your Gmail contact information
  • PROFILE from the public Google(+) profile
{
  ...,
  "metadata": {
    "sources": [
      {
        "type": "CONTACT",
        "id": "gmail_id"
      },
      {
        "type": "PROFILE",
        "id": "gplus_id"
      }
    ],
    "objectType": "PERSON"
  },...
}

For each "type" of data (e.g. names, photos, urls, emails, phone numbers, ...) the response will have an array, and each item in this array comes with metadata showing which source the information comes from. While this makes perfect sense for easily parsing and displaying the information programmatically, it results in a rather lengthy JSON response. The following block is only for two email addresses:

{
  ...,
  "emailAddresses": [
    {
      "metadata": {
        "primary": true,
        "source": {
          "type": "CONTACT",
          "id": "gmail_id"
        }
      },
      "value": "email1@gmail.com",
      "type": "other",
      "formattedType": "Other"
    },
    {
      "metadata": {
        "source": {
          "type": "CONTACT",
          "id": "gmail_id"
        }
      },
      "value": "email2@somewhere.com",
      "type": "other",
      "formattedType": "Other"
    }
  ],...
}
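
To give an idea of how you would actually work with this structure, here is a rough sketch (person being one parsed entry from the responses above):

// Group the email addresses of a person by the source they come from
function emailsBySource(person) {
  var result = {};
  (person.emailAddresses || []).forEach(function (email) {
    var sourceType = email.metadata.source.type;   // "CONTACT" or "PROFILE"
    (result[sourceType] = result[sourceType] || []).push(email.value);
  });
  return result;
}

// The primary entry of each type is flagged in its metadata
function primaryEmail(person) {
  var primary = (person.emailAddresses || []).filter(function (email) {
    return email.metadata && email.metadata.primary;
  })[0];
  return primary ? primary.value : undefined;
}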

Google+ profile images are broken

A bug that will hopefully be fixed soon, but for now the profile photo URLs that come from PROFILE return a 404. Interestingly, profile photo URLs from CONTACT work, as do cover photo URLs.

No access to "private" profile data even if you are allowed to see it

That was one of the biggest problems with, and complaints about, the Google+ API's people.get method as well. Even if you are using authenticated calls, you only get the public Google+ profile information, which doesn't include the private/limited data you might see when visiting someone's Google+ profile. Unfortunately, that hasn't changed with this API...

No Google+ contacts

The people.connections.list method only shows Gmail contacts, and none of your Google+ contacts, even if the plus.login scope is included in authentication. So if you want to work with Google+ contacts, you will still need to use the people.list method of the Google+ API, and then you might as well use the people.get method of the Google+ API to get the rest of the information as well. The one benefit you get with people.get in the new API is that any private information that has been added via Google Contacts will be displayed along with the Google+ profile information.

No more GData!

And after all my complaints, one positive thing to say as well: if you've been using the old GData Contacts API, you should switch to this new API ASAP. I think everyone who has been forced to work with GData will be happy to never see it again ;)




So, to summarize my thoughts:

A great replacement for the old Contacts API, but not really adding much value when working with Google+ contacts.

I'm curious to see what further features (if any) are planned for this API.

Tuesday, January 26, 2016

Polymer in a corporate network

(a.k.a. The Things You Do For Money)


Working as a developer in a non-tech company can be challenging, since the corporate IT infrastructure and regulations usually aren't designed for developer tasks but for day-to-day office business. So when I took over some new responsibilities at work and finally had the chance to include Polymer in some internal web projects (with the biggest stopping point, IE8, finally gone), I had some hurdles to overcome to make use of the full Polymer-related tool chain I've come to love for my private projects.

1. Microsoft Windows

The main OS in big corporations (at least in Austria) is still Microsoft Windows, and usually you can't just install another OS on company hardware. However, as it turns out, all the tools needed are readily available on Windows, so it might be unfair to list it among the problems. Consider this part mainly as a summary of what I use for developing with Polymer.

Node.js (with all web development tools being Node-based these days) is pretty well supported across all platforms. It comes with npm as its package manager, which lets you install the other tools you will need, like Bower or Gulp.

Git (needed for fetching Bower dependencies) has a Windows installer that also comes with a bash emulator, which is so much nicer to use than the Windows command prompt.

As for text editors, you have a wide variety to choose from; personally I like Sublime Text, but I have also used Notepad++ quite frequently.

2. Group Policies

Having installation files available for Windows is nice, but you might not actually be allowed to install non-standard applications on your PC, thanks to all settings and permissions being managed through Active Directory and Group Policies. If you are nice to your local IT department, they might make an exception and give you local administrator rights or install custom applications for you, and luckily I have a very nice local IT department ;)

But in many cases not even the local IT department can help you, since they themselves depend on a global team that manages all permissions, and then you will have to start looking at portable solutions that you can simply put anywhere you want without having to install anything.

Here's one possible approach to putting your whole development environment on a USB stick.
  1. Download the latest version of PortableGit and "install" (i.e. extract) it to any location you want.
  2. Create a usr/local/bin folder in the same location.
  3. Download the latest node.exe (from win-x64 or win-x86), which is all that Node.js needs to run, and put it in the usr/local/bin folder you just created.
  4. Download the latest npm release and extract it into usr/local/bin/node_modules/npm/
  5. Copy npm and npm.cmd from usr/local/bin/node_modules/npm/bin/ to usr/local/bin.
Your final folder structure should look something like this:
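
PortableGit/
├── bin/                        (bash.exe lives here)
├── usr/
│   └── local/
│       └── bin/
│           ├── node.exe
│           ├── npm
│           ├── npm.cmd
│           └── node_modules/
│               └── npm/
├── git-bash.exe
└── ... (the rest of the files and folders that ship with PortableGit)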


When you run git-bash now, you can already use npm to install Bower and any other node modules you may want to use, e.g. with npm install -g bower.
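
For example, to install Bower and Gulp (the latter just as an example of another tool mentioned above) and check that they ended up on the PATH:

npm install -g bower gulp
bower --version
gulp --version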

With the default settings PortableGit still has some issues though, the biggest one being that it is not truly portable: some global config settings are written to $HOME, which defaults to C:/Users/YourUser.

To solve this issue you will need to create a /home/YourUser folder in your PortableGit location. To avoid messing with the default config scripts (which will be overwritten when updating PortableGit), you can create a batch file that temporarily sets HOME to that folder before starting git-bash (temporarily, so we don't mess up other applications that need the %HOME% environment variable).

setlocal
REM %~dp0 expands to the folder this batch file is in,
REM so HOME stays inside the PortableGit location
set HOME=%~dp0home\%USERNAME%
git-bash.exe
endlocal

You can have different settings for different usernames that way, or, if you prefer, hard-code the username in the batch file so it will always refer to the same home folder.

Some bower commands (especially bower init), and probably others, sometimes have problems with Mintty, which the current version of git-bash uses as its terminal emulator, so sometimes you might have to use bash.exe directly. You can use another batch file for that.

setlocal
set HOME=%~dp0home\%USERNAME%
REM run bash directly in the console window instead of through Mintty
bin\bash.exe -login -i
endlocal

3. Corporate Firewall

With all of this set up, you might already have stumbled across another problem in the previous step when trying to run npm install -g bower, since the corporate firewall most likely blocked that request.

git, bower and npm all observe the http_proxy and https_proxy environment variables, so once you know which proxy to use (easiest found by looking at the Internet Options in IE) you can set them with

export http_proxy=http://company-proxy:port
export https_proxy=http://company-proxy:port

So that you don't have to do this every time, create a .bashrc file in your /home/YourUser folder and put the commands in that file; it will be executed every time you start bash.

Hint: since Windows gets easily confused by dot-files, the easiest way to do this is via git-bash, like this:

cd ~
touch .bashrc
notepad .bashrc
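
The file itself then only needs the two exports from above, e.g.:

# ~/.bashrc - executed every time you start bash
export http_proxy=http://company-proxy:port
export https_proxy=http://company-proxy:port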

And with this I'm all set to bring Polymer goodness to the company; let's see where this journey takes me. I expect some more blog posts along the way :)