Sunday, June 9, 2013

Google Glass: It’s a start.


Google wants wearable computing to augment your life and not "get in the way". Glass does not quite do this yet. Glass can do some really cool stuff, but in my opinion its utility is limited, and personally it still feels a bit "in the way".

But it’s a start.

 

 

What Glass Knows

You can ask Glass questions, and it does a very good job of responding with an answer, kind of like the "I'm feeling lucky" button, except that some hot-words and context are taken into consideration. This is clearly tapping into Google's work in conversational search. Just how well Google can understand what we mean was demonstrated at I/O by Johanna Wright. During the demo she had made a prior inquiry about the Santa Cruz beach boardwalk, then asked (using pronouns), "ok google, how far is it from here?" And she later asked, "when does my flight leave?"

In my case, I've demonstrated being able to ask Glass things like "who won the race" after the Indy 500, and "show me the picture" after taking a photo, with success. Asking it about weather, time zones, and other facts with well-known answers (like "what is two plus two") yields the expected information. Even asking for visual answers works: "show me pictures of colorful birds" displays what you would expect, just like a Google image search on your desktop. The degree to which machines understand natural language is on an awesome trajectory. (I'm halfway through Ray Kurzweil's book "How to Create a Mind", and it is no surprise Ray is now working at Google. A nice fit if you ask me; they need each other as resources.)

In another case, I was picking up my kids from a "Kids-n-ponies" summer camp. I asked Glass, "get directions to Kids-n-ponies". But the actual name of the organization was "Pony Brooks Stables", which I had forgotten. (Furthermore, there is more than one organization in the US named "Kids-N-Ponies", and the top match on google.com is not the camp in Tippecanoe County, Indiana...) Worse yet, Glass thought I said "Kids in ponies". But in my single attempt, Google still figured out what I meant and gave me the correct address directly, along with turn-by-turn directions. Best of all, I got what I needed fast. Yes, I could have found all this on my phone, but with more effort and time than simply asking out loud for what I wanted.

Google's knowledge graph plus some very smartly engineered AI makes this possible. And when you (trustingly) provide a graph of your personal data to the same machinery, the ability to give personal answers with more context and relevance improves greatly, even anticipating answers to fit you.

 

Hands Free

One strength of Glass is being (mostly) hands-free. (I say "mostly" because sometimes you have no choice but to use the touch surface to perform what you need.) I have told Glass to send a text message, very easily, and accurately, with no hands. It even understood my recipient's name when I spoke it. Annoyingly, though, I had to choose a handful of my contacts in advance to make available to Glass; it doesn't just tap into my existing Gmail contacts. I'm not sure why not.

Also hands-free, I told Glass to take some videos while ziplining. I couldn't have done this with my handheld phone while wearing big leather braking gloves. This was very cool. But of course I don't do this every day.

I've also told Glass to take photos, many times, and I was surprised that this was not as awkward as I thought it would be. For group photos, though, people had no idea when I was actually "done" taking the picture. And depending on where you are, it is not always appropriate to speak the words, "take a picture".

But the "hands-freeness" is of limited access. The steps it takes to get to the point where you can make a query is quite awkward. Here are the steps:

  1. Wake the device (touch the side, or tilt your head up 30 degrees).
  2. Say "ok, glass".
  3. Say "google".
  4. Now ask your question.

This is just not good UX. I still feel Glass is a big step forward, but this part is "technology getting in your way".

And once the device has responded to my hands-free command, it is done taking orders. I cannot continue to direct it by voice; it's a one-shot thing. For example, once you begin navigation for directions, you can't cancel the navigation task without using your hands. You also can't do anything else by voice, even though while navigating, for example, you really want the rest of the device's functions at your disposal.

I also found the audio level to be too loud in a quiet space (people around you can hear it), but when using it for turn-by-turn directions in a vehicle, I could not hear it. There is no volume control. (It should adjust this automatically.)

I have not seen any real "interactive" apps on Glass. There’s one app that I would love to have: I’d like assistance while my hands are busy cooking. All the prep work in the kitchen usually consists of repeated trips between the fridge, the pantry, the measuring-utensil drawer, and the recipe page. And the actual cooking steps consist of many double-checks on sequence and timing in the recipe. So this would be a perfect hands-free application to help check off what I am doing, and basically be my sous-chef without a knife. However, the Glass OS doesn't really operate in this mode of interactivity; for one, it would need always-on audio standby. Meanwhile, my tablet just gets food on it.
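To make that concrete, here is a minimal, purely hypothetical sketch of the domain logic such a sous-chef app would need, written as plain Java. None of these class or method names come from Glass or its SDK, and it assumes the always-listening voice front end that Glass doesn't offer today; the recognized text would simply be handed to handle().

import java.util.Arrays;
import java.util.List;

// Hypothetical sous-chef domain logic: a recipe walker that could sit behind
// an always-listening voice front end (which Glass does not currently offer).
public class SousChef {

    private final List<String> steps;
    private int current = 0;

    public SousChef(List<String> steps) {
        this.steps = steps;
    }

    // Respond to a recognized utterance with what the device would speak or display.
    public String handle(String utterance) {
        String cmd = utterance.toLowerCase();
        if (cmd.contains("next")) {
            if (current < steps.size() - 1) current++;
            return currentStep();
        } else if (cmd.contains("repeat") || cmd.contains("again")) {
            return currentStep();
        } else if (cmd.contains("back")) {
            if (current > 0) current--;
            return currentStep();
        }
        return "Sorry, I only know 'next', 'back', and 'repeat'.";
    }

    private String currentStep() {
        return "Step " + (current + 1) + ": " + steps.get(current);
    }

    public static void main(String[] args) {
        SousChef chef = new SousChef(Arrays.asList(
                "Cream the butter and sugar",
                "Beat in the eggs one at a time",
                "Fold in the flour",
                "Bake for 20 minutes at 350F"));
        System.out.println(chef.handle("ok, next step"));   // Step 2: Beat in the eggs...
        System.out.println(chef.handle("say that again"));  // repeats step 2
    }
}

That's the easy half, of course; the hard half is the speech plumbing that Glass doesn't expose to apps yet.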


 

So, what's not to like?

All this stuff is "cool", yet I'm not inclined to wear Glass all the time. The physical device still feels a bit "in the way". There is a certain threshold of value and utility needed when moving from a device in my pocket to one worn on my face. I'd better be getting a lot out of it. Ironically, because it sits directly in view where it can disrupt our attention, we developers are expected to be careful not to demand too much user interaction.

Let's look at how I use my Android phone. Here's a snapshot of 20 hours of my phone’s life from this weekend. A majority of my screen-on time is reading books. Then it's email. Or sending some texts. Using my calendar. Maybe playing a game. Maybe a phone call. Probably in that pecking order.

The "screen-on" time of my phone is way bigger than Glass's could ever be. That's because any app on my phone has greater value when I can give it greater attention. Glass, by its current design, gets very little of your attention. Try coming up with killer apps for that.

Let me just say that no "killer app" on Glass will use touch as primary input. That just isn't going to cut it when I'm asked to give up the rich, touch-driven interactivity of my smartphone. Today’s Glass device practically demands that useful apps operate at this hands-free level. CNN alerts at my eyeball? Really? Output info streams are easy. Input is the big challenge. Google has shown they can respond to natural speech and give us amazing search results. And they are a big company geared for this. But what about us little guys writing apps? I want to talk to my Sous-Chef app for it to really be useful. Even if Google provides the heavy lifting for parsing the natural speech, we still need to wire what is meant by spoken words into our own custom apps. IMHO Google has some more heavy lifting to do for us here. Basically, my app needs to be a domain expert (the sous-chef), and we'll let Google tap into that expertise. That's how it will go down. It's a matter of how we define the interface between our apps and Google's human-interface engine. (Maybe some hand-gesture recognition…) Or we can just write an obligatory non-killer app like CNN alerts.
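Here's a rough sketch of the kind of contract I'm imagining between Google's speech machinery and a third-party domain-expert app: the app registers the phrases it understands, and the platform dispatches recognized utterances to it, falling back to ordinary search when nothing matches. Every class and method name here is invented for illustration; nothing like this exists in the Glass APIs today.

import java.util.LinkedHashMap;
import java.util.Map;

// Imaginary bridge between a speech engine and a domain-expert app.
public class VoiceIntentRegistry {

    // An action our app exposes to the voice layer.
    public interface Handler {
        void handle(String utterance);
    }

    private final Map<String, Handler> handlers = new LinkedHashMap<String, Handler>();

    // The app declares a keyword it cares about and what to do when it is heard.
    public void register(String keyword, Handler handler) {
        handlers.put(keyword.toLowerCase(), handler);
    }

    // The platform (in my imagination) calls this with each recognized utterance.
    public boolean dispatch(String utterance) {
        String text = utterance.toLowerCase();
        for (Map.Entry<String, Handler> entry : handlers.entrySet()) {
            if (text.contains(entry.getKey())) {
                entry.getValue().handle(utterance);
                return true;
            }
        }
        return false; // not ours; let the platform fall back to a normal search
    }

    public static void main(String[] args) {
        VoiceIntentRegistry registry = new VoiceIntentRegistry();
        registry.register("preheat", new Handler() {
            public void handle(String utterance) {
                System.out.println("Sous-chef: oven set, I'll ping you when it's hot.");
            }
        });
        registry.dispatch("ok glass, preheat the oven to 350");
    }
}

Keyword matching is obviously crude; the interesting part is who owns the natural-language understanding and how much of the parsed meaning gets handed down to the app.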

So I'm doubting that we are going to see a large ecosystem of apps for this revision of the Glass device. Now, if you fast-forward and give me 1) full-lens AR, 2) always-on command recognition, 3) physical-object pattern matching, 4) hand-gesture recognition for input, and 5) less physical bulk on my face, then we have a platform for apps! Meanwhile, if you take the best of Glass's utility today, being hands-free and being a spigot of information, it might as well be a wrist device, and be far less "in the way". (It will be interesting to see how the wrist-device rumors at Apple hold up.)

Potential

It’s easy to be underwhelmed and throw the whole concept under the bus. Google X has attempted a leap with this device, and I’m not sure it has landed on both feet yet. But the fact is that they took the jump, and I applaud the direction. This device demonstrates the power of Google, funneled and focused to a fine tip, placed inches from your eyeball, right in your ear, and tuned in to your voice. So while I may not like this particular device, there is no doubt that Google has the lifeblood to power a life-augmenting, out-of-your-way device.

Thursday, June 6, 2013

On the aesthetic of a Java enum singleton

Surely Java programmers these days use Bloch’s suggested form of singleton, e.g.

public enum Cupcake {
    INSTANCE;

    private Object someInternalStuff; // stand-in for whatever state the singleton holds

    private Cupcake() {
        // get internal stuff ready
    }

    public void doSomethingInteresting() {
        // super exciting stuff goes here
    }
}

And then in our code we see Cupcake.INSTANCE strewn about.

Ok, that’s fine. But lately I’ve found myself using “$” instead of that big fat “INSTANCE”. I just like how it looks. That “S” with a bar through it says “singleton” to me now.

Cupcake.$.doSomethingInteresting(); // isn't that nice?
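For the avoidance of doubt, the declaration would look just like the one above, only with the constant renamed; “$” is a legal (if unusual) Java identifier.

public enum Cupcake {
    $; // same pattern as before, just a terser name for the single instance

    public void doSomethingInteresting() {
        // super exciting stuff goes here
    }
}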

This is next to useless information about a personal preference and hardly worth a post. But there it is. :-)