Sunday, June 9, 2013

Google Glass: It’s a start.


Google wants wearable computing to augment your life and not "get in the way". Glass does not quite do this yet. Glass can do some really cool stuff, but in my opinion its utility is limited, and personally it still feels a bit "in the way".

But it’s a start.

 

 

What Glass Knows

You can ask Glass questions, and it does a very good job of responding with an answer, kind of like the "I'm feeling lucky" button, except that some hot-words and context are taken into consideration. This is clearly tapping into Google's work in conversational search. Just how well Google can understand what we mean was demonstrated at I/O by Johanna Wright. During this demo she had made a prior inquiry regarding the Santa Cruz beach boardwalk, then asked (using pronouns), "ok google, how far is it from here?" She later asked, "when does my flight leave?"

In my case, I've demonstrated being able to ask Glass things like "who won the race" after the Indy 500, and "show me the picture" after taking a photo, with success. Asking it about weather, time zones, and other facts with well-known answers (like "what is two plus two") yields the expected information. Even asking for visual answers works: "show me pictures of colorful birds" displays what you would expect, just like a Google image search on your desktop. The degree to which machines understand natural language is on an awesome trajectory. (I'm halfway through Ray Kurzweil's book "How to Create a Mind", and it is no surprise Ray is now working at Google. A nice fit if you ask me; they need each other as resources.)

In another case, I was picking up my kids from a "Kids-n-ponies" summer camp. I asked Glass, "get directions to Kids-n-ponies". But the actual name of the organization was "Pony Brooks Stables", which I had forgotten. (Furthermore, there is more than one organization in the US named "Kids-N-Ponies", and the top match on google.com is not the camp in Tippecanoe County, Indiana...) Worse yet, Glass thought I said "Kids in ponies". But on my single attempt, Google still figured out what I meant and gave me the correct address directly, along with turn-by-turn directions. Best of all, I got what I needed fast. Yes, I could have found all this on my phone, but with more effort and time than simply asking out loud for what I wanted.

Google's knowledge graph plus some very smartly engineered AI makes this possible. And when you (trustingly) provide a graph of your personal data to the same machinery, the ability to provide personal answers with more context and relevance improves greatly, even anticipating answers to fit you.

 

Hands Free

One strength of Glass is being (mostly) hands-free. (I say "mostly" because sometimes you have no choice but to use the touch surface to perform what you need.) I have told Glass to send a text message, very easily, and accurately, with no hands. It even understood my recipient's name when I spoke it. Annoyingly, though, I had to choose a handful of my contacts in advance to make them available to Glass. It doesn't just tap into my existing Gmail contacts. Not sure why not.

Also hands-free, I told Glass to take some videos while ziplining. I couldn't have done this with my handheld phone while wearing big leather braking gloves. This was very cool. But of course I don't do this every day.

I've also told Glass to take photos, many times, and I was surprised that it was not as awkward as I thought it would be, although for group photos people had no idea when I was actually "done" taking the picture. And depending on where you are, it is not always appropriate to speak the words, "take a picture".

But the "hands-freeness" is of limited access. The steps it takes to get to the point where you can make a query is quite awkward. Here are the steps:

  1. awake the device (touch the side, or tilt your head up 30 degrees)
  2. say "ok, glass"
  3. say "google"
  4. now ask your question.

This is so not a good UX. Though I still feel Glass is a big step forward, this is "technology getting in your way".

And once the device has responded to my hands-free command, it is done taking orders. I cannot continue to direct it to do stuff by voice. It's a one-shot thing. For example, once you begin navigation for directions, you can't cancel the navigation task without using your hands. You also can't perform any other operations, even though while navigating you would easily want the other functions of the device at your disposal.

I also found the audio level to be too loud in a quiet space (people around you can hear it), but when using it for turn-by-turn directions in a vehicle, I could not hear it. There is no volume control. (It should adjust this automatically.)

I have not seen any real "interactive" apps on Glass. There's one app that I would love to have: I'd like assistance while my hands are busy cooking. All the prep work in the kitchen usually consists of repeated trips between the fridge, the pantry, the measuring-utensil drawer, and the recipe page. And the actual cooking steps consist of many double-checks on sequence and timing in the recipe. So this would be a perfect hands-free application to help check off what I am doing, and basically be my sous-chef without a knife. However, the Glass OS doesn't really operate in this mode of interactivity. For one, it needs always-on audio standby. Meanwhile, my tablet will keep getting food on it.


 

So, what's not to like?

All this stuff is "cool", yet I'm not inclined to wear Glass all the time. The physical device still feels a bit "in the way". There is a certain threshold of value and utility that is needed when moving from a device in my pocket to one worn on my face. I'd better be getting a lot out of it. Ironically, because it sits directly in view where it can disrupt our attention, we developers are supposed to be careful not to inappropriately demand too much user interaction.

Let's look at how I use my Android phone. Here's a snapshot of 20 hours of my phone’s life from this weekend. A majority of my screen-on time is reading books. Then it's email. Or sending some texts. Using my calendar. Maybe playing a game. Maybe a phone call. Probably in that pecking order.

The "screen-on" time of my phone is way bigger than Glass's could ever be. That's because the utility of any app on my phone has greater value when I can give it greater attention. Glass, by its current design, gets very little of your attention. Try coming up with killer apps for that.

Let me just say that no "killer app" on Glass will use touch as primary input. That just isn't going to cut it when I'm asked to give up the rich, touch-driven interactivity of my smartphone. Today's Glass device practically demands that useful apps operate at this hands-free level. CNN alerts at my eyeball? Really? Output info streams are easy. Input is the big challenge.

Google has shown they can respond to natural speech and give us amazing search results. And they are a big company geared for this. But what about us little guys writing apps? I want to talk to my Sous-Chef app for it to really be useful. Even if Google provides the heavy lifting for parsing the natural speech, we still need to wire what is meant by spoken words into our own custom apps. IMHO Google has some more heavy lifting to do for us here. Basically my app needs to be a domain expert (a sous-chef, say), and we'll let Google tap into this expertise. That's how it will go down. It's a matter of how we will define the interface between our apps and Google's human interface engine. (Maybe some hand gesture recognition…) Or we can just write an obligatory non-killer app like CNN alerts.

So I'm doubting that we are going to see a large ecosystem of apps for this revision of Glass devices. Now, if you fast-forward and give me 1) full-lens AR, 2) always-on command recognition, 3) physical object pattern matching, 4) recognize some hand gestures for input, and 5) less physical bulk on my face, then we have a platform for apps!! Meanwhile, I think if you take the best of Glass's utility today, being hands free, and being a spigot of information, it might as well be a wrist device, and be far less "in the way". (It will be interesting to see how the wrist device rumors hold up at Apple.)

Potential

It’s easy to be underwhelmed and throw the whole concept under the bus. Google X has attempted a leap with this device, and I’m not sure it landed on both feet yet. But the fact is that they took a jump, and I applaud the direction. This device demonstrates the power of Google, funneled and focused to a fine tip, and placed inches from your eyeball, right in your ear, and tuned-in to your voice. So while I may not like this particular device, there is no doubt that Google (obviously) has viable lifeblood to power a life-augmenting, out-of-your-way device.

Thursday, June 6, 2013

On the aesthetic of a Java enum singleton

Surely Java programmers these days use Bloch's suggested enum form of singletons, e.g.

public enum Cupcake {
    INSTANCE;

    private Foo someInternalStuff;

    private Cupcake() {
        // get internal stuff ready
    }

    public void doSomethingInteresting() {
        // super exciting stuff goes here
    }

}

And then in our code we see Cupcake.INSTANCE strewn about.

Ok, that’s fine. But lately I found myself using “$” instead of that big fat “INSTANCE”. I just like how it looks. That “S” with a bar through it says “singleton” to me now.

Cupcake.$.doAwesomeness(); // isn't that nice?
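
For completeness, here is a minimal sketch of the enum itself declared with "$" (the doAwesomeness() method is only for illustration, to match the call above):

public enum Cupcake {
    $;  // the lone instance, formerly named INSTANCE

    public void doAwesomeness() {
        // super exciting stuff goes here
    }
}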

This is next to useless information about a personal preference and hardly worth a post. But there it is. :-)

Monday, February 4, 2013

AngularJS and setting focus on elements

[update: 2/14/2013] There is new information in the angular group discussion that obviates much of this post. Support for setting focus and blur are on the Angular 1.1 roadmap. The question in this post that remains is how to clean up the timing issue.

~~

At last week's AngularJS meetup in Chicago, I had a discussion about setting focus on an element, which is not built into Angular. This is a challenge because you basically have to address a specific DOM element from your controller, which we have learned is Bad Form.

I believe that AngularJS should provide for setting focus (and blur, and select). This is one of those common things that developers have to face, and we only end up with various solutions posted online and no canonical form. I'd like angular to say, "here's how we can set focus". So after playing around with this problem, this is my proposal.

Why is setting focus troublesome in angular?

The underlying problem is that setting focus is transitional, and it flows from the controller to the DOM. To be clear, a click or keyboard event is also transitional, but those transitions flow from the DOM to the controller. And unlike our elements bound with ng-bind or ng-model, transitional events are not represented by a state, nor do they participate in the “binding event loop”, which I believe they should.

The other concern I expressed is that this same problem probably applies to more than just setting focus. But how many other “fire-an-event-at-a-DOM-element” things are there? Offhand I could think of calling blur() on an element. Once back at my desk, I looked at events in the W3C DOM Level 2 spec (better summarized on wikipedia), and it would seem that there are really only three events that are appropriate to fire from the controller. These are focus(), blur(), and select().

How might we do it?

The only method I have seen to solve this problem is to $emit() or $broadcast() a message from the controller, and have a directive listening with $on() that can fire .focus() on the raw element. This is fine, and it achieves the proper decoupling, i.e., not letting the raw element leak into the controller. But it feels somewhat clumsy to address your element by way of calling $scope.$emit('agreed-upon-name'). Contrast this with changing a variable bound to an input element, which feels more direct. For that we simply assign to the bound variable, e.g., $scope.form.personName = "Alice".

I propose decoupling focus events the same way we decouple value assignment to a DOM element. Use an attribute directive to indicate a $scope variable that we will $watch. And when that variable changes from 0 to 1, fire the focus event and reset the value to 0 again. In fact, we can go the extra step and generate a focus() function that will set this variable to 1 for you. This feels more natural, actually calling a focus() function for the element you care about, and doing so without addressing the DOM directly.

The result is clean and simple. Here’s the simplest form, which doesn’t have any controller code:

<input type="text" x-ng-model="form.color" x-ng-target="form.colorTarget">
<button class="btn" x-ng-click="form.colorTarget.focus()">do focus</button>

For me, it is clearer to see form.colorTarget.focus() than $emit(‘colorFieldFocus’).

Note that by having our directive place a .focus() function directly into our scope, we can now use a list of objects in ng-repeat and have each object be blessed with a .focus() method. We don’t have to use $index to construct some naming convention for our event names. In fact, I’m not certain how we’d do this with the $emit() solution. The ng-repeat example looks like:

<h3>demo with ng-repeat</h3>
<div x-ng-repeat="p in people">
    <span>{{$index}}</span>
    <span x-ng-bind="p.name"></span>
    <input type="text" x-ng-target="p">
    <button class="btn btn-mini" x-ng-click="p.select()">select</button>
</div>
<button class="btn" x-ng-click="people[0].focus()">focus item 0</button>
<button class="btn" x-ng-click="people[2].blur()">blur item 2</button>
<button class="btn" x-ng-click="people[3].select()">select item 3</button>

The controller just has a list of people objects, e.g.

angular.module('app', []);
angular.module('app').controller('DemoCtrl', function($scope) {

    $scope.people = [
        {id: 123, name: 'alice'},
        {id: 714, name: 'bob'},
        {id: 531, name: 'carly'},
        {id: 284, name: 'dave'},
    ];

});

Each one of those objects will gain a .focus(), .blur(), and .select() function by virtue of the ng-target attribute. The downside of this approach is that you would not want to have multiple DOM elements with ng-target pointed at the same underlying people objects. In that case, whoever writes over the .focus() function last wins.

Here is a functioning jsfiddle:  http://jsfiddle.net/bseib/WUcQX/

Timing is everything

With this approach, I like that the firing of the raw element.focus() is placed where it needs to be, such that it participates in the $digest/$watch event loop. But why should it belong here?

The actual focus event doesn't fire right away, not until its transition happens to be noticed, along with other variables that are being $watched. We inevitably want to fire one of these focus events on the heels of changing a class, or changing an attribute of an element. We might remove the disabled attribute from a <button>, followed by a call to focus() for the same button. Or we might remove a class like .hide { display: none; }, and then call focus(). Say we want to show a hidden panel, then set focus on a text element within. The key here is that we want these events to happen in a particular order.

To do this correctly means having some priority assigned in Angular's binding mechanism. Basically, during the binding cycle, you want to apply all the data binds first and let them actually take effect in the DOM. Then, once the DOM element attributes or classes have changed, we can fire the lower-priority items, i.e. any focus(), blur(), or select() events that are ready to fire.

Considering Angular’s event loop, it seems that the $watch callbacks themselves are the place to tackle this timing problem. I think it can be solved by passing an optional listenerPriority integer when you setup your $watch, so that the execution of the callbacks can be sorted by their priority. We would use the same priority semantics established by $compile, i.e. lower numbers get applied last. I think a middle-of-the-road default priority (like 100) should be used if none supplied. And the ng-target events would default at 50. This leaves wiggle room on all sides. The $watch signature could look like:

$watch(watchExpression, listenerPriority, listener, objectEquality);

Without a listener priority, we have no control over the order that $watch callbacks will fire, and thus, we cannot control the order that we apply visual changes to the UI. Presently, I wrap the element.focus() call inside a $timeout with a 50ms delay. It seems to work, but wow is it super hacky.

Here is an example of the timing problem in action.

 

I’d like some discussion/feedback from angular folks on having a listenerPriority in $watch.
Please leave comments in this thread: http://goo.gl/ipsx4

 

How might we expose this functionality from the DOM?

We could have individual attributes to cover each of these three events, e.g., ng-selectable, ng-focusable, ng-blurable, or something similar. (Wow, “blurable” is a little awkward looking…)

I feel it is better to create a “mothership” attribute, so that we expose all three methods focus(), blur(), or select() all at once on the same scope variable. But to pile these all on one attribute begs the question, what elements can you actually set focus()? blur()? select()?  The W3C spec shows the following elements accept the corresponding functions.

HTMLSelectElement focus() blur()
HTMLInputElement focus() blur() select()
HTMLTextAreaElement focus() blur() select()
HTMLAnchorElement focus() blur()

Remember that browsers don’t necessarily adhere to the spec. For example, where’s HTMLButtonElement in the spec? (e.g. <button> rather than <input type=”submit”>) We expect to be able to call focus() on buttons.

My proposal is an attribute directive with a name of ng-target that has all three functions attached to it. The name ng-target still fits whether the element is an anchor, button, or input field.

Directive Implementation

Here is the directive implementation as proposed. This is what I am currently using in my code. However it still has the ugly super-hacky $timeout delay. A listenerPriority in $watch should address this issue. Or perhaps I have overlooked another solution. If anyone is interested, I am willing to add documentation to this code and contribute it per the Angular contribution guidelines. I need some feedback first.

angular.module('ng').directive('ngTarget', function($parse, $timeout) {
    var NON_ASSIGNABLE_MODEL_EXPRESSION = 'Non-assignable model expression: ';
    return {
        restrict: "A",
        link: function(scope, element, attr) {
            var buildGetterSetter = function(name) {
                var me = {};
                me.get = $parse(name);
                me.set = me.get.assign;
                if (!me.set) {
                    throw Error(NON_ASSIGNABLE_MODEL_EXPRESSION + name);
                }
                return me;
            };

            // *********** focus ***********
            var focusTriggerName = attr.ngTarget+"._focusTrigger";
            var focusTrigger = buildGetterSetter(focusTriggerName);
            var focus = buildGetterSetter(attr.ngTarget+".focus");

            focusTrigger.set(scope, 0);
            focus.set(scope, function() {
                focusTrigger.set(scope, 1);
            });

            // $watch the trigger variable for a transition
            scope.$watch(focusTriggerName, function(newValue, oldValue) {
                if ( newValue > 0 ) {
                    $timeout(function() { // a timing workaround hack
                        element[0].focus(); // without jQuery, need [0]
                        focusTrigger.set(scope, 0);
                    }, 50);
                }
            });

            // *********** blur ***********
            var blurTriggerName = attr.ngTarget+"._blurTrigger";
            var blurTrigger = buildGetterSetter(blurTriggerName);
            var blur = buildGetterSetter(attr.ngTarget+".blur");

            blurTrigger.set(scope, 0);
            blur.set(scope, function() {
                blurTrigger.set(scope, 1);
            });

            // $watch the trigger variable for a transition
            scope.$watch(blurTriggerName, function(newValue, oldValue) {
                if ( newValue > 0 ) {
                    $timeout(function() { // a timing workaround hack
                        element[0].blur(); // without jQuery, need [0]
                        blurTrigger.set(scope, 0);
                    }, 50);
                }
            });

            // *********** select ***********
            var selectTriggerName = attr.ngTarget+"._selectTrigger";
            var selectTrigger = buildGetterSetter(selectTriggerName);
            var select = buildGetterSetter(attr.ngTarget+".select");

            selectTrigger.set(scope, 0);
            select.set(scope, function() {
                selectTrigger.set(scope, 1);
            });

            // $watch the trigger variable for a transition
            scope.$watch(selectTriggerName, function(newValue, oldValue) {
                if ( newValue > 0 ) {
                    $timeout(function() { // a timing workaround hack
                        element[0].select(); // without jQuery, need [0]
                        selectTrigger.set(scope, 0);
                    }, 50);
                }
            });

        }
    };
});

I welcome your feedback.

 

Other random thoughts:

How might this relate to setting elements “tabbable”? Is unit testing ok with this? Are there any memory leak possibilities using inside an ng-repeat? How can I break it?

Monday, January 21, 2013

Ready for IPv6 addresses to visit your web app?

In a Java servlet you look up the client's IP address by calling request.getRemoteAddr(), which returns a String. But are you expecting a dotted-decimal IPv4 address in that String? Your code should be prepared to see IPv6 addresses too.

I recommend looking at Google's Guava library, a collection of commonly useful Java stuff, i.e. "collections, caching, primitives support, concurrency libraries, common annotations, string processing, I/O, and so forth". Specific to reading Internet addresses is the class InetAddresses.

With InetAddresses.forString(ipAddr) you can pass in a raw internet address string (whether IPv4 or IPv6) and get back an InetAddress, which you can test to see which type you were returned, either Inet4Address or Inet6Address.

Here's an example where I only call a geo-lookup library if I have an IPv4 address (until the library can look up IPv6 addresses too).

import java.net.Inet4Address;
import java.net.InetAddress;

import com.google.common.net.InetAddresses;


String ipAddr = request.getRemoteAddr();
Location loc = null;
try {
    InetAddress ia = InetAddresses.forString(ipAddr);
    // TODO fix geo library to handle IPv6 addresses
    if ( ia instanceof Inet4Address ) {
        loc = Geo.DB.lookup(ipAddr);
    }
}
catch ( IllegalArgumentException e ) {
    // the string was not a valid IPv4 or IPv6 address
    loc = null;
}

Another thing I am interested in from the Guava library is a means to produce a canonical form of an IPv6 address. There is a recommendation proposed (IETF RFC 5952) for a canonical text representation of IPv6 addresses, and it calls out some good reasons why we should care. The proposal itself is very simple, having only five concerns, and they are what you'd expect.

I have not yet thoroughly examined what Guava does to format IPv6 addresses. That's next.


Update 1/22/2013:  The function InetAddresses.toAddrString(InetAddress ip) returns the string representation of the IP address, and for IPv6 addresses, it adheres to RFC 5952. Hat tip to +Paul Marks for pointing me to the right version of the API.
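
For the curious, here is a minimal sketch of how I'd expect that to behave, using the same Guava calls mentioned above (the sample address and the expected output come from my reading of RFC 5952, so treat the printed result as an assumption until you run it yourself):

import java.net.InetAddress;

import com.google.common.net.InetAddresses;

public class CanonicalIpv6Demo {
    public static void main(String[] args) {
        // A deliberately verbose IPv6 string: leading zeros and an uncompressed run of zero groups
        InetAddress ip = InetAddresses.forString("2001:0db8:0000:0000:0000:0000:0002:0001");

        // RFC 5952 canonical form: lowercase hex, leading zeros dropped,
        // and the longest run of zero groups collapsed to "::"
        String canonical = InetAddresses.toAddrString(ip);
        System.out.println(canonical); // expecting "2001:db8::2:1"
    }
}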

Wednesday, January 2, 2013

Jersey @Provider example

I’ve found many examples of writing a Jersey @Provider to implement data marshalling, or mapping exceptions into a Response, but I've found very little on how to inject your own data types into your Jersey resource methods.

My code used to look something like this:

@POST
@Path("/create")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createThing(@Context HttpServletRequest request, String arg) {
    UserSession us = /* code to extract UserSession from cookie in request */
    /* more code here */
}

But I wanted it to look like this:

@POST
@Path("/create")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createThing(@Context UserSession us, String arg) {
    /* more code here */
}

Doing it this way cleans up the code more than the above snippets suggest. I eliminate some exception handling from all the resource functions that need the UserSession (which are many), plus I get a good separation of concerns. It’s just dependency injection.

So the @Provider class to support the new code looks like this:

import java.lang.reflect.Type;
import javax.ws.rs.core.Context;
import javax.ws.rs.ext.Provider;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sun.jersey.api.core.HttpContext;
import com.sun.jersey.core.spi.component.ComponentContext;
import com.sun.jersey.core.spi.component.ComponentScope;
import com.sun.jersey.server.impl.inject.AbstractHttpContextInjectable;
import com.sun.jersey.spi.inject.Injectable;
import com.sun.jersey.spi.inject.InjectableProvider;
@Provider
public class UserSessionProvider extends AbstractHttpContextInjectable<UserSession> implements InjectableProvider<Context, Type> {
    final static private Logger logger = LoggerFactory.getLogger(UserSessionProvider.class);
    @Override
    public Injectable<UserSession> getInjectable(ComponentContext ic, Context a, Type c) {
        if (c.equals(UserSession.class)) {  
            return this;  
        }  
        return null;
    }
    @Override
    public ComponentScope getScope() {
        return ComponentScope.PerRequest;
    }
    @Override
    public UserSession getValue(HttpContext context) {
        try {
            // CookieUserSession takes care of the marshalling/validation of data in the cookie
            CookieUserSession cookie = CookieUserSession.checkForCookie(context.getRequest().getCookies());
            return cookie.recoverAllFields(); // returns a UserSession, clearly
        }
        catch (CookieTamperedException e) {
            logger.error(e.getMessage(), e);
            throw new RsException(e); // my subclass of WebApplicationException
        }
    }
}

It took me a while to sort out which class I needed to extend (and interface to implement). A number of trials and errors gave me Jersey exceptions complaining that it could not find an injector for my resource function. Messages like this:

INFO: Initiating Jersey application, version 'Jersey: 1.14 09/09/2012 07:21 PM'
Jan 02, 2013 10:20:12 PM com.sun.jersey.spi.inject.Errors processErrorMessages
SEVERE: The following errors and warnings have been detected with resource and/or provider classes:
  SEVERE: Missing dependency for method public javax.ws.rs.core.Response com.example.rs.MyService.createThing(com.example.stateful.UserSession,java.lang.String) at parameter at index 0
  SEVERE: Method, public javax.ws.rs.core.Response com.example.rs.MyService.createThing(com.example.stateful.UserSession,java.lang.String), annotated with POST of resource, class com.example.rs.MyService, is not recognized as valid resource method.

Hat tip to Antoine Vianey’s blog post where one of his examples demonstrated this.

Wednesday, October 17, 2012

FoxitReaderOCX.ocx failed to load.

Here’s how to fix this error.

For a long while I have been using Foxit Reader for PDF files rather than using Adobe’s PDF Reader. (Adobe kept bugging me too much about installing stuff, all the time it seemed.) After I installed a new Foxit Reader, I began seeing an error whenever a PDF was to be displayed in the browser. It just gave a dialog that said, “FoxitReaderOCX.ocx failed to load.”


Interestingly enough, even when I uninstalled Foxit completely, Chrome still gave me the exact same behavior. The software didn’t change.

Googling turned up little help, but it eventually led me to this interesting fact: Chrome actually loads plugins directly from your Firefox plugins directory. Check your Chrome plugins at chrome://plugins/ and you'll find some loaded directly from your Firefox directory (if you have Firefox). You'll have to click the [+] Details button/link in the upper right to see the full file paths. Here are some of mine:

[screenshot: chrome://plugins/ listing plugins loaded from the Firefox plugins directory]

Aha! I remember when I installed the new version of FoxitReader, I unchecked the box to install the Firefox plugin, since I rarely use Firefox.

So first I went to the Firefox plugins directory and removed the offending file “npFoxitReaderPlugin.dll”. Then I ran my new FoxitReader installer and this time I allowed it to install the Firefox plugin, i.e. the new one.

 


Thursday, September 6, 2012

twilio4j updated to support new Twilio <Queue>

 

Yesterday, Twilio introduced a new feature, call queuing.

This allows you to answer a phone call and place the caller into a queue. You can still use all the other Twilio features, like giving the person a message, and playing “on hold” music the same way you would for a Conference call. The API exposes information like the caller’s position in the queue, the time that the caller has been in the queue, the average queue time, and the current queue size. You can also capture events like caller hangups.

 

Today I have updated the twilio4j java library (version 1.0.4) to support these new <Queue>, <Enqueue>, and <Leave> noun/verbs.