Monday, March 3, 2014

Your Online Identity Crisis

Well, it has finally begun.  The intense identity aggregation of products like Google and Facebook is pushing users towards anonymous services.  Whisper and Secret are both making headlines, each promising to let you escape in some way from the ruthless scrutiny of the mainstream social networks.  While these services are great for providing a momentary distraction for their users, they are still doing nothing to address the core problem of online identity.

In real life, there are very few situations where it is useful or even desirable to be anonymous, outside of explicitly anti-social or criminal behavior.  The standard examples – corporate leaks, personal confessions, honest reviews, etc. – do not benefit from true anonymity.  Instead, what people want is to expose some subset of their true identity but nothing more.

For example - if I am an Apple employee releasing a corporate leak, I don’t want Apple to discover who I am, but it is still important that others know I am an Apple employee and not just some random fanboy.  Likewise, if I am confessing something about my personal life, I want to do it with a supportive community and not to strangers who don’t care about me and with whom I have no lasting relationship.    

It isn't about being anonymous or even pretending to be someone I am not.  It is about controlling which facets of my true identity are relevant in different social contexts.  This is fundamentally not deceptive; in fact, it is what enables one to be authentic.

Outside of the internet, it is extremely difficult to find out information about a person, so we can easily and naturally compartmentalize our experiences.  I can go to my AA meeting and discuss my issues with alcohol, and then later I can go to a car show and discuss my love of '60s muscle cars.  I don't worry much about someone at the car show reacting poorly to me because I am an alcoholic.  Again, I am not a different person in these settings – it is always me – but different parts of my identity are relevant.

The Googles and FaceBooks (GoogleBooks?) of the world want to aggregate all of these into a single identity.  They want to do this, not because they think this is good for users or because this is how they think society works, but rather because it helps them monetize your interactions.  However, this type of aggregation is a very bad deal for users.

Users' primary experience with this comes in the form of hyper-targeted ads.  A perfect example is when I go to my online AA support community to do some searching or posting, and then navigate to my car community.  In the car community, I receive ads targeting me as an AA member -- this scares the s*** out of me.  Even though users are primarily reacting to this "Google is stalking me" factor, there is actually a subtle but much more insidious force at work.

These services are making an extremely strong push to get you to sign in everywhere on the internet with a single ID.  This is initially great for users, because who wants to remember so many passwords?  But when you do this, GoogleBook is aggregating your identity into their system, and all activity on that new site is mixed with everything else you have told them before.  Most users are unaware that this undermines the trust relationships they have with those new sites.

When I decide to share information with a service, I make a trust decision that is between me and that other entity.   I can decide to purchase things from Amazon knowing that Amazon will retain my purchase history and use that to create my “Amazon Identity”.  I am OK with OpenTable knowing where I go out to eat, I trust my bank with my account information, and I trust my fellow AA members with my personal struggles.   For each of these entities, I have made a conscious trust decision.


But when I use GoogleBook for these sign-ins, I am tossing that out the window.  I am implicitly granting GoogleBook the union of all of those trust relationships.  And through GoogleBook, I am giving that information away to all of their advertisers and other players.  So, not only am I suddenly trusting GoogleBook with my Amazon purchase history, I am also potentially trusting OpenTable and Clash of Clans with that information as well.  This is a fundamental undermining of the original trust relationships, and it will lead to very large problems down the road.

It doesn't have to be this way though, and that's why my company is working on solving these problems.  Stay tuned.

Thursday, September 12, 2013

Introducing Blahgua, a new social sharing network

Many years ago, while working for the Human Interface Group at Apple Computer, a colleague of mine, Harry Chesley, created a piece of software called "Rumor Monger".  Rumor Monger ran on the Apple corporate LAN and was used for spreading anonymous rumors from computer to computer.  It quickly became incredibly popular.  It was the first time I encountered the idea of something "going viral."  A big part of the addiction was that in these pre-WWW days, you KNEW that every rumor on that network was coming from another Apple employee.  The rumors quickly went from jokes to serious product complaints to which VP was sleeping around with which secretary, and the software was eventually cleansed from the network completely.

Harry went on to invent things like Shockwave for Macromedia and to manage the Social Computing Group at Microsoft Research, where our paths crossed again.  In SCG we spent a lot of time studying people's online behavior and watched the comings (and goings) of various sites like MySpace, Friendster, Facebook, Google Circles, and the like.  All of these sites struggled with how to get good content in front of an audience, how to let content creators use their real-world qualifications to endorse their content, and how to do all of this (and make money) without compromising the privacy and personal identity of their users.

Amazingly, 20 years later, these issues remain further from being solved than ever.  Facebook, Twitter, and Google are collecting information like crazy, and not surprisingly it turns out that it is going not only in aggregated form to advertisers but also to the NSA.  It isn't something that can be solved with privacy policies.  As the Arab Spring protesters found out, large companies can always be compelled to turn over their records.

So last year I made a decision to try to do something myself.  I left Microsoft and connected with another old friend from the Apple days - Ruben Kleiman, the guy you probably best know from his work on the Netflix classifier.  We set out to create a new type of social content sharing network.  One that would have the advantages of viral amplification in an anonymous system, while still including reputation to stop spamming and other bad behavior and allowing viewers to make trust decisions about online content.

The result, one year later, is "Blahgua":





Blahgua (pronounced "Blah-Gwah", a play on the Chinese word for gossip) is a social sharing network like Twitter or Instagram or Reddit, but with some important differences.

The first is that blahgua does not rely on friends or followers to get good content.  Instead we use real-time categorization, recommendation, and viral propagation algorithms to distribute an ever-changing stream of interesting content to each user.  When you create content in blahgua, it is sent to a small number of random users.  If those users like it, it is automatically spread to more.  The result is that interesting content is quickly amplified across the network while spam and other boring content dies out.  This is fed back into your reputation, increasing or decreasing your initial audience the next time you post.
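The exact algorithm isn't spelled out here, but the mechanic described above – seed a small random audience, then expand it on each positive reaction – is easy to sketch.  Everything below (population size, like probability, fan-out) is made up for illustration; these are not Blahgua's actual parameters:

```python
import random

def propagate(population, seed_size, like_prob, fanout, rng):
    """Simulate viral spread: start with a small random audience;
    each 'like' forwards the item to `fanout` more random users."""
    audience = set(rng.sample(range(population), seed_size))
    frontier = list(audience)
    while frontier:
        viewer = frontier.pop()
        if rng.random() < like_prob:  # this viewer liked the item
            for _ in range(fanout):
                new_viewer = rng.randrange(population)
                if new_viewer not in audience:
                    audience.add(new_viewer)
                    frontier.append(new_viewer)
    return len(audience)  # total number of users who saw the item

rng = random.Random(42)
boring = propagate(10_000, seed_size=10, like_prob=0.05, fanout=3, rng=rng)
interesting = propagate(10_000, seed_size=10, like_prob=0.60, fanout=3, rng=rng)
```

The key property is the branching factor: when like-rate times fan-out stays below 1, an item dies out near its seed audience, and when it rises above 1, the item saturates a large share of the network.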

The second big difference is that blahgua does not rely on personal identity.  We don't need your email address, or even your IP address*, to operate.  Instead of using personal information for credibility, blahgua has the notion of badges.  A badge is a fact about yourself that you verify outside of blahgua.  For instance – you could get a badge that shows you are an Apple employee (or, to be more accurate, that you can respond to an apple.com email address).  You can then bring that badge to blahgua and use it in your posts.  Everyone will know that your post was made by an Apple employee, but no one can determine which employee it was.  And when I say "no one", I don't mean that we have some encryption technology that becomes irrelevant when the NSA shows up with a search warrant.  I mean the information simply does not exist because we never collect it in the first place.  The badge issuers communicate with blahgua through a zero-knowledge protocol, so blahgua can't identify you in the world and the badge service can't identify you in blahgua.
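The actual protocol isn't described here, but Chaum-style blind signatures are the classic way to get exactly this unlinkability: the issuer certifies a badge token without ever seeing it, so it cannot later link the badge back to the user.  Here is a toy sketch with tiny textbook RSA numbers (completely insecure, for illustration only):

```python
# Toy Chaum-style blind signature.  The badge issuer signs a token
# without ever seeing it, so it cannot link the badge to the user.
# Tiny textbook RSA parameters -- illustration only, NOT secure.

p, q = 61, 53
n = p * q                      # 3233, the issuer's public modulus
e = 17                         # issuer's public exponent
d = 2753                       # issuer's private exponent (e*d = 1 mod phi(n))

token = 123                    # the user's secret badge token
r = 19                         # user's random blinding factor, gcd(r, n) = 1

# 1. User blinds the token and sends the blinded value to the issuer.
blinded = (token * pow(r, e, n)) % n

# 2. Issuer verifies the user's apple.com address out-of-band, then
#    signs the blinded value.  It never sees `token` itself.
blind_sig = pow(blinded, d, n)

# 3. User strips the blinding factor, leaving a valid signature on `token`.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Anyone can check the badge against the issuer's public key.
assert pow(sig, e, n) == token
```

The issuer can confirm you control an apple.com address before signing, yet the signature it later sees attached to a post is mathematically unlinkable to the blinded value it signed, because the blinding factor r is known only to the user.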

Badging is really great because it is simple and direct.  No one knows that you are an Apple employee unless you specifically add that badge to your post.  And if you want to say you are an Apple employee on one post and a WalMart shopper on another, you never have to worry that the information is going to be cross-leaked.  Almost any external fact can be badged, from whether you are someone's friend on Facebook to whether you ate at a restaurant to whether you are a member of a fraternity.


a user can have any number of badges


Blahgua has some other cool features.  One is that we surface all of the analytics back into the app, so you can easily see how many people are viewing, opening, or commenting on your content or anyone else's.  The hope is that this will let you understand how you are connecting with your audience (as a creator) or who else is in the audience (as a viewer).  It is also our way of making sure you know everything about blahgua that blahgua knows about you.


blahgua exposes detailed statistics

We also support structured communications, so you can do things like make predictions (that expire) or create polls that users can respond to.  We will continue to expand the set of speech acts we support.


have fun with structured communications, like polls

We have a lot of plans for the future.  We are still in early beta, but please give it a try and tell me what you think.  We don't have native apps yet, and there are some UI issues and occasional perf concerns.  Sometimes things don't work at all.  No doubt there is a lot of work ahead, and like any start-up we know there will be hard times and some pivots required.  But we think it is new and different and serves a need that the other networks are not addressing.

Any comments, please let me know.  And watch this space for more details!

http://www.blahgua.com


Thanks.

- davevr










Wednesday, April 18, 2012

Why Google+ is still not working for humans


Ah, poor Google.  So full of really smart people, so detached from reality.  I say this with great respect for my many friends and colleagues who work there.  Your fundamental inhumanity is your tragic flaw, and the thing that made you so good at providing search is going to doom you in the social space.

Marie Antoinette Syndrome
As I mentioned, I know a dozen or so people at Google.  These are really smart guys.  I would say the average IQ is around 135-145.  That is great when you need to create complex distributed algorithms for a massive compute cluster.  Not so great if you are trying to design something for ordinary humans.

The IQ distribution curve (from Wikipedia)

Think about this for a moment:  The average IQ, by definition, is 100.  An IQ of 145 is three standard deviations above the norm, representing roughly 0.1% of the population.  And most important for our discussion, the gap between a person with an IQ of 145 and a typical person is the same as the gap between the average person on the street and someone with an IQ of 55.  An IQ of 55 would signify moderate retardation.
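These tail fractions are easy to check with nothing but the standard normal CDF (IQ tests are normed to mean 100, standard deviation 15):

```python
from math import erf, sqrt

def fraction_above(iq, mean=100.0, sd=15.0):
    """Fraction of a normally distributed population scoring above `iq`."""
    z = (iq - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))  # upper tail of the normal CDF

print(f"above 130 (2 SD): {fraction_above(130):.3%}")  # ~2.3%
print(f"above 145 (3 SD): {fraction_above(145):.3%}")  # ~0.13%
```

So an average IQ of "around 135-145" really does put that group out past the 99th percentile, which is exactly the point about the size of the gap down to the average user.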

Did you know that someone with an IQ of 55 is capable of doing repetitive tasks like housework, mowing lawns, laundry, etc., but would have a hard time doing critical reasoning tasks like working on a farm, harvesting crops, fixing fences, etc.?  It is interesting.  For someone with an average IQ, both of these types of tasks seem trivially easy so it is very hard to keep these distinctions in mind when designing things.

This happens with other traits as well, not just IQ.  I read a lot of fiction, and I often read stories where one of the characters is supposed to be incredibly brilliant – a genius of one kind or another.  Or they feature characters that are fabulously rich, or the leaders of powerful government agencies or corporations.  Unfortunately, the writers of these fictions tend not to have high IQs or massive bank accounts themselves, so they just cannot imagine how high-IQ people think or how very wealthy people live.  For instance, a recent book had a scene where an incredibly wealthy corporate head used his status and influence to cruise through a special lane at airport security.  But an actual person like that would be more likely to have a corporate jet and not enter the public terminal at all.  In this case, the idea that someone could have their own jet was just beyond the writer's imagination.  It is like watching The Big Bang Theory – the writers claim that Sheldon has an IQ of 187, but they cannot even imagine how smart a person with an IQ of 145 is, much less 187.  (And of course that wouldn't be nearly as funny…)

The Big Bang Theory - from CBS
So any trait of discrimination can have this same effect – IQ, beauty, emotional understanding, political power, money, etc.  We might call this the Marie Antoinette syndrome, after the (likely apocryphal) story of the former queen who, upon hearing that the peasants had no bread to eat, suggested that they eat cake instead.
 
Google has the same issue.  They think they are designing things that will be simple and easy for everyone to understand, but they don't seem to be able to tell when they go off the reservation.  When they were just doing search, it didn't matter.  They could distill all of that brilliance into something super simple: a search box.  But high IQ doesn't mean high EQ.  In fact, their EQ – emotional understanding – is probably just as low as their IQ is high.  And in cases like social – where EQ is at the forefront – they simply can't tell that what feels satisfying emotionally to them is far below the bar of the typical human.  They just cannot feel outside the box.

Let them +1 instead
Case in point: where FaceBook has Like, Google has +1.  To someone at Google, this makes perfect sense.  After all, someone clicking "Like" doesn't really mean that they like the subject matter.  It just means that they want to see it promoted.  So if someone shares an article about some negative event, like a convicted killer being released on a technicality, it feels a little weird to click "Like".  A thumbs-up icon has the same issue.  I just have to assume that people understand that I don't like the fact that the killer got off; I just want to spread the outrage.  +1 avoids this connotation.  It is a neutral vote.  Besides (the Google person would remind us), under the hood +1 is what all of these other things are doing anyway – incrementing an entry in some database table somewhere in the cloud.

The +1 button, from Google+
The problem is that this voting is not an emotion-free task.  People click the Like or Thumbs Up button because they feel an emotion about the item.  By moving this from an emotional indicator like “Like” to an unemotional one, they are diminishing this feeling.  Furthermore, “+1” is a math equation, and math overall does not have a neutral emotional connotation.   For most of the population, math has a negative connotation.  So if my sister likes my llama photo, she has to overcome the emotional dissonance between this positive feeling of llama love and her negative feeling of math even to click this button.

Crazy Llamas.  I like them.  I don't +1 them.


I know my Google friends are rolling their eyes now.  “Dave,” they are saying, “+1 is not a math equation!  It has nothing to do with math!  It is just the same as Like!”   Sorry guys.  The fact that these are the same to you just shows just how out of touch with the normal humans you are.  No doubt you are still happy that you managed to shut down the people who wanted to add "-1", or "+0" as a refresh vote, or even have a currency (based on reputation) that allowed you a certain # of credits that you could add to or subtract from posts, etc.  But let's be real here:  If your buddy comes up to you and says “dude, I am getting married!” it would be fine to say “I like!”.  It is fine to give him a thumbs up.  But who in their right mind would say “+1”?  No one can emotionally get behind that.  It is only sensible in some dystopian future where Google has finally fulfilled their mission statement.


Google+ has a ton of awesome functionality.  It isn't a matter of features.  But it is riddled with things like the +1 that are constant reminders that it isn't designed for normal people.  It is covered with – let's call them "emotional edges" – sharp areas that make the whole thing an unpleasant place to hang out.  For instance – circles.  Great idea – almost.  But why draw them as perfect circles?  The one thing that a social circle isn't is perfect.  You could have made them look a bit lumpy or hand-drawn or really ANYTHING but a geometric circle to indicate that you understand that circles are a fuzzy concept.  And yet – you don't.  Again, a small point, but combined with the hundreds and hundreds of other small points, you get the overall emotional unpleasantness that is Google+.

Google circles.  Note the visual difference between friends and family.  Um...
Here is what normal humans think when you say "friend circle".  (via web image search)


The Greeting Card Equation
Social circles are interesting.  Since the earliest days of computer-mediated social networking, researchers and designers have been trying to figure out how to capture social networks.  On the one hand, we know that all of our "friends" are not created equal.  Some are closer than others.  Some are real friends, some are just co-workers, some are family.  So it makes sense to categorize them.  The downside is that, for many reasons, actually making these categorizations is quite difficult for each individual person!  Computer scientists tend to view this as a technical issue and add features like the ability to put people in multiple circles or define a hierarchy of groups, but the real problems are all emotional.

Adding someone to a circle is a form of labeling, and labeling another person is an emotional act.  Is this guy just a co-worker or is he a friend?  If I put him in co-workers, I am implicitly saying he is not a friend.  I can overcome this emotion, of course.  I can override this feeling and tell myself that these labels are not real, that they are just for convenience in distributing relevant social notifications, but that in itself takes emotional energy.  This is actually one of the most underappreciated aspects of UX design – the energy that it takes to suppress the natural emotional response to a product.

There is a little equation I use to test the emotions of my designs:

The amount I care about something
=
The amount other people think I care about it
=
The amount of work I put into it

Read that equal sign as "should equal", and if you designed it right, the equation should be true.  In other words, if I really like something, I should be able to spend a ton of time on it.  And anyone who sees the end product should know I really cared about it.  And if I don’t care about something, I shouldn’t have to spend any time on it at all, and anyone watching should know I don’t care.

Greeting cards are a good match for different emotional expressions

I call this the greeting card equation, because paper greeting cards solve this perfectly and computer greeting cards totally fail at it.  Consider:  if I really like you, I can go to the store and spend a ton of time finding the perfect card for your birthday.  When I give it to you, you can tell the card is perfect and you know I must have spent a ton of time finding it.  You might keep it by your desk for a few weeks to remind you of how special it is.  On the other hand, if I am giving generic corporate cards to every client on their birthday, I just order some corporate card.  I spend almost no time on it.  When the client gets it, they are maybe happy that I bothered to remember their birthday but they know the card is not special.  They don’t feel bad tossing it in the trash, and neither do I.  Everything is in sync.  But for online cards, this breaks down.  I can’t really tell how much effort you put into the card, and because it is just in email, I really have no way to treat it special.  The emotional connection has been removed.

Most software is terrible at this.  Not only do I often spend too much time on things I don't care about, I often have no way of spending more time (in a meaningful way) on things I DO care about.  Software tends to strip emotion out of the delivered artifact and make everything soulless.  The exceptions are things like certain games, like World of Warcraft, where I can immediately tell that another player has spent a ton of time on their character because I can see them wearing epic drops that can only come from months or years of play.  And most social network sites only recognize and reward activity, even though a user might spend hours and hours on the site reading things without commenting.

This emotional mismatch plays into the next problem with circles – an even larger one – which stems from the fact that humans and our relationships are not stable.  Someone who is my BFF today might be merely a friend tomorrow and a stranger in 5 years.  There is a high entropy factor in social networks.  New groups are forming all the time, and old groups die out.  Creating a group or adding people to a group is driven by a positive emotion.  I care about my new friend, or my new Guys Night Out group, so it feels appropriate to spend the energy it takes to create the group or add the person to my friend list.  But then later – when the person is not my friend, or the Guys Night Out plan has run its course – I no longer care about it.  And because I don't care, I don't want to spend any energy on it – not even the energy to remove it.  In fact, spending the energy to delete the group might inadvertently make the other group members think I cared about it, and make them feel bad for letting the group die.

This is why social networking sites tend to decay over time.  Because I never remove people, my Friendster or MySpace or Facebook account has a smaller and smaller percentage of meaningful relationships in it, and as a result it becomes less relevant over time.  This is also why a new social network always feels somehow better than the last one.  It has smarter people, more relevant conversations, etc.  It is all because your social network in the new space has not had time to decay.  This is a very difficult problem for computers to handle.  In an ideal world, the computer would be able to infer your social groups for you based on your behavior.  Unfortunately, Google, FaceBook, and the like do not have sufficient signals to make this determination accurately.  At least not yet, and not in the near future.   For most people, there is still way too much of life that happens outside of the digital realm for the computer to be usefully accurate about this except in very limited domains.  The one big advantage that FaceBook has is that it was the first network to grab a significant set of older users, who have more stable social networks than teens or college students.

This brings us to the third issue.  Humans are not computers.  We are not always rational creatures.  We do not think of our daily doings as activity feeds.  Our social networks do not consist solely of people who use computers.  In the late '90s – long before Friendster and MySpace – I was working in Lili Cheng's Social Computing Group at Microsoft Research.  We were doing research on social networks and had a strong belief that they would be incredibly popular in the future.  As part of this, Shelly Farnham and I went to a local mall and asked 100 or so random people to draw their "social network" on a blank piece of paper.  This research was quite fascinating, and despite being cited hundreds of times in the literature, none of its major findings have been incorporated into any major commercial product.

A user's hand-drawn social network

Here are some of the not-so-intuitive findings:


  • Every single person drew themselves in the center of the paper.  Everyone sees themselves as the core of their social network, and they define others primarily based on the distance from themselves.  And yet, none of the major social networks visualizes things this way.
  • People put negative as well as positive groups in their social networks.  It was very common to find groups like "enemies", "competitors", "people I don't talk to", and the like.  And yet social network software only has positive circles.
  • Most people classify their networks either as groups, like "friends", "family", and "co-workers", or in terms of relationships, where a hierarchy of relationships directly connects members of the network to each other, like mom->sister->nephew or boss->co-worker.  Very few people used both.  To me, this indicates that there are different ways people think of human relationships.  Most of the popular sites only support groups, not relationships.
  • Most people put figures in their social network that they have never contacted and likely never will.  Examples might be the boss of their company, a sports team, a movie star, a church leader, etc.  These tend to serve the role of emotional anchors for a group or as connection points.  However, these are not just labels – they are thought of as full members of the social network in the user's mind.  Ironically, most existing social networks actually work hard to prevent you from adding people to your network that you cannot connect with.  They claim they are doing the community a service by limiting you to "real friends", but actually this just goes against the natural emotional flow.
  • Related to the above, a surprising number of people have members of their social network who are not "contacts" in the traditional sense.  Very common examples would be dead relatives, pets, fictional characters from a book or TV show, and God.  Again, these sorts of entities are explicitly omitted from current networking sites.

So – some thoughts for all of you designing social networking sites.  I hope this is interesting, and I look forward to seeing more humanity from you in the future.

PS:  If you are interested in some alternative approaches to sharing and identity, try Heard!   Thx.

Sunday, March 25, 2012

Metro in LOB apps

I was talking to some folks about Metro the other day.  For those who don't know, Metro is Microsoft's new design language for Windows Phone, XBOX, and soon Windows 8.  Without naming names, let's just say that these are people who are in charge of delivering line-of-business applications for a Large Pacific Northwest Software Company.   And the topic was how to design these sorts of applications for Windows 8.

In a previous post I talked about some of the challenges that Metro presents to the Microsoft developer ecosystem.  Now, after talking with these internal IT developers, I am more concerned than ever.

The Fallacy of Consumer vs. Enterprise
The first thing that struck me was the active debate the team was having on whether or not Metro was even appropriate for this sort of "enterprise app".  There were two serious misunderstandings here.

First was the mistaken belief that Metro is designed for consumer apps.  This just isn't the case.  While it is true that Metro made its first appearance on a consumer device - the Zune - Metro is actually inspired by the design of things like airport signage.  In other words, Metro was inspired by the need for information management, task support, and efficiency - all things that matter even more in a business app than a typical consumer app, where efficiency is less of a concern.  In fact, when I am doing Metro-style design, I often get into the same mental space I would be in when doing form design.

But there is an even more fundamental misunderstanding here, which is that there is even such a thing as consumer vs. enterprise applications anymore.  Much has been written about the "consumerization of IT".  As devices – led by the iPhone and iPad – become increasingly useful, personal, and delightful, users are demanding that they maintain these experiences in their work environment as well.  The zeitgeist is that the tyranny of IT is over.  No more will employees put up with an inferior user experience just because there is some corporate mandate.  The signs of this are everywhere, and the sooner that you as an IT developer can get on board, the better life will be for both your users and yourself.

HeadTrax:  A Quick Case Study
To show the sort of thinking that is required, let me go through a mini mythical case study of a typical LOB app.  My inspiration for this is the HeadTrax application used inside Microsoft.  Like many internal applications, HeadTrax is basically a front-end for a database – in this case, a database of personnel records.  Everything that in some way affects a personnel record is done through this app.  As such, there are hundreds of commands spread across dozens of pages.

A typical Line of Business App
I like this example because I think this is a fairly typical pattern in this sort of app.  If we want to Metro-ify it, we have our work cut out for us.  But let's break it down step by step:

Step 1:  How many apps is this, anyway?
If you go through HeadTrax, the first thing you notice is that it has a ton of functionality.  Everything from updating your emergency contact to extending a vendor is done in the app.  The first mental shift is to stop thinking that a Metro version of HeadTrax would even be a single app in the first place.  In the age of the phone, monolithic apps are going away and constellations of simple applications are taking their place.  Think of the Apple iPhone mantra: "There's an app for that".  By breaking HeadTrax down into different apps, each app can be custom tailored to the task at hand.  Users only need to install the apps that they need.

Turning one monolithic app into a collection of more focused apps
Conveniently, many of these sorts of apps already have a list of shortcuts or common tasks bubbled up to the top.  These make a great starting place for thinking about how to break up a big app into little ones.

Finding and managing apps?  There's an app for that!
Multiple apps used to be a big pain - they were hard for users to find, hard to install, hard to update, etc.  But in the age of the app store, these problems have all gone away.  Users are very familiar with the pattern of searching an app store to find the apps they want.  The app store handles access control, installation, and even updating.

The Windows 8 app store
More is not always harder
One of the common concerns I hear about turning a single traditional monolithic LOB application into a dozen or more focused apps is that the development cost will be much higher.  There are more apps to write, more to test, and if the database schema or some other underlying condition changes, everything has to change.  It sounds like a reasonable concern, but when you go a little deeper you will see that it goes away.  Consider:


  • a single project in your development environment can easily support all of the related apps.  This makes it easy to have shared libraries, classes, and data models and ensure any change causes the appropriate updates.
  • although more apps mean more apps to test, this is more than offset by the fact that simpler apps are much easier to test than complex apps.   Simple apps eliminate many of the combinatorial issues that arise in monolithic apps.  They also tend to have much simpler UI.  
  • coordination and dependencies between developers on a project are often a considerable time cost.  Independent applications need much smaller teams and can be developed much faster.  So instead of 20 developers all working on a single app, you can put those 20 developers into five teams of four, making five independent apps.  

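To make the shared-project point above concrete, here is a minimal, hypothetical sketch (none of these names come from Headtrax or any real codebase): two tiny "apps" consuming a single shared data model, so any schema change happens in exactly one place and flows to every app at build time.

```python
from dataclasses import dataclass

# Shared data model: lives in one library project that every
# focused app references.  A schema change is made here, once.
@dataclass
class Employee:
    name: str
    title: str
    office: str

# Focused app #1: just renders a contact card.
def contact_card(e: Employee) -> str:
    return f"{e.name} ({e.title})"

# Focused app #2: just finds offices.  Same model, no duplication.
def office_line(e: Employee) -> str:
    return f"{e.name} sits in {e.office}"

if __name__ == "__main__":
    e = Employee("Ada Lovelace", "Engineer", "18/2105")
    print(contact_card(e))   # Ada Lovelace (Engineer)
    print(office_line(e))    # Ada Lovelace sits in 18/2105
```

In a real solution the model would be a shared library and the "apps" separate projects, but the structure is the same: the duplication the monolith was hiding lives in one place.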

Step 2:  Getting rid of the grids
The next most common question I get is how to do tabular data in Metro.  While there are several nice visual designs for grids out there with a pleasantly clean Metro look, the first task is deciding whether a grid is the correct tool at all.  In many cases, the answer is "no".

While there are times when a grid is definitely the correct UI, often a grid is used just because it is the easiest way to show a databound view.   So how do you know if a grid is the right thing, particularly if you are not some kind of information architect?

I've found that the simplest thing is just to ask a few questions.  Let's use this grid as a typical example:

A standard tabular grid
The first question is: are there any columns we can get rid of entirely?  For example, does anyone really ever need to see the detailed employee ID number?  It is probably only useful in the rare cases where you need to search by ID.  And how about the date of birth?  Why is this important?  I might ask around and find out the only people using it are the admins, planning for birthday parties.

After you eliminate columns you don't need, the next step is to ask if all of these columns are actually of the same importance.  Are they used with the same frequency and used in the same way?  If they are, then a grid is probably the right UI.  But if the columns are used with different frequencies and/or in different ways, then you are better off designing a more appropriate representation.

For example, in this case I might mostly care about people's names and titles.  I might be secondarily interested in knowing where their office is.  Finally, I might want to know about upcoming birthdays.  Based on that, I might do a UI more like this:

A more Metro way of showing data in a grid
If I really wanted more detailed information, I could get it in a drill-down.  You can still sort items or do searching in this format, just like we get in Windows Explorer when we put it into another view.
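The column-triage step above can be sketched in code.  This is a hypothetical view-model builder (every field name is invented for illustration, not taken from the original app): the primary columns become the headline, the secondary column becomes a detail line, and birthdays are demoted from a column everyone must scan into a separate drill-down list.

```python
# Hypothetical flat grid rows, as they might come from a database.
employees = [
    {"name": "Ann", "title": "PM",  "office": "18/2105", "birthday": "03-21"},
    {"name": "Bob", "title": "Dev", "office": "17/1001", "birthday": "04-05"},
]

def to_view_model(rows, month):
    """Restructure uniform grid rows into a prioritized view model:
    name/title up front, office as secondary detail, and birthdays
    pulled out into an 'upcoming' list for the given month."""
    cards = [
        {"headline": f"{r['name']}, {r['title']}", "detail": r["office"]}
        for r in rows
    ]
    upcoming = [r["name"] for r in rows if r["birthday"].startswith(month)]
    return {"cards": cards, "upcoming_birthdays": upcoming}

vm = to_view_model(employees, month="03")
print(vm["cards"][0]["headline"])   # Ann, PM
print(vm["upcoming_birthdays"])     # ['Ann']
```

The point of the exercise is not the code but the decision it encodes: once you admit the columns are not equally important, the grid stops being the natural representation.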

Concluding thoughts
So hopefully this is useful for thinking about re-doing your LOB app for Metro.  And you don't have to wait for Windows 8, either.  This sort of design is also great for the web and mobile devices.

If you have good examples of LOB app redesigns, please share them in the comments.





Friday, March 16, 2012

The Challenge of Metro

By now most people have seen Metro, the new design language from Microsoft, either on a Windows Phone or in the previews of Windows 8.  Overall, people like Metro.  It is clean and modern while also somewhat timeless in its minimalism.  It works well with touch and gestures but is also great with mouse or even keyboard.

Metro in the Windows 8 Developer Preview


So, on the one hand, this is a great time for Microsoft and Design.  We finally have an overarching design language we can be proud of, and the company has made impressive strides in getting many of its notoriously independent product groups to embrace it.   So Windows, Office, Xbox, WinPhone, and dozens of others will all be getting the new look shortly.  And there have never been more senior, principal, and even partner level UX people in the company.  

So why am I worried?

I’m worried because Microsoft has always been about the ecosystem play – the big tent – and not about the boutique.  And I don’t think the ecosystem – both inside and outside of Microsoft – is ready for Metro.

To understand the concern, let me take you on a little walk down memory lane.  Our journey starts a little over a decade ago, back in 2001, when Microsoft released XP.  At that time Microsoft created a new design language called Luna.  Remember Luna?

Luna theme in XP
Luna wasn’t really a whole new UI paradigm the way Metro is – it was primarily a visual theme, and it could be applied to existing Win95 or even Win 3.1 code with a fair degree of success.   For many people, XP was their very first exposure to a computer, so the design was intended to be bright, cheerful, and obvious.  It was not super ambitious.

And here lies the seed of our problem.  Luna was easy to implement, and hard to screw up.  As a design, it was very forgiving.  You could have controls that were not quite aligned, screens that were too dense, icons where you didn’t need them, and so on, and your stuff would still pretty much fit in.  It was a low bar, and thus was easy to hit.

Later, when we did what became Vista, we had a new design language called Aero.  This was a little more ambitious.  For Aero, we were not going after new computer users.  People already knew how to use a computer.  We wanted to make the computer experience more comfortable and stylish.   Aero added more emphasis on layout and white space.  We added more subtle visual cues.  Overall, we turned down the volume (visually) on the UI to make the workspace more calm.  We added transparency and a little animation.  It looked great.

Aero in Windows Vista
But we immediately found a problem.  The Windows design team created designs for all of the various core pieces in Windows, like Explorer, the photo browser, and so on.  But there were still dozens of other groups in Windows making things like control panels and utility apps that had to do their own design.  And these teams really struggled.

At that time, I had a discussion with Don Lindsay, who was one of the primary architects of Aero, about the cost of the design.  Don was of the opinion that software was different from most manufacturing.  You could increase the quality of the design without increasing any other production costs.  I disagreed.

Aero was harder for teams to get right.  You had to understand about effective use of white space, which was hard for teams without a graphic designer.  You had to understand what to make subtle and what to make obvious.  You needed to understand motion.  But that was just the up-front cost.  These designs were also harder for developers to implement.  Animation is harder to code than static screens.  Old techniques like automated dialog layout that looked acceptable in Luna looked like crap in Aero.  And then there was the testing cost.  In Luna, small errors didn’t really jump out.  If a few things were out of alignment, it wasn’t so glaring.  But in Aero, even small mistakes were very noticeable. 

So truly embracing a new design language really requires a serious commitment of time and expense to get that language right.  This was not budgeted in the Vista planning process, and as a result most teams came up short.  It was even worse outside Microsoft.  Many third party developers simply did not have the time and money to properly make their products conform to the Aero design language. 
And now we get to Metro.

Metro (for those who don’t know) was originally designed for the Zune.  It is kind of an evolution of the design thinking in Media Center, Microsoft’s television UI. 

Media Center UI
Metro got its inspiration from the clean and timeless graphic design you see in posters and brochures that provide information, such as the signage in a metropolitan subway station or airport.  Hence the name “Metro”.



http://www.flickr.com/photos/everydaylifemodern/page8/

The first thing to point out is that doing this sort of design is hard.  It is difficult from a visual perspective.  You have to understand a lot about typography.  Things like weight, leading, and kerning start to matter.  Empty space, negative space, balance, hinting, flow, and so on all become crucial, in that the lack of them will immediately jump out.  Most people who didn’t graduate from design school don’t even know these things exist.  Many people who did graduate don’t use them very well.   And in the starkness of Metro, there is no place for mediocre visual design to hide.

But the visual part is just the tip of the iceberg.  The real challenge is that Metro requires a deep rethinking of the information architecture itself.  You can’t just take the same information you have today and let a graphic artist “make it Metro”.  It doesn’t work that way. 

For example – here is a map of Shanghai that is showing the subway maps and stops.

Map of Shanghai
How do we make this Metro?  By doing this:

Map of Shanghai - Metro-style
Think of the process that went into making this map.  Someone had to decide that a whole bunch of stuff that is factually accurate and even relevant to users of the map just didn’t matter.  They abandoned any accurate sense of travel time, distance, actual physical proximity, and even the actual shape of the tracks.  They decided to remove all street names, landmarks, and every geographical feature except the river. 

Doing this requires someone who is not only an expert in design, but also an expert in the domain of maps and even in the city itself.   It also requires a certain boldness and confidence – confidence to go into a meeting and convince a bunch of non-designers that all of those things really don’t matter.
So now look back at that street map.  THAT is your product now.  Who on your team is going to make it into the subway map for Metro?  Is there any individual or even set of individuals who know enough about design, and enough about your product, to make that kind of radical redesign?  And if they do exist, are they empowered in your organization to actually make that change happen?  Do you have the development team that can pull it off?   And does your test/QA team have the skills and tools to check for errors with highly polished design?  For about 98% of current internal Microsoft teams, Fortune 500 IT groups, and even ISVs, the answer is NO.

Most teams are simply not prepared in any way - staffing, budget, etc. - to pull off great Metro redesigns.  And worse - Microsoft is not communicating to them that this is even necessary.  We just need to take a peek at the Windows Phone Marketplace to see the future.  For every app that does Metro nicely, there are about 1000 more that really screw it up and turn it into an ugly abomination.  And those are just phone apps, which tend to be a) small and b) created just for the phone.

For desktop apps, most of the apps are going to be large existing apps.  The developers are going to look for easy solutions.  They are going to want something as easy as moving from Win95 to Luna was.  And god help us, Microsoft is delivering it to them.  I have already been exposed to PowerPoint templates that turn your current presentation into Metro.  How?  By making all of the headings into monocolor boxes with white text in them.  If that is all people think of Metro, we are all in trouble.

Monday, January 30, 2012

Extremely Inconvenient Truths


When I was reading Freakonomics, one of the interesting and controversial arguments in the book was that the legalization of abortion led to a fall in the crime rate.  The logic was that unwanted children were more likely to wind up as criminals, and thus legalizing abortion essentially caused those unwanted babies to be aborted before they could grow up into criminals.   



Shortly after the book was published, many people brought up serious objections.  In the end, the theory was largely debunked.  

However, it lives on as a powerful meme.  At some level, for certain types of people, it just makes sense, and when a theory matches expectations so strongly, people become extremely reluctant to validate it, much less challenge it.  Just based on that, I suspect it will be a long time before this meme leaves the public consciousness.

Recently, though, I heard another argument that had some of the same feel.  I was discussing the European economy with some intellectual friends, and we were speculating on how it was possible that Germany could be so strong while other parts of Europe were so weak.  It is especially impressive given that Germany did the bail out of all bail outs when it absorbed East Germany as part of German unification in 1990 - estimated at 1.9 trillion dollars.

Germany has to deal with all of the "normal" problems of the other economies in Europe, including an aging population, relatively high unemployment, and a general entitlement culture.   Stat-wise, it seems there are no major differences...

So what is the difference?

In discussing this, there was a general sense that Germans are good workers, or have a German mindset, or German values, or a German work ethic, or German engineering, and so on.  But this just pushes the question down the road: OK, why do Germans have that mindset?



At this point, a new point came up.  In the years leading up to World War II, the Nazi leadership of the country systematically went out to kill, deport, or otherwise remove everyone they thought was inferior.  This included Jewish people, of course, but also the mentally ill, communists and socialists (and others who were not supportive of what the Nazis considered the German way), homosexuals, prostitutes, the homeless, immigrants, and "the unemployable".  It is difficult to get fixed estimates, but a safe estimate is that 10 million or more people were killed, deported, or fled.  

So was this the determining event?  Did Germany become stronger by literally pruning off the weakest members of their population?  

Not surprisingly, it is difficult to find any kind of academic studies on this stuff.  There are many articles discussing the opposite issue - how Germany's persecution caused a brain drain of physicists and rocket scientists, and how this loss might have cost them the war.  And there is general anecdotal evidence that genetic diversity leads to a more robust population.  But there is also literature discussing culling in general, and how it can be beneficial to the health of a herd.

We are the (healthy) 99%
A lot has been made of the fact that the richest 1% control almost 40% of the nation's wealth.  This in itself is not terribly surprising to me.  I mean, it is hardly news that rich people have money, and that it takes money to make money.   But for some reason, much less has been made of the similar statistic that the sickest 1% are responsible for 22% of all healthcare spending.  The sickest 5% are doing more than 50% of the spending.  With all of the talk about so-called "healthcare reform", are we going to see a similar backlash against the ultra-sick that we see against the ultra-rich? 

You might think that people will reject this simply because it should be obvious to anyone that only sick people spend money on healthcare, but I wouldn't count on that.  Logic rarely comes into play in these situations - emotion reigns supreme.  Consider the anti-corporate sentiment going on now.  I read an article recently lambasting the governor of Wisconsin for giving tax incentives to corporations to move there.  This was being characterized as a hand-out to the wealthy.  The logic that only corporations create jobs, and that jobs are the only actual source of income for the populace, just seemed completely lost on these guys.  Luckily for the seriously ill, most people are sympathetic to them in a way that they aren't with the millionaires.








Monday, January 23, 2012

Feminism with Chinese Characteristics



W has been - not exactly complaining, but talking a lot lately about male and female differences, and in particular actions or attitudes of mine that make her feel more or less feminine or make me seem more or less masculine to her.

I should state for background that W is Chinese, and she grew up in Sichuan in a family of pure intellectuals.  So her sense of proper gender roles is basically out of The Story of the Stone.   In this model, women are objects of beauty, art, and culture.  Their role is to create a harmonious environment for those around them - primarily the family.  Women are not supposed to be useful, in the sense of doing specific things.  What a Westerner might consider traditional female roles - say, cooking, cleaning, or doing laundry - are not considered feminine in this culture.  Those tasks are very low status, and are suitable for a maid or servant.
Men in this model are basically the deciders.  They are responsible for bringing in income and maintaining social status.  They are leaders, of both men and the family, and they are expected to make the decisions for the family and rally the rest of the family around them.  As in the female case, many of the traditional western roles (such as being a handyman around the house or doing yard work) are considered jobs for servants and not particularly masculine.
I on the other hand am basically a normal American male, with a protestant work ethic underlying a liberal education in so-called feminist propaganda.  So while I intellectually and emotionally absolutely believe that men and women are vastly different creatures, I am culturally programmed to think that ignoring those differences will somehow be good for both me and the female involved.
This disconnect has led to countless misunderstandings between W and me over the years.  It started when we were first going out.  When we met, I owned a house, and I would do all of the cleaning and maintenance of the home, including home improvements like removing walls or putting in Ethernet.  This was just baffling to W.  In her mind, only a pauper would act this way.  It was particularly perplexing to her because I didn't have any special talent for these things.  I took ten times as long to do the job at half the quality compared to a professional.  The idea that there was a certain satisfaction in doing the job yourself was completely alien.   I know now this pride in self-sufficiency is a uniquely American value.

In the Western world, we often think of the male/female difference along emotional lines.  In short, females do the emotional work and men do the practical work.  This is not the case for W.  In her model, emotional work is really the only work that matters.  Both males and females do mostly emotional work, and the difference is just in the nature and domain of that work.
This was a very hard concept for me to grasp, and I struggled with it many times over the years.  For example:  let's say that we are planning a vacation.  My first attempt was to plan it together.  We figure out a budget, look at places to go, and start evaluating options.  This highly practical approach went over like a lead balloon.  W felt I was trying to remove emotion from it, which was true.  For her, though, emotion is what you want to maximize, not minimize.    So next I told her that she could figure out where to go and let me know, and then I would decide how we were going to fund it.  In my mind, that was being nice to her.  But to her it felt like I was abandoning her to do all of the work while I did nothing.  Next I offered to figure it out myself and just let her know the result.  You know - stepping up as the male and getting things done.  But then she felt excluded.  There was some fun in planning this - some emotional warmth - and she wanted to be part of it.  After several more attempts, I finally figured out that what she wanted was for me to decide where we were going, and to completely own that decision.  But the decision I made should take all of the emotional concerns of the family into account, not merely the practical concerns.   And where she wanted to engage was in the conversation with me to determine what those emotional concerns might be.



One time we were watching a video in a karaoke room.  In the video (whose name I forget now), there was a sequence of a man in the water, pulling a boat along, while a girl (the singer) sat in the boat holding a parasol for shade.  Periodically the man would become exhausted from the work.  When this happened, he would sit in the boat himself and the girl would wipe his brow and fan him.  This seems to me to be an example of the Chinese ideal.  If no one is pulling, the boat goes nowhere.  But if the girl gets in the water and pulls it herself, there is no one left to cool the other down and help them relax.

Since we met, we have come to know many other couples where one person is from China and one is from a Western country.  I think many of them struggle with these issues, and very few of them realize that this difference in gender expectations is at the heart of it.