IA lessons from the mall

When I am conducting IA training, I like to deconstruct and critique the BBC Food site. It’s a fascinating exercise, illustrating lots of UX and IA principles, mostly excellent, with enough “you’re kidding” moments to enrich the learning.

We start with a quick walk through the site. It’s busy! Twelve thousand recipes, richly connecting to ingredients, chefs, shows. Really busy. Structurally busy.

But it is busy in the same sense that a mall is busy, and this gives us some useful insights.

In each case, we are building places with lots of offerings of different types, where users go to meet their specific needs. Just as we don’t go into every store in a mall, users don’t look at every page in a site like BBC Food.

Of course, there is no such thing as “users”. Instead, there are people with goals. Some visitors to a mall have a specific shopping goal, for example to get a new headset for their iPod at the Apple store. Others may want to find that perfect Christmas present for their partner. Some may just be getting out of the cold.

It’s the same with BBC Food. Some visitors may want a lobster stroganoff recipe because they enjoyed it on a recent trip; others may want to cook an ostentatious Christmas dinner for their in-laws. Some may just have got there by accident.

Designers have to help all types of user meet their goals.

Navigation comes to mind right away. Both the mall and the BBC Food site, as well as individual stores and sub-sites, have their own navigation schemes: floor plans, directories, lists, menus, signage, and headings for page sections. Navigation is pretty familiar.

But navigation by itself is always a compromise, pointing us in the right direction but seldom getting us exactly or uniquely there.

The floor plan tells us that the Apple store is at the far end of the next corridor on the right, but you still have to walk down that corridor, and scan store fronts until you see it. And once you get in the store, you have to find the iPod area. As it happens, there is a big sign for iPod (a different navigational scheme than the mall as a whole). But at the end of the day, you have to scan the packages of earbuds one by one.

This interplay of navigating and scanning is typical, but combinations vary. In the Apple-iPod-earbud example, there was a lot of navigating and a tolerably small amount of scanning. In the perfect-Christmas-present example, there will be a lot of scanning, some would say intolerable.

Intolerable as the volume of scanning may be, each individual scan is remarkably efficient. We can assess in an instant whether an unfamiliar store is likely to contain that perfect Christmas gift. We walk along, judging yes-no-maybe without breaking step. We don’t even go into some stores.

How on earth do we do this? Well, it is based on brand, signage, and numerous display cues, and on our goals. The window dresser has sent us a myriad of skillfully designed signals. We respond to the set of signals, even if we can’t recognize and articulate them individually. “Too expensive, too dull, nothing new, worth a look, she’d kill me, let’s go in,” we summarize, ruling things in and ruling things out in the blink of an eye.

Designers in the online world are the counterpart of skilled window dressers. They provide skillfully designed signals about content; the user can decide whether the content is useful without having to read the content itself. We call these signals “information scent”. And just like real-world scents, they are complex and elicit an immediate yes-no-maybe response.

Information scent is used extensively throughout BBC Food. Some content areas are handled in a straightforward and systematic way.

  • Chefs: we know an area of a page refers to a chef because there is a head and shoulders picture of the chef, captioned by the text “By” followed by a name, styled as a hyperlink.
  • Programme: we know an area of a page refers to a BBC programme because there is a slice-of-life picture of the host(s) in their show setting, captioned by the text “From” followed by a programme name, styled as a hyperlink, and followed by a date.
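The consistency of these conventions means the scent cues could in principle be generated mechanically from metadata. Here is a minimal sketch of that idea; the function names and URL paths are hypothetical, purely for illustration:

```python
def slugify(name):
    """Turn a display name into a hypothetical URL slug."""
    return name.lower().replace(" ", "-")

def chef_caption(chef_name):
    # Head-and-shoulders photo, captioned "By <name>" styled as a hyperlink
    return f'By <a href="/chefs/{slugify(chef_name)}">{chef_name}</a>'

def programme_caption(programme_name, date):
    # Slice-of-life photo of the hosts, captioned "From <programme>" plus a date
    return f'From <a href="/programmes/{slugify(programme_name)}">{programme_name}</a>, {date}'

print(chef_caption("Mary Berry"))
# By <a href="/chefs/mary-berry">Mary Berry</a>
```

The point is not the code but the consistency: because the cue is generated the same way every time, readers learn to recognize it at a glance.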

Is this scent effective? It depends. If I am an avid viewer of a show, this scent is very effective. It will be used in conjunction with my goals to come up with the yes-no-maybe assessment: “they always do good stuff”, “I can’t stand them”. If I don’t know any of the shows, then it triggers no response.

Other content areas are less consistent, and need considerable design skill to provide effective information scent. Let’s take a look at Ingredients as a great example; they range from highly recognizable vegetables to obscure spices and enigmatic bowls of stuff.

BBC Food cues up an ingredient by its name and an image of the ingredient in a kitchen setting. Here are some examples of information scent and its effectiveness in triggering responses in users.

  • The ingredients “Dover Sole” and “Sea Bass” both have a thumbnail of a fish. The thumbnails do not help me distinguish the fish; however, if ever I need a medium-sized fish, these would be candidates, and I would know to examine each ingredient in detail.
  • The ingredient “asafoetida” is completely unknown to me. It is illustrated by a small mound of yellow particles, perhaps a grain. It turns out to be a spice and I later figure out that a small mound is a common representation for a spice. The thumbnail is useful to me as I am currently looking for exotic meats, so I can rule this ingredient out.
  • The ingredient “gooseberry” is illustrated by glasses of white gooseberry fool; they look delicious. Would a thumbnail of a gooseberry have been better? More literal perhaps, but not as enticing.

Providing useful information scent is a multi-faceted, nuanced activity.

So where does all this leave us? Let’s summarize the whole exercise this way. In large complex information spaces (malls, BBC Food), we can and should help visitors by providing both navigation and information scent.

Navigation is the better known. It involves building classification schemes, taxonomies, and labelling systems, as we have learnt from the Polar Bear book and others. It is mainly left-brain work, but needs an ability to understand user goals and behaviours, and to engage users in testing your constructs.

Information scent is less mainstream. There are research articles and illustrative articles (on labelling, glosses, icons, providing previews), but there is no equivalent of the Polar Bear book.

We should introduce information scent into our conceptual framework and vocabulary, and ask how it can be incorporated into our solutions.

Until the definitive book is written, we should watch for examples of information scent and see what lessons we can bring back into our solutions.

The mall is a good place to start.


For an excellent introduction to user search behaviours and information scent, with comprehensive references, see the book “Designing The Search Experience” by Tony Russell-Rose and Tyler Tate, reviewed here: https://theinformationartichoke.com/designing-the-search-experience-book/.

For a detailed application of information scent to help users process search results, see https://theinformationartichoke.com/information-scent/.

Driving Knowledge-Worker Performance with Precision Search Results [Webcast]

Earley & Associates

This recent webcast from Earley & Associates on search-based applications caught my eye for two reasons.  First, I greatly respect this organization. It consults and coaches on taxonomies, IA, and search solutions, and I have benefited from their webcasts over the years.  Second, I was just wrapping up a series of posts on search, and this nicely illustrated many of the things I had been talking about.

The webinar starts by explaining how the nature of solutions for knowledge workers has changed over the years due to changes in technology, a more sophisticated understanding of information, and a richer understanding of knowledge worker goals and what success looks like to them.

These are elaborated in subsequent sections.  I particularly liked the material on componentized content and structured authoring. Unlike adding metadata externally to a document, structured authoring provides internal structure based on the semantics of the document, allowing pieces to be created, searched for, and retrieved at a more granular level than the entire document.

The webcast finishes with two case studies that both talk about delivering information to knowledge workers how and when they need it, through detailed analysis of the user goals, tasks, and information resources available.

These are leading edge solutions, and give those of us who are more mainstream a mouth-watering glimpse of treats in store.

Interacting With Search Results

One consistent theme in modern discussions of search is that it is more often than not part of a larger process.

Let’s start with a very general task, that of interacting with a set of search results, and consider a couple of different search modes: exploratory searches, and searches where you want to take some follow-on action.

First, exploratory searches.  As part of doing exploratory searches, users will be presented with a set of search results, which they may want to utilize in progressively richer ways:

  • does the result look useful to my exploration
  • does it contain anything useful to my exploration
  • how does it fit into what I already know.

This is definitely not a waterfall process; all aspects can be happening in parallel.

The large search engines and popular browsers provide almost no support for these beyond bookmarking.  Instead, we find ourselves cutting and pasting links into a word processing document, adding comments, opening the link and capturing snippets of information, all the time reorganizing our document as our exploration proceeds.

Imagine, however, a situation where you could interact with search results the way you interact with rows in a spreadsheet or emails in your in-box.  Some of the following features could be useful:

  • deleting or hiding search results
  • colouring or highlighting them, in all or in part
  • categorizing them
  • adding comments
  • saving your processed search results.

And if we could also expose content snippets without having to open the search result, and further select or bookmark at the snippet level, we have the beginnings of a useful tool to help us with exploratory searches.
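None of this requires exotic technology; the row-like interactions above could sit on a very simple data structure. Everything in this sketch (class names, fields, the example URLs) is invented, purely to illustrate the shape such a tool might take:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnnotatedResult:
    """A search result treated like a spreadsheet row the user can work on."""
    title: str
    url: str
    hidden: bool = False             # deleting or hiding a result
    highlight: Optional[str] = None  # colouring or highlighting, in all or in part
    categories: list = field(default_factory=list)
    comments: list = field(default_factory=list)

class ExplorationSession:
    """Holds the user's processed search results so they can be saved and reorganized."""
    def __init__(self, results):
        self.results = [AnnotatedResult(title, url) for title, url in results]

    def visible(self):
        return [r for r in self.results if not r.hidden]

    def categorize(self, url, category):
        for r in self.results:
            if r.url == url:
                r.categories.append(category)

session = ExplorationSession([
    ("Upcoming series on search", "https://example.com/a"),
    ("Information scent", "https://example.com/b"),
])
session.results[0].hidden = True                         # hide a result
session.categorize("https://example.com/b", "foraging")  # categorize another
print([r.title for r in session.visible()])              # ['Information scent']
```

A real exploration-support system would add persistence and snippet-level selection, but the underlying model stays this simple.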

There are technology components and tools to help do this, but this type of thinking is not mainstream.  If the exploration is highly patterned, for example in the case of market research or product reviews, it may well be worth building exploration-support systems with some of these capabilities.

Let’s turn now to the notion of taking action from search results. The research question we are asking is, “given a search result, what actions might the user want to take?”

In the post on information scent, we devised a pattern for structuring search results so they have a consistent format, and illustrated it with three examples, a blog post, a job posting, and an employee benefits overview.

What might somebody landing on these search results want to do?  Here are some possibilities:

  • for a blog post
    • read the post,
    • reply to the post
  • for a job posting
    • read the post
    • find out about the company
    • learn how to apply. (Most people say, duh, “apply for the job”; but applying is not usually a process that would be completed within the set of search results, unlike a quick read of a blog post.)
  • for the employee benefits overview
    • get more information
    • find out who to ask for help.

As part of the business justification of this approach, we should consider whether there are underlying repeatable types of action that we can exploit, and/or repeatable information or usage patterns that will lower our cost of delivery.  There are at least two: go somewhere useful, or do something useful.

We are seeing a number of technologies that support this type of thinking.  They differ somewhat based on the level of structure of the domain.

The corporate clients I consult for are often concerned with overall improvements in findability, in a mix of fairly structured domains.  In these settings, I can cite SharePoint 2013’s hover panel, a customizable area which appears when you hover over a search result.  This can be programmed to allow the reader to take the type of actions mentioned in the above examples.

On the world wide web at large, there are technologies that allow a webmaster to add interaction to their search results when served up by the large search engines.  A search for “interactive snippets” and “microformats” will open the door to this space, but I have not personally walked far through this door.

On the other hand, I have spent many years developing custom corporate applications for specific domains, tasks and processes. In these cases, we can integrate search into a knowledge worker’s environment so tightly that they do not even know from their surface interactions that they are using search, just that the custom application is pulling in the information that they need, when they need it, and how they need it.  Under the hood, though, the solution will utilize our deep knowledge of the users and their information needs, and deliver using metadata driven search.

For examples of this type of application, see the webinar from Earley & Associates entitled “Driving Knowledge-Worker Performance with Precision Search Results”.  This covered many of the things we have been doing; here is my summary.

Stay tuned.

The Information Artichoke Home Page | Search Table of Contents


Information Diet Examples

In the last post we spoke about information diet, the situation where a user searching for information has access to multiple classes of resources and strategically chooses which one(s) to interact with based on estimations of cost/benefit, or profitability.

This concept encourages us to think about information resources not just as this web page or that document, but as sets of delivered information differing in how well they support certain dimensions of profitability that users might adopt based on their current situation.

What dimensions of profitability might we consider as we explore this concept?  User research and introspection might indicate the following candidates:

  • time factors: is there pressure to find and consume the information, or is slow and thoughtful deliberation and evaluation of information encouraged
  • where needed: is the information loosely or tightly coupled to where the need for the information arose or where the information might be applied
  • content factors: are we looking for broad brush strokes or exhaustive information
  • interface factors: does the user prefer to consume the content on a mobile device or large engineering workstation
  • preferred or habitual access methods: all things being equal, does the user prefer documents, face-to-face with another person, videos, etc.

Let’s take the example of helping users learn about our company’s corporate web conferencing program. We might imagine a cluster of related information resources for users, including:

  • a survival guide, for people under pressure to set up their first web conference, say in the middle of a meeting. This could be made available in hard copy as a wallet reference, or for delivery on a smartphone
  • a features guide, for someone tasked with finding a way of enhancing remote collaboration
  • a training guide, intermediate in coverage between the survival and features guide, with content curated to align with common usage scenarios for this organization
  • video training, definitely not suited as a survival guide.

It makes sense to cross-link them so that users can choose a more profitable resource if the one they first find is not quite right for them.

These are presentation-level variants. Others might involve manipulating the density of information, catering to holistic or serial users, or providing different organizations of the same information, for example depth- or breadth-first.

But how can we create all these variants in a way that is flexible and maintainable?  One key concept is to deconstruct content into a pool of useful fragments that can be recombined in multiple useful ways.

For example, a pool of pieces for our web conferencing material might include:

  • how-to fragments: a structured set of how-to fragments, with short explanation, more detailed explanation, screen shots, etc.
  • features fragments: a structured set of features fragments, describing the features and associated user scenarios.

Suitably constructed, these allow the creation of the different presentations mentioned above, and more besides.
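To make the recombination idea concrete, here is a toy sketch of a fragment pool and an assembly function. The fragment fields, texts, and guide definitions are all invented for illustration; a real system would use a structured authoring toolchain rather than a Python list:

```python
# A hypothetical pool of content fragments for the web conferencing material.
fragments = [
    {"type": "how-to",  "detail": "short", "text": "Click New Meeting, then Invite."},
    {"type": "how-to",  "detail": "long",  "text": "Full setup walkthrough with screen shots."},
    {"type": "feature", "detail": "short", "text": "Screen sharing for remote walkthroughs."},
    {"type": "feature", "detail": "long",  "text": "Screen sharing: options and usage scenarios."},
]

def assemble(types, detail):
    """Recombine fragments from the pool into a presentation-level variant."""
    return [f["text"] for f in fragments if f["type"] in types and f["detail"] == detail]

survival_guide = assemble({"how-to"}, "short")             # terse, for people under pressure
features_guide = assemble({"feature"}, "long")             # fuller coverage for evaluators
training_guide = assemble({"how-to", "feature"}, "short")  # intermediate, curated mix
print(survival_guide)  # ['Click New Meeting, then Invite.']
```

The same pool yields the survival, features, and training variants; only the selection rules differ, which is what keeps the whole family maintainable.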

The power of this approach is only hinted at by the web conferencing example, and the motivation is broader than information diet.  For a deeper understanding of the concepts and some of the technologies involved, I recommend the book “Content Everywhere: Strategy and Structure for Future-Ready Content” by Sara Wachter-Boettcher, which I have reviewed.

That’s all for now on the Information Foraging model.  We have discussed it mainly in the context of what is achievable in typical corporate technology and platform settings.  But it has also been applied in specialized government and other settings, informing the design of specialized user interfaces.

Stay tuned.


Information Diet

Information foraging theory gives us fruitful insights into how users interact with information, and we have looked at the concepts of information scent and information patches. Another concept from information foraging theory is that of information diet, which is today’s subject.

The popular version of the concept is that people looking for information have a variety of options for meeting their information needs, and that they choose which ones to use based on some notion of cost/benefit (profitability), guided by information scent. Again, this concept has biological origins, in which animals choose food based on nutritional, energetic and even medicinal values, trying to minimize the costs and maximize value.

The authors of the concept take this further, saying that there is a strategic component: the user will build a diet of the most profitable resource classes available, then progressively less profitable ones, but will stop including further classes when their profitability becomes quantifiably too low.

The user’s notion of profitability is highly situational and unknowable to an individual web page or document, but the diet model encourages us to think about the whole set of resources we could offer to meet a user’s information needs, with various characteristics, rather than just focussing on one resource at a time.
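The stopping rule can be sketched in a few lines: rank resource classes by profitability (value gained per unit cost) and exclude any class that falls below some situational threshold. The resource names and numbers here are invented, and the real model is richer (it accounts for search and encounter costs too), so treat this as a caricature:

```python
# (name, expected value, expected cost) for some hypothetical resource classes
resources = [
    ("practitioner blogs", 5.0, 1.0),
    ("books",              8.0, 2.0),
    ("research papers",    9.0, 4.5),
    ("conferences",        6.0, 10.0),
]

def choose_diet(resources, threshold=1.0):
    """Keep resource classes whose profitability (value/cost) clears the threshold,
    most profitable first; the rest are excluded from the diet entirely."""
    ranked = sorted(resources, key=lambda r: r[1] / r[2], reverse=True)
    return [name for name, value, cost in ranked if value / cost >= threshold]

print(choose_diet(resources))
# ['practitioner blogs', 'books', 'research papers']  (conferences fall below the threshold)
```

Raising the threshold, as a user under time pressure would, shrinks the diet to only the richest classes; that is the "quantifiably too low" cut-off in miniature.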

Let’s take a real world example, namely me learning about information foraging. There was an abundance of resources and no way that I could read them all. I had the options of reading blog posts, original papers, or books by leading researchers, watching a YouTube video, or attending international conferences, to name a few.

Which did I choose, and why? The following extended example might seem a bit conversational, but please regard it as an example of user research that has already been done for you.

Initially, I was looking for quality information or resources, a high level understanding of the information space, and an early assessment of whether I could use these in my work. I also wanted to pursue my investigation right away, and subsequently at times of my convenience, so I chose to start with web resources.

I trust my ability to skim, and didn’t want to spend too much time, so I ruled out videos (can’t skim).

I ruled out SlideShare presentations (at this time) which often lack the presenter’s commentary and make me work too hard.

I searched for “information foraging” and the search results had a good information scent, specifically the presence of some of the leading authorities in the user experience world. Good, this is not just an academic discipline. I looked for practitioner blogs talking about applications of the theory, and found several.  Their presence indicated I had found a likely patch, and I decided to press on.

My focus was now quite different. As a practising consultant, I was looking for two things:

  • are there examples or case studies I can utilize when working with clients, and
  • are there general principles that I can apply in my analysis and design work, and teach to others.

I was prepared to spend more time, effort, and money to find out.

I bought and read parts of Designing The Search Experience: The Information Architecture of Discovery by Tony Russell-Rose & Tyler Tate. This gave me lots of concepts, examples, and stimulation. It didn’t provide guiding principles quite the way that I like them. Great, there’s room for me to make a contribution.

I read several practitioner blogs, and very soon ran into diminishing returns. There was a recurring pattern of naming the ingredients of the theory, providing a biological example, and giving some valid but generally the same examples, such as the importance of good names for links. A few blogs drilled a bit deeper and I bookmarked them, but I essentially ruled blogs out as a resource class.

SlideShare presentations did become an eligible and productive resource class. Based on this work, I felt there was a lot more to be learned and applied, and was prepared to work harder and longer to find out.

My focus became different again. This time, I was looking for a deep understanding of information foraging, with a view to developing some guiding principles for my solutions design. I selected the 1999 paper on Information Foraging by Pirolli and Card, and read chunks of it, some many times. This paper has a very high density of useful concepts, and I spent a lot of time making sense of what I was reading, primarily from the perspective of how I apply it in my work.

Now, I’m at the point where I’m integrating these concepts into my practice, and am not searching for additional resources. I have been left with strong go-forward scents. Pirolli has written a book on Information Foraging Theory, and has published a paper on social information foraging (search for “pirolli social information foraging”). The latter is especially high-scent for me, as I do a lot of work on collaboration.

The key things that this example points out are:

  • users think about resources in classes
  • users utilize these classes based on notions of profitability
  • there are some classes that will be highly used
  • there are some classes that might never be used
  • these notions of profitability are individual and situational
  • the user’s notions of profitability are heuristic and will be incorrect in some cases.

These concepts apply not just to online activities. As a segue into an upcoming discussion of cross-channel information foraging, you might want to look at the following exercise.

Exercise for the reader: describe the following story from the point of view of information foraging.

“Information Architect John is on a phone call with a colleague Sally who wants help redesigning their part of the intranet. Half way through the call, when they want to walk through the current site, they realize that John doesn’t have access. They decide to use their corporate web conferencing solution. Neither John nor Sally knows how to set up the call. John does a search on the intranet. A lot of the search results are past meeting notes with conferencing details. There is some self-help material, but it is a ten-minute video. John rushes down the corridor and bursts in on a colleague, Nancy, asking if she can help. Nancy can and does, real time, saving the day.”

Think about the information foraging perspective, and any thoughts that arise for solutions design.

Stay tuned.



Information Patches

Last time we explored the notion of information scent, and in this post we will talk about information patches.  Both are thought-provoking ingredients of the Information Foraging model of search behaviour, which says that the way we look for information in any complex situation has characteristics in common with the way an animal forages for food.

In the biological foraging model, a bear foraging for food would search for a berry patch, eat there till the point of diminishing returns, and then look for another patch, but not too soon.  By diminishing returns, we mean that it involves progressively more work to get a mouthful of berries once the patch has been picked over.

Is it plausible that we forage for information this way? Sometimes, yes. When I was learning what’s new in SharePoint 2013 information architecture, distinct topic areas revealed themselves, some examples being content types, hover panels, cross-site publishing, etc. I foraged in each of these until content started to look familiar, which was my point of diminishing returns.  No need to read the same thing a third time if there is nothing new.

Now anyone reading this will challenge the comparison between biological and information foraging, as I did, but I am looking for ideas that are useful.  One big idea for me was the notion that information can be patchy, with implications for defining patches, making them easy to find, and making it easy to search within them. That’s what we’ll explore in this post.

Let’s start with my paperwork. I have a smallish amount on my desk that I use constantly, a bookcase at hand with frequently used books, and lots of books on shelves in the family room, with groups for science fiction and travel, the remainder being a jumble.  This organization is designed to reduce the time to find material.  It doesn’t reduce the time to zero.  I still have to shuffle papers on my desk, or scan the travel books for the one on Istanbul.

So there are two aspects to searching when patches are involved:

  • finding a suitable patch
  • searching within a patch.

If we had woken up from a twenty-year sleep and hit the web for the first time, searching for a patch on the internet would be hit-and-miss.  After a while, though, we learn authoritative sources, and these become patches, like BBC Food or Microsoft’s TechNet.

Of course, this naming of patches is nothing new.  We have the Periodicals section in libraries, Chinatown and named neighbourhoods in cities, and the spice, carpet, and gold sections of bazaars.  It is just something under-exploited in web and especially intranet settings. In an intranet setting for example, we typically identify patches through broad navigational labels such as HR, Social Club, etc., but there are many specialized patches we could create, such as microsites and dashboards.

Ecommerce sites have highly structured patches.  A carefully designed faceted structure gets us to the patch of all DVD players made by a certain manufacturer and in a certain price range.  Then we have to browse within the patch.

There is not a rigorous definition of information patch.  Rather it is some level of thematic cohesion. So the following would be considered patches:

  • BBC Food
  • an At-A-Glance page of links for New Employees on an intranet
  • the sales and marketing portal in a corporate intranet.

Patches can contain sub-patches but at some level we reach pages with no information structure implied, for example Calgary weather.

What about a search results page?  Search results can be considered a dynamically generated patch. Those from the big search engines have thousands of hits for even the most unlikely queries, and do not usually indicate their patch structure.  In an intranet setting, where we can control metadata, we can do better and introduce search refiners to filter down to useful (by design) patches.

Pushing this to the extreme, we can omit the search results themselves and just list the patches and how many results there are in each patch.  So if an executive queried “SharePoint” in such a design, the search could identify that the company had done fifty SharePoint projects, there were eighteen employees skilled in SharePoint, we had five documents on corporate licensing, and eighty pieces of content to do with training.  In other words, the search concentrates on the structure of the information rather than detailed instances.
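A minimal sketch of that structural summary, assuming each hit carries a patch label in its metadata (the hits and patch names here are invented):

```python
from collections import Counter

# Hypothetical search hits, each tagged with the patch it belongs to.
hits = [
    {"title": "Acme rollout plan",     "patch": "projects"},
    {"title": "Jane Doe, consultant",  "patch": "employees"},
    {"title": "Enterprise agreement",  "patch": "licensing"},
    {"title": "Intro to site columns", "patch": "training"},
    {"title": "Beta rollout review",   "patch": "projects"},
]

def patch_summary(hits):
    """Report the structure of the results (counts per patch), not the instances."""
    return Counter(hit["patch"] for hit in hits)

print(patch_summary(hits))  # projects appears twice, the other patches once each
```

The executive sees the shape of the organization's SharePoint footprint at a glance, and can drill into whichever patch matters to them.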

We can see that our main job as designers so far is creating a rich patch structure and helping our clients find the most useful ones.

Once we have found a patch, how do we search within it?  There are several approaches, depending on the content:

  • scan it manually, just like I do with my travel books
  • utilize any internal pathfinding that the designer has provided
  • look for significant scent identified and implemented by the designer.  In the eCommerce world, this might be product images, ratings, reviews, or specifications.

Our main job as designers here is providing scent and/or pathfinding appropriate to searching within a patch rather than finding a patch. Two distinct design exercises!

These concepts from Information Foraging don’t exhaust the range of concepts introduced.  The biggest area remaining is how we as information foragers actually operate, making decisions about how long to stay in a patch and what information diet to subscribe to.  My approach to the subject when I first encountered it was very much foraging, looking for aspects of the theory that could inform my analysis and design, and it is these aspects that I have shared.

Key names in this field are Peter Pirolli and Stuart K. Card.  They have written many papers; one that I like is called Information Foraging, available at http://act-r.psy.cmu.edu/papers/280/uir-1999-05-pirolli.pdf.

This is no mere blog post, but a seventy-odd page technical report.  The density of new concepts is staggering, some fruitful ones just thrown out offhandedly. Be warned, different parts of the document have different information scents, with some narrative, some mathematical modelling, and some simulation.  But the first twenty pages are accessible to everyone, and the key point of the modelling section is Figure 5 (Charnov’s model).

Stay tuned.



Information Scent

From the examples in the previous post in the Search series, we have seen that searching is a more constructive activity than punching in search terms and consuming the result.  One aspect of this is the ability of the user to quickly scan information looking for characteristics which they can employ to assess relevance.  These characteristics are collectively known as “information scent”.

The definition “those characteristics of information which the user can employ to determine relevance” might sound a bit too vague to be useful, but in reality it invites us to think broadly about what those characteristics might be, and how as designers we might apply them to the information or interfaces we are involved with. It also invites us to consider how the user interacts with these characteristics.

Here’s how information scent might apply to a single search result. When I search for my recent blog posts using the query “information artichoke search”, the big search engines will include a result like this:

Upcoming series on search | The Information Artichoke
Sep 27, 2013 – I’ve had an interesting few months advancing my understanding of search from combined information architecture and user experience …

Deconstructing this, I make the following observations:

  • The title “Upcoming series on search” – not bad scent, but I’m looking for the series itself, not an announcement.  A user, however, might follow the link to see whether the page had links to the series.
  • This thought reminds me that I need a table of contents page for the Search series. My other series on Modern Information Architecture has a table of contents and I wonder how it shows up.  Not well – its table of contents has the title “Table of Contents”.  Not quite what I intended; perhaps “Modern IA Course Table of Contents” would look better here.  This is a good example of content that works well in a specific context (i.e. a richly linked blog) not working so well when some context is removed (i.e. in a set of search results).
  • My blog name “The Information Artichoke” appears in the title, although I didn’t ask the search provider to do this.  Depending on the reader, this might be considered noise or the name of a trusted authority, and hence positive scent.
  • I suspect the search provider knows that this is a blog post.
  • The date appears in this search result, but not in all of them.
  • The description “I’ve had an interesting few months … ” is nicely conversational within a blog, but not high scent here; a short description “We will be exploring such topics as information foraging, information scent, and sensemaking” would be better for a quick assessment.

So do these observations help anyone?  Personally, I suspect they could be applied to corporate settings where I am trying to improve enterprise search through the application of metadata and smarter formatting of search results.  I decide to give it a try.

I start by abstracting what I have found and make notes to myself, along these lines:

  • A blog is an example of a type of content.  Others might be News, or Reference Material.  Coming up with content types to the finest level of granularity is hard work, but some high level typology might be helpful to categorize or refine search results. We might find that not all content needs to get tagged with a content type.
  • The name of my blog, The Information Artichoke, might be considered a source or a publisher or origin.  Other content might be tagged as coming from a particular department, or person, or from outside the organization altogether.
  • The date might be especially valuable for blog postings, news articles, and job postings.
  • The meaning of the date value could be spelled out.  For a blog, it could be the date posted; for job postings it could be the date that the job was posted or the date the posting expires.  I have an opinion, certainly, but user research will resolve this.
  • Is there any significance to the date value?  We often see a “New” indicator on fresh content, and could imagine other treatments such as red flags on action items.
  • The description should be constructed to have high information scent.  Scraping the first elements of the content does not give good results.  For example, the search result descriptions for some of my blog posts include the names of the blog navigation elements; spreadsheets are even worse, showing the contents of a number of cells: 3.14, 1.41, 999.

We can desk check our ideas by constructing some sample search results:

BLOG POST Upcoming series on search | The Information Artichoke
Posted Sep 27th, 2013
We will be exploring such topics as information foraging, information scent, and sensemaking.

JOB POSTING Senior Information Architect | HR
Closes Nov 5th, 2013 **NEW**
Required to formulate content strategy for sales and marketing

Employee Benefits Overview | HR
Last Updated August 27, 2013
Information on health, insurance, personal development allowance, and …
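
To see how those formatting notes might hang together, here is a minimal sketch in Python of a result formatter along the lines of the samples above. The function name, field names, and the 14-day freshness window are all my own invented assumptions, not a description of any actual search product.

```python
from datetime import date, timedelta

def format_result(title, source, description, content_type=None,
                  date_label=None, item_date=None,
                  today=None, new_within_days=14):
    """Render one search result using the metadata notes above.
    All field names here are hypothetical, chosen for illustration."""
    lines = []
    # Content type rendered as an uppercase prefix, e.g. BLOG POST
    prefix = f"{content_type.upper()} " if content_type else ""
    lines.append(f"{prefix}{title} | {source}")
    if date_label and item_date:
        today = today or date.today()
        # "New" indicator for dates close to today (just posted, or closing soon)
        is_new = abs(today - item_date) <= timedelta(days=new_within_days)
        date_line = f"{date_label} {item_date.strftime('%b %d, %Y')}"
        lines.append(date_line + (" **NEW**" if is_new else ""))
    # The description is hand-constructed for scent, not scraped from the page
    lines.append(description)
    return "\n".join(lines)

print(format_result(
    "Upcoming series on search", "The Information Artichoke",
    "We will be exploring such topics as information foraging, "
    "information scent, and sensemaking.",
    content_type="Blog post", date_label="Posted",
    item_date=date(2013, 9, 27), today=date(2013, 10, 1)))
```

The point is not the code but the separation of concerns: content type, origin, labelled date, freshness treatment, and curated description are each independent metadata decisions.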

To proceed, we can stress test the concept on paper, explore business rules for any metadata needed, and talk to our UX colleagues to do usability testing on some of these ideas.  They can also suggest some visual treatments and iconography.

Information scent does not apply just to search results, but can apply to documents, pages, links and navigation, among others.

When it comes to links, the simplest implementation is a text link.  If the name does not provide high enough scent, then we can add a short description and/or a picture to increase the scent.  Another approach is to give the user a chance to lean in for a closer sniff, by providing alt text, or revealing more information when the cursor hovers over the link.
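
As a sketch of that “closer sniff” idea, the humble HTML title attribute is one way to reveal extra scent on hover in most browsers; a small helper that builds such a link might look like this (the function and argument names are my own invention):

```python
import html

def scented_link(url, text, sniff=None):
    """Build an HTML link whose optional `sniff` text appears as a
    browser tooltip on hover, letting the user take a closer sniff
    before committing to the click."""
    title_attr = f' title="{html.escape(sniff, quote=True)}"' if sniff else ""
    return (f'<a href="{html.escape(url, quote=True)}"{title_attr}>'
            f'{html.escape(text)}</a>')

print(scented_link(
    "/search-series", "Upcoming series on search",
    sniff="Information foraging, information scent, and sensemaking"))
```

Richer hover treatments (preview cards, thumbnails) follow the same pattern: the default presentation stays compact, and the extra scent is there for users who lean in.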

Here’s an example involving documents. I enjoy reading lecture notes in physics and have noticed distinctive aspects of scent that affect my assessment and seem reliable.  If the document is all text or all mathematics, it has a low information scent for me.  If the math is too hard for me, I rule out the document immediately.  Diagrams provide a positive scent.  Interestingly, I can make this assessment almost as soon as the document loads in the browser.

I have also noticed characteristics of lecture notes that reflect a distinct bias on my part.  If the format is PowerPoint slides, I notice myself downgrading the information scent, but am prepared to persist if the authority is high.  If the format is scanned handwritten lecture notes, I rule the document out without further investigation.

This last case is interesting.  Most literature on information scent seems to focus on the user selecting stuff to pursue, but this example suggests that information scent is also useful for letting the user rule stuff out.  Likewise, in our previous examples, somebody not interested in blogs could easily bypass them in a search results page, if they were easily identifiable.  This is quite consistent with the biological origins of the concept, in how animals use scent to identify good foraging opportunities. It also challenges the notion that we want to make content sticky.  Maybe “traversable” would be a better notion.

To summarize, information scent consists of characteristics of information that we can use to make decisions about the value of that information, without having to read the information itself.

I’m not sure about the boundaries of the concept.  People are influenced by branding, layout, tone, and a host of other things that we can manipulate.  Some people are influenced by US vs. British spellings. Some people discount content with incorrect semi-colon usage.  Does this make all of these characteristics part of information scent or something else?

Looking back to the biological origins, we know that bees are attracted to flowers by their shape and colour as well as scent.  So maybe we could factor out visual elements. You probably didn’t know that bees can also detect the shape of cells comprising a leaf, and hence whether the leaf will afford their feet a good grip. http://www.botanic.cam.ac.uk/Botanic/Page.aspx?p=27&ix=2847&pi..

Personally, I’m not going to attempt to define the boundaries of information scent.  I will just adopt it as a useful concept to have in my toolkit, and apply where it might add value.

Stay tuned.



Deconstructing Some Searches

In the last post in this series, we speculated that search goals might be a useful characteristic to explore.  Let’s consider the interplay between search goals and search results in the four reference search tasks we introduced last time, namely:

  • what will the weather be in Calgary this weekend?
  • how can I fix the <model> laser printer in my home office, which is blinking with a certain sequence of lights?
  • what do I have to do to make my concrete path less slippery in winter?
  • how can I get up to speed on SharePoint 2013?

These were real searches I performed.  Here’s what happened.

Calgary Weather

The weather example is classical information retrieval.  My goal is specific, and I know through experience that I can fulfil it with the simple query “Calgary weather”.  If this recurring information is important to me, I can increase its prominence by delivering it through a persistent widget or app.

Laser Printer

In this case, the goal is explicit (“diagnose the problem, and fix it myself if possible”).  I don’t have a guaranteed search query, but have fixed hardware before and have some ideas. I start with “troubleshoot <model> laser printer”. By the way, if I had previously looked unsuccessfully in my office for the printer manual, I might have started with “download manual <model> laser printer”.  I do a quick scan of the search results to see if I am on the right track.  I am not.  [The ability to do a quick scan introduces the idea of information scent, which we will talk about in an upcoming post.] The high frequency terms “laser” and “printer” overwhelm “troubleshoot”, stuffing the results with printer sales and printer reviews.  I try “blinking lights <model>”, and get results that are recognizably useful, or in other words, have a high information scent.  One of them helps me solve my problem.

Exercise for the reader: How would things be different if I ran a help desk and frequently had to help fix a variety of printers?

Reflecting on this as an information architect, a couple of things cross my mind.

  • I start to see that the world of laser printers has a little ontology (specifications, sales, use, problems, ratings, support), and wonder how I could exploit this.
  • And I wonder how I can learn how to manipulate information scent so that page viewers can assess the value of the page quickly.

Concrete Path

The slippery concrete path example was more problematic.  I am clueless when it comes to DIY, so my goal is not very explicit (“get something to make the concrete path less slippery”), and I expect a bit of fumbling around.  I am not disappointed.  I make a big misstep in the query by asking for “concrete surfacing”.  Adding “exterior” doesn’t help. I still get lots of product sites, talking about things I don’t understand.

I have an ah-ha moment, and enter “concrete path slippery”.  Good call: I find lots of question-answering forums addressing my problem.  I still have work to do, but I know I’m in the right sort of place. Now, I have to drill into a variety of options in true evaluation and decision-making mode.  In doing so, I learn about evaluation criteria such as appearance, permanence, cost, etc.

Reflecting on this as an information architect, a couple of things cross my mind.

  • I see a distinction between product sites and question-answering sites, and wonder if there are broad categories of site that I can exploit, perhaps as metadata in an enterprise search solution.
  • I also notice that I started to make notes on a piece of paper.

SharePoint 2013

The SharePoint 2013 example raised some other points.  I understand SharePoint 2010 very well when it comes to designing knowledge-rich solutions, have formerly done a lot of development, and avoid infrastructure. Nevertheless, I chose an initial query “what’s new sharepoint 2013” to get an overview of the new stuff.

SharePoint 2013 is huge, and I realise that my goal is ill-formed.  Over a period of time, I refine my goal, first to “how can I get up to speed on SharePoint 2013 search and information architecture?”, and finally “what do I need to become a consulting information architect in SharePoint 2013?”.  This refinement of goal came about as a result of interacting with the whole information space, and turned out to be instrumental in screening content in or out.

This latest goal did not perform well as a query.  The query “how can I become a consulting information architect in SharePoint 2013” pulled in SharePoint consulting groups as well as SharePoint 2013 information architecture.  I needed a strategy for formulating my query.  My survey of the What’s New information had identified a list of focus areas that I felt I needed to drill into; I used the names of these areas as the basis for a deep dive.

Reflecting on this as an information architect, I noticed

  • I worked in both survey mode and deep dive mode. I wondered how content providers support these different modes through different IA or UX patterns.
  • There was quite an ecosystem of information sources
    • large, highly structured, richly interconnected sites from Microsoft
    • blogs providing practitioner experience, tips and tricks
    • training providers.
  • Some sites had curated overview content.  This takes work and categorization. It works well if the overview gets the audience right.  Microsoft had categories for the reader to select their role as Manager, Developer, etc.  I’ve done this sort of curation for New Employee Quick Reference microsites, and wonder whether there are general principles for when and how to do this.
  • My “deep dives” were not infinitely deep.  Once I understood an area to a certain level, I stopped looking at that area and moved to a new one [information foraging].
  • I built my own information structures, ranging from annotated lists of links to scrappy diagrams to documents; some of these became permanent summaries, others were intermediate stepping stones to understanding, and got trashed when they had served their purpose.

All in all, I found this deconstruction of the four searches useful.  It started to cement my understanding of the intricacies of search, and it gave me enough thoughts for solutions design that I am encouraged to continue.

Stay tuned.



Designing The Search Experience [Book]

Designing The Search Experience: The Information Architecture of Discovery
Tony Russell-Rose & Tyler Tate

This is an excellent and stimulating book!  I recommend it to anyone who wants a deeper understanding of how people search, and who strives to exploit this understanding in their solutions design.

The first part of the book, a Framework for Search and Discovery, is a well-referenced presentation of some behavioural attributes of information seekers, and different ways that they interact with information. It introduces concepts such as information scent, information foraging, and sensemaking, and follows this with sections on context and search modes.

This section has been consciousness-raising.  I have kept the framework in mind as I observed myself interacting with a variety of search tools; it has proven valuable in articulating my own behaviours and identifying how well (or not) my search tools support (or could support) these behaviours.

The second part of the book, Design Solutions, provides a wealth of attractively presented examples of user interfaces showing how the insights from the first part have been applied.  Some of the examples expose design decisions that we see every day in our experience with the large search engines. Others describe search interfaces that push(ed) the envelope in different directions.  Some of these no longer exist in the form presented; some no longer exist at all. But the ongoing struggle for improved search experiences is well represented.

So is this book for you?  If you’re looking for a paint-by-numbers book, I’m afraid not. The struggle for improved search experiences is the theme of the entire book, and the authors are thoughtful practitioners and part of this ongoing struggle. If you share some of those characteristics, need to contribute in this space, and are looking for a quick journey to the leading edge, you will benefit greatly from this book.

By the way, from a coverage point of view, most of the examples come from search engines or consumer facing sites.  I personally work a lot in creating solutions for knowledge workers within an organization. The ideas presented in this book still apply.  In fact, with the ability to access our users, and the opportunity to define information structures tailored to their goals, I suspect we can meet their needs very convincingly.

Nice job!

There is a third part to the book on Cross-Channel Information Architecture which I haven’t read yet.


Different Faces of Search

I am confident that, over the last few weeks, we have all made numerous on-line searches.  For most of us, this is an unexamined activity. But as an information architect with a keen interest in user experience, I have become aware that I use several different search modes, and hope that by identifying useful distinctions, I can help build more useful search solutions.

Here are some of the search tasks that I have performed:

  • what will the weather be in Calgary this weekend?
  • how can I fix the laser printer in my home office, which is blinking with a certain sequence of lights?
  • what do I have to do to make my concrete path less slippery in winter?
  • how can I get up to speed on SharePoint 2013 Information Architecture?

From these, it is clear that there are different types of search task. But how do we describe the differences?  Some commentators talk about search modes, or seek to provide taxonomies of search activity.  While these recognize the variety inherent in the search task, and our roles as active participants as our query activities move us closer to our goals (or not), I find them unfulfilling in some ways.

Let me give a couple of examples. One popular framework has top levels Lookup, Learn, and Investigate, which include activities such as Verification, Comparison, Analysis, Discovery, and Synthesis. Another talks about Moves, Stratagems, Tactics, and Strategies.

Why didn’t I warm to these? When working as a solutions consultant, I look for practical distinctions that I can use when talking to clients and thinking about solutions. I’ve nailed this in a number of domains.  In document management, I can differentiate between static reference materials (such as policies), and operational documents (such as meeting minutes), and use these to focus discussion and shape solutions.  But the frameworks mentioned above didn’t do the same thing for search.  First, I would find it very awkward to talk to a client in such abstract terms.  Second, the frameworks are too broad to help me shape solutions.  And finally, I simply cannot apply them.  When I tried to apply them to my sample searches, it was a struggle, inconclusive and not insightful.

The problem is perhaps a mismatch between what I need and what the researchers were trying to provide.  I was looking for entry points for solutions design; they were trying to provide a cognitive-behavioural framework, in an information-free context.

In my world, user research provides definite context.  Stimulated by other reading, and deconstructing my search examples, I came up with the following dimensions of the search task that I felt might be helpful.

  • how well is the goal defined
  • are the information resources and authorities well known
  • do we know what success looks like
  • can we tell when we are making progress toward our goal
  • are we time limited
  • does the goal result in the creation of an information artefact, or doing something in the real world, or retrieving a data value
  • do we expect the goal will be accomplished easily and directly, or will there be fumbling around.
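
To make these dimensions concrete for my own research notes, each search task could be captured as a simple record; the class, field names, and value vocabularies below are my own invention, not an established model:

```python
from dataclasses import dataclass

@dataclass
class SearchTaskProfile:
    """One search task scored along the dimensions listed above."""
    goal_well_defined: bool      # how well is the goal defined?
    resources_known: bool        # are the resources and authorities well known?
    success_recognizable: bool   # do we know what success looks like?
    progress_observable: bool    # can we tell when we are making progress?
    time_limited: bool           # are we time limited?
    outcome: str                 # "data value", "real-world action", or "artefact"
    expected_path: str           # "direct" or "fumbling around"

# The Calgary weather search, profiled along these dimensions
weather = SearchTaskProfile(
    goal_well_defined=True, resources_known=True,
    success_recognizable=True, progress_observable=True,
    time_limited=False, outcome="data value", expected_path="direct")
```

Filling in a record like this for each sample search is exactly the kind of lightweight, facet-by-facet exercise that the abstract frameworks above did not support.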

This is starting to feel better for me for a number of reasons.

  • I was able to create facets that I can shape and refine and explore, rather than being forced into wholesale design of a taxonomy or framework (which of course are two different ways that IAs approach information)
  • I can look at my sample searches, and easily see how they line up along these dimensions
  • I can glimpse some ways in which I can help end users and information providers improve their worlds.

In the next few posts, I will explore some of these dimensions.  Comments welcome.  Stay tuned.


For the Lookup, Learn, Investigate framework and critiques, search online for “Marchionini exploratory search”, especially  http://www.inf.unibz.it/~ricci/ISR/papers/p41-marchionini.pdf

For the Moves, Stratagems, Tactics, and Strategies, search online for “Marcia Bates Moves, Stratagems, Tactics, and Strategies”, especially http://citeseerx.ist.psu.edu/viewdoc/download?doi=

For descriptions of these and others, see the book “Designing The Search Experience” by Tony Russell-Rose and Tyler Tate.  See the book review for Designing The Search Experience in this blog.
