Search Results and Representations

Last time we spoke about letting users interact with search results, either by letting them go somewhere useful or do something useful.

But often search is part of a larger exploratory or investigative activity that is long-lived. In these situations, one consideration that helps me think about systems design is the relationship between REPRESENTATION and RESULTS.  Informally, we might say:

  • Representation: the structure of the set of information I am processing with the help of search
  • Results: the pieces of information that I am dealing with through my discovery process, including search.

We can think about two types of relationship between these:

  • the representation is well-known, and results are sought that fit into it
  • the representation is not well-known, but evolves from working with results.

As an example of the first case, consider technology watchers researching a new technology and analysing its likely impact.  They probably have evolved a standard approach to this, and access the same resources and authorities over and over, before applying their particular brand of analysis.  Those of us who like to bake regularities into software love this kind of situation.  Regarding the task as a high-level pattern match, we might be able to partially automate this with saved searches, auto-categorize the results in a pre-structured repository, and allow the researchers to add annotations.  And if we find that the researchers present their output (Market Analyst Reports) in a different structure, we could consider viewing their annotations by their research framework or their output framework.
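
As a sketch of the partial automation just described, results from a saved search could be auto-categorized into a pre-structured repository using simple keyword rules. The category names and trigger words below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    snippet: str

# Pre-structured research framework: category -> keyword triggers (illustrative)
CATEGORY_RULES = {
    "Vendors":   ["vendor", "acquisition", "partnership"],
    "Standards": ["standard", "specification", "rfc"],
    "Adoption":  ["case study", "deployment", "rollout"],
}

def categorize(result):
    """Assign a result to every category whose trigger words it mentions."""
    text = (result.title + " " + result.snippet).lower()
    return [cat for cat, words in CATEGORY_RULES.items()
            if any(w in text for w in words)] or ["Uncategorized"]

# File incoming saved-search results into the repository's structure
repository = {}
for r in [Result("New vendor partnership announced", "..."),
          Result("Draft RFC published", "covers the proposed standard")]:
    for cat in categorize(r):
        repository.setdefault(cat, []).append(r)
```

Researcher annotations could then hang off the stored results, and the same pool could be re-projected by research framework or output framework.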

There are some oversimplifications here.  The technology watcher’s approach is actually less rigid than the example indicates; in particular, it will evolve over time as their domain changes.  I found this myself when I was trying to get up to speed on SharePoint 2013 Information Architecture, and deliberately looked for differences via What’s New, Deprecated Features, etc. So in addition to populating a framework with facts, our solutions should help us shape the framework itself.  This assessment may be informed by experience, changes in the number of hits in a category, “buzz” about what’s hot and what’s not, or how effective our analysis is relative to our competitors.

As an example of the second case, consider the detective work as portrayed in many TV shows.  The detective is trying to find out “whodunnit”, with a set of facts that grows from someone being murdered all the way to the crime being solved.  The detectives stand around a whiteboard with boxes and arrows showing parties involved, their relationships, and timelines. In addition to people and places, the detectives use investigative constructs that have evolved over the years, such as “alibi”, “means”, “motive”, “opportunity”, as well as trust attributes, based on whether the information came from a DA or a “snitch”.  These don’t jump out when doing information modelling based on physical entities, but do jump out when doing user research. Hypotheses are constructed and deductive logic applied to sharpen focus or rule people out.  The detectives create a representation of the crime, supported by evidence.  For the next crime, they do it all over again.  The constructs will be the same, but their instances and combinations will be completely different.

The requirements for a system to support this type of activity seem to be those that make the whiteboard so effective:

  • the ability to create and modify entities of known types, for example the victim and other participants, using conventional iconography
  • the ability to create and modify relationships of known types between these entities; these could be of various kinds, including space-time trajectories, business relationships, interpersonal relationships, and beneficial relationships; they could also be real or hypothetical.
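
The two capabilities above can be sketched as a small data model. The entity kinds, relationship kinds, and the `hypothetical` flag are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str                      # e.g. "victim", "witness", "suspect"
    attributes: dict = field(default_factory=dict)   # trust, source, etc.

@dataclass
class Relationship:
    source: Entity
    target: Entity
    kind: str                      # e.g. "alibi-for", "benefits-from"
    hypothetical: bool = False     # hypotheses vs. established facts

class Whiteboard:
    def __init__(self):
        self.entities = []
        self.relationships = []

    def add(self, entity):
        self.entities.append(entity)
        return entity

    def link(self, source, target, kind, hypothetical=False):
        self.relationships.append(Relationship(source, target, kind, hypothetical))

    def hypotheses(self):
        """The dotted lines on the board: links not yet backed by evidence."""
        return [r for r in self.relationships if r.hypothetical]

board = Whiteboard()
victim = board.add(Entity("Mr. Body", "victim"))
butler = board.add(Entity("The Butler", "suspect", {"source": "snitch"}))
board.link(butler, victim, "benefits-from", hypothetical=True)
```

For the next crime, a fresh board is created: the types persist, the instances do not.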

Search for the YouTube video “sensemaking III” for a discussion.

What has this got to do with search? Well, watching the TV shows, we see both broad-based exploratory searches (“interview everyone who was at the reception”), as well as very detailed information-seeking searches (“George, find out if Mr. Smithers bought his shoes from Pringles”).  But to a large extent, we have crossed over into specialised sense-making applications and won’t discuss these further.

Stay tuned.

From Information Structure To Information Presentation

We’ve reached a transition point in this course.

So far we have been talking about the information structures that are needed to meet user goals.  We have explored various techniques for uncovering the information structures lurking in user stories, business requirements, process diagrams, and flat visual representations.

We will now turn to information presentation, the different ways that information can be organized into user experiences. There is also the whole other issue of searching for information, and this is the subject of a sister series of posts on Search.

What will we be talking about?

  • Predictable Exploration – where do we think the user might want to go from their current location
  • Predictable Focus – what other information could be pulled in that would be useful to the user right now
  • Information Complexes – a common pattern for rich, reusable content
  • Nested Contexts – broadening the perspective from one instance of abstracted user interaction to include other contexts that might be useful

Stay tuned.

Crossing The Swim Lane

One important input that information architects receive from business analysts is swim-lane diagrams. If you need a reminder, search for “swim-lane diagram” on the web. These show how a process threads through the various participants involved in moving it from start to finish.  We have already mentioned how to analyze the information needs for a participant, given a specification of task requirements.  But the swim-lane diagram points to the collaborative aspects of solutions design.  As the process flow crosses a swim-lane, ownership of the process passes from one participant to another (let’s call them P1 and P2 respectively). At that visually unremarkable crossing point, we actually have a variety of considerations that massively affect the overall solution we are building.

Here are some of them:

  • Participant Profile – what are some characteristics of the participants that influence solutions design
  • Notification Method – how does P2 know he has been given ownership
  • Collaboration Content – how do the participants message each other
  • Allowed Actions – what are the allowed patterns of request and response.

These decisions affect both the information structures that we build and the user experiences we present.

To illustrate Participant Profile, consider an online credit card application.  The person applying for a credit card might participate in the process just once.  The agents who process credit card applications typically do so in large numbers.  A manager handling exceptions or escalations might have occasional involvement. This type of profiling highlights differences in process expertise and familiarity, and influences the decisions that follow. Note that this profiling is not user research.  From user research, we might additionally find there are different clusters of behavioural attributes for each participant, leading to their own personas.

Notification deals with how P2 knows that the ball is now in his court.  There are three common patterns; which one to adopt depends on the participant profile and other factors:

  • Work queue: for high volume process participation, it is common to have a work queue, which might be called Credit Card Applications in the above example; design considerations include how to assign work items among agents, etc.
  • Email with Link: for lower volume participation, such as the manager handling exceptions, it is common to send out an email with a link to the item needing attention
  • Email without Link: in some situations, we send an email without any link.  This might be the case acknowledging receipt of a credit card application, or later, saying that the application has been accepted or declined.
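
A minimal sketch of choosing among these three patterns from the participant profile; the profile fields and the volume cut-off are illustrative assumptions, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class ParticipantProfile:
    role: str
    items_per_day: int            # expected volume of process participation
    needs_action: bool            # False for pure acknowledgements

def notification_method(profile):
    if not profile.needs_action:
        return "email-without-link"      # e.g. "your application was received"
    if profile.items_per_day >= 20:      # hypothetical cut-off for "high volume"
        return "work-queue"
    return "email-with-link"

agent = ParticipantProfile("credit card agent", 100, True)
manager = ParticipantProfile("escalation manager", 1, True)
applicant = ParticipantProfile("applicant", 0, False)
```

In practice the choice would also weigh factors the sketch omits, such as security of the link target and assignment of queue items among agents.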

Collaboration Content refers to any discretionary content provided by a participant.  This is often in the form of comments, but a priority flag assigned by P1 might be another example.  Design considerations include whether we are allowed to change the comments we exchange with our users, or whether these must be held unchanged and in their totality.

Allowed Actions is best explained by an example.  Consider the following dialogues:

  • Business Card request:
    • P1: “please order me new business cards”
    • P2: “sure”
  • Help Desk request
    • P1: “can you handle this printer problem for me”
    • P2: “that’s Peter’s area, please run it by him”

They differ in how P2 is allowed to respond.

The Business Card example comprises an INSTRUCTION which we expect will be fulfilled.  Even if it isn’t fulfilled, P2 has agreed to start the processing and later found a problem which was handled by one of the exception flows.

In contrast, the Help Desk example comprises a REQUEST, which may be accepted or declined by P2. This is a realistic example in cases where it is hard to provide a rules-based assignment of resources to deal with requests, or where there are resource limitations.  In this case, we have to allow P2 to decline the request.  Detailed process design will determine whether P2 will find P3 to fix the printer, or kick it back to P1 to find P3.

In terms of swim-lane diagrams, the swim-lane for P2 should be considered an abbreviation for a sheaf of P2s with different attributes (printer expert, laptop expert), from which P1 has to make a choice.  This of course is another aspect of Participant Profiling.

This notion of Allowed Actions is valuable in preventing oversimplification of design.  The way I have presented it here is in turn an oversimplification.  The real world is full of other cases that we might want to consider in our solution:

  • P1 no longer needs the printer to be fixed
  • P2 said they could fix the printer, but can no longer do so and wants to get out of the agreement
  • P2 said they could fix the printer, but ignores the request.

It also includes cases we probably wouldn’t build, for example “P2 will let us know next Tuesday if they can fix the printer”.
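
One way to make Allowed Actions concrete is a small state machine, loosely in the spirit of Winograd and Flores’s conversations for action; the states and transitions below are illustrative, not their canonical diagram:

```python
# Allowed (state, action) -> next state; anything absent is not an allowed action.
ALLOWED = {
    ("requested", "accept"):   "promised",
    ("requested", "decline"):  "declined",
    ("requested", "withdraw"): "withdrawn",   # P1 no longer needs the fix
    ("promised",  "complete"): "done",
    ("promised",  "renege"):   "reneged",     # P2 backs out of the agreement
    ("promised",  "withdraw"): "withdrawn",
}

def act(state, action):
    try:
        return ALLOWED[(state, action)]
    except KeyError:
        raise ValueError(f"{action!r} is not an allowed action in state {state!r}")

# The Help Desk dialogue: P1 requests, P2 declines.
state = act("requested", "decline")
```

An INSTRUCTION, by contrast, would simply omit the decline transition from its table; the “P2 ignores the request” case would need timeouts, which the sketch leaves out.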

If you are interested in a deeper understanding of this, look up “conversations for action winograd flores” or “understanding computers cognition winograd flores” for the theoretical underpinnings, or www.actiontech.com for Business Process Software based on these approaches.  These aren’t always easy reads, but do serve to open our minds to the types of information needed to support business processes.

Driving Knowledge-Worker Performance with Precision Search Results [Webcast]

Driving Knowledge-Worker Performance with Precision Search Results
Earley & Associates
http://info.earley.com/webcasts/intelligent-assistance-drive-performance-answers-not-documents

This recent webcast from Earley & Associates on search-based applications caught my eye for two reasons.  First, I greatly respect this organization. It consults and coaches on taxonomies, IA, and search solutions, and I have benefited from their webcasts over the years.  Secondly, I was just wrapping up a series of posts on search, and this nicely illustrated many of the things I had been talking about.

The webinar starts by explaining how the nature of solutions for knowledge workers has changed over the years due to changes in technology, a more sophisticated understanding of information, and a richer understanding of knowledge worker goals and what success looks like to them.

These are elaborated in subsequent sections.  I particularly liked the material on componentized content and structured authoring.  Unlike adding metadata externally to a document, this provides internal structure based on the semantics of the document, allowing pieces to be created, searched for and retrieved at a more granular level than the entire document.

The webcast finishes with two case studies that both talk about delivering information to knowledge workers how and when they need it, through detailed analysis of the user goals, tasks and information resources available.

These are leading edge solutions, and give those of us who are more mainstream a mouth-watering glimpse of treats in store.

Interacting With Search Results

One consistent theme in modern discussions of search is that it is more often than not part of a larger process.

Let’s start with a very general task, that of interacting with a set of search results, and consider a couple of different search modes: exploratory searches, and searches where you want to take some follow-on action.

First, exploratory searches.  As part of doing exploratory searches, users will be presented with a set of search results, which they may want to utilize in progressively richer ways:

  • does the result look useful to my exploration
  • does it contain anything useful to my exploration
  • how does it fit into what I already know.

This is definitely not a waterfall process; all aspects can be happening in parallel.

The large search engines and popular browsers provide almost no support for these beyond bookmarking.  Instead, we find ourselves cutting and pasting links into a word processing document, adding comments, opening the link and capturing snippets of information, all the time reorganizing our document as our exploration proceeds.

Imagine, however, a situation where you could interact with search results the way you interact with rows in a spreadsheet or emails in your in-box.  Some of the following features could be useful:

  • deleting or hiding search results
  • colouring or highlighting them, in all or in part
  • categorizing them
  • adding comments
  • saving your processed search results.

And if we could also expose content snippets without having to open the search result, and further select or bookmark at the snippet level, we have the beginnings of a useful tool to help us with exploratory searches.
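
A sketch of what such an annotatable search result might look like as a data structure; the field names are hypothetical, and persistence is reduced to a JSON string for illustration:

```python
from dataclasses import dataclass, field
import json

@dataclass
class AnnotatedResult:
    url: str
    title: str
    hidden: bool = False                           # deleted/hidden from view
    highlight: str = ""                            # e.g. "yellow"
    categories: list = field(default_factory=list)
    comments: list = field(default_factory=list)
    snippets: list = field(default_factory=list)   # bookmarked fragments

    def save(self):
        """Persist the processed result, e.g. alongside the saved search."""
        return json.dumps(self.__dict__)

r = AnnotatedResult("https://example.com/post", "Interesting post")
r.highlight = "yellow"
r.categories.append("follow-up")
r.comments.append("compare with last year's review")
r.snippets.append("a key paragraph, captured without opening the page")
```

The point is that the annotations live with the result, so the exploration can be reorganized and resumed instead of being scattered across a word processing document.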

There are technology components and tools to help do this, but this type of thinking is not mainstream.  If the exploration is highly patterned, for example in the case of market research or product reviews, it may well be worth building exploration-support systems with some of these capabilities.

Let’s turn now to the notion of taking action from search results. The research question we are asking is, “given a search result, what actions might the user want to take?”

In the post on information scent, we devised a pattern for structuring search results so they have a consistent format, and illustrated it with three examples, a blog post, a job posting, and an employee benefits overview.

What might somebody landing on these search results want to do?  Here are some possibilities:

  • for a blog post
    • read the post,
    • reply to the post
  • for a job posting
    • read the post
    • find out about the company
    • learn how to apply.  Most people say, duh, “apply for the job”; but this is not usually a process that would be completed within the set of search results, unlike a quick read of a blog post
  • for the employee benefits overview
    • get more information
    • find out who to ask for help.

As part of the business justification of this approach, we should consider whether there are underlying repeatable types of action that we can exploit, and/or repeatable information or usage patterns that will lower our cost of delivery.  There are at least two: go somewhere useful, or do something useful.

We are seeing a number of technologies that support this type of thinking.  They differ somewhat based on the level of structure of the domain.

The corporate clients I consult for are often concerned with overall improvements in findability, in a mix of fairly structured domains.  In these settings, I can cite SharePoint 2013’s hover panel, a customizable area which appears when you hover over a search result.  This can be programmed to allow the reader to take the type of actions mentioned in the above examples.

In the world wide web at large, there are technologies that allow a webmaster to add interaction to their search results when served up by the large search engines.  A search for “interactive snippets” and “microformats” will open the door to this space, but I have not personally walked far through this door.

On the other hand, I have spent many years developing custom corporate applications for specific domains, tasks and processes. In these cases, we can integrate search into a knowledge worker’s environment so tightly that they do not even know from their surface interactions that they are using search, just that the custom application is pulling in the information that they need, when they need it, and how they need it.  Under the hood, though, the solution will utilize our deep knowledge of the users and their information needs, and deliver using metadata driven search.

For examples of this type of application, see the webinar from Earley & Associates entitled “Driving Knowledge-Worker Performance with Precision Search Results”.   This covered many of the things we have been doing; here is my summary.

Stay tuned.

The Information Artichoke Home Page | Search Table of Contents

 

Information Diet Examples

In the last post we spoke about information diet, the situation where a user searching for information has access to multiple classes of resources and strategically chooses which one(s) to interact with based on estimations of cost/benefit, or profitability.

This concept encourages us to think about information resources not just as this web page or that document, but as sets of delivered information differing in how well they support certain dimensions of profitability that users might adopt based on their current situation.

What dimensions of profitability might we consider as we explore this concept?  User research and introspection might indicate the following candidates:

  • time factors: is there pressure to find and consume the information, or is slow and thoughtful deliberation and evaluation of information encouraged
  • where needed: is the information loosely or tightly coupled to where the need for the information arose or where the information might be applied
  • content factors: are we looking for broad brush strokes or exhaustive information
  • interface factors: does the user prefer to consume the content on a mobile device or large engineering workstation
  • preferred or habitual access methods: all things being equal, does the user prefer documents, face-to-face with another person, videos, etc.

Let’s take the example of helping users learn about our company’s corporate web conferencing program. We might imagine a cluster of related information resources for users, including:

  • a survival guide, for people under pressure to set up their first web conference, say in the middle of a meeting.  This could be made available in hard copy as a wallet reference, or for delivery on a smartphone
  • a features guide, for someone tasked with finding a way of enhancing remote collaboration
  • a training guide, intermediate in coverage between the survival and features guide, with content curated to align with common usage scenarios for this organization
  • video training, definitely not suited as a survival guide.

It makes sense to cross-link them so that users can choose a more profitable resource if the one they first find is not quite right for them.

These are presentation-level variants.  Others might involve manipulating the density of information, catering to holistic or serial users, or providing different organizations of the same information, for example depth- or breadth-first.

But how can we create all these variants in a way that is flexible and maintainable?  One key concept is to deconstruct content into a pool of useful fragments that can be recombined in multiple useful ways.

For example, a pool of pieces for our web conferencing material might include:

  • how-to fragments: a structured set of how-to fragments, with short explanation, more detailed explanation, screen shots, etc.
  • features fragments: a structured set of features fragments, describing the features and associated user scenarios.

Suitably constructed, these allow the creation of the different presentations mentioned above, and more besides.
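
A sketch of how such a fragment pool could be recombined into the presentation variants discussed above; the fragment fields, sample content, and selection rules are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class HowToFragment:
    task: str
    short: str        # one-line survival instruction
    detailed: str     # fuller explanation
    screenshot: str   # reference to an image asset

# A (hypothetical) fragment pool for the web conferencing material
POOL = [
    HowToFragment("start a conference", "Click New Meeting.",
                  "From the home screen, choose New Meeting, then ...",
                  "img/start.png"),
    HowToFragment("share your screen", "Click Share.",
                  "Open the Share menu and pick a window ...",
                  "img/share.png"),
]

def survival_guide(pool):
    """Short instructions only -- fits a wallet card or a phone screen."""
    return [f.short for f in pool]

def training_guide(pool, scenarios):
    """Detailed steps plus screenshots, curated to common usage scenarios."""
    return [(f.task, f.detailed, f.screenshot)
            for f in pool if f.task in scenarios]
```

The same pool feeds both guides, so updating one fragment updates every variant, which is where the maintainability comes from.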

The power of this approach is only hinted at by the web conferencing example, and the motivation is broader than information diet.  For a deeper understanding of the concepts and some of the technologies involved, I recommend the book “Content Everywhere: Strategy and Structure for Future-Ready Content” by Sara Wachter-Boettcher, which I have reviewed.

That’s all for now on the Information Foraging model.  We have discussed it mainly in the context of what is achievable in typical corporate technology and platform settings.  But it has also been applied in specialized government and other settings, informing the design of specialized user interfaces.

Stay tuned.
