Category Archives: Thoughts

Planning is imagination and preparation

Yesterday I had a phone interview with Peter Morville who is writing a book about planning. We talked about planning, career and some of the things I’ve noticed so far living in Rwanda. The interview will be out next week.

Peter asked me what planning means to me. I love questions that require reflection like that one. I realize that over the years my thinking about planning has changed a lot. I tried to articulate some of that while we talked. The first thing is, I think of planning distinctly from goal setting.

I am certain most people would disagree with that. However, figuring out what you want to accomplish and why, and stating it clearly and unambiguously, mostly precedes the act of planning, which is what you do in order to ready yourself to fulfill the goals you have set.

Sure, sometimes you may have a weakly defined starting point, or something like a fuzzy goal, but nonetheless, I consider planning to be what comes after you have identified a problem space and decided to do something within it.

Having said that, these days I think of planning as a two-part effort: imagination and preparation.

The first part is an exercise in envisioning scenarios, playing them out to understand the required actions and inputs, and the resulting effects and outcomes. I use the term imagining because I specifically mean playing it out in your mind: more like watching it unfold in your brain than actively acting it out in the physical world. This does not mean you don’t use the physical world to figure out scenarios; I just mean the central concern of this aspect of planning is exploratory, observational and introspective (externalized if you are doing it collaboratively, but still).

When considering scenarios, sometimes the constraints are self-evident from the get-go; sometimes they unfold through the process of going through the scenarios themselves. Think of imagining a camping trip and considering “What if it snows?”, “What if I run out of food?”, “What if a bear shows up?” The goal-setting exercise that precedes this often creates a frame for the scenario landscape, sometimes with soft boundaries (e.g., to become more emotionally aware and in touch with one’s own emotions), sometimes with hard boundaries (e.g., to complete a sprint for a set of features in a software project).

In my view, by anticipating a diversity of ways in which things can play out, you can weigh more and less likely scenarios, and the next stage, preparation, can happen in alignment with them. This is where a lot of people hit a wall in a professional context: exploring how you are going to plan in this manner can clash with a presumed or pre-established work process, which has built-in assumptions about how planning takes place.

Having worked on projects with very different approaches over my career has reinforced the value of going through this exercise, even if it is only to adopt a pre-defined process (and even if you are only doing it for half an hour to get yourself situated). At a minimum, it helps identify risks and potential obstacles, allowing you to be better prepared later, and in certain situations it may give you insight into why you should reject the presumed process altogether.

A good distinction for me is that imagining, or identifying and playing through scenarios, is about effectiveness: reaching your goal. Preparation is concerned with efficiency: getting through the real, live situation with minimal disruption until that goal is met.

Preparation can be so many things though. Your packing list for a trip, a mise en place when you are cooking. These are the things we traditionally think about when we talk about preparing for something. However, I believe preparation also includes things like: getting into the right state of mind, setting expectations with yourself and others, minimizing likely disruptions, ensuring infrastructure is available, testing some aspect of the scenario, prototyping, and so on. My point is: what’s in scope for preparation is whatever is needed to satisfy the goal within the presumed scenario or scenarios you picked out.

You can prepare for one specific scenario, some variations on a scenario or multiple distinct scenarios, based on your judgment of the likelihood they will happen. You can emphasize your preparation for primary scenarios and have alternate plans for secondary scenarios. That’s what we mean when we say we have a plan B. And C and D and so on.

In the end, what we call a plan is a presumed set of circumstances (a scenario or scenarios) and a presumed set of artifacts, participants and flow, chosen to reach a particular goal.

I don’t know if this frame makes sense to anyone else, but it is currently helping me. I’m personally more attuned to the imagination stage of planning, and though I usually prepare sufficiently, I know that I enjoy winging it too. Winging it is not ‘not planning’; it is not just ‘showing up and seeing what happens’. It means dealing with the scenario that plays out in reality without the proper preparation. And it’s just more fun that way sometimes. ;)

Introducing Carebot

You may have heard about Carebot in the media and you may have talked to us about the technical aspects of what it could be, so here is a quick update about what Carebot is today and where things stand.

What is Carebot for?

I started describing Carebot as “meaningful analytics for journalism” when I noticed that the conversation about success in journalism always turns to what data people are looking at and what tools they are using, rather than what problem analytics seeks to address and why.

We are not creating Carebot because there is a shortage of analytics tools or even a shortage of interest in analytics in journalism (at this point, there is great interest and tools abound). There is, however, a misalignment between the day-to-day of newsrooms and the level of insight journalists want and need about the performance of their work once it’s out in the world.

Carebot’s objective is to better align newsroom analytics with the reality of the journalists working in these diverse settings. That requires building a tool to deliver information, but it also requires understanding the workflow and needs of the journalist, acknowledging the variety of storytelling devices that can be employed in their work, and developing metrics that best fit all these characteristics.

Carebot

Analytics are collections of data points; in order to be meaningful they need to be contextualized and relevant to the circumstances in which they are presented. What makes analytics particularly meaningful (as opposed to a handful of generic tracked data regurgitated from a database into a dashboard or report) is how well they drive people to act, whether to tweak a story, celebrate a success or make decisions about subsequent work.

The drive behind Carebot is a set of hypotheses. Our project is a prototype for the approach we believe can help us test and explore these hypotheses and understand how things play out in the real world of day-to-day journalism. We are exploring a few different facets:

1. New thinking around what newsroom analytics should be. This includes what we measure (what’s ‘a story’ and what it can be compared to – or not), and how we are measuring (which measures and indicators make sense for what questions, and which ones are NOT appropriate, or misleading, or deceiving).

2. The technical implementation and adoption of those analytics. This concerns how we are measuring (the technical approach to tracking, analyzing and reporting) and when and where we are helping journalists become aware of these insights (the content, framing, frequency, volume and scope of information shared).

Since Carebot is first and foremost an experiment, the way it works today may be completely different from how it works in six months, but the premise stays the same: continuously exploring newsroom analytics and an implementation that fits newsroom circumstances.

How does Carebot work?

The mechanics of Carebot are extremely straightforward: Ze collects usage data on live stories and reports specific metrics as notifications to the newsroom over a finite period of time. In our current phase of experimentation, here’s how it’s done:

For every story, Carebot uses a tracking component (a little snippet of code) which captures a few different aspects of how users are accessing and interacting with a story.

example tracking of on-screen visibility for graphics

This information is fed into Google Analytics (as events and event categories, if that lingo matters to you). Carebot is also tracking some aspects of usage for things more granular than a story, such as graphics and images (which can be interesting in and of themselves).
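
To make that concrete, here is a rough sketch of what one of those events could look like. To be clear, Carebot’s actual tracking component runs in the reader’s browser; the Python below simply illustrates the idea using Google Analytics’ Measurement Protocol, and the property ID, category, action and label names are invented for the example.

```python
import requests

# Hypothetical sketch only: Carebot's real tracking is a snippet running in the
# browser. This server-side stand-in just illustrates the shape of a visibility
# event sent to Google Analytics via the Measurement Protocol. The property ID,
# event category, action and label are made up.
def report_visibility_event(client_id, story_slug, graphic_slug, seconds_visible):
    payload = {
        "v": "1",                    # Measurement Protocol version
        "tid": "UA-XXXXXXX-1",       # placeholder GA property ID
        "cid": client_id,            # anonymous client identifier
        "t": "event",                # hit type: event
        "ec": "carebot-visibility",  # event category (hypothetical)
        "ea": "graphic-on-screen",   # event action (hypothetical)
        "el": story_slug + "/" + graphic_slug,  # which story and graphic
        "ev": int(seconds_visible),  # event value: seconds the graphic was visible
    }
    requests.post("https://www.google-analytics.com/collect", data=payload, timeout=5)


report_visibility_event("555.1234", "example-story", "example-graphic", 12)
```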

This is not unlike any other measuring and tracking approach on the web, with the exception that we are intentionally focusing on a very narrow and specific set of measures — measures and indicators we are developing and testing to understand their relevance and usefulness to decision-making in the newsroom (more on that in a later post).

Newsrooms are busy places, so after aggregating this data, Carebot does two things differently from other reporting tools:

a) it only surfaces metrics identified as possibly most useful in understanding story performance for a given story type and

b) it offers this information through periodic notifications over a finite period of time, following the usual traffic pattern for a story.

a notification about the graphic on-screen visibility for a story

Currently the notifications are being delivered via Slack in a channel used by a specific desk (our Graphics desk) because we are prototyping Carebot in a newsroom where this particular technology is in use. Carebot is agnostic to the delivery method and the notification would work just as well as a text message, an email, a mobile app, or as a bot integration on other services like Hipchat or Twitter.
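
As an illustration of how lightweight that delivery can be, here is a minimal sketch of posting a notification to Slack through an incoming webhook. The webhook URL and the message text are placeholders, and this is not necessarily how Carebot’s own integration is wired up.

```python
import requests

# Minimal sketch of delivering a Carebot-style notification to Slack via an
# incoming webhook. The webhook URL and message are placeholders, not Carebot's
# actual integration.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"

def notify(message):
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=5)
    response.raise_for_status()


notify("People spent an average of 32 seconds looking at the graphic in 'example-story'.")
```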

When Carebot first becomes aware of a story (the story gets published and the tracking mechanism starts gathering data from user visits), it alerts the newsroom so they know Carebot is keeping track of things and will start sending relevant notifications over the course of the next 3 days (3 days only due to the current scope of our work):

first notification Carebot shares about a story

The scope of our current experiment is limited to stories with graphics, and the metric we are testing is an indicator we call the “linger rate”, which assesses how much time users spend with the graphic, not just the time they spend with the story the graphic is in.
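
Without getting into the exact definition (more on our indicators in a later post), here is one simplified, entirely hypothetical way of turning periodic “graphic is on screen” events into a linger time per session and an average across sessions:

```python
from collections import defaultdict

# Hypothetical sketch of a linger calculation: assume the tracking snippet sends
# an event roughly every 5 seconds while a graphic is visible on screen. Each
# event is (session_id, graphic_slug). This is NOT Carebot's actual definition,
# just an illustration of the idea of "time spent with the graphic."
SECONDS_PER_PING = 5

events = [
    ("session-a", "example-graphic"),
    ("session-a", "example-graphic"),
    ("session-a", "example-graphic"),
    ("session-b", "example-graphic"),
]

def linger_seconds_per_session(events, graphic_slug):
    pings = defaultdict(int)
    for session_id, slug in events:
        if slug == graphic_slug:
            pings[session_id] += 1
    return {session: count * SECONDS_PER_PING for session, count in pings.items()}

per_session = linger_seconds_per_session(events, "example-graphic")
average_linger = sum(per_session.values()) / len(per_session)
print(per_session)     # {'session-a': 15, 'session-b': 5}
print(average_linger)  # 10.0
```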

After the first notification to indicate tracking has begun, Carebot reports that metric for the story every 4 hours for the first day, then twice daily for the second and third day. This frequency and volume are part of our experiment to find a good balance of awareness without nagging.
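
To make that cadence concrete, here is a small sketch (not Carebot’s actual scheduling code) that generates the notification times from a story’s publication time: every 4 hours on the first day, then twice daily on days two and three.

```python
from datetime import datetime, timedelta

# Sketch of the notification cadence described above: every 4 hours for the
# first day, then twice daily on the second and third days. Illustrative only.
def notification_times(published_at):
    times = []
    # Day 1: every 4 hours after publication.
    for hours in range(4, 25, 4):
        times.append(published_at + timedelta(hours=hours))
    # Days 2 and 3: twice daily, roughly every 12 hours.
    for hours in (36, 48, 60, 72):
        times.append(published_at + timedelta(hours=hours))
    return times

for t in notification_times(datetime(2016, 3, 1, 9, 0)):
    print(t.isoformat())
```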

example of wording variation used on 2nd and 3rd notifications

Some stories contain multiple graphics or graphics may be used across different stories. These unique scenarios are helping us explore different presentation needs and how to best answer questions raised by journalists when the right metric is available at the right time.

early example for a story with multiple graphics

Carebot is currently a broadcasting tool more than it is a bot that responds to specific user requests, but we are beginning to add a few capabilities based on user feedback and what we are learning as users receive information from Carebot.

test query for specific graphic information

What’s next for Carebot?

We have been developing Carebot for just four weeks and in this short time have seen great potential, both from the questions designing it has raised and from user feedback. We are working in 2-3 week cycles in which we increase Carebot’s capability by growing the scope of stories we analyze, the scope of metrics we develop, track and report, and the scope of features we build.

Our work is currently supported by a Knight Foundation Prototype Grant through May 2016. In this time, our goal is to have a concrete set of metrics (well articulated and documented), with a functional tracking and reporting mechanism (Slack notifications for now) being used in a live newsroom setting (NPR), actively helping the work of journalists (Visuals’ desks).

This will help us demonstrate the potential of Carebot’s approach and share the lessons we learn from experimenting with this notification-based approach and with new metrics to improve how the performance of journalism is understood and assessed.

Note: The Carebot team will be at NICAR 2016, March 9-13, and more than eager to talk to anyone interested in understanding performance and analytics for journalism. Please connect with us in person (there will be a session about this very topic in the conversation track!) and online through @thecarebot. You can follow our progress on Github.

Concept Modeling is Hard

Concept Modeling is a method used to visually express understanding. It forces the author to indicate the explicit relationships between the concepts that make up a domain. I became interested in this approach back when Bryce Glass created a very visually compelling one to explain Flickr (10 years ago!).

Flickr User Model, v0.2
Bryce Glass, 2005

I’ve talked to Bryce about how to do these things and how hard it is, and I feel like every time I try to do one I fail and abandon it before I get to a place where it can be useful for anything.

Since starting at NPR three months ago, I’ve met lots of people with very different answers to questions about how NPR works. Not conflicting, but nuanced based on their role (and using very specialized language). I thought creating a Concept Model would be a good way to capture these perspectives and hopefully generate an artifact that could help people start conversations from a common base of understanding, so they could more quickly dig deeper into whatever topic they need to address.

This past week was Serendipity Days at NPR (often called hack days or hack weeks in other places), a time for people to work on things that they wouldn’t normally work on, but that are useful to the collective. So I decided to try to get others to help create a Concept Model to explain NPR and how users relate to it. Being new to the domain (NPR) makes me ill-equipped to explain it, but I thought I could facilitate the process and get smarter and more qualified folks to contribute the content I lack.

We started out with a free-listing exercise, generating one concept per post-it of “things that make up the NPR ecosystem”, then clustered like items by affinity. A first inspection showed that most things were well understood among us: the relationship between NPR and local stations, the financial model behind NPR, even how NPR plays out in social media. But the bucket of things we labeled “content” was a complete mess. There was very little clarity around what an “episode” meant in the context of radio versus podcasts, for example. So we decided to dig into this area more deeply to make that understanding more explicit.

Freelisting followed by affinity clustering

It was unavoidable: put three information architects in front of a whiteboard discussing definitions and you will very quickly see a taxonomy conversation take shape. As we discussed the differences in concepts and their hierarchical relationships across radio, podcasts, articles, photo essays, apps and web apps, we needed more robust language and structure than a general mapping was offering. We were really talking about the Content Model needs of the whole organization.

The blob

So I went back to the procedure for making a Concept Model and wrote down statements to define what the terms meant. This helped disambiguate situations where one term meant different concepts, as well as single concepts that had multiple terms depending on the domain.

Concept statements

It is interesting to note that this did not flow well at first. My focus question was “How does NPR work?” and my statements were broad and useless. I reframed the question to “How does NPR tell stories?” and all the above came out easily: concrete and specific. That was a big lesson about Concept Modeling and the usefulness of the focus question. (In later iterations it became “How does NPR investigate and tell stories?”)

Once I had a good set to work with, we started visually connecting them and making the implicit relationships visible by linking the post-its on the whiteboard and using verbs to express the relationships between them.

concept model fetus

This was very revealing. We found we were missing granularity, and after flailing for a while (but asking and generating lots of new and interesting questions), we realized that we had hit a dead end talking about this in terms of abstractions. We kept going back to the website to see how things were actually organized, or looking up glossaries people had made internally.

Veronica suggested we map out specific examples so we could get a really concrete sense of the relationships, and then see if a pattern emerged that would let us go back to abstractions. That was brilliant, and we found new terms we had not even included, like “a desk”, “a beat” or the actual people who make the content. This is relevant because we observed that most of the work is organized around people, so that had to be a central construct in the model.

example relationships

By this point we had identified something more concrete to model, but also gathered a lot of new questions we could not quite answer. For example, if a “desk” is a group of journalists that pursue a specific beat (topic area), are all groups of journalists with a topical interest a desk? It turns out the answer is no, but what are they then? The thing about Concept Models is that you need to be precise, so you have to come up with an answer in order to include a concept like this. Presumption without definition does not play well, because after all, this is about expressing understanding and vagueness is NOT understanding.
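
As a tiny illustration of what that precision ends up looking like, the heart of a Concept Model can be captured as concept-verb-concept statements. The handful below are simplified examples drawn from our exercise, not NPR’s settled model:

```python
# Simplified, illustrative relationship statements (concept, verb, concept) from
# the exercise; this is not NPR's settled concept model, just the shape of one.
relationships = [
    ("desk", "pursues", "beat"),
    ("desk", "is made up of", "journalists"),
    ("journalist", "reports", "story"),
    ("story", "may contain", "graphics"),
    ("graphic", "may be reused across", "stories"),
]

for subject, verb, obj in relationships:
    print(subject + " --[" + verb + "]--> " + obj)
```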

We almost got off track with the Concept Model because we needed to understand the Content Model of all of NPR for this part, but it raised the issue of how a unified Content Model would benefit the various areas that share the same concepts and relationships. This is how far we got with this exercise given how much time we had to explore, but serendipitously, I happened to tweet about what we were doing…

Paul Rissen, a Data Architect at the BBC who shares our enthusiasm for this geekery, wrote back and told us that the BBC had done a lot of work in this area (Content Modeling, not Concept Modeling). He was kind enough to Skype with us for an hour and share some of the definitions, relationships and terms the BBC uses to express its content ecosystem (thanks Paul!).

This was helpful because it offered a concrete AND complete model for talking about the same things, as well as different language that showed us which of our terms have general application and which are just NPR lingo.

This was a few hours of work, and now I’ve decided to spend more time continuing to validate the relationships with other teams and people in other roles (editors, reporters, producers, etc.) to document the Concept Map as I originally intended. In addition, I drew some other conclusions from this exercise:

  • Creating a Concept Model collaboratively was WAY more productive than attempting to interview people and distill that understanding on my own, as I tried before. The process of quickly asking and answering questions together got me much further than I could have on my own.
  • I knew Concept Models were damn hard to do, but learned that the visual representation of the relationships is relatively easy in comparison to gaining agreement around the meaning of things.
  • Even though I didn’t finish the whole thing, Dan and I did a show & tell to explain how we went about it and I received a lot of positive feedback, validating that doing this and having an artifact is indeed desirable and worth pursuing (which is why I’ll continue doing it).
  • It is very easy to abandon the whole thing to avoid this complexity and attempt to write a glossary instead. Except that glossaries are definitional by nature, while Concept Models are relational. And it’s in this relationship between things that deeper meaning and understanding emerges. So, I doubt I’ll create a glossary again, unless it’s in the service of creating a Concept Model, which has already proven more useful.
  • We came to realize that the shape and emphasis of certain concepts in the diagramming was a political statement. So much so that at one point we noticed that the way we had done it reflected a publishing-driven perspective, whereas we wanted a journalism-driven perspective. I suspect the shape/tone of the focus question is partially responsible for this. It also made me wonder if there are assumptions or principles worth stating up front to help frame this.
  • Throughout this exercise we repeatedly asked “are we trying to illustrate what is, or what should be?”, which was revealing. The point of the Concept Model is to make the complex clear, not simpler. At the same time, the process of understanding concepts and their relationships and disambiguating things forces you to create new language and normalize existing language, so in a sense it can express a viewpoint that some may not see as the “current state”. This simply shows that there is NOT a currently shared mental model of that domain.

Thank you Veronica and Dan for your patience going along with my crazy ambition to map all the things (and Kate for stopping by and answering tons of questions!). Special shout-out to Paul Rissen for being so generous with his time and expertise.

I’ll likely write another post with more details as this work progresses.

2015 Knight-Mozilla Fellowship

Next year I will be a fellow in the Knight-Mozilla Fellowship program run by OpenNews, where I will work with the NPR Visuals team, making fantastic things that help empower people with tools and disseminate the spirit of open source journalism. I am very excited about this opportunity and have lots to say about why I decided to take this path and why it’s important. That deserves more writing so I’ll get to it later.

I plan to capture my journey to keep track of what I do and what I learn. I’m starting with a timeline visualization of the Fellowship milestones thus far, from first becoming aware of it through announcing my participation.

Check it out.