
Blogs

Submitted by Derrick on

http://dilbert.com/strip/2011-05-16

Have you ever been part of a software development team that complained they could not make progress because the users’ requirements kept changing? It is certainly frustrating to feel like you have wrapped something up, only to have to go back and tweak it over and over again. Even “Agile” teams, which are based around the idea that requirements are a conversation that becomes clear as you move along, often find it hard to go back and revisit completed stories.

Do you know what is even more frustrating than changing requirements? Pushing a new system into production only to have users resist adopting it because it doesn’t meet their needs. My previous article, How Much of Your Project Value is At-Risk Due to Cognitive Bias?, discusses the many ways that project teams can misperceive the willingness of the target audience to adopt. Setting aside higher-level political interests or organizational fears, people adopt when they believe the value of making a change will be greater than the effort required to make it. Essentially: given a choice, people adopt when perceived value > effort, and do not when perceived value < effort. When opting out is not an option, people who believe that value < effort create organizational friction in the form of reduced productivity, political turmoil, or shadow IT. Systematic biases lead us to overestimate the value of our projects and to underestimate how much effort people will need to change their habits.

So what do we do about it? How can we overcome deeply rooted problems in the way we process information? Research has demonstrated that you can’t simply think yourself out of these pitfalls; you need to expand your feedback loops to include people outside of your core team. Let's start with what this means for project execution. It is not uncommon for teams to have some form of validation process in place, but often this process fails to break teams out of their cognitive biases. To be effective, a project validation process needs to be designed to test the assumptions made by the team with unbiased external parties. This approach of early, unbiased feedback is designed to emulate a scientific process and is composed of four steps: Hypothesize, Validate, Feedback, Adjust.

First, design a hypothesis on how to solve a need of the target audience. How will you help them with their Job to be Done? Define an assertion about the future state, and be explicit about the assumptions you have made that need to be tested.

Second, create something tangible that you can validate with your target audience. In software development, this could be a wireframe, clickable prototype, or your iteration deliverable. On a business intelligence project this could be a simple version of a report using a one-time data extract. When implementing a strategic process improvement, this could be a session where you roleplay a scenario with employees from different departments. The point is to do something where they can get a good feeling for what the end-state will mean to them. A straightforward way to accomplish this is to take the deliverables you are already creating for project sponsors, and use them in one-on-one guerrilla interviews with members of your target audience. To be clear, giving a presentation to your Product Owner does not count. That person is a proxy for the target audience, and is subject to the same biases as the rest of the team. Validation needs to be done with someone who is external to your project team in order to break past cognitive bias.

Third, define a feedback mechanism to synthesize qualitative and quantitative learning from the validation exercises. This should enable you to test your assumptions, reveal blind spots you were not considering, and think about what you have learned in aggregate, grouping your target audience by how they perceive the value and effort of the change. Teams fail at this step when they don't take time to synthesize what they have learned and discuss how it should affect their project direction.

Fourth, place deliberate checkpoints in your project plan to review the feedback from users and adjust as needed. A good process will enable you to continually course-correct to arrive at the right intersection of effort and value for your target audience, while providing enough stability to your development team so they can move forward. Teams usually use the feedback from validation sessions to adjust the scope of certain features, but it is less common to step back and assess how the high level project direction should change. We should be forcing ourselves to reevaluate if we are still on track to provide real value at a reasonable level of effort.

Using this approach can help you get outside of your mindset to be better aligned with the needs and pain points of your target audience. You are likely already doing many similar steps, but are your current methods truly enough to break your team out of its biases? My goal in publishing this article is to get feedback on my approach and resources so I can continue to adjust them to be better. How valuable do you think this approach would be for your teams? What might make it difficult to implement this approach? If you have used some form of this approach, what methods of validation, feedback and course correction have you found to be successful?

Static requirements indicate that you have not been effectively seeking feedback from your target audience. Both your ideas and theirs about the effort required and the value to be gained will change over time, and if your process doesn’t allow for course correction, you are likely to create a solution to no one’s problem. Who will be complaining then?

Free Resources

You can download a feedback planning worksheet and adoption testing template I have developed here. All I ask is that if you do, please like or share this article so that more people can find them as well. If you use them, please leave a comment with your experience and feedback so that we can continue to improve the tools.

Thank you!

(See also the LinkedIn version of this article)

Submitted by Derrick on

“The confidence people have in their beliefs is not a measure of the quality of evidence but of the coherence of the story the mind has managed to construct.” – Daniel Kahneman

I recently read Michael Lewis’ book “The Undoing Project” about how a ground-breaking pair of psychologists, Daniel Kahneman and Amos Tversky, upended the science of decision making by demonstrating that the human mind makes systematic errors of judgement. Their work won a Nobel Prize in Economics, paved the way for Richard Thaler and behavioral economics, and has even led to the creation of “Nudge Units” within governments to help shape policy to fit the way people think.

I’ve long been an advocate of including feedback from your target audience frequently throughout the project lifecycle, and this book got me thinking about why this is important, and about exactly which hurdles of perception prevent a project team from fully understanding the end user. My next post covers how to collect this early, unbiased feedback, but first we need to discuss the impact of cognition on envisioned project value.

PROBLEM: Why don’t people adopt changes we think will benefit them?

Often as project teams we define “Success” as Go-Live, or the moment when we release our new change, software, product, or whatever out for others to use. But really, value is only created as people start using what we released. If a start-up releases to market what they believe is a ground-breaking product and no one buys it, they will soon be out of business. Similarly, if a company releases a new process and accompanying software update, its value is directly tied to whether employees are willing and able to use it.

If we want to create the most value from our project, we need to be aware of what is driving or slowing adoption. So the question is: why don’t people adopt changes we think benefit them? We can think about this in terms of two factors: the perceived effort that will be required to make the change, and the perceived value that will come from the future state. When someone believes that the EFFORT to change is greater than the VALUE they will receive, they will resist adoption. It is their perception of effort and value that matters to us, as we don’t know the actual effort required and value received until after the fact. Also, the effort vs. value equation will not be the same for everyone involved.

Generalizing the audience by these factors, we can see patterns that affect adoption. The project team and executive sponsor(s) are unlikely to push out a new change or product unless they feel that the benefit is high and the effort to adopt by the target audience will be relatively low. Where they are in relation to each other may vary depending on your project, and you should be aware of the perceptions of your core team to know if someone senses a risk that others may be ignoring. Usually at least some portion of your target audience is also aligned, and believes that the value of making the change is worth the effort to do so. These will be your early adopters, and while they will be tolerant of helping to work out the kinks, others will be watching them to see how the effort vs. value equation works out in real-life.

People who believe that the change will benefit them (in efficiency, quality, higher profits or reduced costs) but also feel the move could be difficult, will be willing to make the change but hesitant to get started. Attention by the project team to reduce the effort to adopt will help get them past their initial inertia. Some people may be skeptical of the value of the project, while not seeing the transition as difficult. This might be because the main value of the project is not aimed at them, or they have external constraints you did not take into consideration. Another possibility if you are standardizing across the organization is that the new maturity level is below what that group is used to. For these people, working on ease of use or ease of transition will be mostly a waste. You should instead find a way to either include features they value in another area, or else sell them (and their leaders) on the overall value for the organization and that this is just the first step on a larger journey.

The final group, of course, is the resistant users, who believe they will realize little value from the future state and that the transition will require a great deal of effort. These people will be the last to adopt the change, and will be willing to put up with inconveniences and painful workarounds to keep their status quo. They will be vocal with their negative reviews, seeking to persuade others to reject your project. They may engage in political maneuvers to delay or subvert the roll out. They are also the most difficult to persuade, as you need to work on improving their perception of both factors at once. The sketch below summarizes these four segments.
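
As a rough sketch only (my own toy model, with made-up segment names and boolean cutoffs, not something from a formal framework), the four segments above can be thought of as a simple mapping from perceived value and perceived effort:

// Toy model of the effort vs. value segmentation described above.
// "High"/"low" are relative judgements gathered from interviews or surveys,
// not precise measurements.
public class AdoptionSegments {

    enum Segment { EARLY_ADOPTER, HESITANT, SKEPTIC, RESISTANT }

    static Segment classify(boolean perceivedValueHigh, boolean perceivedEffortLow) {
        if (perceivedValueHigh && perceivedEffortLow) {
            return Segment.EARLY_ADOPTER;   // value is worth it, change feels easy
        } else if (perceivedValueHigh) {
            return Segment.HESITANT;        // wants the value, dreads the transition
        } else if (perceivedEffortLow) {
            return Segment.SKEPTIC;         // change is easy, but "what's in it for me?"
        } else {
            return Segment.RESISTANT;       // sees little value and a lot of pain
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(true, true));   // EARLY_ADOPTER
        System.out.println(classify(false, false)); // RESISTANT
    }
}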

Cognitive biases skew the project team’s perception of effort vs. value away from that of end users

In working on adjusting the perceptions of effort vs. value of your target audience, it helps to understand the cognitive biases that affect our judgements, and how many of these biases push the project team and end-users in opposite directions. If you are not careful, your team will be ready to go live with a new system or change and only then realize that your perception of the effort vs. value equation is vastly different from that of your target audience. For our purposes, we will discuss biases that lead our teams to underestimate the effort users believe a change will require, overestimate the value users believe they will receive from the change, or do both at the same time. I am focusing here on how these biases affect a project team’s understanding of the target audience’s perception of effort and value, as that is the first area we need to correct. However, these biases can also impact the target audience’s perception vs. reality.

Biases that lead us to Underestimate Effort and Overestimate Value

Underestimating Users’ Perception of Effort

Curse of Knowledge – Research has shown that once people understand a piece of knowledge, we systematically underestimate how difficult it will be for someone else to learn about it. For instance, a Stanford University study by Elizabeth Newton asked people to tap out the rhythm of a well-known song (such as “Happy Birthday”) on a table while a listener tried to guess the name of the song. The “tappers” predicted that listeners would guess the song correctly 50% of the time, when in fact listeners were only successful 2.5% of the time. Much as we are unable to account for information we lack, we are unable to properly discount information that we do have. In project settings, this causes us to believe that end-users will be able to learn a new system or process more quickly than they are actually able.

Blind Spots – The day-to-day actions of our target audience can be very nuanced, and even when we spend time understanding where they are coming from, it is impossible for a project team to see everything. When judging the difficulty of completing a task, we are incapable of factoring in items we don’t know about. On paper, we adjust for this by adding a buffer to our estimates, but it is much harder to add a buffer to our mental opinions.

Empathy Gap – The more removed a project team is from the target audience, the more difficult it will be to account for their habits and preferences. For instance, an analyst fresh out of college will have trouble appreciating how a blue-collar worker close to retirement prefers a screen to be laid out. What is intuitive for one group will be confusing to another, and it is impossible to discern what works best without enlisting feedback from your target audience.

Loss-aversion – People prefer to avoid a loss rather than to acquire an equivalent gain. A study published in the Journal of Services Marketing found that more customers will defect to competitors after a price increase than a price decrease of similar magnitude will attract. Our target audience feels ownership over their current systems and processes, and so will be more reluctant to give them up than we expect. However, we can use this to our advantage if we include users as co-creators in the project design process so that they feel ownership of the future state as well.

Overestimating Users’ Perception of Value

Ikea Effect – An experiment by Norton, Mochon and Ariely published in 2011 demonstrated that people value work they are involved in more than non-participants do. Subjects were willing to pay 63% more for furniture they helped assemble than for the same furniture pre-assembled. As project teams, we can’t help but see the effort that goes into a project, and are required to frequently persuade others of the value of continuing the work. This is a necessary component of demonstrating the professional will required to see a project to completion, but has an adverse effect on our perception of how much users will value the result.

Availability – When building a product or designing a new process, we will account for the complications that are easiest to conceive of and validate. For instance, we are more likely to protect against bugs that can be found by a team member in a test environment than for issues that arise once multiple subject matter experts are using the system at the same time. An 80% unit test coverage rate gives us a sense of quality, and then a major bug that only happens with the version of Internet Explorer that the finance team has installed renders the application unusable.

Recency – New information comes with a sense of urgency that makes it feel more important than information we have known about for a while. We assume that because we’ve known about an issue for a long time, it is less important to the overall quality of the project, or that a workaround is in place. When a user starts to adopt a system or process, all of these quirks will be discovered together, so the criticality of each piece will be perceived differently by the user than by the project team. This bias can also lead project teams to jump into fighting the next fire before they have fully extinguished the last one.

Optimism – When we have a goal or outcome that we wish to achieve, it is common to err optimistically about our probability of achieving that outcome. This optimism can present itself positively (we feel more likely to be successful than we should) or negatively (we feel more likely to avoid a negative outcome than we should). Optimism bias has been shown to be greater when we have more perceived control. For instance, people believe they are less likely to be in a car accident if they are the one driving. Prior experience and putting in place a mechanism to test the results have been shown to reduce optimism bias.

Underestimating Effort and Overestimating Value

Confirmation Bias – Human tendency is to readily accept evidence that favors our point of view, and to reject evidence that contradicts it. We are more likely to interpret ambiguous information as supporting our point of view. This stems from inductive reasoning and a natural preference to win. The effect of this bias is stronger in emotionally charged situations, or when we have a stake in a particular outcome. As a project team, we are highly invested in the project being successful, so we discount issues or more readily accept workarounds to maintain the belief that the end result will be valuable overall.

Bandwagon Effect – Confirmation bias can be compounded by a dogmatic leader. Subordinates become used to a leader who “doesn’t take NO for an answer,” and so filter out problems or frame them in a way that minimizes their impact on the effort vs. value equation before they present them to the team.

Anchoring – Research has shown that we put too much weight on the first piece of information we receive, even if it is not relevant to the question. In one study, some participants were asked whether Mahatma Gandhi died before or after the age of 9, while others were asked if he died before or after the age of 140. Both anchors are obviously ridiculous; nevertheless, the average response was 50 years old for the first question and 67 for the second. On our projects, once we have designed a workable solution to the problem we have been tasked to solve, it can be very difficult to accept alternatives. This is partly because we compare later partial solutions to the first, more complete solution and look for similar positive characteristics. Anchoring also leads us to overestimate the probability of conjunctive events (e.g. hitting all milestones on time: the probability of each separately is high, so we judge the overall probability as more likely than it is) and underestimate the probability of disjunctive events (e.g. that any one of many risks will impact the timeline: the probability of any single risk causing a delay is low, but their combined probability is higher than we expect). Complex systems increase this bias because they tax our working memory. The quick calculation below makes the gap concrete.
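
To see how large the conjunctive vs. disjunctive gap can be, here is a quick back-of-the-envelope calculation (the probabilities and counts are invented purely for illustration):

public class AnchoringArithmetic {
    public static void main(String[] args) {
        // Conjunctive: ten milestones, each 90% likely to land on time.
        // Each looks safe in isolation, but the chance of hitting ALL of them
        // is only about 35%.
        double allMilestonesOnTime = Math.pow(0.90, 10);

        // Disjunctive: ten independent risks, each only 5% likely to cause a
        // delay. Any single one looks ignorable, but the chance that AT LEAST
        // one bites is about 40%.
        double atLeastOneRiskHits = 1 - Math.pow(0.95, 10);

        System.out.printf("P(all 10 milestones on time) = %.2f%n", allMilestonesOnTime);  // ~0.35
        System.out.printf("P(at least 1 of 10 risks hits) = %.2f%n", atLeastOneRiskHits); // ~0.40
    }
}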

Simplification – In order to make comparisons and decisions, we take complex alternatives and reduce them to their simplest forms. The judgement we reach may not follow from the original situation, depending on the importance of the factors we stripped out. An example of this is causal oversimplification: we assume that because Y followed X, X caused Y, when in reality A, B, C, D, etc. may also have contributed to Y, directly or indirectly.

Pressure to Deliver (Fixed vs. Growth mindset) – Projects can be high-stress situations, especially as we approach milestones and deadlines. A cultural environment that follows a fixed mindset, i.e. one where resources are limited and people’s abilities are seen as static, can lead to project teams becoming protective and uncomfortable acknowledging that they don’t have all the answers. They feel required to protect themselves: the risk of losing face outweighs the benefit of open collaboration, especially when they feel they are on the right track. Such teams are more likely to become entrenched and force their perspectives, including their view of the effort vs. value equation.

What Comes Next?

A true understanding of your target audience’s perception of effort vs. value will inform you about the adoption risk your project faces. Most of the cognitive biases listed here are difficult to self-moderate, as they are built into the way we reason and make judgements. Rules of thumb that work to our advantage most of the time lead us astray in certain contexts. As mentioned earlier, the best way to combat these biases is through early and frequent feedback from the target audience, which forces us out of our entrenched point of view. But what is the best way to collect this feedback for your project, and what do you do with the information once you have it? My follow-up post, Stop Complaining About Changing Requirements!, explains my approach based on my past project experience, but I am interested to hear your thoughts as well. What biases have you seen crop up on projects? What have you done to collect meaningful feedback from your target audience? How much of your project value is at-risk due to cognitive bias?

(See also the LinkedIn version of this post)

References

Tversky, Amos; Kahneman, Daniel. (1974) “Judgment under uncertainty: Heuristics and biases.” Science, Vol 185(4157), Sep 1974, 1124-1131. http://dx.doi.org/10.1126/science.185.4157.1124

Heath, Chip; Heath, Dan. (2006) “The Curse of Knowledge” Harvard Business Review, December 2006. https://hbr.org/2006/12/the-curse-of-knowledge

Dawes, J. (2004) “Price Changes and Defection Levels in a Subscription-type Market: Can an Estimation Model Really Predict Defection Levels?” Journal of Services Marketing. 18 (1): 35–44. doi:10.1108/08876040410520690

Norton, Michael I.; Mochon, Daniel; Ariely, Dan (2012). "The IKEA effect: When labor leads to love" Journal of Consumer Psychology. 22 (3): 453–460. doi:10.1016/j.jcps.2011.08.002

Klein, Cynthia T. F.; Marie Helweg-Larsen (2002). "Perceived Control and the Optimistic Bias: A Meta-analytic Review." Psychology and Health. 17 (4): 437–446. doi:10.1080/0887044022000004920

Strack, Fritz; Mussweiler, Thomas (1997). "Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility." Journal of Personality and Social Psychology. 73 (3): 437–446. doi:10.1037/0022-3514.73.3.437

Submitted by Derrick on

Starting with this SO answer, I built out a fully generic Unit Test POJO builder:

package com.derrickbowen.testutils;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import com.google.common.testing.ArbitraryInstances;

public class Builder<T> {

    private final Class<?> clazz;
    private Map<String, Object> map;

    public Builder(Class<T> clazz) {
        super();
        this.clazz = clazz;
        this.map = new HashMap<>();
    }

    public static Builder<?> start(Class<?> clazz) {
        return new Builder<>(clazz);
    }

    public Builder<T> with(String name, Object value) {
        map.put(name, value);
        return this;
    }

    /**
     * Attempts to find the setter for the given property name and invoke it
     * with the supplied value.
     */
    private Builder<T> setProperty(T instance, String name, Object value)
            throws IllegalAccessException, IllegalArgumentException,
            InvocationTargetException, NoSuchMethodException, SecurityException {
        try {
            if (value != null) {
                invoke(instance, name, value, value.getClass());
            } else {
                findMethodAndInvoke(instance, name, value);
            }
        } catch (NoSuchMethodException nm) {
            // The setter may declare a primitive parameter rather than the
            // boxed type of the supplied value, so retry with the primitive.
            if (value.getClass() == Integer.class) {
                invoke(instance, name, value, int.class);
            } else if (value.getClass() == Long.class) {
                invoke(instance, name, value, long.class);
            } else if (value.getClass() == Float.class) {
                invoke(instance, name, value, float.class);
            } else if (value.getClass() == Double.class) {
                invoke(instance, name, value, double.class);
            } else if (value.getClass() == Boolean.class) {
                invoke(instance, name, value, boolean.class);
            } else {
                findMethodAndInvoke(instance, name, value);
            }
        }
        return this;
    }

    /**
     * Iterates through all public methods on the class to find the setter.
     */
    private void findMethodAndInvoke(T instance, String name, Object value)
            throws IllegalAccessException, InvocationTargetException, NoSuchMethodError {
        Method[] methods = clazz.getMethods();
        String setterName = getSetterName(name);
        boolean invoked = false;
        for (int i = 0; i < methods.length; i++) {
            Method method = methods[i];
            if (method.getName().equals(setterName)) {
                method.invoke(instance, value);
                invoked = true;
            }
        }
        if (!invoked) {
            throw new NoSuchMethodError("Cannot find method with name " + setterName);
        }
    }

    private String getSetterName(String name) {
        return "set" + name.substring(0, 1).toUpperCase() + name.substring(1);
    }

    private void invoke(T instance, String name, Object value, Class<?> claz)
            throws NoSuchMethodException, SecurityException, IllegalAccessException,
            IllegalArgumentException, InvocationTargetException {
        Method method = clazz.getMethod(getSetterName(name), claz);
        method.invoke(instance, value);
    }

    public T build() {
        @SuppressWarnings("unchecked")
        T instance = (T) ArbitraryInstances.get(clazz);
        try {
            for (Entry<String, Object> val : map.entrySet()) {
                setProperty(instance, val.getKey(), val.getValue());
            }
        } catch (Exception e) {
            throw new RuntimeException("Unable to set value with builder", e);
        }
        return instance;
    }
}

I am using it along with Guava's testlib and Lombok to get better coverage of POJOs without much effort (a sketch of the assumed StateProvince POJO follows the tests):

import java.io.IOException;

import org.junit.Test;

import com.google.common.testing.EqualsTester;
import com.google.common.testing.SerializableTester;

public class StateProvinceTest {

    @Test
    public void sanityTests() {
        StateProvince sp1 = (StateProvince) Builder.start(StateProvince.class)
            .with("stateProvinceValue", 2).with("stateProvinceName", "TX")
            .build();
        StateProvince sp2 = (StateProvince) Builder.start(StateProvince.class)
            .with("stateProvinceValue", 3).with("stateProvinceName", "LA")
            .build();
        EqualsTester etester = new EqualsTester();
        etester.addEqualityGroup(sp1,
            (StateProvince) Builder.start(StateProvince.class)
            .with("stateProvinceValue", 2)
            .with("stateProvinceName", "TX")
            .with("stateProvinceAbbreviation", "Big T").build());
        etester.addEqualityGroup(sp2);
        etester.testEquals();
    }

    @Test
    public void testCreateAndSerialize()
    throws ClassNotFoundException, IOException {
        // arrange
        StateProvince sp1 = (StateProvince) Builder.start(StateProvince.class)
            .with("stateProvinceValue", 2).with("stateProvinceName", "TX")
            .build();
        // act & assert
        SerializableTester.reserializeAndAssert(sp1);
    }
}
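
The StateProvince class itself isn't shown here, but as a minimal sketch (assuming field names that match the property names used above, and hypothetically excluding the abbreviation from equals so the first equality group holds), a Lombok POJO along these lines would satisfy both testers:

package com.derrickbowen.testutils;

import java.io.Serializable;

import lombok.Data;
import lombok.EqualsAndHashCode;

// Hypothetical POJO inferred from the tests above. @Data generates getters,
// setters (e.g. setStateProvinceValue(int), which the Builder reaches via its
// primitive fallback), toString, equals and hashCode. The abbreviation is
// excluded from equals so that sp1 and the "Big T" instance compare equal,
// and Serializable lets SerializableTester round-trip the object.
@Data
@EqualsAndHashCode(exclude = "stateProvinceAbbreviation")
public class StateProvince implements Serializable {

    private static final long serialVersionUID = 1L;

    private int stateProvinceValue;
    private String stateProvinceName;
    private String stateProvinceAbbreviation;
}
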
Submitted by Derrick on
Cast your vote for Pariveda's entry into SXSW 2016! The intern team I managed this summer partnered with the Recipe for Success Foundation to build a mobile app for their VegOut! challenge, and we have a great talk planned for SXSW Interactive this year: Learn how, with the right techniques meeting the right medium, technologists and nonprofits combined their expertise and used gamification & social media to spread the message of healthy eating on a global scale. Please create an account on this page (only takes a minute) and vote for us: http://panelpicker.sxsw.com/vote/46941
Submitted by Derrick on

I have been setting up an Ionic (http://ionicframework.com/) project using gulp, and the ionic templates don't include any unit tests. This would not do. First, I followed these posts to get unit testing set up with Karma and Jasmine:

But that still didn't get the gulp build to fail and exit upon unit test failures, as I would want for a nice Continuous Integration setup. After looking through a bunch of Gulp and Karma API docs and Stack Overflow posts, I came to this setup. It will exit the build if tests fail (though tasks still run in parallel to take advantage of the gulp paradigm), and it also adds the Karma test watcher to the gulp watch task.

gulpfile.js

...

/**
* Test task, run test once and exit with error if failure
*/
gulp.task('test', function(done) {
    karma.start({
        configFile: __dirname + '/tests/karma.conf.js',
        singleRun: true
    }, function(code) {
        if (code == 1) {
            console.log('Unit Test failures, exiting process');
            done('Unit Test Failures');
        } else {
            console.log('Unit Tests passed');
            done();
        }
    });
});

gulp.task('default', ['test', 'sass']);

gulp.task('sass', function(done) {
  gulp.src('./scss/ionic.app.scss')
    .pipe(sass({
      errLogToConsole: true
    }))
    .pipe(gulp.dest('./www/css/'))
    .pipe(minifyCss({
      keepSpecialComments: 0
    }))
    .pipe(rename({ extname: '.min.css' }))
    .pipe(gulp.dest('./www/css/'))
    .on('end', done);
});

gulp.task('watch', function() {
    gulp.watch(paths.sass, ['sass']);
    karma.start({
      configFile: __dirname + '/tests/karma.conf.js',
      singleRun: false
    });
});

...
Submitted by Derrick on
I got a Google hit to my site for: "Almost Positive" chords derrick bowen, which blew my mind. Almost Positive was my garage band from high school. I had a backup of our old website, so I uploaded it to http://derrickbowen.com/almostpositive/ for posterity's sake. It includes lyrics, some chords, photo journals, and a couple of pretty sweet Photoshop-generated JavaScript hover images.
Submitted by Derrick on

Em                   G
There was a man from Gotham
       D            Em
In the Batmobile he rode
Em              G
Defending the defenseless
        D               Em
It's to him I sing this ode
         Am          G
With his Hammers of Justice
   D                 Em
He struck down every foe
C
Safety for our families
     D              Em
It's this to him we owe
         Em
Grey and blue
         G
Grey and blue
    C        G
The man from Gotham
D                 Em
Wore the grey and blue

Em-G-D-Em
He fought the vilest villains
Too numerous to list
Rendering his verdict
With Batwing covered fists
Am-G-D-Em
Descending from the night sky
His scalloped cape would flow
C-D-Em
Those who broke the law deserve
the punches he would throw

Chorus

Em-G-D-Em
But beneath the mask was just a man
Same as you and me
His true face he could never share
A secret identity
Am-G-D-Em
But why endure this lone crusade
Fight-the-fight you just can't win
C-D-Em
If asked the Bat would tell you
Someone's gotta stand up to all this sin

Chorus X2

    C        G
The man from Gotham
D                 Em
Wore the grey and blue

Lyrics credit goes to: http://ian2x4.deviantart.com/art/Grey-and-Blue-Lyrics-209516865 Chords from: http://chordify.net/chords/batman-the-brave-and-the-bold-grey-and-blue-b...
