MOOC: People Analytics

These are my notes from the Coursera MOOC on People Analytics by the University of Pennsylvania.
Performance Evaluation
  • “For any given level of effort, a range of outcomes can occur due to factors outside the employee’s control.”
    • i.e. skill vs. luck
      • “Chance doesn’t persist. And if the challenge is to parse skills from chance, the single most important test is persistence. Do you see it across periods? Do you see it over time? Do you see the positive performance measures persist?”
  • Reasons for skepticism regarding skill
    • overconfidence in the candidate
    • outcome bias
      • If the outcome is good, then the employee is good, disregarding potential for luck.
      • If the outcome is bad, then the employee is bad, disregarding other factors that are out of the employee’s control.
    • Law of Small Numbers
      • Infer too much from small sample of data
  • Regression to the Mean
    • “Anytime you sample based on extreme values of one attribute, any other attribute that is not perfectly related will tend to be closer to the mean value.”
      • Examples:
        • “Performance at different points in time”
          • “E.g., last year’s stock returns and this year’s”
        • “Different qualities within the same entity”
          • “E.g. a person’s running speed and language ability”
    • What gets in the way of seeing this?
      • “Outcome bias”
        • i.e. “We tend to believe that good things happen to people who work hard. Bad things happen to people who work badly.”
        • “We tend to judge decisions and people by outcomes and not by process.”
      • “Hindsight bias”
        • “Once we’ve seen something occur we have a hard time appreciating that we didn’t anticipate it occurring.”
      • “Narrative seeking”
        • “We come to believe things better when we can tell a causal story between what took place at time one and what took place at time two. And if we can tell a causal story, then we have great confidence in our ability to predict what happens next.”
  • Small Samples
    • “Sample means converge to the population mean as the sample size increases. Thus, you will see more extreme values in small samples.” (Strictly speaking, this convergence is the law of large numbers; the central limit theorem describes the distribution of the sample mean.)
    • “We tend to believe that what we see in a small sample is representative of the underlying population. We don’t appreciate that a small sample can bounce around quite a bit, and not in fact be representative of the underlying population. This means that we should be very careful not to follow our intuition about what it means when we only have a small sample. This is a bias acknowledged in the literature called the law of small numbers.”
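The law-of-small-numbers point above can be illustrated with a quick simulation (a minimal sketch; the distribution and numbers are illustrative, not from the course):

```python
import numpy as np

rng = np.random.default_rng(0)

# True "population" performance: mean 100, sd 15 (illustrative values).
def sample_means(n, trials=10_000):
    """Mean of `trials` samples, each of size n."""
    return rng.normal(100, 15, size=(trials, n)).mean(axis=1)

for n in (3, 30, 300):
    means = sample_means(n)
    print(f"n={n:4d}  spread of sample means (sd): {means.std():5.2f}  "
          f"share beyond +/-10 of true mean: {(abs(means - 100) > 10).mean():.3f}")
# Small samples bounce around far more: extreme sample means are common at n=3
# and essentially disappear by n=300.
```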
  • Signal Independence
    • “The average of a large number of forecasts reliably outperforms the average individual forecast.”
    • “But the value of the crowd critically depends on the independence of their opinions.”
    • “Independent means uncorrelated.”
    • Sources of correlation:
      • “They’ve discussed it already!”
      • “They talk to the same people”
      • “They have the same background — from the same place, trained the same way, same historical experiences, etc.”
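A rough sketch of why independence matters: each forecast below mixes an independent error with an error shared across forecasters (the setup and the mixing weight are assumptions for illustration only).

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 50.0           # quantity being forecast (illustrative)
n_forecasters = 20
sd = 10.0

def crowd_rmse(shared_weight, trials=5_000):
    """RMSE of the crowd average when forecasts mix a shared (correlated)
    error component with an independent one."""
    shared = rng.normal(0, sd, size=(trials, 1))               # common error
    private = rng.normal(0, sd, size=(trials, n_forecasters))  # independent error
    forecasts = truth + shared_weight * shared + (1 - shared_weight) * private
    crowd_avg = forecasts.mean(axis=1)
    return np.sqrt(((crowd_avg - truth) ** 2).mean())

for w in (0.0, 0.5, 0.9):
    print(f"shared-error weight {w:.1f}: crowd RMSE = {crowd_rmse(w):.2f}")
# With independent errors (weight 0) the crowd average is far more accurate;
# as opinions become correlated, averaging stops helping.
```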
  • Process vs. Outcome
    • Focus on process: “Use analytics to better understand, and focus on, the processes that tend to produce desired outcomes.”
  • Biases
    • Non-regressive predictions: “We don’t understand regression to the mean and so we forecast too directly from things that have happened in the past.”
    • Outcome bias: “We tend to believe that outcomes reflect the underlying quality or effort; this is outcome bias.”
    • Hindsight bias: “We tend to believe that we knew something was going to happen before it happened, even though we didn’t; this is hindsight bias.”
    • Narrative bias: “And we tend to tell narratives that make sense of all the events that we observe.”
    • “All of these things get in the way of our appreciating the role that chance plays.”
  • Account for chance
    • “The key issue: Persistence”
    • “The more fundamental (skill-related) a performance measure is, the more it will persist over time.”
    • “The more chance-related a performance measure is, the more it will regress to the mean over time.”
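The persistence test above can be sketched in code: performance is modeled as stable skill plus chance (all numbers are assumed for illustration), and the correlation between periods shows how much of the measure persists.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000  # hypothetical employees

skill = rng.normal(0, 1, n)  # stable, skill-related component

def observed(noise_sd):
    """One period of observed performance = skill + chance."""
    return skill + rng.normal(0, noise_sd, n)

for noise_sd in (0.2, 1.0, 3.0):
    p1, p2 = observed(noise_sd), observed(noise_sd)
    persistence = np.corrcoef(p1, p2)[0, 1]
    top = p1 >= np.quantile(p1, 0.9)          # period-1 top decile
    print(f"chance sd={noise_sd}: period-to-period correlation={persistence:.2f}, "
          f"top decile mean goes from {p1[top].mean():.2f} to {p2[top].mean():.2f}")
# The more chance-driven the measure, the weaker the persistence and the more
# last period's top performers regress toward the mean.
```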
  • Critical questions:
    • “Are the differences persistent or random? I.e., how do we know this isn’t just good/bad luck?”
    • “Is the sample large enough to draw strong conclusions? How can we make it larger?”
    • “How many different signals are we really tapping into here? How can we make them as independent as possible?”
    • “What else do we care about? Are we measuring enough? What can we measure that’s more fundamental?”
Staffing and Assessing Causality
  • Correlation with subsequent performance (0-1)
    • Work Samples (0.54)
    • Cognitive Ability Tests (0.51)
    • Structured Interviews (0.51)
    • Job Knowledge Tests (0.48)
    • Integrity Tests (0.41)
    • Unstructured Interviews (0.31)
    • Personality Tests (conscientiousness) (0.31)
    • Reference Checks (0.26)
  • Illusion of validity: “a sense that we think we know much more about people than we actually do. And the challenge with unstructured interviews, when we sit down and we talk to people and try to figure out what they’re like, is really giving in to the illusion of validity, right? We don’t learn very much, but whatever biases and prejudices we have fill up the whole room in our mind in terms of actually judging them. So unstructured interviews are found to be really pretty bad at predicting what people are going to do and how they are going to perform.”
  • “So, in a structured interview, like I say, rather than just sitting down, getting to know somebody, what you’re really trying to do is figure out where they score on the various different attributes. And so, you should have a series of questions that are aimed to tap into those attributes where you can rate them kind of high, medium, low on that attribute. There’s then the possibility to go back after a year or two and say, okay, which of these questions and types of questions actually seems to predict whether or not they’re going to do well on the job and which don’t.”
  • “Once they knew what was really important in this job, then they could think about how are we going to go out and actually screen for that. And so, the idea, then, is basically to take these predictors and see, based on what we know about the people in our organization who are performing well, which of these predictors do tell us something about what people’s performance is likely to be?”
  • “So, if you’re looking at differences in performance, you want to make sure that people are doing the same work in the same place, or at the same level.”
  • “And the challenge is if you just look at one of these variables, is it that that’s predicting performance, or is it just that it’s highly correlated with something else. You wanna disentangle those influences to make sure you really are getting at the attributes that drive performance.”
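One way to “disentangle those influences” is a multivariate regression that includes the correlated predictors together. A minimal sketch with made-up data (the predictor names and effect sizes are assumptions, not course results):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500  # hypothetical hires

# Two correlated predictors: a cognitive-ability test and a job-knowledge test.
cognitive = rng.normal(0, 1, n)
job_knowledge = 0.7 * cognitive + 0.3 * rng.normal(0, 1, n)
# In this toy world only cognitive ability truly drives performance.
performance = 0.5 * cognitive + rng.normal(0, 1, n)

# Looked at alone, job knowledge appears to "predict" performance...
print("raw correlation:", np.corrcoef(job_knowledge, performance)[0, 1])

# ...but a regression that includes both predictors disentangles the influences.
X = sm.add_constant(np.column_stack([cognitive, job_knowledge]))
model = sm.OLS(performance, X).fit()
print(model.params)  # the coefficient on job knowledge shrinks toward zero
```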
  • Internal Mobility
    • “In time, every post is occupied by an employee who is incompetent to carry out its duties” — Peter and Hull, 1969
      • “people rise to the level of their own incompetence”
    • “Which dimensions of lower level performance best predict performance in the higher level job?”
    • Internal transfers
      • “Creates ‘unconventional’ career paths”
      • “Leads to higher performance ratings:
        • Larger pool of candidates
        • Disciplines decision-making”
      • “Associated with higher salaries (3%-6%)”
  • Causality
    • “Correlation is NOT Causation”
    • “Measure and control for omitted variables”
      • “Include in regressions”
      • “Create matched pairs with similar values”
      • “Examine within-person changes to hold the person constant”
      • “Not everything can be measured…”
    • “Look for evidence to rule out alternatives”
      • “What would be some implication of alternative explanations?”
      • “Can you find evidence for or against those explanations in the data?”
    • “Exploit natural sources of randomization”
      • “‘Natural Experiments’ change your X variable in ways that shouldn’t also affect Y”
      • “Mimics assignment to treatment vs control group in genuine experiment”
      • “Allows for assessment of ‘causal effects’”
      • “You need to be lucky”
    • “Conduct an experiment”
      • “Randomly assign individuals/jobs to ‘treatment’ and ‘control’ groups (ensuring balanced characteristics of each group)”
      • “Test whether results in two groups are different”
      • “You need to persuade people to let you do it”
      • “Very time-consuming”
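A minimal sketch of the experimental approach, random assignment plus a two-group test (the intervention, effect size, and sample size below are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical: 200 employees, half randomly assigned to a new training program.
n = 200
assignment = rng.permutation(np.repeat(["treatment", "control"], n // 2))

# Outcome measured after the intervention (illustrative: treatment adds 0.3).
outcome = rng.normal(0, 1, n) + 0.3 * (assignment == "treatment")

treated = outcome[assignment == "treatment"]
controls = outcome[assignment == "control"]

t_stat, p_value = stats.ttest_ind(treated, controls)
print(f"treatment mean={treated.mean():.2f}, control mean={controls.mean():.2f}, "
      f"t={t_stat:.2f}, p={p_value:.3f}")
# Random assignment makes the two groups comparable on everything else,
# so a significant difference can be read as a causal effect of the program.
```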
  • Turnover
    • Problems
      • “Hiring Costs”
      • “Training Costs”
      • “Loss of Critical Knowledge”
      • “Impact on Customer Relationships”
    • Levers
      • “Inform hiring strategy”
      • “Target interventions”
        • “Improve conditions”
        • “Address unmet needs”
        • “Train managers”
        • “Focus retention efforts”
    • “Inverse correlation with turnover”
      • “Supervisor relationship” (0.25)
      • “Job satisfaction” (0.22)
      • “Role conflict” (0.22)
      • “Promotion opportunities” (0.16)
      • “Stress” (0.13)
      • “Co-worker satisfaction” (0.13)
      • “Pay” (0.11)
    • Approaches to Predicting Attrition
      • “Comparisons of % attrition across time and across units”
      • “Comparison of % leaving before specific milestones: 3 months, 6 months, 1 year”
      • “Use multivariate regression to predict who reaches each milestone”
      • “Use of survival / hazard rate models to test which factors accelerate risk of exit”
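A sketch of the survival/hazard-rate approach using the lifelines library (the predictor columns and the exit process are fabricated for illustration; in practice you would fit real tenure and exit data):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # proportional-hazards (survival) model

rng = np.random.default_rng(5)
n = 1_000  # hypothetical employees

df = pd.DataFrame({
    "job_satisfaction": rng.normal(0, 1, n),
    "pay_percentile": rng.uniform(0, 1, n),
})
# Illustrative exit process: low satisfaction and low pay accelerate exit risk.
hazard = np.exp(-0.6 * df["job_satisfaction"] - 0.3 * df["pay_percentile"])
df["tenure_months"] = rng.exponential(24 / hazard)        # time until exit
df["left"] = (df["tenure_months"] <= 36).astype(int)      # exit observed within 3 years?
df["tenure_months"] = df["tenure_months"].clip(upper=36)  # censor at 36 months

cph = CoxPHFitter()
cph.fit(df, duration_col="tenure_months", event_col="left")
cph.print_summary()  # hazard ratios: which factors accelerate the risk of exit
```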
Collaboration
  • “Collaboration is the action of working with others to produce or create something”
  • Organizational Network Analysis (ONA)
    • A –> B “A seeks information from B”
    • A <–> B “A and B seek information from each other”
    • Based on this, you can map individuals with these arrows to get a picture of your organizational network
    • Questions to ask
      • How can we describe collaboration patterns between employees?
        • 5 Building Blocks
          • “Network size”: number of people you are connected to
          • “Network strength”: the strength of each connection i.e. strong ties, weak ties
          • “Network range”: How many groups you are connected to
          • “Network density”: In a high-density network everyone knows one another; a low-density network has few connections among its members.
            • A high-density network has higher trust and makes it easier to verify information.
            • A low-density network can tap into a greater range of information/resources.
          • “Network centrality”: a person has high centrality if s/he is effectively a hub in a network of connections
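Several of these building blocks can be computed directly from the “A seeks information from B” ties using networkx (the names and tie weights below are made up; network range would additionally require each person’s group membership):

```python
import networkx as nx

# Hypothetical "A seeks information from B" ties, weighted by contact frequency.
edges = [
    ("Ana", "Bo", 8), ("Bo", "Ana", 4), ("Ana", "Cy", 2),
    ("Cy", "Dee", 6), ("Dee", "Ana", 1), ("Bo", "Dee", 3),
]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Network size: how many distinct people each person is connected to.
size = {n: len(set(G.predecessors(n)) | set(G.successors(n))) for n in G}
# Network strength: total weight of a person's ties.
strength = dict(G.degree(weight="weight"))
# Network density: share of possible ties that actually exist.
density = nx.density(G)
# Network centrality: who sits at the hub of information flows.
centrality = nx.betweenness_centrality(G)

print("size:", size)
print("strength:", strength)
print("density:", round(density, 2))
print("centrality:", centrality)
```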
      • How can we map these collaboration patterns?
        • Collect data on “A seek information from B”
        • By Surveys and other sources
        • Sample sizes typically range from 25 to 300. For fewer than 25, you pretty much know who talks to whom.
        • The survey should take no more than 10–15 minutes.
        • Example survey: “Below is a list of all the members of your product development team. How frequently do you go to each of these individuals to seek information related to your work?”
          • “Less than once a month”
          • “About once a month”
          • “About 2 or 3 times per month”
          • “About once per week”
          • “About 2 or 3 times per week”
          • “Daily or almost daily”
        • Can use software like UCINET or Netdraw to visualize and analyze the data
        • Other sources:
          • Big data (email, phone, social networks, etc.)
          • Archival records
          • Fieldwork
      • How can we evaluate these collaboration patterns?
        • Compare the 5 building blocks across individuals and changes over time, and map them to outcomes
        • Implications for managing employees
          • “Performance assessment”
          • “Roles & responsibilities”
          • “Pay & promotions”
          • “Training & mentoring”
          • “Job rotations & career development”
          • “Retention”
        • Individual outcomes
          • “Performance”
          • “Satisfaction”
          • “Commitment”
          • “Burnout”
          • “Turnover etc”
        • Level of analysis: is it at the employee, team, or organization level?
        • Reliability
        • Validity: accurate?
        • Comparability
        • Comprehensiveness
        • Cost effectiveness
        • Causality: defensible
      • How can we improve these collaboration patterns?
        • “Is more collaboration needed?”
        • “Where is more collaboration needed?”
        • “How to increase collaboration?”
          • “Emphasize & promote collaboration”
          • “Recognize & reward collaboration”
            • During performance evaluation, ask: how much did you help your colleagues beyond your own work?
          • “Cross-functional meetings, conference calls, job rotations, site visits, events, etc.”
        • “Reducing employee overload”
          • Problem: “about 5% of people accounted for up to 35% of the value-added collaborations; these valuable people often felt very overloaded.”
          • “Identify overloaded people (top right corner), and match them with well-regarded employees who are relatively underutilized (often from bottom left corner), who can relieve some of the burden.”
        • “Improving resiliency of global teams”
          • Problem: teams “relied on only a few key people to connect their members across the world”
          • “Identify a small number of new connections that would have the biggest positive impact on team connectivity, and shift responsibilities more evenly across the members.”
        • “Reducing collaboration inefficiencies”
          • “asked employees how much time they spent interacting with each other and how useful those interactions were”
          • “Focus personalized coaching efforts on collaborative issues unique to each of the low performers.”
        • “Eliminating organizational silos”
          • “Identify and target network connections that hold most strategic relevance for the firm, and track changes to these ties over time to assess the impact of interventions.”
        • “Enhancing career paths”
          • “Revise performance evaluation systems to recognize contributions of partners who help others to win new clients or serve current clients”
    • “There is no one ‘best’ collaboration network for every organization in every situation!”
Talent Analytics
  • “identify differences in ability”
  • “Develop so that everyone’s ability is maximized”
  • Challenges
    • Context
      • “We tend to neglect context when evaluating performance”
        • “Over-attribute performance to personal traits (personality, skill, etc.)…”
        • “…under-attribute performance to the situation the person was in (easy vs. difficult task, helpful vs. hurtful colleagues, favorable vs. unfavorable economy, etc.)”
      • This is “the ‘fundamental attribution error’”
      • “Don’t confuse brains and a bull market.”
      • “When using data to compare employees you must find ways to put them on an even playing field.”
      • “Think of performance relative to expectations, as driven by team, product, industry, economy, boss, etc.”
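One simple way to put employees on an even playing field is to score performance relative to peers facing the same context, e.g. a within-group z-score (the data and the grouping variable below are invented for illustration):

```python
import pandas as pd

# Hypothetical sales figures: raw numbers are not comparable across
# regions facing very different markets.
df = pd.DataFrame({
    "employee": ["A", "B", "C", "D", "E", "F"],
    "region":   ["boom", "boom", "boom", "bust", "bust", "bust"],
    "sales":    [120, 150, 135, 60, 85, 70],
})

# Express each person's performance relative to peers in the same context
# (a within-group z-score), so employees are compared on an even playing field.
grouped = df.groupby("region")["sales"]
df["sales_vs_peers"] = (df["sales"] - grouped.transform("mean")) / grouped.transform("std")
print(df.sort_values("sales_vs_peers", ascending=False))
```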
    • Interdependence
      • “A humbling amount of our work depends on other people.”
      • “Means performance evaluation is often best done at the group level.”
      • “Reliable individual evaluation typically requires seeing them with multiple teams.”
      • Organizational network analysis can help identify value contributors within the team.
    • Self-fulfilling Prophecies
      • “People tend toward performing consistent with expectations. High expectations increase performance; low expectations decrease it.”
        • “Can occur because we treat them differently as a result of our own expectations.”
        • “Can also occur because our expectations literally change their behavior”
      • The Matthew Effect: “The rich get richer and the poor get poorer”
        • “Where experience and recognition matter, those with early advantage will be increasingly privileged over time.”
      • Questions:
        • “Where might your expectations be affecting others’ behavior? Or your evaluation of their behavior?”
        • “What steps can you take to protect evaluation processes from these expectations?”
        • “How can you ensure equal access to valuable resources?”
    • Reverse Causality
      • “When we see two correlated factors, we tend to believe one caused the other. Especially when there is an intuitive direction.”
      • “We are driven to make sense of the world we live in, so we build causal stories from what we observe.”
      • “But this leads us to see things that don’t exist, and this can lead to giving people credit, or blame, they don’t deserve.”
  • Tests and Algorithms
    • Pros
      • “Processing efficiency”
      • “Broader search”
      • “Unbiased” in execution, though not necessarily in the design of the test/algorithm
    • Cons
      • “Hyper-focused”, i.e. they only look for the qualities that were designed into the test/algorithm, nothing more
      • “Low explanatory power”
    • Prescriptions
      • “Do the science”
        • “Rigorous testing in the relevant setting”
      • “Provide human oversight”
        • “Program, test, error-check”
      • “Use multiple tools”
        • “Draw on as many diverse signals as possible”
  • Prescriptions
    • Broaden sample
      • “À la good performance evaluation, expand the sources/signals.”
        • “Additional opinions”
        • “Additional performance metrics”
        • “Additional projects, assignments”
        • “Remember: From maximally diverse sources!”
      • “Second chances. And thirds.”
        • “E.g., a new employee’s boss has a huge impact, but is completely outside his/her control.”
    • Find/create exogenous variation
      • “The only truly valid way to tease out causation is to manipulate an employee’s environment.”
      • “Trade-off: You still have to run a business!”
      • “But can and should be willing to trade off a bit of operational efficiency for greater insight into the abilities of the employees.”
      • “This is major motivation for rotational programs.”
      • “Lesser versions: change teams, direct reports, projects, offices.”
    • Reward in proportion to signal
      • “Match the duration and complexity of rewards to the duration and complexity of past accomplishments.”
      • “For short, noisy signals, better to give bonuses rather than raises, praise rather than promotions.”
        • “Note 1: Most signals are noisy, and we are prone to underestimate the noise.”
        • “Note 2: Of course you also have to retain people, so must factor in the external labor market.”
      • “Drawing major distinctions, and granting major rewards, should only follow major signals.”
        • “E.g., consulting/law firm partnerships typically involve a multi-year, up-or-out partnership track.”
        • “E.g., academic tenure is practically irrevocable, so is granted to relatively few and only after 5-10+ years of performance.”
    • Emphasize development
      • “Talent analytics is not all about selection.”
      • “Even in a field as selection-oriented as venture capital, firms spend considerable resources developing people within their portfolio firms.”
      • “Testing and assessment are at least as valuable as development tools as they are as selection tools. And more palatable.”
    • Ask the critical questions
      • “Are we comparing ‘apples to apples’? I.e., have we sufficiently adjusted for context?”
      • “What impact have other people had on this person’s work? How interdependent are these measures?”
      • “How have expectations colored our evaluations? To what extent have successes and failures been influenced by the way we’ve treated people, the situations we’ve put them in?”
      • “Are the factors we believe lead to success (and failure) truly causal?”
Conclusion
  • Organizational Challenge
    • “Be transparent”
    • “Embed yourself”
      • “People are more easily influenced by people they like, and a fundamental driver of liking is similarity.”
      • “Px: Find and/or create sources of similarity.”
      • “You want to be seen as one of the people you’re trying to influence.”
    • “Share control”
      • If you use tests/algorithms, sharing decision control with humans increases acceptance.
  • See also: Knowledge@Wharton video and article, How Effective Is a Number-crunching Approach to Managing People? –  http://knowledge.wharton.upenn.edu/article/cade-massey-management-numbers/