Project Management Lessons from Paleoanthropology

In early 1987, a team of biochemists and geneticists published a study in Nature of 147 mitochondrial DNA samples from women representing a variety of populations. Using a complex analytical model based on mutation rates, the authors determined that all living people have a common ancestor, later dubbed Mitochondrial Eve, who lived in east Africa between 140,000 and 200,000 years ago. This was a blow to the multiregional hypothesis promoted by several prominent paleoanthropologists, which asserted that the fossil record showed continuous evolution over the last two million years in widely distributed locations. But recently, a team of geneticists, paleoanthropologists, and other scientists collaborated to develop a new model. And their approach has important lessons for those of us who manage teams of knowledge workers with diverse specialties.

Acknowledge Biases and Assumptions

Every well-developed knowledge specialty has its own culture, models, methodologies, favored data sources, and assumptions. Consequently, practitioners have biases that reflect their specialty. The scientists in this interdisciplinary team, led by archeologist Eleanor Scerri, wanted to avoid letting their professional biases lead to “cherry picking across different sources of data to match a narrative emanating from one [field].” So, the team met for three days to review each other’s work—challenging assumptions, noting accomplishments and problems, and learning to communicate effectively with their colleagues in other specialties. This process led to a coherent view, goodwill, and mutual respect. Lesson learned: many of our biases arise from deep knowledge in our specialty, and confronting them early can facilitate cooperation and team building.

Develop a Common Vocabulary

Paleoanthropologists, geographers, geneticists, and environmental scientists have very different ways of talking about their work. Each field has its own jargon, buzzwords, and acronyms. Scerri noted, “[Our] understanding of findings tends to be influenced by the models and paradigms we have in our heads, which tend to … [affect] how we process new information.” The team had to pool their knowledge so they could share data, methods, and models without leaving anyone out. This required them to adapt their communications to use terminology that was meaningful to the entire group and avoid a dependence on jargon. Lesson learned: time invested in establishing a common vocabulary facilitates understanding and leads to real progress.

Become Accustomed to Conflict

The researchers were able to reconcile their different theories into a cohesive story that accounts for the complexity of the different data points and leaves room for the abundant ambiguity still present. Scerri noted, “Insights from different models can help to shed light on the answers we look for … it’s all about incremental steps and changing perspectives.” Lesson learned: conflict can often be resolved, but even when it can’t, it is usually rooted in some ambiguity. Acknowledging that ambiguity is a step toward a tentative agreement, pending its eventual resolution.

Scerri and her colleagues recognize that, like humanity itself, their model is still evolving. New data and new ideas will inevitably lead to future refinements, and they are fine with that. And that might be the most important lesson of all: you don’t need to be absolutely certain in order to deliver something of immediate and future value.

And if you’re curious, here’s a link to their paper.

New Post at AITS: The Three “Ins”

My latest article for AITS was published today: The Three “Ins” That Are Putting Our Projects Behind Schedule.

The Execution stage is where projects fall behind, and where leadership is needed the most. Plainly, it takes more than just a good project manager to overcome indecision, inactivity, and indifference, but it falls to the PM to engage the team and their management throughout the project.

As always, thanks for taking the time to read my stuff.

The Internet of (Human) Things and other Siri-ous Issues

“I need your FM voice.” My wife says I sound like the announcer on a classical music station. The problem is, Lien sounds like a Taiwanese woman speaking English, which she mostly learned as an adult. Siri mangles half of what she says, and it annoys her beyond description. My reaction would be to not speak to that wretched Apple faux person at all, but Lien expects that things should work as advertised. Her solution: she composes a message, recites it to me, and then holds her iPhone up to my face so I can repeat it in my dulcet tones. Pointing out to her that using the keyboard would be faster only exacerbates her annoyance. So I help her overcome one more twenty-first century, First World problem caused by the overreach of consumer technology. Which brings me to the Internet of Things.

Useless Cases

An article by Paul Sawers in VentureBeat last year reported on funding secured by San Francisco-based June, which is developing a Smart Oven. I won’t bore you with the feature set – instead, I’ll just ask: How much baking goes on in your household? Based on that, how much use would you get from an Internet-connected gadget that inspected whatever you plopped in the oven, determined what you were cooking, and adjusted the temperature accordingly? Isn’t this why God created thermostats for regulating oven temperature, which recipes invariably stipulate? Perhaps someone smarter than me can explain the use case for this “solution.”

That Looked Better on Jeri Ryan

That oddity aside, there are a lot of incredibly valuable applications for placing passive RFID tags on newly manufactured products so they can self-report their presence. It simplifies everything from preventing inventory shrinkage to checkout (bar codes are so 20th century). So, do we want to use human-implanted RFID chips to authenticate identity? This is a thing, at least in small numbers. A recent article about RFID implants in Australia makes it seem like a silly fad, but the number of available applications for the technology is impressive. And as more phishing attacks expose more of our personal data, the allure of an identifier that can’t be spoofed is undeniable.

Useful Cases

Over the last few years, the IRS has detected a number of fraudulent tax returns submitted electronically, with W-2 forms apparently retrieved by providing minimal information, such as SSN and birth date. If you had an implant with a very long unique identifier that could be read by your phone or other device and validated by some central database, would you feel more or less secure? How about if it could be read by any pocket-sized harvester? Well, would you like your device to generate a complementary key based on your fingerprint that would combine with your RFID tag to uniquely identify you? At what level would you feel secure about being an internet “thing?”
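The two-factor idea in that last question can be sketched in a few lines. This is a minimal illustration only, not a description of any deployed scheme: the tag ID, the fingerprint-derived key, and the choice of HMAC-SHA256 are all assumptions made for the example.

```python
import hashlib
import hmac

def derive_auth_token(rfid_uid: bytes, fingerprint_key: bytes) -> str:
    """Combine an RFID tag's unique ID with a key derived from a
    fingerprint template, so neither factor alone identifies the holder."""
    # HMAC the tag ID with the biometric-derived key; a pocket-sized
    # harvester that skims only the RFID UID cannot reproduce this token.
    return hmac.new(fingerprint_key, rfid_uid, hashlib.sha256).hexdigest()

# Hypothetical values, for illustration only.
tag_id = bytes.fromhex("04a224e2b61c80")  # a 7-byte tag UID
biometric_key = hashlib.sha256(b"fingerprint-template").digest()

token = derive_auth_token(tag_id, biometric_key)
```

The design point is simply that the central database would store and compare the combined token, so a stolen tag ID or a lifted fingerprint alone would not be enough to impersonate you.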

Scenario: Imagine you are working in a hospital emergency room. An ambulance brought in a patient who is unresponsive. Fortunately, her RFID tag was read on the way in, and her records – from medical history to address, next of kin, and insurance coverage – have already been retrieved. But the other victim in the accident lost his arm, where the tag was implanted. He’s bleeding out, and you have to collect his identification the old-fashioned way in order to treat him. While this seems extreme, it’s not unrealistic. An embedded RFID tag might be the difference between life and death.

You Knew This Would Be About Ethics, Right?

As project managers, we’re going to be asked to manage a lot of projects that will be done because they are possible, or because they solve another twenty-first century, First World problem. We need to accept responsibility for being not just the agent of the sponsor but the agent and voice of society. We have to be prepared to point out flaws and even talk powerful people out of their pet projects. If someone had been the voice of reason in 1945, saying, “The war is almost over, and this nuclear genie should be left in the bottle,” would the world be a safer place? On the other hand, we have a responsibility to support the development of technologies that can save lives, even if they seem a bit creepy to us.

Siri and Alexa are just the beginning. From autonomous vehicles to next-generation biometric authentication, we are changing the way humans interact with the world. You might never find yourself in a position to influence the future. But if you do, don’t hesitate to speak out. Don’t wait for the Law of Unintended Consequences to catch up with our innovations.