
Strategies for Delivering Unpopular News

“The mass never comes up to the standard of its best member, but on the contrary degrades itself to a level with the lowest.” – Henry David Thoreau
Recent experiences of Republican members of Congress returning to their home districts during recess to explain President Trump’s proposed changes to the Affordable Care Act are instructive for designing organizational change management (OCM) strategies.
How should you plan to communicate when the news will be unpopular?
We want to surface resistance, and we want to give voice to those closest to the work being affected. Evidence suggests that, despite the additional time and cost, it makes sense to plan for small-group or one-on-one meetings to deliver messages and gather feedback from affected stakeholders.
I’ll never forget a meeting I attended many years ago where senior management assembled all the union machinists in an organization to update them on hazardous material safety. About ten minutes into the meeting, one outspoken participant broke the ice with an accusation that leadership was concealing the true hazards of one of the materials they routinely worked with. Within minutes it became a shouting match: a mob mentality took over, meaningful communication stopped, and the meeting had to be ended abruptly. Eventually, one-on-one meetings were scheduled, not only to share the original information but also to repair the damage done during the mob scene.
Excerpts from a recent New York Times editorial that highlights this issue in detail and references some of the research follow below.

Can the G.O.P. Turn Back the Tide of Town Hall Anger?
Ralph Waldo Emerson wrote in “The Conduct of Life”: “Masses are rude, lame, unmade, pernicious in their demands and influence, and need not to be flattered, but to be schooled. I wish not to concede anything to them, but to tame, drill, divide, and break them up, and draw individuals out of them.”
Thoreau and Emerson argued that crowds add up to something less than the sum of their parts. The principle behind this is called “deindividuation,” in which an individual’s social constraints are diminished and distorted by being part of a crowd that forms to express a particular point of view. The French psychologist Gustave Le Bon first explained this concept in his magisterial 1895 text “The Crowd: A Study of the Popular Mind.” Le Bon found that crowds were inherently “unanimous, emotional and intellectually weak.”
Lots of research confirms this, showing that deindividuation can lower inhibitions against immoral behavior. In one of my favorite studies, researchers set up a bowl of candy for Halloween trick-or-treaters, told them to take just one piece and then left them alone. Some of the children were in anonymous groups, others were by themselves. When kids were part of a group, 60 percent took more than one piece of candy. When they were by themselves but not asked their names, 20 percent cheated. But when they were alone and asked their names, only 10 percent took more than they were allotted.
Of course, it stands to reason that deindividuation could improve individuals instead of making them worse. We can all think of cases in which we have been swept up in a wave of kindness and compassion in a group, even in spite of our personal feelings. Group polarization, in which individuals are pushed emotionally in the general direction of the crowd, can be either positive or negative.

The common error is for leaders to treat the whole group like one individual. Remember Le Bon’s theory that a crowd is stronger, angrier and less ideologically flexible than an individual. Getting irate or defensive will always be counterproductive. Similarly, it is mostly futile to try talking over a protest chant.
The opportunity is to “re-individuate” audience members — to treat people as individuals and not as part of a mass. This is done not by acknowledging questions shouted anonymously but by asking audience members to physically separate from the mass and identify themselves if they wish to speak. When people detach from a group, the research suggests they will become more ethical, rational and intelligent.

A link to the entire article is below.

Org Change Is Missing in Health Care Tech

If you are an Organizational Change Management (OCM) professional, you must read this. If it doesn’t make you angry, then you need to read it again.

Robert M. Wachter, professor of medicine at the University of California, San Francisco, and the author of “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age,” wrote an op-ed that appeared on Sunday, March 22, 2015, in the New York Times under the headline “Why Health Care Tech Is Still So Bad.”

As Org Change professionals, we are collectively pressured to provide hard data to prove the value of OCM.  We see project teams rigorously measured and rewarded for meeting scope, budget and schedule without any regard for actual adoption or business impact.  We deal with senior leaders who believe that they will realize 100% of a projected benefit on the day a project is completed.  We know that this is crazy, but can’t seem to do anything about it.

My recommendation: Download this article from the NY Times.  Circulate it among your sponsors.  Talk with them about it.  This is what OCM is all about.

Here is the unedited text of the op-ed:

Last year, I saw an ad recruiting physicians to a Phoenix-area hospital. It promoted state-of-the-art operating rooms, dazzling radiology equipment and a lovely suburban location. But only one line was printed in bold: “No E.M.R.”

In today’s digital era, a modern hospital deemed the absence of an electronic medical record system to be a premier selling point.

That hospital is not alone. A 2013 RAND survey of physicians found mixed reactions to electronic health record systems, including widespread dissatisfaction. Many respondents cited poor usability, time-consuming data entry, needless alerts and poor work flows.

If the only negative effect of health care computerization were grumpy doctors, we could muddle through. But there’s more. A friend of mine, a physician in his late 60s, recently described a visit to his primary care doctor. “I had seen him a few years ago and I liked him,” he told me. “But this time was different.” A computer had entered the exam room. “He asks me a question, and as soon as I begin to answer, his head is down in his laptop. Tap-tap-tap-tap-tap. He looks up at me to ask another question. As soon as I speak, again it’s tap-tap-tap-tap.”

“What did you do?” I asked. “I found another doctor.”

Even in preventing medical mistakes — a central rationale for computerization — technology has let us down. A recent study of more than one million medication errors reported to a national database between 2003 and 2010 found that 6 percent were related to the computerized prescribing system.

At my own hospital, in 2013 we gave a teenager a 39-fold overdose of a common antibiotic. The initial glitch was innocent enough: A doctor failed to recognize that a screen was set on “milligrams per kilogram” rather than just “milligrams.” But the jaw-dropping part of the error involved alerts that were ignored by both physician and pharmacist. The error caused a grand mal seizure that sent the boy to the I.C.U. and nearly killed him.

How could they do such a thing? It’s because providers receive tens of thousands of such alerts each month, a vast majority of them false alarms. In one month, the electronic monitors in our five intensive care units, which track things like heart rate and oxygen level, produced more than 2.5 million alerts. It’s little wonder that health care providers have grown numb to them.

The unanticipated consequences of health information technology are of particular interest today. In the past five years about $30 billion of federal incentive payments have succeeded in rapidly raising the adoption rate of electronic health records. This computerization of health care has been like a car whose spinning tires have finally gained purchase. We were so accustomed to staying still that we were utterly unprepared for that first lurch forward.

Whopping errors and maddening changes in work flow have even led some physicians to argue that we should exhume our three-ring binders and return to a world of pen and paper.

This argument is utterly unpersuasive. Health care, our most information-intensive industry, is plagued by demonstrably spotty quality, millions of errors and backbreaking costs. We will never make fundamental improvements in our system without the thoughtful use of technology. Even today, despite the problems, the evidence shows that care is better and safer with computers than without them.

Moreover, the digitization of health care promises, eventually, to be transformative. Patients who today sit in hospital beds will one day receive telemedicine-enabled care in their homes and workplaces. Big-data techniques will guide the treatment of individual patients, as well as the best ways to organize our systems of care. (Of course, we need to keep such data out of the hands of hackers, a problem that we have clearly not yet licked.) New apps will make it easier for patients to choose the best hospitals and doctors for specific problems — and even help them decide whether they need to see a doctor at all.

Some improvements will come with refinement of the software. Today’s health care technology has that Version 1.0 feel, and it is sure to get better.

But it’s more than the code that needs to improve. In the 1990s, Erik Brynjolfsson, a management professor at M.I.T., described “the productivity paradox” of information technology, the lag between the adoption of technology and the realization of productivity gains. Unleashing the power of computerization depends on two keys, like a safe-deposit box: the technology itself, but also changes in the work force and culture.

In health care, changes in the way we organize our work will most likely be the key to improvement. This means training students and physicians to focus on the patient despite the demands of the computers. It means creating new ways to build teamwork once doctors and nurses are no longer yoked to the nurse’s station by a single paper record. It means federal policies that promote the seamless sharing of data between different systems in different settings.

We also need far better collaboration between academic researchers and software developers to weed out bugs and reimagine how our work can be accomplished in a digital environment.

I interviewed Boeing’s top cockpit designers, who wouldn’t dream of green-lighting a new plane until they had spent thousands of hours watching pilots in simulators and on test flights. This principle of user-centered design is part of aviation’s DNA, yet has been woefully lacking in health care software design.

Our iPhones and their digital brethren have made computerization look easy, which makes our experience with health care technology doubly disappointing. An important step is admitting that there is a problem, toning down the hype, and welcoming thoughtful criticism, rather than branding critics as Luddites.

In my research, I found humility in a surprising place: the headquarters of I.B.M.’s Watson team, the people who built the computer that trounced the “Jeopardy!” champions. I asked the lead engineer of Watson’s health team, Eric Brown, what the equivalent of the “Jeopardy!” victory would be in medicine. I expected him to describe some kind of holographic physician, like the doctor on “Star Trek Voyager,” with Watson serving as the cognitive engine. His answer, however, reflected his deep respect for the unique challenges of health care. “It’ll be when we have a technology that physicians suddenly can’t live without,” he said.

And that was it. Just an essential tool. Nothing more, and nothing less.