[This post is the introduction of the resource “The significant evolution in staff assessments: a good fit for think tanks?” by Raymond Struyk. Download the resource.]
There is an ongoing, comparatively quiet evolution in the business world, especially among tech firms, large consultancies and major international law firms, to refashion traditional staff performance assessment and annual goal-setting systems. Long-simmering dissatisfaction began in 2011–2012 to fuel a reform movement that has gradually gained momentum, with a jump in adopters of the new system from 2017. The new procedures, still often in trial stages, are now spreading to a broader array of businesses and to some think tanks.
One key attribute of the new paradigm is frequent (at some firms, ideally, ‘continuous’) feedback on performance so staff can develop more quickly professionally and bring more tailored approaches to tasks at hand. More frequent supervisor-staff interactions are thought to lead to strong working relations and better mentoring. The accent is strongly on the future rather than reviewing last year’s activities.
Second, the leading industries now place very high value on agility: the ability to shift processes and products in response to technological change, important changes in the regulatory and fiscal environments, and market developments.
A close friend of mine has worked in the marketing department of a major solar power company for the past 15 years. Technological disruption has been a constant factor in product development and production and, in turn, pricing. Additionally, countries, or even states within countries, have frequently shifted tax and other incentives to encourage consumers and firms to adopt solar power as their electricity source. In such an environment, annual feedback and goals do not make much sense.
Think tanks clearly operate in a different, much more stable, environment. Some are, however, confronting staff demands for more frequent feedback and promotions. Such demands are seemingly most often found among millennials (those born between January 1983 and December 1994) and a somewhat younger cohort. Their concern is that they are not advancing quickly enough within their organizations, or at least that their progress is not being recognized concretely, and other staff are following their lead. Some survey data suggest this is a widespread issue.
Many think tanks have for 15 years and more employed sophisticated annual performance assessment systems that avoid the clear limitations of some of those that for-profit entities are replacing. Strong points often found in think tank protocols include:
- Obtaining the views of a staffer on her accomplishments for the full past year;
- The staffer’s views about whether the developmental goals set the previous year were achieved; and, if not, why not, including the supervisor not providing promised support;
- Goals for the next year being set through discussions between the supervisor and the staffer; and,
- Not reducing the assessment results to a single number used to rank staff somewhat mechanically as a basis for determining promotions, dismissals, bonuses and salary increments.
On the other hand, it seems probable that there has been a deficit of ‘ongoing feedback’ at many think tanks. A natural time for ‘check-in conversations’ is at the completion of a project, or at a milestone in one with an extended implementation period. My guess is that there are fewer such consultations than optimal and that, in these discussions, managers tend to focus on perceived staff shortcomings rather than lead more balanced exchanges. There are natural mentors out there to be sure, but my experience suggests they tend to be a minority everywhere.
This post explores the adjustments actually being made by think tanks in line with the trends cited above. It draws on information provided by the four think tanks I view as well-managed that have generously served as the panel for my series of posts at On Think Tanks on management topics. They include three U.S. think tanks: NORC at the University of Chicago, the Urban Institute, and the Results for Development Institute. The Institute for Urban Economics in Moscow rounds out the panel.
Two of these are making major revisions in their staff management protocols somewhat in line with the changes mentioned above. Both are in the midst of incremental, multi-year processes of introducing change. These adjustments have been widely discussed internally and some outside guidance sought. The two have sufficient experience to be able to share it with those thinking about making changes following the ‘new paradigm.’
As usual in these blogs, I do not name the institute associated with a specific practice. Hence, I will refer to these two as Adaptor 1 and Adaptor 2, respectively. ‘Adaptor’ rather than ‘adopter’ is used because these are not straightforward adoptions of the protocol.
The other two are referred to as Traditionalist 1 and Traditionalist 2. While continuing to refine their protocols in recent years, the traditionalists have not seen the merit for themselves in the more thorough-going changes embodied in the new paradigm, although one has made modest adjustments in that direction. Clearly, these four do not constitute a ‘representative sample’ in any sense. Their distribution does, however, afford us the opportunity to explore both sides of the merits of adopting the new protocol, which has not received universal praise.
The discussion proceeds under five headings. The first looks at the motivations for the sample organizations to adopt the new protocol or not. The second explores how ‘more frequent feedback’ is being structured. The third considers the adoption of more steps in career ladders. And the fourth provides additional discussion on the most significant change observed. The final section concludes.
This document is a part of the new OTT Best Practices Series. If you would like to submit a piece on best practices for research and policy institutes, please get in touch.