Several states use judicial performance evaluation (JPE) programs to periodically evaluate state judges. In all states that use JPE, evaluation results are used to promote the development and professional growth of the evaluated judge, and to develop training programs for the judiciary more generally. In many states, JPE is also used to provide information to those charged with determining whether a judge should stay on the bench. In states where judges face retention elections, for example, JPE results are often communicated to voters in the weeks preceding the election. And in states in which the legislature or a commission decides whether the judge should be retained, JPE results are typically timed to give valuable information to the decisionmaker about each judge's strengths and weaknesses.
JPE has never been used to determine judicial salaries or benefits, and with good reason: an independent judiciary should not feel that remuneration is tied to specific outcomes. This has always seemed like such a given that I never found it necessary to mention it when discussing JPE programs. But this article about a proposed salary hike for state judges in Arkansas, which felt the need to explain that "There isn't a performance evaluation process for judges and prosecutors in Arkansas," made me realize that the general public's perception of JPE's purpose may be quite different.
In many industries, an employee's salary, benefits, and bonuses are tied to meeting or exceeding certain standards during an annual performance review. Within the court system there exist measurable criteria, such as time to disposition and reversal rate, that could be used to compare judicial performance and to adjust salary and bonuses accordingly. These criteria are already used by many JPE programs, typically in connection with survey data, interviews, and other information, to give a comprehensive sense of the judge's performance. Perhaps these docket data could be isolated and used to identify the "best" judges, in the sense of those who resolve their cases more quickly and accurately than their peers, who would then be rewarded with higher compensation.
Yet connecting judicial salaries and bonuses to such measures is rife with problems. Even within the same court, the volume and complexity of dockets can vary substantially. Because the docket during an evaluation period differs at least somewhat for each judge, there is no equal baseline for comparison. Moreover, even if judges could be fairly compared on the speed of case clearance (for example), linking such measures to compensation would lead a rational judge to sacrifice due process guarantees to secure a quick end to the case. Settlements, plea bargains, and dispositive motion practice would be heavily favored. Cases that should go to trial would not. Issues that deserve a hearing would not receive one. I cannot imagine that the public would favor this approach.
For these reasons (among others), I do not expect any state to seriously consider tying judicial salary to JPE. But the Arkansas story does suggest that state courts and state legislatures need to do a better job of explaining what JPE is designed to address (communication skills, legal knowledge, professional demeanor, administrative skills, and impartiality), and what it is not (salary and bonuses, judicial discipline, and case outcomes). If JPE is seen as a hybrid type of performance review, aiming to preserve judicial independence and impartiality while informing decisionmakers and the public, the programs are likely to receive better support and elicit less confusion.