Golf GPS Device Reviews – How to Choose the Best Golf GPS Device

With the USGA’s ruling in Decision 14-3/0.5, golf GPS devices that measure distance only (as opposed to other conditions such as the slope of the ground) may now be permitted by a Local Rule. Most courses have adopted such a rule, but if you are competing in a tournament, you should check whether a golf GPS device (or, for that matter, a laser rangefinder) may be used. Note that the USGA Handicap System requires players to post scores when a GPS device that measures distance only has been used, regardless of whether a Local Rule permitting such devices has been adopted (Rule 14-3 and Decision 5-1f/2 of “The USGA Handicap System” manual).

We followed up with the USGA on the question of devices that use shot-tracking functions and calculate the average distance per club. The response we received stated that these devices are permitted if they only provide analyses of information collected during prior rounds. A device is not permitted if it provides a player with analysis of data collected during the current round (such as the OnPar, which determines the average club distance and the percentage of times the drive lands in certain parts of the fairway or rough), and use of such a prohibited device is a breach of Rule 14-3. [Editor’s Note: OnPar’s web site now states that they’ve received a decision from the USGA that the device is permitted under the Rules of Golf.]

The point of comparative performance evaluation is simple: it permits the evaluator to control for the impact of unmeasured environmental variables on tangible measures of performance. For example, instead of worrying about the optimal measure or formula for how well a division performed in absolute terms, one looks at how well it performed relative to other firms or divisions engaged in similar trade and facing similar environmental uncertainties. A salesman is compensated not for the absolute level of his sales, but for how his sales compare with those of other salesmen in the organization.

As long as the unmeasured exogenous factors influencing performance – such as weather, interest rates, or the odds of a key supplier or customer going bankrupt – affect the individuals or groups in the comparison set in a similar manner, measuring the relative performance of the affected individuals or groups provides a (statistically) better measure of the individuals’ or groups’ levels or qualities of effort.
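The statistical claim here can be illustrated with a small simulation. In this hypothetical sketch (the effort levels, shock sizes, and variable names are all illustrative assumptions, not from the text), each salesman's observed output mixes his true effort with a common environmental shock shared by the whole comparison set plus some idiosyncratic noise. Measuring output relative to the group mean cancels the common shock, so the relative measure tracks effort much more tightly than the absolute one:

```python
import random
import statistics

random.seed(0)

# Assumed, purely illustrative true effort levels for five salesmen.
efforts = [50, 55, 60, 45, 52]

def observed_sales(efforts):
    """One period of observed sales: effort + common shock + individual noise."""
    shock = random.gauss(0, 20)                      # same for everyone (weather, rates...)
    return [e + shock + random.gauss(0, 2) for e in efforts]

trials = [observed_sales(efforts) for _ in range(1000)]

# Absolute measure for the first salesman: swings with the common shock.
abs_measure = [t[0] for t in trials]
# Relative measure: his sales minus the group mean; the shock cancels out.
rel_measure = [t[0] - statistics.mean(t) for t in trials]

print(statistics.stdev(abs_measure))   # large: dominated by the common shock
print(statistics.stdev(rel_measure))   # small: mostly idiosyncratic noise
```

With a common shock far noisier than individual variation, the relative measure's spread is an order of magnitude smaller, which is the sense in which it is a "(statistically) better" signal of effort. The cancellation works only because the shock hits every member of the comparison set identically, which is exactly the condition the paragraph above imposes.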

The performance measures that are compared should be similarly affected by uncontrollable environmental variables, so that relative performance reflects to the greatest extent possible things under the control of the individuals involved. Suppose, for instance, that regional sales managers are evaluated comparatively and that their company manages its advertising nationally, placing ads on national network TV shows and in national magazines and newspapers. If advertising is important and if sales regions vary in how well those ads reach potential customers, then sales managers in some districts may have a plausible case in claiming that they are being inappropriately compared to sales managers who are fortunate to have been assigned to districts where the company’s advertising has better penetration. In such a case, the company may find itself facing pressure to decentralize its advertising, so that each sales manager can have control over one of the factors that is likely to influence his or her comparative standing with other managers.

Or consider the case of a firm that seeks to measure the performance of its research staff. There may be good reasons not to evaluate individual members of the research staff relative to the performance of their closest peers. The firm might therefore choose instead to measure performance relative to members of other research teams within the organization, or even relative to the performance of the research staffs of competitors. The problem with this is that, in most cases, the more diffuse the comparison set, the less control there is for environmental factors. If, say, a measure of comparison is the number of patents gained, the specific research agendas of the different research staffs, set to some extent by managerial decree, may become important: Group X is “penalized” because it works in an area that is less likely to lead to many patents. This may lead to research groups arguing for a greater measure of autonomy: “If our performance is going to be measured on the basis of the patents we gain,” will go the argument, “then we should control what we work on.”
