Questions About The MTA’s Satisfaction Survey
The following op-ed is by Allan Rosen, a Manhattan Beach resident and former Director of MTA/NYC Transit Bus Planning (1981). For a complete list of his contributions to Sheepshead Bites, which includes many articles about the bus cuts, MTA and DOT, click here.
The MTA recently released the results of its first system-wide Customer Satisfaction Survey. Several years ago, a survey of bus passengers gave the system a grade of C. For this survey, a ten-point scale was used, and respondents were then classified as satisfied, dissatisfied, very satisfied, or very dissatisfied. This is one of the most biased surveys I have ever seen.
A survey needs to be objective, and its methodology clear, for it to be worthwhile and believable. The MTA definitely had an agenda: to show that most people are satisfied with the system, that the MTA is actively addressing problems where significant numbers of people are not satisfied, or that a problem is beyond its control, e.g., dismissing bus timeliness as a function of traffic conditions.
You do not ask a question about which issues are important to bus riders by prefacing it with, “Other than service cutbacks and fares, what else is important to you?”
Why is the MTA afraid of publishing results showing the percent of passengers who are not satisfied regarding these two issues, the ones that are foremost on most riders’ minds?
Why are there so few service-related questions for buses? Where are the bus results for the questions regarding service frequency and travel times? Were the results not favorable enough, or not important enough, to publish? Yet a question regarding ease of fare payment on the bus is reported on. Is that more important, or was it something that the MTA knew beforehand was not a problem?
Only two service-related questions are shown in the tables, regarding availability and reliability of bus service, yet there are 16 questions relating to every other aspect of the system, such as comfort, cleanliness, security, appearance, information, courtesy, and maintenance. There is nothing wrong with those questions, but if the MTA is going to go into such detail about those issues (like asking how important it is that subway conductors wear their uniforms), it also needs to go into the same amount of detail regarding service. For example: to what degree does the number of transfers or fares needed to make a trip deter riders from using the system more frequently?
Bus routing needs to be improved in many parts of the City, but riders were not asked whether they are satisfied with the directness of the routes they take. They were asked about the convenience of routes, but most people would interpret that merely as the walking distance to a bus route.
The methodology is not fully described. What is the difference between the Citywide Survey and the Customer Priority Survey? When were the surveys undertaken, and during what time of day? If undertaken only in the evening, wouldn’t that discriminate against those who work during the late afternoons and evenings and use the system during non-rush daytime hours?
One chart is ambiguous, showing two sets of numbers for the same two questions (“How fast the bus gets you where you are going” and “reliability”) without an explanation of the difference. Apparently a needed footnote is missing; how many other errors were made?
Only 888 bus riders were surveyed. Not everyone answered every question, and not everyone was asked the same set of questions, so it is important to see the raw numbers as well as the percentages. The number of passengers responding for each route, which would show whether every part of the City was equally represented, is also missing, although the question was asked. (There are nearly as many bus routes as there are passengers surveyed.) Someone whose route was recently discontinued and who was forced onto a less convenient route cannot answer a question about service on his current route being better or worse than in the past year. Were former B4 passengers who now have a long walk to the B36 surveyed?
The rating system leaves much to be desired. Can someone answering questions over the telephone comprehend a 10-point rating system? A letter grade (A, B, C, D, or F) or a 1-to-5 rating would have been easier to understand. By using a ten-point system and declaring 6 and above as satisfactory, as the MTA has done, you are really saying that if something works well a little more than half the time, you are doing a satisfactory job.
That, in essence, is what the MTA has done here when it states that about 75 percent of subway passengers are satisfied and about 62 percent of bus passengers are satisfied. If ratings 5 and 6 had been defined as neither satisfied nor dissatisfied, as they should have been, rather than 5 being dissatisfied and 6 being satisfied, then the number of satisfied bus riders would be closer to 50 percent, which I believe would have been a fair assessment of overall service, maintenance and other factors. And when I went to school, a 50 percent or a 62 percent was still considered a failing grade.
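The arithmetic behind this point can be sketched with a short calculation. The ratings distribution below is entirely invented for illustration (the MTA has not published the raw distribution); it simply shows how moving the "satisfied" cutoff from 6 to 7 on the same responses changes the headline percentage:

```python
# Hypothetical distribution of 888 responses on the 10-point scale.
# These counts are invented for illustration, not MTA data.
counts = {1: 40, 2: 50, 3: 70, 4: 90, 5: 110,
          6: 120, 7: 150, 8: 130, 9: 80, 10: 48}
total = sum(counts.values())  # 888, matching the number of bus riders surveyed

# The MTA's grouping: a rating of 6 or above counts as "satisfied".
mta_satisfied = sum(n for r, n in counts.items() if r >= 6) / total

# Alternative grouping: treat 5 and 6 as neither satisfied nor
# dissatisfied, so only 7 and above counts as "satisfied".
alt_satisfied = sum(n for r, n in counts.items() if r >= 7) / total

print(f"Satisfied, MTA cutoff (6+):        {mta_satisfied:.0%}")
print(f"Satisfied, neutral 5-6 (7+ only):  {alt_satisfied:.0%}")
```

With this invented distribution, the same responses yield roughly 59 percent satisfied under the MTA's cutoff but only about 46 percent under the alternative grouping, which is the kind of swing from the low 60s toward 50 percent described above.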