Chapter 15

EVALUATING TECHNOLOGY DEPLOYMENT AT THE STATE LEVEL: METHODS, RESULTS AND INSIGHTS FROM THE GEORGIA MANUFACTURING EXTENSION ALLIANCE

by

Philip Shapira 1
School of Public Policy, Georgia Institute of Technology, Atlanta

and

Jan Youtie
Georgia Tech Economic Development Institute, Atlanta

Introduction

Much greater consideration has been given in recent years to the role of technology deployment in promoting business competitiveness and regional economic development (OECD, 1997). This is certainly the case in the United States where, despite evident strengths in developing technological innovations and creating new businesses, there has been increased concern about the performance of existing industries and mature enterprises in effectively applying available technologies and improved business practices. Attention has particularly focused on America’s 380 000 small and medium-sized manufacturers employing fewer than 500 employees. Successive studies have documented that many of these smaller firms find it difficult to introduce modern manufacturing technologies and methods (for example: Office of Technology Assessment, 1990; Rosenfeld, 1992; National Research Council, 1993). Smaller firms frequently lack expertise, time, money and confidence to upgrade their current manufacturing operations, resulting in underinvestment in more productive technologies and missed opportunities to improve product performance, workforce training, quality and waste reduction. At the same time, private consultants, equipment vendors, universities and other assistance sources often overlook or cannot economically serve the particular needs of smaller firms; potential suppliers of information and assistance also face learning costs, may lack expertise or face other barriers in promoting the diffusion of rewarding technologies. System-level factors, such as lack of standardization, regulatory impediments, weaknesses in financial mechanisms and poorly organised interfirm relationships, can also constrain the pace of technological diffusion and investment (Shapira, Roessner and Barke, 1995).

While federal and state governments have for several decades sponsored various programmes to aid small firms and promote technology transfer, only in the last few years has a more consistent nation-wide system of technology assistance and business service providers emerged. The centrepiece of this network is the Manufacturing Extension Partnership (MEP) – a collaborative initiative between federal and state governments that also involves non-profit organisations, academic institutions and industry groups. The National Institute of Standards and Technology (NIST) of the US Department of Commerce is the federal sponsor (National Institute of Standards and Technology, 1997). Between 1992 and 1997, the MEP grew from a handful of federally sponsored manufacturing technology centres to a network of more than 70 centres in all 50 states. These centres work with over 2 000 affiliated public and private organisations to deliver or support the delivery of services to small and mid-sized firms. In fiscal year 1997, federal funding for the MEP of about $95 million was matched by at least a further $100 million of state and some private funds and revenues. Current service loads are around 20 000 firms assisted each year through diverse services, including assistance with quality, business systems, manufacturing technologies, products and processes, training, marketing, environmental performance and electronic commerce.

With the growth of investment in new service partnerships to promote the deployment of technology and improved business practices by industry has come a corresponding increase in evaluation. To an important extent, there has been a strong “internal” motivation to evaluate the MEP, not only to assess the economic and business impacts of programme interventions but also to promote more effective service delivery. MEP managers have been concerned to demonstrate the value of their programme to public sponsors and private customers and thus seek validation of impacts. Additionally, the industrial orientation of the MEP, coupled with frequent external reviews and pressures to generate significant fee income, encourages management attention to measuring programme performance and ensuring that centres deliver services efficiently and in ways that are responsive to changing industry “customer” needs. At the same time, the MEP has grown up during a period of renewed interest in governmental reform in the United States. While promoting the formation of new public-private partnerships (of which the MEP is a leading example), this has also greatly increased “external” pressure on all programmes to measure and evaluate their quality and performance (Shapira, Kingsley and Youtie, 1997).

This paper examines the experience of the Georgia Manufacturing Extension Alliance (GMEA) in implementing an evaluation of its technology deployment services. As part of the US Manufacturing Extension Partnership, GMEA provides assistance to manufacturers to resolve industrial and business problems and upgrade technology, training and business performance, focusing primarily on firms in the state of Georgia. The programme has established an evaluation component along with other assessment and review mechanisms. Several evaluation methods are employed, including customer surveys, economic analyses of benefits and costs, controlled studies and logic-based case studies. The paper examines the strengths and weaknesses of these different approaches, reviews the insights each method offers, and how the resulting evaluative information is used.

Development of the GMEA programme

The Georgia Manufacturing Extension Alliance (GMEA) provides industrial extension and technology deployment services to the state’s manufacturers. Services are focused on the small and medium-sized manufacturing establishments with fewer than 500 employees that comprise 98 per cent of the state’s 10 000+ manufacturers. The lead organisation in GMEA is the Georgia Institute of Technology (Georgia Tech) Economic Development Institute, which has a 35-year history of industrial extension service provision (Clifton et al., 1989).

strategic management assistance, energy, and environmental services. From February 1994 to December 1996, GMEA served over 2 100 companies, equivalent to 21 per cent of all manufacturers in the state. Included here were 39 per cent of Georgia manufacturers with 20 to 499 employees. GMEA customers were served through 2 647 informal engagements, technical projects and assessments; 11 network group service projects (usually involving quality or labour force development); and 240 workshops and seminars (see Figure 1). Roughly 36 per cent of closed projects involved referrals to other organisations or private-sector consultants and vendors.

Figure 1. Project types and methods of service delivery, GMEA, 1994-96

[Figure 1 contains two bar charts. The first, “GMEA Project Types, 1994-96”, plots the number of projects by type: materials testing, management, energy, product development, marketing, computers, human resources, plant layout, process improvement, environmental and quality. The second, “GMEA Customers, by Service Delivery Method”, plots the number of companies served through informal engagements, formal assessments, projects, network groups and workshops/seminars.]

Evaluation plan

Georgia Tech did not formally evaluate its predecessor industrial extension services, but – with the development of GMEA in 1994 – an explicit evaluation element was built into the programme. The programme’s evaluation activities are designed to meet three main aims:

◊ provide consistent feedback on the effectiveness, targeting and impacts of GMEA’s services;

◊ support systematic learning about how services are being delivered and which services and approaches work best and why, so as to assist the ongoing improvement and management of programme services;

◊ furnish evaluative information to GMEA’s major stakeholders and sponsors, including the state of Georgia and NIST.

GMEA’s evaluation element is under the direction of this paper’s authors. By design, it combines an “external” faculty member (from a separate academic unit who is not employed or supervised by the programme) and an “internal” senior researcher (within the programme’s home institute, who does not provide direct services to firms but who has access to direct service data). To date, approximately 3 per cent of the programme’s federal funding has been annually allocated to evaluation.

To develop evaluative procedures, we developed a model of the programme, including delineating programme inputs, work processes and expected intermediate and final outcomes (Figure 2). This model drew on efforts by other evaluators to understand the logic and likely effects of industrial extension services (see, in particular, Eric Oldsman’s work in Nexus Associates, 1994).

Figure 2. Programme logic model, Georgia Manufacturing Extension Alliance

[Figure 2 is a flow diagram linking programme outreach; customer participation (customer profile, customer inputs); programme intervention (staff and other resource inputs and outputs; services and assistance provided); customer valuation; customer intermediate actions (facilities, equipment, new methods, training, management change, etc.); business outcomes (sales, investment, operations, etc.); and development outcomes (jobs, taxes, linkages, etc.). Non-customers serve as controls.]

◊ Customer progress, longitudinal benchmarking and non-customer controls. GMEA maintains a progress tracking system. In 1994, a benchmark survey was conducted of manufacturers in the state with ten or more employees (Youtie and Shapira, 1995). In 1995, a one-year follow-up survey of GMEA customers was conducted to track changes in customer business performance outcomes (e.g. sales, cost savings, investment, employment) one year after project closure. In 1996, a second benchmark survey was undertaken of manufacturers (with ten or more employees) in the state (a third benchmark survey is planned for 1998) (Youtie and Shapira, 1997). This design allows tracking of customers, industries and technology use over time. Since the benchmark surveys also go to non-customers, it is also possible to study longitudinal changes comparing customers and non-customers over time.

◊ Case studies and special studies. The evaluation team has conducted a series of case studies to provide an in-depth examination of the linkages between GMEA services and impacts on firm operations and profitability. These case studies have helped us to understand how GMEA’s services are received by firms and what factors influence how customers respond to these services. Special studies have also been undertaken on such topics as defence dependency and diffusion of ISO 9000 practices.

◊ Organisational assessments and external reviews. The evaluation team, along with GMEA management, has co-ordinated responses to MEP first-, second- and third-year review panels to provide feedback regarding programme operations and impacts.

The findings and results of these procedures have been used to produce a series of analytical and evaluative studies which are distributed or used in briefings to programme management, field staff, programme sponsors, industry advisors and customers. A World-Wide Web site is maintained (http://www.cherry.gatech.edu/mod) that allows open access to GMEA evaluation studies. In several cases, data has been shared (with individual firm confidentiality protected) with external oversight agencies and other researchers.

For several procedures, as noted above, the actual task of tracking information is devolved to GMEA’s MIS systems, with the oversight of the evaluation team. An on-line system (ProTrac) has been implemented to electronically track customers, projects and services, and customer valuations. ProTrac is a distributed MIS system, fully accessible by GMEA’s 18 regional offices. A key element in the system is the role of a data quality co-ordinator, who reviews the information entered into ProTrac for completeness and accuracy. In addition to providing operational information for GMEA management and the evaluation team, ProTrac provides information to fulfil NIST Management Information Reporting Guidelines for periodic reporting and monitoring of centre finances, activity levels, organisational linkages, staff qualifications and customer satisfaction.
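The paper does not describe ProTrac’s internal design. Purely as an illustration of the kind of record such a tracking system might hold (customer, project, services delivered, customer valuation) and of the completeness checks a data quality co-ordinator might apply before reporting, a minimal sketch follows; all type and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record types; ProTrac's actual schema is not described in the paper.

@dataclass
class ServiceEvent:
    service_type: str        # e.g. "quality", "plant layout", "energy"
    hours: float             # staff hours delivered
    regional_office: str     # one of GMEA's 18 regional offices

@dataclass
class ProjectRecord:
    customer_id: str
    project_id: str
    status: str                                   # "open" or "closed"
    services: list[ServiceEvent] = field(default_factory=list)
    customer_valuation: Optional[int] = None      # 1-5 satisfaction rating, once surveyed

def completeness_issues(rec: ProjectRecord) -> list[str]:
    """Flag missing items of the kind a data quality co-ordinator might check
    before periodic reporting."""
    issues = []
    if not rec.services:
        issues.append("no services recorded")
    if rec.status == "closed" and rec.customer_valuation is None:
        issues.append("closed project with no customer valuation")
    return issues
```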

In the following sections, we examine the results and insights from different elements of the GMEA evaluation system.

Customer surveys

Customer surveys are used to obtain feedback from customers on the quality and impact of GMEA services. The first customer survey is a post-project form sent to the company manager responsible for the project, with telephone follow-up as necessary. All firms meeting the following conditions are sent a customer evaluation survey: operating as a manufacturer; in receipt of a notable level of assistance and services (e.g. roughly eight or more hours of assistance from GMEA staff); and identified as a “closed” project. For multiple projects with the same company, more than one survey is sent in the few cases where the key contacts and the scope of work are significantly different (e.g. in different divisions or departments). Shorter programme interactions with companies, such as initial visits or informal consultations, are not formally evaluated through this procedure. In 1994, about 55 per cent of the programme’s interactions with customers were for eight hours or more (by 1996, these more lengthy interactions had grown to represent two-thirds of programme interventions). The time required for information reporting and mailing means that customers usually receive the post-project questionnaire about 30-45 days after the completion of the project. As necessary, the first questionnaire is followed by a second mailing and telephone contact. The response rate to the post-project survey procedure is relatively high – about 70 per cent.

The questionnaire asks for information on satisfaction and on the following quantitative and qualitative outcomes:

◊ Customer confirmation of services (we check our record of the service provided with that of the customer).

◊ Measures of satisfaction with the assistance and services along several dimensions (e.g. timeliness, staff knowledge).

◊ An indication of whether the customer has taken, or intends to take, any action as a result of assistance, services or recommendations provided. If the customer does not anticipate any action, we ask why not.

◊ Customer staff time commitment to the project.

◊ Whether the customer has received or anticipates business or economic impacts. We provide a check box (yes/no), and ask for dollar values if checked. The dimensions probed are:

− increase in sales;

− increased capital spending on plant, equipment or other capital items;

− avoidance of capital spending on plant, equipment or other capital items;

− reductions in amount of inventory carried;

− savings in labour, materials, energy or other costs;

− creating new jobs;

− saving current jobs;

◊ Other impacts, such as introduction of new methods, technologies, processes, software, training, etc.

◊ Written comments about the programme’s services.

◊ Whether the customer desires additional follow-on assistance.

Roughly 540 surveys were received and processed up to 31 December 1996 (Table 1). These customer satisfaction surveys show an overall mean satisfaction rating of 4.47 on a five-point scale (with one being poor and five excellent). Timeliness and staff knowledge and experience received particularly high ratings. Referrals to other organisations received a lower mean rating (3.67).

programme participation results in very large positive impacts, we find some evidence that immediate post-project measurements underestimate the scale of the ensuing benefits (Table 2).

Table 2. Comparison of business reported impacts of GMEA project assistance using post-project and one-year follow-up surveys

                                   Post-project survey              One-year follow-up
Impact categories               Number  Per cent  Value ($'000)  Number  Per cent  Value ($'000)
Customer action
  Taking action                   64      85.3        -            51      68.0        -
  On hold                        n.a.       -         -             9      20.0        -
  Not taking action               10      13.3        -            15      12.0        -
Sales increase (annualised)       23      30.7        -            13      17.3        -
  Mean                             -        -      1 311.5          -        -      2 689.
  Adjusted mean^1                  -        -        170.8          -        -        206.
  Median                           -        -        100.0          -        -         80.
Operating costs (annualised)      34      45.3        -            30      40.0        -
  Mean                             -        -         64.4          -        -        124.
  Adjusted mean^2                  -        -         n.a.          -        -         17.
  Median                           -        -         50.0          -        -         20.
New capital expenditures          24      32.0        -            21      28.0        -
  Mean                             -        -        272.2          -        -        407.
  Adjusted mean^3                  -        -         57.1          -        -        244.
  Capped mean^4                    -        -        116.0          -        -        207.
  Capped adjusted mean^4           -        -         57.1          -        -        165.
  Median                           -        -         25.0          -        -         87.
Capital expenditures avoided      13      17.3        -             7       9.3        -
  Mean                             -        -         83.0          -        -         74.
  Median                           -        -         50.0          -        -         35.

  1. Adjusted mean excludes $30 million sales impact reported for one project which was more than three standard deviations from the mean.
  2. Adjusted mean excludes $2.1 million operating cost savings reported for one project which was more than three standard deviations from the mean.
  3. Adjusted mean excludes $3.5 million capital expenditure reported for one project which was more than three standard deviations from the mean.
  4. Capital expenditures capped at $1 million.
Source: Analysis of post-project surveys of 75 GMEA business customers with projects closed in 1994 who responded to the one-year follow-up survey. Post-project survey conducted 30-45 days after project closed. One-year follow-up survey conducted in July-August 1995.
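The adjustments described in the table notes reduce to two simple rules: exclude any single reported value lying more than three standard deviations from the mean before re-averaging (notes 1-3), and cap individual capital expenditure values at $1 million (note 4). A minimal sketch of these calculations (illustrative only; the underlying survey data are not reproduced here):

```python
from statistics import mean, stdev

def adjusted_mean(values):
    """Mean after excluding values more than three standard deviations from the mean
    (the rule described in notes 1-3 of Table 2)."""
    if len(values) < 2:
        return mean(values)
    m, s = mean(values), stdev(values)
    kept = [v for v in values if abs(v - m) <= 3 * s]
    return mean(kept) if kept else float("nan")

def capped_mean(values, cap=1_000_000):
    """Mean with individual values capped at $1 million (note 4 of Table 2)."""
    return mean(min(v, cap) for v in values)
```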

The follow-up survey indicated that, one year after project completion, 68 per cent of firms had actually taken action on the programme’s recommendations. However, we also found that 17 per cent of the projects were still on hold (i.e. the firm was still considering whether to implement project recommendations). It appears that, one year later, customers had not taken action to the extent they had anticipated 30-45 days after project closure. This is due mainly to decisions to put some projects on hold, rather than to definite decisions not to act on project recommendations. If some of the projects reported to be on hold are eventually implemented, the gap between the follow-up survey rate of action and the post-project survey expectation will narrow.

As part of the 1996 Georgia Manufacturing Survey, we obtained a further round of information on longer-run project impacts. In this survey, we asked a broader set of questions, to include both economic and non-economic factors. Customers who had completed projects 12 or more months prior to the survey point reported that involvement with GMEA had resulted in significant effects in areas that are hard to quantify, including existing process improvement (60 per cent), improved management skills (over 55 per cent) and greater attention to quality (about 45 per cent).

Project impact analysis

A further analysis of the customer evaluation surveys provides information on the differential impacts of particular types of projects. Drawing on aggregated customer reports of whether or not an impact is expected in particular categories, we can estimate the likelihood of an impact by project type. Table 3 shows that product development and marketing projects are 60 per cent more likely than the average project to increase sales. Energy projects are most likely to lead to cost savings, and plant layout and environmental projects tend to help companies avoid capital spending. Marketing projects have the strongest link to job creation, and management and human resource projects have the strongest link to job retention. Quality projects do not rate highly in any impact area, although they do require the greatest commitment of customer staff time.

Table 3. GMEA project types by relative impact
Actual and expected likelihood, reported by customers 1

Project type          Sales     Capital    Capital    Inventory  Cost     New jobs  Jobs    Mean customer
                      increase  spending   spending   savings    savings  created   saved   time (days)
                                increased  avoided
Computers               0.90      1.41       1.12       2.55       1.21     1.02     1.22     1.
Plant layout            1.18      1.20       1.57       1.23       1.22     1.34     1.28     0.
Environmental           0.35      0.86       1.96       0.30       0.78     0.38     0.78     0.
Human resources         0.80      0.75       0.33       1.29       1.18     1.10     1.54     0.
Marketing               1.66      0.65       0.43       0.21       0.07     2.20     0.80     0.
Materials testing       0.65      0.81       0.80       0.26       0.73     0.81     0.50     0.
Management              1.37      1.15       0.41       2.17       1.10     0.85     2.27     0.
Process improvement     1.24      1.37       1.21       1.18       1.07     0.96     0.80     1.
Energy                  0.27      1.34       0.19       0.36       1.59     0.34     0.35     0.
Product development     1.64      0.87       1.24       0.35       0.73     1.18     0.67     1.
Quality                 1.09      0.67       0.65       1.09       1.05     1.07     0.87     1.

Note: Index: 1.00 = impact by project type as a ratio of average impact by project type (column). A ratio of greater than 1 means above-average impact; a ratio of less than 1 means below-average impact.
Source: Georgia Manufacturing Extension Alliance, Customer Evaluation of Service Surveys, 1 February 1994 - 31 December 1996, based on 538 surveys.
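The index reported in Table 3 can be read as a ratio of shares: the proportion of customers with a given project type reporting (or expecting) a particular impact, divided by the corresponding proportion across all project types. The table note leaves the exact averaging ambiguous; the sketch below uses the overall share across all surveys as one plausible reading, and assumes responses are available as (project_type, impact_category, reported) records, a hypothetical data layout.

```python
from collections import defaultdict

def relative_impact_index(responses):
    """responses: iterable of (project_type, impact_category, reported: bool) records.
    Returns index[(project_type, impact_category)] = share for that project type divided
    by the overall share for that impact category, so 1.00 is average likelihood,
    above 1 is above average and below 1 is below average (cf. the note to Table 3)."""
    count = defaultdict(int)        # surveys per (project type, impact category)
    hits = defaultdict(int)         # positive reports per (project type, impact category)
    cat_count = defaultdict(int)    # surveys per impact category, all project types
    cat_hits = defaultdict(int)
    for ptype, category, reported in responses:
        count[(ptype, category)] += 1
        cat_count[category] += 1
        if reported:
            hits[(ptype, category)] += 1
            cat_hits[category] += 1
    index = {}
    for (ptype, category), n in count.items():
        type_share = hits[(ptype, category)] / n
        overall_share = cat_hits[category] / cat_count[category]
        index[(ptype, category)] = type_share / overall_share if overall_share else float("nan")
    return index
```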

Cost-benefit analysis

As manufacturing extension and related technology transfer programmes have increased in scale, there has been increased interest in trying to assess not only the outcomes for individual firms, but also the economic and regional impacts and returns on the public resources invested. In related fields of technology policy, efforts have been made to assess the range of benefits and costs over time associated with programme intervention using benefit cost analysis (Feller and Anderson, 1994; Roessner et al. , 1996). These efforts, as Feller and Anderson (1994) note, “must be done explicitly, with full specification of benefits and costs actually estimated, and theoretical and empirical context provided for each estimate.” To date, few benefit-cost estimations of industrial extension

Table 4. GMEA benefit-cost analysis
Treatment of benefit-cost elements

Issue                                                  Treatment
Anticipated returns/investments                        • Use of median and adjusted mean to develop range
                                                       • One-year follow-up survey suggests little difference overall
“Zero-sum outcome” (geographical shift of benefits)    • Adjustment based on survey – Georgia manufacturing sales volume by market
Value-added adjustment                                 • Sales involve expenses, so use value-added measure. Sales adjusted by ratio of value-added to shipments from 1992 Census of Manufactures
Treatment of returns and investments over time         • One-time impacts (capital spending). Multi-year impacts (sales, operating cost savings, inventory): 3-year time frame, with declining impacts in years 2 and 3 (to zero in year 4)
Private staff commitments                              • Estimated burdened cost of private management hours
Public benefits and costs                              • Include tax benefits (estimated using an input-output model), but avoid double-counting; public costs include federal overhead
Qualitative benefits                                   • Quantitative benefit focus
Multipliers and indirect effects                       • Only first round of benefits counted – secondary multiplier/indirect effects not included
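To illustrate the time treatment summarised in Table 4, the sketch below spreads a project’s multi-year impacts over a three-year window with declining weights (falling to zero in year 4), counts one-time impacts once, and converts a reported sales gain to value added using an industry ratio of value added to shipments. The decline weights and any ratio passed in are placeholders; the study’s actual parameters are not reported in this chapter.

```python
def impact_stream(annual_impacts, one_time_impacts, decline=(1.0, 0.6, 0.3)):
    """Spread project impacts over time following the treatment in Table 4: multi-year
    impacts (sales, operating cost savings, inventory) run over a three-year window with
    declining weights (zero from year 4); one-time impacts (capital spending) are counted
    once, in year 1. The decline weights here are placeholders, not the study's values."""
    stream = []
    for year, weight in enumerate(decline):
        value = weight * annual_impacts
        if year == 0:
            value += one_time_impacts
        stream.append(value)
    return stream

def sales_to_value_added(sales_gain, value_added_to_shipments_ratio):
    """Convert a reported sales increase to value added, using an industry ratio of value
    added to shipments (the study takes these ratios from the 1992 Census of Manufactures)."""
    return sales_gain * value_added_to_shipments_ratio
```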

Results from the cost-benefit model indicate that GMEA’s industrial modernisation resources are leveraging relatively high levels of private investment which, in turn, are likely to lead to favourable and positive public and private returns over time. The estimated net public- and private-sector benefits from GMEA’s first year of services – scaled up to just over 530 projects – ranged between $10 million and $26 million. The ratio of private and public returns to private and public investment ranged between 1.2 and 2.7. Most significantly, the programme’s public investment was found to have a substantial leveraging effect on private investment: companies invested from $3.00 to $13.30 for every dollar of public expenditure. For a typical company, the estimated payback period for this private investment ranged from six to 22 months.

Controlled studies

Although Georgia Tech-assisted manufacturers report benefits, this does not necessarily prove that the results are attributable to Georgia Tech assistance. For example, unassisted firms could also have experienced these same benefits during the 1994-96 time period, suggesting that the results may have arisen from the general economic conditions of the time period. Thus, it is important to compare performance measures of Georgia Tech clients and non-clients. However, differences between client firms and non-client firms may well be explained by differences in the underlying facility employment size and industry mix; for example, the fact that Georgia Tech clients tend to be larger than non-client firms. (Larger firms usually experience lower short-term growth rates than smaller firms, although larger firms tend to be more stable over the long run.) Furthermore, simply comparing clients and non-clients fails to account for the influence of non-extension services (for example, offered by vendors and consultants), and subsequent information flows from other manufacturing firms.

To address these problems, the GMEA evaluation team used a controlled survey, sent to all Georgia manufacturers with ten or more employees, designed to assess the longer-term impacts of the programme and to allow a comparison of the performance of client and non-client manufacturers. This survey, conducted in the winter of 1996-97, examines business performance for the period 1994 to 1996 (it also asks about companies’ problems, needs and technology plans for the period through to 1998). More than 1 000 responses were received and weighted to reflect the actual distribution of manufacturers by industry and employment size. The 1996 survey refines and repeats an earlier survey administered in 1994 to all Georgia manufacturing firms with ten or more employees (Youtie and Shapira, 1997).
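The weighting described here is standard post-stratification: each respondent receives a weight equal to the ratio of its cell’s population share (by industry and employment size) to that cell’s share among respondents. A minimal sketch, assuming population counts by cell are available from an external listing of manufacturers; the paper does not spell out its exact weighting procedure.

```python
from collections import Counter

def poststratification_weights(respondent_cells, population_counts):
    """respondent_cells: list of (industry, size_class) tuples, one per respondent.
    population_counts: dict mapping (industry, size_class) to the number of plants in the state.
    Returns one weight per respondent so that the weighted sample reproduces the population
    distribution of manufacturers by industry and employment size."""
    sample_counts = Counter(respondent_cells)
    n_sample = len(respondent_cells)
    n_population = sum(population_counts.values())
    weights = []
    for cell in respondent_cells:
        population_share = population_counts[cell] / n_population
        sample_share = sample_counts[cell] / n_sample
        weights.append(population_share / sample_share)
    return weights
```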

The evaluation team used survey responses to develop a model which estimates the impact of GMEA/Georgia Tech project-related extension services on client productivity (value added per employee). Drawing on Jarmin (1997a) and Oldsman and Heye (1997),^3 we examine the growth rate in the standard value-added production function from 1994 to 1996 (logged), as a function of receiving GMEA/Georgia Tech services (in the form of projects) and an array of plant characteristics, including:

◊ facility employment growth rate 1994-96 (logged);

◊ growth rate in the percentage of employees using computers or programmable machine control on a weekly basis 1994-96 (logged);

◊ whether this is the only facility in the company (dummy variable);

◊ two-digit industry classification (dummy variable);

◊ level of employment (dummy variable);

◊ whether the facility is located in a metropolitan statistical area (dummy variable);

◊ whether the facility is located in a county with a Georgia Tech extension office (dummy variable);

◊ whether the survey respondent reported using a private consultant (dummy variable);

◊ whether the survey respondent reported using a non-Georgia Tech public service provider (dummy variable);

◊ whether the survey respondent reported participating in a co-operative activity with other firms involving design or new product development, manufacturing, training, quality assurance or marketing (dummy variable).

This model was estimated using ordinary least squares. Table 5 presents the results, which indicate that GMEA/Georgia Tech assistance can be linked to productivity growth. Over the study period, GMEA/Georgia Tech clients experienced a 0.3 per cent higher growth rate in value added per employee than non-clients. In terms of productivity, this is significant and is equivalent to a value-added increase of $366 000 to $440 000 for the average client plant, obtained by backing out what the model estimates value added per worker for the average client plant would have been had it not been a client.^4
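As an illustration of the specification described above, the sketch below fits an ordinary least squares regression of the logged 1994-96 growth in value added per employee on a client indicator and the listed plant characteristics. The column names are hypothetical, and this is a sketch of the stated specification rather than the authors’ own estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_productivity_model(df: pd.DataFrame):
    """OLS estimate of the client effect on growth in value added per employee, 1994-96.
    Assumed (hypothetical) columns: va_per_emp_94, va_per_emp_96, emp_94, emp_96,
    pct_computer_94, pct_computer_96, gmea_client, single_facility, sic2, size_class,
    metro, ext_office_county, used_consultant, used_other_public, cooperates."""
    df = df.copy()
    df["dlog_va_per_emp"] = np.log(df["va_per_emp_96"] / df["va_per_emp_94"])
    df["dlog_emp"] = np.log(df["emp_96"] / df["emp_94"])
    df["dlog_computer_use"] = np.log(df["pct_computer_96"] / df["pct_computer_94"])
    formula = (
        "dlog_va_per_emp ~ gmea_client + dlog_emp + dlog_computer_use + single_facility"
        " + C(sic2) + C(size_class) + metro + ext_office_county"
        " + used_consultant + used_other_public + cooperates"
    )
    return smf.ols(formula, data=df).fit()

# The counterfactual described in the text corresponds to predicting value added per
# employee for the average client plant with gmea_client switched off and comparing it
# with the prediction for the same plant as a client.
```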

systematic, programme-wide case study methodology developed and directed by Robert Yin, President of COSMOS Corporation, was sponsored by MEP. This more extensive case study effort was designed to “document exemplary client engagements for internal and external marketing purposes” and to “build capacity within the centres to document and disseminate exemplary engagements” (Cosmos Corporation, 1996). The studies would provide descriptions of how specific services are delivered and received (the “wiring”), as well as showing quantitative analyses (of service “inputs” and “outcomes”).

Within this MEP case study framework, GMEA has continued case studies of successful projects in order to better understand the linkages between programme assistance and customer outcomes (for an example, see Youtie, 1996). These studies show significant impacts from the GMEA cases:

◊ A product development project yielded $2 million over two years and ten new jobs.

◊ A plant layout project generated an $8 million sales increase (in which the CAD layout was used as a sales tool), as well as $50 000+ in operating savings, $750 000 in inventory savings, a 40 per cent increase in direct labour productivity, and 16 new jobs.

◊ An ISO 9000 pre-assessment audit yielded $1.6 million in total savings and $800 000 in sales retained.

◊ A product design and manufacturing layout project generated $36 000-$104 000 in cost savings and $625 000-$700 000 in increased sales.

◊ A manufacturing cost model project led to $100 000 in labour savings, $500 000 in new sales and a 5 per cent increase in profitability (the highest increase in the company’s history).

In each case, the companies learned the value of adopting new technologies and processes, upgrading employee skills and seeking outside assistance.

Table 6. External reviews of GMEA, 1994-97

Review                               Panel composition                     Recommendation
Review of TRP proposal, 1994         External agency review                Two-year funding of GMEA approved
First year review, 1995              NIST internal staff review            Recommended that TRP funding be continued
Second year/rollover review, 1996    NIST panel, with external reviewers   Recommended rollover into MEP status; recommended strengthening of GMEA advisory board
Third year review, 1997              NIST panel, with external reviewers   Recommended continuation of funding for three additional years; recommended attention to strategic planning and financial planning

Organisational assessments and external reviews

GMEA has been subject to organisational assessments and external reviews, as part of which GMEA evaluation analysis has been used to provide information on programme performance. There have been several assessments by expert panels and oversight agencies of MEP programme centres, including GMEA. MEP conducted annual reviews of GMEA operations and results for the first three years after GMEA joined the MEP programme. The reviews required GMEA and other centres to prepare written materials and reports and respond to questions by panel members. The panels examined centre results, planning and vision, staff quality, management of resources/budget, the continuous improvement programme and performance in meeting programme goals, and made a recommendation about whether federal funding should be continued for a further three years. Overall, GMEA has been reviewed favourably and continued funding has been approved. But there have been recommendations to strengthen strategic planning, the role of an advisory board in providing industry input, co-ordination with public and private organisations and quality of referrals, budgeting and financial planning in response to reduced levels of federal funding, and marketing (Table 6).

Issues in the use of evaluative information and analysis

We have discussed the methods and results of several different approaches used to provide evaluative information about GMEA programme performance and impacts. While the various approaches indicate that, generally, the programme appears to have favourable impacts, there are significant contrasts in terms of detailed findings, the reliability of estimates, the availability of controls and time horizons.

In the GMEA evaluation, a mix of quantitative and qualitative methods is used, but neither type of method is clearly superior. While it is important to quantify programme impacts and we take care to qualify and verify numerical estimates, it is apparent that companies usually find it rather difficult to estimate the dollar value of programme services. Some technology deployment and industrial extension services (such as reducing energy use or materials wastage) have immediate and quantifiable benefits. But other services, including interfirm networking, quality assistance and labour force training, have impacts that accrue over the longer term upon which it is hard to place a dollar value. Requests for dollar-denominated impacts are rarely answered completely by firms (we note that in our post-project survey, many more customers check the “yes” box than subsequently fill in a dollar value, suggesting that firms believe there is an economic impact – even though they cannot provide a number). As our one-year follow-up demonstrated, the time elapsed since project completion affects how companies report benefits and costs and, where estimates can be made, there is frequently a wide margin of error. Although, when aggregated, “bottom-line” numbers can be derived, care needs to be taken in associating these numbers with a higher degree of accuracy than the underlying data collection realities allow.^5

There are also differences in the usefulness of different evaluation approaches to programme managers, federal and state sponsors and other interested parties. Among professional evaluators, the sine qua non is usually the sophisticated, controlled study (preferably with random assignment, although that is often hard to achieve). However, for other audiences, we have observed that there is no direct correlation between the usefulness of an evaluation method and that method’s degree of sophistication or even use of controls. Whether as professional evaluators we like it or not, simple methods are often influential. This is evident at the state policy-making and funding level, where the demand for complex evaluation techniques is relatively weak. It is also true at the federal level, where business testimonials and case examples (coupled with targeted lobbying) can go a long way in securing funding. Business testimonials are more easily understood, of course – although, arguably to their credit, there is at least some “street wisdom” among decision makers which recognises the difficulties of quantifying the impacts of technology deployment programmes. Similarly, although programme managers like to receive studies that give bottom-line figures (especially if the results shown are positive), those results are not always easily translated into management actions.

In understanding these issues, it is helpful to highlight the two essential purposes for which evaluation analyses can be used. The first is programme justification and rationalisation. Here, the

managers seeking to improve performance, the management information system provides data critical to understanding what the programme is doing and to maintaining its timeliness and quality.

From the view of the federal sponsor, GMEA’s surveys of customers are significantly discounted as a programme justification device. Measurements of satisfaction are deemed to be a programme-level concern, with funding decisions being made on the basis of demonstrated economic impacts, as opposed to whether the customer firms are happy. This is reasonable, since the MEP service is subsidised, which means that firms may be more easily satisfied than if they had to bear the full market cost (a cost which many firms would be unwilling or unable to afford). The federal sponsor also discounts “expected” (as opposed to “actual”) impacts. The lack of a control group is a further concern (although NIST’s own eight- to ten-month follow-up survey, conducted by the Census Bureau, does not have a control group). At the same time, client surveys have proven to be useful at the state level. Programme managers report showing completed forms to elected officials. The fact that the survey forms are completed in a customer’s own handwriting (or typing) gives them greater weight than aggregated numbers in a table, we are told. For programme management and improvement, the post-project customer surveys are also valued. Programme managers want to keep a “real-time” track of customer satisfaction. In particular, they want to know when and where there are problems, so that these can be addressed.

The analysis of relative programme impacts, by different project types, is generally too specific to be used in discussions of programme justification, whether at federal or state levels. However, it has attracted considerable attention from programme management and field staff in the context of how to better manage the programme and improve its net impacts. Within GMEA, it has prompted discussion about allocating more resources to project types, such as product development, that may generate larger effects on new sales and thereby jobs. At the national level, within the MEP, this analysis has been coupled with other evidence about the effects of more substantive and strategic interventions ( vs. quick, easy to do, but not necessarily fundamental projects) to argue for major shifts in the allocation of extension resources and priorities (see, for example, Luria, 1997).

The benefit-cost analysis of GMEA exhibits utility characteristics that are the reverse of those of the relative programme impact analysis. We believe that this analysis has had a useful educational effect in helping programme management understand the full framework of benefits and costs associated with project interventions, particularly in pointing out that the programme often imposes costs and expenditures on firms before streams of benefits accrue. However, the aggregated bottom-line results do not lend themselves to specific improvement actions. On the other hand, these bottom-line results have been used in programme justification discussions and materials, although we suspect that officials are jaded by such studies and recognise that the results are sensitive to the assumptions used as much as to the performance of the underlying programme. In theory, benefit-cost analysis should allow elected officials to make rational decisions about where to allocate resources among different programmes (or choose not to raise those tax-supported resources). In practice, this does not occur, as it is almost impossible to apply standardized procedures across different programmes or even to units within the same programme.

The longitudinal controlled studies are valued at federal and, perhaps to a slightly lesser extent, state levels, particularly for purposes of programme justification. Controlled studies help to raise and answer important questions about whether programmes make a difference and whether firms might have achieved the same results without programme intervention. Controlled studies with a longitudinal dimension also help to address and control for issues about the kinds of firms that enter the programme – for example, is the programme attracting a “biased” set of firms that are already receptive to intervention and thus more likely to be successful (we have not yet conducted this element of analysis, although we now have the data to do so for GMEA). At the same time, we have found that controlled studies are generally less useful for programme management and improvement. The variables used are often highly aggregated (e.g. was or was not a programme customer) or not amenable to programme action. There are also issues of timeliness (these studies tend to take a while to complete and may use old secondary data sources), survey response bias and interpretation. Since controlled studies tend to focus on economic variables, they usually say little about non-economic effects (for example, impacts on know-how, relationships, trust or mutual business confidence) or about organisational variables (for instance, the way in which particular services are delivered can affect the results) that can be important for programme management and improvement purposes.

Case studies seek to focus on evaluation issues that cannot be easily quantified and to highlight the ways in which programme interventions lead to programme outcomes. We have helped to prepare both short descriptive cases and more elaborate cases employing logic models. We find that case studies have a mixed reception in terms of programme justification. We have already noted the power of simple verbal or descriptive written testimonials by businesses or, on their behalf, programme managers. More formal case studies do not appear to have any greater impact in this realm. Interestingly, while we have found that some previously reported impacts did not hold up to the scrutiny of a formal case study, in other instances we have identified customer impacts not otherwise reported or captured. From the view of programme improvement, well-implemented case studies have the potential to identify good practices that may be more likely to stimulate impacts in subsequent projects (Youtie, 1997).

External reviews have proved to be a major instrument from the point of view of the federal sponsor in managing the MEP programme, recommending improvements in centre operations and promoting revisions in management, organisation or strategy where deemed necessary. In this sense, external reviews are critical elements in programme improvement and have been used to prompt managers, even in a programme like GMEA which is generally recognised to be well run, to make changes. External reviews have also validated the GMEA evaluation process itself as effective and robust. The fact that individual MEP centres are subject to external review is helpful in programme justification, particularly with federal funders who are concerned that subsidies not be given to ineffective centres (this is becoming more important now that the “sunset clause” on federal funding beyond six years may be lifted). However, the value of external review may be limited from an oversight perspective because panel reports are closely held and not widely released (although in the past, summaries and general reviews have been issued of the first manufacturing technology centres). Additionally, state programme sponsors, in general, do not require external reviews: in the case of Georgia, the established reputation of Georgia Tech appears to assure state officials that the programme is competent. Independently, however, some units of the programme have secured external validation, for example by being certified to ISO 9000 quality management standards (as in the case of GMEA’s skill centre for quality and international standards).

Conclusions

GMEA’s experience with an array of evaluation methodologies highlights many of the tensions that are evident in implementing evaluations of programmes like the MEP. The issues include those of reconciling the varying evaluative needs of programme sponsors, programme managers, service providers and customers; accounting for the differential impacts of particular kinds of services; and trying to measure improvements that are not only often difficult to quantify or estimate, but which