Optimal Crowdsourcing Framework for Engineering Design

Principal Investigators: Panos Papalambros, University of Michigan, pyp@umich.edu

Honglak Lee, Asst. Prof., Comp. Sci. & Engr., honglak@umich.edu
Rich Gonzalez, Prof., Psych., gonzenator@umich.edu
Max Yi Ren, Rsrch. Scientist, Mech. Engr., yiren@umich.edu

Student: Alex Burnap, University of Michigan
Government: Richard Gerth, Andrew Dunn, Lisa Graf, Pradeep Mendonza, U.S. Army TARDEC
Industry: Damien DeClercq, Local Motors

Work began in 2012 and reached completion in 2015.

Crowdsourcing is a widely publicized (and somewhat misunderstood) method of utilizing a group of individuals to perform tasks. Tasks can range from evaluating items in consumer reviews to the design and manufacture of a complex system, e.g., the DARPA AVM program. Reported studies of crowdsourcing practice suggest that crowds can outperform conventional problem solving, including design, owing to the diversity of expertise within the crowd.

Most crowdsourcing efforts to date are performed in an ad hoc manner. In this project, we proposed to initiate a rigorous study of crowdsourcing as a design process by combining research knowledge from systems design engineering, computer science, and behavioral science. An early question in such a research endeavor is whether crowdsourcing can reliably support serious decision making -- and under what conditions. Digging deeper, we want to determine what properties of the crowd (such as composition and size) are needed to achieve such reliability. To this end, the first year of this project focused on creating a simulation environment for the crowd and the crowdsourcing process in order to explore how properties of the design problem and properties of the crowd affect the crowd's ability to perform evaluation tasks well -- before collecting actual human data.
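To make the idea concrete, here is a minimal sketch (not the project's actual simulation environment) of how such an exploration might be set up: each simulated crowd member reports the true quality of a design plus noise whose spread shrinks with that member's expertise, and the crowd's estimate is the average of the individual reports. The function name, noise model, and parameters are illustrative assumptions.

```python
import random
import statistics

def simulate_crowd_evaluation(true_score, crowd_size, expertise, trials=1000, seed=0):
    """Estimate the mean absolute error of a crowd's averaged evaluation.

    Modeling assumption (for illustration only): each member reports the
    true score plus Gaussian noise with standard deviation 1/expertise.
    """
    rng = random.Random(seed)
    noise_sd = 1.0 / expertise
    errors = []
    for _ in range(trials):
        # Each crowd member independently evaluates the design.
        evaluations = [rng.gauss(true_score, noise_sd) for _ in range(crowd_size)]
        # The crowd's estimate is the simple average of individual reports.
        errors.append(abs(statistics.mean(evaluations) - true_score))
    return statistics.mean(errors)

# Under this model, larger crowds average out individual noise,
# so the evaluation error shrinks as crowd size grows.
small = simulate_crowd_evaluation(true_score=5.0, crowd_size=5, expertise=1.0)
large = simulate_crowd_evaluation(true_score=5.0, crowd_size=50, expertise=1.0)
assert large < small
```

Even this toy setup lets one vary crowd size and expertise independently and observe their effect on evaluation accuracy, which is the kind of question the simulation environment was built to study at a much richer level of detail.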

A crowd can help evaluate and validate a design not only using simulation results for physics-based performance criteria, but also using criteria that cannot be captured by simulations, often related to subjective but important human judgments. Linking physics-based simulation evaluations with human-based evaluations in a formal, rigorous manner greatly expands our design capabilities. Note that a crowd here may be a collection of very diverse individuals, from design engineers to generals, soldiers, and maintenance workers.

The research objective is to investigate how well a crowd can evaluate design performance criteria. The evaluation task was selected because it is amenable to mathematical modeling, and evaluation is an important part of a crowdsourced design process. More specific questions are: (1) under what circumstances is crowdsourcing more effective than a conventional design team; (2) for a given design task, what are preferred ways to present the task to the crowd; and (3) what are appropriate crowd structures (crowd composition and individual properties) in terms of diversity, expertise, and crowd member roles?

Publications:

  • R. Gerth, A. Burnap, P. Y. Papalambros, "Crowdsourcing: A Primer and its Implications for Systems Engineering", Proceedings of the NDIA Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), Troy, MI, Aug 14-Aug 16, 2012.
  • Y. Ren, P. Y. Papalambros, "On Design Preference Elicitation with Crowd Implicit Feedback", ASME International Design Engineering Technical Conferences, DETC2012-70605, Chicago, Aug 12-Aug 15, 2012.
  • A. Burnap, Y. Ren, P. Y. Papalambros, R. Gonzalez, R. Gerth, "A simulation study of crowd abilities on crowdsourced evaluation of design concepts", ASME International Design Engineering Technical Conferences, 2013.
  • Y. Ren, C. Scott, P. Y. Papalambros, "A Scalable Algorithm for Eliciting the Most Preferred Designs of a Crowd", ASME International Design Engineering Technical Conferences, 2013.
  • Y. Ren, A. Burnap, P. Y. Papalambros, "Quantification of Perceptual Design Attributes Using a Crowd", 19th International Conference on Engineering Design, 2013.