When I started running my first crowdsourced employee innovation program, I inherited an internal software platform. The prior program manager had just completed a proof-of-concept, and I was hired to move the program from pilot to launch and scale it for 88K employees. As I came on board, new product features were being developed based on lessons learned from the pilot. One feature was a unique curiosity. After a user finished writing their idea, instead of posting it directly to the database, the system opened a search window and used keywords to surface similar or duplicate ideas. The plan was that if the user saw a duplicate, they would abandon their submission and instead go vote and comment on the existing idea.
The feature was terrible from a user experience perspective, so we ended up killing it before we ever went into production. But for the longest time, the concept shaped my perception of duplicate ideas. I was continually asking myself, “how do I prevent or minimize duplicates?”
I am not the only crowdsourcing program leader who has struggled with this same question. I have had these conversations with seasoned peers and program newbies. However, over the years my perspective on duplicate ideas has changed in a series of phases. Here is how my thinking has evolved and where it stands today.
Once I realized that there is no easy way, systematically or operationally, to prevent duplicate or similar ideas from being submitted, the most straightforward approach was merely to call a reluctant truce and accept them for what they are: products of the messy process of crowdsourcing, an inevitable inconvenience. This led me to think about the practical impact of duplicates and to acknowledge a simple truth about ideas.
First, the simple truth about ideas.
Ideas are not as unique as we want them to be. At any given time, many people are walking around with the same idea in their heads. The history of innovation teaches us similar lessons, so why should it be any different inside an organization? Why, then, should we fret about the coexistence of duplicates in the system, or worse, assume that whoever submits their idea first is somehow better or more enlightened? In my experience, the first submission in a set of duplicates is not reliably the best one.
Second, the practical impact of duplicates.
Some people are better at articulating their idea, the problem, or both. After ideas are submitted and crowdsourcing techniques are applied to filter and rank them, a form of natural selection either advances sets of similar ideas together toward the top, or advances the one that is better presented. Either way, if a group of ideas represents a ‘best’ idea, one or more of them will surface. Conversely, if one or more represents a concept with less value compared to other ideas, those will collectively fail to advance.
This boils down to a simple conclusion: obsessing about duplicates is a waste of time. Wait until after ideas are filtered and ranked, then consider addressing duplicates. This strategy reduces the stress of trying to win an unwinnable battle at the start. It also opens the door to new ways of thinking about duplicates.
Solutions and Problems
Ideas are not just solutions. They are problems, people, and value propositions. Different problems can be solved with the same solution. Different solutions can solve similar problems. And different users might need the same or different solutions. Duplicate or similar ideas generate deeper insights along these lines. Capturing more information about the problem and the users alongside each solution helps us see more deeply into the opportunity a given idea is trying to address. Duplicates, on this level, are not duplicates at all, and therefore add further value to the total body of ideas captured in the crowdsourcing process. This aligns with execution strategies like Human-Centered Design and Lean Startup. In design thinking, the goal is to better understand the problem being solved before committing too deeply to a solution. As I have often heard, the goal is to “fall in love with the problem, not the solution.” In Lean Startup, the first phase is to validate Problem-Solution Fit. In this sense, duplicate and similar ideas give us more information to work with as we try to better identify the problem at hand.
The Voice of the Crowd
In his book The Signal and the Noise, Nate Silver presents concepts for separating the ‘signal’ (meaningful information) from the ‘noise’ within a larger set of data. The relevant information is not the consensus view across all pieces of data, but rather the most substantial subset of the broader data.
After running even a few ideation challenges, it becomes apparent that ‘themes,’ or clusters of ideas, repeatedly form. Stepping back from the ‘top ideas’ and looking more deeply into the conversations forming across the community becomes the next, more mature way of leveraging the full set of data for action. Instead of picking five ideas out of a few hundred for consideration, what about picking five themes across all of the ideas?
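To make the idea of themes concrete, here is a minimal sketch of how similar submissions might be grouped into clusters using keyword overlap (Jaccard similarity). This is an illustrative toy, not any platform's actual implementation: the sample ideas, the stopword list, and the 0.3 similarity threshold are all assumptions chosen for the example, and a real program would likely use richer text similarity.

```python
# Toy theme-clustering sketch: group idea submissions whose keyword
# sets overlap above a similarity threshold. Hypothetical example only.

STOPWORDS = frozenset({"a", "an", "the", "to", "of", "for", "and", "in", "on", "with"})

def tokenize(text):
    """Lowercase the text and keep distinct non-stopword tokens."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two token sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def cluster_ideas(ideas, threshold=0.3):
    """Greedily assign each idea to the first cluster it resembles,
    or start a new cluster. Returns a list of lists of ideas."""
    clusters = []  # each: {"tokens": set, "ideas": [str, ...]}
    for idea in ideas:
        tokens = tokenize(idea)
        for cluster in clusters:
            if jaccard(tokens, cluster["tokens"]) >= threshold:
                cluster["ideas"].append(idea)
                cluster["tokens"] |= tokens  # grow the theme's vocabulary
                break
        else:
            clusters.append({"tokens": set(tokens), "ideas": [idea]})
    return [c["ideas"] for c in clusters]

ideas = [
    "Reduce meeting time with async updates",
    "Cut meeting time using async status updates",
    "Add bike parking at headquarters",
]
themes = cluster_ideas(ideas)
# The two meeting-related submissions land in one theme;
# the bike-parking idea forms its own.
```

A program manager could then review themes rather than individual submissions, activating everyone who contributed to a theme instead of rewarding only the single idea that happened to rank highest.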
When themes are acknowledged, an exciting and transformational opportunity emerges: pick a theme, then activate all of its participants (ideators, commenters, voters, etc.) into a problem-solution super-cohort. This approach enables a program to engage more deeply and broadly with a set of unmet needs and ideas that would otherwise go ignored by focusing exclusively on a single submission from within that group. The intersections and gaps between ideas create the best possibility for transformational thinking, and can only be gleaned by pulling in the entire set of submissions.
Stepping back and seeing idea groups as ‘signal,’ not ‘noise,’ addresses a common failure in most crowdsourcing campaigns. Focusing on a tiny fraction of total submissions can leave a huge portion of the ‘engaged crowd’ feeling frustrated and left out after the campaign. Getting beyond that narrow focus and tuning in to the broader conversations points us down a path of higher engagement, greater potential to create massive change, and greater overall success in crowdsourcing innovation.
Gregory Hicks is skilled at: crowd-sourcing, facilitation, Business Model Generation (business model canvas, value proposition canvas), human-centered design, and business strategy. He has developed two proprietary tools for Unlabel Innovation, the Innovation Program Handbook assessment and Innovation IQ, an innovation project assessment, risk manager, health check, and decision-making tool.