Motivote is a digital accountability platform that harnesses behavioral economics to make voting fun, easy, and social. With team-based commitments and gamification, we’re increasing young voter turnout to build more representative government.
Background and concept
We came up with the concept as Master of Public Administration students at NYU Wagner, where we were tasked with creating a business case for a new social venture. Hundreds of interviews with college students and recent graduates surfaced an “intention-action gap” that prevents young people—who know voting is important and care about outcomes—from consistently following through. “Behavioral microbarriers” get in the way: feeling too busy, not knowing enough about candidates, or missing a deadline.
With a clear problem definition, we needed to start testing whether our proposed solutions would bridge that gap. We had spent months digging into the academic literature on how simple behavioral nudges could get people to act on their intentions. And we could point to successful “commitment device” tools in other verticals, like DietBet for weight loss. But what would that kind of testing look like for voting in practice?
Our hypotheses
Here’s what we thought would increase young voters’ likelihood of voting:
- Social influence and monitoring: When your peers are watching, you’re more likely to complete a socially desirable behavior.
- Making the process easier: By spoon-feeding you the information and doing interim steps for you, we’d cut the effort you have to put in—and the excuses.
- Specter of losing money: Humans are “loss averse”, so having money on the line makes you more likely to complete a task.
Barriers to testing
There were two main barriers to testing these hypotheses. First, there were a lot of them, and they were intertwined. We asked ourselves: how could we possibly test all aspects of the model without having a fully developed piece of technology to put in front of users?
Second, elections don’t happen all the time. Unlike a product or service that we could tweak and test daily, we assumed that we had to wait until an election happened to do any experimenting. We were stumped on finding a situation with similar dynamics, and felt resigned to simply waiting until election season picked up again—which was months away.
Stop searching for perfect
The most helpful advice we received was from a coach at the Leslie eLab, who got us to abandon searching for the one perfect experiment. Instead, he prompted us to think of this as 20 small experiments, which took away the pressure of figuring out how everything had to fit together. We dropped the notion that we needed a product to do this. The point was simply to see how people reacted to different behavioral mechanisms.
We listed out the dozens of pieces of our behavioral model that we wanted to test, and then spitballed low-lift ways to test each of them. The tests were based not on voting itself, but on other forms of civic action that could happen at any time.
Our first tests
In one set of experiments, we went up to random individuals in parks and cafes to ask if they cared about current events. If they said yes, we asked them to call their elected official on the spot, handing them a pre-dialed cell phone and script. Over 90% were willing.
Afterwards, many told us they had thought about calling their representative before, but had never done it. They felt nervous about what it would entail, or feared it would be a lot of work.
That signaled to us that minimizing the effort involved in a task someone is already primed to do can push them over the “hump” of completing it. This result reinforced our hypothesis that “hand-holding” should be a core part of our intervention.
Leaving it open-ended
Another takeaway from this test was that people were more likely to complete the task if they were with a friend. We had not gone into this specific experiment with the aim of looking at social pressure, but because these tests were purposefully left open-ended and “casual,” there was room to discover insights we hadn’t anticipated. Just by paying attention to non-verbal cues during these interactions, we noticed participants waiting for a friend’s eye contact or nod of approval before accepting the phone and script.
Calling representatives is a socially desirable behavior for people who say they care about current events. They didn’t want to look lazy or hypocritical after they had just claimed being engaged was important to them—and that pressure was heightened because their friend was watching. This underscored that social monitoring would be key to increasing follow-through; you simply don’t want to look bad in front of your friends.
Next steps
These observational tests helped strengthen our working set of assumptions, and got us to a point where we were ready for a slightly more structured experiment. We decided to test the team accountability and “loss aversion” components of our model in an NYU student government election, which had more characteristics of the political elections where we hoped to deploy Motivote.
To simplify the experiment, we asked participants to form teams of five fellow students and each put down $5, which they would get back only if everyone on the team voted in the election.
To execute the test, we used tools easily at our disposal: Venmo for the financial pledge, and Google Forms and Sheets for signing up and tracking whether teams voted.
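To give a sense of how lightweight that tracking could be, here’s a minimal sketch of the all-or-nothing refund rule in Python. It assumes a CSV export of the tracking sheet with hypothetical “team” and “voted” columns and a file named signups.csv; those specifics are illustrative, not a record of our actual setup, and the pledges themselves moved through Venmo.

```python
import csv
from collections import defaultdict

PLEDGE = 5  # dollars each participant put down

def teams_to_refund(rows):
    """Refund a team only if every one of its members voted."""
    teams = defaultdict(list)
    for row in rows:
        # "team" and "voted" are hypothetical column names for this sketch.
        teams[row["team"]].append(row["voted"].strip().lower() == "yes")
    return {team: all(voted) for team, voted in teams.items()}

if __name__ == "__main__":
    # signups.csv stands in for an export of the Google Sheet.
    with open("signups.csv", newline="") as f:
        refunds = teams_to_refund(csv.DictReader(f))
    for team, refund in sorted(refunds.items()):
        status = f"refund ${PLEDGE} each" if refund else "pledges forfeited"
        print(f"{team}: {status}")
```

The all-or-nothing condition is what creates the peer accountability: a single teammate who skips the election costs everyone their $5, so the social pressure and the loss aversion reinforce each other.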
We reminded ourselves that we were just testing elements of the model. Even though our grand vision did not involve fixed team sizes or mandated pledge amounts, we needed these parameters to simplify how we looked at the underlying mechanics. Once we validated those, we could add more flexibility and variation to the equation.
The results were a powerful testament to our model: these simple behavioral nudges increased year-over-year voting rates by 168% (that is, turnout reached 2.68 times the prior year’s rate). We learned so much from our tests, including how to test in the first place. It was an invaluable experience that we carry with us as we build Motivote into the best product it can be.