FINN.no AS, Norway's largest online marketplace, was founded in March 2000. It is owned by Schibsted AS and facilitates classified ads and services for both private and corporate users buying and selling goods.
FINN.no enjoys widespread popularity and an extensive user base among Norwegians, having firmly established itself as the preferred platform for buying and selling used items and thus as a thriving marketplace for second-hand goods. Each user spends an average of more than 40 hours per year on the platform, which hosts over 12.6 million ads across various categories, collectively accumulating over 460 million views each year.
Presently, there is an exceedingly low threshold for reporting ratings and reviews on FINN. Users can effortlessly click on the "report" button and express their disagreement in a textbox if they are dissatisfied with what they have received. Unfortunately, this ease of reporting has led to a significant influx of tickets to the customer service department.
Of the numerous complaints received by the customer service team, only a small fraction have valid grounds for having a rating or review deleted. As a result, many customers feel their concerns are being overlooked, and the customer service team carries an unnecessary burden of complaints that do not warrant action.
With these ideas in mind, we initiated a brainstorming session and sought insights from other services to align our approach with best practices. Our primary objective was to test our hypotheses about expectation management, informative display, and user nudging. To achieve this, we began creating wireframes and a prototype of the new flow. Embracing an iterative design process, we conducted four rounds of remote user testing.
During these tests, users were provided with a context of the situation and then given the freedom to perform the report as they deemed fit. This approach allowed us to observe how users interacted with the new flow, what they read, what they skipped, and how well they understood the various report functionalities.
Following each user testing session, we meticulously transcribed, compiled, and sorted the feedback we received. This valuable input was then leveraged to enhance and refine the design in the subsequent iterations. By repeating this iterative cycle, we could fine-tune the user experience, ensuring that our solution was intuitive, effective, and aligned with the users' needs and expectations.
In the end, the experiment ran over a period of 7 weeks, due to the number of total exposures (1,191). Here are some of the results:
- Control group (old flow)
  - 133 out of 584 sent a report to customer service
- Challenger group (new flow)
  - 42 out of 609 reported a rule violation to customer service
  - 27 out of 609 sent in feedback without a rule violation
  - 77 out of 609 followed the link to send a message to the other party
  - 37 out of 609 followed the link to reply to the received feedback
  - 426 out of 609 did not report
Compared with the Control, the Challenger implementation reduced reports sent to customer service by 68% (42 versus 133). In the Control flow, both rule violations and non-rule violations were forwarded to customer service for analysis and handling; in the Challenger flow, only rule violations resulted in a report being sent to customer service, streamlining the process and reducing unnecessary reports.
For the Challenger, in cases without a rule violation, users were given the option to write a venting message and to send feedback. This let customers express their frustration and provide feedback directly, without involving customer service in non-rule-violation issues.
Additionally, the Challenger offered clearer options that empowered users to resolve issues themselves: 77% of users with no rule violation opted for one of the alternative solutions, effectively solving their problems without contacting customer service. The results indicate that the Challenger approach reduced the burden on customer service, improved user autonomy in resolving issues, and enhanced the overall user experience on the platform.
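As a quick sanity check, both headline percentages can be reproduced from the counts in the results list above. This is a minimal sketch: the exact denominators used in the original analysis are an assumption, inferred because they reproduce the published figures.

```python
# Counts taken directly from the experiment results above.
control_reports = 133      # old flow: reports sent to customer service (of 584)
challenger_reports = 42    # new flow: rule violations sent to customer service (of 609)

# 68% reduction, computed on raw report counts (the two groups are close
# enough in size that comparing rates gives a similar figure).
reduction = 1 - challenger_reports / control_reports
print(f"Reduction in reports: {reduction:.0%}")  # -> 68%

# Assumed denominator for the 77% figure: the 183 Challenger users who took
# any action at all. Of those, 141 (feedback, message, or reply) chose a
# self-service alternative rather than filing a rule-violation report.
alternatives = 27 + 77 + 37
took_action = challenger_reports + alternatives
print(f"Chose an alternative: {alternatives / took_action:.0%}")  # -> 77%
```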
The Challenger approach not only reduced reports to Customer Service by 68% but also had a positive impact on customer satisfaction. By enabling users to serve themselves when they received ratings and reviews they disagreed with, the Challenger approach empowered customers to take control of their own experiences. This was evident through increased usage of self-service features such as replying to reviews or sending messages to the review sender. Additionally, customers in non-rule-violation cases provided positive feedback, indicating their satisfaction with the available options.
Throughout the experiment, close collaboration with customer service was vital in understanding how the change affected users. It was crucial to ensure that the reduced reports to Customer Service were not simply being shifted to other channels, as this would not reflect the desired change. However, the collaboration with Customer Service confirmed that there was no significant increase in traffic to other channels, indicating that the reduction in reports was genuine and effective.
By working closely with Customer Service, the Challenger approach demonstrated its success in both reducing the burden on the support team and improving the overall user experience. It effectively empowered users to handle certain issues on their own, leading to greater satisfaction and a more streamlined reporting process for rating and review concerns.
In conclusion, the Challenger approach yielded notable results both quantitatively and qualitatively. The reduction in load on customer service, as well as the positive impact on customer service interactions, validated the effectiveness of the new flow. Based on these findings, the decision was made to roll out the Challenger flow to 100% of users after the experiment concluded.
By ramping the Challenger flow up to full deployment, the platform could capitalize on the decreased reports to customer service and enhanced user satisfaction. The successful outcome of the experiment validated the value of the changes, leading to a more efficient and user-centric system for handling ratings and reviews.