

Mercari, Inc. is the creator of the flea market app “Mercari”, a service that allows you to buy and sell items easily, safely, and securely in 3 steps. The app has handled a cumulative total of over 2 billion listed items (as of December 2020), and its use continues to expand. To meet this growing demand, the company has implemented a marketing strategy based on a variety of data, all with the goal of bringing the service closer to its users and making it easier to use. By migrating a portion of the functions handled by their on-premises log analysis platform to edge computing, Mercari has succeeded in reducing workload and improving maintainability.

jp.mercari.com
Industry: Ecommerce
Location: Tokyo, JP
Customer since: 2021


Favorite features
Compute

Addressing the increased workload of an on-premises email beacon system by migrating to edge computing


Launched in July 2013, the flea market app Mercari allows individuals to easily buy and sell items using just a smartphone. The app’s safe and secure transaction system includes AI fraud monitoring, an escrow payment system that prevents payment issues, and many other features. Its monthly users exceed 20 million (as of September 2021). The company also promotes offline initiatives, such as “Mercari Posts,” unmanned mailboxes that help users simplify the process of packaging and shipping, and “Mercari Workshops,” where users can learn more about how to take advantage of Mercari’s services.


By keeping unneeded items in circulation as reusable goods, Mercari aims to further accelerate a recycling-oriented society, offering alternatives to “disposal” and drawing on everyday know-how to make those alternatives more familiar. The secondary distribution data accumulated in Mercari can also be used to optimize product planning, production, and sales in the primary distribution market, strengthening cooperation between primary and secondary distribution. As part of these efforts, the company utilizes a variety of data to further improve the Mercari user experience.


Mr. Masaru Igarashi, an engineer at Mercari, explains: “We use the mail beacon system to collect logs that tell us how many of the emails sent to customers have been opened. From the collected logs, our team analyzes differences in open rates across various types of emails, depending on their subject lines and contents. We then use those statistics to carry out marketing initiatives, such as improving email open rates. Since the mechanism for collecting these logs was built on-premises, the operation and maintenance workload was high. For that reason, we decided to move it to a cloud platform.”


“The collection of logs by our mail beacon system was achieved by writing Lua code and adding it to OpenResty. We also operated a log transfer system. While these mechanisms were used only for collecting logs, every change in specifications required extra maintenance, resulting in a high workload and lower maintainability. That’s when we decided to migrate the system to edge computing and operate it in a serverless environment.” (Mr. Igarashi)


The transition to the Compute platform is the first step toward the company’s vision for the future


Mercari began considering the transition to a serverless environment in the
fall of 2020. They explored Fastly’s proposals on the functions and use cases of Compute, and built a new mail beacon system in the serverless environment around February and March 2021. Mr. Kazuki Nakano, an SRE network engineer, explains: "We built the system on the same principles that govern Fastly’s log collection functions. After migrating to Compute, we confirmed whether the number of logs collected before and after the migration matched. Once we knew that there were no problems, we began fully implementing the system."


The system has a simple configuration: requests from Mercari’s users are processed with Compute, and the logs are sent to BigQuery, Google Cloud’s analytics data warehouse, and to Cloud Storage, its object storage service. It also outputs the application processing logs to Datadog. While the same logs are sent to both BigQuery and Cloud Storage, the logs in BigQuery are used for analytics, while those in Cloud Storage are used for backup.
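
As a rough illustration of this configuration, a beacon handler of this kind might look something like the following sketch written against Fastly’s Rust SDK. This is not Mercari’s actual code: the logging endpoint names (“bigquery”, “gcs”, “datadog”) and the log fields are assumptions, and each name would have to match a real-time logging endpoint configured on the Fastly service.

```rust
// Minimal sketch of an edge mail-beacon handler on Fastly Compute (Rust SDK).
// Illustrative only: endpoint names and log fields are assumptions.
use fastly::http::{Method, StatusCode};
use fastly::{Error, Request, Response};
use std::io::Write;

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // The beacon is fetched with a simple GET when a customer opens the email.
    if req.get_method() != &Method::GET {
        return Ok(Response::from_status(StatusCode::METHOD_NOT_ALLOWED));
    }

    // One log record per open event (fields are illustrative).
    let record = format!(
        r#"{{"path":"{}","user_agent":"{}"}}"#,
        req.get_path(),
        req.get_header_str("user-agent").unwrap_or("-")
    );

    // Send the same record to the endpoints backing BigQuery (analytics)
    // and Cloud Storage (backup).
    for name in ["bigquery", "gcs"] {
        let mut endpoint = fastly::log::Endpoint::from_name(name);
        writeln!(endpoint, "{}", record)?;
    }

    // Application processing logs go to a Datadog-backed endpoint.
    let mut app_log = fastly::log::Endpoint::from_name("datadog");
    writeln!(app_log, "beacon request handled: {}", req.get_path())?;

    // A production beacon would typically return a 1x1 transparent image;
    // an empty 204 keeps this sketch self-contained.
    Ok(Response::from_status(StatusCode::NO_CONTENT))
}
```

In this model the service code only writes log lines; BigQuery, Cloud Storage, and Datadog are each configured as log streaming destinations on the Fastly side, which is what keeps the configuration simple.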


“This transition to Compute is positioned as the first step in exploring how we can use it going forward. We began the migration by establishing backup methods, so that we could switch back immediately if it didn’t work. In the end, however, we didn’t have to worry, because there were no problems after testing it in the development environment. At the time, there were not many departments in the company using a serverless environment, so instead of comparing it with other services, we decided to test Compute ourselves and base our decision on those results.” (Mr. Igarashi)


Compute was also chosen in order to build a CI/CD pipeline that would streamline procedures for upgrading the service. According to Mr. Igarashi, “We built a mechanism that runs compilation when the Rust code is updated, stores the compiled binaries, downloads the binaries at deployment time, and then uploads them to Compute. This process was achieved using Fastly’s CLI.”
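
As a sketch of what such a pipeline might look like with the Fastly CLI (the commands below are a simplified illustration, not Mercari’s pipeline; the artifact bucket and package names are placeholders):

```sh
# Build stage: runs whenever the Rust source is updated.
fastly compute build        # compile to Wasm and create a package archive under pkg/
gsutil cp pkg/*.tar.gz gs://example-ci-artifacts/mail-beacon/   # store the built package (placeholder bucket)

# Deploy stage: fetch the stored package and upload it to the Compute service.
mkdir -p pkg
gsutil cp gs://example-ci-artifacts/mail-beacon/*.tar.gz pkg/
fastly compute deploy       # upload the package found under pkg/ to the Fastly service
```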


Even greater efficiency expected by migrating other features of Pascal to Compute


The mail beacon system migrated to Compute is one of the functions of Pascal, Mercari’s log analysis platform. They expect to further improve efficiency in the future by migrating other functions of Pascal to Compute one by one. At that point, rather than sending logs directly to BigQuery as they do now, they will use the same mechanism as the existing log analysis platform, processing logs in Pub/Sub before they are collected.


On the effect of migrating to Compute, Mr. Nakano commented: “The mail beacon system is working stably without any problems. Workload has been reduced and maintainability has improved, as expected. The ease with which logs can be output and collected is also really helpful. Moreover, when we began, only Rust was supported, but JavaScript support has since become available, which makes development even easier.”


Mr. Igarashi also pointed out: “Reducing the operational load has been even more beneficial to us than reducing the system load. For example, we use Terraform’s Infrastructure as Code with our Fastly services to make infrastructure configuration management more efficient, and the same mechanism can also be used with Compute. Another benefit is that we were able to reduce single points of failure. A queuing function is provided for log transfer, so even if a problem occurs, there is no need to worry about losing logs once the problem has been resolved. This is a system we can use with confidence; it would require a lot of work and time to implement the same thing ourselves.”


Regarding Fastly’s support, Mr. Nakano said: “Ordinarily, indicators such as the number of requests, changes in traffic, and cache hit rate can be displayed using Fastly’s statistics data, but since Compute is in Limited Availability (LA), there are still only a few indicators that can be displayed. To address that, Fastly developed a function that visualizes the number of responses for each status. There were also parts that were difficult to debug during development, but a local simulator is now provided, so I would like to try it in our next round of development.”


About future prospects, Mr. Igarashi said: “Since Compute was in LA, there were few case studies and little information available, so we proceeded with the migration with assistance from Fastly’s support team. We received information regularly through Slack, such as when the local test functions of Compute were released, and it was helpful to be able to make inquiries 24 hours a day. I was very grateful that I could contact them at any time and that even small inquiries were answered very quickly. Recently, they started offering a limited-time free trial of Compute, and I hope that it will reach General Availability (GA) in the near future. I also hope that the current support system is maintained, even after GA.”
