We were struggling with pulling 9M+ records out of our production environment, which took an 8-person team 40 hours a week. We got that down to 5 minutes a day, and it’s fantastic.
To track sales cycles across its teams, the company uses a variety of customer relationship management (CRM) instances, including Salesforce and the on-premises solution SugarCRM. The company continues to have a robust and diverse product offering sold by numerous sales teams across the organization, which, in turn, generates a huge number of sales opportunities for the company to track.
However, every sales opportunity carries a transaction log with a distinct data footprint. And one downside of being a market leader of this size is having literally millions of opportunities - and a correspondingly gigantic archive of transaction log data.
And this sales opportunity log data is constantly changing, not just as sales representatives update contacts, stages, and other opportunity details, but also as some logs error out due to missing fields, incompatibility errors, or other technical problems. These are relatively minor issues in and of themselves, but potential showstoppers at a scale of 9 million sales opportunities daily. Storing this type of data in a single, unified production database can quickly bring a server to capacity.
Of course, the mission of the company’s business operations team wasn’t simply to maintain a huge database to house its millions of sales opportunity transaction logs. The company also needed to run reporting on those logs to provide visibility into its sales funnel for the purposes of budgeting and forecasting, as well as to gauge the technical health of its transaction logging process.
However, from a technical perspective, it simply wasn’t feasible to run raw queries on the already-overloaded production server. To address the issue, the business operations team started by attempting to run a daily snapshot report, which was slow going, given the server’s already-maxed-out memory levels. “Since the system was relatively fragile, we would pull only 2,500 records at a time, so we ended up with many batches to go through, taking a lot of time to backfill,” one team member explains.
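The batched approach the team describes can be sketched in a few lines of Python. This is an illustrative mock, not the company's actual tooling: `fetch_page` stands in for a real paged query against the production database (e.g. a `SELECT ... LIMIT/OFFSET` statement), and the dataset size is invented for the demo.

```python
# Hypothetical sketch of the batched backfill described above: pulling
# records in fixed-size pages so each individual query stays small and
# gentle on a fragile server. All names and sizes here are illustrative.

BATCH_SIZE = 2_500  # the team's per-pull limit


def fetch_page(offset, limit):
    """Placeholder for a real paged query (e.g. SELECT ... LIMIT/OFFSET)."""
    total = 9_000  # pretend dataset size for this demo
    return list(range(offset, min(offset + limit, total)))


def backfill():
    """Pull every record, one small batch at a time."""
    records, offset = [], 0
    while True:
        page = fetch_page(offset, BATCH_SIZE)
        if not page:  # no rows left: the backfill is complete
            break
        records.extend(page)
        offset += BATCH_SIZE
    return records


print(len(backfill()))  # → 9000 (fetched in four batches)
```

The trade-off is exactly the one the team describes: small batches protect the server, but a 9M-record dataset then takes thousands of round trips, which is why the daily backfill was so slow.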
What the company had was a reporting cycle that took close to a full day to complete. What it needed was a reporting cycle of five minutes. To work on the problem, the company assigned two engineers to join a member of the business operations team and later expanded the effort with an additional member of operations and four more engineers.
Development on the issue, taken together with the ongoing need to run daily reports, totaled eight hours of work every day. “We were struggling with pulling 9M+ records out of our production environment, a challenge that took an 8-person team 40 hours a week,” recalls one team member. Beyond being a significant pain point, the process was incurring a substantial expense in engineering resources that could have been deployed elsewhere, such as building and launching new products and features that would actually lead to new business.
That’s when the team recalled its previous use of the powerful, flexible Tray Platform to build sophisticated, automated workflows to handle important business tasks such as custom lead scoring and syncing multiple CRM instances across numerous business units, even flowing data between a cloud-based CRM and an on-prem CRM.
The Tray Platform had proven to be user-friendly enough for its operations team to use directly without engineering resources, and robust enough to handle a variety of heavy-duty use cases for an industry-leading enterprise company. To handle the task, the team envisioned a lightweight, but powerful, workflow on the Tray Platform that could pull opportunity data directly from the company’s CRM instances in parallel to its production server.
The team knew it needed a workflow that included discrete connectors for its database, its cloud-based CRM Salesforce, its on-prem CRM SugarCRM, and its data visualization solution Chartio to integrate each application and route data according to the company’s own customized usage patterns. And it had to do it all without laying so much as a finger on the overburdened production server itself.
Partnering with Tray.io’s customer success team, the team was able to rapidly build out automated workflows to set up parallel reporting tables. These workflows combined its database and CRM instances with the Tray Platform’s custom logic helpers to transform and route its data: scheduling helpers to run reports at specific day-and-time intervals, Boolean logic conditionals to trigger branching follow-up steps, and functionality to store stepwise batches, paginate lists, and loop data processing, row by row, for each of its millions of opportunity transaction records. All told, building out these workflows took only a few days.
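The control flow of such a workflow - a scheduled trigger, a boolean branch per record, and row-by-row processing into a reporting table - can be pictured in plain Python. This is a rough analogue for illustration only; the function and field names are hypothetical and are not Tray Platform APIs.

```python
# Illustrative analogue of the workflow's control flow: loop over records
# row by row, branch on a boolean condition, and route each record either
# to the parallel reporting table or to an error log. All names are
# hypothetical, not actual Tray Platform connectors or helpers.

def transform(record):
    """Normalize a raw opportunity record for the reporting table."""
    return {"id": record["id"], "stage": record["stage"]}


def run_sync(crm_records, reporting_table, error_log):
    """One scheduled run: route each record to the table or the error log."""
    for record in crm_records:                    # loop helper: row by row
        if "id" in record and "stage" in record:  # boolean conditional
            reporting_table.append(transform(record))
        else:                                     # branch: flag bad records
            error_log.append(record)


table, errors = [], []
run_sync(
    [{"id": 1, "stage": "closed-won"}, {"id": 2}],  # second record is missing a field
    table,
    errors,
)
print(len(table), len(errors))  # → 1 1
```

Routing malformed records to a separate log rather than failing the whole run is what lets errors be triaged later without blocking the daily sync - the same property the team credits for its improved troubleshooting below.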
With these workflows in place, the team has dramatically increased its reporting speed, going from 2,500 records in a day to 2 million records in an hour. “We didn’t want to continue to store this huge amount of data on the production server,” the team explains.
“We wanted our data to be clean and stable, so with this replica, we now have historical data on record. We can go back several months and troubleshoot - see the reasons that this or that process broke, and check the interactions.”
The team adds, “We’d previously had eight people doing 40 hours a week on this. This was a process that used to involve taking days to build things out in SQL, then spending hours in Excel. We got that down to five minutes a day, and it’s fantastic.” The team also reports that executive satisfaction regarding this important sales data is much higher now that the Tray Platform’s workflows provide more-responsive reporting and greater visibility.
Being able to offload reporting from production to parallel reporting tables, even across both cloud-based and on-prem services, has empowered the team to get proactive about its data maintenance and the way it uses insights. “Now that we have better headlights, we’re identifying problem areas faster, so when errors do happen, we can engage the team responsible to fix the problem.”
“Previously, we were playing catch-up. Now, we can have teams go out there and take action in a timely manner, and our error rate is going down big time. Error rates had originally been in the 60% range. We’re down into the single digits now, at about 8%,” explains the team.
“This absolutely seems like a use case that we and other enterprise companies could use in the future. A database sync for two database sources that are separate, with the ability to define when you fire it off, and how often. This would definitely be of use for customers who have fragile databases.”