Our careers in quant research and fund management led us to focus Quotient on the areas of the quant workflow that we knew to be the most time-consuming and tedious: mapping disparate data, coding factor ideas, constructing multi-factor models, applying machine learning techniques, and integrating with existing portfolio construction tools.
Quotient was built to be the toolset we wished we had in our quant days.
Quotient's functionality was well received by our initial clients, who worked individually or in small groups of two or three quants. However, when we brought on board our first larger client team, stretching Quotient's resources, we discovered that its calculation engine had critical scalability bottlenecks. Quotient was designed to handle large and complex problems, just not from multiple simultaneous users or scheduled jobs!
Houston, we have a problem.
We determined that making Quotient's robust functionality capable of handling larger data science problems would require an extensive reengineering of its core data calculation and caching engine, the Quble™. This would be a sizable undertaking and a shift away from our core competency in financial tools and data science.
Then came a timely solution and the beginning of a beautiful partnership… with Snowflake.
Snowflake is a huge convenience for users who want to quickly add new vendor data and test its potential to enhance model alpha. From early on, Quotient's data integration layer seamlessly joined Snowflake-based data with other data sources for use in factor construction and analytics modules.
Yet our interest in Snowflake grew dramatically when it introduced Snowpark for Python, which provides the ability to write Python methods that are co-located with the data and can scale in a manner similar to regular Snowflake SQL queries. Snowpark is a game changer for developers who want to build faster data analytic tools on a platform that supports dynamically adding processors, memory, and users.
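To make the co-location idea concrete, here is a minimal sketch of the pattern Snowpark enables: ordinary Python logic registered as a user-defined function so it runs next to the data. The function, table name (`FACTOR_RETURNS`), and column name (`DAILY_RET`) below are hypothetical illustrations, not part of Quotient; the registration step is shown in comments because it requires the `snowflake-snowpark-python` package and a live Snowflake session.

```python
def annualize(daily_ret: float) -> float:
    """Compound a daily return into an annualized return.

    Plain Python logic like this is what Snowpark lets you push
    into the warehouse, rather than pulling the data out to it.
    Assumes 252 trading days per year.
    """
    return (1.0 + daily_ret) ** 252 - 1.0

# Hypothetical Snowpark registration and use (illustration only;
# needs snowflake-snowpark-python and a configured `session`):
#
# from snowflake.snowpark.functions import udf, col
# from snowflake.snowpark.types import FloatType
#
# annualize_udf = udf(annualize,
#                     return_type=FloatType(),
#                     input_types=[FloatType()])
#
# # The computation now executes inside Snowflake, co-located
# # with the FACTOR_RETURNS table, and scales with the warehouse.
# session.table("FACTOR_RETURNS") \
#        .select(annualize_udf(col("DAILY_RET")))
```

The same function works unchanged as local Python, which is what makes migrating an existing library of calculation methods, as we did with the Quble, a tractable engineering effort rather than a rewrite.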
After building a Snowpark-based proof of concept by migrating some of our Quotient methods, we confirmed that Snowpark would meet our scalability needs. As a bonus, Snowpark also improved performance through the co-location of code and data. SFS then added several Snowpark/Python developers to our team and spent the next half-year reengineering our Quble library and other Quotient components onto Snowflake.
Now, with hundreds of the Quble's powerful finance-oriented data manipulation and time-series calculation methods redeployed in Snowpark, SFS's new flagship product Quotient for Snowflake™ is ready to optimize quant team efforts for both small investment teams and large institutions.
So what is it about the Quble that makes Quotient such a powerful and flexible data science tool? For that story, please watch for our next blog post, "It All Started With A Quble."