Launch HN: Baselit (YC W23) – Automatically Reduce Snowflake Costs
12 points by sahil_singla | 9 comments on Hacker News.
Hey HN! We are Baselit ( https://baselit.ai/ ), a tool that automatically optimizes Snowflake costs. Here's a demo video: https://www.youtube.com/watch?v=Ls6VRzBQ-pQ

Snowflake is one of the most widely used data warehouses today. It abstracts the underlying compute infrastructure into "warehouses": compute units that come in t-shirt sizes (X-Small, Small, Medium, etc.). In general, the only way to lower your data processing costs is to process less data (i.e. query optimization). But Snowflake's warehouse abstraction adds an extra dimension along which you can optimize: minimizing the compute you need to process that same data (i.e. warehouse optimization). Baselit automates Snowflake warehouse optimization for you.

While we were working on another idea last year (AI for SQL generation), users frequently told us that Snowflake costs had become a top concern and that cost optimization was now a business priority. Every few months, they would manually look for opportunities to cut costs (removing workloads or optimizing queries), a time-consuming process. We decided to build a solution that automates cost optimization and complements the manual effort of data teams.

There are two key components of Baselit:

1. Automated agents that cut down on warehouse idle time. This happens in one of two ways: cache optimization (deciding when to suspend a warehouse vs. letting it run idle) and cluster optimization (spinning down clusters optimally). You can easily find out how much these agents can save you: here's a SQL query you can run on your Snowflake account that will calculate your savings: https://ift.tt/kT9Nw1o (a simplified sketch of this kind of idle-time estimate is at the end of this post).

2. An Autoscaler that lets you create custom scaling policies for multi-cluster warehouses based on your SLAs. Snowflake's default policies (Economy and Standard) are not cost-optimal in most cases, and they don't give you any control. One use case for the Autoscaler is merging several warehouses into one multi-cluster warehouse, with a custom scaling policy that is optimal for a particular type of workload (the DDL sketch at the end of this post shows the native Snowflake settings this builds on). In the Autoscaler you can set a parameter called "Allowed Queuing Time" that controls how fast a new cluster spins up. For example, if you want to merge transformation workloads, you might set a higher queuing time: Baselit will slow down cluster spin-up so that all clusters run at high utilization, and you'll see a reduction in costs.

We've built a bunch of other features that help optimize Snowflake costs: a dbt optimization feature that automatically picks the right warehouse size for each dbt model through constant experimentation (a sketch of the per-model dbt config that makes this kind of routing possible is also at the end of this post), "cost lineage", spend views by team/role/user, and automatic recommendations from scanning Snowflake metadata.

Because our product requires access to Snowflake metadata, we haven't made Baselit self-serve yet. We invite you to run our savings query ( https://ift.tt/kT9Nw1o ) to find out your potential savings. And if you'd like to learn more about any of our features and get a live demo, you can book one at this link: https://ift.tt/mZMkcHV

We'd love to read your feedback and ideas on Snowflake optimization!
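For readers who want a feel for the idle-time math before running the full savings query: below is a minimal sketch of the idea (this is not our actual savings query, and its numbers are much coarser). It uses only the standard SNOWFLAKE.ACCOUNT_USAGE views and flags compute credits billed in hours where a warehouse ran no queries at all:

    -- Rough idle-credit estimate over the last 30 days. Illustrative
    -- sketch only, not the Baselit savings query linked above: hour-level
    -- granularity overstates "busy" time, since a single short query
    -- marks its whole hour as busy.
    WITH hourly_credits AS (
        SELECT warehouse_name,
               DATE_TRUNC('hour', start_time) AS hour,
               SUM(credits_used_compute)      AS credits
        FROM snowflake.account_usage.warehouse_metering_history
        WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
        GROUP BY 1, 2
    ),
    busy_hours AS (
        SELECT DISTINCT warehouse_name,
               DATE_TRUNC('hour', start_time) AS hour
        FROM snowflake.account_usage.query_history
        WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
    )
    SELECT c.warehouse_name,
           SUM(c.credits)                         AS total_compute_credits,
           SUM(IFF(b.hour IS NULL, c.credits, 0)) AS credits_in_fully_idle_hours
    FROM hourly_credits c
    LEFT JOIN busy_hours b
      ON b.warehouse_name = c.warehouse_name
     AND b.hour = c.hour
    GROUP BY 1
    ORDER BY credits_in_fully_idle_hours DESC;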
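On the Autoscaler: for context, the DDL below is essentially everything Snowflake natively exposes for a multi-cluster warehouse (the warehouse name and sizes are made up for illustration). The only built-in SCALING_POLICY values are STANDARD and ECONOMY, which is the gap our custom policies fill:

    -- Merging several fixed warehouses into one multi-cluster warehouse
    -- using only native Snowflake DDL (multi-cluster warehouses require
    -- Enterprise edition or above).
    CREATE WAREHOUSE transform_wh WITH
      WAREHOUSE_SIZE    = 'SMALL'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 3          -- scale out under concurrency instead of queuing
      SCALING_POLICY    = 'ECONOMY'  -- adds a cluster only when Snowflake estimates
                                     -- enough load to keep it busy for ~6 minutes
      AUTO_SUSPEND      = 60         -- suspend after 60s idle; saves credits but
                                     -- drops the warehouse's local cache
      AUTO_RESUME       = TRUE;

The AUTO_SUSPEND comment is the crux of the tradeoff in point 1: suspending quickly saves idle credits, but the next query pays a cold-cache penalty. Deciding when that trade is worth it is exactly what the cache-optimization agent automates.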
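And on the dbt feature: dbt's Snowflake adapter supports a per-model snowflake_warehouse config, which is the natural hook for routing individual models to differently sized warehouses. This sketch shows the generic dbt mechanism, not our internals; the model, warehouse, and ref names are hypothetical:

    -- models/marts/daily_orders.sql
    -- Route just this model to a larger warehouse; every other model keeps
    -- the warehouse configured in the dbt profile.
    {{ config(
        materialized        = 'table',
        snowflake_warehouse = 'TRANSFORM_L'   -- hypothetical warehouse name
    ) }}

    select
        order_date,
        count(*) as order_count
    from {{ ref('stg_orders') }}              -- hypothetical upstream model
    group by 1

Picking the right size per model then becomes an experiment: run the model on adjacent sizes, compare credits consumed against runtime, and keep the cheapest size that meets your SLA. That experimentation loop is what our dbt optimization feature runs continuously.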