![Using Snowsight: Streamlining Gurobi Powered Decision-Making in Snowflake Part Two](https://cdn.prod.website-files.com/6603bc85bc604431b3053166/664f60718abb6cc7b173805c_B86ljikxmLvvZ6Kmy2t0glNN10Xdk1xpZmfmoO2qet8.png)
Optimization & Simulation
Your step-by-step guide to integrating Gurobi, the powerhouse of mathematical optimization, with Snowflake.
Snowflake is a cutting-edge cloud data platform designed to revolutionize how businesses manage and analyze their data. With unmatched scalability, performance, and user-friendliness, Snowflake empowers organizations to extract insights and foster innovation from their data like never before.
An exciting addition to Snowflake’s features is Snowpark Container Services (SPCS). SPCS, currently in public preview as of February 2024, enables users to package, execute, and optimize containerized workloads as OCI images directly within Snowflake. This breakthrough opens avenues for executing advanced decision science workflows seamlessly within Snowflake, providing a unified architecture for enterprise operations.
This allows you to optimize workflows like:
In this guide, we will cover how to run Gurobi within the Snowflake environment using Snowpark Container Services (SPCS). This method also serves as an alternative approach for those who want to run Python packages not natively available in the Snowpark Anaconda Distribution. For any questions or more information on how Aimpoint can provide decision support by developing state-of-the-art models using Gurobi + Snowflake, please feel free to reach out here.
What you will need
At the end of this guide, you will have:
Follow Snowflake’s quickstarts guide to complete the initial SPCS setup. The guide provides a detailed overview of the SPCS infrastructure and step-by-step instructions on how to get started.
Some practical tips when provisioning SPCS:
SELECT CURRENT_ORGANIZATION_NAME();
SELECT CURRENT_ACCOUNT_NAME();
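The organization and account names returned by these queries form the hostname of your Snowflake image repository, which you will need when tagging and pushing the image. A minimal sketch of composing that URL (all values below are hypothetical placeholders, not real account details):

```python
# Compose the Snowflake image registry URL from the values returned by
# CURRENT_ORGANIZATION_NAME() and CURRENT_ACCOUNT_NAME().
# All names below are hypothetical placeholders -- substitute your own.
org_name = "myorg"
account_name = "myaccount"
database = "db_container"
schema = "public"
repo = "image_repo"

repo_url = (
    f"{org_name}-{account_name}.registry.snowflakecomputing.com"
    f"/{database}/{schema}/{repo}"
)
print(repo_url)
# myorg-myaccount.registry.snowflakecomputing.com/db_container/public/image_repo
```

You would then `docker tag` your local image with this URL (plus a tag) and `docker push` it after logging in to the registry.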
FROM python:3.10
LABEL author=""
RUN apt-get update && \
apt-get install -y python3-pip
RUN pip3 install JPype1 jupyter pandas numpy gurobipy "snowflake-snowpark-python[pandas]" snowflake-connector-python
RUN useradd -ms /bin/bash jupyter
USER jupyter
WORKDIR /home/jupyter
EXPOSE 8888
ENTRYPOINT ["jupyter", "notebook", \
    "--allow-root", \
    "--ip=0.0.0.0", \
    "--port=8888", \
    "--no-browser", \
    "--NotebookApp.token=''", \
    "--NotebookApp.password=''"]
CALL SYSTEM$REGISTRY_LIST_IMAGES('/<DB_CONTAINER_NAME>/PUBLIC/IMAGE_REPO');
This lists the images currently in the repository, allowing you to confirm that the push completed. Please note that the push can take ~10-15 minutes or more depending on the size of the image.
Once the service is “READY”, you can run the URL (i.e. ingress_url) from the SQL output above in a browser. This allows you to access the Jupyter Notebook Service and run your uploaded notebooks.
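Before opening the URL, you can check readiness from a SQL worksheet with SYSTEM$GET_SERVICE_STATUS('<service_name>'), which returns a JSON array of per-container statuses. A small sketch of parsing that output in Python (the sample JSON below is illustrative, not real output, and real responses contain additional fields):

```python
import json

# Illustrative sample of the JSON returned by
# SYSTEM$GET_SERVICE_STATUS(...); values here are hypothetical.
raw = '''[
  {"status": "READY", "containerName": "jupyter-snowpark", "message": "Running"}
]'''

def service_is_ready(status_json: str) -> bool:
    """Return True only if every container in the service reports READY."""
    containers = json.loads(status_json)
    return all(c.get("status") == "READY" for c in containers)

print(service_is_ready(raw))  # True
```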
In the YAML configuration file provided in the Snowflake quickstart guide, we specified a mount that shares a directory between a Snowflake internal stage (the @volumes stage in your chosen database's PUBLIC schema) and the running container (the /home/jupyter directory of the Jupyter Notebook Service). As a result, artifacts written to /home/jupyter persist in the @volumes stage even after the container stops.
If you navigate to the @volumes stage in Snowsight, you will see a table of directories. A new directory, @volumes/jupyter-snowpark, was created when the Jupyter Snowpark service was established. Consequently, any files uploaded to @volumes/jupyter-snowpark are accessible within the container's /home/jupyter directory.
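Because of this mount, a stage path and its in-container path differ only in their prefix. A tiny helper illustrating the mapping (the helper name is our own; the prefixes follow the mount described above):

```python
from pathlib import PurePosixPath

# Prefixes from the mount configuration described above.
STAGE_PREFIX = "@volumes/jupyter-snowpark"
CONTAINER_PREFIX = "/home/jupyter"

def stage_to_container(stage_path: str) -> str:
    """Translate a @volumes stage path to its path inside the container."""
    relative = PurePosixPath(stage_path).relative_to(STAGE_PREFIX)
    return str(PurePosixPath(CONTAINER_PREFIX) / relative)

print(stage_to_container("@volumes/jupyter-snowpark/gurobi.lic"))
# /home/jupyter/gurobi.lic
```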
Figure 1. Mounting location in Snowflake that persists in the container's /home/jupyter directory
import os
# Point gurobipy at the license file uploaded to the mounted stage.
os.environ["GRB_LICENSE_FILE"] = "/home/jupyter/gurobi.lic"
3. Open the ingress_url in your browser and log in with your credentials. The Jupyter service with your uploaded Jupyter Notebook should be visible. Open and run the notebook as you would locally.
Figure 2. Gurobi model run within a Jupyter Notebook in a Snowpark Container Service
Notes on the Gurobi license: your ability to execute models in SPCS with Gurobi depends on the specific type of license available to you. The size-limited license that comes out-of-the-box with a gurobipy installation supports models with up to 2,000 variables and 2,000 linear constraints. For comprehensive details regarding licensing terms and permissions, we recommend navigating to this site or contacting Gurobi directly.
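As a quick sanity check before running in SPCS, you can compare your model's size against those limits. A minimal sketch (the 2,000-variable/2,000-constraint thresholds reflect the size-limited license described above; the function name is our own illustration):

```python
# Size limits of the gurobipy size-limited (demo) license described above.
DEMO_MAX_VARS = 2000
DEMO_MAX_CONSTRS = 2000

def fits_demo_license(num_vars: int, num_constrs: int) -> bool:
    """Check whether a model's size is within the demo-license limits."""
    return num_vars <= DEMO_MAX_VARS and num_constrs <= DEMO_MAX_CONSTRS

print(fits_demo_license(1500, 800))   # True
print(fits_demo_license(2500, 800))   # False
```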
When the model execution is complete, you can stop the services and suspend the compute pool to avoid incurring additional costs by running the command below in a Snowflake SQL worksheet:
ALTER COMPUTE POOL <COMPUTE_POOL_NAME> SUSPEND;
Before suspending the compute pool, make sure you have closed all browser tabs pointing at the ingress_url. Otherwise, the compute pool may not suspend, particularly when auto-resume is enabled.
To continue your journey, read part two, where we discuss how the process can be further streamlined using Snowsight.
Seize the opportunity to revolutionize your optimization processes with Gurobi on Snowflake now! Whether you’re exploring new optimization solutions or contemplating a shift from Python to Snowpark, our dedicated Applied Optimization and Simulation team is here to guide you. Contact us today through the form below!
Whether you need advanced AI solutions, strategic data expertise, or tailored insights, our team is here to help.