How to set up LM Studio to work with Rivet

Introduction

LM Studio is a desktop application for running Large Language Models (LLMs) on your local machine. It works on Windows, Mac, and Linux (beta). It is not open source, but it has a very friendly interface and is one of the easiest ways to run your own private ChatGPT-style assistant on your own computer.
Rivet is a visual IDE for creating flows that involve LLMs. If you want to build your own custom workflows, it is worth checking out. By default, it connects to OpenAI's models (ChatGPT or GPT-4).
In this guide, I will share the exact configuration steps needed to connect Rivet to your own local LLM, using LM Studio as the server.
Note: This guide assumes you have at least one model installed in LM Studio.

Configuration steps

On LM Studio:
    Click on the Local Server button on the left menu (looks like a double headed arrow)
    Hover over the small info symbol next to (CORS) and read the warning so you know what you are doing
    Set Cross-Origin-Resource-Sharing (CORS) to ON
    Set Request Queuing to ON
    Select a model on the top dropdown menu
    Click Start Server
    Note the URL that appears in the center pane. By default, it is "http://localhost:1234/v1"
Configuration screen of LM Studio Server
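Before moving on to Rivet, you can confirm the server is answering. LM Studio exposes an OpenAI-compatible API, so a plain HTTP POST to the chat completions route works; the model name and prompt below are illustrative placeholders, and the script just prints a note instead of failing if the server is not running:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # the URL shown in LM Studio's center pane

payload = {
    "model": "local-model",  # placeholder; LM Studio serves whichever model you loaded
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except OSError:  # connection refused or timeout: server not started yet
    print("LM Studio server not reachable - did you click Start Server?")
```

If you get a generated reply back, the server side is ready and Rivet can use the same URL.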
On Rivet:
    Click "App -> Settings" in the menu bar
    Select the OpenAI tab
    Set the OpenAI Endpoint field to the URL you noted in LM Studio (by default, "http://localhost:1234/v1")
    Close the settings window
    Follow the [Rivet Getting Started guide](https://rivet.ironcladapp.com/docs/getting-started/first-ai-agent) to test your server.
Rivet configuration screen
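If Rivet's Chat node complains about the endpoint, a quick way to see what Rivet sees is to query the OpenAI-compatible models route at the same URL. This is a sketch, assuming the default LM Studio address; it prints nothing useful unless the server is running:

```python
import json
import urllib.request

ENDPOINT = "http://localhost:1234/v1"  # the same URL you entered in Rivet's settings

try:
    with urllib.request.urlopen(ENDPOINT + "/models", timeout=10) as resp:
        data = json.load(resp)
        for model in data.get("data", []):
            print(model.get("id"))  # the model identifiers LM Studio is serving
except OSError:  # server not reachable
    data = {"data": []}
    print("Server not reachable - start LM Studio's Local Server first.")
```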
Enjoy!