Once you've set up the langchainrb_rails gem and configured your PostgreSQL database to use the vector extension, you will be able to fully unleash an assistant into your application.
**Overview**
To implement an assistant, we will build a background job, a modular front-end, and an assistant controller.
The assistant will run as a background job, which allows the application to handle slow API calls. The background job will communicate with the front-end through a Turbo Stream channel. The assistant can also utilize tools, which we will set up alongside the background job.
On the front-end we will have an interface that subscribes to the same Turbo Stream channel to receive updates. All updates will be done using Hotwire (HTML Over The Wire), which greatly simplifies the entire implementation.
For the front-end to communicate with the back-end, we'll set up an assistant controller to handle form submissions and initiate the background job. In this iteration, to reduce initial complexity, we will not be saving state.
**Background Job**
We're setting up our background job to not only respond to chat requests but also interact with our application. The assistant will be able to perform updates, and search and reason about our data. In my mind, that's pretty exciting.
Let's start by generating our assistant job:
```shell
rails generate job Assistant
```
This will create an assistant_job.rb file in our /app/jobs folder.
We'll start by updating the perform method to accept the keyword parameters (user_id:, user_request:). The user id is used to broadcast the message back to the appropriate user; if you don't have user authentication set up, you can broadcast to a predefined channel instead.
```Ruby
def perform(user_id:, user_request:)
```
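As a minimal plain-Ruby sketch (no Rails required), keyword arguments let the caller pass user_id: and user_request: in any order, which is what lets the controller's perform_later call line up with perform:

```ruby
# Keyword arguments: the caller names each argument, so order doesn't matter.
def perform(user_id:, user_request:)
  "user #{user_id}: #{user_request}"
end

# Arguments supplied in the opposite order still bind correctly.
puts perform(user_request: "Hello", user_id: 42) # => user 42: Hello
```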
Next we'll look up the user and set up the tool that our agent will use to interact with our application. We'll go into the tool in more detail later:
```Ruby
##
# Get the user, the user provides scope for the request
user = User.find_by(id: user_id)
##
# Setup tools
project_tool = ProjectTool.new(api_key: ENV["OPENAI_API_KEY"])
```
With the groundwork in place, we first choose which large language model to use, then pass that LLM to our assistant along with instructions and any tools the assistant can use.
```Ruby
##
# Setup the LLM
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
##
# Setup the assistant
assistant = Langchain::Assistant.new(
llm: llm,
instructions: "You are an Earned Value Management System Assistant, knowledgeable on the EIA-748 standard. You will be assisting the user with their requests for information. When responding to their requests don't use id references.",
tools: [project_tool]
)
```
To send the results of our LLM as they come in, we can set up a callback that fires whenever the LLM adds a new message. Here the assistant broadcasts to our user, appending messages to a div with an id of "assistant_messages" using a partial called "assistant/message".
We then build a custom hash to represent the data, since I found my application was not happy when it received the raw message object. Later we will add a method that parses out the tool the assistant is using, just to let the user know what the assistant is working on.
```Ruby
##
# Broadcast messages to the user
assistant.add_message_callback = -> (message) do
Turbo::StreamsChannel.broadcast_append_to user,
target: "assistant_messages",
partial: "assistant/message",
locals: { message: {
content: message.content,
role: message.role,
tool_status_message: tool_status_message(message) } }
end
```
Finally, we can add the message that the user supplied. I've also added an additional message to provide extra context to the assistant for using tools later.
The run command then executes the assistant, and the auto_tool_execution option tells the assistant to execute tools without asking the user first.
```Ruby
##
# Add the message to the assistant
assistant.add_message(content: user_request)
##
# Add the company id to the assistant, just so that there's some added scope
assistant.add_message(content: "The company id is #{user.company_id}.")
##
# Run the assistant
messages = assistant.run(auto_tool_execution: true)
```
As the assistant runs various tools, I wanted to give the user feedback about which tools were run. To do that, I created a method that inspects which tools were executed and parses that out.
By default, when langchainrb calls our tools internally, tool calls are referenced by the tool name followed by the method name, separated by a double underscore. So a call to the project tool's update_project method is referenced as 'project_tool__update_project'. I start by creating a hash used to look up the user-friendly message to display to the user.
Then we will iterate through the message structure, which is dynamic, so we have to check that the various structural elements exist before referencing our hash. If multiple tool calls were performed simultaneously, all the tool calls are joined together.
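That naming convention can be illustrated with a small hypothetical helper (the real gem derives the name internally; this is only a sketch of the convention, not langchainrb's actual code):

```ruby
# Hypothetical helper mirroring the convention: snake_cased tool class
# name, double underscore, then the method name.
def tool_function_name(class_name, method_name)
  snake = class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  "#{snake}__#{method_name}"
end

puts tool_function_name("ProjectTool", "update_project") # => project_tool__update_project
```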
```Ruby
##
# Parse the message to extract a status message
def tool_status_message(message)
tool_hash = {
"project_tool__update_project" => "Attempting to update the project",
"project_tool__retrieve_project" => "Retrieving a specific project for further detail",
"project_tool__ask_projects" => "Analyzing project data to answer your question",
"project_tool__query_projects" => "Querying projects for the appropriate project",
"project_tool__get_all_projects" => "Retrieving all of your projects for analysis"
}
tools_called = []
return "" if message.tool_calls.empty?
message.tool_calls.each do |tool_call|
next if tool_call["function"].nil?
tools_called << tool_hash[tool_call["function"]["name"]]
end
"#{tools_called.compact.join(", ")}..."
end
```
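Outside of Rails, the same lookup-and-join logic can be exercised with a faked message payload (the hash shapes here are assumed to mirror the OpenAI tool-call format; this is a standalone sketch, not the job code itself):

```ruby
# Lookup table mapping internal tool-call names to user-friendly text.
TOOL_MESSAGES = {
  "project_tool__update_project" => "Attempting to update the project",
  "project_tool__ask_projects"   => "Analyzing project data to answer your question"
}.freeze

# Same shape as the job's helper, but taking the tool_calls array directly.
def tool_status_message(tool_calls)
  return "" if tool_calls.empty?
  # dig handles missing "function" keys; filter_map drops unknown names.
  names = tool_calls.filter_map { |call| TOOL_MESSAGES[call.dig("function", "name")] }
  "#{names.join(', ')}..."
end

calls = [{ "function" => { "name" => "project_tool__update_project" } }]
puts tool_status_message(calls) # => Attempting to update the project...
```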
Wow, that was a lot. To review what we did:
1) We generated our background job
2) We pulled in our user that we are broadcasting to
3) We defined the tools that the assistant could use
4) We defined which LLM we were using
5) We set up our assistant
6) We set up a callback that fires on each message
7) We added our first message to our assistant
8) We passed in an additional message to our assistant to provide scope for tool use
9) We ran our assistant, telling it to use tools unprompted
10) We added a helper that parses messages to pull out tool calls for display to the user
**Tools**
Now that we have our assistant set up, we need to build the tool that the assistant will use to interact with our application. I've created a new folder within the app folder called tools (/app/tools), where I've created the project_tool.rb file.
Create a class and extend Langchain::ToolDefinition:
```Ruby
class ProjectTool
extend Langchain::ToolDefinition
```
Next you need to set up the initializer:
```Ruby
##
# Initialize the tool
def initialize(api_key:)
@api_key = api_key
end
```
Now, for each of the methods that we are exposing through the tool, we need to create a definition. In this case we are going to allow our language model to use the ask_projects(question:, company_id:) function; the definition describes the inputs, their types, and how they are used to the LLM.
```Ruby
##
# ask_projects function definition
# This function will allow the agent to ask a question about the projects
# for a given company
define_function :ask_projects,
description: "Ask Projects: Send a query that returns a natural language response to the question." do
property :question, type: "string",
description: "Ask a question to inquire about the company's projects if the appropriate information is not available",
required: true
property :company_id, type: "integer",
description: "The id of the company that the project belongs to",
required: true
end
```
The definition matches the ask_projects method below, which executes the langchainrb_rails .ask() method on the Project model. The company_id input reduces the number of projects analyzed.
```Ruby
##
# Ask a question about the projects.
def ask_projects(question:, company_id:)
Project.where(company_id: company_id).ask(question, k: 5).to_json
end
```
One issue I ran into was that the project's scope was stored as rich text and was not directly available when returning the project as JSON. To work around that, I built custom hashes to extract the scope. If you are not using Action Text, the remaining functions can be greatly simplified. Here's the entirety of the tool code:
```Ruby
##
# This tool will help the agent interact with the projects
class ProjectTool
extend Langchain::ToolDefinition
##
# ask_projects function definition
# This function will allow the agent to ask a question about the projects
# for a given company
define_function :ask_projects,
description: "Ask Projects: Send a query that returns a natural language response to the question." do
property :question,
type: "string", description: "Ask a question to inquire about the company's projects if the appropriate information is not available", required: true
property :company_id,
type: "integer",
description: "The id of the company that the project belongs to", required: true
end
##
# get_all_projects function definition
# This function will allow the agent to get all projects for a given company
define_function :get_all_projects,
description: "Get All Projects: Get all projects for a given company." do
property :company_id,
type: "integer",
description: "The id of the company that the project belongs to", required: true
end
##
# query_projects function definition
# This function will allow the agent to query the projects dataset for a given company
define_function :query_projects,
description: "Query Projects: Query projects returning a list of projects in descending order of relevance." do
property :query,
type: "string",
description: "Query used to search projects",
required: true
property :company_id,
type: "integer",
description: "The id of the company that the project belongs to", required: true
end
##
# retrieve_project function definition
# This function will allow the agent to retrieve a project by a specific id for a given company
define_function :retrieve_project,
description: "Retrieve Project: Retrieve a project by id and company id. Also includes project's scope details." do
property :project_id,
type: "integer",
description: "The project's id to be retrieved",
required: true
property :company_id,
type: "integer",
description: "The id of the company that the project belongs to", required: true
end
##
# update_project function definition
# This function will allow the agent to update a project for a given company
define_function :update_project,
description: "Update Project: Update a project for a given company." do
property :project_id,
type: "integer",
description: "The project's id to be updated",
required: true
property :company_id,
type: "integer",
description: "The id of the company that the project belongs to", required: true
property :title,
type: "string",
description: "The title of the project", required: false
end
##
# Initialize the tool
def initialize(api_key:)
@api_key = api_key
end
##
# Ask a question about the projects.
def ask_projects(question:, company_id:)
Project.where(company_id: company_id).ask(question, k: 5).to_json
end
##
# Get all projects for a given company
def get_all_projects(company_id:)
Project.where(company_id: company_id).to_json
end
##
# Filter the company's projects based on the vector search query
def query_projects(query:, company_id:)
Project.where(company_id: company_id).similarity_search(query, k: 5).to_json
end
##
# Retrieve the project by id, for the given company
def retrieve_project(project_id:, company_id:)
project = Project.find_by(id: project_id, company_id: company_id)
if project.present?
return { success: "Project retrieved successfully.",
project: {
id: project.id,
title: project.title,
scope: project.scope.to_s,
project_start_date: project.project_start_date,
fee_adder: project.fee_adder,
margin_adder: project.margin_adder,
escalation_adder: project.escalation_adder,
benefits_adder: project.benefits_adder
}
}.to_json
else
return { error: "Project not found." }.to_json
end
end
##
# Update the project's title for a given company
def update_project(project_id:, company_id:, title: nil)
project = Project.find_by(id: project_id, company_id: company_id)
project.title = title if title
if project.save
return { success: "Project updated successfully.",
project: { id: project.id,
title: project.title,
project_start_date: project.project_start_date,
fee_adder: project.fee_adder,
margin_adder: project.margin_adder,
escalation_adder: project.escalation_adder,
benefits_adder: project.benefits_adder
}
}.to_json
else
return { error: "Project not updated due to errors.",
error_messages: project.errors.full_messages
}.to_json
end
rescue
return { error: "Project not found." }.to_json
end
end
```
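Each tool method returns a JSON string rather than a raw Ruby object, so the assistant receives text it can reason over. A minimal plain-Ruby sketch of the response shape retrieve_project produces (illustrative field values, no Rails required):

```ruby
require "json"

# Hash mirroring the success branch of retrieve_project, serialized the
# same way (to_json) so the tool result is a parseable string.
response = {
  success: "Project retrieved successfully.",
  project: { id: 1, title: "Demo project" }
}.to_json

# The consumer can round-trip the string back into structured data.
parsed = JSON.parse(response)
puts parsed["project"]["title"] # => Demo project
```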
**Assistant Controller**
The assistant controller allows the interface to submit messages to the background job, and then triggers a turbo_stream that updates the submission form.
```Ruby
class AssistantController < ApplicationController
before_action :redirect_unless_logged_in
##
# Send a message to the assistant
def send_message
AssistantJob.perform_later(user_id: current_user.id, user_request: params[:message])
respond_to do |format|
format.turbo_stream { render :sent_message }
format.html { redirect_to company_projects_path(@company) }
end
end
end
```
The turbo_stream view rendered after kicking off the background job (sent_message.turbo_stream.erb) replaces the existing message form with a blank template. You could use this same process to stream a loading interface.
```erb
<%= turbo_stream.replace "message_form", partial: "assistant/message_form" %>
```
Here is the form partial. The key components of the form are the id="message_form", which is used by the turbo_stream as the target in the replacement process, and remote: true on the form, so that the form is submitted without reloading the entire page.
```erb
<div id="message_form" class="rounded-lg border-gray-400 border-2 p-4 mt-4">
<%= form_with url: assistant_send_message_path, remote: true, method: :post do |form| %>
<div class="font-extrabold text-gray-400 text-md">Message your assistant</div>
<%= form.text_field :message, placeholder: "Ask me anything...", class: "mt-2 w-full p-2 rounded-lg bg-gray-800 text-white shadow-lg" %>
<%= form.submit 'Send Message', class: "mt-4 bg-gray-100 font-bold hover:bg-white text-gray-800 px-2 py-1 rounded shadow-lg" %>
<% end %>
</div>
```
**Assistant Interface**
The assistant interface is composed of multiple partials. The assistant interface partial sets the initial state of the interface and includes the messages partial and the message form partial.
A key component of the assistant interface, needed to capture output from the background job, is turbo_stream_from current_user, which subscribes the current user to the stream the background job broadcasts to.
assistant_interface:
```erb
<div class="rounded-lg bg-gray-600 p-4 m-4 shadow-lg">
<div class="bg-green-300 rounded p-2">
<div class="text-gray-800 font-extrabold text-xl flex flex-column items-center">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-6 w-6 text-indigo-600">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-1">Your Project Assistant</div>
</div>
</div>
<div class="border-gray-500 border-2 rounded-lg p-4 text-white mt-4">
<%= turbo_stream_from current_user %>
<%= render partial: "assistant/messages", locals: { messages: messages } %>
<%= render partial: "assistant/message_form" %>
</div>
</div>
```
The messages partial sets up the initial state of the messages, inserting boilerplate messages and rendering the message partial for each existing message. A key component is the id="assistant_messages", which the background job will append to as messages come in.
messages:
```erb
<div id="assistant_messages">
<% if messages.empty? %>
<div class="rounded-lg bg-gray-800 p-2 text-gray-400 text-md flex flex-column items-center">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="flex-shrink-0 h-6 w-6 text-gray-500">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-2">
<div class="font-extrabold">Hi, <%= current_user.full_name %>!</div>
<div>I am your project assistant.</div>
<div>I am able to help you easily manage your projects. You can ask me questions about your projects, I can assist you with your updates, or you can ask me for advice in planning your projects.</div>
</div>
</div>
<div class="rounded-lg bg-gray-800 p-2 text-gray-400 text-md flex flex-column items-center mt-4">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="flex-shrink-0 h-6 w-6 text-gray-500">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-2">
<div class="flex flex-row items-center">
<div>How can I help you today?</div>
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="ml-2 flex-shrink-0 h-6 w-6 text-gray-500">
<path stroke-linecap="round" stroke-linejoin="round" d="M15.182 15.182a4.5 4.5 0 0 1-6.364 0M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0ZM9.75 9.75c0 .414-.168.75-.375.75S9 10.164 9 9.75 9.168 9 9.375 9s.375.336.375.75Zm-.375 0h.008v.015h-.008V9.75Zm5.625 0c0 .414-.168.75-.375.75s-.375-.336-.375-.75.168-.75.375-.75.375.336.375.75Zm-.375 0h.008v.015h-.008V9.75Z" />
</svg>
</div>
</div>
</div>
<% else %>
<% messages.each do |message| %>
<%= render partial: "assistant/message", locals: { message: message } %>
<% end %>
<% end %>
</div>
```
The message partial renders messages differently depending on the role or purpose of the message. Content from OpenAI is in Markdown and must be rendered to HTML to display properly.
```erb
<% case message[:role] %>
<% when "user" %>
<div class="rounded-lg bg-gray-700 p-2 text-gray-400 text-md flex flex-column items-center mt-4 flex flex-column">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="flex-shrink-0 h-6 w-6 text-green-500">
<path stroke-linecap="round" stroke-linejoin="round" d="m8.25 4.5 7.5 7.5-7.5 7.5" />
</svg>
<div class="pl-2">
<%= message[:content] %>
</div>
</div>
<% when "tool" %>
<div class="rounded-lg bg-gray-800 p-2 text-gray-400 text-md flex flex-column items-center mt-4">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="flex-shrink-0 h-6 w-6 text-gray-500">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-2">
<div class="flex flex-row items-center text-gray-400 font-italic flex flex-column">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="m4.5 12.75 6 6 9-13.5" />
</svg>
<div class="pl-2">Tool execution complete.</div>
</div>
</div>
</div>
<% else %>
<div class="rounded-lg bg-gray-800 p-2 text-gray-400 text-md flex flex-column items-center mt-4">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="flex-shrink-0 h-6 w-6 text-gray-500">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-2">
<div class="flex flex-row items-center">
<% if !message[:content].nil? %>
<div>
<%= markdown(message[:content]) %>
</div>
<% end %>
<div><%= message[:tool_status_message] %></div>
</div>
</div>
</div>
<% end %>
```
To display Markdown properly, you will need to add the redcarpet gem to your Gemfile.
```Ruby
##
# Markdown Rendering
gem "redcarpet", "~> 3.6"
```
Once you have the gem installed, you can add a helper to your application_helper.rb:
```Ruby
def markdown(text)
  render_options = { filter_html: true, safe_links_only: true, no_images: true, no_styles: true, hard_wrap: true, prettify: true }
  extensions = { autolink: true, no_intra_emphasis: true, fenced_code_blocks: true, underline: true, highlight: true }
  renderer = Redcarpet::Render::HTML.new(render_options)
  Redcarpet::Markdown.new(renderer, extensions).render(text).html_safe
end
```
**Let's take a break**
Congratulations! In this guide we learned how to set up a background job to run our assistant, gave that assistant tools, and passed messages back to our user using Hotwire. We set up a controller that accepts messages from the user and passes them to the background job, and we built an interface composed of partials that subscribes to the channel used by the background job to receive updates.
Going forward, there are some deficiencies in the interface that we need to clean up. Notably, each request is handled in isolation, without knowledge of prior requests. To solve this, we will utilize the langchainrb assistant and message models to save our messages so they can be loaded by the assistant interface.
**Overview**
In order to implement an assistant, we will implement a background job, modular front-end, and an assistant controller
The assistant will function as a background job, this allows for the application to handle slow API calls. The assistant will communicate background job to the front-end through a turbo stream channel. The assistant also can utilize tools which we will setup with the background job.
On the front-end we will have an interface that subscribes to the same turbo stream channel to receive updates. All updates will be done using HOTWIRE or HTML-Over-The-Wire. This greatly simplifies the entire implementation.
For the front-end to communicate to the back-end, we'll setup an assistant controller to handle form submissions to initiate the background job. In this iteration, in order to reduce initial complexity we will not be saving the state.
**Background Job**
We're setting up our background job to not only respond to chat requests, but it will also be able to interact with our application. The assistant will be able to perform updates, search and reason about our data. In my mind, that's pretty exciting.
Lets start out by generating our assistant job:
```Ruby
rails generate job Assistant
```
This will create an assistant_job.rb file in our /app/jobs folder.
We'll start off by updating the perform method to adding a named parameter (user_id: user_request:). The user id is used to broadcast the message back to the appropriate user, if you don't have user authentication setup then you can broadcast to a predefined channel.
```Ruby
def perform(user_id, user_request:)
```
Next we'll look up the user and setup the tool that our agent will use to interact with our application. We'll go into setting up the tool more later:
```Ruby
##
# Get the user, the user provides scope for the request
user = User.find_by(id:user_id)
##
# Setup tools
project_tool = ProjectTool.new(api_key: ENV["OPENAI_API_KEY"])
```
Now that we have our assistant setup, first we setup which large language model, then we can pass in the large language model we are using to our assistant along with instructions, and any tools that the assistant can use.
```Ruby
##
# Setup the LLM
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
##
# Setup the assistant
assistant = Langchain::Assistant.new(
llm: llm,
instructions: "You are an Earned Value Management System Assistant, knowleable on the EIA-748 standard. You will be assisting the user with their requests for information. When responding to their requests don't use id references.",
tools: [project_tool]
)
```
To send the results of our LLM as they come in we can setup callback whenever the LLM adds a new message. Here we will have our assistant broadcast to our user, and append messages to a div with an id of "assistant_messages", and add a partial called "assistant/message".
We then create a custom hash that represents the data, I found that my application was not happy when it received the raw message. Later we will add a method that will parse out the tool that the assistant is using just to let the user know what the assistant is working on.
```Ruby
##
# Broadcast messages to the user
assistant.add_message_callback = -> (message) do
Turbo::StreamsChannel.broadcast_append_to user,
target: "assistant_messages",
partial: "assistant/message",
locals: { message: {
content: message.content,
role: message.role,
tool_status_message: tool_status_message(message) } }
end
```
Finally we can add the message that the user supplied. I've also added an additional message to provide additional information to the assistant for using tools later.
Finally the run command executes the assistant and the auto_tool_execution option will tell the assistant to execute the tools without asking the user first.
```Ruby
##
# Add the message to the assistant
assistant.add_message(content: user_request)
##
# Add the company id to the assistant, just so that there's some added scope
assistant.add_message(content: "The company id is #{user.company_id}.")
##
# Run the assistant
messages = assistant.run(auto_tool_execution: true)
```
As the assistant runs various tools, I wanted to give the user feedback regarding which tools were run, to do that I created a method that would see the tools that were executed and parse that out.
By default when langchain calls our tools internally tool calls are referenced by the too name, and then the method name. So a call to the project tool update_project method results in 'project_tool__update_project'. I start by creating a hash to be used to lookup the user friendly message to be displayed to the user.
Then we will iterate through the message structure which is dynamic so we'll have to check to see if the various structure elements exist before referencing our hash. All the tool calls are then joined if multiple tool calls were performed simultaneously.
```Ruby
##
# Parse the message to determine extracting a status message
def tool_status_message(message)
tool_hash = {
"project_tool__update_project" => "Attempting to update the project",
"project_tool__retrieve_project" => "Retrieving a specific project for further detail",
"project_tool__ask_projects" => "Analyzing project data to answer your question",
"project_tool__query_projects" => "Querying projects for the appropriate project",
"project_tool__get_all_projects" => "Retrieving all of your projects for analysis"
}
tools_called = Array.new
if !message.tool_calls.empty?
message.tool_calls.each do |tool_call|
if !tool_call["function"].nil?
tools_called << tool_hash[tool_call["function"]["name"]]
end
end
return "#{tools_called.join(", ")}..."
else
return ""
end
end
```
Wow, that was a lot, to review what was done:
1) We generated our background job
2) We pulled in our user that we are broadcasting to
3) We defined the tools that the assistant could use
4) We defined which LLM we were using
5) We setup our assistant
6) We setup our callback that was called upon each message
7) We added our first message to our assistant
8) We passed in an additional message to our assistant to provide scope for tool use
9) We then run our assistant telling the assistant to use tools unprompted
10) We added a helper to parse the message to pull out tool calls to provide our user.
Tools
Now that we have our assistant setup, we need to setup the tool that the assistant will use to interact with our application. I've created a new folder within the application folder called tools (/app/tools), where I've created the project_tool.rb file.
Create a class and include the extend Langchain::ToolDefinition:
```Ruby
class ProjectTool
extend Langchain::ToolDefinition
```
Next you need to setup the initalizer:
```Ruby
##
# Initialize the tool
def initialize(api_key:)
@api_key = api_key
end
```
Now for each of the methods that we are exposing to the tool we need to create a definition. In this case we are going to allow our language model to use the ask_project(question:, company_id:) function, the definition defines the inputs, their their types and how they are used to the LLM.
```Ruby
##
# ask_projects function definition
# This function will allow the agent to ask a question about the projects for # a given company
define_function :ask_projects,
description: "Ask Projects: Send a query that returns a natural language response to the question." do
property :question, type: "string",
description: "Ask a question to inquire about the companies projects in the appropriate information is not available",
required: true
property :company_id, type: "integer",
description: "The id of the company that the project belongs to",
required: true
end
```
The definition matches the ask_project function below, which executes the langchain .ask() function on the Project model. The company_id input reduces the amount of projects analyzed.
``` RUby
##
# Ask a question about the projects.
def ask_projects(question:, company_id:)
Project.where(company_id: company_id).ask(question, k: 5).to_json
end
```
One of the issues that I ran into, was that the project's scope was in rich text and was not directly available when returning the project as json, to work around that I generated custom hashes to extract the scope. If you are not using action text the remaining functions can be greatly simplified. Here's the entirety of the tool code:
``` Ruby
##
# This tool will help the agent interact with the projects
class ProjectTool
extend Langchain::ToolDefinition
##
# ask_projects function definition
# This function will allow the agent to ask a question about the projects for # a given company
define_function :ask_projects,
description: "Ask Projects: Send a query that returns a natural language response to the question."
do
property :question,
type: "string", description: "Ask a question to inquire about the companies projects in the appropriate information is not available", required: true
property :company_id,
type: "integer",
description: "The id of the company that the project belongs to", required: true
end
##
# get_all_projects function definition
# This function will allow the agent to get all projects for a given company
define_function :get_all_projects,
description: "Get All Projects: Get all projects for a given company."
do
property :company_id,
type: "integer",
description: "The id of the company that the project belongs to", required: true
end
##
# query_projects function definition
# This function will allow the agent to query the projects datast for a given company
define_function :query_projects,
description: "Query Projects: Query projects returning a list of projects in descending order of relevance."
do
property :query,
type: "string",
description: "Query used to search projects",
required: true
property :company_id,
type: "integer",
description: "The id of the company that the project belongs to", required: true
end
  ##
  # retrieve_project function definition
  # This function will allow the agent to retrieve a project by a specific id for a given company
  define_function :retrieve_project,
                  description: "Retrieve Project: Retrieve a project by id and company id. Also includes project's scope details." do
    property :project_id,
             type: "integer",
             description: "The project's id to be retrieved",
             required: true
    property :company_id,
             type: "integer",
             description: "The id of the company that the project belongs to",
             required: true
  end
  ##
  # update_project function definition
  # This function will allow the agent to update a project for a given company
  define_function :update_project,
                  description: "Update Project: Update a project for a given company." do
    property :project_id,
             type: "integer",
             description: "The project's id to be updated",
             required: true
    property :company_id,
             type: "integer",
             description: "The id of the company that the project belongs to",
             required: true
    property :title,
             type: "string",
             description: "The title of the project",
             required: false
  end
  ##
  # Initialize the tool
  def initialize(api_key:)
    @api_key = api_key
  end

  ##
  # Ask a question about the projects
  def ask_projects(question:, company_id:)
    Project.where(company_id: company_id).ask(question, k: 5).to_json
  end
  ##
  # Get all projects for a given company
  def get_all_projects(company_id:)
    Project.where(company_id: company_id).to_json
  end

  ##
  # Filter the company's projects based on the vector search query
  def query_projects(query:, company_id:)
    Project.where(company_id: company_id).similarity_search(query, k: 5).to_json
  end
  ##
  # Retrieve the project by id, for the given company
  def retrieve_project(project_id:, company_id:)
    project = Project.find_by(id: project_id, company_id: company_id)
    if project.present?
      { success: "Project retrieved successfully.",
        project: {
          id: project.id,
          title: project.title,
          scope: project.scope.to_s,
          project_start_date: project.project_start_date,
          fee_adder: project.fee_adder,
          margin_adder: project.margin_adder,
          escalation_adder: project.escalation_adder,
          benefits_adder: project.benefits_adder
        }
      }.to_json
    else
      { error: "Project not found." }.to_json
    end
  end
  ##
  # Update the project's title for a given company
  def update_project(project_id:, company_id:, title: nil)
    project = Project.find_by(id: project_id, company_id: company_id)
    return { error: "Project not found." }.to_json if project.nil?

    project.title = title unless title.nil?
    if project.save
      { success: "Project updated successfully.",
        project: {
          id: project.id,
          title: project.title,
          project_start_date: project.project_start_date,
          fee_adder: project.fee_adder,
          margin_adder: project.margin_adder,
          escalation_adder: project.escalation_adder,
          benefits_adder: project.benefits_adder
        }
      }.to_json
    else
      { error: "Project not updated due to errors.",
        error_messages: project.errors.full_messages
      }.to_json
    end
  end
end
```
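Notice that every tool method returns a JSON string rather than a Ruby object: the agent consumes plain text, so both the success and failure paths are wrapped in a consistent envelope it can parse. The framework-free sketch below mirrors that pattern; `FakeProject` and its fields are illustrative stand-ins for the ActiveRecord model, not part of the actual tool.

```Ruby
require "json"

# Stand-in for the ActiveRecord Project model; the real tool uses
# Project.find_by(id:, company_id:). FakeProject is purely illustrative.
FakeProject = Struct.new(:id, :title, :scope, keyword_init: true)

PROJECTS = [FakeProject.new(id: 1, title: "Solar Install", scope: "Install 40 panels")]

# Mirrors retrieve_project's shape: always return a JSON string in a
# success/error envelope, so the agent receives parseable text either way.
def retrieve_project(project_id:)
  project = PROJECTS.find { |p| p.id == project_id }
  if project
    { success: "Project retrieved successfully.",
      project: { id: project.id, title: project.title, scope: project.scope.to_s } }.to_json
  else
    { error: "Project not found." }.to_json
  end
end

puts retrieve_project(project_id: 1)
puts retrieve_project(project_id: 99) # => {"error":"Project not found."}
```

Returning an error envelope instead of raising lets the agent read the failure and recover, for example by asking the user for a valid project id.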
**Assistant Controller**
The assistant controller gives the interface a way to submit messages to the background job, and then renders a turbo_stream that resets the submission form.
```Ruby
class AssistantController < ApplicationController
  before_action :redirect_unless_logged_in

  ##
  # Send a message to the assistant
  def send_message
    AssistantJob.perform_later(user_id: current_user.id, user_request: params[:message])
    respond_to do |format|
      format.turbo_stream { render :sent_message }
      # @company must be assigned (e.g. in a before_action) for the HTML fallback
      format.html { redirect_to company_projects_path(@company) }
    end
  end
end
```
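The form in the interface posts to `assistant_send_message_path`, so a matching route must map that POST to the controller action. A minimal sketch; the path and route name here are assumptions, so adjust them to fit your routes file:

```Ruby
# config/routes.rb
post "assistant/send_message", to: "assistant#send_message", as: :assistant_send_message
```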
The turbo_stream rendered after kicking off the background job (sent_message.turbo_stream.erb) replaces the existing message form with a blank template. You could use this same process to also stream a loading interface.
```Ruby
<%= turbo_stream.replace "message_form", partial: "assistant/message_form" %>
```
Here is the form partial. The key components of the form are the id="message_form", which the turbo_stream uses as the target in the replacement process, and remote: true, so that the form is submitted without redirecting the entire page.
```Ruby
<div id="message_form" class="rounded-lg border-gray-400 border-2 p-4 mt-4">
<%= form_with url: assistant_send_message_path, remote: true, method: :post do |form| %>
<div class="font-extrabold text-gray-400 text-md">Message your assistant</div>
<%= form.text_field :message, placeholder: "Ask me anything...", class: "mt-2 w-full p-2 rounded-lg bg-gray-800 text-white shadow-lg" %>
<%= form.submit 'Send Message', class: "mt-4 bg-gray-100 font-bold hover:bg-white text-gray-800 px-2 py-1 rounded shadow-lg" %>
<% end %>
</div>
```
**Assistant Interface**
The assistant interface is composed of multiple partials to generate the interface. The assistant interface partial sets the initial state of the interface and includes the messages partial and the message form partial.
A key component of the assistant interface, needed to receive output from the background job, is turbo_stream_from current_user, which subscribes the current user to the stream the background job broadcasts on.
assistant_interface:
```Ruby
<div class="rounded-lg bg-gray-600 p-4 m-4 shadow-lg">
<div class="bg-green-300 rounded p-2">
<div class="text-gray-800 font-extrabold text-xl flex flex-column items-center">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-6 w-6 text-indigo-600">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-1">Your Project Assistant</div>
</div>
</div>
<div class="border-gray-500 border-2 rounded-lg p-4 text-white mt-4">
<%= turbo_stream_from current_user %>
<%= render partial: "assistant/messages", locals: { messages: messages } %>
<%= render partial: "assistant/message_form" %>
</div>
</div>
```
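On the job side, the broadcast that pairs with `turbo_stream_from current_user` might look like the sketch below. It assumes turbo-rails' `Turbo::StreamsChannel` API and the `assistant_messages` target from the messages partial; the `user` variable and the message hash shape are illustrative:

```Ruby
# Inside AssistantJob: append a rendered message partial to the user's
# stream; turbo_stream_from current_user subscribes to the same stream.
Turbo::StreamsChannel.broadcast_append_to(
  user,                         # the same streamable passed to turbo_stream_from
  target: "assistant_messages", # the container div in the messages partial
  partial: "assistant/message",
  locals: { message: { role: "assistant", content: response_text } }
)
```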
The messages partial sets up the initial state of the messages, inserting boilerplate messages and rendering the message partial for each existing message. A key component is the id="assistant_messages", which the background job will append to as messages come in.
messages:
```Ruby
<div id="assistant_messages">
<% if messages.empty? %>
<div class="rounded-lg bg-gray-800 p-2 text-gray-400 text-md flex flex-column items-center">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="flex-shrink-0 h-6 w-6 text-gray-500">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-2">
<div class="font-extrabold">Hi, <%= current_user.full_name %>!</div>
<div>I am your project assistant.</div>
<div>I am able to help you easily manage your projects. You can ask me questions about your projects, I can assist you with your updates, or you can ask me for advice in planning your projects.</div>
</div>
</div>
<div class="rounded-lg bg-gray-800 p-2 text-gray-400 text-md flex flex-column items-center mt-4">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="flex-shrink-0 h-6 w-6 text-gray-500">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-2">
<div class="flex flex-row items-center">
<div>How can I help you today?</div>
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="ml-2 flex-shrink-0 h-6 w-6 text-gray-500">
<path stroke-linecap="round" stroke-linejoin="round" d="M15.182 15.182a4.5 4.5 0 0 1-6.364 0M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0ZM9.75 9.75c0 .414-.168.75-.375.75S9 10.164 9 9.75 9.168 9 9.375 9s.375.336.375.75Zm-.375 0h.008v.015h-.008V9.75Zm5.625 0c0 .414-.168.75-.375.75s-.375-.336-.375-.75.168-.75.375-.75.375.336.375.75Zm-.375 0h.008v.015h-.008V9.75Z" />
</svg>
</div>
</div>
</div>
<% else %>
<% messages.each do |message| %>
<%= render partial: "assistant/message", locals: { message: message } %>
<% end %>
<% end %>
</div>
```
The message partial renders messages differently depending on the message's role or purpose. Content from OpenAI is in Markdown and must be rendered to HTML to display properly.
```Ruby
<% case message[:role] %>
<% when "user" %>
<div class="rounded-lg bg-gray-700 p-2 text-gray-400 text-md flex flex-column items-center mt-4 flex flex-column">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="flex-shrink-0 h-6 w-6 text-green-500">
<path stroke-linecap="round" stroke-linejoin="round" d="m8.25 4.5 7.5 7.5-7.5 7.5" />
</svg>
<div class="pl-2">
<%= message[:content] %>
</div>
</div>
<% when "tool" %>
<div class="rounded-lg bg-gray-800 p-2 text-gray-400 text-md flex flex-column items-center mt-4">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="flex-shrink-0 h-6 w-6 text-gray-500">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-2">
<div class="flex flex-row items-center text-gray-400 font-italic flex flex-column">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="m4.5 12.75 6 6 9-13.5" />
</svg>
<div class="pl-2">Tool execution complete.</div>
</div>
</div>
</div>
<% else %>
<div class="rounded-lg bg-gray-800 p-2 text-gray-400 text-md flex flex-column items-center mt-4">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="flex-shrink-0 h-6 w-6 text-gray-500">
<path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 1 1 9 0 4.5 4.5 0 0 1-9 0ZM3.751 20.105a8.25 8.25 0 0 1 16.498 0 .75.75 0 0 1-.437.695A18.683 18.683 0 0 1 12 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 0 1-.437-.695Z" clip-rule="evenodd" />
</svg>
<div class="pl-2">
<div class="flex flex-row items-center">
<% if !message[:content].nil? %>
<div>
<%= markdown(message[:content]) %>
</div>
<% end %>
<div><%= message[:tool_status_message] %></div>
</div>
</div>
</div>
<% end %>
```
To display Markdown properly you will need to add the redcarpet gem to your Gemfile.
```Ruby
##
# Markdown Rendering
gem "redcarpet", "~> 3.6"
```
Once you have the gem installed, you can add a helper to your application_helper.rb:
```Ruby
def markdown(text)
  # Render options sanitize the output; extensions control parsing.
  renderer = Redcarpet::Render::HTML.new(filter_html: true, no_images: true,
                                         no_styles: true, safe_links_only: true,
                                         hard_wrap: true, prettify: true)
  engine = Redcarpet::Markdown.new(renderer,
                                   autolink: true, no_intra_emphasis: true,
                                   fenced_code_blocks: true, underline: true,
                                   highlight: true)
  engine.render(text).html_safe
end
```
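Because the helper marks its output html_safe, Rails' automatic escaping is bypassed, and the sanitizing render options (:filter_html, :safe_links_only, :no_styles) carry the safety burden. A framework-free illustration of what escaping would otherwise do, using only the Ruby standard library (the payload string is illustrative):

```Ruby
require "erb"

# An illustrative hostile payload, as might appear in model output.
payload = %(<script>alert("hi")</script>)

# ERB::Util.html_escape is the same escaping Rails applies by default;
# html_safe skips it, which is why the Redcarpet options must sanitize.
escaped = ERB::Util.html_escape(payload)
puts escaped # => &lt;script&gt;alert(&quot;hi&quot;)&lt;/script&gt;
```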
**Let's take a break**
Congratulations! In this guide we learned how to set up a background job to act as our assistant, equipped that assistant with tools, and passed messages back to our user using Hotwire. We set up a controller to accept messages from the user and hand them to the background job, and we built an interface composed of partials that subscribe to the channel the background job broadcasts on.
Going forward there are some deficiencies in the interface that we need to clean up. Notably, each request is handled in isolation, without knowledge of prior requests. To solve this we will utilize the langchain assistant and message models to save our messages so they can be loaded by the assistant interface.