Hello! Here are some book updates.
Below is an example from the current chapter showing how we can vectorize our data and then ask it questions. The chapter will cover adding images and voice as well. It is a huge chapter but should be done by Tuesday next week!
Retrieval-Augmented Generation (RAG) is a technique that enhances AI language models by combining their built-in knowledge with relevant information retrieved from an external database. This allows the AI to provide more accurate, up-to-date, and context-specific responses by drawing on both its general knowledge and specific, retrieved facts. --Claude
The previous chapter, "Prompting Overview", is ready to read now. If you buy the book today, you will be notified as more chapters come out.
👉🏻 Book Store https://bit.ly/php_llms
I've had a lot of good feedback about the prompting chapter and really think it can help people save time and learn prompting skills more quickly, something I did not even think was a real skill two years ago 🙂
Enjoy!
PS: a quick video on tagging your data! https://www.tiktok.com/@alfrednutile/video/7409751004083768619?lang=en
Chapter Excerpt BELOW
Our First RAG Question
We are going to build this "vector search" as a Tool shortly, but for now we will use TinkerWell to show how this works so we can keep it simple and all in one place.
{format: php}
{title: "TinkerWell"}
use App\Models\Chunk;
use App\Services\LlmServices\Requests\MessageInDto;
use Illuminate\Support\Facades\Http;
use Pgvector\Laravel\Distance;
use Pgvector\Vector;
$question = "What are the top two tips to being a good teammate";
// Ask the local Ollama server to embed the question with the nomic-embed-text model.
$embedding = Http::withHeaders([
    "Content-Type" => "application/json",
    "Authorization" => "Bearer ollama",
])
    ->post("http://localhost:11434/api/embed", [
        "model" => "nomic-embed-text",
        "input" => $question,
    ])
    ->json();

// Wrap the raw floats in a Vector object so pgvector can compare them to our stored embeddings.
$embedding = new Vector($embedding["embeddings"][0]);
// Grab the three chunks nearest to the question. The nearest-neighbor ordering
// must come first so it is not overridden by the sort_order column.
$searchResults = Chunk::query()
    ->nearestNeighbors("embedding_768", $embedding, Distance::Cosine)
    ->orderBy("sort_order")
    ->limit(3)
    ->get()
    ->pluck("content")
    ->implode("\n");
// Put the retrieved chunks into the system prompt so the model answers from our data.
$messages = [
    MessageInDto::from([
        "role" => "system",
        "content" => "You are an assistant with our collection of data about Pickleball. Do not answer questions outside the content given. If you do not know the answer due to a lack of information, ask the user to rephrase their question.\n\nContext:\n" . $searchResults,
    ]),
    MessageInDto::from([
        "role" => "user",
        "content" => $question,
    ]),
];
// The MessageInDto objects are assumed to JSON-encode as ["role" => ..., "content" => ...] arrays.
$payload = [
    "model" => "llama3",
    "messages" => $messages,
    "stream" => false,
    "options" => [
        "temperature" => 0,
    ],
];
// Send the chat request to the local llama3 model and read the JSON response.
$results = Http::withHeaders([
    "Content-Type" => "application/json",
])
    ->timeout(120)
    ->baseUrl("http://localhost:11434/api")
    ->post("/chat", $payload);

$results->json();
NOTE: We set the temperature to 0, and this is key. We are saying "do not be creative, just stick to the facts." You could set it higher, say 0.2, to experiment. You can read more about this in the OpenAI docs.
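If you only want the assistant's reply rather than the whole JSON response, you can pull it out with a dot-notation key. This is a minimal sketch that assumes Ollama's non-streaming /api/chat response keeps the reply under a `message` object:

{format: php}
{title: "TinkerWell"}
// Grab just the assistant's text from the response.
// Assumes the non-streaming response shape: {"message": {"role": "assistant", "content": "..."}}
$answer = $results->json("message.content");

echo $answer;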
Results:
{format: text}
{title: "Results from question"}
"""
Based on our Pickleball data, here are the top two tips to being a good teammate:\n
\n
1. **Communicate effectively**: Good communication is key to success in Pickleball. As a teammate, you should be able to clearly call out your shots, signal to your partner when you're open or not, and provide encouragement and support throughout the game.\n
2. **Read the play and adapt**: A good teammate needs to be able to read the opposing team's strategy and adjust their own gameplay accordingly. This means paying attention to the opponents' strengths and weaknesses, anticipating their shots, and making smart decisions about when to take risks or play it safe.\n
\n
These two tips can help you build a strong foundation as a Pickleball teammate and improve your chances of winning!
"""
NOTE: I will use the Laravel docs shortly, but we need to refactor before we can put in that much content.
Let's review each of these steps in detail.