a tool for effective management of models

Introduction

Recently, rapid progress has been observed in the field of natural language processing. The emergence of powerful language models such as GPT and Bard has opened up new possibilities for building intelligent applications. At the same time, developers now need more advanced tools to integrate and manage such models effectively.

Recently, Microsoft introduced Guidance, a control language designed for steering large language models. In our opinion, this tool can significantly change how AI-powered applications are developed.

In this article, we would like to get to know its features and understand what kind of “beast” it is. We hope this information will be useful for developers, researchers, and organizations actively working to improve LLM behavior control.

We invite you to read!

Straight to the point

When we started testing Guidance in our company, it was a really positive experience.

First, we liked Guidance’s special Handlebars-based syntax for describing, step by step, how the language model processes data.

Because of this, Guidance code executes sequentially, mirroring the order in which the model processes the input text. This let us precisely control the text generation process.

For example, we could first define an output template, then specify the data to populate it, and Guidance would neatly produce the output in the given format. This is much more convenient than the traditional prompting approach, where you repeatedly rerun the model until you get the desired result.
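The template-then-fill flow described above can be sketched with a tiny Handlebars-style substitution in plain Python. This is a toy illustration of the idea only, not the Guidance library itself; the `render` helper and the field names are our own:

```python
import re

# Toy sketch of the "define a template, then fill it" flow described above.
# This imitates Handlebars-style {{variable}} substitution in plain Python;
# render() and the field names are our own illustration, not the Guidance API.
template = "Report for {{project}}:\nStatus: {{status}}\nNext step: {{next_step}}"

def render(template, **fields):
    """Fill each {{name}} slot with the corresponding value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(fields[m.group(1)]), template)

report = render(template, project="Guidance demo", status="on track", next_step="add tests")
```

In real Guidance the slots can also be `{{gen ...}}` commands filled by the model itself, but the single-pass, template-first shape is the same.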

In addition, single-pass generation in Guidance saves computational resources compared to calling the model multiple times. It integrates easily with providers such as Hugging Face, and includes a smart generation-caching system as well as token healing, which optimizes prompt boundaries and eliminates tokenization bias. Regex pattern guides can additionally enforce output formats during prompt completion.

Another useful feature of Guidance is its support for selection constructs when generating text, which let us describe branching logic in a program. For example, the model can be constrained to choose among several options:

```
{{#select "answer"}}
Yes
{{or}}
No
{{/select}}
```

Thanks to such choice structures, Guidance lets you flexibly tailor the generation of text or data to the situation: producing reports, documentation, step-by-step processing, or pipelines.
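The effect of a select block can be sketched in plain Python: generation is restricted to a fixed set of options, and the best-scoring one is picked. Here a hand-written scorer stands in for the model's probabilities; this is our own toy illustration, not Guidance internals:

```python
# Toy sketch of a select-style constrained choice. A real model would score
# each option by its likelihood; a hand-written scorer stands in for it here.
OPTIONS = ["Yes", "No"]

def toy_score(prompt, option):
    """Stand-in for a model log-probability: prefer 'Yes' for positive prompts."""
    positive = any(w in prompt.lower() for w in ("great", "good", "love"))
    return 1.0 if (option == "Yes") == positive else 0.0

def select(prompt, options):
    # Only options from the fixed list can ever be produced,
    # which is exactly what the {{#select}} construct guarantees.
    return max(options, key=lambda o: toy_score(prompt, o))

answer = select("Is this library great?", OPTIONS)
```

The key property is that no matter how the scorer behaves, the output is always one of the allowed options.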

In addition, a big advantage of Guidance is the ability to connect different language models, not just specific offerings like GPT-3 or Codex. The connection is made not through an API but via the built-in `transformers` integration, as in this example for a self-hosted LLaMA model:

```python
llm = guidance.llms.Transformers("your_local_path/llama-7b", device=0)
```

This gives us flexibility in choosing the optimal model for the task and the ability to easily change it as needed, without limiting ourselves to popular cloud solutions.

In addition, Guidance’s built-in facilities for testing and debugging applications proved a big help. For example, you can display and analyze intermediate results of text generation during development, or compare the program’s output on different sets of input data. All of this greatly simplifies tuning the logic and achieving the desired quality of the application’s text output.

What is Guidance?

When our development team started exploring Guidance, we were pleasantly surprised by the number of useful and interesting features.

We mentioned token healing above, but it deserves a closer look.
Language models are typically built on standard “greedy” tokenizers, which unfortunately introduce a hidden but fairly strong bias that can make an LLM behave unpredictably.

Let’s take a look at a specific example from the GitHub documentation. Let’s say we’re trying to generate a URL string:

```python
# We use StableLM as an open example, but these issues affect all models to some degree
guidance.llm = guidance.llms.Transformers("stabilityai/stablelm-base-alpha-3b", device=0)

# We disable token healing so that guidance behaves like a normal prompting library
program = guidance('''Link: <a href="http:{{gen max_tokens=10 token_healing=False}}''')
program()
```

Note the oddity: instead of the expected `http://`, the model produces something like `http:/`. The reason is that the tokenizer treats `://` as a single token. Having committed to the token `:`, the model concludes that `//` cannot follow, because if `://` had been intended, that single token would have been used instead.

This is not limited to the colon; the situation occurs almost everywhere: more than 70% of common tokens are prefixes of longer tokens, which leads to token-boundary errors.
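The boundary effect can be reproduced with a toy greedy tokenizer. This is our own sketch of the mechanism described above, not Guidance code; the miniature vocabulary is chosen just to show the `://` case:

```python
# Toy demonstration of how a greedy (longest-match) tokenizer creates
# token-boundary bias. The tiny vocabulary is our own illustration.
VOCAB = ["http", "://", ":", "//", "/"]

def greedy_tokenize(text):
    """Longest-match-first tokenization, like typical greedy tokenizers."""
    tokens = []
    while text:
        match = max((t for t in VOCAB if text.startswith(t)), key=len)
        tokens.append(match)
        text = text[len(match):]
    return tokens

# In training text, "http://" is always tokenized with the single token "://"...
assert greedy_tokenize("http://") == ["http", "://"]
# ...so a model never sees "//" follow the lone token ":". But a prompt
# ending in "http:" is forced into exactly that token sequence:
assert greedy_tokenize("http:") == ["http", ":"]
# Conditioned on ":", the model avoids generating "//", producing the
# truncated "http:/" behavior shown above.
```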

In Guidance, this is solved with token healing: the model is rolled back one token and then allowed to move forward, constrained to generate only tokens whose text starts with the text of that last token. This eliminates the tokenizer’s bias and lets the prompt be completed naturally.

```python
program = guidance('''Link: <a href="http:{{gen token_healing=True}}''')
output = program()
# now correctly generates http://
```

Another big plus for us as developers is the convenient code-debugging tooling. The step-by-step execution feature is especially useful: it let us analyze the execution of each line in detail and optimize the program’s logic.

In general, the unique capabilities of Guidance allow us to maximize the potential of AI language models to create truly complex and interesting solutions.

And what else?

As we studied Guidance, we became increasingly aware of the usefulness of this tool’s capabilities.

First, in practice we verified that Guidance achieves much more accurate text generation than traditional methods: thanks to step-by-step execution, the entire process can be finely controlled.

Another cool thing is function calling. Any Python function can be invoked with generated variables as arguments; the function is called during prompt execution:

```python
def aggregate(best):
    return '\n'.join(['- ' + x for x in best])

prompt = guidance('''The best thing at the beach is {{~gen 'best' n=3 temperature=0.7 max_tokens=7 hidden=True}}
{{aggregate best}}''')
prompt = prompt(aggregate=aggregate)
print(prompt)
```

This way, you can use any Python function inside Guidance prompts, calling it with arguments generated by the model. This expands the possibilities for building prompts.

Beyond plain text, Guidance makes it convenient to work with structured data: creating tables and generating reports in the desired format. The output is always valid; for example, JSON is correctly formatted even with complex nested structures, and the same applies to XML, CSV, and other formats.
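The reason the JSON stays valid is that the skeleton is fixed in the template and the model only fills the values. Here is a toy sketch of that idea in plain Python; the field names and the `fill_template` helper are our own illustration, with precomputed values standing in for the model's generations:

```python
import json
import re

# Sketch of the idea described above: the JSON skeleton is fixed in the
# template, so only the values are generated and the output stays valid.
# fill_template() and the field names are our own, not the Guidance API.
template = '''{
  "name": "{{gen 'name'}}",
  "age": {{gen 'age'}},
  "tags": ["{{gen 'tag'}}"]
}'''

def fill_template(template, values):
    """Replace each {{gen 'var'}} placeholder with a precomputed value,
    standing in for the model's in-place generation."""
    return re.sub(r"\{\{gen '(\w+)'\}\}", lambda m: str(values[m.group(1)]), template)

output = fill_template(template, {"name": "Alice", "age": 30, "tag": "beach"})
parsed = json.loads(output)  # the fixed skeleton guarantees well-formed JSON
```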

Results

After spending a lot of time testing Guidance, we can draw the following conclusions. This tool really offers a new approach to creating AI-powered applications.

Thanks to flexible management tools, we managed to develop several working applications – systems for generating reports and unique content. At the same time, deep knowledge in machine learning was not required.

In addition, with the help of Guidance, we managed to create a text analysis system with further conversion into the required data structures. We used Guidance’s capabilities to extract information from text and convert it to other formats. This made it possible to automate routine work with large volumes of text data.
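As a rough illustration of the pipeline shape, here is a toy sketch (our own, not the actual system) that extracts fields from free text and emits them as CSV; in the real system the extraction step was driven by Guidance templates rather than a hand-written regular expression:

```python
import csv
import io
import re

# Toy text-to-structure pipeline: extract fields from free text,
# then serialize the record as CSV. The sample sentence, pattern, and
# field names are our own illustration.
text = "Order 1042 was shipped to Berlin on 2023-05-11."

pattern = re.compile(r"Order (?P<id>\d+) was shipped to (?P<city>\w+) on (?P<date>[\d-]+)")
record = pattern.search(text).groupdict()

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "city", "date"])
writer.writeheader()
writer.writerow(record)
csv_out = buf.getvalue()
```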

In our opinion, Guidance can be of interest to many IT companies and startups due to such accessibility of the technology. We are sure that many innovative products will soon appear on its basis.

Overall, this tool has the potential to simplify the development of intelligent applications and expand the use of AI in various fields. We are optimistic about its prospects.
