Using Forms to collect leads

· 5 min read
Julia Schröder Langhaeuser
Product Director

In today's competitive digital landscape, generating high-quality leads is crucial for business growth. AI Agents can transform your lead generation process by engaging with visitors in a conversational manner while seamlessly collecting valuable information. In this article, we'll explore how to use the Forms capability in Serenity* AI Hub to create an effective lead collection system.

Why Use Forms for Lead Generation?

Traditional lead forms can be static and impersonal, often leading to high abandonment rates. Conversational AI offers several advantages:

  • Natural Interaction: Visitors engage in a conversation rather than filling out fields on a form
  • Progressive Data Collection: Information is gathered gradually throughout the conversation
  • Personalized Experience: The agent adapts its questions based on previous responses
  • Higher Completion Rates: Less intimidating than lengthy traditional forms
  • 24/7 Availability: Collect leads around the clock without human intervention

Setting Up Your Lead Generation Form

Let's create an Assistant Agent that will engage with website visitors and collect lead information for your sales team.

Prerequisites

  • Access to Serenity* AI Hub
  • A clear understanding of what lead information you need to collect

Step 1: Create a Lead Generation Agent

Start by creating a new Assistant Agent specifically designed for lead collection. You can either use a template or create one from scratch.

Configure your agent with a personality that aligns with your brand. For lead generation, a friendly and helpful tone usually works best.

Step 2: Configure the Forms Feature

Navigate to the Forms tab in your agent configuration.

Forms Tab

Click on "Add Field" to begin building your lead capture form.

Step 3: Define Lead Information Fields

For an effective lead generation form, consider adding these essential fields:

  1. Name (Text type)

    • Set as required
    • Instructions: "Ask for the person's full name"
  2. Email (Email type)

    • Set as required
    • Instructions: "Collect a valid email address for follow-up"
  3. Phone Number (Phone type)

    • Optional
    • Instructions: "Request phone number including country code if the lead seems interested in a call"
  4. Company (Text type)

    • Set as required
    • Instructions: "Ask for the company name"
  5. Role/Position (Text type)

    • Optional
    • Instructions: "Inquire about their position or role in the company"
  6. Interest Level (Select type)

    • Options: High, Medium, Low
    • Instructions: "Based on the conversation, assess their interest level"
  7. Product Interest (Select type)

    • Options: [Your specific products/services]
    • Instructions: "Determine which of our products or services they're most interested in"
  8. Additional Notes (Text type)

    • Optional
    • Instructions: "Capture any other relevant information shared during the conversation"

For each field, configure how it should be collected:

Field Configuration Example

Step 4: Set Collection Instructions

For a lead generation use case, it's usually best to select "After a Few Messages" as the collection timing. This allows the agent to build rapport before asking for information.

Collection Timing

For custom instructions, you might use:

"Engage in a natural conversation first. Ask about the visitor's challenges or needs. After establishing rapport, begin collecting lead information in a conversational way. Don't ask for all information at once - spread questions throughout the conversation."

Step 5: Test Your Lead Generation Agent

Use the preview chat to test how your agent collects lead information. Pay attention to:

  • How naturally the questions are integrated into the conversation
  • Whether the agent collects all required information
  • How the agent handles objections or hesitations

Testing Lead Form

Make adjustments to your form fields and instructions based on the test results.

Step 6: Deploy Your Lead Generation Agent

Once you're satisfied with your lead generation agent, publish it to make it available for use. You can integrate it with your website, landing pages, or other digital channels.
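If you just need a quick embed on a plain HTML page, initializing the chat widget looks roughly like this (a minimal sketch in TypeScript/JavaScript, based on the Next.js integration guide later in this blog; replace the placeholders with your own credentials):

// Assumes chat.css and chat.js from https://hub.serenitystar.ai/resources/
// are loaded on the page, and that a <div id="aihub-chat"></div> placeholder exists.
//@ts-ignore - AIHubChat is provided globally by chat.js
const chat = new AIHubChat("aihub-chat", {
  apiKey: "<Your API Key>",
  agentCode: "<Your lead generation agent code>",
  baseURL: "https://api.serenitystar.ai/api",
});
chat.init();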

Accessing and Managing Lead Data

Once your agent has started collecting leads, you can access this valuable information from the agent card by clicking the "Forms" button.

Access Forms Button

On the forms page, you'll see all the lead data organized in a grid format:

Lead Data Grid

You can:

  • Sort and filter leads
  • Export lead data to Excel for integration with your CRM
  • View different versions of your form to track performance over time

Best Practices for Conversational Lead Generation

  1. Start with Value: Have your agent provide helpful information before asking for contact details
  2. Be Transparent: Clearly communicate why you're collecting information and how it will be used
  3. Progressive Disclosure: Start with simple questions and gradually move to more detailed ones
  4. Offer Incentives: Consider offering something valuable in exchange for contact information
  5. Follow Up: Ensure leads are promptly followed up by your sales team
  6. Continuous Improvement: Regularly review conversations and adjust your form fields and agent instructions

Conclusion

Forms in AI Hub provide a powerful way to collect leads through conversational AI. By creating a natural, engaging experience, you can increase both the quantity and quality of leads while gathering rich contextual information that helps your sales team close more deals.

Ready to revolutionize your lead generation process? Start by creating your first lead collection agent today!

Using Serenity* Visual Studio Code Chat to Create Automated Tests

· 6 min read
Carolina De Cono
QA Analyst

In today's fast-paced development environment, automated testing is essential for ensuring software quality and efficiency. The Serenity* VSCode Chat extension offers a powerful solution for developers looking to streamline their testing processes.

This blog post will guide you through the process of using the Serenity* VSCode Chat extension in Visual Studio Code to create automated test cases, enhancing both efficiency and effectiveness in your projects.

Introduction to Serenity* VSCode Chat

Serenity* VSCode Chat is a Visual Studio Code extension that brings AI assistance directly into your development environment. Configure custom AI agents with project-specific instructions and documentation, turning them into pair-programming companions that understand your project's context.


The Serenity QA team uses this extension to facilitate E2E test development. The configured AI agent ensures tests follow our naming conventions and the Arrange-Act-Assert pattern, adds meaningful comments, and suggests appropriate reusable components that were custom-built to interact with Serenity* AI Hub through Selenium.

By using the VSCode extension, the team has access to the agent and can quickly ask for help and iterate on the tests directly in VSCode.

Successful use case

By utilizing this tool, we were able to reduce the time spent on manual testing by over 50%. The team reported improved test coverage and faster feedback loops, which ultimately led to a more stable and reliable product release.

In a recent project, the Katalon Chrome extension was used to record the steps necessary for testing essential functions in our application. After recording the interactions, the script was exported in the programming language used by our development team. It was then run through the Serenity* VSCode Chat extension, which refactored the code to reduce errors and align it with our coding standards. The result was a cleaner, more efficient test script that significantly improved our testing workflow.

Katalon export example, showing the languages available for exporting.

C# code block to be exported:

using System;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Support.UI;

namespace SeleniumTests
{
    [TestFixture]
    public class CheckConditionPlugin
    {
        private IWebDriver driver;
        private StringBuilder verificationErrors;
        private string baseURL;
        private bool acceptNextAlert = true;

        [SetUp]
        public void SetupTest()
        {
            driver = new FirefoxDriver();
            baseURL = "https://www.google.com/";
            verificationErrors = new StringBuilder();
        }

        [TearDown]
        public void TeardownTest()
        {
            try
            {
                driver.Quit();
            }
            catch (Exception)
            {
                // Ignore errors if unable to close the browser
            }
            Assert.AreEqual("", verificationErrors.ToString());
        }

        [Test]
        public void TheCheckConditionPluginTest()
        {
            driver.Navigate().GoToUrl("https://<URL>/MyAgents");
            driver.FindElement(By.XPath("//div[@id='my-agents-grid']/div/div[6]/div/div/div/div/table/tbody[2]/div/a/div")).Click();
            driver.Navigate().GoToUrl("https://<URL>/AssistantAgent/Edit/85B09A36-E17E-C359-3D4D-3A18738550EE");
            driver.FindElement(By.XPath("//a[@id='skills-tab']/span[2]")).Click();
            driver.FindElement(By.Id("btn-add-plugin-empty")).Click();
            driver.FindElement(By.XPath("//div[@id='skills-grid']/div/div/div/a/div/p")).Click();
            driver.FindElement(By.XPath("//form[@id='default-plugin-configuration-form']/div/div[2]/ignite-input/div/input")).Click();
            driver.FindElement(By.XPath("//form[@id='default-plugin-configuration-form']/div/div[2]/ignite-input/div/input")).Clear();
            driver.FindElement(By.XPath("//form[@id='default-plugin-configuration-form']/div/div[2]/ignite-input/div/input")).SendKeys("01");
            driver.FindElement(By.XPath("//div[@id='plugin-description-section']/div[2]/ignite-textarea-extension/div/textarea")).Clear();
            driver.FindElement(By.XPath("//div[@id='plugin-description-section']/div[2]/ignite-textarea-extension/div/textarea")).SendKeys("evalua si el usuario dice hola");
            driver.FindElement(By.XPath("//div[@id='plugin-configuration-form']/div/div/div[3]/button[4]")).Click();
            driver.FindElement(By.Id("btn-submit-assistant-agent")).Click();
            driver.FindElement(By.LinkText("Save Changes")).Click();
            driver.Navigate().GoToUrl("https://<URL>/AssistantAgent/Edit/85b09a36-e17e-c359-3d4d-3a18738550ee");
        }

        private bool IsElementPresent(By by)
        {
            try
            {
                driver.FindElement(by);
                return true;
            }
            catch (NoSuchElementException)
            {
                return false;
            }
        }

        private bool IsAlertPresent()
        {
            try
            {
                driver.SwitchTo().Alert();
                return true;
            }
            catch (NoAlertPresentException)
            {
                return false;
            }
        }

        private string CloseAlertAndGetItsText()
        {
            try
            {
                IAlert alert = driver.SwitchTo().Alert();
                string alertText = alert.Text;
                if (acceptNextAlert)
                {
                    alert.Accept();
                }
                else
                {
                    alert.Dismiss();
                }
                return alertText;
            }
            finally
            {
                acceptNextAlert = true;
            }
        }
    }
}

The script is then run through our VSCode extension.

Scripts can be iterated to completion using the VSCode Chat extension, which can solve issues and propose solutions along the way.

After iterating with VSCode Chat, the tests run successfully.

Configuring Serenity* VSCode Chat and Katalon

To get started with this powerful combination, follow these steps:

  1. Install Visual Studio Code: If you haven't already, download and install Visual Studio Code.

  2. Install the Serenity* VSCode Chat Extension:

    1. Open Visual Studio Code.
    2. Click on the Extensions icon in the Activity Bar.
    3. Search for Serenity* VSCode Chat and click Install.
  3. Install Katalon Extension for Chrome:

    1. Open Google Chrome and navigate to the Chrome Web Store.
    2. Search for Katalon and click Add to Chrome.
  4. Set Up Your Project:

    1. Create a new project or open an existing one in Visual Studio Code.
    2. Ensure that your project is configured to use the Serenity* Star framework.

Creating and Executing Automated Tests

Now that you have both tools set up, let's create and execute automated tests:

  1. Record Your Test Steps:

    1. Open the Katalon extension in Chrome.
    2. Click on Record and perform the actions you want to test.
    3. Once finished, click Stop Recording.
  2. Export the Test Script:

    1. Click on the Export button and choose the appropriate programming language for your development team.
  3. Refactor the Script with Serenity* VSCode Chat:

    1. Copy and paste the exported script into the integrated chat and ask the agent to refactor it.
  4. Run Your Tests:

    1. Make sure you have the appropriate test runner extension installed.
    2. After building your project, run your tests with the play button on each test.
    3. Review the output for test results and any errors that may have occurred.

How to Configure your Agent and Associate it with Serenity Star Chat

First we need to create an AI Agent in Serenity* AI Hub:

  1. Get a Free Serenity* Star Account and Login

  2. Click on “AI Agents”


  3. Click on the “New Agent” button


  4. Search for the VSCode agent template


  5. Adjust anything you need and click on the "Create" button at the bottom-right corner of your screen.

💡 Tip: You can freely adjust your agent’s personality, knowledge, and skills. Keep in mind, AI Agents perform best when provided with rich and relevant context.

Getting your API Key

  1. Now that the agent is created, from the AI Agents grid, click on the new agent’s card to go to the Agent Designer view.

  2. Once in the Designer for the Coding expert agent, click on the Code Snippets button at the top-right of your screen.

  3. You can get your API Key from the CURL snippet:

    Api Key

Setting up the VSCode extension

  1. Go to VSCode

  2. Install the Serenity* VSCode extension


  3. Open the Serenity* Chat View and click on the Setup Serenity button.


  4. This will guide you through a short process to set up your API Key and default agent.


  5. Once done, you can start chatting with your Agent from the Serenity* Chat View:


Tips and Best Practices

By integrating the Serenity* VSCode Chat extension, you can significantly enhance your automated testing process. This extension allows for efficient test creation, refactoring, and execution, ultimately leading to higher quality software and faster release cycles. Embrace these tools to streamline your development workflow and improve your testing outcomes!

For more information on using the Serenity* VSCode Chat extension, visit the Serenity* VS Code Documentation.

Giving an AI Agent access to your emails

· 3 min read
Julia Schröder Langhaeuser
Product Director

We want to provide an agent with the capability of accessing our Microsoft account to help us process our emails. We will take advantage of the HTTP Request skill and configure OAuth authentication to use the Microsoft Graph APIs, setting up both the access in Microsoft Entra and the agent in Serenity* AI Hub.

Follow these steps to configure the OAuth authentication method.

  1. We need to start by having an application registered in Microsoft. To do this, in the Microsoft Entra page, we will access the App registration section:


  2. And click on New registration:


    It is important to add https://hub.serenitystar.ai/OAuth2Connection/AuthorizationCallback as a Redirect Uri to give the required permissions for the authentication.


  3. On Serenity* AI Hub, we add a new HTTP request skill, and start configuring our integration. We will choose OAuth2 as authentication type and click on Create New to define it:


    You will see something like this, and we will go step by step through each of the fields:


    • Name: the name used to identify this configuration within AI Hub.

    • Auth URL and Token URL: we will obtain these from Microsoft Entra. On your app registration, you can access the Endpoints list:

      In particular, we need the OAuth 2.0 authorization endpoint (v2) and the OAuth 2.0 token endpoint (v2).

    • Client Id: We obtain it directly from the app registration definition:


    • Client Secret: we will create a new secret from Microsoft Entra and link it on Serenity* AI Hub:


      Be aware that the expiration defined here will impact the agent: once the secret expires, the agent will no longer be able to authenticate.

    • Scope: this will define which endpoints can be accessed with this authentication. The definition in Serenity* AI Hub has to match the permissions that were given to this application in Microsoft Entra. To do this, we go to the API permissions section:


      In this case, I want to use the Microsoft Graph API, but from here on it will depend on your implementation.


      In the Scope configuration in AI Hub, we need to add all of these permissions separated by a single space (for example: offline_access Mail.Read).

      Important: apart from the scopes you want to enable, always add “offline_access” to your scope to enable the authentication.

  4. Now you are ready to test the connection. A new tab should open, and if everything was configured correctly, it will show a success message.

  5. Now we can finish configuring the HTTP Request skill to access our emails.

    In the Microsoft Graph documentation, you can see all of the different endpoints that are available. In particular, we will configure our skill to read our mailbox, and we will use this endpoint:

    https://learn.microsoft.com/en-us/graph/api/user-list-messages?view=graph-rest-1.0&tabs=http

    The parameters of the endpoint can be fully configured following this documentation: https://learn.microsoft.com/en-us/graph/query-parameters?tabs=http

    The following configuration, for example, retrieves the sender and subject of up to 50 emails received since a variable date. The agent will decide the correct date based on the parameters.

    TIP: we recommend testing the requests with the Graph Explorer tool: https://developer.microsoft.com/en-us/graph/graph-explorer?request=me%2Fmessages&version=v1.0
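    For reference, the request this skill ends up making is roughly equivalent to the following TypeScript sketch (the date value in the $filter is decided by the agent at runtime, and the Mail.Read scope is assumed):

    const token = "<access token obtained through the OAuth2 connection>";
    const since = "2025-01-01T00:00:00Z"; // the agent decides this value
    const url =
      "https://graph.microsoft.com/v1.0/me/messages" +
      "?$select=sender,subject&$top=50" +
      `&$filter=receivedDateTime ge ${since}`;
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${token}` },
    });
    const { value: messages } = await response.json(); // sender and subject of up to 50 emails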

  6. We will use a basic prompt, but make sure to let the agent know what day it is today (with the help of Liquid):

    Help the user process his emails.
    You only have access to the sender and subject of those emails.
    Take into account that today is {{ "now" | date: "%Y-%m-%d %H:%M" }}
  7. We can try it out with a simple request and watch the execution of the tool in the logs.

Create an AI Agent with Alibaba Cloud

· 2 min read
Máximo Flugelman
Project Manager

Serenity* AI Hub provides easy access to a variety of Large Language Models (LLM) powering your AI Agents. The LLM can be thought of as the brain powering your agent.
In this article, you will learn how to integrate models through Alibaba Cloud to power your AI solutions.

Introduction

Qwen is a family of foundational models by Alibaba, available in a variety of sizes, use cases, and technologies, and designed for fast adoption and scaling.


Creating an agent

  1. Register for free on Serenity* AI Hub.
  2. On the home screen, use the integrated chat with Serena to create a new agent.
  3. Once Serena correctly identifies the use case for the agent, you will be taken to the Agent Designer, where you can fully customize your agent.


Selecting Alibaba Qwen Models

In the Agent Designer, go to the Model tab. From the model dropdown, you can easily select Alibaba Qwen models. Once the model is selected, the agent is ready to be used and evaluated. Switching between different Qwen models is as easy as selecting the appropriate model from the dropdown menu.


Integrating with your client application

Exporting your agent and integrating it with your application is as simple as executing an HTTP request to AI Hub. For a quick example, click on Code Snippets in the top right corner, and a sample CURL command will be generated to execute the agent. You can easily integrate this CURL command with your system and start enhancing your applications.
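To give you an idea of what that looks like from code, here is a minimal TypeScript sketch. The exact URL, header names, and body shape come from your generated CURL snippet, so treat the values below as placeholders:

// Copy the real endpoint and headers from the Code Snippets panel of your agent.
const response = await fetch("<endpoint URL from your CURL snippet>", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-API-KEY": "<Your API Key>", // assumed header name; check your snippet
  },
  body: JSON.stringify({ message: "Hello, Qwen-powered agent!" }),
});
console.log(await response.json());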


Integrate AI Agents in Zapier

· 3 min read
Máximo Flugelman
Project Manager

Integrating Serenity* AI Hub Agents with Zapier is a straightforward way to automate tasks and trigger Agent actions through a wide range of options and workflows. This guide will walk you through the simple steps to configure your Agents in your Zaps!

Steps

  1. Sign up for free to create your first AI agent and get your API key.

  2. Log in to Zapier to create or edit an existing Zap.

  3. Create a new step in your workflow and search for the Serenity AI Hub connector.

  4. Sign in to Serenity* AI Hub by creating a new connection in Zapier.

  5. Insert your Serenity* AI Hub API key credentials.

  6. Select the event you want to execute in this step.

  7. Complete the mandatory parameters and automate your workflow!

Available Events

Execute Activity Agent

A simplified event that executes any activity agent from a list of your available agents on AI Hub.

  • Agent Code (required): Select from the provided list the Activity Agent you want to execute.
  • Input Parameters: The parameters defined in the Parameters tab on the Agent Designer allow you to configure agent behavior and functionality. For more details on how to define and use parameters, refer to the Serenity Input Parameters Guide. Be sure to provide values for all required parameters.
  • Agent Version: The specific version of the agent to be executed. If no value is specified, the published version will be used.


Create Assistant Conversation

This event is used for starting a new conversation with a given Assistant Agent. It returns the necessary chatId for sending messages in a conversation.

Configuring parameters:

  • Agent Code (required): Select from the provided list the Assistant Agent you want to start the conversation with.
  • Input Parameters: The parameters defined in the Parameters tab on the Agent Designer allow you to configure agent behavior and functionality. For more details on how to define and use parameters, refer to the Serenity Input Parameters Guide. Be sure to provide values for all required parameters.
  • Agent Version: The specific version of the agent to be executed. If no value is specified, the published version will be used.
  • User Identifier: An optional identifier used to associate the conversation with a specific user in Serenity* AI Hub.


Execute Assistant Conversation

This event is used to send messages to the selected agent through a given chat instance. Use this event to send and receive messages with the agent.

Configuring parameters:

  • Agent Code (required): Select from the provided list the Assistant Agent you want to message. Be sure to select the same agent you have already started a conversation with.
  • Chat Id (required): The chat ID of the conversation where the messages will be sent. It's recommended to map this value to the data output of the Create Assistant Conversation event.
  • Message (required): The message to send to the agent.
  • Input Parameters: The parameters defined in the Parameters tab on the Agent Designer allow you to configure agent behavior and functionality. For more details on how to define and use parameters, refer to the Serenity Input Parameters Guide. Be sure to provide values for all required parameters.


Create an AI Agent with IBM's new LLM Granite

· 2 min read
Máximo Flugelman
Project Manager

Serenity* AI Hub provides easy access to a variety of Large Language Models (LLM) powering your AI Agents. The LLM can be thought of as the brain powering your agent.
In this article, you will learn how to integrate IBM foundational models from the Granite family to power your AI solutions.

Introduction

Granite is a family of foundational models by IBM, available in a variety of sizes: 2, 8, and 20 billion parameters. The new IBM Granite 3.0 models deliver state-of-the-art performance relative to model size while maximizing safety, speed, and cost-efficiency for enterprise use cases.


Creating an agent

  1. Register for free on Serenity* AI Hub.
  2. On the home screen, use the integrated chat with Serena to create a new agent.
  3. Once Serena correctly identifies the use case for the agent, you will be taken to the Agent Designer, where you can fully customize your agent.


Selecting IBM Granite Models

In the Agent Designer, go to the Model tab. From the model dropdown, you can easily select IBM Granite models. Once the model is selected, the agent is ready to be used and evaluated. Switching between different Granite models is as easy as selecting the appropriate model from the dropdown menu.


Integrating with your client application

Exporting your agent and integrating it with your application is as simple as executing an HTTP request to AI Hub. For a quick example, click on Code Snippets in the top right corner, and a sample CURL command will be generated to execute the agent. You can easily integrate this CURL command with your system and start enhancing your applications.


How to Integrate Your AI Agent with Calendly

· 3 min read
Máximo Flugelman
Project Manager

When we talk about AI Agents, a key feature to automate your processes and workflows is the capability of integrating with external services. One of these services can be Calendly, for accessing your agenda and organizing your day.

In this article, we will explore how to integrate your AI Agent with the Calendly API using the HTTP Request Skill in Serenity* AI Hub.

About the Example

We will be creating an Assistant Agent that integrates with Calendly to retrieve your daily scheduled events. We will use Calendly’s /scheduled_events endpoint to get a list of upcoming events for the logged-in user.

Prerequisites

  • You should have a Calendly account to follow this step-by-step guide.
  • Inside Calendly, create a Bearer Token that we will use for authentication with the Calendly API.

Setting Up the Agent

For this example, we will be using an existing Agent template in AI Hub called Calendly Assistant. Go to the agent creator screen and select the template from the list.


Skills Tab

In the Skills tab, we can see two existing skills that will be used for the integration with Calendly:

  • The first skill allows you to get the list of upcoming events
  • The second skill allows you to retrieve your Calendly user ID

Edit the first skill so we can configure its authentication settings.


Let's go through the configuration settings needed for the Calendly request skill:


  • Code: A simple identifier for the skill.
  • What is this HTTP Request for? This defines the objective of this skill. If the agent determines that the specified condition is met, it will execute the skill.
  • Endpoint: Specifies the method type and base URL. In this case, we use the scheduled_events?user endpoint, which requires a user parameter. By using the {{userId}} syntax, we indicate that this parameter value will be automatically replaced by the AI. Additional parameters can also be specified using this syntax.
  • URL Parameters: Defines how the agent should replace each URL parameter. In this case, we use Calendly’s user identification system via the User Id.
  • Authentication: Replace the placeholder with your Calendly bearer token.

If you edit the second skill, the one that retrieves the user ID, you should see the following configuration:


Here as well, replace the placeholder with your Calendly bearer token.
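Put together, the two skills roughly correspond to the following calls (a TypeScript sketch against Calendly's v2 API; the response field names follow Calendly's documentation):

const token = "<Your Calendly bearer token>";

// Second skill: resolve the current user's URI
const me = await fetch("https://api.calendly.com/users/me", {
  headers: { Authorization: `Bearer ${token}` },
}).then((r) => r.json());

// First skill: list the scheduled events for that user
const events = await fetch(
  `https://api.calendly.com/scheduled_events?user=${encodeURIComponent(me.resource.uri)}`,
  { headers: { Authorization: `Bearer ${token}` } }
).then((r) => r.json());

console.log(events.collection); // the upcoming events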

Testing the Agent

Once the skills are configured, simply type your question to test the agent. For example, you can ask "What are my scheduled events for today?" Your agent should respond with a message like this:


Final Tips

  • Check out the Calendly API Reference to further customize and add additional endpoints.
  • You can add dynamic parameters to your base URL, which will be completed by the agent based on the context of the conversation.
  • Clearly define the responsibility of each skill and when it should be executed. This will improve your agent’s performance by avoiding unnecessary skill execution.
  • Integrate your agent with your website, application, or any other service.

Is Your Language Holding Back Your AI Agent? How Language Resources Affect LLM Performance

· 5 min read
Agustín Fisher
Software Developer

Large Language Models (LLMs) have revolutionized everyday tasks and become part of our day-to-day lives, demonstrating remarkable capabilities across a range of activities. However, their performance often varies depending on the language we're using. This can have significant implications for their deployment in multilingual settings, especially when extending support to low-resource languages.

High-Resource and Low-Resource Languages

Languages are broadly classified based on the amount of training data available at the time of training LLMs. High-resource languages, such as English, Chinese, and Spanish, benefit from vast amounts of text data, which facilitates the creation of uniform and well-structured embedding spaces. These embeddings, which represent the semantic relationships between words, phrases, and concepts, tend to be more consistent and stable in high-resource languages. This uniformity ensures that LLMs perform reliably across a variety of tasks for these languages.

Conversely, low-resource languages face a different reality. Due to the limited availability of high-quality data, the embedding spaces for these languages are often sparse and less uniform. This inconsistency can lead to subpar performance, as the model struggles to capture nuanced semantic relationships. For instance, tasks like sentiment analysis, machine translation, or question answering may yield less accurate results when applied to low-resource languages.

In simpler terms, we can think of this like talking to a wise old person with a lot of life experience (a high-resource language) versus a little kid who hasn't lived much yet (a low-resource language). The wise old person, having had many experiences in life, is better equipped to understand your situation, relate to your emotions, and give you useful advice.

Why Does This Happen?

The training process of LLMs inherently favors languages with more data. High-resource languages dominate the loss optimization during training, leading to embedding spaces that are better tuned to these languages. On the other hand, low-resource languages are often underrepresented, resulting in embedding spaces that are:

  • Less densely populated: Fewer examples mean weaker representation of linguistic nuances.
  • More prone to noise: The lack of data increases the likelihood of overfitting or spurious correlations.
  • Poorly aligned with high-resource languages: This misalignment can cause inaccuracies in multilingual tasks, such as translation or cross-lingual understanding.
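To make “alignment” a bit more concrete: embeddings are vectors, and two words are well-aligned when their vectors point in similar directions. Here is a toy TypeScript sketch (with made-up numbers) of the cosine-similarity measure typically used to compare them:

// Cosine similarity between two embedding vectors: 1 means same direction.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Hypothetical embeddings for "dog" and its translation in another language:
console.log(cosine([0.8, 0.1, 0.3], [0.7, 0.2, 0.3])); // high: well-aligned
console.log(cosine([0.8, 0.1, 0.3], [0.1, 0.9, 0.2])); // low: poorly aligned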

Embedding spaces of Gemma 7B across 8 different languages

Here we can see what happens with these embedding spaces of Gemma 7B with 8 different languages. Can you tell which ones are high-resource and which ones are low-resource languages?

Correcting the Imbalance: The Role of Fine-Tuning

Recent advancements in LLM development, such as the Qwen models, demonstrate how fine-tuning can mitigate these issues. Fine-tuning involves training a pre-trained model on specific data tailored to the target language or task. This additional training can help align the embedding spaces of low-resource languages with their high-resource counterparts. Here’s how:

  • Data Augmentation: By introducing synthetic or curated datasets for low-resource languages, fine-tuning enriches the embedding space.
  • Multilingual Alignment: Techniques like contrastive learning can align representations across languages, improving performance in cross-lingual tasks.
  • Task-Specific Optimization: Fine-tuning on task-relevant data ensures that the model’s performance improves for specific applications, even in low-resource settings.

Similarity to English across LLM layers for different models

These graphs show the similarity to English across layers of the LLM for a couple of different models. Notice how in the Qwen model, Chinese is the only high-resource language that shows an improvement in performance over the last layers. This is thanks to Qwen's fine-tuning work with Chinese training data.

Running Some Tests

We ran some tests comparing English and Spanish performance using our very own Agent Quality Studio.

Agent Quality Studio test run

We asked some very specific and challenging questions in both English and Spanish and had automated testing determine whether the answers were similar enough. Note that in this testing we treat English as the ground truth: we don't measure performance as how correct an answer is, but as how similar it is to the English answer. In some cases the English answer itself may be sub-optimal; we measure similarity to English because English is the standard among LLMs.

Both models got every question correct, except for one interesting case where the Spanish model did even better! This may be due to the connotations some specific words carry in Spanish.

English vs. Spanish test results

Final Thoughts

We learned that low-resource languages can lead to non-uniform embedding spaces, and that this has a significant negative impact on LLM performance. However, as our test results showed, the differences among high-resource languages are minor, even in challenging topics like STEM. How does this affect day-to-day work with LLMs? It depends on your use case, but generally speaking, as long as you stay within high-resource languages you don't have much to worry about. So, here's what to keep in mind in order to get the most out of your AI Agents:

  • Stick to high-resource languages if possible. English, Spanish, French, Chinese, etc. will yield better results across the board.
  • If you really need to squeeze out the best performance for a specific language, consider fine-tuning. As we saw previously, this can greatly help with performance in that language.

References

Zihao Li, “Quantifying Multilingual Performance of Large Language Models Across Languages,” arXiv preprint arXiv:2404.11553 (2024).

A Guide to Executing POST Requests with Serenity* Agents

· 10 min read

Serenity* Agents can execute HTTP requests via the HttpRequest Skill. This skill allows us to describe the endpoint we want to execute, including the method, headers, authentication, and body.

This article will explore how to use this skill to execute POST requests. We'll see how to configure this skill to provide an AI-generated body for the request and, most importantly, ensure the agent knows how to build a valid body and when to use each skill.

We'll iterate from a simple starter agent to a more robust one, testing and improving it along the way.

About the example

We'll create an agent whose goal will be to help us schedule meetings and events based on a prompt.

For this example, I created a mock API with two endpoints:

  • POST /meeting/schedule to schedule a meeting
  • POST /event/schedule to schedule an event (used for reminders or something that doesn't require attendees)

Both endpoints are intentionally similar, so we can demonstrate how to instruct the agent to use the right skill for each case.

We'll give our agent access to these endpoints via the HttpRequest Skill.

For example, if the agent receives a prompt like "Schedule a meeting for the product demo on September 22nd at 3 pm. It should take 45 minutes," it should execute the following request:

curl -X POST "https://<my-mock-api>/meeting/schedule" -H "Content-Type: application/json" -d '{
  "datetime": "2025-09-22T15:00:00",
  "duration": 0.75,
  "subject": "Product demo",
  "attendees": [],
  "location": null,
  "description": null
}'

Setting up the Agent

For this example, we will create an Activity agent. It will receive a prompt that will be used to decide which skill to use, if any.

General Tab

General tab describing the calendar organizer agent

The main thing to notice here is the "Ask" field. That's the instruction the agent will receive. As you can see, it includes an input parameter called prompt.

We'll keep it simple for now.

Model Tab

Model tab showing OpenAI's GPT-4o Mini selected

In this tab, select OpenAI's GPT 4o Mini and leave everything else as default.

Skills Tab

Time to add our two HTTP Request Skills.

Meeting Skill

HTTP Request form for the meeting skill

The first section of the form asks us for the following information:

  • Code: This should be a unique code to identify this particular skill. It will also help us identify the skill in the agent's response and logs.
  • Description: Describe what the skill does, when to use it, any important notes or limitations we want the agent to consider, etc.

These two fields are extremely important for the agent to determine when and how to use each skill. In this first iteration, we'll keep them brief and simple so we can see the difference when we improve them later.

For the Authentication Type, I'll select "No Auth" since my mock API doesn't require authentication.

Then, fill in the Method and URL fields with the appropriate values for the meeting endpoint and press "Continue."

In the next section, we'll be asked to configure the headers and body.

The default configurations for headers and body

We don't need to provide any headers in this case, so we'll leave that section as is.

For the body, turn on the "Include request body" switch and select "AI-Generated Body." This will instruct the agent to generate the body considering the body description we provide, as well as the prompt and skill description.

Body description

We'll simply provide a sample body with a brief description.

Finally, check the "Include skill output in the agent's result" switch at the bottom to include this skill's response in the agent's response. This will help us with our tests later.

And now, we can confirm this configuration and move on to the next skill.

Event Skill

This configuration will be similar to the previous one, so I'll just show you the resulting forms:

Event skill description

Event skill body

Now that the agent has skills, we can create it and begin testing.

Testing the first version of our agent

We can access our new agent from the "AI Agents" page. Click on the card to open the Agent Designer.

AI Agent list

On the right side, there is a "Preview" section where we can test our agent by filling in the prompt parameter with different values.

Let's try it.

First test asking to schedule a meeting

It seems like it understood the task, but taking a look at the logs, I can see that it didn't invoke any of the skills and is asking for the location of the meeting. That's probably because I didn't specify which fields are optional when describing the body.

It also appears to be using the date I provided as an example (2025-01-15) as the current date.

Let's try adjusting the input prompt to see if we can get a better result:

The response of the agent demonstrating that the meeting was successfully created

It seems the adjustments to the prompt did the trick, and now the agent is properly calling the MeetingScheduler skill.

Automated testing

One or two manual tests are not enough to ensure the agent is working as expected. Ideally, we should run a set of diverse tests that emulate real-world scenarios and evaluate the results to see where the agent is failing.

You can use Postman to run automated tests on your agent by providing a data file with many different test cases and using Postman's scripts to verify the results.

Postman endpoint and body

For example, in this case, I added a Postman endpoint where the body contains a variable for prompt, which will be replaced by the test data.

Here's a sample of my test data:

[
  {
    "prompt": "Arrange a meeting for the product demo on September 22nd at 3 pm. It should take 45 minutes.",
    "type": "meeting"
  },
  {
    "prompt": "Put the Leonid meteor shower on November 17th on my calendar. Starts at midnight.",
    "type": "event"
  },
  {
    "prompt": "Book a weekly sync with the design team next Thursday at 11 am for 30 minutes.",
    "type": "meeting"
  },
  {
    "prompt": "Add the company picnic to my calendar for August 25th. It starts at 10 am and lasts all day.",
    "type": "event"
  }
]

I also added a test script to verify the agent's response. This script checks if the agent used the right skill by comparing the response with the variable type. This variable is part of my test data, but it's not sent to the agent since we only use it to verify the response.
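A sketch of what that script can look like (using Postman's JavaScript scripting API). "MeetingScheduler" is the skill code used in this example, while "EventScheduler" is an assumed code for the event skill; use your own codes:

const expectedSkill =
  pm.iterationData.get("type") === "meeting"
    ? "MeetingScheduler"
    : "EventScheduler"; // assumed code for the event skill

pm.test(`Agent used the ${expectedSkill} skill`, function () {
  // The skill's code should appear in the response, since we enabled
  // "Include skill output in the agent's result".
  pm.expect(pm.response.text()).to.include(expectedSkill);
});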

Postman scripts for testing

I'll configure a Runner to execute this endpoint using my json file as the data source.

Postman runner configuration

And after running the tests, we can see the results:

Postman tests results

According to the results, sometimes the agent uses the wrong skill, and sometimes it cannot use any skill at all.

After reviewing the failed tests, I noticed 3 main issues:

  1. The agent sometimes confuses Meetings with Events, which makes sense since we didn't provide any information on how to differentiate them.
  2. It has trouble calculating dates with prompts such as "Tomorrow" or "Next Wednesday" unless we provide the current date, time, and day of the week within the prompt.
  3. It also can't differentiate between optional and required body fields, and it doesn't know details about them, such as the unit of time for the duration field.

Improving the agent

Issue 1: Differentiating between Meetings and Events

This should be easy to fix. We need to change the skill's description to provide more information.

Meeting skill improved description

Event skill improved description

The improvements are:

  • Now we're indicating when to use this skill.
  • We're specifying that the skill is actually an HTTP request, so it's easier for the agent to understand why it needs a body. We also state that a body must be provided, to strengthen that point.
  • Finally, we're indicating the skill's expected output.

Issue 2: Calculating dates

We could fix this by adding the "TimePlugin," which includes a set of tools to calculate dates and times and returns the current date. But in this case, there's a simpler solution.

We can use Liquid to indicate the current date in the "Ask" field, like so:

Using liquid to specify the date

The Liquid expression prints the date in the following format: "2025-01-15 15:10 - Wednesday". Read more about this here.
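A Liquid expression along those lines would be the following (a sketch; Liquid's date filter takes strftime codes, and %A prints the weekday name):

{{ "now" | date: "%Y-%m-%d %H:%M" }} - {{ "now" | date: "%A" }}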

💡 Including just the minimum required skills is a good idea for performance and to avoid confusing the agent.

Issue 3: Building a valid body

This can be done by improving the body description, like so:

Improved body for the meeting skill

We're now providing a description for each field, indicating whether it's required or optional, along with details about the data type and units.
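As an illustration, the field-level details can be captured like this (a sketch written as a TypeScript type; the field set comes from the sample request at the top of this post, and the required/optional split is an assumption for the mock API):

// Body for POST /meeting/schedule
interface MeetingRequestBody {
  datetime: string; // required: ISO 8601 date-time, e.g. "2025-09-22T15:00:00"
  duration: number; // required: duration in hours (0.75 = 45 minutes)
  subject: string; // required: short title for the meeting
  attendees: string[]; // optional: may be empty when no attendees are mentioned
  location: string | null; // optional: null when not specified
  description: string | null; // optional: null when not specified
}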

Testing the improved agent

We can run the tests again after saving and publishing the agent with these changes.

Postman results after improving the agent

The results look much better now. However, we can keep iterating and testing by following the same process.

Final tips on configuring agents with skills

To sum up, here are some tips to get the best results while configuring an agent with skills:

  • Choose a clear, representative Code for your skill: This, along with the description, will help the agent determine when to use it and will also help you find it in the agent's response and logs.
  • Write detailed descriptions: The Skill description is key to instructing the agent when and how to use it. Write anything you think is relevant, including constraints, what the endpoint will return, and when not to use it.
  • Describe the body (or any other skill parameters) thoroughly: If the agent needs to create a body for your request, make sure it knows the details of each field, such as data types, which are required and which are optional, types of units, etc.
  • Include the minimum skills needed to do the job: This will help the agent avoid confusion and improve performance. If you think your agent is overloaded with skills, you may want to replace some with Liquid or use the Agent executor skill to call specialized agents for each task.
  • Try different models: If the model you selected for your agent is not performing as expected, try a different one. Read the documentation on each model to see which one may perform better for your use case.
  • Test thoroughly: Make sure the agent performs well on different cases. Users may provide diverse prompts, and we don't want to overfit the agent to a specific type of input.
  • Iterate: Keep iterating. We can always return to a previous version of the agent, so don't be afraid to make changes and test them.

How to Integrate the Chat Widget in a Next.js Project

· 3 min read
Mauro Garcia
Front End Engineer

The Chat Widget allows you to add a customizable AI-powered chat feature to your web application. This guide will walk you through integrating the widget into a Next.js project using both the App Router and Pages Router approaches.

Using App Router

When using the App Router in Next.js, follow these steps:

Step 1: Modify the Root Layout

In the app/layout.tsx file (or .js), add the CSS link to the <head> section and the JS script to the end of the <body>. You'll also need to include a placeholder <div> for the chat widget and reference a custom component for initialization.

Here’s an example of the modified layout:

import type { Metadata } from "next";
import { Geist, Geist_Mono } from "next/font/google";
import "./globals.css";
import SerenityChat from "@/components/SerenityChat";

const geistSans = Geist({
  variable: "--font-geist-sans",
  subsets: ["latin"],
});

const geistMono = Geist_Mono({
  variable: "--font-geist-mono",
  subsets: ["latin"],
});

export const metadata: Metadata = {
  title: "Create Next App",
  description: "Generated by create next app",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <head>
        {/* Add Serenity* Star Chat CSS */}
        <link
          key={"serenity-chat-styles"}
          rel="stylesheet"
          href="https://hub.serenitystar.ai/resources/chat.css"
        />
      </head>
      <body className={`${geistSans.variable} ${geistMono.variable}`}>
        {/* Placeholder for Chat Widget */}
        <div id="aihub-chat"></div>
        {children}
        {/* Initialize Serenity Chat */}
        <SerenityChat />
        {/* Add Serenity* Star Chat JS */}
        <script src="https://hub.serenitystar.ai/resources/chat.js"></script>
      </body>
    </html>
  );
}

Step 2: Create the Chat Widget

This component initializes the chat using the AIHubChat class. Place it in src/components/SerenityChat.tsx:

"use client";

import { useEffect } from "react";

const SerenityChat: React.FC = () => {
const initChat = () => {
//@ts-ignore
const chat = new AIHubChat("aihub-chat", {
apiKey: "<Your API Key>",
agentCode: "<Your Agent Code>",
baseURL: "https://api.serenitystar.ai/api",
});
chat.init();
};

useEffect(() => {
initChat();
}, []);

return null;
};

export default SerenityChat;

Replace <Your API Key> and <Your Agent Code> with your actual credentials.

Using Pages Router

If you’re using the Pages Router, you just need to update your _app.tsx file.

Include the CSS link and the initialization script globally: use the <Head> component for the CSS and add the JS script like this:

import Head from "next/head";
import { useEffect } from "react";
import "@/styles/globals.css";

export default function App({ Component, pageProps }) {
  useEffect(() => {
    if (typeof window !== "undefined") {
      //@ts-ignore
      const chat = new AIHubChat("aihub-chat", {
        apiKey: "<Your API Key>",
        agentCode: "<Your Agent Code>",
        baseURL: "https://api.serenitystar.ai/api",
      });
      chat.init();
    }
  }, []);

  return (
    <>
      <Head>
        {/* Add Serenity* Star Chat CSS */}
        <link
          rel="stylesheet"
          href="https://hub.serenitystar.ai/resources/chat.css"
        />
      </Head>
      <div id="aihub-chat"></div>
      <Component {...pageProps} />
      {/* Add Serenity* Star Chat JS */}
      <script src="https://hub.serenitystar.ai/resources/chat.js"></script>
    </>
  );
}

Summary

  • App Router: Use layout.tsx for global styles and scripts, and a "use client" component for chat initialization.
  • Pages Router: Add the CSS and JS in _app.tsx, and initialize the chat directly in a useEffect.

By following these instructions, you can easily integrate the chat widget into your Next.js project.