
Integrating Google Calendar with your Serenity* AI Agent

· 3 min read

This guide walks you through integrating Google Calendar with your AI Hub agent using OAuth 2.0 authentication. You'll learn how to set up an HTTP Request skill that creates calendar events by making authenticated calls to the Google Calendar API.

You can use this integration to interact with your Google Calendar in any way you like, such as creating and deleting events, retrieving event details, or updating existing events, all by asking your AI agent to perform these tasks.

Add an HTTP Request Skill

  1. From the agent designer, add a new HTTP Request skill. In the following example, I'm preparing a skill to add new events to my calendar.

    HTTP Request Skill Form

    Configure the following settings:

    • Code: CreateCalendarEvent
    • Description: POST HTTP request that adds an event to the calendar by specifying start date, end date, summary, and description in the body.
    • Method: POST
    • URL: https://www.googleapis.com/calendar/v3/calendars/<google-email-here>/events
  2. Select OAuth2 as your authentication type and press "Create new"

    OAuth2 Authentication Selection

Configuring the OAuth 2.0 Connection

Complete the OAuth 2.0 Connection Configuration form:

OAuth2 Connection Configuration Form

Configure the following settings:

  • Name: Google Calendar (this is just to identify the connection)
  • Auth URL: https://accounts.google.com/o/oauth2/v2/auth
  • Token URL: https://oauth2.googleapis.com/token
  • Client ID: Your Google OAuth app client ID (generated via Google Cloud Console)
  • Client Secret: Your Google OAuth app client secret
  • Scopes: https://www.googleapis.com/auth/calendar

For Google OAuth, you also need to add two additional Auth Request Parameters:

Access Type Parameter:

  • Key: access_type
  • Value: offline

Prompt Parameter:

  • Key: prompt
  • Value: consent
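Put together, these settings produce an authorization URL along these lines. This is only a sketch to show where the two extra parameters end up (AI Hub assembles the URL for you; the client ID and redirect URI values are placeholders):

```python
from urllib.parse import urlencode

AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"

# Placeholder values: substitute your real client ID and the redirect URI
# shown at the top of the OAuth 2.0 Connection form.
params = {
    "client_id": "<YOUR_CLIENT_ID>",
    "redirect_uri": "<REDIRECT_URI_FROM_THE_FORM>",
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/calendar",
    "access_type": "offline",  # ask Google for a refresh token
    "prompt": "consent",       # force the consent screen so the refresh token is issued
}

print(f"{AUTH_URL}?{urlencode(params)}")
```

Without `access_type=offline` and `prompt=consent`, Google may not return a refresh token, and the connection would stop working once the short-lived access token expires.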

Important: Make sure to add the Redirect URI to your Google OAuth app as specified at the top of the form:

OAuth2 Redirect URI Configuration

Once the OAuth connection is complete, click "Continue" and go through Google's login and consent screens. You should see this success message:

OAuth2 Connection Success Message

Close the tab and return to the OAuth 2.0 Connection form. You should see the new connection configured in the OAuth 2.0 selector:

OAuth2 Connection Selector

Finish Configuring the HTTP Request

Press "Continue" to set up the rest of the request information. In this step, you may need to configure the request body. For my "Create Event" request, I used an AI-generated body like this:

HTTP Request Body Configuration
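For reference, a minimal body for the Calendar `events.insert` endpoint looks like this. The field names come from the Google Calendar API; the `{{...}}` placeholders are illustrative and depend on how your skill maps its parameters:

```json
{
  "summary": "{{summary}}",
  "description": "{{description}}",
  "start": { "dateTime": "{{startDate}}", "timeZone": "UTC" },
  "end": { "dateTime": "{{endDate}}", "timeZone": "UTC" }
}
```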

What's Next?

You can now try your Agent from the Preview panel. Ask it to create a calendar event, and it should respond with a confirmation message.

Once you're sure that's working correctly, you can reuse this OAuth 2.0 connection in other HTTP Request skills to provide your Agent with other ways to interact with your Google Calendar, such as retrieving events or deleting them.

Keep in Mind

  • Permissions: Ensure your Google OAuth app has the necessary permissions to access the Calendar API. You can manage these in the Google Cloud Console.
  • Your Agent will act on your behalf: When you set up the OAuth connection, your agent will be able to create, modify, and delete events in your Google Calendar. Keep this in mind before sharing your agent publicly.

Additional Resources

Advanced ATM Security Assistant

· 3 min read
Guillermo Ezquer
Software Engineer

Introduction

In high-risk environments such as Automated Teller Machines (ATMs), the need to detect suspicious behavior in real time is critical. This article explores a concrete use case: extending an existing helmet-detection system with a natural language agent (LLM) that interprets, classifies, and responds to risk scenarios captured in images.

From Visual Detection to Semantic Understanding

The system builds upon an already functional infrastructure that identifies individuals wearing helmets. On top of this, an "activity agent" is trained with a specific prompt designed to:

  • Analyze images for unlawful or suspicious behavior
  • Classify the event (e.g., armed robbery, forced entry, vandalism)
  • Count and describe individuals (age, gender, clothing, height, tattoos)
  • Generate a structured JSON output with tags, confidence scores, timestamps, and recommended actions

Step-by-Step Agent Setup in Serenity* AI Hub

Creating this kind of agent in Serenity* AI Hub is straightforward. Below is a visual walkthrough:

  1. Create a new agent
    Create new agent

  2. Select the 'Advanced ATM Security Assistant' template
    Choose template

  3. Customize the endpoint name and configuration
    Configure endpoint

  4. Finalize creation
    Create agent

  5. Enable image processing capabilities
    Activate vision

  6. Upload an image to test the agent
    Upload test image

Execution and Results

Once deployed, the agent can analyze ATM surveillance images with impressive speed and accuracy.

  • Test Image 1 – Normal ATM usage scenario:

    Test image 1
    Agent response 1

    The system correctly identifies that no unlawful activity is occurring, with a low-risk score and a recommendation to ignore.

  • Test Image 2 – Forced entry attempt:

    Test image 2
    Agent response 2

    Here, the agent classifies the situation as “forced entry,” detects visible vandalism, and recommends dispatching security with a high-confidence score.

  • Test Image 3 – Attempted ATM break-in with tools:

    Test image 3
    Agent response 3

    The agent detects a masked suspect attempting to breach the ATM using bolt cutters, with high alert level and immediate action recommendation.

Integration with Serenity* AI Hub

The activity agent operates entirely within Serenity* AI Hub, allowing seamless monitoring and continuous refinement of its performance. Once deployed, it enables:

  • Centralized incident management across multiple camera feeds
  • Automated action flows (alerts, escalations, reports)
  • Continuous training via real-world feedback loops

API Access and Execution

To retrieve the API Key and execute the agent via code, Serenity* AI Hub provides auto-generated code snippets. Here's how to find and use them:

  • Step 1: Locate the "Code Snippets" section in the agent configuration
    Code Snippets button

  • Step 2: Copy the cURL command to execute your agent via API
    cURL example with API Key

Systems Integration with Serenity AI Hub

  1. Upload the images to analyze to AI Hub through the volatile knowledge endpoint:
    Upload Volatile Knowledge

    curl --location 'https://api.serenitystar.ai/api/v2/VolatileKnowledge' \
    --header 'X-API-KEY: <YOUR_API_KEY>' \
    --form '<YOUR_IMAGE_FILE>'

  2. Check image processing status:
    Check Volatile Knowledge

    curl --location 'https://api.serenitystar.ai/api/v2/VolatileKnowledge/<VOLATILE_KNOWLEDGE_ID>' \
    --header 'X-API-KEY: <YOUR_API_KEY>'

  3. Execute the agent with the uploaded image ID:
    Execute Agent

    curl --location 'https://api.serenitystar.ai/api/v2/agent/ATMassistant/execute' \
    --header 'Content-Type: application/json' \
    --header 'X-API-KEY: <YOUR_API_KEY>' \
    --data '[{
    "Key": "volatileKnowledgeIds",
    "Value": ["<YOUR_IMAGE_VOLATILE_KNOWLEDGE_ID>"]
    }]'

  4. Integrate the resulting JSON response with your target system.
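The three calls above can be sketched in Python as simple request builders (no network I/O here; the endpoint paths are taken from the cURL snippets, while the API key, agent code, and IDs are placeholders):

```python
import json

BASE_URL = "https://api.serenitystar.ai/api/v2"

def upload_request(api_key):
    """Request pieces for uploading volatile knowledge (step 1).
    The multipart form field for the image is elided, matching the snippet above."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/VolatileKnowledge",
        "headers": {"X-API-KEY": api_key},
    }

def status_request(api_key, vk_id):
    """Request pieces for checking image processing status (step 2)."""
    return {
        "method": "GET",
        "url": f"{BASE_URL}/VolatileKnowledge/{vk_id}",
        "headers": {"X-API-KEY": api_key},
    }

def execute_request(api_key, agent_code, vk_id):
    """Request pieces for executing the agent with the uploaded image ID (step 3)."""
    body = [{"Key": "volatileKnowledgeIds", "Value": [vk_id]}]
    return {
        "method": "POST",
        "url": f"{BASE_URL}/agent/{agent_code}/execute",
        "headers": {"Content-Type": "application/json", "X-API-KEY": api_key},
        "data": json.dumps(body),
    }
```

Send each request with the HTTP client of your choice, polling the status endpoint until the image is processed before executing the agent.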

Conclusion

This use case demonstrates how combining LLMs with computer vision elevates security from passive monitoring to intelligent, operational interpretation. By generating structured, actionable insights in real time, Serenity Star is ready to scale these solutions into critical infrastructure environments.

Using Forms to collect leads

· 5 min read
Julia Schröder Langhaeuser
Product Director

In today's competitive digital landscape, generating high-quality leads is crucial for business growth. AI Agents can transform your lead generation process by engaging with visitors in a conversational manner while seamlessly collecting valuable information. In this article, we'll explore how to use the Forms capability in Serenity* AI Hub to create an effective lead collection system.

Why Use Forms for Lead Generation?

Traditional lead forms can be static and impersonal, often leading to high abandonment rates. Conversational AI offers several advantages:

  • Natural Interaction: Visitors engage in a conversation rather than filling out fields on a form
  • Progressive Data Collection: Information is gathered gradually throughout the conversation
  • Personalized Experience: The agent adapts its questions based on previous responses
  • Higher Completion Rates: Less intimidating than lengthy traditional forms
  • 24/7 Availability: Collect leads around the clock without human intervention

Setting Up Your Lead Generation Form

Let's create an Assistant Agent that will engage with website visitors and collect lead information for your sales team.

Prerequisites

  • Access to Serenity* AI Hub
  • A clear understanding of what lead information you need to collect

Step 1: Create a Lead Generation Agent

Start by creating a new Assistant Agent specifically designed for lead collection. You can either use a template or create one from scratch.

Configure your agent with a personality that aligns with your brand. For lead generation, a friendly and helpful tone usually works best.

Step 2: Configure the Forms Feature

Navigate to the Forms tab in your agent configuration.

Forms Tab

Click on "Add Field" to begin building your lead capture form.

Step 3: Define Lead Information Fields

For an effective lead generation form, consider adding these essential fields:

  1. Name (Text type)

    • Set as required
    • Instructions: "Ask for the person's full name"
  2. Email (Email type)

    • Set as required
    • Instructions: "Collect a valid email address for follow-up"
  3. Phone Number (Phone type)

    • Optional
    • Instructions: "Request phone number including country code if the lead seems interested in a call"
  4. Company (Text type)

    • Set as required
    • Instructions: "Ask for the company name"
  5. Role/Position (Text type)

    • Optional
    • Instructions: "Inquire about their position or role in the company"
  6. Interest Level (Select type)

    • Options: High, Medium, Low
    • Instructions: "Based on the conversation, assess their interest level"
  7. Product Interest (Select type)

    • Options: [Your specific products/services]
    • Instructions: "Determine which of our products or services they're most interested in"
  8. Additional Notes (Text type)

    • Optional
    • Instructions: "Capture any other relevant information shared during the conversation"

For each field, configure how it should be collected:

Field Configuration Example

Step 4: Set Collection Instructions

For a lead generation use case, it's usually best to select "After a Few Messages" as the collection timing. This allows the agent to build rapport before asking for information.

Collection Timing

For custom instructions, you might use:

"Engage in a natural conversation first. Ask about the visitor's challenges or needs. After establishing rapport, begin collecting lead information in a conversational way. Don't ask for all information at once - spread questions throughout the conversation."

Step 5: Test Your Lead Generation Agent

Use the preview chat to test how your agent collects lead information. Pay attention to:

  • How naturally the questions are integrated into the conversation
  • Whether the agent collects all required information
  • How the agent handles objections or hesitations

Testing Lead Form

Make adjustments to your form fields and instructions based on the test results.

Step 6: Deploy Your Lead Generation Agent

Once you're satisfied with your lead generation agent, publish it to make it available for use. You can integrate it with your website, landing pages, or other digital channels.

Accessing and Managing Lead Data

After your agent has been collecting leads, you can access this valuable information from the agent card by clicking the "Forms" button.

Access Forms Button

On the forms page, you'll see all the lead data organized in a grid format:

Lead Data Grid

You can:

  • Sort and filter leads
  • Export lead data to Excel for integration with your CRM
  • View different versions of your form to track performance over time
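For illustration, a single exported lead might look like this (hypothetical values; the columns mirror the fields defined in Step 3):

```json
{
  "Name": "Jane Doe",
  "Email": "jane.doe@example.com",
  "Phone Number": "+1 555 0100",
  "Company": "Acme Corp",
  "Role/Position": "Head of Marketing",
  "Interest Level": "High",
  "Product Interest": "Enterprise Plan",
  "Additional Notes": "Evaluating chatbots for Q3; asked for a demo."
}
```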

Best Practices for Conversational Lead Generation

  1. Start with Value: Have your agent provide helpful information before asking for contact details
  2. Be Transparent: Clearly communicate why you're collecting information and how it will be used
  3. Progressive Disclosure: Start with simple questions and gradually move to more detailed ones
  4. Offer Incentives: Consider offering something valuable in exchange for contact information
  5. Follow Up: Ensure leads are promptly followed up by your sales team
  6. Continuous Improvement: Regularly review conversations and adjust your form fields and agent instructions

Conclusion

Forms in AI Hub provide a powerful way to collect leads through conversational AI. By creating a natural, engaging experience, you can increase both the quantity and quality of leads while gathering rich contextual information that helps your sales team close more deals.

Ready to revolutionize your lead generation process? Start by creating your first lead collection agent today!

Using Serenity* Visual Studio Code Chat to Create Automated Tests

· 6 min read
Carolina De Cono
QA Analyst

In today's fast-paced development environment, automated testing is essential for ensuring software quality and efficiency. The Serenity* VSCode Chat extension offers a powerful solution for developers looking to streamline their testing processes.

This blog post will guide you through the process of using the Serenity* VSCode Chat extension in Visual Studio Code to create automated test cases, enhancing both efficiency and effectiveness in your projects.

Introduction to Serenity* VSCode Chat

Serenity* VSCode Chat is a Visual Studio Code extension that brings AI assistance directly into your development environment. Configure custom AI agents with project-specific instructions and documentation, turning them into pair-programming companions that understand your project's context.

image.png

The Serenity QA team uses this extension to facilitate E2E test development. The configured AI agent ensures tests follow our naming conventions and Arrange-Act-Assert pattern, adds meaningful comments, and suggests appropriate reusable components that were custom-built to use Selenium to interact with Serenity* AI Hub.

By using the VSCode extension, the team has access to the agent and can quickly ask for help and iterate with the tests directly in VSCode.

Successful use case

By utilizing this tool, we were able to reduce the time spent on manual testing by over 50%. The team reported improved test coverage and faster feedback loops, which ultimately led to a more stable and reliable product release.

In a recent project, the Katalon Chrome extension was used to record the steps necessary for testing essential functions in our application. After recording the interactions, the scripts were exported in the programming language used by our development team. This script was then run through the Serenity* VSCode Chat extension, which refactored the code to reduce errors and align it with our coding standards. The result was a cleaner, more efficient test script that significantly improved our testing workflow.

Katalon export example, with languages available for exporting:

image.png

C# code block to be exported:

using System;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Support.UI;

namespace SeleniumTests
{
    [TestFixture]
    public class CheckConditionPlugin
    {
        private IWebDriver driver;
        private StringBuilder verificationErrors;
        private string baseURL;
        private bool acceptNextAlert = true;

        [SetUp]
        public void SetupTest()
        {
            driver = new FirefoxDriver();
            baseURL = "https://www.google.com/";
            verificationErrors = new StringBuilder();
        }

        [TearDown]
        public void TeardownTest()
        {
            try
            {
                driver.Quit();
            }
            catch (Exception)
            {
                // Ignore errors if unable to close the browser
            }
            Assert.AreEqual("", verificationErrors.ToString());
        }

        [Test]
        public void TheCheckConditionPluginTest()
        {
            driver.Navigate().GoToUrl("https://<URL>/MyAgents");
            driver.FindElement(By.XPath("//div[@id='my-agents-grid']/div/div[6]/div/div/div/div/table/tbody[2]/div/a/div")).Click();
            driver.Navigate().GoToUrl("https://<URL>/AssistantAgent/Edit/85B09A36-E17E-C359-3D4D-3A18738550EE");
            driver.FindElement(By.XPath("//a[@id='skills-tab']/span[2]")).Click();
            driver.FindElement(By.Id("btn-add-plugin-empty")).Click();
            driver.FindElement(By.XPath("//div[@id='skills-grid']/div/div/div/a/div/p")).Click();
            driver.FindElement(By.XPath("//form[@id='default-plugin-configuration-form']/div/div[2]/ignite-input/div/input")).Click();
            driver.FindElement(By.XPath("//form[@id='default-plugin-configuration-form']/div/div[2]/ignite-input/div/input")).Clear();
            driver.FindElement(By.XPath("//form[@id='default-plugin-configuration-form']/div/div[2]/ignite-input/div/input")).SendKeys("01");
            driver.FindElement(By.XPath("//div[@id='plugin-description-section']/div[2]/ignite-textarea-extension/div/textarea")).Clear();
            driver.FindElement(By.XPath("//div[@id='plugin-description-section']/div[2]/ignite-textarea-extension/div/textarea")).SendKeys("evalua si el usuario dice hola");
            driver.FindElement(By.XPath("//div[@id='plugin-configuration-form']/div/div/div[3]/button[4]")).Click();
            driver.FindElement(By.Id("btn-submit-assistant-agent")).Click();
            driver.FindElement(By.LinkText("Save Changes")).Click();
            driver.Navigate().GoToUrl("https://<URL>/AssistantAgent/Edit/85b09a36-e17e-c359-3d4d-3a18738550ee");
        }

        private bool IsElementPresent(By by)
        {
            try
            {
                driver.FindElement(by);
                return true;
            }
            catch (NoSuchElementException)
            {
                return false;
            }
        }

        private bool IsAlertPresent()
        {
            try
            {
                driver.SwitchTo().Alert();
                return true;
            }
            catch (NoAlertPresentException)
            {
                return false;
            }
        }

        private string CloseAlertAndGetItsText()
        {
            try
            {
                IAlert alert = driver.SwitchTo().Alert();
                string alertText = alert.Text;
                if (acceptNextAlert)
                {
                    alert.Accept();
                }
                else
                {
                    alert.Dismiss();
                }
                return alertText;
            }
            finally
            {
                acceptNextAlert = true;
            }
        }
    }
}

The script is then run through our VSCode extension:

image.png

Code scripts can be iterated to completion using the VSCode Chat extension, which can solve issues and propose solutions:

image.png

Examples of successful tests after iterating with VSCode Chat:

image.png

Configuring Serenity* VSCode Chat and Katalon

To get started with this powerful combination, follow these steps:

  1. Install Visual Studio Code: If you haven't already, download and install Visual Studio Code.

  2. Install the Serenity* VSCode Chat Extension:

    1. Open Visual Studio Code.
    2. Click on the Extensions icon in the Activity Bar.
    3. Search for Serenity* VSCode Chat and click Install.
  3. Install Katalon Extension for Chrome:

    1. Open Google Chrome and navigate to the Chrome Web Store.
    2. Search for Katalon and click Add to Chrome.
  4. Set Up Your Project:

    1. Create a new project or open an existing one in Visual Studio Code.
    2. Ensure that your project is configured to use the Serenity* Star framework.

Creating and Executing Automated Tests

Now that you have both tools set up, let's create and execute automated tests:

  1. Record Your Test Steps:

    1. Open the Katalon extension in Chrome.
    2. Click on Record and perform the actions you want to test.
    3. Once finished, click Stop Recording.
  2. Export the Test Script:

    1. Click on the Export button and choose the appropriate programming language for your development team.
  3. Refactor the Script with Serenity* VSCode Chat:

    1. Copy and paste the export script in the integrated chat and ask the agent to refactor it.
  4. Run Your Tests:

    1. Make sure you have the following extension installed: image.png
    2. After building your project, run your tests with the play button on each test.
    3. Review the output for test results and any errors that may have occurred.

How to Configure your Agent and Associate it with Serenity Star Chat

First we need to create an AI Agent in Serenity* AI Hub:

  1. Get a Free Serenity* Star Account and Login

  2. Click on “AI Agents”

    image.png

  3. Click over the “New Agent” button

    image.png

  4. Search for the VSCode agent template

    image.png

  5. Adjust anything you need and click on the "Create" button at the bottom-right corner of your screen.

💡 Tip: You can freely adjust your agent’s personality, knowledge, and skills. Keep in mind, AI Agents perform best when provided with rich and relevant context.

Getting your API Key

  1. Now that the agent is created, from the AI Agents grid, click over the new agent’s card to go to the Agent Designer view.

  2. Once in the Designer for the Coding expert agent, click on the Code Snippets button at the top-right of your screen.

  3. You can get your API Key from the CURL snippet:

    Api Key

Setting up the VSCode extension

  1. Go to VSCode

  2. Install the Serenity* VSCode extension

    image.png

  3. Open the Serenity* Chat View and click on the Setup Serenity button.

    image.png

  4. This will guide you through a short process to setup your API Key and default agent.

    image.png

  5. Once done, you can start chatting with your Agent from the Serenity* Chat View:

    image.png

Tips and Best Practices

By integrating the Serenity* VSCode Chat extension, you can significantly enhance your automated testing process. This extension allows for efficient test creation, refactoring, and execution, ultimately leading to higher quality software and faster release cycles. Embrace these tools to streamline your development workflow and improve your testing outcomes!

For more information on using the Serenity* VSCode Chat extension, visit the Serenity* VS Code Documentation.

Giving an AI Agent access to your emails

· 3 min read
Julia Schröder Langhaeuser
Product Director

We want to provide an agent with the capability of accessing our Microsoft account to help us process our emails. We will take advantage of the HTTP Request skill and configure OAuth authentication to use the Microsoft Graph API. We will configure both the access in Microsoft Entra and the agent in Serenity* AI Hub.

Follow these steps to configure the OAuth authentication method.

  1. We need to start by having an application registered in Microsoft. To do this, in the Microsoft Entra page, we will access the App registration section:

    image.png

  2. And click on New registration:

    image.png

    It is important to add https://hub.serenitystar.ai/OAuth2Connection/AuthorizationCallback as a Redirect Uri to give the required permissions for the authentication.

    image.png

  3. On Serenity* AI Hub, we add a new HTTP request skill, and start configuring our integration. We will choose OAuth2 as authentication type and click on Create New to define it:

    image.png

    You will see something like this, and we will go step by step through each of the fields:

    image.png

    • Name: a label to identify this configuration within AI Hub.

    • Auth URL and Token URL: we will obtain these from Microsoft Entra. On your app registration, you can access the Endpoints list:

      image.png

      In particular, we need the OAuth 2.0 authorization endpoint (v2) and the OAuth 2.0 token endpoint (v2)

      image.png

    • Client Id: We obtain it directly from the app registration definition:

      image.png

    • Client Secret: we will create a new secret from Microsoft Entra and link it on Serenity* AI Hub:

      image.png

      Be aware that once the secret defined here expires, the agent will no longer be able to authenticate.

    • Scope: this will define which endpoints can be accessed with this authentication. The definition in Serenity* AI Hub has to match the permissions that were given to this application in Microsoft Entra. To do this, we go to the API permissions section:

      image.png

      In this case, I want to use Microsoft Graph API, but from here it will depend on the implementation.

      image.png

      image.png

      image.png

      image.png

      In the Scope configuration in the AI Hub, we need to add all of these permissions separated by whitespace.

      Important: apart from the scope you want to enable, always add “offline_access” to your scope to enable the authentication.

  4. Now you are ready to test the connection. You should see a new tab opened, and if everything was configured correctly, it will have a message like this one:

    image.png

  5. Now we can finish configuring the HTTP Request skill to access our emails.

    In the Microsoft Graph documentation, you can see all of the different endpoints that are available. In particular, we will configure our skill to read our mailbox, and we will use this endpoint:

    https://learn.microsoft.com/en-us/graph/api/user-list-messages?view=graph-rest-1.0&tabs=http

    The parameters of the endpoint can be fully configured following this documentation: https://learn.microsoft.com/en-us/graph/query-parameters?tabs=http

    The following configuration, for example, retrieves the sender and subject of 50 emails received since a variable date. The agent will decide the correct date based on the parameters.

    image.png

    TIP: we recommend testing the requests from the Graph Explorer tool https://developer.microsoft.com/en-us/graph/graph-explorer?request=me%2Fmessages&version=v1.0
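The configuration described above corresponds to a Graph request roughly like this (a sketch: `{{startDate}}` stands in for the variable the agent fills, and the exact parameter mapping depends on your skill setup):

```python
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0/me/messages"

# OData query parameters: sender and subject only, up to 50 messages,
# received on or after the agent-chosen date.
query = {
    "$select": "sender,subject",
    "$top": "50",
    "$filter": "receivedDateTime ge {{startDate}}",
}

print(f"{GRAPH_BASE}?{urlencode(query)}")
```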

  6. We will use a basic prompt, but make sure to let the agent know what day it is today (with the help of Liquid).

    Help the user process his emails.
    You only have access to the sender and subject of those emails.
    Take into account that today is {{ "now" | date: "%Y-%m-%d %H:%M" }}
  7. We can try it out with a simple request:

    image.png

And see the execution of the tool:

image.png

Create an AI Agent with Alibaba Cloud

· 2 min read
Máximo Flugelman
Project Manager

Serenity* AI Hub provides easy access to a variety of Large Language Models (LLM) powering your AI Agents. The LLM can be thought of as the brain powering your agent.
In this article, you will learn how to integrate models through Alibaba Cloud to power your AI solutions.

Introduction

Qwen is a family of foundational models by Alibaba, available in a variety of sizes, use cases, and technologies, designed for fast usage and scaling.

alt text

Creating an agent

  1. Register for free on Serenity* AI Hub.
  2. On the home screen, use the integrated chat with Serena to create a new agent.
  3. Once Serena correctly identifies the use case for the agent, you will be taken to the Agent Designer, where you can fully customize your agent.

alt text alt text

Selecting Alibaba Qwen Models

In the Agent Designer, go to the Model tab. From the model dropdown, you can easily select Alibaba Qwen models. Once a model is selected, the agent is ready to be used and evaluated. Changing between different Qwen models is as easy as picking the appropriate one from the dropdown menu.

alt text

Integrating with your client application

Exporting your agent and integrating it with your application is as simple as executing an HTTP request to AI Hub. For a quick example, click on Code Snippets in the top right corner, and a sample CURL command will be generated to execute the agent. You can easily integrate this CURL command with your system and start enhancing your applications.

alt text alt text

Integrate AI Agents in Zapier

· 3 min read
Máximo Flugelman
Project Manager

Integrating Serenity* AI Hub Agents with Zapier is a straightforward way to automate tasks and trigger Agent actions through a wide range of options and workflows. This guide will walk you through the simple steps to configure your Agents in your Zaps!

Steps

  1. Sign up for free to create your first AI agent and get your API key.

  2. Log in to Zapier to create or edit an existing Zap.

  3. Create a new step in your workflow and search for the Serenity AI Hub connector. Select Serenity AI Hub Connector

  4. Sign in to Serenity* AI Hub by creating a new connection in Zapier. Create a new Connection

  5. Insert your Serenity* AI Hub API key credentials. Entering credentials

  6. Select the event you want to execute in this step.

  7. Complete the mandatory parameters and automate your workflow! alt text

Available Events

Execute Activity Agent

A simplified event that executes any activity agent from a list of your available agents on AI Hub.

  • Agent Code (required): Select from the provided list the Activity Agent you want to execute.
  • Input Parameters: The parameters defined in the Parameters tab on the Agent Designer allow you to configure agent behavior and functionality. For more details on how to define and use parameters, refer to the Serenity Input Parameters Guide. Be sure to provide values for all required parameters.
  • Agent Version: The specific version of the agent to be executed. If no value is specified, the published version will be used.

alt text

Create Assistant Conversation

This event is used for starting a new conversation with a given Assistant Agent. It returns the necessary chatId for sending messages in a conversation.

Configuring parameters:

  • Agent Code (required): Select from the provided list the Assistant Agent you want to start the conversation with.
  • Input Parameters: The parameters defined in the Parameters tab on the Agent Designer allow you to configure agent behavior and functionality. For more details on how to define and use parameters, refer to the Serenity Input Parameters Guide. Be sure to provide values for all required parameters.
  • Agent Version: The specific version of the agent to be executed. If no value is specified, the published version will be used.
  • User Identifier: An optional identifier used to associate the conversation with a specific user in Serenity* AI Hub.

alt text

Execute Assistant Conversation

This event is used to send messages to the selected agent through a given chat instance. Use this event to send and receive messages with the agent.

Configuring parameters:

  • Agent Code (required): Select from the provided list the Assistant Agent you want to message. Be sure to select the same agent you have already started a conversation with.
  • Chat Id (required): The chat ID of the conversation where the messages will be sent. It's recommended to map this value to the data output of the Create Assistant Conversation event.
  • Message (required): The message to send to the agent.
  • Input Parameters: The parameters defined in the Parameters tab on the Agent Designer allow you to configure agent behavior and functionality. For more details on how to define and use parameters, refer to the Serenity Input Parameters Guide. Be sure to provide values for all required parameters.

alt text

Create an AI Agent with IBM's new LLM Granite

· 2 min read
Máximo Flugelman
Project Manager

Serenity* AI Hub provides easy access to a variety of Large Language Models (LLM) powering your AI Agents. The LLM can be thought of as the brain powering your agent.
In this article, you will learn how to integrate IBM foundational models from the Granite family to power your AI solutions.

Introduction

Granite is a family of foundational models by IBM, available in a variety of sizes: 2, 8, and 20 billion parameters. The new IBM Granite 3.0 models deliver state-of-the-art performance relative to model size while maximizing safety, speed, and cost-efficiency for enterprise use cases.

[Image: IBM Granite model family]

Creating an agent

  1. Register for free on Serenity* AI Hub.
  2. On the home screen, use the integrated chat with Serena to create a new agent.
  3. Once Serena correctly identifies the use case for the agent, you will be taken to the Agent Designer, where you can fully customize your agent.

[Screenshots: Serena chat and the Agent Designer]

Selecting IBM Granite Models

In the Agent Designer, go to the Model tab. From the model dropdown, you can easily select IBM Granite models. Once a model is selected, the agent is ready to be used and evaluated. Switching between Granite models is as simple as picking a different one from the dropdown menu.

[Screenshot: model selection dropdown in the Model tab]

Integrating with your client application

Exporting your agent and integrating it with your application is as simple as executing an HTTP request to AI Hub. For a quick example, click on Code Snippets in the top right corner, and a sample cURL command will be generated to execute the agent. You can easily integrate this cURL command into your system and start enhancing your applications.
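If you prefer Python over cURL, the same call can be sketched as follows. The endpoint path, header name, and payload shape below are assumptions for illustration; copy the real values from the Code Snippets panel in AI Hub:

```python
import json

# Hypothetical sketch of the request the generated cURL snippet performs.
# The host, path, header names, and payload fields are assumptions; use the
# exact values shown in your Code Snippets panel.
AIHUB_URL = "https://<aihub-host>/api/agent/MY_AGENT/execute"  # assumed placeholder endpoint
headers = {
    "Content-Type": "application/json",
    "X-API-KEY": "<your-api-key>",  # assumed header name
}
payload = json.dumps([{"Key": "message", "Value": "Hello, agent!"}])  # assumed shape

# Sending it would then be a single call, e.g. with the requests library:
# response = requests.post(AIHUB_URL, headers=headers, data=payload)
```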

[Screenshots: Code Snippets panel and generated cURL command]

How to Integrate Your AI Agent with Calendly

· 3 min read
Máximo Flugelman
Project Manager

When we talk about AI Agents, a key feature for automating your processes and workflows is the ability to integrate with external services. One such service is Calendly, which gives your agent access to your agenda and helps organize your day.

In this article, we will explore how to integrate your AI Agent with the Calendly API using the HTTP Request Skill in Serenity* AI Hub.

About the Example

We will be creating an Assistant Agent that integrates with Calendly to retrieve your daily scheduled events. We will use Calendly’s /scheduled_events endpoint to get a list of upcoming events for the logged-in user.

Prerequisites

  • You should have a Calendly account to follow this step-by-step guide.
  • Inside Calendly, create a Bearer Token that we will use for authentication with the Calendly API.

Setting Up the Agent

For this example, we will be using an existing Agent template in AI Hub called Calendly Assistant. Go to the agent creator screen and select the template from the list.

[Screenshot: Calendly Assistant template in the agent creator]

Skills Tab

In the Skills tab, we can see two existing skills that will be used for the integration with Calendly:

  • The first skill retrieves the list of upcoming events
  • The second skill retrieves your Calendly user ID

Click Edit on the first skill so we can configure its authentication settings.

[Screenshot: skill edit view]

Let's go through the configuration settings needed for the Calendly request skill:

[Screenshot: Calendly request skill configuration]

  • Code: A simple identifier for the skill.
  • What is this HTTP Request for? This defines the objective of this skill. If the agent determines that the specified condition is met, it will execute the skill.
  • Endpoint: Specifies the method type and base URL. In this case, we use the scheduled_events?user endpoint, which requires a user parameter. By using the {{userId}} syntax, we indicate that this parameter value will be automatically replaced by the AI. Additional parameters can also be specified using this syntax.
  • URL Parameters: Defines how the agent should replace each URL parameter. In this case, we use Calendly’s user identification system via the User Id.
  • Authentication: Replace the placeholder with your Calendly bearer token.
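Under the hood, the skill effectively performs a placeholder substitution followed by an authenticated GET. Here is a minimal Python sketch of that behavior; the variable names are ours, while the {{userId}} syntax matches the skill form:

```python
# Minimal sketch of what the skill does: substitute the {{userId}} placeholder
# into the URL and send an authenticated GET request.
base_url = "https://api.calendly.com/scheduled_events?user={{userId}}"
user_id = "https://api.calendly.com/users/ABCDEF123456"  # example Calendly user URI

url = base_url.replace("{{userId}}", user_id)
headers = {"Authorization": "Bearer <your-calendly-token>"}

# The actual request (requires a valid token), e.g. with the requests library:
# response = requests.get(url, headers=headers)
```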

If you edit the second skill, the one that retrieves your user ID, you should see the following configuration:

[Screenshot: user ID retrieval skill configuration]

Replace the placeholder with your Calendly bearer token.

Testing the Agent

Once the skills are configured, simply type your question to test the agent. For example, you can ask "What are my scheduled events for today?" Your agent should respond with a message like this:

[Screenshot: agent response listing today's scheduled events]

Final Tips

  • Check out the Calendly API Reference to further customize and add additional endpoints.
  • You can add dynamic parameters to your base URL, which will be completed by the agent based on the context of the conversation.
  • Clearly define the responsibility of each skill and when it should be executed. This will improve your agent’s performance by avoiding unnecessary skill execution.
  • Integrate your agent with your website, application, or any other service.

Is Your Language Holding Back Your AI Agent? How Language Resources Affect LLM Performance

· 5 min read
Agustín Fisher
Software Developer

Large Language Models (LLMs) have revolutionized everyday tasks and become part of our day-to-day lives, demonstrating remarkable capabilities across a range of activities. However, their performance often varies depending on the language we're using. This has significant implications for their deployment in multilingual settings, especially when extending support to low-resource languages.

High-Resource and Low-Resource Languages

Languages are broadly classified based on the amount of training data available at the time of training LLMs. High-resource languages, such as English, Chinese, and Spanish, benefit from vast amounts of text data, which facilitates the creation of uniform and well-structured embedding spaces. These embeddings, which represent the semantic relationships between words, phrases, and concepts, tend to be more consistent and stable in high-resource languages. This uniformity ensures that LLMs perform reliably across a variety of tasks for these languages.

Conversely, low-resource languages face a different reality. Due to the limited availability of high-quality data, the embedding spaces for these languages are often sparse and less uniform. This inconsistency can lead to subpar performance, as the model struggles to capture nuanced semantic relationships. For instance, tasks like sentiment analysis, machine translation, or question answering may yield less accurate results when applied to low-resource languages.

In simpler terms, we can think of this like talking to a wise old person with a lifetime of experience (a high-resource language) versus a little kid who has not lived much yet (a low-resource language). Naturally, the person with more life experience is better equipped to understand your emotions, relate to them, and give you useful advice.

Why Does This Happen?

The training process of LLMs inherently favors languages with more data. High-resource languages dominate the loss optimization during training, leading to embedding spaces that are better tuned to these languages. On the other hand, low-resource languages are often underrepresented, resulting in embedding spaces that are:

  • Less densely populated: Fewer examples mean weaker representation of linguistic nuances.
  • More prone to noise: The lack of data increases the likelihood of overfitting or spurious correlations.
  • Poorly aligned with high-resource languages: This misalignment can cause inaccuracies in multilingual tasks, such as translation or cross-lingual understanding.
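To make the idea of alignment concrete, here is a toy sketch using cosine similarity, the standard way to compare embeddings. The vectors below are made up purely for illustration:

```python
import math

# Toy illustration of embedding-space alignment: cosine similarity between
# word vectors. The vectors are invented for demonstration only.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In a well-aligned space, translations land close together...
cat_en = [0.9, 0.1, 0.3]
cat_es = [0.85, 0.15, 0.28]  # "gato": near its English counterpart
# ...while a poorly represented low-resource word drifts away.
cat_lr = [0.2, 0.7, 0.9]

print(cosine_similarity(cat_en, cat_es))  # high: well aligned
print(cosine_similarity(cat_en, cat_lr))  # much lower: misaligned
```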

[Figure: embedding spaces of Gemma 7B for 8 different languages]

Here we can see what happens with these embedding spaces of Gemma 7B with 8 different languages. Can you tell which ones are high-resource and which ones are low-resource languages?

Correcting the Imbalance: The Role of Fine-Tuning

Recent advancements in LLM development, such as the Qwen models, demonstrate how fine-tuning can mitigate these issues. Fine-tuning involves training a pre-trained model on specific data tailored to the target language or task. This additional training can help align the embedding spaces of low-resource languages with their high-resource counterparts. Here’s how:

  • Data Augmentation: By introducing synthetic or curated datasets for low-resource languages, fine-tuning enriches the embedding space.
  • Multilingual Alignment: Techniques like contrastive learning can align representations across languages, improving performance in cross-lingual tasks.
  • Task-Specific Optimization: Fine-tuning on task-relevant data ensures that the model’s performance improves for specific applications, even in low-resource settings.

[Figure: similarity to English across LLM layers for different models]

These graphs show the similarity to English performance across layers in the LLM for a couple of different models. Notice how in the Qwen model, Chinese is the only high-resource language that shows an improvement in performance over the last layers. This is thanks to fine-tuning work specifically on Chinese training data.

Running Some Tests

We ran some tests comparing English and Spanish performance using our very own Agent Quality Studio.

[Screenshot: Agent Quality Studio]

We asked some very specific and challenging questions in both English and Spanish and used automated testing to determine whether the answers were similar enough. Note that this testing treats English as the reference: we don't measure how correct an answer is, but how similar it is to the English answer. In some cases, the English answer itself may be sub-optimal; we measure similarity to English simply because English is the standard among LLMs.

The agent answered every question correctly in both languages, except for one interesting case where the Spanish answer was actually better! This may be due to the connotations some specific words carry in Spanish.

[Screenshot: English vs. Spanish test results]

Final Thoughts

We learned that low-resource languages can lead to non-uniform embedding spaces, which has a significant negative impact on LLM performance. However, as our test results show, the differences among high-resource languages are minor, even in challenging topics like STEM. How much this affects day-to-day work with LLMs depends on your use case, but as long as you stay within high-resource languages, you don't have much to worry about. Here's what to keep in mind to get the most out of your AI Agents:

  • Try to stick to high-resource languages if possible. Sticking to English, Spanish, French, Chinese, and similar languages will yield better results across the board.
  • If you really need to squeeze the best performance out of a specific language, consider fine-tuning. As we saw previously, this can greatly improve performance in that language.

References

Zihao Li, "Quantifying Multilingual Performance of Large Language Models Across Languages," arXiv preprint arXiv:2404.11553 (2024).