AI Agent Tools

Our new Graphlit Agent Tools for Python integrate with agent frameworks such as CrewAI, making it easy for developers to bring the Graphlit Platform into agentic workflows.

We have published example Google Colab notebooks using CrewAI: one analyzes the web marketing strategy of a company, and another performs structured data extraction of products from scraped web pages. The code below, taken from those notebooks, uses the Graphlit web search tool to find a company's home page.

# Install the Graphlit client and tools, plus CrewAI
# (package names inferred from the imports below)
!pip install --upgrade graphlit-client
!pip install --upgrade graphlit-tools
!pip install --upgrade crewai
from crewai import Agent, Task, Crew, Process
from graphlit import Graphlit
from graphlit_tools import WebSearchTool, CrewAIConverter

# Initialize the Graphlit client; credentials can also be passed explicitly
# as organization_id, environment_id and jwt_secret arguments
graphlit = Graphlit()

# Wrap the Graphlit tool so CrewAI agents can call it
web_search_tool = CrewAIConverter.from_tool(WebSearchTool(graphlit))

web_search_agent = Agent(
    role="Web Researcher",
    goal="Find the {company} website.",
    backstory="",
    verbose=True,
    allow_delegation=False,
    tools=[web_search_tool],
)

search_web_task = Task(
    description=(
        """Given company named {company}, search the web to find their home page.
        Return the root path for URLs, not individual web pages.
        For example return https://www.example.com, 
		not https://www.example.com/index.html"""
    ),
    expected_output="A single URL for the {company} home page",
    agent=web_search_agent,
)

crew = Crew(
    agents=[web_search_agent],
    tasks=[search_web_task],
    process=Process.sequential,
    planning=True,
    verbose=True,
)

# Example input; replace with the company you want to research
company_name = "OpenAI"

result = await crew.kickoff_async(inputs={"company": company_name})
print(result)

Content Ingestion

Slack: Ingests messages from a Slack channel into the knowledge base. Accepts the Slack channel name. Returns extracted Markdown text and metadata from the messages.

Discord: Ingests messages from a Discord channel into the knowledge base. Accepts the Discord channel name. Returns extracted Markdown text and metadata from the messages.

Microsoft Teams: Ingests messages from a Microsoft Teams channel into the knowledge base. Returns extracted Markdown text and metadata from the messages.

Linear: Ingests issues from a Linear project into the knowledge base. Accepts the Linear project name. Returns extracted Markdown text and metadata from the issues.

Jira: Ingests issues from Atlassian Jira into the knowledge base. Accepts the Atlassian Jira server URL and project name. Returns extracted Markdown text and metadata from the issues.

GitHub Issues: Ingests issues from a GitHub repository into the knowledge base. Accepts the GitHub repository owner and repository name. Returns extracted Markdown text and metadata from the issues.

Google Email: Ingests emails from a Google Email account into the knowledge base. Returns extracted Markdown text and metadata from the emails.

Microsoft Email: Ingests emails from a Microsoft Email account into the knowledge base. Returns extracted Markdown text and metadata from the emails.

RSS: Ingests posts from an RSS feed into the knowledge base. For podcast RSS feeds, the audio will be transcribed and ingested into the knowledge base. Returns extracted or transcribed Markdown text and metadata from the RSS posts.

Notion: Ingests pages from a Notion database into the knowledge base. Returns extracted Markdown text and metadata from the Notion pages.

Reddit: Ingests posts from a Reddit subreddit into the knowledge base. Returns extracted Markdown text and metadata from the Reddit posts.

Web Map: Accepts a web page URL as a string. Enumerates the web pages at or beneath the provided URL using the web sitemap. Returns a list of mapped URIs from the web site.

Web Search: Accepts search query text as a string. Performs a web search based on the query. Returns Markdown text and metadata extracted from the resulting web pages.

Web Crawl: Crawls web pages from a web site into the knowledge base. Returns Markdown text and metadata extracted from the web pages.

Web Scrape: Scrapes a web page into the knowledge base. Returns Markdown text and metadata extracted from the web page.

Local File: Accepts a file path to a local file. Can ingest individual Word documents, PDFs, audio recordings, videos, images, or any other unstructured data. Returns extracted Markdown text and metadata from the content.

Cloud File: Accepts a URL to a cloud-hosted file. Can ingest individual Word documents, PDFs, audio recordings, videos, images, or any other unstructured data. Returns extracted Markdown text and metadata from the content.
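
These ingestion tools wrap into CrewAI the same way as WebSearchTool in the example above. Here is a minimal sketch, assuming the package exposes a WebScrapeTool class (the class name is an assumption; check graphlit_tools for the exact names):

from crewai import Agent, Task, Crew, Process
from graphlit import Graphlit
from graphlit_tools import CrewAIConverter, WebScrapeTool  # WebScrapeTool name is an assumption

graphlit = Graphlit()

# Wrap the ingestion tool so a CrewAI agent can call it
web_scrape_tool = CrewAIConverter.from_tool(WebScrapeTool(graphlit))

ingest_agent = Agent(
    role="Content Ingester",
    goal="Scrape the {url} web page into the knowledge base.",
    backstory="",
    verbose=True,
    allow_delegation=False,
    tools=[web_scrape_tool],
)

ingest_task = Task(
    description="Scrape the web page at {url} and return its extracted Markdown text.",
    expected_output="Markdown text and metadata extracted from the web page",
    agent=ingest_agent,
)

crew = Crew(
    agents=[ingest_agent],
    tasks=[ingest_task],
    process=Process.sequential,
    verbose=True,
)

result = await crew.kickoff_async(inputs={"url": "https://www.graphlit.com"})
print(result)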

RAG

Prompt Completion: Accepts a user prompt as a string. Prompts the LLM with relevant content and returns the completion from the RAG pipeline, as Markdown text. Uses vector embeddings and similarity search to retrieve relevant content from the knowledge base, and can search through web pages, PDFs, audio transcripts, and other unstructured data.
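
The RAG tool follows the same wrapping pattern as the other tools. A minimal sketch, assuming a PromptTool class (the class name is an assumption; verify it against graphlit_tools):

from crewai import Agent
from graphlit import Graphlit
from graphlit_tools import CrewAIConverter, PromptTool  # PromptTool name is an assumption

graphlit = Graphlit()

# Wrap the RAG tool; the agent sends a user prompt and receives the
# Markdown completion generated from retrieved knowledge base content
rag_tool = CrewAIConverter.from_tool(PromptTool(graphlit))

analyst_agent = Agent(
    role="Research Analyst",
    goal="Answer questions about {company} using the knowledge base.",
    backstory="",
    verbose=True,
    allow_delegation=False,
    tools=[rag_tool],
)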

Data Retrieval

Organizations: Accepts search text as a string. Retrieves organizations from the knowledge base based on similarity search. Returns metadata from the organizations relevant to the search text.

Persons: Accepts search text as a string. Retrieves persons from the knowledge base based on similarity search. Returns metadata from the persons relevant to the search text.

Contents: Accepts search text as a string. Optionally accepts a list of content types (i.e. FILE, PAGE, EMAIL, ISSUE, MESSAGE) for filtering the result set. Retrieves contents from the knowledge base based on similarity search. Returns extracted Markdown text and metadata from the contents relevant to the search text. Can search through web pages, PDFs, audio transcripts, Slack messages, emails, or any unstructured data ingested into the knowledge base.
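
A retrieval tool wraps the same way. A minimal sketch, assuming a ContentRetrievalTool class (the class name is an assumption; verify it against graphlit_tools):

from graphlit import Graphlit
from graphlit_tools import CrewAIConverter, ContentRetrievalTool  # class name is an assumption

graphlit = Graphlit()

# Wrap the content retrieval tool; an agent passes search text and receives
# Markdown text and metadata from the most similar knowledge base contents
content_retrieval_tool = CrewAIConverter.from_tool(ContentRetrievalTool(graphlit))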

Content Generation

Chapters: Accepts a transcript as a string. Returns chapters as text.

Keywords: Accepts text as a string. Optionally accepts the count of keywords to be generated. Returns keywords as text.

Follow-up Questions: Accepts text as a string. Optionally accepts the count of follow-up questions to be generated. Returns follow-up questions as text.

Social Media Posts: Accepts text as a string. Optionally accepts the count of social media posts to be generated. Returns social media posts as text.

Headlines: Accepts text as a string. Optionally accepts the count of headlines to be generated. Returns headlines as text.

Bullet Points: Accepts text as a string. Optionally accepts the count of bullet points to be generated. Returns bullet points as text.

Summary: Accepts text as a string. Optionally accepts a text prompt to be provided to the LLM for summarization. Returns the summary as text.
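
The generation tools follow the same pattern: an agent passes text and receives the generated text back. A minimal sketch, assuming SummaryTool and FollowupQuestionsTool classes (both class names are assumptions; check graphlit_tools for the actual names):

from graphlit import Graphlit
from graphlit_tools import CrewAIConverter, SummaryTool, FollowupQuestionsTool  # class names are assumptions

graphlit = Graphlit()

# Wrap generation tools for CrewAI; each accepts text and returns generated text
summary_tool = CrewAIConverter.from_tool(SummaryTool(graphlit))
followup_tool = CrewAIConverter.from_tool(FollowupQuestionsTool(graphlit))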

Image Description

Web Page Screenshot: Screenshots a web page from a URL and describes the page with a vision LLM. Returns a Markdown description of the screenshot and extracted Markdown text from the image.

Image Description: Accepts an image URL as a string. Prompts a vision LLM and returns the completion as Markdown text.
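
As with the other tools, the vision tools wrap the same way. A minimal sketch, assuming a DescribeImageTool class (the class name is an assumption; verify it against graphlit_tools):

from graphlit import Graphlit
from graphlit_tools import CrewAIConverter, DescribeImageTool  # class name is an assumption

graphlit = Graphlit()

# Wrap the image description tool; an agent passes an image URL and receives
# the Markdown completion from the vision LLM
describe_image_tool = CrewAIConverter.from_tool(DescribeImageTool(graphlit))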

Data Extraction

Text: Accepts the text to be scraped and the JSON schema of the Pydantic model to extract into. The JSON schema needs to be of type 'object' and include 'properties' and 'required' fields. Returns the extracted JSON from the text.

Web Page: Accepts the URL to be scraped and the JSON schema of the Pydantic model to extract into. The JSON schema needs to be of type 'object' and include 'properties' and 'required' fields. Returns the extracted JSON from the web page.

File: Accepts the URL to be ingested and the JSON schema of the Pydantic model to extract into. The JSON schema needs to be of type 'object' and include 'properties' and 'required' fields. Returns the extracted JSON from the file.
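
Since these tools take the JSON schema of a Pydantic model, the schema can be generated directly from a model definition. A minimal sketch; the Product model below is a hypothetical example:

from pydantic import BaseModel, Field

# Hypothetical model for products extracted from scraped web pages
class Product(BaseModel):
    name: str = Field(description="Product name")
    price: float = Field(description="Product price, in USD")

# Pydantic emits a JSON schema of type 'object' that includes the
# 'properties' and 'required' fields these extraction tools expect
schema = Product.model_json_schema()
print(schema)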