Data & Agent Performance Engineer

Own Company

São Paulo, SP, Brazil

Posted on May 7, 2026

Description

This role is designed for a technical data professional who builds the data foundation before agent deployment and provides performance visibility after agents go live.

The Data & Agent Performance Engineer plays a key role in making enterprise data usable, discoverable, and actionable for AI agents. This person works closely with architects, AI engineers, business teams, and platform specialists to prepare the data architecture needed for agentic systems and to monitor how agents perform once they are in production.

Core Purpose

The core purpose of this role is to enable data readiness, real-time context, and agent effectiveness.

Because AI agents need deep business context to make decisions, answer accurately, and execute workflows, this role focuses on preparing the data foundation behind the agent. This includes supporting the Data Cloud strategy, harmonizing enterprise data, connecting structured and unstructured sources, and feeding relevant real-time context into agentic systems.

At the same time, this role is also responsible for the “after go-live” visibility: analyzing agent performance, identifying gaps in answers, monitoring usage patterns, evaluating grounding quality, and helping teams continuously improve the agent experience.

Key Responsibilities

Build the Data Foundation for Agents

Support the design and implementation of data models, Data Cloud configurations, identity resolution, data harmonization, ingestion patterns, and activation strategies required for agents to operate with reliable business context.
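As one illustration of what identity resolution can look like at its simplest, the sketch below groups customer records from different systems into unified profiles. The field names (`email`, `full_name`) and the match rules are invented for illustration; they are not a prescribed Data Cloud configuration.

```python
# Hypothetical sketch: rule-based identity resolution across source systems.
# Field names and match rules are illustrative assumptions only.

def normalize(value: str) -> str:
    """Lowercase and strip whitespace so trivial formatting differences don't block a match."""
    return value.strip().lower()

def resolve_identities(records: list[dict]) -> dict[str, list[dict]]:
    """Group records into unified profiles keyed by normalized email.
    Records without an email fall back to a normalized full-name key."""
    profiles: dict[str, list[dict]] = {}
    for record in records:
        email = record.get("email")
        key = "email:" + normalize(email) if email else "name:" + normalize(record["full_name"])
        profiles.setdefault(key, []).append(record)
    return profiles

crm = {"source": "crm", "email": "Ana@Example.com", "full_name": "Ana Silva"}
support = {"source": "support", "email": "ana@example.com ", "full_name": "Ana P. Silva"}
profiles = resolve_identities([crm, support])
# The two records collapse into one profile under the same normalized email key.
```

Production identity resolution uses fuzzy matching and configurable match rules rather than a single normalized key, but the shape of the problem is the same: many source records, one unified profile per real-world entity.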

Enable Real-Time Context for Agentic Systems

Prepare and connect the data sources agents need to answer questions, make recommendations, trigger actions, and support business workflows with accurate and contextual information.

Move from Traditional ETL to Search, Indexing, and Knowledge Access

Help evolve traditional data architectures beyond rigid ETL pipelines by enabling enterprise search, content stores, indexing strategies, semantic search, and knowledge graph structures that make information easier for agents to discover and use.
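The index-then-retrieve shape described above can be sketched in a few lines. This is a deliberately simplified stand-in: a real deployment would use embeddings and a vector store for semantic similarity, while here bag-of-words cosine similarity keeps the example dependency-free.

```python
# Simplified sketch of an in-memory search index. Bag-of-words cosine
# similarity stands in for embedding-based semantic search so the
# indexing-and-retrieval pattern is visible without external dependencies.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class KnowledgeIndex:
    """Index documents once, then let an agent query them at answer time."""
    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, vectorize(text)))

    def search(self, query: str, top_k: int = 1) -> list[str]:
        qv = vectorize(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]

index = KnowledgeIndex()
index.add("Refund policy: refunds are issued within 14 days of purchase")
index.add("Shipping times vary by region and carrier")
best = index.search("how do I get a refund")
```

The contrast with a rigid ETL pipeline is the point: documents are indexed as they are, and relevance is computed at query time rather than baked into a fixed transformation.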

Work with Messy and Unstructured Data

Use AI-assisted approaches to extract meaning from scattered and unstructured data sources such as PDFs, transcripts, documents, knowledge articles, emails, legacy content, and operational records. The role does not wait for perfect data; it helps make imperfect data usable for AI use cases.
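A minimal example of making imperfect data usable: pulling structured fields out of messy free text. Regular expressions stand in here for the AI-assisted extraction the role describes; the snippet, field names, and patterns are assumptions for illustration.

```python
# Illustrative sketch: best-effort extraction of structured fields from a
# messy transcript snippet. An LLM-assisted extractor would handle far
# messier input; the point is that imperfect data can still become queryable.
import re

def extract_case_fields(raw: str) -> dict:
    """Extract a case id and customer email if present; leave None otherwise."""
    case = re.search(r"case[#\s:]*(\d+)", raw, re.IGNORECASE)
    email = re.search(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", raw)
    return {
        "case_id": case.group(1) if case else None,
        "email": email.group(0) if email else None,
    }

snippet = "Agent: thanks! Logged as Case #48213, will follow up at maria.souza@example.com."
fields = extract_case_fields(snippet)
```

Note the design choice: missing fields come back as `None` rather than raising, because a pipeline over imperfect data has to tolerate partial extractions.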

Build RAG and API-Based Integration Layers

Develop and support Retrieval-Augmented Generation architectures, APIs, data connectors, and multi-source access patterns that allow agents to retrieve information from different systems without forcing all enterprise data into a single centralized repository.
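The multi-source access pattern above can be sketched as a set of connectors behind one interface. The connector names and their in-memory contents are invented for illustration; real connectors would call system APIs rather than hold data locally.

```python
# Hedged sketch of a multi-source retrieval layer for RAG. Connector names
# (KnowledgeBaseConnector, CrmConnector) and contents are illustrative only.
from typing import Protocol

class Connector(Protocol):
    def retrieve(self, query: str) -> list[str]: ...

class KnowledgeBaseConnector:
    """Stands in for an enterprise knowledge base behind an API."""
    def __init__(self, articles: list[str]):
        self.articles = articles
    def retrieve(self, query: str) -> list[str]:
        terms = set(query.lower().split())
        return [a for a in self.articles if terms & set(a.lower().split())]

class CrmConnector:
    """Stands in for CRM record lookup; same interface, different backend."""
    def __init__(self, records: dict[str, str]):
        self.records = records
    def retrieve(self, query: str) -> list[str]:
        return [v for k, v in self.records.items() if k.lower() in query.lower()]

def build_context(query: str, connectors: list[Connector]) -> str:
    """Fan the query out across sources and assemble grounding context for the agent.
    No centralized repository is required: each source stays where it is."""
    snippets: list[str] = []
    for c in connectors:
        snippets.extend(c.retrieve(query))
    return "\n".join(snippets)

kb = KnowledgeBaseConnector(["Returns are accepted within 30 days."])
crm = CrmConnector({"acme": "Acme Corp: enterprise tier, renewal in Q3."})
context = build_context("what is the returns window for acme", [kb, crm])
```

Because every source implements the same `retrieve` interface, adding a new system means adding a connector, not re-ingesting its data into a central store.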

Monitor Agent Performance After Go-Live

Analyze agent sessions, interactions, escalation points, unanswered questions, grounding failures, hallucination risks, user feedback, adoption metrics, latency, and completion rates to understand how agents are performing in real-world scenarios.
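A few of the metrics listed above can be computed directly from session logs. The session schema used below (`resolved`, `escalated`, `answered`, `latency_ms`) is an assumption for illustration; the real fields depend on the platform's event model.

```python
# Minimal sketch of post-go-live metrics over agent session logs.
# The session schema is an illustrative assumption, not a platform contract.

def summarize(sessions: list[dict]) -> dict:
    """Compute containment, escalation, unanswered rate, and average latency."""
    n = len(sessions)
    return {
        "containment_rate": sum(s["resolved"] and not s["escalated"] for s in sessions) / n,
        "escalation_rate": sum(s["escalated"] for s in sessions) / n,
        "unanswered_rate": sum(not s["answered"] for s in sessions) / n,
        "avg_latency_ms": sum(s["latency_ms"] for s in sessions) / n,
    }

sessions = [
    {"resolved": True,  "escalated": False, "answered": True,  "latency_ms": 820},
    {"resolved": False, "escalated": True,  "answered": True,  "latency_ms": 1430},
    {"resolved": False, "escalated": False, "answered": False, "latency_ms": 640},
    {"resolved": True,  "escalated": False, "answered": True,  "latency_ms": 910},
]
metrics = summarize(sessions)
# containment 0.5, escalation 0.25, unanswered 0.25, avg latency 950.0 ms
```

Aggregates like these are the raw material for the dashboards described below; the harder analytical work is tracing an unanswered question or grounding failure back to a gap in the knowledge sources.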

Improve Agent Quality Continuously

Translate performance insights into technical improvements, including better grounding, improved prompts, refined knowledge sources, optimized retrieval logic, better data mappings, and stronger monitoring dashboards.

Create Observability and Performance Dashboards

Build dashboards and reporting views to give teams visibility into agent behavior, usage, quality, business impact, and operational risks.

Profile

This is not a traditional data engineering role focused only on pipelines. It is an evolved data engineering role for the AI era.

The ideal professional has a strong data background, understands modern data platforms, and can also think about how data is consumed by AI agents in real business workflows.

This person should be technical enough to build and troubleshoot data integrations, RAG patterns, APIs, and dashboards, while also understanding what business context an agent needs to deliver useful and trusted responses.

Required Background

  • Strong background in data engineering, data architecture, analytics, or AI data foundations.
  • Experience with data modeling, ingestion, transformation, APIs, and enterprise data platforms.
  • Knowledge of Salesforce Data Cloud, CRM data, MuleSoft, or similar integration/data platforms.
  • Understanding of structured and unstructured data.
  • Familiarity with RAG, semantic search, vector databases, indexing, knowledge graphs, or enterprise search patterns.
  • Ability to work with SQL, Python, APIs, and data visualization tools.
  • Experience building dashboards or observability views for business or technical performance.
  • Curiosity about AI agents, LLMs, prompt behavior, grounding, and agent performance.
  • Experience with Salesforce platforms is a strong plus.

Preferred Skills

  • Experience with data platforms such as Data Cloud, Tableau, CRM Analytics, or similar tools.
  • Experience analyzing agent, chatbot, or digital service performance.
  • Knowledge of observability metrics such as session volume, containment, escalation, response quality, latency, user satisfaction, and task completion.
  • Experience with unstructured data processing, document extraction, transcripts, PDFs, and knowledge base optimization.
  • Understanding of governance, security, privacy, and data quality principles.

Positioning

This role sits between Data Engineering, AI Engineering, and Agent Performance Observability.

It supports the agent lifecycle end to end:

Before go-live:

Prepare the data, context, integrations, RAG, indexing, and knowledge structures needed for the agent to work.

After go-live:

Monitor performance, detect gaps, analyze behavior, and recommend improvements to make agents more accurate, useful, trusted, and scalable.

In simple terms:

This person makes sure agents have the right data before they go live — and the right visibility after they are live.