Agent profiles allow you to customize the behavior and capabilities of the automation agent by configuring which LLM models power different components.

What are Agent Profiles?

An agent profile defines the LLM configuration for the various sub-agents that power mobile-use:
  • Planner - Creates high-level plans from goals
  • Orchestrator - Coordinates execution steps
  • Cortex - Visual understanding and decision-making (most important)
  • Executor - Performs specific actions
  • Utils - Helper agents (outputter, hopper)

Platform Profiles

Using the platform? Create and manage profiles on platform.minitap.ai/llm-profiles, then reference them by name in your tasks.

Creating Platform Profiles

  1. Go to LLM Profiles on the platform
  2. Click Create Profile
  3. Configure each agent component with your preferred models
  4. All OpenRouter models are available (no API key management needed)
  5. Reference the profile by name in your code:
from minitap.mobile_use.sdk.types import PlatformTaskRequest

result = await agent.run_task(
    request=PlatformTaskRequest(
        task="check-notifications",
        profile="your-profile-name"  # Profile configured on platform
    )
)
Platform profiles can be updated anytime without changing code - perfect for A/B testing different models!

Local Profiles

For local development, profiles are defined in config files or code:

From Configuration File

The recommended approach for local and production use:
from minitap.mobile_use.sdk.types import AgentProfile

profile = AgentProfile(
    name="default",
    from_file="llm-config.defaults.jsonc"
)
llm-config.defaults.jsonc:
{
  "planner": {
    "provider": "openai",
    "model": "gpt-5-nano"
  },
  "orchestrator": {
    "provider": "openai",
    "model": "gpt-5-nano"
  },
  "cortex": {
    "provider": "openai",
    "model": "gpt-5",
    "fallback": {
      "provider": "openai",
      "model": "gpt-5"
    }
  },
  "executor": {
    "provider": "openai",
    "model": "gpt-5-nano"
  },
  "utils": {
    "hopper": {
      "provider": "openai",
      "model": "gpt-5-nano"
    },
    "outputter": {
      "provider": "openai",
      "model": "gpt-5-nano"
    }
  }
}

Programmatic Configuration

For dynamic configuration:
from minitap.mobile_use.sdk.types import AgentProfile
from minitap.mobile_use.config import LLM, LLMConfig, LLMConfigUtils, LLMWithFallback

detail_oriented_profile = AgentProfile(
    name="detail_oriented",
    llm_config=LLMConfig(
        planner=LLM(provider="openrouter", model="meta-llama/llama-4-scout"),
        orchestrator=LLM(provider="openrouter", model="meta-llama/llama-4-scout"),
        cortex=LLMWithFallback(
            provider="openai",
            model="o4-mini",
            fallback=LLM(provider="openai", model="gpt-5"),
        ),
        executor=LLM(provider="openai", model="gpt-5-nano"),
        utils=LLMConfigUtils(
            outputter=LLM(provider="openai", model="gpt-5-nano"),
            hopper=LLM(provider="openai", model="gpt-4.1"),
        ),
    )
)
You cannot specify both llm_config and from_file - they are mutually exclusive.
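The constraint can be pictured with a small sketch - this is an illustration of the rule, not the SDK's actual validation code:

```python
# Illustrative check for mutually exclusive arguments (a sketch of the
# constraint, not the SDK's real implementation).
def make_profile(name, llm_config=None, from_file=None):
    # Exactly one of llm_config / from_file must be provided.
    if (llm_config is None) == (from_file is None):
        raise ValueError("Specify exactly one of llm_config or from_file")
    return {"name": name, "llm_config": llm_config, "from_file": from_file}

# Valid: only from_file is given
profile = make_profile("default", from_file="llm-config.defaults.jsonc")
```

Passing both (or neither) raises immediately, so misconfigured profiles fail fast instead of at task time.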

Using Profiles

Setting a Default Profile

Configure an agent with a default profile:
from minitap.mobile_use.sdk.builders import Builders
from minitap.mobile_use.sdk.types import AgentProfile

profile = AgentProfile(name="default", from_file="llm-config.defaults.jsonc")

config = (
    Builders.AgentConfig
    .with_default_profile(profile)
    .build()
)

agent = Agent(config=config)

Multiple Profiles

Register multiple profiles and switch between them:
# Create profiles
fast_profile = AgentProfile(name="fast", from_file="fast-config.jsonc")
accurate_profile = AgentProfile(name="accurate", from_file="accurate-config.jsonc")

# Configure agent with multiple profiles
config = (
    Builders.AgentConfig
    .add_profiles([fast_profile, accurate_profile])
    .with_default_profile(fast_profile)
    .build()
)

agent = Agent(config=config)

# Use different profiles for different tasks
await agent.run_task(
    goal="Quick notification check",
    profile="fast"
)

await agent.run_task(
    goal="Detailed financial analysis",
    profile="accurate"
)

Profile Use Cases

  • Speed vs Accuracy
  • Cost Optimization
  • Provider Diversity
Fast Profile - Quick tasks, simple UI navigation
{
  "cortex": {
    "provider": "openai",
    "model": "gpt-5-nano"
  }
}
Accurate Profile - Complex analysis, detailed extraction
{
  "cortex": {
    "provider": "openai",
    "model": "o4-mini"
  }
}
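For the cost-optimization case, a common pattern is to keep a strong model on the cortex and route the cheaper components elsewhere. The fragment below is illustrative only, reusing model names from the examples above:

```json
{
  "cortex": {
    "provider": "openai",
    "model": "gpt-5"
  },
  "planner": {
    "provider": "openai",
    "model": "gpt-5-nano"
  },
  "executor": {
    "provider": "openai",
    "model": "gpt-5-nano"
  }
}
```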

Component Roles

Cortex

The cortex is the visual understanding component - the "eyes and brain" of the agent.
  • Analyzes screenshots
  • Understands UI elements
  • Makes decisions about actions
  • Recommendation: Use the best vision model available (e.g., o4-mini, gpt-5)
  • Context window: Needs at least 128k tokens

Planner

Creates high-level plans from natural language goals.
  • Breaks down goals into subgoals
  • Estimates complexity
  • Recommendation: Use fast, capable text models (e.g., gpt-5-nano)

Orchestrator

Coordinates execution and decides when to replan.
  • Manages task flow
  • Handles errors and retries
  • Recommendation: Use fast models (e.g., gpt-5-nano)

Executor

Translates decisions into device actions.
  • Generates Maestro commands
  • Handles action formatting
  • Recommendation: Use fast, instruction-following models

Hopper

Digs through large batches of data to extract the most relevant information for reaching the goal.
  • Processes extensive historical context and screen data
  • Extracts relevant information without modifying it
  • Context window: Needs at least 256k tokens (handles huge data batches)
  • Recommendation: Use models with large context (e.g., gpt-4.1)

Outputter

Extracts structured output from final results.
  • Formats data into Pydantic models
  • Ensures type safety
  • Recommendation: Use capable text models

Supported Providers

Configure API keys in your .env file:
.env
# OpenAI
OPENAI_API_KEY=sk-...

# Google (Gemini)
GOOGLE_API_KEY=...

# xAI (Grok)
XAI_API_KEY=...

# OpenRouter (access to multiple models)
OPEN_ROUTER_API_KEY=...

Fallback Configuration

The cortex supports fallback models for reliability:
cortex=LLMWithFallback(
    provider="openai",
    model="o4-mini",
    fallback=LLM(provider="openai", model="gpt-5")
)
If the primary model fails or is unavailable, the fallback is used automatically.
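Conceptually, the pattern works like the sketch below - an illustration of primary/fallback routing, not the SDK's internal implementation:

```python
# Conceptual sketch of fallback routing: call the primary model,
# and on any failure retry with the fallback model instead.
def call_with_fallback(primary, fallback, prompt):
    try:
        return primary(prompt)
    except Exception:
        return fallback(prompt)

# Stub "models" standing in for real LLM calls.
def flaky_primary(prompt):
    raise RuntimeError("model unavailable")

def stable_fallback(prompt):
    return f"answer to: {prompt}"

result = call_with_fallback(flaky_primary, stable_fallback, "tap the button")
```

Because the cortex sits on the critical path of every step, a fallback there keeps tasks running through transient provider outages.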

Best Practices

Optimize the Cortex

Invest in the best vision model for the cortex - it has the biggest impact.

Use Fast Models for the Planner

The planner and orchestrator don't need the most powerful models.

Large Context for the Hopper

Ensure the hopper has at least a 256k-token context window.

Test Profile Performance

Benchmark different profile configurations for your use cases.
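A minimal benchmarking harness might look like the sketch below; `run_with_profile` is a stand-in you would replace with real `agent.run_task` calls against a fixed set of goals:

```python
# Illustrative harness for comparing mean task latency across profiles.
# run_with_profile is a placeholder for real agent.run_task invocations.
import asyncio
import time

async def run_with_profile(profile: str) -> float:
    """Time one task run under the given profile (stubbed here)."""
    start = time.perf_counter()
    await asyncio.sleep(0)  # real task work would happen here
    return time.perf_counter() - start

async def benchmark(profiles: list[str], runs: int = 3) -> dict[str, float]:
    results = {}
    for profile in profiles:
        timings = [await run_with_profile(profile) for _ in range(runs)]
        results[profile] = sum(timings) / runs  # mean latency per profile
    return results

timings = asyncio.run(benchmark(["fast", "accurate"]))
```

Alongside latency, track task success rate and token spend per profile - the cheapest profile that still completes your tasks reliably is usually the right default.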

Example: Task-Specific Profiles

import asyncio

from minitap.mobile_use.sdk.builders import Builders
from minitap.mobile_use.sdk.types import AgentProfile

# Define specialized profiles
fast_profile = AgentProfile(name="fast", from_file="fast.jsonc")
vision_profile = AgentProfile(name="vision", from_file="vision.jsonc")

config = (
    Builders.AgentConfig
    .add_profiles([fast_profile, vision_profile])
    .with_default_profile(fast_profile)
    .build()
)

agent = Agent(config=config)
agent.init()

async def main():
    try:
        # Use the fast profile for simple navigation
        await agent.run_task(
            goal="Open settings",
            profile="fast"
        )

        # Use the vision profile for a complex visual task
        result = await agent.run_task(
            goal="Analyze all icons on the home screen and describe their purpose",
            profile="vision"
        )
    finally:
        agent.clean()

asyncio.run(main())

Next Steps

โŒ˜I