Reference for core types and data structures used in the mobile-use SDK.

AgentProfile

Represents a profile for the mobile-use agent with LLM configuration.
from minitap.mobile_use.sdk.types import AgentProfile

Constructor

AgentProfile(
    *,
    name: str,
    llm_config: LLMConfig | None = None,
    from_file: str | None = None,
)
name (str, required): Name of the profile.
llm_config (LLMConfig | None): LLM configuration for the agent.
from_file (str | None): Path to a file containing LLM configuration (JSONC format).

llm_config and from_file are mutually exclusive; use only one.

Examples

# From file
profile = AgentProfile(
    name="default",
    from_file="llm-config.defaults.jsonc"
)

# Programmatic
from minitap.mobile_use.config import LLM, LLMConfig

profile = AgentProfile(
    name="custom",
    llm_config=LLMConfig(
        planner=LLM(provider="openai", model="gpt-5-nano"),
        cortex=LLM(provider="openai", model="gpt-5"),
        # ... other components
    )
)
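
The file referenced by from_file is JSONC (JSON with comments). A hypothetical llm-config.defaults.jsonc mirroring the programmatic example above — the key names here are assumptions, so verify them against the defaults file shipped with the SDK:

```jsonc
{
  // Hypothetical structure mirroring LLMConfig; the SDK's actual
  // schema may differ from this sketch.
  "planner": { "provider": "openai", "model": "gpt-5-nano" },
  "orchestrator": { "provider": "openai", "model": "gpt-5-nano" },
  "cortex": { "provider": "openai", "model": "gpt-5" },
  "executor": { "provider": "openai", "model": "gpt-5-nano" }
}
```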

TaskRequest

Represents a mobile automation task request.
from minitap.mobile_use.sdk.types import TaskRequest

Attributes

goal (str): Natural language description of the task goal.
profile (str | None): Name of the agent profile to use.
task_name (str | None): Name of the task, used for logging.
output_description (str | None): Description of the expected output format.
output_format (type[TOutput] | None): Pydantic model class for typed output.
max_steps (int): Maximum number of steps the agent can take.
record_trace (bool): Whether to record execution traces.
trace_path (Path): Directory to save trace data.
llm_output_path (Path | None): Path to save LLM outputs.
thoughts_output_path (Path | None): Path to save agent thoughts.

Usage

TaskRequest objects are typically created via TaskRequestBuilder:
task_request = (
    agent.new_task("Your goal")
    .with_name("task_name")
    .build()
)
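
The output_format attribute is what turns the agent's final answer into a typed object. As a mental model only — the SDK expects a Pydantic BaseModel subclass, a stdlib dataclass stands in here, and the field names are invented for illustration:

```python
import json
from dataclasses import dataclass

# Stand-in for a Pydantic model; the SDK's output_format expects a
# BaseModel subclass, but the parsing idea is the same: the agent's
# structured answer is validated into this shape.
@dataclass
class BatteryReport:
    level: int       # battery percentage
    charging: bool   # whether the device is plugged in

# What the outputter component might emit as the structured result.
raw = '{"level": 87, "charging": false}'
report = BatteryReport(**json.loads(raw))
print(report)  # BatteryReport(level=87, charging=False)
```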

PlatformTaskRequest

Task request for execution via the Minitap Platform.
from minitap.mobile_use.sdk.types import PlatformTaskRequest
With PlatformTaskRequest, you only reference a task by name. The SDK automatically fetches the task configuration (goal, max_steps, output format) and LLM profile from the platform, then executes the task and streams observability data back.

Constructor

PlatformTaskRequest(
    task: str,
    profile: str | None = None,
    api_key: str | None = None,
    record_trace: bool = False,
    trace_path: Path = Path("mobile-use-traces"),
    llm_output_path: Path | None = None,
    thoughts_output_path: Path | None = None,
    max_steps: int = 400
)

Attributes

task (str): Name of the task configured on the Minitap Platform. Must exactly match a task name from platform.minitap.ai/tasks.
profile (str | None): Name of the LLM profile to use for this task. If not specified, the Minitap-managed default profile is used.
api_key (str | None): API key for authenticating with the Minitap Platform. If not provided, the MINITAP_API_KEY environment variable is used.
record_trace (bool): Whether to record trace files locally (in addition to platform tracing).
trace_path (Path): Directory for local trace files when record_trace is enabled.
llm_output_path (Path | None): Path to save the final LLM output locally.
thoughts_output_path (Path | None): Path to save agent thoughts/reasoning locally.
max_steps (int): Maximum number of steps (overridden by the platform configuration).

Usage

from minitap.mobile_use.sdk import Agent
from minitap.mobile_use.sdk.types import PlatformTaskRequest

agent = Agent()
agent.init()

# Simple task execution
result = await agent.run_task(
    request=PlatformTaskRequest(task="check-notifications")
)

agent.clean()

AgentConfig

Configuration for the agent.
from minitap.mobile_use.sdk.types import AgentConfig
Created via AgentConfigBuilder:
config = (
    Builders.AgentConfig
    .with_default_profile(profile)
    .build()
)

agent = Agent(config=config)

DevicePlatform

Enum for device platforms.
from minitap.mobile_use.sdk.types import DevicePlatform

# Available values
DevicePlatform.ANDROID
DevicePlatform.IOS

Usage

config = (
    Builders.AgentConfig
    .for_device(platform=DevicePlatform.ANDROID, device_id="emulator-5554")
    .build()
)

ServerConfig

Configuration for agent servers.
from minitap.mobile_use.sdk.types import ServerConfig

Attributes

hw_bridge_base_url (str): Hardware Bridge server URL.
screen_api_base_url (str): Screen API server URL.
adb_server_host (str): ADB server host.
adb_server_port (int): ADB server port.

Usage

servers = ServerConfig(
    hw_bridge_base_url="http://localhost:8001",
    screen_api_base_url="http://localhost:8000",
    adb_server_host="localhost",
    adb_server_port=5037
)

config = Builders.AgentConfig.with_servers(servers).build()

LLMConfig

Configuration for LLM models used by different agent components.
from minitap.mobile_use.config import LLM, LLMConfig, LLMConfigUtils, LLMWithFallback

Structure

llm_config = LLMConfig(
    planner=LLM(provider="openai", model="gpt-5-nano"),
    orchestrator=LLM(provider="openai", model="gpt-5-nano"),
    cortex=LLMWithFallback(
        provider="openai",
        model="gpt-5",
        fallback=LLM(provider="openai", model="gpt-5")
    ),
    executor=LLM(provider="openai", model="gpt-5-nano"),
    utils=LLMConfigUtils(
        hopper=LLM(provider="openai", model="gpt-5-nano"),
        outputter=LLM(provider="openai", model="gpt-5-nano")
    )
)

Components

planner: Creates high-level plans from goals.
orchestrator: Coordinates execution steps.
cortex: Visual understanding and decision-making.
executor: Performs specific actions.
utils.hopper: Extracts relevant information from large data batches.
utils.outputter: Extracts structured output.

LLM

Basic LLM configuration.
from minitap.mobile_use.config import LLM

llm = LLM(provider="openai", model="gpt-5")
provider (str, required): Provider name: openai, google, xai, or openrouter.
model (str, required): Model identifier (e.g., gpt-5, gemini-2.5-flash).

LLMWithFallback

LLM configuration with a fallback model.
from minitap.mobile_use.config import LLM, LLMWithFallback

llm = LLMWithFallback(
    provider="openai",
    model="o4-mini",
    fallback=LLM(provider="openai", model="gpt-5")
)
If the primary model fails, the fallback is used automatically.
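
The fallback behavior can be pictured as a try/except around the primary model. This is a sketch of the idea only, not the SDK's actual implementation, and every name in it is invented:

```python
def invoke_with_fallback(primary, fallback, prompt):
    """Try the primary model first; on any error, retry with the fallback.

    `primary` and `fallback` are callables standing in for model clients.
    """
    try:
        return primary(prompt)
    except Exception:
        return fallback(prompt)

# Hypothetical clients: the primary always fails, the fallback answers.
def flaky_primary(prompt):
    raise RuntimeError("rate limited")

def stable_fallback(prompt):
    return f"answer to: {prompt}"

print(invoke_with_fallback(flaky_primary, stable_fallback, "ping"))
# prints "answer to: ping"
```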

TaskRequestCommon

Common configuration shared across tasks.
from minitap.mobile_use.sdk.types import TaskRequestCommon
Created via the TaskDefaults builder:
defaults = (
    Builders.TaskDefaults
    .with_max_steps(500)
    .build()
)
