This API is in beta and is accessible via the /v1beta/tasks/groups endpoint.
The Parallel Task Group API enables you to batch process hundreds or thousands of Tasks efficiently. Instead of running Tasks one by one, you can organize them into groups, monitor their progress collectively, and retrieve results in bulk. The API comprises the following endpoints:
Creation: To run a batch of Tasks in a group, first create the task group, then add runs to it; the runs are queued and processed.
  • POST /v1beta/tasks/groups (Create task-group)
  • POST /v1beta/tasks/groups/{taskgroup_id}/runs (Add runs. Up to 1,000 runs per POST request.)
Progress Snapshot: At any point you can get an instant snapshot of the group's state using GET /{taskgroup_id} and GET /{taskgroup_id}/runs. Note that the runs endpoint streams the requested runs back immediately over SSE, which supports large payloads without pagination; it does not wait for runs to complete. Runs in a task group are stored indefinitely, so unless you have high-performance requirements you may not need to keep your own copy of intermediate results. It is still recommended to store the results once the task group has completed.
  • GET /v1beta/tasks/groups/{taskgroup_id} (Get task-group summary)
  • GET /v1beta/tasks/groups/{taskgroup_id}/runs (Fetch task group runs)
Realtime updates: To push efficient real-time updates to your app, use GET /{taskgroup_id}/events for a high-level summary and run-completion events. To also retrieve a task run's result upon completion, use the task-run result endpoint:
  • GET /v1beta/tasks/groups/{taskgroup_id}/events (Stream task-group events)
  • GET /v1/tasks/runs/{run_id}/result (Get task-run result)
To determine whether a task group is fully completed, either watch the realtime event stream or poll the task-group summary endpoint. You can keep adding runs to a task group indefinitely.
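The polling approach can be sketched as follows. This is illustrative, not an SDK API: `fetch_summary` is a hypothetical caller-supplied function that GETs /v1beta/tasks/groups/{taskgroup_id} and returns the parsed JSON body.

```python
import time
from typing import Callable

def is_group_done(summary: dict) -> bool:
    """A task group is finished once status.is_active flips to False."""
    return not summary["status"]["is_active"]

def poll_until_done(fetch_summary: Callable[[], dict], interval: float = 10.0) -> dict:
    """Poll the task-group summary until every run has finished.

    fetch_summary wraps the GET request to the summary endpoint;
    interval is the seconds to wait between polls.
    """
    while True:
        summary = fetch_summary()
        if is_group_done(summary):
            return summary
        time.sleep(interval)
```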

Key Concepts

Task Groups

A Task Group is a container that organizes multiple task runs. Each group has:
  • A unique taskgroup_id for identification
  • A status object with is_active (boolean) and task_run_status_counts (counts by status)
  • The ability to add new Tasks dynamically

Group Status

Track progress with real-time status updates:
  • Total number of task runs
  • Count of runs by status (queued, running, completed, failed)
  • Whether the group is still active (is_active becomes false when all runs finish)
  • Human-readable status messages
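For illustration, a small helper that derives a completion fraction from the status object. The field names follow the summary shape described above; treating completed, failed, and cancelled as terminal matches the event semantics described later in this page.

```python
def progress(status: dict) -> float:
    """Fraction of runs in a terminal state (completed, failed, or cancelled).

    `status` is the status object from the task-group summary, e.g.
    {"is_active": True, "task_run_status_counts": {"queued": 2, "running": 1, ...}}
    """
    counts = status["task_run_status_counts"]
    total = sum(counts.values())
    if total == 0:
        return 0.0
    done = sum(counts.get(s, 0) for s in ("completed", "failed", "cancelled"))
    return done / total
```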

Quick Start

1. Define Types and Task Structure

# Define task specification as a variable
TASK_SPEC='{
  "input_schema": {
    "json_schema": {
      "type": "object",
      "properties": {
        "company_name": {
          "type": "string",
          "description": "Name of the company"
        },
        "company_website": {
          "type": "string",
          "description": "Company website URL"
        }
      },
      "required": ["company_name", "company_website"]
    }
  },
  "output_schema": {
    "json_schema": {
      "type": "object",
      "properties": {
        "key_insights": {
          "type": "array",
          "items": {"type": "string"},
          "description": "Key business insights"
        },
        "market_position": {
          "type": "string",
          "description": "Market positioning analysis"
        }
      },
      "required": ["key_insights", "market_position"]
    }
  }
}'
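To catch schema typos before creating any runs, you can sanity-check the spec locally. A minimal sketch (the spec literal mirrors the TASK_SPEC above, with descriptions omitted for brevity):

```python
import json

# Same structure as the TASK_SPEC shell variable above.
TASK_SPEC = json.loads("""
{
  "input_schema": {"json_schema": {"type": "object",
    "properties": {"company_name": {"type": "string"},
                   "company_website": {"type": "string"}},
    "required": ["company_name", "company_website"]}},
  "output_schema": {"json_schema": {"type": "object",
    "properties": {"key_insights": {"type": "array", "items": {"type": "string"}},
                   "market_position": {"type": "string"}},
    "required": ["key_insights", "market_position"]}}
}
""")

def check_spec(spec: dict) -> None:
    """Raise if a required field is not declared under properties."""
    for side in ("input_schema", "output_schema"):
        schema = spec[side]["json_schema"]
        undeclared = [f for f in schema.get("required", [])
                      if f not in schema.get("properties", {})]
        if undeclared:
            raise ValueError(f"{side} requires undeclared fields: {undeclared}")
```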

2. Create a Task Group

# Create task group and capture the ID
response=$(curl --request POST \
  --url https://api.parallel.ai/v1beta/tasks/groups \
  --header 'Content-Type: application/json' \
  --header "x-api-key: ${PARALLEL_API_KEY}" \
  --data '{}')

# Extract taskgroup_id from response
TASKGROUP_ID=$(echo "$response" | jq -r '.taskgroup_id')
echo "Created task group: $TASKGROUP_ID"

3. Add Tasks to the Group

curl --request POST \
  --url https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID}/runs \
  --header 'Content-Type: application/json' \
  --header "x-api-key: ${PARALLEL_API_KEY}" \
  --data '{
  "default_task_spec": '"$TASK_SPEC"',
  "inputs": [
    {
      "input": {
        "company_name": "Acme Corp",
        "company_website": "https://acme.com"
      },
      "processor": "pro"
    },
    {
      "input": {
        "company_name": "TechStart",
        "company_website": "https://techstart.io"
      },
      "processor": "pro"
    }
  ]
}'

4. Monitor Progress

# Get status of the group
curl --request GET \
  --url https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID} \
  --header "x-api-key: ${PARALLEL_API_KEY}"

# Get status of all runs in the group
curl --request GET \
  --no-buffer \
  --url https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID}/runs \
  --header "x-api-key: ${PARALLEL_API_KEY}"

5. Retrieve Results

The get-runs endpoint (GET /v1beta/tasks/groups/{taskgroup_id}/runs) returns a Server-Sent Events stream, not a single JSON response. Each event in the stream has:
  • type: Either "task_run.state" (a run reached a non-active status: completed, failed, or cancelled) or "error"
  • event_id: Cursor for resuming the stream via the last_event_id parameter
  • run: The TaskRun object with run_id, status, and is_active
  • input: The original input (only included when include_input=true)
  • output: The result output (only included when include_output=true and the run completed successfully)
curl --request GET \
  --no-buffer \
  --url "https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID}/runs?include_input=true&include_output=true" \
  --header "x-api-key: ${PARALLEL_API_KEY}"
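If you consume these streams without an SDK, you must parse the SSE wire format yourself. A minimal sketch, assuming the standard SSE framing (`data:` lines terminated by a blank line); the payloads are the JSON event objects described above:

```python
import json
from typing import Iterable, Iterator

def parse_sse(lines: Iterable[str]) -> Iterator[dict]:
    """Yield each SSE event's JSON payload from a stream of text lines.

    Accumulates consecutive "data:" lines; a blank line ends the event.
    """
    data: list[str] = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            yield json.loads("\n".join(data))
            data = []
    if data:  # flush a trailing event with no final blank line
        yield json.loads("\n".join(data))
```

Keep the `event_id` of the last event you processed; passing it as `last_event_id` lets you resume the stream after a disconnect.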

Batch Processing Pattern

For large datasets, add Tasks in batches to stay under the 1,000-runs-per-request limit and keep memory bounded. The helper below assumes the CompanyInput model, task_spec, and imports from the complete example at the end of this page:
async def process_companies_in_batches(
    client: AsyncParallel,
    taskgroup_id: str,
    companies: list[dict[str, str]],
    batch_size: int = 500,
) -> None:
    total_created = 0

    for i in range(0, len(companies), batch_size):
        batch = companies[i : i + batch_size]

        # Create run inputs for this batch
        run_inputs = [
            BetaRunInputParam(
                input=CompanyInput(**company).model_dump(),
                processor="pro",
            )
            for company in batch
        ]

        # Add batch to group
        response = await client.beta.task_group.add_runs(
            taskgroup_id,
            inputs=run_inputs,
            default_task_spec=task_spec,
        )
        total_created += len(response.run_ids)

        print(f"Processed {i + len(batch)} companies. Created {total_created} Tasks.")
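The slicing in the loop above is the usual chunking idiom; as a standalone helper (keep `size` at or below the API's 1,000-runs-per-request limit):

```python
def chunked(items: list, size: int) -> list[list]:
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```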

Error Handling

Separate successful from failed runs by inspecting each event in the runs stream:
async def process_with_error_handling(client: AsyncParallel, taskgroup_id: str):
    successful_results = []
    failed_results = []

    run_stream = await client.beta.task_group.get_runs(
        taskgroup_id,
        include_input=True,
        include_output=True,
    )

    async for event in run_stream:
        if isinstance(event, ErrorEvent):
            failed_results.append(event)
            continue

        if isinstance(event, TaskRunEvent) and event.output:
            try:
                # Validate the result
                company_output = CompanyOutput.model_validate(event.output.content)
                successful_results.append(event)
            except Exception as e:
                print(f"Validation error: {e}")
                failed_results.append(event)
        elif isinstance(event, TaskRunEvent):
            # Run failed or was cancelled (no output)
            failed_results.append(event)

    print(f"Success: {len(successful_results)}, Failed: {len(failed_results)}")
    return successful_results, failed_results

Complete Example

Here’s a complete script that demonstrates the full workflow, including all of the setup code above.
import asyncio
import pydantic
from parallel import AsyncParallel
from parallel.types import TaskSpecParam, JsonSchemaParam
from parallel.types.beta.beta_run_input_param import BetaRunInputParam
from parallel.types.beta.task_run_event import TaskRunEvent
from parallel.types.beta.error_event import ErrorEvent


# Define your input and output models
class CompanyInput(pydantic.BaseModel):
    company_name: str = pydantic.Field(description="Name of the company")
    company_website: str = pydantic.Field(description="Company website URL")

class CompanyOutput(pydantic.BaseModel):
    key_insights: list[str] = pydantic.Field(description="Key business insights")
    market_position: str = pydantic.Field(description="Market positioning analysis")


# Create reusable task specification
task_spec = TaskSpecParam(
    input_schema=JsonSchemaParam(json_schema=CompanyInput.model_json_schema()),
    output_schema=JsonSchemaParam(json_schema=CompanyOutput.model_json_schema()),
)


async def wait_for_completion(client: AsyncParallel, taskgroup_id: str) -> None:
    while True:
        task_group = await client.beta.task_group.retrieve(taskgroup_id)

        status = task_group.status
        print(f"Status: {status.task_run_status_counts}")

        if not status.is_active:
            print("All tasks completed!")
            break

        await asyncio.sleep(10)


async def get_all_results(client: AsyncParallel, taskgroup_id: str):
    results = []

    run_stream = await client.beta.task_group.get_runs(
        taskgroup_id,
        include_input=True,
        include_output=True,
    )

    async for event in run_stream:
        if isinstance(event, TaskRunEvent) and event.output:
            company_output = CompanyOutput.model_validate(event.output.content)

            results.append(
                {
                    "company": event.input.input["company_name"],
                    "insights": company_output.key_insights,
                    "market_position": company_output.market_position,
                }
            )
        elif isinstance(event, ErrorEvent):
            print(f"Error: {event.error}")

    return results


async def batch_company_research():
    import os  # read the API key from the environment

    client = AsyncParallel(api_key=os.environ["PARALLEL_API_KEY"])

    # Create task group
    task_group = await client.beta.task_group.create()
    taskgroup_id = task_group.taskgroup_id
    print(f"Created taskgroup id {taskgroup_id}")

    # Define companies to research
    companies = [
        {"company_name": "Stripe", "company_website": "https://stripe.com"},
        {"company_name": "Shopify", "company_website": "https://shopify.com"},
        {"company_name": "Salesforce", "company_website": "https://salesforce.com"},
    ]

    # Add Tasks to group
    run_inputs = [
        BetaRunInputParam(
            input=CompanyInput(**company).model_dump(),
            processor="pro",
        )
        for company in companies
    ]

    response = await client.beta.task_group.add_runs(
        taskgroup_id,
        inputs=run_inputs,
        default_task_spec=task_spec,
    )
    print(f"Added {len(response.run_ids)} runs to taskgroup {taskgroup_id}")

    # Wait for completion and get results
    await wait_for_completion(client, taskgroup_id)
    results = await get_all_results(client, taskgroup_id)
    print(f"Successfully processed {len(results)} companies")
    return results


# Run the batch job
results = asyncio.run(batch_company_research())