context.artifacts() – ArtifactFacade API Reference¶
The ArtifactFacade wraps an AsyncArtifactStore (persistence) and an AsyncArtifactIndex (search/metadata) and automatically indexes artifacts you create within a node/run.
0. Artifact Schema¶
Artifact
Represents an artifact with metadata and optional tracking information. This dataclass encapsulates artifact data including identification, versioning, content information, and tenant-level metadata. It provides serialization capabilities and supports both 'mime' and 'mimetype' nomenclature.
Attributes:

| Name | Type | Description |
|---|---|---|
| artifact_id | str | Unique identifier for the artifact. |
| run_id | str \| None | Associated run identifier. Defaults to None. |
| graph_id | str \| None | Associated graph identifier. Defaults to None. |
| node_id | str \| None | Associated node identifier. Defaults to None. |
| tool_name | str \| None | Name of the tool that created the artifact. Defaults to None. |
| tool_version | str \| None | Version of the tool that created the artifact. Defaults to None. |
| kind | str \| None | Type or category of the artifact. Defaults to None. |
| sha256 | str \| None | SHA256 hash of the artifact content. Defaults to None. |
| bytes | int \| None | Size of the artifact in bytes. Defaults to None. |
| mime | str \| None | MIME type of the artifact content. Defaults to None. |
| created_at | str \| None | Timestamp when the artifact was created. Defaults to None. |
| tags | list[str] \| None | List of tags associated with the artifact. Defaults to None. |
| labels | dict[str, Any] \| None | Dictionary of labels for the artifact. Defaults to None. |
| metrics | dict[str, Any] \| None | Dictionary of metrics associated with the artifact. Defaults to None. |
| pinned | bool | Whether the artifact is pinned. Defaults to False. |
| uri | str \| None | URI or path to the artifact. Defaults to None. |
| preview_uri | str \| None | URI for previewing the artifact. Defaults to None. |
| org_id | str \| None | Organization identifier for multi-tenant support. Defaults to None. |
| user_id | str \| None | User identifier for multi-tenant support. Defaults to None. |
| client_id | str \| None | Client identifier for multi-tenant support. Defaults to None. |
| app_id | str \| None | Application identifier for multi-tenant support. Defaults to None. |
| session_id | str \| None | Session identifier for multi-tenant support. Defaults to None. |
Properties
mimetype (str | None): Alias property for accessing and setting the 'mime' attribute. Provides backward compatibility with alternative naming conventions.
1. Save API¶
save_file(path, *, kind, labels, ...)
Save an existing file and index it.
This method saves a file to the artifact store, associates it with the current execution context, and records it in the artifact index. It supports adding metadata such as labels, metrics, and a suggested URI for logical organization.
Examples:

Basic usage with a file path:

```python
artifact = await context.artifacts().save_file(
    path="/tmp/output.txt",
    kind="text",
    labels={"category": "logs"},
)
```

Saving a file with a custom name and pinning it:

```python
artifact = await context.artifacts().save_file(
    path="/tmp/data.csv",
    kind="dataset",
    name="data_backup.csv",
    pin=True,
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | The local file path to save. | required |
| kind | str | A string representing the artifact type (e.g., "text", "dataset"). | required |
| labels | dict \| None | A dictionary of metadata labels to associate with the artifact. | None |
| metrics | dict \| None | A dictionary of numerical metrics to associate with the artifact. | None |
| suggested_uri | str \| None | A logical URI for the artifact (e.g., "s3://bucket/file"). | None |
| name | str \| None | A custom name for the artifact, used as the filename label. | None |
| pin | bool | Whether to pin the artifact. | False |
| cleanup | bool | Whether to delete the local file after saving. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| Artifact | Artifact | The saved Artifact. |
Notes
The name parameter is used to set the filename label for the artifact.
If both name and suggested_uri are provided, name takes precedence for the filename.
save_text(payload, *, ...)
Save a text payload as an artifact with full context metadata.
This method stages the text as a temporary .txt file, writes the payload, and persists it as an artifact with associated metadata. It is accessed via context.artifacts().save_text(...).
Examples:

Basic usage to save a text artifact:

```python
await context.artifacts().save_text("Hello, world!")
```

Saving with custom metadata and logical filename:

```python
await context.artifacts().save_text(
    "Experiment results",
    name="results.txt",
    labels={"experiment": "A1"},
    metrics={"accuracy": 0.98},
    pin=True,
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| payload | str | The text content to be saved as an artifact. | required |
| suggested_uri | str \| None | Optional logical URI for the artifact. | None |
| name | str \| None | Optional logical filename for the artifact. | None |
| kind | str | The artifact kind. | 'text' |
| labels | dict \| None | Optional dictionary of string labels for categorization. | None |
| metrics | dict \| None | Optional dictionary of numeric metrics for tracking. | None |
| pin | bool | If True, pins the artifact for retention. | False |
Returns:

| Name | Type | Description |
|---|---|---|
| Artifact | Artifact | The fully persisted Artifact. |
save_json(payload, *, ...)
Save a JSON payload as an artifact with full context metadata.
This method stages the JSON data as a temporary .json file, writes the payload,
and persists it as an artifact with associated metadata. It is accessed via
context.artifacts().save_json(...).
Examples:

Basic usage to save a JSON artifact:

```python
await context.artifacts().save_json({"foo": "bar", "count": 42})
```

Saving with custom metadata and logical filename:

```python
await context.artifacts().save_json(
    {"results": [1, 2, 3]},
    name="results.json",
    labels={"experiment": "A1"},
    metrics={"accuracy": 0.98},
    pin=True,
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| payload | dict | The JSON-serializable dictionary to be saved as an artifact. | required |
| suggested_uri | str \| None | Optional logical URI for the artifact. | None |
| name | str \| None | Optional logical filename for the artifact. | None |
| kind | str | The artifact kind. | 'json' |
| labels | dict \| None | Optional dictionary of string labels for categorization. | None |
| metrics | dict \| None | Optional dictionary of numeric metrics for tracking. | None |
| pin | bool | If True, pins the artifact for retention. | False |
Returns:

| Name | Type | Description |
|---|---|---|
| Artifact | Artifact | The fully persisted Artifact. |
writer(*, kind, planned_ext, ...)
Async context manager for streaming artifact writes.
This method yields a writer object that supports:
- writer.write(bytes) for streaming data
- writer.add_labels(...) to attach metadata
- writer.add_metrics(...) to record metrics
After the context exits, the writer's artifact is finalized and recorded in the index.
Accessed via context.artifacts().writer(...).
Examples:

Basic usage to stream a file artifact:

```python
async with context.artifacts().writer(kind="binary") as w:
    await w.write(b"some data")
```

Streaming with custom file extension and pinning:

```python
async with context.artifacts().writer(
    kind="log",
    planned_ext=".log",
    pin=True,
) as w:
    await w.write(b"Log entry 1\n")
    w.add_labels({"source": "app"})
    w.add_metrics({"lines": 1})
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| kind | str | The artifact type (e.g., "binary", "log", "text"). | required |
| planned_ext | str \| None | Optional file extension for the staged artifact (e.g., ".txt"). | None |
| pin | bool | If True, pins the artifact for retention. | False |
Returns:

| Type | Description |
|---|---|
| AsyncIterator[Any] | Yields a writer object for streaming data and metadata. |
Notes
- Scope labels are added during _record after the context exits, so they are not available during the write phase.
- If you want tags, call w.add_labels({"tags": [...]}) inside the context.
2. Search API¶
get_by_id(artifact_id)
Retrieve a single artifact by its unique identifier.
This asynchronous method queries the configured artifact index for the specified
artifact_id. If the index is not set up, a RuntimeError is raised. The method
is typically accessed via context.artifacts().get_by_id(...).
Examples:

Fetching an artifact by ID:

```python
artifact = await context.artifacts().get_by_id("artifact_123")
if artifact:
    print(artifact.name)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| artifact_id | str | The unique string identifier of the artifact to retrieve. | required |
Returns:

| Type | Description |
|---|---|
| Artifact \| None | The matching Artifact, or None if not found. |
list(*, level, include_node, ...)
List artifacts using structured index filters.
Scoping is controlled by level (scope, session, run, user, org)
and optional node narrowing (include_node=True by default).
Examples:

List all artifacts for the current run:

```python
artifacts = await context.artifacts().list()
for a in artifacts:
    print(a.artifact_id, a.kind)
```

List artifacts for the current graph regardless of node:

```python
graph_artifacts = await context.artifacts().list(include_node=False)
```

List artifacts filtered by tags:

```python
tagged = await context.artifacts().list(tags=["report"])
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| level | ScopeLevel \| None | Scope level used to derive tenant/scope/run/session filters. | 'run' |
| include_node | bool | When True, constrain results to the current node. | True |
| tags | list[str] \| None | Optional tag filter. | None |
| filters | dict[str, str] \| None | Extra label filters merged with scope filters. | None |
| limit | int \| None | Maximum number of rows to return. | None |
|
Returns:
| Type | Description |
|---|---|
list[Artifact]
|
list[Artifact]: Matching artifacts. |
search(*, kind, labels, metric, ...)
Search artifacts with structured and optional semantic/lexical retrieval.
Behavior
- If query is None/empty: structured search via ArtifactIndex.
- If query is non-empty: SearchBackend via ScopedIndices (corpus="artifact"), with semantic/lexical/hybrid selected by mode.
Examples:

Structured search by kind and tags:

```python
rows = await context.artifacts().search(kind="report", tags=["weekly"])
```

Semantic search across artifact index text:

```python
rows = await context.artifacts().search(
    query="model with highest f1",
    mode="semantic",
    limit=10,
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| query | str \| None | Optional free-text query for semantic/lexical/hybrid search. | None |
| kind | str \| None | Optional artifact kind to filter on (structured path only). | None |
| tags | list[str] \| None | Optional list of tag strings for filtering. | None |
| labels | dict[str, str] \| None | Extra label filters to apply. | None |
| metric | str \| None | Optional metric name to optimize (structured path). | None |
| metric_mode | Literal['max', 'min'] \| None | "max" or "min" for structured metric ranking. | None |
| level | ScopeLevel \| None | Scope level controlling tenant/scope filtering. | 'run' |
| extra_scope_labels | dict[str, str] \| None | Additional scope labels to merge on top of level filters. | None |
| limit | int \| None | Maximum number of results (top_k for semantic; limit for structured). | None |
| include_graph | bool | When True, additionally constrain results to the current graph. | False |
| include_node | bool | When True, additionally constrain results to the current node. | False |
| time_window | str \| None | Optional time window for created_at_ts filtering (SearchBackend). | None |
| mode | SearchMode \| None | Search mode for the query path ("semantic", "lexical", "hybrid", etc.). | None |
Returns:

| Type | Description |
|---|---|
| list[Artifact] | Matching artifacts, preserving search order for the query mode. |
best(*, kind, metric, ...)
Return the best artifact for a kind by metric optimization.
Examples:

Find the best model by accuracy for the current run:

```python
best_model = await context.artifacts().best(
    kind="model",
    metric="accuracy",
    metric_mode="max",
)
```

Find the lowest-loss dataset:

```python
best_dataset = await context.artifacts().best(
    kind="dataset",
    metric="loss",
    metric_mode="min",
    level="run",
)
```

Apply additional label filters:

```python
best_artifact = await context.artifacts().best(
    kind="model",
    metric="f1_score",
    metric_mode="max",
    filters={"domain": "finance"},
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| kind | str | The type of artifact to search for (e.g., "model", "dataset"). | required |
| metric | str | The metric name to optimize (e.g., "accuracy", "loss"). | required |
| metric_mode | Literal['max', 'min'] | Optimization mode, either "max" or "min". | required |
| level | ScopeLevel \| None | Scope level controlling tenant/run/session filters. | 'run' |
| tags | list[str] \| None | Optional tag filter. | None |
| filters | dict[str, str] \| None | Additional label filters to further restrict the search. | None |
Returns:

| Type | Description |
|---|---|
| Artifact \| None | The best matching Artifact, or None if no match is found. |
pin(artifact_id, pinned)
Mark or unmark an artifact as pinned for retention.
This asynchronous method updates the pinned status of the specified artifact
in the artifact index. Pinning an artifact ensures it is retained and not subject
to automatic cleanup. It is accessed via context.artifacts().pin(...).
Examples:

Pin an artifact for retention:

```python
await context.artifacts().pin("artifact_123", pinned=True)
```

Unpin an artifact to allow cleanup:

```python
await context.artifacts().pin("artifact_456", pinned=False)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| artifact_id | str | The unique string identifier of the artifact to update. | required |
| pinned | bool | Whether to pin (True) or unpin (False) the artifact. | True |
Returns:

| Type | Description |
|---|---|
| None | None. |
3. Stage/Ingest API¶
stage_path(ext)
Plan a staging file path for artifact creation.
This method requests a temporary file path from the underlying artifact store, suitable for staging a new artifact. The file extension can be specified to guide downstream handling (e.g., ".txt", ".json").
Examples:

Stage a temporary text file:

```python
staged_path = await context.artifacts().stage_path(".txt")
```

Stage a file with a custom extension:

```python
staged_path = await context.artifacts().stage_path(".log")
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ext | str | Optional file extension for the staged file (e.g., ".txt", ".json"). | '' |
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The planned staging file path as a string. |
stage_dir(suffix)
Plan a staging directory for artifact creation.
This method requests a temporary directory path from the underlying artifact store, suitable for staging a directory artifact. The suffix can be used to distinguish different staging contexts.
Examples:

Stage a temporary directory:

```python
staged_dir = await context.artifacts().stage_dir()
```

Stage a directory with a custom suffix:

```python
staged_dir = await context.artifacts().stage_dir("_images")
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| suffix | str | Optional string to append to the directory name for uniqueness. | '' |
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The planned staging directory path as a string. |
ingest_file(staged_path, *, kind, ...)
Ingest a staged file as an artifact and record it in the index.
This method takes a file that has been staged locally, persists it in the artifact store, and records its metadata in the artifact index. It supports adding labels, metrics, and logical URIs for organization.
Examples:

Ingest a staged model file:

```python
artifact = await context.artifacts().ingest_file(
    staged_path="/tmp/model.bin",
    kind="model",
    labels={"domain": "vision"},
    pin=True,
)
```

Ingest with a suggested URI:

```python
artifact = await context.artifacts().ingest_file(
    staged_path="/tmp/data.csv",
    kind="dataset",
    suggested_uri="s3://bucket/data.csv",
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| staged_path | str | The local path to the staged file. | required |
| kind | str | The artifact type (e.g., "model", "dataset"). | required |
| tags | list[str] \| None | Optional list of tags to associate with the artifact. | None |
| labels | dict \| None | Optional dictionary of metadata labels. | None |
| metrics | dict \| None | Optional dictionary of numeric metrics. | None |
| suggested_uri | str \| None | Optional logical URI for the artifact. | None |
| pin | bool | If True, pins the artifact for retention. | False |
Returns:

| Name | Type | Description |
|---|---|---|
| Artifact | Artifact | The fully persisted Artifact. |
Notes
The staged_path must point to an existing file. The method will handle
cleanup of the staged file if configured in the underlying store.
If you already have a file at a specific URI (e.g. "s3://bucket/file" or local file path), consider using save_file instead.
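The stage-then-ingest flow can be sketched with local temp files. This is a hypothetical, self-contained stand-in (these stage_path/ingest_file functions are local sketches, not the facade methods); the sha256/bytes fields mirror what the schema records for an artifact, and the cleanup behavior is an assumption about the store's configuration:

```python
import hashlib
import os
import tempfile


def stage_path(ext: str = "") -> str:
    """Plan a temp staging file path (stand-in for the store's API)."""
    fd, path = tempfile.mkstemp(suffix=ext)
    os.close(fd)
    return path


def ingest_file(staged_path: str, kind: str, cleanup: bool = True) -> dict:
    """Record the staged file's identity (content hash and size), as the
    real ingest does in the index; optionally remove the staged copy."""
    with open(staged_path, "rb") as f:
        data = f.read()
    record = {
        "kind": kind,
        "sha256": hashlib.sha256(data).hexdigest(),
        "bytes": len(data),
    }
    if cleanup:
        os.remove(staged_path)
    return record


# Stage, write, then ingest:
p = stage_path(".txt")
with open(p, "w") as f:
    f.write("hello")
art = ingest_file(p, kind="text")
```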
ingest_dir(staged_dir, **kwargs)
Ingest a staged directory as a directory artifact and record it in the index.
This method takes a directory that has been staged locally, persists its contents in the artifact store (optionally creating a manifest or archive), and records its metadata in the artifact index. Additional keyword arguments are passed to the store's ingest logic.
Examples:

Ingest a staged directory with manifest:

```python
artifact = await context.artifacts().ingest_dir(
    staged_dir="/tmp/output_dir",
    kind="directory",
    labels={"type": "images"},
)
```

Ingest with custom metrics:

```python
artifact = await context.artifacts().ingest_dir(
    staged_dir="/tmp/logs",
    kind="log_dir",
    metrics={"file_count": 12},
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| staged_dir | str | The local path to the staged directory. | required |
| **kwargs | Any | Additional keyword arguments for artifact metadata (e.g., kind, labels, metrics). | {} |
Returns:

| Name | Type | Description |
|---|---|---|
| Artifact | Artifact | The fully persisted Artifact. |
4. Load API¶
load_bytes_by_id(artifact_id)
Load raw bytes for a file-like artifact by its unique identifier.
This asynchronous method retrieves the artifact metadata from the index using
the provided artifact_id, then loads the underlying bytes from the artifact store.
It is accessed via context.artifacts().load_bytes_by_id(...).
Examples:

Basic usage to load bytes for an artifact:

```python
data = await context.artifacts().load_bytes_by_id("artifact_123")
```

Handling missing artifacts:

```python
try:
    data = await context.artifacts().load_bytes_by_id("artifact_456")
except FileNotFoundError:
    print("Artifact not found.")
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| artifact_id | str | The unique string identifier of the artifact to retrieve. | required |
Returns:

| Name | Type | Description |
|---|---|---|
| bytes | bytes | The raw byte content of the artifact. |
Raises:

| Type | Description |
|---|---|
| FileNotFoundError | If the artifact is not found or missing a URI. |
load_text_by_id(artifact_id, *, ...)
Load the text content of an artifact by its unique identifier.
This asynchronous method retrieves the raw bytes for the specified artifact_id
and decodes them into a string using the provided encoding. It is accessed via
context.artifacts().load_text_by_id(...).
Examples:

Basic usage to load text from an artifact:

```python
text = await context.artifacts().load_text_by_id("artifact_123")
print(text)
```

Loading with custom encoding and error handling:

```python
text = await context.artifacts().load_text_by_id(
    "artifact_456",
    encoding="utf-16",
    errors="ignore",
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| artifact_id | str | The unique string identifier of the artifact to retrieve. | required |
| encoding | str | The text encoding to use for decoding bytes (default: "utf-8"). | 'utf-8' |
| errors | str | Error handling strategy for decoding (default: "strict"). | 'strict' |
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The decoded text content of the artifact. |
Raises:

| Type | Description |
|---|---|
| FileNotFoundError | If the artifact is not found or missing a URI. |
load_json_by_id(artifact_id, *, ...)
Load and parse a JSON artifact by its unique identifier.
This asynchronous method retrieves the raw text content for the specified
artifact_id, decodes it using the provided encoding, and parses it as JSON.
It is accessed via context.artifacts().load_json_by_id(...).
Examples:

Basic usage to load a JSON artifact:

```python
data = await context.artifacts().load_json_by_id("artifact_123")
print(data["foo"])
```

Loading with custom encoding and error handling:

```python
data = await context.artifacts().load_json_by_id(
    "artifact_456",
    encoding="utf-16",
    errors="ignore",
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| artifact_id | str | The unique string identifier of the artifact to retrieve. | required |
| encoding | str | The text encoding to use for decoding bytes (default: "utf-8"). | 'utf-8' |
| errors | str | Error handling strategy for decoding (default: "strict"). | 'strict' |
Returns:

| Name | Type | Description |
|---|---|---|
| Any | Any | The parsed JSON object from the artifact. |
Raises:

| Type | Description |
|---|---|
| FileNotFoundError | If the artifact is not found or missing a URI. |
| JSONDecodeError | If the artifact content is not valid JSON. |
load_bytes(uri)
Load raw bytes from a file or URI in a backend-agnostic way.
This method retrieves the byte content from the specified uri, supporting both
local files and remote storage backends. It is accessed via context.artifacts().load_bytes(...).
Examples:

Basic usage to load bytes from a local file:

```python
data = await context.artifacts().load_bytes("file:///tmp/model.bin")
```

Loading bytes from an S3 URI:

```python
data = await context.artifacts().load_bytes("s3://bucket/data.bin")
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uri | str | The URI or path of the file to load. Supports local files and remote storage backends. | required |
Returns:

| Name | Type | Description |
|---|---|---|
| bytes | bytes | The raw byte content of the file or artifact. |
load_text(uri, *, ...)
Load the text content from a file or URI in a backend-agnostic way.
This method retrieves the raw bytes from the specified uri, decodes them into a string
using the provided encoding, and returns the text. It is accessed via context.artifacts().load_text(...).
Examples:

Basic usage to load text from a local file:

```python
text = await context.artifacts().load_text("file:///tmp/output.txt")
print(text)
```

Loading text from an S3 URI with custom encoding:

```python
text = await context.artifacts().load_text(
    "s3://bucket/data.txt",
    encoding="utf-16",
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uri | str | The URI or path of the file to load. Supports local files and remote storage backends. | required |
| encoding | str | The text encoding to use for decoding bytes (default: "utf-8"). | 'utf-8' |
| errors | str | Error handling strategy for decoding (default: "strict"). | 'strict' |
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The decoded text content of the file or artifact. |
load_json(uri, *, ...)
Load and parse a JSON file from the specified URI.
This asynchronous method retrieves the file contents as text, then parses
the text into a Python object using the standard json library. It is
typically accessed via context.artifacts().load_json(...).
Examples:

Basic usage to load a JSON file:

```python
data = await context.artifacts().load_json("file:///path/to/data.json")
```

Specifying a custom encoding:

```python
data = await context.artifacts().load_json(
    "file:///path/to/data.json",
    encoding="utf-16",
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uri | str | The URI of the JSON file to load. Supports local and remote paths. | required |
| encoding | str | The text encoding to use when reading the file (default: "utf-8"). | 'utf-8' |
| errors | str | The error handling scheme for decoding (default: "strict"). | 'strict' |
Returns:

| Name | Type | Description |
|---|---|---|
| Any | Any | The parsed Python object loaded from the JSON file. |
5. Helpers¶
as_local_dir(artifact_or_uri, *, must_exist)
Ensure an artifact representing a directory is available as a local path.
This method provides a backend-agnostic way to access directory artifacts as local filesystem paths. For local filesystems, it returns the underlying CAS directory. For remote backends (e.g., S3), it downloads the directory contents to a staging location and returns the path.
Examples:

Basic usage to access a local directory artifact:

```python
local_dir = await context.artifacts().as_local_dir("file:///tmp/output_dir")
print(local_dir)
```

Handling missing directories:

```python
try:
    local_dir = await context.artifacts().as_local_dir("s3://bucket/data_dir")
except FileNotFoundError:
    print("Directory not found.")
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| artifact_or_uri | str \| Path \| Artifact | The artifact object, URI string, or Path representing the directory. | required |
| must_exist | bool | If True, raises FileNotFoundError if the local path does not exist. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The resolved local filesystem path to the directory artifact. |
Raises:

| Type | Description |
|---|---|
| FileNotFoundError | If the resolved local directory does not exist and must_exist is True. |
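The local-vs-remote behavior above reduces to a dispatch on the URI scheme. A minimal sketch (resolution_strategy is hypothetical; the real method also accepts Artifact objects and actually downloads remote contents to a staging location):

```python
from urllib.parse import urlparse


def resolution_strategy(uri: str) -> str:
    """Decide how a directory URI would be materialized locally:
    plain paths and file:// URIs resolve in place; any remote scheme
    (e.g. s3://) must be downloaded to a staging directory."""
    scheme = urlparse(uri).scheme
    if scheme in ("", "file"):
        return "local"
    return "download-to-staging"
```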
as_local_file(artifact_or_uri, *, must_exist)
Resolve an artifact to a local file path.
This method transparently handles local and remote artifact URIs, downloading remote file artifacts to a staging path when needed.
Examples:

Using a local file path:

```python
local_path = await context.artifacts().as_local_file("/tmp/data.csv")
```

Using an S3 URI:

```python
local_path = await context.artifacts().as_local_file("s3://bucket/key.csv")
```

Using an Artifact object:

```python
local_path = await context.artifacts().as_local_file(artifact)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| artifact_or_uri | str \| Path \| Artifact | The artifact to resolve, which may be a string URI, Path, or Artifact object. | required |
| must_exist | bool | If True, raises FileNotFoundError if the file does not exist or is not a file. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The absolute path to the local file containing the artifact's data. |