Insert batch data from the specified file path. The server reads the file at the provided path and imports its data in bulk. (For example, use this when you have uploaded a large file, such as a Parquet file, and want to insert its data.)
Bearer token authentication. Include the token in the Authorization header as 'Bearer &lt;token&gt;'.
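As a minimal sketch, the Authorization header described above can be built like this; `YOUR_TOKEN` is a placeholder, not a real credential:

```python
# Build the Authorization header for a request to this API.
# "YOUR_TOKEN" is a placeholder value, not a real token.
token = "YOUR_TOKEN"
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])
```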
The format of the file to insert. Default is Parquet.
Jsonl, Parquet
Maximum number of rows to insert at a time. This determines the size of each record batch. Default is 1024.
x >= 0
The file path containing the data to be inserted. Note that inserting vector data is not supported for the Jsonl (JSON Lines) format.
Path to the file to insert.
"/Path/to/data/file.parquet"
Bucket name of cloud storage.
Credentials to access cloud storage. If not provided, the server tries to use credentials from its configuration, such as environment variables.
Directory path containing the files to insert.
"/Path/to/data/"
Number of files to process in parallel.
x >= 0
Options for cloud storage.
Storage type of the file to insert. If not provided, the path is treated as a local file. Currently, only 'AWS', 'S3', 'CEPH', 'MINIO', and 'S3_COMPATIBLE' are supported.
LOCAL, AWS, S3, GCP, AZURE, CEPH, MINIO, S3_COMPATIBLE
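For the cloud-storage case, the options above might combine into a request body like the sketch below. All key names here (`storage_type`, `bucket`, `parallelism`, `credential`) are assumptions inferred from the parameter descriptions, not a confirmed schema.

```python
import json

# Hypothetical request body for inserting from S3-compatible cloud storage.
# Key names are assumptions inferred from the docs, not a confirmed schema.
request_body = {
    "path": "/Path/to/data/",    # directory containing the files to insert
    "format": "Parquet",         # Jsonl or Parquet; default is Parquet
    "parallelism": 4,            # number of files to process in parallel
    "storage_type": "S3",        # omit to treat the path as a local file
    "bucket": "my-data-bucket",  # bucket name of the cloud storage
    # "credential": {...}        # omit to fall back to server-side config,
    #                            # such as environment variables
}
print(json.dumps(request_body, indent=2))
```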