quartzbio.cli.data module¶
- quartzbio.cli.data.create_dataset(args, template=None)¶
Attempts to create a new dataset using the following parameters:
template_id
template_file
capacity
tag
metadata
metadata_json_file
create_vault
full_path
dry_run
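As an illustrative sketch (not the library's documented interface), the `args` object is typically an `argparse.Namespace` whose attribute names mirror the parameters listed above; every value shown here is hypothetical:

```python
from argparse import Namespace

# Hypothetical args for create_dataset; attribute names follow the
# parameter list above, values are purely illustrative.
args = Namespace(
    full_path="MyVault:/datasets/example-dataset",  # hypothetical vault path format
    template_id=None,               # or the ID of an existing dataset template
    template_file="template.json",  # local JSON template file
    capacity="small",               # assumed capacity tier; actual values may differ
    tag=["demo"],                   # tags to apply to the new dataset
    metadata={"project": "demo"},
    metadata_json_file=None,        # alternatively, load metadata from a JSON file
    create_vault=False,             # create the vault if it does not already exist
    dry_run=True,                   # preview the operation without creating anything
)
```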
- quartzbio.cli.data.download(args)¶
Given a folder or file, download all the files contained within it (not recursive).
- quartzbio.cli.data.import_file(args)¶
Given a dataset and a local path, upload and import the file(s).
Command arguments (args):
- create_dataset and its args
template_id
template_file
capacity
tag
tag
metadata
metadata_json_file
create_vault
full_path
commit_mode
remote_source
dry_run
follow
file (list)
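A hypothetical sketch of the import-specific attributes (the values, and the semantics suggested in the comments, are assumptions rather than documented behavior):

```python
from argparse import Namespace

# Hypothetical args for import_file; names follow the argument list above,
# values are purely illustrative.
args = Namespace(
    full_path="MyVault:/datasets/example-dataset",  # hypothetical target dataset path
    file=["data/part-1.csv", "data/part-2.csv"],    # local file(s) to upload and import
    commit_mode="append",    # assumed mode name; actual modes may differ
    remote_source=False,     # whether the files already live on the remote
    follow=True,             # wait for the import job to finish
    dry_run=False,
)
```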
- quartzbio.cli.data.ls(args)¶
Given a QuartzBio remote path, list the files and folders it contains.
- quartzbio.cli.data.queue(statuses=['running', 'queued'])¶
Gets all running and queued Tasks for an account and groups them by User and status. It also prints the Job queue in the order in which the jobs will be evaluated.
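The grouping described above can be sketched with plain dictionaries; the task shape below (`user`, `status`, `id` fields) is an assumption for illustration, not the actual Task object:

```python
from collections import defaultdict

def group_tasks(tasks):
    # Sketch of the grouping the queue docstring describes: tasks are
    # bucketed by user, then by status. Field names are assumptions.
    grouped = defaultdict(lambda: defaultdict(list))
    for task in tasks:
        grouped[task["user"]][task["status"]].append(task["id"])
    return {user: dict(by_status) for user, by_status in grouped.items()}

tasks = [
    {"id": 1, "user": "alice", "status": "running"},
    {"id": 2, "user": "alice", "status": "queued"},
    {"id": 3, "user": "bob", "status": "queued"},
]
print(group_tasks(tasks))
# {'alice': {'running': [1], 'queued': [2]}, 'bob': {'queued': [3]}}
```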
- quartzbio.cli.data.should_exclude(path, exclude_paths, dry_run=False, print_logs=True)¶
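should_exclude carries no docstring; a plausible reading, sketched here with fnmatch-style glob matching (the actual matching rules in quartzbio.cli.data may differ), is:

```python
import fnmatch

def should_exclude_sketch(path, exclude_paths, dry_run=False, print_logs=True):
    # Hypothetical re-implementation for illustration only: return True
    # when the path matches any exclusion pattern, optionally logging why.
    for pattern in exclude_paths:
        if fnmatch.fnmatch(path, pattern):
            if print_logs:
                prefix = "(dry run) " if dry_run else ""
                print(f"{prefix}excluding {path}: matched {pattern}")
            return True
    return False

print(should_exclude_sketch("data/tmp/file.txt", ["data/tmp/*"], print_logs=False))
# True
```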
- quartzbio.cli.data.should_tag_by_object_type(args, object_)¶
Returns True if the object matches the object-type requirements.
- quartzbio.cli.data.show_queue(args)¶
Show running and queued tasks.
- quartzbio.cli.data.tag(args)¶
Tags a list of paths with the provided tags.
- quartzbio.cli.data.upload(args)¶
Given a folder or file, upload all the folders and files contained within it, skipping ones that already exist on the remote.
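The skip-if-already-present behavior can be sketched as a set difference against the remote paths; the helper name and path shapes below are hypothetical:

```python
def plan_uploads(local_paths, remote_paths):
    # Hypothetical sketch of upload's skip logic: only local paths not
    # already present on the remote are queued for upload.
    remote = set(remote_paths)
    return [p for p in local_paths if p not in remote]

print(plan_uploads(["a.txt", "b.txt", "c.txt"], ["b.txt"]))
# ['a.txt', 'c.txt']
```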