Upload up to 5TB

Usage

gcs_upload(
  file,
  bucket = gcs_get_global_bucket(),
  type = NULL,
  name = deparse(substitute(file)),
  object_function = NULL,
  object_metadata = NULL,
  predefinedAcl = c("private", "bucketLevel", "authenticatedRead",
    "bucketOwnerFullControl", "bucketOwnerRead", "projectPrivate", "publicRead",
    "default"),
  upload_type = c("simple", "resumable")
)

gcs_upload_set_limit(upload_limit = 5000000L)



Arguments

file
  data.frame, list, R object, or filepath (character) to upload


bucket
  bucketname you are uploading to


type
  MIME type, guessed from the file extension if NULL


name
  What to call the file once uploaded. Default is the filepath


object_function
  If not NULL, a function(input, output) that writes the R object supplied in file to a filepath before upload


object_metadata
  Optional metadata for the object, created via gcs_metadata_object


predefinedAcl
  Specify user access to the object. Default is 'private'. Set to 'bucketLevel' for buckets with bucket-level access enabled.


upload_type
  Override the automatic decision on upload type


upload_limit
  Upload limit in bytes


Value

If successful, a metadata object


Details

When using object_function, it expects a function with two arguments (see the sketch after this list):

  • input The object you supply in file to write from

  • output The filename you write to
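
For illustration, a minimal sketch of such a function, here using saveRDS as the writer (the body is an assumption; any function that writes input to the filepath output will do):

  f <- function(input, output) {
    ## write the R object supplied in file to the filepath GCS will upload
    saveRDS(input, file = output)
  }
  gcs_upload(mtcars, name = "mtcars.rds", object_function = f)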

By default the upload_type will be 'simple' if the object is under 5MB and 'resumable' if over 5MB. Use gcs_upload_set_limit to modify this boundary - you may want it smaller on slow connections, higher on faster connections. A 'multipart' upload is used if you provide object_metadata.
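
A sketch of the multipart case (the metadata list here is illustrative, and assumes gcs_metadata_object accepts a named list via its metadata argument):

  ## supplying object_metadata triggers a multipart upload
  meta <- gcs_metadata_object("mtcars.csv",
                              metadata = list(source = "mtcars demo"))
  gcs_upload(mtcars, name = "mtcars.csv", object_metadata = meta)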

If object_function is NULL and file is not a character filepath, the defaults are:

  • data.frame objects are written with write.csv

  • list objects are written with jsonlite::toJSON

If object_function is not NULL and file is not a character filepath, then object_function will be applied to the R object specified in file before upload. You may also want to use name to ensure the correct file extension is used, e.g. name = 'myobject.feather'.
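
A sketch of that pattern, assuming the arrow package is available to write feather files:

  ## custom writer paired with an explicit name for the file extension
  f <- function(input, output) {
    arrow::write_feather(input, output)
  }
  gcs_upload(mtcars, name = "myobject.feather", object_function = f)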

If the file or name argument contains folders, e.g. /data/file.csv, then the file will be uploaded with the same folder structure, e.g. into a /data/ folder. Use name to override this.
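
For instance (a sketch with a hypothetical local path):

  ## the default name keeps the folder, so this lands in a data/ folder
  gcs_upload("data/file.csv")

  ## override name to upload to the bucket root instead
  gcs_upload("data/file.csv", name = "file.csv")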


Requires scopes https://www.googleapis.com/auth/devstorage.read_write or https://www.googleapis.com/auth/devstorage.full_control
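
A sketch of requesting one of these scopes before authenticating, assuming the googleAuthR.scopes.selected option and the gcs_auth() helper:

  ## illustrative: select the full-control scope, then authenticate
  options(googleAuthR.scopes.selected =
            "https://www.googleapis.com/auth/devstorage.full_control")
  gcs_auth()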


Examples

if (FALSE) {

## set the global bucket so you don't need to keep supplying it in future calls
gcs_global_bucket("my-bucket")

## by default will convert data.frames to csv
gcs_upload(mtcars)

## mtcars has been renamed to mtcars.csv
gcs_list_objects()

## to specify the name, use the name argument
gcs_upload(mtcars, name = "my_mtcars.csv")

## when looping, it's best to specify the name, else it will take
## the deparsed function call e.g. X[[i]]
my_files <- list.files("my_uploads")
lapply(my_files, function(x) gcs_upload(x, name = x))

## you can supply your own function to transform R objects before upload
f <- function(input, output) {
  write.csv2(input, file = output)
}

gcs_upload(mtcars, name = "mtcars_csv2.csv", object_function = f)

## upload to a bucket with bucket-level ACL set
gcs_upload(mtcars, predefinedAcl = "bucketLevel")

## modify the boundary between simple and resumable uploads
## default 5000000L is 5MB
gcs_upload_set_limit(1000000L)
}