
Source file src/cloud.google.com/go/storage/doc.go

Documentation: cloud.google.com/go/storage

     1  // Copyright 2016 Google LLC
     2  //
     3  // Licensed under the Apache License, Version 2.0 (the "License");
     4  // you may not use this file except in compliance with the License.
     5  // You may obtain a copy of the License at
     6  //
     7  //      http://www.apache.org/licenses/LICENSE-2.0
     8  //
     9  // Unless required by applicable law or agreed to in writing, software
    10  // distributed under the License is distributed on an "AS IS" BASIS,
    11  // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    12  // See the License for the specific language governing permissions and
    13  // limitations under the License.
    14  
    15  /*
    16  Package storage provides an easy way to work with Google Cloud Storage.
    17  Google Cloud Storage stores data in named objects, which are grouped into buckets.
    18  
    19  More information about Google Cloud Storage is available at
    20  https://cloud.google.com/storage/docs.
    21  
    22  See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts,
    23  connection pooling and similar aspects of this package.
    24  
    25  # Creating a Client
    26  
    27  To start working with this package, create a [Client]:
    28  
    29  	ctx := context.Background()
    30  	client, err := storage.NewClient(ctx)
    31  	if err != nil {
    32  	    // TODO: Handle error.
    33  	}
    34  
    35  The client will use your default application credentials. Clients should be
    36  reused instead of created as needed. The methods of [Client] are safe for
    37  concurrent use by multiple goroutines.
    38  
    39  You may configure the client by passing in options from the [google.golang.org/api/option]
    40  package. You may also use options defined in this package, such as [WithJSONReads].
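For example, options from both packages can be combined in a single call (a sketch; the key-file path is hypothetical):

```go
client, err := storage.NewClient(
	ctx,
	// Hypothetical path to a service account key file.
	option.WithCredentialsFile("/path/to/keyfile.json"),
	// Use the JSON API for object reads instead of the default XML API.
	storage.WithJSONReads(),
)
if err != nil {
	// TODO: Handle error.
}
```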
    41  
    42  If you only wish to access public data, you can create
    43  an unauthenticated client with
    44  
    45  	client, err := storage.NewClient(ctx, option.WithoutAuthentication())
    46  
    47  To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST
    48  environment variable to the address at which your emulator is running. This will
    49  send requests to that address instead of to Cloud Storage. You can then create
    50  and use a client as usual:
    51  
    52  	// Set STORAGE_EMULATOR_HOST environment variable.
    53  	err := os.Setenv("STORAGE_EMULATOR_HOST", "localhost:9000")
    54  	if err != nil {
    55  	    // TODO: Handle error.
    56  	}
    57  
    58  	// Create client as usual.
    59  	client, err := storage.NewClient(ctx)
    60  	if err != nil {
    61  	    // TODO: Handle error.
    62  	}
    63  
    64  	// This request is now directed to http://localhost:9000/storage/v1/b
    65  	// instead of https://storage.googleapis.com/storage/v1/b
    66  	if err := client.Bucket("my-bucket").Create(ctx, projectID, nil); err != nil {
    67  	    // TODO: Handle error.
    68  	}
    69  
    70  Please note that there is no official emulator for Cloud Storage.
    71  
    72  # Buckets
    73  
    74  A Google Cloud Storage bucket is a collection of objects. To work with a
    75  bucket, make a bucket handle:
    76  
    77  	bkt := client.Bucket(bucketName)
    78  
    79  A handle is a reference to a bucket. You can have a handle even if the
    80  bucket doesn't exist yet. To create a bucket in Google Cloud Storage,
    81  call [BucketHandle.Create]:
    82  
    83  	if err := bkt.Create(ctx, projectID, nil); err != nil {
    84  	    // TODO: Handle error.
    85  	}
    86  
    87  Note that although buckets are associated with projects, bucket names are
    88  global across all projects.
    89  
    90  Each bucket has associated metadata, represented in this package by
    91  [BucketAttrs]. The third argument to [BucketHandle.Create] allows you to set
    92  the initial [BucketAttrs] of a bucket. To retrieve a bucket's attributes, use
    93  [BucketHandle.Attrs]:
    94  
    95  	attrs, err := bkt.Attrs(ctx)
    96  	if err != nil {
    97  	    // TODO: Handle error.
    98  	}
    99  	fmt.Printf("bucket %s, created at %s, is located in %s with storage class %s\n",
   100  	    attrs.Name, attrs.Created, attrs.Location, attrs.StorageClass)
   101  
   102  # Objects
   103  
   104  An object holds arbitrary data as a sequence of bytes, like a file. You
   105  refer to objects using a handle, just as with buckets, but unlike buckets
   106  you don't explicitly create an object. Instead, the first time you write
   107  to an object it will be created. You can use the standard Go [io.Reader]
   108  and [io.Writer] interfaces to read and write object data:
   109  
   110  	obj := bkt.Object("data")
    111  	// To write to obj, obtain a storage.Writer.
    112  	// w implements io.Writer.
   113  	w := obj.NewWriter(ctx)
   114  	// Write some text to obj. This will either create the object or overwrite whatever is there already.
   115  	if _, err := fmt.Fprintf(w, "This object contains text.\n"); err != nil {
   116  	    // TODO: Handle error.
   117  	}
   118  	// Close, just like writing a file.
   119  	if err := w.Close(); err != nil {
   120  	    // TODO: Handle error.
   121  	}
   122  
   123  	// Read it back.
   124  	r, err := obj.NewReader(ctx)
   125  	if err != nil {
   126  	    // TODO: Handle error.
   127  	}
   128  	defer r.Close()
   129  	if _, err := io.Copy(os.Stdout, r); err != nil {
   130  	    // TODO: Handle error.
   131  	}
   132  	// Prints "This object contains text."
   133  
   134  Objects also have attributes, which you can fetch with [ObjectHandle.Attrs]:
   135  
   136  	objAttrs, err := obj.Attrs(ctx)
   137  	if err != nil {
   138  	    // TODO: Handle error.
   139  	}
   140  	fmt.Printf("object %s has size %d and can be read using %s\n",
   141  	    objAttrs.Name, objAttrs.Size, objAttrs.MediaLink)
   142  
   143  # Listing objects
   144  
   145  Listing objects in a bucket is done with the [BucketHandle.Objects] method:
   146  
   147  	query := &storage.Query{Prefix: ""}
   148  
   149  	var names []string
   150  	it := bkt.Objects(ctx, query)
   151  	for {
   152  	    attrs, err := it.Next()
   153  	    if err == iterator.Done {
   154  	        break
   155  	    }
   156  	    if err != nil {
   157  	        log.Fatal(err)
   158  	    }
   159  	    names = append(names, attrs.Name)
   160  	}
   161  
   162  Objects are listed lexicographically by name. To filter objects
   163  lexicographically, [Query.StartOffset] and/or [Query.EndOffset] can be used:
   164  
   165  	query := &storage.Query{
   166  	    Prefix: "",
   167  	    StartOffset: "bar/",  // Only list objects lexicographically >= "bar/"
   168  	    EndOffset: "foo/",    // Only list objects lexicographically < "foo/"
   169  	}
   170  
   171  	// ... as before
   172  
   173  If only a subset of object attributes is needed when listing, specifying this
   174  subset using [Query.SetAttrSelection] may speed up the listing process:
   175  
   176  	query := &storage.Query{Prefix: ""}
   177  	query.SetAttrSelection([]string{"Name"})
   178  
   179  	// ... as before
   180  
   181  # ACLs
   182  
   183  Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of
   184  ACLRules, each of which specifies the role of a user, group or project. ACLs
   185  are suitable for fine-grained control, but you may prefer using IAM to control
    186  access at the project level (see [Cloud Storage IAM docs]).
   187  
   188  To list the ACLs of a bucket or object, obtain an [ACLHandle] and call [ACLHandle.List]:
   189  
   190  	acls, err := obj.ACL().List(ctx)
   191  	if err != nil {
   192  	    // TODO: Handle error.
   193  	}
   194  	for _, rule := range acls {
   195  	    fmt.Printf("%s has role %s\n", rule.Entity, rule.Role)
   196  	}
   197  
   198  You can also set and delete ACLs.
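For example, to grant and then revoke public read access on an object (a sketch; this requires fine-grained, non-uniform access control on the bucket):

```go
// Grant all users read access to the object.
if err := obj.ACL().Set(ctx, storage.AllUsers, storage.RoleReader); err != nil {
	// TODO: Handle error.
}
// Revoke that access again.
if err := obj.ACL().Delete(ctx, storage.AllUsers); err != nil {
	// TODO: Handle error.
}
```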
   199  
   200  # Conditions
   201  
   202  Every object has a generation and a metageneration. The generation changes
   203  whenever the content changes, and the metageneration changes whenever the
   204  metadata changes. [Conditions] let you check these values before an operation;
   205  the operation only executes if the conditions match. You can use conditions to
   206  prevent race conditions in read-modify-write operations.
   207  
   208  For example, say you've read an object's metadata into objAttrs. Now
   209  you want to write to that object, but only if its contents haven't changed
   210  since you read it. Here is how to express that:
   211  
   212  	w = obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
   213  	// Proceed with writing as above.
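A metageneration precondition can likewise guard metadata updates. A minimal sketch (the ContentType value is illustrative):

```go
// Update metadata only if it hasn't changed since objAttrs was read.
cond := storage.Conditions{MetagenerationMatch: objAttrs.Metageneration}
uattrs := storage.ObjectAttrsToUpdate{ContentType: "text/plain"} // illustrative value
if _, err := obj.If(cond).Update(ctx, uattrs); err != nil {
	// TODO: Handle error. A failed precondition surfaces as a 412 error.
}
```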
   214  
   215  # Signed URLs
   216  
   217  You can obtain a URL that lets anyone read or write an object for a limited time.
    218  Signing a URL requires credentials that are authorized to sign URLs. To sign
    219  with the same authentication that was used when instantiating the Storage
    220  client, use [BucketHandle.SignedURL].
   221  
   222  	url, err := client.Bucket(bucketName).SignedURL(objectName, opts)
   223  	if err != nil {
   224  	    // TODO: Handle error.
   225  	}
   226  	fmt.Println(url)
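The opts value above is a [SignedURLOptions]. For example, a V4 URL granting GET access for 15 minutes might be constructed as (a sketch):

```go
opts := &storage.SignedURLOptions{
	Scheme:  storage.SigningSchemeV4,
	Method:  "GET",
	Expires: time.Now().Add(15 * time.Minute),
}
```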
   227  
   228  You can also sign a URL without creating a client. See the documentation of
   229  [SignedURL] for details.
   230  
   231  	url, err := storage.SignedURL(bucketName, "shared-object", opts)
   232  	if err != nil {
   233  	    // TODO: Handle error.
   234  	}
   235  	fmt.Println(url)
   236  
   237  # Post Policy V4 Signed Request
   238  
    239  A Post Policy V4 signed request is a type of signed request that allows uploads
    240  through HTML forms directly to Cloud Storage with temporary permission.
    241  Conditions can be applied to restrict how the HTML form may be used.
   242  
   243  For more information, please see the [XML POST Object docs] as well
   244  as the documentation of [BucketHandle.GenerateSignedPostPolicyV4].
   245  
   246  	pv4, err := client.Bucket(bucketName).GenerateSignedPostPolicyV4(objectName, opts)
   247  	if err != nil {
   248  	    // TODO: Handle error.
   249  	}
    250  	fmt.Printf("URL: %s\nFields: %v\n", pv4.URL, pv4.Fields)
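The opts value here is a [PostPolicyV4Options]. A minimal sketch, setting only the required expiration, might look like:

```go
opts := &storage.PostPolicyV4Options{
	Expires: time.Now().Add(10 * time.Minute),
}
```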
   251  
   252  # Credential requirements for signing
   253  
   254  If the GoogleAccessID and PrivateKey option fields are not provided, they will
   255  be automatically detected by [BucketHandle.SignedURL] and
   256  [BucketHandle.GenerateSignedPostPolicyV4] if any of the following are true:
   257    - you are authenticated to the Storage Client with a service account's
   258      downloaded private key, either directly in code or by setting the
   259      GOOGLE_APPLICATION_CREDENTIALS environment variable (see [Other Environments]),
   260    - your application is running on Google Compute Engine (GCE), or
   261    - you are logged into [gcloud using application default credentials]
   262      with [impersonation enabled].
   263  
   264  Detecting GoogleAccessID may not be possible if you are authenticated using a
   265  token source or using [option.WithHTTPClient]. In this case, you can provide a
   266  service account email for GoogleAccessID and the client will attempt to sign
   267  the URL or Post Policy using that service account.
   268  
   269  To generate the signature, you must have:
   270    - iam.serviceAccounts.signBlob permissions on the GoogleAccessID service
   271      account, and
   272    - the [IAM Service Account Credentials API] enabled (unless authenticating
   273      with a downloaded private key).
   274  
   275  # Errors
   276  
    277  Errors returned by this client are often of the type [googleapi.Error].
    278  These errors can be introspected for more information by using [errors.As]
    279  to unwrap to the richer [googleapi.Error] type. For example:
   280  
   281  	var e *googleapi.Error
   282  	if ok := errors.As(err, &e); ok {
    283  		if e.Code == 409 { ... }
   284  	}
   285  
   286  # Retrying failed requests
   287  
   288  Methods in this package may retry calls that fail with transient errors.
   289  Retrying continues indefinitely unless the controlling context is canceled, the
   290  client is closed, or a non-transient error is received. To stop retries from
   291  continuing, use context timeouts or cancellation.
   292  
   293  The retry strategy in this library follows best practices for Cloud Storage. By
   294  default, operations are retried only if they are idempotent, and exponential
   295  backoff with jitter is employed. In addition, errors are only retried if they
   296  are defined as transient by the service. See the [Cloud Storage retry docs]
   297  for more information.
   298  
   299  Users can configure non-default retry behavior for a single library call (using
   300  [BucketHandle.Retryer] and [ObjectHandle.Retryer]) or for all calls made by a
   301  client (using [Client.SetRetry]). For example:
   302  
   303  	o := client.Bucket(bucket).Object(object).Retryer(
   304  		// Use WithBackoff to change the timing of the exponential backoff.
   305  		storage.WithBackoff(gax.Backoff{
   306  			Initial:    2 * time.Second,
   307  		}),
   308  		// Use WithPolicy to configure the idempotency policy. RetryAlways will
   309  		// retry the operation even if it is non-idempotent.
   310  		storage.WithPolicy(storage.RetryAlways),
   311  	)
   312  
   313  	// Use a context timeout to set an overall deadline on the call, including all
   314  	// potential retries.
   315  	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
   316  	defer cancel()
   317  
   318  	// Delete an object using the specified strategy and timeout.
   319  	if err := o.Delete(ctx); err != nil {
   320  		// Handle err.
   321  	}
   322  
   323  # Sending Custom Headers
   324  
   325  You can add custom headers to any API call made by this package by using
   326  [callctx.SetHeaders] on the context which is passed to the method. For example,
   327  to add a [custom audit logging] header:
   328  
   329  	ctx := context.Background()
   330  	ctx = callctx.SetHeaders(ctx, "x-goog-custom-audit-<key>", "<value>")
   331  	// Use client as usual with the context and the additional headers will be sent.
   332  	client.Bucket("my-bucket").Attrs(ctx)
   333  
   334  # Experimental gRPC API
   335  
    336  This package includes support for the Cloud Storage gRPC API, which is currently
    337  in preview. This implementation uses gRPC rather than the JSON and XML APIs to
    338  make requests to Cloud Storage. To request access to this API, contact the
    339  Google Cloud Storage gRPC team at gcs-grpc-contact@google.com with a list of the
    340  GCS buckets you would like to allowlist. Because the Go Storage gRPC library is
    341  not yet generally available, it may be subject to breaking changes.
   342  
   343  To create a client which will use gRPC, use the alternate constructor:
   344  
   345  	ctx := context.Background()
   346  	client, err := storage.NewGRPCClient(ctx)
   347  	if err != nil {
   348  		// TODO: Handle error.
   349  	}
   350  	// Use client as usual.
   351  
   352  If the application is running within GCP, users may get better performance by
   353  enabling Direct Google Access (enabling requests to skip some proxy steps). To enable,
   354  set the environment variable `GOOGLE_CLOUD_ENABLE_DIRECT_PATH_XDS=true` and add
   355  the following side-effect imports to your application:
   356  
   357  	import (
   358  		_ "google.golang.org/grpc/balancer/rls"
   359  		_ "google.golang.org/grpc/xds/googledirectpath"
   360  	)
   361  
   362  # Storage Control API
   363  
   364  Certain control plane and long-running operations for Cloud Storage (including Folder
   365  and Managed Folder operations) are supported via the autogenerated Storage Control
   366  client, which is available as a subpackage in this module. See package docs at
   367  [cloud.google.com/go/storage/control/apiv2] or reference the [Storage Control API] docs.
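For example, a Storage Control client can be constructed like any other generated client (a sketch, assuming the subpackage is imported under the name control):

```go
// import control "cloud.google.com/go/storage/control/apiv2"
sc, err := control.NewStorageControlClient(ctx)
if err != nil {
	// TODO: Handle error.
}
defer sc.Close()
// Use sc for Folder and Managed Folder operations.
```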
   368  
   369  [Cloud Storage IAM docs]: https://cloud.google.com/storage/docs/access-control/iam
   370  [XML POST Object docs]: https://cloud.google.com/storage/docs/xml-api/post-object
   371  [Cloud Storage retry docs]: https://cloud.google.com/storage/docs/retry-strategy
   372  [Other Environments]: https://cloud.google.com/storage/docs/authentication#libauth
   373  [gcloud using application default credentials]: https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login
   374  [impersonation enabled]: https://cloud.google.com/sdk/gcloud/reference#--impersonate-service-account
   375  [IAM Service Account Credentials API]: https://console.developers.google.com/apis/api/iamcredentials.googleapis.com/overview
   376  [custom audit logging]: https://cloud.google.com/storage/docs/audit-logging#add-custom-metadata
   377  [Storage Control API]: https://cloud.google.com/storage/docs/reference/rpc/google.storage.control.v2
   378  */
   379  package storage // import "cloud.google.com/go/storage"
   380  
