
Authenticating

Topics covered in this chapter:

Using s3bucket Properties
Using a JSON Configuration File
Using the DMP Solution Bucket
Using the DMP Tenant Bucket
Using the DMP Solution Revision Bucket

Using s3bucket Properties

The simplest way to initialize the s3bucket with your S3 bucket's URL and access credentials is by directly setting properties of the s3bucket object within the model. For example:

model DirectInitExample
  uses "s3"
  declarations
    mybucket: s3bucket
  end-declarations

  mybucket.url := "https://s3-us-west-2.amazonaws.com/nameofmybucket/"
  mybucket.region := "us-west-2"
  mybucket.keyprefix := "myprefix/"                                          ! Optional
  mybucket.accesskeyid := "JKHGDMNAYGWEnbbsJGDI"
  mybucket.secretkey := "jhdfusJasfui;SVFYSIAVS++siufgsuUISNISOWJ"
  mybucket.sessiontoken := "kHUFGBSUjbfusbuioUHDFSIngudblincxubhxop0szofbv"  ! Optional
  ! mybucket now initialized and can be used
end-model

The Bucket URL, Region, Access Key ID and Secret Key must always be specified. If the credentials you are given include a Session Token (sometimes referred to as a Security Token), then you must also specify this. Giving a Key Prefix is optional.

Note that the S3 credentials will not be verified until you make a request to the S3 service. The values you give for the url, region and keyprefix can be read back out of the s3bucket, but for security reasons the accesskeyid, secretkey and sessiontoken properties may not be read.
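For instance, after initialization a model can display the readable properties, while the credential properties remain write-only (a minimal sketch; all property values shown are placeholders):

```mosel
model ReadBackExample
  uses "s3"
  declarations
    mybucket: s3bucket
  end-declarations

  mybucket.url := "https://s3-us-west-2.amazonaws.com/nameofmybucket/"
  mybucket.region := "us-west-2"
  mybucket.keyprefix := "myprefix/"
  mybucket.accesskeyid := "JKHGDMNAYGWEnbbsJGDI"
  mybucket.secretkey := "jhdfusJasfui;SVFYSIAVS++siufgsuUISNISOWJ"

  ! These properties can be read back:
  writeln("URL: ", mybucket.url)
  writeln("Region: ", mybucket.region)
  writeln("Key prefix: ", mybucket.keyprefix)
  ! Reading mybucket.accesskeyid, mybucket.secretkey or
  ! mybucket.sessiontoken is not permitted
end-model
```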

If you are using server-side encryption with AWS-managed keys, you can configure this by setting additional fields on the s3bucket object, as follows:

  mybucket.sse := "aws:kms"
  mybucket.ssekmskeyid := "keyid"     ! Replace 'keyid' with the ID of your key in the AWS key management service
  mybucket.ssecontext := "x=1"        ! Encryption context; optional

If you need to update your credentials periodically, you can set the refreshfunc property to a function reference that refreshes the properties on the s3bucket; it will be called automatically once the time elapsed since timestamp exceeds the credentials' ttl (time-to-live), e.g.:

model DirectInitExample
  uses "s3"
  declarations
    mybucket: s3bucket
  end-declarations

  function refreshbucket(bucket:s3bucket):boolean
    bucket.accesskeyid := "KHASFGusbf9634kJFGS8"
    bucket.secretkey := "sndfd++sfa8KAD&*RWLHSjhsdlifgy8gdIA*RHDJ"
    bucket.timestamp := timestamp    ! seconds since 1/1/1970
    bucket.ttl := 3600               ! 1 hour, in seconds
    returned := true  ! Return true on success, false on error
  end-function

  mybucket.url := "https://s3-us-west-2.amazonaws.com/nameofmybucket/"
  mybucket.region := "us-west-2"
  mybucket.accesskeyid := "JKHGDMNAYGWEnbbsJGDI"
  mybucket.secretkey := "jhdfusJasfui;SVFYSIAVS++siufgsuUISNISOWJ"
  mybucket.timestamp := timestamp           ! seconds since 1/1/1970
  mybucket.ttl := 3600                      ! 1 hour, in seconds
  mybucket.refreshfunc := ->refreshbucket
  ! mybucket now initialized and can be used
end-model

Using a JSON Configuration File

As an alternative to setting credentials in the model, you can specify them in a JSON document whose contents you assign to the s3_buckets parameter. The document should have the following format:

{
  "<bucket-id>": {
    "url": "<URL of bucket>",
    "region": "<AWS Region>",
    "keyPrefix": "<Key Prefix, optional>",
    "accessKeyId": "<AWS Access Key ID>",
    "secretKey": "<AWS Secret Key>",
    "sessionToken": "<AWS Session Token, optional>"
  }
}

The "<bucket-id>" string is a key that you use to refer to the bucket definition in the JSON and has no other meaning. You can specify multiple buckets in the same JSON file so long as they have different "<bucket-id>" strings, e.g.:

{
  "firstbucket": {
    "url": "https://s3-us-west-2.amazonaws.com/nameofmybucket/",
    "region": "us-west-2",
    "keyPrefix": "myprefix/",
    "accessKeyId": "JKHGDMNAYGWEnbbsJGDI",
    "secretKey": "jhdfusJasfui;SVFYSIAVS++siufgsuUISNISOWJ",
    "sessionToken": "kHUFGBSUjbfusbuioUHDFSIngudblincxubhxop0szofbv"
  },
  "secondbucket": {
    "url": "https://s3-us-east-2.amazonaws.com/nameofmyotherbucket/",
    "region": "us-east-2",
    "accessKeyId": "IHFSUGFOSFUHFJSYIFSG",
    "secretKey": "nusduoUHf;sufbuOFUGSFHRHAFFAbvubddsa=jfb",
    "sessionToken": "UHFSUOFIhfushfglhoFGSiguosnoahusfppgjoUFSUFINM"
  }
}

Then you can initialize an s3bucket in the model with the credentials from the JSON by calling the s3init procedure. For example, if you save the above sample JSON in a file called "buckets.json":

model InitExample
  uses "s3","mmsystem"
  declarations
    public bucketcfg: text
    mybucket: s3bucket
  end-declarations

  ! Load buckets.json into a variable so it can be passed to a parameter
  fcopy("buckets.json","text:bucketcfg")
  setparam("s3_buckets",string(bucketcfg))

  ! Initialize mybucket using the 'firstbucket' set of credentials
  s3init(mybucket, "firstbucket")
  if s3status(mybucket)<>S3_OK then
    writeln("Bucket initialization error: ", s3getlasterror(mybucket))
    exit(1)
  end-if
  ! mybucket now initialized and can be used
end-model

As before, the S3 credentials will not be verified until you make a request to the S3 service.

The s3_buckets parameter is special in that it has a single value shared by all models within the Mosel instance - this means that if, for example, you set it for your master model, then the same value will be used for all submodels that you start in the same Mosel process.
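As an illustration, a master model can set the parameter once before starting its submodels (a minimal sketch using the mmjobs module; "submodel.mos" is a hypothetical submodel that calls s3init itself):

```mosel
model MasterExample
  uses "s3","mmjobs","mmsystem"
  declarations
    public bucketcfg: text
    submod: Model
  end-declarations

  ! Set s3_buckets once in the master model...
  fcopy("buckets.json","text:bucketcfg")
  setparam("s3_buckets",string(bucketcfg))

  ! ...any submodel started in this Mosel process sees the same value,
  ! so "submodel.mos" can simply call s3init(mybucket,"firstbucket")
  ! without setting the parameter itself
  if compile("submodel.mos")<>0 then
    exit(1)
  end-if
  load(submod,"submodel.bim")
  run(submod)
  wait
  dropnextevent
end-model
```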

After calling s3init, the only property on the s3bucket that may be modified is keyprefix, which must always start with the value it was given by s3init.
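For example, you can narrow the key prefix to a subfolder, but only by appending to the prefix that s3init assigned:

```mosel
  ! Suppose s3init set mybucket.keyprefix to "myprefix/"
  mybucket.keyprefix := mybucket.keyprefix + "subfolder/"  ! OK: still starts with "myprefix/"
  ! mybucket.keyprefix := "otherprefix/"                   ! Not permitted
```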

If you are using server-side encryption with AWS-managed keys, you can configure this by setting three additional properties on the JSON object: sse should be the string "aws:kms", sseKmsKeyId the identifier of your key stored in the AWS key-management service, and sseContext the (optional) encryption context string.
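For example, a bucket definition including these encryption properties might look as follows (the key ID and context values are placeholders):

```json
{
  "firstbucket": {
    "url": "https://s3-us-west-2.amazonaws.com/nameofmybucket/",
    "region": "us-west-2",
    "accessKeyId": "JKHGDMNAYGWEnbbsJGDI",
    "secretKey": "jhdfusJasfui;SVFYSIAVS++siufgsuUISNISOWJ",
    "sse": "aws:kms",
    "sseKmsKeyId": "keyid",
    "sseContext": "x=1"
  }
}
```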

Using the DMP Solution Bucket

When using an Xpress Insight, Xpress Workbench, or Xpress Executor DMP component, the Mosel instance will automatically be configured to access an S3 bucket that is shared by all components in the solution. To use this, call s3init with the constant S3_DMP_SOLUTIONDATA:

model DmpInitExample
  uses "s3"
  declarations
    mybucket: s3bucket
  end-declarations

  ! Initialize mybucket using the 'solutionData' set of credentials
  s3init(mybucket, S3_DMP_SOLUTIONDATA)
  if s3status(mybucket)<>S3_OK then
    writeln("Bucket initialization error: ", s3getlasterror(mybucket))
    exit(1)
  end-if
  ! mybucket now initialized and can be used
end-model

By default you will access the solutionData folder matching your component's current DMP lifecycle stage (design, staging or production). Alternatively, you can initialize your s3bucket with the folder of a different lifecycle stage by using the s3solutiondata function and passing one of the constants S3_DMP_DESIGN, S3_DMP_STAGING or S3_DMP_PRODUCTION, as follows:

model DmpInitExample
  uses "s3"
  declarations
    mybucket: s3bucket
  end-declarations

  ! Initialize mybucket using the 'solutionData' credentials for the 'staging' lifecycle
  s3init(mybucket, s3solutiondata(S3_DMP_STAGING))
  if s3status(mybucket)<>S3_OK then
    writeln("Bucket initialization error: ", s3getlasterror(mybucket))
    exit(1)
  end-if
  ! mybucket now initialized and can be used
end-model

Please note that in Xpress Workbench, only access to the "solution data" folder for the current lifecycle stage is supported, and the S3 credentials will only be usable within the first 45 minutes of the model's execution.

Using the DMP Tenant Bucket

When using an Xpress Insight or Xpress Executor DMP component, the Mosel instance will automatically be configured to access an S3 bucket that is shared by all components in your tenant. To use this, call s3init with the constant S3_DMP_TENANTSHARED:

model DmpInitExample
  uses "s3"
  declarations
    mybucket: s3bucket
  end-declarations

  ! Initialize mybucket using the 'tenantShared' set of credentials
  s3init(mybucket, S3_DMP_TENANTSHARED)
  if s3status(mybucket)<>S3_OK then
    writeln("Bucket initialization error: ", s3getlasterror(mybucket))
    exit(1)
  end-if
  ! mybucket now initialized and can be used
end-model

Unlike the solution bucket, the tenant bucket is shared by all component lifecycle stages - there are no separate folders for design, staging, and production.

Please note that the shared tenant bucket cannot be accessed from Xpress Workbench.

Using the DMP Solution Revision Bucket

When using an Xpress Insight or Xpress Executor DMP component, you can read from the S3 folder for a previously committed solution revision. (A solution revision is created by committing the solution, at which point a location in S3 is created to store the assets of that revision. This mechanism is frequently used by non-Xpress components to share DMP function implementations and other resources; there is no way to write to a solution revision bucket from an Xpress component.) To use this, call s3init with the return value of the s3solutionrevision function, passing it the ID of the solution revision you want to access, e.g.:

model DmpInitExample
  uses "s3"
  parameters
    REVISION_ID="7djfgs9287"
  end-parameters
  declarations
    mybucket: s3bucket
  end-declarations

  ! Initialize mybucket using the 'solutionRevision' set of credentials for revision
  s3init(mybucket, s3solutionrevision(REVISION_ID))
  if s3status(mybucket)<>S3_OK then
    writeln("Bucket initialization error: ", s3getlasterror(mybucket))
    exit(1)
  end-if
  ! mybucket now initialized and can be used
end-model

Please note that the solution revision buckets cannot be accessed from Xpress Workbench.


© 2001-2024 Fair Isaac Corporation. All rights reserved. This documentation is the property of Fair Isaac Corporation (“FICO”). Receipt or possession of this documentation does not convey rights to disclose, reproduce, make derivative works, use, or allow others to use it except solely for internal evaluation purposes to determine whether to purchase a license to the software described in this documentation, or as otherwise set forth in a written software license agreement between you and FICO (or a FICO affiliate). Use of this documentation and the software described in it must conform strictly to the foregoing permitted uses, and no other use is permitted.