Configuring Amazon Security Lake Connectors

This connector allows Stellar Cyber to ingest logs from Amazon Security Lake and add the records to the Stellar Cyber data lake.

Integration with Amazon Security Lake enables organizations to securely store and manage their security data, providing real-time threat detection and response capabilities through Stellar Cyber's advanced security analytics and machine learning algorithms.

This connector uses the Open Cybersecurity Schema Framework (OCSF), a standardized format. Data in OCSF format, sent and pulled by third-party vendors, is stored in Amazon Security Lake. This connector is a subscriber integration that pulls data from the lake.
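For reference, the classification fields described in the overview below come from each record's metadata object. The following fragment is a hypothetical illustration (field values are examples, not a complete OCSF record):

    record = {
        "metadata": {
            "product": {
                "name": "CloudTrail",               # basis for msg_origin.source
                "vendor_name": "AWS",               # basis for msg_origin.vendor
                "feature": {"name": "cloudtrail"},  # basis for msg_class
            }
        }
    }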

Stellar Cyber connectors with the Collect function (collectors) may skip collecting some data when the ingestion volume is large, which can potentially lead to data loss. This happens when the processing capacity of the collector is exceeded.

Connector Overview: Amazon Security Lake 

Capabilities

  • Collect: Yes

  • Respond: No

  • Native Alerts Mapped: No

  • Runs on: DP

  • Interval: N/A

Collected Data

Content Type: OCSF-formatted data from Amazon Security Lake

Index:

  • Syslog (default)

  • Traffic (for vpcflow)

  • AWS Events (for cloudtrail)

Locating Records:

  • msg_class: <metadata.product.feature.name>

    Default msg_class: amazon_security_lake_log

    Examples: management_data_and_insights (for default); vpcflow (if metadata.product.name is Amazon VPC); cloudtrail (if metadata.product.name is CloudTrail)

  • msg_origin.source: <metadata.product.name>

    Default msg_origin.source: amazon_security_lake

    Example: cloudtrail (if metadata.product.name is CloudTrail)

  • msg_origin.vendor: <metadata.product.vendor_name>

    Example: aws

  • msg_origin.category: websecurity (for default); paas (for Amazon VPC or CloudTrail)

  • msg_origin.processor.type: log_collector

  • msg_origin.processor.name: amazon_security_lake
This connector can have several different msg_class and msg_origin.source values. The values are assigned dynamically based on the original log source.
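To make the dynamic assignment concrete, the following Python sketch shows one way the mapping could be derived from a record's metadata, using the defaults from the table above. It is illustrative only, not Stellar Cyber's actual implementation, and it omits the product-specific normalization (for example, Amazon VPC records mapping to vpcflow):

    def classify(record):
        # Illustrative mapping from OCSF metadata to Stellar Cyber fields.
        product = record.get("metadata", {}).get("product", {})
        feature = product.get("feature", {})
        return {
            "msg_class": feature.get("name") or "amazon_security_lake_log",
            "msg_origin.source": product.get("name") or "amazon_security_lake",
            "msg_origin.vendor": product.get("vendor_name", ""),
            "msg_origin.processor.type": "log_collector",
            "msg_origin.processor.name": "amazon_security_lake",
        }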

Domain

N/A

Response Actions

N/A

Third Party Native Alert Integration Details

N/A

Required Credentials

  • AWS Role ID, External ID, SQS Queue URL, and Region


Adding an Amazon Security Lake Connector

To add an Amazon Security Lake connector:

  1. Obtain Amazon Security Lake credentials
  2. Add the connector in Stellar Cyber
  3. Test the connector
  4. Verify ingestion

Obtaining Amazon Security Lake Credentials

This connector uses Amazon Simple Queue Service (SQS) queues and S3 buckets to pull data from Amazon Security Lake.
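The pull pattern is the standard SQS-to-S3 subscriber flow: Security Lake writes OCSF objects to S3 and notifies the subscriber's SQS queue, and the subscriber reads the notification, fetches the object, and deletes the message. Below is a minimal Python sketch using boto3, assuming the S3 event-notification message format (the region and queue URL are placeholders):

    import json
    import boto3

    REGION = "us-east-1"  # placeholder
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/example-queue"  # placeholder

    sqs = boto3.client("sqs", region_name=REGION)
    s3 = boto3.client("s3", region_name=REGION)

    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10
    )
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        for rec in body.get("Records", []):  # S3 event-notification records
            bucket = rec["s3"]["bucket"]["name"]
            key = rec["s3"]["object"]["key"]
            obj = s3.get_object(Bucket=bucket, Key=key)
            data = obj["Body"].read()  # OCSF data, typically Parquet
        # Delete only after the object has been processed successfully.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])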

Follow the guidance in the Amazon Security Lake documentation.

Permissions

You need the following permissions to create the SQS queue:

  • s3:GetBucketNotification

  • s3:PutBucketNotification

  • sqs:CreateQueue

  • sqs:DeleteQueue

  • sqs:GetQueueAttributes

  • sqs:GetQueueUrl

  • sqs:SetQueueAttributes

When you set up data access for the Amazon Security Lake subscriber in the Enabling Subscriber Data Access procedure, the SQS queue is automatically created.
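If you need to grant these permissions explicitly, they can be expressed as an identity-based IAM policy. The following is a hypothetical policy document, shown as a Python dictionary for reference; in practice, scope Resource to your specific bucket and queue ARNs:

    create_queue_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketNotification",
                    "s3:PutBucketNotification",
                    "sqs:CreateQueue",
                    "sqs:DeleteQueue",
                    "sqs:GetQueueAttributes",
                    "sqs:GetQueueUrl",
                    "sqs:SetQueueAttributes",
                ],
                "Resource": "*",  # placeholder; scope to your bucket/queue ARNs
            }
        ],
    }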

You will also need the following permissions:

  • iam:CreateRole

  • iam:DeleteRolePolicy

  • iam:GetRole

  • iam:PutRolePolicy

  • lakeformation:GrantPermissions

  • lakeformation:ListPermissions

  • lakeformation:RegisterResource

  • lakeformation:RevokePermissions

  • ram:GetResourceShareAssociations

  • ram:GetResourceShares

  • ram:UpdateResourceShare

Use IAM to verify your permissions. Review the IAM policies attached to your IAM identity, then compare the information in those policies to the list of actions (permissions) you must be allowed to perform to notify subscribers when new data is written to the data lake.
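One way to check this programmatically is the IAM policy simulator. The boto3 sketch below is illustrative; the principal ARN is a placeholder, and the action list can be extended to cover every permission listed above:

    import boto3

    iam = boto3.client("iam")
    actions = [
        "iam:CreateRole",
        "iam:PutRolePolicy",
        "lakeformation:GrantPermissions",
        "lakeformation:RegisterResource",
        "ram:GetResourceShares",
        "ram:UpdateResourceShare",
        "sqs:CreateQueue",
        "s3:PutBucketNotification",
    ]
    result = iam.simulate_principal_policy(
        PolicySourceArn="arn:aws:iam::111122223333:user/example-user",  # placeholder
        ActionNames=actions,
    )
    for evaluation in result["EvaluationResults"]:
        # EvalDecision is "allowed", "implicitDeny", or "explicitDeny".
        print(evaluation["EvalActionName"], evaluation["EvalDecision"])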

Enabling Subscriber Data Access

To enable subscriber data access:

  1. Log in as an administrative user to the Security Lake console at https://console.aws.amazon.com/securitylake.

  2. Use the AWS Region drop-down to choose a Region in which to create the subscriber.

  3. Choose Subscribers.

  4. Click Create subscriber.

  5. For Subscriber details, enter a Subscriber name. You can also enter an optional Description.

    The Region is automatically populated as your currently selected AWS Region. It cannot be changed.

  6. For Log and event sources, select All log and event sources.
  7. For Data access method, choose S3.

  8. For Subscriber credentials, enter Account Id and External Id.
    • The Account Id is the ID of the AWS account on which Stellar Cyber is running.
    • The External Id is a unique identifier. The suggested External Id is the Stellar Cyber tenant ID of the tenant running the connector. To find your tenant ID, navigate to System | Administration | Tenants and locate it in the table.

  9. For Notification details, choose SQS queue.
  10. Click Create.

  11. Navigate to Subscribers | Details. Copy the AWS role ID and the Subscription endpoint. You will need them when configuring the connector in Stellar Cyber.

  12. Navigate to Amazon SQS | Queues.

  13. For the specific Subscription endpoint noted above, copy the SQS queue URL. You will need it when configuring the connector in Stellar Cyber.
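Before moving on, you can optionally smoke-test the values you just copied. The sketch below (all values are placeholders) assumes the subscriber role using the External ID and confirms that the SQS queue is reachable; the connector performs a similar handshake when it runs:

    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/example-subscriber-role",  # placeholder
        RoleSessionName="stellar-cyber-smoke-test",
        ExternalId="example-external-id",  # placeholder
    )["Credentials"]

    sqs = boto3.client(
        "sqs",
        region_name="us-east-1",  # placeholder
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    attrs = sqs.get_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/example-queue",  # placeholder
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    print(attrs["Attributes"]["ApproximateNumberOfMessages"])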

Adding the Connector in Stellar Cyber

With the access information handy, you can add an Amazon Security Lake connector in Stellar Cyber:

  1. Log in to Stellar Cyber.

  2. Click System | Integration | Connectors. The Connector Overview appears.

  3. Click Create. The General tab of the Add Connector screen appears. The information on this tab cannot be changed after you add the connector.

    The asterisk (*) indicates a required field.

  4. Choose Web Security from the Category drop-down.

  5. Choose Amazon Security Lake from the Type drop-down.

  6. For this connector, the supported Function is Collect, which is enabled already.

  7. Enter a Name.

    Notes:
    • This field does not accept multibyte characters.
    • It is recommended that you follow a naming convention such as tenantname-connectortype.
  8. Choose a Tenant Name. This identifies which tenant is allowed to use the connector.

  9. Choose the device on which to run the connector.

    • Certain connectors can be run on either a Sensor or a Data Processor. The available devices are displayed in the Run On menu. If you want to associate your collector with a sensor, you must have configured that sensor prior to configuring the connector or you will not be able to select it during initial configuration. If you select Data Processor, you will need to associate the connector with a Data Analyzer profile as a separate step. That step is not required for a sensor, which is configured with only one possible profile.

    • If the device you're connecting to is on premises, we recommend you run on the local sensor. If you're connecting to a cloud service, we recommend you run on the DP.

  10. (Optional) When the Function is Collect, you can apply Log Filters. For information, see Managing Log Filters.

  11. Click Next. The Configuration tab appears.

    The asterisk (*) indicates a required field.

  12. Enter the AWS Role ID you noted above in Obtain your Amazon Security Lake credentials.

  13. Enter the External ID you noted above.

  14. Choose the Region from the available AWS regions in the drop-down.

  15. Enter the SQS Queue URL you noted above.

  16. Click Next. The final confirmation tab appears.

  17. Click Submit.

    To pull data, a connector must be added to a Data Analyzer profile if it is running on the Data Processor.

  18. If you are adding rather than editing a connector with the Collect function enabled and you specified for it to run on a Data Processor, a dialog box now prompts you to add the connector to the default Data Analyzer profile. Click Cancel to leave it out of the default profile or click OK to add it to the default profile.

    • This prompt only occurs during the initial create connector process when Collect is enabled.

    • Certain connectors can be run on either a Sensor or a Data Processor, and some are best run on one versus the other. In any case, when the connector is run on a Data Processor, that connector must be included in a Data Analyzer profile. If you leave it out of the default profile, you must add it to another profile. You need the Administrator Root scope to add the connector to the Data Analyzer profile. If you do not have privileges to configure Data Analyzer profiles, a dialog displays recommending you ask your administrator to add it for you.

    • The first time you add a Collect connector to a profile, it pulls data immediately and then not again until the scheduled interval has elapsed. If the connector configuration dialog did not offer an option to set a specific interval, it is run every five minutes. Exceptions to this default interval are the Proofpoint on Demand (pulls data every 1 hour) and Azure Event Hub (continuously pulls data) connectors. The intervals for each connector are listed in the Connector Types & Functions topic.

    The Connector Overview appears.

The new connector is immediately active.

Testing the Connector

When you add (or edit) a connector, we recommend that you run a test to validate the connectivity parameters you entered. (The test validates only authentication and connectivity; it does not validate data flow.)

  1. Click System | Integration | Connectors. The Connector Overview appears.

  2. Locate the connector that you added or modified, or that you want to test.

  3. Click Test at the right side of that row. The test runs immediately.

    Note that you may run only one test at a time.

Stellar Cyber conducts a basic connectivity test for the connector and reports a success or failure result. A successful test indicates that you entered all of the connector information correctly.

To aid in troubleshooting your connector, the dialog remains open until you explicitly close it using the X button. If the test fails, you can select the edit button in the same row to review and correct issues.

The connector status is updated every five (5) minutes. A successful test clears the connector status, but if issues persist, the status reverts to failed after a minute.

Repeat the test as needed.

Sample messages show either a success or a failure with a summary of the issue; select Show More for additional detail about a failure.

If the test fails, the common HTTP status error codes are as follows:

  • 400 Bad Request: This error occurs when there is an error in the connector configuration. Did you configure the connector correctly?

  • 401 Unauthorized: This error occurs when an authentication credential is invalid or when a user does not have sufficient privileges to access a specific API. Did you enter your credentials correctly? Are your credentials expired? Are your credentials entitled or licensed for that specific resource?

  • 403 Forbidden: This error occurs when the permission or scope is not correct in a valid credential. Did you enter your credentials correctly? Do you have the required role or permissions for that credential?

  • 404 Not Found: This error occurs when a URL path does not resolve to an entity. Did you enter your API URL correctly?

  • 429 Too Many Requests: This error occurs when the API server receives too much traffic or when a user's license or entitlement quota is exceeded. The server or user license/quota will eventually recover, and the connector will periodically retry the query. If this occurs unexpectedly or too often, work with your API provider to investigate the server limits, user licensing, or quotas.

For a full list of codes, refer to HTTP response status codes.

Verifying Ingestion

To verify ingestion:

  1. Click Investigate | Threat Hunting. The Interflow Search tab appears.

  2. Change the Indices depending on the type of data received:

    • For the default, change the Indices to Syslog.
    • For a msg_class of vpcflow, change the Indices to Traffic.
    • For a msg_class of cloudtrail, change the Indices to AWS Events.

    The table immediately updates to show ingested Interflow records.
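You can also narrow the results to this connector. Assuming the Interflow Search tab accepts Lucene-style field queries, a filter such as the following (using the fields from the Collected Data table above) would isolate this connector's records:

    msg_origin.processor.name: "amazon_security_lake"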