Integrating with Telemetry Data Collectors
The flow diagram illustrates a typical use case for configuring a data collector. The flow begins with the creation of a stream, followed by the subscription to that stream. Once subscribed, the user attaches assets to the stream, ensuring that the relevant event messages are captured. Optionally, the user can create alert rules to define specific conditions for triggering notifications or actions based on the event data. This flow represents a streamlined approach to configuring the data collector for event monitoring and management.
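The same flow can be sketched as a sequence of API calls. The endpoint paths, payload fields, and sink type names below are illustrative assumptions for this sketch, not the authoritative Fabric API; consult the Equinix Fabric API reference for the real routes and schemas.

```python
# Illustrative sketch of the stream-configuration flow described above.
# NOTE: the paths, payload fields, and placeholder IDs are assumptions.

def create_stream(name):
    # Step 1: create the stream that subscriptions and assets attach to.
    return {"method": "POST", "path": "/fabric/v4/streams",
            "body": {"name": name}}

def create_subscription(stream_id, sink):
    # Step 2: subscribe a sink (Splunk HEC, Datadog, ...) to the stream.
    return {"method": "POST",
            "path": f"/fabric/v4/streams/{stream_id}/subscriptions",
            "body": {"sink": sink}}

def attach_asset(stream_id, asset_uri):
    # Step 3: attach an asset so its event messages are captured.
    return {"method": "PUT",
            "path": f"/fabric/v4/streams/{stream_id}/assets",
            "body": {"asset": asset_uri}}

# Step 4 (optional): alert rules would be configured analogously.
flow = [
    create_stream("prod-telemetry"),
    create_subscription("<stream-id>", {"type": "SPLUNK_HEC"}),
    attach_asset("<stream-id>", "/fabric/v4/ports/<port-uuid>"),
]
```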
Data Collector Setup
You can collect event or metric data using various Sink Types, each designed to streamline data handling and processing. These Sink Types enable you to direct your data to preferred destinations like Splunk or Datadog for advanced analytics, or PagerDuty or Slack for real-time alerts.
Here's a list of what we support:
Sink Type | Event Data Type | Metric Data Type |
---|---|---|
Datadog | Supported | Supported |
Microsoft Teams | Supported | Supported |
PagerDuty | Supported | Not Supported |
ServiceNow | Supported | Not Supported |
Slack | Supported | Supported |
Splunk HEC | Supported | Supported |
Datadog
If you use Datadog as your data collector, provide your Datadog API and Application Keys when creating your subscription.
Microsoft Teams
If you are using a Microsoft Teams channel as your data collector, create and configure a webhook to post messages when a webhook request is received. Once your webhook is configured, provide your HTTP POST URL when creating your subscription.
PagerDuty
If you are using PagerDuty as your data collector, create a service and an Events API v2 integration to handle your events. Provide your PagerDuty URI and the Integration Key when creating your subscription.
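To verify an Integration Key before wiring up the subscription, you can hand-build a PagerDuty Events API v2 payload; this is a minimal test sketch, not something the data collector requires you to write, and the `source` label is a placeholder.

```python
import json
import urllib.request

def build_pagerduty_event(integration_key, summary):
    # Minimal Events API v2 payload: the integration (routing) key from
    # your PagerDuty service plus a triggering alert body.
    return {
        "routing_key": integration_key,
        "event_action": "trigger",
        "payload": {
            "summary": summary,
            "source": "equinix-fabric",  # free-form source label
            "severity": "info",
        },
    }

def send_test_event(integration_key):
    # Send one test alert to the standard Events API v2 endpoint.
    body = json.dumps(build_pagerduty_event(integration_key, "test alert"))
    req = urllib.request.Request(
        "https://events.pagerduty.com/v2/enqueue",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```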
Slack
If you are using Slack as your data collector, create and configure a Slack App for incoming webhooks. Provide your webhook's unique URL when creating your subscription.
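Once the incoming webhook exists, you can verify its URL with a plain JSON POST; Slack incoming webhooks accept a simple `text` payload. The webhook URL in the comment is a placeholder.

```python
import json
import urllib.request

def build_slack_message(text):
    # Incoming webhooks accept a JSON body with a "text" field.
    return {"text": text}

def post_to_slack(webhook_url, text):
    # POST the message to the webhook's unique URL.
    payload = json.dumps(build_slack_message(text)).encode()
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Example (placeholder URL -- use your webhook's unique URL):
# post_to_slack("https://hooks.slack.com/services/T000/B000/XXXX",
#               "Fabric telemetry test message")
```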
Splunk
If you are using Splunk as your data collector, create an event index to receive events, a metrics index to receive metrics, or both, then create an HTTP Event Collector. Provide your HTTP Event Collector URI, your HTTP Event Collector token, and the names of your indices when creating your subscription.
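You can sanity-check the HEC endpoint and token with a hand-built request before creating the subscription. The host, token, and index names below are placeholders; the request shape follows Splunk's standard HEC event endpoint.

```python
import json
import urllib.request

def build_hec_request(hec_uri, token, index, event):
    # One HEC event envelope: the event payload plus the target index,
    # authorized with the "Splunk <token>" header scheme.
    body = json.dumps({"event": event, "index": index}).encode()
    return urllib.request.Request(
        hec_uri.rstrip("/") + "/services/collector/event",
        data=body,
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
    )

# Placeholder host, token, and index -- substitute your own values:
req = build_hec_request(
    "https://http-inputs-<host>.splunkcloud.com",
    "<hec-token>",
    "<event-index>",
    {"message": "HEC connectivity test"},
)
# urllib.request.urlopen(req)  # uncomment to actually send
```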
Receive Metrics in Client Sink Integration
Once you subscribe to asset metrics, you receive metrics at 5-minute intervals:
- Port and Connection Bandwidth Usage Metrics provide a bits-per-second (bit/s) output for the data transmitted (tx) or received (rx) on the port or connection asset.
- Metro Latency Metrics provide the latency in milliseconds (ms) from a single subscribed metro code to other metros supported by Equinix.
- Port Error and Dropped Metrics provide the number of packet discards on a given port due to packet format errors, transmission errors, or the port lacking bandwidth to accept the packet.
- Connection Packet Dropped Metrics provide the number of packets dropped on a connection for exceeding bandwidth limits, for both transmitted (tx) and received (rx) data.
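Because bandwidth metrics arrive as raw bit/s values, a small helper for scaling them into readable units can be handy when displaying the data. Decimal (1000-based) scaling is an assumption here; confirm the convention your dashboard expects.

```python
def format_bitrate(bits_per_second):
    # Scale a raw bit/s reading into a human-readable decimal unit.
    value = float(bits_per_second)
    for unit in ("bit/s", "Kbit/s", "Mbit/s", "Gbit/s"):
        if value < 1000:
            return f"{value:.1f} {unit}"
        value /= 1000
    return f"{value:.1f} Tbit/s"

# e.g. a 2,500,000 bit/s rx reading renders as "2.5 Mbit/s"
```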
Receive Events in Client Sink Integration
You can generate events in different ways.
- BGP State Events - When you configure BGP and perform actions such as disable, enable, or reset in the portal, BGP events related to FCR connections are generated. These events include BGP IPv4/IPv6 Idle, Connected, or Established states.
- Route Quota Usage Events - FCR and FCR-related connections can learn IPv4/IPv6 routes up to 90% or 100% of the FCR Package Type quota. These routes are typically learned based on the IPv4/IPv6 addresses configured in BGP.
- Port and Connection Up/Down Events - These events are critical to the user. If an Equinix Port goes down, a Port and Connection Down Event is triggered. Once the Equinix Port recovers, a Port and Connection Up Event is sent.
- Fabric and Network Edge Asset Lifecycle Provisioning Events - Track the service provisioning of your assets, such as ports, connections, FCRs, Virtual Devices, etc. Any status change to your asset, including provisioning, deprovisioning, and failures, results in an event being sent.
- Organization Events - These events come from Access Manager and Resource Manager to inform organization administrators of change events within the organization, such as adding or removing roles for users.
Once you create a subscription, you receive events whenever they occur.
For example, if you attached a project to your stream, you can view asset lifecycle events whenever you create or delete an asset. If you sign in to the Customer Portal and create a Port, Connection, Service Token, Network, etc., your sink receives an event:
{
  _source: https://api.equinix.com/fabric/v4/cloudevents
  data: {
    message: Router named router-name state changed to provisioning
    resource: { ... }
  }
  equinixproject: 377533000114703
  id: d2bb7d5d-3e7b-4638-9023-acdb08cc38a4
  severitynumber: 9
  severitytext: INFO
  subject: /fabric/v4/routers/3cbd8a7f-6878-4492-88a9-1a8be65cc461
  time: 2025-02-04T01:43:45Z
  type: equinix.fabric.router.state.provisioning
}
host = http-inputs-<host>.splunkcloud.com
source = Equinix
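The lifecycle event above can also be consumed programmatically. This sketch extracts the resource kind and message using only field names visible in the sample payload; real events may carry additional fields.

```python
import json

# Field names taken from the sample lifecycle event shown above.
raw = json.loads("""{
  "data": {"message": "Router named router-name state changed to provisioning"},
  "severitynumber": 9,
  "severitytext": "INFO",
  "subject": "/fabric/v4/routers/3cbd8a7f-6878-4492-88a9-1a8be65cc461",
  "time": "2025-02-04T01:43:45Z",
  "type": "equinix.fabric.router.state.provisioning"
}""")

# The event type is dotted: equinix.fabric.<resource>.<category>.<detail>
resource_kind = raw["type"].split(".")[2]
message = raw["data"]["message"]
```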
As another example, if your FCR-to-Port connection is configured with the Direct and BGP routing protocols and you enable or disable BGP, your sink receives an event.
Once your subscriptions are active, you can go to your data collector and search using index="<name_of_splunk_hec>". This search returns the relevant event data collected by Splunk.
{
  _source: https://api.equinix.com/fabric/v4/cloudevents
  equinixmessage: Virtual port status changed to UP
  id: 5345e011-4478-484b-beb4-38c940ff2f9e
  severitynumber: 9
  severitytext: INFO
  subject: /fabric/v4/ports/c4d85dbe-f965-9659-f7e0-306a5c00af26
  time: 2024-07-26T12:31:53.975Z
  type: equinix.fabric.port.status.up
}
host = http-inputs-equinix-digin.splunkcloud.com
source = Equinix
sourcetype = _json