Defining ATT&CK Data Sources, Part II: Operationalizing the Methodology

In Part I of this two-part blog series, we reviewed the current state of the data sources and an initial approach to enhancing them through data modeling. We also defined what an ATT&CK data source object represents and extended it to introduce the concept of data components.

In Part II, we’ll explore a methodology to help define new ATT&CK data source objects, show how to implement it with current data sources, and share an initial set of data source objects at https://github.com/mitre-attack/attack-datasources.

Formalizing the Methodology

In Part I we proposed defining data sources as objects within the ATT&CK framework and developing a standardized approach to name and define data sources through data modeling concepts. Our methodology to accomplish this objective is captured in five key steps — Identify Sources of Data, Identify Data Elements, Identify Relationships Among Data Elements, Define Data Components, and Assemble the ATT&CK Data Source Object.

Figure 1: Proposed Methodology to Define Data Sources Object

Step 1: Identify Sources of Data

The process kicks off by identifying the security events that inform the specific ATT&CK data source being assessed. These security events can be uncovered by reviewing the metadata in event logs that reference the data source (e.g., process name, process path, application, image). We recommend complementing this step with documentation or data dictionaries that identify relevant event logs and provide key context around the data source. It’s also important at this phase to document where the data can be collected (collection layer and platform).

Step 2: Identify Data Elements

Extracting the data elements found in the available data identifies the elements that could provide the name and definition of the data source.

Step 3: Identify Relationships Among Data Elements

During the identification of data elements, we can also start documenting the available relationships that will be grouped to enable us to define potential data components.

Step 4: Define Data Components

The output of grouping the relationships is a list of all potential data components that could provide additional context to the data source.

Step 5: Assemble the ATT&CK Data Source Object

Connecting all of the information from previous steps enables us to structure them as properties of the data source object. The table below provides an approach for organizing the combined information into a data source object.

Table 1: ATT&CK Data Source Object

Operationalizing the Methodology

To illustrate how the methodology can be applied to ATT&CK data sources, we feature use cases in the following sections that reflect and operationalize the process.

Starting with the ATT&CK data source that is mapped to the most sub-techniques in the framework, Process Monitoring, we will create our first ATT&CK data source object. Next, we will create another ATT&CK data source object around Windows Event Logs, a data source that is key for detecting a significant number of techniques.

Windows is leveraged for the use cases, but the approach can and should be applied to other platforms.

Improving Process Monitoring

1) Identifying Sources of Data: In a Windows environment, we can collect information pertaining to “Processes” from built-in event providers such as Microsoft-Windows-Security-Auditing and open third-party tools, including Sysmon.

This step also takes into account security events in which a process is the main data element associated with an adversary action, such as a process connecting to an IP address, modifying a registry key, or creating a file. The following image displays security events from the Microsoft-Windows-Security-Auditing provider and the associated context about a process performing an action on an endpoint:

Figure 2: Windows Security Events Featuring a Process Data Element
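To illustrate the output of this step, here is a minimal sketch of how those sources could be documented for a Windows host. The providers are the ones named above; the event IDs are common examples rather than an exhaustive inventory.

```yaml
# Illustrative inventory of sources of data referencing the "process" data element.
# Event IDs are examples only; actual coverage depends on audit policy and tooling.
data_source: Process
collection:
  layer: host
  platform: Windows
sources:
  - provider: Microsoft-Windows-Security-Auditing
    channel: Security
    example_events:
      - 4688   # A new process has been created
      - 4689   # A process has exited
      - 5156   # Windows Filtering Platform permitted a connection (attributed to a process)
  - provider: Microsoft-Windows-Sysmon
    channel: Microsoft-Windows-Sysmon/Operational
    example_events:
      - 1      # Process creation
      - 3      # Network connection initiated by a process
      - 10     # A process accessed another process
```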

These security events also provide information about other data elements such as “User”, “Port” or “Ip”. This means that security events can be mapped to other data elements depending on the data source and the adversary (sub-)technique.

The source identification process should leverage available documentation about organization-internal security events. We recommend using documentation about your data or examining data source information in open source projects such as DeTT&CT, the Open Source Security Events Metadata (OSSEM), or ATTACK Datamap.

An additional element that we can extract from this step is the data collection location. A simple approach for identifying this information includes documenting the collection layer and platform for the data source:

  • Collection Layer: Host
  • Platform: Windows

The most effective data collection strategy will be customized to your unique environment. From a collection layer standpoint, this varies depending on how you collect data in your environment, but Process information is generally collected directly from the endpoint. From a platform perspective, this approach can be replicated on other platforms (e.g., Linux, macOS, Android) with the corresponding data collection locations captured.

2) Identifying Data Elements: Once we identify and understand more about sources of data that can be mapped to an ATT&CK data source, we can start identifying data elements within the data fields that could help us eventually represent adversary behavior from a data perspective. The image below displays how we can extend the concept of an event log and capture the data elements featured within it.

Figure 3: Process Data Source — Data Elements
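As a hedged example of this step, the fields of a single Security 4688 event can be rolled up into the data elements they reference. Field names below follow the commonly documented 4688 schema; availability of some fields (for example, the command line) depends on audit configuration.

```yaml
# Illustrative mapping of event fields to data elements for Security event 4688
# ("A new process has been created").
event: Security 4688
fields_to_data_elements:
  NewProcessName: process          # full path of the created process
  NewProcessId: process
  ParentProcessName: process       # the creating (parent) process
  CommandLine: process             # only populated when command-line auditing is enabled
  SubjectUserName: user            # the account that requested the creation
  SubjectDomainName: user
```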

We will also use the data elements identified within the data fields to create and improve the naming of data sources and inform the data source definition. Data source designations are represented by the core data element(s). In the case of Process Monitoring, it makes sense for the data source name to contain “Process” but not “Monitoring,” as monitoring is an activity around the data source that is performed by the organization. Our naming and definition adjustments for “Process” are featured below:

  • Name: Process
  • Definition: Information about instances of computer programs that are being executed by at least one thread.

We can leverage this approach across ATT&CK to strategically remove extraneous wording in data sources.

3) Identifying Relationships Among Data Elements: Once we have a better understanding of the data elements and a more relevant definition for the data source itself, we can start extending the data elements information and identifying relationships that exist among them. These relationships can be defined based on the activity described by the collected telemetry. The following image features relationships identified in the security events that are related to the “Process” data source.

Figure 4: Process Data Source — Relationships

4) Defining Data Components: All of the information combined in the previous steps contributes to the concept of data components in the framework.

Based on the relationships identified among data elements, we can start grouping and developing corresponding designations to inform a high-level overview of the relationships. As highlighted in the image below, some data components can be mapped to one event (Process Creation -> Security 4688) while other components such as “Process Network Connection” involve more than one security event from the same provider.

Figure 5: Process Data Source — Data Components
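A hedged sketch of how those grouped relationships could be recorded as data components, tied back to the events identified in step 1 (the event mappings are illustrative):

```yaml
# Illustrative grouping of relationships into data components for the Process data source.
data_source: Process
data_components:
  - name: Process Creation
    relationship: process created process
    example_events: [Security 4688, Sysmon 1]
  - name: Process Network Connection
    relationship: process connected to ip / port
    example_events: [Security 5156, Sysmon 3]
  - name: Process Access
    relationship: process accessed process
    example_events: [Sysmon 10]
  - name: Process Termination
    relationship: process terminated
    example_events: [Security 4689, Sysmon 5]
```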

“Process” now serves as an umbrella over the linked information facets relevant to the ATT&CK data source.

Figure 6: Process Data Source

5) Assembling the ATT&CK Data Source Object: Aggregating all of the core outputs from the previous steps and linking them together represents the new “Process” ATT&CK data source object. The table below provides a basic example of it for “Process”:

Table 2: Process Data Source Object
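A condensed sketch of how those outputs come together as the properties the table describes (illustrative, not the authoritative definition):

```yaml
# Sketch of the assembled "Process" ATT&CK data source object, combining the outputs
# of steps 1 through 4. Values are limited to the Windows host use case above.
name: Process
definition: Information about instances of computer programs that are being executed by at least one thread.
collection_layers: [host]
platforms: [Windows]
data_components:
  - Process Creation
  - Process Network Connection
  - Process Access
  - Process Termination
```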

Improving Windows Event Logs

1) Identifying Sources of Data: Following the established methodology, our first step is to identify the security events we can collect pertaining to “Windows Event Logs”, but it’s immediately apparent that this data source is too broad. The image below displays a few of the Windows event providers that exist under the “Windows Event Logs” umbrella.

Figure 7: Multiple Event Logs in Windows Event Logs

The next image reveals additional Windows event logs that could also be considered sources of data.

Figure 8: Windows Event Viewer — Event Providers

With so many events, how do we define what needs to be collected from a Windows endpoint when an ATT&CK technique recommends “Windows Event Logs” as a data source?

2–3–4) Identifying Data Elements, Relationships and Data Components: We suggest that the current ATT&CK data source Windows Event Logs can be broken down, compared with other data sources for potential overlaps, and replaced. To accomplish this, we can duplicate the process we previously used with Process Monitoring to demonstrate that Windows Event Logs covers several data elements, relationships, data components and even other existing ATT&CK data sources.

Figure 9: Windows Event Logs Broken Down

5) Assembling the ATT&CK Data Source Object: Assembling the outputs from the process, we can leverage the information from Windows security event logs to create and define a few data source objects.

Table 3: File Data Source Object
Table 4: PowerShell Log Data Source Object

In addition, we can identify potential new ATT&CK data sources. The User Account case was the result of identifying several data elements and relationships around the telemetry generated when adversaries create a user, enable a user, modify properties of a user account, and even disable user accounts. The table below is an example of what the new ATT&CK data source object would look like.

Table 5: User Account Data Source Object (NEW)
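A hedged sketch of what such an object could contain, using the Windows account-management events that produce the telemetry described above:

```yaml
# Illustrative sketch of a new "User Account" data source object (Windows host perspective).
# Security event IDs are the commonly documented account-management events; the actual
# object in ATT&CK may differ in naming and scope.
name: User Account
definition: Information about user accounts and the activity performed on them.
collection_layers: [host]
platforms: [Windows]
data_components:
  - name: User Account Creation
    relationship: user created user account
    example_events: [Security 4720]                                  # a user account was created
  - name: User Account Modification
    relationship: user modified user account
    example_events: [Security 4722, Security 4725, Security 4738]    # enabled, disabled, changed
  - name: User Account Deletion
    relationship: user deleted user account
    example_events: [Security 4726]                                  # a user account was deleted
```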

This new data source could be mapped to ATT&CK techniques such as Account Manipulation (T1098).

Figure 10: User Account Data Source for Account Manipulation Technique

Applying the Methodology to (Sub-)Techniques

Now that we’ve operationalized the methodology to enrich ATT&CK data through defined data source objects, how does this apply to techniques and sub-techniques? With the additional context around each data source, we can define data collection strategies for techniques and sub-techniques with far more specificity.

Sub-Technique Use Case: T1543.003 Windows Service

T1543 Create or Modify System Process (used to accomplish the Persistence and Privilege Escalation tactics) includes the following sub-techniques: Launch Agent, System Service, Windows Service, and Launch Daemon.

Figure 11: Create or Modify System Process Technique

We’ll focus on T1543.003 Windows Service to highlight how the additional context provided by the data source objects makes it easier to identify potential security events to be collected.

Figure 12: Windows Service Sub-Technique

Based on the information provided by the sub-technique, we can start leveraging some of the ATT&CK data source objects that can be defined with the methodology. With the additional information from the Process, Windows Registry, and Service data source objects, we can drill down and use properties such as data components for more specificity from a data perspective.

In the image below, concepts such as data components not only narrow the identification of security events, but also create a bridge between high- and low-level concepts to inform data collection strategies.

Figure 13: Mapping Event Logs to Sub-Techniques Through Data Components Example

Implementing these concepts from an organizational perspective requires identifying what security events are mapped to specific data components. The image above leverages free telemetry examples to illustrate the concepts behind the methodology.
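As a hedged illustration of that bridge, the data components relevant to T1543.003 could be mapped to freely available Windows telemetry along the following lines (event IDs are common examples, not a complete or authoritative list):

```yaml
# Illustrative mapping of data components to example Windows event logs for
# T1543.003 (Create or Modify System Process: Windows Service).
technique: T1543.003
data_components:
  - component: "Service: Service Creation"
    example_events: [System 7045, Security 4697]          # a service was installed
  - component: "Process: Process Creation"
    example_events: [Security 4688, Sysmon 1]             # e.g., services.exe starting the new service binary
  - component: "Windows Registry: Windows Registry Key Modification"
    example_events: [Security 4657, Sysmon 13]            # changes under HKLM\SYSTEM\CurrentControlSet\Services
  - component: "Command: Command Execution"
    example_events: [Security 4688, PowerShell 4104]      # e.g., sc.exe or New-Service command lines
```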

This T1543.003 use case demonstrates how the methodology aligns seamlessly with ATT&CK’s classification as a mid-level framework that breaks down high-level concepts and contextualizes lower-level concepts.

Where can we find the initial Data Source objects?

The initial data source objects that we developed can be found at https://github.com/mitre-attack/attack-datasources in YAML format for easy consumption. Most of the data components and relationships were defined from a Windows host perspective, and there are many opportunities to contribute by applying this methodology to other collection layers (e.g., Network, Cloud) and platforms (e.g., macOS, Linux).

Outlined below is an example of the YAML file structure for the Service data source object:

Figure 14: Service Data Source Object — Yaml File
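A condensed sketch of the kind of structure these YAML files follow; the field names here approximate the figure above and are illustrative, with the GitHub repository as the authoritative reference.

```yaml
# Condensed, illustrative sketch of a data source object YAML file (Service).
name: service
definition: Information about software programs that run in the background and typically start automatically with the operating system.
collection_layers:
  - host
platforms:
  - Windows
data_components:
  - name: service creation
    type: activity
    relationships:
      - source_data_element: user
        relationship: created
        target_data_element: service
  - name: service modification
    type: activity
    relationships:
      - source_data_element: user
        relationship: modified
        target_data_element: service
```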

Going Forward

In this two-part series, we introduced, formalized and operationalized the methodology to revamp ATT&CK data sources. We encourage you to test this methodology in your environment and provide feedback about what works and what needs improvement as we consider adopting it for MITRE ATT&CK.

As highlighted both in this post and in Part I, mapping data sources to data elements and identifying their relationships is still a work in progress and we look forward to continuing to develop this concept with community input.

©2020 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 20–02605–3


What’s New in ATT&CK v9?

Data Sources, Containers, Cloud, and More: What’s New in ATT&CK v9?

By Jamie Williams (MITRE), Jen Burns (MITRE), Cat Self (MITRE), and Adam Pennington (MITRE)

As we promised in the ATT&CK 2021 Roadmap, today marks our April release (ATT&CK v9) and we’re thrilled to share the additions with you, and how to use them. So, what changed with this release?

  • Updated: A revamp of data sources (Episode 1 of 2)
  • Updated: Some refreshes to macOS techniques
  • New: Consolidation of IaaS platforms
  • New: The Google Workspace platform
  • New: ATT&CK for Containers (and not the kind on boats)

This is in addition to our usual updates and additions to Techniques, Groups, and Software, which you can find more details about in our release notes. Notably, this release includes 16 new Groups and 67 new pieces of Software, along with updates to 36 Groups and 51 Software entries.

Making Sense of the New Data Sources: Episode I

As much as we love tracking and nerding out over adversary behaviors, one of the most important goals of ATT&CK is to bridge offensive actions with potential defensive countermeasures. We strive to achieve this goal by tagging each (sub-)technique with defensive-focused fields/properties, such as what data to collect (data sources) and how to analyze that data in order to potentially identify specific behaviors (detections).

Many of you in the community have made great use of ATT&CK data sources ¹ ² ³, but we heard from you and recognized the opportunity for improvement. Our goal for the new data sources is to better connect the defensive data in ATT&CK with how operational defenders see and work these challenges.

The initial changes are a revamp of the data sources values, which were previously text strings without additional details or descriptions.

Example of previous data sources on OS Credential Dumping: LSASS Memory (T1003.001)

These high-level concepts were a helpful starting point, but along with issues regarding consistency, this level of detail didn’t effectively answer “Am I collecting the right data?”

Redefining Data Sources

Prior to ATT&CK’s v9 release, data sources only highlighted a specific sensor or logging system (e.g., Process Monitoring or PowerShell Logs). What we were trying to capture with this approach was the defender’s requirement to collect data about processes and executed (PowerShell) commands. However, while these clues often directed us to “where we should collect data”, they didn’t always provide details on “what data values are necessary to collect?”

Details on what to collect can be important for mapping from the framework to defensive operations. For example, Process Monitoring can take many forms depending on what technologies you are using and what data about a process is needed (e.g., do you need command-line parameters, inter-process interactions, and/or API functions executed by the process?). The same applies to PowerShell logs, which can be collected from a variety of sources (event logs, trace providers, third-party tools). The specifics of exactly what data to collect were often only highlighted in the additional context provided by the detection section of the technique.

With this in mind, we redefined data sources to focus on answering “what type of data do we need?” Our new list of data sources describes the types of objects our detection data needs to observe. Examples that are very commonly used across techniques include process, file, command, and network traffic.

Process data source (https://github.com/mitre-attack/attack-datasources/blob/main/contribution/process.yml)

Building on this, we added data components to further define specific needed elements within each data source. Going back to the OS Credential Dumping: LSASS Memory (T1003.001) example, we can see how the additional context helps us identify exactly what relevant data we need. Illustrating this with the Sysmon tool, we can quickly map our exact needs for process data to corresponding operational telemetry.

Mapping process (monitoring) data source of OS Credential Dumping: LSASS Memory (T1003.001) to real detection tools

We reviewed and remapped both data sources and data components across the entire Enterprise matrix, including the Cloud and our newest Containers platform (more details about those matrices in the New and Improved Cloud section). Featured below is an example of the new Data Source: Data Component values that replaced the previous text.

Example of updated data sources on OS Credential Dumping: LSASS Memory (T1003.001)

These values fulfill the same objective of directing us towards “where we should collect data,” as well as providing the added context of “what specific values are necessary to collect.” As defenders, we can operationalize these Data Source: Data Component pairings as part of our detection engineering process by:

1. Using data sources to identify relevant sensors/logs
(i.e., where and how do/can I collect data about processes?)

2. Using data components to identify relevant events/fields/values
(i.e., what data about processes is provided by each sensor/log and how can these values be used to identify adversary behaviors?)
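Putting those two steps together for the OS Credential Dumping: LSASS Memory (T1003.001) example above, here is a minimal sketch, assuming Sysmon plus Windows Security and PowerShell logging are available. The event IDs and fields listed are common choices, not the only options.

```yaml
# Illustrative detection-engineering notes for T1003.001, walking through steps 1 and 2.
technique: T1003.001
pairings:
  - data_source_component: "Process: Process Access"
    step1_sensors: [Sysmon]                                 # where can I collect process-access data?
    step2_events: [Sysmon 10]                               # ProcessAccess
    step2_fields: [SourceImage, TargetImage, GrantedAccess, CallTrace]   # e.g., TargetImage ending in lsass.exe
  - data_source_component: "Process: Process Creation"
    step1_sensors: [Microsoft-Windows-Security-Auditing, Sysmon]
    step2_events: [Security 4688, Sysmon 1]
    step2_fields: [Image, CommandLine, ParentImage]
  - data_source_component: "Command: Command Execution"
    step1_sensors: [Security auditing, PowerShell script block logging]
    step2_events: [Security 4688, PowerShell 4104]
    step2_fields: [CommandLine, ScriptBlockText]
```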

We’ll add additional details behind each data source when we release data source objects in October, but for now the data sources on the ATT&CK site link to our GitHub repository, where you can read more about each one. As always, we invite feedback and contributions (and a special thanks to those who have already contributed).

For more background about the data sources work, check out our previously published two-part blog series ¹ ² and/or watch us discuss and demonstrate the potential power of these improvements!

What’s New with Mac

The community was at the heart of macOS improvements featured in this release. We collaboratively updated several techniques, rescoped others, and added macOS specific malware. Our focus was primarily on Persistence and Execution, building in red team walkthroughs and code examples for a deeper look into the sub-techniques. Along with the rest of Enterprise, we also refactored macOS data sources to start building out visibility for defenders. We’ve only scratched the surface and are excited to continue enhancing and updating macOS and Linux content targeted at our October release.

New and Improved Cloud

As we highlighted in the 2021 roadmap, this release features the consolidation of the former AWS, Azure, and GCP platforms into a single IaaS (Infrastructure as a Service) platform. In addition to community feedback favoring consolidation, this update creates a more inclusive domain that represents all Cloud Service Providers.

We also refactored data sources for Cloud platforms, with a slightly different flavor than the host-based data sources. Specifically for IaaS, we wanted to align more with the events and APIs involved in detections instead of just focusing on the log sources (e.g., AWS CloudTrail logs, Azure Activity Logs). With that goal in mind, the new Cloud data sources include Instance, Cloud Storage, and others that align with terminology found in events within Cloud environments.

Instance data source mapped to potential events
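To make that alignment concrete, here is a hedged sketch of how components of the Instance data source might map to provider events. The AWS CloudTrail entries are standard EC2 API call names and the Azure entries are standard Activity Log operations, but they are listed only as examples.

```yaml
# Illustrative mapping of the Instance data source to cloud provider events.
# Coverage and naming vary by provider and logging configuration.
data_source: Instance
collection_layer: cloud
data_components:
  - name: Instance Creation
    aws_cloudtrail: [RunInstances]
    azure_activity_log: [Microsoft.Compute/virtualMachines/write]
  - name: Instance Start
    aws_cloudtrail: [StartInstances]
    azure_activity_log: [Microsoft.Compute/virtualMachines/start/action]
  - name: Instance Stop
    aws_cloudtrail: [StopInstances]
    azure_activity_log: [Microsoft.Compute/virtualMachines/deallocate/action]
  - name: Instance Deletion
    aws_cloudtrail: [TerminateInstances]
    azure_activity_log: [Microsoft.Compute/virtualMachines/delete]
```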

An ATT&CK for Cloud bonus in this release is the addition of the Google Workspace platform. Since ATT&CK already covers Office 365, we wanted to ensure that users of Google’s productivity tools were also able to map similar applicable adversary behaviors to ATT&CK. We hope that this platform addition is helpful to the community, and would appreciate any feedback or insights.

Container Updates (that don’t include the Suez Canal)

We’re also excited to publish ATT&CK for Containers in this release! An ATT&CK research team partnered with the Center for Threat-Informed Defense to develop this contribution to ATT&CK. You can find more information about the ATT&CK for Containers research project and the new matrix in their blog post.

ATT&CK Containers platform matrix

What’s Next

We hope you’re as excited as we are about v9 and are looking forward to the rest of the updates and new capabilities we have planned for 2021. October’s release should include episode 2 of data sources, featuring descriptive objects, as well as updates to ATT&CK for ICS and Mobile. We’ll also continue enhancing coverage of macOS and Linux techniques, so now is a great time to let us know if you have contributions or feedback on one of those platforms. We may have some additional improvements to announce in the coming months, but we stand by our promise of nothing as disruptive as the new tactics and sub-techniques from 2020.

We look forward to connecting with you on email, Twitter, or Slack.

©2021 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 21–00706–2.

