
ATT&CK 2022 Roadmap

Where We’ve Been and Where We’re Going​

In 2021, as we navigated a pandemic and moved into a new normal, we continued evolving ATT&CK without any significant structural overhauls (as promised). We made strides in many areas — including the ATT&CK data sources methodology, which more effectively represents adversary behavior from a data perspective. We refined and added new macOS and Linux content and released ATT&CK for Containers. The Cloud domain benefitted from consolidation of the former AWS, Azure, and GCP platforms into a single IaaS (Infrastructure as a Service) platform. We updated ICS with cross-domain mappings, and our infrastructure team introduced new ATT&CK Navigator elements to enhance your layer comparison and visualization experience. Finally, we added 8 new techniques, 27 sub-techniques, 24 new Groups, and over 100 new Software entries.

2022 Roadmap

We have several exciting adjustments to the framework on the horizon for 2022, and while we will be making some structural changes this year (Mobile sub-techniques and the introduction of Campaigns), it won’t be nearly as painful as the addition of Enterprise sub-techniques in 2020. In addition to Campaigns and Mobile subs, our key adjustments this year include converting detections into objects, innovating how you can use overlays and combinations, and expanding ICS assets. We plan on maintaining the biannual release schedule of April and October, with a point release (v11.1) for Mobile sub-techniques.

ATT&CKcon 3.0 | March 2022

Your wait is finally over for ATT&CKcon, and we're thrilled to be hosting it in McLean, VA on March 29–30. We welcome you to join the ATT&CK team and those across the community to hear about all the updates, insights, and creative ways organizations and individuals have been leveraging ATT&CK. We'll be live streaming the full conference for free, and you can find all of the latest details and updates on our ATT&CKcon 3.0 page.

Detection Objects | April & October 2022

Over the past few years, transforming various actionable ATT&CK fields into managed objects has been a recurring theme. In v5 of ATT&CK, we converted mitigations into objects to enhance their value and usability — with this conversion, you can now identify a mitigation and pivot to the various techniques it can potentially prevent. This is a feature that many of you have leveraged to map ATT&CK to different control/risk frameworks. We also converted data sources to objects for the v10 release, enabling similar pivoting and analysis opportunities.
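
As a sketch of that mitigation-to-technique pivot, the snippet below queries ATT&CK's public STIX bundle with the Python stix2 library; the bundle file name and the choice of mitigation are illustrative:

```python
from stix2 import MemoryStore, Filter

# Load the public Enterprise ATT&CK STIX bundle (file name illustrative).
src = MemoryStore()
src.load_from_file("enterprise-attack.json")

# Mitigations are STIX course-of-action objects; pick any one of them.
mitigation = src.query([Filter("type", "=", "course-of-action")])[0]

# "mitigates" relationships point from a mitigation to the attack-pattern
# (technique) objects it can potentially prevent.
rels = src.query([
    Filter("type", "=", "relationship"),
    Filter("relationship_type", "=", "mitigates"),
    Filter("source_ref", "=", mitigation["id"]),
])
techniques = [src.get(r["target_ref"]) for r in rels]
print(mitigation["name"], "->", [t["name"] for t in techniques])
```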

Next, we plan on implementing a parallel approach for detections, taking the currently free text featured in techniques, and refining and merging them into descriptions that are connected to data sources. This will enable us to describe for each technique what you need to collect as inputs for that detection (data sources), as well as how you could analyze that data to identify a given technique (detection).

Figure 1: Example ATT&CK technique (T1595.001 Active Scanning: Scanning IP Blocks) showing a draft of the complete Data Sources to Data Components to Detections mappings.
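
As a purely illustrative sketch (the final schema for detection objects had not been settled at the time of writing), a structured detection for the technique in Figure 1 might tie together the data source, data component, and analytic guidance like this:

```python
# Hypothetical shape of a structured detection (illustrative only; not
# the final ATT&CK schema).
structured_detection = {
    "technique": "T1595.001",          # Active Scanning: Scanning IP Blocks
    "data_source": "Network Traffic",
    "data_component": "Network Traffic Flow",
    "detection": "Monitor network flows for scanning patterns, such as one "
                 "source probing many sequential destination addresses.",
}
```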

Campaigns | October 2022

One of the more significant changes you can expect this year is the introduction of Campaigns. We define campaigns as a grouping of intrusion activity conducted over a specific period of time with common targets and objectives; this activity may or may not be linked to a specific threat actor. The SolarWinds cyber intrusion, for instance, would become a campaign attributed to the G0016 threat group in ATT&CK. In ATT&CK's existing structure, all activity for a given threat actor is combined under a single Group entry, making it challenging to accurately see trends, understand how a threat actor has evolved over time (or not), identify the variance between different events, or, conversely, identify certain techniques that an actor may rely on.

In ATT&CK, we've never added activity as a Group that hasn't been given a name by someone else. For example, if a report describes the behaviors of a group or campaign, but never gives that intrusion activity a unique name like FUZZYSNUGGLYDUCK/APT1337 (or links it to someone else's reporting that does), we wouldn't incorporate that report into ATT&CK. With the introduction of Campaigns we'll start including reports that leave activity unnamed and use our own identifiers (watch out for Campaign C0001). On the flip side, this new structure will let us better manage activity where too many things have been given the same name (e.g., Lazarus), providing us a way to tease apart activity that shouldn't have been grouped together. Finally, we'll be able to better address intrusion activity where multiple threat actors may be involved, such as Ransomware-as-a-Service operations.

We’re still working to best determine how Campaigns and associated IDs will be displayed in ATT&CK and will provide additional detail in the coming months. Group and Software pages will mostly remain unchanged — they’ll still feature collective lists of techniques and sub-techniques so network defenders can continue to create overall associated Navigator layers and conduct similar analysis. However, we’ll be adding Campaign links to the associated Group/Software pages. We’ll be providing additional details later in the year, as we prepare to integrate Campaigns as part of the October release.

Mobile | April 2022

We’ve been talking about Mobile sub-techniques for a while, and we’re thrilled to say that they’re almost here. The Mobile team was hard at work in 2021, bringing ATT&CK for Mobile into feature equity with ATT&CK for Enterprise, including identifying where sub-techniques would fit into the Mobile matrix. As we covered in our October 2021 v10 release post, the Mobile sub-techniques will mirror the structure of the Enterprise sub-techniques to address granularity levels. We’ll be including a beta version of the sub-techniques, similar to what we did with Enterprise, for community feedback as part of the April ATT&CK v11 release. We plan on publishing the finalized sub-techniques in a point release (e.g., v11.1), and we’ll include more details about the subs process and timeline in our April release post. In addition to sub-techniques, we’ll be working on a concept for Mobile data source objects, and reigniting our mini-series highlighting significant threats to mobile devices that we kicked off last year. As always, we remain very interested in adversary behavior targeting mobile devices, so if you would like to help us create new techniques, or if you have observed behaviors you’d like to share, reach out to us.

Finally, stay tuned for the ATT&CK for Mobile 2022 Roadmap that will be arriving soon. While we don’t typically publish separate roadmaps for technology domains, Mobile needs some additional space this year to cover the updates and planned content changes.

MacOS and Linux | April & October 2022

We made many adjustments, additions, and content updates to the macOS and Linux platforms last year, with a focus on macOS. For 2022 we hope to maintain the macOS momentum while transitioning our focus to updating Linux. Our April release will center around resolving several macOS contributions from last year. These updates include broadening the scope of parent techniques to include additional platforms, adding sub-techniques, updating procedures with specific usage examples, and supporting the data sources + detection efforts. We will continue to update macOS throughout the year and greatly appreciate the community engagement and all of the contributors that have enabled us to better represent this platform.

The April release will also feature revised language and platform mapping for Linux. We're aiming for an improved representation of Linux within ATT&CK for all techniques by our October release. Although Linux is frequently leveraged by adversaries, public reporting is often scarce on detail, making this a challenging platform for ATT&CK. Our ability to describe this space is closely tied to those of you in the Linux security community, and we hope to engage and establish more connections with you over the next several months. If you're interested in sharing any observed activities or suggestions for techniques, please reach out and let us know.

ICS | October 2022

We updated our ICS content and data sources in 2021, and over the next several months, we’ll be expanding ICS Assets and adding detections. Asset names are tied to specific ICS verticals (e.g., electric power, water treatment, manufacturing), and the associated technique mappings enable users to understand if and how techniques apply to their environments. In addition, more granular asset definitions will help to highlight similarities and differences in functionality across technologies and verticals. The detections we’ll be adding to each technique will provide guidance on how the recently updated data sources can be used to identify adversary behavior. Finally, we’re preparing to integrate ICS onto the same platform as Enterprise and join the rest of the domains on the ATT&CK website (attack.mitre.org) later this year.

Overlays and Combinations | October 2022

Throughout the next several months, we’ll continue moving towards developing and sharing ideas for overlays and combinations, or how you can pull various ATT&CK platforms and domains together into a specialized view of ATT&CK. Using Linux and Containers together, for example, or integrating security across Enterprise and Mobile, or between Enterprise and ICS. Our goal with this effort is to provide the tools and resources for the community to leverage the various spaces of ATT&CK, and tailor them to their security needs.

Connect With Us!

ATT&CK will always be community-driven and our continued impact hinges on our collaboration with all of you. Your on-the-ground experience and input enables us to continue to evolve and we look forward to connecting with you on email, Twitter, or Slack.



Introducing ATT&CK v10: More Objects, Parity, and Features

By Amy L. Robertson (MITRE), Alexia Crumpton (MITRE), and Chris Ante (MITRE)

As announced a couple of weeks ago, we're back with the latest release and we're thrilled to reveal all the updates and features waiting for you in ATT&CK v10. The v10 release includes the next episode in our data sources saga, as well as new content and our usual enhancements to (sub-)techniques, Groups, and Software across Enterprise, Mobile, and ICS; you can find more details in our release notes.

Making Sense of the New Data Sources: Episode II

In ATT&CK v9, we launched the new form of data sources, which featured an updated structure for data source names (Data Source: Data Component), reflecting:

“What is the subject/topic of the collected data (file, process, network traffic, etc.)?” :

“What specific values/properties are needed in order to detect adversary behaviors?”

These updates were linked to YAML files in GitHub, but weren't yet fully integrated into the rest of ATT&CK. Our updated content in ATT&CK v10 aggregates this information about data sources, while structuring them as the new ATT&CK data source objects (somewhat similar to how Mitigations are reflected).

The data source object features the name of the data source as well as key details and metadata, including an ID, a definition, where it can be collected (collection layer), what platform(s) it can be found on, and the data components highlighting relevant values/properties that comprise the data source. Featured below is an example of a data source page in ATT&CK v10.

Figure 1: Network Traffic Data Source Page

Data Components are also listed below, each highlighting mappings to the various (sub-)techniques that may be detected with that particular data. On individual (sub-)techniques, data sources and components have been relocated from the metadata box at the top of the page to be collocated with Detection content.

Figure 2: New Data Source Placement on Technique (T1055.001) Page

These data sources are available for all platforms of Enterprise ATT&CK, including our newest additions that cover OSINT-related data sources mapped to PRE platform techniques.

Figure 4: Malware Repository Data Source Page

These updated structures are also visible in ATT&CK’s STIX representation, with both the data sources and the data components captured as custom STIX objects. You’ll be able to see the relationships between those objects, with the data sources featuring one or more data components, each of which detects one or more techniques. For more information about ATT&CK’s STIX representation, including these new objects and relationships, you can check out our STIX usage document.

Figure 5: Data Source STIX Model
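
A minimal sketch of working with these custom objects via the Python stix2 library (the bundle file name is illustrative) looks like this:

```python
from stix2 import MemoryStore, Filter

src = MemoryStore()
src.load_from_file("enterprise-attack.json")  # public ATT&CK STIX bundle

# Data sources and data components are custom STIX object types in v10.
data_sources = src.query([Filter("type", "=", "x-mitre-data-source")])
components = src.query([Filter("type", "=", "x-mitre-data-component")])

# Each data component "detects" one or more techniques (attack-patterns).
detects = src.query([
    Filter("type", "=", "relationship"),
    Filter("relationship_type", "=", "detects"),
])
print(len(data_sources), "data sources,", len(components), "components,",
      len(detects), "'detects' relationships")
```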

We hope that these enhancements further increase our ability to translate our understanding of the adversary behaviors captured within ATT&CK to the data we collect as defenders. We are very excited to see these data source objects grow and evolve, and like the rest of ATT&CK, invite the community to submit contributions and feedback!

Note: We will no longer be working with Enterprise data sources in GitHub after ATT&CK v10. Moving forward we will accept all related contributions through our normal contribution process.

MacOS and Linux: Now with New Content!

Over the past several months, we've been continuing to improve and expand coverage across the macOS and Linux platforms. We understand adversaries actively target these platforms; however, there is significantly less public reporting on adversarial hands-on-keyboard procedures and malware analysis. We're pleased to report that we've been collaborating with macOS security and vulnerability research contributors across the globe to address these challenges. In upcoming releases, we're hoping to leverage this same community engagement for Linux. We're excited to see the growth in content from the community's contributions; the improvements, ranging from how we capture new techniques to how we convey the impact of existing ones, were a collaborative effort.

One of the most notable changes we made for techniques across the board was providing more in-depth references and use-cases on how procedures and processes work, and the impact they have. Remote services along with additional techniques for macOS and Linux received some attention, but most improvements were more detailed examples in the description section with supporting detection ideas. Along with the rest of Enterprise, we also updated our macOS data sources to enhance defender visibility.

ICS: Object-Oriented and Integrating

ICS has been focusing on feature equity with Enterprise, including updating data sources, adding and refining techniques, revamping assets, and charting out our detections plan. We're also making some key changes to facilitate hunting in ICS environments. As we noted in the 2021 Roadmap, v10 also includes cross-domain mappings of Enterprise techniques to software that was previously only represented in the ICS Matrix, including Stuxnet, Industroyer, and several others. The fact that adversaries don't respect theoretical boundaries is something we've consistently emphasized, and we think it's crucial to feature Enterprise-centric mappings for more comprehensive coverage of all the behaviors exhibited by the software. With Stuxnet and Industroyer specifically, both malware families operated within OT/ICS networks, but the two incidents displayed techniques that are also well researched and represented within the Enterprise matrix. Based on this, we created Enterprise entries for the ICS-focused software to provide network defenders with a view of software behavior spanning both matrices. We also expect the cross-domain mappings to enable you to leverage the knowledge bases together more effectively.

For data sources, we’re aligning with Enterprise ATT&CK in updating data source names. ICS’s current release reflects Enterprise’s v9 data sources update, with the new name format and content featured in GitHub. These data sources will be linked to YAML files that provide more detail, including what the data sources are and how they should be used. For future releases we plan on mapping the more granular assets to techniques to enable you to track how these behaviors can affect a technique, or what assets these behaviors are associated with. On the detections front, we’re working behind the scenes to add detections to each technique, and this will be reflected in future releases (we expect detections to really help out in hunt and continuous monitoring). Also in 2022, we’re preparing to integrate onto the same development platform as Enterprise, the ATT&CK Workbench, and join the rest of the domains on the ATT&CK website (attack.mitre.org).

Expanding Our Mobile Features

In the Mobile space, we’ve been focused on catching up on the contributions from the community, updating (sub-)techniques, Groups, and Software, and enhancing general parity with Enterprise. We’ve also been working hard behind the scenes to implement sub-techniques as mentioned in our 2021 Roadmap. We’re excited to introduce this new Mobile structure in April 2022, to better align with other platforms on Enterprise. Our plan is to do a beta release for the sub-techniques prior to the release of v11 to provide you with an opportunity to test out those updates and provide feedback.

About Cloud

Along with the rest of Enterprise, we’ve been updating content across Cloud, collaborating with community members on activity in the Cloud domain, and keeping an eye out for new platforms to add to the space. We also continued working on data sources, although as we outlined for the v9 release, our Cloud data sources are a little different than the host-based data sources, specifically aligning more with the events and APIs involved in detections instead of just focusing on the log sources.

What’s Next in 2022?

We hope you're as excited as we are about v10, and we'd love your feedback and for you to join us in shaping our v11 release. We already have a lot on the horizon for 2022, including structured detections, campaigns, tools to enable overlays and combinations, and ATT&CKcon. If you have feedback, comments, contributions, or just want to ask questions, connect with us on email, Twitter, or Slack.

©2021 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 21–00706–18.



How Port Scanning Works? TCP & UDP Port Scanning Explained

Identifying open ports on a target system is an extremely important step in defining the attack surface of that system. Open ports correspond to the networked services that are running on a system. Programming errors or implementation flaws can make these services vulnerable to attack and may lead to compromise of the entire system. To work out the possible attack vectors, we must first enumerate the open ports on all of the remote systems.

Port scanning explained

These open ports correspond to services that may be addressed with either UDP or TCP traffic. Both TCP and UDP are transport protocols. Transmission Control Protocol (TCP) is the more widely used of the two and provides connection-oriented communication. User Datagram Protocol (UDP) is a non-connection-oriented protocol that is sometimes used with services for which speed of transmission is more important than data integrity.

The penetration testing method used to discover these services is called port scanning. In this article we are going to cover some basic theory about port scanning, so that we can easily understand the working methodology of any port scanner tool.

UDP Port Scanning

Because TCP is the more widely used transport layer protocol, services that operate over UDP are frequently forgotten. Despite the natural tendency to overlook UDP services, it is absolutely critical that these services are enumerated to acquire a complete understanding of the attack surface of any given target. UDP scanning can often be challenging, tedious, and time consuming. In the next article we will cover how to perform a UDP port scan in Kali Linux. To understand how these tools work, it is important to understand the two different approaches to UDP scanning that can be used.

The first method is to rely exclusively on ICMP port-unreachable responses. This sort of scanning relies on the assumption that any UDP port not associated with a live service will return an ICMP port-unreachable response, and that a lack of response can be interpreted as a sign of a live service. While this approach can be effective in some circumstances, it can also return inaccurate results in cases where the host is not generating port-unreachable responses, or where the port-unreachable replies are rate limited or filtered by a firewall.
The second method is to use service-specific probes that attempt to solicit a response, which would indicate that the expected service is running on the targeted port. While this approach can be highly effective, it can also be very time consuming.
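
As a rough sketch of the first approach, the following Python snippet uses Scapy (which must be installed, and requires raw-socket privileges) to send an empty UDP datagram and classify the port from the response; the target and port values are illustrative:

```python
from scapy.all import IP, UDP, ICMP, sr1  # pip install scapy; run as root

def udp_state(target: str, port: int, timeout: float = 2.0) -> str:
    """Classify a UDP port using the ICMP port-unreachable method."""
    reply = sr1(IP(dst=target)/UDP(dport=port), timeout=timeout, verbose=0)
    if reply is None:
        # No response: possibly a live service (or a silently dropped probe).
        return "open|filtered"
    if reply.haslayer(ICMP) and reply[ICMP].type == 3 and reply[ICMP].code == 3:
        return "closed"  # ICMP type 3 / code 3 = port unreachable
    if reply.haslayer(UDP):
        return "open"    # the service answered the empty datagram directly
    return "filtered"

print(udp_state("192.168.1.10", 53))
```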

TCP Port Scanning

In this article, several different methods of TCP scanning will be covered. These methods include stealth scanning, connect scanning, and zombie scanning. To understand how these scanning techniques work, it is important to understand how TCP connections are established. TCP is a connection-oriented protocol, and data is only transported over TCP after a connection has been established between two systems. The process associated with establishing a TCP connection is often referred to as the three-way handshake. This name alludes to the three steps involved in the connection process. The following diagram shows this process in graphical form:

The three-way handshake

From the above picture we can see that a TCP SYN packet is sent from the device that wishes to establish a connection to a port on the device it desires to connect with. If the service associated with the receiving port grants the connection, it will reply to the requesting system with a TCP packet that has both the SYN and ACK bits set. The connection is established only when the requesting system responds with a TCP ACK. This three-step process (the three-way handshake) establishes a TCP session between the two systems. All of the TCP port scanning techniques perform some variation of this process to identify live services on remote hosts.
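
As a minimal illustration of the full handshake in practice, the connect scanning technique discussed next can be sketched with Python's standard socket library; the operating system completes the three-way handshake for us when connect_ex() returns 0 (the host and port range are illustrative):

```python
import socket

def connect_scan(host: str, ports: range, timeout: float = 1.0) -> list[int]:
    """Return the ports where a full TCP connection could be established."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the three-way handshake completes.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(connect_scan("192.168.1.10", range(1, 1025)))
```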

Connect scanning and stealth scanning are both quite easy to understand. Connect scanning is used to establish a full TCP connection to each port that is scanned; that is to say, for each port scanned, the complete three-way handshake is performed. If a connection is successfully established, the port is considered open.
Stealth scanning, by contrast, does not establish a full connection. Stealth scanning is also referred to as SYN scanning or half-open scanning. For each port scanned, a single SYN packet is sent to the destination port, and every port that replies with a SYN+ACK packet is assumed to be running a live service. Since no final ACK is sent from the initiating system, the connection is left half-open. This is referred to as stealth scanning because logging solutions that only record established connections will not record any evidence of the scan. The final method of TCP scanning that will be discussed in this article is a technique called zombie scanning. The aim of zombie scanning is to map open ports on a remote system without producing any evidence that you have interacted with that system. The principles behind how zombie scanning works are somewhat complex. A zombie scan is performed with the following steps:

  • Identify a remote system to serve as the zombie host. The system should have the following characteristics:
  1. The system needs to be idle and must not communicate actively with other systems over the network.
  2. The system needs to use an incremental IPID sequence.
  • Send a SYN+ACK packet to this zombie host and record the initial IPID value.
  • Send a SYN packet with a spoofed source IP address of the zombie system to the scan target system.
  • Depending on the status of the port on the scan target, one of the following two things will happen:
  1. If the port is open, the scan target will return a SYN+ACK packet to the zombie host, which it believes sent the original SYN request. In this case, the zombie host will respond to this unsolicited SYN+ACK packet with an RST packet and thereby increment its IPID value by one.
  2. If the port is closed, the scan target will return an RST response to the zombie host, which it believes sent the original SYN request. This RST packet will solicit no response from the zombie, and the IPID will not be incremented.
  • Send another SYN+ACK packet to the zombie host, and evaluate the final IPID value of the returned RST response. If this value has incremented by one, then the port on the scan target is closed, and if the value has incremented by two, then the port on the scan target is open.

The following image shows the interactions that take place when we use a zombie host to scan an open port:

Zombie port scanning process

To perform a zombie scan, an initial SYN+ACK request is sent to the zombie system to determine the current IPID value from the returned RST packet. Then, a spoofed SYN packet is sent to the scan target with the source IP address of the zombie system. If the port is open, the scan target will send a SYN+ACK response back to the zombie. Since the zombie did not actually send the initial SYN request, it will interpret the SYN+ACK response as unsolicited and send an RST packet back to the target, thereby incrementing its IPID by one.

Finally, another SYN+ACK packet is sent to the zombie, which will return an RST packet and increment the IPID one more time. An IPID that has incremented by two from the initial response indicates that all of these events have transpired and that the destination port on the scanned system is open. Alternatively, if the port on the scan target is closed, a different series of events will transpire, causing the final RST response's IPID value to increment by only one.
The following picture demonstrates the sequence of events associated with the zombie scan of a closed port:

Zombie scan of a closed port

If the destination port on the scan target is closed, an RST packet will be sent to the zombie system in response to the initially spoofed SYN packet. Since the RST packet solicits no response, the IPID value of the zombie system will not be incremented. As a result, the final RST packet returned to the scanning system in response to the SYN+ACK packet will have an IPID incremented by only one.

This process can be performed for each port that is to be scanned, and it can be used to map open ports on a remote system without leaving any evidence that a scan was performed by the scanning system.
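
The whole sequence can be sketched with Scapy as follows; this is a simplified, illustrative implementation of the IPID logic described above (single probe, no retries, illustrative addresses), and it requires raw-socket privileges:

```python
import time
from scapy.all import IP, TCP, send, sr1  # pip install scapy; run as root

def zombie_ipid(zombie: str, timeout: float = 2.0) -> int:
    """Send SYN+ACK to the zombie and read the IPID of its RST reply."""
    rst = sr1(IP(dst=zombie)/TCP(dport=80, flags="SA"),  # probe port arbitrary
              timeout=timeout, verbose=0)
    return rst[IP].id

def zombie_scan(zombie: str, target: str, port: int) -> str:
    before = zombie_ipid(zombie)
    # Spoof a SYN toward the target, pretending to be the zombie.
    send(IP(src=zombie, dst=target)/TCP(dport=port, flags="S"), verbose=0)
    time.sleep(1)  # give the target and zombie time to exchange packets
    after = zombie_ipid(zombie)
    # Open port: the target SYN+ACKs the zombie, the zombie RSTs (+1), and
    # our second probe elicits another RST (+1): an increment of two total.
    return "open" if after - before >= 2 else "closed|filtered"

print(zombie_scan("192.168.1.20", "192.168.1.10", 445))
```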

This is how port scanning methods work. In this article we tried to do something different: it isn't about any particular tool, but if we are using Kali Linux or working in the cybersecurity field, then we should have this kind of technical knowledge. We hope this article also gets some love. That's all for today.

Love our articles? Make sure to follow us on Twitter and GitHub, where we post article updates. To join our KaliLinuxIn family, join our Telegram Group. We are trying to build a community for Linux and cybersecurity. We are always happy to help everyone in the comment section, which is open to all. We read each and every comment, and we always reply.

Certified Ethical Hacker Version 11 | CEHv11 Exam (312-50)

Certified Ethical Hacker Version 11 | CEHv11 The Certified Ethical Hacker (CEH) credential is the most trusted ethical hacking certification and accomplishment recommended by employers globally. It is the most desired information security certification and represents one of the fastest-growing cyber credentials required by critical infrastructure and essential service providers. Since the introduction of CEH …


ATT&CK 2021 Roadmap

A review of how we navigated 2020 and where we’re heading in 2021

With the monumental disruptions, challenges, and hybrid work environments of 2020, we found innovative ways to collaborate and maintain momentum. We started off 2020 by launching ATT&CK for ICS and expanding it over the next few months to feature mitigations and STIX integration. A proposed ATT&CK data sources methodology was introduced, with the goal of more effectively representing adversary behavior from a data perspective. We added sub-techniques to address abstraction imbalances across the knowledge base, and for a few months, the matrix could fit on one slide again. PRE-ATT&CK's scope was integrated into Enterprise ATT&CK, and two new tactics, Reconnaissance and Resource Development, emerged from the fusion. We released the Network Devices platform, featuring techniques targeting network infrastructure devices. The Cloud domain benefitted from refined Cloud data sources and new Cloud technique content. Our infrastructure team updated ATT&CK Navigator with new elements to enhance your visualization and planning experience. We launched the virtual ATT&CKcon Power Hour, featuring insights from ATT&CK practitioners and the ATT&CK team. Finally, we mapped techniques used in a series of intrusions involving SolarWinds (recently published as a point release to ATT&CK, v8.2) and publicly tracked reports describing those behaviors.

2021 Roadmap

Our objectives for the next 12 months shouldn’t be as disruptive as 2020’s changes. There aren’t significant structural adjustments planned and we’re looking forward to a period of stability. Our chief focus will be on enhancing and enriching content across the ATT&CK platforms and technical domains. We’ll be making incremental updates to core concepts, such as Software and Groups, and working towards a more structured contributions process, while maintaining a biannual release tempo, scheduled for April and October.

Improving and Expanding Mac/Linux | April & October 2021

We first introduced Mac and Linux techniques in 2017 and we’re ramping up our effort to improve and expand the coverage in this space. Our research efforts are ongoing, and we’re coordinating with industry partners to enrich the existing techniques and develop additional content to cover evolving adversary behavior. We’re also venturing into sub-technique exploration and the refactoring of data sources. Our current timeline is targeting macOS updates for the April release and slating Linux updates for the October release. Interested in contributing to this effort? Connect with us or check out our Contributions page.

Evolving ATT&CK Data Sources | April 2021 & October 2021

You may be aware that we’re revamping the process for ATT&CK data sources. Data sources are currently reflected in ATT&CK as properties/field objects of (sub-)techniques and are featured as a list of text strings without additional details or descriptions. With the refactoring, we’re converting the data sources into objects, a role previously only held by tactics, techniques, groups, software and mitigations. With data sources as objects, they’ll have their own corresponding properties, or metadata.

The new metadata provided by data sources includes the concepts of relationships and data components. These concepts will more effectively represent adversary behavior from a data perspective and will provide an additional sub-layer of context to data sources. Data components narrow the identification of security events, but also create a bridge between high- and low-level concepts to inform data collection strategies. They'll also provide a good reference point to start mapping telemetry collected in your environment to specific (sub-)techniques and/or tactics. With the additional context around each data source, the results can be leveraged with more detail when defining data collection strategies for techniques and sub-techniques.

An update of current Enterprise ATT&CK data sources in line with this new methodology is currently planned for the April release, with objects coming in October. Data source refactoring for other ATT&CK domains and platforms is also in progress.

Consolidating Cloud Platforms and Enhancing Data Sources | April 2021

Later this year we’ll be consolidating the AWS, Azure, and GCP platforms into a single Infrastructure as a Service (IaaS) platform. Many of you in the community provided feedback in favor of consolidation, and currently these three platforms share the same set of techniques and sub-techniques. Additionally, an IaaS platform will evolve ATT&CK for Cloud into a more inclusive domain, representing all Cloud Service Providers.

We're also focused on creating more beneficial data sources for Cloud, shifting from a log-centric approach that isn't necessarily the most effective for building detections, to aligning to the events and API calls within the logs. The approach will mirror the refactoring happening across the rest of Enterprise and will be incorporated in future Cloud updates. IaaS data sources are in progress, and we'll be expanding coverage to the SaaS, Azure AD, and Office 365 platforms. The initial IaaS data sources are the result of the 2020 revamping that involved normalizing the name and structure of data sources across multiple Cloud vendors, along with the APIs and events involved in detections across those vendors that are relevant to a particular data source. The example below features a draft of the Instance data source:

If you have input or opinions on the future platforms or the data sources refactoring, let us know! We want to ensure that the changes we have planned are going to be beneficial to and continue to support your efforts.

Cross-Domain Mapping and Updating ICS Data Sources | October 2021

Along with Enterprise, one of our goals for ATT&CK for ICS this year is updating data sources. Network traffic is a popular source of data in ICS networks, but it often overshadows other valuable data sources, including embedded device logs, application logs, and operational databases. Some of the key elements we’ll be focusing on are processing information, asset management, configuration, performance and statistics, and physical sensors.

We're also working on cross-domain mapping. We've always emphasized that adversaries don't respect theoretical boundaries, so having a deep understanding of how IT platforms are leveraged to access different domains or technology stacks, like ICS and Mobile, is really critical. The cross-domain mappings will help inform how to use the knowledge bases together and will more effectively demonstrate the full gamut of adversary behavior. Over the next few months, we'll be focusing on mapping significant attacks against ICS, including Stuxnet, Industroyer, the 2015 Ukrainian attacks, and Triton, to Enterprise techniques. This is a community effort, so if you have feedback on how you're currently using mitigations, any input on our data source focus, or would like to contribute to the matrix, we encourage you to connect with us.

Refining and Expanding Mobile | October 2021

A key focus area for Mobile this year is working towards feature equity with Enterprise. This means continuing to refine and enhance our content, including working to identify new techniques, building out Software entries, and enhancing Group information. We’ll also be developing Mobile sub-techniques, which would provide that extra level of detail for the techniques that need it, without significantly expanding the size of the model. In addition to resolving the different levels of granularity between current techniques, sub-techniques would provide enhanced synergy between Mobile and the broader ATT&CK. The integration could potentially include unifying techniques between Mobile and Enterprise and using sub-techniques to differentiate mobile device specifics. Similar to Cloud and Network, the mobile device-specific content would still be separately viewable.

We've been coordinating with MITRE Engenuity as they look to examine mobile threats and how to evaluate the types of capabilities and solutions that address the threat. Their eventual goal is to provide public evaluations for Mobile, but there is still a lot of collaboration and awareness building needed to bring the community up to a collective understanding of the mobile threat landscape. Building on the criticality of a collective community understanding of Mobile threats, we kicked off a mini-series highlighting significant threats to mobile devices, and we'll continue walking through mobile security threats and how to use ATT&CK for Mobile to address them over the next few months. We're very interested in any adversary behavior targeting mobile devices that you're seeing in the wild. If you would like to help us build out new techniques, or if you have data or observed behaviors you'd like to share, reach out or take a look at our Contributions page.

Investigating Container-based Techniques | Upcoming

Technique coverage for Container technologies (such as Kubernetes and Docker) have been on our docket for a while, and following the call for input in December, supporting a Center for Threat Informed Defense (CTID) research project, many of you responded with the contributions that informed the draft ATT&CK for Containers. We’re excited about this milestone, but we’re still exploring a few avenues before incorporating the techniques into ATT&CK. Most critically, we’re working to determine if adversary behaviors targeting containers result in objectives other than cryptomining. Our own research and ongoing conversations with contributors seem to point to most behaviors eventually leading to cryptomining activities, even when they involve accessing secrets such as cloud credentials.

With this in mind — we need your expertise and views from the trenches! If you’ve seen or heard of adversaries using containers for purposes such as exfiltration or collection of sensitive data, your input would be invaluable. With a better understanding of how adversary behavior in containers links to the rest of Enterprise, we’ll be able to develop a better approach for adding Containers techniques in a future ATT&CK release. We’re interested in your opinions on any gaps in the matrix or in-the-wild adversary behaviors that are not currently represented — let us know if you’d like to have a conversation!

Unleashing ATT&CK Workbench | Upcoming

Later this year we’re partnering with the CTID to launch a new toolset that will enable you to get behind the wheel and explore, create, annotate and share extensions of ATT&CK. ATT&CK Workbench will provide the tools, infrastructure, and documentation to simplify how you operate and adapt ATT&CK to local environments while staying in sync with upstream sources of ATT&CK content. Ever wanted to add some new procedures to T1531? Or monitor a threat group ATT&CK’s not currently tracking? How about sharing notes with team members on a specific object? Workbench will also enhance our ability to collaborate — you’ll be able to easily contribute techniques, extensions, and enhancements to ATT&CK. We’re excited to see how the community will leverage the toolset to apply the ATT&CK approach to new domains.

Innovating ATT&CKcon | Upcoming

We kicked off the concept of ATT&CKcon in 2018, and our inaugural venture featured around 1,250 virtual and in-person participants. In 2019, ATT&CKcon 2.0 reached more people than ever before, with 7,315 online registrations. With the global pandemic in 2020, we created ATT&CKcon Power Hour, a series of monthly 90-minute virtual power presentations, which have had a reach of over 12,000 to date. We don’t know exactly what ATT&CKcon 3.0 (4.0?) in 2021 will bring, aside from the great speakers sharing their insights from working with ATT&CK in the trenches, but we’re excited to see how it’ll continue to grow. Stay tuned for additional details on what ATT&CKcon 2021 will look like and how you can get involved.

In Closing

Listening to the ATT&CK community, incorporating your feedback, and acting on your input has always been central to our model. ATT&CK is community-driven, and your first-hand knowledge and on-the-ground experience will continue to be critical to our efforts to evolve and expand the framework. We look forward to collaborating with you and appreciate your dedication to helping us improve ATT&CK for the entire community. You can always connect with us via email, Twitter, or Slack.

©2021 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 20–00841–24.



Defining ATT&CK Data Sources, Part I: Enhancing the Current State

Figure 1: Example of Mapping of Process Data Source to Event Logs

Discussion around ATT&CK often involves tactics, techniques, procedures, detections, and mitigations, but a significant element is often overlooked: data sources. Data sources for every technique provide valuable context and opportunities to improve your security posture and impact your detection strategy.

This two-part blog series will outline a new methodology to extend ATT&CK’s current data sources. In this post, we explore the current state of data sources and an initial approach to enhance them through data modeling. We’ll define what an ATT&CK data source object represents and how we can extend it to introduce the concept of data components. In our next post we’ll introduce a methodology to help define new ATT&CK data source objects.

The table below outlines our proposed data source object schema:

Table 1: ATT&CK Data Source Object

Where to Find Data Sources Today

Data sources are featured as part of the (sub-)technique object properties:

Figure 2: LSASS Memory Sub-Technique (https://attack.mitre.org/techniques/T1003/001/)

While the current structure only contains the names of the data sources, to understand and effectively apply these data sources, it is necessary to align them with detection technologies, logs, and sensors.

Improving the Current Data Sources in ATT&CK

The MITRE ATT&CK: Design and Philosophy white paper defines data sources as “information collected by a sensor or logging system that may be used to collect information relevant to identifying the action being performed, sequence of actions, or the results of those actions by an adversary”.

ATT&CK’s data sources provide a way to create a relationship between adversary activity and the telemetry collected in a network environment. This makes data sources one of the most vital aspects when developing detection rules for adversary actions mapped to the framework.

Need some visualizations and an audio track to help decipher the relationships between data sources and the number of techniques covered by them? My brother and I recently presented at ATT&CKcon on how you can explore more about data sources metadata and how to use sources to drive successful hunt programs.

Figure 3: ATT&CK Data Sources, Jose Luis Rodriguez & Roberto Rodriguez

We categorized a number of ways to improve the current approach to data sources. Many of these are based on community feedback, and we’re interested in your reactions and comments to our proposed upgrades.

1. Develop Data Source Definitions

Community feedback emphasizes that having definitions for each data source will enhance efficiency while also contributing to data collection strategy development. This will enable ATT&CK users to quickly translate data sources to specific sensors and logs in their environment.

Figure 4: Data Sources to Event Logs

2. Standardize the Name Syntax

Standardizing the naming convention for data sources is another factor that came up during feedback conversations. As we outline in the image below, data sources can be interpreted differently. For example, some data sources are very specific, e.g., Windows Registry, while others, such as Malware Reverse Engineering, have a wider scope. We propose a consistent naming syntax structure that addresses explicitly defined elements of interest from the data being collected such as files, processes, DLLs, etc.

Figure 5: Name Syntax Structure Examples

3. Address Redundancy and Overlapping

Another unintended consequence of not having a standard naming structure for data sources is redundancy, which can also lead to overlaps.

Example A: Loaded DLLs and DLL monitoring

The recommended data sources related to DLLs imply two different detection mechanisms; however, both techniques leverage DLLs being loaded to proxy execution of malicious code. Do we collect “Loaded DLLs” or focus on “DLL Monitoring”? Do we do both? Can they just be one data source?

Figure 6: AppInit DLLs Sub-Technique (https://attack.mitre.org/techniques/T1546/010/)
Figure 7: Netsh Helper DLL Sub-Technique (https://attack.mitre.org/techniques/T1546/007/)

Example B: Collecting process telemetry

All of the information provided by Process Command-line Parameters, Process use of Network, and Process Monitoring refer to a common element of interest, a process. Do we consider that “Process Command-Line Parameters” could be inside of “Process Monitoring”? Can “Process Use of Network” also cover “Process Monitoring” or could it be an independent data source?

Figure 8: Redundancy and overlapping among data sources

Example C: Breaking down or aggregating Windows Event Logs

Finally, data sources such as “Windows Event Logs” have a very broad scope and cover several other data sources. The image below shows some of the data sources that can be grouped under event logs collected from Windows endpoints:

Figure 9: Windows Event Logs Viewer

ATT&CK recommends collecting events from data sources such as PowerShell Logs, Windows Event Reporting, WMI objects, and Windows Registry. However, these could be already covered by “Windows Event Logs” as previously shown. Do we group every Windows data source under “Windows Event Logs” or keep them all as independent data sources?

Figure 10: Windows Event Logs Coverage Overlap
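
To make the overlap concrete, the sketch below groups a few named data sources under real Windows event channels that carry them; the grouping itself is illustrative of the question above, not an official mapping:

```python
# Illustrative grouping: each of these ATT&CK data sources is collected
# from a Windows event channel, so "Windows Event Logs" subsumes them all.
windows_event_log_sources = {
    "PowerShell Logs": ["Windows PowerShell",
                        "Microsoft-Windows-PowerShell/Operational"],
    "Windows Registry": ["Security",  # with registry object-access auditing
                         "Microsoft-Windows-Sysmon/Operational"],
    "WMI Objects": ["Microsoft-Windows-WMI-Activity/Operational"],
}

for source, channels in windows_event_log_sources.items():
    print(f"{source} <- {', '.join(channels)}")
```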

4. Ensure Platform Consistency

There are also data sources that, from a technique's perspective, are linked to platforms where they can't feasibly be collected. For example, the image below highlights data sources related to the Windows platform, such as PowerShell logs and Windows Registry, that are given for techniques that can also be used on other platforms such as macOS and Linux.

Figure 11: Windows Data Sources

This issue has been addressed to a degree by the release of ATT&CK’s sub-techniques. For instance, in the image below you can see a description of the OS Credential Dumping (T1003) technique, the platforms where it can be performed, and the recommended data sources.

Figure 12: OS Credential Dumping Technique (https://attack.mitre.org/techniques/T1003/)

While the field presentation could still lead us to relate the PowerShell logs data source to non-Windows platforms, once we start digging deeper into sub-technique details, the association between PowerShell logs and non-Windows platforms disappears.

Figure 13: LSASS Memory Sub-Technique (https://attack.mitre.org/techniques/T1003/001/)

Defining the concept of platforms at a data source level would increase the effectiveness of collection. This could be accomplished by upgrading data sources from a simple property or field value to the status of an object in ATT&CK, similar to a (sub)technique.

A Proposed Methodology to Update ATT&CK’s Data Sources

Based on feedback from the ATT&CK community, it made sense to start providing definitions for each ATT&CK data source. However, we realized right away that without a structure and a methodology to describe data sources, definitions would be a challenge. Even though it was simple to describe data sources such as “Process Monitoring”, “File Monitoring”, “Windows Registry” and even “DLL Monitoring”, data source descriptions for “Disk Forensics”, “Detonation Chamber” or “Third Party Application Logs” are more complex.

We ultimately recognized that we needed to apply data concepts that could help us provide more context to each data source in an organized and standardized way. This would allow us to also identify potential relationships among data sources and improve the mapping of adversary actions to data that we collect.

Our methodology for upgrading ATT&CK’s data sources is captured in the following six ideas:

1. Leverage Data Modeling

A data model is a collection of concepts for organizing data elements and standardizing how they relate to one another. If we apply this basic concept to security data sources, we can start identifying core data elements that could be used to describe a data source in a more structured way. Furthermore, this will help us to identify relationships among data sources and enhance the process of capturing TTPs from adversary actions.

Here is an initial proposed data model for ATT&CK data sources:

Table 2: Data Modeling Concepts

Based on this notional model, we can begin to identify relationships between data sources and how they apply to logs and sensors. For example, the image below represents several data elements and relationships identified while working with Sysmon event logs:

Figure 14: Relationships examples for process data object — https://github.com/hunters-forge/OSSEM/tree/master/data_dictionaries/windows/sysmon
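
As a rough sketch of these modeling concepts in code (the field names are our own, not an official schema), the data elements and relationships from the Sysmon example might be captured like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relationship:
    """Source data element -- activity -- target data element."""
    source: str
    activity: str
    target: str

# Relationships observable in Sysmon telemetry (event IDs noted for context).
sysmon_relationships = [
    Relationship("process", "created", "process"),   # Sysmon Event ID 1
    Relationship("process", "connected to", "ip"),   # Sysmon Event ID 3
    Relationship("process", "loaded", "dll"),        # Sysmon Event ID 7
    Relationship("process", "created", "file"),      # Sysmon Event ID 11
]

# The distinct data elements fall out of the relationships themselves.
elements = {e for r in sysmon_relationships for e in (r.source, r.target)}
print(sorted(elements))
```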

2. Define Data Sources Through Data Elements

Data modeling enables us to validate data source names and provide a definition for each one in a standardized way. This is accomplished by leveraging the main data elements present in the data we collect.

We can use the data element to name the data source related to the adversary behavior that we want to collect data about. For example, if an adversary modifies a Windows Registry value, we’ll collect telemetry from the Windows Registry. How the adversary modifies the registry, such as the process or user that performed the action, is additional context we can leverage to help us define the data source.

Figure 15: Registry Key as main data element

We can also group related data elements to provide a general idea of what needs to be collected. For example, we can group the data elements that provide metadata about network traffic and name it Netflow.

Figure 16: Main data elements for Netflow data source

3. Incorporate Data Modeling and Adversary Modeling

Leveraging data modeling concepts would also enhance ATT&CK’s current approach to mapping a data source to a technique or sub-technique. Breaking down data sources and standardizing the way data elements relate to each other would allow us to start providing more context around adversary behaviors from a data perspective. ATT&CK users could take those concepts and identify what specific events they need to collect to ensure coverage over a specific adversary action.

For example, in the image below, we can add more information to the Windows Registry data source by providing some of the data elements that relate to each other to provide more context around the adversary action. We can go from Windows Registry to (Process — created — Registry Key).

This is just one relationship that we can map to the Windows Registry data source. However, this additional information will facilitate a better understanding of the specific data we need to collect.

Figure 17: ATT&CKcon 2019 Presentation — Ready to ATT&CK? Bring Your Own Data (BYOD) and Validate Your Data Analytics!

4. Integrate Data Sources into ATT&CK as Objects

The key components in ATT&CK — tactics, techniques, and groups — are defined as objects. The image below demonstrates how the technique object is represented within the framework.

Figure 18: ATT&CK Object Model with Data Source Object

While data sources have always been a property/field object of a technique, it’s time to convert them into objects, with their own corresponding properties.

5. Expand the ATT&CK Data Source Object

Once data sources are integrated as objects in the ATT&CK framework, and we establish a structured way to define data sources, we can start identifying additional information or metadata in the form of properties.

The table below outlines some initial properties we propose starting off with:

Table 3: Data Modeling Concepts

These initial properties will advance ATT&CK data sources to the next level and open the door to additional information that will facilitate more efficient data collection strategies.

6. Extend Data Sources with Data Components

Our final proposal is to define data components. The relationships we previously discussed between the data elements related to the data sources (e.g., Process, IP, File, Registry) can be grouped together and provide an additional sub-layer of context to data sources. This concept was developed as part of the Open Source Security Event Metadata (OSSEM) project and presented at ATT&CKcon 2018 and 2019. We refer to this concept as Data Components.

Data Components in action

In the image below, we extended the concept of Process and defined a few data components including Process Creation and Process Network Connection to provide additional context. The outlined method is meant to provide a visualization of how to collect from a Process perspective. These data components were created based on relationships among data elements identified in the available data source telemetry.

Figure 19: Data Components & Relationships Among Data Sources
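
A sketch of how data components could group those relationships, with the event mappings being our own illustrative additions:

```python
# Data components group related data-element relationships under Process.
data_components = {
    "Process Creation": {
        "relationships": [("process", "created", "process")],
        "telemetry": ["Security Event ID 4688", "Sysmon Event ID 1"],
    },
    "Process Network Connection": {
        "relationships": [("process", "connected to", "ip")],
        "telemetry": ["Security Event ID 5156", "Sysmon Event ID 3"],
    },
}

for name, details in data_components.items():
    print(name, "->", details["telemetry"])
```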

The diagram below maps out how ATT&CK could provide information from the data source to the relationships identified among the data elements that define the data source. It’d then be up to you to determine how best to map those data components and relationships to the specific data you collect.

Figure 20: Extending ATT&CK Data Sources

What’s Next

In the second post of this two-part series, we’ll explore a methodology to help define new ATT&CK data source objects and how to implement the methodology with current data sources. We will also release the output of our initial analysis, where we applied these data modeling concepts to draft a sample of the new proposed data source objects. In the interim, we appreciate those who contributed to the discussions around data sources and we look forward to your additional feedback.

©2020 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 20–00841–11.



Defining ATT&CK Data Sources, Part II: Operationalizing the Methodology

In Part I of this two-part blog series, we reviewed the current state of the data sources and an initial approach to enhancing them through data modeling. We also defined what an ATT&CK data source object represents and extended it to introduce the concept of data components.

In Part II, we’ll explore a methodology to help define new ATT&CK data source objects, how to implement the methodology with current data sources, and share an initial set of data source objects at https://github.com/mitre-attack/attack-datasources.

Formalizing the Methodology

In Part I we proposed defining data sources as objects within the ATT&CK framework and developing a standardized approach to name and define data sources through data modeling concepts. Our methodology to accomplish this objective is captured in five key steps — Identify Sources of Data, Identify Data Elements, Identify Relationships Among Data Elements, Define Data Components, and Assemble the ATT&CK Data Source Object.

Figure 1: Proposed Methodology to Define Data Sources Object

Step 1: Identify Sources of Data

Identifying the security events to inform the specific ATT&CK data sources being assessed kickstarts the process. The security events can be uncovered by reviewing the metadata in the event logs that reference the specific data source (i.e., process name, process path, application, image). We recommend complementing this step with documentation or data dictionaries that identify relevant event logs to provide the key context around the data source. It’s important at this phase in the process to document where the data can be collected (collection layer and platform).

Step 2: Identify Data Elements

Next, we extract the data elements found in the available data. The core data elements identified here will ultimately provide the name and the definition of the data source.

Step 3: Identify Relationships Among Data Elements

While identifying data elements, we can also begin documenting the relationships among them; these relationships will later be grouped to define potential data components.

Step 4: Define Data Components

The output of grouping the relationships is a list of all potential data components that could provide additional context to the data source.

Step 5: Assemble the ATT&CK Data Source Object

Finally, we connect all of the information from the previous steps and structure it as the properties of the data source object. The table below provides an approach for organizing the combined information into a data source object.

Table 1: ATT&CK Data Source Object
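
One way to picture the assembled object is as a simple record type. In the Python sketch below, the field names mirror the outputs of the five steps, but the exact schema is our illustration, not ATT&CK’s.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class DataComponent:
        name: str
        # (source element, relationship, target element) triples from Step 3
        relationships: List[Tuple[str, str, str]] = field(default_factory=list)

    @dataclass
    class DataSource:
        name: str                     # named after the core data element (Step 2)
        definition: str
        collection_layers: List[str]  # where it can be collected (Step 1), e.g. "host"
        platforms: List[str]          # e.g. "Windows"
        data_components: List[DataComponent] = field(default_factory=list)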

Operationalizing the Methodology

To illustrate how the methodology can be applied to ATT&CK data sources, we feature use cases in the following sections that reflect and operationalize the process.

Starting with the ATT&CK data source that is mapped to the most sub-techniques in the framework, Process Monitoring, we will create our first ATT&CK data source object. Next, we will create another ATT&CK data source object around Windows Event Logs, a data source that is key for detecting a significant number of techniques.

Windows is leveraged for the use cases, but the approach can and should be applied to other platforms.

Improving Process Monitoring

1) Identifying Sources of Data: In a Windows environment, we can collect information pertaining to “Processes” from built-in event providers such as Microsoft-Windows-Security-Auditing and from free third-party tools such as Sysmon.

This step also takes into account the overall security events where a process can be represented as the main data element around an adversary action. This could include actions such as a process connecting to an IP address, modifying a registry key, or creating a file. The following image displays security events from the Microsoft-Windows-Security-Auditing provider and the associated context about a process performing an action on an endpoint:

Figure 2: Windows Security Events Featuring a Process Data Element

These security events also provide information about other data elements such as “User”, “Port” or “Ip”. This means that security events can be mapped to other data elements depending on the data source and the adversary (sub-)technique.

The source identification process should leverage any available documentation about your organization’s internal security events. Where that documentation is lacking, we recommend examining data source information in open source projects such as DeTT&CT, the Open Source Security Events Metadata (OSSEM), or ATTACK Datamap.

An additional element that we can extract from this step is the data collection location. A simple approach for identifying this information includes documenting the collection layer and platform for the data source:

  • Collection Layer: Host
  • Platform: Windows

The most effective data collection strategy will be customized to your unique environment. The collection layer varies depending on how you collect data, but Process information is generally gathered directly from the endpoint. From a platform perspective, the approach can be replicated on other platforms (e.g., Linux, macOS, Android), with the corresponding data collection locations captured.

2) Identifying Data Elements: Once we identify and understand more about sources of data that can be mapped to an ATT&CK data source, we can start identifying data elements within the data fields that could help us eventually represent adversary behavior from a data perspective. The image below displays how we can extend the concept of an event log and capture the data elements featured within it.

Figure 3: Process Data Source — Data Elements
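
As a hypothetical illustration of this extraction step, the snippet below maps a few fields from Windows Security event 4688 (“a new process has been created”) to the data elements they represent. The field names come from the event’s schema; the element labels are our own shorthand.

    # Hypothetical field-to-element mapping for Windows Security event 4688.
    event_4688_elements = {
        "SubjectUserName":   "user",     # account that initiated the action
        "NewProcessName":    "process",  # the process that was created
        "ParentProcessName": "process",  # the process that spawned it
        "CommandLine":       "process",  # present only if command-line auditing is enabled
    }

    print(sorted(set(event_4688_elements.values())))  # ['process', 'user']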

We also use the data elements identified within the data fields to improve the naming of data sources and to inform their definitions. Data source designations are represented by the core data element(s). In the case of Process Monitoring, it makes sense for the data source name to contain “Process” but not “Monitoring,” since monitoring is an activity performed by the organization around the data source. Our naming and definition adjustments for “Process” are featured below:

  • Name: Process
  • Definition: Information about instances of computer programs that are being executed by at least one thread.

We can leverage this approach across ATT&CK to strategically remove extraneous wording in data sources.

3) Identifying Relationships Among Data Elements: Once we have a better understanding of the data elements and a more relevant definition for the data source itself, we can extend the data element information and identify the relationships that exist among the elements. These relationships can be defined based on the activity described by the collected telemetry. The following image features relationships identified in the security events related to the “Process” data source.

Figure 4: Process Data Source — Relationships

4) Defining Data Components: All of the information assembled in the previous steps feeds into the concept of data components in the framework.

Based on the relationships identified among data elements, we can start grouping and developing corresponding designations to inform a high-level overview of the relationships. As highlighted in the image below, some data components can be mapped to one event (Process Creation -> Security 4688) while other components such as “Process Network Connection” involve more than one security event from the same provider.

Figure 5: Process Data Source — Data Components
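
A minimal sketch of that mapping follows, using the real Windows Security and Sysmon event IDs commonly associated with each component; the mapping itself is illustrative rather than an official ATT&CK artifact.

    # Illustrative mapping of Process data components to Windows events that
    # can populate them; the event IDs are real, the grouping is a sketch.
    process_component_events = {
        "Process Creation": [
            "Security 4688 (a new process has been created)",
            "Sysmon 1 (process creation)",
        ],
        "Process Network Connection": [
            "Security 5156 (the Windows Filtering Platform has permitted a connection)",
            "Sysmon 3 (network connection detected)",
        ],
    }

    for component, events in process_component_events.items():
        print(component, "->", ", ".join(events))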

“Process” now serves as an umbrella for all of the linked context relevant to the ATT&CK data source.

Figure 6: Process Data Source

5) Assembling the ATT&CK Data Source Object: Aggregating the core outputs from the previous steps and linking them together produces the new “Process” ATT&CK data source object. The table below provides a basic example:

Table 2: Process Data Source Object
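
For a stand-alone view in code, the assembled object might look like the following Python sketch, combining the outputs of Steps 1 through 4 (field names are illustrative).

    # Stand-alone sketch of the assembled "Process" data source object.
    process_data_source = {
        "name": "Process",
        "definition": ("Information about instances of computer programs "
                       "that are being executed by at least one thread."),
        "collection_layers": ["host"],
        "platforms": ["Windows"],  # replicable on Linux, macOS, etc.
        "data_components": ["Process Creation", "Process Network Connection"],
    }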

Improving Windows Event Logs

1) Identifying Sources of Data: Following the established methodology, our first step is to identify the security events we can collect pertaining to “Windows Event Logs”, but it’s immediately apparent that this data source is too broad. The image below displays a few of the Windows event providers that exist under the “Windows Event Logs” umbrella.

Figure 7: Multiple Event Logs in Windows Event Logs

The next image reveals additional Windows event logs that could also be considered sources of data.

Figure 8: Windows Event Viewer — Event Providers

With so many events, how do we define what needs to be collected from a Windows endpoint when an ATT&CK technique recommends “Windows Event Logs” as a data source?

2–3–4) Identifying Data Elements, Relationships and Data Components: We suggest that the current ATT&CK data source Windows Event Logs can be broken down, compared with other data sources for potential overlaps, and replaced. To accomplish this, we can repeat the process we previously used with Process Monitoring, demonstrating that Windows Event Logs covers several data elements, relationships, data components, and even other existing ATT&CK data sources.

Figure 9: Windows Event Logs Broken Down
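
To suggest what that breakdown might look like in practice, here is a hypothetical sketch mapping a few real Windows log channels and providers to candidate data sources; the assignments are ours, for illustration only.

    # Hypothetical decomposition of the monolithic "Windows Event Logs" data
    # source by log channel/provider. Channel names are real Windows logs;
    # the candidate data source assignments are illustrative.
    windows_event_log_breakdown = {
        "Security (Microsoft-Windows-Security-Auditing)": ["Process", "File", "User Account"],
        "Microsoft-Windows-PowerShell/Operational": ["PowerShell Log"],
        "System (Service Control Manager)": ["Service"],
    }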

5) Assembling the ATT&CK Data Source Object: Assembling the outputs from the process, we can leverage the information from Windows security event logs to create and define a few data source objects.

Table 3: File Data Source Object
Table 4: PowerShell Log Data Source Object

In addition, we can identify potential new ATT&CK data sources. The User Account case resulted from identifying several data elements and relationships in the telemetry generated when adversaries create, enable, modify, or disable user accounts. The table below is an example of what the new ATT&CK data source object would look like.

Table 5: User Account Data Source Object (NEW)

This new data source could be mapped to ATT&CK techniques such as Account Manipulation (T1098).

Figure 10: User Account Data Source for Account Manipulation Technique
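
A sketch of how the proposed User Account data components might map to the Windows Security events describing account lifecycle activity follows; the event IDs are real, while the component grouping is our illustration.

    # Sketch of the proposed "User Account" data source: components tied to
    # the (real) Windows Security events recording account lifecycle activity.
    user_account_components = {
        "User Account Creation":     ["Security 4720 (a user account was created)"],
        "User Account Enabling":     ["Security 4722 (a user account was enabled)"],
        "User Account Modification": ["Security 4738 (a user account was changed)"],
        "User Account Disabling":    ["Security 4725 (a user account was disabled)"],
    }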

Applying the Methodology to (Sub-)Techniques

Now that we’ve operationalized the methodology to enrich ATT&CK data through defined data source objects, how does this apply to techniques and sub-techniques? With the additional context around each data source, we can define data collection strategies for techniques and sub-techniques with far more specificity.

Sub-Technique Use Case: T1543.003 Windows Service

T1543 Create or Modify System Process (used to accomplish the Persistence and Privilege Escalation tactics) includes the following sub-techniques: Launch Agent, Systemd Service, Windows Service, and Launch Daemon.

Figure 11: Create or Modify System Process Technique

We’ll focus on T1543.003 Windows Service to highlight how the additional context provided by the data source objects makes it easier to identify potential security events to collect.

Figure 12: Windows Service Sub-Technique

Based on the information provided by the sub-technique, we can start leveraging the ATT&CK data source objects defined with the methodology. With the additional information from the Process, Windows Registry, and Service data source objects, we can drill down and use properties such as data components for more specificity from a data perspective.

In the image below, concepts such as data components not only narrow the identification of security events, but also create a bridge between high- and low-level concepts to inform data collection strategies.

Figure 13: Mapping Event Logs to Sub-Techniques Through Data Components Example

Implementing these concepts from an organizational perspective requires identifying what security events are mapped to specific data components. The image above leverages free telemetry examples to illustrate the concepts behind the methodology.
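
A comparable mapping for T1543.003 is sketched below under the same caveat: the component names and nesting are ours, while the referenced Windows and Sysmon event IDs are real.

    # Sketch linking T1543.003 (Windows Service) to data components and then
    # to concrete telemetry.
    t1543_003_collection = {
        "Process": {
            "Process Creation": ["Security 4688", "Sysmon 1"],
        },
        "Windows Registry": {
            "Windows Registry Key Modification": ["Security 4657", "Sysmon 13 (registry value set)"],
        },
        "Service": {
            "Service Creation": ["Security 4697", "System 7045 (a service was installed)"],
        },
    }

    for source, components in t1543_003_collection.items():
        for component, events in components.items():
            print(f"{source} / {component}: {', '.join(events)}")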

This T1543.003 use case demonstrates how the methodology aligns seamlessly with ATT&CK’s classification as a mid-level framework that breaks down high-level concepts and contextualizes lower-level concepts.

Where can we find initial Data Sources objects?

The initial data source objects that we developed can be found at https://github.com/mitre-attack/attack-datasources in YAML format for easy consumption. Most of the data components and relationships were defined from a Windows host perspective, and there are many opportunities to contribute by applying this methodology from other collection layer (e.g., Network, Cloud) and platform (e.g., macOS, Linux) perspectives.

Outlined below is an example of the YAML file structure for the Service data source object:

Figure 14: Service Data Source Object — Yaml File
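
If you want to consume these files programmatically, a minimal Python sketch with PyYAML follows. The file name and the fields accessed ("name", "data_components") are assumptions based on the structure described in this post; check the actual files in the repository before relying on them.

    # Minimal sketch for consuming a data source YAML file from the
    # attack-datasources repository. Requires PyYAML (pip install pyyaml).
    import yaml

    # "service.yml" is a hypothetical local path to a file from the repo.
    with open("service.yml") as fh:
        source = yaml.safe_load(fh)

    # Field names below are assumptions; verify them against the actual files.
    print(source["name"])
    for component in source.get("data_components", []):
        print(" -", component["name"])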

Going Forward

In this two-part series, we introduced, formalized and operationalized the methodology to revamp ATT&CK data sources. We encourage you to test this methodology in your environment and provide feedback about what works and what needs improvement as we consider adopting it for MITRE ATT&CK.

As highlighted both in this post and in Part I, mapping data sources to data elements and identifying their relationships is still a work in progress and we look forward to continuing to develop this concept with community input.

©2020 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 20–02605–3



