Advanced Threat Detection

Optimizing efficiency for speedy threat detection & protection

A bit of context

Advanced threat detection (ATD) solutions are designed to detect attacks that employ advanced malware and persistent remote access in an attempt to steal sensitive corporate data over an extended period of time.
Graph ATD takes information from multiple data sources, such as logs and events in your network, learns the behavior of users and other entities in the organization, and builds a behavioral profile for them. Graph ATD can receive events and logs from SIEM integration, Windows Event Forwarding (WEF), or directly from the Windows Event Collector (for the Lightweight Gateway), and gathers them all in the Graph ATD panel.
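To give a rough sense of what this kind of aggregation involves, here is a simplified Python sketch (not Graph ATD's actual implementation; the event fields, source names, and values are made up for illustration) that normalizes events from several collectors and groups them into a minimal per-entity behavioral profile:

# Simplified, illustrative sketch only: field names, sources, and values are hypothetical.
from collections import defaultdict

raw_events = [
    {"source": "SIEM", "entity": "alice", "action": "logon", "host": "SRV-01"},
    {"source": "WEF", "entity": "alice", "action": "file_access", "host": "SRV-02"},
    {"source": "EventCollector", "entity": "bob", "action": "logon", "host": "SRV-01"},
]

# Build a minimal behavioral profile: which actions each entity performs, and on which hosts.
profiles = defaultdict(lambda: defaultdict(set))
for event in raw_events:
    profiles[event["entity"]][event["action"]].add(event["host"])

for entity, actions in profiles.items():
    print(entity, {action: sorted(hosts) for action, hosts in actions.items()})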


My Role

I was the only designer responsible for the design and maintenance of Graph ATD. For this project, I worked alongside our user researcher to discover pain points and improve the Graph ATD panel.


Background

Through our feedback channels, we learned that parts of the system were not functioning well and had severe usability issues. The challenge was to make the system easier to use and help security analysts get their job done as quickly as possible.


Phase 1: Understanding the Pain Points

We conducted a usability test to identify the problems and observe user feedback.
Five users participated in this test, which was the first usability test for this product.
Each session lasted approximately 30-40 minutes.

Scope of the tasks

Graph ATD users must be able to:

  • View “Alarms” in the system

  • Manage the alarms and comment on them

  • Use sorting and filtering effectively to narrow down to the types of alerts they want.

Task #1:

You are on the main page of Graph ATD and want to learn more about the security alerts in the system. Please go to where you think is the best place to review the system alerts.

Task #2:

Look for the high-risk Operating System alerts and check their details to see whether there is any threat to the system.

Task #3:

In the previously opened alert, assign it to the user "Ali Tar", comment on a possible solution for him (you can use dummy text), and change its status to “In progress.”

Task #4:

For a given alarm, check whether it is a false positive.

Task #5:

Expand a medium-level alarm and write a comment on it.

Task #6:

You only want to see the alarms generated from Sep 19 through Sep 22, 2020. Do this in whatever way you think is best.

Task #7:

You want to change the status of the 1st, 2nd, 4th, and 6th alarms to “Closed.” Do this in whatever way you think is best.

Task #8:

For a low-level alarm, create a case and add the alarm to that case.

Task #9:

Open another alarm and add it to the recently created case.


We chose usability testing to measure effectiveness, efficiency, and satisfaction.

We measured effectiveness using two usability metrics: the success rate (also called the completion rate) and the number of errors.

We measured efficiency using two factors: completion time and overall relative efficiency. After finishing each test session, the user addressed the last element, task-level satisfaction, by filling out a post-task questionnaire.
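As a rough illustration of how these numbers come together (a simplified sketch, not our actual analysis; the participants, task results, and times below are invented), the completion rate and overall relative efficiency can be computed from per-task results like this:

# Illustrative sketch only: the data below is invented for demonstration.
# Each entry: (participant, task, completed?, time in seconds)
results = [
    ("P1", "Task 1", True, 95),
    ("P1", "Task 2", False, 210),
    ("P2", "Task 1", True, 80),
    ("P2", "Task 2", True, 150),
]

def completion_rate(results):
    # Success rate: successfully completed tasks / total task attempts
    return sum(1 for (_, _, done, _) in results if done) / len(results)

def overall_relative_efficiency(results):
    # Time spent on successfully completed tasks / total time spent by all users
    productive_time = sum(t for (_, _, done, t) in results if done)
    total_time = sum(t for (_, _, _, t) in results)
    return productive_time / total_time

print(f"Completion rate: {completion_rate(results):.0%}")
print(f"Overall relative efficiency: {overall_relative_efficiency(results):.0%}")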


Summary of interviewees

We invited 5 users, all male, aged between 28 and 36, educated in software engineering and IT, and working as security analysts at different levels of seniority.


Summary of quantitative data

The table below displays a summary of the quantitative data we collected from the tests.

Affinity Diagramming

After the test, we went through the results and created an affinity diagram of the comments, errors, and problems we discovered during the sessions.

Phase 2: Contextual Inquiry

Through the usability testing process, we discovered pain points that an interview alone couldn't uncover. We decided to conduct a contextual inquiry to observe what users do on a typical working day.
This helped us understand how users interact with the system under real conditions, such as interruptions, and how they achieve their goals when a function or piece of information is missing.

Key Findings

  • Filters were confusing and should be customizable; users didn't all need the same sets of filters

  • It was hard to find a specific date in the date picker

  • Alarm cards should be viewable in two different ways, because expert users work differently from regular users

  • Users needed notifications when an action happens; they couldn't tell whether a task was done or an error had occurred

  • Several development issues needed fixing


Design and validation

I designed solutions for customizable filters, changed the look and information structure of the alarm cards, made improvements to the date picker, and tested a high-fidelity prototype with the users.

The results were very positive, but this is not the end of the road.
Due to an NDA, I can't share the whole screen. You can see the solutions for each problem separately.

Alarms and information architecture

Filters and view options

Date Picker

What came next?

We discovered common patterns in what users do on a daily basis when investigating alarms or false-positive incidents. We visualized this in a design thinking process to reduce the effort and pain for a security analyst.


Conclusion

The project's goal was to improve the experience and help our security analysts achieve their goals in a more reliable and efficient way. The observation and research process helped us discover the main issues and pain points. We continuously introduced iterative improvements to create a product with as few issues as possible.