The case for Cisco and “Network Insights”

Quick background check

The two main products my team and I focused on were Secure Network Analytics and Secure Cloud Analytics. Both are comprehensive visibility and network traffic analysis solutions that use enterprise telemetry from the existing network infrastructure. They provide advanced threat detection, accelerated threat response, and simplified network segmentation using multilayer machine learning and entity modeling.


What that means as a product and interface for our users can be a number of things. Put simply, it means tons and tons of data, stored across various pages behind layers of search criteria and millions of rows of table data. Users need this data to be concise, prioritized, sorted, and compiled into digestible reports that help them focus on real threats and not waste their time.

Discovering the problem space

First things first: I led a series of workshops to identify which content and data sets within our products were most essential and most frequently used by our core user group.


Common Secure Network Analytics pain points:

  • Many users pushed security info upstream to a SIEM that would organize the data
  • A high barrier to entry kept everyday users from tuning alerts and setting their desired prioritization
  • Network analysts would come into the product for an investigation, specifically for flow search capabilities, and open many tabs
  • Categories and alarm information are helpful, but users more often come into SNA to dive deep into their network data and gather insights on what happened and where it occurred

Common Secure Cloud Analytics pain points:

  • Because SCA ships with a large base of pre-configured alerts, most users adjust few, if any, of the priority settings
  • Many users experienced alert fatigue due to lack of prioritization and common naming
  • Various users relied heavily on email alerting to know when to come into the product
  • Although the Security Analyst would check the alerts more regularly within the product, the Network Analyst was there for the network data and would quickly pivot to the Device Insights, Observations and Event Viewer pages for correlations and clarity
"The longer I've been here it seems we've programmed outselves to constantly send us email alerts so my mailbox gets filled with about a million email alerts and frankly, most of which go straight to the trash. I'll be hoenst: if we knew that we were getting alarms that truly were a high probability issue, we would probably look at email, texting, or webex messaging to help prioritize those. But we just don't have time to dive in all of them."
User 1
CTO

Defining the solution

Upon joining the team, I was soon assigned a high-profile project concerning the overall vision for the primary dashboard and the everyday experience of the core product. After diving in with stakeholders, we quickly discovered we had very little quantitative or qualitative data to back up the assumptions we were making about our most common persona, Remi.

With such a high-risk, high-profile project, it was critical to follow every step of the design process to ensure alignment with all our stakeholders and deliver real value to our customers.

So we started our discovery process, which led to an external survey, more than 30 interviews, and hundreds of notes and insights. We conducted four rounds of research and iteration, each consisting of the same steps below:

1. REFINEMENT AND PRIORITIZATION

This step typically consisted of meetings and workshops focused on refining priorities, roadmap planning, journey mapping, competitive analysis, user workflows, updating designs, assumption mapping, aligning with stakeholders, and creating a research plan for the following phase.

(Personal tip: Although plenty of activities can be done at this stage, the most important to me is clear and consistent communication. No amount of activity can make up for building trust with stakeholders and translating complex ideas into concise problem and value statements.)

This part of the process was always the most crucial and challenging, as it can at times feel like you’re going in circles. Still, I found we could keep the work on track and progressing by minimizing one-off sessions, planning strategic workshops and collaboration sessions, and communicating effectively. It was also essential to be selective about the users we interviewed, ensuring they matched our target persona for the given study, and to iron out our research questions in alignment with our stakeholders.

2. IDEATION AND PROTOTYPES

This step is always the most fun and is often seen as what we do best as designers. To the right you can view the progression of similar dashboard screens and prototypes we tested throughout the process. These are just screenshots of the actual prototypes, which were filled with interactions, drawers, and contextual data.

(Personal tip: Never rush or skip the prototyping phase, even when your designs stick close to a design system. Rushing can cause critical issues or misdirection in user interviews or stakeholder reviews. Take your time, gather feedback, and refine your design, as it can drastically affect the feedback you receive.)

3. USER INTERVIEWS AND HUNDREDS OF NOTES

Now, this step tended to be the most time-consuming of them all, and for good reason. You should never rush the analysis of user feedback, and you should always check yourself and others for assumptions you may be making when making sense of it all.

With a research plan and study flow in hand, it was time to test the Figma prototype with users. We made sure to leverage the current design system so we wouldn’t introduce unfamiliar usability elements into the testing, and we included data that would be familiar to the user’s common workflow.

Notes were always taken in the voice of the customer and kept free of personal assumptions. It was key to keep these notes sorted correctly and efficiently using the template shown in the image on the right. Team members would often join the call in the background to help with note-taking, but it was crucial that I also watch every video and fill in any gaps in the notes after each call.

4. ANALYSIS, PRESENTATIONS AND HANDOFFS

The fourth and final step is often the most rewarding. We start by completing a thematic analysis of all the notes taken during the interviews, breaking the insights down into smaller, digestible categories sorted by similarity and priority. We then quantify them and cross-check that our original research questions are being answered.

(Personal tip: At this stage, it’s easy to get so excited with PMs about delivering value that we quickly “overengineer” some of the discovered value and features, and forget to include engineering leads early and often when prioritizing where to focus next.)

After this comes everyone’s favorite part: sharing these insights, highlights, and recommendations through presentations, recordings, and vidcasts. It’s vital to ensure that every main stakeholder receives this feedback and is part of the collaboration process for decision-making and planning.

After each round of research, it was essential to break out and implement any key design elements that had been validated enough to hand off to development and that fit within the bounds of their key initiatives and timelines.

"I think this is a great dashboard. I think it tells me what I need to know. It tells me if there's anything imminent and, if so, what they're using. It also shows me my alerts and alarms and shows me kind of an overview of what concerns it's seeing based on your machine. I also like that it gives me the high-value targets, the concen area or machines, and that way you can see how protected things are."
User 2
SecOps Lead

Final prototype we tested

Some key insights we learned

1. We discovered halfway through that our secondary persona was in fact the user who got the most from the product, yet was the least focused on.

2. The majority of those users came into the product at step 3 or 4 of a threat-hunting investigation.

3. These users would dive into the deeper network data in search of spikes or anomalies to discover their relations, correlations, and changes outside of the norm.

4. All users wanted their primary dashboard experience to be more of a customizable “workboard” that included custom visualizations, sorting, filtering and adequate pivoting.

5. We could encourage more proactive threat-hunting activity within the product by focusing on trends and percentage changes, giving users a data-driven starting point for their hunts.

