
NAUTILUS SECURITY PLATFORM

The Risk Management Systems team at Amazon needed a new, scalable solution for security management and investigation. I led the design of Nautilus, an internal Amazon investigation platform that launched in 2018.

MY ROLE

I led the design of Amazon's Security Investigation Platform. 

 

I was responsible for User Research, Design Strategy, and overall UX. I set the overall roadmap for my team, established UX engagement mechanisms with product, prioritized designers' work, and reviewed UX work. I wrote the design guidelines for Nautilus.

I worked with the platform product and development team servicing the 30 programs onboarding to Nautilus. The design and research I created were leveraged by other designers on the team, product managers, developers, and quality assurance engineers.

IMPACT 

[Image: naut.png]
  • Optimizing the investigator experience meant increased operational efficiency for the TRMS organization.

  • Increasing the accuracy of investigator outcomes meant a positive shopping experience for legitimate Amazon customers.

THE PROBLEM

EFFICIENCY VS ACCURACY 

1. Investigation is highly manual - investigators make decisions on customer orders one at a time.

2. Investigators are presented with hundreds of raw data points and must make a risk decision in less than one minute. Investigators requested even more data on customers, yet it wasn't clear how they would review all of it in a single minute.

3. Tools do not support them in making accurate decisions. They rely on SOPs (procedure manuals) to follow instructions that are constantly changing.

 

THE CHALLENGE

HIGHLY SPECIALIZED WORK

Existing legacy tools

Existing concepts from another Amazon designer

[Image: 2.png]
[Image: page_4.png]

“We are tasked with sorting through copious amounts of data to make informed decisions. The use of a tool with a great UX enables us to quickly analyze complex situations and make quality decisions in a timely manner. 

 

When we are able to diagnose the root cause of issues efficiently and take the best course of action it results in a much better customer experience and assures that both buyers and sellers are being protected on the platform” - Buyer Risk Investigator

GOALS 

Short-Term Goals

• Reduce the queue volume
• Improve the accuracy (quality) of manual investigation
• Improve investigator efficiency

Long-Term Business Goal

• Deprecate legacy tools and transfer their functionality to the new platform

My Design Goals

• Enable users to visualize and analyze data, draw meaningful insights, and make accurate decisions efficiently


THE PROCESS: FOLLOW THE USER AND ALL ELSE WILL FOLLOW 

BREAKING DOWN COMPLEXITY

[Process diagram: data analysis → concept design → user feedback, conducted with global investigators]

Given the highly specialized nature of manual investigations, it was crucial to work directly with end users. I attended investigator training sessions, shadowed their work, and led a global research study to understand their worldview.

 

In the past, developers scrappily created solutions in siloed teams. This put a holistic investigator experience at risk: workflows grew overly complex, with little thought given to the platform's overall information architecture or to task completion time. Working from one-off stories in SIMs sometimes meant that solutions got chopped into separate SIMs, decisions were made at the last minute, and the intended user flow was lost.

 

Over time this led to an overly complex tool that was not easy to understand, let alone write requirements to improve. We were not sure how investigators reached their outcomes or what process or workflow they followed.

We wanted to optimize the experience to show only what the human eye needs. At the time, the process required 6+ weeks of training and ongoing human help, all while investigators had efficiency metrics to meet.

 

The challenge: as tools got more complex, so did our training materials, our SOPs, and the number of investigators needed to process all of it. We needed a holistic solution that streamlined manual investigation.

The problems I wanted to solve were:

1. How do I support 30+ teams with separate workflows building on this platform?

2. How do I solve the long-term problem of improving investigator accuracy and efficiency through tools?

3. How can I create a consistent look and feel without re-inventing the Amazon wheel or having to hire an army of designers? :)

I started doing a few things in parallel. 

Reusable Components 

DESIGN FRAMEWORK

I worked with the development team to define page types, UX patterns, and components to ensure that what we were building was consistent and easy to maintain. Our approach was to dive deep into a single user type and test solutions against others. I focused on Buyer Risk Investigations to build and test designs. I created user flows that clarified where and how features should be built and on which page they belonged. This helped unblock the development team and 30 other program teams working at different levels of maturity and on different timelines.

USER INTERVIEWS  

UNDERSTANDING PAIN POINTS 

I talked with 17 investigators. This helped us understand requirements from the user's point of view, was eye-opening for leadership across various teams (Ops, Tech, Product, Training and Onboarding), and fed into 3YP ideas and other project initiatives for the organization. We had user stories captured in SIMs, but they came from functionality in legacy tools. We needed to ask how a user-centered process could lead to better outcomes, particularly for the future of manual tools.

5 themes · 70 anecdotes · 11 design recommendations

[Image: page_1.png]

DESIGN SYSTEM

REUSABLE COMPONENTS  

We needed a component system that optimized for the visual display of key metrics and sped up development across 30 programs. The team decided to go with reusable widgets that could be leveraged across programs. The challenge was designing them in a way that was user-centric, knowing that user needs are program-specific.
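To make the widget idea concrete, here is a minimal sketch in TypeScript. All names here (WidgetConfig, ColumnSpec, and the Buyer Risk example) are hypothetical illustrations of the approach, not the platform's actual code: a shared widget contract defines the structure once, and each program supplies only its own configuration.

```typescript
// Hypothetical sketch of the reusable-widget idea: one shared contract,
// per-program configuration. Names and fields are illustrative only.

// Broad data types we wanted to distinguish at the component level.
type FieldType = "identifier" | "metric" | "status" | "timestamp";

interface ColumnSpec {
  label: string;
  type: FieldType;
}

interface WidgetConfig {
  title: string;
  columns: ColumnSpec[]; // kept small to avoid horizontal scrolling
  maxRows: number;       // fixed height, no vertical scrolling inside the widget
}

// A single program (e.g. Buyer Risk) reuses the shared widget by
// supplying its own configuration rather than building a new component.
const buyerRiskOrders: WidgetConfig = {
  title: "Recent Orders",
  columns: [
    { label: "Order ID", type: "identifier" },
    { label: "Order Total", type: "metric" },
    { label: "Risk Status", type: "status" },
  ],
  maxRows: 10,
};
```

The intent is that the contract stays fixed across all 30 programs while only the configuration varies, which is what keeps the look and feel consistent without a designer per program.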

Existing: [Image: 2.png]

Platform framework: [Image: new_page_4.png]

An overview helped unblock development in the short term by identifying where features should live, and it enabled discussions around page-type definitions and layout conventions.

Page-Level framework 

Creating page-level guidelines that would work across numerous use cases and 30+ programs was challenging. A healthy level of discussion with various teams around data density, screen size, and real estate led to a high level of confidence in the proposed template.

[Image: template.png]

Component-Level framework

We negotiated pixels and padding widths as requests for more columns and more data points accumulated. I helped define distinctions between data types and how and where each should be displayed. My goal was to decrease the number of columns, eliminate horizontal and vertical scrolling within fixed-height widgets, and eliminate over-reliance on modal windows.

[Image: page_2.png]

THE CHOICE OF WORKING IN AXURE 

DYNAMIC PANELS 

With various moving parts in the design, I relied on Axure's dynamic panels and master layers to make design changes and test clickable prototypes with end users, all within a single file. Diagramming the information architecture of the platform allowed me to create an ordered system that mapped the pages and widgets being designed into a holistic whole. Axure helped me keep track of all the elements at once while creating clickability in the areas we needed to test with users.

[Image: WIDGET DESIGN.png]
[Image: 5.png]

RISK INDICATION SUMMARY 

[Image: 4.png]
[Image: 7.png]
[Image: 6.png]

IDEATION FOR LONG TERM GOALS 

DEFINING RISK INDICATORS

We still hadn't solved the main problem: cognitive overload. I was skeptical that investigators needed all this data, and I wanted to show how they could make a decision with less information. I created 8-minute concepts to pitch for the 3YP, showing how data points could be aggregated into meaningful risk indicators that users could act on or drill down into further.

 

 

I worked with Machine Learning Scientists and Business Analysts to define indicators and tested them directly with users to get quick feedback. 
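As a rough illustration of the aggregation concept in TypeScript (the names, weights, and thresholds below are assumptions for the sketch, not the indicator logic the scientists defined):

```typescript
// Hypothetical sketch: collapse many raw data points into a single
// actionable risk indicator that an investigator can drill into.
interface DataPoint {
  name: string;       // e.g. "shipping/billing address mismatch"
  riskWeight: number; // 0..1, an assumed weighting, not a real model score
}

type RiskLevel = "low" | "elevated" | "high";

function summarizeRisk(points: DataPoint[]): RiskLevel {
  // Average the weights; the thresholds are illustrative only.
  const score =
    points.reduce((sum, p) => sum + p.riskWeight, 0) /
    Math.max(points.length, 1);
  if (score > 0.7) return "high";
  if (score > 0.4) return "elevated";
  return "low";
}

// The investigator sees one indicator instead of hundreds of raw values.
const indicator = summarizeRisk([
  { name: "address mismatch", riskWeight: 0.8 },
  { name: "new payment instrument", riskWeight: 0.5 },
]);
```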

[Image: HSI.png]

PIVOTING TO THE FUTURE 

GUIDED WORKFLOWS 

Another long-term idea I pushed for was streamlining investigations through a guided workflow. Our tech was not there yet, but I worked with TRMS tech leaders to define questionnaires and designs for a guided-workflow scenario, setting a vision for the future so that we could get there incrementally and test for success.

 

The challenge was to create meaningful questions that required a human eye, and let the system do the rest. 
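To picture what such a workflow could look like as data, here is a small sketch (step names, questions, and actions are hypothetical; the real questionnaires were defined with TRMS tech leaders): each step asks one question that needs a human eye, and the answer routes either to the next step or to an automated system action.

```typescript
// Hypothetical sketch of a guided-workflow step: one human question,
// with each answer routing to another step or to a system action.
interface Answer {
  label: string;
  next: string | { action: string }; // next step id, or an automated action
}

interface WorkflowStep {
  id: string;
  question: string; // the judgment call that requires a human eye
  answers: Answer[];
}

const nameMatchStep: WorkflowStep = {
  id: "name-match",
  question: "Does the account holder's name match the watchlist entry?",
  answers: [
    { label: "Yes", next: "review-documents" },          // keep investigating
    { label: "No", next: { action: "clear-and-close" } }, // system does the rest
  ],
};
```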

[Image: sanctions_flow.png]
[Image: 3__investigation_page_option_a__1_.png]
[Image: step_3__send_email.png]

VISUAL DESIGN 

ICONOGRAPHY AND TRAINING

Training materials relied heavily on iconography for process and SOP enforcement. This made proposing a new system challenging, and it was a design problem that required extensive study. I started creating guidelines that built on Amazon's AUI design philosophy, but I needed to create logic around how iconography could be leveraged, given the large number of statuses and risk indicators that could easily add confusion to the page.

 

The team had not shied away from using color in the past, and we needed to put guardrails in place to address how scalable the solution would be in the long term as more features were added.

The largest proposed change was allowing icons at the page level and the row level, but not at the data-cell level. This reduced the guesswork around conflicting icons within a row and shortened the ramp-up time for new hires to reach a decision.

[Image: 3.png]
[Image: icon_strategy.jpg]
[Image: icons.png]

SUCCESS MEASUREMENTS

SHORT TERM VS LONG TERM

In the long term, we wanted to measure user clicks against decision outcomes and time on task, as well as HMD feedback on perceived ease of use and overall satisfaction with the tools.

We launched a global survey to get a pulse on ease of use, perceived reliability of the tools, and overall satisfaction with Nautilus.

 

 

I worked with the product team and TRMS Ops leadership to launch the survey. I led the design of the overall survey strategy and wrote the UX-specific questions.

[Image: page_5.png]

WORKING WITH INTERNAL USERS 

WHAT I LEARNED 

Efficiency metrics get in the way of decision accuracy 

Internal tools need to work for users so they can achieve a particular outcome, so it's important to pick the right success criteria for internal users. When looking at product improvements, take a comprehensive look not only at the usability issues of the tools but also at the processes and systems we enforce to achieve the desired outcomes.

Optimizing the experience for Best Practice 

Unlike other products, where it is often good to offer users a wide array of choices and paths through an experience, work tools serve internal users best when they guide users along a successful path. Training, help, and other guidance, when offered on a user's screen as they go through the experience, can eliminate confusion and improve outcomes. It's important to conduct user research while staying highly aware of the context of user behaviors and needs. Internal users almost always ask for more information on the page, and yet we knew this didn't result in better outcomes. A system optimized to meet user requests literally, without design thinking, will end up too complex; we may gain new features and quick wins in the short term, but ultimately the product will suffer.

Usability Guidelines Still Apply 

A certain baseline of usability should be established and agreed upon. It's easy to fall into the idea that because we're designing for expert users, basic usability guidelines don't apply and "the user knows best." Established usability conventions still very much apply and should be adhered to unless new interaction patterns have been thoroughly tested at scale. A single call to action per screen is still a good guideline to follow, and pop-ups inside pop-ups are still a poor interaction pattern regardless of your user type; we have seen them cause human error. We should stick to usability conventions that have been widely tested and optimized for screen size and cognitive load. Where design adds a lot of value is in understanding users' needs in context, which comes from spending a lot of time with them and understanding the nuances of why certain ideas will succeed or fail given your user base.

Tribal Knowledge, feature requests and user-led solutions  

Tribal knowledge sharing is a common phenomenon when users' trust in the system is broken. It's important to unpack the impact this has as we enforce policy adherence on end users. Users will create workarounds if there is an incentive to do so. When your user base sits in the same building as your product team, it's easy to engage end users in product discussions and solutions. While some participatory research and design can be healthy, it's important to step back and not lose sight of the wider product. Relying on a few end users to drive product solutions is risky, as they may not represent the wider group of users or have enough visibility into the impact their proposals have on overall outcomes.

Once user trust is gone, it's hard to earn it back  

Feature discrepancies can easily create mistrust in data displays, and once users don't trust the information on their screens, it's hard to earn that trust back. Establishing a mechanism for discovering and fixing system bugs early is a quick win with a huge impact on everyday productivity and the reliability of tools.

“The two most important attributes about an investigator’s performance are efficiency and quality. If either of those attributes are stymied by a poor UX design, the investigator can suffer. 

 

Whenever I have the ability to adapt my own best practices to a solid foundational system, I perform better and more quickly. When it comes to the information provided, not only does pertinent and valuable information need to be available and presented cleanly, but unneeded information needs to be limited as much as possible – otherwise, it creates the potential to rely on irrelevant information to make a decision.” - Merchant Risk Investigator 

“My job is to ensure that any activity that is identified by a fraud investigator as suspicious is quickly and efficiently assessed. Mitigating or escalating a potential risk means looking at a case in a holistic and neutral way to identify gaps. 

 

As a result, it is paramount that each and every tool I use is designed to help me identify those gaps and analyze data as precisely and efficiently as possible. A clean, simple, and user-friendly interface means that I have to worry less about finding the murder weapon in my crime scene and can focus, from the get go, on how to make Amazon a safer place.”

 - Compliance Risk Control Analyst 
