Automating security questionnaire intake

Triaging requests was manual and time-consuming, costing the team credits, morale, and promotions. Asking, "How might we get faster first drafts?" led us to design an AI agent to handle questionnaires automatically.

Company

Conveyor

Series A (at time of project)

Timeline

7 weeks

Role

Lead Designer

Team

1 product manager, 6 engineers, 1 product designer

Services

Store

Irrelevant requests from sellers extend the questionnaire lifecycle by multiple business days

InfoSec analysts complain that sellers upload questionnaire requests that shouldn't be handled by the platform, so analysts end up spending countless hours triaging the queue.

Costing credits

Since Conveyor consumes credits for every 100 questions imported, customers want to control which questionnaires get imported. If a questionnaire doesn't have many questions, they may prefer to handle it manually or reject it altogether.
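The credit math above can be sketched as a small helper. This is a minimal illustration, not Conveyor's actual billing logic: the function name and the assumption of one credit per 100-question block (rounded up) are hypothetical.

```python
import math

def estimate_credits(question_count: int, questions_per_credit: int = 100) -> int:
    """Estimate credits a questionnaire would consume.

    Hypothetical sketch: assumes one credit is consumed per block of
    `questions_per_credit` questions, rounding up. The real billing
    rule may differ.
    """
    return math.ceil(question_count / questions_per_credit)

# A short questionnaire may be cheaper to handle manually or reject:
print(estimate_credits(30))   # 1 credit for 30 questions
print(estimate_credits(250))  # 3 credits for 250 questions
```

Under this assumption, a 30-question questionnaire costs the same credit as a 100-question one, which is why customers want control over small imports.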

Missing SLAs

We heard most frustrations from teams who were triaging 100 requests a month. This high volume combined with the manual process contributed to missed SLAs.

Lowering morale

Manual triaging not only took the fun out of the job, it also pulled InfoSec analysts away from higher-priority initiatives.

As one said:

“To get promoted, I need to get out of the queue.”

Unfortunately, triaging took precedence, which significantly lowered the team's morale.

Automating intake frees up analysts to get promoted

Analysts use rules to triage

We learned in user interviews that InfoSec teams have criteria for triaging. These often include deal size, whether a signed NDA is in place, whether the content is relevant, and how many questions there are, to name a few.

We used this as a foundation for our automated solution: an AI agent.
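The interview criteria above can be sketched as a rule-based triage function. This is a hypothetical illustration only: the field names, thresholds, and outcomes are placeholders for the kinds of rules teams described, not the agent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class QuestionnaireRequest:
    deal_size: float        # deal value in USD (hypothetical field)
    nda_signed: bool        # is a signed NDA in place?
    content_relevant: bool  # is the content relevant to InfoSec?
    question_count: int     # how many questions the questionnaire has

def triage(req: QuestionnaireRequest,
           min_deal_size: float = 50_000,
           min_questions: int = 20) -> str:
    """Apply rule-based triage criteria; thresholds are illustrative."""
    # Hard requirements: no NDA or irrelevant content means rejection.
    if not req.nda_signed or not req.content_relevant:
        return "reject"
    # Small deals or short questionnaires may not be worth the credits.
    if req.deal_size < min_deal_size or req.question_count < min_questions:
        return "handle manually"
    return "accept"

print(triage(QuestionnaireRequest(120_000, True, True, 80)))  # accept
```

An AI agent generalizes this: instead of fixed thresholds, it interprets each customer's stated rules against incoming requests.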

Measuring AI agent's success

Customer outcomes
・30%+ reduction in median SLAs after 30 days of use
・50% reduction in hours spent triaging after 30 days of use

Performance indicators
・<5% of rejections are incorrect
・<10% of accepted cases are incorrect
・95% of questionnaires don’t have edits to the suggested tags

Defining the MVP to launch in 7 weeks

Launching quickly so we could generate demand and build pipeline meant we needed to make tradeoffs.

Here are some we made:

Read-only rules

We learned from customers that triage rules don't change often. With this in mind, we initially prioritized a read-only approach to simplify development and the user experience; customers could update their rules by contacting our Support team.

Salesforce to start

We identified a likely launch partner at an enterprise company that used Salesforce. Knowing this is a common integration, we decided to start with it, with the intention of adding other common integrations shortly after launch.

AI agent gets the first draft done

As part of this work, we incorporated existing functionality into the AI agent's capabilities.

Not only does it triage, it also answers your questionnaires based on your custom tone and verbosity settings, and it delegates outstanding questions to subject matter experts so questionnaires are completed faster.

Building customer trust

Try before you buy

One barrier we identified was getting buy-in. To de-risk this and build trust in the feature, we created a test experience (designed by another designer) that let customers see their rules in action before putting them into production. This also supported proof-of-concept opportunities that didn't have integrations set up yet.

Getting closer to the vision

Shortly after launch, we iterated on the tester design to let customers reference their rules while inputting values, and we made the response feel more like a chat.

Evaluating AI agent's success

Unfortunately, I moved on before I could see if we accomplished our goals.

However, we launched early access in 7 weeks. We also implemented the AI agent for a publicly traded enterprise software company after getting buy-in, and it was successfully triaging requests.
