Completing security questionnaires faster

Triaging requests was manual and time-consuming, costing credits, morale, and promotions. Asking, "How might we get faster first drafts?" led us to design an AI agent that automatically completes security questionnaires.

Company

Conveyor

Series A (at time of project)

Timeline

7 weeks

Role

Lead Designer

Team

1 product manager, 6 engineers, 1 product designer

Services

Irrelevant requests from sellers extend the questionnaire lifecycle by multiple business days

InfoSec analysts complain that sellers upload questionnaire requests that shouldn't be handled by the platform, so they end up spending hours triaging the queue.

Costing credits

Since Conveyor consumes credits per 100 questions, customers want to control which questionnaires get imported. If a questionnaire doesn't have many questions, they may prefer to handle it manually or reject it altogether.

Missing SLAs

We heard most frustrations from teams who were triaging 100+ requests a month. Handling this high volume manually contributed to missing SLAs.

Lowering morale

The taxing nature of manually triaging was not only lowering teams' morale, but it was keeping them from working on strategic initiatives.

As one InfoSec Analyst said:

“To get promoted, I need to get out of the queue.”

Unfortunately for analysts, triaging takes priority.

Automating intake allows analysts to get promoted

Triage follows rules

We learned in user interviews that InfoSec teams have criteria for triaging. These often include deal size, a signed NDA, content relevancy, and number of questions, to name a few.

This was foundational to our solution: an AI agent.
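To illustrate how criteria like these translate into automatable logic, here is a minimal sketch of rule-based triage. All field names and thresholds are hypothetical, not Conveyor's actual rules:

```python
from dataclasses import dataclass

@dataclass
class QuestionnaireRequest:
    deal_size_usd: int
    nda_signed: bool
    relevant_content: bool
    question_count: int

def triage(req: QuestionnaireRequest) -> str:
    """Return 'accept', 'reject', or 'manual' based on example criteria."""
    if not req.nda_signed or not req.relevant_content:
        return "reject"   # hard requirements fail
    if req.question_count < 100:
        return "manual"   # small questionnaires may not be worth credits
    if req.deal_size_usd >= 50_000:
        return "accept"   # large deals get auto-imported
    return "manual"       # everything else goes to an analyst
```

In practice, an AI agent evaluates fuzzier signals (like content relevancy) rather than booleans, but the decision structure customers described maps onto rules like these.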

Measuring AI agent's success

Customer outcomes
・30%+ reduction in median SLAs after 30 days of use
・50% reduction in hours spent triaging after 30 days of use

Performance indicators
・<5% of rejections are incorrect
・<10% of acceptances are incorrect
・95% of questionnaires require no edits to the suggested tags

Exploring what the agent could be…

Launching in 7 weeks

Getting an AI agent working in 7 weeks meant we needed to make tradeoffs.

These are the ones we made:

Read-only rules

Customers told us that triaging rules don't change often. With this in mind, we prioritized a read-only approach initially to simplify development and user experience. Rules could be updated by contacting our Support team.

Salesforce to start

After identifying a launch partner at an enterprise company who used Salesforce to manage their queue, we decided to start with it—knowing other customers used it too. We also planned to quickly follow the launch with other systems.

AI agent completes the first draft

As part of this work, we incorporated existing functionality into the AI agent's capabilities.

Not only does it triage, it also answers questionnaires using your custom tone and verbosity settings, and delegates outstanding questions to subject matter experts so questionnaires are completed faster.

Building trust with customers

Try before you buy

One barrier we identified was getting buy-in. To de-risk the rollout and build trust in the feature, we created a test experience (designed by another designer) that let customers see their rules in action before going to production. This also supported proof-of-concept opportunities where the agent wasn't set up yet.

Moving to the vision

Shortly after we launched, we iterated on the tester design to let customers reference their rules while inputting values, and made the response feel more like a chat.

Evaluating AI agent's success

Unfortunately, I moved on before I could see if we accomplished our goals.

However:

  • We launched early access in 7 weeks

  • We implemented the AI agent for a publicly traded enterprise software company

  • The agent was successfully triaging requests
