Build boundaries into your AI tools
Feb 9, 2023
"Boundaries are the distance at which I can love you and me simultaneously."
― Prentis Hemphill
― Marissa Montgomery
Boundaries in engineering
I find the emotional intelligence concept of maintaining and respecting healthy boundaries to be useful in a lot of other situations, including engineering.
A great example is alerting.
When you configure PagerDuty, you're setting boundaries for yourself and your team.
You’re telling the system under which conditions you can do your best work. If you set PagerDuty up to alert for every response over 1000 milliseconds, you may get woken up at 4am for something you can fix later, and your sleep and focus during the workday will suffer.
When boundaries are breached, the most common feeling is overwhelm. And it's extremely hard to do good work when you're overwhelmed.
The most common gripe about Slack comes down to a lack of boundaries. Messages can come from anywhere at any time, and it’s very difficult to close or deflect channels of communication.
What this means for AI
In an interface like ChatGPT, you're interacting directly with the bot. The bot's response is always prompted by a message from the user.
But now we have AI tools that embed directly in our creative tools.
In a creative tool where the bot moves autonomously between the background and foreground, there's a much wider surface area for interaction, and many more moments when the bot could reach out.
When should a bot interact with you? On every keystroke? After a few words or units of work, when it has a high confidence that it can contribute?
Should it be able to edit your work in the background? Should it let you know that it did? Should it let you know immediately, or later?
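One way to think about these questions is as a gate the tool checks before interjecting. Here's a minimal sketch, assuming two invented thresholds (a confidence floor and a typing-pause delay); the class and its names are hypothetical, not any real product's API:

```python
import time

# Hypothetical sketch: gate when an embedded assistant may interject.
# The thresholds and names are invented for illustration.
class InterjectionGate:
    def __init__(self, min_confidence=0.8, quiet_seconds=2.0):
        self.min_confidence = min_confidence  # only speak up when fairly sure
        self.quiet_seconds = quiet_seconds    # wait for a pause in typing
        self.last_keystroke = 0.0

    def on_keystroke(self, now=None):
        # Called on every keystroke; resets the "user is busy" timer.
        self.last_keystroke = time.monotonic() if now is None else now

    def may_interject(self, confidence, now=None):
        # Interject only when the user has paused AND the bot is confident.
        now = time.monotonic() if now is None else now
        paused = (now - self.last_keystroke) >= self.quiet_seconds
        return paused and confidence >= self.min_confidence
```

The point of the sketch is that "when should a bot interact?" becomes a tunable policy rather than a hard-coded behavior.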
The possibilities for interaction have grown, the feedback loops have shortened, and without healthy boundaries you may feel like you have a robot coworker in your ear all the time.
While we're still figuring out what this technology can do, we haven't yet experienced what it's like to live with it in every product every day.
Types of boundaries
To build sustainable AI products, we have to take special care to design boundaries, and allow users to configure them where necessary.
There are a few different types, and we'll return to the PagerDuty example to illustrate each one.
Don't bother me right now.
For PagerDuty alerting, you can set an on-call schedule. Typically, each member of the rotation consents to a week or two of receiving alerts, and then they're off the hook. If someone were on call all the time, they might feel drained.
Don't bother me in these situations.
You don't want to get alerted when a user hits a 404, but you might want to when a user hits a 500.
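A situation boundary like that is just a predicate over events. A minimal sketch, assuming the simple rule above (the rule set is invented for illustration):

```python
# Hypothetical situation boundary: alert on server errors, stay quiet
# on client errors. The rule is invented to match the 404/500 example.
def should_alert(status_code: int) -> bool:
    # 4xx: the client did something wrong -- noisy, usually not actionable
    # 5xx: the service itself is failing -- worth waking someone up
    return 500 <= status_code <= 599
```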
Interact with me in these ways.
PagerDuty gives you a few options when you answer a page. You can acknowledge it, assign it to another team, or even snooze it.
Ways to implement boundaries
How can we create great AI tools that don't leave users feeling overwhelmed?
It all starts with thoughtful design. Err on the side of giving the user more breathing room to create, as it builds trust in the AI.
The beauty of boundaries is that everyone's are different. Allow users to set their own defaults, and to change them easily and quickly.
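One way to sketch "sensible defaults, easily overridden" is a settings object with per-user overrides. The field names here are invented, not any real product's configuration:

```python
from dataclasses import dataclass, replace

# Hypothetical sketch: ship conservative defaults, let each user
# override them. Field names are invented for illustration.
@dataclass(frozen=True)
class BoundarySettings:
    suggest_while_typing: bool = False  # breathing room by default
    background_edits: bool = False      # never edit silently by default
    quiet_hours: tuple = (22, 8)        # no notifications overnight

DEFAULTS = BoundarySettings()

def settings_for(user_overrides: dict) -> BoundarySettings:
    # A user can loosen or tighten any one boundary without
    # touching the rest.
    return replace(DEFAULTS, **user_overrides)
```

The design choice worth noting: the defaults err on the quiet side, and each override is independent, so changing one boundary never drags the others along.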
Boundaries in AI tools
A new AI tool comes out every day. Now we have Copilot in our IDEs (which I love), Diagram in Figma, and a host of other AI tools in our writing editors.
GitHub clearly knows it's always a good idea to give developers control over their tools. If you need to focus, you can temporarily deactivate Copilot with one click at the bottom of your IDE.
Copilot even lets you deactivate it for certain types of files.
A common format for AI output is the suggestion in the style of auto-correct. This is rightly popular while the AI is still improving, because we don't yet trust it to make changes on its own.
Room for innovation
Action boundaries are where I'd love to see some more innovation.
Here are some ideas for other formats:
Long-form suggestions to be reviewed later, like a pull request.
Edits made in the background directly on the user's content, with a log to be reviewed later by the user.
The bot provides multiple options in parallel (DALL-E does this).
The bot initiates a higher-level discussion, and frames future changes around that.
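The second idea above, background edits with a reviewable log, could be sketched as a small data structure. Everything here is hypothetical, invented to illustrate the idea:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of "background edits with a reviewable log".
# The structure and field names are invented for illustration.
@dataclass
class Edit:
    location: str   # where the bot changed the content
    before: str
    after: str
    rationale: str  # why the bot made the change

@dataclass
class EditLog:
    pending: List[Edit] = field(default_factory=list)

    def record(self, edit: Edit) -> None:
        # The bot works quietly; nothing interrupts the user.
        self.pending.append(edit)

    def review(self) -> List[Edit]:
        # The user reviews everything in one sitting, like a pull request.
        batch, self.pending = self.pending, []
        return batch
```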
The best pair programming sessions I've been a part of didn't just involve line-by-line discussion. We'd go over the problem to be solved, briefly align on strategy and tactics, and make a few high-level decisions at the start.
When designing an AI tool, you should ask yourself: how can I let the users love themselves and the robots at the same time?