Technical case study: A Peek into AI Development & Building the Chop Out Convos mental health app

Paul Tran
No Moss Co.
Published in
6 min readAug 14, 2023

--

At No Moss, we recently had the delightful opportunity to collaborate with IPC Health to support the mental wellbeing of Australian tradies. This project was funded by the Movember Foundation under their Social Connections Challenge, with assistance from HALT.

The result is the AI-assisted “Chop Out Convos” app, designed to enable tradies to check in on each other’s wellbeing. You can explore our detailed case study on the project here.

Screenshots from the Chop Out Convos app

I was privileged to work with our Senior Developer, Hugh Blackall, on this project. We’ve written up a technical case study about our experiences working with Generative AI (GPT-3 and ChatGPT), our considerations regarding AI ethics, and provide a glimpse into our technical implementation. Happy reading!

How it started

The possibility of incorporating OpenAI’s models into a mental health app thrilled us. Although we had experimented with various versions of Large Language Models (LLMs) internally at No Moss, the public release of ChatGPT was a turning point, firmly positioning OpenAI, LLMs, and Generative AI on the technology map. The significance of this shift hit home when even my mum began discussing it!

At the project’s outset, the feasibility of our early ideas of how to incorporate AI into the Chop Out Convos app remained uncertain. Concurrently, the field of AI and Generative AI was rapidly evolving and garnering increasing attention. Our incorporation of the design sprint process amidst these technological advancements proved to be a serendipitous development for the team.

Using the Design Sprints to hone in on AI features that provide the most value to users

As we got into the project, our experiments with GPT-3, the most advanced release at the time, yielded surprising and exciting results. The latest features made integrating AI more efficient than ever. Additionally, we found inspiration in other successful AI applications such as Roo, a sexual health chatbot developed by our friends at Planned Parenthood.

How we did it

Through design sprint ideation activities, like crazy 8s and concept sketches, we generated numerous ideas about integrating AI into the app. Some of these included:

  • A tone sentiment analysis keyboard, inspired by Grammarly
  • A Conversational AI chatbot
  • A Q&A bot similar to Roo

Mental health experts and licensed counsellors were involved early in the design process to ensure our app was helpful, not harmful.

Post-ideation, our priority was to discard most ideas and test those capable of solving actual problems. Only after validating product ideas did we begin to explore feasibility and, more crucially, how to manage AI while minimising potential harm to users.

Ethical considerations

Being an AI-supported mental health app, it was crucial for us to provide a safe chat environment for our users. Working with our mental health experts and licensed counsellors, we were aware of several ethical challenges, particularly given the project timeline:

  • Effective parsing of user inputs would generate relevant conversation continuations, ensuring an interactive and engaging chat experience, however they would take a lot of time and development energy
  • While Generative AI and LLMs offer human-like responses, their non-deterministic nature can sometimes lead to unexpected and potentially harmful responses. This potential for harmful dialogue through generation of unexpected and inappropriate responses is a known risk, evident from historical instances of AI systems behaving unexpectedly or going rogue
  • The large datasets the model is trained on inevitably include uninformed opinions. Our focus lay in avoiding a scenario where the AI chatbot could potentially defy its ethical constraints

We considered additional challenges, including the risk of alienating our users due to the AI’s robotic and formal tone of voice, and managing prompt injection security vulnerabilities.

How we safely integrated the AI into the app

In consultation with clinicians and experts, we compiled a safelist of positive conversational approaches to mental health discussions with workmates. Rather than generating its own unmoderated content, the AI was “prompted” to exclusively select a safe and appropriate response from this list.

This strategy allowed us to control the conversation and ensure the advice given was positive, safe, and vetted by clinicians and experts. The AI’s tone balanced avoiding certain words and connotations as advised by clinicians, and incorporating language and slang that felt authentic and appealing to our target audience.

We also used AI to generate initial content, which significantly reduced the team’s preliminary workload. Subsequent testing revealed the AI should judge user input based on sentiment and intention rather than a text-based match. Delightfully, we discovered users would input topics we hadn’t anticipated, such as suggesting skate outings or lunches with their workmates. These new topics are collected, and reviewed to be incorporated back into an updated safelist of positive conversations.

Technical showcase

For those curious, below is a small showcase of one of the prompt templates we had developed.

An example snippet of prompt template

The AI generated response is validated against the safelist to determine if there is an unexpected deviation from its instruction, and the final result looks something like the below.

Future-proofing the app

Working with a non-profit organisation meant that we aimed to operate the app as cost-effectively as possible. We decided to use a serverless model for backend development, using AWS SAM, which allowed us to deploy a small scalable backend that required very little maintenance.

Accommodating an emerging technology introduced its own set of risks and challenges associated with ensuring future readiness. We were using GPT3, and during our development phase, a new API for ChatGPT 3.5 was launched. Merely two weeks later, GPT4 was released, which had incredible advancements of its own.

ChatGPT 3.5 was faster and cheaper than the older GPT-3 and the even newer GPT-4. During development we started the process for upgrading to the new GPT 3.5 Chat Completions API.

Looking forward, we’re excited to explore further AI possibilities, including:

  • Allowing the AI more flexibility and creativity in responding to users, safely.
  • Enabling the AI to respond to various user characteristics to increase comfort, such as engaging in playful banter, using common emojis and abbreviations.
  • Further fine-tuning the AI based on our users experience.
  • Using additional OpenAI capabilities such as the Whisper API to improve the user experience.

The Future of AI in Society

While there is considerable apprehension and speculation about AI eliminating certain job roles, especially within the tech sector, history shows us that technological advancement and automation often disrupt rather than eliminate labour markets. This trend typically involves certain job roles losing relevance while new ones are created.

Relying entirely on AI-generated content will lead us to a competitive race to the bottom. Less thoughtful creators may struggle to produce authentic work supported by AI tools, such as GitHub’s Copilot code writer. I am optimistic that the positive impact of AI will outweigh the negative influences and that we, as leaders, should continue to nurture the skills required for effective collaboration with AI.

Final thoughts

We firmly believe in the principle of building the right product is more important than just building the product right.

Initiating the design process with a focus on desirability, enabled by design sprints, facilitates important conversations around feasibility. I am immensely proud of the team for their bravery in trusting themselves to breathe life into an ambitious user experience, and proud of our dedicated engineers practising their growth-focused mindset.

One of the aspects I hold in the highest regard for this project is how the team came together to produce outstanding results within a challenging time frame of just nine weeks. It was extremely fulfilling for us, as digital consultants, to utilise AI that provided functionality within the mental health sphere, and to work for a client that was so trusting and willing to adopt new technology.

This project would have demanded considerably more time and financial resources even just two years ago; the accessibility of Generative AI has truly made a difference.

If you enjoyed our deep dive into our technical article on how we used AI in the Chop Out Convos app, let us know! And read our detailed case study on the app here.

--

--

Head of Product at No Moss. Talk to me about building the right thing for the right people at the right time, electric cars, AI, and the best BBH in Melb