How we used AI to help an ISO 27001 company make sense of their documentation

7th December 2023

As companies grow, so does their documentation: procedures, policies, employee manuals… until there is so much of it that nobody can find the information they need, which makes creating the documentation in the first place feel pointless.

This is the problem we were handed earlier this year. The company had all the information its employees needed to excel, yet staff didn’t know how to access it!

Our client, a large European company, needed a solution. Their team was not reading the carefully created documentation; staff struggled to navigate a labyrinthine folder structure and an unusual document naming convention to retrieve even the basic information they needed, from simple contract details such as paternity rights to the specific procedures to follow in emergencies.

Our Client

Our client has hundreds of distributed staff. They store their ISO documentation in a set of folders on Google Drive and they use Slack for internal communication. Their teams work together daily, communicating primarily through Slack and Google Meet. They needed a solution that would work within their current setup.

The Solution

To address this, Secret Source implemented an innovative Retrieval Augmented Generation (RAG) solution within Slack. 

Slack offers customers the ability to create custom “slash” commands. These are commands that Slack understands and knows how to handle. In the case of custom slash commands, Slack normally just passes the payload off to a third-party API, which is exactly what we did for this client.

The API was built with FastAPI (Python) and LangChain, using OpenAI’s models for the AI bits. The system (the server) performs two functions:

  1. It periodically indexes the data sources (Google Drive in this case), and
  2. it handles the slash-command payloads: consulting the index, composing the API request to OpenAI, formatting OpenAI’s response, and sending it back to the user via Slack.
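The two roles above can be sketched in miniature. This is a standard-library toy, not the production pipeline: the bag-of-words “embedding” stands in for a real embedding model, and the composed prompt would be sent to OpenAI rather than returned. All names are illustrative.

```python
# Toy sketch of the two server roles: (1) index documents, (2) answer a
# query by retrieving the most similar chunk and composing an LLM prompt.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(docs: dict) -> dict:
    """Role 1: periodically (re)index the data sources."""
    return {name: embed(body) for name, body in docs.items()}

def answer(query: str, index: dict, docs: dict) -> str:
    """Role 2: retrieve the best-matching chunk and compose the prompt."""
    best = max(index, key=lambda name: cosine(embed(query), index[name]))
    return (
        "Answer using ONLY this excerpt from company documentation:\n"
        f"[{best}] {docs[best]}\n\nQuestion: {query}"
    )  # in production this prompt goes to OpenAI, not back to the user

docs = {
    "leave-policy.md": "paternity leave entitles staff to two weeks paid leave",
    "fire-procedure.md": "in an emergency evacuate via the nearest exit",
}
index = build_index(docs)
print(answer("how much paternity leave do I get", index, docs))
```

Grounding the prompt in a retrieved excerpt, rather than letting the model answer freely, is what keeps replies anchored to the company’s own documentation.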

Along the way, the system also removes personally identifiable information, so we neither store it nor send it to OpenAI; encrypts the Google Drive data in the index; and logs activity so we can find and fix problems before they reach the end user. We also cache common queries to minimise token usage for our clients. In this particular case, because some users are not native English speakers, we use the LLM to restructure each query so it more closely matches the data in the index before running the similarity search.
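To give a flavour of the PII-scrubbing step, here is a deliberately simple version that masks obvious identifiers before a query is logged or sent to OpenAI. The real system’s rules are more thorough; these two regexes are illustrative only.

```python
# Illustrative PII scrubbing: mask emails and phone-like numbers before
# the query is cached, logged, or sent to the LLM provider.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")  # loose: 9+ digit-ish runs

def scrub_pii(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(scrub_pii("Contact jane.doe@example.com or +34 612 345 678"))
# → Contact [EMAIL] or [PHONE]
```

Running the scrub before caching means the cache itself never holds personal data either.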

These are the mechanics of every RAG solution (and there are many!), but what makes our version special is the QA we subject our prompt engineering to. Unlike most RAGs, which use a generic prompt telling the LLM to be polite and so on, we spent considerably more time crafting the prompt, then testing it for chain-of-thought accuracy and relevancy to make sure the system returns logical, relevant responses.
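One simple form this kind of prompt QA can take is a small evaluation set of question/expected-fact pairs run through the pipeline, scoring each answer for groundedness. The pipeline below is a stub returning canned answers, and every name is illustrative; the point is the harness shape, not the pipeline.

```python
# Toy prompt-QA harness: score answers against an evaluation set.
def rag_answer(question: str) -> str:
    # Stand-in for the full pipeline: rewrite query -> retrieve ->
    # prompt the LLM -> format the answer with source links.
    canned = {
        "paternity": "You are entitled to two weeks paid paternity leave "
                     "(source: leave-policy.md).",
    }
    for key, answer in canned.items():
        if key in question.lower():
            return answer
    return "I couldn't find that in the documentation."

EVAL_SET = [
    ("How long is paternity leave?", "two weeks"),
]

def run_eval(eval_set) -> float:
    """Fraction of answers that contain their expected fact."""
    hits = sum(expected in rag_answer(q) for q, expected in eval_set)
    return hits / len(eval_set)

print(run_eval(EVAL_SET))  # 1.0 on this toy set
```

Re-running a harness like this after every prompt change turns prompt engineering from guesswork into something closer to regression testing.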

The Result

Users can now ask a question directly in Slack, and the LLM replies with answers drawn only from the company’s own documentation, complete with links to the source material. Answers are crafted to match the company’s style and tone of voice.

The system’s managers can also see data on which questions are asked. This has proved particularly useful to our client, as it reveals where there are gaps in staff knowledge and, therefore, which parts of the onboarding process have not been successful.

The Outcome

Every year, all employees of this company need to undergo a training session to make sure they are up to date with all the latest policies.

We ran this training session using our new Slack system, which killed two birds with one stone: staff learned how to use the new system in Slack, and they completed their annual company training at the same time!

After the online session, adoption was rapid, with over 90% of the users using it again within the first week of release.

Unexpected outcomes

With hundreds of users suddenly accessing previously barely-read documents, feedback on the content skyrocketed. Discrepancies and out-of-date information were reported to the ISMS manager, and updates that used to take a year were now completed in days.

Since the LLM was designed to mirror the company’s writing style, the marketing team began using the system to write first drafts of proposals, emails and even social media copy.

The future

We are currently designing the system’s next features, including:

  • A complete mentoring system that will assess, test and coach the company’s technical staff to support their personal development, all based on the client’s documentation.
  • Advanced analytics, so system managers can see more detailed, anonymised usage reports.
  • Integration with Jira and Bitbucket (in progress), so our client can ask questions directly about their code, commits and Jira tickets.
  • Integration with the client’s HR system, so users can query their holiday allowance and overtime.

We are very excited to be on this journey with our client, getting to use truly innovative solutions to make a genuine difference.

If you are still exploring how you can incorporate AI into your company and you want to chat with someone to give you some guidance, we’d love to help.
Please email us at [email protected] or pop a time directly into our calendar.