August 30, 2023

Avoiding Common Pitfalls of ChatGPT - Advice for HR Professionals

Lots of businesses are experimenting with ChatGPT to automate time-consuming workflows. Whilst there will be enterprise versions before too long, at present only a small group of early-adopter companies have been granted access to Microsoft's full suite of tools. This leaves most employees wondering whether it is safe to use ChatGPT for tasks in their job. For those responsible for setting workplace policies, like many HR teams, this responsibility extends to the wider organisation.

Nevertheless, GenAI systems, including ChatGPT, offer such profound productivity improvements - for everything from responding to employee queries to retrieving information about internal policies and historical decisions - that companies are keen to ask: how can we have a version of ChatGPT that works for us, without exposing information we shouldn't?

What is the problem? 

For many organisations the biggest risk is sharing data or information which should be kept confidential. This includes commercially sensitive data and Personally Identifiable Information (PII) which the organisation holds on employees or customers. The main thing to keep in mind is that inputting information into ChatGPT constitutes "sharing" data with another organisation for most legal purposes. Restrictions should therefore be put in place to safeguard this information, just as for other workflows and systems. For particularly sensitive data, such as recorded voice, or financial or health records, sharing it via ChatGPT would represent a serious confidentiality breach.

What does ChatGPT do with the data?  

For users of ChatGPT itself (as opposed to OpenAI's other services, such as its API) the key sentence in OpenAI's privacy policy is this: "for consumer products like ChatGPT and DALL-E, we may use content such as prompts, responses, uploaded images, and generated images to improve our services."

In short, by default OpenAI may use everything you enter to train future models.

You can request to opt out of having your content used to improve OpenAI's services by filling out a form on their website, but this opt-out applies on a going-forward basis only and needs to be done for each individual user's account. Anything that you don't want shared with OpenAI should therefore not be entered into ChatGPT, or you may find that future versions of the model start repeating the information back to users outside your organisation.

A key concern for all the managers we have spoken to is understanding what their teams are currently doing, and how to set policies that determine what can and can't be put into ChatGPT.

For most businesses, even opting out is unlikely to provide enough assurance to hand over commercially valuable data or PII; the risk of it being shared further is too high. This is why many organisations are exploring Bing Chat for Enterprise (Microsoft's offering, which is kept in an organisation's own Azure environment - mostly).

What other problems are there? 

The second item on any HR Director's risk register should be anything that resembles automated decision-making. Despite great efforts on the part of model developers, new forms of bias keep being detected, seemingly varying with how the prompt is worded. Where companies are found to have relied on an automated system, and that system is then shown to have discriminated on the basis of a protected characteristic, the company itself is likely liable for the consequences of those decisions. A case against Uber is going through the courts at present.

Generative AI systems, like ChatGPT, should be thought of as highly engaged, well-read, enthusiastic interns. They are excellent at some tasks, but no substitute for considered human judgement on matters which require experience or tacit knowledge. For tasks like summarising a meeting or researching the differences between various companies' employee policies, the tools can save tremendous amounts of time. However, a good rule of thumb is not to trust them with decisions that you wouldn't delegate to a similarly inexperienced employee!

What options do I have? 

Beyond an outright ban (which loses all the productivity gains), most solutions will mean ensuring that any Large Language Models used keep everything inside an organisation's digital walls - or "on-prem", as it is often referred to.

There are several advantages to having a private LLM:


  • Keep your data private 

Whether the data needs to be kept in-house for legal, trust or purely commercial reasons, a bespoke language model can give you assurance that any prompts you enter stay firmly within your business.

  • Make it learn about your business

GPT-4 has impressive general intelligence, but knows nothing of the specifics of your work (unless these details are readily found on Wikipedia or Stackoverflow!). It is like a highly trained graduate turning up for their first day of work - every single day. This dramatically reduces its utility, compared to a language model which understands "how we do things around here". Whether that is knowledge of the org chart to correctly direct customer queries, or the awareness, when writing a strategy presentation, that a similar project was tried before, local, situational knowledge is key to extracting competitive advantage from these tools.

One publicly available option is GPT4All, which lets you run open-source models entirely on your own hardware. However, for more bespoke applications or long-lasting systems it may be wise to build something yourself. This can be done relatively easily by many contractors, who will fine-tune an existing open-source model to your business context and data. A sketch of what running a model locally looks like is below.
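
To make this concrete, here is a minimal sketch of local, private inference using GPT4All's Python bindings. The model filename and the HR prompt are illustrative (GPT4All publishes a catalogue of open-source models), and the exact API may vary between versions, so treat this as an indication of the shape of the workflow rather than a definitive implementation.

    # A minimal sketch of local inference with GPT4All's Python bindings.
    # Assumes `pip install gpt4all`. The model file is downloaded once,
    # after which generation runs entirely on the local machine.
    from gpt4all import GPT4All

    # Illustrative model name - substitute any model from the GPT4All catalogue.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

    # A hypothetical HR prompt, including internal policy text that should
    # never be sent to a third-party service.
    prompt = (
        "Summarise our flexible-working policy for a new starter: "
        "staff may work remotely up to three days per week, subject to "
        "line-manager approval and core hours of 10:00-16:00."
    )

    # The prompt and response stay on local hardware; nothing leaves your network.
    response = model.generate(prompt, max_tokens=200)
    print(response)

Because the prompts and the model weights never leave your own hardware, the confidentiality concerns discussed above largely fall away; the trade-off is that smaller local models are noticeably less capable than GPT-4.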

--

To have a conversation about the factors that should influence your choice between using something off the shelf and customising your own model, feel free to email james@paradigmjunction.com
