A national firm has denied that a policy for restricting staff access to AI tools amounts to a ban. It was responding to a BBC report that Hill Dickinson had blocked AI usage after an email was sent to all employees last week.
It is understood the email referred to a policy, created last September, that restricted access to AI tools until a request to use them had been approved. Access is reinstated once approval is granted, and the firm has received and granted such requests over the last six months.
In a statement, Hill Dickinson said it wanted to positively embrace the use of AI tools such as ChatGPT while ensuring ‘safe and proper’ use by staff and for clients. ‘AI can have many benefits for how we work, but we are mindful of the risks it carries and must ensure there is human oversight throughout,’ said the firm.
‘Last week, we sent an update to our colleagues regarding our AI policy, which was launched in September 2024. This policy does not discourage the use of AI, but simply ensures that our colleagues use such tools safely and responsibly - including having an approved case for using AI platforms, prohibiting the uploading of client information and validating the accuracy of responses provided by large language models.
‘We are confident that, in line with this policy and the additional training and tools we are providing around AI, its usage will remain safe, secure and effective.’
It was reported that Hill Dickinson’s chief technology officer had written in the internal memo that the firm had detected more than 32,000 hits to ChatGPT over seven days in January and February. There were also 3,000 hits to the Chinese AI service DeepSeek and 50,000 hits on writing assistance tool Grammarly.
It is understood that the number of website hits does not refer to individual users but was instead a record of inputted prompts: in effect, many of the hits may have been down to individuals making repeated requests in a single session.
No client or internal files were uploaded during the period the firm monitored AI use.
Companies such as Samsung, Accenture and Amazon are reported to have restricted the use of ChatGPT in the workplace amid concerns about data privacy.
Information Commissioner John Edwards said in a speech last year that organisations must consider the risks associated with AI, alongside the benefits. For generative AI, the ICO has been consulting on various aspects of the technology and how it would comply with existing data protection law.
‘AI and emerging tech can be a huge force for good,’ said Edwards. ‘The strides forward we’ve made in terms of healthcare, productivity and transportation have been massive.
‘But organisations who use these technologies must be clear with their users about how their information will be processed. It’s the only way that we continue to reap the benefits of AI and emerging technologies.’