Rapid Changes in Artificial Intelligence Governance

May 26, 2025 11:40:58 AM | Risk

As AI accelerates its integration into daily business and society, governments worldwide are racing to establish laws that address both its vast potential and complex risks.

Recent developments, such as the EU's AI Act and regulatory discussions in Australia, aim to build transparency, accountability, and fairness into AI deployment, with a focus on data privacy, algorithmic bias, and protection from harmful autonomous systems.

In response, Persol has introduced a comprehensive AI Policy, developed in consultation with its subsidiaries, including Programmed, to ensure the responsible and ethical use of AI technologies. The policy sets principles for fairness, transparency, and human oversight; mandates rigorous risk assessments, continuous monitoring, and robust staff training; and includes procedures for reporting and remedying AI-related incidents, fostering trust among clients, employees, and partners.

By embedding these standards, we aim to comply with emerging laws and position ourselves as a responsible leader in AI innovation.

You will see and hear more as we deploy the new AI Policy and introduce a set of AI guardrails that provide a clear framework for embracing the benefits of AI while protecting ourselves from its risks. In the meantime, there are practical steps you can take now to manage AI risk:

  • Take human responsibility for all AI-generated outputs. AI can assist with tasks such as preparing documents, but final responsibility rests with our team members. This aligns with the Persol Group AI Policy, which requires 'human in the loop' decision-making.
  • Keep data out of unapproved tools. Company data, customer data (including contracts), personal information, and confidential or commercial information belonging to Persol, Programmed, our customers, or vendors must not be entered into any AI tool, except the approved Programmed instance of Copilot or other reviewed and approved tools.
  • Limit personal AI accounts to low-risk work. Personal instances of AI platforms such as ChatGPT or Claude may be used only for basic background research or administrative tasks. No company, customer, personal, confidential, or commercial information may be disclosed, as we must assume every chat is publicly available.
  • Procure AI tools through the proper channels. Any AI tool must pass a Cybersecurity Review, undergo a Legal T&Cs review, and be approved for use. Inform your manager when AI tools have been used, and adhere to other relevant policies, including the Persol AI Policy, the Programmed Privacy Policy, and customer contract requirements.
  • Verify AI outputs. To ensure accuracy and reliability, cross-check all AI-generated outputs against appropriate sources, including confirming that no confidential information or customer data has been inadvertently disclosed.

By embracing AI responsibly, we can leverage its benefits while maintaining integrity and trust.