Embracing the disruption of generative AI at speed, responsibly

Not since the nuclear age ushered in the Cold War has the world faced an existential threat as profound and immediate as the irresponsible use of artificial intelligence (AI), say the field’s founding fathers.

Pioneers such as Geoffrey Hinton, the former Google researcher known as the ‘Godfather of AI’, and Sam Altman, CEO of ChatGPT developer OpenAI, have warned that the dangers of irresponsible AI use are on par with nuclear war and pandemics.

Tools such as ChatGPT and Google Bard have sparked national debates from Washington to Brussels — and even in Canberra.

And while AI was already established in areas such as web search and image recognition, the widespread success of “generative AI”, which creates (or generates) a galaxy of fresh content, marks a paradigm shift that has ushered in a new age of prosperity leavened with risk, said Peter Hawkins, Akkodis Australia CEO and Senior Vice President.

“Security, privacy and bias are the top responsible AI risks but, if you get it right and educate your people, the opportunities are endless. You fundamentally shift how you do business,” said Hawkins.

“Everybody at their desk, whatever their job role, can jump in and leverage these tools. And it’s now moving so fast that we couldn’t govern it as in the past.”

As sponsor of Akkodis’ freshly minted Responsible AI Council (RAIC), Hawkins said it is imperative to establish guidance and guard rails for employees, partners and customers to speed their responsible use of AI, supported by eight foundational principles.

  1. Privacy and security first: Implement rigorous measures to protect client data and prevent misuse of AI.
  2. Ethical use: Prioritise the wellbeing of individuals and society, aligned with ethical and social responsibilities.
  3. Human oversight and control: Maintain human oversight and control over AI systems to prevent unintended consequences.
  4. Transparency: Ensure stakeholders understand how AI is used, what data is collected, and how decisions are made.
  5. Inclusivity: Champion inclusivity and diversity in AI development and use.
  6. Environment now, not later: Consider the environmental implications of how we use technology and AI.
  7. Societal considerations: Assess the societal impacts and issues arising from using, or not using, AI.
  8. Good governance: Apply best practices in technology, data management, legal compliance, ethics, and business operations.

The Akkodis RAIC has a 10-point discussion agenda that includes balancing human–machine collaboration, data management, eradicating bias, and realms ranging from AI’s environmental impact to leveraging augmented reality and quantum computing.

Diversity and inclusion to speed responsible AI use

The responsible use of AI demands that organisations brush up on their diversity and inclusion policies to ensure their processes are free of systemic bias. The speed and scale at which AI operates, combined with its emergent properties, supercharges the risks that arise when homogeneous groups drive its development and execution.

“Algorithmic bias is often raised as one of the biggest risks or dangers of AI,” wrote the authors of a 2023 Australian Government report.

“This can lead to disproportionate impacts on vulnerable groups from AI, including First Nations people, as they are not properly represented in datasets.” 


“Generative AI is moving so fast that having different people’s opinions in the room — and then also technical specialists who understand how it works under the covers — is incredibly important to ensure that we’re using it in a responsible way,” Hawkins said.

Joshua Morley, Akkodis’ Head of AI, designed the RAIC framework specifically to draw representation from across the business, making it more representative of broader Australian society than a purely technical decision-making team would be.

“No individual can answer the question of what our ethics are; it’s about having a diverse group challenging, correcting and discussing what our values and ethics are according to our collective and individual identities,” said Morley.

Morley, who is a recognised guest lecturer at top-tier universities around the country and a graduate of the Australian Institute of Company Directors, will chair the Australian council.

“An important feature of the framework is how it can translate globally because our [Australian] priorities and cultural intricacies are very different to other countries, and it’s not our place to make decisions for other cultures.

“The whole point is that diversity is recognised and celebrated.”

Morley did note: “It’s very easy for progress to get caught up in important issues that have solutions. Bias is only one item the council will be charged with addressing; data privacy and security is another key topic, as are human oversight and control, and decision-making transparency. These are all very real issues, but they are all issues with solutions.”

It’s not enough for pockets of SMEs

Morley warns how quickly a curious employee can become a danger to DEI progress: “Without a reasonable understanding of the technology you’re using, it’s very easy to cross a line.”

“If you’re a recruiter, one of the easiest uses of generative AI is to upload your CVs, give it a job description and say, ‘Find the top three matches’. But you will not get the diverse representation, and they will most likely be white males,” Hawkins said.

Part of the Responsible AI Council’s mandate, and Morley’s, is to uplift the capability of all staff and raise the baseline understanding of generative AI.

For this, Akkodis will rewire its recruitment and Academy training businesses. Akkodis has five million tech professionals on its books; last year it placed over 180,000 people in permanent jobs and upskilled or reskilled over 7,000 people.

Morley shared: “A solution for improving recruitment with generative AI quickly but responsibly is to first target use cases that do not ‘interpret’ data, a practice that will result in bias if a foundation model is used. Some examples Akkodis has already developed include job advert writing and Boolean search string creation.”
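Akkodis has not published these tools, but a minimal sketch helps show what a “non-interpretive” use case looks like in practice: the model composes a Boolean search string from a job description rather than judging or ranking candidates. The client library, model name and prompt wording below are illustrative assumptions, not Akkodis’ implementation.

```python
# Sketch only: a "non-interpretive" generative AI task as described above --
# turning a job description into a Boolean search string, not ranking candidates.
# The OpenAI client and model name are assumptions, not Akkodis' actual stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def boolean_search_string(job_description: str) -> str:
    """Ask the model to extract skills/titles and compose a Boolean query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract the key skills, titles and tools from the job "
                    "description and return ONLY a Boolean search string using "
                    "AND, OR, NOT and quoted phrases. Do not infer or rank candidates."
                ),
            },
            {"role": "user", "content": job_description},
        ],
    )
    return response.choices[0].message.content.strip()


print(boolean_search_string("Senior data engineer: Python, Spark, Azure, ETL pipelines."))
# Possible output: ("data engineer" OR "ETL developer") AND Python AND (Spark OR "Apache Spark") AND Azure
```

Because the model only reformulates the stated requirements, there is no candidate data for it to “interpret”, which is what keeps this class of use case at the low-bias end of the spectrum.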

“For higher-complexity tasks, it comes down to your datasets. Fine-tuning is expensive but can yield significant results. We also undertake performance tuning and anti-bias actions, as well as educating and aiding users to write better prompts,” said Morley.

“While we are developing custom AI Agents, each with individual incentives and tasks such as privacy, compliance and anti-bias, another approach that is immediately useful is creating ‘prompt libraries’, with well-constructed prompts that include anti-bias prompt wrappers.”
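The “prompt library” idea translates naturally into a small shared module. The sketch below is a hypothetical illustration of an anti-bias prompt wrapper applied to named templates; the template names and wrapper wording are assumptions, not Akkodis’ actual library.

```python
# Hypothetical sketch of a shared prompt library with an anti-bias wrapper.
# Template names and wrapper wording are illustrative, not Akkodis' own.

ANTI_BIAS_WRAPPER = (
    "Ignore and do not infer gender, age, ethnicity, disability or other "
    "protected attributes. Base the output only on skills, experience and "
    "the stated requirements."
)

PROMPT_LIBRARY = {
    "job_advert": "Write an inclusive job advert for the following role:\n{role}",
    "boolean_search": "Create a Boolean search string for this job description:\n{jd}",
}


def build_messages(template_name: str, **fields) -> list[dict]:
    """Return chat messages with the anti-bias wrapper applied as the system prompt."""
    user_prompt = PROMPT_LIBRARY[template_name].format(**fields)
    return [
        {"role": "system", "content": ANTI_BIAS_WRAPPER},
        {"role": "user", "content": user_prompt},
    ]


# Usage: pass the result to any chat-completion API.
messages = build_messages("job_advert", role="Graduate cloud engineer, Melbourne")
```

Centralising the wrapper means every well-constructed prompt in the library picks up the same anti-bias guardrail, rather than relying on each user to remember it.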

Akkodis Academy will amplify this strategy with three new generative AI learning pathways designed by Morley, in collaboration with Microsoft.

“We have a two-week boot camp for beginners, a longer 10-week course for intermediate learners and a final ‘expert’-level module to upskill developers.”

“A leadership course heavy with case studies will help business leaders develop their strategic understanding of AI and the broader business-wide implications of adopting or missing out on the technology.”

“Finally, we have a pathway for business professionals that includes prompt engineering to help them drive productivity in their existing job roles.”

All courses include responsible AI usage among their first lessons.

Responsible AI use for faster, better decision making with less stress

Benefits of the Responsible AI Council will soon spread within Akkodis, as generative AI relieves pressure on staff even as their productivity and effectiveness lift, said council Chair Joshua Morley. This will especially resonate with staff who now have a sounding board in the form of an internal conversational assistant.

“People are really excited to have this framework because they are confident of embracing the best of this new, cool technology without the risk or concern of misuse,” Morley said.

Hawkins said Akkodis’ strategy of supporting its people mitigated the danger of unintended consequences while connecting them to the purpose of what they are accomplishing for the organisation, its partners, customers and wider society.

“People want guardrails so that they’re not making it up for themselves. Our people now feel safer about how they use generative AI in their job. They can put their ideas forward to the Council and receive guidance, and they aren’t afraid to ask questions.”

