
NORTHERN CALIFORNIA RECORD

Monday, November 4, 2024

C-Suite Executives Are Advancing Workplace Generative AI Policies as Risks Mount, Littler Survey Finds


Littler, the world’s largest employment and labor law practice representing management, has released its 2024 AI C-Suite Survey Report, completed by more than 330 C-suite executives across the United States.

As artificial intelligence (AI) adoption grows across corporate America, fewer than half of executives (44%) say their organizations have policies in place for employee use of generative AI. While this represents a significant jump from Littler’s 2023 Employer Survey, when just 10% said the same, this year’s survey also reveals a misalignment among key members of the C-suite – making it harder for organizations to successfully implement such policies.

“Companies have made encouraging progress on workplace generative AI policies, but it’s not surprising that more than half have yet to implement one,” says Marko Mrkonich, Littler shareholder and a core member of the firm’s AI and Technology Practice Group. “There are several practical challenges that come with creating an effective policy for such a ubiquitous and evolving technology, including securing alignment and internal buy-in—especially when views about generative AI’s risk level and opportunities can vary widely among stakeholders.”

Key Components of Generative AI Policies

Among organizations that have established generative AI policies, most (74%) require employees to adhere to those policies, which is generally more effective than merely offering guidelines (an approach taken by 23% of respondents). Only 3% say their organizations prohibit employees from using generative AI altogether.

As for what goes into generative AI policies, 55% say that employee use is limited to approved tools, while slightly fewer limit use to what is approved by managers and supervisors (52%) or by a centralized AI decision-making group (47%). A smaller percentage of executives say their organizations limit use to approved tasks (40%) or to certain groups of employees (21%).

“The current generative AI policy landscape represents a continuum, with organizations typically starting by vetting particular tools and then looking at specific tasks and how they are used by different groups and departments,” says Niloy Ray, Littler shareholder and a core member of the firm’s AI and Technology Practice Group. “Given that uses of both generative and predictive AI vary widely by employee role, it’s important that executives focus on defining who the decision-makers are, ensuring they are knowledgeable about the use of AI across the organization, and effectively socializing requirements and guidelines among employees.”

Policy Tracking, Enforcement, and Training

When it comes to enforcement of generative AI policies, most executives (67%) say their organizations are focusing on setting clear expectations for use and relying on employees to meet them. More than half are using access controls that limit AI tools to specific groups (55%) and relying on employee reporting of violations (52%).

A workplace policy is only as good as an organization’s ability to get employees to follow it – and successful expectation-setting goes hand in hand with employees actually understanding those expectations. That’s where training and education come in, yet fewer than a third of executives (31%) say their organizations currently offer such programs for generative AI.

“To effectively implement a generative AI policy, it’s vital that leaders agree on the organization’s ultimate objective and how they’ll get there,” says Britney Torres, senior counsel in Littler’s AI and Technology Practice Group. “That includes training both on compliance issues to mitigate risk and technical use to realize the greatest benefits from the technology.” 

As AI Use in HR Grows, So Do Legal Risks

When it comes to AI’s use in human resources (HR) and talent acquisition processes, C-suite executives clearly see the benefit of both generative and predictive AI tools. Two-thirds of executives (66%) say their organizations are using AI in HR functions, including creating HR-related materials (42%), recruiting candidates (30%), and sourcing candidates (24%).

At the same time, with AI-related lawsuits expected to rise and an ever-growing patchwork of AI regulation coming to the fore, C-suite executives are eyeing the legal risks. Nearly 85% are concerned with litigation related to the use of AI in HR functions, and 73% say their organizations are decreasing the use of these tools for such purposes as a result of regulatory uncertainty.

C-Suite Misalignment on Key AI Issues

Securing alignment among top executives is critical to the success of AI initiatives. However, Littler’s survey reveals several areas of division among members of the C-suite, with Chief Executive Officers (CEOs) and Chief Human Resources Officers (CHROs) on one side and legal executives – Chief Legal Officers (CLOs) and General Counsel (GCs) – on the other.

For example:

  • 52% of CLOs/GCs say their organizations are not using AI tools in HR, compared with 31% of CEOs and 18% of CHROs.
      
  • 42% of CEOs and CHROs say that AI has the potential to enhance HR processes to a large extent, compared with 18% of CLOs/GCs.
      
  • CLOs/GCs report significantly lower levels of activity when it comes to tracking and enforcing generative AI policies than CEOs and CHROs, including utilizing access controls (38% vs. 65%), audits and reviews (27% vs. 56%), and automated monitoring systems (20% vs. 46%).

Littler’s second annual AI Survey was completed by 336 C-suite executives, mainly comprising CEOs, CLOs and GCs, CHROs, Chief Operating Officers, and Chief Technology Officers. Respondents represented a range of industries and company sizes, and all indicated being familiar with their organizations’ AI use.

