AI Use Is Rising at Work but Productivity Gains Lag

Despite widespread access and training, most workers lack AI use cases that generate business returns, a study by AI transformation firm Section finds.

Companies are deploying artificial intelligence more widely than ever, but most employees are still failing to turn that access into meaningful productivity gains, according to a new report that points to a growing disconnect between executive optimism and how AI is actually used at work.

    The ‘AI Proficiency Report,’ published by AI transformation firm Section, draws on a survey of 5,000 knowledge workers at companies with more than 1,000 employees across the US, UK and Canada.

    Its central finding is blunt: while awareness and experimentation have spread quickly, the kind of AI use that delivers measurable business value remains rare.

    What has changed, the report suggests, is not access to AI but the threshold for what counts as effective use. Over the past year, companies concentrated on ensuring employees could use AI tools safely, understand basic limitations, and write competent prompts. That effort succeeded in raising baseline familiarity.

    But as AI capabilities have advanced, the definition of proficiency has moved again. Employers now need workers to apply AI regularly to substantive tasks that alter how work is done, not just how quickly emails are written. On that front, the report finds little evidence of progress.

    Value Remains Elusive

    The mismatch is striking given how rapidly AI use has spread outside corporate settings. Consumer adoption of AI tools has climbed sharply, yet inside large organizations the benefits remain elusive.

    Section estimates that 85% of the workforce does not have an AI use case that drives measurable business value, while a quarter do not use AI for work at all. Even in sectors expected to lead, including technology companies and roles heavily dependent on language and analysis, AI use tends to remain episodic and surface-level rather than embedded into workflows.

    Most employees, the report finds, sit in a holding pattern of experimentation. Nearly 70% are classified as “AI experimenters,” using tools mainly for summarizing meetings, rewriting emails or retrieving information.

    Another 28% are “novices,” meaning they rarely or never use AI at work. Only a sliver of the workforce, fewer than 3%, qualify as “AI practitioners” or “experts,” employees who integrate AI into day-to-day workflows and generate meaningful productivity gains. In practical terms, the report concludes, almost the entire workforce is either underusing AI or failing to use it in ways that matter.

    The constraint is not lack of technical familiarity. Most employees understand how to interact with large language models. The bigger obstacle is identifying where AI meaningfully fits into their specific role. More than a quarter of respondents said they have no work-related AI use case, while 60% described their use cases as basic.

    After reviewing thousands of reported examples, Section judged that only 15% were likely to deliver a return for employers. The result is what the report describes as a “use case desert,” where employees know how to use AI tools but struggle to connect them to problems worth solving.

    That disconnect is reflected in modest productivity gains. Nearly a quarter of respondents reported saving no time with AI, and another 44% said they save less than four hours a week. Fewer than one-third reported saving four hours or more.

    The report argues that most organizations would need to unlock significantly larger gains, closer to 10 hours a week per employee, to justify sustained investment at scale. While more proficient users do save more time, they remain too small a group to move aggregate outcomes.

    Where AI is being used, the focus remains narrow. The most common applications cited involve replacing search engines, drafting text and editing language.

    Automation of tasks and processes, which typically produces the largest efficiency gains, ranks far lower. Section’s analysis suggests that AI is often treated as a convenience tool rather than as a mechanism for redesigning workflows, limiting its impact on productivity, quality and speed.

    “This report should serve as truth serum for leaders, and a mandate to get your team to the new, and still changing, bar,” said Taylor Malmsheimer, Section’s chief operating officer.

    Employees See Friction

    Executives, however, tend to see a different picture. Senior leaders overwhelmingly report confidence that their organizations have clear AI strategies, strong policies, accessible tools and broad adoption.

    Individual contributors are far less likely to agree. Executives also report higher enthusiasm for AI and greater trust in its outputs, and are far more likely to use it daily in their own work. The result is a perception gap in which leadership believes deployments are succeeding while much of the workforce reports limited impact.

    The report frames this gap as structural rather than cultural. Individual contributors, who perform much of the repetitive and automatable work, are the least likely to have clear access to AI tools, formal training or reimbursement for paid services. They also receive less encouragement from managers to use AI, a trend that has deteriorated since mid-2025. As a result, they are more likely to feel anxious or overwhelmed by AI and less likely to view it as transformative.

    Differences across industries and functions reinforce the picture. Technology, finance and consulting rank highest on AI proficiency, though even the leading sector scores only 42 out of 100.

    Retail, healthcare and education sit at the bottom, reflecting weaker strategies, inconsistent access and unclear policies. By function, engineering, strategy and business development lead, while customer service ranks last despite what the report describes as substantial untapped potential for automation.

    Even within roles where AI adoption should be straightforward, usage lags expectations. More than half of engineers surveyed said they do not use AI for writing or debugging code, while most product managers said they do not use it to create prototypes.

    Section interprets this as evidence that AI is not being applied to the most obvious, high-value tasks within many jobs, leaving productivity gains on the table.

    Companies are not standing still. Nearly two-thirds of respondents said their employer has an AI policy, half have access to AI tools, and 44% receive some form of training. These investments improve proficiency, but only incrementally.

    Employees who have undergone AI training score an average of 40 out of 100 on Section’s hands-on proficiency assessment. Most programs, the report suggests, remain focused on safe use and basic prompting rather than on redesigning work.

    The broader risk, according to the report, is that companies are measuring the wrong indicators. Adoption rates and access metrics can look healthy even when productivity gains are weak. Without a sharper focus on role-specific use cases, clearer accountability and better visibility into how AI is used in everyday work, organizations may continue to report success while failing to capture the value they expect as AI capabilities continue to advance.
