Generative AI has the potential to significantly impact the workplace by automating routine tasks, personalising products and services, and enhancing creativity. It can also facilitate collaboration between humans and machines and create new revenue streams and market opportunities. However, the implementation of generative AI also poses challenges and risks such as bias, privacy and ethical considerations. AI tools are not a replacement for human workers and the responsible and ethical use of generative AI will be critical to realising its potential benefits while minimising potential negative impacts.

Generative AI and the workplace

The paragraph above was written by ChatGPT when we asked it to write an introductory paragraph summarising the opportunities and challenges generative AI presents for the workplace. Despite evident opportunities to open up and unlock new revenue streams, there are a few significant limitations.

As many businesses battle to mitigate the impact of current skills shortages, generative AI has reignited the debate over the impact of emerging technology on work and jobs. Many businesses and individuals are rapidly assessing the extent to which generative AI will shape their operating models and individual jobs, continuing a shift towards the future world of work accelerated by the pandemic.

With 2023 likely to see a seismic shift towards mass interaction with AI, this article explores what generative AI is, how it works and what practical impact it may have in different industries and areas of work.

What is generative AI?

Generative Artificial Intelligence (generative AI) is an umbrella term which encompasses systems that apply machine learning algorithms to large data sets to generate new content, such as text, imagery and audio. Developers are also exploring how to apply generative AI to smaller, curated datasets.

Although this technology is not new, newfound capabilities have resulted in simpler user interfaces producing convincingly authentic, human-like and engaging content. Early implementations have, however, illustrated a number of limitations around accuracy and bias, as well as risks of AI infringing legal rights. Even so, as the capability of this type of AI evolves, it has the potential to disrupt business models and fundamentally impact the world of work.

How does generative AI differ from AI?

Generative AI uses machine learning to create content, responses and designs. Models can “learn” from data patterns without human direction, although users can interact through iterative prompting to refine the generated content. The premise of generative AI is to create new content; other forms of AI focus on detecting patterns, making decisions, enhancing analytics and classifying and analysing data, processing it to return a simple result.

How does generative AI work?

Generative AI starts with a prompt, which is usually in plain language. This request could take the form of text, an image, a video, a design or musical notes. AI algorithms generate content in response to the prompt, which can then be refined with feedback about the style, tone and other elements the user wants adapted. It can be applied in a wide variety of scenarios.
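To make this concrete, the sketch below shows what a simple prompt-and-feedback exchange can look like in code. It is a minimal illustration only, assuming the OpenAI Python SDK, an API key supplied via the OPENAI_API_KEY environment variable and “gpt-4” as a placeholder model name; it is not a definitive implementation of any particular tool described in this article.

    # Minimal sketch of the prompt-and-refine workflow described above.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
    # the model name "gpt-4" is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 1. The user supplies a plain-language prompt.
    messages = [{"role": "user",
                 "content": "Draft a short product description for a reusable coffee cup."}]
    first = client.chat.completions.create(model="gpt-4", messages=messages)
    print(first.choices[0].message.content)

    # 2. The user gives feedback on style and tone, and the model adapts the output.
    messages.append({"role": "assistant", "content": first.choices[0].message.content})
    messages.append({"role": "user",
                     "content": "Make the tone more formal and keep it under 50 words."})
    second = client.chat.completions.create(model="gpt-4", messages=messages)
    print(second.choices[0].message.content)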

What are the use cases for generative AI?

Generative AI can make it easier to interpret and understand existing content and automatically create new content. As such, it can be applied extensively across almost every area of business and can impact a wide variety of jobs. Use cases for generative AI include:

  • Generating and improving responses to customer services and technical support queries

  • Improving dubbing and producing content in different languages

  • Summarising and producing education material

  • Automating book-keeping and reporting

  • Writing blogs, articles, information material and product descriptions

  • Producing marketing copy and sales emails

  • Generating basic web content and graphics

  • Writing email responses, profiles, resumes and study papers

  • Summarising complex information into a coherent narrative

  • Simplifying the process of creating content in a particular style

  • Creating photorealistic art in a particular style or writing music in a specific style

  • Optimising new chip design or suggesting new drug compounds to test

  • Designing physical products and buildings

  • Improving existing workflows

  • Deploying deepfakes to mimic people

  • Comparing information (e.g. across jurisdictions)

  • Producing complex formulas and code

  • Creating starting points for presentations and business models

  • Providing drafting assistance on contracts

What is the potential of generative AI?

A 2022 McKinsey survey shows that AI adoption has more than doubled over the past five years, with investment continuing apace. 94% of respondents to a Deloitte report believe that AI is key to success over the next five years. The Gartner Emerging Technologies and Trends Impact Radar for 2022 predicts that, by 2025, generative AI will be producing 10% of all data (up from less than 1% today).

The developing ability to integrate generative AI technology with other sources will enable it to adapt to particular use cases and enhance its capability to perform a wide range of actions in the future.

Examples of popular generative AI

ChatGPT

Generative Pre-trained Transformer (GPT) is a series of deep learning language models built by OpenAI, an AI research and deployment company. ChatGPT, which is built on these models, is essentially a free chatbot (with a premium, paid-for version) that can generate answers to most individual users’ questions and create new content, including computer code, essays and emails.

OpenAI released ChatGPT to the public in November 2022. It attracted over 100 million users within two months, representing the fastest consumer adoption of a service to date. A new iteration, GPT-4, was announced in March 2023.

Jasper

Like ChatGPT, Jasper uses natural language processing to generate human-like responses. However, Jasper has additional tools which can check for grammatical errors and plagiarism, and it offers different templates, including blog posts and video scripts. Jasper charges individual users a subscription fee of $49 per month. According to the Inc. 5000 list, Jasper is "one of America's fastest-growing private companies".

Bard

Google has recently rolled out its AI chatbot Bard. It is currently available only to certain users, who must be over the age of 18. Unlike ChatGPT, it can access up-to-date information from the internet and has a “Google it” button which links to search.

DALL-E

DALL-E was built using OpenAI’s GPT technology and connects the meaning of words to visual elements. DALL-E 2 was released in 2022 and enables users to generate imagery in multiple styles driven by user prompts.

What are the advantages of generative AI?

The advantages of generative AI include its creative output, its capacity for personalisation and its potential to improve decision-making.

Creative output

Generative AI can create music, images or text from pre-existing data, enabling creators to think “outside the box” and grow their portfolios. This can be especially useful in the music, production, design and art industries.

Personalisation

Highly personalised and tailored content for individuals could help in improving education and learning. For instance, customised product recommendations can be created for each user based on their browsing history and preferences to improve customer experience.

Improved decision-making

Generative AI can help improve an individual’s decision-making by providing insights and recommendations drawn from very large datasets.

What are the disadvantages of generative AI?

The disadvantages of generative AI are its potential to incorporate bias or heighten ethical risks, intellectual property ownership issues, lack of control, and plagiarism. The convincing realism of generative AI content, combined with a lack of transparency, makes it harder to identify when things go wrong, creating legal and reputational risks for users.

Bias and ethical risks

Generative AI models may incorporate gender or racial biases acquired from internet training data into their responses. They can also perpetuate the misuse of information because the output cannot be readily verified. For example, a generative AI model trained on images of certain people may generate images that are biased against certain races or genders. AI can also be used to produce deepfakes: videos or images that have been manipulated to show people saying or doing things that they have not said or done. These can be used to spread disinformation or defame individuals, leading to serious consequences.

The way the tools are currently built also risks prioritising one way of writing over another. ChatGPT generates content without knowing the meaning of the words, looking through various definitions and then assembling them into a single response to the specific query. This can exacerbate dominant writing models, reinforcing existing hierarchies, homogenising writing and perpetuating inequity.

Intellectual Property (IP)

Generative AI can produce output that resembles existing IP, such as copyright-protected text, images or music. This makes it difficult to determine whether the output of a generative AI model infringes the intellectual property rights of others, which can lead to legal disputes.

Lack of control

Generative AI can produce unexpected or unwanted output that is difficult to control. There is no guarantee that the information generative AI creates from the data it has been trained on will be correct. This can be problematic in situations where the generated output has significant consequences, such as in medical diagnosis or financial forecasting.

Plagiarism

The use of generative AI brings with it a risk of plagiarism. Not long ago, a writer found that another article was a clear copy of his own work, “both in terms of particular lines and general meaning and structure”.

How can the disadvantages of generative AI be managed?

Without appropriate measures in place, including a clear procedure on how to use generative AI and review its outputs, the content it produces may cause errors or be misinterpreted.

Accuracy risks can also be managed by domain-specific pre-training, model alignment and supervised fine-tuning to modify the large language model that GPT and other technologies are based on, making it better suited to the specific need. Other emerging risks can be mitigated by involving human input to check the content generated. It is important to assess the risks of using generative AI models for critical decisions, including those involving individuals, health and welfare.

Beyond human intervention, integrating other data sources can also mitigate several of generative AI’s disadvantages. For example, ChatGPT allows integration with sources such as Wolfram Alpha to search for mathematically or scientifically precise answers. Similarly, Bing Chat uses its underlying search engine technology as an up-to-date source for content generation and to provide citations.
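The sketch below illustrates the general idea of grounding a request in retrieved source material so that answers can be checked and cited. It is a simplified, hypothetical example: retrieve_snippets() is a placeholder for whatever search engine, knowledge base or plugin an organisation actually integrates, and the prompt-building pattern shown is one common approach rather than a description of how any specific product works.

    # Hypothetical sketch of grounding a generation request in retrieved sources.
    # retrieve_snippets() is a placeholder for a real search or knowledge-base lookup.
    def retrieve_snippets(question: str) -> list[dict]:
        # Placeholder result: text passages plus their sources, so answers can be cited.
        return [{"text": "The European Commission published its AI Regulation proposal in April 2021.",
                 "source": "https://example.com/eu-ai-act"}]

    def build_grounded_prompt(question: str) -> str:
        snippets = retrieve_snippets(question)
        context = "\n".join(f"- {s['text']} (source: {s['source']})" for s in snippets)
        # Instructing the model to answer only from the supplied context, and to cite it,
        # reduces (but does not eliminate) the risk of confidently wrong answers.
        return ("Answer the question using ONLY the sources below, citing them. "
                "If the sources do not contain the answer, say so.\n\n"
                f"Sources:\n{context}\n\nQuestion: {question}")

    print(build_grounded_prompt("When was the EU's AI Regulation proposal published?"))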

What is the impact of generative AI in different industries?

Generative AI could have a real impact on a number of industries. This section will explore the impact generative AI is having on different sectors through the lens of specific job roles and/or industries.

Generative AI and the creative sector

Generative AI models present challenges for businesses engaged in content creation, with substantial impact on marketing, software, design, entertainment, and interpersonal communications. These models can produce text and images: blog posts, program code, poetry, and artwork. The software uses complex machine learning models to predict the next word based on previous word sequences, or the next image based on words describing previous images.
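This next-word mechanism can be seen at a small scale with openly available models. The sketch below assumes the Hugging Face transformers library and the small, freely downloadable gpt2 model (an early relative of the systems discussed in this article); it illustrates the prediction loop rather than any commercial tool.

    # Minimal illustration of next-word prediction, assuming the Hugging Face
    # "transformers" library and the small, openly available gpt2 model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # The model repeatedly predicts likely next tokens given the words so far.
    result = generator("Generative AI will change the creative sector by",
                       max_new_tokens=20, num_return_sequences=1)
    print(result[0]["generated_text"])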

Music industry

David Guetta used two AI sites to create lyrics and a rap in the style of Eminem for a live show and believes musicians will use AI as a tool to create new sounds in the future, because “every new music style comes from a new technology”. Some people reacted positively, but others were less enthusiastic, preferring authenticity and viewing the use of AI to generate music as limiting creativity.

Social media managers

Social media managers can stay on top of social media trends and enhance their social media marketing efforts by using generative AI.

Generative AI models can be used to generate a wide range of content, from social media posts to videos, reels and images, to attract existing and potential customers. By analysing social media trends that managers may not spot themselves, these models can identify the types of content most likely to be shared and engaged with and generate content optimised for the highest impact. Generative AI could also be used to identify the sentiment of specific social media posts by extracting the emotions and opinions expressed.

Nevertheless, there is a danger that the content produced through these generative AI models may become generic, leading to a lack of differentiation, authenticity and originality between brands and businesses.

Marketing executives

AI models can help marketing executives to personalise their messages to specific audiences. By analysing customer data (such as browsing history or purchase behaviour), AI models can generate content that is tailored to the interests and needs of individual customers and identify patterns and trends in data. In turn this data can be used to develop marketing strategies and campaigns.

Copywriters

Copywriters can use generative AI to assist in generating ideas for content or even drafting entire pieces that are grammatically correct and error-free. Generative AI can help ensure that a piece of content is well written and communicates the copywriter’s intended message clearly and effectively, while improving brand awareness and increasing engagement.

However, by relying heavily on generative AI models to publish materials, copywriters can face challenges such as the lack of a unique voice and the nuanced human approach.

Generative AI and the media sector

Journalists

With the potential for generative AI to help journalists produce content, much has been written about its impact in the field of journalism, including in relation to IP ownership of computer-generated works and inadvertent infringement of third-party rights.

Whilst generative AI lacks a real-life journalist’s social nuance and ability to interact with human case studies, it can help journalists analyse large volumes of data from multiple sources, allowing them to identify patterns and trends that might otherwise be missed. Routine tasks such as fact-checking and proofreading can also be automated, freeing up time. This has the potential to help journalists be more productive and efficient.

Generative AI and the tech sector

Coders

Coding is the process of creating instructions, written in a programming language, that a computer can understand and execute. Each programming language consists of a set of rules and syntax that dictate how the code should be written.

Although coders will need to develop new skills, such as data analysis and programming, to adapt to the impact of this technology, generative AI may assist coders with tasks that do not need human input. There are a few commercial use cases in the pipeline, such as GitHub Copilot. Copilot is a cloud-based AI tool developed by GitHub and OpenAI that assists users of integrated development environments such as Visual Studio Code, Visual Studio, Neovim and the JetBrains IDEs by autocompleting code. However, emerging risks include GPT-derived code samples polluting sites such as Stack Overflow.
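The example below gives a flavour of how assistant-style autocompletion works: the developer writes a function signature and docstring, and the tool proposes a body. The suggested body here is hand-written for illustration, not actual Copilot output, and any real suggestion would still need human review and testing.

    # Illustrative only: the kind of completion a coding assistant might propose
    # when a developer writes the signature and docstring. Not actual Copilot output.
    import re

    def is_valid_uk_postcode(postcode: str) -> bool:
        """Return True if the string looks like a UK postcode, e.g. 'SW1A 1AA'."""
        # --- suggested completion starts here ---
        pattern = r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$"
        return bool(re.fullmatch(pattern, postcode.strip().upper()))

    print(is_valid_uk_postcode("SW1A 1AA"))    # True
    print(is_valid_uk_postcode("not a code"))  # False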

Generative AI and the legal sector

A number of legal firms have adopted generative AI as a way of automating some aspects of their lawyers’ work to enable them to deliver work more flexibly and efficiently, prioritise more complex work and focus on client care. While the output will need careful review by lawyers, common areas of focus include automating document analysis, creating legal documents, due diligence and performing legal research.

Generative AI and employment law

AI and algorithmic systems are increasingly being used in the workplace. Many employers are using AI to speed up decision-making, giving rise to so-called “algorithmic management”. Generative AI can help employers with common areas of HR work, such as recruiting new staff and managing existing staff in relation to performance management, retention and engagement. However, the use of generative AI in the workplace, and the use of AI in decision-making more generally, could give rise to several challenges, including discrimination risks.

Discrimination

Algorithm-based decisions and outputs from AI systems are particularly vulnerable to discrimination claims in the UK. AI may reflect and even compound existing biases and stereotypes, whether because of the underlying data or bias present in the algorithm itself. Employment laws were not designed to meet this challenge and are ill-equipped to do so. Claims about algorithms and discrimination are likely to become more common in the years ahead as adoption rates continue to grow. Human supervision and review will be necessary to harness the benefits of generative AI in the modern workplace.

Generative AI and data protection

Generative AI tools rely on vast data sets, which they use to create content and to continuously learn and improve.

Data protection

As the application and adoption of AI tools grow, they amass increasing amounts of data, including data scraped from the internet. It is very likely that these data sources will include personal data, and potentially special category personal data. This raises considerations about lawful processing and data protection compliance.

Generative AI and intellectual property law

The development and uptake of AI has taken place against a backdrop of uncertainty surrounding legal issues involved in the development and use of AI text and image generation tools.

Intellectual property

IP ownership can be complex and, when it comes to AI generated works, the answer is likely to vary depending on the extent of the role performed by both the human user and AI platform in generating the output and the IP provisions in the T&Cs of the relevant AI platform.

The most significant claims brought to date have involved training AI on databases of images or text. For example, Getty Images claims, in proceedings in the UK and USA, that Stability AI has used its images to train its AI image generator.

Generative AI and the regulatory framework

The regulatory landscape is evolving in response to the exponential growth of AI tools. In April 2021, the European Commission published a proposal for the Artificial Intelligence Regulation, which seeks to harmonise rules on AI by ensuring AI systems are sufficiently safe and robust before they enter the EU market. The proposals aim to control the use of this technology in high-risk contexts (where there is a risk posed to fundamental rights and health and safety), with non-compliance subject to potentially significant fines. Domestically, by way of its National AI Strategy, the UK government set out an ambitious ten-year plan for the UK to remain a global AI superpower. In March 2023 the UK government published its pro-innovation white paper on AI, which seeks to empower existing regulators through the application of a set of overarching principles. The UK’s approach focuses on the context in which AI is used, rather than on specific technologies. This proposed regime is less stringent than the EU’s approach and, as yet, no new legislation or statutory enforcement duty is proposed.

With AI’s huge potential for problem-solving and addressing major societal challenges, laws will need to keep abreast of technological advances. The regulatory landscape will be scrutinised as the technology evolves to ensure sufficient protections are in place for safe usage whilst not stifling innovation.

How will generative AI impact the future of work?

The extraordinary rate of adoption of ChatGPT illustrates the depth of its potential impact on the world of work. It has provoked widespread curiosity and unearthed a number of problems and challenges.

Generative AI has challenged existing assumptions that creativity is inherently human. This new iteration of technology has the potential to bring technology into industries where it was not traditionally used.

However, generative AI has reignited the debate about whether new technology will increase productivity and create new jobs, or eliminate jobs (or create less secure, less well-paid ones).

Nevertheless, this new technology has the potential to augment human capabilities and free up time to focus on more challenging, higher-value work, whilst at the same time requiring human skills, knowledge and judgment in its application.

Organisations should reflect on the short- and longer-term impact that generative AI will have on their business models and consider the impact adoption will have on their workforce and job tasks. Opportunities for retraining and upskilling programmes should be considered to help workers transition to new roles.

As technology and societal norms evolve, risks and opportunities will continue to emerge in the months and years ahead. Organisations starting to experiment with this new technology should keep an eye on areas of potential adoption in the workplace alongside evolving reputational and legal risks.

We have expanded on the discourse around this area in our AI in the workplace: mind the regulatory gap? article that explores whether legislatures are stepping up to fill the regulatory gap and what the considerations are for employers looking to step in and codify employees’ use of new technology themselves.
