
3 Common Concerns Employees Have About AI Ethics and How to Address Them

Over the past weeks and months, we’ve seen an explosion in the pace of AI development. As we head into uncharted waters, employees and employers across many industries, not just L&D, are left wondering just how AI will impact the workplace.

While many UK workers agree that AI will help cut down on manual tasks (54%), empower them to learn new skills (56%), and enhance productivity (46%), plenty of questions remain unanswered, especially when it comes to AI ethics: the set of values, principles, and techniques that guide moral conduct in the development and use of AI technologies.

Employers can embrace AI in a human-centric way, and not just in learning and development; teams across all industries stand to benefit from adopting AI. But to reap the rewards, especially in L&D, we need to examine the ethical considerations. In this article, we break down three common concerns employees have about AI ethics and how to address them.


3 common concerns employees have about AI ethics

AI ethics is an important topic to discuss in your workplace. Here are three common concerns, along with tips and techniques for addressing and mitigating them.

1. Displacement of jobs

For many organisations, AI leads to significant cost savings while also reducing human error. For example, AI can help cut down on accidents and minimise errors in payment processes. But what does this mean for job displacement? One study found that each robot introduced locally replaces around 3.3 jobs, and according to McKinsey, 15% of workers worldwide could be displaced between 2016 and 2030.

Some employees are concerned that AI will leave them with less bargaining power or fewer job opportunities in the future. Others have signalled that AI has the potential to reduce wages, particularly for workers specialised in routine tasks in industries experiencing rapid automation.

So, as workers become more concerned with AI ethics, it’s up to employers to calm those fears.

As we invent ways to automate tasks, we also need to create room for people to take on more complex, strategic work so that they can keep contributing to the wider society.

From a business perspective, AI will increase productivity, but it’s also critical for organisations to keep their human touch to remain competitive. AI should be used to amplify and augment human activity, not displace it entirely. The success of any AI tool depends on how well an experienced person can use it. For example, you need a great writer to prompt AI and edit the output so it sounds natural, empathetic, and aligned with your brand.

Remember that AI can’t create; it copies. If you want to produce unique products, services, and content and keep your competitive edge, you still need human creativity.

Next up? Let’s discuss the accuracy and ownership of AI-generated content.

2. Accuracy and ownership of AI-generated content

The emerging use of ChatGPT and other generative AI tools has raised many unaddressed questions around content custody, ownership, and attribution.

There are no issues around the personal use of ChatGPT, but when AI-generated content is intended for wider distribution (e.g. marketing materials or white papers), the legalities are unclear. EU and US lawmakers are in the process of drafting an AI code of conduct, which should hopefully mitigate some of the risks around ownership of AI-generated content.

But even if businesses stay legally and ethically sound, there’s still the issue of accuracy. AI-generated content can simply be wrong, and CNET learned that lesson the hard way: over half of its AI-written articles contained errors, and it had to rehire the editors it had let go to fix them. So, it’s clear that generative AI needs to work alongside humans for optimal results.

Another common AI concern? Privacy.

3. AI and privacy

One of the most recognised concerns about AI is privacy and the misuse of data. As more and more data is collected and processed, employees and employers alike worry about the greater risk of unauthorised access to personal information, resulting in data breaches.

Cybercrime affects the security of 80% of businesses across the world, and now more than ever, people recognise that personal data in the wrong hands can have a devastating impact. So, how do we avoid this as AI technologies continue to evolve? At a minimum, organisations need to take the right measures to safeguard their employees’ and customers’ information, for example with authentication platforms.

Another AI privacy concern is surveillance and monitoring. The adoption of AI has made it easier for businesses to gather data on their employees and customers. This has raised ethical concerns, especially when employees or customers don’t know they’re being watched, or when the data produced is biased in ways that affect how they’re treated.

Employers are already using AI to monitor their employees. In 2019, Gartner surveyed 239 large corporations and found that over 50% were using nontraditional techniques to monitor their workforce, including analysing emails and social media messages. In the UK, no law expressly permits workplace monitoring, but nothing prevents it either, making it difficult for organisations and employees to know whether the practice is actually legal.

And while 61% of employees say they’re comfortable with being monitored, that comfort comes with conditions. Employees want to be able to see the data, challenge any interpretations of it, and be assured that it’s being used ethically. Unfortunately, there are already examples of companies misusing data.

When employers aren’t transparent about how AI could impact their employees, they risk breaking the trust between the organisation and its workers, and leaving employees feeling micromanaged.

There are legitimate reasons to monitor staff performance, such as coaching or rewarding employees, but businesses have a lot to lose by constantly watching their workforce. Many leadership teams would argue that if the work gets done safely and efficiently, it shouldn’t matter when or how employees do it.

Where workplace surveillance is genuinely needed, for instance to monitor the safety of workers in a warehouse, always be transparent and communicate how the data is used. And wherever AI is necessary, it should be overseen by a diverse board of employees to cut down on bias.

Winning companies are embracing AI in a human-centric way

AI certainly has the potential to revolutionise our lives, but it also raises ethical concerns; we’re already seeing examples of it being used poorly or to the detriment of employees.

To change this perception, governments and employers need to work together to create policies that position AI as a support system rather than a replacement. Organisations that adopt AI in a human-centric way will be well positioned to boost productivity and growth without becoming faceless machines. 

Businesses have an opportunity to make AI more personable while also calming employees’ fears about displacement. This will give your workforce the assurance that they matter to you and that you see their continued involvement as fundamental to the organisation’s success.