Understanding AI’s Ethical Landscape
In an era where artificial intelligence is transforming how we work, UK small businesses and charities face important questions about whether AI adoption aligns with their values. If you’re hesitant about incorporating AI into your operations due to ethical concerns, accessibility challenges or sustainability issues, you’re not alone. This practical guide explores the sustainability and ethics of AI to help you make informed decisions.
The Current State of AI Ethics
Today’s AI systems vary enormously in their environmental impact and ethical implications: there are concerns about copyright, data privacy and bias (sexism, racism, ableism – we’ve taught it all of those things). But the key is understanding that not all AI is created equal when it comes to sustainability and ethics, and that there are better ways to work with it.
Many UK organisations now approach AI with a “responsible innovation” mindset—leveraging its benefits while actively mitigating potential harms. This balanced approach is especially important for mission-driven charities and values-led small businesses.
It’s also important that your organisation is transparent about AI – being open about where it is used within the company and setting clear boundaries for how employees can use it (i.e. what data should and shouldn’t be shared with it).
Environmental Sustainability Considerations
Energy Consumption
Some large language models require significant computational resources for training, while smaller, specialised models can be much more efficient.
Training sophisticated AI models requires substantial computing power and energy. For example, training a large language model can generate a carbon footprint equivalent to the lifetime emissions of several cars. However, once trained, using these models typically consumes far less energy.
For small businesses and charities, the sustainability equation is different from that of big tech companies. You’re likely using pre-trained models rather than developing your own, so your direct environmental impact comes primarily from day-to-day usage, not training.
Working out how much energy your AI use might cost isn’t straightforward, but here are some rough estimates:
Text generation – using something like ChatGPT to generate text for you
It’s estimated at about 0.3 watt-hours per generation, so 10 generations is about the same as running a laptop for a few minutes.
Image generation – creating an image from scratch or using reference images
Generating just one image is estimated at about 0.012 kWh, roughly the energy needed to charge a smartphone.
Video generation – creating a one-minute video
This is estimated at 0.05 kWh, similar to charging a laptop for an hour.
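To get a feel for what these figures mean in practice, the per-task estimates above can be turned into a simple monthly total. This is an illustrative sketch only: the usage counts are hypothetical, and the energy values are the rough estimates quoted here, not measured benchmarks.

```python
# Rough monthly energy estimate for a small team's AI use, based on the
# per-task figures quoted above. Illustrative assumptions, not measurements.
ENERGY_WH = {
    "text_generation": 0.3,    # Wh per text generation
    "image_generation": 12.0,  # 0.012 kWh = 12 Wh per image
    "video_generation": 50.0,  # 0.05 kWh = 50 Wh per one-minute video
}

def monthly_energy_kwh(usage: dict) -> float:
    """Total estimated energy in kWh for a month's usage counts."""
    total_wh = sum(ENERGY_WH[task] * count for task, count in usage.items())
    return total_wh / 1000

# Hypothetical usage: 500 text prompts, 40 images, 5 short videos a month.
estimate = monthly_energy_kwh(
    {"text_generation": 500, "image_generation": 40, "video_generation": 5}
)
print(f"Estimated monthly AI energy use: {estimate:.2f} kWh")
```

Even at that level of use, the total comes to under a single kWh a month – useful context when weighing AI against other energy costs in your organisation.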
So at this stage, while we’re training AI, learning how to use it ourselves and ramping up genuine usage, it’s consuming a lot of energy. Something has to be done to mitigate this extra energy usage, because it isn’t sustainable and could have a massive impact on the environment.
The good news is that more efficient hardware and smarter AI are being developed to help cut energy usage – though as AI becomes more efficient, it will probably be used more and embedded further into society.
Practical Steps for Sustainable AI Use
- Choose cloud AI services that run on renewable energy
- Opt for more efficient, purpose-built AI tools rather than general-purpose solutions when possible
- Consider whether AI is actually necessary for your specific needs or if simpler solutions might work
- Try not to ‘play around’ unnecessarily; reuse prompts that are proven to get the right results first time
Ethical Considerations for Small Organisations
Data Privacy and GDPR Compliance
AI systems are only as good as the data they process. UK organisations must ensure their AI usage complies with GDPR and respects user privacy. This includes:
- Being transparent about AI use in customer interactions
- Ensuring proper consent mechanisms for data processing
- Implementing data minimisation practices
So if you’re using AI in any of your marketing processes, check what data is being provided to it.
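One practical way to apply data minimisation is to strip obvious personal identifiers from text before it ever reaches a third-party AI service. The sketch below is a minimal illustration of that idea, assuming email addresses and UK-style phone numbers are the identifiers of concern; real GDPR compliance requires far more than pattern-matching.

```python
import re

# Minimal data-minimisation sketch: redact obvious personal identifiers
# (emails, UK-style phone numbers) before text is sent to an AI service.
# The patterns are illustrative assumptions, not a complete PII detector.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b")

def minimise(text: str) -> str:
    """Replace detected personal identifiers with placeholders."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = PHONE.sub("[PHONE REDACTED]", text)
    return text

print(minimise("Contact Jo at jo.smith@example.org or 01632 960 123."))
```

Running personal data through a filter like this before prompting an external tool is one concrete way to honour the “data minimisation” principle above.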
Avoiding Bias and Discrimination
AI systems can inadvertently perpetuate existing biases when trained on biased data – it can only learn from the data we provide it and as a whole the world is full of biases. For charities serving vulnerable communities, this risk is particularly important to address.
An example of a potential AI bias: a housing charity in Manchester discovered that its tenant screening tool was disproportionately flagging applications from certain immigrant communities, prompting it to implement additional fairness checks.
Your organisation can mitigate similar risks by regularly auditing AI outputs for potential bias.
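A basic version of such an audit is simply comparing how often the tool flags people from different groups. The sketch below does exactly that; the sample data and the 1.25x disparity threshold are illustrative assumptions, and a real audit should use established fairness metrics and appropriate advice.

```python
from collections import defaultdict

def flag_rates(records):
    """records: (group, was_flagged) pairs -> flag rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def disparity_warning(rates, ratio=1.25):
    """True if any group is flagged at over `ratio` times the lowest rate."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo > 0 and hi / lo > ratio

# Hypothetical audit data: group label and whether the AI flagged the case.
sample = [("A", True), ("A", False), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False), ("B", False)]
rates = flag_rates(sample)  # A: 0.25, B: 0.5
print(rates, disparity_warning(rates))
```

If the warning fires, that’s the cue to investigate the tool’s behaviour before it affects real decisions, as the Manchester charity did.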
Digital Accessibility and AI
Accessibility as an Ethical Imperative
For UK organisations, ensuring digital accessibility isn’t just good practice – it’s a legal requirement under the Equality Act 2010. However, unlike in the USA, where lawsuits are common, there haven’t been many consequences for not being accessible (apart from a large number of your potential customers not being able to use your website easily, of course!) – that is, until now. The European Accessibility Act (EAA) 2025 covers the following:
- The EAA applies to any business that provides goods and services to consumers in the EU
- It covers a wide range of goods and services
- It is mainly focused on digital accessibility but it does crossover with physical accessibility
- It will affect any UK business that provides services to EU consumers
- It will also affect any UK business that provides services to public or private bodies that are in scope
It applies to:
- Private sector firms, including companies in the UK – unlike previous accessibility legislation such as the Public Sector Bodies Accessibility Regulations (PSBAR), which focused on the public sector
- Any business with at least 10 staff or an annual turnover above €2 million (microenterprises below both thresholds are exempt)
- Any business that trades in the EU
- Companies headquartered outside the EU, which must also comply if they sell relevant goods or services within the EU
AI can either enhance or hinder accessibility depending on implementation choices.
How AI Can Improve Accessibility
When thoughtfully deployed, AI tools can dramatically improve accessibility:
- Speech-to-text and text-to-speech services make content accessible to users with visual or hearing impairments
- AI-powered automatic captioning for video content benefits people with hearing disabilities
- Predictive text and autocomplete features assist users with motor or cognitive disabilities
- Language simplification tools, like summary tools, can make complex content more accessible to users with cognitive disabilities or those for whom English is a second language
Avoiding AI-Driven Accessibility Barriers
Conversely, poorly implemented AI can create new barriers:
- AI-generated content without proper alt text for images excludes screen reader users
- Chatbots without keyboard navigation options may be unusable for people relying on assistive technologies
- Over-reliance on CAPTCHA systems that can’t be navigated by screen readers
- Voice-only interfaces without text alternatives exclude users with speech impairments
Practical Accessibility Checks for AI Implementation
- Test all AI-powered features with common assistive technologies
- Ensure AI-generated content meets WCAG 2.1 AA standards
- Provide alternative interaction methods for AI-driven interfaces
- Include people with disabilities in your testing process
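One of the checks above – making sure AI-generated content doesn’t ship images without alt text – can be partly automated. The sketch below uses Python’s standard-library HTML parser to count `<img>` tags with missing or empty `alt` attributes; it catches only this one WCAG issue, so it complements rather than replaces assistive-technology testing and testing with disabled users.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Counts <img> tags whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if not alt or not alt.strip():
                self.missing += 1

def count_missing_alt(markup: str) -> int:
    checker = AltTextChecker()
    checker.feed(markup)
    return checker.missing

sample_html = ('<p>Report</p><img src="chart.png">'
               '<img src="logo.png" alt="Charity logo">')
print(count_missing_alt(sample_html))  # one image lacks alt text
```

Running a check like this over AI-generated pages before publishing is a cheap first line of defence against excluding screen reader users.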
An example: a small disability advocacy charity in Leeds found that its AI-powered content generator needed significant adjustments to produce properly structured content for screen readers – highlighting the importance of accessibility testing before full implementation.
Practical Guide to Ethical AI Implementation
Start Small and Scale Thoughtfully
Begin with low-risk applications where AI errors wouldn’t cause significant harm. For example, a charity might first implement AI for:
- Summarising research reports
- Categorising donation information
- Generating initial drafts of newsletters
Only after gaining confidence should you consider more sensitive applications. There is also a wide range of free webinars and courses available to help you learn which tools are available and how they should be used; look out for ones specifically geared towards ethics, sustainability and accessibility.
Implement Human Oversight
The most ethical AI implementations maintain meaningful human control. This “human-in-the-loop” approach ensures that AI serves as a tool to enhance human capabilities rather than replace human judgment in critical decisions.
An example: a small accounting firm uses AI to flag unusual transactions for review but always has accountants make the final determination about potential issues. This approach combines AI efficiency with human expertise and accountability.
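The human-in-the-loop pattern can be sketched in a few lines of code. Here a simple threshold rule stands in for the AI model, and no flagged transaction is finalised until a person decides; the field names and the €10,000 threshold are illustrative assumptions, not a real fraud-detection system.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    ref: str
    amount: float
    flagged: bool = False
    reviewed: bool = False

def ai_flag(txns, threshold=10_000):
    """Stand-in 'model': flag transactions above a threshold for review."""
    for t in txns:
        t.flagged = t.amount > threshold
    return [t for t in txns if t.flagged]

def human_review(txn, approve):
    """Only a human decision finalises a flagged transaction."""
    txn.reviewed = True
    return "approved" if approve else "escalated"

# The AI narrows the queue; the accountant makes the call.
queue = ai_flag([Transaction("T1", 250.0), Transaction("T2", 15_000.0)])
print([t.ref for t in queue])               # only T2 needs a human look
print(human_review(queue[0], approve=True))
```

The design point is that the model’s output is a work queue, not a decision: efficiency comes from narrowing what humans look at, while accountability stays with the humans.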
AI is what you make of it
AI is neither inherently sustainable nor inherently problematic—the ethics lie in how we choose to implement it, much like anything. By approaching AI adoption with clear values and practical safeguards, UK small businesses and charities can harness its benefits while minimising risks.
The most successful organisations view AI not as a replacement for human judgment but as a tool that amplifies human capabilities. By maintaining this perspective and implementing the practical steps outlined in this guide, your organisation can navigate the AI landscape confidently and ethically.
Remember that ethical AI implementation is a journey rather than a destination. Start small, learn continuously, and align your AI strategy with your organisation’s core mission and values.