We are all becoming more familiar with using artificial intelligence. In addition to AI chatbots such as ChatGPT, Microsoft Copilot, and Google Gemini, we already use AI when we authenticate ourselves with our fingerprints, plan trips with map apps, and browse recommendations on Spotify and Netflix. Artificial intelligence makes our lives easier in many ways, but none of us would want AI to discriminate against us when we are looking for a job or applying for a bank loan. Irresponsible use of artificial intelligence can have negative consequences for both individuals and society. In this article, we explore different perspectives on the responsible use of AI.
Copyright, data security, fairness, and ownership
The training of general-purpose AI models in particular has raised issues related to copyright and data security. Vast amounts of content available on the open internet have been harvested as training material for AI models, and user input is also collected. When AI is used to support work, there is a risk of leaking company secrets or personal data to AI applications. Analyzing user input in AI chatbots (ChatGPT, Gemini, and so on), researchers from Harmonic found that in addition to customer data containing personal information, security details and sensitive code had also been leaked to the AI models (Korhonen, 2025).
This means that the responses provided by an AI application may reveal confidential or sensitive information that has been entered into it. This can cause all kinds of problems for both individuals and companies. Let’s say you have invented a new method or device. You want to use AI to help you figure out how to describe your invention, so you enter detailed information about it into the AI, which then stores the information in its large model. Later, you realize that you should patent your invention. However, your patent application is rejected because some of the details of your invention have become public knowledge after the AI leaked the information to other users.
An author owns the rights to their work, whether it is, for example, a magazine article, a book, or a work of art. However, when copyrighted works are fed into an artificial intelligence application, the AI may exploit them without regard for ownership and rights. In such cases, the author does not receive the compensation they are entitled to when their work is used, and as a result they may lose their livelihood.
This is why you should be careful not to inadvertently hand over your own work, or someone else’s, to an AI for free use.
The use of artificial intelligence affects society
The use of artificial intelligence has major societal impacts that may not be immediately apparent. Artificial intelligence is often discussed as a global revolution in working life, a phenomenon that will make work more efficient and replace many professions. However, the various AI models do not come up with their answers by themselves: they have been trained on extensive data. Reviewing and labeling such data requires human labor, and for this purpose wealthy companies rely on cheap labor in low-wage countries such as Kenya (Stahl, 2025). In Finland, too, prisoners have been employed to perform similar AI training work (Rikosseuraamuslaitos, 2022).
There is often a desire to use artificial intelligence to support decision-making, based on the assumption that it functions impartially and neutrally. However, this is not the case. For example, Dastin (2018) reports how Amazon trained its own AI model to assist the company in recruitment, but due to the male bias in the training material, such as past job applications, the model ended up discriminating against women when screening applications, and the experiment was quietly discontinued (Dastin, 2018). Artificial intelligence was also tested in the United States for automated loan risk calculation, but it was soon discovered that the results provided by the AI discriminated against low-income individuals and minorities (Andrews, 2021). As another example, Guo et al. (2025) describe a project in Amsterdam to develop an algorithm for granting social benefits in a way that would be neutral and not discriminate against anyone. The resulting program, called Smart Check, was developed over several years: it was tested for various biases, feedback on its performance was collected, and its technical functionality was ensured. Despite this, Smart Check, like the earlier experiments, failed to operate neutrally and without discrimination (Guo et al., 2025).
In addition to such failed experiments, we have already seen that uncritical use of artificial intelligence can lead to immediate problems. One example is the draft of a school network report produced in Tromsø, which resulted in eight schools being threatened with closure (Pinola, 2025). Upon closer examination of the report, it was discovered that one of the books cited as a source did not even exist, and most of the other sources could not be traced. One of the cited books had also been quoted to say the complete opposite of what its author meant (Pinola, 2025).
What all these cases have in common is that AI was used to help with automation and research, it produced incorrect results, and the decisions made on the basis of those results put people in an unequal position in a way that can seriously affect their livelihoods. Responsible use of AI means not letting it automate or perform tasks without human evaluation and review, at least not when the outcome directly affects people’s livelihoods, employment, or lives in other ways.
In addition to this, we must not forget covert influencing in politics. As Menczer (2024) notes, various foreign states now use “information operations” to try to influence election results both in Finland and abroad, and artificial intelligence has made it even easier and cheaper to create fake accounts and bots on social media. Fake accounts can be used to spread growing amounts of distorted information, hoaxes, and propaganda on social media channels (Menczer, 2024). For example, Russian attempts to spread propaganda through AI chatbots have recently been uncovered (Mäkeläinen, 2025).
Shortcomings in AI responses, distorted information, and algorithm-induced echo chambers
You may already be aware that responses from AI chatbots such as ChatGPT cannot be trusted, because the responses generated by large language models may contain significant errors. The same applies to the AI Overview responses provided by Google’s search function. In addition to errors, the information produced by AI is often distorted. Many AI applications first translate the original text into English and base their response on that translation, and such translations can cause errors and misunderstandings. Furthermore, the original training data fed into the AI may itself contain distortions or omissions, which are then repeated in the answers it provides.
For example, when you ask AI to produce a picture of a leader, you will probably get a picture of an older white male. AI reproduces real-life biases that exist in its training materials, for example when ChatGPT suggests a salary for a woman that is hundreds of euros lower than the salary for a man doing similar work (Virtanen, 2025). In the case of the Amazon recruitment AI mentioned in the previous section, the trainers did not realize that if most of the training material deals with men, the AI model may interpret this to mean that men should be favored at the expense of women. Moreover, training materials for AI models are often drawn from the open internet, which means that various prejudices, misconceptions, and hate speech are likely to be reflected in AI responses as well. The idea that AI provides us with unbiased and neutral answers is therefore false.
What is more, we do not know what information is missing from the AI’s responses or on what basis its responses are generated. In this sense, AI responses come from a black box, and even AI researchers cannot explain exactly why an AI model gives the responses it does. What does this mean? Even if we were to check all the links mentioned in an AI response, we would not know whether an important perspective was missing, or whether we got all the most relevant articles on the topic when using scientific AI applications. Also, because AI companies often use articles without permission, reliable information is increasingly being placed behind paywalls. Once reliable information is locked behind a paywall, it is no longer available to AI models, and the answers provided by AI become even worse. There is also a growing number of websites whose content is produced entirely by artificial intelligence. When these pages and their incorrect information end up as training material for AI, the errors caused by AI are repeated and reinforced.
Because of this, when you use artificial intelligence, always check the answers you receive. Also, be careful when sharing content you encounter, for example, on social media platforms, so that you do not spread misinformation.
Artificial intelligence and ecological challenges
Artificial intelligence and its use can also be examined from an ecological perspective. AI is currently being used to solve large-scale problems, such as issues related to climate change, as its computing power enables the creation of various scenarios and forecasts. On the other hand, artificial intelligence itself accelerates climate change, as its training and use consume electricity and water and produce a surprisingly large amount of carbon dioxide emissions (O’Donnell & Crownhart, 2025). New data centers are being built around the world, including in Finland. For example, water-intensive data centers are being built to run TikTok, even in areas suffering from drought (Martins & Amorim, 2025). If running artificial intelligence consumes a large share of a region’s drinking water, the price of water can rise beyond the reach of ordinary residents, as has already happened in some areas of the United States.
How much water and electricity does the use of artificial intelligence language models consume, then? According to calculations made in 2023, the GPT-3 model consumed about half a liter of water to produce 10–50 medium-length responses (Li et al., 2025). Today, the amount is likely to be higher due to the models’ more complex functions and the fact that AI is widely used to produce, for example, videos. The electricity consumption of artificial intelligence has been estimated in an MIT Technology Review article by O’Donnell and Crownhart (2025). For the article, an AI was asked to answer 15 questions, create 10 images, and generate 3 short videos. The total electricity consumption was equivalent to using a microwave oven for 3.5 hours, riding an electric bike for 160 kilometers, or driving an electric car for 16 kilometers (O’Donnell & Crownhart, 2025).
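As a rough back-of-the-envelope conversion of that 2023 estimate (an illustration only, not a measurement of current models), half a liter per 10–50 responses works out to roughly 10–50 milliliters of water per medium-length response:

\[
\frac{0.5\ \text{L}}{50\ \text{responses}} = 10\ \text{mL per response},
\qquad
\frac{0.5\ \text{L}}{10\ \text{responses}} = 50\ \text{mL per response}.
\]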
Although we can influence the matter ourselves, for example, by using artificial intelligence only when truly necessary, the decisions with more significant impacts are in the hands of politicians.
Your responsibility as an AI user
There are many different aspects to the responsible use of AI, and there are many things you simply cannot influence on your own, such as the use of cheap labor in AI training and labeling. However, you can reduce the negative effects of AI by using the tools responsibly, for example, by paying attention to data security and copyrights and by checking facts. As a responsible user, you choose how and for what purposes you use AI. You should, for instance, consider where AI adds value to your studies and where it should not be used for reasons of unreliability or ecology. Since AI is evolving rapidly, it is also a good idea to keep an eye on how these responsibility perspectives develop.
References
- Andrews, E. L. (6.8.2021). How Flawed Data Aggravates Inequality in Credit. Stanford University Human Centered Artificial Intelligence. https://hai.stanford.edu/news/how-flawed-data-aggravates-inequality-credit
- Dastin, J. (11.10.2018). Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
- Guo, E., Geiger, G., & Braun, J.-C. (11.6.2025). Inside Amsterdam’s high-stakes experiment to create fair welfare AI. MIT Technology Review. https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/
- Korhonen, S. (20.1.2025). Työntekijät vuotavat asiakastietoja ChatGPT:hen – Näin paljon salassa pidettävää tietoa päätyy tekoälylle. Tivi. https://www-tivi-fi.ezproxy.hamk.fi/uutiset/a/ee5aac74-b80b-4b96-8134-ffae5a8f99cd
- Li, P., Yang, J., Islam, M. A., & Ren, S. (17.6.2025). Making AI Less ‘Thirsty’: Uncovering and addressing the secret water footprint of AI models. Communications of the ACM. https://cacm.acm.org/sustainability-and-computing/making-ai-less-thirsty/
- Martins, L. & Amorim, F. (22.5.2025). Draining cities dry: the giant tech companies queueing up to build datacentres in drought-hit Latin America. The Guardian. https://www.theguardian.com/global-development/2025/may/22/datacentre-drought-chinese-social-media-supercomputers-brazil-latin-america
- Menczer, F. (8.8.2024). How foreign operations are manipulating social media to influence your views. The Conversation. https://theconversation.com/how-foreign-operations-are-manipulating-social-media-to-influence-your-views-240089
- Mäkeläinen, M. (12.4.2025). Amerikkalaiset tutkijat varoittavat: Venäjän propagandan vyöry on saastuttanut tekoälyn vastaukset. Yle. https://yle.fi/a/74-20155223
- O’Donnell, J. & Crownhart, C. (20.5.2025). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
- Pinola, M. (6.4.2025). Norjalaiskunta tunaroi tekoälyn kanssa – ”Vuosikymmenen pahin poliittinen skandaali”. Tekniikka ja talous. https://www-tekniikkatalous-fi.ezproxy.hamk.fi/uutiset/a/4fac03b4-f91c-4f20-aba7-34d5f0c7c568
- Rikosseuraamuslaitos. (11.4.2022). Vankiloissa aloitetaan tekoälyn kehittäminen. https://www.rikosseuraamus.fi/fi/index/ajankohtaista/tiedotteet/2022/vankiloissaaloitetaantekoalynkehittaminen_1.html
- Stahl, L. (29.6.2025). Labelers training AI say they’re overworked, underpaid and exploited by big American tech companies. CBS News. https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/
- Virtanen, S. (14.8.2025). Chat GPT ehdotti Hilkka Rissaselle satoja euroja pienempää palkkaa kuin miehelle. Helsingin Sanomat. https://www.hs.fi/suomi/art-2000011426548.html