When Algorithms Think for Us: Where Technology Ends and Responsibility Begins

  • 2025-08-13
  • Pauls Barkāns, Lead Solution Architect at “Helmes Latvia”

Artificial intelligence (AI) is increasingly present in our daily lives, work, education, and the flow of information, often without us even realizing it. As these systems spread, public concern and caution are growing alongside them. According to a 2025 study by KPMG and the University of Melbourne, 54% of people are generally unwilling to trust AI, yet 65% appreciate its technical capabilities. In other words, people are encountering AI more and more often but still feel uncertain about its risks, its impact on society, and its ethical use. Trust can only be fostered through transparent regulation, clear usage principles, and public education. At this stage of development, caution is not only understandable but necessary, especially at a time of rising disinformation, deepfakes, and data misuse. Still, it is important to remember that AI is neither inherently good nor bad: it has no emotions, intentions, or political beliefs.

Openness to AI Must Go Hand in Hand with Education

Caution is welcome at this stage: where understanding of how AI works is lacking, a cautious attitude is preferable to a credulous one. In a time of increasing disinformation, deepfakes, and various types of fraud, excessive reliance on AI can leave us vulnerable. To reduce skepticism, we need safety standards and oversight mechanisms. Technology is advancing rapidly, and people do not always keep pace; given the current level of knowledge about AI, a large part of society is still relatively easy to mislead. We can already see the power of information manipulation, with our neighboring country providing a vivid example, and AI further amplifies the possibilities for manipulation and propaganda. Openness to AI must therefore go hand in hand with education and digital literacy. Far worse would be to approach AI with low understanding but high optimism; that would open the door to a whole range of risks.

We Cannot Assume AI Tools Will Always Be Used for Good

Trust in AI can also be strengthened through regulation. The European Union's AI Act, for example, may not address every possible situation or eliminate every risk, but it is a step in the right direction. We cannot assume that everyone will use AI tools solely for good and noble purposes. That is why training, guidelines, and risk detection strategies are needed at both the organizational and national levels.

AI Can Also Mislead Unintentionally

It is essential to understand that AI is neither good nor bad; everything depends on how we, as humans, use it. AI is simply a mathematical model. It has no emotions and no political opinions. So where do the risks come from? They come from those who control these models and from how and where the models are applied. Every AI output is a product of the data the model was trained on. A system like “ChatGPT”, for instance, can be trained on any kind of data, including propaganda, and the answers and recommendations it gives will reflect that training. Unfortunately, it is nearly impossible to verify how most AI models were trained, because they are usually owned and managed by private companies; “ChatGPT”, for example, is operated by OpenAI. Some companies work with open datasets or train their models on sources such as Wikipedia. This means AI-generated information can be wrong not because of malicious intent but because of unintentional inaccuracies. In the same way, AI can make biased or discriminatory decisions based on the historical data it has learned from.
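To make that point concrete, here is a deliberately oversimplified sketch, not a description of any real system: a “model” that is nothing more than word-follower counts learned from a tiny corpus. The corpus and names are invented for illustration; the only point is that the output cannot contain anything the training data did not put there.

```python
# Toy illustration, not any real system: a "language model" reduced to its essence,
# next-word frequencies learned from training text. Whatever slant the data carries,
# the output carries too; no intent is involved anywhere.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word follows which; this frequency table IS the 'model'."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    """Pick the most frequent continuation at each step: pure statistics, no judgment."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# A deliberately one-sided 'training set': the generated text can only echo it.
biased_corpus = "the project failed because the team failed because the plan failed"
model = train(biased_corpus)
print(generate(model, "the"))  # -> "the project failed because the project failed ..."
```

Real language models are vastly more sophisticated, but the dependency is the same: slanted training data produces slanted output, with no malice required.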

The Model Doesn’t Think on Its Own – It’s a Trained Algorithm

It is vital to realize that AI does not make decisions; humans do. AI provides information, which may consciously or unconsciously influence our choices. The model does not think for itself: it is a trained algorithm that nudges us toward one option or another, and we can never truly know with what intent it was trained. If AI suggests where to invest money, for example, and that investment is later lost, who is responsible, the AI or the individual? And what if the model was trained by people with a vested interest in the recommended investment?
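A small, entirely hypothetical sketch of the same idea: once training is finished, a model is just fixed parameters applied to inputs. The feature names and weights below are made up; the point is that whoever shaped those parameters, through the choice of data and objective, also shaped where the recommendation leans.

```python
# Minimal sketch with made-up numbers: after training, a model is just fixed
# parameters applied to inputs. It does not deliberate; whoever chose the training
# data and objective effectively chose how these weights lean.
def score(features: dict[str, float], weights: dict[str, float]) -> float:
    """A 'recommendation' here is only a weighted sum of input features."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Hypothetical weights: note the large weight on a feature unrelated to the
# investor's interest.
weights = {"expected_return": 1.0, "risk": -0.5, "is_partner_product": 2.0}

fund_a = {"expected_return": 0.04, "risk": 0.2, "is_partner_product": 1.0}
fund_b = {"expected_return": 0.06, "risk": 0.2, "is_partner_product": 0.0}

for name, fund in [("fund A", fund_a), ("fund B", fund_b)]:
    print(name, round(score(fund, weights), 3))

# Fund A comes out on top only because of the 'is_partner_product' weight baked in
# during training; the person reading the output still makes (and owns) the decision.
```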

Education Is a Shared Responsibility—Individually, Organizationally, and Nationally

How can we tackle these challenges and reduce the risks? The bad news is that there is no one-size-fits-all solution: AI tools are numerous and evolving faster than most of us can keep up with. The good news is that there are several areas we can focus on. One is the regulatory framework already mentioned; another is public education, meaning an understanding of how AI works, the risks it may pose, and when caution is warranted. Education is not only an individual responsibility; it is also the duty of employers and a matter of national concern. Companies, for instance, can ensure AI is used within a secure, internal infrastructure, which keeps data from leaking outside the organization and is especially important when working with sensitive information (a minimal sketch of this idea follows below). There are simpler steps as well: companies can train employees and establish guidelines tailored to their specific operations. At “Helmes Latvia,” we take care to ensure that our employees use AI solutions safely and responsibly. Not everyone needs to be an AI expert, but digital literacy must be developed by all, and at the individual level each of us must take responsibility for using AI safely and ethically.
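As an illustration of the internal-infrastructure point above, here is a minimal sketch, assuming an internally hosted, OpenAI-compatible service; the endpoint URL and model name are placeholders, not real systems.

```python
# Sketch only: one way "keeping AI inside the organization" can look in practice.
# The endpoint URL and model name are placeholders for an internally hosted,
# OpenAI-compatible service (for example, a self-hosted open model).
import requests

INTERNAL_ENDPOINT = "https://ai.internal.example.com/v1/chat/completions"  # hypothetical

def ask_internal_model(prompt: str) -> str:
    """Send the prompt to the company's own infrastructure, so the text never
    leaves the internal network or ends up with a third party."""
    response = requests.post(
        INTERNAL_ENDPOINT,
        json={
            "model": "internal-llm",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Example call with non-sensitive text; internal guidelines would still define
# what may and may not be sent even to an in-house model.
print(ask_internal_model("Summarize this meeting note: ..."))
```

The design choice is simple: prompts and documents travel only to a service the organization controls, which makes guidelines about what may be sent to AI tools far easier to enforce.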

AI is one of the most powerful tools of our time, but its value is not defined by the technology itself—it lies in how we choose to use it. A responsible approach, understanding, and digital literacy are essential to transforming AI’s potential into meaningful, beneficial solutions for society. The right approach lies not in blind trust nor outright rejection, but in a balanced perspective—where knowledge, regulation, and clear principles go hand in hand with human responsibility and a readiness to make the final decision. AI is only a tool—it is up to us how we choose to use it.