Sójka AI – digital guardian of ethics and security
18 July 2025 / AI
In an age of constantly evolving technology, artificial intelligence (AI) is playing an increasingly important role in our everyday lives. From apps that help us organise our time to image recognition algorithms, AI is gaining popularity, but at the same time it poses serious challenges. These challenges are particularly evident in ethics, the spread of fake news, law and cybersecurity. The Sójka AI project was created in response, combining ethical boundaries for the technology with a focus on user safety.
Ethical Artificial Intelligence – a necessity in the digital age
The internet is a space that offers unlimited possibilities, but at the same time it is full of threats. At every turn, you can encounter toxic content, hate speech or manipulation. We are also increasingly using artificial intelligence tools designed to support us in our daily lives. Unfortunately, AI can itself become a threat if it is not properly designed and controlled. That is why the Sójka AI project was created: a digital ethics guardian that protects users from harmful content.
Why do we need ethical artificial intelligence?
The advantage of artificial intelligence is its ability to analyse huge amounts of data in a short time. This allows AI to identify and respond to problems that often escape human attention.
However, without appropriate ethical boundaries, such algorithms can generate dangerous content or encourage inappropriate behaviour. Sójka AI is the answer to these threats, providing tools to moderate content and eliminate toxic and potentially dangerous interactions.
Bielik AI – a taste of responsible artificial intelligence
Bielik AI is the first step towards more responsible AI technology in Poland. The project has gained recognition for its ability to analyse data, detect threats and support the ethical development of AI. Bielik AI has achieved its mission by using advanced algorithms that analyse data in an ethical context, ensuring a safe and positive online experience for users.
Community collaboration
An important aspect of Bielik AI was its collaboration with users, who helped develop the project through their experiences and suggestions. The creators of Sójka AI have adopted the same approach, relying on collaboration with Internet users to create an algorithm that effectively protects them from online threats.
How does Bielik AI influence the development of Sójka AI?
Bielik AI served as inspiration for the creation of Sójka AI. Thanks to the experience gained in creating Bielik, the creators of Sójka were able to focus on developing new technologies that will enable even more effective content moderation, detection of harmful activities, and protection of users against manipulation and inappropriate content.
What can Sójka AI – a digital ethics guardian – do?
Sójka AI is an algorithm that acts as a digital guardian, eliminating toxic content and protecting users from harmful information. Thanks to its advanced design, Sójka can:
- analyse chats and detect toxic intent – the algorithm recognises when an online conversation takes a negative turn and users start using offensive words or expressions (a minimal illustration follows this list);
- verify AI responses and eliminate dangerous content – Sójka AI can analyse responses generated by other AI models, ensuring that they comply with ethical principles and contain no harmful content;
- moderate content, protecting against hate and manipulation – Sójka AI can moderate content in various spaces, removing hate speech and manipulative material;
- safeguard the mental well-being of the youngest users – Sójka AI pays special attention to younger Internet users, protecting them from content that may harm their mental health.
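Sójka AI's models and training data are not public, so the following is only a minimal sketch of how a chat message could be screened for toxicity. The openly available unitary/toxic-bert checkpoint is used purely as a stand-in, and the 0.8 threshold is an arbitrary assumption.

```python
# Minimal sketch: flagging toxic chat messages with an off-the-shelf classifier.
# "unitary/toxic-bert" is a stand-in checkpoint, not Sójka AI's actual model.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def flag_message(message: str, threshold: float = 0.8) -> bool:
    """Return True when the classifier considers the message toxic."""
    result = toxicity(message)[0]  # e.g. {"label": "toxic", "score": 0.97}
    return result["label"] == "toxic" and result["score"] >= threshold

if __name__ == "__main__":
    for text in ["Have a great day!", "You are worthless."]:
        print(text, "->", "blocked" if flag_message(text) else "allowed")
```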
Practical applications of Sójka AI
Sójka AI can be used in many areas, both in business and in the everyday lives of users. Thanks to its ability to analyse large data sets and respond quickly to threats, it can be used, for example, to:
- moderate online discussions – companies and online platforms can use Sójka to manage discussions and remove negative content in real time (see the sketch after this list);
- support educational platforms – Sójka AI can monitor student interactions on educational platforms, ensuring that learners are not exposed to harmful or inappropriate content;
- improve social media – with Sójka AI, social media platforms can become a friendlier and safer place for all users, especially the youngest;
- and much more.
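To make the integration idea concrete, here is a hypothetical sketch of how a platform might hold comments for review before publishing them. The `check` function, its word-list rule and the return shape are invented placeholders, not Sójka AI's actual interface.

```python
# Hypothetical integration sketch: screening comments before publication.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

def check(text: str) -> ModerationResult:
    # Placeholder rule set; a real deployment would call the moderation model
    # or service here instead of matching a hard-coded word list.
    banned = {"hate", "threat"}
    hit = next((word for word in banned if word in text.lower()), None)
    return ModerationResult(allowed=hit is None, reason=hit or "")

def publish_comment(text: str) -> str:
    # Comments failing the check are held for human review rather than deleted.
    verdict = check(text)
    return "published" if verdict.allowed else f"held for review ({verdict.reason})"

print(publish_comment("Great article!"))
print(publish_comment("This is a threat."))
```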
Legal context and the future of ethical AI models
Projects such as Sójka AI are only the beginning of the quest to create a safe, ethical and responsible Internet. With each passing year, AI technology will become more complex and its role in our lives will become more significant.
Ethical AI models in the European Union – law, practice and directions for development
With the rapid development of artificial intelligence, the European Union has become a global leader in shaping the legal and ethical framework for this technology. Projects such as Sójka AI and Bielik AI in Poland illustrate the evolution from pure innovation to the responsible implementation of systems that respect fundamental rights, transparency and user safety. At the heart of this transformation is the AI Act – the world’s first comprehensive regulation governing AI, which, together with other initiatives, creates a coherent ecosystem for ethical artificial intelligence.
Legal basis for ethical AI in the EU
AI Act – risk classification and obligations
The AI Act, which entered into force on 1 August 2024, introduces a four-tier risk model:
- unacceptable risk (e.g. social scoring systems, manipulative AI exploiting the vulnerabilities of specific groups) – total ban on use;
- high risk (e.g. recruitment systems, credit scoring, medical diagnostics) – requires certification, audits and continuous supervision;
- limited risk (e.g. chatbots) – obligation to inform users that they are interacting with AI;
- minimal risk (e.g. spam filters) – no additional obligations beyond general ethical principles.
Example: content moderation systems such as Sójka AI may qualify as high-risk systems due to their impact on freedom of expression and data protection. If an AI system is classified in this category, it must then comply with the AI Act requirements regarding data quality, algorithm transparency and appeal mechanisms.
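For illustration only, the four tiers and the example obligations listed above can be captured in a simple lookup. The tier assigned to any concrete system is an assumption here; in practice it always requires a case-by-case legal assessment.

```python
# Illustrative mapping of the AI Act's four risk tiers to example obligations.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "certification, audits and continuous supervision"
    LIMITED = "duty to inform users they are interacting with AI"
    MINIMAL = "no obligations beyond general ethical principles"

# Example assignments only; real classification is a legal assessment.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```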
EU ethical guidelines
In 2019, the European Commission defined seven pillars of trustworthy AI:
- human agency and oversight;
- technical robustness and safety;
- privacy and data governance;
- transparency;
- diversity, non-discrimination and fairness;
- societal and environmental well-being;
- accountability.
These principles have become the basis for the AI Act, especially in the context of the requirement for ‘human-centric’ system design.
Initiatives in the EU
- GenAI4EU – a programme supporting the implementation of generative AI in key sectors of the economy.
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Steps for businesses – how to build ethical AI in line with the AI Act?
1. Risk mapping and system audits
– ensure that functionalities and objectives are lawful, ethical and do not create opportunities for abuse. Consider how the results of the AI system’s work may be used by third parties;
– identify any compliance requirements (these may vary depending on your company’s location and target markets) and confirm that you are able to meet them;
– create a list of all potential risks and risk mitigation measures in relation to functionality and legal requirements;
– conduct risk analyses in the context of the compliance requirements you are subject to. For example, if the system will use personal data as part of a training database, conduct a DPIA (Data Protection Impact Assessment). This will help you understand the scope of the project and the challenges it faces.
More here: How to develop AI tools legally?
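As a rough illustration of the risk list suggested in this step, the sketch below records each functionality together with its risks, mitigations and whether a DPIA is needed. The field names and the sample entry are assumptions, not a prescribed format.

```python
# Minimal sketch of a risk register for an AI system (illustrative fields).
from dataclasses import dataclass

@dataclass
class RiskEntry:
    functionality: str
    risks: list[str]
    mitigations: list[str]
    requires_dpia: bool = False  # e.g. True when personal data feeds training

register = [
    RiskEntry(
        functionality="chat toxicity scoring",
        risks=["false positives limiting free expression",
               "processing of personal data in training sets"],
        mitigations=["human review of blocked content", "data minimisation"],
        requires_dpia=True,
    ),
]

for entry in register:
    print(entry.functionality, "-> DPIA required:", entry.requires_dpia)
```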
2. Implement governance
- Establish an AI ethics team consisting of lawyers, engineers and compliance specialists, as your situation and resources allow.
- Develop an AI use policy that takes into account EU guidelines and industry specifics.
3. Transparency and control
- For high-risk systems:
- Provide technical summaries describing the logic behind AI decisions (e.g. through dashboards such as Explainable AI Booster; see, for example, https://cloud.google.com/vertex-ai/docs/explainable-ai/overview).
- Introduce mandatory human approval of results in key processes.
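One common open-source way to produce such technical summaries is to generate per-decision feature attributions, for example with the SHAP library. The sketch below uses a public scikit-learn dataset and a generic model as stand-ins for the audited system; it is not tied to the dashboard product mentioned above.

```python
# Sketch: per-decision feature attributions with SHAP for a tabular model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Public dataset and a generic model stand in for the audited system.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explainer: attributions for individual decisions.
explainer = shap.Explainer(model.predict, X)
explanation = explainer(X.iloc[:5])
print(explanation.values.shape)  # (5 decisions, n_features)
```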
4. Data management
- Use debiasing algorithms to eliminate bias in training data.
Debiasing algorithms are techniques used in artificial intelligence (AI) and machine learning to remove or reduce bias in data that can lead to unfair, discriminatory or inaccurate results. Bias in the context of AI refers to unintended tendencies in a system that may arise as a result of errors in the input data or in the way the AI model is trained.
- Perform regular data quality audits, especially for systems that use sensitive data (race, religion, health).
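As a minimal illustration of the debiasing idea, the sketch below applies the classic reweighing approach: each training example gets a weight so that the protected attribute and the label become statistically independent in the weighted data. The column names and toy data are illustrative; production projects often rely on dedicated libraries such as IBM's AIF360.

```python
# Reweighing sketch (Kamiran & Calders-style pre-processing) on toy data.
import pandas as pd

# Toy data: a protected attribute ("group") and a binary label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 0, 0, 1, 1, 1, 0, 1],
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

# weight = P(group) * P(label) / P(group, label), so that in the weighted
# sample the protected attribute and the label are independent.
df["weight"] = [
    p_group[g] * p_label[lab] / p_joint[(g, lab)]
    for g, lab in zip(df["group"], df["label"])
]
print(df)
```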
5. Certification and compliance
- Use the certification platforms developed as part of the CERTAIN project, which automate compliance checks with the AI Act.
- Register high-risk systems in the European AI database.
6. Training and organisational culture
- Conduct educational programmes for employees.
AI Literacy and the AI Act – how can companies adapt to the new regulations?
Challenges and future of ethical AI in the EU
Key trends for the coming years:
- development of AI for ESG;
- regulation of foundation models – plans to extend the rules to cover requirements for general-purpose models (e.g. Meta Llama 3);
- cross-border cooperation.
Summary
The EU’s approach to ethical AI, while challenging, creates a unique opportunity for companies to build customer trust and competitive advantage. Projects such as Sójka AI show that combining innovation with responsibility is possible – but it requires strategic planning, investment in governance and close cooperation with regulators.
In the coming decade, ethics will become the main driver of technological progress in Europe.