Responsible Use Policy for Artificial Intelligence

1. OBJECTIVE AND SCOPE OF APPLICATION

Marfeel Solutions, S.L. (hereinafter, “Marfeel”) is a technology company based in Barcelona that develops software solutions under the Software as a Service (SaaS) model, primarily targeting digital publishers. Its mission is to help publishers maximize the performance of their digital properties through advanced tools for audience analytics, editorial automation, and smart content distribution. Notable among its AI-powered products are Marfeel Copilot, a generative AI-based writing assistant, and Marfeel Amplify, a platform that automates multi-channel content publication.

In developing these solutions, Marfeel integrates artificial intelligence (AI) technologies with the goal of increasing efficiency, scalability, and personalization in its clients’ editorial processes.

This Responsible Use Policy for Artificial Intelligence (hereinafter referred to as the “AI Use Policy”) is established to ensure that such AI systems are used ethically, legally, transparently, and in compliance with fundamental rights, adhering to applicable European and national regulations, particularly Regulation (EU) 2024/1689 on Artificial Intelligence (the “AI Act”), the General Data Protection Regulation (GDPR), and Spain’s Organic Law 3/2018 on the Protection of Personal Data and Guarantee of Digital Rights (LOPDGDD).

This AI Use Policy aims to establish an internal regulatory, technical, organizational, and ethical framework for the development, use, and oversight of AI systems within Marfeel. It applies to all AI systems developed internally or sourced from third parties, as well as to all internal personnel, contractors, collaborators, and data processors involved throughout their life cycle, from design to eventual decommissioning.

The policy defines a set of criteria to ensure responsible, legal, and ethical AI use at Marfeel.

2. CURRENT USES OF AI AT MARFEEL AND RISK CLASSIFICATION

Currently, Marfeel uses AI for various purposes, classified by impact and risk level according to the AI Act. In the editorial and content distribution field, the company has developed internal tools like Marfeel Copilot and Marfeel Amplify. Copilot assists writers with content generation using natural language models specifically trained for publishing. Amplify automates content publishing and dissemination across digital platforms, optimizing scheduling via predictive audience behavior analysis.

Both systems are considered limited-risk under the AI Act and are subject to transparency obligations, including clear disclosure when content is generated or distributed using AI. For instance, limited-risk systems must clearly inform individuals when they are interacting with AI or when content was generated, wholly or partially, by AI, especially when it may be mistaken for human-generated content. This applies to Copilot (which produces text that may be published as-is) and Amplify (which automates posts without human editorial review).

3. GUIDING PRINCIPLES OF THIS AI USE POLICY

The use of AI within Marfeel is guided by six core principles:

1. Respect for Fundamental Rights, Responsibility, and Human Oversight

AI must be developed and used ethically and with respect for human rights. AI should never replace human decision-making: it should support professionals, but ultimate responsibility for decisions must always rest with humans. This ensures decisions remain independent and ethically sound.

2. Non-Discrimination

AI must be developed and used in a way that avoids all forms of discrimination, particularly on the basis of sensitive attributes such as race, ethnicity, socioeconomic status, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, health, sex life, or sexual orientation.

3. Quality and Security

AI models and algorithms must be stored and executed in secure environments to preserve system integrity and prevent tampering.

4. Transparency

As detailed in section 2, Marfeel uses generative AI to assist content creation.
The AI Act requires that any content generated by AI that could be mistaken for human-created content must be clearly labeled as such. To support this, Marfeel:

  • Includes clear instructions in technical documentation informing editors of their duty to disclose AI-generated content.
  • Ensures its tools do not mislead users about AI-generated content.
  • Provides technical means for clients to meet transparency obligations (AI-generated articles come with a preset legal notice).
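The preset-notice mechanism described above could be sketched as follows. This is an illustrative example only, assuming a simple content model; the `Article` type, `apply_transparency_label` function, and the wording of the notice are hypothetical and do not represent Marfeel’s actual implementation.

```python
# Illustrative sketch: attaching an AI-disclosure notice to generated
# content before publication. All names here are assumptions, not
# Marfeel's actual API.

from dataclasses import dataclass


@dataclass
class Article:
    body: str
    ai_generated: bool
    disclosure: str = ""


# Hypothetical preset legal notice (wording is an assumption).
AI_NOTICE = "This content was generated, in whole or in part, with AI assistance."


def apply_transparency_label(article: Article) -> Article:
    """Attach the disclosure notice whenever content is AI-generated."""
    if article.ai_generated and not article.disclosure:
        article.disclosure = AI_NOTICE
    return article


draft = apply_transparency_label(Article(body="...", ai_generated=True))
```

A preset notice of this kind lets clients meet the AI Act’s labeling obligation by default, without relying on each editor to remember a manual disclosure step.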

Additionally, Marfeel ensures all AI systems (internal or third-party) are transparent and explainable. Users have the right to understand how decisions affecting them are made, especially in sensitive areas like HR.

5. Privacy and Personal Data Protection

Strict privacy measures must be in place. Prior data protection impact assessments (DPIAs) are required. AI systems must comply with data protection laws, ensuring confidential information is handled ethically and securely. Marfeel adopts all principles of Article 5 of the GDPR (lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity/confidentiality, and accountability).

6. Responsible Innovation, Evaluation, and Ongoing Training

Responsible innovation must be encouraged, with periodic impact evaluations and adjustments to ensure fairness and effectiveness. Training is essential. Employees using AI must understand ethical and practical principles. Collaboration between departments is promoted to share knowledge and AI tools across the organization in an ethical, innovative way.

4. RULES FOR USE AND RISK MANAGEMENT OF AI SYSTEMS

To ensure regulatory compliance, Marfeel has implemented technical, organizational, and legal measures for lawful, transparent, and responsible AI use.

  • Any AI system handling personal data must undergo a Data Protection Impact Assessment (DPIA) under GDPR.
  • High-risk systems must also undergo an AI-specific Impact Assessment, as per the AI Act.
  • All active AI systems at Marfeel have comprehensive documentation describing their purpose, logic, training processes, data sets, human oversight, and expected results.
  • An internal log of algorithms and their versions is maintained for traceability and auditability.
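The internal algorithm log mentioned above might look like the following minimal sketch. The record fields and system versions shown are assumptions for illustration, not the schema or contents of Marfeel’s actual registry.

```python
# Illustrative sketch: a minimal in-memory registry of AI systems and
# their versions, kept append-only for traceability and audit. Field
# names and version numbers are assumptions.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class AlgorithmRecord:
    name: str
    version: str
    purpose: str
    risk_level: str  # e.g. "limited", per the AI Act classification
    registered_on: date = field(default_factory=date.today)


registry: list[AlgorithmRecord] = []


def register(record: AlgorithmRecord) -> None:
    """Append a new version entry; earlier versions are kept for audit."""
    registry.append(record)


register(AlgorithmRecord("Marfeel Copilot", "2.1.0",
                         "Generative writing assistance", "limited"))
register(AlgorithmRecord("Marfeel Amplify", "1.4.2",
                         "Automated multi-channel publication", "limited"))
```

Keeping every version entry, rather than overwriting records in place, is what makes the log usable as an audit trail when a past output needs to be traced back to the model version that produced it.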

A Digital Ethics and Algorithmic Oversight Committee has been established, composed of technical, legal, HR, and compliance personnel. This committee evaluates and monitors AI projects and can halt any implementation not meeting Marfeel’s ethical, legal, or technical standards.

Key use rules:

  1. Human review is required for outputs affecting user rights.
  2. Data sovereignty must be maintained; no data may be accessible to unauthorized third parties.
  3. Biases must be identified and addressed.
  4. Generative AI outputs must be labeled as such (per the AI Act).
  5. AI must not be used for fraud, identity misrepresentation, or illegal purposes.
  6. Unauthorized AI tools must not be used; “Shadow AI” is prohibited (e.g., using tools like ChatGPT without IT approval). Risks include:
  • Security vulnerabilities
  • Inaccurate or biased results
  • Regulatory non-compliance
  7. AI outputs must align with Marfeel’s ethical, fairness, and suitability standards.
  8. No results may discriminate on the basis of race, color, religion, gender, national origin, age, disability, marital status, political affiliation, or sexual orientation.
  9. Third-party vendors providing high-risk AI systems must comply with applicable law and have passed the conformity assessment required by the AI Act.

5. RISK MINIMIZATION AND RIGHTS PROTECTION

Marfeel actively works to mitigate legal, ethical, and social risks of AI. This includes identifying and correcting algorithmic bias, especially in systems that impact employment or physical safety. Training data quality is validated before each model update, with emphasis on diversity, non-discrimination, and proportionality.

All AI systems allow for meaningful human intervention in decisions affecting individual rights. No one will be subject to fully automated decision-making without human oversight capable of modifying, explaining, or annulling the result.

GDPR rights (access, rectification, erasure, objection, restriction, portability, and protection from automated decisions) are guaranteed via accessible and well-documented channels, managed by the Data Protection Officer (DPO).

6. RESPONSIBILITIES AND AI GOVERNANCE

To ensure lawful, safe, and responsible AI use, Marfeel clearly defines responsibilities:

Operational Use:

Each department is responsible for the AI systems it uses. Users must follow usage instructions and act ethically. Improper, negligent, or uninformed use may lead to corrective, training, or disciplinary actions.

System Design & Implementation:

The Product Department must ensure AI solutions meet standards for quality, safety, transparency, and explainability. They must apply privacy by design and ethics by design, coordinating with the DPO and Ethics Committee at key development stages.

Quality Control:

This is also the Product Department’s responsibility: it must establish periodic checks to identify systematic errors, accuracy issues, or unintended bias.

Legal and Regulatory Audit:

The Legal Department, with the DPO, ensures legal compliance, performs ethical audits, impact assessments, documentation reviews, and handles inquiries, incidents, or complaints.

Training sessions on AI ethics, risks, and best practices are mandatory and held regularly.

An internal review protocol is activated when major system changes, regulatory updates, or significant incidents occur.

7. REVIEW, OVERSIGHT, AND ENFORCEMENT

This policy will be reviewed annually by the DPO with the Ethics Committee or earlier if there are major regulatory or technological changes. Updates are approved by senior management and communicated to all staff.

A confidential internal channel is available for reporting irregularities related to AI use. It can be used by employees, candidates, or suppliers and is managed under compliance protocols.

Non-compliance may lead to disciplinary action and, if warranted, civil, administrative, or criminal liability.

8. EFFECTIVE DATE

This policy enters into force on July 3, 2025, and supersedes any previous provisions related to AI use at Marfeel. Its application is mandatory and binding for the entire organization, including employees, departments, contractors, and technology partners.